Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Md. Al-Amin

Pages: 1 ... 25 26 [27] 28
391
MCT / Microprocessors
« on: August 20, 2013, 09:52:29 AM »
Microprocessors

In the 1970s, Federico Faggin's fundamental inventions (silicon-gate MOS ICs with self-aligned gates, along with his new random-logic design methodology) changed the design and implementation of CPUs forever. Since the introduction of the first commercially available microprocessor (the Intel 4004) in 1971, and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction-set-compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual success of the ubiquitous personal computer, the term CPU is now applied almost exclusively[a] to microprocessors. Several CPUs (denoted 'cores') can be combined in a single processing chip.

Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size as a result of being implemented on a single die means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU have increased manyfold. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity.[6]

While the complexity, size, construction, and general form of CPUs have changed enormously since 1950, it is notable that the basic design and function have not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As the aforementioned Moore's law continues to hold true,[6] concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.
Operation

The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The instructions are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback.

The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by a program counter (PC), which stores a number that identifies the current position in the program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units.[c] Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below).
The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA).[d] Often, one group of numbers in the instruction, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured.
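As an illustration, here is a minimal Python sketch of the decode step for a hypothetical 16-bit instruction format; the layout (a 4-bit opcode followed by two 4-bit register fields and a 4-bit immediate) is invented for the example and does not correspond to any real ISA:

    # Hypothetical 16-bit format: [opcode:4][rd:4][rs:4][imm:4]
    def decode(instruction):
        opcode = (instruction >> 12) & 0xF   # which operation to perform
        rd     = (instruction >> 8)  & 0xF   # destination register number
        rs     = (instruction >> 4)  & 0xF   # source register number
        imm    = instruction         & 0xF   # immediate operand
        return opcode, rd, rs, imm

    print(decode(0x21A5))   # -> (2, 1, 10, 5)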

After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can perform the desired operation. If, for instance, an addition operation was requested, the arithmetic logic unit (ALU) will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. The ALU contains the circuitry to perform simple arithmetic and logical operations on the inputs (like addition and bitwise operations). If the addition operation produces a result too large for the CPU to handle, an arithmetic overflow flag in a flags register may also be set.

The final step, writeback, simply "writes back" the results of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly produce result data. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs.[e] Many instructions will also change the state of digits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow.

After the execution of the instruction and writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction that was jumped to, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the memory-access stage of the pipeline.
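The four steps can be made concrete with a short Python sketch of a made-up accumulator machine; the opcodes, encoding, and memory contents below are invented for illustration and are not any real instruction set:

    # Toy machine: each instruction is a (opcode, operand) pair.
    LOAD, ADD, JUMP, HALT = range(4)

    def run(program):
        memory = {0: 5, 1: 7}            # data memory
        acc, pc = 0, 0                   # accumulator register, program counter
        while True:
            opcode, operand = program[pc]    # fetch the instruction at the PC
            pc += 1                          # increment the PC past it
            # decode is the dispatch below; then execute and write back:
            if opcode == LOAD:
                acc = memory[operand]        # writeback into the accumulator
            elif opcode == ADD:
                acc += memory[operand]
            elif opcode == JUMP:
                pc = operand                 # jumps manipulate the PC instead
            elif opcode == HALT:
                return acc

    print(run([(LOAD, 0), (ADD, 1), (HALT, 0)]))   # 5 + 7 -> 12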
Design and implementation
Main article: CPU design

The basic concept of a CPU is as follows:

Hardwired into a CPU's design is a list of basic operations it can perform, called an instruction set. Such operations may include adding or subtracting two numbers, comparing numbers, or jumping to a different part of a program. Each of these basic operations is represented by a particular sequence of bits; this sequence is called the opcode for that particular operation. Sending a particular opcode to a CPU will cause it to perform the operation represented by that opcode. To execute an instruction in a computer program, the CPU uses the opcode for that instruction as well as its arguments (for instance the two numbers to be added, in the case of an addition operation). A computer program is therefore a sequence of instructions, with each instruction including an opcode and that operation's arguments.

The actual mathematical operation for each instruction is performed by a subunit of the CPU known as the arithmetic logic unit or ALU. In addition to using its ALU to perform operations, a CPU is also responsible for reading the next instruction from memory, reading data specified in arguments from memory, and writing results to memory.
In many CPU designs, an instruction set will clearly differentiate between operations that load data from memory, and those that perform math. In this case the data loaded from memory is stored in registers, and a mathematical operation takes no arguments but simply performs the math on the data in the registers and writes it to a new register, whose value a separate operation may then write to memory.
Control unit

Main article: Control unit

The control unit of the CPU contains circuitry that uses electrical signals to direct the entire computer system to carry out stored program instructions. The control unit does not execute program instructions; rather, it directs other parts of the system to do so. The control unit must communicate with both the arithmetic/logic unit and memory.
Integer range

The way a CPU represents numbers is a design choice that affects the most basic ways in which the device functions. Some early digital computers used an electrical model of the common decimal (base ten) numeral system to represent numbers internally. A few other computers have used more exotic numeral systems like ternary (base three). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage.[f]
 
 
MOS 6502 microprocessor in a dual in-line package, an extremely popular 8-bit design.
Related to number representation is the size and precision of numbers that a CPU can represent. In the case of a binary CPU, a bit refers to one significant place in the numbers a CPU deals with. The number of bits (or numeral places) a CPU uses to represent numbers is often called "word size", "bit width", "data path width", or "integer precision" when dealing with strictly integer numbers (as opposed to floating point). This number differs between architectures, and often within different parts of the very same CPU. For example, an 8-bit CPU deals with a range of numbers that can be represented by eight binary digits (each digit having two possible values), that is, 2^8 or 256 discrete numbers. In effect, integer size sets a hardware limit on the range of integers the software run by the CPU can utilize.[g]

Integer range can also affect the number of locations in memory the CPU can address (locate). For example, if a binary CPU uses 32 bits to represent a memory address, and each memory address represents one octet (8 bits), the maximum quantity of memory that CPU can address is 2^32 octets, or 4 GiB. This is a very simple view of CPU address space, and many designs use more complex addressing methods like paging to locate more memory than their integer range would allow with a flat address space.
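Both figures follow directly from powers of two, as a quick Python check shows:

    bits = 8
    print(2**bits)                    # 256 distinct values for an 8-bit integer

    address_bits = 32
    print(2**address_bits)            # 4294967296 addressable octets
    print(2**address_bits / 2**30)    # 4.0 GiB, since 1 GiB = 2**30 bytes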
Higher levels of integer range require more structures to deal with the additional digits, and therefore more complexity, size, power usage, and general expense. It is not at all uncommon, therefore, to see 4- or 8-bit microcontrollers used in modern applications, even though CPUs with much higher range (such as 16, 32, 64, even 128-bit) are available. The simpler microcontrollers are usually cheaper, use less power, and therefore generate less heat, all of which can be major design considerations for electronic devices. However, in higher-end applications, the benefits afforded by the extra range (most often the additional address space) are more significant and often affect design choices. To gain some of the advantages afforded by both lower and higher bit lengths, many CPUs are designed with different bit widths for different portions of the device. For example, the IBM System/370 used a CPU that was primarily 32 bit, but it used 128-bit precision inside its floating point units to facilitate greater accuracy and range in floating point numbers.[4] Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose usage where a reasonable balance of integer and floating point capability is required.
Clock rate

Main article: Clock rate

The clock rate is the speed at which a microprocessor executes instructions. Every computer contains an internal clock that regulates the rate at which instructions are executed and synchronizes all the various computer components. The CPU requires a fixed number of clock ticks (or clock cycles) to execute each instruction. The faster the clock, the more instructions the CPU can execute per second.
Most CPUs, and indeed most sequential logic devices, are synchronous in nature.[h] That is, they are designed and operate on assumptions about a synchronization signal. This signal, known as a clock signal, usually takes the form of a periodic square wave. By calculating the maximum time that electrical signals can move in various branches of a CPU's many circuits, the designers can select an appropriate period for the clock signal.

This period must be longer than the amount of time it takes for a signal to move, or propagate, in the worst-case scenario. In setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below).
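The designer's calculation can be sketched as follows; the delay figure is an invented example value, since real numbers come from circuit-level timing analysis:

    worst_case_propagation = 0.8e-9   # seconds along the slowest signal path (assumed)
    margin = 1.25                     # period set well above the worst case
    clock_period = worst_case_propagation * margin
    print(1 / clock_period)           # maximum safe clock rate: 1.0e9 Hz (1 GHz)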

However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue as clock rates increase dramatically is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does energy consumption, causing the CPU to require more heat dissipation in the form of CPU cooling solutions.

One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. One notable CPU design that uses extensive clock gating to reduce the power requirements of the videogame console in which it is used is that of the IBM PowerPC-based Xbox 360.[7] Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without utilizing a global clock signal. Two notable examples of this are the ARM-compliant AMULET and the MIPS R3000-compatible MiniMIPS. Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers.[8]

Parallelism

Main article: Parallel computing
 
 
Model of a subscalar CPU. Notice that it takes fifteen cycles to complete three instructions.

The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time.
This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result, the subscalar CPU gets "hung up" on instructions which take more than one clock cycle to complete execution. Even adding a second execution unit (see below) does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach scalar performance (one instruction per clock). However, the performance is nearly always subscalar (less than one instruction per cycle).

Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques. Instruction-level parallelism (ILP) seeks to increase the rate at which instructions are executed within a CPU (that is, to increase the utilization of on-die execution resources), and thread-level parallelism (TLP) aims to increase the number of threads (effectively individual programs) that a CPU can execute simultaneously. The methodologies differ both in how they are implemented and in the relative effectiveness they afford in increasing the CPU's performance for an application.
Instruction level parallelism

Main articles: Instruction pipelining and Superscalar
 
 
Basic five-stage pipeline. In the best case scenario, this pipeline can sustain a completion rate of one instruction per cycle.
One of the simplest methods used to accomplish increased parallelism is to begin the first steps of instruction fetching and decoding before the prior instruction finishes executing. This is the simplest form of a technique known as instruction pipelining, and is utilized in almost all modern general-purpose CPUs. Pipelining allows more than one instruction to be executed at any given time by breaking down the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired.

Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation; a condition often termed data dependency conflict. To cope with this, additional care must be taken to check for these sorts of conditions and delay a portion of the instruction pipeline if this occurs. Naturally, accomplishing this requires additional circuitry, so pipelined processors are more complex than subscalar ones (though not very significantly so). A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage).
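In cycle terms, an ideal k-stage pipeline retires n instructions in k + (n - 1) cycles, plus one cycle per stall, versus n * k cycles with no pipelining at all. A small Python sketch of this simplified count (it ignores caches and other real-world effects):

    def pipelined_cycles(n, stages=5, stalls=0):
        # the first instruction takes `stages` cycles to drain through;
        # each later one retires one cycle after the previous, plus stalls
        return stages + (n - 1) + stalls

    print(pipelined_cycles(100))             # 104 cycles, IPC close to 1
    print(pipelined_cycles(100, stalls=20))  # 124 cycles, IPC falls below 1
    print(100 * 5)                           # 500 cycles with no pipelining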
 
 
Simple superscalar pipeline. By fetching and dispatching two instructions at a time, a maximum of two instructions per cycle can be completed.

Further improvement upon the idea of instruction pipelining led to the development of a method that decreases the idle time of CPU components even further. Designs that are said to be superscalar include a long instruction pipeline and multiple identical execution units.[9] In a superscalar pipeline, multiple instructions are read and passed to a dispatcher, which decides whether or not the instructions can be executed in parallel (simultaneously). If so they are dispatched to available execution units, resulting in the ability for several instructions to be executed simultaneously. In general, the more instructions a superscalar CPU is able to dispatch simultaneously to waiting execution units, the more instructions will be completed in a given cycle.

Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly and correctly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and gives rise to the need in superscalar architectures for significant amounts of CPU cache. It also makes hazard-avoiding techniques like branch prediction, speculative execution, and out-of-order execution crucial to maintaining high levels of performance. By attempting to predict which branch (or path) a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed. Speculative execution often provides modest performance increases by executing portions of code that may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies. In the case of single instruction, multiple data (SIMD), where a large amount of data of the same type has to be processed, modern processors can also disable parts of the pipeline: when a single instruction is executed many times, the CPU skips the fetch and decode phases, which greatly increases performance on certain occasions, especially in highly repetitive program engines such as video creation software and photo processing.
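The dispatcher's central test can be sketched as a read-after-write check between two candidate instructions. This toy Python model uses an invented instruction representation, and a real dispatcher also tracks write-after-write hazards, structural conflicts, and more:

    # Each instruction: (destination register, list of source registers).
    def can_dual_issue(first, second):
        dest, _ = first
        _, sources = second
        # the second instruction may issue alongside the first only if it
        # does not read a register the first is still computing (RAW hazard)
        return dest not in sources

    print(can_dual_issue(("r1", ["r2", "r3"]), ("r4", ["r5", "r6"])))  # True
    print(can_dual_issue(("r1", ["r2", "r3"]), ("r4", ["r1", "r5"])))  # False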

In the case where a portion of the CPU is superscalar and part is not, the part which is not suffers a performance penalty due to scheduling stalls. The Intel P5 Pentium had two superscalar ALUs which could accept one instruction per clock each, but its FPU could not accept one instruction per clock. Thus the P5 was integer superscalar but not floating point superscalar. Intel's successor to the P5 architecture, P6, added superscalar capabilities to its floating point features, and therefore afforded a significant increase in floating point instruction performance.

Both simple pipelining and superscalar design increase a CPU's ILP by allowing a single processor to complete execution of instructions at rates surpassing one instruction per cycle (IPC).[j] Most modern CPU designs are at least somewhat superscalar, and nearly all general purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or ISA. The strategy of the very long instruction word (VLIW) causes some ILP to become implied directly by the software, reducing the amount of work the CPU must perform to boost ILP and thereby reducing the design's complexity.
Thread-level parallelism

Another strategy of achieving performance is to execute multiple programs or threads in parallel. This area of research is known as parallel computing. In Flynn's taxonomy, this strategy is known as Multiple Instructions-Multiple Data or MIMD.
One technology used for this purpose was multiprocessing (MP). The initial flavor of this technology is known as symmetric multiprocessing (SMP), where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another. To increase the number of cooperating CPUs beyond a handful, schemes such as non-uniform memory access (NUMA) and directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single silicon chip, the technology is known as a multi-core processor.
It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads (or functions) that could be executed separately or in parallel. Some of the earliest examples of this technology implemented input/output processing such as direct memory access as a separate thread from the computation thread. A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as multi-threading (MT). This approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated to support MT, as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system, including the caches, are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP, and thus supervisor software like operating systems has to undergo larger changes to support MT. One type of MT that was implemented is known as block multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU quickly switches to another thread which is ready to run, the switch often done in one CPU clock cycle, as in the UltraSPARC T1. Another type of MT is known as simultaneous multithreading, where instructions of multiple threads are executed in parallel within one CPU clock cycle.
For several decades from the 1970s to early 2000s, the focus in designing high performance general purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to the growing disparity between CPU operating frequencies and main memory operating frequencies as well as escalating CPU power dissipation owing to more esoteric ILP techniques.

CPU designers then borrowed ideas from commercial computing markets such as transaction processing, where the aggregate performance of multiple programs, also known as throughput computing, was more important than the performance of a single thread or program.

This reversal of emphasis is evidenced by the proliferation of dual- and multiple-core CMP (chip-level multiprocessing) designs and, notably, Intel's newer designs resembling its less superscalar P6 architecture. Later designs in several processor families exhibit CMP, including the x86-64 Opteron and Athlon 64 X2, the SPARC UltraSPARC T1, IBM POWER4 and POWER5, as well as several video game console CPUs like the Xbox 360's triple-core PowerPC design and the PS3's 7-core Cell microprocessor.
Data parallelism

Main articles: Vector processor and SIMD

A less common but increasingly important paradigm of CPUs (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device.[k] As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as SIMD (single instruction, multiple data) and SISD (single instruction, single data), respectively. The great utility in creating CPUs that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a dot product) to be performed on a large set of data. Some classic examples of these types of tasks are multimedia applications (images, video, and sound), as well as many types of scientific and engineering tasks. Whereas a scalar CPU must complete the entire process of fetching, decoding, and executing each instruction and value in a set of data, a vector CPU can perform a single operation on a comparatively large set of data with one instruction. Of course, this is only possible when the application tends to require many steps which apply one operation to a large set of data.
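The scalar-versus-vector contrast can be sketched in Python, with an explicit loop standing in for a scalar CPU's one-instruction-per-element style and NumPy's element-wise addition standing in for a single vector operation; this is a software analogy only, not a hardware model:

    import numpy as np

    a = [1, 2, 3, 4]
    b = [10, 20, 30, 40]

    # scalar style: one add performed per element
    result = []
    for x, y in zip(a, b):
        result.append(x + y)
    print(result)                      # [11, 22, 33, 44]

    # vector style: a single operation applied to the whole data set
    print(np.array(a) + np.array(b))   # [11 22 33 44]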

Most early vector CPUs, such as the Cray-1, were associated almost exclusively with scientific research and cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose CPUs has become significant. Shortly after inclusion of floating point execution units started to become commonplace in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear for general-purpose CPUs. Some of these early SIMD specifications like HP's Multimedia Acceleration eXtensions (MAX) and Intel's MMX were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with floating point numbers. Progressively, these early designs were refined and remade into some of the common, modern SIMD specifications, which are usually associated with one ISA. Some notable modern examples are Intel's SSE and the PowerPC-related AltiVec (also known as VMX).[l]

Performance

Further information: Computer performance and Benchmark (computing)
The performance or speed of a processor depends on the clock rate (generally given in multiples of hertz) and the instructions per clock (IPC), which together are the factors for the instructions per second (IPS) that the CPU can perform.[10] Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads consist of a mix of instructions and applications, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in MIPS calculations. Because of these problems, various standardized tests, often called "benchmarks", such as SPECint, have been developed to attempt to measure the real effective performance in commonly used applications.
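The relationship is simply instructions per second = IPC multiplied by clock rate; for example (the IPC figure below is an assumed average, not a measurement):

    clock_hz = 3.0e9   # a 3 GHz clock
    ipc = 2.5          # assumed average instructions completed per cycle
    print(clock_hz * ipc)          # 7.5e9 instructions per second
    print(clock_hz * ipc / 1e6)    # 7500.0 MIPS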

Processing performance of computers is increased by using multi-core processors, which essentially amounts to plugging two or more individual processors (called cores in this sense) into one integrated circuit.[11] Ideally, a dual-core processor would be nearly twice as powerful as a single-core processor. In practice, however, the performance gain is far less, only about 50%,[11] due to imperfect software algorithms and implementation. Increasing the number of cores in a processor (i.e. dual-core, quad-core, etc.) increases the workload that can be handled. This means that the processor can now handle numerous asynchronous events, interrupts, etc. which can take a toll on the CPU when it is overwhelmed. These cores can be thought of as different floors in a processing plant, with each floor handling a different task. Sometimes, these cores will handle the same tasks as cores adjacent to them if a single core is not enough to handle the information.
http://en.wikipedia.org/wiki/File:80486dx2-large.jpg

392
MCT / To Keep Mind Good
« on: August 19, 2013, 03:04:30 PM »
Keeping the Mind Well

When our minds are well, our bodies stay well. Mental well-being is even more important than physical fitness, because it is the mind that responds first to everything. Physical health and exercise alone cannot keep a person well if his or her mental or spiritual health is poor. Here are some ways to keep your mind well:

Forgiveness

Suppose you are on bad terms with someone. Picture that person in your mind, draw down the power of love from your higher self, and say again and again, "I forgive you." At some point, begin to think that the problem has been resolved and that you have managed to forgive the person.
The second time you do this, you will notice that at least a little affection for the person has grown in you. If it has not, repeat the exercise in the same way. Eventually you will find you really have forgiven them. You can use this method for any issue.

Yoga for staying calm

Most of the time we grow overly anxious about our own wants and desires. Worry is always at work in us: "What will become of me? Will I get it or won't I? What should I do, or not do, to get it?" Instead, be still. Think of yourself as an eight-year-old boy or girl. Question yourself about your faults and virtues impartially but gently. Invite silence into your mind and keep saying: "Come, silence," "Be calm." A little later you will find your mind has grown quiet. Do this whenever you feel restless.

An exercise for becoming still

Sit in a chair with both feet on the floor. Close your eyes and imagine that an electric wire is attached to the base of your spine, the point that yoga calls the kundalini. Imagine this wire descending like a waterfall from the very center of the earth, which lies like a calm sea above your head, entering your body and drawing out all its waste and everything harmful. You feel very light. At first you will need a quiet place to do this; later, once you have mastered it, you can do it anywhere: in the office, on the street, in a crowd. It is a great remedy for keeping the mind calm, and you will feel its benefits at every moment.
Keeping three chakras active

Divine light, joy, and consciousness play among the vishuddha (throat), anahata (heart), and manipura (navel) chakras, so keeping these chakras active is very important. Keep tapping over your throat, heart, and spleen; this will activate them. Do this for two minutes a day.

Nerve-stimulating exercise

Lethargy is no good thing; in the language of yoga it is called the "tamasik" state. You can do various yoga exercises to stimulate the nerves and drive out lethargy. For example, press the thumb of one hand just above the palm of the other. Slowly draw the fingers down toward the thumb. Do this fifteen times, and your lethargy will lift.
The "ki fo" exercise

Ki fo takes only two minutes. Make loose fists with both hands and pat your whole body, starting from the head, over the skull, the brain, and past the neck, continuing over the entire body. This increases blood circulation and helps the body generate energy. The head is where we think; the mind is always dwelling on one thing or another, trying to reach a solution, whether good or bad.

Keeping a dog or a cat


Most people are selfish, and many have animal instincts besides. Animals' behavior is contagious. Dogs are devoted to their masters; their selfless love, childlike manner, playfulness, and contentment with little will flow into you like a contagion. So the company of animals is far better than the company of selfish people.
Swimming in seawater

Around our entire body there is a ring called the "subtle body". This subtle body is made of a supernatural aura that expresses our physical and spiritual well-being. To let this aura show, we must keep the body clean, and so we should bathe in seawater every morning, for salt water is called a natural cleanser. Salt is invaluable for drawing excess waste out of the body and balancing the body's minerals. If the body is not kept clean, disease will take root in it, and life will come to feel like a burden and a bore.

Using paints and brushes

A small child is the very image of innocence and purity. Start drawing with colored pencils the way small children do. Close your eyes and meditate for a while to calm the mind. Then, to get your brain going, draw a circle on a sheet of paper and divide it into eight parts. Fill the eight parts with whatever colors you like. Do not worry about whether or not you know anything about drawing technique.
Planting trees

Planting trees is a very good habit. Gardening nourishes the mind. Planting and caring for trees will bring you close to nature, make you a lover of nature, and help you become as generous as nature is. A home garden will give you fresh air, and fresh vegetables for cooking besides.
Psychologists also advise everyone to laugh heartily for at least 30 minutes a day. There are other ways you may find joy as well: however busy you are, spend at least one day, or one hour, a week with your loved ones, and take them on an outing to a favorite place.

In your leisure time you can watch a favorite program on television with the whole family, or read a good storybook at night. You can also go to the cinema with your family or friends and see a good film.

Above all, to escape despair and anxiety, remember that whoever you may be, there is no one else like you anywhere in the world. You still have much left to give the world. So do not needlessly think yourself less than others; live in your own way and be happy.
http://www.banglanews24.com/LifeStyle/detailsnews.php?nssl=3490

393
MCT / Central Processing Unit (CPU)
« on: August 18, 2013, 03:24:59 PM »
Central Processing Unit

"CPU" redirects here. For other uses, see CPU (disambiguation).
"Computer processor" redirects here. For other uses, see Processor (computing).
 
An Intel 80486DX2 CPU, as seen from above.
 
An Intel 80486DX2, as seen from below

A central processing unit (CPU), also referred to as a central processor unit,[1] is the hardware within a computer that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system. The term has been in use in the computer industry at least since the early 1960s.[2] The form, design, and implementation of CPUs have changed over the course of their history, but their fundamental operation remains much the same.
A computer can have more than one CPU; this is called multiprocessing. Some integrated circuits (ICs) can contain multiple CPUs on a single chip; those ICs are called multi-core processors.

Two typical components of a CPU are the arithmetic logic unit (ALU), which performs arithmetic and logical operations, and the control unit (CU), which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.
Not all computational systems rely on a central processing unit. An array processor or vector processor has multiple parallel computing elements, with no one unit considered the "center". In the distributed computing model, problems are solved by a distributed interconnected set of processors.

The abbreviation CPU is sometimes used incorrectly by people who are not computer specialists to refer to the cased main part of a desktop computer containing the motherboard, processor, disk drives, etc., i.e., not the display monitor or keyboard.
Contents

1 History
  1.1 Transistor and integrated circuit CPUs
  1.2 Microprocessors
2 Operation
3 Design and implementation
  3.1 Control unit
  3.2 Integer range
  3.3 Clock rate
  3.4 Parallelism
    3.4.1 Instruction level parallelism
    3.4.2 Thread-level parallelism
    3.4.3 Data parallelism
4 Performance
5 See also
6 References and notes
  6.1 Notes
  6.2 References
7 External links

History
Main article: History of general purpose CPUs
 
 
EDVAC, one of the first stored-program computers.

Computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers." Since the term "CPU" is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949.[3] EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the memory.

Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for many purposes. This standardization began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.

While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.
Relays and vacuum tubes (thermionic valves) were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely.[2] In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.
Transistor and integrated circuit CPUs
 
 
CPU, core memory, and external bus interface of a DEC PDP-8/I. Made of medium-scale integrated circuits.
The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements like vacuum tubes and electrical relays. With this improvement more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components.
During this period, a method of manufacturing many interconnected transistors in a compact space was developed. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip." At first only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based upon these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo guidance computer, usually contained up to a few score transistors. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. As microelectronic technology advanced, an increasing number of transistors were placed on ICs, thus decreasing the quantity of individual ICs needed for a complete CPU. MSI and LSI (medium- and large-scale integration) ICs increased transistor counts to hundreds, and then thousands.

In 1964 IBM introduced its System/360 computer architecture which was used in a series of computers that could run the same programs with different speed and performance. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM utilized the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs.[4] The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. In 1965, Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. DEC would later introduce the extremely popular PDP-11 line that originally was built with SSI ICs but was eventually implemented with LSI components once these became practical. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits.[5]

Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to both the increased reliability as well as the dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were obtained during this period. Additionally while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like SIMD (Single Instruction Multiple Data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc.
http://upload.wikimedia.org/wikipedia/commons/d/dc/Intel_80486DX2_top.jpg

394
MCT / Arithmetic Logic Unit (ALU)
« on: August 17, 2013, 04:08:43 PM »
Arithmetic and logic unit schematic symbol
Cascadable 8-bit ALU, Texas Instruments SN74AS888

In computing, an arithmetic and logic unit (ALU) is a digital circuit that performs integer arithmetic and logical operations. The ALU is a fundamental building block of the central processing unit of a computer, and even the simplest microprocessors contain one for purposes such as maintaining timers. The processors found inside modern CPUs and graphics processing units (GPUs) accommodate very powerful and very complex ALUs; a single component may contain a number of ALUs.

Mathematician John von Neumann proposed the ALU concept in 1945, when he wrote a report on the foundations for a new computer called the EDVAC. Research into ALUs remains an important part of computer science, falling under Arithmetic and logic structures in the ACM Computing Classification System.
Contents

    1 Numerical systems
    2 Practical overview
        2.1 Complex operations
        2.2 Inputs and outputs
    3 See also
    4 Notes
    5 References
    6 External links

Numerical systems
Main article: Signed number representations

An ALU must process numbers using the same format as the rest of the digital circuit. The format of modern processors is almost always the two's complement binary number representation. Early computers used a wide variety of number systems, including ones' complement, two's complement, sign-magnitude format, and even true decimal systems, with various[NB 2] representations of the digits.


The ones' complement and two's complement number systems allow for subtraction to be accomplished by adding the negative of a number in a very simple way, which negates the need for specialized circuits to do subtraction; however, calculating the negative in two's complement requires adding a one to the low-order bit and propagating the carry. An alternative way to do two's complement subtraction of A−B is to present a one to the carry input of the adder and use ¬B rather than B as the second input. The arithmetic, logic and shift circuits introduced in previous sections can be combined into one ALU with common selection.
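The subtraction trick described above (present a one at the carry input and use ¬B as the second operand) can be checked in a few lines of Python for an 8-bit word; the masking models the fixed register width:

    def twos_complement_subtract(a, b, bits=8):
        mask = (1 << bits) - 1
        not_b = ~b & mask                 # ¬B: invert B within the word width
        return (a + not_b + 1) & mask     # add with carry-in = 1, drop overflow

    print(twos_complement_subtract(100, 58))   # 42
    print(twos_complement_subtract(5, 7))      # 254, i.e. -2 in 8-bit two's complement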
Practical overview

Most of a processor's operations are performed by one or more ALUs. An ALU loads data from input registers. Then an external control unit tells the ALU what operation to perform on that data, and the ALU stores its result in an output register. The control unit is responsible for moving the processed data between these registers, the ALU, and memory.
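A toy software model of this arrangement, with a dictionary standing in for the control unit's operation-select signal (illustrative only, not a hardware description):

    import operator

    # the control unit's operation-select code chooses what the ALU does
    ALU_OPS = {"ADD": operator.add, "SUB": operator.sub,
               "AND": operator.and_, "OR": operator.or_}

    def alu(op_select, reg_a, reg_b):
        # the result would be latched into an output register
        return ALU_OPS[op_select](reg_a, reg_b)

    print(alu("ADD", 6, 3))   # 9
    print(alu("AND", 6, 3))   # 2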
Complex operations

Engineers can design an Arithmetic Logic Unit to calculate most operations. The more complex the operation, the more expensive the ALU is, the more space it uses in the processor, and the more power it dissipates. Therefore, engineers compromise. They make the ALU powerful enough to make the processor fast, yet not so complex as to become prohibitive. For example, computing the square root of a number might use:

    Calculation in a single clock: design an extraordinarily complex ALU that calculates the square root of any number in a single step.
    Calculation pipeline: design a very complex ALU that calculates the square root of any number in several steps. The intermediate results go through a series of circuits arranged like a factory production line. The ALU can accept new numbers to calculate even before having finished the previous ones. The ALU can now produce numbers as fast as a single-clock ALU, although the results start to flow out of the ALU only after an initial delay.
    Iterative calculation: design a complex ALU that calculates the square root through several steps. This usually relies on control from a complex control unit with built-in microcode.
    Co-processor: design a simple ALU in the processor, and sell a separate specialized and costly processor that the customer can install just beside this one, and that implements one of the options above.
    Software libraries: tell the programmers that there is no co-processor and there is no emulation, so they will have to write their own algorithms to calculate square roots by software.
    Software emulation: emulate the existence of the co-processor; that is, whenever a program attempts to perform the square root calculation, make the processor check if there is a co-processor present and use it if there is one; if there is not one, interrupt the processing of the program and invoke the operating system to perform the square root calculation through some software algorithm.

The options above go from the fastest and most expensive one to the slowest and least expensive one. Therefore, while even the simplest computer can calculate the most complicated formula, the simplest computers will usually take a long time doing that because of the several steps for calculating the formula.
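The last two options come down to computing the root in ordinary code. A minimal sketch of such a software routine, using integer Newton iteration (just one of many possible algorithms):

    def isqrt(n):
        # integer square root by Newton's method: each step refines the guess
        x = n
        while x * x > n:
            x = (x + n // x) // 2
        return x

    print(isqrt(144))   # 12
    print(isqrt(145))   # 12 (floor of the true root)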

Powerful processors like the Intel Core and AMD64 implement option #1 for several simple operations, #2 for the most common complex operations and #3 for the extremely complex operations.
Inputs and outputs

The inputs to the ALU are the data to be operated on (called operands) and a code from the control unit indicating which operation to perform. Its output is the result of the computation. One thing designers must keep in mind is whether the ALU will operate on big-endian or little-endian numbers.

In many designs, the ALU also takes condition codes from, or generates them for, a status register. These codes are used to indicate cases such as carry-in or carry-out, overflow, divide-by-zero, etc.
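A sketch of how such codes might be derived for an 8-bit addition; this is a software model with generic flag names, not any particular processor's flag semantics:

    def add_with_flags(a, b, bits=8):
        mask = (1 << bits) - 1
        sign = 1 << (bits - 1)
        result = (a + b) & mask
        flags = {
            "carry":    a + b > mask,                      # unsigned overflow
            "zero":     result == 0,
            "negative": bool(result & sign),
            # signed overflow: operands share a sign bit the result lacks
            "overflow": bool(~(a ^ b) & (a ^ result) & sign),
        }
        return result, flags

    print(add_with_flags(200, 100))   # carry set: 300 does not fit in 8 bits
    print(add_with_flags(100, 100))   # overflow set: +200 is not a valid signed byte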
A floating-point unit also performs arithmetic operations between two values, but it does so for numbers in floating-point representation, which is much more complicated than the two's complement representation used in a typical ALU. In order to do these calculations, an FPU has several complex circuits built in, including some internal ALUs.

In modern practice, engineers typically refer to the ALU as the circuit that performs integer arithmetic operations (like two's complement and BCD). Circuits that calculate more complex formats like floating point, complex numbers, etc. usually receive a more specific name such as FPU.
See also

395
Law / Arrest, Remand and Our Rights
« on: August 17, 2013, 03:13:08 PM »
Arrest, Remand and Our Rights
Advocate Siraj Pramanik



The High Court has stayed the remand order against Adilur Rahman Khan, secretary of the human rights organization Odhikar. Last Monday, a High Court vacation bench comprising Justice Borhan Uddin and Justice Kashefa Hossain issued the order after hearing a petition.
Earlier, last Saturday night, the Dhaka Metropolitan detective police arrested Adilur Rahman Khan from Gulshan in the capital. The next day, Sunday afternoon, the police produced him before a court and sought a ten-day remand.

Amit Kumar Dey, judge of the Chief Metropolitan Magistrate's Court of Dhaka, granted a five-day remand. Challenging the legality of that remand order, the secretary of Odhikar petitioned the High Court.

After the hearing, the court also issued a rule, asking why the remand order should not be declared illegal.
Reader, now to the real point. The police have the power to take into remand even someone arrested on a charge of stealing radishes; here the police's power is unbounded. The police are exercising the power to take into remand people arrested under Section 54, and even people who have surrendered in court.

If the police make the demand in court, the magistrate is largely bound to grant remand, and command over a remand granted on the police's demand also rests with the police, although the magistrate has the power both to grant and to refuse police remand of an accused.

But the power to grant remand is exercised far more readily than the power to refuse it; here the police's demand is what prevails. One may say the police hold absolute power in this matter. In our country, the power of arrest without warrant under Section 54 of the Code of Criminal Procedure has been applied arbitrarily for decade after decade.
For just as long, the police have acted on rumors, suspicion, and personal enmity, and political leaders have exerted their influence over the administration.
But wrongful arrest and detention are wholly inconsistent with the provisions of the Constitution.

The power conferred on the police by Section 54 of the Code of Criminal Procedure of 1898 to arrest any citizen of Bangladesh indiscriminately is contrary to the spirit of the Constitution, and its application amounts to an assault on personal liberty.

ওয়ারেন্ট ছাড়া কিংবা কোনোরুপ অন্যায় ছাড়াই  আমলযোগ্য অপরাধের সঙ্গে সম্পৃক্ত বলে সন্দেহ হলেই কোনো ব্যক্তিকে ৫৪ ধারায় গ্রেপ্তারের ক্ষমতা দেয়া হয়েছে।
এ ক্ষমতার অপব্যবহার রোধ করার জন্যই উপমহাদেশের বিভিন্ন আদালতের মতো বাংলাদেশ সুপ্রিমকোর্টের হাইকোর্ট বিভাগও সুনির্দিষ্ট গাইডলাইন দিয়েছেন, যা আইন প্রয়োগকারী সংস্থা ও অধস্থন আদালতগুলোর জন্য মেনে চলার বাধ্যবাধকতা রয়েছে।

২০০৩ সালে বিচারপতি হামিদুল হক এবং সালমা মাসুদের সমন্বয়ে গঠিত ডিভিশন বেঞ্চ ব্লাষ্ট এর দায়ের করা রিট মামলার রায়ে এ ব্যাপারে বিস্তারিত মতামত, সুপারিশ ও নির্দেশনা প্রদান করেছেন।
কিন্তু মানবাধিকার সংগঠনগুলোর বিরামহীন প্রচষ্টার পরও সরকার আজ পর্যন্ত ওই রায়ের আলোকে কোনো পদক্ষেপ গ্রহণ করেনি কিংবা করতে পারেনি।
উল্লেখ্য, ওই রায়ে সরকারকে আইন সংশোধন সুপারিশ গ্রহণের জন্য ছয় মাস সময় দেয়া হয়েছিল, যা আপিল বিভাগ স্থগিত করে রেখেছেন।
কিন্তু রায়ে যে মতামত দেয়া হয়েছে তা পুলিশ প্রশাসন ও অধস্তন আদালতের ওপর বাধ্যকারী বটে।

উল্লিখিত রায়ে ৫৪ ধারার অনিয়ন্ত্রিত ক্ষমতাকে অসাংবিধানিক ও মৌখিক মানবাধিকারের পরিপন্থী বলে ঘোষণা করা হয়েছে।
আদালত রায়ে বলেন, ‘৫৪ ধারার ক্ষমতা প্রয়োগে চরমভাবে স্বেচ্ছাচারিতা লক্ষ্য করা যায়।

এ ধারার ভাষাতেও অস্পষ্টতা আছে। তবে কেবল সন্দেহের বশবর্তী হয়ে যে কোনো নাগরিকের ব্যক্তি স্বাধীনতা হরণ করে আটক বা প্রহরায় নেয়া অন্যায়, বেআইনি ও অসাংবিধানিক।
কারণ ৫৪ ধারায় ওয়ারেন্ট ছাড়া আটক এর যে ক্ষমতা দেয়া হয়েছে তা সংবিধানের তৃতীয় অধ্যায়ে বর্ণিত মৌলিক অধিকার সংক্রান্ত বিধানগুলোর পরিপন্থী।
৫৪ ধারায় গ্রেপ্তার যদি করতে হয় তা হতে হবে সুনির্দিষ্ট, বিশ্বাসযোগ্য ও যুক্তিসঙ্গত তথ্য-উপাত্তের ভিত্তিতে।
কোনোক্রমেই নাগরিকের ব্যক্তি স্বাধীনতা অন্যায়ভাবে হরণ করা যাবে না। ফৌজদারি কার্যবিধির ১৬৭ ধারার ক্ষমতাবলে যখন তখন রিমান্ড চাওয়া ও মঞ্জুর করা সংবিধানের চেতনার পরিপন্থী।’
এ অবিচার বন্ধ করা অত্যাবশ্যক।

আইনের শাসন ও মৌলিক গণতান্ত্রিক অধিকার রক্ষায় এ জাতি বুকের তাজা রক্ত ঢেলে দিয়ে দেশ স্বাধীন করেছিল।
কিন্তু আমরা আজও আইনের মাধ্যমে নির্যাতন বন্ধ করতে পারিনি। এটি খুবই হতাশ ও দুঃখজনক বটে।

উল্লিখিত রায়ে আরো বলা হয় ‘পুলিশ সুনির্দিষ্ট তথ্যের ভিত্তিতে আমলযোগ্য কোনো অপরাধের সঙ্গে সম্পৃক্ত হওয়ার অভিযোগের ওপর ভিত্তি করে যথাযথ সতর্কতা অবলম্বন করেই কেবল কোনো অভিযুক্ত বা সাক্ষীকে ৫৪ ধারায় গ্রেপ্তার করতে পারবে।

এক্ষেত্রেও গ্রেপ্তারের কারণ বিস্তারিতভাবে উল্লেখ করতে হবে। ডায়েরি সংরক্ষণ করতে হবে। গ্রেপ্তারের তারিখ, স্থান উল্লেখ করতে হবে। ডায়েরিতে অবশ্যই সন্দেহের কারণের ব্যাখ্যা লিপিবদ্ধ করা জরুরি।

কে তথ্য দিল, তার পরিচয় সুনির্দিষ্টভাবে উল্লেখ থাকতে হবে। গ্রেপ্তারের পরপরই গ্রেপ্তারকৃত ব্যক্তির আত্মীয়স্বজনকে সংবাদ দিতে হবে। গ্রেপ্তারের সময় সংশ্লিষ্ট ব্যক্তিকেও কারণ ব্যাখ্যা করতে হবে।’
উল্লিখিত রায় এ দেশে একটি যুগান্তকারী ঘটনা। হাইকোর্ট প্রত্যাশা করেছিলেন, মাসদার হোসাইন মামলার রায়ের সুপারিশের মাধ্যমে বিচার বিভাগ পৃথক হওয়ায় ২০০৩ সালের উল্লিখিত রিট মামলার রিমান্ড ও ৫৪ ধারা সংক্রান্ত সুপারিশের আলোকেও এ সংক্রান্ত আইন প্রণয়ন করা হবে।
কিন্তু অদ্যাবধি তা হয়নি। এটি খুবই দুঃখজনক। যার কারণে জনগণের অধিকার প্রতিনিয়ত ভূলুণ্ঠিত হচ্ছে।
মানুষের ব্যক্তি অধিকার ও সম্মান ক্ষুণ্ন হচ্ছে।

আমাদের সংবিধানের ৩৪ অনুচ্ছেদেও গ্রেপ্তার ও আটকের ব্যাপারে পদ্ধতিগত সুরক্ষা দিয়েছে যেমন ‘গ্রেপ্তারকৃত কোনো ব্যক্তিকে যথাশীঘ্র গ্রেপ্তারের কারণ জ্ঞাপন না করিয়া পুনরায় আটক রাখা যাবে না এবং উক্ত ব্যক্তিকে তার মনোনীত আইনজীবীর সহিত পরামর্শের ও তার দ্বারা আত্মপক্ষ সমর্থনের অধিকার হইতে বঞ্চিত করা যাবে না।’
আমাদের পবিত্র সংবিধানের বিধান প্রতিনিয়ত লংঘন হচ্ছে অথচ আমার সুশাসনের কথা বলছি।

পত্রিকায় প্রকাশিত সংবাদ থেকে জানা যায়, ঢাকা মহানগর পুলিশের যুগ্ম কমিশনার মনিরুল ইসলাম সাংবাদিকদের জানিয়েছেন, আদিলুর রহমানকে ফৌজদারি কার্যবিধির ৫৪ ধারায় গ্রেপ্তার করা হয়েছে। তবে তিনি তথ্যপ্রযুক্তি আইনের ৫৭ ধারার ১ ও ২ উপধারায় অপরাধ করেছেন।

অপরাধটি হচ্ছে, গত ৫ মে রাতে শাপলা চত্বরে হেফাজতে ইসলামের সমাবেশের ঘটনার ওপর প্রতিবেদন প্রকাশ করে এনজিও হিসেবে নিবন্ধিত তাঁর প্রতিষ্ঠান ‘অধিকার’। এতে কাল্পনিকভাবে ৬১ জন মারা গেছে বলে দাবি করা হয়। প্রতিবেদনের প্রচ্ছদে ফটোশপের মাধ্যমে ছবি বিকৃত করে তা জুড়ে দেওয়া হয়েছে। এতে দেশের ভাবমূর্তি নষ্ট করার চেষ্টা করা হয় এবং মানুষের ধর্মীয় অনুভূতিতে আঘাত হানা হয়।

তবে তথ্যপ্রযুক্তি আইনের ৫৭ ধারায় বলা হয়েছে, ‘কোনো ব্যক্তি যদি ইচ্ছাকৃতভাবে ওয়েবসাইটে বা অন্য কোনো ইলেকট্রনিক বিন্যাসে এমন কিছু প্রকাশ বা সম্প্রচার করেন, যাহা মিথ্যা ও অশ্লীল বা সংশ্লিষ্ট অবস্থা বিবেচনায় কেহ পড়িলে, দেখিলে বা শুনিলে নীতিভ্রষ্ট বা অসত্য হইতে উদ্বুদ্ধ হইতে পারেন অথবা যাহার দ্বারা মানহানি ঘটে, আইনশৃঙ্খলার অবনতি ঘটে বা ঘটার সম্ভাবনা সৃষ্টি হয় বা রাষ্ট্র বা ব্যক্তির ভাবমূর্তি ক্ষুণ্ণ হয় বা ধর্মীয় অনুভূতিতে আঘাত করে বা করিতে পারে বা এ ধরনের তথ্যাদির মাধ্যমে কোনো ব্যক্তি বা সংগঠনের বিরুদ্ধে উসকানি প্রদান করা হয়, তাহা হইলে তাহার এই কার্য হইবে অপরাধ।"

কোনো ব্যক্তি এর অধীন অপরাধ করলে তিনি অনধিক ১০ বছর কারাদণ্ডে অথবা অনধিক এক কোটি টাকা অর্থদণ্ডে দণ্ডিত হইবেন।’
আইনের যথাযথ প্রয়োগ সবাই চায়, অপপ্রযোগ নয়।
লেখকঃ সাংবাদিক, আইনগ্রন্থ প্রণেতা, এম.ফিল গবেষক ও আইনজীবী জজ কোর্ট, কুষ্টিয়া।
www.banglanews.com

396
MCT / Computer Operating System
« on: August 17, 2013, 01:12:21 PM »
Computer Operating System

An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is an essential component of the system software in a computer system. Application programs usually require an operating system to function.[1]
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for the cost allocation of processor time, mass storage, printing, and other resources.

For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[2][3] although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it. Operating systems can be found on almost any device that contains a computer—from cellular phones and video game consoles to supercomputers and web servers.
Examples of popular modern operating systems include Android, BSD, iOS, GNU/Linux, OS X, QNX, Microsoft Windows,[4] Windows Phone, and IBM z/OS. All these, except Windows and z/OS, share roots in UNIX.


Types of operating systems
Real-time

A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms to achieve deterministic behavior. Their main objective is a quick and predictable response to events. They have an event-driven or time-sharing design, and often aspects of both. An event-driven system switches between tasks based on their priorities or on external events, while a time-sharing operating system switches tasks based on clock interrupts.
Multi-user

A multi-user operating system allows multiple users to access a computer system at the same time. Time-sharing systems and Internet servers can be classified as multi-user systems as they enable multiple-user access to a computer through the sharing of time. Single-user operating systems have only one user but may allow multiple programs to run at the same time.

Multi-tasking vs. single-tasking

A multi-tasking operating system allows more than one program to be running at the same time, from the point of view of human time scales. A single-tasking system has only one running program. Multi-tasking can be of two types: pre-emptive and co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking, as does AmigaOS. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used pre-emptive multi-tasking. Mac OS prior to OS X supported cooperative multitasking.
Distributed

Further information: Distributed system

A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they make a distributed system.
Embedded

Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources. They are very compact and extremely efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems.
History

Main article: History of operating systems

See also: Resident monitor

Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s.[5] Hardware features were added that enabled the use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers.
In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).
[Image: OS/360 was used on most IBM mainframe computers beginning in 1966, including the computers that helped NASA put a man on the moon.]

In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period of time and would arrive at a scheduled time with program and data on punched paper cards and/or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the Universal Turing machine.[5]

Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and generating computer code from human-readable symbolic code. This was the genesis of the modern-day operating system. However, machines still ran a single job at a time. At Cambridge University in England the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job-priority.[citation needed]
Mainframes

Main article: Mainframe computer

See also: History of IBM mainframe operating systems
Through the 1950s, many major features were pioneered in the field of operating systems, including batch processing, input/output interrupt, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959 the SHARE Operating System was released as an integrated utility for the IBM 704, and later for the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.
During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system and applications written for OS/360 can still be run on modern machines.[citation needed]

OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during update. When the process is terminated for any reason, all of these resources are re-claimed by the operating system.

The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).

Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games.

In 1961, Burroughs Corporation introduced the B5000 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language – ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM approached Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers.
UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early mainframe systems, this batch-oriented system managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.

General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).
Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.

From the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations.
The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:

•   Burroughs MCP – B5000, 1961 to Unisys Clearpath/MCP, present.
•   IBM OS/360 – IBM System/360, 1966 to IBM z/OS, present.
•   IBM CP-67 – IBM System/360, 1967 to IBM z/VM, present.
•   UNIVAC EXEC 8 – UNIVAC 1108, 1967, to OS 2200 Unisys Clearpath Dorado, present.
Microcomputers
[Image: PC DOS was an early personal computer OS that featured a command line interface.]
[Image: Mac OS by Apple Computer became the first widespread OS to feature a graphical user interface. Many of its features such as windows and icons would later become commonplace in GUIs.]

The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the '80s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative Graphical User Interface (GUI) to the Mac OS.

The introduction of the Intel 80386 CPU chip, with its 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X.

The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.

Examples of operating systems
UNIX and UNIX-like operating systems
[Image: Evolution of Unix systems]
Main article: Unix

Unix was originally written in assembly language.[6] Ken Thompson wrote B, mainly based on BCPL, based on his experience in the MULTICS project. B was replaced by C, and Unix, rewritten in C, developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History).
The UNIX-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.

Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.
Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems' Solaris Operating System can run on multiple types of hardware, including x86 and SPARC servers, and PCs. Apple's OS X, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.

Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
BSD and its descendants
[Image: The first server for the World Wide Web ran on NeXTSTEP, based on BSD.]
Main article: Berkeley Software Distribution
A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP.

BSD has its roots in Unix. In 1974, University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T.
Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web.

Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. Eventually, after two years of legal disputes, the BSD project came out ahead and spawned a number of free derivatives, such as FreeBSD and NetBSD.
OS X
Main article: OS X
[Image: The standard user interface of OS X]
OS X (formerly "Mac OS X") is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997. The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0 "Cheetah") following in March 2001. Since then, six more distinct "client" and "server" editions of OS X have been released, the most recent being OS X 10.8 "Mountain Lion", which was first made available on February 16, 2012 for developers, and was then released to the public on July 25, 2012. Releases of OS X are named after big cats.

Prior to its merging with OS X, the server edition – OS X Server – was architecturally identical to its desktop counterpart and usually ran on Apple's line of Macintosh server hardware. OS X Server included work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. With Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version and the product re-branded as "OS X" (dropping "Mac" from the name). The server tools are now offered as an application.[7]
Linux and GNU
Main articles: GNU, Linux, and Linux kernel
[Image: Ubuntu, a desktop Linux distribution]
[Image: Android, a popular mobile operating system using the Linux kernel]
Linux (or GNU/Linux) is a Unix-like operating system that was developed without any actual Unix code, unlike BSD and its variants. Linux can be used on a wide range of devices from supercomputers to wristwatches. The Linux kernel is released under an open source license, so anyone can read and modify its code. It has been modified to run on a large variety of electronics. Although estimates suggest that Linux is used on 1.82% of all personal computers,[8][9] it has been widely adopted for use in servers[10] and embedded systems[11] (such as cell phones). Linux has superseded Unix in most places, and is used on the 10 most powerful supercomputers in the world.[12] The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android.

The GNU project is a mass collaboration of programmers who seek to create a completely free and open operating system that was similar to Unix but with completely original code. It was started in 1983 by Richard Stallman, and is responsible for many of the parts of most Linux variants. Thousands of pieces of software for virtually every operating system are licensed under the GNU General Public License. Meanwhile, the Linux kernel began as a side project of Linus Torvalds, a university student from Finland. In 1991, Torvalds began work on it, and posted information about his project on a newsgroup for computer students and programmers. He received a wave of support and volunteers who ended up creating a full-fledged kernel. Programmers from GNU took notice, and members of both projects worked to integrate the finished GNU parts with the Linux kernel in order to create a full-fledged operating system.
Google Chromium OS

Main article: Google Chromium OS
Chromium OS is an operating system based on the Linux kernel and designed by Google. Since Chromium OS targets computer users who spend most of their time on the Internet, it is mainly a web browser with limited ability to run local applications, though it has a built-in file manager and media player. Instead, it relies on Internet applications (or Web apps) used in the web browser to accomplish tasks such as word processing.[13]
Microsoft Windows
Main article: Microsoft Windows
Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers.[9][14][15][16] The newest version is Windows 8 for workstations and Windows Server 2012 for servers. Windows 7 recently overtook Windows XP as the most used OS.[17][18][19]

Microsoft Windows originated in 1985 as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released, using MS-DOS only as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS[20][21] and 16-bit Windows 3.x[22] drivers. Windows ME, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current versions of Windows run on IA-32 and x86-64 microprocessors, although Windows 8 will support the ARM architecture.[23] In the past, Windows NT supported non-Intel architectures.
Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers, as Windows competes against Linux and BSD for server market share.[24][25]
Other

There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; XTS-300; RISC OS; MorphOS and FreeMiNT. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research.
Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.
Components

The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as an Internet component.
Kernel
[Image: A kernel connects the application software to the hardware of a computer.]
Main article: Kernel (computing)
With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.
Program execution

Main article: Process (computing)

The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program involves the creation of a process by the operating system kernel which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program which then interacts with the user and with hardware devices.
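As a concrete illustration, here is a minimal sketch of that sequence on a POSIX system (the program /bin/echo and the message are arbitrary examples): the parent asks the kernel to create a new process, the child loads a program binary into its memory, and the parent waits for it to finish.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                     /* ask the kernel to create a new process */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                         /* child: load a program binary into memory */
        execl("/bin/echo", "echo", "hello from a new process", (char *)NULL);
        perror("execl");                    /* reached only if the exec failed */
        return 127;
    }

    int status;
    waitpid(pid, &status, 0);               /* parent blocks until the child terminates */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
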
Interrupts

Main article: Interrupt

Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative, in which the operating system "watches" the various sources of input for events that require action (polling), can be found in older systems with very small stacks (50 or 60 bytes), but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.
When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program.

When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called device driver, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
A program may also trigger an interrupt to the operating system. If a program wishes to access hardware for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel will then process the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it will trigger an interrupt to get the kernel's attention.
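Signals are the user-space analogue of this mechanism, and a hedged sketch can make the idea concrete: the kernel suspends the program, runs the handler previously registered for the event, and then lets the program resume. This is only an analogue of hardware interrupts, which are serviced inside the kernel itself.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* The "code previously associated with the interrupt": the kernel
 * suspends main(), runs this handler, then resumes main(). */
static void on_sigint(int signo)
{
    (void)signo;
    got_signal = 1;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);           /* register the handler with the kernel */

    puts("running; press Ctrl-C to deliver SIGINT...");
    while (!got_signal)
        pause();                            /* sleep until any signal arrives */
    puts("handler ran; the main program has resumed");
    return 0;
}
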
Modes
Main articles: Protected mode and Supervisor mode
[Image: Privilege rings for the x86, available in protected mode. Operating systems determine which processes run in each mode.]
Modern CPUs support multiple modes of operation. CPUs with this capability use at least two modes: protected mode and supervisor mode. The supervisor mode is used by the operating system's kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is written and erased, and communication with devices like graphics cards. Protected mode, in contrast, is used for almost everything else. Applications operate within protected mode, and can only use hardware by communicating with the kernel, which controls everything in supervisor mode. CPUs might have other modes similar to protected mode as well, such as the virtual modes used to emulate older processor types, such as 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one.

When a computer first starts up, it is automatically running in supervisor mode. The first few programs to run on the computer, namely the BIOS or EFI, the bootloader, and the operating system, have unlimited access to hardware. This is required because, by definition, initializing a protected environment can only be done outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode.

In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware and memory.
The term "protected mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally causes a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting (for example, by killing the program).
Memory management

Main article: Memory management

Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.

In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short. Since it is difficult to assign a meaningful result to such an operation, and since it is usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program, and reports the error.
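A deliberately misbehaving toy program makes this concrete; assuming a typical protected-memory system, the access below faults and the kernel terminates the process with a message along the lines of "Segmentation fault":

#include <stdio.h>

int main(void)
{
    int *p = (int *)0x1;                    /* not an address the kernel ever granted us */
    printf("about to dereference %p\n", (void *)p);
    return *p;                              /* the MMU faults; the kernel kills the program */
}
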

Windows 3.1-Me had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
Virtual memory

Main article: Virtual memory
Further information: Page fault
[Image: Many operating systems can "trick" programs into using memory scattered around the hard disk and RAM as if it is one continuous chunk of memory, called virtual memory.]
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
"Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[26]

Multitasking

Main articles: Computer multitasking and Process management (computing)
Further information: Context switch, Preemptive multitasking, and Cooperative multitasking
Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.
An operating system kernel contains a piece of software called a scheduler which determines how much time each program will spend executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch.
An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
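A toy model of cooperative multitasking, with invented task names, shows why this is so: the "scheduler" below only regains control because each task chooses to return, so a task that looped forever would hang the whole loop.

#include <stdio.h>

#define NTASKS 2

static void task_a(void) { puts("task A runs, then yields"); }
static void task_b(void) { puts("task B runs, then yields"); }

int main(void)
{
    void (*tasks[NTASKS])(void) = { task_a, task_b };

    /* The "scheduler": hand the CPU to each task in turn. Control only
     * comes back because every task voluntarily returns; one infinite
     * loop in any task would freeze the whole system. */
    for (int round = 0; round < 3; round++)
        for (int i = 0; i < NTASKS; i++)
            tasks[i]();
    return 0;
}
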
Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well-tested programs. AmigaOS is an exception, having had pre-emptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows to enforce preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
Disk access and file systems
Main article: Virtual file system
[Image: Filesystems allow users and programs to organize and sort files on a computer, often through the use of directories (or "folders").]
Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use out of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.

While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.
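The practical consequence is visible in a short POSIX sketch: the directory-listing code below uses only the generic opendir/readdir/stat interface and works unchanged whatever file system the directory actually lives on, because the VFS routes each call to the appropriate file system driver. The path "." is just an example.

#include <stdio.h>
#include <dirent.h>
#include <sys/stat.h>

int main(void)
{
    DIR *d = opendir(".");                  /* same call whether the FS is ext3, FAT, ... */
    if (!d) { perror("opendir"); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        struct stat st;                     /* attributes come back through the VFS */
        if (stat(e->d_name, &st) == 0)
            printf("%10lld  %s\n", (long long)st.st_size, e->d_name);
    }
    closedir(d);
    return 0;
}
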
A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.

When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.

Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third-party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software).
Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored on, whether it is a hard drive, a disc (CD, DVD, ...), a USB flash drive, or even a file located on another file system.
Device drivers

Main article: Device driver

A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and, on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent, operating-system-specific program that enables another program (typically the operating system, an applications software package, or a program running under the operating system kernel) to interact transparently with a hardware device. It usually provides the interrupt handling required for asynchronous, time-dependent hardware interfacing.
The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Newer models also are released by manufacturers that provide more reliable or better performance and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating system mandated function calls into device specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver will ensure that the device appears to operate as usual from the operating system's point of view.
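One common way to express such a dictated interface is an operations table of function pointers that every driver must fill in. The sketch below is loosely modeled on, but not identical to, structures such as Linux's file_operations; all names in it are invented for illustration.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct device_ops {                               /* the contract the OS dictates */
    int  (*open)(void);
    long (*read)(uint8_t *buf, size_t len);
    void (*close)(void);
};

/* One device's hardware-specific implementation of that contract. */
static int  mydev_open(void)                     { /* would power up the device */ return 0; }
static long mydev_read(uint8_t *buf, size_t len) { /* would talk over the bus  */ (void)buf; return (long)len; }
static void mydev_close(void)                    { /* would power it back down */ }

static const struct device_ops mydev_driver = {
    .open  = mydev_open,
    .read  = mydev_read,
    .close = mydev_close,
};

int main(void)
{
    uint8_t buf[16];
    mydev_driver.open();                          /* the OS calls through the table,   */
    long n = mydev_driver.read(buf, sizeof buf);  /* never the device code directly    */
    mydev_driver.close();
    printf("driver reported %ld bytes\n", n);
    return 0;
}

Because the OS only ever calls through the table, a new device with a new control scheme needs nothing more than a new set of functions behind the same three slots.
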
Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.

Networking

Main article: Computer network
Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the server's network address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
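A minimal client-side sketch using the BSD sockets API makes the port idea concrete. The address 127.0.0.1 and port 7 (the classic echo service) are just examples; on most machines nothing will be listening there, and the connect call will simply fail.

#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* ask the kernel for a TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(7);                  /* the port number selects the service */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0)
        perror("connect");                      /* expected when nothing is listening */
    else
        puts("connected to the echo service");

    close(fd);
    return 0;
}
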
Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported, such as NFS for file access. Protocols like ESound (esd) can easily be extended over the network to provide sound from local applications on a remote system's sound hardware.

Security

Main article: Computer security

A computer being secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.
The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.
In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.
External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information.
Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port.
An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is the operating system not running user programs as native code, but instead either emulates a processor or provides a host for a p-code based system such as Java.

Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.
User interface
[Image: A screenshot of the Bourne Again Shell command line. Each command is typed out after the 'prompt', and then its output appears below, working its way down the screen. The current command prompt is at the bottom.]
Main article: Operating system user interface
Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported. The user interface views the directory structure and requests services from the operating system that will acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and such on output hardware devices, such as a video monitor or printer. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.
Graphical user interfaces
A screenshot of the KDE Plasma Desktop graphical user interface. Programs take the form of images on the screen, and the files, folders (directories), and applications take the form of icons and symbols. A mouse is used to navigate the computer.
Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementation of Mac OS, the GUI is integrated into the kernel.
While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the rest of the operating system. In the 1980s UNIX, VMS and many others were built this way, and Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user space, whereas the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.

Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma Desktop is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.

Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, and an effort in the 1990s to standardize on COSE and CDE failed for various reasons; both were eventually eclipsed by the widespread adoption of GNOME and K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[27]
Real-time operating systems

Main article: Real-time operating system

A real-time operating system (RTOS) is an operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.

An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
Embedded systems with fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux or RTLinux. Windows CE is a real-time operating system that shares similar APIs with desktop Windows but shares none of desktop Windows' codebase.[citation needed] Symbian OS also has an RTOS kernel (EKA2), starting with version 8.0b.
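Here is a minimal sketch of the periodic, fixed-deadline task pattern such systems are built around, written against the POSIX clock_nanosleep() call (assuming Linux); a true RTOS adds priority-based preemption and bounded-latency guarantees that this desktop code does not have. The 100 ms period is an arbitrary example.

/* Periodic real-time task pattern: wake at fixed absolute deadlines
   rather than "sleep and drift". */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 100000000L   /* 100 ms control-loop period */

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int cycle = 0; cycle < 10; cycle++) {
        /* ... do the control work for this cycle here ... */
        printf("cycle %d\n", cycle);

        /* advance the absolute deadline by exactly one period */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* TIMER_ABSTIME keeps deadlines from drifting if the work ran long */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}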
Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.

Operating system development as a hobby

See also: Hobbyist operating system development

Operating system development is one of the most complicated activities in which a computing hobbyist may engage. A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system and which has few users and active developers.[28]

In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his/her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.

Examples of a hobby operating system include ReactOS and Syllable.
Diversity of operating systems and portability

Application software is generally written for use on a specific operating system, and sometimes even for specific hardware. When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, the meaning of arguments, etc.), requiring the application to be adapted, changed, or otherwise maintained.
The cost of supporting operating system diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
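As a small illustration of why such abstractions reduce porting costs, here is a C sketch in which the platform-specific calls (Win32 Sleep() versus POSIX nanosleep()) are confined to one wrapper function; the sleep_ms() name is an invented example, not a standard API.

/* The platform-specific calls are isolated in one small function, so only
   this wrapper changes when the application moves to another OS. */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static void sleep_ms(unsigned ms) { Sleep(ms); }   /* Win32 API */
#else
#include <time.h>
static void sleep_ms(unsigned ms)                  /* POSIX API */
{
    struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}
#endif

int main(void)
{
    /* Application code calls only the portable wrapper. */
    puts("pausing 500 ms...");
    sleep_ms(500);
    puts("done");
    return 0;
}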
www.wikipedia.com

397
MCT / The Time of Exercise
« on: August 12, 2013, 04:26:00 PM »


Exercise Timing


We are far more conscious about our health, fitness and figure than we used to be. We know there is no alternative to regular exercise for good health. Yet busyness is so woven into our lives that even finding a single hour is difficult.
Many of us get no chance to exercise in the morning because of office hours. By the time we return from the office, evening has turned into night. Back home there is still so much to do... no time to exercise. I know that is what you will say, but to stay healthy you simply have to carve out some time.
How? Here is how:
Morning
•   Many people start exercising while still sitting in bed after waking up. It is better not to do heavy exercise at this time, because the body needs a sufficient store of energy for exercise
•   If time is short, do some light jogging or take a morning walk half an hour after waking
•   After waking, freshen up, have breakfast, and exercise a few hours later
•   Remember: never exercise on an empty stomach

Afternoon

•   The most suitable time to exercise is the afternoon, after midday; that is, between 6 and 12 hours after waking
•   Those planning heavy exercise should pick a time during the day
•   After lunch, do not just sit; take a light walk.

Evening

•   Walk part of the way home
•   While walking, try to cover 1 kilometer every 10 minutes
•   You can exercise in the evening, but in that case be sure to relax before exercising, so that you do not feel tired during the workout.
•   Evening is the most suitable time for yoga.
•   You can also use a treadmill or do cycling at this time

Exercise according to your physical capacity and age; if you have back pain or breathing trouble, you cannot do every kind of exercise. So once you have decided to start exercising, also follow an expert's advice on diet and the right way of living.
Many people are playing badminton now in winter; they can manage without separate exercise. Before heavy exercise, always take plenty of time to warm up. Exercise regularly and stay healthy...
www.banglanews.24.com

398
MCT / Secret Speech
« on: August 12, 2013, 03:50:05 PM »
Who doesn't want to feel energetic all day? Who doesn't want to make the best use of time by bringing full vitality to work? Can all of us manage it?
Because we cannot, we keep searching for Aladdin's magic lamp, by whose grace we could stay fresh and productive the whole day.
There is no Aladdin's lamp in the real world, so today let us look at some science-based ways of staying productive that are recognized worldwide.

Get enough sleep

Sleep must not be neglected; neglect it at your peril. During sleep the body makes up its wear and tear, repairs its parts and rebuilds itself, so we need the right amount of sleep. For adults, 7-8 hours of sleep is generally enough to wake up refreshed.
A good time to go to bed is before 10 pm. The advantage of sleeping at this hour is that the hormone responsible for the body's growth is secreted between 11 pm and 1 am, which accelerates the body's growth during sleep.

No skipping breakfast

Breakfast should be the most important meal of the day, because it supplies the gasoline for the whole day's running. Eat breakfast even if you are not hungry.
Break the word "breakfast" apart and we get "break" and "fast": the meal that frees you from fasting. If you start work on an empty stomach, you will have to take break after break. A British study found that an adequate breakfast inhibits the secretion of the stress hormone cortisol, the result being less stress and fatigue. So let the praise of breakfast begin today.

Add fiber to your meals

It is said that a day's meals should contain 25 to 30 grams of fiber, although on average we take in only 10 to 15 grams a day. There is a reason fiber receives so much emphasis. Fiber in food delays the absorption of carbohydrates, so instead of sugar rushing into the blood all at once, it enters slowly, at a moderate pace, over a long period. That means a sustained supply of energy, an arrangement that is also very useful for a diabetic. Among local foods, yardlong beans, drumstick (moringa) and okra work quite well for fiber. You can also eat fruits such as apples, wax apples (jamrul) and guavas. The wax apple is a favorite of many and carries plenty of both water and fiber; in fierce heat, as a local fruit, the wax apple has a charm all its own.
Eat less, eat often
Current advice is to eat small amounts frequently rather than a lot at once. Digesting one large meal demands a great deal of the body's energy at once, which is why we feel tired after a feast. Keep the three main meals moderate and take two light bites in between. That is what snacks are; have those snacks.

Drink plenty of water

Drink lots of water. Water keeps the blood fluid and the body's parts fresh. Short of water, the blood thickens; to circulate thick blood around the body our sentimental heart has to labor harder, it grows weak, and we grow tired. Build the habit of urinating once every 2 to 4 hours, drink water accordingly, and the urine should be clear or light yellow.
The brain needs omega-3 fatty acids
Researchers have found a role for omega-3 fatty acids in keeping the brain lively. Our sea fish, such as hilsa, are rich in this fatty acid. It helps the body convert carbohydrates into glycogen instead of solid fat, and glycogen is the main store of the body's energy, not harmful the way ordinary fat is.

Tea or coffee

Nothing matches tea or coffee for warding off fatigue. For many people, one or two cups of coffee carry the whole day. A viva exam tomorrow, plenty left to study, a whole night to stay awake: in such a situation nothing beats a hot mug of coffee. But drinking these beverages before bed will ruin your sleep, so drink with measure.
Breathe deeply
A common habit of ours is shallow breathing from the chest. Breathing so that the belly expands means extra oxygen entering the lungs, which helps the body burn its fuel better. As a result the brain perks up and the body gains extra energy.

Nothing like a bath

A bath or a shower in cold water is very helpful for staying fresh. It cheers the mind and washes away the body's grime and the sleep in your eyes. When very tired, you can splash cold water on your face.
Express your feelings; don't bottle them up
We try to show our maturity by suppressing every grief, sorrow and pain inside, and in effect this brings on a feeling of fatigue.
Discussing your feelings or problems with someone lets fresh air blow through the window of your mind, and you become refreshed mentally.

Walk or exercise

If you feel very tired, take a short walk, or do some exercise.
Researchers have seen that exercise plays a good part in dispelling fatigue. Exercise increases blood circulation, raises the heart rate, brings in extra oxygen with faster breathing, and the quicker circulation gives the body more fuel. In this way the body gets "boosted up".

Listen to music, soothing melodies

The sweet strains of music bring the mind peace, a moment of joy. Music works very well for relief from anxiety.
Fatigue will come and weakness will remain; we are not machines. Still, through the proper, scientific use of energy we can draw on the body's optimum capacity as far as possible. Finally, I will close today's discussion with lines from the great poet Rabindranath Tagore:

Forgive me my weariness, O Lord,
should I ever fall behind on the way.


Dr. Raihan Kabir Khan, MBBS (DMC)
Medical Officer, OGSB
01670764224
www.banglanews.24.com

399
Bidrohi (The Rebel) - Kazi Nazrul Islam
Kazi Nazrul Islam
Proclaim, O hero -
proclaim: my head is held high!
Beholding my head, the Himalayan peak bows its own!
Proclaim, O hero -
proclaim: tearing through the great sky of the universe,
leaving behind moon, sun, planets and stars,
piercing earth and heaven and the spheres,
cleaving through "Arsh", the throne of God,
I have risen - I, the eternal wonder of the world's Creator!
On my brow blazes Rudra the god, the royal mark of victory's radiant glory!
Proclaim, O hero -
my head is forever held high!
I am forever indomitable, insolent and cruel,
I am the Nataraja of the great cataclysm, I am the cyclone, I am destruction,
I am the great terror, I am the curse of the earth!
I am unstoppable,
I smash everything to pieces!
I am lawless and anarchic,
I trample underfoot all bonds, all rules, laws and chains!
I obey no law,
I sink laden ships, I am the torpedo, I am the dreadful
floating mine!
I am Dhurjati, I am the sudden storm of untimely Baishakh with streaming hair!
I am the rebel, the rebel son of the world's Creator!
Proclaim, O hero -
forever high is my head!
I am the tempest, I am the whirlwind,
I crush whatever I find in my path!
I am the dance-mad rhythm,
I dance to my own beat, I am the unfettered joy of life!
I am Hambir, I am Chhayanat, I am Hindol,
I am ever-restless, skipping and swaying,
flashing in a blink as I go along the road,
I give three swings with a flick!
I am the restless, fickle Hindol!
I do, brother, whatever this heart desires, whenever it desires,
I embrace my enemy, I wrestle with death,
I am unbridled, I am the tempest!
I am the pestilence, I am the dread of this earth.
I am the terror of tyrants, I am annihilation, burning and forever restless.
Proclaim, O hero -
my head is forever held high!
I am forever untamable and frenzied,
I am irrepressible; the cup of my life is ever, ever
brimful of wine.
I am the sacrificial flame, I am the fire-priest, I am Jamadagni,
I am the sacrifice, I am the priest, I am the fire!
I am creation, I am destruction, I am the dwelling-place, I am the cremation ground,
I am the ending, the end of night.
I am the son of Indrani, the moon in my hand, the sun upon my brow,
in one hand the curved bamboo flute, in the other the trumpet of war.
I am the dark-throated one, who drank the churned poison of the sea of pain.
I am Byomkesh, I hold the unbound stream of the Gangotri.
Proclaim, O hero -
forever high is my head.
I am the ascetic, the soldier of song,
I am the crown prince, my royal robe the pale ochre of the hermit!
I am the Bedouin, I am Chengis,
I salute none but myself!
I am the thunderbolt, I am the Omkara sounding in Ishan's horn,
I am the mighty blast of Israfil's trumpet,
I am the damru and trident of Pinaka-wielding Shiva, the staff of the Lord of Justice,
I am the discus and the great conch, I am the furious primal sound!
I am the mad disciple of Durvasa and Vishwamitra,
I am the blaze of the forest fire; I shall burn the universe!
I am open-hearted laughter and exultation - I am the great terror, foe of creation,
I am the Rahu that eclipses the twelve suns of the great apocalypse!
Now I am serene - now restless, terribly self-willed,
I am the youth of dawn-red blood, I am the humbler of Fate's pride!
I am the surge of the gale, I am the great roar of the ocean,
I am radiant, I am ablaze,
I am the rippling splash of water, the rocking swing of the dancing wave!
I am the unbound braid of the maiden, the fire in her slender eyes,
I am the wild love in the lotus-heart of the girl of sixteen; blessed am I!
I am the distracted mind of the uncaring,
I am the sobbing breath in the Creator's breast, I am the sigh of the despairing!
I am the cheated pain of every homeless wayfarer on the road,
I am the heart-ache of the humiliated, the burn of poison, the returning
pulse in a breast spurned by its beloved!
I am the anguish of the proud, ever-wounded heart, pain deep and close,
I am the trembling first touch of the maiden, the quiver of a stolen kiss!
I am the startled glance of the secret beloved, her constant stolen looks,
I am the flighty girl's love, the jingle of her bangles and bracelets!
I am the eternal child, the eternal adolescent,
I am the village girl's bodice-cloth, shy of her own dawning youth!
I am the north wind, the Malay breeze, the listless east wind,
I am the wandering poet's deep melody, the song sung on flute and veena!
I am the desperate thirst of high summer, I am the blazing sun,
I am the murmuring desert spring, I am a green picture of shade! -
I rush on in transcendent ecstasy - what madness! I am mad!
I have suddenly known myself; all my barriers
have burst open!
I am the rising, I am the fall, I am consciousness in the unconscious mind,
I am the victory banner at the gateway of the world, the flag of human triumph!
Clapping like a storm I race on,
heaven and earth in the palm of my hand;
the steeds Borrak and Uchchaihshrava are my mounts,
charging on with neighs of courage!
I am the volcano in the breast of the earth, the undersea fire, the flame of doomsday,
I am the drunken roar and clamor of fire and rock in the netherworld!
Riding the lightning I fly with a loud snap of my fingers, with a leap,
I spread sudden terror through the world, coursing through it as the earthquake!
I seize the hood of Vasuki in my grip -
I grasp the fiery wings of the heavenly messenger Gabriel!
I am the child of the gods, I am restless,
I am impudent; with my teeth I tear the hem of the World-Mother's sari!
I am the flute of Orpheus;
the restless great ocean
I lull with a kiss of sleep, hushing the whole universe into silence,
entranced by the tune of my flute -
I am the flute in the hands of Shyam.
When in rage I rush on, overflowing the great sky,
the seven hells and Dozakh tremble in fear and flicker out!
I am the bearer of rebellion pervading all the worlds!
I am the deluge, the flood:
now I make the earth blessed, now rich with vast destruction -
I shall snatch the twin maidens from the breast of Vishnu!
I am injustice, I am the meteor, I am Saturn,
I am the comet's blaze, the venomous king cobra of doom!
I am the headless Chandi, I am the war-giving destroyer of all;
seated amid the fires of hell, I smile the smile of a flower!
I am made of clay, I am made of spirit,
I am unageing, undying, imperishable, inexhaustible!
I am the terror of men, demons and gods,
I am the ever-unconquerable of this universe,
I am the God of gods, supreme truth above the Lord of the world,
dancing ta-thaiya, ta-thaiya I churn my way through heaven, the netherworld and earth!
I am mad, I am mad!!
I have known myself; today all my barriers
have burst open!!
I am the merciless axe of Parashurama,
I shall rid the world of warriors and bring a serene, generous peace!
I am the plough on Balarama's shoulder,
I shall uproot this subjugated world with ease, in the great joy of a new creation.
Weary of battle, I, the great rebel,
shall rest quiet only on that day
when the anguished cry of the oppressed no longer sounds in the sky and the air,
when the tyrant's sword and scimitar no longer ring on the dreadful field of battle -
weary of battle, the rebel,
I, only on that day, shall rest quiet!
I am the rebel Bhrigu, I stamp my footprint on the breast of God,
I am the slayer of the Creator, I shall tear open the breast of whimsical Fate who deals grief and pain!
I am the rebel Bhrigu, I shall stamp my footprint on the breast of God!
I shall tear open the breast of whimsical Fate!
I am the eternal rebel hero -
I have risen beyond this universe, alone, my head forever held high!

400
MCT / Freedom means to follow Hazrat Mohammad (S)
« on: July 10, 2013, 04:09:16 PM »
Deliverance lies in following the Prophet of the world (pbuh)


Among the countless glad tidings given to the people of the world, Almighty Allah has guaranteed that the following supplication (dua) will be accepted.

The moment the dua is recited it is accepted, and the angels at once carry it to the Prophet (pbuh) together with the reciter's name, father's name, mother's name and grandfather's name.

The dua is:
Arabic transliteration: Allahumma salli 'ala Muhammadin wa 'ala ali Muhammadin kama sallaita 'ala Ibrahima wa 'ala ali Ibrahima innaka Hamidum Majid. Allahumma barik 'ala Muhammadin wa 'ala ali Muhammadin kama barakta 'ala Ibrahima wa 'ala ali Ibrahima innaka Hamidum Majid.

In the light of the authentic hadith, this Durood-e-Ibrahim is the correct durood, and so it is this durood that we recite in salat (prayer).

About the greatest man of all time, the prophet of prophets, the messenger of messengers, the final prophet Hazrat Muhammad Mustafa (pbuh), Allah Himself has said that he was created with the most excellent character. Allah Himself sends blessings upon the Prophet (pbuh), and He has commanded us to send durood upon him. Allah created the Prophet (pbuh) as a mercy for all creation. After him the door of prophethood has been sealed until the Day of Judgment, because in the Prophet (pbuh) Allah provided all the guidance the people of the world will need until the Last Day, and upon him He sent down the great wonder, the Quran.

The Prophet's (pbuh) prophetic life, beginning at the age of forty, was the thirty parts of the Quran in motion. The holy Quran is the word of Allah's own divine speech. Just as sandy soil has been created with the quality of soaking up water easily, so the human breast, the heart, has been created so that it can easily hold the miraculous Quran within it and keep it memorized.

Allah has said, "It is I who have sent down the Quran, and its protection is My responsibility." Have we, being human, ever once paused to think how Allah is preserving the holy Quran within human breasts?

Illuminated by the light of the Quran that moves ever from breast to breast, guide of this world and helmsman of the hereafter, most beloved of Almighty Allah, for whose pleasure Allah would do anything - the birth and passing anniversary (the 12th of Rabiul Awwal) of that great man, the Prophet of the world Hazrat Muhammad Mustafa (pbuh), is Eid-e-Miladunnabi (pbuh). For the people of the world this day is one of great joy and, at the same time, of great sorrow.
For a Muslim there can be no happier day than the Prophet's (pbuh) arrival in the world, that is, his birth.

The most widely held account is that Hazrat Muhammad (pbuh) was born on Monday the 12th of Rabiul Awwal in 570 AD, at a memorable moment, by the decree of divine mercy. After his birth his mother sent word of the grandson's birth to his grandfather Abdul Muttalib. Overjoyed at the news, he came to the house, carried the child to the Kaaba, and offered prayer and thanksgiving at the court of Allah. He then named him Muhammad, a name not then familiar in Arabia. Afterwards, according to Arab custom, he had him circumcised on the seventh day.
After his mother, Abu Lahab's maidservant Suwaiba was the first to nurse him. At the time, the infant in Suwaiba's own lap was named Masruh. Before nursing the Messenger (pbuh), Suwaiba had nursed Hamza ibn Abdul Muttalib, and afterwards she also nursed Abu Salama ibn Abdul Asad Makhzumi.

Later, according to the Arab custom of the day, Mother Halima was chosen as wet-nurse of this great man and so became blessed by fortune. Even on the road home with the infant Prophet (pbuh) in her lap, Allah's countless mercies began to descend upon Halima.

Almighty Allah had nearly every prophet tend goats; the wisdom of this Allah alone knows. He had the Prophet (pbuh) tend goats as well. We all know that among domestic animals goats are the most unruly, and rearing them is no easy task. With that hard work the Prophet's (pbuh) working life began. From childhood the people of Arabia called him Al-Amin, a word meaning trustworthy. That he whom Allah created as a mercy for all creation should be Al-Amin is only natural.

Before prophethood:
Even after weaning, the child Muhammad remained with the Banu Sa'd tribe. When he was four or five years old the incident of the "opening of the chest" took place, as narrated from Hazrat Anas (ra) in Sahih Muslim. He relates: Hazrat Jibrail (as) came to the Messenger of Allah (pbuh) while he was playing with the other children. Jibrail (as) laid him down, opened his chest, took out his heart, removed from it a clot of blood and said: this was Satan's portion in you. Then he placed the heart in a basin, washed it with Zamzam water and restored it to its place. Meanwhile the other children ran to his nurse-mother Halima and said: Muhammad has been killed. At this the household rushed out and found him sitting there, his face drained of color.

After the incident, Mother Halima, out of fear, returned the child Prophet (pbuh) to his mother, with whom he stayed until the age of six. To visit the grave of the Prophet's (pbuh) father Abdullah, Mother Amina journeyed 500 miles with the child Muhammad (pbuh). According to some narrations, the Prophet's (pbuh) mother died on the return journey.

After Mother Amina's death at Abwa, the aged Abdul Muttalib took his grandson to Mecca, where he raised the child Muhammad (pbuh) with the utmost care and love. From this period, many know the story of the prayer for the rain of mercy granted through the blessing of the child Muhammad's face.

It is narrated in Sahih Bukhari from Hazrat Jabir ibn Abdullah (ra) that when the Kaaba was being built, the Prophet (pbuh) carried stones. The man most honored before Allah in the heavens and on earth showed by his own labor what the toil of work is, and how its wages are to be paid.

The Prophet (pbuh) was distinguished by his praiseworthy deeds, his exalted character and his compassionate nature. He had more human courtesy than anyone, the finest character, was an honored neighbor and the most far-sighted of rulers, the most truthful, the most tender-hearted, and the possessor of the purest and cleanest of minds. In good deeds and good words he was foremost and beyond compare; even his own people named him Al-Amin. In him every praiseworthy virtue was combined. His wife Hazrat Khadija (ra) bore witness that he carried the burdens of the distressed, extended a helping hand to the sorrowful and the poor, showed hospitality to guests, and helped in establishing truth and justice.

The Prophet's (pbuh) life may be divided into two parts: the Meccan life and the Medinan life. He gave the call (dawah) of Islam at first in secret, and later openly. He began the work of dawah within his own family. Preaching openly, he suffered the persecution of the disbelievers; the Prophet's (pbuh) blessed body was drenched in blood. The angels of heaven waited for the moment Allah's beloved (pbuh) would order their destruction. Instead he said: they did not understand, and that is why they struck me. Take no punitive measure against them; if they are destroyed, to whom shall I carry this call of the faith?

Almighty Allah Himself remained concerned with how to keep the Prophet (pbuh) pleased.

It is narrated that Mother Ayesha (ra) once asked the Prophet (pbuh): O Messenger of Allah (pbuh), Allah has forgiven all your sins, past and future; why then do you stand in worship night after night until your feet swell?

The Prophet (pbuh) replied: "O Ayesha, shall I not give thanks to the Allah who has done me so great a kindness?"

All the commandments of Islam came through revelation, with the single exception of salat. On the night of the holy Miraj, Hazrat Jibrail (as) ascended with the Prophet (pbuh) as far as Sidratul Muntaha, left him there and said: beyond this I have no leave to go; were I to try, my six hundred wings of light would burn to ashes. From there Almighty Allah carried the Prophet (pbuh) to the Arsh-e-Azim upon the conveyance called Rafraf. In that intimate audience Allah gave our beloved Prophet (pbuh) the gift of the five daily prayers. Yet many of us Muslims do not perform the very five prayers the Prophet (pbuh) brought back to earth from the Miraj, although this prayer is, for us, the equivalent of Miraj.

The Prophet (pbuh) beautified society and state through the generous exchange of salam. If we too, beginning in our own homes, exchange salam freely in society, our society will surely grow beautiful. In his final days the Prophet (pbuh) went to the mosque to pray leaning on the shoulders of two companions. Yet we who claim to be his ummah let hundreds of calls to prayer enter one ear and leave by the other without finding the opportunity to pray in the mosque. But has anyone in this world escaped the hand of death? Death will come for you and for me. That is why the Prophet (pbuh) told us to remember death often, so that we may become God-fearing.

The Prophet (pbuh) commanded us to keep away from the sin of shirk (associating partners with Allah) in all circumstances. It has even been said: if you are cut to pieces and burned in fire, and the burnt ash is scattered from the mountaintop, even then, beware, you must not commit shirk against Allah, for the sin of shirk will never be forgiven without specific repentance.

We have been warned of shirk's insidious approach in these words: "As a black ant climbs a black stone on a dark night past one who sits there dressed in black, just so will shirk steal into you. Therefore beware; keep yourself safe from shirk in every circumstance."

Yet we say all the time, "Without this job I could not survive." To say such a thing is 100% shirk. The sole giver of sustenance is Allah; to live or not to live rests in Allah's hands alone.

In every sphere of life the Prophet (pbuh) set an example. Our beloved Prophet (pbuh), as head of state, showed how a state of peace can be established. The two greatest treasures he left behind are, first, Allah's book the Quran and, second, his Sunnah, and he gave the assurance that whoever holds fast to these two will never go astray.

Our beloved Prophet (pbuh) now lies at rest in Medina; blessed is the soil of Medina. On hearing the news of the Prophet's (pbuh) passing, Hazrat Omar (ra) was beside himself. He stood and declared: some hypocrites suppose the Messenger (pbuh) has died, but in truth he has not. He has gone to his Lord just as Hazrat Musa ibn Imran (as) went, who returned to his people after an absence of forty days, though before his return they were saying Musa (as) had died. By Allah, the Messenger (pbuh) will return, and he will cut off the hands and feet of those who claim he is dead. What a lover of the Prophet Hazrat Omar (ra) was!

The Prophet's (pbuh) blessed body lay covered with a Yemeni sheet. Hearing the news, Hazrat Abu Bakr Siddiq (ra) entered Mother Ayesha's (ra) room, kissed the Prophet's (pbuh) blessed forehead and wept. Then he said: may my father and mother be sacrificed for you; Allah will not bring two deaths upon you. The death that was written for you has come to pass. He then went outside and said to Hazrat Omar (ra): Omar, sit down. Addressing the assembled people he said: whoever among you worshipped the Messenger (pbuh), let him know that the Messenger has passed away; and whoever among you worshipped Allah, let him know that Allah is Ever-Living and will never die. In the holy Quran Allah, Lord of the worlds, has said: "Muhammad is but a messenger; messengers have passed away before him. So if he dies or is killed, will you turn back on your heels? And whoever turns back on his heels will not harm Allah in the least, and Allah will soon reward the grateful." (Surah Al-Imran, verse 144)

Hearing Abu Bakr Siddiq's (ra) words, the companions, restless with grief, became certain that the Messenger (pbuh) had truly passed away. Ibn Abbas (ra) said: by Allah, it was as if no one had known that Allah had revealed this verse in the holy Quran. After Hazrat Abu Bakr's (ra) recitation everyone memorized the verse, and it was then on everyone's lips.

Come, let us send durood upon the Prophet of the world (pbuh) countless times, on this holy day certainly and indeed every day. May Almighty Allah shower millions upon millions of mercies on the Prophet (pbuh) and his family. (Amin)
banglanews24.com

401
MCT / The Term Design
« on: July 10, 2013, 02:13:01 PM »
Design
All Saints Chapel in the Cathedral Basilica of St. Louis by Louis Comfort Tiffany. The building's structure and decoration are both examples of design.
Design, when applied to fashion, includes considering aesthetics as well as function in the final form.
Design is the creation of a plan or convention for the construction of an object or a system (as in architectural blueprints, engineering drawing, business process, circuit diagrams and sewing patterns).[1] Design has different connotations in different fields (see design disciplines below). In some cases the direct construction of an object (as in pottery, engineering, management, cowboy coding and graphic design) is also considered to be design.
More formally, design has been defined as follows.

(noun) a specification of an object, manifested by an agent, intended to accomplish goals, in a particular environment, using a set of primitive components, satisfying a set of requirements, subject to constraints;
(verb, transitive) to create a design, in an environment (where the designer operates)[2]
Another definition for design is a roadmap or a strategic approach for someone to achieve a unique expectation. It defines the specifications, plans, parameters, costs, activities, processes and how and what to do within legal, political, social, environmental, safety and economic constraints in achieving that objective.[3]

Here, a "specification" can be manifested as either a plan or a finished product, and "primitives" are the elements from which the design object is composed.

With such a broad denotation, there is no universal language or unifying institution for designers of all disciplines. This allows for many differing philosophies and approaches toward the subject (see Philosophies and studies of design, below).
The person designing is called a designer, which is also a term used for people who work professionally in one of the various design areas, usually also specifying which area is being dealt with (such as a fashion designer, concept designer or web designer). A designer's sequence of activities is called a design process. The scientific study of design is called design science.[4][5][6]
Designing often necessitates considering the aesthetic, functional, economic and sociopolitical dimensions of both the design object and design process. It may involve considerable research, thought, modeling, interactive adjustment, and re-design.[7] Meanwhile, diverse kinds of objects may be designed, including clothing, graphical user interfaces, skyscrapers, corporate identities, business processes and even methods of designing.[8]


Design as a process
Substantial disagreement exists concerning how designers in many fields, whether amateur or professional, alone or in teams, produce designs. Dorst and Dijkhuis argued that "there are many ways of describing design processes" and discussed "two basic and fundamentally different ways",[9] both of which have several names. The prevailing view has been called "The Rational Model",[10] "Technical Problem Solving"[11] and "The Reason-Centric Perspective".[12] The alternative view has been called "Reflection-in-Action",[11] "co-evolution"[13] and "The Action-Centric Perspective".[12]
The Rational Model

The Rational Model was independently developed by Simon[14] and Pahl and Beitz.[15] It posits that:
1.   designers attempt to optimize a design candidate for known constraints and objectives,
2.   the design process is plan-driven,
3.   the design process is understood in terms of a discrete sequence of stages.

The Rational Model is based on a rationalist philosophy[10] and underlies the Waterfall Model,[16] Systems Development Life Cycle[17] and much of the engineering design literature.[18] According to the rationalist philosophy, design is informed by research and knowledge in a predictable and controlled manner. Technical rationality is at the center of the process.[7]
Example sequence of stages
Typical stages consistent with The Rational Model include the following.
•   Pre-production design
o   Design brief or Parti pris – an early (often the beginning) statement of design goals
o   Analysis – analysis of current design goals
o   Research – investigating similar design solutions in the field or related topics
o   Specification – specifying requirements of a design solution for a product (product design specification)[19] or service.
o   Problem solving – conceptualizing and documenting design solutions
o   Presentation – presenting design solutions
•   Design during production
o   Development – continuation and improvement of a designed solution
o   Testing – in situ testing a designed solution
•   Post-production design feedback for future designs
o   Implementation – introducing the designed solution into the environment
o   Evaluation and conclusion – summary of process and results, including constructive criticism and suggestions for future improvements
•   Redesign – any or all stages in the design process repeated (with corrections made) at any time before, during, or after production.
Each stage has many associated best practices.[20]

Criticism of The Rational Model
The Rational Model has been widely criticized on two primary grounds:
1.   Designers do not work this way – extensive empirical evidence has demonstrated that designers do not act as the rational model suggests.[21]
2.   Unrealistic assumptions – goals are often unknown when a design project begins, and the requirements and constraints continue to change.[22]
The Action-Centric Model
The Action-Centric Perspective is a label given to a collection of interrelated concepts, which are antithetical to The Rational Model.[12] It posits that:
1.   designers use creativity and emotion to generate design candidates,
2.   the design process is improvised,
3.   no universal sequence of stages is apparent – analysis, design and implementation are concurrent and inextricably linked.[12]

The Action-Centric Perspective is based on an empiricist philosophy and broadly consistent with the Agile approach[23] and amethodical development.[24] Substantial empirical evidence supports the veracity of this perspective in describing the actions of real designers.[21] Like the Rational Model, the Action-Centric model sees design as informed by research and knowledge. However, research and knowledge are brought into the design process through the judgment and common sense of designers – by designers "thinking on their feet" – more than through the predictable and controlled process stipulated by the Rational Model. Designers' context-dependent experience and professional judgment take center stage more than technical rationality.[7]
Descriptions of design activities

At least two views of design activity are consistent with the Action-Centric Perspective. Both involve three basic activities.
In the Reflection-in-Action paradigm, designers alternate between "framing", "making moves", and "evaluating moves". "Framing" refers to conceptualizing the problem, i.e., defining goals and objectives. A "move" is a tentative design decision. The evaluation process may lead to further moves in the design.[11]
In the Sensemaking-Coevolution-Implementation Framework, designers alternate between its three titular activities. Sensemaking includes both framing and evaluating moves. Implementation is the process of constructing the design object. Coevolution is "the process where the design agent simultaneously refines its mental picture of the design object based on its mental picture of the context, and vice versa."[25]
Criticism of the Action-Centric Perspective

As this perspective is relatively new, it has not yet encountered much criticism. One possible criticism is that it is less intuitive than The Rational Model.
Design disciplines
•   Applied arts
•   Architecture
•   Communication design
•   Engineering design
•   Fashion design
•   Game design
•   Graphic design
•   Information Architecture
•   Industrial design
•   Instructional design
•   Interaction design
•   Interior design
•   Landscape architecture
•   Lighting design
•   Military Design Methodology[26]
•   Product design
•   Process design
•   Service design
•   Software design
•   Web design
•   Urban design
•   Visual design

Philosophies and studies of design
There are countless philosophies for guiding design, as design values and their accompanying aspects within modern design vary both between different schools of thought and among practicing designers.[27] Design philosophies are usually used to determine design goals. A design goal may range from solving the least significant individual problem of the smallest element to the most holistic, influential utopian goals. Design goals are usually used to guide design. However, conflicts over immediate and minor goals may lead to questioning the purpose of design, perhaps in order to set better long-term or ultimate goals.
Philosophies for guiding design

Design philosophies are fundamental guiding principles that dictate how a designer approaches his/her practice. Reflections on material culture and environmental concerns (Sustainable design) can guide a design philosophy. One example is the First Things First manifesto which was launched within the graphic design community and states "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication – a mindshift away from product marketing and toward the exploration and production of a new kind of meaning. The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design."[28]
In The Sciences of the Artificial by polymath Herbert A. Simon the author asserts design to be a meta-discipline of all professions.

"Engineers are not the only professional designers. Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. The intellectual activity that produces material artifacts is no different fundamentally from the one that prescribes remedies for a sick patient or the one that devises a new sales plan for a company or a social welfare policy for a state. Design, so construed, is the core of all professional training; it is the principal mark that distinguishes the professions from the sciences. Schools of engineering, as well as schools of architecture, business, education, law, and medicine, are all centrally concerned with the process of design."[29]

Approaches to design

A design approach is a general philosophy that may or may not include a guide for specific methods. Some guide the overall goal of the design; others guide the tendencies of the designer. A combination of approaches may be used if they don't conflict.

Some popular approaches include:
•   KISS principle, (Keep it Simple Stupid), which strives to eliminate unnecessary complications.
•   There is more than one way to do it (TIMTOWTDI), a philosophy to allow multiple methods of doing the same thing.
•   Use-centered design, which focuses on the goals and tasks associated with the use of the artifact, rather than focusing on the end user.
•   User-centered design, which focuses on the needs, wants, and limitations of the end user of the designed artifact.
•   Critical design uses designed artifacts as an embodied critique or commentary on existing values, morals, and practices in a culture.
•   Service design, designing or organizing the experience around a product and the service associated with a product's use.
•   Transgenerational design, the practice of making products and environments compatible with those physical and sensory impairments associated with human aging and which limit major activities of daily living.
•   Speculative design, a process that doesn't necessarily define a specific problem to solve but establishes a provocative starting point from which a design process emerges. The result is an evolution of fluctuating iteration and reflection using designed objects to provoke questions and stimulate discussion in academic and research settings.
Methods of designing

Main article: Design methods
Design Methods is a broad area that focuses on:
•   Exploring possibilities and constraints by focusing critical thinking skills to research and define problem spaces for existing products or services—or the creation of new categories; (see also Brainstorming)
•   Redefining the specifications of design solutions which can lead to better guidelines for traditional design activities (graphic, industrial, architectural, etc.);

•   Managing the process of exploring, defining, creating artifacts continually over time
•   Prototyping possible scenarios, or solutions that incrementally or significantly improve the inherited situation
•   Trendspotting; understanding the trend process.
Terminology

The word "design" is often considered ambiguous, as it is applied differently in varying contexts.
The new terminal at Barajas airport in Madrid, Spain
Design and art

Today the term design is widely associated with the Applied arts as initiated by Raymond Loewy and the teachings at the Bauhaus and Ulm School of Design (HfG Ulm) in Germany during the 20th century.
The boundaries between art and design are blurred, largely due to a range of applications both for the term 'art' and the term 'design'. Applied arts has been used as an umbrella term to define fields of industrial design, graphic design, fashion design, etc. The term 'decorative arts' is a traditional term used in historical discourses to describe craft objects, and also sits within the umbrella of Applied arts. In graphic arts (2D image making that ranges from photography to illustration) the distinction is often made between fine art and commercial art, based on the context within which the work is produced and how it is traded.

To a degree, some methods for creating work, such as employing intuition, are shared across the disciplines within the Applied arts and Fine art. Mark Getlein suggests the principles of design are "almost instinctive", "built-in", "natural", and part of "our sense of 'rightness'."[30] However, the intended application and context of the resulting works will vary greatly.
A drawing for a booster engine for steam locomotives. Engineering is applied to design, with emphasis on function and the utilization of mathematics and science.
Design and engineering

In engineering, design is a component of the engineering process. Many overlapping methods and processes can be seen when comparing Product design, Industrial design and Engineering. The American Heritage Dictionary defines design as: "To conceive or fashion in the mind; invent," and "To formulate a plan", and defines engineering as: "The application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems."[31][32] Both are forms of problem-solving, with a defining distinction being the application of "scientific and mathematical principles". The increasingly scientific focus of engineering in practice, however, has raised the importance of new, more "human-centered" fields of design.[33] How much science is applied in a design is a question of what is considered "science"; along with that question, there is social science versus natural science. Scientists at Xerox PARC drew the distinction between design and engineering as "moving minds" versus "moving atoms".
Jonathan Ive has received several awards for his design of Apple Inc. products like this MacBook. In some design fields, personal computers are also used for both design and production
Design and production

The relationship between design and production is one of planning and executing. In theory, the plan should anticipate and compensate for potential problems in the execution process. Design involves problem-solving and creativity. In contrast, production involves a routine or pre-planned process. A design may also be a mere plan that does not include a production or engineering process, although a working knowledge of such processes is usually expected of designers. In some cases, it may be unnecessary and/or impractical to expect a designer with a broad multidisciplinary knowledge required for such designs to also have a detailed specialized knowledge of how to produce the product.
Design and production are intertwined in many creative professional careers, meaning problem-solving is part of execution and the reverse. As the cost of rearrangement increases, the need for separating design from production increases as well. For example, a high-budget project, such as a skyscraper, requires separating (design) architecture from (production) construction. A low-budget project, such as a locally printed office party invitation flyer, can be rearranged and printed dozens of times at the low cost of a few sheets of paper, a few drops of ink, and less than one hour's pay of a desktop publisher.
This is not to say that production never involves problem-solving or creativity, nor that design always involves creativity. Designs are rarely perfect and are sometimes repetitive. The imperfection of a design may task a production position (e.g. production artist, construction worker) with utilizing creativity or problem-solving skills to compensate for what was overlooked in the design process. Likewise, a design may be a simple repetition (copy) of a known preexisting solution, requiring minimal, if any, creativity or problem-solving skills from the designer.
An example of a business workflow process using Business Process Modeling Notation.
Process design
"Process design" (in contrast to "design process" mentioned above) refers to the planning of routine steps of a process aside from the expected result. Processes (in general) are treated as a product of design, not the method of design. The term originated with the industrial designing of chemical processes. With the increasing complexities of the information age, consultants and executives have found the term useful to describe the design of business processes as well as manufacturing processes.

Footnotes
1.   ^ Dictionary meanings in the Cambridge Dictionary of American English, at Dictionary.com (esp. meanings 1–5 and 7–8) and at AskOxford (esp. verbs).
2.   ^ Ralph, P. and Wand, Y. (2009). A proposal for a formal definition of the design concept. In Lyytinen, K., Loucopoulos, P., Mylopoulos, J., and Robinson, W., editors, Design Requirements Workshop (LNBIP 14), pp. 103–136. Springer-Verlag, p. 109 doi:10.1007/978-3-540-92966-6_6.
3.   ^ Don Kumaragamage, Y. (2011). Design Manual Vol 1
4.   ^ Simon (1996)
5.   ^ Alexander, C. (1964) Notes on the Synthesis of Form, Harvard University Press.
6.   ^ Eekels, J. (2000). "On the Fundamentals of Engineering Design Science: The Geography of Engineering Design Science, Part 1". Journal of Engineering Design 11 (4): 377–397. doi:10.1080/09544820010000962.
7.   ^ a b c Inge Mette Kirkeby (2011). "Transferable Knowledge". Architectural Research Quarterly 15 (1): 9–14.
8.   ^ Brinkkemper, S. (1996). "Method engineering: engineering of information systems development methods and tools". Information and Software Technology 38 (4): 275–280. doi:10.1016/0950-5849(95)01059-9.
9.   ^ Dorst and Dijkhuis 1995, p. 261
10.   ^ a b Brooks 2010
11.   ^ a b c Schön 1983
12.   ^ a b c d Ralph 2010
13.   ^ Dorst and Cross 2001
14.   ^ Newell and Simon 1972; Simon 1969
15.   ^ Pahl and Beitz 1996
16.   ^ Royce 1970
17.   ^ Bourque and Dupuis 2004
18.   ^ Pahl et al. 2007
19.   ^ Cross, N., 2006. T211 Design and Designing: Block 2, p. 99. Milton Keynes: The Open University.
20.   ^ Ullman, David G. (2009) The Mechanical Design Process, Mc Graw Hill, 4th edition ISBN 0-07-297574-1
21.   ^ a b Cross et al. 1992; Ralph 2010; Schön 1983
22.   ^ Brooks 2010; McCracken and Jackson 1982
23.   ^ Beck et al. 2001
24.   ^ Truex et al. 2000
25.   ^ Ralph 2010, p. 67
26.   ^ Headquarters, Department of the Army (May 2012). ADRP 5-0: The Operations Process. Washington D.C.: United States Army. pp. 2–4 to 2–11.
27.   ^ Holm, Ivar (2006). Ideas and Beliefs in Architecture and Industrial design: How attitudes, orientations and underlying assumptions shape the built environment. Oslo School of Architecture and Design. ISBN 82-547-0174-1.
28.   ^ First Things First 2000 a design manifesto. manifesto published jointly by 33 signatories in: Adbusters, the AIGA journal, Blueprint, Emigre, Eye, Form, Items fall 1999/spring 2000
29.   ^ Simon (1996), p. 111.
30.   ^ Mark Getlein, Living With Art, 8th ed. (New York: 2008) 121.
31.   ^ American Psychological Association (APA): design. The American Heritage Dictionary of the English Language, Fourth Edition. Retrieved January 10, 2007
32.   ^ American Psychological Association (APA): engineering. The American Heritage Dictionary of the English Language, Fourth Edition. Retrieved January 10, 2007
33.   ^ Faste 2001
Bibliography
•   Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R.C., Mellor, S., Schwaber, K., Sutherland, J., and Thomas, D. Manifesto for agile software development, 2001.
•   Bourque, P., and Dupuis, R. (eds.) Guide to the software engineering body of knowledge (SWEBOK). IEEE Computer Society Press, 2004 ISBN 0-7695-2330-7.
•   Brooks, F.P. The design of design: Essays from a computer scientist, Addison-Wesley Professional, 2010 ISBN 0-201-36298-8.
•   Cross, N., Dorst, K., and Roozenburg, N. Research in design thinking, Delft University Press, Delft, 1992 ISBN 90-6275-796-0.
•   Dorst, K., and Cross, N. (2001). "Creativity in the design process: Co-evolution of problem-solution". Design Studies 22 (2): 425–437. doi:10.1016/0142-694X(94)00012-3.
•   Dorst, K., and Dijkhuis, J. "Comparing paradigms for describing design activity," Design Studies (16:2) 1995, pp 261–274.
•   Faste, R. (2001). "The Human Challenge in Engineering Design". International Journal of Engineering Education 17 (4–5): 327–331.
•   McCracken, D.D., and Jackson, M.A. (1982). "Life cycle concept considered harmful". SIGSOFT Software Engineering Notes 7 (2): 29–32. doi:10.1145/1005937.1005943.
•   Newell, A., and Simon, H. Human problem solving, Prentice-Hall, Inc., 1972.
•   Pahl, G., and Beitz, W. Engineering design: A systematic approach, Springer-Verlag, London, 1996 ISBN 3-540-19917-9.
•   Pahl, G., Beitz, W., Feldhusen, J., and Grote, K.-H. Engineering design: A systematic approach, (3rd ed.), Springer-Verlag, 2007 ISBN 1-84628-318-3.
•   Pirkl, James J. Transgenerational Design: Products for an Aging Population, Van Nostrand Reinhold, New York, NY, USA, 1994 ISBN 0-442-01065-6.
•   Ralph, P. "Comparing two software design process theories," International Conference on Design Science Research in Information Systems and Technology (DESRIST 2010), Springer, St. Gallen, Switzerland, 2010, pp. 139–153.
•   Royce, W.W. "Managing the development of large software systems: Concepts and techniques," Proceedings of Wescon, 1970.
•   Schön, D.A. The reflective practitioner: How professionals think in action, Basic Books, USA, 1983.
•   Simon, H.A. The sciences of the artificial, MIT Press, Cambridge, MA, USA, 1996 ISBN 0-262-69191-4.
•   Truex, D., Baskerville, R., and Travis, J. (2000). "Amethodical systems development: The deferred meaning of systems development methods". Accounting, Management and Information Technologies 10 (1): 53–79. doi:10.1016/S0959-8022
en.wikipedia.org/wiki/Design

402
MCT / Information on Design
« on: July 10, 2013, 11:18:20 AM »
Information on Design
Design as a Shared Activity

The nature of design is equally as complex as that of technology. Archer wrote that:
“Design is that area of human experience, skill and knowledge which is concerned with man’s ability to mould his environment to suit his material and spiritual needs.” 1

Design is essentially a rational, logical, sequential process intended to solve problems or, as Jones put it:
“initiate change in man-made things” 2

For the term “design process,” we can also read “problem-solving process”, which in all but its abstract forms works by consultation and consensus. The process begins with the identification and analysis of a problem or need and proceeds through a structured sequence in which information is researched and ideas explored and evaluated until the optimum solution to the problem or need is devised.

Yet design has not always been a rational process; up until the Great War, design was often a chaotic affair, in that consultation and consensus were barely evident. Design was not a total process. The work of participants in the process was often compartmentalised, each having little if any input in matters which fell outside the boundaries of their specific expertise. Thus, participants explored their ideas unilaterally, with one or another participant, by virtue of their "expertise", imposing constraints upon all others. In this way the craftsman had a veto on matters to do with skill or the availability of materials, the engineer had a veto on technological considerations, and the patron alone could impose considerations of taste and finance.

During the inter-war years the Bauhaus movement attempted to knit the design process into a coherent whole, in that students were encouraged to study design in a way that was both total and detailed. That is, designers were expected to balance all the considerations that came to bear upon the design of particular artefacts, systems and environments. In this way, though, design quickly evolved into a closed activity - an activity in which all but the designers themselves had little if any valid input to make on questions of materials, taste . . . and so on. Designers came to exist within a social bubble, consulting no-one but other designers. The result was that many designs conceived, particularly during the immediate post-Second World War period, did little to satisfy the needs of users. Such designs were exemplified by the disastrous housing policies adopted by many local authorities in the UK, which built residential tower block after residential tower block. These were essentially realisations of dreamy design concepts rather than solutions to the social, cultural and environmental needs of the local populations.

Recent years have marked a sharp reaction against the design movement, perhaps personified by Prince Charles and his crusade against architectural “carbuncles”. Likewise, individuals within society have sought to express their own tastes, their own individuality, personal style and personal self-image through what they use and purchase. Thus it is that design is not an activity solely for engineers and designers, but a shared activity between those who design artefacts, systems and environments, those who make them and those who use them.
to be continued
References:
1. Archer, B. (1973) “The Need for Design Education.” Royal College of Art
2. Jones, J.C. (1970) “Design Methods and Technology: Seeds of Human Futures”

403
MCT / Etymology of Computer Animation
« on: July 10, 2013, 10:48:41 AM »
Etymology
From Latin animātiō, "the act of bringing to life"; from animō ("to animate" or "give life to") and -ātiō ("the act of").[citation needed]
History
Main article: History of animation
 
 
A five-image sequence from a vase found in Iran.

An Egyptian burial chamber mural, approximately 4,000 years old, showing wrestlers in action. Even though this may appear similar to a series of animation drawings, there was no way of viewing the images in motion. It does, however, indicate the artist's intention of depicting motion.

Early examples of attempts to capture the phenomenon of motion drawing can be found in paleolithic cave paintings, where animals are depicted with multiple legs in superimposed positions, clearly attempting to convey the perception of motion.
A 5,000-year-old earthen bowl found at Shahr-i Sokhta in Iran has five images of a goat painted along the sides. This has been claimed to be an example of early animation. However, since no equipment existed to show the images in motion, such a series of images cannot be called animation in the true sense of the word.[1]

A Chinese zoetrope-type device had been invented in 180 AD.[2] The phenakistoscope, praxinoscope, and the common flip book were early popular animation devices invented during the 19th century.
The Voynich manuscript, which dates back to between 1404 and 1438, contains several series of illustrations of the same subject matter, and even a few circles that, when spun around the centre, would create the illusion of motion.[3]
These devices produced the appearance of movement from sequential drawings using technological means, but animation did not develop much further until the advent of cinematography. The cinématographe, a projector, printer, and camera in one machine that allowed moving pictures to be shown successfully on a screen, was invented by history's earliest film makers, Auguste and Louis Lumière, in 1894.[4]

There is no single person who can be considered the "creator" of film animation, as there were several people working at about the same time on projects which could be considered animation.
Georges Méliès was a creator of special-effect films and is generally regarded as one of the first people to use animation. He discovered the technique by accident, when he stopped his camera from rolling in order to change something in the scene and then continued rolling the film. This idea later became known as stop-motion animation. Méliès' camera had broken down while shooting a bus driving by; when he had fixed the camera, a hearse happened to be passing just as he restarted rolling the film, with the result that the bus appeared to transform into a hearse. He was just one of the great contributors to the development of animation in the early years.

The earliest surviving stop-motion advertising film was an English short by Arthur Melbourne-Cooper called Matches: An Appeal (1899). Developed for the Bryant and May Matchsticks company, it involved stop-motion animation of wired-together matches writing a patriotic call to action on a blackboard.
J. Stuart Blackton was possibly the first American filmmaker to use the techniques of stop-motion and hand-drawn animation.

Introduced to film-making by Edison, he pioneered these concepts at the turn of the 20th century with his first copyrighted work, dated 1900. Several of his films, among them The Enchanted Drawing (1900) and Humorous Phases of Funny Faces (1906) were film versions of Blackton's "lightning artist" routine, and utilized modified versions of Méliès' early stop-motion techniques to make a series of blackboard drawings appear to move and reshape themselves. Humorous Phases of Funny Faces is regularly cited as the first true animated film, and Blackton is considered the first true animator.
 
 
Fantasmagorie by Émile Cohl, 1908
Another French artist, Émile Cohl, began drawing cartoon strips and created a film in 1908 called Fantasmagorie. The film largely consisted of a stick figure moving about and encountering all manner of morphing objects, such as a wine bottle that transforms into a flower. There were also sections of live action where the animator’s hands would enter the scene. The film was created by drawing each frame on paper and then shooting each frame onto negative film, which gave the picture a blackboard look. This makes Fantasmagorie the first animated film created by using what came to be known as traditional (hand-drawn) animation.
The author of the first puppet-animated film (The Beautiful Lukanida (1912)) was the Russian-born (ethnically Polish) director Wladyslaw Starewicz, known as Ladislas Starevich.[citation needed]
Following the successes of Blackton and Cohl, many other artists began experimenting with animation. One such was Winsor McCay, a successful newspaper cartoonist who created detailed animations that required a team of artists and painstaking attention to detail.

Each frame was drawn on paper, which invariably required backgrounds and characters to be redrawn and animated. Among McCay's most noted films are Little Nemo (1911), Gertie the Dinosaur (1914) and The Sinking of the Lusitania (1918).
The production of animated short films, typically referred to as "cartoons", became an industry of its own during the 1910s, and cartoon shorts were produced for showing in movie theaters. The most successful early animation producer was John Randolph Bray, who, along with animator Earl Hurd, patented the cel animation process which dominated the animation industry for the rest of the decade.

El Apóstol (Spanish: "The Apostle") was a 1917 Argentine animated film utilizing cutout animation, and the world's first animated feature film.[5] Unfortunately, a fire that destroyed producer Federico Valle's film studio incinerated the only known copy of El Apóstol, and it is now considered a lost film.

Computer animation has become popular since Toy Story (1995), the first animated film completely made using this technique.
In 2008, the animation market was worth US$68.4 billion.[6]
Techniques
Traditional animation
Main article: Traditional animation
Traditional animation (also called cel animation or hand-drawn animation) was the process used for most animated films of the 20th century. The individual frames of a traditionally animated film are photographs of drawings that are first drawn on paper. To create the illusion of movement, each drawing differs slightly from the one before it. The animators' drawings are traced or photocopied onto transparent acetate sheets called cels, which are filled in with paints in assigned colors or tones on the side opposite the line drawings. The completed character cels are photographed one-by-one against a painted background by a rostrum camera onto motion picture film.

The traditional cel animation process became obsolete by the beginning of the 21st century. Today, animators' drawings and the backgrounds are either scanned into or drawn directly into a computer system. Various software programs are used to color the drawings and simulate camera movement and effects. The final animated piece is output to one of several delivery media, including traditional 35 mm film and newer media such as digital video. The "look" of traditional cel animation is still preserved, and the character animators' work has remained essentially the same over the past 70 years. Some animation producers have used the term "tradigital" to describe cel animation which makes extensive use of computer technology.
Examples of traditionally animated feature films include Pinocchio (United States, 1940), Animal Farm (United Kingdom, 1954), Akira (Japan, 1988), and L'Illusionniste (British-French, 2010). Traditionally animated films produced with the aid of computer technology include The Lion King (US, 1994), Sen to Chihiro no Kamikakushi (Spirited Away) (Japan, 2001), Les Triplettes de Belleville (France, 2003), and The Secret of Kells (Irish-French-Belgian, 2009).
 
 
An example of traditional animation: a horse animated by rotoscoping from Eadweard Muybridge's 19th-century photos
•   Full animation refers to the process of producing high-quality traditionally animated films that regularly use detailed drawings and plausible movement. Fully animated films can be made in a variety of styles, from more realistically animated works such as those produced by the Walt Disney studio (Beauty and the Beast, Aladdin, Lion King) to the more 'cartoon' styles of the Warner Bros. animation studio. Many of the Disney animated features are examples of full animation, as are non-Disney works such as The Secret of NIMH (US, 1982), The Iron Giant (US, 1999), and Nocturna (Spain, 2007).
•   Limited animation involves the use of less detailed and/or more stylized drawings and methods of movement. Pioneered by the artists at the American studio United Productions of America, limited animation can be used as a method of stylized artistic expression, as in Gerald McBoing Boing (US, 1951), Yellow Submarine (UK, 1968), and much of the anime produced in Japan. Its primary use, however, has been in producing cost-effective animated content for media such as television (the work of Hanna-Barbera, Filmation, and other TV animation studios) and later the Internet (web cartoons).
•   Rotoscoping is a technique patented by Max Fleischer in 1917, in which animators trace live-action movement frame by frame. The source film can be directly copied from actors' outlines into animated drawings, as in The Lord of the Rings (US, 1978), or used in a stylized and expressive manner, as in Waking Life (US, 2001) and A Scanner Darkly (US, 2006). Some other examples are Fire and Ice (USA, 1983) and Heavy Metal (1981).
http://en.wikipedia.org/wiki/Computer_animation

404
MCT / Computer Animation
« on: July 10, 2013, 10:31:24 AM »
Computer animation or CGI animation is the process used for generating animated images by using computer graphics. The more general term computer-generated imagery encompasses both static scenes and dynamic images, while computer animation only refers to moving images.
Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low bandwidth, and faster real-time renderings. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film.

Computer animation is essentially a digital successor to the stop-motion techniques used in traditional animation with 3D models and frame-by-frame animation of 2D illustrations. Computer-generated animations are more controllable than other, more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and they allow the creation of images that would not be feasible using any other technology. Computer animation can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props.
To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new image that is similar to it, but advanced slightly in time (usually at a rate of 24 or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.

For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.
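As a rough sketch of what tweening does, the following Python example linearly interpolates every point of a 2D figure between two key poses; the pose data and frame count are invented for illustration and do not come from any real animation package:

# Minimal tweening sketch: interpolate a 2D figure between two key poses.
# The key poses and frame count are illustrative only.
key_a = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]   # pose at key frame 0
key_b = [(3.0, 1.0), (4.0, 3.0), (5.0, 1.0)]   # pose at key frame 24

def tween(pose_a, pose_b, t):
    # t runs from 0.0 (first key) to 1.0 (second key)
    return [(ax + (bx - ax) * t, ay + (by - ay) * t)
            for (ax, ay), (bx, by) in zip(pose_a, pose_b)]

for frame in range(25):
    pose = tween(key_a, key_b, frame / 24)
    # each in-between pose would be handed to the renderer here
    print(frame, pose[0])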

For 3D animations, all frames must be rendered after the modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low-bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D) often use software on the end user's computer to render in real time, as an alternative to streaming or pre-loaded high-bandwidth animations.

A simple example

Computer animation example

The screen is blanked to a background color, such as black. Then a goat is drawn on the screen. Next, the screen is blanked, but the goat is re-drawn or duplicated slightly to the left of its original position. This process is repeated, each time moving the goat a bit to the left. If the process is repeated fast enough, the goat will appear to move smoothly to the left. This basic procedure is used for all moving pictures in films and television.
The moving goat is an example of shifting the location of an object. More complex transformations of object properties, such as size, shape, and lighting effects, often require calculations and computer rendering instead of simple re-drawing or duplication.

Explanation

To trick the eye and brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second (frame/s) or faster (a frame is one complete image). At rates above 75-120 frames/s, no improvement in realism or smoothness is perceivable, due to the way the eye and brain process images. At rates below 12 frame/s, most people can detect the jerkiness associated with the drawing of new images, which detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames/s in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. Because it produces more realistic imagery, computer animation demands higher frame rates to reinforce this realism.
Movie film seen in theaters in the United States runs at 24 frames per second, which is sufficient to create the illusion of continuous movement; at that rate, each frame is on screen for about 42 milliseconds. For high resolution, adapters are used.

History
Main article: History of computer animation
See also: Timeline of computer animation in film and television
Early digital computer animation was developed at Bell Telephone Laboratories in the 1960s by Edward E. Zajac, Frank W. Sinden, Kenneth C. Knowlton, and A. Michael Noll. Other digital animation was also practiced at the Lawrence Livermore National Laboratory.

An early step in the history of computer animation was the sequel to the 1973 movie Westworld, a science-fiction film about a society in which robots live and work among humans. The sequel, Futureworld (1976), used 3D Wire-frame imagery which featured a computer-generated hand and face created by University of Utah graduates Edwin Catmull and Fred Parke. This imagery had originally appeared in their student film A Computer Animated Hand, which they completed in 1971.[1]
Developments in CGI technologies are reported each year at SIGGRAPH, an annual conference on computer graphics and interactive techniques which is attended each year by tens of thousands of computer professionals. Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies, leading to the art form Machinima.

The first feature-length computer animated film was the 1995 movie Toy Story by Pixar.[2] It followed an adventure centered around toys and their owners. The groundbreaking film was the first of many fully computer animated films.
Computer animation created blockbuster films such as Toy Story 3 (2010), Avatar (2009), Shrek 2 (2004), Cars 2 (2011), and Life of Pi (2012).
Methods of animating virtual characters
 
 
In this .gif of a 2D Flash animation, each 'stick' of the figure is keyframed over time to create motion.
In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, analogous to a skeleton or stick figure. The position of each segment of the skeletal model is defined by animation variables, or Avars. In human and animal characters, many parts of the skeletal model correspond to actual bones, but skeletal animation is also used to animate other things, such as facial features (though other methods for facial animation exist). The character "Woody" in Toy Story, for example, uses 700 Avars, including 100 Avars in the face. The computer does not usually render the skeletal model directly (it is invisible), but uses the skeletal model to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame.

There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or 'tween' between them, a process called keyframing. Keyframing puts control in the hands of the animator, and has roots in hand-drawn traditional animation.
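As a hedged illustration of keyframing, the snippet below stores a handful of hand-set key frames for a single avar (the name "elbow_bend" and its values are invented for the example) and lets the computer interpolate, or 'tween', the value at every frame in between:

import bisect

# Key frames for one animation variable (avar): (frame number, value).
# The avar name and values are hypothetical, chosen only to illustrate keyframing.
elbow_bend_keys = [(0, 0.0), (12, 45.0), (30, 10.0)]

def avar_at(keys, frame):
    """Interpolate an avar's value at any frame from its key frames."""
    frames = [f for f, _ in keys]
    if frame <= frames[0]:
        return keys[0][1]
    if frame >= frames[-1]:
        return keys[-1][1]
    i = bisect.bisect_right(frames, frame)   # segment containing `frame`
    (f0, v0), (f1, v1) = keys[i - 1], keys[i]
    t = (frame - f0) / (f1 - f0)             # 0..1 within the segment
    return v0 + (v1 - v0) * t                # linear 'tween'

for frame in (0, 6, 12, 21, 30):
    print(frame, avar_at(elbow_bend_keys, frame))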

In contrast, a newer method called motion capture makes use of live action. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. His or her motion is recorded to a computer using video cameras and markers, and that performance is then applied to the animated character.
Each method has its advantages, and as of 2007, games and films are using either or both of these methods in productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, actor Bill Nighy provided the performance for the character Davy Jones. Even though Nighy himself doesn't appear in the film, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be done through conventional costuming.

Creating characters and objects on a computer
3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. Models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process called rigging, the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two.
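As a loose sketch of how a bone/joint system drives movement, the following computes 2D joint positions for a two-bone limb with forward kinematics; the bone lengths and the animated angles are assumptions made for the example:

import math

# A hypothetical two-bone limb; lengths are in arbitrary units.
UPPER_ARM, FOREARM = 3.0, 2.5

def limb_positions(shoulder_angle, elbow_angle):
    """Forward kinematics: joint angles (radians) -> joint positions."""
    shoulder = (0.0, 0.0)
    elbow = (UPPER_ARM * math.cos(shoulder_angle),
             UPPER_ARM * math.sin(shoulder_angle))
    wrist_angle = shoulder_angle + elbow_angle   # angles accumulate down the chain
    wrist = (elbow[0] + FOREARM * math.cos(wrist_angle),
             elbow[1] + FOREARM * math.sin(wrist_angle))
    return shoulder, elbow, wrist

# Animating the two angle avars over time moves the whole limb.
for frame in range(5):
    t = frame / 4
    print(limb_positions(math.radians(90 * t), math.radians(-45 * t)))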

3D models rigged for animation may contain thousands of control points - for example, the character "Woody" in Pixar's movie Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers, 742 in the face alone. In the 2004 film The Day After Tomorrow, designers had to simulate the forces of extreme weather with the help of video references and accurate meteorological data. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots, and his expressions were used to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.
Computer animation development equipment
 
 
A ray-traced 3-D model of a jack inside a cube, and the jack alone below.
Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can take a lot of time on an ordinary home computer. Because of this, video game animators tend to use low resolution, low polygon count renders, such that the graphics can be rendered in real time on a home computer. Photorealistic animation would be impractical in this context.
Professional animators of movies, television, and video sequences on computer games make photorealistic animation with high detail. This level of quality for movie animation would take tens to hundreds of years to create on a home computer. Many powerful workstation computers are used instead. Graphics workstation computers use two to four processors, and thus are a lot more powerful than a home computer, and are specialized for rendering. A large number of workstations (known as a render farm) are networked together to effectively act as a giant computer. The result is a computer-animated movie that can be completed in about one to five years (this process is not composed solely of rendering, however). A workstation typically costs $2,000 to $16,000, with the more expensive stations being able to render much faster, due to the more technologically advanced hardware that they contain. Professionals also use digital movie cameras, motion capture or performance capture, bluescreens, film editing software, props, and other tools for movie animation.
Modeling human faces

Main article: Computer facial animation
The realistic modeling of human facial features is both one of the most challenging and most sought-after elements in computer-generated imagery. Computer facial animation is a highly complex field in which models typically include a very large number of animation variables. Historically speaking, the first SIGGRAPH tutorials on the state of the art in facial animation, in 1989 and 1990, proved to be a turning point in the field by bringing together and consolidating multiple research elements, and sparked interest among a number of researchers.[3]

The Facial Action Coding System (with 46 action units, such as "lip bite" or "squint"), which had been developed in 1976, became a popular basis for many systems.[4] As early as 2001, MPEG-4 included 68 facial animation parameters for lips, jaws, etc., and the field has made significant progress since then, with increasing use of facial microexpressions.[4][5]
In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars. In this approach, the PAD model is used as a high-level emotional space, and the lower-level space is the MPEG-4 Facial Animation Parameters (FAP). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model.[6]
Realism in the future of computer animation

Realism in computer animation can mean making each frame look photorealistic, in the sense that the scene is rendered to resemble a photograph, or making the animation of characters believable and lifelike. This article focuses on the second definition. Computer animation can be realistic with or without photorealistic rendering.
One of the greatest challenges in computer animation has been creating human characters that look and move with the highest degree of realism. Many animated films instead feature characters that are anthropomorphic animals (Finding Nemo, Ice Age, Bolt, Madagascar, Over the Hedge, Rio, Kung Fu Panda, Fantastic Mr. Fox, Alpha and Omega), machines (Cars, WALL-E, Robots), insects (Antz, A Bug's Life, The Ant Bully, Bee Movie), fantasy creatures and characters (Monsters, Inc., Shrek, TMNT, Brave, Epic), or humans with nonrealistic, cartoon-like proportions (The Incredibles, Despicable Me, Up, Megamind, Jimmy Neutron: Boy Genius, Planet 51, Hotel Transylvania, Team Fortress 2).

Part of the difficulty in making pleasing, realistic human characters is the uncanny valley: a concept where, up to a point, people have an increasingly negative emotional response as a human replica looks and acts more and more human. Also, some materials that commonly appear in a scene like cloth, foliage, fluids, and hair have proven more difficult to faithfully recreate and animate than others. Consequently, special software and techniques have been developed to better simulate these specific elements.
In theory, realistic computer animation can reach a point where it is indistinguishable from real action captured on film. Where computer animation achieves this level of realism, it may have major repercussions for the film industry.[citation needed]
The goal of computer animation is not always to emulate live action as closely as possible. Computer animation can also be tailored to mimic or substitute for other types of animation, such as traditional stop motion animation (Flushed Away). Some of the long-standing basic principles of animation, like squash & stretch, call for movement that is not strictly realistic, and such principles still see widespread application in computer animation.
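Squash and stretch, for example, is often implemented as a volume-preserving scale; in this toy sketch (the function and values are invented for illustration), a sprite stretched along its motion axis is thinned by the reciprocal so its area stays constant:

# Volume (area) preserving squash & stretch for a 2D sprite.
# `stretch` > 1 elongates along the motion axis; the cross axis shrinks
# by the reciprocal so width * height stays constant.
def squash_stretch(width, height, stretch):
    return width * stretch, height / stretch

print(squash_stretch(10.0, 10.0, 1.5))   # (15.0, ~6.67): fast-moving, elongated
print(squash_stretch(10.0, 10.0, 0.7))   # (7.0, ~14.3): squashed on impact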

The popularity of websites that allow members to upload their own movies for others to view has created a growing community of amateur computer animators. With utilities and programs often included free with modern operating systems, many users can make their own animated movies and shorts. Several free and open source animation software applications exist as well. A popular amateur approach to animation is via the animated GIF format, which can be uploaded and seen on the web easily.
Detailed examples and pseudocode
In 2D computer animation, moving objects are often referred to as “sprites.” A sprite is an image that has a location associated with it. The location of the sprite is changed slightly, between each displayed frame, to make the sprite appear to move. The following pseudocode makes a sprite move from left to right:
var int x := 0, y := screenHeight / 2   // start at the left edge, vertically centred
while x < screenWidth
    drawBackground()
    drawSpriteAtXY(x, y)    // draw the sprite on top of the background
    x := x + 5              // move to the right for the next frame
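For a runnable equivalent, here is a minimal Python sketch using the pygame library; the choice of pygame, the rectangle standing in for the sprite, and the 24 frame/s cap are all assumptions made for the example:

import pygame

# A runnable version of the pseudocode above, using pygame as the display layer.
pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
x, y = 0, screen.get_height() // 2

while x < screen.get_width():
    for event in pygame.event.get():          # keep the window responsive
        if event.type == pygame.QUIT:
            raise SystemExit
    screen.fill((0, 0, 0))                    # drawBackground()
    pygame.draw.rect(screen, (255, 255, 255), (x, y, 20, 10))  # stand-in sprite
    pygame.display.flip()                     # show the finished frame
    x += 5                                    # move to the right
    clock.tick(24)                            # about 24 frames per second

pygame.quit()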

Computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three-dimensional polygons, apply “textures”, lighting and other effects to the polygons, and finally render the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique, called constructive solid geometry, defines objects by conducting boolean operations on regular shapes, and has the advantage that animations may be accurately produced at any resolution.
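To make the constructive solid geometry idea concrete, here is a small sketch in which a solid is a point-membership test and boolean operations combine the tests; the particular shapes and sizes are invented for illustration. Because membership can be evaluated at any point, the combined object can be sampled at whatever resolution is needed.

# Constructive solid geometry sketch: a solid is a function that answers
# "is this point inside?", and boolean operations combine such functions.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r**2

def box(x0, y0, z0, x1, y1, z1):
    return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def union(a, b):       return lambda x, y, z: a(x, y, z) or b(x, y, z)
def intersect(a, b):   return lambda x, y, z: a(x, y, z) and b(x, y, z)
def difference(a, b):  return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A cube with a spherical bite taken out of one corner.
solid = difference(box(0, 0, 0, 2, 2, 2), sphere(2, 2, 2, 1))
print(solid(1.0, 1.0, 1.0))   # True: deep inside the cube
print(solid(1.9, 1.9, 1.9))   # False: inside the removed sphere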
Let's step through the rendering of a simple image of a room with flat wood walls, with a grey pyramid in the center of the room. The pyramid will have a spotlight shining on it. Each wall, the floor and the ceiling is a simple polygon, in this case a rectangle. Each corner of the rectangles is defined by three values referred to as X, Y and Z. X is how far left and right the point is, Y is how far up and down the point is, and Z is how far in and out of the screen the point is. The wall nearest us would be defined by four points (in the order x, y, z). Below is a representation of how the wall is defined:
(0, 10, 0)                        (10, 10, 0)

(0,0,0)                           (10, 0, 0)
The far wall would be:
(0, 10, 20)                        (10, 10, 20)

(0, 0, 20)                         (10, 0, 20)

The pyramid is made up of five polygons: the rectangular base and four triangular sides. To draw this image, the computer uses math to calculate how to project the scene, defined by three-dimensional data, onto a two-dimensional computer screen.
We must also define where our view point is: that is, the vantage point from which the scene will be drawn. Our view point is inside the room, a bit above the floor, directly in front of the pyramid. First the computer calculates which polygons are visible. The near wall will not be displayed at all, as it is behind our view point. The far side of the pyramid will also not be drawn, as it is hidden by the front of the pyramid.

Next, each point is perspective-projected onto the screen. The portions of the walls ‘furthest’ from the view point will appear shorter than the nearer areas, due to perspective. To make the walls look like wood, a wood pattern, called a texture, will be drawn on them. To accomplish this, a technique called “texture mapping” is often used. A small drawing of wood that can be repeatedly drawn in a matching tiled pattern (like desktop wallpaper) is stretched and drawn onto the walls' final shape. The pyramid is solid grey, so its surfaces can simply be rendered as grey. But we also have a spotlight: where its light falls we lighten colors, and where objects block the light we darken colors.
Next we render the complete scene on the computer screen. If the numbers describing the position of the pyramid were changed and this process repeated, the pyramid would appear to move.
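Here is a minimal sketch of the projection step, reusing the room coordinates given above; the camera position and the focal-length scale are assumptions chosen to match the description of a view point inside the room, in front of the pyramid.

# Pinhole perspective projection for the room described above.
# The camera position and focal scale are illustrative assumptions.
CAM = (5.0, 3.0, 2.0)     # a bit above the floor, in front of the pyramid
FOCAL = 300.0             # scene units -> screen pixels

def project(x, y, z):
    """Project a 3D point onto 2D screen coordinates (pinhole model)."""
    dz = z - CAM[2]                  # depth in front of the camera
    if dz <= 0:
        return None                  # behind the view point: not drawn
    sx = (x - CAM[0]) * FOCAL / dz   # farther points shrink toward the centre
    sy = (y - CAM[1]) * FOCAL / dz
    return sx, sy

far_wall = [(0, 10, 20), (10, 10, 20), (0, 0, 20), (10, 0, 20)]
for corner in far_wall:
    print(corner, '->', project(*corner))
print((0, 0, 0), '->', project(0, 0, 0))   # near-wall corner: None (behind the camera)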
Computer-assisted vs computer-generated animation

To animate means "to give life to" and there are two basic ways that animators commonly do this.
Computer-assisted animation Computer-assisted animation is usually classed as two-dimensional (2D) animation. Drawings are either hand-drawn (pencil to paper) or interactively drawn (drawn on the computer) using different assisting tools, and are positioned in specific software packages. Within the software package, the creator places drawings into different key frames which fundamentally create an outline of the most important movements. The computer then fills in all the “in-between frames”, a process commonly known as tweening. Computer-assisted animation essentially uses new technologies to cut down the time scale that traditional animation could take, while still keeping the elements of traditional drawings of characters or objects.[7] Examples of computer-assisted animated movies are Beauty and the Beast and Antz.

Computer-generated animation Computer-generated animation is known as three-dimensional (3D) animation. Creators design an object or character with X, Y and Z axes. Unlike the traditional way of animation, no pencil-to-paper drawings are involved. The object or character created is then taken into software; key framing and tweening are also carried out in computer-generated animation, but many techniques are used that do not relate to traditional animation. Animators can break physical laws by using mathematical algorithms to cheat mass, force and gravity rulings. Fundamentally, time scale and quality are two major things enhanced by computer-generated animation, which is why it could be said to be a preferred way to produce animation. Another great aspect of CGA is that a flock of creatures can be created to act independently when created as a group. An animal's fur can be programmed to wave in the wind and lie flat when it rains, instead of programming each strand of hair separately.[7] Examples of computer-generated animated movies are Toy Story, The Incredibles and Shrek.
http://en.wikipedia.org/wiki/Computer_animation

405
MCT / Class Schedule
« on: June 18, 2013, 12:22:52 PM »
The class schedule of the MTCA department is available in the attached file.
