Friday, 17 February 2017

Computer evolution across the generations is tabulated below:

| Generation | Years | Switching device | Storage device | Software | Applications |
|---|---|---|---|---|---|
| First | 1949-55 | Vacuum tubes | Acoustic delay lines, later magnetic drum; 1 KB memory | Machine and assembly languages; simple monitors | Mostly scientific, later simple business systems |
| Second | 1956-65 | Transistors | Magnetic core main memory, tapes and disks as peripheral memory; 100 KB main memory | High-level languages (FORTRAN, COBOL, ALGOL); batch operating systems | Extensive business applications, engineering design optimization |
| Third | 1966-75 | Integrated circuits (ICs) | High-speed magnetic cores; large disks (100 MB); 1 MB main memory | FORTRAN IV, COBOL 68, PL/1; time-shared operating systems | Database management systems, online systems |
| Fourth (first decade) | 1975-84 | Large-scale integrated circuits, microprocessors | Semiconductor memory; Winchester disks; 10 MB main memory, 1000 MB disks | FORTRAN 77, Pascal, COBOL 74 | Personal computers, integrated CAD/CAM, real-time control, graphics-oriented systems |
| Fourth (second decade) | 1985-91 | Very-large-scale ICs; over 3 million transistors per chip | Semiconductor memory; 1 GB main memory; 100 GB disks | C, C++, Java, Prolog | Simulation, visualization, parallel computing, multimedia |
| Fifth | 1991-present | Parallel computing and superconductors | Attachable hard drives and USB drives used to add memory | Use of artificial intelligence | Voice recognition and response to natural language |


1.3 Classification
Earlier, computers were classified as microcomputers, minicomputers, super-minicomputers, mainframe computers and supercomputers. Technological advances have made this classification largely irrelevant today.
Now, all computers use microprocessors. Based on the mode of use, they can be classified as palmtop, laptop, desktop and workstation.
1.3.1 Definitions, Concepts and Features
A computer is an electronic device that executes the instructions in a program. A computer has four functions:
Input: accepts data
Processing: processes data
Output: produces output
Storage: stores results
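The four functions above can be traced in a few lines of Python. This is only an illustrative sketch; the sample data and the file name result.txt are made up for the example:

```python
# The four functions of a computer, shown on a trivial task.
data = [3, 1, 2]                     # input: accept data
result = sorted(data)                # processing: process the data
print(result)                        # output: produce output -> [1, 2, 3]
with open("result.txt", "w") as f:
    f.write(str(result))             # storage: store the result
```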

The computer is omnipresent mainly because of the following features:
Speed: a computer can perform billions of operations per second.
Reliability: failures are usually due to human error, one way or another.
Storage: a computer can store huge amounts of data.
In technical parlance, the term computer refers specifically to an electronic computer. Virtually all computers are “digital” because they are composed of digital (electronic) circuits built with microscopic transistors. Therefore, they can only process digital data (discrete electronic signals).
Most “real world” data is “analog” (continuous electronic signals, e.g. light, sound, movement and so on). Therefore, it must be converted to digital (A/D conversion) when encoded and vice versa (D/A conversion) when being decoded.
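A/D conversion can be sketched in a few lines of Python. The 8-point sine wave and the 3-bit resolution below are illustrative choices, not part of the text; real converters work in hardware at much higher resolutions:

```python
import math

def quantize(samples, bits):
    """Map each sample in [-1.0, 1.0] to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)            # spacing between adjacent levels
    return [round((s + 1.0) / step) for s in samples]

# Sample one cycle of a sine wave (a stand-in for an "analog" signal)
# at 8 points, then digitize it with 3 bits (8 levels, coded 0..7).
analog = [math.sin(2 * math.pi * n / 8) for n in range(8)]
digital = quantize(analog, bits=3)
print(digital)
```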
Based on the above features, we can define a computer as, essentially, an electronic device that can receive and store data and perform sets of instructions called programs. The computer acts upon these programs in a pre-determined and predictable fashion to process the data in a desired manner.
The following words are so basic to computers that it is virtually impossible to talk about computers without using them. Therefore, some preliminary definitions are given below; details will be covered later:
Computer: an electronic machine that processes computer data (digital) into human information (numeric, text, or physical) or controls electrical devices.
Microcomputer: a computer based on a microprocessor.
Computer system: the hardware, software, data and procedures for using the system.
Hardware: the physical equipment of a computer system.
Software: programs that are installed and "run" on the computer.
Firmware: software that is permanently stored in a computer's read-only memory.
Program: a set of step-by-step instructions, in a computer language, that commands a computer to execute a specific task in finite time.
1.4 Data Representation
The characters and numbers fed to a computer and the output from the computer must be in a form readable and usable by the people. For this purpose, natural language symbols and decimal digits are appropriate. These constitute the external data representation.
On the other hand, the representation of data inside a computer must match the technology used by the computer to store and process data. All data to be stored and processed in computers are transformed or coded as strings of two symbols, one symbol to represent each state. The two symbols used are 0 and 1. These are known as binary digits, or bits for short.
There are 4 unique combinations of two bits:
00 01 10 11
There are 8 unique combinations or strings of 3 bits each:
000 001 010 011 100 101 110 111
Each unique string of bits may be used to represent or code a symbol. In order to code the 26 capital letters of English, at least 26 unique strings of bits are needed. Five bits are sufficient as 32 strings of 5 bits each can be formed.
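This counting argument can be checked directly: there are 2 to the power n unique strings of n bits. A short Python sketch that enumerates them:

```python
from itertools import product

def bit_strings(n):
    """Return every unique string of n bits; there are 2**n of them."""
    return ["".join(bits) for bits in product("01", repeat=n)]

print(bit_strings(2))        # ['00', '01', '10', '11']
print(len(bit_strings(3)))   # 8
print(len(bit_strings(5)))   # 32 -- enough to code the 26 capital letters
```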
Coding of characters has been standardized to facilitate exchange of recorded data between computers. The most popular standard is known as ASCII (American Standard Code for Information Interchange). This uses 7 bits to code each character.
Besides codes for characters, this standard defines codes to convey information such as end of line, end of page and so on.

In addition to ASCII, another code known as ISCII (Indian Standard Code for Information Interchange) has been standardized by the Indian Standards Organization. It is an eight bit code which allows English and Indian script alphabets to be used simultaneously.
A string of bits used to represent a character is known as a byte. Characters coded in ISCII need 8 bits each, and the byte is now commonly understood as a string of 8 bits.
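ASCII coding can be inspected with a few lines of Python; the built-in ord and format functions are used here only to display the codes:

```python
# ASCII assigns each character a 7-bit code, stored in practice as one 8-bit byte.
for ch in "CAB":
    code = ord(ch)                        # ASCII code point of the character
    print(ch, code, format(code, "08b"))  # character, decimal code, 8-bit pattern
# 'A' is 65, i.e. binary 01000001

encoded = "COMPUTER".encode("ascii")      # one byte per character
print(len(encoded))                       # 8 bytes
```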
Thus,
1 bit = 0 or 1, on or off
1 byte = 8 bits
1 kilobyte (K or KB) = 1024 bytes
1 megabyte (MB) = 1024 kilobytes
You might wonder, why 1024 instead of 1000 bytes per kilobyte? That is because computers don't count by tens as we do; they count by twos and powers of 2. 1024 is 2 raised to the tenth power, i.e.
1024 = 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2
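The same powers of two can be verified in a few lines of Python:

```python
# Units of digital storage grow by factors of 2**10 = 1024.
KB = 2 ** 10           # 1024 bytes
MB = 2 ** 20           # 1024 kilobytes = 1,048,576 bytes
GB = 2 ** 30           # 1024 megabytes

print(KB)              # 1024
print(MB // KB)        # 1024 -- one megabyte is 1024 kilobytes
print(2 ** 10 == 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2)  # True
```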







