
Computer Language

First Generation

Second Generation

Third Generation

Fourth Generation

Fifth Generation

1. First Generation Language: The first generation languages, or 1GL, are low-level languages: machine language itself. Originally, no translator was used to compile or assemble a first-generation language; the instructions were entered directly through the front-panel switches of the computer system. The main benefit of programming in a first-generation language is that the code a user writes can run very fast and efficiently, since it is executed directly by the CPU. However, machine language is much more difficult to learn than later-generation programming languages, and it is far more difficult to edit if errors occur. In addition, if instructions need to be inserted into memory at some location, all the instructions after the insertion point must be moved down to make room for the new ones; doing so on a front panel with switches can be very difficult. Furthermore, portability is significantly reduced: to transfer code to a different computer it must be completely rewritten, since the machine language of one computer can differ significantly from that of another. Architectural considerations make portability difficult as well; for example, the number of registers on one CPU architecture can differ from that on another.
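To make the idea concrete, here is a minimal illustrative sketch (not from the original text) of machine code being executed directly by the CPU: a handful of raw x86-64 instruction bytes, of the kind a first-generation programmer would have entered by hand, are copied into an executable page and called. It assumes a POSIX system on x86-64 whose security policy still allows mapping memory that is both writable and executable.

/* Illustrative sketch only: a few raw x86-64 machine-code bytes placed in an
 * executable page and called directly, showing the 1GL idea that the CPU runs
 * such instructions with no translation step. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* mov rax, rdi ; add rax, rsi ; ret  -- i.e. return a + b */
    unsigned char code[] = { 0x48, 0x89, 0xF8, 0x48, 0x01, 0xF0, 0xC3 };

    void *buf = mmap(NULL, sizeof(code), PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memcpy(buf, code, sizeof(code));

    long (*add)(long, long) = (long (*)(long, long))buf;
    printf("2 + 3 = %ld\n", add(2, 3));   /* the CPU executes the raw bytes */

    munmap(buf, sizeof(code));
    return 0;
}

Changing such a program means editing raw byte values by hand, which hints at why inserting instructions and porting code were so painful.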

2. Second Generation Language: The second generation languages, or 2GL, are also low-level languages; they generally consist of assembly languages. The term was coined to distinguish them from higher-level third-generation programming languages (3GL) such as COBOL on one side and from earlier machine-code languages on the other. Second-generation programming languages have the following properties:

The code can be read and written by a programmer. To run on a computer, it must be converted into a machine-readable form, a process called assembly.

The language is specific to a particular processor family and environment.

Second-generation languages are sometimes used in kernels and device drivers (though C is generally employed for this in modern kernels), but more often find use in extremely performance-intensive processing such as games, video editing, and graphics manipulation and rendering. One method for creating such code is to let a compiler generate a machine-optimized assembly-language version of a particular function; this code is then hand-tuned, gaining both the brute-force insight of the machine's optimizing algorithm and the intuitive abilities of the human optimizer.
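As a hedged sketch of how second-generation code is still mixed into modern development, the fragment below embeds a hand-written x86-64 assembly instruction inside a C function using GCC/Clang extended inline assembly; the same toolchains can also emit an assembly listing of a compiled function for hand-tuning (for example with gcc -S). The function name and instruction choice here are illustrative only.

/* Minimal sketch, assuming GCC or Clang on x86-64: a hand-written assembly
 * fragment (second-generation code) embedded inside a C (third-generation)
 * function via extended inline assembly. */
#include <stdio.h>

static long add_asm(long x, long y) {
    /* "addq %1, %0" adds the input operand y into the read-write operand x. */
    __asm__("addq %1, %0" : "+r"(x) : "r"(y));
    return x;
}

int main(void) {
    printf("40 + 2 = %ld\n", add_asm(40, 2));
    return 0;
}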

3. Third Generation Language: The third generation languages, or 3GL, are high-level languages such as C. Most popular general-purpose languages today, such as C++, C#, Java, BASIC and Delphi, are also third-generation languages. Most 3GLs support structured programming.
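A short C sketch (illustrative, not from the original text) of the structured-programming style most 3GLs support: control flow is expressed with a function, a loop, and a conditional rather than explicit jumps to numbered instructions.

/* Structured programming in C: a function, a loop, and a conditional. */
#include <stdio.h>

/* Sum only the even values in an array. */
static int sum_even(const int *values, int count) {
    int total = 0;
    for (int i = 0; i < count; i++) {
        if (values[i] % 2 == 0) {
            total += values[i];
        }
    }
    return total;
}

int main(void) {
    int data[] = { 1, 2, 3, 4, 5, 6 };
    printf("sum of even values: %d\n", sum_even(data, 6));   /* prints 12 */
    return 0;
}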

4. Fourth Generation Language: The fourth generation languages, or 4GL, consist of statements similar to statements in a human language. Fourth generation languages are commonly used in database programming and scripting. A fourth-generation programming language (1970s-1990), abbreviated 4GL, is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software. In the history of computer science, the 4GL followed the 3GL in an upward trend toward higher abstraction and statement power, and the 4GL was in turn followed by efforts to define and use a 5GL. The natural-language, block-structured style of the third-generation programming languages improved the process of software development, but 3GL development methods can be slow and error-prone. It became clear that some applications could be developed more rapidly by adding a higher-level language and methodology that would generate the equivalent of very complicated 3GL instructions with fewer errors. In some senses, software engineering arose to handle 3GL development, whereas 4GL and 5GL projects are more oriented toward problem solving and systems engineering. All 4GLs are designed to reduce programming effort, the time it takes to develop software, and the cost of software development. They are not always successful in this task, sometimes resulting in inelegant and unmaintainable code; however, given the right problem, the use of an appropriate 4GL can be spectacularly successful, as was seen with MARK-IV and MAPPER (in Santa Fe's real-time tracking of its freight cars, the productivity gains were estimated to be eight times over COBOL). The usability improvements obtained by some 4GLs (and their environments) allowed better exploration of heuristic solutions than the 3GL did. A quantitative definition of 4GL has been given by Capers Jones as part of his work on function point analysis. Jones defines the various generations of programming languages in terms of developer productivity, measured in function points per staff-month: a 4GL is defined as a language that supports 12 to 20 FP/SM, which correlates with about 16 to 27 lines of code per function point implemented in a 4GL. Fourth-generation languages have often been compared to domain-specific programming languages (DSLs); some researchers state that 4GLs are a subset of DSLs. Given the persistence of assembly language even now in advanced development environments such as Microsoft Visual Studio, one expects that a system ought to be a mixture of all the generations, with only very limited use of the first.
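To illustrate the statement-power argument above, the following hedged sketch embeds a single declarative SQL statement, used here only as a stand-in for the database-oriented style associated with 4GLs, inside a C program. It assumes SQLite (libsqlite3) is installed; the table and column names are invented for the example.

/* Hedged sketch: one declarative SQL statement does work that would take an
 * explicit loop, comparisons, and bookkeeping in a 3GL.
 * Assumes SQLite is available; build with: cc demo.c -lsqlite3 */
#include <stdio.h>
#include <sqlite3.h>

/* Print each result row returned by sqlite3_exec(). */
static int print_row(void *unused, int ncols, char **vals, char **names) {
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s=%s  ", names[i], vals[i] ? vals[i] : "NULL");
    printf("\n");
    return 0;
}

int main(void) {
    sqlite3 *db = NULL;
    char *err = NULL;

    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    /* Set up a tiny table so the example is self-contained. */
    sqlite3_exec(db,
        "CREATE TABLE orders(id INTEGER, region TEXT, amount REAL);"
        "INSERT INTO orders VALUES (1,'east',10.0),(2,'west',25.0),"
        "(3,'east',40.0),(4,'west',5.0);",
        NULL, NULL, &err);

    /* The 4GL-style part: one declarative statement instead of a hand-written
     * loop that filters, groups, and sums. */
    sqlite3_exec(db,
        "SELECT region, SUM(amount) AS total FROM orders GROUP BY region;",
        print_row, NULL, &err);

    sqlite3_close(db);
    return 0;
}

The one-line GROUP BY query replaces the filtering, grouping, and summing loop a pure 3GL solution would need, which is exactly the kind of reduction in programming effort the section describes.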

5. Fifth Generation Language: The term fifth generation language, or 5GL, is sometimes applied loosely to programming environments that provide visual tools to help develop a program, with Visual Basic often cited as an example. More strictly, a fifth-generation programming language (abbreviated 5GL) is a programming language based on solving problems using constraints given to the program, rather than using an algorithm written by a programmer. Most constraint-based and logic programming languages, and some declarative languages, are fifth-generation languages. While fourth-generation languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer spelling out how: the programmer only needs to state what problem is to be solved and what conditions must be met, without worrying about how to implement a routine or algorithm to solve it. Fifth-generation languages are used mainly in artificial intelligence research. Prolog, OPS5, and Mercury are examples of fifth-generation languages. Languages of this kind were also built on Lisp, many originating on the Lisp machine, such as ICAD; there are also many frame languages, such as KL-ONE. In the 1990s, fifth-generation languages were considered to be the wave of the future, and some predicted that they would replace all other languages for system development, with the exception of low-level languages. Most notably, from 1982 to 1993 Japan put a great deal of research and money into its Fifth Generation Computer Systems project, hoping to design a massive computer network of machines using these tools. However, as larger programs were built, the flaws of the approach became more apparent: it turns out that, starting from a set of constraints defining a particular problem, deriving an efficient algorithm to solve it is a very difficult problem in itself. This crucial step cannot yet be automated and still requires the insight of a human programmer.
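The constraint-versus-algorithm distinction can be sketched in a toy way. In a genuine 5GL such as Prolog, the programmer would state only the constraints; the hypothetical C sketch below makes the gap visible by separating the problem statement from a generic brute-force search, which stands in for the solver a constraint-based system would supply automatically (and far more cleverly).

/* Toy illustration of the constraint idea behind 5GLs: the "program" is just
 * the constraints; a generic search loop finds values satisfying them.
 * Everything here is a hypothetical sketch, not any real 5GL's mechanism. */
#include <stdio.h>

/* The problem statement: find x and y with x + y == 10 and x * y == 21. */
static int satisfies(int x, int y) {
    return (x + y == 10) && (x * y == 21);
}

int main(void) {
    /* Generic enumeration standing in for the solver a 5GL would supply. */
    for (int x = 0; x <= 10; x++) {
        for (int y = 0; y <= 10; y++) {
            if (satisfies(x, y)) {
                printf("solution: x=%d, y=%d\n", x, y);   /* 3 and 7 */
            }
        }
    }
    return 0;
}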

Today, fifth-generation languages have resurfaced as a possible level of computer language, and a number of software vendors currently claim that their software meets the visual "programming" requirements of the 5GL concept.

Compiler vs Interpreter: A compiler and an interpreter both serve the same basic purpose: they convert one level of language to another. A compiler converts high-level instructions into machine language, while an interpreter converts high-level instructions into some intermediate form and then executes them.

Compiler: A compiler is a computer program that converts high-level instructions or language into a form that the computer can understand. Since a computer understands only binary numbers, the compiler fills this gap; otherwise it would be very difficult for a human to work with information in the form of 0s and 1s. The earliest compilers were simple programs used to convert symbols into bits, and the programs themselves were very simple, containing a series of steps translated by hand into data. Because this was a very time-consuming process, some parts were programmed or automated, and this formed the first compiler. More sophisticated compilers are built using simpler ones; with every new version, more rules are added and a more natural language environment is created for the human programmer. Compilers continue to evolve in this way, improving their ease of use. There are specific compilers for certain languages or tasks. Compilers can also be single-pass or multi-pass: a first pass can convert the high-level language into a language closer to machine language, and further passes can convert it into the final form for execution.

Interpreter: Programs written in high-level languages can be executed in two different ways: the first is to use a compiler, and the other is to use an interpreter. An interpreter converts high-level instructions into an intermediate form. The advantage of using an interpreter is that the program does not have to go through a separate compilation stage, which can be time-consuming; the high-level program is executed directly. That is why some programmers use interpreters while developing small sections of a program, as this saves time.

Almost all high-level programming languages have both compilers and interpreters, but some languages, such as LISP and BASIC, were designed so that programs written in them are executed by an interpreter.

Difference between compiler and interpreter:

A compiler converts high-level instructions into machine language, while an interpreter converts them into an intermediate form.

A compiler translates the entire program before execution, whereas an interpreter translates the first line, executes it, and then moves on to the next line.

A compiler produces a list of errors after the compilation process, while an interpreter stops translating at the first error.

A compiler creates an independent executable file, whereas an interpreted program requires the interpreter every time it is run.
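As a toy illustration of the execution model described above, the hedged sketch below interprets a tiny, made-up command language line by line: each statement is executed as soon as it is read, and processing stops at the first statement the interpreter cannot understand, whereas a compiler would translate the whole program first and report a list of errors.

/* A toy line-by-line interpreter for a made-up command language: it executes
 * each statement immediately and stops at the first bad line, matching the
 * interpreter behaviour described above. */
#include <stdio.h>

int main(void) {
    const char *program[] = { "ADD 2 3", "ADD 10 32", "SUB 5 1", "ADD 1 1" };
    int n = sizeof(program) / sizeof(program[0]);
    for (int i = 0; i < n; i++) {
        int a, b;
        if (sscanf(program[i], "ADD %d %d", &a, &b) == 2) {
            printf("line %d: %d\n", i + 1, a + b);   /* execute immediately */
        } else {
            printf("line %d: error, unknown statement \"%s\"\n",
                   i + 1, program[i]);
            return 1;                                /* stop at first error */
        }
    }
    return 0;
}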
