Programming Languages

-Programming languages are specially developed so that you can pass your data and instructions to the
computer to do a specific job

-There are two major types of programming languages: Low Level Languages and High Level Languages

-Low Level Languages are further divided into Machine Language and Assembly Language

-Among High Level Languages, FORTRAN and C are used for scientific applications, while COBOL is used
for business applications.

Machine Language

-Machine language is the only language that is directly understood by the computer. It does not need any
translator program

-The only advantage is that machine language programs run very fast

Assembly Language

-It was the first step in improving the structure of programming. You should know that a computer can
handle both numbers and letters.

-A set of symbols and letters forms the Assembly Language, and a translator program is required to
translate the Assembly Language into machine language

-This translator program is called an Assembler

-Assembly Language is easier to understand and saves a lot of time and effort.

-It is easier to correct errors and modify program instructions.

-Assembly Language has the same efficiency of execution as machine language

For example, a minimal 32-bit x86 program in NASM syntax that prints a message via C's printf:

global _main
extern _printf

section .text
_main:
    push message        ; pass the address of the string to printf
    call _printf
    add esp, 4          ; remove the 4-byte argument from the stack (cdecl)
    ret

section .data
message:
    db 'Hello, World!', 0    ; NUL-terminated string

High Level Languages

-Assembly and machine level languages require deep knowledge of computer hardware, whereas in a
high level language you have to know only the instructions in English words and the logic of the problem.

-High level languages are simple languages that use English words and mathematical symbols like +, -, %, /
etc. for program construction

-Any high level language has to be converted to machine language for the computer to understand it

-For example COBOL (Common Business Oriented Language)
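
As an illustrative sketch (in Python, a modern high level language; the prices and names here are
invented for the example), a calculation is written with English words and the familiar mathematical
symbols, while the translation to machine instructions is left entirely to the computer:

```python
# A high level program reads almost like English plus arithmetic.
price = 120
quantity = 3
discount_percent = 10

total = price * quantity                          # multiplication
total = total - (total * discount_percent / 100)  # subtraction and division

print("Amount payable:", total)
```

Nothing in the program refers to registers, memory addresses, or the stack; that is exactly what
separates it from the assembly example above.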

Compiler

-It is a program translator that translates the instructions of a high level language into machine language.

-It is called a compiler because it compiles machine language instructions for every program instruction
of the high level language

-Thus a compiler is a program translator like an assembler, but more sophisticated. It scans the entire
program first and then translates it into machine code.

-The program written by the programmer in a high level language is called the source program. After this
program is converted to machine language by the compiler, it is called the object program. A compiler
can translate only those source programs that have been written in the language it is designed for.

Interpreter

-An interpreter is another type of program translator, used for translating a high level language into
machine language.

-It takes one statement of the high level language, translates it into machine language and immediately
executes it.

-Translation and execution are carried out for each statement.

-It differs from a compiler, which translates the entire source program into machine code

-The advantages of an interpreter compared to a compiler are its fast response to changes in the source
program and that it does not require a large amount of memory in the computer.

-The disadvantage of an interpreter is that it is time consuming, because each time a statement in the
program is executed it must first be translated.

-Thus a compiled machine language program runs much faster than an interpreted program.
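
The interpreter's translate-one-statement-then-execute cycle can be sketched (again in Python, purely
as an illustration; the tiny four-line "program" is invented for the example) by translating and running
each statement separately instead of the whole program at once:

```python
program_lines = [
    "total = 0",
    "total = total + 10",
    "total = total - 3",
    "print(total)",
]

namespace = {}
for line in program_lines:
    translated = compile(line, "<line>", "exec")  # translate ONE statement...
    exec(translated, namespace)                   # ...and execute it immediately
```

Contrast this with the compiler sketch above: here translation work is repeated every time a statement
runs, which is exactly why interpreted programs are slower.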

Debugging

Debugging is the process of finding and resolving defects or problems within a computer program that
prevent correct operation of computer software or a system.

Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of
data collection, analysis, and scheduling updates. The debugging skill of the programmer can be a major
factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the
complexity of the system, and also depends, to some extent, on the programming language(s) used and
the available tools, such as debuggers. Debuggers are software tools which enable the programmer to
monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory.
The term debugger can also refer to the person who is doing the debugging.
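
As a small sketch of the process (the function and its defect are invented for illustration), monitoring
intermediate values, here with plain print statements, is often enough to locate a defect; a real debugger
such as pdb or gdb lets you inspect the same values with breakpoints instead:

```python
def average(values):
    total = 0
    for v in values:
        total += v
    # The defect was here: dividing by a hard-coded 3 instead of len(values).
    # return total / 3
    return total / len(values)   # corrected after debugging

data = [2, 4, 6, 8]
print("sum =", sum(data))          # inspecting intermediate values...
print("count =", len(data))        # ...exposed the wrong divisor
print("average =", average(data))
```

Checking the sum and the count separately showed that each was correct, which narrowed the defect
down to the division itself.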

1800

Joseph Marie Jacquard teaches a loom to read punch cards, creating the first heavily multi-threaded
processing unit. His invention was fiercely opposed by the silk-weavers who foresaw the birth of Skynet.

1936

Alan Turing invents everything, the British courts do not approve and have him chemically castrated.

The Queen later pardoned him, but unfortunately he had already been dead for centuries at that time.

1936

Alonzo Church also invents everything with Turing, but from across the pond and was not castrated by
the Queen.

1957

John Backus creates FORTRAN which is the first language that real programmers use.

1959

Grace Hopper invents the first enterprise ready business oriented programming language and calls it the
“common business-oriented language” or COBOL for short.

1964

John Kemeny and Thomas Kurtz decide programming is too hard and they need to go back to basics,
they call their programming language BASIC.

1970

Niklaus Wirth makes Pascal become a thing along with a number of other languages, he likes making
languages.

He also invents Wirth's law, which makes Moore's law obsolete because software developers will write
such bloated software that even mainframes cannot keep up. This was later proven true by the invention
of Electron.js and the abstractions built on top of it.

1972

Dennis Ritchie got bored during work hours at Bell Labs so he decided to make C which had curly braces
so it ended up being a huge success. Afterwards he added segmentation faults and other developer
friendly features to aid productivity.

Still having a couple of hours remaining, he and his buddies at Bell Labs decided to make an example
program demonstrating C; they made an operating system called Unix.

1983

Bjarne Stroustrup travels back to the future and notices that C is not taking enough time to compile, he
adds every feature he can think of to the language and names it C++.

Programmers everywhere adopt it so they have genuine excuses to watch cat videos and read xkcd
while working.

1991

Guido van Rossum writes a cooking book about eggs and spam.

1993

Roberto Ierusalimschy and friends decide they need a scripting language local to Brazil, during
localization an error was made that made indices start counting from 1 instead of 0, they named it Lua.

1994

Rasmus Lerdorf makes a template engine for his personal homepage CGI scripts, he releases his dotfiles
on the web.

The world decides to use these dotfiles for everything and in a frenzy Rasmus throws some extra
database bindings in there for the heck of it and calls it PHP.

1995

Yukihiro Matsumoto is not very happy, he notices other programmers are not happy. He creates Ruby to
make programmers happy. After creating Ruby “Matz” is happy, the Ruby community is happy,
everyone is happy.

1995

Brendan Eich takes the weekend off to design a language that will be used to power every single web
browser in the world and eventually also Skynet. He originally went to Netscape and said it was called
LiveScript but Java became popular during the code review so they decided they better use curly braces
and rename it to JavaScript.

Java turned out to be a trademark mess that would get them in trouble so JavaScript gets renamed to
ECMAScript during standardisation and everyone still calls it JavaScript.

2001

Anders Hejlsberg re-invents Java and calls it C# because programming in C feels cooler than Java.
Everyone loves this new version of Java for totally not being like Java.

2005

David Hanselmeyer Hansen creates a web framework called Ruby on Rails, people no longer remember
that the two are separate things.

2006

John Resig writes a helper library for JavaScript, everyone thinks it's a language and makes a career of
copy-pasting jQuery code from the internets.

2010

Graydon Hoare also wants to make a language like C, he calls it Rust. Everyone demands that every
single piece of software be rewritten in Rust immediately. Graydon wants shinier things and starts
working on Swift for Apple.

2012

Anders Hejlsberg wants to write C# in web browsers, so he designs TypeScript, which is JavaScript but
with more Java in it.

2013

Jeremy Ashkenas wants to be happy like Ruby developers, so he creates CoffeeScript, which compiles to
JavaScript but looks more like Ruby. Jeremy never became truly happy like Matz and the Ruby
developers.

2014

Chris Lattner makes Swift with the primary design goal of not being Objective-C, in the end it looks like
Java.