
The Evolution of Social Networking

This is a set of illustrations of the evolution of social networking, commissioned by PeopleBrowsr in celebration of the Advertising Research Foundation's 75th anniversary. It is not comprehensive, but somewhat informative. From 1930 to 2011, highlights include:
• 1930: The Notificator.
• January 1978: Computer Bulletin Board System.
• 1989: World Wide Web invented.
• 1989: Online gaming service Quantum Link changed its name to America Online.
• 2003: The launch of Friendster, MySpace, and LinkedIn.
• 2004: "The Facebook" was launched.
• 2006: Twitter launched.
• 2008: Burger King viral video.
• 2010: First tweet from space.


A Brief History of Websites

Websites have changed quite a bit since their inception and now come in many different flavors and varieties. In this article, I will describe a brief history of websites, show you how we arrived where we are today, and provide some suggestions on which website technology may be right for you. Back in the 1990s, when websites were becoming increasingly popular, most websites were static HTML. Static HTML meant that each page was planned out and hand-coded to match the plan. Many of these sites were created by specialized website development firms who understood the complexities of this new technology. Because an outside firm did the work, once a site was created it was not updated often unless the site owner knew HTML. Many of the sites created during this time were simply extensions of an organization's existing marketing materials. The focus at the time was to get a presence on the web quickly. Another reason organizations jumped into having a website was the ability to have domain-specific email addresses like joe@example.com.

The blogging trend that began around 2000 ushered in a new era for websites. A weblog, or blog, is a collection of blog entries, each of which is really just a web page. Blogging tools like Movable Type, Blogger, and WordPress offered a mechanism for organization owners to make their websites more dynamic. In most cases, the tools were freely available. These tools used PHP code and a database underneath the website to serve content dynamically. Owners weren't exposed to the complexity of the underlying system, which in turn freed them up to focus on their website content. A key feature of these tools was the ability to add a page on the fly, with the tools handling the previously complex task of creating an HTML page. Essentially, in this setup, a blog entry was a piece of content, or web page. Following on the coattails of the blogging revolution was the What You See Is What You Get (WYSIWYG, pronounced WIZ-ee-wig) editor. This allowed the site owner to create a new page with the blogging tools and add feature-rich content like text and images without knowing any HTML whatsoever. By making it so easy to add rich content, these blogging platforms drew in more organizations.

Around this time, websites created with Adobe Flash (formerly Macromedia Flash) technology began to gain steam. These websites allowed a richer user experience than standard HTML websites by including native support for animation, video, and sound. These sites were usually very appealing to the end user. Most Flash websites were similar to the initial static websites in that their content was static and usually built by a specialized web development firm. It didn't take long before some websites were coded 100% in Flash. Although this provided a unique experience for the end user, the robots that scan web pages often had a tough time deciphering the page content. This meant that much of the content on these websites was never indexed by the likes of Google, which in turn made it difficult for search users to find the sites. This drawback of Flash led to what we see today: HTML websites with small amounts of Flash inside them.


A Brief History of Programming, Part 1

In part 1 of his series on the history of programming, David Chisnall takes a look at some of the developments of the last few decades that have created the current crop of languages and discusses where they came from.

In the first half of the last century, Alan Turing proposed a theoretical mechanical programming engine, known as the Turing
Machine. This machine had an infinitely long tape, an internal register storing its state, and a table of actions.

At each step, it would read the symbol from the current location on the tape and consult the table to find what it should do for
that symbol and state pair. It would then perform some or all of the following actions:

• Write a new symbol.
• Change the state in the internal register.
• Move the tape left or right.

With the right entries in its table, this simple machine was capable of computing any algorithm. One of the fundamental concepts of information theory concerns relationships between sets: it is possible to uniquely map any item in one set to an item in another set of the same cardinality.

Turing realized that this meant you could represent a Turing Machine so that it could be read by another Turing Machine. You
could then construct a Universal Turing Machine, which would take another Turing Machine (suitably encoded) as input and
then run as if it were that machine.
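To make this concrete, here is a minimal sketch of such a simulator in Python. It is purely illustrative and not taken from Chisnall's article: the transition-table format, the blank symbol "_", and the flip_bits example are all assumptions made for the demonstration. The point is that the machine's "program" is just data, which is exactly what lets one machine run a description of another.

```python
# Minimal Turing machine simulator (illustrative sketch, not from the article).
# The "machine" is a transition table: for each (state, symbol) pair it says
# what to write, which way to move the head, and which state to enter next.
from collections import defaultdict

def run(table, tape, state="start", head=0, max_steps=10_000):
    """Run the machine described by `table` on `tape` until it halts."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example table (an assumption for this sketch): flip every bit, halt on blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip_bits, "1011"))  # prints "0100_"
```

Because the simulator takes the table as ordinary input, it is itself a small instance of the Universal Turing Machine idea: one general program emulating whatever specific machine it is handed.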

This is the concept behind all programming: that a suitably general computing machine can emulate any specific one. A computer program is nothing more than a means of turning a general-purpose computing engine into a special-purpose one.

The First Computers

The first computers were highly specialized machines. Due to the source of their funding, they were focused heavily on running
a set of simple algorithms that were used for code breaking. Whenever the algorithm (or, in many cases, the input) changed,
the computers needed to be rewired.

It was a little while later that stored-program computers emerged, such as the Manchester Baby. Like the Universal Turing Machine, these computers stored the algorithms they were to compute in the same way they stored data.

These early machines were programmed in pure machine code. The operations that the computer would perform were represented by short binary sequences, and programmers would enter them either by flipping switches, making holes in punch cards or tapes, or pressing buttons.

Instead of raw binary sequences, most systems enabled programmers to enter each short sequence as a single octal or hexadecimal digit, but this still wasn't ideal.

This binary system wasn't very human-friendly, so the idea of a symbolic assembler arose. Rather than entering the binary codes directly, programmers would enter mnemonics that represented them. While an add operation might be 01101011, the programmer would enter ADD, which was much easier to remember.

These assembly language sequences had a simple one-to-one mapping with machine code instructions, so a simple program
comprising a lookup table was all that was required to turn them into real code.
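As a toy illustration of that lookup-table translation (with invented mnemonics and opcode values; the 01101011 encoding for ADD is the hypothetical one used above), the whole "assembler" can be little more than a dictionary lookup:

```python
# Toy one-to-one mnemonic-to-opcode translation (invented encodings for
# illustration; real instruction sets define their own values).
OPCODES = {
    "LOAD":  0b01100001,
    "ADD":   0b01101011,  # the hypothetical ADD encoding mentioned above
    "STORE": 0b01100010,
    "HALT":  0b00000000,
}

def assemble(mnemonics):
    """Turn a list of mnemonics into machine-code bytes by table lookup."""
    return bytes(OPCODES[m] for m in mnemonics)

print(assemble(["LOAD", "ADD", "STORE", "HALT"]).hex())  # -> "616b6200"
```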

One of the biggest innovations introduced by symbolic assemblers was that of symbolic branch destinations. Most programs
involve large numbers of conditional statements: do one thing if a value is in a certain range; otherwise, do something else.

At the machine-code level, they are translated into jumps, either relative or absolute, which move the place from which the
next instruction is read, either to a specific location or to a certain offset from the current one.

A machine code programmer had to calculate these offsets and enter them in the program as fixed numbers. If the programmer
wanted to add another instruction somewhere, all jumps that ended after this new instruction (or backward relative jumps
from after it to before) needed to be updated.

With a symbolic assembler, jumps could be given symbolic names, and the assembler would convert these names into real
addresses when it ran. If you added a new instruction somewhere, you still needed to run the assembler again, but it would
take care of the jump updates for you. This made programs a lot more flexible. It also made them slightly more efficient.
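A sketch of how that name resolution can work, assuming a hypothetical two-pass scheme (real assemblers are more involved, but the idea is the same): the first pass records the instruction index of each label, and the second pass rewrites symbolic jump targets as concrete addresses.

```python
# Hypothetical two-pass label resolution (illustrative, not a real assembler).
def resolve_labels(lines):
    labels, instructions = {}, []
    # Pass 1: a line ending in ":" names the address of the next instruction;
    # labels themselves emit no code.
    for line in lines:
        if line.endswith(":"):
            labels[line[:-1]] = len(instructions)
        else:
            instructions.append(line)
    # Pass 2: rewrite symbolic jump targets as absolute instruction addresses.
    resolved = []
    for instr in instructions:
        op, _, arg = instr.partition(" ")
        resolved.append(f"JMP {labels[arg]}" if op == "JMP" and arg in labels
                        else instr)
    return resolved

source = ["start:", "LOAD", "ADD", "JMP start", "HALT"]
print(resolve_labels(source))  # -> ['LOAD', 'ADD', 'JMP 0', 'HALT']
```

Inserting a new instruction into the source changes the addresses, but rerunning the resolver recomputes them automatically, which is precisely the convenience the symbolic assembler provided.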

Programmers previously worked around this limitation by inserting short sequences of no-operation instructions in places
where they thought they might need to add code later (often with an unconditional jump to skip over them). With an assembly-
language program, this was no longer required.
