Freax: The Brief History of the Computer Demoscene
Ebook, 555 pages, 10 hours

About this ebook

FREAX – the biggest book ever written about the history of the computer demoscene. The book tells the complete history of the Commodore 64 and the Amiga, both about the machines and about the underground subcultures around them, from the cracker- and warez-scene to the demoscene, from hacking and phreaking to the ASCII art scene. Interviews with scene celebrities, former key persons of the computer industry, citations from contemporary magazines and fanzines make the narrative history of the big adventure complete. The book contains 350 pages and is illustrated with 480 color photos and screenshots. This is the comprehensive guide to the golden era of home computers.
Language: English
Publisher: CSW-Verlag
Release date: Apr 17, 2016
ISBN: 9783941287976


    Book preview

    Freax by Tamás Polgár

    Madwizards

    part 1

    the basics …

    Ultimately, a demo(nstration) in the demoscene sense is a piece of free software that shows realtime rendered graphics while playing music. Often the music is tightly connected or synced to the visuals. Modern PC demos run linearly from start to finish and are non-interactive. There is no rule whatsoever about what a demo must or can show. The creator is free to decide whether he wants to show stylish and/or impressive effects, an epic story, funny/bizarre/satirical audiovisual artwork or a distorted mindfuck. A scene demo is not a try-out version of a commercial application or game.

    – PC Demoscene FAQ by Thomas "Tomaes" Gruetzmacher

    Yeah … The Horizon parties … Venlo, diskswapping, phone calls from people in faraway countries. Then you had people to look up to … Groups like 1001 Crew, Horizon, Bones, BlackMail, Ash&Dave … People like Jeroen Tel and JCH … Calling boards with a 1,200 baud modem … When cracking and demo-making went hand in hand … Real fights at parties! (Tear gas!!), anti-demos … The stripteases at the Silicon Ltd. parties … The PC scene is TOO professional. Everybody makes hi-tech 3D engines, hoping to sell them to a software company and make big money. Back then demo-making was a big adventure. Even coding and typing the text for a scroller was big fun! You were doing things that 'normal' people could not understand. Remember the feeling when you finished a demo, and spread it by mail? Watching endless scrolltexts, hoping you would be greeted?

    – Scout of Success

    1. What is the scene?

    Many have tried to answer this question, with more or less success. Of course I also tried, and completed an entire chapter about it, but scrapped my results in favor of an old article by Jean of Chromance, published in 1992 in the yearbook of Commodore Világ (Commodore World), a legendary Hungarian computer magazine. So, what did Jean write about the meanings of the words scene and demo?

    We have already met demo sections or articles about demos, but no such article has ever appeared on the pages of CoV before. It is certainly an innovation, so we're eager to hear whether you, the readers, desire such articles. This time we will give you a glimpse of what happened in the recent year on the really fertile demoscene … For those not really expert in this field, let us have a little historical overview. Maybe we can start by explaining what a demo is. Upon hearing this word, many can't really grasp it. Is it a demonstration version of some game, or some utility? Certainly such demos also exist, but this is a different matter. The word in fact derives from demonstration. This might give a clue that this is something spectacular to see. Graphics, music, scrollers, programming tricks and knacks, but what is the point? Many computer users are simply untouched by this branch of programming, mainly those PC users who never surpassed database management during their careers. To better understand the term, let's travel back in time.

    The first well-playable games on the Commodore 64 appeared around 1983–1984. Everybody was happy to see that, but nobody was able to figure out the short intros before the games, or rather few cared. Of course those were the illegally cracked versions. So, shorter or longer routines were running before the programs, which were not placed there by the developer company. The first cracker groups appeared – groups of people spreading cracked software, like Section 8 or Crackman – with the single goal of gaining quick fame by their activity. Years passed, and around 1985–1986 it just wasn't enough to simply change the screen color or show a few miserable graphical sprites. So the first quality intros appeared. The word intro refers to introduction. The contemporary celebrities (Dynamic Duo, 1001 Crew, Ikari, FairLight, Hotline, Triad and Eagle Soft Inc. in the United States) were not content with the obsolete, boring screen tricks, but looked for new ideas. This was the time to see the first worthwhile scrollers, rasterbars and logos, and, as a very important element, music appeared! So a group's fame was built not only by the number of programs they cracked, but also by the quality of their often really beautiful intros. But for the beginners, we have to clarify a few expressions. The word scroll refers to a rolled piece of paper or parchment, but in this case it means rolling text running across the screen. Later many innovative variants appeared: little scrollers, big scrollers, waving, bouncing, flashing scrollers, and so on. More meanings appear in the case of games, since any object moving onto the screen, and exiting later, can be considered a scroller. The word logo is widespread in everyday life: the depiction of a name, like a company logo, or in this case, the logo of a specific group. It's usually a colorful, still picture, but not necessarily. As intros developed, logos started to move sideways, later also up and down. By applying rasterbars or stripes, varying the screen color or other color registers by screen lines, they were able to further enhance the spectacle of the introduction programs.

    It was an important moment when the different groups learned about each other's existence. They were bound together by an invisible force, which was already called the scene. They greeted each other in their intros, and organized meetings, so-called copyparties. As the number of C64 owners grew, more and more programmers wanted to make their voices heard, but as they didn't have access to fresh programs to crack, they had to find an alternative way.

    Soon the intros were no longer bound to illegally copied software, but started to spread independently, and were called demos. The first legal groups appeared (Zetrex, Dexion, Triangle, Beyond Force, Fire Eagle, Contex, Super Swap Sweden, later called Horizon, etc.). The first demos merely resembled intros, but as sophisticated compression routines appeared and spread, more data was stuffed into the memory, and this led to the development of much more serious programs. Demo programming slowly evolved into a form of art, and the groups introduced new ideas. A lot of undocumented features of the C64 were revealed, so it became possible to remove the screen border, shake the screen up and down, use a fourth, undocumented digital sound channel, and many more solutions appeared which were previously considered impossible. Multiplexed sprites extended the basic sprite set of 8, first only by a few, but later democoders pushed the limit to the possible maximum (60–70). The programs slowly grew over the 64 kilobyte limit, and overlaying was introduced, giving birth to a new genre called the megademo. The first overlay routines worked the most obvious way: while an effect was shown on the screen, the next one was already loading. This was quickly dropped, as disk operations consumed a lot of resources, at the expense of visual effects. Demo parts were linked together with loader routines, and this method is still used on the C64, for want of a better one.

    The next great achievement of the scene was the advent of diskmagazines, like Sex'n'Crime, Mamba, or the Hungarian Fölény (Superiority). These magazines, consisting of news, demo reviews, charts and copyparty reports, informed everybody about everything, and soon the scene developed into a serious organization.

    But soon (1986) the Amiga was introduced, and many groups switched to the new system, which particularly suited visual effect programming. It didn't take long for an Amiga scene to emerge; serious cracker groups were already active around 1988 (Bamiga Sector One & Kent Team, Tristar, Quartex, World of Wonders, Ackerlight, Spreadpoint, Paranoimia, later Skid Row), but the first demogroups were also around (Wild Copper, Triangle, Fair-Light & Northstar, Piranhas, Phenomena, Red Sector, Brainstorm, Scoopex). Two streams were formed, still holding their ground nowadays. The first is the so-called DOS programs, while the other is the school of trackloaded programs. The latter is the subject of our particular inspection. The Amiga demos first followed the old C64 suit: several parts chained to each other with a loader routine. Often these parts showed no coherence, and the demo didn't even have a specific title; it was simply called Megademo. The great breakthrough came with a gentleman called Slayer, who was a reputed coder of the Finnish group Scoopex at that time. The revolutionary idea was to drop the loader part, and let the parts follow each other dynamically. We don't even realise when disk operations are running, as the pace of the program enchants the viewer. Different effects are only shown for seconds, and the entire program forms a complete whole, along with the unique design and one single piece of music. It is to be noted that some groups (Origo, FairLight) tried to introduce the same style on the C64, with more or less success, but long loading sessions on this computer make the demo a bit languid. Back on the Amiga, several styles appeared over time. Besides demos, we can find slideshows, musicdisks and diskmags. A slideshow is a collection of graphics, while a musicdisk is a collection of music. The quality of Amiga diskmags is incomparable to their C64 forerunners. Most of them come out every one or two months, often with several hundred pages of content (Zine, McDisk, ICE, Raw, Hack-Mag, Top Secret).

    Demo routines were mostly based on two basic abilities of the computer. First, thanks to the blitter, a graphical coprocessor, the programming of polygon-based objects became fast and easy; second, the copper, another coprocessor, supported creating raster effects and plasmas.

    The newly appeared PC scene is worth a side note. This platform has always suffered from the drawback of a huge diversity of configurations, which made it almost impossible to create demos that run on every machine. However, today a 286-based PC with a VGA monitor and a Sound Blaster can be considered something like a minimal standard, and some demos already take advantage of it. PC groups are slowly mushrooming, among them a few oldies (Tristar & Red Sector, FairLight, Skid Row), but also some new talents (Spacepigs, Ultraforce, and, last but not least, our country's leading group, Twin Sectors).

    These days demos have already reached a really high, professional level, requiring months of studious work to finish a quality production. This progress is greatly spurred by the fat prizes, often worth several thousand dollars, at the demo contests of the greatest copyparties. For example, the total prize pool of the Hurricane-Brutal party, held in Denmark this summer, was 20,000 dollars!

    That was your introduction to the demoscene, from more than ten years ago, but Jean's precise description fits even nowadays. Demogroups operate worldwide today, in 2004, and still release breathtakingly spectacular visual programs we call demos. Sure, the PC has grown up to the task during the decade that has passed since the above article, and most demos and intros are already written for it, but the two old platforms mentioned in the article, the Amiga and the Commodore 64, still stoutly hold on, and there are several more, less known computers with hundreds of enthusiasts. The ultimate goal of this book is to display the history of the demo creators and their productions, from the first pioneers to our present day.

    Artistic floppy disk cover from 1988, by Hobbit of Fairlight.

    First let’s begin with the origins of the technologies applied during the creation of demos, intros and other spectacular computer programs today.

    Computers in the future may weigh no more than 1.5 tons.

    – Popular Mechanics Magazine, 1949

    2. The beginnings of computer graphics

    Human creativity is one of the keystones of civilization. This quality created the computer itself, an invention only comparable to the steam engine. I won't go into details on the fact that the first electronic computer was the American ENIAC in 1946, firstly because almost everyone has learned about this in school, secondly because it is not true. Konrad Zuse, the German scientist who died in December 1995, had already built computers similar to the ENIAC by 1944: the Z-2, Z-3 and Z-4. Hence the computer is not an American invention, despite all the hype surrounding the fiftieth anniversary of ENIAC.

    Konrad Zuse’s Z-3 computer in the Deutsches Museum of Munich

    Computer imaging was not born with the computer itself. More than ten years passed before the first monitor appeared, and it was only able to display characters. But if we really want to see the beginnings of computer graphics, we have to travel back to the end of the forties with our imaginary time machine, when Jay Forrester, a young engineer at the Massachusetts Institute of Technology, was assigned a task by the American government. He was to form a research team and design a device to help train fighter pilots and develop new airframes. They decided to use the newest achievement of technology, a digital computer. Their first computer was baptised Whirlwind. Then their attention turned towards a computer-controlled radar system. They found that a Whirlwind connected to a radar station was able to precisely and quickly calculate and display the distance, height and speed of incoming aircraft. The system was presented to representatives of the US armed forces on the 20th of April, 1951, was met with success, and was immediately accepted. From its introduction in 1958, it was kept in service until 1983 as an important part of the American strategic air defense system. This was the first practical use of computer graphics.

    Civilian scientists were not indolent either. During the fifties the Americans, pioneers of electronics, revolutionized some marginal fields of mathematics like data processing. The first graphical computer, meaning a machine equipped with a monitor able to do more than display mere characters, was the LDS-1, the Line Drawing System, in 1954. As its name implies, it was a vector display system. The electron beam was not drawing pixels on the screen but straight lines.

    Digital Equipment Corporation (DEC) was founded in 1957, in Maynard, Massachusetts. They later became known as a manufacturer of high performance Alpha RISC chips; today they're a subsidiary of Compaq. Back then, DEC had only three employees in an old mill. In November 1960 they introduced the first relatively small – under eight tons – graphical computer, which was also the first to boast a color raster screen. The computer was designated the PDP-1 (Programmed Data Processor). It was an 18-bit machine. More than forty years later, DEC had over 124,000 employees and a 12.9 billion dollar yearly income.

    The first person to use a computer for artistic purposes was John Whitney, an abstract filmmaker. He started his experiments in 1958, with a computer designated M-5, originally used as a machine gun targeting device for B-29 Superfortress bombers, and created a short animation film. This clip was later featured in the movie Vertigo by Hitchcock. Whitney later created his own film titled Catalog, which he finished in 1961, and another one, Lapis, in 1966, together with his brother. These movies were the first computer animations. The second pioneer with artistic intentions, but with a particular knowledge of hardware tinkering, was Ivan Sutherland, a student at the Massachusetts Institute of Technology. He created the first drawing program, SketchPad, in 1961, and, lacking the appropriate input device, was also forced to invent the light pen. The light pen was the forerunner of the mouse: a pencil-shaped device, which the user held in his hand like a conventional pencil. The tip of the pencil was a photoelectric cell, which, when touched to the screen, was hit by the cathode ray. The other end of the pencil was connected to the computer with a cable, and it determined where the pencil tip was touching the screen. Hence, just like with a normal pencil, the user could draw on the screen. Some of the basic ideas of SketchPad are still used in modern graphics programs. For example, it already featured a rectangle drawing function. The user did not need to draw a rectangle painstakingly pixel by pixel; it was enough to pick two of its opposite corners, and the program did the rest. Further developments included functions like edge blurring or sharpening, and other features which later became essential in modern drawing software.

    The DEC PDP-1. The typewriter-like device is a card puncher. Photo courtesy of The Computer Museum History Center, Moffett Federal Airfield, Mountain View, California

    At the rise of the sixties there were already some computers with color graphics monitors and high performance processors; however, these were typically only found at the largest universities in the world. Here the history of the underground started, when students learning to code discovered these computers. The playful man, homo ludens, immediately tried to create something for his own entertainment. This is how the first game programs were born. An early experiment with games first saw the light in 1958, in the nuclear research center of Brookhaven National Laboratory in Upton. William A. Higinbotham, a nuclear physicist and former participant of the first nuclear test at Alamogordo, wished the visitors of the institute to see more than boring electric boxes, switches and flashing screens. Using an analogue computer, originally used to calculate projectile trajectories, he created a simple computer game, titled Tennis Two. In this game, two players bounced a bright dot to and fro on a phosphorescent, five inch oscilloscope screen, just like in tennis. The game was very popular; visitor groups often stopped for hours at the tiny screen. This was the first try, but the world's first real computer game was undoubtedly Star Trek, or, according to other sources, Spacewar, which first appeared on a PDP-1 computer at the Massachusetts Institute of Technology. The game was programmed by graduate students and teachers, based on the idea of Steve "Slug" Russell. Today it looks quite primitive, but it is still entertaining, and it can be recreated by anyone with minimal programming knowledge. In the original version, two spaceships, a Klingon warbird and the famous Enterprise from the Star Trek series, fought on the circular screen. There is a planet in the middle of the screen whose gravity attracts both ships. If the ships – each controlled by an individual player – don't want to crash into the planet, they can keep their distance by boosting their engines. They can fire missiles at each other, which are also deflected by the planet's gravity. This was the basic idea of Spacewar. Of course it can be made more complicated, as many versions were over the years. Game programs soon ceased to be the restricted entertainment of universities, and since people were already copying software at that time, and there were no FBI raids to stop them, Spacewar soon appeared on all DEC PDP-1 machines worldwide. Even DEC used it as a final test program to check their newly assembled machines. Most of the first game cabinets, which appeared in the seventies, offered this game to try for a quarter.

    Ivan Sutherland demonstrates SketchPad.

    Old computers displayed in an American high school. On the left is Dr. Higinbotham's computer, which ran Tennis Two on the little circular screen.

    The first home game console that connected to a TV set was designed in 1966 by Ralph Baer, an engineer at Sanders Associates, who sold the patent to Magnavox; the machine reached the stores as the Odyssey in 1972. One can imagine how primitive it was, as there were no microprocessors yet.

    Soon scientists, too, started to play with the newest wonders of electronics. Their games were not exactly toys like the students' funny little tidbits; they studied new branches of mathematics. One of these was fractals: incredibly beautiful computer-generated images based on complex numerical operations. In the sixties they first experimented with iterative manipulation of complex numbers, and the first tangible result of fractal science was an algorithm that plotted Great Britain's shoreline on the screen in 1968. The vast possibilities of fractals were already obvious, but the picture was finally cleared by Benoit B. Mandelbrot of the IBM Thomas J. Watson Research Center. His program generated the first image of the shape that was later called the Mandelbrot fractal. This was the first of those beautiful fractals which we still see in demos and intros.

    Mandelbrot fractal and magnified detail.

    What is a fractal? It derives from a series of operations with complex numbers. Complex numbers have a real and an imaginary part, for example: 4 + 3i. In this case, 4 is the real and 3i is the imaginary part. Actually this is all the math that is needed to understand fractals, along with some coordinate geometry. Our screen is a flat surface, and each pixel has an X and a Y coordinate. The origin is the top left corner of the screen, different from the conventional Cartesian coordinate system. The fractal plotter algorithm is a simple iteration – sometimes several iterations, nested – which paints the pixels given by the real (X coordinate) and imaginary (Y coordinate) parts of the complex number being iterated.

    The real beauty of fractals – especially the Mandelbrot-fractal – is actually the fact that we can magnify any part of it, and we’ll get more and more, always varying, beautiful shapes. This can be done again and again, ad infinitum, or to the performance limit of the computer’s processor.
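    The iteration described above can be sketched in a few lines. This is a minimal escape-time plotter under the usual assumption that the iterated rule is z → z² + c; the ASCII rendering, the resolution and the iteration cap are illustrative choices of mine, not anything from the original article.

```python
def mandelbrot_escape(cr, ci, max_iter=100):
    # Iterate z -> z*z + c and count the steps until |z| exceeds 2.
    # Points that never escape (within max_iter steps) belong to the set.
    zr = zi = 0.0
    for n in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return n
    return max_iter

def render(width=60, height=24, max_iter=100):
    # Map each character cell to a point of the complex plane:
    # real part (X) in [-2, 1], imaginary part (Y) in [-1.2, 1.2].
    lines = []
    for y in range(height):
        ci = -1.2 + 2.4 * y / (height - 1)
        lines.append(''.join(
            '#' if mandelbrot_escape(-2.0 + 3.0 * x / (width - 1), ci,
                                     max_iter) == max_iter else '.'
            for x in range(width)))
    return '\n'.join(lines)

print(render())
```

    Magnifying, as the text notes, is just a matter of shrinking the mapped interval around any point of interest.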

    There is a lot more that can be done with fractals, as discovered by Benoit B. Mandelbrot and published in his book titled The Fractal Geometry of Nature in 1982. His discovery proved that it is possible to generate incredibly lifelike mountain landscapes with fractal algorithms. They can even be photorealistic, depending on the complexity of the drawing algorithms. Later, in 1980, Loren Carpenter, working for Boeing's computer graphics division, revised and perfected Mandelbrot's landscape theory, resulting in even more realistic pictures. There are thousands more fractals besides Mandelbrot's discoveries, like the Dragon, Julia, Mandeljulia and Lyapunov, just to name a few.

    The first spectacular application of fractal technology, probably already seen by everyone, was in Star Wars (1977). At the end of the movie, during the battle of Yavin, when the Rebels fly their X-wing fighters in the trench of the Death Star, the texture on the walls of the trench was fractal, generated and animated with computers. The starfighters were models, and the laser beams were painstakingly painted on the film by hand. Fractals also made their appearance in Alien, where the planet's surface was a fractal landscape during the starship's landing scene. The mountains were randomly placed by a program created by Alan Sutcliffe, working for Systems Simulation Ltd. in London.

    This is a valley, as a computer and the Vista Pro landscape generator program imagine it. Created by Gaf of Controlled Dreams in 1996.

    Other scientists created another game: Life. It was simpler than fractals, but just as interesting. It was invented by John Horton Conway, a mathematician at the University of Cambridge, in 1968, and was enhanced in the seventies by Carter Bays, a programmer and mathematician at the University of South Carolina. Life is a simple math game. There is a grid, each square being a place for a cell. Some squares are filled; those are living cells. Others are empty; there are no cells. A clock is ticking. At each tick, some cells die, others survive. If a cell has 2 or fewer neighbors, it dies, because it can't multiply. If it has 4 or more, it dies due to overcrowding. A new cell is born in an empty square if it has at least 2, but at most 3 neighbors. These parameters can be freely modified. To save them, Bays introduced a code that has been called the Bays code ever since. In this example, it is 2423. As is easy to figure out, the first two digits govern the fate of living cells at each tick, and the last two control the birth of new cells. This basic version of Life can be enhanced in many ways. It is a simple programming challenge which anyone with basic programming knowledge can write. Even the basic version is very interesting to watch running, not to mention the three-dimensional version created by Carter Bays.
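    The rules above fit into a short program. This is a sketch of my own, not code from the book: `survive` and `birth` are inclusive neighbor-count ranges in the spirit of the Bays code just described, with Conway's classic rules (survive on 2–3 neighbors, birth on exactly 3) as the default, and the grid is stored as a sparse set of live cells.

```python
from collections import Counter

def step(alive, survive=(2, 3), birth=(3, 3)):
    # One tick of Life. `alive` is a set of (x, y) cells; a cell survives
    # or is born when its live-neighbor count falls inside the given range.
    counts = Counter((x + dx, y + dy)
                     for x, y in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if (cell in alive and survive[0] <= n <= survive[1])
            or (cell not in alive and birth[0] <= n <= birth[1])}

# A "blinker", three cells in a row, oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}
assert step(step(blinker)) == blinker
```

    Passing different `survive` and `birth` ranges is exactly the kind of parameter tinkering the Bays code was invented to record.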

    Computers and spatial imagery opened new perspectives for engineering: three-dimensional, virtual object design. The key was vector geometry, a mathematical method to represent solid objects in a coordinate system. Such vector objects are easy to create with a computer, which can also rotate or move the object. Computers are perfect tools to design machine parts and bearings, and to model their working states.

    Computer vector graphics is coeval with computer graphics itself, as the first cathode ray displays were developed exactly for this purpose, at the beginning of the fifties. By the early sixties, spatial object display had become a fulfilled dream. To replace the unwieldy LDS-1, IBM and General Motors designed and built the DAC-1 (Design Augmented by Computers) in 1959, along with the first CAD software, a 3D design program of real practical use. The machine was first displayed at the Joint Computer Conference in Detroit, 1964. It had a vector monitor, but it was soon found that high resolution raster screens can achieve image quality as good as the best vector displays. The demand for raster graphics was first created by the IBM 2250 terminal, which became the first commercially available graphical computer in 1965.

    Filled vectors. Spaceships travel in the sky of a vector city in the Finnish 64K intro titled Airframe (Prime, 1994).

    Spatial vector graphics soon went through a breathtaking advancement. The first major achievement was hiding obscured surfaces. It was a serious problem that all 3D graphics programs displayed all the edges of an object, so a more complex object looked like a chaos of lines. The new method, developed by a research team at the University of Utah supervised by Edwin Catmull, made it possible to find the lines covered by surfaces, so they were no longer drawn. This not only enhanced the depth illusion of the image, but opened the way to new advancements. As a byproduct of this technology, they found a way to paint the surfaces of an object, achieving a better illusion of a solid material. This rendering method is called filled vector.

    A combination of Gouraud shading and texture mapping in the French demo titled Aquaphobia by Realtech. The rough edges of the vector objects can be observed, while their surfaces have a smooth illusion.

    It only took one more step to develop flat shading. A point of light is set in the virtual 3D space, and the surfaces of the rotating object are painted a brighter or darker color depending on their angle towards this specific point. This was sufficient for flat, angular objects. Filled vector technology also brought the approach of building 3D objects from triangular polygons, which is still followed nowadays. Instead of drawing all the lines of the object, which required difficult programming, all surfaces were divided into triangles, and these were put next to each other. Any flat surface can be divided into triangles, and the more such polygons an object contains, the more small details and smooth curves it can feature.
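    The brightness computation behind flat shading is a single dot product per triangle. A minimal sketch with illustrative names; vertices and the light direction are plain (x, y, z) tuples:

```python
import math

def flat_shade(v0, v1, v2, light_dir):
    # One brightness value for the whole triangle: the cosine of the
    # angle between its face normal and the direction of the light.
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    # Face normal = cross product of two edge vectors
    nx = uy * wz - uz * wy
    ny = uz * wx - ux * wz
    nz = ux * wy - uy * wx
    n_len = math.sqrt(nx * nx + ny * ny + nz * nz)
    l_len = math.sqrt(sum(c * c for c in light_dir))
    cos_a = (nx * light_dir[0] + ny * light_dir[1]
             + nz * light_dir[2]) / (n_len * l_len)
    return max(0.0, cos_a)   # surfaces facing away from the light stay dark

# A triangle lying in the XY plane, lit from straight above, is fully lit:
assert flat_shade((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)) == 1.0
```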

    SketchPad's developer, Ivan Sutherland, meanwhile invented the thing we today call a VR helmet: a computer monitor that can be worn on the head like a helmet, so that it displays the picture right in front of the two eyes. Sutherland's goal was to create a fake spatial stereography by deceiving the human eye. After receiving his degree, Sutherland became head of the information systems development section at the Advanced Research Projects Agency (ARPA), where ARPANET, the forerunner of the Internet, was also developed. Later he became a professor at Harvard University. He also participated in the work of Ed Catmull's team, where he perfected his VR helmet.

    Like every technology, flat shading also had its limitations. It wasn't suitable for properly shading a complex, curved surface. Such vector objects – like a sphere – looked angular and rough. More polygons smoothed the surface, but it was not possible to create a more lifelike look, even by exploiting processor power to the limits. A new shading method was required. The problem was solved by Henri Gouraud, a French mathematician at the University of Utah, in 1971. He implemented a simple method, which was later called Gouraud shading. Instead of painting each polygon with a solid color, this method interpolates the difference between the colors of individual surfaces, and creates a smooth, shiny illusion. The idea worked, and since it only required slightly more processing power than flat shading, it soon superseded it. Its only drawback was that the contour of the object was still angular, as the edge polygons had no neighbors, and so it was not possible to entirely smooth them.
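    The core of Gouraud shading is linear interpolation of brightness across a polygon, rather than one flat value per face. A minimal sketch of a single scanline (the function name and parameters are illustrative):

```python
def gouraud_scanline(i_left, i_right, width):
    # Interpolate intensity linearly across `width` pixels, from the
    # value at the left edge to the value at the right edge.
    if width == 1:
        return [i_left]
    return [i_left + (i_right - i_left) * x / (width - 1)
            for x in range(width)]

# Flat shading would paint all five pixels the same; Gouraud ramps them:
assert gouraud_scanline(0.0, 1.0, 5) == [0.0, 0.25, 0.5, 0.75, 1.0]
```

    A full implementation interpolates the edge intensities vertically between vertices first, then runs this ramp for every scanline.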

    Ed Catmull, still not satisfied with all he had done for computer science, wrote his Ph.D. thesis about texture mapping in 1974. It was a new technology which practically united bitmap and vector graphics. The new method made it possible to fix a bitmap image – a drawing, a pattern or even a digitized photograph – onto a vector graphic surface, like the side of a cube. It was now possible to display objects in more than one color. The thesis also described z-buffer technology. This was a novelty in the field of spatial modeling. It reformed the method of removing invisible surfaces by not simply storing and calculating the depth of individual pixels by their brightness, but instead calculating their actual spatial depth along the z axis of the coordinate system. This is why it was called the z-buffer. This innovation enabled 3D objects to intersect each other. Previously this was not possible, because the polygons at the point of intersection disappeared, resulting in a hole in both objects. Catmull later launched the world's first computer animation course and became a founder of Pixar Animation Studios. He also took part in the creation of the world's first computer animated movie, Toy Story.
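    The z-buffer idea can be sketched in a few lines: for every pixel, remember the depth of the nearest fragment drawn so far, and only overwrite when something nearer arrives. The names and the fragment format here are my own, not from Catmull's thesis.

```python
def zbuffer_draw(fragments, width, height):
    # `fragments` is a list of (x, y, z, color) tuples produced by
    # rasterizing the objects; smaller z means nearer to the viewer.
    depth = [[float('inf')] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:      # nearer than what is stored: overwrite
            depth[y][x] = z
            color[y][x] = c
    return color

# Two intersecting surfaces drawn in any order resolve per pixel:
image = zbuffer_draw([(0, 0, 5.0, 'red'), (0, 0, 2.0, 'blue')], 1, 1)
assert image[0][0] == 'blue'
```

    This is why intersecting objects work: the decision is made per pixel, not per polygon.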

    In 1971 a Vietnamese mathematician, Bui-Tuong Phong, arrived at the University of Utah. The scientist, who had fled his homeland, studied Gouraud's shading method, and a few faults caught his eye. For example, Gouraud's algorithm sometimes miscalculated the colors at the polygon edges, so polygons occasionally blinked or became transparent, making the far side of the object visible. Phong perfected Gouraud's method in 1974 and published his discoveries in a paper titled Illumination for Computer Generated Pictures. His shading method is nowadays called Phong shading. Its greatest advantage, exploited by many 3D accelerator cards, is that it lends itself well to hardware acceleration. But even Phong shading did not solve the problem of angular object contours, and unfortunately it was much slower than Gouraud shading, consuming far more processor time. For this reason both shading algorithms are still in use today, and flat shading has not disappeared either, as it remains the best method for rendering simple objects. Phong later became a professor at Stanford University, where he taught until his death in 1975.
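    Phong's paper also gave a concrete illumination formula that is still taught today: the brightness of a point is a diffuse term, depending on the angle between the surface normal N and the light direction L, plus a specular term that raises the alignment of the mirrored light ray R with the view direction V to a power. A minimal Python sketch follows; the constants are illustrative and all vectors are assumed to be unit length.

```python
def dot(a, b):
    """Dot product of two 3-D vectors."""
    return sum(x * y for x, y in zip(a, b))

def phong(normal, light, view, kd=0.5, ks=0.5, shininess=32):
    """Phong illumination at one surface point.

    diffuse  = kd * max(N.L, 0)
    specular = ks * max(R.V, 0) ** shininess, with R = 2(N.L)N - L
    A high shininess exponent gives the small, sharp highlight that
    makes Phong-shaded objects look polished."""
    n_dot_l = max(dot(normal, light), 0.0)
    r = tuple(2 * n_dot_l * n - l for n, l in zip(normal, light))
    r_dot_v = max(dot(r, view), 0.0)
    return kd * n_dot_l + ks * r_dot_v ** shininess

# Light, view and normal all aligned: full diffuse plus full highlight.
peak = phong((0, 0, 1), (0, 0, 1), (0, 0, 1))
```

Evaluating this per pixel, rather than per vertex, is precisely what makes Phong shading so much more expensive than Gouraud's per-vertex interpolation.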

    Phong shading and texture mapping in Maximum Reality, a 4K intro by Microgenious (1996)

    Catmull’s texture mapping was not improved until 1976, when James Blinn, a programmer at the Jet Propulsion Laboratory in Pasadena, cooked up an interesting idea. The original technology just fixed the bitmap texture onto a flat surface; Blinn developed a method to imitate pits and bumps on the surface with colors and shading. He called the algorithm bump mapping, and there is barely a demo today without it. A new element was added to spatial imagery, used to render bumpy, rough surfaces.
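    The core of bump mapping is that the surface geometry never changes; only the normals fed to the shading are tilted according to a height map. A Python sketch of that step, using a modern finite-difference formulation rather than Blinn's original notation:

```python
def bump_normals(height, scale=1.0):
    """Turn a 2-D height map (list of rows of floats) into perturbed,
    un-normalized surface normals. Differences between neighbouring
    heights tilt the flat normal (0, 0, 1); these tilted normals then
    feed the usual shading, faking relief on flat geometry."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # central differences, clamped at the map's borders
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            row.append((-scale * dx, -scale * dy, 1.0))
        normals.append(row)
    return normals

# A perfectly flat map leaves every normal pointing straight up.
flat = bump_normals([[0.0] * 3 for _ in range(3)])
```

Because the silhouette of the object is untouched, the trick is cheap: the bumps exist only in the lighting, which is exactly why it became a demo staple.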

    Metal shading, again from Bomb. The low polygon count lets us see the polygon edges, where the difference in brightness is interpolated.

    Blinn’s other useful invention was environment mapping, or envmap for short. An envmapped object reflects its environment: projections from six directions – bottom, top, front, back, left and right – are applied to it as a blended texture. Finally, building on Phong’s work, he improved Gouraud and Phong shading, creating Blinn shading. Blinn shading is somewhat faster than Phong, and unlike its forerunner it does not create sharp light spots on the object’s surface but a softer glow, which does not bleed over the smoothed edges.

    Two-dimensional bump mapping in Pure Spirit, a 4K intro by Hugo Habets of Spirit New Style. The simulated bumpy pattern can be fitted onto a vector object, mimicking an uneven surface. The second picture, taken from the Solstice demo by Valhalla, shows this effect.

    The fifth shading method was introduced in 1977, when Rob Cook, a student at Cornell University – after reading Don Greenberg’s works about refraction and the surface shininess of different materials – began to wonder why all computer-generated objects looked like polished plastic. He figured that light had to be treated another way: by taking the energy of a light source into consideration instead of its strength, the result would be a metallic, shiny surface. This is how metal shading was born. Countless more shading methods have been introduced since, all of them offspring of Phong and Gouraud shading.

    Another new light treatment method was discovered in 1980 by Turner Whitted: raytracing. Whitted’s idea was to stop simply checking which side of a virtual object faces the light source, and instead, as in real life, let light rays radiate from the source and calculate their course through virtual space. The rays are reflected from the surfaces they hit, then from the next one, and so on, until they either hit a non-reflecting surface or leave the virtual space. The drawback of the new technology was the high processing power it required, but the result was previously unseen picture quality.
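    Stripped of all geometry, the heart of Whitted's method is a recursion: the color seen along a ray blends the color of the surface it hits with whatever a secondary, reflected ray sees, down to some bounce limit. A deliberately simplified Python sketch; the list of brightnesses stands in for the ray-surface intersections a real tracer would have to compute.

```python
def trace(hits, reflectivity=0.5, depth=0, max_depth=8):
    """`hits` is the sequence of surface brightnesses a ray would meet
    on its successive bounces. Each level blends the surface's own
    brightness with the recursively traced reflection, until the ray
    leaves the scene (list exhausted) or the bounce limit is reached."""
    if depth >= max_depth or not hits:
        return 0.0          # ray left the virtual space: black
    own = hits[0]
    bounced = trace(hits[1:], reflectivity, depth + 1, max_depth)
    return (1 - reflectivity) * own + reflectivity * bounced

# A bright surface seen only via one mirror bounce contributes less
# than the same surface hit directly.
direct = trace([1.0])
mirrored = trace([0.0, 1.0])
```

The bounce limit is what keeps the cost finite: without it, two facing mirrors would recurse forever, and even with it, every extra reflective surface multiplies the work, which is why raytracing was so hungry for processor power.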

    A few years later, in 1984, Cindy Goral, a researcher at Cornell University, invented radiosity. It further enhanced the lifelikeness of raytraced pictures by not only following the course of light rays but also calculating the behavior of individual light particles, photons. She based her studies on the works of Parry Moon and Eberle Spencer, mathematician-researchers at MIT, who published their thesis about light rays reflected from the walls of an empty room in 1940. Real radiosity calculation requires tremendous computing resources; even today it is only feasible on supercomputers. Yet Moon and Spencer had already plotted a radiosity picture by hand in 1948 – a very long and tedious process that must have required a lot of patience.

    As a matter of fact, the mathematician Paul Heckbert, also a radiosity researcher, discovered in 1991 a thesis on photometry written by the Japanese scientist Yamauti Ziro in 1926. The paper describes a calculation method identical to the basics of radiosity as we know it today. Yamauti’s research was focused on the reflection and illumination of different surfaces.

    What is the state of computer graphics today, and where is it headed? The question is easy to answer: all one needs to do is visit the nearest cinema or play the latest video game. When I started this book – in 1996 – photorealistic computer images were a novelty; this was shortly after the premieres of Toy Story and Apollo 13. Seven years have passed since then, and computer graphics (CG) is commonplace everywhere. Eye-candy movies like Final Fantasy, Lord of the Rings and Shrek all show the triumph of this young branch of mathematical science. This rapid advancement also left its footprint on the demoscene, as you will soon find out.

    Raytraced robot, Karter’s picture from 1996.

    Everything that can be invented has been invented.

    – Charles H. Duell, Commissioner of the US Patent Office, 1899

    3. Music for our ears

    Electronic music. The first thing that comes to mind for most people is perhaps the beeping of a quartz watch, or some cacophonous machine music. This stereotype is incorrect – though several decades ago, this really was electronic music.

    What is sound? It is an oscillation of pressure in the transmitting medium – air – spreading as a wave. Our ears convert this wave into neural signals, and our brains decide whether the sound is pleasant or not. Since sound is a wave, it can be characterized with a numeric value, its frequency in hertz, meaning the number of oscillations per second. The longer the wavelength, the lower the pitch. Human ears can sense sound in the spectrum between 20 Hz and 20 kHz; below this range lies the infrasonic, above it the ultrasonic spectrum. The standard tuning note, the A above middle C, is 440 hertz.

    The simplest electronic sound device is a speaker, which beeps upon receiving a certain electric pulse or voltage. The frequency of the pulses controls the pitch of the sound. Such primitive synthesizers are used, for example, in quartz watches and medical devices. The emitted sound can be plotted as a curve: a simple beep draws a regular sine wave on the screen of an oscilloscope. But if we continuously vary the electric voltage in sync with the air pressure oscillation of the original sound, we can reproduce that sound – even human speech – almost entirely. Thomas Alva Edison, the inventor of the phonograph (1878), already exploited this fact, and so did the telephone. The phonograph and its successor, the gramophone, recorded the vibration of the air on wax cylinders or ceramic discs by an analogue method. Magnetic sound recording was first achieved in 1898 by the Danish engineer Valdemar Poulsen, who later also constructed the first radio transmitter capable of broadcasting music. There were no digital sound recording means until the advent of computers, but all the methods and techniques we use in our computers today were already around decades earlier. It only took a sensible step to unify them under digital control, and to replace some of them by digital means.

    Pierre Boulez, French composer and conductor, first presented his composition Répons in 1981 in Donaueschingen, West Germany. Two special computers stood among the musicians and their conventional instruments. This is the computer marked 4X, which had eight independent, programmable digital sound channels. It was designed by Pierre Boulez and Andrew

    The basic principles of sound sampling and storage were laid down in 1948 by Claude E. Shannon, an American scientist at Bell Laboratories. According to his theorem, any complex wave, consisting of several components of different frequencies, can be unequivocally represented by a single sequence of numbers. These numbers specify either the amplitude values of the wave in a given band, or the series of its component frequencies, given in hertz. The sampling frequency must be at least twice the wave’s bandwidth. We use this principle every time we record a sound sample with our computer, and compact discs couldn’t exist without Shannon’s theorem either.
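    In practice, sampling means measuring the wave's amplitude at fixed intervals. The short Python sketch below is a hypothetical illustration using CD parameters: it digitizes one second of a pure 440 Hz tone at the 44.1 kHz CD rate, comfortably above twice the signal's frequency, as Shannon's criterion demands.

```python
import math

def sample_sine(freq_hz, rate_hz, n):
    """Measure n amplitude samples of a pure sine of freq_hz,
    taken rate_hz times per second."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n)]

# One second of the 440 Hz tuning A at CD rate: roughly 100 samples per
# cycle, so the shape of the wave is captured faithfully.
tone = sample_sine(440, 44100, 44100)
```

Had we sampled below 880 Hz, the 440 Hz tone would alias into a different, lower frequency, which is exactly the failure Shannon's limit guards against.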

    CDs also store sound digitally, converted to numbers that are written as microscopic pits on the disc’s surface. The pits run in a spiral line over the entire surface of the round disc, almost five kilometers long. Not only music but any other digital signal can be stored this way – even computer data and programs, as on CD-ROM discs.

    During electronic sound processing, digital signals have to be converted to analogue for playback, and conversely, sound to be recorded has to be digitized. Two simple devices serve these tasks in computers: the AD (analogue-to-digital) converter, which translates the analogue electric signal into a set of numbers, and the DA converter, which performs the opposite operation.
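    The numbers themselves come from quantization: each measured amplitude is rounded to the nearest of a fixed set of levels. A minimal Python sketch of both directions follows; the helper names are hypothetical, and real converters of course do this in hardware.

```python
def quantize(sample, bits=16):
    """AD direction: map an amplitude in [-1.0, 1.0] to a signed
    integer code. More bits mean finer steps and less quantization
    noise, which is why 16-bit CD sound is effectively noiseless."""
    levels = 2 ** (bits - 1) - 1            # 32767 for 16-bit audio
    return round(max(-1.0, min(1.0, sample)) * levels)

def dequantize(code, bits=16):
    """DA direction: an integer code back to an amplitude in [-1.0, 1.0]."""
    return code / (2 ** (bits - 1) - 1)

# A 0.5 amplitude survives the round trip with only a tiny rounding error,
# smaller than one quantization step.
err = abs(dequantize(quantize(0.5)) - 0.5)
```

The staircase-shaped signal the DA converter emits is then smoothed by the analogue filter mentioned below, turning the steps back into a continuous wave.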

    A special musical computer of the eighties, in the laboratory of professor Max V. Matthews. The device consisted of a custom-built sensor, a regular personal computer and a digital synthesizer. The sensor is the rectangular surface: when it was hit with a drumstick, the place and strength of the beat controlled the strength and the spatial illusion of the sound.

    The DA converter creates a series of electric impulses from the amplitude samples. Often it is also enhanced with a special filter that smooths these steps into a continuous wave before the sound is output. The digital signal is stored in binary, in 8- or 16-bit format. CD quality is achieved with 16-bit storage, as a sound depth of 15 bits is already noiseless to the average human ear. But as Max V. Matthews and John R. Pierce, electronic instrument researchers at
