
Making The Parthenon:

Digitization and Reunification of The Parthenon and its Sculptures


Paul Debevec Jessi Stumpfel Chris Tchou Andrew Gardner Andrew Jones Per Einarsson Tim Hawkins Charalambos Poullis Marcos Fajardo Philippe Martinez

University of Southern California Institute for Creative Technologies, 13274 Fiji Way, 5th Floor, Marina del Rey, CA 90292. Contact email: paul@debevec.org

Fig. 1. Virtual model of the modern Parthenon acquired with laser scanning and surface reflectometry. The image is rendered with the Arnold global illumination system using light captured in Marina del Rey, California.

Abstract: In this paper we overview the technology and production processes used to create The Parthenon, a short computer animation that visually reunites the Parthenon and its sculptural decorations, separated since the beginning of the 19th century. The film uses a variety of technologies including 3D laser scanning, structured light scanning, photometric stereo, inverse global illumination, photogrammetric modeling, image-based rendering, BRDF measurement, and Monte Carlo global illumination to create the imagery used in the film.

I. INTRODUCTION

Since its completion in 432 B.C., the Parthenon has stood as the crowning monument of the Acropolis in Athens. Built at the height of classical Greece and dedicated to the goddess Athena, the Parthenon was the most refined structure ever constructed, and featured sculptural decorations that are regarded as among the greatest achievements in classical art. Over time, the structure has been damaged by fire, vandalism, and war as Athens fell under the control of the Roman, Byzantine, Frankish, and Ottoman Empires. In the early 1800s, with Athens under the waning control of the Ottomans, the British Lord Elgin negotiated to remove the majority of the Parthenon's frieze, metopes, and pediment sculptures from the Acropolis to England; since 1816 these sculptures have been in the collection of the British Museum. Whether the sculptures

should be repatriated to Greece is the subject of a longstanding international debate. With the sculptures far removed from the Parthenon, the possibility of visually reuniting the Parthenon with its sculptures in an animated visualization presented itself as an appropriate challenge for newly developed computer graphics techniques. This paper describes the work of our computer graphics research team to realize this goal.

Our group's first research in modeling architectural scenes was presented at the SIGGRAPH 96 conference, where Paul Debevec presented results from his University of California at Berkeley Ph.D. thesis regarding a technique for modeling and rendering architecture from photographs [1] using photogrammetry and image-based rendering. Shortly thereafter Debevec was forwarded an email from the Foundation of the Hellenic World organization in Greece inquiring whether the work could be applied to creating a virtual version of the Parthenon. Since the Parthenon is a geometrically complex ruin, the techniques from the thesis were not fully applicable; they were suited to more modern architecture with regular geometric forms. Nonetheless, the letter inspired research into the Parthenon's architecture and history, a visit to the Parthenon sculpture cast collection at the University of Illinois at Urbana-Champaign, and initial experiments with 3D laser scanner data for capturing complex geometry. In August 1999 Debevec was introduced to archaeologist Philippe Martinez and learned of Martinez's previous 3D scanning work, including the sculptural decorations of the circular Tholos temple at Delphi [2], using a MENSI scanner from Électricité de France. From this meeting it became clear that visualizing the Parthenon and its sculptures using computer graphics would be a fruitful endeavor both technologically and artistically.
The work we performed on this project involved extending currently available research techniques in order to:
1) acquire 3D models of the Parthenon's sculptures,
2) acquire a 3D model of the Parthenon,
3) derive the surface reflectance properties of the Parthenon, and
4) render the Parthenon under novel real-world illumination.
With these techniques in place, our group created a short computer animation, The Parthenon, that includes visualizations of the Parthenon and its sculptures, both as they exist in the British Museum and

Fig. 2. Scanning a cast of a Caryatid sculpture in the Basel Skulpturhalle Museum using structured light.

Fig. 3. The virtual Caryatid model included in a virtual reconstruction of the Erechtheion. The model was formed from 3D scans of a cast augmented by photometric stereo albedo and surface normal measurements derived from photographs of the original.

where they were originally placed, with the goal of providing a visual understanding of their relationship to the architecture of the temple. This paper provides an overview of the techniques used to create the film.

II. SCANNING THE SCULPTURES

The film's shots are arranged in four sequences. The first sequence shows 3D models of sculptures from the Parthenon's frieze, metopes, and pediments; these are seen more extensively later in the film. While the majority of these sculptures are in the British Museum, a significant proportion of them remain in Athens, with only about half of them currently on display. To conveniently capture the full set of sculptures, Dr. Martinez arranged a collaboration with Dr. Thomas Lochman at Switzerland's Basel Skulpturhalle museum, which features high-quality casts of nearly all of the surviving Parthenon sculptures. We designed a custom structured-light 3D scanning system [3] using a desktop video projector and a high-resolution monochrome video camera to efficiently capture the sculptures at between 1 and 2 millimeters of accuracy. In five days in October of 2001, our team of four people acquired 2,200 3D scans including the Parthenon's 160m of frieze, its 52 surviving metopes, the East and West pediment arrangements, and a cast of a Caryatid figure from the Erechtheion (see Fig. 2). In addition, through the support of Jean-Luc Martinez at the Louvre, we scanned an original frieze panel, metope, and pediment head in Paris. These scans of the originals were added to the dataset and also used

to verify the accuracy of the scanned cast models. To assemble the individual 3D scans into complete surface models, we used the MeshAlign 2.0 software [4] developed at the National Research Council (CNR) in Pisa, which implements a volumetric range scan merging technique [5]. Our 3D scan assembly process is described in [6], and sample models can be found at: http://www.ict.usc.edu/graphics/parthenongallery/. In the film, the introductory sculptures were rendered using a photographic negative shader to emphasize the sculptures' contours and to provide an abstract introduction to the film. The second half of the sequence continues this appearance and chooses new images that hint at the history that connects the Parthenon to the present day. These images include a Byzantine cross (Fig. 5) carved into one of the Parthenon's columns, a cannonball from one of the site's battles, and a cannonball impact in one of the cella walls. The carving and the impact were recorded on the site of the Acropolis in November 2003 using a computer vision technique known as photometric stereo, in which the surfaces were illuminated using a wired camera flash unit (Fig. 4) and the appearance of the surfaces under different lighting directions was analyzed to determine the surface orientation and geometry at a fine scale. The cannonball model was scanned in April 2003 from a real cannonball on the Acropolis site using the structured light scanning system.
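Classic Lambertian photometric stereo, as used for the carving and impact captures, solves for a surface normal and albedo at each pixel from images lit from known directions. The following is only a minimal sketch of the general technique, assuming linear grayscale images and known unit light directions; the interface is our own, not the project's actual code:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from images lit
    from known directions, under a Lambertian reflectance assumption.

    images:     (N, H, W) array of linear grayscale intensities
    light_dirs: (N, 3) array of unit lighting directions
    """
    N, H, W = images.shape
    I = images.reshape(N, -1)                          # (N, H*W)
    # Least-squares solve  L @ g = I  for g = albedo * normal per pixel
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)                  # (H*W,)
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
```

With three or more non-coplanar light directions the per-pixel system is well determined, which is why the capture used a flash moved to several positions around the surface.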

Fig. 4. Photometric stereo data capture setup, consisting of a digital camera, a hand-held camera flash, and a calibration frame.

Fig. 5. A rendering of a Byzantine Christian inscription from the film; the geometry was recovered via photometric stereo using the device described above.

Fig. 6. The Quantapoint 3D scanner at the site of the Acropolis.

Fig. 7. The light probe device in operation near the Parthenon on the Acropolis, used in determining the surface reflectance colors of the Parthenon.

III. SCANNING AND RENDERING THE PARTHENON

The film's second sequence shows a time-lapse day of light over the modern Parthenon. The Parthenon was three-dimensionally scanned over a period of five days in April 2003 using a Quantapoint time-of-flight laser range scanner (Fig. 6). Each eight-minute scan covered a 360 degree horizontal by 84 degree vertical field of view, and consisted of 57 million points. Fifty-three of the 120 scans acquired were assembled using the MeshAlign 2.0 software and post-processed using Geometry Systems Inc.'s GSI Studio software. The final model comprised 90 million polygons, which for efficient processing was divided into an 8 × 17 × 5 lattice of cubical voxels, each approximately 4m on a side. The model also included a lower-resolution polygonal model of the surrounding terrain.

The surface colors of the Parthenon were recovered from digital photographs using a novel environmental reflectometry process described in [7]. A Canon EOS 1Ds digital camera was used to photograph the Parthenon from many angles in a variety of lighting conditions. Each photograph shows the surface colors of the Parthenon, but transformed by the shading and shadowing present within the scene. In order to show the Parthenon under new illumination conditions, we needed to be able to determine the actual surface coloration at each surface point on the monument. These values are independent of the lighting in the scene, and are calibrated values such that zero represents a black surface that absorbs all light and one represents a white surface reflecting all light. To determine these colors we constructed a customized light probe device based on that in [8] to measure the incident illumination from the sun and sky on the Acropolis at the same moment that each picture of the Parthenon was taken (Fig. 7).
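Dividing the 90-million-polygon model into a lattice of roughly 4m cubical cells is what makes per-voxel processing (and per-voxel texture maps) tractable. A minimal sketch of such a spatial partition, assuming the mesh is given as vertex and triangle-index arrays; the function name and interface are our own, not the production pipeline's:

```python
import numpy as np
from collections import defaultdict

def partition_triangles(vertices, triangles, voxel_size=4.0):
    """Bucket the triangles of a large mesh into a lattice of cubical
    voxels by centroid, so each cell can be processed independently.

    vertices:  (V, 3) float array of vertex positions (meters)
    triangles: (T, 3) int array of vertex indices
    """
    centroids = vertices[triangles].mean(axis=1)          # (T, 3)
    cells = np.floor(centroids / voxel_size).astype(int)  # integer cell coords
    buckets = defaultdict(list)
    for tri_index, cell in enumerate(map(tuple, cells)):
        buckets[cell].append(tri_index)
    return buckets
```

Bucketing by centroid keeps the partition simple; a production pipeline would also have to handle triangles straddling cell boundaries.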
We then applied an iterative algorithm to determine surface texture colors for the Parthenon model such that, when lit by each captured lighting environment, they produced renderings that matched the appearance in the corresponding photographs. The surface reflectance properties were stored in texture maps, one for each voxel of the Parthenon model. An orthographic view of the surface reflectance properties obtained for the front of the Parthenon's West Facade is shown in Figure 8.
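The iterative texture solve can be illustrated with a toy multiplicative update: render the model under each captured lighting environment, compare against the photograph, and scale the albedo by the average ratio. This is a sketch of the general idea only, with a caller-supplied `render` stand-in for the global illumination renderer; it is not the authors' exact algorithm from [7]:

```python
import numpy as np

def estimate_albedo(photos, lightings, render, n_iters=10):
    """Iteratively solve for per-texel albedo such that renderings under
    each captured lighting match the corresponding photographs.

    photos:    list of (H, W) observed images (linear radiance)
    lightings: list of captured lighting environments, one per photo
    render:    callable (albedo, lighting) -> (H, W) predicted image,
               a stand-in for the global illumination renderer
    """
    albedo = np.full_like(photos[0], 0.5)   # neutral gray initial guess
    for _ in range(n_iters):
        ratios = []
        for photo, lighting in zip(photos, lightings):
            predicted = render(albedo, lighting)
            ratios.append(photo / np.maximum(predicted, 1e-6))
        # multiplicative update: scale albedo toward agreement with photos
        albedo = np.clip(albedo * np.mean(ratios, axis=0), 0.0, 1.0)
    return albedo
```

Averaging over many photographs taken under different lighting is what separates intrinsic surface color from shading and shadowing present in any single image.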

Fig. 8. Recovered surface reflectance colors for the West Facade of the Parthenon.

Once the surface reflectance properties of the Parthenon were recovered, we had the ability to virtually render the Parthenon model from any viewpoint and under any form of illumination. The time-lapse image-based lighting was chosen from one of several days recorded in Marina del Rey, CA using a new high dynamic range photography process [9]. This capture process uses a camera equipped with a fisheye lens pointing at the sky that takes a seven- or eight-exposure high dynamic range image series every minute. Covering the full dynamic range of the sky, from before dawn to the full intensity of the disk of the sun, required using a combination of shutter speed variation, aperture variation, and a factor-of-one-thousand neutral density filter attached to the back of the fisheye lens. The captured image sequence spanned a dynamic range of over one million to one. We rendered the sequence (and the rest of the film) using the Arnold Monte Carlo global illumination rendering system written by Marcos Fajardo. Rendering this sequence virtually allowed us to show the Parthenon without its current scaffolding and tourists and to orchestrate the movement of the camera through the scene during the postproduction process. The second sequence ends with a view of the Parthenon seen from a virtual reconstruction of the Caryatid porch of
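Merging each minute's exposure series into a single radiance map follows the usual HDR assembly recipe: divide each linear-response image by its exposure time and average the well-exposed pixels. A simplified sketch, ignoring the aperture and neutral-density scale factors described above (the function name and thresholds are our own):

```python
import numpy as np

def merge_hdr(exposures, times, low=0.05, high=0.95):
    """Merge a bracketed exposure series (linear-response images,
    values in [0, 1]) into a high dynamic range radiance map.

    exposures: (N, H, W) linear pixel values
    times:     (N,) exposure times in seconds
    """
    exposures = np.asarray(exposures, dtype=float)
    t = np.asarray(times, dtype=float).reshape(-1, 1, 1)
    # trust only well-exposed pixels: neither clipped nor in the noise floor
    w = ((exposures > low) & (exposures < high)).astype(float)
    radiance = exposures / t                 # per-image radiance estimate
    wsum = w.sum(axis=0)
    merged = (w * radiance).sum(axis=0) / np.maximum(wsum, 1e-9)
    # pixels well exposed in no frame: fall back to the shortest exposure
    shortest = np.argmin(times)
    return np.where(wsum > 0, merged, radiance[shortest])
```

The fallback to the shortest exposure matters for the solar disk, which saturates every frame except the most attenuated one.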

the Erechtheion (Fig. 3). Our 3D model of the Caryatid from the Basel Skulpturhalle was duplicated several times to form the complete Caryatid porch. We obtained the fine detail of the Caryatid's face by acquiring a photometric stereo dataset of the original statue in the British Museum, and this detail was added to the lower-resolution 3D model for the close-up of the sculpture.

IV. RECREATING THE BRITISH MUSEUM

The film's third sequence takes place in a virtual re-creation of the Parthenon Sculptures Gallery in the British Museum. This gallery is the most recent space to exhibit the sculptures since their removal from Athens by Lord Elgin in the early 1800s. The dimensions of the gallery were obtained from digital photographs using the Facade photogrammetric modeling system [1], and details were added to the model using traditional 3D modeling in Alias's Maya. Texture maps were created from unlit digital photographs, with absolute color and reflectance values determined using a reference chart to correctly simulate indirect light within the museum. Photographs of the real sculptures were projected onto 3D models from the cast collection to produce the virtual models in the museum. The torso of Poseidon, a particularly dramatic sculptural fragment, was a particular challenge to visualize since the scanned 3D model from the Basel Skulpturhalle included a restoration of the frontal fragment which remains in Athens. Lacking a useful model of the torso of Poseidon, the final shot in the gallery sequence was created using only a set of still photographs taken in a circle around the torso. Tim Hawkins developed a silhouette-based reconstruction algorithm to derive approximate geometry from the photographs, and then implemented a view-dependent image-based rendering algorithm to create the virtual camera move around the sculpture.

V. THE FINAL TRANSITION SEQUENCE

The final sequence of the film shows a continuous camera move with five cross-dissolves between the sculptures in the museum and their original locations on the Parthenon. The matched-motion transitions were made possible by having accurate three-dimensional models for all of the visual elements. These models allowed the viewpoint to be made consistent, and where necessary the illumination could be matched between the elements as well. Near the end of the sequence, the view transitions from a panel of the North frieze in the British Museum to its painted appearance on the ancient Parthenon. This coloration is conjectural since essentially no trace of the original polychromy survives; educated theories draw inspiration from contemporary painted artifacts and surviving traces of color on the temple itself. Our team used several references including [10] and [11] to develop our coloration of the frieze, which included a blue background and brightly colored clothing with an emphasis on the heroic red. As the camera pulls back from the frieze, the film shows the ancient Parthenon in its original glory amongst its neighboring buildings on the ancient

Fig. 9. A shot of the torso of Poseidon (left) was created by deriving geometry from points and silhouettes (right).
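The silhouette-based reconstruction used for the torso of Poseidon is described only at a high level in the text; a generic shape-from-silhouette (visual hull) carve over a grid of candidate points might look like the following sketch, with a caller-supplied camera projection. All names here are hypothetical, not Hawkins's actual implementation:

```python
import numpy as np

def carve_visual_hull(silhouettes, project, grid_points):
    """Approximate shape-from-silhouette: keep the 3D sample points
    whose projection falls inside every silhouette image.

    silhouettes: list of (H, W) boolean masks (True = inside the object)
    project:     callable (view_index, points (M, 3)) -> (M, 2) pixel
                 coordinates (x, y) for that view's camera
    grid_points: (M, 3) candidate points filling the working volume
    """
    keep = np.ones(len(grid_points), dtype=bool)
    for i, sil in enumerate(silhouettes):
        px = np.round(project(i, grid_points)).astype(int)
        h, w = sil.shape
        inside = (px[:, 0] >= 0) & (px[:, 0] < w) & \
                 (px[:, 1] >= 0) & (px[:, 1] < h)
        hit = np.zeros(len(grid_points), dtype=bool)
        hit[inside] = sil[px[inside, 1], px[inside, 0]]
        keep &= hit   # carve away points outside any silhouette
    return grid_points[keep]
```

The result is an over-estimate of the true shape (concavities cannot be carved), which is why view-dependent texturing from the photographs is then needed to make the rendering convincing.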

Fig. 10. Virtual model of the Parthenon Gallery in the British Museum, obtained through photogrammetry and structured light scanning.

Acropolis. Though the sequence is brief, efforts were made to model the Parthenon's original refinements such as the entasis of the columns. These final reconstructions were based on architectural plans displaying the results of archaeological research, including the drawings of Prof. Manolis Korres [12].

VI. CONCLUSION

The Parthenon film is the effort of several researchers and artists over the course of four years. As a short animation, it was necessary to be selective in the imagery chosen to tell a story about the temple to a contemporary audience. One principle that guided our choices was to construct the imagery as much as possible from what can be recorded today, which motivated the use of 3D scanning and reflectance measurement technology to obtain accurate geometry and coloration wherever possible. The sequence showing the restored frieze was necessary to connect the ruin to its original appearance, but it is kept brief, perhaps in proportion to what is known of Periclean Athens relative to what has been lost to time. While there are

Fig. 11. Close-up of the painted frieze.

Fig. 12. The painted frieze of the ancient Parthenon.

many stories to tell about the temple, the separation of the architecture and its decorations seemed the most immediate to the monument's place in modern consciousness. We hope that our film has virtually reunited the Parthenon and its sculptures, not only as datasets on a computer screen, but in the mind of the viewer who may some day visit the British Museum or ascend the Acropolis.

Fig. 13. The restored Parthenon on the ancient Acropolis.

ACKNOWLEDGMENTS

The Parthenon film was created by Paul Debevec, Brian Emerson, Marc Brownlow, Chris Tchou, Andrew Gardner, Andreas Wenger, Tim Hawkins, Jessi Stumpfel, Charis Poullis, Andrew Jones, Nathan Yun, Therese Lundgren, Per Einarsson, Marcos Fajardo, and John Shipley, and produced by Diane Piepol, Lora Chen, and Maya Martinez. We wish to thank Tomas Lochman, Nikos Toganidis, Katerina Paraschis, Manolis Korres, Jean-Luc Martinez, Richard Lindheim, David Wertheimer, Neil Sullivan, and Angeliki Arvanitis, Cheryl Birch, James Blake, Bri Brownlow, Chris Butler, Elizabeth Cardman, Alan Chalmers, Yikuong Chen, Paolo Cignoni, Jon Cohen, Costis Dallas, Christa Deacy-Quinn, Paul T. Debevec, Naomi Dennis, Apostolos Dimopoulos, George Drettakis, Paul Egri, Costa-Gavras, Darin Grant, Rob Groome, Christian Guillon, Craig Halperin, Youda He, Eric Hoffman, Leslie Ikemoto, Peter James, David Jillings, Genichi Kawada, Shivani Khanna, Randal Kleiser, Cathy Kominos, Jim Korris, Marc Levoy, Dell Lunceford, Donat-Pierre Luigi, Mike Macedonia, Brian Manson, Jean-Luc Martinez, Paul Marty, Hiroyuki Matsuguma, Gavin Miller, Michael Moncreif, Chris Nichols, Chrysostomos Nikias, Mark Ollila, Yannis Papoutsakis, John Parmentola, Fred Persi, Dimitrios Raptis, Simon Ratcliffe, Mark Sagar, Roberto Scopigno, Alexander Singer, Judy Singer, Diane Suzuki, Laurie Swanson, Bill Swartout, Despoina Theodorou, Mark Timpson, Rippling Tsou, Zach Turner, Esdras Varagnolo, Michael Wahrman, Greg Ward, Karen Williams, and Min Yu for their help making this project possible. We also offer our great thanks to the Hellenic Ministry of Culture, the Work Site of the Acropolis, the Basel Skulpturhalle, the Musée du Louvre, the Herodion Hotel, Quantapoint, Alias, CNR Pisa, and Geometry Systems, Inc. for their support of this project. This work was sponsored by TOPPAN Printing Co., Ltd., the University of Southern California Office of the Provost, and Department of the Army contract DAAD 19-99-D-0046. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors.

REFERENCES

[1] P. E. Debevec, C. J. Taylor, and J. Malik, "Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach," in Proceedings of SIGGRAPH 96, ser. Computer Graphics Proceedings, Annual Conference Series, Aug. 1996, pp. 11-20.
[2] B. J., "Marmaria, le sanctuaire d'Athéna à Delphes." École Française d'Athènes, Paris, July 1996.
[3] C. Tchou, "Image-based models: Geometry and reflectance acquisition systems," Master's thesis, University of California at Berkeley, 2002.
[4] M. Callieri, P. Cignoni, F. Ganovelli, C. Montani, P. Pingi, and R. Scopigno, "VCLab's tools for 3D range data processing," in VAST 2003 and EG Symposium on Graphics and Cultural Heritage, 2003.
[5] B. Curless and M. Levoy, "A volumetric method for building complex models from range images," in Proceedings of SIGGRAPH 96, ser. Computer Graphics Proceedings, Annual Conference Series. New Orleans, Louisiana: ACM SIGGRAPH / Addison Wesley, Aug. 1996, pp. 303-312.
[6] J. Stumpfel, C. Tchou, N. Yun, P. Martinez, T. Hawkins, A. Jones, B. Emerson, and P. Debevec, "Digital reunification of the Parthenon and its sculptures," in 4th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage, Brighton, UK, 2003.
[7] P. Debevec, C. Tchou, A. Gardner, T. Hawkins, A. Wenger, J. Stumpfel, A. Jones, C. Poullis, N. Yun, P. Einarsson, T. Lundgren, P. Martinez, and M. Fajardo, "Estimating surface reflectance properties of a complex scene under captured natural illumination," conditionally accepted to ACM Transactions on Graphics, 2004.
[8] P. Debevec, "Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography," in Proceedings of SIGGRAPH 98, ser. Computer Graphics Proceedings, Annual Conference Series, July 1998, pp. 189-198.
[9] J. Stumpfel, A. Jones, A. Wenger, C. Tchou, T. Hawkins, and P. Debevec, "Direct HDR capture of the sun and sky," in 3rd International Conference on Virtual Reality, Computer Graphics, Visualization and Interaction in Africa (AFRIGRAPH 2004), Aug. 2004.
[10] J. Neils, The Parthenon Frieze. Cambridge University Press, 2001.
[11] M. Robertson and A. Frantz, The Parthenon Frieze. Phaidon Press, 1975.
[12] M. Korres, "The Architecture of the Parthenon," in The Parthenon and its Impact on Modern Times. Melissa Publishing House, 1994, ch. 2.
