ASPRS 2005 Annual Conference

Geospatial Goes Global: From Your Neighborhood to the Whole Planet


March 7-11, 2005 Baltimore, Maryland

3D GEOSPATIAL VISUALIZATION OF THE UCSC CAMPUS

Suresh K. Lodha
Andrew Ames
Adam Bickett
Jason Bane
Hemantha Singamsetty
Department of Computer Science
University of California, Santa Cruz
Santa Cruz, CA 95064
{lodha,agames,abickett,jbane,hemanth}@soe.ucsc.edu


ABSTRACT

Creation of accurate large-scale 3D environments is important in many applications, including emergency
simulation, disaster preparedness, urban planning, resource management, and tourism. In this work, we use a variety
of data sources: aerial LiDAR data, digital elevation models, two-dimensional architectural drawings of the
buildings, digitally rectified aerial images, and building photographs to construct a texture-mapped 3D model of
UCSC. We use manual techniques to construct 3D models of the buildings, relying on accurate architectural
drawings. The objective is to provide a platform for testing the results of automatic methods of building
reconstruction using aerial LiDAR data or other photogrammetry techniques, and ultimately, to build an accurate 3D
visualization and navigation system for urban environments. We demonstrate our results by visualizing a 3D
texture-mapped model of some subregions of the UCSC campus.


INTRODUCTION

Rapid advances in data acquisition, geospatial location, sensor and computing technologies have made it
possible in the last few years to acquire large scale outdoor geospatial data with the objective of reconstructing
realistic models of environments. Various sensor types, such as aerial LiDAR, ground based LiDAR, digital
cameras, video camcorders, stereo cameras, and omnidirectional cameras are being utilized to create accurate maps
of cities and urban environments. Archived GIS data sets including architectural drawings, aerial imagery, DEM
(Digital Elevation Models), street maps, detailed building attribute databases, and associated parameters are also
being used to create 3D environments. Research is underway to use this vast amount of data to construct 3D models
of environments automatically and accurately.
Automatic and accurate 3D environment reconstruction poses challenges. Most of the existing methods for
constructing these environments are at best semi-automatic, and require substantial input from the user, simplifying
assumptions, or hole-filling heuristics to create both geometry and texture. As these methods slowly make
progress towards more automated construction, there is a need to evaluate the accuracy of the reconstructed
environments. Although visually compelling virtual reality environments may be adequate for games and
entertainment, they are often not sufficient for urban planning, infrastructure development, resource management, or
disaster preparedness. It remains difficult to assess the accuracy of the constructed models without the availability of
ground truth.
In the last decade several research efforts have been undertaken to develop manual and semi-automatic
reconstruction of outdoor 3D environments using a variety of techniques and sensors (Hu et al., 2003).
Various researchers have created different types of platforms for ground-based acquisition of geospatial data
using multiple sensors. The AVENUE project group (Allen et al., 2001) designed a mobile robot platform to acquire
data for automating the urban site modeling process. The pose camera ARGUS (Teller et al., 2000), designed at the
MIT Computer Graphics Lab, is a fully automated model acquisition system placed on a wheelchair-sized
mobile platform. The platform includes instruments such as GPS, IMU sensors, and a high-resolution digital camera
along with a processing unit. A portable data acquisition and tracking system consisting of range, image and GPS
and gyro tracking sensors was developed by Neumann at USC (Neumann et al., 2003). Zhao and Shibasaki (Zhao et
al., 2001) collected data using a platform mounted on a van. Wang et al. (Wang et al., 2003) used an automobile
equipped with various sensors (GPS, odometry, compass, gyro, video sensors, omnidirectional camera, three LiDAR
sensors, etc.) to collect outdoor data. Frueh and Zakhor (Frueh et al., 2003) have also used a sensor platform, mounted on a
truck, which moves on public roads at uniform speeds. The platform is capable of acquiring 3D and texture
information of building facades. The range data is obtained using two 2D laser scanners and the texture data is
acquired using a digital camera.
In this work, we reconstruct 3D building models using architectural drawings and texture map the walls with
digitally acquired images. Although reconstruction using these drawings and images is labor intensive, it can
provide a good yardstick for measuring the accuracy of reconstruction using new algorithms. We use many other
data sources including digital elevation models, aerial LiDAR data, digitally rectified aerial images, and wall texture
images taken with a digital camera to construct a texture-mapped 3D model of certain regions of the UCSC Campus.
The 3D visualization of the UCSC campus that we describe here is similar to previous efforts by Fitzgerald
(Fitzgerald, 2002), Wasilewski et al. (Wasilewski et al., 2002), and more recently Sugihara (Sugihara, 2004), who
constructed 3D models of South Carolina, the Georgia Tech campus, and Nagoya, respectively. In our work,
however, we reconstruct accurate 3D models from architectural drawings that can then be compared with
automatically reconstructed models. We have also utilized aerial LiDAR data for constructing building models.


GEOSPATIAL DATA SOURCES

In this work, we used five different data sources: architectural drawings, aerial imagery, digital elevation
models, wall textures, and aerial LiDAR data. We now describe each of these data sources.
The two-dimensional architectural AutoCAD drawings provide a highly detailed description of the terrain
hidden under foliage. They exist in two different formats: DWG and DXF. We obtained the high-detail and low-detail
AutoCAD drawings of the entire UCSC campus in DWG format. The data in a DWG file exist in layers
representing building footprints, tree footprints, roads, map posts, parking lots, etc. The high-detail drawings are
registered in the NAD27 SPCS Zone III coordinate system, whereas the low-detail drawings are registered in
NAD83 SPCS Zone III. Figure 2 is an example of the architectural drawings that we used in this work.
A Digital Elevation Model (DEM) consists of a sampled array of elevations for ground positions at regularly
spaced intervals. The most common DEM format is an ASCII text file with the .dem extension. These files have
header information describing the general characteristics of the DEM: the state, boundaries, units of measurement,
minimum and maximum elevations, projection parameters, and statistics on the accuracy of the data. DEM data with
10-meter horizontal spacing is available for the entirety of Santa Cruz through the USGS. Figure 1 is a rendering of
the terrain height map of a subregion of the UCSC campus.
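Because the DEM posts lie on a regular grid, the elevation at an arbitrary ground position can be recovered by bilinear interpolation between the four surrounding posts when rendering the terrain. The following Python sketch illustrates the idea on a made-up 2 x 2 elevation patch; the grid values and query point are illustrative, not taken from the actual dataset:

```python
# Bilinear sampling of a regularly spaced DEM grid (illustrative values).

SPACING = 10.0  # meters between DEM posts, as in the USGS 10 m data

def sample_dem(grid, x, y, spacing=SPACING):
    """Bilinearly interpolate elevation at (x, y) meters from the grid origin."""
    col, row = x / spacing, y / spacing
    c0, r0 = int(col), int(row)
    fc, fr = col - c0, row - r0
    # Interpolate along x on the two bracketing rows, then along y.
    top = grid[r0][c0] * (1 - fc) + grid[r0][c0 + 1] * fc
    bot = grid[r0 + 1][c0] * (1 - fc) + grid[r0 + 1][c0 + 1] * fc
    return top * (1 - fr) + bot * fr

grid = [[100.0, 110.0],
        [120.0, 130.0]]            # a 2x2 patch of elevations (meters)
print(sample_dem(grid, 5.0, 5.0))  # midpoint of the patch -> 115.0
```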



Figure 1. Rendering of terrain elevation of UCSC using DEM


Light Detection and Ranging (LiDAR) is a laser scanning technology used to collect range data. Unlike
DEMs, LiDAR data can be used to produce highly accurate and high-resolution topographic data. The aerial LiDAR
data we acquired for the entire UCSC campus contains about 9 million points and has elevation information for all
of the buildings, vegetation, and bare ground.
A DOQQ (Digital Ortho-photo Quarter Quadrangle), or digital ortho-photo, is a digitally rectified aerial image
that has the geometric qualities of a map. Unlike standard aerial photographs, DOQQs display ground features in
their true ground position. They cover an area of approximately 7000 meters x 7000 meters and have a
resolution of 1 meter per pixel. The DOQQs used in this research are referenced in NAD83 State Plane
Coordinate System (SPCS) Zone III coordinates.
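Since a DOQQ has map geometry and a known resolution, mapping an image pixel to ground coordinates reduces to a simple affine relation between the upper-left corner coordinate and the pixel offsets. The sketch below illustrates this; the upper-left SPCS coordinates are made-up values, not the actual georeferencing of our DOQQs:

```python
# Pixel-to-ground mapping for a north-up rectified image (illustrative).

RES = 1.0  # meters per pixel, as stated for the DOQQs

def pixel_to_ground(col, row, ul_easting, ul_northing, res=RES):
    """Map an image pixel (col, row) to SPCS ground coordinates.

    Image rows increase downward, so northing decreases with row.
    """
    return ul_easting + col * res, ul_northing - row * res

e, n = pixel_to_ground(250, 100, ul_easting=1_800_000.0, ul_northing=600_000.0)
print(e, n)  # 1800250.0 599900.0
```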
We also use wall textures acquired using a digital camera that we describe in further detail in Section 3.3. An
example of a wall texture used in this work is shown below in Figure 6.


3D RECONSTRUCTION

The 3D model of the terrain is provided by the DEM. Buildings are reconstructed on top of the terrain using either
architectural drawings (Section 3.1) or aerial LiDAR data (Section 3.2). The terrain texture is provided by
aerial images, and the wall textures are acquired with a digital camera (Section 3.3). Furthermore, roof textures
must be extracted from the aerial images and applied to the building rooftops (Section 3.4).

Ground Truth Building Models
To build accurate building models, we located and studied the original architectural construction plans for the
buildings. These construction plans are designated as either Bid Set or As Built. It is important to use As Built
drawings to ensure that the drawings actually reflect what was constructed. At UCSC, these documents are
controlled by the Physical Planning and Construction department, and are kept in both paper and electronic form.



Figure 2. Roof plan and elevation drawings

In the construction of the actual model, there are two general types of information that can be obtained from the
schematics: the footprint dimensions, and the heights of points on the footprint and on the roof. The footprint
information can be obtained from the first floor plan and the roof plans, while the height information is located in the
external elevation drawings (Figure 2). Unfortunately, because not all of the expected heights and dimensions are
present in the schematics, additional computations are necessary to complete the models. For example, the height at
the top of the roof is rarely supplied, and must be calculated using the slope of the roof, the height at the top of the
wall, and the horizontal length of the pitch. If additional assumptions need to be made in order to complete the
model, the accuracy of the model may be compromised. This occurs most often for older buildings, which typically
have less detailed schematics.
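The roof-peak computation described above can be illustrated with a small sketch; the wall height, pitch run, and slope below are made-up example values, not taken from any actual drawing:

```python
# Deriving the (usually unstated) roof peak height from values that do
# appear in the drawings. All numbers are illustrative.

def roof_peak_height(wall_top, pitch_run, rise_per_run):
    """wall_top: height at the top of the wall (m); pitch_run: horizontal
    length of the pitch (m); rise_per_run: roof slope (e.g. 4:12 -> 4/12)."""
    return wall_top + pitch_run * rise_per_run

# A gabled roof with a 4:12 pitch, a 6 m run, and walls topping out at 9 m:
print(roof_peak_height(9.0, 6.0, 4 / 12))  # 11.0
```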
The actual creation of these models is a difficult and slow task, done by hand in the OBJ model format. It
requires deducing and listing all (x, y, z) points in the model (about 50 for a simpler building), and then creating
faces out of these points. The potential for human error in the construction of these models is high, although errors
can be identified by loading the models in an OBJ model viewer.
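The by-hand OBJ workflow amounts to listing vertices and then faces as 1-based vertex indices. A minimal illustrative sketch, using a plain box rather than any actual campus building:

```python
# Emitting OBJ text: "v x y z" records followed by "f i j k l" records
# whose indices are 1-based. The 10 x 6 x 3 box is illustrative.

def write_obj(vertices, faces):
    lines = ["v %g %g %g" % v for v in vertices]
    lines += ["f " + " ".join(str(i) for i in face) for face in faces]
    return "\n".join(lines)

# 8 corners of the box, then its 6 quad faces (bottom, top, four sides).
vs = [(x, y, z) for z in (0, 3) for y in (0, 6) for x in (0, 10)]
fs = [(1, 2, 4, 3), (5, 6, 8, 7), (1, 2, 6, 5),
      (3, 4, 8, 7), (1, 3, 7, 5), (2, 4, 8, 6)]
print(write_obj(vs, fs))
```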
Three additional data points are required to place the models on the UCSC terrain: elevation, latitude and
longitude, and orientation. Of these, only the elevation is supplied in the drawings; however, the reference datum for
that elevation is not specified. The elevations appear to be referenced with respect to sea level, but do not necessarily
match up well with the NAVD 88 DEM model of the UCSC terrain. For the orientation and placement, we used
simple building footprint data from the PPC office to match two points from the model with two points from the
simplified footprints, thereby finding the rotation and translation needed to place the model. A better approach to
finding latitude, longitude, and orientation would be desirable, as this one relies on simplified footprints, yet often no
more accurate information is available.
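The two-point matching step described above can be sketched as follows: the rotation is the difference between the headings of the model segment and the footprint segment, and the translation then carries the rotated first model point onto the first footprint point. All coordinates below are made-up illustrative values:

```python
import math

# Recovering a 2D rotation + translation from two matched point pairs.

def two_point_transform(m1, m2, f1, f2):
    """Return (theta, tx, ty) mapping model points onto footprint points."""
    ang_m = math.atan2(m2[1] - m1[1], m2[0] - m1[0])
    ang_f = math.atan2(f2[1] - f1[1], f2[0] - f1[0])
    theta = ang_f - ang_m
    c, s = math.cos(theta), math.sin(theta)
    # Rotate m1, then translate it onto f1.
    tx = f1[0] - (c * m1[0] - s * m1[1])
    ty = f1[1] - (s * m1[0] + c * m1[1])
    return theta, tx, ty

def apply_transform(theta, tx, ty, p):
    c, s = math.cos(theta), math.sin(theta)
    return c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty

# Model wall runs along +x; the surveyed footprint wall runs along +y.
theta, tx, ty = two_point_transform((0, 0), (10, 0), (5, 5), (5, 15))
print(apply_transform(theta, tx, ty, (10, 0)))  # lands on (5, 15), up to rounding
```

Note that matching only two points fixes rotation and translation but assumes the model and footprint share the same scale, which holds here since both are in meters.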
We completed the process of constructing building models for 8 relatively simple, gabled-roof
buildings, ignoring details such as patios and sloped bases. Although the process becomes more productive with
practice, it is still difficult to automate. Moreover, it may be infeasible to hand-build accurately detailed models
of more complicated buildings, whose schematics seem to be lacking in some areas. Such buildings have
curves or irregular protrusions that are not fully specified in the drawings and would be difficult to model
precisely. Even so, the ground truth information and the basic derived models provide enough
information to be useful for comparison with the automatically constructed models.

Aerial LiDAR
The above architectural approach is very time-intensive, and has inherent difficulties that undermine its goal
of producing a completely accurate building model. In the interest of having models to represent each of the 486
buildings on the UCSC campus, we decided to model the remaining buildings by extruding the roofs of each of
these buildings, using the simple footprints supplied by the Physical Planning and Construction office, and a roof
height for each building calculated from normalized LiDAR data.
Creating the models is fairly simple. For each point in the footprint, two points are created: one at the height of the
ground at the given point, the other at the height of the normalized roof. The faces are then created from these
points, with a flat roof covering them. As long as back-face rendering and tessellation are enabled in the
rendering of these models, this approach works well.
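The extrusion step described above can be sketched in a few lines; the footprint and heights below are illustrative, and the face indexing follows the 1-based OBJ convention:

```python
# Extruding a building footprint into a prism: each footprint vertex spawns
# a ground point and a roof point, walls join consecutive pairs, and a flat
# roof caps the top ring. Footprint and heights are illustrative.

def extrude(footprint, ground_z, roof_z):
    """footprint: list of (x, y) vertices in order.
    Returns (vertices, faces) with 1-based face indices, OBJ-style."""
    n = len(footprint)
    vertices = [(x, y, ground_z) for x, y in footprint] + \
               [(x, y, roof_z) for x, y in footprint]
    walls = [(i + 1, (i + 1) % n + 1, (i + 1) % n + 1 + n, i + 1 + n)
             for i in range(n)]
    roof = tuple(range(n + 1, 2 * n + 1))  # flat cap over the top ring
    return vertices, walls + [roof]

vs, fs = extrude([(0, 0), (12, 0), (12, 8), (0, 8)], ground_z=0.0, roof_z=5.5)
print(len(vs), len(fs))  # 8 vertices, 5 faces (4 walls + roof)
```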
The advantage of this approach is apparent: it is automated. As new and more detailed footprint information
becomes available, the models can easily be rebuilt and updated. The most important issue to address with this
approach is the creation of more realistic roofs, because roofs greatly contribute to the complexity and visual appeal
of the created models. We are currently working on determining the roof geometry of buildings from aerial LiDAR
data (Lodha et al., 2005).

Wall Textures
While taking digital pictures of a building and directly applying them to a model might be fast, there are several
problems with doing so. It is not always possible to photograph an entire building face, and there are
often trees or other objects that obscure such a photo. Even if those difficulties are not present, the person applying
the textures to the building models still needs to worry about proper perspective, consistent lighting, and shadows in
such photographs. Correcting these images can take hours, and on a large-scale project this method would not
be practical.
However, a picture of a complete wall is not entirely necessary. Because of repetition in building design, walls
may be constructed from only a few pictures: doors, windows, and primary wall texture. Once these pieces have
been finished they can easily be arranged to form the wall, saving time without sacrificing accuracy or important
detail.
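The tiling idea can be illustrated with a toy sketch in which each "tile" is a tiny character grid rather than an image; the layout and tile contents are made up for illustration:

```python
# Assembling a wall from a few repeated tiles. Real tiles are photographs;
# here each tile is a 2x2 character grid so the stitching is visible.

WINDOW = [["W", "W"], ["W", "W"]]
SIDING = [["s", "s"], ["s", "s"]]
DOOR   = [["D", "D"], ["D", "D"]]

def assemble(layout, tiles):
    """layout: grid of tile names; returns the stitched wall as a cell grid."""
    th = len(next(iter(tiles.values())))  # tile height in cells
    rows = []
    for layout_row in layout:
        for r in range(th):
            # Concatenate row r of every tile in this layout row.
            rows.append(sum((tiles[name][r] for name in layout_row), []))
    return rows

wall = assemble([["siding", "window", "siding"],
                 ["siding", "door",   "siding"]],
                {"window": WINDOW, "siding": SIDING, "door": DOOR})
print("\n".join("".join(r) for r in wall))
```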
To obtain the wall texture, we first photographed the building side to identify the individual elements and their
placement. Field sketches of the wall were sometimes necessary when the wall was obstructed (like the trees
in Figure 3).




Figure 3. Photographs taken for perspective and positioning.

We next took close up pictures of each of the identified elements and textures of the building wall, in enough
detail to be able to tile the components to form a faithful representation of the wall.



Figure 4. Window, Siding, and Trim.

We then adjusted the individualized elements and textures to account for lighting, shadows, and perspective,
and made them proportional to each other (Figure 5).


Figure 5. Window, Siding, and Trim (correct scale)

The last step was to assemble the tiles into a wall, according to the sketches and photographs of the wall
(Figure 6).




Figure 6. The complete wall texture.

Not only were we able to save time by reconstructing walls from tiles, but because many buildings (especially
residence halls) are identically constructed, we were also able to reuse elements, textures, and even whole walls
across multiple building models.

Roof Textures
We developed a roof texture extraction utility as part of the visualization software, which enables faster
retrieval of roof images. The building models are rendered onto the terrain, and the desired building rooftop can be
selected using the picking mode. The software automatically searches for the roof pattern in the database of aerial
images and highlights the roof texture boundary. The user can adjust the boundary in order to make a precise
extraction.
Figure 7 shows a snapshot of the roof texture extraction utility. The selected building roof is highlighted in
red.




Figure 7. Roof texture identification and cutting

The highlighted roof texture can then be saved and applied to the selected building roof. Figure 8 shows the
texture mapped roofs of 4 L-shaped buildings.



Figure 8. Texture mapped terrain of College 8 region of the UCSC campus with some texture-mapped buildings.


VISUALIZATION

To facilitate the visualization of the available data, we developed software that can render the different
datasets discussed in Section 3. The visualization program is written in C using OpenGL and wxWindows,
and runs in real time on a PC with a Pentium 4 processor and 3 GB of memory.
A 3D terrain height map is created from the elevation data in the Digital Elevation Model dataset and
rendered by the program. A sample screenshot of the rendering of the UCSC campus height map is shown in
Figure 1. Different textures created from aerial images (DOQQs) or AutoCAD drawings can be applied to
the rendered height maps. Figure 9 shows the texture-mapped height map of the UCSC campus with several buildings
extruded using aerial LiDAR data. The texture is created from an aerial image of the entire campus.



Figure 9. UCSC campus height map with texture and building models.

Figure 8 shows the 3D building models of the College 8 region of the UCSC campus rendered
simultaneously with the terrain height map and texture map. Before rendering, the 3D models are registered in the
same State Plane Coordinate System as the terrain and texture maps.




Figure 10. Sample detailed building model.

The software also provides the additional capability of adding highly detailed 3D building models (Figure 10) to the
texture-mapped terrain height maps.


CONCLUSIONS AND FUTURE WORK

Currently, the visualization program can be used for a walk-through of some parts of the campus. We expect to
create a much more detailed model as more detailed data is collected using additional sensors. We also hope to use
this program to compare the ground truth with the reconstructed models. By adding geospatial
information, we hope to build upon the current visualization to create a navigational tool for visitors to our
campus, and ultimately to use this program to aid the mobility of the visually impaired through wireless and
multimodal technology.

ACKNOWLEDGEMENTS

This work was partially supported by the Multi University Research Initiative (MURI) grant by Army Research
Office under contract and the NSF grant ACI-0222900. We would also like to thank Karthik-Kumar Arun-Kumar,
Amin Charaniya, Brian Fulfrost, Sanjit Jhala, and Srikumar Ramalingam for helpful discussions related to this
project.


REFERENCES

Allen, Peter, Ioannis Stamos, Atanas Gueorguiev, Ethan Gold, and Paul Blaer (2001). Automated site modeling in
urban environments. In Proceedings of the Third International Conference on 3D Digital Imaging and
Modeling, May-Jun 2001.
Fitzgerald, Brian (2002). Virtual reality meets GIS: Urban 3D modeling in South Carolina. Geospatial Solutions,
July 2002.
Frueh, C., and A. Zakhor (2003). Constructing 3D city models by merging ground-based and airborne views. In
IEEE Conference on Computer Vision and Pattern Recognition, Madison, USA, June 2003, pp.
562-569.
Hu, Jinhui, Suya You, and Ulrich Neumann (2003). Approaches to large-scale urban modeling. IEEE Computer
Graphics and Applications, IEEE Computer Society Press, Los Alamitos, CA, USA, November 2003, Vol. 23, pp.
62-69.
Lodha, Suresh K., and Karthik K. ArunKumar (2005). Semiautomatic roof reconstruction from aerial LiDAR data
using K-Means with refined seeding. In Proceedings of ASPRS Conference, March 2005.
Neumann, U., Suya You, Jinhui Hu, Bolan Jiang, and JongWeon Lee (2003). Augmented virtual environments
(AVE): Dynamic fusion of imagery and 3D models. In IEEE Proceedings on Virtual Reality, March 2003, pp.
61-67.
Sugihara, K. (2004). GIS-based automatic generation of 3D buildings from filtered building polygons. In
Proceedings of IASTED Conference on Environmental Modeling and Simulation, November 2004.
Teller, Seth, M. Bosse, and D. de Couto (2000). Eyes of Argus: Georeferenced imagery in urban environments. GPS
World, April 2000, pp. 20-30.
Wang, Chieh-Chih, Charles Thorpe, and Sebastian Thrun (2003). Online simultaneous localization and mapping
with detection tracking of moving objects: Theory and results from a ground vehicle in crowded urban
areas. In IEEE International Conference on Robotics and Automation, May 2003.
Wasilewski, Tony, William Ribarsky, and Nickolas Faust (2002). From urban terrain models to visible cities. In
IEEE Computer Graphics and Applications, July-Aug 2002.
Zhao, Huijing and Ryosuke Shibasaki (2001). Reconstructing urban 3D model using vehicle-borne laser range
scanners. In 3-D Digital Imaging and Modeling, 2001.
