Simulating Low-Cost Cameras for Augmented Reality Compositing

Georg Klein and David W. Murray
Abstract—Video see-through Augmented Reality adds computer graphics to the real world in real time by overlaying graphics onto a
live video feed. To achieve a realistic integration of the virtual and real imagery, the rendered images should have a similar appearance
and quality to those produced by the video camera. This paper describes a compositing method which models the artifacts produced
by a small low-cost camera, and adds these effects to an ideal pinhole image produced by conventional rendering methods. We
attempt to model and simulate each step of the imaging process, including distortions, chromatic aberrations, blur, Bayer masking,
noise, sharpening, and color-space compression, all while requiring only an RGBA image and an estimate of camera velocity as inputs.
1 INTRODUCTION
Augmented Reality inserts virtual graphics into the real world. Here, we consider video see-through AR and the insertion of graphics by blending some rendered image onto a video feed from a small hand-held or head-mounted camera. In some applications, it may be desirable to have the virtual graphics appear as if they were part of the real world, and to create this illusion requires surmounting a number of challenges: tracking should be accurate and jitter-free, so that the graphics appear glued in place in the real world; occlusions between real and virtual objects should be correct; the lighting of the virtual objects should match that of the real world; and the quality and texture of the rendered pixels should match that seen in the video feed.

In this paper, we address the last problem. Virtual graphics are usually rendered assuming that the camera is a perfect pinhole device, when in reality, the Webcams or other small devices often used for AR add many distortions and imperfections to the image. To convincingly blend the two images, there are then two options: One is to somehow remove the distortions and imperfections from the captured image, but this is unrealistically difficult. The other is to artificially introduce imperfections into the rendered images, and this is the approach taken here. More specifically, this paper seeks to emulate the imaging process which occurs in small cameras with wide-angle lenses, such as the Unibrain Fire-i. There has been some previous work on this topic: Watson and Hodges [17] have shown that lens distortion can be emulated or corrected using graphics hardware; more recently, Fischer et al. [4] have shown that the integration of rendered graphics can be improved by adding synthetic noise and motion blur, and by antialiasing the blending seams between graphics and video. Both of these methods exploited ever-increasing GPU bandwidth to achieve their aims, and we follow the same approach.

To enable easy integration with existing rendering methods, we cast our compositing method as a postrendering process, operating on an ideal rendered image of the virtual graphics such as is typically produced by OpenGL. This image is warped and blurred according to lens, motion, and sensor characteristics, and then resampled and degraded in accordance with sensor behavior. We then blend the degraded image with the captured video data to produce the composited image. We attempt a principled simulation of camera effects based on data learned (preferentially) from device specifications, from offline calibration, or from guesswork (when necessary). We show that lens distortions, chromatic aberrations, vignetting, antialiasing filters, motion blur, Bayer interpolation, noise, sharpening, quantization, and color-space conversion can all be modeled on today's commodity graphics hardware.

The next section of this paper discusses previous related work. Section 3 discusses an a priori model of the imaging pipeline of a small camera. Section 4 describes methods of quantifying the effects of this pipeline as performed on the Fire-i camera, and in Section 5, we show how this pipeline can be simulated on the computer. Results are presented in Sections 6 and 7, and finally, Section 8 concludes the paper.

2 RELATED WORK

A previous short version of this paper [11] appeared in the Proceedings of ISMAR 2008.

Grain matching (a simulation of the texture of film stock) is long established in the offline world of the movie industry, and the imaging process of digital cameras has been investigated in offline contexts (e.g., the work of Florin [6]). In AR, such sensor effects were only recently considered: Fischer et al. [4] add noise and motion blur to the rendered image, and antialias the boundaries where real and virtual images meet. This paper can be seen as an extension of that work in which we consider more sensor effects. However, our method is also more general in that it requires no special treatment of seams or transparent objects.
Fig. 7. Frequency response of the in-camera sharpening on a 320×240 image.

Fig. 8. The impulse response measured from the Fire-i on the 320×240 image at a sharpness setting of 100.

cannot be certain of this: The impulse response has been calculated from a noisy, postprocessed version of inputs and outputs, so accuracy will be low. What is apparent in both the impulse response and the sharpened image is the asymmetric nature of the filter, which shifts the image sideways as more sharpening is applied.

5 IMPLEMENTATION

This section describes a method by which some elements of the above image formation process can be emulated on a computer. We start with a high-resolution image of virtual graphics rendered in OpenGL, and progressively downsample, blur, and degrade the image to produce the data which the camera would have measured at each Bayer photosite, and subsequently blend and color-space-convert the image together with the video input feed to produce a final 640×480-pixel blended image. We implement everything using OpenGL and its shading language GLSL. The input image needs to have an alpha (transparency) channel: At this point, it is helpful to briefly review alpha blending and compositing in general.

5.1 A Note on Color Representation

We adopt the usual convention of an alpha value with a range from zero to unity indicating fractional pixel coverage: that is, an RGBA pixel encodes both an RGB color as well as a fraction of the pixel's area which is covered by that color. When performing calculations such as convolution and interpolation on RGBA data, it is, however, also important to store this data in an appropriate format; here, this means storing pixels using premultiplied alpha.

The use of premultiplied alpha has been commonplace in computer graphics since the compositing paper of Porter and Duff [15] but appears less well known in AR. In the premultiplied representation, each pixel stores not the usual quadruplet $c = [r, g, b, \alpha]$, but rather the values $\hat{c} = [r\alpha, g\alpha, b\alpha, \alpha]$, with all values in the range 0-1. This has the advantage that interpolation and summation over pixels become trivial: The average of pixels $\hat{c}_1$ and $\hat{c}_2$ is simply $\frac{1}{2}(\hat{c}_1 + \hat{c}_2)$, whereas $\frac{1}{2}(c_1 + c_2)$ would yield incorrect results when $\alpha_1 \neq \alpha_2$. Correct summation over pixels of different alpha values is crucial for the ability to handle transparent objects, allows simple convolution operations over pixels of varying alpha, and avoids the sometimes-seen black or gray outlines around virtual graphics.
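As a concrete illustration of this point (our own NumPy sketch, not the paper's GLSL code), averaging an opaque red pixel with a fully transparent pixel gives the expected half-covered red only in the premultiplied representation:

    import numpy as np

    def premultiply(c):
        # [r, g, b, alpha] -> [r*alpha, g*alpha, b*alpha, alpha]
        r, g, b, a = c
        return np.array([r * a, g * a, b * a, a])

    c1 = np.array([1.0, 0.0, 0.0, 1.0])  # opaque red
    c2 = np.array([0.3, 0.3, 0.3, 0.0])  # fully transparent; RGB is arbitrary

    naive = 0.5 * (c1 + c2)                              # straight alpha
    correct = 0.5 * (premultiply(c1) + premultiply(c2))  # premultiplied

    print(naive)    # [0.65 0.15 0.15 0.5]: the invisible gray has leaked in
    print(correct)  # [0.5  0.   0.   0.5]: half-covered pure red, as expected

The gray that leaks into the straight-alpha average is exactly the source of the dark outlines mentioned above.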
5.2 Processing

Our method requires three inputs per frame: a rendered pinhole image of the virtual graphics to be displayed, the input video image, and an estimate of the camera's rotational displacement during the frame's exposure. The rendered image should be stored in premultiplied alpha, requiring either a trivial modification of blending modes when rendering, or a preprocessing step to multiply each color by its alpha component. The image should also be sufficiently high-resolution to provide good detail across the distorted frame; with the wide-angle lens used here, this means that we use a 3,000×2,250-pixel rendering. (The 3,000 size is chosen by working backward from the 1,024×768 size later in the process. Scaling the image up by a factor of two to provide some antialiasing yields a width of 2,048, and this is multiplied by 1.5 to compensate for the change in pixel pitch in the center versus the edges of the distorted image.) The transparent background should be of color $\hat{c} = [0, 0, 0, 0]$.
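Written out, the back-calculation in that parenthetical is (the final rounding down to 3,000, and the use of the 4:3 aspect ratio for the height, are our reading of the text):

$w = 1.5 \times (2 \times 1024) = 3072 \approx 3000, \qquad h = \tfrac{3}{4} \times 3000 = 2250.$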
The rendered image is processed and blended with the video frame in 10 distinct steps, which are shown in a flowchart in Fig. 9. These steps are now described in detail.

1. Radial distortion: The pinhole image is warped by rendering into a 2,048×1,536-pixel texture as a 24×18-cell³ grid, as described by Watson and Hodges [17]. Here, we use the "FOV" radial distortion model of Devernay and Faugeras [3], i.e., the same distortion parameters used by the visual tracking system employed. The field-of-view coverage of both the input image and the new distorted image is slightly larger than the video frame: This provides the margin of pixels required for later blur and aberration steps, and is illustrated in the flowchart by the black and white outlines (the white outline is the green channel's frame coverage).

3. The choice of 24×18 as a mesh size mirrors that of Watson and Hodges [17], who found that 400 vertices "struck a good balance between oversampling and overinterpolating."
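The "FOV" model itself comes from Devernay and Faugeras [3]: with field-of-view parameter $\omega$, an undistorted normalized radius $r_u$ maps to $r_d = \arctan(2 r_u \tan(\omega/2)) / \omega$. A CPU-side NumPy sketch of the per-vertex warp follows; the $\omega$ value is illustrative, not a calibration of the Fire-i lens, and the real pipeline performs this on the GPU:

    import numpy as np

    OMEGA = 1.6  # FOV distortion parameter omega; illustrative value only

    def distort_vertex(u, v, cx, cy, f):
        # Map an ideal pinhole point to its radially distorted position with
        # the 'FOV' model of Devernay and Faugeras [3]:
        #     r_d = arctan(2 * r_u * tan(omega / 2)) / omega
        x, y = (u - cx) / f, (v - cy) / f    # normalized camera coordinates
        r_u = np.hypot(x, y)
        if r_u < 1e-9:
            return u, v
        r_d = np.arctan(2.0 * r_u * np.tan(OMEGA / 2.0)) / OMEGA
        s = r_d / r_u
        return cx + f * x * s, cy + f * y * s

    # Applied at each of the 24x18 grid vertices; the rasterizer then
    # interpolates texture coordinates across each cell to warp the image.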
2. Half sampling and color mixing: The distorted image is subsampled down to 1,024×768 pixels. We use a slightly enlarged 2×2 box filter to avoid aliasing (four taps using bilinear interpolation yield 16 source pixels for each pixel output). At the same time, the image is desaturated by mixing a small fraction of each color channel into the others. The sensor specifications suggest a mixing operation as described in Section 4.2. In practice, we find the desaturation produced by this transform is not as strong as that observed in the Fire-i's images, so we
Fig. 10. Compositing comparison with a “Cartman” figure inserted in the scene. The two large images show (a) the old and (b) new compositing
methods. (c) The top enlargement illustrates motion blur applied by the new method, which matches reasonably well the motion blur of the
background. Visible in the second enlargement is virtual noise added by the method, some slight separation of the color channels (chromatic
aberration), and the low chroma resolution of YUV-411.
Fig. 11. Compositing comparison with a digger inserted in the scene. Here, the enlargements show the graceful degradation of areas with intricate
detail, and artificial sharpening artifacts at the junction of real and virtual graphics.
estimated in Section 4.4.3 is too long to be used directly, and an IIR filter is incompatible with the parallel nature of GPUs, so we perform sharpening using a symmetric⁴ seven-tap FIR filter. This filter $F$ is constructed as a mixture of the identity impulse response $I$ and a Gaussian low-pass filter $L$ with standard deviation $\sigma$:

$F = I + \beta s (I - L),$

where $s$ is the value of the camera's sharpness control. The parameters $\sigma$ and $\beta$ are set to roughly match the measured filter's frequency response ($\sigma = 0.55$ and $\beta = 0.1$).

This convolution produces pixels with negative values. The premultiplied alpha representation still allows these pixels to be blended correctly. However, they are stored in an 8-bit unsigned integer value, so we add a +128 offset to prevent underflows. Prior to this, we also divide each value by 4. Upon storage in the 8-bit texture, this has the effect of quantizing each pixel to 6-bit resolution to match the quantization artifacts observed in the Fire-i's output.

4. Even though the Fire-i's filter is known not to be symmetric, we use a symmetric filter here. This is because any image shift produced by the Fire-i is already compensated by the visual tracking system. Any extra shift introduced in the compositing stage would misalign the virtual graphics.
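For concreteness, a NumPy sketch of the tap construction and the offset storage scheme just described, assuming the formula as reconstructed above, 0-255 color units, and the power-on default $s = 80$ (the paper's actual implementation is a GLSL shader):

    import numpy as np

    s, sigma, beta = 80, 0.55, 0.1      # sharpness setting and fitted values

    x = np.arange(-3, 4)                # tap positions of a seven-tap filter
    L = np.exp(-x**2 / (2.0 * sigma**2))
    L /= L.sum()                        # normalized Gaussian low-pass taps
    I = (x == 0).astype(float)          # identity impulse response
    F = I + beta * s * (I - L)          # sharpening taps; F.sum() == 1

    def store_8bit(v):
        # Divide by 4 (quantizing to 6-bit resolution over a 0-255 range),
        # then add +128 so negative filtered values survive in a uint8 texture.
        return np.clip(np.round(v / 4.0) + 128.0, 0, 255).astype(np.uint8)

    def load_8bit(q):
        # Approximate inverse applied when the texture is read back.
        return (q.astype(float) - 128.0) * 4.0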
8. YUV blending: The rendered image is blended into the input video frame. At this stage, each rendered color channel has its own alpha channel, which is interpolated from the Bayer images alongside the color component. We perform the 2×2 box interpolation described in Fig. 5, with a slight modification: The interpolation pattern for the blue channel is harmonized to that of the red and green channels, i.e., the 2×2 box is placed so the physical sensel is in the top-right corner. This is done to avoid a 1-pixel shift of the blue channel in the reconstructed image, as any such shift produced by the Fire-i would already be reproduced by our method's chromatic aberration stage.

The blending procedure effectively converts the YUV frame to RGB, blends it with each rendered color channel in turn, and then converts back to YUV. Wary of finite numerical precision, we rearrange these operations to ensure that source video pixels not covered by graphics ($\alpha = 0$) remain completely unchanged. The output is a composited 640×480-pixel image with a YUV triplet per pixel.
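The text does not spell the rearrangement out; one algebraic form with the stated property follows. Because RGB to YUV is affine ($yuv = A \cdot rgb + b$), compositing premultiplied graphics over a video pixel can be written as $out = video + (A \cdot rgb_{pre} + \alpha(b - video))$, which returns the video pixel bit-exactly when $\alpha = 0$ and $rgb_{pre} = 0$. A NumPy sketch with illustrative BT.601-style coefficients and, for brevity, a single alpha rather than the per-channel alphas described above:

    import numpy as np

    # RGB -> YUV is affine: yuv = A @ rgb + b.  Illustrative BT.601-style
    # coefficients; the camera's actual matrix may differ.
    A = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    b = np.array([0.0, 0.5, 0.5])

    def blend_yuv(video_yuv, rgb_pre, alpha):
        # Identical to YUV->RGB, over-blend, RGB->YUV, but rearranged so that
        # alpha == 0 (and rgb_pre == 0) leaves the video pixel untouched:
        #     out = (1 - a) * video + A @ rgb_pre + a * b
        #         = video + (A @ rgb_pre + a * (b - video))
        return video_yuv + (A @ rgb_pre + alpha * (b - video_yuv))

    video = np.array([0.6, 0.45, 0.55])        # an arbitrary YUV pixel
    print(blend_yuv(video, np.zeros(3), 0.0))  # -> exactly the input video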
9. Chroma split and squash: To match the color resolution of the video input, the UV component of the blended image is horizontally subsampled by a factor of four using a uniform box filter. This splits the 640×480 YUV image into a 640×480 Y image and a 160×480 UV image.

10. Chroma recombine: The blended image is converted to RGB, using the full-resolution luminance (Y) information and the low-resolution chrominance (UV) without interpolation.
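A NumPy sketch of these last two steps (ours, not the paper's shader code; array shapes are (height, width), and the final per-pixel YUV to RGB conversion is omitted):

    import numpy as np

    def chroma_split(yuv):
        # Split a (480, 640, 3) YUV image into full-resolution Y and a UV
        # image horizontally box-filtered down by four (YUV-411 chroma).
        y = yuv[:, :, 0]
        uv = yuv[:, :, 1:].reshape(480, 160, 4, 2).mean(axis=2)  # 160 wide
        return y, uv

    def chroma_recombine(y, uv):
        # Rebuild full resolution by replicating each UV sample across four
        # columns (no interpolation, as in step 10) before YUV -> RGB.
        return np.dstack([y, np.repeat(uv, 4, axis=1)])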
Fig. 12. Compositing comparison with a test pattern. Two test patterns are in each image, of which the left is an actual printout in the world and the right is the inserted virtual version. Our new compositing method makes the inserted version look more like the real pattern. Particularly noticeable are the chromatic aberrations introduced near the image edges. (a) Old method. (b) New method.

6 RESULTS

The results presented here are also included as losslessly compressed PNG images in the supplemental materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.210.

We have integrated the above compositing method into our "Parallel Tracking and Mapping" markerless tracking system [10], [12]. On a desktop computer with a fast graphics card (NVIDIA GeForce 9800 GTX), the new compositing method adds approximately 6 ms of rendering overhead per frame compared to our previous method, which undistorted the video frame, drew graphics into it, and redistorted the composite. On more modest mobile hardware (NVIDIA GeForce 8600M GT), the overhead is more substantial, at 15 ms of extra rendering cost per frame. In our experiments, this does not impact frame rate (the system maintains 30 Hz) since compositing takes place on the GPU while the CPU already tracks the next frame; however, it adds 15 ms of latency to the display.

Figs. 10, 11, and 12 show some side-by-side comparisons of the proposed compositing method with a standard compositing approach which blends the warped input texture onto the video frame directly. As well as some simple inserted objects, the results show a texture of some test patterns superimposed next to a printout of the same texture. In most instances, the new method provides quite superior integration of the real and virtual images.

Fig. 13. Two sharpening settings with a virtual Darth Vader figure. The compositing method reads the camera's sharpness setting and adjusts its own sharpening filter accordingly. At the default value of 80, sharpening artifacts are visible both in the real scene and the augmented graphics.

Fig. 14. Real text interleaved with virtually inserted text at different sharpening settings. The rendition of fine structures like text is a good test for the accuracy of our simulation of the whole processing pipeline, in particular the demosaicing method. A relatively good match is achieved, especially compared to a standard compositing approach, although the heavy coloration the Fire-i exhibits at low sharpness settings remains unmodeled.

Fig. 15. Drawbacks of the proposed method. (a) The method is indiscriminate and will degrade all graphics rendered in 3D; for example, here, the cross hair is dull. (b) The next image, showing the standard compositing method's output, shows how bright the cross hairs should be. (c) The same applies to motion blur, which is applied to all elements of the screen, even those like the cross hairs which always remain in the center of the display; here, the cross hairs have been blurred to the point of being unrecognizable. (d) Finally, the rendering of blur relies on accurate velocity measurements from the tracking system; the last image shows a misestimation of velocity, and hence, incorrect blur (heavily blurred virtual brown shapes near the top of the crop).
An advance presented in this paper over our previous short paper [11] is more accurate simulation of in-camera processing, and in-camera sharpening in particular. The most notable difference this has made is that the method presented here is effective with the Fire-i set to its rather aggressive power-on default sharpening of 80. In previous work, sharpening was lowered to 25 to avoid the obvious artifacts the higher setting produces. Fig. 13 shows a comparison between the two sharpness settings.

Fig. 14 displays more results at different sharpness settings, this time comparing text printed on a piece of paper to text in the same font inserted virtually. Ideally, there should be no visual difference between the real and virtual text. Again, the method does a reasonable job of emulating the sharpening artifacts which appear at higher sharpness settings. The fact that our method employs a symmetric sharpening filter, whereas the Fire-i's is biased to one side, is apparent on close inspection, but not glaringly intrusive to the casual observer. More obvious is the low-frequency red-blue banding visible only in the Fire-i image at low sharpness settings, which indicates that our understanding of the camera's imaging pipeline is far from perfect.

The compositing method presented here is not without drawbacks. Some problems arise from the decision to apply sensor effects as a postprocessing step: For example, objects which should appear stationary in the image are still blurred, which looks unnatural. Further, items which are very bright because they are user interface components (which should stand out from the background) are treated the same way as any other graphics, and the resulting look is dull. Solving these drawbacks would require modifications to the 3D rendering stage. Some of the problem cases are illustrated in Fig. 15.
Further problems include misestimation of camera velocity by the tracking system, which is likely due to the often-false assumption of zero acceleration. This misestimation causes a motion blur mismatch between real and virtual objects. Beyond this, many sensor effects are not simulated correctly: For example, the real chromatic aberrations are more purple than the simulated versions, and sharpening artifacts appear more monochromatic on the Fire-i (perhaps in-camera sharpening is performed on Y rather than RGB). These small infidelities are due to imperfect understanding of the processes in the camera as well as implementation and calibration constraints.

7 RESULTS WITH A RAW BAYER CAMERA

Some of the imaging steps taken by the Fire-i camera are a nuisance. For computer vision (and AR) applications, it would often be preferable to have the camera data as soon as possible, so any further processing could be user-controlled on the host computer. Some new cameras now make the raw Bayer image available to the computer; one such camera is the IDS μEye. This is a USB CMOS device with a global shutter. It sends raw Bayer data over the USB bus, allowing the user to choose any appropriate de-Bayering method in software. In other respects (form factor and sensor size), it is similar to the Fire-i and could easily be used in the same applications.

To form composite images for this camera, we truncate the blending method of Fig. 9 after stage 5. Instead of performing blending and YUV conversion, the individual color channel images are blended directly using the appropriate pixels from the camera's Bayer image. The color channel images are finally converted to an RGB composite using bilinear interpolation.⁵ As for the Fire-i pipeline, all processing is performed on the graphics card; however, the truncated pipeline here requires only circa 4 ms on an NVIDIA GeForce 9800 GTX due to its shorter length.

5. For this camera, it is helpful to combine the Gr and Gb Bayer-mask images into a single Green image rotated 45 degrees, as this allows the graphics hardware to perform bilinear interpolation directly, as described in [11].
Fig. 16 shows results obtained with this simplified method. The quality of images delivered by the μEye is higher than the quality obtained from the Fire-i due to the lack of destructive on-camera processing (and probably also due to more modern sensor design). At the same time, our compositing method has an easier time attempting to match the camera effects, resulting in a good match between real and virtual imagery. Mismatches at this stage are mostly due to incorrect parameters, e.g., for motion blur or the basic lens softness/antialiasing filter blur function, and the fact that our method does not model depth defocus.

Fig. 16. Virtual text superimposed onto a printout of real text. Ideally, the virtual text should look identical in character to the real text. These images are from the μEye camera; here, the availability of raw Bayer data means that our new compositing method can achieve good fidelity, especially when compared to the standard blending method.

8 CONCLUSIONS

This paper has presented a simulation of the behavior of small Webcams by postprocessing the ideal images produced by the standard OpenGL pipeline. Results show that the integration of real and virtual graphics can be improved by simulating some of the various artifacts that give the video image its characteristic look.

ACKNOWLEDGMENTS

This work was supported by EPSRC grant EP/D037077/1.

REFERENCES

[1] B.E. Bayer, "Color Imaging Array," US Patent 3971065, July 1976.
[2] J. Chen, G. Turk, and B. MacIntyre, "Watercolor Inspired Non-Photorealistic Rendering for Augmented Reality," Proc. ACM Symp. Virtual Reality Software and Technology (VRST '08), pp. 231-234, 2008.
[3] F. Devernay and O.D. Faugeras, "Straight Lines Have to Be Straight," Machine Vision and Applications, vol. 13, no. 1, pp. 14-24, 2001.
[4] J. Fischer, D. Bartz, and W. Strasser, "Enhanced Visual Realism by Incorporating Camera Image Effects," Proc. Int'l Symp. Mixed and Augmented Reality (ISMAR '06), pp. 205-208, Oct. 2006.
[5] J. Fischer and D. Bartz, "Stylized Augmented Reality for Improved Immersion," Proc. IEEE Conf. Virtual Reality (VR '05), pp. 195-202, 2005.
[6] T. Florin, "Simulation of a Digital Camera Pipeline," Proc. Int'l Symp. Signals, Circuits and Systems (ISSCS '07), July 2007.
[7] M. Haller, F. Landerl, and M. Billinghurst, "A Loose and Sketchy Approach in a Mediated Reality Environment," Proc. Third Int'l Conf. Computer Graphics and Interactive Techniques in Australasia and South East Asia (GRAPHITE '05), pp. 371-379, 2005.
[8] K. Irie, A.E. McKinnon, K. Unsworth, and I.M. Woodhead, "A Technique for Evaluation of CCD Video-Camera Noise," IEEE Trans. Circuits and Systems for Video Technology, vol. 18, no. 2, pp. 280-284, Feb. 2008.
[9] K. Jacobs and C. Loscos, "Classification of Illumination Methods for Mixed Reality," Computer Graphics Forum, vol. 25, pp. 29-51, Mar. 2006.
[10] G. Klein and D. Murray, "Parallel Tracking and Mapping for Small AR Workspaces," Proc. Int'l Symp. Mixed and Augmented Reality (ISMAR '07), Nov. 2007.
[11] G. Klein and D. Murray, "Compositing for Small Cameras," Proc. Int'l Symp. Mixed and Augmented Reality (ISMAR '08), 2008.
[12] G. Klein and D. Murray, "Improving the Agility of Keyframe-Based SLAM," Proc. 10th European Conf. Computer Vision (ECCV '08), pp. 802-815, Oct. 2008.
[13] B.D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Proc. Seventh Int'l Joint Conf. Artificial Intelligence, pp. 674-679, 1981.
[14] B. Okumura, M. Kanbara, and N. Yokoya, "Augmented Reality Based on Estimation of Defocusing and Motion Blurring from Captured Images," Proc. Int'l Symp. Mixed and Augmented Reality (ISMAR '06), pp. 219-225, Oct. 2006.
[15] T. Porter and T. Duff, "Compositing Digital Images," SIGGRAPH Computer Graphics, vol. 18, no. 3, pp. 253-259, 1984.
[16] Sony, ICX098BQ Diagonal 4.5 mm (Type 1/4) Progressive Scan CCD Image Sensor with Square Pixel for Color Cameras, http://www.unibrain.com/download/pdfs/Fire-i_Board_Cams/ICX098BQ.pdf, Aug. 2009.
[17] B. Watson and F. Hodges, "Using Texture Maps to Correct for Optical Distortion in Head-Mounted Displays," Proc. IEEE Virtual Reality Ann. Symp. (VRAIS '95), pp. 172-178, Mar. 1995.
Georg Klein received the doctorate degree from the University of Cambridge in 2006. He was a postdoctoral research assistant in Oxford's Active Vision Laboratory until August 2009. He now works at Microsoft Corporation. His research interest is mainly the development and application of visual tracking techniques for Augmented Reality. He is a member of the IEEE.

David W. Murray received the graduate degree with first class honors in physics and the doctorate degree in low-energy nuclear physics from the University of Oxford in 1977 and 1980, respectively. He was a research fellow in physics at the California Institute of Technology before joining the General Electric Company's research laboratories in London, where he developed research interests in motion computation, structure from motion, and active vision. He moved to the University of Oxford in 1989 as a university lecturer in engineering science and a fellow of St Anne's College, and was made a professor of engineering science in 1997. His research interests continue to center on active approaches to visual sensing, with applications in surveillance, telerobotics, navigation, and wearable computing. He is a fellow of the Institution of Electrical Engineers in the UK and is a member of the IEEE.