ABSTRACT
In this paper we describe a virtual clothing system for retail and design, created for
Bodymetrics Ltd. In the retail setup we installed the system at Selfridges, a well-known department store in London. To our knowledge, this is the world's first installation of a fully automatic virtual try-on system. Sizing and body landmark information is extracted from 3D scanner data in a fully automatic process, and customers can try on garments from a database on their virtual selves within seconds. The system uses very fast
numerical methods and collision detection mechanisms that harness the capabilities of
graphics hardware for cloth body collision detection and response. Fabric property
measurements from a Kawabata evaluation system are mapped onto our cloth model to
ensure appropriate virtual drape behaviour. The information required in order to simulate
particular garments is provided in a format tailored to the computational requirements for
defining the garment pieces, seaming, partitioning on the body, cloth reflectance and
patterning, and the location of accessories such as buttons. Based on the realistic visual feedback, customers are able to decide whether or not to buy. The whole process, in which a customer is scanned, registered to the system, and virtually tries on ten different garments in different sizes, takes less than ten minutes. The system's acceptance is shown by the high interest and demand, the willingness of customers to pay for the service, and increased sales of items available in the virtual garment database. The system was also
designed to be usable over the Internet and has been made freely available. We show that the
same system in combination with global illumination for near photo-realistic augmented
reality is also of interest for garment designers.
… approach a garment designer with a specific occasion in mind where she wants to wear a new design. For a designer it would then be beneficial to experiment with the new garment in the target environment and to be able to present near photo-realistic images of the customer wearing the new clothes at the specified location. Such a system will save time and cost in the process of clothing design.

… of one spring, however, can lead to over-elongation of other springs and may require several iterations of the post-correction steps to resolve. Vassilev et al. (2001) improved this by modifying the particles' velocities instead of their positions.
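To make the two correction strategies concrete, a minimal mass-spring sketch is given below. The 10% stretch limit, the sweep count, and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def correct_positions(p, edges, rest_len, max_stretch=1.1):
    """Position post-correction in the spirit of Provot (1995): move
    the endpoints of any over-elongated spring back towards each
    other. Correcting one spring can over-elongate its neighbours,
    so several relaxation sweeps may be needed."""
    for _ in range(10):  # a few relaxation sweeps
        done = True
        for (i, j), l0 in zip(edges, rest_len):
            d = p[j] - p[i]
            dist = np.linalg.norm(d)
            if dist > max_stretch * l0:
                # split the excess length equally between both ends
                corr = 0.5 * (dist - max_stretch * l0) * d / dist
                p[i] += corr
                p[j] -= corr
                done = False
        if done:
            break
    return p

def correct_velocities(p, v, edges, rest_len, dt, max_stretch=1.1):
    """Velocity-based variant in the spirit of Vassilev et al. (2001):
    cancel the separating component of the relative velocity for any
    spring that the next time step would over-stretch."""
    for (i, j), l0 in zip(edges, rest_len):
        d = (p[j] + v[j] * dt) - (p[i] + v[i] * dt)  # predicted offset
        dist = np.linalg.norm(d)
        if dist > max_stretch * l0:
            n = d / dist
            rel = np.dot(v[j] - v[i], n)  # separating speed along spring
            if rel > 0.0:
                v[i] += 0.5 * rel * n
                v[j] -= 0.5 * rel * n
    return v
```

Modifying velocities avoids the cascade of position fix-ups, since no particle is moved directly and neighbouring springs are not suddenly stretched by the correction itself.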
RJTA Vol. 9 No. 1 2005
… simulation has converged to a certain degree before we swap to visualise the simulation of the new choice. The customer thus gets important visual feedback from the seaming simulation animation, but at the same time we make sure she always appears dressed.

5. Natural Global Illumination

… of the scene at a certain point; the light probe should therefore be taken with the chrome ball located near where the client is placed in the target photograph. An example light probe of our designer's office is depicted in Figure 4.

Fig. 4. Light probe of the designer's office.
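To make the chrome-ball geometry concrete: each pixel of the ball photograph sees the environment reflected in the sphere, so a single image encodes incident light from almost every direction. A minimal sketch of the pixel-to-direction mapping, under an assumed orthographic camera looking along -z (this setup is our illustration, not the paper's description):

```python
import math

def probe_direction(u, v):
    """Map normalised mirrored-ball image coordinates (u, v) in
    [-1, 1] to the world direction the light reflected into the
    camera came from. Returns None for pixels outside the ball."""
    r2 = u * u + v * v
    if r2 > 1.0:
        return None
    # Surface normal of the sphere at this pixel.
    nx, ny, nz = u, v, math.sqrt(1.0 - r2)
    # Reflect the view vector view = (0, 0, -1) about the normal:
    # d = view - 2 (view . n) n, with view . n = -nz.
    dot = -nz
    return (-2.0 * dot * nx, -2.0 * dot * ny, -1.0 - 2.0 * dot * nz)
```

The centre of the ball maps to the direction pointing back at the camera, while the silhouette maps to directions behind the ball, which is why one photograph captures (almost) the full sphere of illumination.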
… has to be animated with the garment to approach the pose in the photograph as closely as possible. An automatic character animation system combined with an image registration method would be required to find the right pose automatically. For this system we did not concentrate on automatic model alignment. Instead, we superimposed the virtual model over the client's photograph and adjusted the virtual camera to allow us to visually match photograph and scan. In addition, rather than attempting to register the limbs, in particular the client's arms, in the scan to match the photograph, we chose to replicate appropriate background texture over the original position of the arms in the photograph. Part of the subject's left arm is shown in Figure 8 to illustrate this process.

Fabric Reflectance Estimation

Owing to its micro- and meso-structure, the bidirectional reflectance distribution function (BRDF) of fabric can be very complicated. We approximated the spatially varying fabric reflectance, described by a bidirectional texture function (BTF), by taking photographs of a sample of the fabric in diffuse light. The approximation is therefore described by a Lambertian texture. Colour and light calibration was achieved by inserting a MacBeth colour checker in the image with the fabric. The image is balanced afterwards so that the known values of the MacBeth chart best match those of the image in a least-squares sense. Since our fabric reflectance sample was not always large enough to create the whole virtual garment, we used a texture synthesis method briefly described below.

Masking the Rendering for Photo Composition

In order to subtract a garment from the background of a synthetic image it is necessary to create a mask image. The problem is that client and garment interact, and it would be difficult to generate a good mask manually to sufficient accuracy. Instead, we use an automatic depth-buffer approach. We create a depth-buffer image for the 3D scan and background (the light probe) without the garment, and another depth-buffer image of the scan with the simulated garment. A simple filter function is then used to generate a mask image. The filter function returns 1 if the depth value of the garment is greater than the depth value of the client, and 0 otherwise. An example mask is depicted on the left in Figure 5, and on the right a rendered jacket extracted from the background is depicted.

Fig. 5. Left: automatically generated mask to extract a garment from a background; right: extracted garment.

Synthesis of Occluded Background

When merging images, usually the background where the new image is inserted is lost. Sometimes it is also required to recreate parts of the background texture occluded by a real garment. For example, parts of our client's real garments may not be sufficiently covered by the synthetic garment and therefore need to be replaced by the background expected in the image, as indicated in Figure 8. Ideally, an image of the background without the client is available; if this is not the case, the background can be approximated by use of texture synthesis techniques. Generally, texture synthesis takes as input an area of the image that is similar to what should be regenerated. The area that should be recovered is marked and synthesized with a texture that most likely would have covered that area, according to a probability distribution of the specified sample region. We employed a technique described by Harrison (2001) for this, but the results, though promising, were not as good as those we could obtain by using parts of a background image to replace areas in the foreground of Figure 9.

6. Results

The system was tested in two setups: in the retail service, without use of the global illumination methods, and in the augmented reality garment design application, with the natural global illumination methods.

Retail Installation

Statements about the retail service are based on informal interviews with some of the operators that run the service. Generally, these operators
were female students from the London College of Fashion (LCF). Their education provides a very good background for giving recommendations on the fit of garments. Also, they know how to work with computers, though they are not expert computer users. A brief training of less than an hour is sufficient for them to run the service.

System Reliability

About one in every fifty customers is scanned a second time owing to size and landmark extraction problems of the TC2 software. No problems have been reported with the visualisation and simulation system.

Customer Feedback

At the moment the system is for women only; this limitation is because of the type of garments available for simulation. Ten different designer jeans brands in five different sizes are offered for virtual try-on at the moment. New customers are added to the system every day. They read about the service in the news, have heard about it on TV, or are informed by word of mouth. They come mainly alone or with a male friend. Customers normally come to buy jeans anyway, and they are willing to pay for the service. In general, they find the system useful and the real-time garment visualisation sufficiently realistic. A few customers completely trust the simulation and would buy a garment without trying it on in person. Operators recommend trying on the real garment before buying, however. Used in this manner, the system helps customers find the best-fitting size and style of jeans that they like. Customers are very interested in trying out new virtual garment items added to the database.

We found that most of the customers are very confident with their body, and the embarrassment about viewing one's own body suggested by Istook and McKinnon (1999) occurs only rarely.

Some of the customers would like to enter the scanner with high heels to appear differently. Quite a few female customers seem to be encouraged by their male friends to try out the system. It is also mainly the males who are interested in the data format of the scan and whether they could access it from the Internet.

Other Visitors

There are many people visiting the installation just because of interest in the technology. These are mainly people from the film and games industries, but also university students who want to find out what the system does and how it works.

Internet Access

Once scanned, customers can access the system over a secure link on the Internet. There are now several thousand scans stored in the central database of Bodymetrics.

Visual Quality

Images that illustrate the visual quality of the simulation are given below. Figure 7 shows one of our customers in default wear on the left. A front view of the customer in a pair of jeans is given in the middle of Figure 7, and a view from the back of the same customer in the same pair of jeans is depicted on the right. An illustration of the visual feedback a customer would get if a pair of jeans does not fit well is depicted in Figure 6. On the left we show a customer in jeans that are too tight for the seams to join properly, and on the right jeans that are too large for this customer and therefore appear very baggy.

Augmented Reality Garment Design

We collaborated with a designer to test the system in a real-world design setting. As a simple garment, our designer was asked to make a jacket. In order to verify the simulation, our designer not only created the jacket in the CAD system for simulation but also made a real toile of the garment. The designer's client was scanned with the TC2 system, and we took photographs of the client in the real jacket for comparison. A light probe was taken in our designer's office, as depicted in Figure 4. This was done under real-life conditions, with the office otherwise in near-normal use, as can be seen from the fact that the door is slightly open in Figure 8 (and therefore also in Figure 9, right) but closed in Figure 9, left. Figure 9 depicts a client in a real jacket toile on the left and, on the right, the client dressed in a virtual jacket. We note that owing to the natural illumination the simulated garment fits realistically into the photograph. Since the hands were cropped from the 3D scan, they are not present in the image with the virtual garment either. There are creases in the real jacket which do not appear in the simulation; in general, the simulation appears much smoother. This may be for two reasons. Firstly, the garment simulation does not account for memory effects of the fabric; secondly, the resolution of the mass-spring mesh could be increased to generate more detailed …
Fig. 6. Jeans that do not fit well; left: a customer in jeans that are too tight; right: a customer in very baggy jeans.
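The depth-buffer mask filter described in the photo-composition section above reduces to a per-pixel comparison of the two depth images. A minimal sketch, assuming both depth buffers are rendered from the same virtual camera (array names are ours; the comparison direction follows the paper's stated convention):

```python
import numpy as np

def garment_mask(depth_client, depth_with_garment):
    """Per-pixel mask filter: a pixel is marked as garment (1) where
    the depth value in the render with the simulated garment is
    greater than the depth value in the client-only render, and as
    client/background (0) otherwise."""
    return (depth_with_garment > depth_client).astype(np.uint8)
```

Multiplying the rendered image by this mask, and the photograph by its complement, composites the synthetic garment over the photograph without any manual matting.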
Fig. 9. Left: a client in the real jacket toile; right: the naturally illuminated and simulated jacket composited with a photograph of the client.
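The least-squares colour calibration against the MacBeth chart, described in the fabric reflectance section above, can be sketched as a linear fit. The 3×3 matrix model below is our assumption, since the paper does not specify the exact form of the fit:

```python
import numpy as np

def colour_balance_matrix(measured, reference):
    """Fit a 3x3 colour matrix M minimising ||measured @ M - reference||^2,
    where each row of `measured` holds the RGB values of one colour-checker
    patch sampled from the photograph and each row of `reference` holds the
    corresponding known patch values."""
    M, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def balance_image(img, M):
    """Apply the fitted matrix to an H x W x 3 image."""
    h, w, _ = img.shape
    return (img.reshape(-1, 3) @ M).reshape(h, w, 3)
```

With the 24 chart patches as rows, the fit is heavily over-determined, so a few mis-sampled patches do not dominate the balance.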
[9] Desbrun, M., Schroeder, P. and Barr, A. (1999), Interactive animation of structured deformable objects, Proceedings of Graphics Interface (GI 1999), pages 1-8, Canadian Human-Computer Communications Society.
[10] Eberhardt, B., Weber, A. and Strasser, W. (1996), A fast, flexible, particle-system model for cloth draping, IEEE Computer Graphics and Applications, 16(5):52-59, September.
[11] Eischen, J.W., Deng, S. and Clapp, T.G. (1996), Finite element modeling and control of flexible fabric parts, IEEE Computer Graphics and Applications, 16(5):71-80, ISSN 0272-1716, September.
[12] Etzmuss, O., Eberhardt, B. and Hauth, M. (2000), Implicit-explicit schemes for fast animation with particle systems, Computer Animation and Simulation 2000, pages 138-151, Eurographics, August. ISBN 3-211-83392-7.
[13] Feynman, C.R. (1986), Modeling the Appearance of Cloth, MS thesis, EECS Dept., MIT, May.
[14] Harrison, P. (2001), Non-hierarchical procedure for re-synthesis of complex textures, in V. Skala, editor, WSCG 2001 Conference Proceedings.
[15] Home page of the Textile Clothing Technology Corporation, www.tc2.com.
[16] Home page of the UK national sizing survey, www.sizeuk.org.
[17] Istook, C.L. and McKinnon, L. (1999), Psychological Issues Concerning Body Scanning, Annual International Textile and Apparel Association Meeting, Santa Fe, NM, November 10-13.
[18] Kawabata, S. (1980), The Standardization and Analysis of Hand Evaluation, The Textile Machinery Society of Japan.
[19] Lester, H. and Arridge, S. (1999), A survey of hierarchical non-linear medical image registration, Pattern Recognition, 32(1):129-149.
[20] Lin, M. and Gottschalk, S. (1998), Collision detection between geometric models: a survey.
[21] Provot, X. (1995), Deformation constraints in a mass-spring model to describe rigid cloth behaviour, Proceedings of Graphics Interface, pages 141-155.
[22] Provot, X. (1997), Collision and self-collision handling in cloth model dedicated to design, Computer Animation and Simulation 97, pages 177-190, Budapest, Hungary, September. ISBN 3-211-83048-0.
[23] Rector, B.E. and Sells, C. (1999), ATL Internals (The Addison-Wesley Object Technology Series), Pearson Education, 1st edition, March 31. ISBN 0201695898.
[24] Tchou, C. and Debevec, P. (2001), HDRShop.
[25] Terzopoulos, D., Platt, J., Barr, A. and Fleischer, K. (1987), Elastically deformable models, Computer Graphics (Proc. SIGGRAPH 87), 21(4):205-214.
[26] Teschner, M., Kimmerle, S., Heidelberger, B., Zachmann, G., Raghupathi, L., Fuhrmann, A., Cani, M.-P., Faure, F., Magnenat-Thalmann, N., Strasser, W. and Volino, P. (2004), Collision detection for deformable objects, Eurographics, September.
[27] Vassilev, T., Spanlang, B. and Chrysanthou, Y. (2001), Fast cloth animation on walking avatars, Computer Graphics Forum, 20(3):260-267, ISSN 1067-7055.
[28] Volino, P. and Magnenat-Thalmann, N. (1995), Collision and self-collision detection: efficient and robust solutions for highly deformable surfaces, Computer Animation and Simulation 95, pages 55-65, September. ISBN 3-211-82738-2.
[29] Volino, P. and Magnenat-Thalmann, N. (2001), Comparing efficiency of integration methods for cloth simulation, Computer Graphics International 2001, pages 265-272, July. ISBN 0-7695-1007-8.
[30] Volino, P., Courchesne, M. and Magnenat-Thalmann, N. (1995), Versatile and efficient techniques for simulating cloth and other deformable objects, Proc. SIGGRAPH 95, pages 137-144.
[31] Ward, G. and Shakespeare, R. (1998), Rendering with Radiance: The Art and Science of Lighting Visualization, Morgan Kaufmann Publishers Inc.
[32] Weil, J. (1986), The synthesis of cloth objects, in David C. Evans and Russell J. Athay, editors, Computer Graphics (SIGGRAPH 86 Proceedings), volume 20, pages 49-54, August.
[33] Wernecke, J. (1994), The Inventor Mentor: Programming Object-Oriented 3D Graphics with Open Inventor, Release 2, Addison-Wesley.
[34] Zhang, D. and Yuen, M.M.F. (2000), Collision detection for clothed human animation, 8th Pacific Conference on Computer Graphics and Applications, pages 328-337, October. ISBN 0-7695-0868-5.
[35] Zhang, Z. (2000), A flexible new technique for camera calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334.