Abstract
Usability is a major issue in virtual-reality-based design tools (VRAD), since their interaction techniques have not yet been fully investigated. Human factors such as pointing precision, fatigue, hand vibration, lack of limb support, and interaction anisotropy should be taken into account to obtain an interface that is more effective than its 2D counterpart. This work presents an ongoing study addressing human performance in VR during common CAD tasks: picking, pointing, and line drawing. The tests confirm a performance reduction along the user's head-to-hand direction, mainly due to occlusion and to the lack of appropriate feedback. Three virtual tools are presented here to overcome the interaction anisotropy: the Ortho Tool, the Smart Transparency, and the Smart Object Snap. The new interface has shown better user performance and improved model understanding. The results achieved in this work contribute not only to VRAD development, but also to other virtual reality applications, because their context can easily be extended.
1. Introduction
Rapid developments in computer graphics, position tracking, image recognition, and wireless connections are nowadays disclosing new and interesting features for the next generation of virtual-reality-based design tools (VRAD).
The main advantages of VRAD over traditional 2D CAD tools can be summarized as follows:
-
2. Related Work
Virtual reality is not a novel technology; thirty years of study have concerned two different fields: applied research and human-computer interaction (HCI).
Applied research in VRAD was carried out in the past using various types of input devices (3D mouse, gloves, wand, fly stick, gaze control, etc.), different configurations (immersive VR, semi-immersive VR, desktop VR, etc.), and - for specific modelling tasks - solid modelling [17], free-form surfaces [6], conceptual styling [19], assembly [18], virtual prototyping [3], and sculpting [5]. VRAD implementations are scientifically relevant, but in most cases the related literature lacks a systematic performance evaluation, and sometimes misses the intent of defining clear development guidelines. At present, however, the VRAD interface is far from mature, and further research is needed to understand the interaction basics and to define standards and benchmarks, as was done for traditional CAD.
Human-computer interaction (HCI) research, on the other hand, follows a general approach: the guidelines it provides must then be applied to the specific VRAD modelling tasks, and the results achieved can vary from case to case.
The simplest form of interaction in 3D space - pointing - was explored by many authors.
Boritz [4] investigated 3D point location using a six-degree-of-freedom input device. Four
different visual feedback modes were tested: monoscopic fixed viewpoint, stereoscopic fixed
viewpoint, monoscopic head-tracked perspective, and stereoscopic head-tracked perspective.
The results indicate that stereoscopic performance is superior to the monoscopic one, and that
asymmetries exist both across and within axes.
Zhai et al. [21] presented an empirical evaluation of a three-dimensional interface, decomposing tracking performance into six components (three in translation and three in rotation). Tests revealed that the subjects' tracking errors in the depth dimension were about 45% (with no practice) to 35% (with practice) larger than those in the horizontal and vertical dimensions. It was also found that subjects initially had larger tracking errors along the vertical axis than along the horizontal axis, likely due to their attention allocation strategy.
Poupyrev et al. [14] developed a test bed which evaluates manipulation tasks in VR in an application-independent way. The framework provided a systematic task analysis of immersive manipulation, and suggested a user-centred non-Euclidean reference frame for the measurement of VR spatial relationships.
Grossman et al. [10] investigated 3D pointing using a true volumetric display, where the target size varied in three spatial dimensions. The effect of the user's physical movement angle on pointing performance was considered. Results show that target acquisition along the depth direction has a greater impact on performance than along the other two axes. The authors proposed and validated an extended Fitts' law model which accounts for the movement angle.
Mine et al. [12] explored manipulation in immersive virtual environments using the user's body as a reference system. They presented a unified framework for VE interaction based on proprioception, a person's sense of the position and orientation of his/her body and limbs. Tests were carried out on the body-relative interaction techniques presented.
The short survey presented here illustrates how interaction in VR is still a largely unexplored topic, and how, at the moment, interface usability stands in the way of the development of VRAD applications. Many research studies have pointed out that user interaction performance varies according to the position of the user's limbs in the virtual environment, but at present no VRAD application takes this issue into account in its interface design.
The purpose of this research is to examine human bias, consistency, and individual
differences when pointing, picking and line sketching in a virtual environment (VE), in order
to provide useful information and solutions for future VRAD improvement.
3. Experiment Design
The aim of this set of tests is to give a qualitative and quantitative evaluation of human performance in a general VRAD application. We selected a set of the most frequent tasks carried out in a CAD system: pointing, picking, and line sketching. These tasks are similar in both 2D and 3D CAD systems. Using a semi-immersive head-tracked stereoscopic display and a 6DOF pointer, the following tests were carried out:
- analysis of the sketched lines traced by the user when following a virtual geometry, in order to discover preferred sketching methods and modalities;
- measurement of the user's ability to pick points in 3D space, in order to evaluate human performance in object selection.
The SpaceXperiment [9] application was used for the tests. The position, orientation, and timestamp of the pointer (pen tip) and of the user's head were recorded for subsequent analysis.
3.1. Participants
Voluntary students from the faculty of mechanical engineering and architecture were recruited. All participants were regular users of a Windows interface (mouse and keyboard), but none of the subjects had been in a VR environment before. All the users were given a demonstration of the experiments and were allowed to interact in the virtual workspace for approximately 20 minutes, in order to become acquainted with the stereo perception of the virtual space. Moreover, all the users performed a double set of tests: the first set was considered a practice session and the second a data collection session. All subjects were right-handed and had normal or corrected-to-normal vision. Informed consent was provided before the test sessions.
3.2. Apparatus
The experiments were conducted in the VR3lab facility at the Cemec (Politecnico di Bari, Italy) on the VR system which normally runs the Spacedesign VRAD application.
The virtual reality system is composed of a vertical screen of 2.20 m x 1.80 m with two polarized projectors (Figure 2) and an optical 3D tracking system by Art [1]. Horizontal and vertical polarizing filters, in conjunction with the user's glasses, make possible so-called passive stereo vision. The experiment was conducted in a semi-dark room.
The tracking system uses two infrared (IR) cameras and IR-reflective spheres (the markers) to calculate the position and orientation of the user's devices in space by triangulation. The markers, which are 12 mm in diameter, are attached to the interaction devices following a unique pattern which allows them to be uniquely identified by the system.
During the test sessions the system records the three-dimensional position of the user's devices, and stores the results in text data files for subsequent off-line analysis.
The user handles a transparent Plexiglas pen with 3 buttons, which is visualized in VR with a
virtual simulacrum. The user is also provided with a virtual palette (a Plexiglas sheet) that can
be used to retrieve information and to access the virtual menus and buttons (Figure 3).
4. Experiments
Three tests were carried out in order to evaluate interaction techniques in VR: pointing, picking, and line sketching.
4.1. Pointing
In this first experiment we investigated the user's accuracy in pointing to a fixed target in virtual space. Each participant was asked to place the tip of the virtual pen as close as possible to the midpoint of a crosshair marker. Once the subject had reached the marker in a stable manner, he/she clicked the pen button and kept the pen still for 5 seconds. Each user repeated the experiment 10 times for 3 different points: MDP (Medium Difficulty Point), HDP (High Difficulty Point), and LDP (Low Difficulty Point). Each experiment recorded the pen position for 5 seconds (on our system this corresponded to approximately 310 sample points per experiment), for a total of 186000 sampled points. We applied a statistical analysis to the measured data to evaluate the mean, the variance, and the deviation from the target point.
The error isotropy was verified in the workspace using a world-fixed reference frame, by projecting the error vectors onto three orthogonal reference directions: horizontal, vertical, and perpendicular to the screen (i.e. depth).
Table 1. Pointing error statistics (mm).
Error | Total deviance | Horizontal range | Vertical range | Depth range
Max   | 17.31          | 7.28             | 9.53           | 19.50
Mean  | 6.21           | 4.81             | 5.29           | 10.12
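The projection onto the screen-aligned frame can be sketched as follows; this is a minimal sketch, where the array names and sample values are illustrative stand-ins, not the recorded data (the screen is assumed perpendicular to the world z axis):

```python
import numpy as np

# Screen-aligned reference frame (unit vectors in world coordinates,
# assuming the screen is perpendicular to the world z axis).
H = np.array([1.0, 0.0, 0.0])   # horizontal
V = np.array([0.0, 1.0, 0.0])   # vertical
D = np.array([0.0, 0.0, 1.0])   # depth (perpendicular to the screen)

def decompose_errors(pen_samples, target):
    """Project each error vector (sample - target) onto the three
    orthogonal reference directions used in the isotropy analysis."""
    errors = np.asarray(pen_samples, float) - np.asarray(target, float)
    return {
        "horizontal": errors @ H,
        "vertical": errors @ V,
        "depth": errors @ D,
        "total deviance": np.linalg.norm(errors, axis=1),
    }

# Three illustrative pen-tip samples (mm) around a target at the origin.
samples = [[1.0, -0.5, 3.0], [0.5, 0.2, -2.0], [-0.8, 0.9, 4.0]]
components = decompose_errors(samples, [0.0, 0.0, 0.0])
```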
4.2. Sketching
The aim of this experiment was to evaluate the user's ability to sketch, as precisely as possible, a reference geometry displayed in the 3D environment. This test simulated the typical CAD task of transferring a geometrical idea into an unconstrained 3D sketch.
The user traced a freehand sketch simply by moving the pen while pressing its button. The subjects repeated the task for different patterns: horizontal line, vertical line, depth line (a line perpendicular to the screen), and a rectangular frame aligned with the screen plane. The users were required to perform the experiment 5 times for the 4 geometries with 5 different modalities: in the most comfortable fashion (user's choice), in reversed tracing direction, and at low, medium, and high sketching speed. The combinations of the previous modes were counterbalanced across subjects according to a Latin square, and each condition was performed an equal number of times. We collected a total of 2000 sketches. The divergence of the sketch from the displayed geometry represented the error. As the error metric we considered the deviance, which is the distance between the pen tip and its closest point on the reference geometry. The range of the deviance error was evaluated in each reference direction: horizontal range, vertical range, and depth range.
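For a straight reference line, the deviance of a single pen-tip sample can be sketched as follows (a minimal sketch; the function and variable names are ours for illustration, not the SpaceXperiment API):

```python
import numpy as np

def deviance_to_segment(p, a, b):
    """Deviance error: distance from a pen-tip sample p to its closest
    point on the reference segment from a to b."""
    p, a, b = (np.asarray(x, float) for x in (p, a, b))
    ab = b - a
    # Parameter of the orthogonal projection, clamped to the segment.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return np.linalg.norm(p - closest)

# A pen sample 2 mm above the midpoint of a horizontal reference line.
d = deviance_to_segment([50.0, 2.0, 0.0], [0.0, 0.0, 0.0], [100.0, 0.0, 0.0])
# d == 2.0
```

Evaluating the signed components of `p - closest` along the horizontal, vertical, and depth directions then yields the per-direction ranges used in the analysis.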
The following considerations can be drawn: the higher error value along the axis perpendicular to the screen, already noticed in the previous experiment, was confirmed for all sketching modalities and geometries; moreover, the ratios among the error components along the reference directions were also in accordance with the previous results.
4.3. Picking
The previous experiments showed a systematic pointing anisotropy related to direct input in a virtual environment. We decided to investigate the picking task, since it is one of the most frequently used operations in VRAD applications (selection, control point manipulation, direct sketching, etc.). The aim of this test was to evaluate the user's performance in picking a 3D crosshair target located at a random position within the workspace. The user picked the midpoint of the target using the pen button. Each subject repeated the picking operation for 30 points randomly chosen from 3 different positions: in front of, to the right of, and on top of the user's head. After each picking, he/she had to return to a home position before picking the next target. Different sounds accompanied each step in order to guide the user through the experiment.
The error vector, computed as the difference between the target and the picked position, was projected onto the screen-aligned reference frame directions: depth, horizontal, and vertical.
We used ANOVA to verify the anisotropic behaviour of the interaction. The error values demonstrated a significant effect of the reference direction (F(2,357) = 29.17; p < 0.0001), rejecting the null hypothesis. Multiple Means Comparison (MMC) showed a significantly higher error in the depth direction, but no significant difference between the horizontal and vertical axes (Figure 5).
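An analysis of this kind can be sketched with a one-way ANOVA over the absolute error components grouped by reference direction; the data below are synthetic stand-ins generated for illustration, not the recorded errors:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Illustrative absolute error samples (mm) for the three screen-aligned
# directions; the depth errors are drawn larger, as observed in the tests.
horizontal = np.abs(rng.normal(0.0, 1.6, 120))
vertical = np.abs(rng.normal(0.0, 1.8, 120))
depth = np.abs(rng.normal(0.0, 3.4, 120))

# One-way ANOVA: does the mean error differ across the three directions?
f_stat, p_value = f_oneway(horizontal, vertical, depth)
# A small p-value rejects the null hypothesis of equal mean errors.
```

A post-hoc pairwise comparison (as in the MMC step) would then identify which direction drives the difference.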
We then verified whether the screen-aligned frame is the best-fitting reference for evaluating the picking error anisotropy. We decided to fit an ellipsoid to the error values for each of the 3 picking points. Principal Component Analysis (PCA) applied to the error vectors returned the directions and the axis lengths of the best-fit ellipsoid. The results show that the principal (major) axis always converges towards the user's head (Figure 6).
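The ellipsoid fit can be sketched as an eigen-decomposition of the error covariance matrix (a minimal sketch on a synthetic error cloud; in the real analysis the input would be the recorded error vectors):

```python
import numpy as np

def error_ellipsoid(errors):
    """PCA of the picking-error vectors: the eigenvectors of their
    covariance matrix give the best-fit ellipsoid axis directions, and
    the square roots of the eigenvalues give the axis lengths."""
    cov = np.cov(np.asarray(errors, float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # sort descending
    return np.sqrt(eigvals[order]), eigvecs[:, order]

# Illustrative error cloud elongated along z (towards the user's head).
rng = np.random.default_rng(1)
cloud = rng.normal(0.0, [1.0, 1.0, 3.0], size=(500, 3))
lengths, axes = error_ellipsoid(cloud)
# The principal (major) axis is the first column of `axes`.
```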
The results suggested that a different reference frame could be proposed for the error decomposition. So, instead of using the depth, horizontal, and vertical directions, we decided to test a user-centred reference frame whose principal direction V1 points from the pointer to the user's head; the direction V2 is perpendicular to V1 and parallel to the horizontal plane; and the third direction V3 is perpendicular to both V1 and V2 (Figure 7).
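The user-centred frame can be constructed from the two tracked positions with two cross products (a minimal sketch; it assumes the world y axis is vertical and that V1 is not exactly vertical):

```python
import numpy as np

def user_centred_frame(pointer, head):
    """Build the user-centred frame: V1 from the pointer towards the
    head, V2 perpendicular to V1 and parallel to the horizontal plane,
    V3 perpendicular to both (right-handed, all unit vectors)."""
    up = np.array([0.0, 1.0, 0.0])               # world vertical
    v1 = np.asarray(head, float) - np.asarray(pointer, float)
    v1 /= np.linalg.norm(v1)
    v2 = np.cross(up, v1)                        # zero vertical component
    v2 /= np.linalg.norm(v2)                     # degenerate if V1 is vertical
    v3 = np.cross(v1, v2)
    return v1, v2, v3

# Pointer at the origin, head up and behind it (coordinates in metres).
v1, v2, v3 = user_centred_frame([0.0, 0.0, 0.0], [0.2, 0.4, 0.6])
```

Projecting the error vectors onto (V1, V2, V3) instead of (depth, horizontal, vertical) gives the decomposition compared in Table 2.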
In order to verify this new frame of reference, we designed a new set of experiments.
The ANOVA showed a significant effect of the reference frame change (Figure 8). Table 2 below shows how the sigma values change with the reference frame.
Table 2. Sigma values (mm) changing the reference frame.
Reference Frame
Depth vs V1 (mm)
Horizontal vs V2 (mm)
Vertical vs V3 (mm)
3.366
1.666
1.878
User-Centred (V1,V2,V3)
3.759
1.560
1.458
These results show that the user-centred reference frame fitted the error vectors better than the screen-aligned one: the error component along V1 was greater than the one along the depth direction.
4.5. Discussion
The performed tests demonstrated a systematic anisotropy in the error vector distribution during all the basic modelling tasks: pointing, picking, and line sketching. The following interaction principles can thus be pointed out:
- the error along the depth direction (perpendicular to the screen) is always greater than the error along the horizontal and vertical directions;
- the magnitudes of the error along the horizontal and vertical directions are comparable, and always at least 1.9 times smaller than the error along the depth direction;
- the principal axis of the error distribution always converges towards the user's head.
The results of these experiments can be explained mainly in terms of occlusion issues: the user's hand and the pointing device hide the screen, and thus the stereo effect vanishes. This problem can be solved by using an offset between the real pen and the virtual pen; this solution was previously proven to have no influence on interaction precision for offset values smaller than 20 cm [13].
Yet using an offset is not sufficient, and other interaction tools should be developed in order to take the anisotropy into account. The following section presents some of the solutions developed by the authors.
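The pen offset can be sketched as follows; note that the offset direction (along the head-to-hand line, away from the head) and the default magnitude are our illustrative assumptions, not the documented implementation:

```python
import numpy as np

def virtual_pen_tip(real_tip, head, offset_mm=120.0):
    """Display the virtual pen tip offset away from the user's hand,
    along the head-to-hand direction, so that the hand no longer
    occludes the stereo image of the cursor. Offsets below 20 cm were
    shown not to affect interaction precision [13]."""
    real_tip = np.asarray(real_tip, float)
    direction = real_tip - np.asarray(head, float)
    direction /= np.linalg.norm(direction)       # unit head-to-hand vector
    return real_tip + offset_mm * direction

# Head at the origin, real pen tip 500 mm in front of it.
tip = virtual_pen_tip([0.0, 0.0, 500.0], [0.0, 0.0, 0.0], offset_mm=100.0)
# tip == [0, 0, 600]
```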
5. Smart Virtual Tools
Transparent physical tools (rulers, French curves, squares, etc.) can be introduced into a virtual environment in order to offer real constraints during modelling, just as real-world tools do during drawing and sculpting. For example, the Plexiglas sheet handled by the user during the VRAD session for displaying the menu can also be used as a planar reference (i.e. for sketching on a plane) without interfering with the stereoscopic vision. Observations of practical use have shown the effectiveness of such equipment, and how designers use it within the digital modelling context in a natural and personal fashion.
The virtual aids, on the other hand, are software tools specifically developed to support the user during the interaction. For example, geometrical snapping constrains the user input to determined geometries such as planes, lines, or grids, while topological snapping assists in locating topologically meaningful positions.
The term smart tool in HCI interface design stands for software objects which change their behaviour according to the surrounding context (i.e. the position of the user's limbs, gestures, speed and acceleration of the input devices, previous commands, etc.).
In order to address the user's limitations in depth perception and interaction, as seen in the previous sections, we propose a set of smart virtual tools:
Each Osnap has a different marker, as shown for the AutoCAD application in the first two columns of Table 3.
Object Snaps can easily be extended to a 6DOF input in a virtual environment, where they are very useful owing to tracking errors, fatigue, hand vibration, and the lack of limb support. Compared to the 2D version of the tool, the 3D Object Snap uses a sensitive volume instead of a flat area, and the marker is displayed as a wireframe 3D geometry (see Table 3 and Figure 11) which varies according to the snapped topology (Endpoint, Midpoint, Perpendicular, Centre, etc.).
This solution increases the pointing efficiency thanks to a better alignment of the snapping zone, without affecting the resolution, because it allows a volume reduction as compared to a sphere or a world-aligned ellipsoid.
By adjusting the influence area with the slider, and by activating the object snaps required by the specific task, the user can model in 3D using previous geometries as a reference, supported in the fundamental task of pointing with enhanced precision inside the virtual space.
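A snapping test of this kind can be sketched as follows; the ellipsoidal volume elongated towards the user's head and the radii values are our illustrative assumptions, not the implemented tool:

```python
import numpy as np

def snap_to_point(pen_tip, head, candidates, r_along=8.0, r_across=4.0):
    """3D Object Snap with a sensitive volume: an ellipsoid elongated
    along the pointer-to-head direction, where the pointing error is
    larger. Returns the first candidate snap point (e.g. an endpoint or
    midpoint) inside the volume, or None."""
    pen_tip = np.asarray(pen_tip, float)
    v1 = np.asarray(head, float) - pen_tip       # pointer-to-head direction
    v1 /= np.linalg.norm(v1)
    for point in candidates:
        d = np.asarray(point, float) - pen_tip
        along = np.dot(d, v1)                    # component towards the head
        across = np.linalg.norm(d - along * v1)  # perpendicular component
        if (along / r_along) ** 2 + (across / r_across) ** 2 <= 1.0:
            return np.asarray(point, float)
    return None

# Head at the origin; the pen tip hovers 6 mm short of an endpoint (mm).
head = [0.0, 0.0, 0.0]
snap = snap_to_point([0.0, 0.0, 500.0], head,
                     [[0.0, 0.0, 494.0], [0.0, 10.0, 500.0]])
```

The elongation along the head-to-hand direction is what allows a smaller overall volume than a sphere of equal catch rate, preserving the resolution.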
Table 3 illustrates the correspondence between the AutoCAD Osnaps and their 3D counterparts developed by the authors. Snap tips appear if the user lets the virtual cursor hover over an Osnap location for a second or so.
Object Snap   | Use
Centre        | Snaps to the centre of a circle or arc.
End point     | Snaps to the endpoint of a line, polyline, or arc.
Intersection  | Snaps to the intersection of two entities.
Midpoint      | Snaps to the midpoint of a line or arc.
Nearest       | Locates the point or entity nearest to the cursor position.
Node          | Snaps to a point entity.
Perpendicular | Locates a perpendicular point on an adjacent entity.
Quadrant      | Snaps to a quadrant point of a circle or arc.
Tangent       | Places an entity at the tangent point of an arc or circle.
(The "Autocad feedback" and "3D Osnap feedback" columns of Table 3 show the marker glyphs as images in the original.)
References
[2]
[3]
[4]
[5] Chen H., Sun H., Real-time Haptic Sculpting in Virtual Volume Space, Proceedings of the ACM Symposium on Virtual Reality Software and Technology, November 11-13, 2002, Hong Kong, China.
[6] Dani T.H., Wang L., Gadh R., Free-Form Surface Design in a Virtual Environment, Proceedings of ASME '99 Design Engineering Technical Conferences, 1999, Las Vegas, Nevada.
[7] Deisinger J., Blach R., Wesche G., Breining R., Towards Immersive Modelling - Challenges and Recommendations: A Workshop Analysing the Needs of Designers, Eurographics 2000.
[8] Fiorentino M., De Amicis R., Stork A., Monno G., Spacedesign: Conceptual Styling and Design Review in Augmented Reality, Proceedings of ISMAR 2002 IEEE, Darmstadt, Germany, 2002, pp. 86-94.
[9] Fiorentino M., Monno G., Renzulli P.A., Uva A.E., 3D Pointing in Virtual Reality: Experimental Study, XIII ADM - XV INGEGRAF International Conference on Tools and Methods Evolution in Engineering Design, Napoli, June 3-6, 2003.
[21] Zhai S., Milgram P., Anisotropic Human Performance in Six Degree-of-Freedom Tracking: An Evaluation of Three-Dimensional Display and Control Interfaces, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 27, No. 4, 1997, pp. 518-528.