
Spatio-Temporal Noise Model for Measuring Contrast Sensitivity of the

Human Eye System – How Complex Is Defining a Bad Imager Pixel?


Vinesh Sukumar*, Herbert Hess*, Ken Noren*, Steve Krone**

*Microelectronics Research and Communications Institute,
University of Idaho, Moscow, Idaho, U.S.A.

**Department of Mathematics, University of Idaho, Moscow, Idaho, U.S.A.

Abstract: This paper addresses the definition of an image quality metric that quantifies physical defects in an image. A
perception-based detection threshold is identified from a panel of subjects by conducting experiments on test pattern
images. A spatio-temporal noise model for the human visual system is briefly presented. The models and experimental
data are presented under certain simplifying assumptions, which are stated in the course of the paper.

Keywords: Image sensor, defect modeling, image quality, quality metrics and visual difference prediction.

I. INTRODUCTION

Over the years, there has been an increasing need for an objective way to measure perceived image
quality. Perceived quality is limited not only by the physical parameters of the image-forming system, such as resolution
and contrast, but also by the impression of the image formed in the eye of the observer, which has its own limitations.
Images produced on monitors, printers and by camera systems contain a variety of elements, including text, graphics, and
pictorial segments. As viewers evaluate the quality of images, they may notice that various image elements are not
well formed or do not communicate the desired intent. Examples of some of the most significant deficiencies seen in
images that have pictorial segments include lack of clarity in details, noise in areas expected to be smooth, loss of
information in highlights or darker areas, and inaccurate representation of color. These characteristics form the basis
for defining sharpness, graininess, tone scale, and color rendition, respectively. These are significant factors that
viewers consider when judging image quality [1]. Sample images from 1.3-megapixel cameras are shown in Figure 1.

Fig. 1. Images taken under low-light conditions (1 lux, f/2.8 and 5 fps) from different commercially available 1.3-megapixel sensors – how is "good" defined?

The quality of images is of concern to (i) industry, which manufactures devices that produce images, such as
monitors, printers, image sensors, and films; and (ii) consumers, who use the devices that produce images in
entertainment, medicine, photography, and other business applications. There is a marked difference in the influence
of these two communities. Consumers are the ultimate evaluators of image quality, but it is the industry that conducts
systematic studies of image quality to design and manufacture devices. If the image quality is unacceptable for a
user’s application, other characteristics of the system, including the cost or its features list, are of relatively little
importance [1].
The task of the designers and manufacturers of imaging systems is to manipulate the design variables so that the
image quality is maximal. Image quality involves a subjective metric. It is important to realize that ‘‘subjective’’
does not mean vague, variable, or unspecific. ‘‘Subjective’’ refers to the fact that image quality involves properties
of the observer’s percept, rather than physical properties of the image itself (examples of physical properties of the
image include reflectance and contrast). The relations between the physical properties of the image, its visual
properties, and image quality are studied using the methods of psychophysics. This is gaining significant importance for
engineers working in the CMOS imaging industry: design-for-test and pixel defect correction become easier to implement
once the limits of perceived image quality, and hence the definition of defectivity, are better understood using images
that can be projected without a real sensor in place.
The larger intent of our current study is to define defectivity in an image based on a spatio-temporal noise
model of the human visual system and the physical parameters of an imaging system. In this paper the spatio-temporal
noise model is introduced and a perception-based detection threshold is presented, derived from a subjective study with a
panel of subjects. Visual artifacts are placed on test target images, and the detection threshold is measured using
multiple stimuli (test image pairs) under defined viewing conditions and illuminant. The goal of this project in its final
form is to quantify quality and create a global image quality metric that designers and manufacturers of imaging
systems can use to implement pixel defect correction and maximize quality.

II. FUNDAMENTAL PROPERTIES OF THE HUMAN VISUAL SYSTEM (HVS)

This section is not intended to be a thorough review of the properties of the HVS; for that the reader is directed to
[2], [3], and [4]. However, it outlines some of the fundamental properties and terminology required as
background for the experimental results presented in the paper. This paper is specific to photopic, or
bright-light, vision. In this regime an image is viewed with maximum visual acuity by the most densely packed,
color-sensitive cone cells at the fovea of the eye. This means that there must be sufficient image illumination and
that either the image is far enough away from the observer for it to fall completely on the fovea, or the
observer can visually roam the image for maximum detail. The HVS has been found to have a number of
fundamental properties which, though not independent, have often been studied and modeled separately. A brief
presentation of these influencing factors follows [5].

• Luminance Sensitivity: Subjective brightness is known to be a nonlinear function of the light intensity
incident on the eye [6, ch. 2, p. 34]. At practical light levels of around 100 cd/m² it is most commonly
modeled by either a logarithmic or a power-law model [7, 8, ch. 3, p. 25].
• Frequency Sensitivity: The HVS is sensitive not only to the luminance levels in an image, but also to
spatial changes of these luminance levels. The sensitivity also depends on orientation, being greatest in the
horizontal and vertical directions and least at oblique angles [9]. The results are usually
discussed in terms of contrast sensitivity functions (CSF). The CSF has a peak response between 2 and
10 cycles per degree (c/d), depending on the viewer and the viewing conditions. The CSF for a uniform field is
shown in Figure 2; a small computational sketch of a standard CSF approximation follows this list.

Fig. 2. CSF for a uniform field, plotted as contrast sensitivity versus spatial frequency

• Signal Content Sensitivity: Contrast masking is a phenomenon whereby noise can be masked, i.e., its
visibility reduced, by the underlying image signal. For an image signal to mask a noise signal, both signals
must occur in approximately the same spatial location, be of approximately the same spatial frequency, and
their spatial frequencies must be of approximately the same orientation. These observations have led to the
development of multichannel models of the HVS [10], [11], often referred to collectively as a cortex
transform [12].
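
As a concrete illustration of the frequency sensitivity discussed above, the sketch below evaluates the widely used
Mannos–Sakrison CSF approximation. This is a minimal stand-in for exploring where peak sensitivity falls, not the
calibrated CSF used later in this paper:

    import numpy as np

    def csf_mannos_sakrison(f):
        # Approximate contrast sensitivity at spatial frequency f (cycles/degree),
        # using the Mannos-Sakrison (1974) form; constants are from that model.
        return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

    f = np.linspace(0.1, 60.0, 600)               # spatial frequencies to evaluate
    s = csf_mannos_sakrison(f)
    print("peak sensitivity near %.1f c/d" % f[np.argmax(s)])  # lands in the 2-10 c/d band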

III. INTRODUCTION TO THE SPATIO-TEMPORAL NOISE MODEL OF THE HVS

Objects can generally be better distinguished from each other, or from their background, if the difference in
luminance or color is large. Of these factors luminance plays the most important role, and it is the relative rather
than the absolute difference that matters. The relative difference can be expressed as the ratio between two luminance
values, called the contrast ratio, or as the difference between the two luminance values divided by their sum, which is
often referred to simply as contrast, a dimensionless quantity. The HVS is more sensitive in observing
objects if the required amount of contrast is lower. The reciprocal of the minimum contrast required for detection is
called the contrast sensitivity. The spatio-temporal noise model is to a large extent based on the contrast sensitivity
function (CSF). All measurements and results presented in this paper are based on luminance test targets and carry
no color information [13].
The CSF model in the spatio-temporal domain has been heavily researched in the past
[Campbell and Robson, 1967; Watanabe et al., 1968; Carlson, 1992; etc.]. These models are based on weighted
combinations of psychophysical parameters of the human visual system, expressed mathematically so that they can be
used to calculate, for example, the critical flicker frequency for image sensors, aiding better technical design.
In this model we make heavy use of modulation transfer functions (MTFs). An MTF describes the filtering of
modulation by an image-forming system as a function of temporal and spatial frequency. The use of MTFs has the
advantage that, by the convolution theorem, the MTFs of the different parts of an imaging system can
simply be multiplied with each other to obtain the total effect on the image (a small numerical sketch of this
cascade follows Figure 3). The model assumes that the luminance signal entering the HVS is filtered first by the
optical MTF of the eye and then by the MTF of a lateral inhibition process. The optical MTF is determined largely by
the eye lens and the discrete structure of the retina; the MTF of the lateral inhibition arises from neural
processing. A block diagram representation of the model is presented in Figure 3. Internal noise is partly due to
photon noise, caused by the statistical fluctuations of the number of photons that generate an excitation of the
photoreceptors, and partly due to fluctuations in the signal transport to the brain. Although the optical image of an
object entering the eye already carries some photon noise, this noise is treated not as external noise but as internal
noise; the assumption made here is that the spatial frequency content of this noise is not filtered by the low-pass
filter formed by the eye lens. The external noise shown in the block diagram consists of display noise, or grain noise
present in photographic images. Here w denotes temporal frequency; H1(w) is the MTF that represents the temporal
filtering of the photoreceptor signal on its way to the brain, and H2(w) is the MTF that represents the temporal
filtering of the lateral inhibition signal before it is subtracted from the photoreceptor signal. u is the spatial
frequency of interest under study. X, Y and T are the spatial and temporal sizes of the viewed object covered with
noise. For T, it is assumed that the shorter of the presentation time and the integration time of the eye is used. In
a similar way, it is assumed that the spatial dimensions X and Y are limited by the maximum angular size of the
integration area of the eye.

Fig. 3. Block diagram representation of the spatio-temporal noise model based on the CSF of the HVS
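
To make the MTF cascade concrete, the sketch below multiplies an assumed Gaussian low-pass optical MTF with an
assumed high-pass lateral-inhibition MTF. The functional forms and the constants sigma_deg and u0 are illustrative
assumptions, not the calibrated functions of the model in Figure 3:

    import numpy as np

    def mtf_optical(u, sigma_deg=0.5 / 60.0):
        # Gaussian low-pass standing in for the eye lens; u in cycles/degree,
        # sigma_deg is an assumed point-spread width of about 0.5 arcmin.
        return np.exp(-2.0 * (np.pi * sigma_deg * u) ** 2)

    def mtf_lateral_inhibition(u, u0=7.0):
        # High-pass attenuation of low spatial frequencies by neural inhibition;
        # the corner frequency u0 is an assumed placeholder value.
        return 1.0 - np.exp(-(u / u0) ** 2)

    u = np.linspace(0.1, 50.0, 500)
    mtf_total = mtf_optical(u) * mtf_lateral_inhibition(u)   # cascade = product of MTFs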

In psychophysics, the HVS is said to detect noise or defectivity in an image only if it is beyond a certain
threshold. This threshold can be determined in practice through experiments such as those presented in this paper.
Psychophysics researchers express this threshold in terms of psychometric functions, which give the detection
probability as a function of signal strength, commonly cast as a signal-to-noise ratio (SNR) in the imaging industry
(a hedged numerical illustration of such a function follows). An example is the detection threshold for sinusoidal
luminance patterns, defined at 50% detection probability in the study of Foley and Legge (1981). The mathematical
equations expressing the temporal and spatial components of the HVS (shown in the block diagram) that integrate
information from a luminance pattern are beyond the scope of this paper.
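
To illustrate what a psychometric function looks like computationally, the sketch below uses a Weibull form, a common
choice in psychophysics. The Weibull shape and the parameters alpha and beta are illustrative assumptions, not values
fitted in this paper:

    import numpy as np

    def weibull_detection(snr, alpha=1.0, beta=3.5, guess=0.0):
        # Detection probability at a given signal strength; alpha is the strength
        # at which P reaches ~0.63 and beta controls the slope (assumed values).
        return guess + (1.0 - guess) * (1.0 - np.exp(-(snr / alpha) ** beta))

    # Solving weibull_detection(snr) = 0.5 gives the 50% detection threshold,
    # the criterion quoted above from Foley and Legge (1981):
    alpha, beta = 1.0, 3.5
    snr_50 = alpha * np.log(2.0) ** (1.0 / beta)
    print("P(%.3f) = %.2f" % (snr_50, weibull_detection(snr_50)))   # -> 0.50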

IV. STANDARDS USED FOR EXPERIMENTATION, IMAGE ANALYSIS AND EVALUATION

Realistic image synthesis involves the creation of a picture from a mathematical description of the world. Objects
in the environment are modeled, a synthetic camera is placed in the scene, and the transport of light is simulated.
Discrete samples are taken of the light energy at the picture plane, and the final image is reconstructed from these
samples. These image synthesis techniques can be extended to take human perception into account. Perception
researchers have gathered considerable amounts of data regarding the properties of the Human Visual System
(HVS). Psychophysical tests are generally performed under highly controlled conditions with simple, artificial test
patterns, so extreme care must be taken when carrying HVS data over to complex scenes. This section examines the
points that were considered during experimentation [14].

Standards used for viewing conditions:


• The ratio of the luminance of the inactive screen to the peak luminance is maintained < 0.2.
• The maximum observation angle relative to normal is 35 degrees.
• The maximum screen luminance observed during this phase of the experimentation is 70 cd/m².
• The monitor used for evaluations is situated so that there are NO strongly colored areas (including clothing)
directly in the field of view or that may cause reflections onto the monitor screen. All walls, floors
and furniture in the field of view are black and free of any printed material, so that nothing
affects the viewer’s vision.
• The area immediately surrounding the displayed image is black, to eliminate any flare content. Precaution
is also taken so that NO illumination sources are in the field of view.
• The stimuli are presented to the observers at a fixed distance of 1 m.
• The monitors used for subjective evaluation are 19-inch professional-grade monitors. The test patterns of
interest are presented on the same screen.

Fig. 4. Experimental setup: layers and mask

Most of the practiced standards are compliant with the International Telecommunication Union recommendations.
It was also verified that the panel of subjects was not color-blind. About ten observers took part in the experiment.
Before the observation tests, subjects were tested for visual acuity, and their eyesight was verified to be within 0.93
dioptres of normal vision. Acuity is checked according to the method specified in the ITU-T P.910 standard. To
evaluate repeatability, the experimentation was repeated twice on consecutive days. We devised a large set of
stimuli from digitized luminance-based photographs of many classes of scene; threshold differences between pairs
could be natural (e.g. an object moves – orientation based) or computer-generated (e.g. blurred or desaturated). A
subset originated from a single reference scene which was then transformed in many ways (e.g. desaturation;
background moves). The desaturated and/or noisy image is created by imposing a mask on the reference layer to
create the image layer of interest, as indicated in Figure 4. The perceptual differences between image pairs were
determined with subjective magnitude ratings. Contrast sensitivity readings were taken based on the ability of the
subject to detect the presence of the visual artifact in the test target. The test targets were placed at a
fixed distance of 1 m from the viewer for the entire experimentation. It should be noted that, as the size of the
displayed object shrinks when its distance from the observer increases, the retinal spatial frequencies
increase [15]; the sketch below makes this relation explicit.
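
A minimal sketch of the distance-to-retinal-frequency relation, assuming a 19-inch monitor with a pixel pitch of
roughly 0.294 mm (an assumption; the paper does not state the pitch):

    import math

    def cycles_per_degree(period_m, distance_m=1.0):
        # Retinal spatial frequency of a grating with the given on-screen period:
        # one period subtends a visual angle theta, and frequency is 1/theta.
        theta = 2.0 * math.degrees(math.atan(period_m / (2.0 * distance_m)))
        return 1.0 / theta

    pixel_pitch = 0.000294                            # assumed ~0.294 mm pitch, in meters
    f_1m = cycles_per_degree(8 * pixel_pitch)         # 8-pixel grating period at 1 m
    f_2m = cycles_per_degree(8 * pixel_pitch, 2.0)    # same pattern viewed from 2 m
    print("8-pixel period: %.1f c/d at 1 m, %.1f c/d at 2 m" % (f_1m, f_2m))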

Standards used for image analysis and capture:


• The panel of subjects observes frames/sequences of the images of interest side by side from two
different camera module systems with the same update frequency (85 MHz).
• An initial training phase is conducted before the evaluation phase to provide information to the subjects on
the material content. A mock session is conducted to increase the comfort level of the subjects.
• The data collected are evaluated to identify the perception-based detection threshold. The values are compared
across the different camera module systems of interest.
• The test material is positioned so that it is perpendicular to the optical axis of the test camera.
• Images were captured on reflective test charts with a color temperature profile of 3300 ± 100 K.
• All images were captured using YUV601 standards.
• Test pair images are 512 × 512 pixels and are monochromatic with 8 bits of luminance resolution; a sketch
of generating such a pair with additive noise follows this list. Test image pairs are printed using a
high-resolution EPSON printer on photo-quality paper for observer evaluation.
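
A minimal sketch of generating the test image pairs described above: a 512 × 512, 8-bit monochrome reference plus a
copy with X% additive Gaussian noise. Interpreting "X% additive noise" as a noise standard deviation of X% of the
full 8-bit range is our assumption:

    import numpy as np

    def make_test_pair(reference, noise_percent, seed=0):
        # Return (reference, noisy) pair; sigma = noise_percent% of the 255 range.
        rng = np.random.default_rng(seed)
        sigma = 255.0 * noise_percent / 100.0
        noisy = reference.astype(np.float64) + rng.normal(0.0, sigma, reference.shape)
        return reference, np.clip(noisy, 0, 255).astype(np.uint8)

    reference = np.full((512, 512), 128, dtype=np.uint8)   # stand-in uniform target
    ref, noisy_10 = make_test_pair(reference, 10.0)        # 10% additive noise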

V. EXPERIMENTAL STUDY AND RESULTS BASED ON VISUALIZATION OF DETECTION MAPS

Measuring thresholds: The difference threshold is the smallest difference between two stimuli that can be reliably
detected. This value may then be used to predict the probability that the observer will detect the contrast difference
between the rendered and the reference images. The method of constant stimuli is commonly used to measure a
difference threshold. In this method, a number of levels of the test stimulus are chosen around the value of the
reference stimulus and presented to the observer in random order. The mean value calculated is called the point
of subjective equality (PSE), and the standard deviation is a measure of the difference threshold DL (a hedged
computational sketch follows this paragraph). Here the experimental analysis starts by displaying the normal image
and the image with added noise side by side, as shown in Figure 6. The sequencing of images (normal image + modified
image with X% additive noise) is randomized for every subject. The observer is asked to indicate, using a laser
pointer on the screen, when he or she is able to make a distinction between the displayed images. The observer’s
viewing time for any pair of images was unlimited. In this procedure 20 sets of test image pairs (normal image +
normal image with additive noise, i.e. 10%, 20%, etc.) are used. A small example of a normal image and a normal
image with 100% additive noise is shown in Figure 5. The observers are categorized into three subsets:
experienced, photographic and inexperienced. The experienced group involved people with some image processing
and defect study background, the photographic group included people with some knowledge of image quality
assessment of photographs, and the inexperienced were those who fit neither of these categories. However,
the mean value of discriminations varied little from class to class. As a result, the data from all three groups were
averaged together to produce an analytical recommendation. Three studies are done: (a) background based
(grey, black and white scale influence) – test pattern images with a largely planar distribution of a uniform color
(covering at least half the image) that leans towards one of the dominant tones of the test pattern image; (b) scene
based (Lena and Tiffany) – one test pattern image (Lena) with scene-induced information that highlights edge
information, and another (Tiffany) with more blended information and fewer edges; and (c) orientation based, as
presented in Figure 7. The results obtained from each of these test cases are presented in the next section. All these
analyses are done on test pattern images whose spatial frequencies vary in the x direction only. Figure 7
presents the different test pattern images (Lena based) with varying amplitudes of Gaussian noise added, as
presented to the panel of subjects. Perception-based thresholds are identified on the various test
pattern targets [1].
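
A hedged sketch of the method of constant stimuli described above: for each noise level, record the fraction of
"difference detected" responses, then fit a cumulative Gaussian whose mean is the PSE and whose standard deviation is
the difference threshold DL. The response data below are made-up placeholder numbers, not the paper's measurements:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    noise_levels = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # percent additive noise
    p_detect = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])      # fraction detecting (placeholder)

    def cumulative_gaussian(x, pse, dl):
        # Psychometric curve: mean = PSE, standard deviation = difference threshold DL.
        return norm.cdf(x, loc=pse, scale=dl)

    (pse, dl), _ = curve_fit(cumulative_gaussian, noise_levels, p_detect, p0=[15.0, 5.0])
    print("PSE = %.1f%% noise, DL = %.1f%%" % (pse, dl))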

Experimental results: The observer results for each of the test targets are averaged and presented in Table 1.
The results clearly indicate that scene-based content has a large influence on the identified perception detection
thresholds. For a uniform-scale background, defectivity/noise is hardest to observe in test images with more white
dominance. Detection probability is also higher for scene-content images with sharper edges than for lower-fidelity
images that blend into the background. This experimental study was repeated with the same panel of observers to
evaluate repeatability and consistency, and it shows a high degree of consistency across the various test patterns.
Orientation of the background (gray, white and black) and of scene-based information toward the left or right side
corners of the test targets did not noticeably change the test results. This leads us to believe that orientation
plays little to no role in perception detection; similar results were observed when the study was repeated.

Fig. 5. Target images – normal image and the same image with added Gaussian noise

Fig. 6. Presentation of images to the panel of subjects to evaluate the perception detection threshold

Fig. 7. Target images with varying amplitudes of additive noise on the Lena target image (only a few are presented to keep the image file concise)

Fig. 8. Target images used for the perception-based detection threshold: (a) test pattern with varying spatial frequencies, (b) test pattern
with background scale (black, grey and white), and (c) test patterns with scene-based information (Lena, which highlights edges, and
Tiffany, which highlights contrast/blend information with fewer edges)

In brief, the test results indicate that defectivity in images is most easily observable against a black-scale
background, and that when noise detection has to be done on scene-based content, images with sharper edges reveal
noise more readily than images with less fidelity/contrast (a hedged sketch of the threshold extraction suggested
by Figure 9 follows).
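
A hedged reconstruction of the analysis suggested by Figure 9: fit a polynomial to the number of observers reporting
a detection at each additive-noise level and read off the perception threshold where the fit reaches its maximum. The
data points and the cubic degree below are illustrative placeholders, not the paper's measurements:

    import numpy as np

    noise_pct = np.array([10, 20, 30, 40, 50, 60, 70, 80])   # additive noise (%)
    voters = np.array([0, 1, 3, 6, 8, 9, 10, 10])            # observers detecting (placeholder)

    coeffs = np.polyfit(noise_pct, voters, deg=3)            # polynomial fit (cubic assumed)
    fit = np.poly1d(coeffs)
    grid = np.linspace(noise_pct.min(), noise_pct.max(), 500)
    threshold = grid[np.argmax(fit(grid))]                   # noise level where the fit peaks
    print("perception threshold at maximum value: %.0f%% noise" % threshold)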

VI. CONCLUSION

This paper has highlighted some limitations and concerns regarding defining image quality and suitably identifying
a quality metric. An attempt is made to understand the perception-based threshold needed to quantify quality for an
imaging system, based on experiments conducted with a panel of subjects using defined test targets. A study of the
perception detection threshold as a function of background, scene information and orientation is also presented. A
number of issues remain to be investigated in terms of validating the spatio-temporal mathematical noise model of
the HVS, which has not been presented in detail in this paper. Work remains to correlate mathematically calculated
perception thresholds with the experimentally collected data in order to understand weaknesses of the model.
Finally, research has to be done to understand the limitations of this experimental study, with continuing
efforts to extend the scope of the experimental results to broader usage.
Fig. 9. Test results plotted as detection decisions from the panel of observers (number of voters presenting a decision) versus additive
noise added to the background in percent, with a polynomial fit function and the perception threshold taken at the maximum value, for
a) black background scale, b) grey background scale, c) white background scale, d) Lena based, which highlights edges, and e) Tiffany
based, which highlights contrast/blend information with fewer edges.

Table 1. Experimental test results with statistical data captured from the panel of observers

REFERENCES

[1] Norman Burningham, Zygmunt Pizlo, et al., “Image Quality Metrics,” Image Processing, Image Quality, Image
Capture Systems Conference (2003).
[2] D. H. Hubel, in Eye, Brain and Vision. New York: Scientific American Library, 1988.
[3] H. L. Snyder, “Image quality: Measures and visual performance,” in Flat Panel Displays and CRT’s, L. E.
Tannas, Ed. New York: Van Nostrand Reinhold, 1985, pp. 70–90.
[4] B. A. Wandell, Foundations of Vision. Sunderland, MA: Sinauer, 1995.
[5] A. P. Bradley, “A Wavelet Visible Difference Predictor,” IEEE Trans. on Image Proc., vol. 8, no. 5, May 1999, pp.
717–730.
[6] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Reading, MA: Addison-Wesley, 1992.
[7] S. Daly, “The visible difference predictor: An algorithm for the assessment of image fidelity,” in Digital Images
and Human Vision, A. B.Watson, Ed. Cambridge, MA: MIT Press, 1993, pp. 179–206.
[8] A. K. Jain, in Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[9] W. E. Glenn, “Digital image compression based on visual perception,” in Digital Images and Human Vision, A.
B. Watson, Ed. Cambridge, MA: MIT Press, 1993, pp. 63–71.
[10] S. J. P. Westen, R. L. Lagendijk, and J. Biemond, “Perceptual image quality based on a multiple channel HVS
model,” in Proc. ICASSP, 1995, pp. 2351–2354.
[11] C. Zetzsche and G. Hauske, “Multiple channel model prediction of subjective image quality,” in Proc. SPIE,
Human Vision, Visual Processing, and Display, 1989, vol. 1077, pp. 209–215.
[12] A. B. Watson, “The cortex transform: Rapid computation of simulated neural images,” Comput. Vis.,
Graph., Image Process., vol. 39, pp. 311–327, 1987.
[13] P. G. J. Barten, “Spatio-temporal model for the contrast sensitivity of the human eye and its temporal aspects,”
Human Vision, Visual Processing and Digital Display IV, Proc. SPIE (1993).
[14] www.vqeg.com
[15] N. Brady and D. J. Field, “What’s constant in contrast constancy? The effects of scaling on the perceived contrast
of bandpass patterns,” Vision Res., 1995.
[16] B. Fowler, J. Balicki, D. How, S. Mims, J. Canfield and M. Godfrey, “An Ultra Low Noise High Speed CMOS
Linescan Sensor for Scientific and Industrial Applications,” 2003 IEEE Workshop on Charge-Coupled
Devices and Advanced Image Sensors, May 15-17, 2003, Elmau, Germany.
[17] B. Fowler, A. Krymski, N. Khalliulin, H. Rhodes, “A 2 e- Noise 1.3 Megapixel CMOS Sensor,” 2003 IEEE
Workshop on Charge-Coupled Devices and Advanced Image Sensors, May 15-17, 2003, Elmau, Germany.
[18] D. Yang, B. Fowler, A. El Gamal, H. Min, M. Beiley and K. Cham, “Test Structures for Characterization and
Comparative Analysis of CMOS Image Sensors,” in Proceedings of SPIE (1996).
[19] Hank Hogan, “Image is Everything,” p. 82, Photonics Spectra (1998).
[20] Kalwant Singh, “Noise Analysis of a Fully Integrated CMOS Image Sensor,” in Proceedings of SPIE, vol. 3650,
pp. 44–51 (1999).
[21] A. El Gamal, B. Fowler, H. Min, X. Lin, “Modeling and Estimation of FPN Components in CMOS Image Sensors,”
in Proceedings of SPIE, vol. 3301 (1998).
[22] Randy Linebarger, “MTF Characterization,” Imaging Product Characterization Seminar, Micron Technology,
Inc., Feb 2005.
[23] Gennady Agranov, “Basic Opto-Electrical Characterization Methodology for Sensor Core,” Imaging Product
Characterization Seminar, Micron Technology, Inc., Feb 2005.
[24] D. H. Kelly, “Visual responses to time-dependent stimuli. I. Amplitude sensitivity measurements,” Journal of
the Optical Society of America, 51, 422-429 (1961).
[25] A. B. Watson, “Efficiency of a model human image code,” Journal of the Optical Society of America, A4,
2401-2417 (1987).
[26] D. G. Pelli, “Effects of visual noise,” Ph.D. dissertation, Cambridge Univ., England (1981).
[27] A. E. Burgess and B. Colborne, “Visual signal detection. IV. Observer inconsistency,” Journal of the Optical
Society of America, A5, 617-627 (1988).
[28] M. A. Georgeson and G. D. Sullivan, “Contrast sensitivity: deblurring in human vision by spatial frequency
channels,” Journal of Physiology, 252, 627-656 (1975).
[29] J. Nachmias, “Effect of exposure duration on visual contrast sensitivity with square-wave gratings,” Journal of
the Optical Society of America, 57(3), 421-427 (1967).
[30] E. Peli, “Contrast in complex images,” Journal of the Optical Society of America, A7, 2030-2040 (1990).
[31] E. Peli, “Test of a model of foveal vision by using simulations,” Journal of the Optical Society of America,
A13, 1131-1138 (1996).
[32] C. J. Bartleson and F. Grum, in Visual Measurements, vol. 5, Academic Press, Orlando, FL, 1984, p. 451.
[33] G. A. Gescheider, Psychophysics: Method, Theory, and Application, Lawrence Erlbaum, Hillsdale, NJ, 1985.
[34] P. G. J. Barten, Human Vision, Visual Process. Digital Display III, SPIE vol. 1666, 1992, pp. 57–72.
[35] T. N. Cornsweet, Visual Perception, Academic Press, NY, 1970.
[36] S.N. Yendrikhovskij, F. J. J. Blommaert, and H. de Ridder, Human Vision Electron. Imaging III, SPIE vol. 3299,
1998, pp. 274–281.
[37] W. Wu, Z. Pizlo, and J. P. Allebach, Proc. 2001 IS &T Image Process. Image Quality Image Capture Syst. Conf.,
Quebec City, Quebec, Canada, April 21–25, 2001.
[38] A. J. Ahumada and C. H. Null, “Image quality: A multidimensional problem,” in Digital Images and Human
Vision, A. B. Watson, Ed. Cambridge, MA: MIT Press, 1993, pp. 141–148.
[39] M. G. Albanesi, “Wavelets and human visual perception in image compression,” in Proc. Image Processing and
its Applications, 1995, pp. 859–863.
[40] S. Bertoluzza and M. G. Albanesi, “On the coupling of human visual system model and wavelet transform for
image compression,” SPIE,vol. 2303, pp. 389–397, 1994.
[41] S. Daly, “The visible difference predictor: An algorithm for the assessment of image fidelity,” in Digital Images
and Human Vision, A. B. Watson, Ed. Cambridge, MA: MIT Press, 1993, pp. 179–206.
[42] R. Rosenholtz and A. B. Watson, “Perceptual adaptive JPEG coding,” in Proc. IEEE Int. Conf. Image
Processing, Lausanne, Switzerland, 1996,vol. 1, pp. 901–904.
[43] C. Zetzsche, E. Barth, and B.Wegmann, “The importance of intrinsically two-dimensional image features in
biological vision and picture coding,” in Digital Images and Human Vision, A. B. Watson, Ed. Cambridge, MA:
MIT Press, 1993, pp. 109–138.
[44] K. Koffka, Principles of Gestalt Psychology, Harcourt Brace, NY, 1935.
[45] W. Wu, Ph.D. Dissertation, Purdue University, West Lafayette, IN, 2000.
[46] H. Wallach, J. Exp. Psychol. 38, 310–324 (1948).
[47] A. Gilchrist et al., Psychol. Rev. 106, 795–834 (1999).
[48] G. A. Gescheider, Psychophysics: Method, Theory, and Application, Lawrence Erlbaum, Hillsdale, NJ, 1985.
