
WCCI 2012 IEEE World Congress on Computational Intelligence, June 10-15, 2012, Brisbane, Australia

IJCNN

A Simple Platform of Brain-Controlled Mobile Robot and Its Implementation by SSVEP


Cheng Zhang, Yosuke Kimura, Hiroshi Higashi, and Toshihisa Tanaka
Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Japan
Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science Institute, Japan
Emails: {chousei, kimura, higashi}@sip.tuat.ac.jp, tanakat@cc.tuat.ac.jp

Abstract—Brain-computer interfacing (BCI) is an emerging technology that translates non-invasively measured brain signals into commands, providing an additional communication channel. This paper develops a BCI platform to remotely control a mobile robot through the Internet. The platform consists of a server computer for EEG signal acquisition, signal processing, and classification, and a Bluetooth-controlled robot carrying an Android smartphone. The smartphone gives visual feedback to the user via the Internet with its phone camera: it transfers commands from the server to the robot and transmits the video stream back to the server as feedback to the user. We also show how to employ steady-state visual evoked potentials (SSVEP) to classify commands, and we propose a method for detecting the idle state, in which the user does not gaze at any visual stimulus.

I. INTRODUCTION

A brain-computer interface (BCI) is a system for communication between a human and a computer that allows people to send messages or commands from brain activity to the external world without involving peripheral nerves and muscles [1], [2]. This activity can be detected and recorded by measurement equipment such as electroencephalography (EEG) or electrocorticography (ECoG). A huge variety of BCI systems has been developed, for example systems using event-related desynchronization (ERD), event-related potentials (ERP), or visual evoked potentials (VEP) [3], [4]. A well-known type of VEP is the steady-state visual evoked potential (SSVEP). Due to its simple system configuration, short training time, and high information transfer rate (ITR), the SSVEP-based BCI has become one of the most promising approaches to a practical noninvasive BCI system [5]. An SSVEP is an EEG response to a flickering visual stimulus with a flicker frequency higher than 6 Hz [6]. In recent works, Cheng et al. implemented an SSVEP-based BCI system in a PC-based environment, focused on allowing the patient to ring a mobile phone [7]. Müller-Putz et al. reported an SSVEP-based BCI to control an electrical prosthesis [8], and Martinez et al. and Bakardjian et al. proposed fully online multi-command BCIs using SSVEP [9], [10], in which small checkerboards on a computer screen, flickering at different but fixed frequencies, move along with a navigated car. In [9], [10], blind source separation (BSS) algorithms are employed to remove artifacts, and a bank of band-pass filters plus smoothing is applied to extract features. These online SSVEP-BCIs successfully classify the observed EEG signal to detect commands; however, they focus only on detecting the SSVEP (the system always chooses one of four commands: left, right, up, and down) and do not consider the idle state.

Recently, BCI has ceased to be a technology confined to a computer or simulator. A possible and attractive application of practical BCI is a remotely controlled mobile robot that can be operated as another body of the user. For this purpose, the robot should transmit a view of its surroundings to the operator; then, although the operator is far from the robot, the operator can control it by watching this view on a screen. Dasgupta et al. proposed a BCI to control a mobile robot built on an iRobot [11]; it is controlled by commands sent from the brain-interface server via a wireless connection and transmits video feedback via Skype. However, this platform could not move far from the server because of its wireless connection, and it was hard to stop because the operator had to continuously select a command meaning "stop".

In this paper, we develop a simple BCI platform called Andstorm that remotely controls a mobile robot using a smartphone and a robot built with a LEGO® MINDSTORMS® NXT 2.0. Figure 1 shows the concept of our system. The webcam installed in the Andstorm transmits a streaming view of its environment. We implement a steady-state visual evoked potential (SSVEP)-based BCI on this platform and conduct online experiments. In the designed platform, a user can choose five commands: left, right, forward, back, and move the robotic arm. Moreover, the system can detect the idle state, in which the user does not gaze at any visual stimulus. It should be noted that the overall system consists of very simple signal processing techniques.

II. OVERVIEW OF ANDSTORM PLATFORM

Figure 1 shows the concept of the Andstorm system. The EEG signal is captured by bioamplifiers, band-pass filtered, and classified into a command, and the classified command is transmitted to the Android smartphone via the Internet. From the smartphone, the command is transmitted via Bluetooth [12] to a robot built with a LEGO® MINDSTORMS® NXT 2.0 (we call this robot the LEGO robot hereafter) and finally executed in the Andstorm. The streaming view from the Andstorm's camera is displayed on the server's monitor as feedback to the user.


Fig. 2. Devices overview of Andstorm. In this work, we used an Android smartphone (Huawei U8150, Android OS version 2.2, manufactured by HUAWEI Inc.) and LEGO® MINDSTORMS® NXT robot kits for the robot body.

Fig. 1. The concept of the Andstorm system. The Andstorm can be controlled via Wi-Fi or 3G network.

A. Devices

Figure 2 shows the devices overview of the Andstorm. In this work, we use an Android smartphone (Huawei U8150, Android version 2.2, produced by HUAWEI Inc.) and LEGO® MINDSTORMS® NXT robot kits for the robot body.

1) Android Platform: Android is a software stack for mobile devices that includes an operating system, middleware, and key applications. The Android SDK provides the tools and APIs necessary to develop applications on the Android platform using the Java programming language. Android is based on Linux version 2.6 for core system services such as security, memory management, process management, the network stack, and the driver model.

2) Robot Kits: The robot for this BCI platform is built with LEGO® MINDSTORMS® NXT 2.0, a programmable robotics kit released by LEGO® [13]. The main component of the kit is a brick-shaped computer called the NXT Intelligent Brick. It can take inputs from up to four sensors and control up to three motors. Power is supplied by a Li-Ion rechargeable battery. Building tutorials for this robot are available in [14]. In this work, we control the three motors of the robot to move it (go forward, turn left, or turn right) and to make the robotic arm pick up an object.

B. Data Transmission

1) BCI Commands: Signal acquisition, signal processing, and command discrimination are conducted on the server side. The details of the server-side BCI processing are given in Sec. III. The Android platform controls the LEGO robot via Bluetooth based on the command received from the server and transmits the camera view through the Internet as visual feedback. The server sends the classified command to the robot via Wi-Fi or 3G network using the user datagram protocol (UDP) [15] over the Internet Protocol (IP). When the Android platform receives a command via Wi-Fi or 3G, it relays the command to the LEGO robot via its Bluetooth device. Finally, the LEGO robot executes the command, i.e., it moves itself (goes forward, turns left, or turns right) or moves the robotic arm.

2) Video Streaming (Visual Neurofeedback): The devices overview of the Android platform is shown in Fig. 2. The streaming view from the Andstorm's camera is displayed on the server's monitor as feedback to the user. In this work, the camera of the Android platform captures frame pictures of size 320 × 240 at 15 frames/s. Each frame is encoded as JPEG (Joint Photographic Experts Group). In our online experiments, the bitrate is about 690.5 kbps. The Android platform sends the images from the camera to the server via Wi-Fi or 3G network using UDP. We can change the resolution of the frame pictures and the JPEG compression rate to reduce the bitrate and adapt to low-speed network conditions.
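The paper does not specify the wire format of these UDP messages. As a minimal sketch of the server-side send, assuming a hypothetical port number and a one-byte command encoding (both are our own choices, not from the paper), the transfer could look like this in Java, the language of the Android side:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Minimal sketch of sending one classified command from the server to the
// smartphone over UDP. The port and the one-byte encoding are assumptions.
public class CommandSender {
    private static final int COMMAND_PORT = 5000; // hypothetical port

    public static void sendCommand(String phoneAddress, byte command) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] payload = { command }; // e.g. 0=left, 1=right, 2=forward, 3=back, 4=arm
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName(phoneAddress), COMMAND_PORT);
            socket.send(packet); // fire-and-forget: UDP gives no delivery guarantee
        }
    }
}
```

UDP is connectionless, so a lost packet is simply dropped; this is tolerable here because the classifier issues a fresh command every 1.5 s (Sec. III).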
III. METHOD FOR SSVEP-BCI ON ANDSTORM

Note that any BCI algorithm for classifying users' commands can be tested on the Andstorm. We adopt SSVEP

TABLE I
SSVEP FREQUENCY ASSIGNMENT TO THE COMMANDS FOR THE ROBOT

Command                 Frequency [Hz]
Turn left               7
Turn right              11
Go forward              13
Go back                 17
Move up/down the arm    19

Fig. 3. EEG electrode placement (Oz, O1, O2, and Pz). GND and reference electrodes are at AFz and A1, respectively.
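For illustration, the assignment of Table I can be carried as a small lookup on the Android side. This is only a sketch; the enum and its names are our own, not from the paper:

```java
// Command-to-frequency assignment from Table I (illustrative names).
public enum RobotCommand {
    TURN_LEFT(7.0), TURN_RIGHT(11.0), GO_FORWARD(13.0), GO_BACK(17.0), MOVE_ARM(19.0);

    public final double stimulusHz; // flicker frequency of the associated target

    RobotCommand(double stimulusHz) { this.stimulusHz = stimulusHz; }

    // Return the command whose stimulus frequency matches the classified one.
    public static RobotCommand fromFrequency(double hz) {
        for (RobotCommand c : values()) {
            if (c.stimulusHz == hz) return c;
        }
        return null; // no match: treat as idle / non-control state
    }
}
```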

to control this robot. For a robot driven by a BCI, the important state is the idle state. Previous work on SSVEP classification has mainly focused on detecting the frequency of the visual stimulus; this section also describes how to detect EEG when the user does not gaze at any visual stimulus, i.e., the idle state. The technologies behind this system are, sequentially, canonical correlation analysis, multi-class linear discriminant analysis, and the Euclidean distance. Figure 5 shows the flow diagram of signal processing and classification of the observed EEG signal.

A. Subjects and Tasks

Four males (Subjects A, B, C, and D) aged 20-25 years took part in our experiment. All had normal vision. All subjects gave informed consent, and this study was approved by the research ethics committee of Tokyo University of Agriculture and Technology.

B. EEG Recordings

In the online SSVEP recognition, we use signals observed at the electrodes Oz, O1, O2, and Pz. The EEG electrodes are placed as shown in Fig. 3, based on the extended 10-20 system. GND and reference electrodes are at AFz and A1, respectively. The EEG signals are amplified by an MEG-6116 amplifier (Nihon Kohden), which provides high-cut and low-cut analogue filters for each channel; we set the high-cut filter to 100 Hz and the low-cut filter to 0.08 Hz. The EEG signal is digitized by an A/D converter (AIO-163202F-PE, CONTEC) at a sampling rate of 1200 Hz. The signals are recorded with the Data Acquisition Toolbox in MATLAB (MathWorks). Moreover, all data are band-pass filtered between 3 and 20 Hz. A real-time classifier analyzes the multichannel EEG in time intervals of 1.5 s. The system contains five stimulus frequencies, 7 Hz, 11 Hz, 13 Hz, 17 Hz, and 19 Hz, corresponding to the commands turn left, turn right, go forward, go back, and move up/down the robotic arm, respectively. These assignments are summarized in Table I.

Fig. 4. A screenshot during the experiment.

C. Visual Stimuli

A 23-inch LCD display with a 120 Hz refresh rate and 1920 × 1080 screen resolution is used as the visual stimulator; it also displays the video streaming from the Andstorm. Figure 4 shows a screenshot of what a user watches in our BCI system. During the experiments, subjects sat on a comfortable chair about 45 cm in front of the visual stimuli and focused on a flickering checkerboard. Each target is a 63.6 mm square checkerboard with a spatial frequency of 2 c/deg. There are five targets on the screen. The left, right, top, bottom, and top-right targets correspond to the commands turn left, turn right, go forward, go back, and move up/down the robotic arm, and flicker at 7 Hz, 11 Hz, 13 Hz, 17 Hz, and 19 Hz, respectively. The camera view from the Android platform is displayed in the center.
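To make the timing concrete: at the 1200 Hz sampling rate of Sec. III-B, each 1.5 s analysis window holds 1800 samples per channel, and 30 s of training data yields the 20 windows mentioned in Sec. IV-A. A minimal segmentation helper (our own sketch, not from the paper):

```java
// Sketch: cut a continuous recording into the 1.5 s analysis windows
// used by the classifier. At 1200 Hz each window holds 1800 samples.
public final class Segmenter {
    public static final int FS = 1200;                 // sampling rate [Hz]
    public static final int WINDOW = (int) (1.5 * FS); // 1800 samples

    // signal[channel][sample] -> windows[window][channel][sample]
    public static double[][][] segment(double[][] signal) {
        int nChannels = signal.length;
        int nWindows = signal[0].length / WINDOW;
        double[][][] out = new double[nWindows][nChannels][WINDOW];
        for (int w = 0; w < nWindows; w++)
            for (int ch = 0; ch < nChannels; ch++)
                System.arraycopy(signal[ch], w * WINDOW, out[w][ch], 0, WINDOW);
        return out;
    }
}
```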

D. Signal Processing and Classification

SSVEP and the corresponding frequency of the gazed stimulus are detected using canonical correlation analysis (CCA) and linear discriminant analysis (LDA). In particular, the combination of CCA with LDA for SSVEP detection is novel and works well in this BCI application.

1) Canonical Correlation Analysis: CCA is a classical method for measuring the similarity between two multivariate statistical signals, which do not necessarily have the same number of variables; it can be seen as an extension of the ordinary correlation between two random variables [16], [17]. The underlying idea of CCA is to find a pair of linear combinations, called canonical variables, one for each set, such that the correlation between the two canonical variables is maximized. Details are reviewed in the following. Consider two multivariate random signals, x(t) and y(t), and their linear combinations X(t) = w_x^T x(t) and Y(t) = w_y^T y(t); recall that x(t) and y(t) can have different dimensions. CCA finds w_x and w_y that maximize the correlation coefficient between X(t) and Y(t) by solving the following problem:

\max_{w_x, w_y} \rho = \frac{E[X(t)Y(t)]}{\sqrt{E[X(t)^2]\, E[Y(t)^2]}} = \frac{w_x^T C_{xy} w_y}{\sqrt{(w_x^T C_{xx} w_x)(w_y^T C_{yy} w_y)}},   (1)

where \rho is called the canonical correlation, X(t) and Y(t) are called canonical variables, C_{xx} and C_{yy} are the within-set covariance matrices, and C_{xy} is the between-set covariance matrix.

Lin et al. proposed the use of CCA for multichannel SSVEP detection [18], and Bin et al. constructed an online SSVEP-based BCI using this method [19]. These methods assume that x(t) is the multichannel EEG signal and that y(t) consists of simulated stimulus signals, i.e., ideal SSVEPs at frequency f_k given by

y_{f_k}(t) = \begin{pmatrix} \sin(2\pi f_k t) \\ \cos(2\pi f_k t) \end{pmatrix}.   (2)

To detect the frequency of the SSVEP component in the subject's EEG, in [18] and [19] the canonical correlations \rho_k with respect to the command frequencies f_k are calculated, and the frequency that maximizes the canonical correlation is chosen.
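For completeness, we note the standard way to solve (1), which the text above leaves implicit: setting the gradient of the Lagrangian of (1) to zero reduces the problem to a pair of generalized eigenvalue problems,

C_{xy} C_{yy}^{-1} C_{yx} w_x = \rho^2 C_{xx} w_x,
C_{yx} C_{xx}^{-1} C_{xy} w_y = \rho^2 C_{yy} w_y,

where C_{yx} = C_{xy}^T. The canonical correlation \rho_k used below is the square root of the largest eigenvalue, computed with x(t) set to a multichannel EEG window and y(t) = y_{f_k}(t) from (2).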

Fig. 5. A flow diagram of SSVEP classification.

2) Linear Discriminant Analysis: We point out that the conventional use of CCA, choosing the frequency with the maximum canonical correlation, is not always appropriate. Figure 6 illustrates the averaged canonical correlations of the EEG of Subject D with respect to frequencies from 1 to 21 Hz when the subject is gazing at the 17 Hz stimulus. As can be observed in Fig. 6, the canonical correlations are not uniformly distributed over frequencies, and detection by the maximum canonical correlation can fail. This suggests that the distribution of canonical correlations should be taken into account when classifying the observed EEG. Therefore, we propose to use multi-class LDA, a widely used technique for pattern classification that finds a linear discriminant yielding optimal discrimination between classes [20]. The details are as follows.

For each frequency f_k, the input signal x(t) gives the canonical correlation \rho_k, which forms a K-dimensional vector defined as

x_{LDA} = (\rho_1, \rho_2, \ldots, \rho_K)^T,   (3)

where \rho_k is again the canonical correlation between x and y_{f_k}. Next, we introduce D > 1 linear features z_k = w_k^T x_{LDA}, where k = 1, \ldots, D. These feature values can conveniently be grouped together to form a vector z, and the weight vectors {w_k} can be considered the columns of a matrix W of size K × D, so that

z = W^T x_{LDA},   (4)

which is a vector with D components. The generalization of the within-class covariance matrix to the case of K classes is

S_W = \sum_{k=1}^{K} S_k,   (5)

where

S_k = \sum_{n \in C_k} (x_{LDA,n} - m_k)(x_{LDA,n} - m_k)^T,   (6)

m_k = \frac{1}{N_k} \sum_{n \in C_k} x_{LDA,n},   (7)

and N_k is the number of samples in class C_k. In order to find a generalization of the between-class covariance matrix, we follow Duda and Hart [21] and consider first the total covariance matrix

S_T = \sum_{n=1}^{N} (x_{LDA,n} - m)(x_{LDA,n} - m)^T,   (8)

where m is the mean of the total data set,

m = \frac{1}{N} \sum_{n=1}^{N} x_{LDA,n} = \frac{1}{N} \sum_{k=1}^{K} N_k m_k,   (9)

and N = \sum_k N_k is the total number of samples. The total covariance matrix can be decomposed into the sum of the within-class covariance matrix, given by Eqs. (5) and (6), plus an additional matrix S_B, which we identify as a measure of the between-class covariance:

S_T = S_W + S_B,   (10)

where

S_B = \sum_{k=1}^{K} N_k (m_k - m)(m_k - m)^T.   (11)

These covariance matrices are defined in the original x-space. We can now define similar matrices in the projected D-dimensional z-space:

s_W = \sum_{k=1}^{K} \sum_{n \in C_k} (z_n - \mu_k)(z_n - \mu_k)^T,   (12)

s_B = \sum_{k=1}^{K} N_k (\mu_k - \mu)(\mu_k - \mu)^T,   (13)

where

\mu_k = \frac{1}{N_k} \sum_{n \in C_k} z_n,   (14)

\mu = \frac{1}{N} \sum_{k=1}^{K} N_k \mu_k.   (15)

We wish to construct a scalar that is large when the between-class covariance is large and when the within-class covariance is small. Following [22], the criterion can be formulated as

J(W) = \mathrm{Tr}\{s_W^{-1} s_B\} = \mathrm{Tr}\{(W^T S_W W)^{-1} (W^T S_B W)\}.   (16)

The weight values are determined by the eigenvectors of S_W^{-1} S_B that correspond to the D largest eigenvalues. Table II shows the accuracy in online Experiment I without the non-control state; for comparison with the proposed LDA-based method, we also report Lin's method (maximum of CCA) [18].

Fig. 6. Result of canonical correlation on 1 to 21 Hz, when Subject D was gazing at the stimulus with 17 Hz.

TABLE II
ACCURACY WITHOUT NON-CONTROL STATE

Subject     Proposed (using LDA)   Maximum of CCA
Subject A   0.950                  0.875
Subject B   1.000                  0.990
Subject C   0.754                  0.580
Subject D   0.621                  0.535
Average     0.753                  0.664

3) Detection of Non-Control State: Because we stop the robot when the subject does not gaze at any stimulus (the idle or non-control state), we include in x_{LDA} the canonical correlation with respect to 4 Hz. This is motivated by the following observation: all of our subjects showed the highest canonical correlation at 4 Hz when not gazing at any stimulus, as shown in Fig. 7, which illustrates the averaged canonical correlations on 1 to 21 Hz over 40 trials for Subject B. Note that 4 Hz does not correspond to any SSVEP command frequency. Although we could add more redundant frequencies, our experimental study indicates that using only 4 Hz already gives good BCI performance.
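As an illustrative sketch of Eqs. (3) and (4) with the idle frequency included: the actual implementation runs in MATLAB, so the cca() routine below is a placeholder, and the class and method names are our own. The resulting vector z is then classified by the nearest class mean (Sec. III-D4).

```java
// Sketch of building the LDA input of Sec. III-D3 and projecting it as in Eq. (4).
// cca() stands in for a canonical-correlation routine; W is the K x D LDA
// weight matrix (here 6 x 4).
public final class FeatureExtractor {
    // 4 Hz is the extra "idle" frequency; the rest are the command frequencies of Table I.
    private static final double[] FREQS = { 4, 7, 11, 13, 17, 19 };

    // x_LDA: canonical correlation of the EEG window against each reference frequency, Eq. (3).
    public static double[] features(double[][] eegWindow, CcaFunction cca) {
        double[] xLda = new double[FREQS.length];
        for (int k = 0; k < FREQS.length; k++) {
            xLda[k] = cca.correlate(eegWindow, FREQS[k]);
        }
        return xLda;
    }

    // z = W^T x_LDA, Eq. (4): compress the K correlations to D discriminant features.
    public static double[] project(double[][] w, double[] xLda) {
        int d = w[0].length;
        double[] z = new double[d];
        for (int j = 0; j < d; j++)
            for (int k = 0; k < xLda.length; k++)
                z[j] += w[k][j] * xLda[k];
        return z;
    }

    public interface CcaFunction {
        double correlate(double[][] eegWindow, double stimulusHz);
    }
}
```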

Fig. 7. Result of canonical correlation on 1 to 21 Hz, when Subject B was gazing at no stimuli.

4) Classification: To classify the input vector, we use the Euclidean distance between the input and the mean of each class, projected into the D-dimensional subspace determined by W^T x. The gazed target flickering at frequency f_{k_{target}} is detected by the rule

k_{target} = \arg\min_k (z - \mu_k)^T (z - \mu_k).   (17)

IV. EXPERIMENTAL RESULTS

We conducted two experiments. In Sec. IV-A, we explain how to train the LDA and the parameters for Eq. (17). Section IV-B shows the results of the first experiment, and Sec. IV-C shows the performance in a real-time experiment.

A. Training Phase

The weight matrix for LDA and the class means must be found prior to the online experiment. To this end, each subject is required to watch each target for 30 seconds, which yields 20 samples of x(t), because the signal is segmented every 1.5 seconds as mentioned in Sec. III-B.

B. Experiment I

To confirm the classification performance for SSVEP and the idle state, all subjects were asked to watch each target for 90 seconds. In this experiment, we measured only the accuracy in classifying the commands and the idle state, without controlling the robot. We compare the classification results for two cases: using and not using 4 Hz, which does not correspond to any command, as mentioned in Sec. III-D3. In the first case, x_{LDA} includes five canonical correlations, (f_1, \ldots, f_5) = (7, 11, 13, 17, 19) [Hz]. In the second case, x_{LDA} consists of six components, (f_1, \ldots, f_6) = (4, 7, 11, 13, 17, 19) [Hz]. For both cases, the LDA matrix compresses x_{LDA} to a four-dimensional vector; thus, the size of z is four (D = 4 in (4)).

TABLE III
ITR [BIT/MIN] RESULTS

            Not using 4 Hz             Using 4 Hz
Subject     Accuracy   ITR [bit/min]   Accuracy   ITR [bit/min]
Subject A   0.804      56.669          0.804      56.669
Subject B   0.979      95.620          0.988      98.360
Subject C   0.571      24.120          0.583      25.505
Subject D   0.479      15.075          0.492      16.194
Average     0.708      47.871          0.717      49.182

Table III shows the information transfer rate (ITR) [bit/min] of each subject. ITR is a standard measure of communication systems: the amount of information communicated per unit time [23]. It depends on both speed and accuracy and can be expressed as

ITR = S \left\{ \log_2 N + A \log_2 A + (1 - A) \log_2\!\left[\frac{1 - A}{N - 1}\right] \right\} \; [\mathrm{bit/min}],   (18)

where N is the total number of commands, A is the accuracy, and S is the number of commands per minute.
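As a sanity check on (18): assuming N = 6 classes (the five commands plus the idle state) and S = 40 selections per minute (one per 1.5 s window), Subject B's accuracy approximately reproduces the tabulated 98.36 bit/min (the table's accuracy of 0.988 is itself rounded). A sketch:

```java
// Sketch: information transfer rate of Eq. (18).
// N = number of selectable classes, A = accuracy, S = selections per minute.
public final class Itr {
    static double log2(double v) { return Math.log(v) / Math.log(2); }

    public static double bitsPerMinute(int n, double a, double s) {
        double bitsPerSelection = log2(n)
                + a * log2(a)
                + (1 - a) * log2((1 - a) / (n - 1));
        return s * bitsPerSelection;
    }

    public static void main(String[] args) {
        // Assumed N = 6 and S = 40 (one selection per 1.5 s window);
        // Subject B's accuracy of 0.988 gives roughly 98 bit/min.
        System.out.printf("ITR = %.2f bit/min%n", bitsPerMinute(6, 0.988, 40));
    }
}
```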

As listed in the table, Subject B achieved an ITR of 98.36 bit/min, which is very high, and the results show that using 4 Hz improves the classification accuracy.

C. Experiment II

In the second set of experiments, the performance of the Andstorm was measured while the subject operated it in real time. The task of the subject was to make the Andstorm go around the cups, forming a figure of eight, then pick up an object and place it on a designated box, all while watching the display. Figure 8 shows the route of the task, and Fig. 9 shows the route of one online experimental run.

Fig. 8. Route of the task. The line shows the route of the online experiment task: the subject makes the robot go around the cups, forming a figure of eight, while watching the display.

V. CONCLUSIONS

This work has proposed a mobile robot platform called Andstorm, built with an Android platform and LEGO® MINDSTORMS® NXT robot kits. The Andstorm carries a webcam that transmits a streaming view, so that the user can interact with the robot's environment through streaming video wherever the user is. All data transmission uses UDP through the Internet. We then implemented an SSVEP-based BCI on this platform and conducted online experiments. The aim of the presented study was to show that it is possible to control a mobile robot with an SSVEP-based BCI; the ability to handle the idle state was also investigated. We demonstrated the BCI system in online experiments.

Fig. 9. The route of one online experimental result.

ACKNOWLEDGMENT

This work is supported in part by KAKENHI, Grant-in-Aid for Scientific Research (B), 21360179.

REFERENCES

[1] J. J. Vidal, "Toward direct brain-computer communication," Annu. Rev. Biophys. Bioeng., vol. 2, pp. 157-180, 1973.
[2] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, pp. 767-791, 2002.
[3] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, H. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, "Current trends in Graz brain-computer interface (BCI) research," IEEE Trans. Rehabil. Eng., vol. 8, pp. 216-219, Jun. 2000.
[4] M. Middendorf, G. McMillan, G. Calhoun, and K. S. Jones, "Brain-computer interfaces based on the steady-state visual-evoked response," IEEE Trans. Rehabil. Eng., vol. 8, no. 2, pp. 211-214, Jun. 2000.
[5] M. Cheng, X. R. Gao, S. K. Gao, and D. Xu, "Design and implementation of a brain-computer interface with high transfer rate," IEEE Trans. Biomed. Eng., vol. 49, no. 10, pp. 1181-1186, Oct. 2002.
[6] Y. J. Wang, R. P. Wang, X. R. Gao, B. Hong, and S. K. Gao, "A practical VEP-based brain-computer interface," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 14, no. 2, pp. 234-240, Jun. 2006.
[7] M. Cheng, X. R. Gao, and D. F. Xu, "Design and implementation of brain-computer interface with high transfer rates," IEEE Trans. Biomed. Eng., vol. 49, no. 10, pp. 1181-1186, Oct. 2002.
[8] G. R. Müller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI," IEEE Trans. Biomed. Eng., vol. 55, no. 1, pp. 361-364, 2008.
[9] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multi-command brain-computer interface with visual neurofeedback using SSVEP paradigm," Comput. Intell. Neurosci., vol. 2007, 2007, article ID 94561, 9 pages.
[10] H. Bakardjian, T. Tanaka, and A. Cichocki, "Optimization of SSVEP brain responses with application to eight-command brain-computer interface," Neurosci. Lett., vol. 469, pp. 34-38, 2010.
[11] S. Dasgupta, M. Fanton, J. Pham, M. Willard, H. Nezamfar, U. Orhan, B. Shafai, and D. Erdogmus, "Brain controlled robotic platform using steady state visual evoked potentials acquired by EEG," in Proc. Asilomar Conf. Signals, Systems and Computers, 2010.
[12] (1998) Bluetooth Special Interest Group. [Online]. Available: http://www.bluetooth.org/
[13] 8527 MINDSTORMS NXT Kit, MINDSTORMS Website, LEGO Group, Dec. 2008.
[14] L. Valk, The LEGO MINDSTORMS NXT 2.0 Discovery Book: A Beginner's Guide to Building and Programming Robots. San Francisco: No Starch Press, May 2010.
[15] J. Postel, "User datagram protocol," USC/Information Sciences Institute, Aug. 1980.
[16] T. W. Anderson, An Introduction to Multivariate Statistical Analysis, 2nd ed. New York: John Wiley & Sons, 1984.
[17] H. von Storch and F. Zwiers, Statistical Analysis in Climate Research. Cambridge, U.K.: Cambridge Univ. Press, 1999.
[18] Z. Lin, C. Zhang, W. Wu, and X. Gao, "Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs," IEEE Trans. Biomed. Eng., vol. 54, no. 6, pp. 1172-1176, 2007.
[19] G. Bin, X. Gao, Z. Yin, B. Hong, and S. Gao, "An online multi-channel SSVEP-based brain-computer interface using a canonical correlation analysis method," J. Neural Eng., vol. 6, no. 4, 2009, article ID 046002, 6 pages.
[20] C. M. Bishop, Pattern Recognition and Machine Learning. Springer-Verlag, 2006.
[21] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. Wiley-Interscience, 2000.
[22] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. Academic Press, 1990.
[23] J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E. Donchin, L. A. Quatrano, C. J. Robinson, and T. M. Vaughan, "Brain-computer interface technology: A review of the first international meeting," IEEE Trans. Rehabil. Eng., vol. 8, pp. 164-173, Jun. 2000.
