
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

IEEE TRANSACTIONS ON CYBERNETICS


EEG-Based Classification of Implicit Intention During Self-Relevant Sentence Reading

Suh-Yeon Dong, Bo-Kyeong Kim, and Soo-Young Lee

Abstract—From electroencephalography (EEG) data recorded during self-relevant sentence reading, we were able to discriminate two implicit intentions: 1) "agreement" and 2) "disagreement" with the read sentence. To improve the classification accuracy, discriminant features were selected among EEG frequency bands and electrodes based on the Fisher score. In particular, the time-frequency representation obtained with Morlet wavelet transforms showed clear differences in gamma, beta, and alpha band powers at the frontocentral area, and in theta band power at the centroparietal area. The best classification accuracy of 75.5% was obtained by a support vector machine classifier with the gamma band features at the frontocentral area. This result may enable a new intelligent user interface that understands users' implicit intention, i.e., unexpressed or hidden intention.

Index Terms—Agreement/disagreement, electroencephalography (EEG), implicit intention, self-relevance.

I. INTRODUCTION

Over the last 25 years, electroencephalography (EEG)-based brain–computer interfaces (BCIs) have been developed to provide assistance to physically impaired patients [1]. Accordingly, current BCI technologies have focused on recognizing and responding to users' explicitly expressed intentions [2]–[4]. On the other hand, we believe that an intelligent user interface for the general public should also be able to understand users' unexpressed or hidden intention. We use the term implicit intention to denote this unexpressed or hidden intention. Surprisingly, however, little research has been reported on recognizing human implicit intention, and the previous work on implicit intention has focused on hidden intention or lie detection, i.e., whether or not the user's explicitly expressed intention is the same as the actual intention [5], [6].

We focus on another type of implicit intention, i.e., unexpressed intention, especially whether or not a user agrees with others during conversation or sentence reading.

Manuscript received July 13, 2015; accepted September 9, 2015. This work was supported in part by the Brain Research Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT, and Future Planning under Grant 2013-035100 (2013.05.01-2014.04.30), and in part by the U.S. Air Force Research Laboratory through the Asian Office of Aerospace Research and Development. This paper was recommended by Associate Editor A. Cichocki. The authors are with the Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701, Korea (e-mail: suhyeon.dong@gmail.com; kbghome@kaist.ac.kr; sylee@kaist.ac.kr). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TCYB.2015.2479240

We postulated that the implicit intentions of "agreement" and "disagreement" elicit different brain signals, and a recent functional magnetic resonance imaging (fMRI) study showed different activation areas for the unexpressed intentions of agreement and disagreement [7]. In this paper, EEG signals are used to classify the agreement-versus-disagreement implicit intention while reading a self-relevant sentence. EEG is a useful technique because of its noninvasive nature and high temporal resolution. In particular, EEG oscillatory responses are functionally related to cognitive processing [8].

Recently, changes in frequency band powers were reported during language processing, specifically during syntactic and semantic processing at the sentence level. Increased power in the theta band was observed in response to grammatical violations [9]–[11]. Most of the increase in theta band power was found at the temporal areas, whereas a power increase in the gamma band was also found at the frontal areas during sentence processing [12], [13]. Furthermore, the alpha band power decreased with semantic congruency [14], while the beta band power decreased in response to semantic violations during the reading of Chinese sentences [15].

EEG-based studies have been further extended with self-relevant sentences, which elicit stronger responses. Top-down attention to self-relevant information has been widely reported in human perception. The most famous example of this phenomenon is the "cocktail party effect" [16], whereby an individual's auditory attention can be focused on a particular stimulus (e.g., a conversation containing self-relevant items) while filtering out other stimuli (e.g., other conversations taking place nearby). More recently, based on both EEG and fMRI data, many researchers have suggested that self-relevant information has preferential access to our perceptual systems. Exposure to one's own name evokes stronger P300 and N250 responses than other stimuli [17]. Moreover, it has been reported that autobiographical words or phrases evoke P300 responses [18], and that sentences written in the first person elicit early components (P1, N1, and P2) [19]. Also, one fMRI study compared brain responses between self-knowledge and semantic knowledge [20].

However, self-relevance alone may not be sufficient to elicit strong brain signals. If people do not agree with the statement, the brain responses may not reflect the self-relevance. Even in cases with strong autobiographical words or phrases written in the first person, people may not consider them self-relevant. Therefore, with respect to self-relevant stimuli, it is important to distinguish "agreed" responses from "disagreed" responses.



Fig. 1. Experimental paradigm. One trial consisted of a fixation cross, a main statement (contents), a positive or negative ending phrase (sentence ending), and a subject's response. For each subject, 74 trials were made in sequence, separated by fixation crosses.

However, there has been no reported research that further discriminates "Yes" (agreement) versus "No" (disagreement) responses to self-relevant statements. A recent study did investigate discriminating brain responses between "Yes" and "No" thinking, but those responses were actually induced by a semantic violation [21].

We have previously reported differences in fMRI [7], [22], [23] and multichannel EEG signals [24], [25] between agreement and disagreement with self-relevant statements. This paper presents a new time-frequency EEG analysis based on the Morlet wavelet transform and discriminant electrode selection.

II. METHOD AND MATERIALS

A. Subjects

Nine healthy right-handed Korean subjects (six males and three females between 20 and 30 years old) were recruited for this study. All were either undergraduate or graduate students, and participated voluntarily. None of the subjects had a history of mental disorder, significant physical illness, head injury, neurological disorder, or alcohol or drug dependence. Written informed consent was obtained from all subjects, and the study was approved by the institutional review board of the Korea Advanced Institute of Science and Technology.

B. Experimental Paradigm

Each subject participated in an experimental session of 74 trials. As shown in Fig. 1, each trial started with a fixation cross and a beeping sound to signal the subject's readiness, followed by two images of a stimulating sentence to read. Each statement consisted of a contents block and a sentence-ending block. (The idea behind this division is explained in Section II-C.) The contents block was shown for 4 s and the sentence-ending block for 2 s. Immediately afterwards, an asterisk image was presented for 2 s, during which subjects were instructed to answer whether or not they agreed with the statement. A blank image was presented for 2 s at the end of each trial. During the experiment, subjects were asked to remain still and to avoid eye blinking as much as possible, especially just after the onset of the contents block. Before, during, and after the experiment, the subjects were carefully monitored to ensure that they performed the task without any difficulty.
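For readers who want to reproduce the paradigm, the trial structure described above can be summarized as a simple timing table. This is a minimal sketch based only on the durations stated in the text; the fixation-cross duration is not reported, and the phase names are ours, not the authors'.

```python
# Trial structure of the paradigm (durations in seconds, taken from the text).
TRIAL_PHASES = [
    ("fixation_cross_and_beep", None),  # duration not specified in the paper
    ("contents_block", 4.0),            # main statement, shown for 4 s
    ("sentence_ending_block", 2.0),     # positive or negative ending, 2 s
    ("response_asterisk", 2.0),         # subject answers agree/disagree, 2 s
    ("blank", 2.0),                     # inter-trial blank screen, 2 s
]

N_TRIALS = 74  # trials per subject, separated by fixation crosses
```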


C. Stimulating Statements

We aimed to discriminate EEG responses for agreement and disagreement cases while subjects read self-relevant statements. Self-relevant statements are those related to personal experiences or opinions, and are expected to generate stronger brain signals than irrelevant sentences. Seventy-four stimulating sentences were selected from the Minnesota multiphasic personality inventory-II [26], one of the most frequently used standardized psychometric tests. All sentences were converted into Korean, a subject-object-verb (SOV) language. Also, all sentences ended with only one type of verb, i.e., existence verbs such as "do exist" and "do not exist." Therefore, the English sentence "I do (not) worry a great deal over money" was converted into an SOV typology with a contents block "The experience of worrying over money" and an ending block "Does (not) exist." Here, the negated sentence was formed by simply changing the ending block from "Does exist" to "Does not exist." At the design stage of the experiment, we were not sure whether the positive and negative endings might result in different brain signals. Therefore, each sentence was decomposed into two blocks: 1) the contents block and 2) the ending block, and the EEG signals for the contents blocks and the ending blocks were investigated separately. We had two hypotheses to test in this experiment. The first, as the main objective of this paper, was that the implicit intentions of agreement and disagreement are elicited by reading only the contents block. The second was that the positive and negative ending blocks do not make any difference.

D. EEG Recording and Preprocessing

The EEG data were recorded using the BrainAmp system (Brain Products GmbH, Germany) and an EEG cap with 32 electrodes (BrainCap) in an electromagnetically shielded room. Thirty electrodes were placed on each subject's scalp according to the international 10–20 system. One electrode was positioned below the subject's left eye to record the electrooculogram for eye movements, and another electrode was placed above the subject's left collarbone to record the electrocardiogram. The impedance of each electrode was maintained below 10 kΩ. Data were acquired at a sampling rate of 500 Hz with a 60 Hz notch filter.

The EEG data were measured with a reference at the FCz electrode, but were later converted to the average reference. High-pass filtering was implemented with a 1 Hz cutoff frequency and a 0.2 Hz transition bandwidth. Independent component analysis (ICA) based on the extended Infomax algorithm was used to identify artifacts. Artifacts from eye blinking, eye movement, and heartbeats were then manually selected and removed from the measured data [27]–[29]. Since the EEG data elicited by the contents block were of primary concern, the time reference was moved to the onset of the contents block for each trial. Then, EEG data from −0.2 to 1 s were extracted, and baseline correction was applied by subtracting the mean amplitude within the range [−0.2 s, 0 s]. Fig. 2 shows averages of the resulting EEG data, i.e., the event-related potentials (ERPs), for the agreement and disagreement trials.
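As an illustration of this preprocessing pipeline, the sketch below uses MNE-Python. The toolchain is our assumption (the paper does not name the software used), and the file name, event codes, and the rejected ICA component indices are hypothetical placeholders.

```python
import mne

# Load a BrainVision recording (file name is a hypothetical placeholder).
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

# Re-reference to the average reference, remove 60 Hz line noise, and
# high-pass filter at 1 Hz with a 0.2 Hz transition bandwidth.
raw.set_eeg_reference("average")
raw.notch_filter(freqs=60.0)
raw.filter(l_freq=1.0, h_freq=None, l_trans_bandwidth=0.2)

# Extended-Infomax ICA; artifact components (eye blinks, eye movements,
# heartbeats) are identified manually and removed.
ica = mne.preprocessing.ICA(n_components=30, method="infomax",
                            fit_params=dict(extended=True), random_state=0)
ica.fit(raw)
ica.exclude = [0, 3]          # indices chosen by visual inspection (example only)
ica.apply(raw)

# Epoch relative to the onset of the contents block and baseline-correct
# with the mean amplitude in [-0.2 s, 0 s].
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.0, baseline=(-0.2, 0.0), preload=True)
```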


Fig. 2. Average ERPs for a representative electrode site (Cz) of subject 4 for the agreement condition (black-dashed line) and the disagreement condition (charcoal-gray solid line). ERPs were time-locked to the onset of the contents block (vertical-dotted line).

The disagreement ERP showed higher amplitudes in two time intervals, i.e., [250 ms, 400 ms] and [750 ms, 850 ms].

E. Morlet Wavelets for Time-Frequency Representation

To classify the implicit intention of each single trial into two classes (instead of using the averaged ERP), frequency band powers of the event-related oscillatory responses were estimated for each trial. There are several methods for extracting a joint time-frequency representation from time-dependent ERP data. The spectrogram based on the fast Fourier transform of time-framed data is a popular choice, but its time and frequency resolutions are fixed throughout the whole time-frequency region. To obtain appropriate time and frequency resolutions in each region, the Morlet wavelet transform was adopted [30]. The mother wavelet was designed with a seven-cycle width and scaled for center frequencies from 1 to 70 Hz. In this manner, a time-frequency representation (TFR) was obtained for each trial and electrode. The power values in each TFR were converted into percentage changes relative to the power in the baseline interval from −0.2 to 0 s relative to the onset of the contents block.
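A sketch of this step with MNE-Python is shown below (again an assumed toolchain, not necessarily the authors'). A seven-cycle wavelet at the lowest frequencies is longer than the 1.2 s epochs used for the ERP analysis, so this sketch re-epochs a wider window around the contents onset and crops afterwards; the 3 Hz lower bound is chosen only so that the wavelets fit inside that window, whereas the paper scans 1–70 Hz.

```python
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# `raw`, `events`, and `event_id` are assumed from the preprocessing sketch.
# A wider window is used so that the 7-cycle wavelets fit without edge effects.
epochs_wide = mne.Epochs(raw, events, event_id=event_id,
                         tmin=-1.0, tmax=2.0, baseline=None, preload=True)

freqs = np.arange(3.0, 71.0)   # center frequencies (Hz)
n_cycles = 7.0                 # seven-cycle mother wavelet, as in the paper

# Per-trial TFR (average=False keeps single trials for classification).
power = tfr_morlet(epochs_wide, freqs=freqs, n_cycles=n_cycles,
                   return_itc=False, average=False, decim=2)

# Change relative to the pre-stimulus baseline, (P - P_base) / P_base,
# then crop to the analysis window used in the paper.
power.apply_baseline(baseline=(-0.2, 0.0), mode="percent")
power.crop(tmin=-0.2, tmax=1.0)
```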

III. DATA ANALYSIS AND RESULTS

A. Labeling of Explicit Intention

In accordance with the subjects' responses, the recorded EEG for each trial was labeled as agreement or disagreement. Because some sentences touch on personal privacy, there might be a discrepancy between implicit and explicit answers; social-desirability bias might push subjects toward answers that would be viewed favorably by others [31]. To test the validity of tagging implicit intentions with explicit answers, a survey was conducted with 13 subjects (seven males and six females between 20 and 30 years old) who did not participate in the EEG experiment. For each of the 74 statements used in the EEG experiments, the subjects were asked to select either "be unwilling to respond honestly" or "do not care." Table I summarizes the number of subjects who were unwilling to respond honestly for each statement. For six statements, three subjects answered that they were unwilling to respond honestly. There were 13 and 9 statements, respectively, to which two subjects and one subject indicated their unwillingness to respond honestly.

TABLE I

NUMBER OF STATEMENTS WITH UNWILLINGNESS TO RESPOND HONESTLY


TABLE II

NUMBER OF TRIALS IN AGREEMENT AND DISAGREEMENT CLASSES


Using these results, the probability of a discrepancy between the implicit and explicit intentions was estimated as 0.055 (= (3 × 6 + 2 × 13 + 1 × 9)/(13 × 74)). This probability places a limit on the final experimental accuracy, but is not significant for our binary classification accuracies. Nevertheless, every effort was made to assure the anonymity of the subjects throughout the experiments in order to obtain reliable and honest responses.

The numbers of good trials in the agreement and disagreement classes for each subject are shown in Table II. Although 74 trials were given to all subjects, subject 5 missed one trial and subject 6 misunderstood the task during the first few trials. The numbers of trials are therefore unbalanced between the two classes, so the overall classification accuracy was defined as the average of the classification rates of the two classes.
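The two quantities just described are simple to compute; the snippet below reproduces the discrepancy estimate and gives a minimal balanced-accuracy helper matching the definition above (equivalent to scikit-learn's balanced_accuracy_score).

```python
import numpy as np

# Estimated probability of implicit/explicit discrepancy from the survey:
# (3 subjects x 6 statements + 2 x 13 + 1 x 9) over (13 subjects x 74 statements).
p_discrepancy = (3 * 6 + 2 * 13 + 1 * 9) / (13 * 74)   # ~0.055

def balanced_accuracy(y_true, y_pred):
    """Average of the per-class classification rates, used because the
    agreement/disagreement trial counts are unbalanced."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rates = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(rates))
```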

B. Feature Selection: Time-Frequency Components

To understand the neural mechanism, as well as to improve the classification accuracy with the small amount of available data, feature selection was implemented in two steps. In the first step, several time-frequency components with high discrimination capacity between the agreement and disagreement classes were selected. For this purpose, based on previous fMRI studies with similar experiments [7], five frontal electrodes close to the superior frontal gyrus and anterior cingulate cortex were considered, as shown in Fig. 3(a). These areas are known to be related to decision-making [32], empathic judgments [33], and self-descriptive trait judgment [34]. The TFRs based on Morlet wavelet transforms were averaged over all subjects, separately for the agreement and disagreement classes, at the five electrodes. The difference in percentage changes between the two classes is plotted in Fig. 3(b).


Fig. 3. Average difference in TFRs between agreement and disagreement classes for five frontal electrodes. (a) Five considered frontal electrodes, i.e., F3, Fz, F4, FC1, and FC2, in yellow color. (b) Percentage differences (agreement–disagreement) of Morlet wavelet-based TFRs. The x-axis and the y-axis represent time and frequency, respectively. Boxes within the TFRs indicate the time-frequency components with significant differences. (c) Scalp topographic maps of the five selected time-frequency components.

TABLE III

SNR OF EACH TIME-FREQUENCY COMPONENT


A careful inspection of the significant percentage changes in Fig. 3(b) led to the selection of five time-frequency components: 1) the gamma component (35–45 Hz) between 350 and 550 ms; 2) the beta2 component (20–26 Hz) between 300 and 450 ms; 3) the beta1 component (14–17 Hz) between 800 and 1000 ms; 4) the alpha component (9–12 Hz) between 300 and 700 ms; and 5) the theta component (5–7 Hz) between 400 and 1000 ms after the onset of the contents. The signal-to-noise ratio (SNR) of each time-frequency component is shown in Table III. The SNRs were calculated for each time-frequency component and averaged over subjects, trials, and electrodes. The noise power was estimated in the baseline period [−0.2 s, 0 s], and the signal power was estimated by subtracting the noise power from the total power. Because the SNR over the entire time-frequency region is low, it is advantageous to consider specific time-frequency regions. Although the gamma component has a negative SNR, it is still discriminant in Fig. 3(b).
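The SNR computation described here is straightforward; the sketch below shows one way to implement it for a single electrode. The exact definition, including the use of decibels, is our reading of the text rather than a formula stated by the authors.

```python
import numpy as np

def component_snr_db(tfr, times, freqs, t_win, f_lo, f_hi, baseline=(-0.2, 0.0)):
    """SNR of one time-frequency component for a single electrode.

    tfr   : array (n_freqs, n_times) of power values
    t_win : (t_start, t_end) of the component, in seconds
    Noise power is the mean power in the baseline interval; signal power is
    the mean power in the component window minus the noise power."""
    times, freqs = np.asarray(times), np.asarray(freqs)
    f_mask = (freqs >= f_lo) & (freqs <= f_hi)
    noise = tfr[f_mask][:, (times >= baseline[0]) & (times < baseline[1])].mean()
    total = tfr[f_mask][:, (times >= t_win[0]) & (times < t_win[1])].mean()
    signal = total - noise
    # A component weaker than its own baseline yields a negative (or undefined) SNR.
    return 10.0 * np.log10(signal / noise) if signal > 0 else float("-inf")
```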


Fig. 4. Electrode (or channel) selection procedure for each time-frequency component. Fisher scores were computed for all electrodes, and sorted in descending order. (a) Compute Fisher score for each channel. (b) Channel selection based on Fisher score.

In Fig. 3(c), scalp topographic maps are shown for the five selected time-frequency components using the average power of each component [35]. When subjects disagreed, the average power increased in the gamma, beta1, and alpha bands around the frontocentral area. However, when subjects agreed, beta2 and theta band activities increased at the frontal area. Moreover, at low frequencies such as the alpha and theta bands, some parietal and occipital regions also showed large differences between agreement and disagreement.

C. Feature Selection: Electrode Positions

For each of the five chosen time-frequency components, the importance of each electrode position was evaluated, and only a subset was selected for efficient classification. As shown in Fig. 3(c), there were also high activation differences outside the frontal areas, so all 30 electrode positions were considered.


TABLE IV

FIVE BEST ELECTRODES FOR EACH TIME-FREQUENCY COMPONENT BASED ON FISHER SCORE


The Fisher score is one of the most widely used measures of the discriminant ability of a feature [36]. For each time-frequency component, the Fisher score of the ith electrode (or channel) is computed as

F_i = \frac{\sum_{k=1}^{K} n_k \left( \mu_i^k - \mu_i \right)^2}{\sum_{k=1}^{K} n_k \left( \sigma_i^k \right)^2}

where n_k is the number of trials in the kth class, and μ_i^k and σ_i^k denote the mean and standard deviation of the kth class at the ith electrode, respectively. Here, K = 2 is the number of classes, and μ_i denotes the mean of the entire data at the ith electrode. After computing the Fisher score for all electrodes, the top M ranked electrodes were selected and used as inputs to a classifier. The electrode selection procedure for each time-frequency component is summarized in Fig. 4.

Table IV summarizes the five best electrodes for each time-frequency component. For the gamma component, the selected electrodes were located at the frontocentral regions (F3 and FC5 on the left and FC2 on the right), the right temporal region (T8), and the left centroparietal region (CP5). For the beta2 component, the selected electrodes were located at the left central regions (C3 and CP5) and the frontal regions (FC1, Fp1, and Fp2). For the beta1 component, the selected electrodes were located at the frontocentral regions (F3 and FC1 on the left, and F4 on the right), the right temporal region (T8), and the left parietal region (P7). For the alpha component, the selected electrodes were located at the frontocentral regions (Fz, FC1, and F4) and the left central regions (C3 and CP1). For the theta component, the selected electrodes were located at the centroparietal regions (C3 and CP5 on the left and CP2 on the right) and the left parietal regions (P3 and P7). For the gamma to alpha components, the frontocentral electrodes were found to contain the discriminating features. By contrast, the theta component was more discriminant at the centroparietal electrodes. Moreover, the gamma component had higher Fisher scores, and therefore higher classification performance, than the other components.

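The Fisher-score ranking above translates directly into a few lines of NumPy. The sketch below assumes a feature matrix X with one band-power value per electrode and per trial for a given time-frequency component; it is an illustration of the formula, not the authors' code.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature (electrode), following the formula above.
    X: (n_trials, n_electrodes) band-power features of one time-frequency
    component; y: class labels (agreement / disagreement)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    mu = X.mean(axis=0)                       # overall mean per electrode
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        n_c = len(Xc)
        num += n_c * (Xc.mean(axis=0) - mu) ** 2
        den += n_c * Xc.std(axis=0) ** 2
    return num / den

def top_m_electrodes(X, y, m=5):
    """Indices of the M electrodes with the highest Fisher scores
    (M = 5 corresponds to Table IV)."""
    return np.argsort(fisher_scores(X, y))[::-1][:m]
```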

D. Single-Trial Classification Results

Our goal was to classify users' implicit intention into the agreement and disagreement binary classes from single-trial EEG data.


Fig. 5. Training (top plot) and testing (bottom plot) accuracies for each time-frequency component with an increasing number of selected electrodes, using (a) linear kernel and (b) RBF kernel for binary SVM classifiers. The x-axis represents the number of selected electrodes, and the y-axis represents the average classification accuracy (%). Accuracies were averaged with fivefold cross-validation for each subject, and then averaged again over all nine subjects. The line colors darken as the band frequency increases. The red-dotted line indicates the chance level, i.e., 50%.

Simple linear discriminant analysis (LDA) was first applied, but the results were not satisfactory. Therefore, support vector machine (SVM) classifiers were applied with two types of kernel functions.


Fig. 6. Training (top plot) and testing (bottom plot) accuracies using all five time-frequency components with an increasing number of selected electrodes, using linear and RBF kernels for the SVM classifiers. The x-axis represents the number of selected electrodes, and the y-axis represents the average classification accuracy (%). Accuracies were averaged with fivefold cross-validation for each subject, and then averaged again over all nine subjects. The red-dotted line indicates the chance level, i.e., 50%.

The LIBSVM toolbox [37] was used. Given that LDA did not work well, the two classes might not be linearly separable, so the radial basis function (RBF) kernel was expected to perform better. The optimal values of the SVM parameters were found through an exhaustive grid search during the validation phase [38]. Because of the small amount of data available, fivefold cross-validation was performed, with the test data used for validation.

With the electrode selection of the previous section, the classification results were calculated as functions of the number of selected electrodes, both for each time-frequency component and for all components together. Fig. 5 shows the training and testing accuracies versus the number of selected electrodes for SVM classifiers with linear and RBF kernels. As the number of electrodes increased, the training accuracy improved for both SVMs. With the linear kernel, the testing accuracies for most components also increased as more electrodes were used, whereas the testing accuracy for the gamma component decreased when more than 20 electrodes were used. By contrast, with the RBF kernel, the testing accuracy started to decrease at a smaller number of electrodes. In fact, the gamma component with its five most discriminant electrodes showed the best performance, at 75.5% (for these five electrodes of the gamma component, see Table IV); overfitting on the training data might be the reason why adding electrodes did not help.

Furthermore, all five time-frequency components were used together for classification. The 150 Fisher score values from the 5 components and 30 electrodes were sorted, and features were selected starting from the highest Fisher score. Fig. 6 shows the training and testing classification rates (%) for both the linear and RBF SVMs.


TABLE V

MAXIMUM TESTING ACCURACY (%) FOR EACH COMPONENT AND CLASSIFIER. THE NUMBER IN PARENTHESES IS THE NUMBER OF SELECTED CHANNELS AT WHICH THE MAXIMUM TESTING ACCURACY WAS OBTAINED


Using the RBF kernel, the maximum test classification rate of 74.0% was obtained with four electrodes drawn from the gamma, beta1, and beta2 components. The linear kernel required 20 electrodes to reach its highest accuracy of 64.7%. The relatively poorer performance with all five components may come from overfitting.

Table V summarizes the maximum single-trial testing accuracy and the corresponding number of electrodes for each time-frequency component, for both kernel types of the SVM classifiers. The gamma component had higher Fisher scores than the other components, which resulted in more accurate classification performance.
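For completeness, here is a small scikit-learn sketch of the classification step described above (scikit-learn's SVC wraps LIBSVM). It is an illustrative reconstruction, not the authors' code: the parameter grids are ours, and it uses nested cross-validation so that the grid search is not tuned on the test fold, whereas the paper reports validating on the test data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def svm_cv_accuracy(X, y, kernel="rbf"):
    """Fivefold cross-validated balanced accuracy for one feature set
    (e.g., the top-M gamma-band electrode powers of one subject)."""
    param_grid = {"svc__C": np.logspace(-2, 3, 6)}
    if kernel == "rbf":
        param_grid["svc__gamma"] = np.logspace(-4, 1, 6)
    model = GridSearchCV(
        make_pipeline(StandardScaler(), SVC(kernel=kernel)),
        param_grid,
        scoring="balanced_accuracy",
        cv=StratifiedKFold(n_splits=5),
    )
    scores = cross_val_score(model, X, y, scoring="balanced_accuracy",
                             cv=StratifiedKFold(n_splits=5))
    return float(scores.mean())
```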

IV. DISCUSSION

In this paper, using single-trial multichannel EEG data, we proposed a new method to classify human implicit intentions into agreement and disagreement classes.

A. Concept of Implicit Intention

The term "implicit intention" in this paper refers to unexpressed or hidden intention. In particular, we focused on the agreement-versus-disagreement distinction among unexpressed intentions, whereas a few other studies have addressed lie detection, which is related to hidden intention. Implicit agreement (or disagreement) with a given statement may be regarded as agreement (or disagreement) with a counterpart in conversation; lie detection, on the other hand, is concerned with agreement (or disagreement) with one's own expressed intention. While a vast body of research exists on lie detection, our concept of implicit intention concerning agreement versus disagreement is novel.

B. Experiment Paradigm

In most experiments concerning lie detection, subjects are asked to tell a lie in order to create a situation in which neural activities related to lying can be assessed. However, there may be differences between such artificially generated cases and truly natural ones. Our experiment was constructed to elicit subjects' natural and reflexive intentions toward self-relevant statements; self-relevance has been reported to generate a pronounced intention of agreement or disagreement and stronger neural activities. Although Ruf et al. [21] used a similar experimental paradigm, their yes thinking and no thinking were generated by semantic violation. Our single-trial experiments yielded


a superior classification performance in terms of classification accuracy, which probably came from the strong and profound neural signals evoked by the self-relevant statements.

Our experiment was designed specifically for Korean speakers and took advantage of the SOV Korean grammar. The stating sentence was divided into two parts: 1) the sentence contents and 2) the ending. The sentence-ending part may be negated, e.g., "does not." It was shown that the neural activities related to the implicit intentions of agreement and disagreement were generated by the sentence contents. Also, careful analysis of the EEG data during the sentence-ending block in our preliminary studies showed that the positive "does" and negative "does not" endings did not make any noticeable difference. This demonstrates that a user's implicit intention is determined before the sentence ending. For English speakers, the experimental design may be modified by merging the contents and ending blocks and using only positive verbs.

C. Feature Selection in Time-Frequency Space Domain

Unlike many other studies that use ERPs in the time domain or band powers in the frequency domain, a joint time-frequency representation was adopted in this paper. Since the ERP and band powers may be obtained by integrating the time-frequency spectrum over frequency and time, respectively, the TFR is more informative. Also, to obtain appropriate time and frequency resolutions throughout the whole time-frequency domain, the Morlet wavelet transform was used. As a result, five time-frequency components were identified as discriminative features between the agreement and disagreement classes. Gamma and theta components within a similar latency range were also observed in the general-knowledge violation condition for sentence comprehension in [12] and [13]. In the context of agreement-versus-disagreement intention, a general-knowledge violation may be regarded as a disagreement with the propositional statement. Also, the important electrodes were found in the same brain areas, i.e., the frontocentral areas for the gamma component. This similarity may come from the shared agreement-versus-disagreement structure of the experimental designs. However, the beta component was not found in the knowledge violation experiments; this difference may result from the self-relevance of our stating sentences as opposed to general knowledge.

However, there is a drawback to our feature selection method. The Fisher score was evaluated separately for each electrode, while some combination of time, frequency, and electrode position may contribute to the classification. One may try every possible feature combination, or, before feeding the features to the SVM classifier, one may remove the dependency among features using ICA. However, considering that overfitting occurred with only five features and that using all 150 possible features did not improve the testing accuracy in Fig. 6, no significant improvement is expected beyond the 75.5% classification rate obtained with the 5 gamma-band features.

V. CONCLUSION

In this paper, by applying Morlet-wavelet time-frequency analysis to multichannel EEG data, it was demonstrated that

human implicit intentions of agreement versus disagreement were classified with 75.5% accuracy from single-trial EEG signals. It was also shown that the gamma-band power from 350 to 550 ms after the onset of the stating sentence played an important role in the classification. These results suggest that different neuro-cognitive processes are involved in the two implicit intentions, i.e., agreement versus disagreement, when people read self-relevant sentences, and they support our hypothesis that agreement and disagreement constitute an axis of the human implicit intention space. Therefore, for a new intelligent human–machine interface, one may use EEG signals to generate labels for an audio-visual training database for the classification of agreement versus disagreement during conversation. The alternative, self-report, cannot be obtained in real time and may not be reliable for sensitive personal issues. Therefore, although the EEG-based labels may include some errors, they may be utilized at an early developmental stage, and the accuracy may be further improved by online incremental learning.

REFERENCES

[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, Jun. 2002.
[2] B. Blankertz et al., "The Berlin brain-computer interface: EEG-based communication without subject training," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 14, no. 2, pp. 147–152, Jun. 2006.
[3] D. Coyle, G. Prasad, and T. M. McGinnity, "Faster self-organizing fuzzy neural network training and a hyperparameter analysis for a brain–computer interface," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 6, pp. 1458–1471, Dec. 2009.
[4] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain–computer interface," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 3, pp. 793–804, Jun. 2012.
[5] L. A. Farwell and E. Donchin, "The truth will out: Interrogative polygraphy ("lie detection") with event-related brain potentials," Psychophysiology, vol. 28, no. 5, pp. 531–547, Sep. 1991.
[6] C. Davatzikos et al., "Classifying spatial patterns of brain activity with machine learning methods: Application to lie detection," NeuroImage, vol. 28, no. 3, pp. 663–668, Nov. 2005.
[7] S.-Y. Dong, B.-K. Kim, and S.-Y. Lee, "Implicit agreeing/disagreeing intention while reading self-relevant sentences: A human fMRI study," Soc. Neurosci. [Online]. Available: http://dx.doi.org/10.1080/
[8] E. Başar, M. Schürmann, T. Demiralp, C. Başar-Eroglu, and A. Ademoglu, "Event-related oscillations are 'real brain responses'—Wavelet analysis and new strategies," Int. J. Psychophysiol., vol. 39, nos. 2–3, pp. 91–127, Jan. 2001.
[9] M. C. M. Bastiaansen, J. J. A. van Berkum, and P. Hagoort, "Event-related theta power increases in the human EEG during online sentence processing," Neurosci. Lett., vol. 323, no. 1, pp. 13–16, Apr. 2002.
[10] D. Roehm, M. Schleschewsky, I. Bornkessel, S. Frisch, and H. Haider, "Fractionating language comprehension via frequency characteristics of the human EEG," Neuroreport, vol. 15, no. 3, pp. 409–412, Mar. 2004.
[11] D. J. Davidson and P. Indefrey, "An inverse relation between event-related and time–frequency violation responses in sentence processing," Brain Res., vol. 1158, pp. 81–92, Jul. 2007.
[12] P. Hagoort, L. Hald, M. Bastiaansen, and K. M. Petersson, "Integration of word meaning and world knowledge in language comprehension," Science, vol. 304, no. 5669, pp. 438–441, Apr. 2004.
[13] L. A. Hald, M. C. Bastiaansen, and P. Hagoort, "EEG theta and gamma responses to semantic violations in online sentence processing," Brain Lang., vol. 96, no. 1, pp. 90–105, Jan. 2006.
[14] R. M. Willems, R. Oostenveld, and P. Hagoort, "Early decreases in alpha and gamma band power distinguish linguistic from visual information during spoken sentence comprehension," Brain Res., vol. 1219, pp. 78–90, Jul. 2008.


[15] Y. Luo, Y. Zhang, X. Feng, and X. Zhou, "Electroencephalogram oscillations differentiate semantic and prosodic processes during sentence reading," Neuroscience, vol. 169, no. 2, pp. 654–664, Aug. 2010.
[16] E. C. Cherry, "Some experiments on the recognition of speech, with one and with two ears," J. Acoust. Soc. Am., vol. 25, no. 5, pp. 975–979, Sep. 1953.
[17] K. Zhao, Q. Wu, H. D. Zimmer, and X. Fu, "Electrophysiological correlates of visually processing subject's own name," Neurosci. Lett., vol. 491, no. 2, pp. 143–147, Mar. 2011.
[18] H. M. Gray, N. Ambady, W. T. Lowenthal, and P. Deldin, "P300 as an index of attention to self-relevant stimuli," J. Exp. Soc. Psychol., vol. 40, no. 2, pp. 216–224, Mar. 2004.
[19] E. C. Fields and G. R. Kuperberg, "It's all about you: An ERP study of emotion and self-relevance in discourse," NeuroImage, vol. 62, no. 1, pp. 562–574, Aug. 2012.
[20] S. C. Johnson et al., "Neural correlates of self-reflection," Brain, vol. 125, no. 8, pp. 1808–1814, Aug. 2002.
[21] C. A. Ruf et al., "Semantic classical conditioning and brain–computer interface control: Encoding of affirmative and negative thinking," Front. Neurosci., vol. 7, no. 23, pp. 1–13, Mar. 2013.
[22] S.-Y. Dong and S.-Y. Lee, "Recognition of human implicit intention based on fMRI and EEG," presented at Neuroinformat., Stockholm, Sweden, 2013. [Online]. Available: http://www.frontiersin.org/10.3389/
[23] S.-Y. Dong and S.-Y. Lee, "Understanding human implicit intention while reading self-relevant sentences: An fMRI study," presented at the 19th Annu. Meeting OHBM, Seattle, WA, USA, Jun. 2013.
[24] S.-Y. Dong and S.-Y. Lee, "Understanding human implicit intention based on frontal electroencephalography (EEG)," in Proc. IJCNN, Brisbane, QLD, Australia, Jun. 2012, pp. 1–5.
[25] S.-Y. Dong, B.-K. Kim, and S.-Y. Lee, "Decoding and predicting implicit agreeing/disagreeing intention based on electroencephalography (EEG)," in Neural Information Processing (LNCS 8227). Berlin, Germany: Springer, 2013, pp. 587–594.
[26] J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, and B. Kaemmer, The Minnesota Multiphasic Personality Inventory-2 (MMPI-2): Manual for Administration and Scoring. Minneapolis, MN, USA: University of Minnesota Press, 1989.
[27] A. Delorme, J. Palmer, J. Onton, R. Oostenveld, and S. Makeig, "Independent EEG sources are dipolar," PLoS One, vol. 7, Feb. 2012, Art. ID e30135.
[28] T. W. Lee, M. Girolami, A. J. Bell, and T. J. Sejnowski, "A unifying information-theoretic framework for independent component analysis," Comput. Math. Appl., vol. 39, pp. 1–21, Jun. 2000.
[29] A. Delorme and S. Makeig, "EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis," J. Neurosci. Meth., vol. 134, no. 1, pp. 9–21, Mar. 2004.
[30] A. Grossmann and J. Morlet, "Decomposition of Hardy functions into square integrable wavelets of constant shape," SIAM J. Math. Anal., vol. 15, no. 4, pp. 723–736, 1984.
[31] R. J. Fisher, "Social desirability bias and the validity of indirect questioning," J. Consum. Res., vol. 20, no. 2, pp. 303–315, Sep. 1993.
[32] C. S. Carter et al., "Anterior cingulate cortex, error detection, and the online monitoring of performance," Science, vol. 280, no. 5364, pp. 747–749, May 1998.
[33] T. F. Farrow et al., "Investigating the functional anatomy of empathy and forgiveness," Neuroreport, vol. 12, no. 11, pp. 2433–2438, Aug. 2001.
[34] I. I. Goldberg, M. Harel, and R. Malach, "When the brain loses its self: Prefrontal inactivation during sensorimotor processing," Neuron, vol. 50, no. 2, pp. 329–339, Apr. 2006.
[35] Z. J. Koles, "The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG," Electroen. Clin. Neuro., vol. 79, no. 6, pp. 440–447, Dec. 1991.
[36] R. Duda, P. Hart, and D. Stork, Pattern Classification. New York, NY, USA: Wiley, 2001.


[37] C. C. Chang and C. J. Lin, "LIBSVM: A library for support vector machines," ACM Trans. Intell. Syst. Technol., vol. 2, no. 3, Apr. 2011. [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm
[38] C. W. Hsu, C. C. Chang, and C. J. Lin. (Apr. 2010). A Practical Guide to Support Vector Classification. [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf

Suh-Yeon Dong received the B.S. and M.S. degrees from the Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea, in 2010 and 2011, respectively, where she is currently pursuing the Ph.D. degree in electrical engineering. Her current research interests include brain signal processing, implicit human intention, and the brain–computer interface.


Bo-Kyeong Kim received the B.S. degree from the Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea, in 2011, where she is currently pursuing the Ph.D. degree in electrical engineering. Her current research interests include single EEG trial classification, feature extraction of brain signals, and brain–computer interface.

Soo-Young Lee received the B.S. degree from Seoul National University, Seoul, Korea, in 1975, and the Ph.D. degree from the Polytechnic Institute of New York University, Brooklyn, NY, USA, in 1984. He is currently a Full Professor with the Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, Korea. He has researched auditory models, information-theoretic processing, proactive knowledge development, and top-down selective attention. His current research interests include artificial brains, such as human-like intelligent systems based on biological information processing mechanisms, and the combination of cognitive neuroscience and information theory for artificial cognitive systems. Prof. Lee was a recipient of the International Neural Network Society (INNS) Leadership Award in 1994, the INNS Presidential Award in 2001, and the APNNA Service Award in 2004. He joined the INNS Governing Board in 2012. He was the President of the Asia-Pacific Neural Network Assembly. He is the Editor-in-Chief of Natural Intelligence and the INNS Magazine, and is on the editorial boards of several other journals. In 1997, he established the Brain Science Research Center, and from 1998 to 2008 he served as the Director and Principal Investigator of the Brain Neuroinformatics Research Program, the first interdisciplinary research program in Korea for brain-inspired intelligent systems with perception, learning, inference, and human-like behavior.
