
Simultaneous Interpretation Utilizing Example-based Incremental Transfer

Hideki Mima*, Hitoshi Iida and Osamu Furuse†


ATR Interpreting Telecommunications Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
Email: H.Mima@doc.mmu.ac.uk, iida@itl.atr.co.jp, furuse@cslab.kecl.ntt.co.jp

Abstract
This paper describes a practical method of automatic simultaneous interpretation utilizing an example-based
incremental transfer mechanism. We primarily show how incremental translation is achieved in the context of
an example-based framework. We then examine the type of translation examples required for simultaneous
interpretation to create naturally communicative dialogs. Finally, we propose a scheme for automatic
simultaneous interpretation exploiting this example-based incremental translation mechanism. Preliminary
experimentation analyzing the performance of our example-based incremental translation mechanism leads us
to believe that the proposed scheme can be utilized to achieve a practical simultaneous interpretation system.

Introduction

Speech-to-speech translation necessitates quick and perspicuous responses for natural communication. Furthermore, since dialogues continuously expand, it is essential to translate inputs incrementally to avoid interrupting the coherency of communication. Therefore, a high degree of incrementality and acceptability in translation, such as in simultaneous interpretation, is essential. To satisfy these requirements, an incremental translation system, which functions as a simultaneous interpreter, is seen as an efficient solution in this field.

The main characteristic of incremental translation is its translation process, which is activated synchronously with the input, in contrast with conventional sentence-by-sentence-based translation, which cannot start processing until the end of an input (Kitano, 1994). However, in incremental translation, we believe that the following issues must be resolved to achieve actual simultaneous interpretation:

• How to define Information Units (IUs) (Halliday, 1994) to determine appropriate components for translation - Since differences exist among the word orders of various languages, especially between linguistically distant languages such as English and Japanese, appropriate transfer units, equally effective for both the source and target languages, have to be defined.

• How to determine a plausible translation for each IU - In terms of information content, the greater the number of words contained in an IU, the less semantic ambiguity there is in translation, but the later the response is obtained. Because of time restrictions, deterministic processing exploiting plausible measures (e.g. linguistic or statistical plausibility) is required for each IU translation in order to shorten the length of IUs.

• How to incorporate simultaneous interpreters' know-how (i.e. empirical knowledge) - In practical simultaneous interpretation, human translators generally use strong sentence planning based on particular empirical know-how. The exploitation of this kind of knowledge is essential for achieving practical simultaneous interpretation (Kitano, 1994).

Transfer-Driven Machine Translation (TDMT) (Furuse, 1994a) (Mima, 1997) has been proposed as an efficient method of spoken dialog translation. TDMT has the following key features:

• Utilization of Constituent Boundary Patterns (CB-Patterns) (Furuse, 1994b) (Furuse, 1996) - CB-Patterns based on meaningful information units are applied to parse an input incrementally and to produce translations based on the synchronization of source and target language structure pairs (Abeillé, 1990) (Shieber, 1990). This contrasts with the linguistic manner of applying grammar rules.

* Current affiliation: Department of Computing, Manchester Metropolitan University, Manchester M1 5GD, U.K.
† Current affiliation: NTT Communication Science Laboratories, 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0237, Japan.

The result of this provides for incremental translations that can even handle lengthy input efficiently, by splitting the input into appropriate and meaningful chunks. In addition,

• Existence of an efficient disambiguation scheme - By dealing with best-only substructures utilizing stored empirical translation examples compiled from a linguistic database, the explosion of structural ambiguities is significantly constrained (Furuse, 1996).

Accordingly, TDMT has the advantage of having both the capability to define effective IUs and an efficient deterministic processing scheme for incremental spoken-language translation. Additionally, in exploiting the empirical knowledge that is required in practical simultaneous interpretation, we can assume that this empirical knowledge is described within the linguistic resource of simultaneous interpretation corpora. (Harbusch, 1994) proposed a method of default handling in incremental generation based on this observation.

In this paper, we describe the achievement of practical simultaneous interpretation using TDMT. Furthermore, we discuss what kind of empirical knowledge is required for realizing efficient simultaneous interpretation, in terms of a simultaneous translator's knowledge, and we propose a method to exploit this empirical knowledge in an example-based framework in order to produce consistent translations. A preliminary experiment analyzing our proposed scheme indicates that it can be used in achieving simultaneous interpretation systems.

The next section of the paper briefly explains incremental translation using TDMT. Section 2 discusses the type of empirical knowledge necessary in simultaneous interpretation, using some examples. Section 3 describes our proposed scheme for exploiting simultaneous interpretation examples. Section 4 presents a preliminary experiment analyzing our proposed scheme to confirm its feasibility. Section 5 examines some related research in the field of incremental translation. Finally, a summary of our approach concludes this paper.

1 Incremental Translation Using Transfer-Driven Machine Translation

1.1 Constituent Boundary Pattern

In TDMT, translation is performed by applying stored empirical transfer knowledge, which describes the correspondence between source language expressions and target language expressions at various linguistic levels. The source and target expressions of the transfer knowledge in TDMT are expressed as CB-Patterns, which represent meaningful units for linguistic structure and transfer. The efficient application of transfer knowledge source components to an input string plays a key role in our basic incremental translation scheme. A pattern is defined as a sequence that consists of variables and constituent boundaries such as surface functional words. The transfer knowledge is compiled from actual translation examples for every source pattern.

1.2 Incremental Pattern Application

The incremental application of CB-Patterns is based on the idea of incremental chart parsing (Furuse, 1996) (Amtrup, 1995) with notions of linguistic levels. The procedure for the application of CB-Patterns is as follows:

(a) Determination of possible pattern applications.
(b) Translation candidate determination and structural disambiguation of patterns by semantic distance calculation.

Our scheme determines the best translation and structure in parallel with the input sequence, and can restrain the number of competing structures (possible translation candidates) at each possible utterance point in the input by performing (a) in parallel with (b), thus reducing the translation cost in time. The structure selected in (b) has its result transferred with head-word information using semantic distance calculations when combined incrementally with other structures. The output sentence is generated as a translation result from the structure for the whole input, which is composed of best-first substructures.

In order to limit the combinations of patterns and to control the appropriate timing of each partial utterance during pattern application, we distinguish pattern levels, and specify the linguistic sublevels permitted for use in the assigned variables at each linguistic level. This is because, if arbitrary combinations of patterns were permitted, the number of possible combinations would obviously explode. Table 1 shows examples of the relationship between linguistic levels. Every CB-Pattern is categorised as one of the linguistic levels, and a variable on a given level is instantiated by a string on the linguistic levels in the second column of Table 1.
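A pattern such as "X no Y" (Section 1.1) is a sequence of variables and constituent boundaries. As a minimal sketch of how such a pattern can be matched against a token sequence (hypothetical code, not the actual TDMT matcher; the matching here is greedy and deliberately simplified):

```python
def match_pattern(pattern, tokens):
    """Match a CB-Pattern such as ("X", "no", "Y") against a token list,
    binding each variable (upper-case element) to the tokens up to the
    next constituent boundary. Simplification: a variable's span may not
    itself contain the following boundary word."""
    bindings, i = {}, 0
    for j, elem in enumerate(pattern):
        if elem.isupper():  # variable
            boundary = pattern[j + 1] if j + 1 < len(pattern) else None
            span = []
            while i < len(tokens) and tokens[i] != boundary:
                span.append(tokens[i])
                i += 1
            if not span:
                return None
            bindings[elem] = span
        else:  # constituent boundary (surface functional word)
            if i >= len(tokens) or tokens[i] != elem:
                return None
            i += 1
    return bindings if i == len(tokens) else None

# "hoteru no juusho" (romanization flattened to ASCII) matches "X no Y":
print(match_pattern(("X", "no", "Y"), ["hoteru", "no", "juusho"]))
# {'X': ['hoteru'], 'Y': ['juusho']}
```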

For instance, in the noun phrase "X of Y", the variables X and Y cannot be instantiated by a simple sentence pattern, but can be instantiated by an NP, such as a noun phrase pattern or a compound noun pattern.

Moreover, these levels give a guideline for the timing of utterance production (i.e. the timing of when an utterance is said). For example, each simple sentence level pattern has utterance markers (Table 2, where '/' indicates an utterance marker) for the possible insertion of an utterance during left-to-right application of the pattern. Thus, redundant or incomplete partial matchings can be eliminated, and an appropriate trigger of utterance can be obtained. (Furuse, 1996) provides further details of the algorithm for incremental CB-Parsing.

Table 1  Possible linguistic sublevels in variables

  Linguistic level    | Sublevels of variables
  Simple sentence     | VP, NP, ...
  Verb phrase (VP)    | VP, NP, verb, ...
  Noun phrase (NP)    | NP, CN, proper-noun, ...
  Compound noun (CN)  | CN, noun, ...

Table 2  Utterance markers

  English pattern | Japanese pattern
  By the way / X  | tokorode / X'
  No / X          | iie / X'
  X but / Y       | X' / shikashi Y'
  X if / Y        | X' / moshi Y'
  X / where Y     | X' / Y'

1.3 Disambiguation of Translation Candidates

The CB-Pattern "X no Y" with the particle "no" is a frequently used expression in Japanese. We can observe the following Japanese-to-English transfer knowledge about "X no Y" from such translation examples as the source-target pairs "hoteru no jūsho" → "the address of the hotel", "eigo no paNfuretto" → "the pamphlet written in English", etc.

  X no Y =>
    Y' of X'          ((hoteru, jūsho), ...),     'hotel' 'address'
    Y' written in X'  ((eigo, paNfuretto), ...),  'English' 'pamphlet'
    Y' for X'         ((asu, teNki), ...),        'tomorrow' 'weather'

Within this pattern, X' is the target word corresponding to X, and a corresponding English word is written alongside each Japanese word. For example, "hoteru" means 'hotel', and "jūsho" means 'address'. This transfer knowledge expression indicates that the Japanese pattern "X no Y" corresponds to many possible English expressions. (hoteru, jūsho) is a sample binding for "X no Y", where X = hoteru and Y = jūsho.

TDMT makes the most of an example-based framework, which produces an output sentence by mimicking the translation example closest to the input sentence. The semantic distance from the input is calculated for all examples. Then, the example closest to the input is chosen, and the target expression of that example is extracted. Suppose that the input is "nihoNgo no paNfuretto", where nihoNgo means 'Japanese', and that the input is closest to (eigo, paNfuretto); "the pamphlet written in Japanese" can then be obtained by choosing "Y' written in X'" as the best target expression.

Furthermore, ambiguity in the combination of patterns that has not been constrained by the linguistic levels is also resolved incrementally, by using the total sum of the semantic distances of the patterns contained (Furuse, 1996). The distance between an input and a translation example is measured based on the semantic distance between the words contained, and the semantic distance between words is calculated in terms of a thesaurus hierarchy. (Sumita, 1991) provides further details of the semantic distance calculation.

2 Exploitation of a Simultaneous Interpreter's Empirical Knowledge

In practical simultaneous interpretation, human translators generally use strong sentence planning, such as transformation between the active and the passive voice, transformation from a lengthy interrogative sentence to a tag question, and topicalization transformation. Moreover, the input is produced and modified in a step-by-step manner, so that it can be temporarily incomplete - although as a whole sentence it may become sufficient. Thus, the consistency of translations has to be adjusted appropriately when a contradiction occurs between a previously uttered part of the translation and the part currently being translated.
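The thesaurus-based distance and best-example selection of Section 1.3 can be sketched as follows. This is a toy illustration: the thesaurus paths, target templates, and word list are invented, the measure is a simplification of (Sumita, 1991), and romanization is flattened to ASCII.

```python
# Toy 3-level thesaurus: word -> path of categories from the root.
# The categories are invented for illustration, not ATR's thesaurus.
THESAURUS = {
    "nihoNgo":    ("abstract", "language", "natural-language"),
    "eigo":       ("abstract", "language", "natural-language"),
    "hoteru":     ("concrete", "place", "building"),
    "juusho":     ("abstract", "attribute", "location"),
    "paNfuretto": ("concrete", "document", "booklet"),
    "asu":        ("abstract", "time", "day"),
    "teNki":      ("abstract", "phenomenon", "weather"),
}

def word_distance(w1, w2, depth=3):
    """0.0 for words in the same leaf category, 1.0 when no category is
    shared (a simplification of the measure in (Sumita, 1991))."""
    shared = 0
    for a, b in zip(THESAURUS[w1], THESAURUS[w2]):
        if a != b:
            break
        shared += 1
    return (depth - shared) / depth

# Transfer knowledge for "X no Y": (target template, example bindings).
TRANSFER_X_NO_Y = [
    ("{Y} of {X}",         [("hoteru", "juusho")]),
    ("{Y} written in {X}", [("eigo", "paNfuretto")]),
    ("{Y} for {X}",        [("asu", "teNki")]),
]

TARGET = {"nihoNgo": "Japanese", "paNfuretto": "the pamphlet"}

def transfer_x_no_y(x, y):
    """Choose the target template whose closest stored example minimises
    the summed semantic distance over the X and Y slots."""
    template, _ = min(
        TRANSFER_X_NO_Y,
        key=lambda e: min(word_distance(x, ex) + word_distance(y, ey)
                          for ex, ey in e[1]),
    )
    return template.format(X=TARGET[x], Y=TARGET[y])

print(transfer_x_no_y("nihoNgo", "paNfuretto"))
# the pamphlet written in Japanese
```

Here "nihoNgo" shares its whole thesaurus path with "eigo", so the (eigo, paNfuretto) example wins and the "Y' written in X'" template is chosen, as in the paper's example.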

As a consequence of underspecification, simultaneous interpretation is essentially based on working with empirical knowledge - e.g. simultaneous interpreters' translation examples. In this section, we first describe the kinds of examples that are required to achieve simultaneous interpretation, using some sample sentences.

2.1 Empirical Knowledge

• Transformation to a tag question
Let us consider the following Japanese utterance:
(E1) Nani-mo moNdai-wa ari-maseN -<pause>- de-shō-ka. (what problem exist -<pause>- is there)¹
In Japanese, an interrogative is specified at the end of the sentence, while in English it is generally specified at the front of the sentence. Thus, although a translation of the whole sentence of (E1) is "Is everything all right", in some cases "Everything is all right" could be uttered after the pause in the incremental framework. In this case, the meaning of the previously uttered part is no longer consistent with the current translation. However, even in this case, the translation can be continued by transforming it into a tag question, as in (E1)', by using a peculiar translation example [TE1], without interruption by semantic inconsistency or the insertion of a restatement.

[TE1] (X de-shō-ka) → (X', isn't it)
(E1)' Everything is alright, isn't it. ({[TE1]: X' = 'Everything is alright'})

• Negative sentence
Let us consider the following utterance:
(E2) TsuiNrūmu-wa gozai-masu -<pause>- ga, hoNjitsu-wa goriyō-ni-nare-maseN. (twin room exist -<pause>- but today not-available)
In Japanese, negation is also specified at the end of the sentence, while in English it has to be specified in front of the finite verb. In addition, the expression "X wa gozai-masu" in (E2) has possible translations such as "we have X'" or "X' is available". Thus, although the whole translation should ideally read "We have twin rooms, but none are available today", "A twin room is available" might be selected as a part of the translation in some cases. One solution could be to restate previously uttered phrases, e.g. "no, sorry, we do have twin rooms, but none ...", but such restatements should not be used frequently, because they tend to break the general coherency of human interaction. In this case, however, the translation can be continued as (E2)' by using a peculiar translation example [TE2], with no restatement.

[TE2] (X ga, Y) → (X' usually, but Y')
(E2)' A twin room is available usually, but we do not have any vacancies today. ({[TE2]: X' = 'A twin room is available', Y' = 'we do not have any vacancies today'})

• Failure of prediction
In simultaneous interpretation, elements are usually uttered before the input consumption has been finished. Thus, because of the uncertainty in assumptions, a system with this facility must be able to adjust the whole content of the translation when it is realized, from information given later, that an assumption is incorrect. Consider the following English utterance:
(E3) That restaurant is open -<pause>- as only as in the evening.
Given the part of the translation already uttered, "sono-resutoraN-wa ōpuN-shite-i-masu", "yoru nomi" should have been inserted in front of the phrase "ōpuN-shite-i-masu" when the whole sentence is translated. However, the translation can be continued as it is, as in (E3)', by using a peculiar translation example [TE3].

[TE3] (X as only as Y) → (X' i-masu, ga, Y' nomi-desu)
(E3)' Sono-resutoraN-wa ōpuN-shite i-masu, ga, yoru nomi-desu ({[TE3]: X' = 'ōpuN-shite', Y' = 'yoru'})

As the above examples show, simultaneous interpretation as skilled as that performed by a human interpreter is achievable by exploiting peculiar translation examples - i.e. simultaneous interpretation examples (or SI-examples, in short). In the next section, we propose an algorithm to handle these kinds of SI-examples with the best-first example-based incremental MT mechanism.

¹ In this paper, sample Japanese is Romanized in italics based on the Hepburn system, with the corresponding English words following in parentheses.
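One way to encode SI-examples such as [TE1] and [TE2] is as target templates whose first slot is the already-uttered translation X', so that only a continuation ever needs to be spoken. This is a hypothetical sketch (invented encoding, not the system's internal representation; romanization flattened to ASCII):

```python
# Hypothetical encoding of SI-examples from Section 2.1: each target
# template begins with the already-uttered translation X', so the
# continuation can be spoken without restating anything.
SI_EXAMPLES = {
    "X de-shoo-ka": "{X}, isn't it",          # [TE1] tag question
    "X ga, Y":      "{X} usually, but {Y}",   # [TE2] delayed negation
}

def continuation(pattern, uttered, rest=""):
    """Return only the words still to be spoken, given the part of the
    translation that has already been uttered."""
    full = SI_EXAMPLES[pattern].format(X=uttered, Y=rest)
    assert full.startswith(uttered)  # consistency with prior output
    return full[len(uttered):].lstrip(" ,")

print(continuation("X de-shoo-ka", "Everything is alright"))
# isn't it
print(continuation("X ga, Y", "A twin room is available",
                   "we do not have any vacancies today"))
# usually, but we do not have any vacancies today
```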

3 Simultaneous Interpretation Algorithm

Although the main characteristic of example-based translation is the use of the most similar examples as the main knowledge source for translation, the exploitation of SI-examples is drawn from the following consideration:

• A translation should use an example consistent with previously uttered information.

Thus, the key translation process exploiting SI-examples consists of the following stages:
(1) Checking the contextual consistency between previously uttered phrases² and the phrase to be uttered next.
(2) Retrieving the most plausible example according to both the contextual sequence and similarity.
(3) Re-translating the phrase to be uttered next by using the example retrieved in (2).

The algorithm is described as follows. In the algorithm, the input phrase is considered as a combination of the substructures shown in Figure 1, to facilitate understanding of the algorithm. For example, in the case of (E3), ST1 indicates "The restaurant is open", ST2 indicates "open as only as in the evening", and ST1,2 indicates the whole phrase. In addition, trans(STi) returns a word sequence indicating the translation of STi, trans(STi, E) returns a word sequence indicating the translation of STi using example E, and i indicates the current processing part. Since the algorithm for the exploitation of SI-examples is applied only if a previously translated phrase exists, the proposed algorithm is executed in the case of i >= 2.

[Figure 1: Notation of substructures - for input (E3), the substructures ST1 and ST2 with trans(ST1) = "Sono-resutoraN-wa ōpuN-shite i-masu" and trans(ST2) = "ōpuN-shite i-masu, ga, yoru nomi-desu"]

Algorithm:
Start.
1. Retrieve the examples similar to STi from the total example database (normal + SI-examples) and assign the list to {SE} with the appropriate semantic distances.
2. Produce trans(STi, E), where E indicates the most similar example listed in {SE}.
3. Remove the example E from {SE}.
4. If trans(STi-1,i, E) == trans(STi-1) +³ trans(STi, E)⁴, then output the difference between trans(STi, E) and trans(STi-1), then goto End.
5. Goto 2.
End.

In the majority of conventional example-based frameworks, only semantic similarity is considered in retrieving the examples to be applied. In our scheme, on the other hand, not only semantic similarity but also contextual consistency with the previous translation is considered. In other words, the key notion of the scheme is its mechanism for selecting appropriate examples. Hence, as the above algorithm shows, the exploitation of SI-examples can be combined smoothly with the conventional example-based framework.

Let us explain the algorithm in terms of sentence (E3) as an example. First, assuming that trans(ST1) = "Sono-resutoraN-wa ōpuN-shite i-masu" (the-restaurant open), the most similar example for ST2 is normally:

[TE4] (X as only as Y) → (Y' nomi X' i-masu)

Thus, trans(ST2, TE4) can be "yoru nomi ōpuN-shite i-masu" (evening only open); as the phrase "yoru nomi ..." is, in this case, not contextually continuous, the next example should be extracted from the similar example list {SE}. That example is [TE3]: since trans(ST2, TE3) = "ōpuN-shite i-masu, ga, yoru nomi-desu", in terms of the contextual order of the words this translation can be continuous. Thus, the difference between trans(ST1) and trans(ST2, TE3), "ga, yoru nomi-desu", can be obtained as the next utterance.

² In this paper, we only state the context within a sentence and do not refer to contexts between dialogs.
³ Indicating a sequential appending operation, which includes removal of the common subsequence between the end of the first item and the beginning of the second item. For example, word sequence "A B" + word sequence "B C" indicates "A B C".
⁴ I.e. trans(STi-1) and trans(STi) are contextually continuous. In this paper, we define contextual continuity from the viewpoint of the sequences of concrete words (phrases) contained, in terms of combination with an example-based framework.
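The appending operation of footnote 3 and the continuity test of step 4 can be sketched as follows. This is a simplified operationalization, not the actual implementation: a candidate is treated as contextually continuous when it overlaps the tail of the previous output (romanization flattened to ASCII).

```python
def append(seq1, seq2):
    """The '+' of footnote 3: concatenate word sequences, removing the
    common subsequence between the end of seq1 and the start of seq2,
    so that ["A","B"] + ["B","C"] gives ["A","B","C"]."""
    for k in range(min(len(seq1), len(seq2)), 0, -1):
        if seq1[-k:] == seq2[:k]:
            return seq1 + seq2[k:]
    return seq1 + seq2

def next_utterance(prev, candidates):
    """Steps 2-5: try candidate translations of ST_i best-first; accept
    the first one that is contextually continuous with the previous
    translation and return only the difference to be uttered next."""
    for cand in candidates:
        combined = append(prev, cand)
        if combined != prev + cand:  # overlap found => continuous
            return combined[len(prev):]
    return None  # no continuous candidate in {SE}

# The (E3) walkthrough: [TE4] is rejected, [TE3] is accepted.
prev = ["Sono-resutoraN-wa", "oopuN-shite", "i-masu"]
te4 = ["yoru", "nomi", "oopuN-shite", "i-masu"]             # via [TE4]
te3 = ["oopuN-shite", "i-masu", "ga", "yoru", "nomi-desu"]  # via [TE3]
print(next_utterance(prev, [te4, te3]))
# ['ga', 'yoru', 'nomi-desu']
```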

4 Preliminary Experiments

We conducted a preliminary experiment with respect to (a) the quality of example-based translation in relation to IUs (i.e., meaningful units), and (b) the quality and speed of incremental parsing (CB-Parsing), to confirm the feasibility of our proposed scheme.

In the evaluation of (a), we conducted a jack-knife experiment to measure the average success rate of translation for the most frequently used (i.e. the most ambiguous) expressions in Japanese, "X no Y" and "X wo Y". We prepared 774 and 689 examples for these expressions respectively, and conducted the experiment in increments of 100 examples (Furuse, 1994a). The examples were extracted by random sampling. We then evaluated the 10 translations of the corresponding expressions in the dialog database for each case. Figure 2 shows the average rate of the evaluation for the 10 translations.

[Figure 2: Quality of example-based transfer - translation success rate (%) plotted against the number of examples (100-800) for "X no Y" and "X wo Y"]

Although the translation quality of each unit depended on the type of expression, the graph shows that, in general, the more examples the system has, the better the quality⁵.

The conditions of our experiment and evaluation for (b) are that the numbers of CB-Patterns for Japanese-English and English-Japanese translation are 777 and 1241, respectively, and the numbers of total examples are 10000 and 8000, respectively. In the evaluation, we set the system to retain only one substructure in the semantic distance calculation, in order to confirm the feasibility of deterministic processing at each incremental step. CB-Parsing results for 69-77 unseen dialogs (of 1,000 different unseen sentences) were manually evaluated by assigning a grade indicating success or failure. All of the parsing times include the access time for the example database (i.e. they correspond to the whole transfer time) and were measured on a Sparc Station 10 workstation with 256 MB of memory.

Table 3 shows the experimental results. For CB-Parsing accuracy, a success rate of approximately 76% was achieved for both translation directions, rates that are fairly high for spoken-language parsing.

Table 3  Evaluation results

                                  J-E        E-J
  No. of test dialogues (sent.)   69 (1225)  77 (1341)
  Morphemes / sentence            9.7        7.1
  CB-Parsing accuracy             76.7 %     76.0 %
  Parsing time (average)          0.4 sec.   0.3 sec.

The main problem in the parsing procedure involved an insufficient number of examples for the CB-Patterns. However, as Figure 2 shows, an increase in the success rate with the number of examples could be observed with our framework. Thus, overall accuracy and acceptability should improve in proportion to an increase in transfer examples. Although the speed depends on the amount of knowledge and the sentence length, the average time was less than 0.4 seconds, which is fairly rapid. Thus, our translation scheme can be seen as an efficient translation mechanism for achieving a practical simultaneous interpretation system.

5 Related Research

Several schemes have been proposed with respect to incremental translation based on the synchronization of input and output fragments and the use of specialized information for simultaneous interpretation. (Kitano, 1994) proposes incremental translation based on marker-passing memory-based translation. Although the technique adopts a cost-oriented best-first strategy to avoid the explosion of structural ambiguity, the strategy does not pay attention to actual aspects of the overall meaning, such as the case when a previously made assumption turns out to be incorrect. (Matsubara, 1997) proposed a method to handle extra-grammatical phenomena with a chart-based incremental English-Japanese MT system based on observations of a translation corpus. However, this system was only capable of English-to-Japanese translation; the aspects of flexible order, repetitions, and ellipses were only briefly considered; and necessary extensions, such as the adjustment of consistency in relation to the whole sentence by employing simultaneous interpreters' knowledge, were not investigated.

Conclusion

We have described a practical method of automatic simultaneous interpretation. In the exploitation of empirical knowledge, we examined the kind of empirical knowledge required to achieve efficient simultaneous interpretation. We then proposed a method to exploit these empirical simultaneous translation examples in an example-based framework to produce a practical method of simultaneous interpretation. Preliminary experimentation analyzing our proposed scheme showed that it can be utilized to achieve a simultaneous interpretation system.

The possibility of applying this sort of example-based framework to multilingual translation, such as a Japanese-German pair and a Japanese-Korean pair, has been shown in (Furuse, 1995) and (Mima, 1997). Therefore, the algorithm can also be expected to work not only for an English-Japanese pair but also for other language pairs.

Important areas of future research will involve methods for:
• Predicting the contents of the next utterance by using dialog-specific discourse analysis (Levin, 1995)
• Handling linguistic differences between the source and target languages, such as subject ellipsis

We believe that some situational information, such as the speakers' roles in the conversation (Mima, 1997), could be potentially helpful both for predicting the contents of the next utterance and for resolving linguistic differences. The integration of statistical/stochastic approaches, such as Decision-Tree Learning (Yamamoto, 1997), for the above discourse-related issues is another area of interest for future work.

⁵ However, we also have to ascertain the practical saturation limit, i.e. how far the transfer knowledge can usefully be expanded, as future work.

References

A. Abeillé, Y. Schabes and A. K. Joshi (1990) Using Lexicalized TAGs for Machine Translation. In Proc. of Coling'90, pages 1--6.
J. W. Amtrup (1995) Chart-based Incremental Transfer in Machine Translation. In Proc. of 6th TMI, pages 188--195.
O. Furuse, E. Sumita, and H. Iida (1994a) Transfer-Driven Machine Translation Utilizing Empirical Knowledge (in Japanese). Trans. of Information Processing Society of Japan, Vol. 35, No. 3, pages 414--425.
O. Furuse and H. Iida (1994b) Constituent Boundary Parsing for Example-Based Machine Translation. In Proc. of Coling'94, pages 105--111.
O. Furuse, J. Kawai, H. Iida, S. Akamine, and D. Kim (1995) Multi-lingual Spoken-Language Translation Utilizing Translation Examples. In Proc. of NLPRS'95, pages 544--549.
O. Furuse and H. Iida (1996) Incremental Translation Utilizing Constituent Boundary Patterns. In Proc. of Coling'96, pages 412--417.
K. Harbusch, G. Kikui, and A. Kilger (1994) Default Handling in Incremental Generation. In Proc. of Coling'94, pages 356--362.
M. A. K. Halliday (1994) An Introduction to Functional Grammar. Edward Arnold.
H. Kitano (1994) The ΦDM-DIALOG System. In Speech-To-Speech Translation, H. Kitano (ed.), Kluwer Academic Publishers, pages 47--113.
L. Levin, O. Glickman, Y. Qu, D. Gates, A. Lavie, C. P. Rosé, C. V. Ess-Dykema and A. Waibel (1995) Using Context in Machine Translation of Spoken Language. In Proc. of 6th TMI, pages 173--187.
S. Matsubara and Y. Inagaki (1997) Utilizing Extra-grammatical Phenomena in Incremental English-Japanese Machine Translation. In Proc. of 7th TMI, pages 31--38.
H. Mima, O. Furuse, and H. Iida (1997) Improving Performance of Transfer-Driven Machine Translation with Extra-linguistic Information from Context, Situation, and Environment. In Proc. of IJCAI'97, pages 983--988.
S. M. Shieber and Y. Schabes (1990) Synchronous Tree-Adjoining Grammars. In Proc. of Coling'90, pages 253--258.
E. Sumita and H. Iida (1991) Experiments and Prospects of Example-based Machine Translation. In Proc. of 29th ACL, pages 185--192.
K. Yamamoto, E. Sumita, O. Furuse, and H. Iida (1997) Ellipsis Resolution in Dialogues via Decision-Tree Learning. In Proc. of NLPRS'97, pages 423--428.

