... to classify
more verbs into a larger number of classes. In our
automatic verb classification, we aim for a larger
scale experiment. We select our experimental verb
classes and verbs as follows: ... (2002). Automatic verb classification using
a general feature space. Master’s thesis, University of
Toronto.
Joanis, E., Stevenson, S., and James, D. (2007). A general
feature space for automatic verb ... on a
small set of verbs or a small number of verb classes.
For example, Schulte im Walde (2000) uses 153
verbs in 30 classes, and Joanis et al. (2007) take
on 835 verbs and 15 verb classes. Since...
... token
at the head of the buffer, and pop the stack.
2.1 Classification
Transition-based dependency parsing reduces
parsing to consecutive multiclass classification.
From each configuration one amongst ... in the MaltParser is to use a 2nd-
degree polynomial kernel with the SVM.
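As an illustration of this reduction, the parsing loop below chooses one transition per configuration with a classifier. This is a minimal arc-standard sketch; the `score` function is a hypothetical stub standing in for the trained model (MaltParser would use an SVM over configuration features).

```python
# Minimal sketch of arc-standard transition-based parsing as repeated
# multiclass classification. `score` is a hypothetical stub for the
# trained classifier over configuration features.

def legal_transitions(stack, buffer):
    """Transitions permitted in the current configuration."""
    moves = []
    if buffer:
        moves.append("SHIFT")
    if len(stack) >= 2:
        moves.extend(["LEFT-ARC", "RIGHT-ARC"])
    return moves

def score(transition, stack, buffer):
    # Stub: a real system scores transitions from learned features;
    # this toy version always prefers SHIFT when it is legal.
    return {"SHIFT": 2, "RIGHT-ARC": 1, "LEFT-ARC": 0}[transition]

def parse(tokens):
    stack, buffer, arcs = [], list(tokens), []
    while buffer or len(stack) > 1:
        best = max(legal_transitions(stack, buffer),
                   key=lambda m: score(m, stack, buffer))
        if best == "SHIFT":
            stack.append(buffer.pop(0))
        elif best == "LEFT-ARC":
            dep = stack.pop(-2)            # second-from-top is dependent
            arcs.append((stack[-1], dep))  # (head, dependent)
        else:                              # RIGHT-ARC
            dep = stack.pop()              # top is dependent
            arcs.append((stack[-1], dep))
    return arcs
```

Each iteration is one classification decision; training reduces to learning `score` from oracle transition sequences.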
3 Confidence-weighted classification
Dredze et al. (2008) introduce confidence-
weighted linear classifiers, which are online
classifiers ... On the other hand, if it has
never been updated before, the estimate is probably
very poor. CW classification deals with this
by having a confidence-parameter for each weight,
modeled by a Gaussian...
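The per-weight confidence idea can be sketched as follows. This is not the exact closed-form update of Dredze et al. (2008); the learning-rate and variance-shrinkage rules here are simplified illustrations of the mechanism: high-variance (rarely seen) weights move in large steps, low-variance weights barely move.

```python
# Simplified diagonal confidence-weighted (CW) classifier sketch:
# each weight mu[i] carries its own variance sigma[i] (a Gaussian over
# weights). The update rules below are illustrative, not the paper's
# exact closed-form solution.

class SimpleCW:
    def __init__(self, dim, init_var=1.0, eta=0.1):
        self.mu = [0.0] * dim            # weight means
        self.sigma = [init_var] * dim    # per-weight variances
        self.eta = eta

    def predict(self, x):
        s = sum(m * xi for m, xi in zip(self.mu, x))
        return 1 if s >= 0 else -1

    def update(self, x, y):
        margin = y * sum(m * xi for m, xi in zip(self.mu, x))
        if margin >= 1.0:                # confident enough: no update
            return
        for i, xi in enumerate(x):
            if xi == 0.0:
                continue
            # step size scaled by this weight's current variance
            self.mu[i] += self.eta * self.sigma[i] * y * xi
            # observing the feature shrinks its variance (more confidence)
            self.sigma[i] /= 1.0 + self.eta * xi * xi
```

After a few updates on a feature, its variance drops and subsequent updates to that weight become small, mimicking the behaviour the text describes.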
... algorithms for
sentiment classification, SCL and SFA. Each set of
bars represents a cross-domain sentiment classifica-
tion task. The thick horizontal lines are in-domain
sentiment classification accuracies. ... 19-24, 2011.
© 2011 Association for Computational Linguistics
Automatically Extracting Polarity-Bearing Topics for Cross-Domain
Sentiment Classification
Yulan He   Chenghua Lin†   Harith Alani
Knowledge ... training.
Figure ?? shows the classification results on the
five different domains by varying the number of top-
ics from 1 to 200. It can be observed that the best
classification accuracy is obtained...
... be effec-
tive in text classification tasks (Joachims, 1998). We
then apply a simple word filter based on POS tags to
select content words (nouns, verbs, adjectives, and
adverbs). In particular, ... sentiment classification using multi-
ple source domains. Experimental results using a
benchmark dataset for cross-domain sentiment clas-
sification show that our proposed method can im-
prove classification ... as the classification algo-
rithm (Ng, 2004), which produces a sparse model in
which most irrelevant features are assigned a zero
weight. This enables us to select useful features for
classification...
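The sparsity mechanism behind that feature selection can be shown in a few lines. The sketch below uses proximal gradient descent on logistic loss with an L1 penalty, where the soft-thresholding step snaps uninformative weights exactly to zero; a real system would use a library solver, and the toy data are illustrative.

```python
import math

# Why an L1 penalty yields a sparse model: proximal gradient descent
# on logistic loss, with soft-thresholding driving small weights to
# exactly zero.

def soft_threshold(v, t):
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def l1_logistic(X, y, lam=0.1, lr=0.1, iters=500):
    """X: list of feature vectors; y: labels in {-1, +1}."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-yi * s))   # sigmoid(y * w.x)
            for j, xj in enumerate(xi):
                grad[j] += -(1.0 - p) * yi * xj / n
        # gradient step, then the L1 proximal (soft-threshold) step
        w = [soft_threshold(wj - lr * gj, lr * lam)
             for wj, gj in zip(w, grad)]
    return w
```

On data where the second feature is uninformative (constant across classes), its weight stays at exactly zero while the informative feature receives positive weight, which is the feature-selection effect the text refers to.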
... learning of hierar-
chical multilabel classification models. In Journal of
Machine Learning Research.
G. Salton, A. Wong, and C. S. Yang. 1975. A vector space
model for automatic indexing. Communications ... hierarchical text classification with
latent concepts. Experimental results show
that the performance of our algorithm is com-
petitive with the recently proposed hierarchi-
cal classification algorithms.
... criteria, e.g. “Entertainment”, “Sports”,
and “Education” in news classification, “Junk E-mail”
and “Ordinary Email” in email classification.
In the literature, many algorithms (Sebastiani, 2002;
Yang...
... between verbs. The
main idea is that asymmetric entailment relations
between verbs can be analysed in the context of
class-level and word-level selectional preferences
(Resnik, 1993). Selectional preferences ... that selectional preferences like {player} win
may be read as suggesting the entailment relation
win(x) → play(x).
Selectional preferences have been often used to
infer semantic relations among verbs ... win.
3 Verb entailment: a classification
The focus of our study is on verb entailment. A
brief review of the WordNet (Miller, 1995) verb
hierarchy (one of the main existing resources on
verb entailment...
... of
inference. The order is learned automatically, and
partial output is in turn used to train the local clas-
sifier. Therefore, the order of inference and the lo-
cal classification are dynamically ... although we do not know which
is the most desirable. In this approach, we can eas-
ily collect the automatically generated negative sam-
ples, and use them in learning. These negative sam-
ples are ... search was employed. Our guided learning al-
gorithm provides more flexibility in search with an
automatically learned order. In addition, our treat-
ment of the score of action and the score of...
... the above ker-
4.2 Answer classification
Question classification does not allow us to fully
exploit the PAS potential, since questions tend to be
short and with few verbal predicates (i.e. the ... state-of-the-art accuracy on
question classification. (b) PB predicative structures
are not effective for question classification but show
promising results for answer classification on a cor-
pus of answers ... the
question but could not be judged as valid answers.
Answer classification results To test the impact
of our models on answer classification, we ran 5-fold
cross-validation, with the constraint...
... (unsegmented) is encoded in a
similar way, but does not contain the class labels B
and N.
Finally, we automatically assign a probability of 0.5
to unseen events.
4.3 Predicting word boundary with a classifier
The ... information and it treats
character strings as context which provides infor-
mation on the possible classification of character-
breaks as word-breaks. We are confident that once
a standard model of ... Association for Computational Linguistics
Rethinking Chinese Word Segmentation: Tokenization, Character
Classification, or Wordbreak Identification
Chu-Ren Huang
Institute of Linguistics
Academia Sinica, Taiwan
churen@gate.sinica.edu.tw
Petr Šimon
Institute...
... resolution. In
Proc. of NAACL, pages 55–62.
R. Iida, K. Inui, H. Takamura, and Y. Matsumoto. 2003.
Incorporating contextual cues in trainable models for
coreference resolution. In Proc. of the EACL ... a training or test instance, and the
clustering algorithm used to coordinate the coref-
erence classification decisions. Selecting a corefer-
ence system, then, is a matter of instantiating these
elements ... induction
system (Quinlan, 1993), the RIPPER rule learning
algorithm (Cohen, 1995), and maximum entropy
classification (Berger et al., 1996). The classifica-
tion model induced by each of these learners...
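The two instantiable elements named above, a pairwise classifier and a clustering step that coordinates its decisions, can be sketched as follows. The closest-first strategy shown is one common choice; the `is_coreferent` predicate is a hypothetical stub for whatever learner (decision trees, RIPPER, maxent) fills the classifier role.

```python
# Closest-first clustering over pairwise coreference decisions: each
# mention is linked to its nearest preceding mention that the pairwise
# classifier marks as coreferent; otherwise it starts a new entity.

def closest_first_cluster(mentions, is_coreferent):
    cluster_of = {}          # mention index -> cluster id
    clusters = []
    for j, m in enumerate(mentions):
        antecedent = None
        for i in range(j - 1, -1, -1):     # scan right-to-left
            if is_coreferent(mentions[i], m):
                antecedent = i
                break
        if antecedent is None:
            cluster_of[j] = len(clusters)
            clusters.append([m])
        else:
            cluster_of[j] = cluster_of[antecedent]
            clusters[cluster_of[j]].append(m)
    return clusters
```

Swapping in a different clustering rule (e.g. best-first, linking to the highest-scoring antecedent rather than the closest) changes how the same pairwise decisions are coordinated, which is exactly the design dimension the text describes.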
... function of
d. Models that take this form are commonplace in
classification.
2.3 Turney’s Classifier as Naive Bayes
Although Naive Bayes classification requires a la-
beled corpus of documents, we ... slightly better than the
average pair. Whereas mean performance ranges
from 37.3% to 39.6%, misclassification rates for
this pair of anchors range from 37.4% to 38.1%.
4.3 A Smaller Unlabeled ...
[Plot: “Misclassification versus Sample Size”; x-axis: Num. of Labeled Documents; y-axis: Classif. Error]
Figure 5: Misclassification with Labeled Documents. The solid curve represents a latent factor model with...
... O-polysaccharide of Proteus mirabilis G1 (Eur. J. Biochem. 269)
Structure of the O-polysaccharide and classification of Proteus mirabilis strain G1 in Proteus serogroup O3
Zygmunt Sidorczyk¹, Krystyna...
... types based on manual examina-
tion of 50 fluency edit misclassifications and 50
factual edit misclassifications.
leads to a small decrease in classification accu-
racy, namely 86.68% instead of 87.14% ... have better understanding of errors made by
the classifier, 50 fluency edit misclassifications
and 50 factual edit misclassifications are ran-
domly selected and manually examined. The er-
rors are ... to recognize a fluency edit.
3 modify adjectives or adverbs that do not change
the meaning such as “entirely” and “various”.
Factual edit misclassifications: the large majority,
35 instances (70%),...
... POS N-grams to generate test data for
their selectional preferences model, but not to
infer preferences. Zhou et al. (2011) identified
selectional preferences of one word for another
Reference ... 1999: Verb-object; Verb-subject; none; target, relative, and relation in a parsed corpus (parsed BNC); none; EM-based clustering
Ritter, 2010: Verb-subject; Verb-object; Subject-verb-object ...
Schmid,
H. 2008. Combining EM Training and the MDL
Principle for an Automatic Verb Classification
Incorporating Selectional Preferences. In
Proceedings of the 46th Annual Meeting of the
Association...