... 0.04.
For the two parses of the full sentences containing the NPs in figure 1 and figure 2, we sum over 7 POSParents and get a value of 0.27 for parse 1 and 0.11 for parse 2. The lower value for parse
2 ... 3 April 2009.
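The per-parse summation described above can be sketched as follows; the individual per-POSParent values below are illustrative stand-ins chosen to reproduce the quoted totals, not the paper's actual per-event scores:

```python
# Hypothetical sketch: each candidate parse gets a score by summing the
# bitext-projection feature values of its POSParent events; the parse with
# the better total is preferred.
def parse_score(posparent_scores):
    """Sum per-event feature values for one candidate parse."""
    return sum(posparent_scores)

# Illustrative values only (7 POSParents per parse, as in the text).
parse1 = [0.05, 0.04, 0.03, 0.06, 0.02, 0.04, 0.03]
parse2 = [0.02, 0.01, 0.02, 0.02, 0.01, 0.02, 0.01]

print(round(parse_score(parse1), 2))  # 0.27
print(round(parse_score(parse2), 2))  # 0.11
```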
© 2009 Association for Computational Linguistics
Rich bitext projection features for parse reranking
Alexander Fraser Renjing Wang
Institute for...
... its
usefulness for reranking.
We performed experiments on parse reranking using a neural network based statistical parser as both
the probabilistic model and the source of the list
of candidate parses. ... model of parsing. For
this we use a statistical parser which has previously
been shown to achieve state-of-the-art performance,
namely that proposed in (Henderson, 2003). This
parse...
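A reranking setup of this general shape can be sketched as follows, assuming the base parser supplies a candidate list with log probabilities and a linear model over additional features rescores them; the feature name and weights here are hypothetical illustrations, not the parser's actual parameters:

```python
# Minimal reranking sketch: combine the base model's log probability with a
# weighted sum of extra features and return the index of the best candidate.
def rerank(candidates, weights, base_weight=1.0):
    """candidates: list of (log_prob, feature_dict); returns best index."""
    def score(cand):
        log_prob, feats = cand
        return base_weight * log_prob + sum(
            weights.get(f, 0.0) * v for f, v in feats.items())
    return max(range(len(candidates)), key=lambda i: score(candidates[i]))

# Toy example: the second candidate has higher base probability, but the
# first wins after the (hypothetical) projection feature is added in.
cands = [(-10.0, {"proj": 0.27}), (-9.8, {"proj": 0.11})]
print(rerank(cands, {"proj": 5.0}))  # 0
```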
... pairs are further parsed by the Stanford parser (Klein and Manning, 2003) on both the English and Chinese sides. We
manually annotate the sub-tree alignment for the
automatically parsed tree pairs ... sub-tree alignment along with some
plain features. Our study reveals that the structural features embedded in a bilingual parse tree pair are very effective for sub-tree alignment...
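One common structural feature for sub-tree alignment can be sketched as follows; this span-consistency check is an illustrative example of the kind of feature meant, not necessarily the exact feature set used in the study above:

```python
# Hedged sketch: given word alignments between the two sides, test whether
# every link leaving a source sub-tree's span lands inside the candidate
# target sub-tree's span (a standard consistency feature).
def span_consistent(src_span, tgt_span, links):
    """links: set of (src_idx, tgt_idx) word-alignment pairs; spans inclusive."""
    s_lo, s_hi = src_span
    t_lo, t_hi = tgt_span
    inside = [(s, t) for s, t in links if s_lo <= s <= s_hi]
    return all(t_lo <= t <= t_hi for _, t in inside)

links = {(0, 1), (1, 0), (2, 3)}
print(span_consistent((0, 1), (0, 1), links))  # True
print(span_consistent((0, 2), (0, 1), links))  # False: link (2, 3) escapes
```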
... linguistic features are located within five tokens. For
comparison, we exploit the two groups of non-local syntax parser-based features; we use the Collins parser and extract this type of feature ... with
word features and iteratively add them one by one until we achieve the best performance. Table 1 shows the empirical results of local features, syntactic parser-based features,...
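The iterative add-one-feature-at-a-time procedure described above amounts to greedy forward selection, which can be sketched as follows; `evaluate` is a hypothetical stand-in for training and scoring a model with a given feature set:

```python
# Greedy forward feature selection: start from the base features and
# repeatedly add the candidate group that most improves the held-out score,
# stopping when no candidate helps.
def greedy_select(base, candidates, evaluate):
    chosen, best = list(base), evaluate(base)
    remaining = list(candidates)
    while remaining:
        scored = [(evaluate(chosen + [c]), c) for c in remaining]
        top_score, top_feat = max(scored)
        if top_score <= best:
            break  # no remaining group improves performance
        chosen.append(top_feat)
        remaining.remove(top_feat)
        best = top_score
    return chosen, best

# Toy evaluator: pretend each feature group has a fixed marginal gain.
gains = {"word": 0.70, "chunk": 0.05, "parse": 0.03, "noise": -0.02}
ev = lambda feats: sum(gains[f] for f in feats)
sel, _ = greedy_select(["word"], ["chunk", "parse", "noise"], ev)
print(sel)  # ['word', 'chunk', 'parse']
```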
... and all words, respectively.
Features
For simplicity, in the current work we used only two sets of features: word-pair and tag-pair indicator features, which are a subset of the features used by other researchers ... unlabeled data resource. Our goal is to obtain better performance than a purely supervised approach without unreasonable computational effort. Unfortunately, although significant...
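The two indicator-feature sets mentioned above can be sketched as follows; the feature-string representation and the example word/tag pair are assumptions for illustration only:

```python
# Minimal sketch: for an aligned pair, fire one indicator feature for the
# (source word, target word) pair and one for the (source tag, target tag)
# pair, each with value 1.0.
def indicator_features(src_word, tgt_word, src_tag, tgt_tag):
    return {
        f"wp={src_word}|{tgt_word}": 1.0,  # word-pair indicator
        f"tp={src_tag}|{tgt_tag}": 1.0,    # tag-pair indicator
    }

feats = indicator_features("house", "maison", "NN", "NC")
print(sorted(feats))  # ['tp=NN|NC', 'wp=house|maison']
```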
... into ontological domains for modelling some forms of metaphors.
• Types are not sufficiently 'constrained' to account for the constraints holding, for each predicate, on the ... in particular for the Telic role,
(explored e.g. in the EuroWordNet project, the European WordNet). Qualia are well-designed and useful for nouns, but look more artificial for other...
... Association for Computational Linguistics:shortpapers, pages 558–563,
Portland, Oregon, June 19-24, 2011.
© 2011 Association for Computational Linguistics
A Probabilistic Modeling Framework for Lexical ... (e.g.
parsing) aiming at high performance.
Nevertheless, simple entailment methods, performing at the lexical level, provide strong baselines which most systems did not outperform...
... arbitrary local features in a log-linear model. Our base features include: a feature for each word type, a set of features that check whether the word contains digits or hyphens, suffix features up ... weight.
We compute features for
membership in the top N items by this metric, for
N ∈ {1000, 2000, 3000, 5000, 10000, 20000}.
TAGDICT: Traditional tag dictionary. We add
f...
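The base feature templates listed above can be sketched as follows; the maximum suffix length and the feature-name strings are assumptions for illustration, not the system's exact configuration:

```python
# Hedged sketch of word-level feature templates: word identity, checks for
# digits and hyphens, and suffixes up to a fixed length.
MAX_SUFFIX = 3  # assumed cutoff for illustration

def base_features(word):
    feats = [f"word={word}"]               # one feature per word type
    if any(ch.isdigit() for ch in word):
        feats.append("has-digit")
    if "-" in word:
        feats.append("has-hyphen")
    for k in range(1, min(MAX_SUFFIX, len(word)) + 1):
        feats.append(f"suffix{k}={word[-k:]}")  # suffixes of length 1..k
    return feats

print(base_features("B-52"))
# ['word=B-52', 'has-digit', 'has-hyphen', 'suffix1=2', 'suffix2=52', 'suffix3=-52']
```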
... features which have implied opinions
and normal features which have no opinions, e.g.,
“voice quality” and “battery life.” For normal
features, people can often have different opinions.
For ... 1541 4308 2306
# Noun features 326 38 173 222
Table 1. Experimental datasets
An issue in judging whether noun features imply opinions is that such judgments can be subjective. So for the gold...
... the voting scores of all features.
First of all, we must calculate the voting ratio of
each category for all features. Since elements with
a high TF-IDF value in projections of a feature
must ... readily available and plentiful.
Therefore, this paper advocates using a
bootstrapping framework and a feature projection
technique with just unlabeled data for text
categorization. T...
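The voting-ratio step described above can be sketched as follows, under the assumption that each category has already accumulated a summed voting score from the high-TF-IDF elements of a feature's projection; the category names and scores are illustrative:

```python
# Sketch: normalize each category's summed voting score by the total, so the
# ratios across categories sum to 1 and the top-ratio category wins the vote.
def voting_ratios(votes):
    """votes: dict mapping category -> summed voting score."""
    total = sum(votes.values())
    return {cat: v / total for cat, v in votes.items()}

ratios = voting_ratios({"sports": 6.0, "politics": 3.0, "tech": 1.0})
print(ratios["sports"])  # 0.6
```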