... 896–903,
Prague, Czech Republic, June 2007.
© 2007 Association for Computational Linguistics
A Sequencing Model for Situation Entity Classification
Alexis Palmer, Elias Ponvert, Jason Baldridge, ... aspectual classification.
5 Models
We consider two types of models for the automatic classification of situation entities. The first, a labeling model, utilizes a maximum entropy...
... present a novel model of transliteration mining defined as a mixture of a transliteration model and a non-transliteration model. The transliteration model is a joint source channel model (Li et ... labelled information for training. Our system extracts transliteration pairs in an unsupervised fashion. It is also able to utilize labelled information if available, obtaining improv...
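The mixture idea in the excerpt above can be illustrated with a toy two-component EM fit. This is only a sketch under strong assumptions: each candidate pair is reduced to a single numeric score, and both mixture components are modeled as 1-D Gaussians (the excerpt's actual transliteration component is a joint source channel model, which is considerably richer). The function name and data are hypothetical.

```python
import math

def em_two_component(scores, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM and return, for
    each score, the posterior probability of the first component (standing
    in here for the 'transliteration' model of the mixture)."""
    # Crude initialisation: put the components at the extremes of the data.
    mu1, mu2 = min(scores), max(scores)
    var1 = var2 = (max(scores) - min(scores)) ** 2 / 4 + 1e-6
    w = 0.5  # prior weight of component 1

    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    for _ in range(iters):
        # E-step: responsibility of component 1 for each score.
        r = [w * pdf(x, mu1, var1) /
             (w * pdf(x, mu1, var1) + (1 - w) * pdf(x, mu2, var2) + 1e-12)
             for x in scores]
        # M-step: re-estimate mixture weight, means, and variances.
        n1 = sum(r)
        n2 = len(scores) - n1
        w = n1 / len(scores)
        mu1 = sum(ri * x for ri, x in zip(r, scores)) / (n1 + 1e-12)
        mu2 = sum((1 - ri) * x for ri, x in zip(r, scores)) / (n2 + 1e-12)
        var1 = sum(ri * (x - mu1) ** 2 for ri, x in zip(r, scores)) / (n1 + 1e-12) + 1e-6
        var2 = sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, scores)) / (n2 + 1e-12) + 1e-6
    return r

# Hypothetical scores: low values cluster like transliterations,
# high values like non-transliterations.
scores = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
resp = em_two_component(scores)
```

Pairs with responsibility near 1 would be extracted as transliterations; adding labelled pairs, as the excerpt mentions, amounts to clamping their responsibilities during the E-step.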
... University) for excellent technical assistance. This work was supported by grants from the Swedish Foundation for Health Care Sciences and Allergy Research, the Swedish Research Council for Medicine ... the fate of an allergen upon inhalation, we addressed this issue for a major dust mite allergen, Der p 2. First, a model for Der p 2-sensitization was established in C57BL/6J...
... different qualities. For example, both topic “css test” and “test suite” are bi-gram matches for the query “css test suite”; however, the former might be more informative. To model that, we use ... following sub-sections, we will detail two sub-models: the expert matching model P(c|e) and the evidence matching model P(e|q).
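Under this two-sub-model decomposition, a candidate's relevance to a query marginalizes over the evidence: P(c|q) = Σ_e P(c|e) · P(e|q). A minimal sketch of that marginalization, assuming the two sub-model distributions are simply given as dictionaries (the function name and the toy probabilities below are illustrative, not from the excerpt):

```python
def expert_score(p_c_given_e, p_e_given_q):
    """Marginalize over evidence e: P(c|q) = sum_e P(c|e) * P(e|q).

    p_c_given_e: {evidence: {candidate: prob}}  -- expert matching model
    p_e_given_q: {evidence: prob}               -- evidence matching model
    """
    totals = {}
    for e, p_e in p_e_given_q.items():
        for c, p_c in p_c_given_e.get(e, {}).items():
            totals[c] = totals.get(c, 0.0) + p_c * p_e
    return totals

# Toy example: two pieces of evidence, two candidate experts.
p_e_given_q = {"doc1": 0.7, "doc2": 0.3}
p_c_given_e = {
    "doc1": {"alice": 0.8, "bob": 0.2},
    "doc2": {"alice": 0.1, "bob": 0.9},
}
ranking = expert_score(p_c_given_e, p_e_given_q)
# alice accumulates 0.7*0.8 + 0.3*0.1; bob accumulates 0.7*0.2 + 0.3*0.9
```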
3.2 Expert Matching Model
We expand the evidence...
... use a nearly identical model to the FLAT model, but instead of having a single m variable, we have three: one for IE, one for Austronesian and one for “all languages.” For a general tree, we ... in Figure 3 (on a log-scale for the x-axis). The two best-performing models are the two hierarchical models. The flat model does significantly worse and the random model does terribly...
... used, and subsection 2.2 describes the model.
2.1 The Formalism
In order to handle the non-linear phenomenon of Arabic, our model adopts the two-level formalism presented by (Pulman and Hepple, ... first to appear on the left of LEx. In our morphographemic model, we add a similar formalism for expressing error rules (3).
(3) ERROR FORMALISM
ErrSurf =~ Surf
{ PLC- PRC } whe...
... of the Association for Computational Linguistics, pages 1048–1057,
Uppsala, Sweden, 11-16 July 2010.
© 2010 Association for Computational Linguistics
A Statistical Model for Lost Language Decipherment
Benjamin ... translations, measured with respect to complete word-forms and morphemes, for the HMM-based substitution cipher baseline, our complete model, and our model without...
... of the Association for Computational Linguistics, pages 885–894,
Portland, Oregon, June 19-24, 2011.
© 2011 Association for Computational Linguistics
A Discriminative Model for Joint Morphological ... Experimental Results
We compare the performance of the pipeline model (§4) and the joint model (§3) on morphological disambiguation and unlabeled dependency parsing.
Model Tagge...
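The contrast between the two decoding regimes compared above can be sketched abstractly: a pipeline commits to the best tag sequence before parsing, while a joint decoder searches over tag-and-parse combinations together, so a slightly worse tagging can win if it enables a much better parse. The scoring functions and numbers below are entirely hypothetical, chosen only to exhibit that behavior:

```python
from itertools import product

# Hypothetical tagger scores over a two-word sentence.
def tag_score(tags):
    # Pretend the tagger slightly prefers ("N", "N").
    return {("N", "N"): 0.4, ("N", "V"): 0.35,
            ("V", "N"): 0.15, ("V", "V"): 0.1}[tags]

# Hypothetical parser score conditioned on the chosen tags.
def parse_score(tags):
    # Pretend the parser does much better when w2 is tagged as a verb.
    return 0.9 if tags[1] == "V" else 0.2

all_tags = list(product(["N", "V"], repeat=2))

# Pipeline: commit to the best tag sequence first, then parse.
pipeline_tags = max(all_tags, key=tag_score)
pipeline_total = tag_score(pipeline_tags) * parse_score(pipeline_tags)

# Joint: maximize the combined score over tags and parse together.
joint_tags = max(all_tags, key=lambda t: tag_score(t) * parse_score(t))
joint_total = tag_score(joint_tags) * parse_score(joint_tags)
```

Here the pipeline locks in the tagger's favorite and pays for it downstream, while joint decoding trades a little tagging score for a much better combined analysis, which is the intuition behind comparing the two systems.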
... performance on both by sharing information between them. We present a joint model for these two subtasks: it is joint not only in that it performs both tasks simultaneously, sharing information, ... set sizes for different languages. We use three different lexicons for each language: one for training (LTrain), one for development (LDev), and one for testing (LTest). Th...
... of identifying the correct team for every player. The EEA model is not designed for this and performed poorly. Our model can do better, since it makes use of context information and features, and ... result, the model discovers parts of names—Mrs., Michelle, Obama—while simultaneously performing coreference resolution for named entity mentions. In the sports news dataset,...