... all models, and for the LM
we use an interpolated Kneser-Ney 5-gram model.
For GIZA++, we use the standard training reg-
imen up to Model 4, and combine alignments
with grow-diag-final-and. For ... noted that forcing align-
ments smaller than the model suggests is only used
for generating alignments for use in heuristic extrac-
tion, and does not affect the training process....
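The grow-diag-final-and symmetrization heuristic mentioned above can be sketched as follows. This is a simplified reimplementation of the standard pseudocode, not the tooling used in the paper; representing each directional alignment as a set of (source, target) index pairs is an assumption made for illustration.

```python
def grow_diag_final_and(src_len, tgt_len, e2f, f2e):
    """Symmetrize two directional word alignments, each a set of
    (source index, target index) pairs, with grow-diag-final-and."""
    union = e2f | f2e
    alignment = set(e2f & f2e)          # start from the intersection
    # neighboring points include the diagonals ("diag")
    neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1),
                 (-1, -1), (-1, 1), (1, -1), (1, 1)]

    def src_aligned():
        return {s for s, _ in alignment}

    def tgt_aligned():
        return {t for _, t in alignment}

    # grow: repeatedly add union points adjacent to current points,
    # as long as one of the two words is still unaligned
    added = True
    while added:
        added = False
        for s, t in sorted(alignment):
            for ds, dt in neighbors:
                ns, nt = s + ds, t + dt
                if not (0 <= ns < src_len and 0 <= nt < tgt_len):
                    continue
                if ((ns, nt) in union and (ns, nt) not in alignment
                        and (ns not in src_aligned() or nt not in tgt_aligned())):
                    alignment.add((ns, nt))
                    added = True

    # final-and: add a directional point only if BOTH its words
    # are still unaligned (the "and" variant of the final step)
    for directional in (e2f, f2e):
        for s, t in sorted(directional):
            if s not in src_aligned() and t not in tgt_aligned():
                alignment.add((s, t))
    return alignment
```

In practice this combination is produced directly by standard phrase-based pipelines; the sketch only makes the intersection/grow/final logic explicit.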
... ordinary entity-mention model with heuris-
tic first-order features.
6 Conclusions
This paper presented an expressive entity-mention
model for coreference resolution using Inductive
Logic Programming. ... co-refer with “she”.
The entity-mention model based on Eq. (2) performs coreference resolution at the entity level. For
simplicity, the framework considere...
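Since Eq. (2) is not reproduced in this excerpt, the following is only a generic sketch of what distinguishes an entity-mention model from a mention-pair model: each new mention is compared against whole partially built entities rather than individual antecedent mentions. The greedy control flow and the `entity_score` function are hypothetical stand-ins, not the paper's learned ILP model.

```python
def resolve(mentions, entity_score, threshold=0.5):
    """Greedy left-to-right entity-level resolution: each mention is
    scored against every partial entity (a list of earlier mentions)
    and joins the best-scoring one, or starts a new entity."""
    entities = []
    for mention in mentions:
        scored = [(entity_score(entity, mention), i)
                  for i, entity in enumerate(entities)]
        if scored and max(scored)[0] >= threshold:
            entities[max(scored)[1]].append(mention)   # link to best entity
        else:
            entities.append([mention])                 # start a new entity
    return entities
```

The point of the entity-level formulation is that `entity_score` can consult properties of the entire cluster (e.g., consistent gender or number), which a pairwise antecedent model cannot express directly.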
... using it as a platform
for research including the use of new information
sources (which can be easily incorporated into the
coreference resolution process as features), different
resolution algorithms ... decision trees
for coreference resolution. In Proc. IJCAI 1995.
Morton, T. S. (2000). Coreference for NLP applications. In
Proc. ACL 2000.
Moschitti, A. (2006). Making tree...
... entity. Coreference resolution on text datasets is well-studied (e.g., Cardie and Wagstaff, 1999). This
prior work provides the departure point for our in-
vestigation of coreference resolution ... intermittently. Our model treats the
relevance of the non-verbal modality as a hidden
variable, learned jointly with the class labels. Ap-
plied to coreference resolution, thi...
... Experimental Setup
We performed experiments with two parsing mod-
els, the Collins (1999) generative model number
2 and the Charniak and Johnson (2005) reranking
model. For the first we used a reimplementation
(?). ... higher. Figure 1 demonstrates
these phenomena for two leading models, Collins
(1999) model 2, a generative model, and Charniak
and Johnson (2005), a reranking model...
... degree d. For exam-
Proceedings of the ACL 2007 Demo and Poster Sessions, pages 65–68, Prague, June 2007. © 2007 Association for Computational Linguistics
An Approximate Approach for Training ... the training and testing time costs for a polynomial kernel SVM are far higher than for a linear kernel. For example, it took one day to train on the CoNLL-2000 task with a polynomial kernel...
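One way to see why low-degree polynomial kernels admit fast approximations is that such a kernel has an explicit finite-dimensional feature map, so a linear learner over the mapped features reproduces the kernel inner product exactly. The sketch below verifies this identity for degree 2; it illustrates the general idea only and is not the approximate training method proposed in the paper.

```python
import math

def poly_kernel(x, z, c=1.0):
    """Degree-2 polynomial kernel (x . z + c)^2."""
    return (sum(a * b for a, b in zip(x, z)) + c) ** 2

def explicit_map_d2(x, c=1.0):
    """Explicit feature map phi with phi(x) . phi(z) == (x . z + c)^2.
    Components: constant, scaled linear terms, squares, cross products."""
    n = len(x)
    feats = [c]                                        # contributes c^2
    feats += [math.sqrt(2 * c) * xi for xi in x]       # contributes 2c * (x . z)
    feats += [xi * xi for xi in x]                     # squared terms
    feats += [math.sqrt(2) * x[i] * x[j]               # pairwise cross terms
              for i in range(n) for j in range(i + 1, n)]
    return feats
```

Training a linear SVM on `explicit_map_d2` features gives exactly the degree-2 kernel machine with linear-time prediction; for higher degrees the map grows combinatorially in the input dimension, which is what motivates approximate approaches.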
... of
expectation-maximization algorithms. The DOP
model has also been tested as a model for human
sentence processing (Bod 2000d).
This paper presents ongoing work on
DOP models for Lexical-Functional Grammar
representations, ... Homecentre
An Improved Parser for Data-Oriented Lexical-Functional Analysis
Rens Bod
Informatics Research Institute, University of Leeds, Leeds LS2 9JT, UK,...
... higher-level aspects such as analysis models and methodological concerns. Finally, whereas other platforms usually enforce the use of a dedicated document format, LinguaStream is able to process ... (tag set for a given language),
it is impossible to fundamentally modify their be-
haviour. Others, on the contrary, provide an anal-
ysis model, that is to say, firstly, a formalism
for repre...
... tree description and/or of semantic formulas. The XMG formalism furthermore supports the sharing of identifiers across dimensions, hence allowing for a straightforward encoding of the syntax/semantics ... presented a system, XMG, for producing broad-coverage grammars, a system that offers an expressive description language along with an efficient compiler taking advantage of lo...
Figure 2: Decision tree for annotation (node question: "... work as basis of or support for own work?")
Our materials consist of 48 computational lin-
guistics papers (22 for Study I, 26 for Study II),
taken from ... to be most reproducible in Study II,
performed almost as well as trained annotators;
Group 1, which performed worst, also happened
to have the paper with the lowest reproducibil-
ity...