
VENSES: Venice Semantic Evaluation System

Intro

Venses is a system for recognizing textual entailment developed by Rodolfo Delmonte. It can be regarded as a wrapper on top of two subsystems: a reduced version of GETARUN, which produces semantics from complete linguistic representations, and a partial, robust constituency-based parser.

Online demo

Try out our Venses demo online!

Installing

Venses is an application built with XPCE, the graphics toolkit that comes with SWI Prolog, so it requires SWI Prolog to be installed. The version for Mac OS X is included in the download, and the same applies to the version for MS Vista. We also include the application for Linux Ubuntu; SWI Prolog can be downloaded directly from the Ubuntu website, or built from the source code available on the SWI Prolog website.

Using Venses under Vista requires the Prolog directory to be installed under User and the application file to be placed under /bin. The files used for running the system, on the contrary, can be anywhere.

Click on the appropriate link to download a version for your system:

RTE files

Venses uses a simple format for RTE files. The Text and the Hypothesis must be in the same file, in that order: first the Text, then two newline characters, then the Hypothesis, followed again by two newline characters.
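As an illustration, a minimal Python sketch that writes a T/H pair in this format could look as follows; the helper function, the file name and the example sentences are hypothetical and not part of the Venses distribution.

```python
def write_rte_pair(path, text, hypothesis):
    """Write a Text/Hypothesis pair in the simple format Venses expects:
    Text, two newline characters, Hypothesis, two newline characters."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(text + "\n\n" + hypothesis + "\n\n")

# Hypothetical example pair
write_rte_pair(
    "pair_0001.txt",
    "The cat sat on the mat in the kitchen.",
    "A cat was in the kitchen.",
)
```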

We include a folder containing the 1000 files distributed with RTE4. Inside the same folder we placed another folder where the system writes its analysis output, one file per sentence. Venses also produces an Evaluation file which contains all the judgements, with summary statistics at the end.

Click on the following link to download the RTE development set:

Video presentations

There is a series of video lectures about VENSES.

How Venses Works

The system for semantic evaluation, called VENSES, is organized as a pipeline of two subsystems: the first is a reduced version of GETARUN, our system for Text Understanding [see references]; the second is the semantic evaluator, which was previously created for Summary and Question evaluation [see references] and has now been thoroughly revised for the new, more comprehensive RTE task.

The reduced GETARUN is composed of the sequence of submodules common to Information Extraction systems: a tokenizer, a multiword and NE recognition module, and a PoS tagger based on finite-state automata, followed by a multilayered cascaded RTN-based parser equipped with an interpretation module that uses subcategorization information and semantic role processing. Finally, the system has a pronominal binding module [see references] that works at text/hypothesis level separately for lexical personal, possessive and reflexive pronouns, which are substituted by the heads of their antecedents, if available. The output of the system is a flat list of head-dependent structures (HDSs) with Grammatical Relation (GR) and Semantic Role (SR) labels (for similar approaches see [9,10]). A notable addition to the usual formalism is a distinguished Negation relation; we also mark modals and progressive mood.
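Purely as an illustration of what a flat head-dependent output might look like, the hypothesis "A cat was in the kitchen" could be encoded as tuples of head, dependent, GR and SR; the field layout and the label names below are assumptions, not Venses' actual notation.

```python
# Hypothetical encoding of a flat list of head-dependent structures (HDSs):
# (head, dependent, grammatical_relation, semantic_role).
# The labels are illustrative only; they are not Venses' actual inventory.
hds_hypothesis = [
    ("be",      "cat",     "subj", "theme"),     # core argument
    ("be",      "kitchen", "obl",  "location"),  # spatial adjunct
    ("kitchen", "the",     "spec", None),        # determiner, no semantic role
]
```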

The evaluation system uses a cost model with rewards/penalties for T/H pairs, where textual entailment is interpreted in terms of semantic similarity: the closer the T/H pair is in semantic terms, the more probable the entailment. Rewards in terms of scores are assigned for each "similar" semantic element; penalties, on the contrary, can either be expressed in terms of scores or determine a local failure and a consequent FALSE decision - more on scoring below.
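As a rough sketch of the reward/penalty idea (the weights, the decision threshold and the failure condition below are invented for illustration and are not the system's actual cost model):

```python
def score_pair(similar_elements, penalty_elements, fail_on=("negation_mismatch",)):
    """Toy reward/penalty scorer: rewards accumulate for similar semantic
    elements, penalties subtract from the score, and certain mismatches
    cause an immediate local failure, i.e. a FALSE decision."""
    score = 0.0
    for element, weight in similar_elements:     # e.g. ("subj match", 2.0)
        score += weight
    for element, weight in penalty_elements:     # e.g. ("extra adjunct", 0.5)
        if element in fail_on:
            return "FALSE", score                # local failure overrides scoring
        score -= weight
    return ("TRUE" if score > 0 else "FALSE"), score
```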

The evaluation system accesses the output of GETARUN, which sits in files, and is totally independent of it. It is made up of two main Modules: the first is a sequence of linguistic rule-based submodules; the second is a quantitatively based measurement of input structures. The latter is basically a count of heads, dependents, GRs and SRs, scoring only elements that are similar in the T/H pair. Similarity may range from identical linguistic items to synonymous or just morphologically derivable ones. GRs and SRs are scored higher when they belong to the subset of core relations and roles, i.e. obligatory arguments, rather than adjuncts. Both Modules go through General Consistency checks targeted at high-level semantic attributes such as the presence of modality, negation, and opacity operators, the latter expressed either by discourse markers of conditionality or by a secondary-level relation intervening between the main predicate and a governing higher predicate belonging to the class of non-factual verbs. Two other general consistency checks regard temporal and spatial location modifiers, which must be identical or entailed in one another, if present - but see below.
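To make the similarity scale and the core/adjunct weighting concrete, here is a minimal sketch; the numeric weights, the toy synonym list and the shared-stem test for morphological derivability are all assumptions for illustration only.

```python
# Tiny stand-in lexicon; a real system would consult a lexical resource.
SYNONYMS = {("purchase", "buy"), ("kill", "murder")}
CORE_RELATIONS = {"subj", "obj", "iobj"}          # assumed inventory of core GRs

def lexical_similarity(h_word: str, t_word: str) -> float:
    """Illustrative three-level similarity scale: identical items score
    highest, then synonyms, then morphologically derivable forms."""
    if h_word == t_word:
        return 1.0
    if (h_word, t_word) in SYNONYMS or (t_word, h_word) in SYNONYMS:
        return 0.8
    # Crude stand-in for "morphologically derivable": a shared four-letter stem.
    if min(len(h_word), len(t_word)) >= 4 and h_word[:4] == t_word[:4]:
        return 0.5
    return 0.0

def relation_weight(gr: str) -> float:
    """Core relations and roles (obligatory arguments) count more than adjuncts."""
    return 2.0 if gr in CORE_RELATIONS else 1.0
```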

Linguistic rule-based submodules are organized into a sequence of syntactic-semantic transformation rules, ranging from rules containing axiomatic-like paraphrase HDSs, which are ranked higher, to rules stating conditions for similarity according to a scale of argumentality (more below), which are ranked lower. All rules address HDSs, GRs and SRs. Both Modules strive for True assessments; however, Submodule 1 is followed by Submodule 2, which can output True or False according to general consistency or scoring. Modifying the scoring function may thus change the final result dramatically: if relaxed, it may contribute more True decisions, so it needs fine tuning.
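The rule formalism itself is not reproduced on this page; purely as an illustration of the idea, an axiomatic-like paraphrase rule linking a Text HDS pattern to a Hypothesis HDS pattern could be sketched as follows. The rule format, the labels and the reward value are all assumptions, not the formalism Venses actually uses.

```python
# Hypothetical paraphrase rule: a pattern over a Text HDS that licenses a
# corresponding Hypothesis HDS.  Invented for illustration only.
PARAPHRASE_RULES = [
    {
        "name": "X purchases Y => X owns Y",
        "text_pattern": ("purchase", "subj", "obj"),
        "hypothesis_pattern": ("own", "subj", "obj"),
        "reward": 2.0,               # axiomatic-like rules are ranked higher
    },
]

def rule_applies(rule, text_hds, hyp_hds):
    """Check whether a Text head-dependent structure matches the rule's text
    pattern and the Hypothesis structure matches its hypothesis pattern.
    Each HDS is assumed to be a dict with a 'head' and a 'deps' mapping."""
    t_head, *t_grs = rule["text_pattern"]
    h_head, *h_grs = rule["hypothesis_pattern"]
    return (text_hds["head"] == t_head and hyp_hds["head"] == h_head
            and set(t_grs) <= set(text_hds["deps"])
            and set(h_grs) <= set(hyp_hds["deps"]))
```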

Click on the following link for a slide presentation of VENSES:

References

  • Delmonte, R. (2004). Evaluating GETARUNS Parser with GREVAL Test Suite. In Proc. ROMAND - 20th International Conference on Computational Linguistics - COLING (pp. 32-41). University of Geneva.
  • Delmonte, R. (2004). Text Understanding with GETARUNS for Q/A and Summarization. In Proc. ACL 2004 - 2nd Workshop on Text Meaning & Interpretation, Barcelona (pp. 97-104). Columbia University.
  • Delmonte, R. (2002). GETARUN PARSER - A parser equipped with Quantifier Raising and Anaphoric Binding based on LFG. In Proc. LFG2002 Conference, Athens (pp. 130-153). http://cslipublications.stanford.edu/hand/miscpubsonline.html
  • Delmonte, R. (1990). Semantic Parsing with an LFG-based Lexicon and Conceptual Representations. Computers & the Humanities 5-6, pp. 461-488.
  • Delmonte, R. (2007). Computational Linguistic Text Processing - Logical Form, Semantic Interpretation, Discourse Relations and Question Answering. New York: Nova Science Publishers. ISBN: 1-60021-700-1.
  • Delmonte, R. (2008). Computational Linguistic Text Processing - Lexicon, Grammar, Parsing and Anaphora Resolution. New York: Nova Science Publishers. ISBN: 978-1-60456-749-6.