Browsing by Subject "constraint grammar"


Now showing items 1-3 of 3
  • Yli-Jyrä, Anssi Mikael (Northern European Association for Language Technology, 2011)
    NEALT Proceedings Series
    The paper reconceptualizes Constraint Grammar as a framework where the rules refine the compact representations of local ambiguity while the rule conditions are matched against a string of feature vectors that summarize the compact representations. Both views of the ambiguity are processed with pure finite-state operations. The compact representations are mapped to feature vectors with the aid of a rational power series. This magical interconnection is no less pure than the prevalent interpretation that requires that the reading set provided by a lexical transducer be magically linearized into a marked concatenation of readings given to pure transducers. The current approach has several practical benefits, including an inward-deterministic way to compute, represent and maintain all the applications of the rules in the sentence.
  • Yli-Jyrä, Anssi Mikael (The Association for Computational Linguistics, 2011)
    This paper describes a non-conventional method for compiling (phonological or morpho-syntactic) context restriction (CR) constraints into non-deterministic automata in finite-state tools and surface parsing systems. The method reduces any CR to a simple one that constrains the occurrences of the empty string and represents right contexts with co-deterministic states. In cases where a fully deterministic representation would be exponentially larger, this kind of inward determinism in contexts can bring benefits over various De Morgan approaches where full determinization is necessary. In the method, an accepted word gets a unique path that is a projection of a ladder-shaped structure in the context recognizer. This projection is computed in time that is polynomial in the number of context states. However, it may be difficult to take advantage of the method in a finite-state library that coerces intermediate results into canonical automata and whose intersection operation assumes deterministic automata.
  • Yli-Jyrä, Anssi (Linköping University Electronic Press, 2019)
    NEALT Proceedings Series
    Deep neural networks (DNN) and linguistic rules currently sit at opposite ends of the spectrum of NLP technologies. Until recently, it has not been known how to combine these technologies most effectively, and they have therefore been studied by largely disjoint research communities. In this presentation, I first recall that both Constraint Grammar (CG) and vanilla RNNs have finite-state properties. Then I relate CG to Google’s Transformer architecture (with two kinds of attention) and argue that there are significant similarities between these two seemingly unrelated architectures.
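The first abstract's core idea, rules that refine per-token sets of readings while rule conditions consult summaries of the neighbouring sets, can be illustrated with a toy Constraint Grammar step. This is a minimal sketch, not the paper's finite-state formulation: the rule, tag names, and the `summarize` helper are all hypothetical illustrations.

```python
# Toy Constraint Grammar disambiguation step (illustrative sketch only).
# Each token carries a set of readings (local ambiguity); a rule condition
# inspects a feature-vector-like summary of a neighbouring set and then
# refines the current token's set.

def summarize(readings):
    """Stand-in for a feature vector: which tags occur in the reading set."""
    return frozenset(readings)

def remove_if_prev(tag_to_remove, required_prev_tag, sentence):
    """Apply  REMOVE tag_to_remove IF (-1 required_prev_tag),
    never deleting a token's last remaining reading."""
    out = []
    for i, readings in enumerate(sentence):
        if (i > 0
                and required_prev_tag in summarize(sentence[i - 1])
                and tag_to_remove in readings
                and len(readings) > 1):
            readings = readings - {tag_to_remove}
        out.append(readings)
    return out

# An ambiguous noun/verb token after an unambiguous determiner:
sentence = [{"DET"}, {"N", "V"}]
print(remove_if_prev("V", "DET", sentence))  # [{'DET'}, {'N'}]
```

In the paper the refinement and the condition matching are both carried out with finite-state operations over compact representations; here they are spelled out directly over Python sets for readability.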
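The second abstract concerns compiling context restriction constraints of the form "x may occur only in the context a _ b". The paper's contribution is how to compile such constraints into compact non-deterministic automata; as a much simpler point of reference, the constraint itself can be stated as a direct check over strings. This sketch only defines the CR semantics for single-symbol contexts and says nothing about the compilation method.

```python
def satisfies_cr(word, x, left, right):
    """Check the context restriction  x => left _ right  on a string:
    every occurrence of x must be immediately preceded by `left`
    and immediately followed by `right`. Vacuously true if x is absent."""
    for i, sym in enumerate(word):
        if sym == x:
            if i == 0 or word[i - 1] != left:
                return False
            if i + 1 >= len(word) or word[i + 1] != right:
                return False
    return True

print(satisfies_cr("axb", "x", "a", "b"))  # True: x occurs in a _ b
print(satisfies_cr("xb", "x", "a", "b"))   # False: missing left context
```

In a finite-state setting the same constraint denotes a regular language, and the interesting question, addressed by the paper, is how large the compiled automaton must be and how much determinization it requires.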