Anti-unification
Anti-unification is the process of constructing a generalization common to two given symbolic expressions. As in unification, several frameworks are distinguished depending on which expressions (also called terms) are allowed, and which expressions are considered equal. If variables representing functions are allowed in an expression, the process is called "higher-order anti-unification", otherwise "first-order anti-unification". If the generalization is required to have an instance literally equal to each input expression, the process is called "syntactical anti-unification", otherwise "E-anti-unification", or "anti-unification modulo theory".
An anti-unification algorithm should compute for given expressions a complete and minimal generalization set, that is, a set covering all generalizations and containing no redundant members, respectively. Depending on the framework, a complete and minimal generalization set may have one, finitely many, or possibly infinitely many members, or may not exist at all;[note 1] it cannot be empty, since a trivial generalization exists in any case. For first-order syntactical anti-unification, Gordon Plotkin[1][2] gave an algorithm that computes a complete and minimal singleton generalization set containing the so-called "least general generalization" (lgg).
Anti-unification should not be confused with dis-unification. The latter means the process of solving systems of inequations, that is, of finding values for the variables such that all given inequations are satisfied.[note 2] This task is quite different from finding generalizations.
Prerequisites
Formally, an anti-unification approach presupposes
- An infinite set V of variables. For higher-order anti-unification, it is convenient to choose V disjoint from the set of lambda-term bound variables.
- A set T of terms such that V ⊆ T. For first-order and higher-order anti-unification, T is usually the set of first-order terms (terms built from variable and function symbols) and lambda terms (terms containing some higher-order variables), respectively.
- An equivalence relation <math>\equiv</math> on <math>T</math>, indicating which terms are considered equal. For higher-order anti-unification, usually <math>t \equiv u</math> if <math>t</math> and <math>u</math> are alpha equivalent. For first-order E-anti-unification, <math>\equiv</math> reflects the background knowledge about certain function symbols; for example, if <math>\oplus</math> is considered commutative, <math>t \equiv u</math> if <math>u</math> results from <math>t</math> by swapping the arguments of <math>\oplus</math> at some (possibly all) occurrences.[note 3] If there is no background knowledge at all, then only literally, or syntactically, identical terms are considered equal.
First-order term
Given a set <math>V</math> of variable symbols, a set <math>C</math> of constant symbols and sets <math>F_n</math> of <math>n</math>-ary function symbols, also called operator symbols, for each natural number <math>n \geq 1</math>, the set of (unsorted first-order) terms <math>T</math> is recursively defined to be the smallest set with the following properties:[3]
- every variable symbol is a term: V ⊆ T,
- every constant symbol is a term: C ⊆ T,
- from every n terms t1,...,tn, and every n-ary function symbol f ∈ Fn, a larger term <math>f(t_1,\ldots,t_n)</math> can be built.
For example, if x ∈ V is a variable symbol, 1 ∈ C is a constant symbol, and add ∈ F2 is a binary function symbol, then x ∈ T, 1 ∈ T, and (hence) add(x,1) ∈ T by the first, second, and third term building rule, respectively. The latter term is usually written as x+1, using infix notation and the more common operator symbol + for convenience.
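The recursive definition above can be sketched in Python. The representation is an illustrative assumption, not from the article: a variable is a bare string, and an application f(t1,...,tn) is a tuple whose head is the symbol f, so a constant is a 0-ary application such as `("1",)`.

```python
# Sketch of the recursive term definition, assuming (for illustration) that
# a variable is a bare string and an application f(t1,...,tn) is the tuple
# (f, t1, ..., tn); a constant symbol is a 0-ary application such as ("1",).

def is_var(t):
    """Rule 1: every variable symbol is a term (here: a plain string)."""
    return isinstance(t, str)

def mk(f, *args):
    """Rules 2 and 3: build the term f(args...); no args gives a constant."""
    return (f,) + args

# x in V, 1 in C, add in F_2 give the term add(x, 1), usually written x+1:
term = mk("add", "x", mk("1"))
```

The same tuple convention is reused in the later sketches for substitution and generalization.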
Higher-order term
Substitution
A substitution is a mapping <math>\sigma: V \longrightarrow T</math> from variables to terms; the notation <math>\{x_1 \mapsto t_1, \ldots, x_k \mapsto t_k \}</math> refers to a substitution mapping each variable <math>x_i</math> to the term <math>t_i</math>, for <math>i=1,\ldots,k</math>, and every other variable to itself. Applying that substitution to a term <math>t</math> is written in postfix notation as <math>t \{x_1 \mapsto t_1, \ldots, x_k \mapsto t_k \}</math>; it means to (simultaneously) replace every occurrence of each variable <math>x_i</math> in the term <math>t</math> by <math>t_i</math>. The result <math>t\sigma</math> of applying a substitution <math>\sigma</math> to a term <math>t</math> is called an instance of that term <math>t</math>. As a first-order example, applying the substitution <math>\{x \mapsto h(a,y), z \mapsto b\}</math> to the term <math>f(x,a,g(z),y)</math> yields <math>f(h(a,y),a,g(b),y)</math>.
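The simultaneous-replacement semantics of substitution application can be sketched over a tuple term representation (an illustrative convention: variables are bare strings, an application f(t1,...,tn) is the tuple (f, t1, ..., tn)):

```python
# Substitution application over a tuple term representation (illustrative
# convention: variables are bare strings, f(t1,...,tn) is (f, t1, ..., tn)).

def apply_subst(t, sigma):
    """Simultaneously replace every occurrence of each variable mapped by
    sigma; every other variable is mapped to itself."""
    if isinstance(t, str):                      # t is a variable
        return sigma.get(t, t)
    head, *args = t
    return (head,) + tuple(apply_subst(a, sigma) for a in args)

# {x -> h(a,y), z -> b} applied to f(x, a, g(z), y):
sigma = {"x": ("h", ("a",), "y"), "z": ("b",)}
instance = apply_subst(("f", "x", ("a",), ("g", "z"), "y"), sigma)
# instance represents f(h(a,y), a, g(b), y)
```

Because the replacement is simultaneous, variables introduced by the substituted terms (such as the y inside h(a,y)) are not themselves rewritten again.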
Generalization, specialization
If a term <math>t</math> has an instance equivalent to a term <math>u</math>, that is, if <math>t \sigma \equiv u</math> for some substitution <math>\sigma</math>, then <math>t</math> is called more general than <math>u</math>, and <math>u</math> is called more special than, or subsumed by, <math>t</math>. For example, <math>x \oplus a</math> is more general than <math>a \oplus b</math> if <math>\oplus</math> is commutative, since then <math>(x \oplus a)\{x \mapsto b\} = b \oplus a \equiv a \oplus b</math>.
If <math>\equiv</math> is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings of each other. For example, <math>f(x_1,a,g(z_1),y_1)</math> is a variant of <math>f(x_2,a,g(z_2),y_2)</math>, since <math>f(x_1,a,g(z_1),y_1) \{ x_1 \mapsto x_2, y_1 \mapsto y_2, z_1 \mapsto z_2\} = f(x_2,a,g(z_2),y_2)</math> and <math>f(x_2,a,g(z_2),y_2) \{x_2 \mapsto x_1, y_2 \mapsto y_1, z_2 \mapsto z_1\} = f(x_1,a,g(z_1),y_1)</math>. However, <math>f(x_1,a,g(z_1),y_1)</math> is not a variant of <math>f(x_2,a,g(x_2),x_2)</math>, since no substitution can transform the latter term into the former one, although <math>\{x_1 \mapsto x_2, z_1 \mapsto x_2, y_1 \mapsto x_2 \}</math> achieves the reverse direction. The latter term is hence properly more special than the former one.
A substitution <math>\sigma</math> is more special than, or subsumed by, a substitution <math>\tau</math> if <math>x \sigma</math> is more special than <math>x \tau</math> for each variable <math>x</math>. For example, <math>\{ x \mapsto f(u), y \mapsto f(f(u)) \}</math> is more special than <math>\{ x \mapsto z, y \mapsto f(z) \}</math>, since <math>f(u)</math> and <math>f(f(u)) </math> are more special than <math>z</math> and <math>f(z)</math>, respectively.
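For syntactic equality, the "more general than" relation of this section can be decided by one-sided matching: t subsumes u iff some substitution sends t to exactly u. A minimal sketch under the same illustrative tuple convention (variables as bare strings, applications as tuples headed by the symbol):

```python
def match(t, u, sigma=None):
    """Try to extend sigma so that applying it to t yields exactly u.
    Returns the matching substitution, or None if t does not subsume u."""
    if sigma is None:
        sigma = {}
    if isinstance(t, str):                      # t is a variable
        if t in sigma:                          # already bound: must agree
            return sigma if sigma[t] == u else None
        sigma[t] = u
        return sigma
    if isinstance(u, str) or t[0] != u[0] or len(t) != len(u):
        return None                             # head symbol or arity differs
    for ti, ui in zip(t[1:], u[1:]):
        if match(ti, ui, sigma) is None:
            return None
    return sigma

# f(x1, a, g(z1), y1) subsumes f(x2, a, g(x2), x2), but not vice versa:
general = ("f", "x1", ("a",), ("g", "z1"), "y1")
special = ("f", "x2", ("a",), ("g", "x2"), "x2")
```

On this pair, `match(general, special)` yields the substitution mapping x1, z1, y1 all to x2, while `match(special, general)` fails because x2 would have to map to two different terms.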
Anti-unification problem, generalization set
An anti-unification problem is a pair <math>\langle t_1, t_2 \rangle</math> of terms. A term <math>t</math> is a common generalization, or anti-unifier, of <math>t_1</math> and <math>t_2</math> if <math>t \sigma_1 \equiv t_1</math> and <math>t \sigma_2 \equiv t_2</math> for some substitutions <math>\sigma_1, \sigma_2</math>. For a given anti-unification problem, a set <math>S</math> of anti-unifiers is called complete if each generalization subsumes some term <math>t \in S</math>; the set <math>S</math> is called minimal if none of its members subsumes another one.
First-order syntactical anti-unification
The framework of first-order syntactical anti-unification is based on <math>T</math> being the set of first-order terms (over some given set <math>V</math> of variables, <math>C</math> of constants and <math>F_n</math> of <math>n</math>-ary function symbols) and on <math>\equiv</math> being syntactic equality. In this framework, each anti-unification problem <math>\langle t_1, t_2 \rangle</math> has a complete, and obviously minimal, singleton solution set <math>\{t\}</math>. Its member <math>t</math> is called the least general generalization (lgg) of the problem; it has an instance syntactically equal to <math>t_1</math> and another one syntactically equal to <math>t_2</math>. Any common generalization of <math>t_1</math> and <math>t_2</math> subsumes <math>t</math>. The lgg is unique up to variants: if <math>S_1</math> and <math>S_2</math> are both complete and minimal solution sets of the same syntactical anti-unification problem, then <math>S_1 = \{ s_1\}</math> and <math>S_2 = \{ s_2 \}</math> for some terms <math>s_1</math> and <math>s_2</math> that are renamings of each other.
Plotkin[1][2] has given an algorithm to compute the lgg of two given terms. It presupposes an injective mapping <math>\phi: T \times T \longrightarrow V</math>, that is, a mapping assigning to each pair <math>s,t</math> of terms its own variable <math>\phi(s,t)</math>, such that no two pairs share the same variable.[note 4] The algorithm consists of two rules:
- <math>f(s_1,\ldots,s_n) \sqcup f(t_1,\ldots,t_n) \rightsquigarrow f(s_1 \sqcup t_1, \ldots, s_n \sqcup t_n)</math>
- <math>s \sqcup t \rightsquigarrow \phi(s,t)</math>, if the previous rule is not applicable
For example, <math>(0*0) \sqcup (4*4) \rightsquigarrow (0 \sqcup 4)*(0 \sqcup 4) \rightsquigarrow \phi(0,4) * \phi(0,4) \rightsquigarrow x*x</math>; this least general generalization reflects the common property of both inputs of being square numbers.
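Plotkin's two rules can be sketched directly, with the injective mapping <math>\phi</math> realized as a memo table that hands each pair of terms its own fresh variable. The tuple term representation is an illustrative convention (variables as bare strings, applications as tuples headed by the symbol):

```python
def lgg(s, t, phi=None):
    """Least general generalization of two first-order terms."""
    if phi is None:
        phi = {}                 # memo table realizing the injective phi
    # Rule 1: equal head symbol and arity -> recurse into the arguments.
    if (not isinstance(s, str) and not isinstance(t, str)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, phi) for a, b in zip(s[1:], t[1:]))
    # Rule 2: otherwise map the pair (s, t) to its own variable.
    if (s, t) not in phi:
        phi[(s, t)] = "x%d" % len(phi)
    return phi[(s, t)]

# (0*0) ⊔ (4*4): both occurrences of the pair (0, 4) get the same variable,
# so the result represents x0 * x0 -- the common shape of a square number.
square_lgg = lgg(("*", ("0",), ("0",)), ("*", ("4",), ("4",)))
```

Memoizing on the pair (s, t) is what makes the two occurrences of (0, 4) collapse into a single variable, which is exactly why the lgg x*x captures "being a square" rather than the weaker x*y.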
Plotkin used his algorithm to compute the "relative least general generalization (rlgg)" of two clause sets in first-order logic, which was the basis of the Golem approach to inductive logic programming.
First-order anti-unification modulo theory
Equational theories
- One associative and commutative operation:
- Commutative theories:
- Free monoids:
- Regular congruence classes:
- A-, C-, AC-, ACU-theories with ordered sorts:
- Purely idempotent theories:
First-order sorted anti-unification
- Taxonomic sorts:
- Feature terms:
- A-, C-, AC-, ACU-theories with ordered sorts: see above
Nominal anti-unification
- Baumgartner, Alexander; Kutsia, Temur; Levy, Jordi; Villaret, Mateu (Jun 2015). Nominal Anti-Unification. Proc. RTA 2015. Vol. 36 of LIPIcs. Schloss Dagstuhl, pp. 57–73.
Applications
- Program analysis:
- Code factoring:
- Induction proving:
- Information Extraction:
- Case-based reasoning:
- Program synthesis: The idea of generalizing terms with respect to an equational theory can be traced back to Manna and Waldinger (1978, 1980), who desired to apply it in program synthesis. In the section "Generalization" (p. 119 of the 1980 article), they suggest generalizing reverse(l) and reverse(tail(l))<>[head(l)] to obtain reverse(l')<>m'. This generalization is only possible if the background equation u<>[]=u is considered.
- Natural language processing:
Higher-order anti-unification
- Calculus of constructions:
- Simply-typed lambda calculus (Input: Terms in the eta-long beta-normal form. Output: higher-order patterns):
- Baumgartner, Alexander; Kutsia, Temur; Levy, Jordi; Villaret, Mateu (Jun 2013). A Variant of Higher-Order Anti-Unification. Proc. RTA 2013. Vol. 21 of LIPIcs. Schloss Dagstuhl, pp. 113–127.
- Simply-typed lambda calculus (Input: Terms in the eta-long beta-normal form. Output: Various fragments of the simply-typed lambda calculus including patterns):
- Restricted Higher-Order Substitutions:
Notes
References
- [1] Plotkin, Gordon D. (1970). "A Note on Inductive Generalization". Machine Intelligence. 5: 153–163.
- [2] Plotkin, Gordon D. (1971). "A Further Note on Inductive Generalization". Machine Intelligence. 6: 101–124.
- [3] here: Sect. 1.3
Categories
- Inductive logic programming
- Automated theorem proving
- Logic in computer science
- Unification (computer science)