Talk:Unification (computer science)

Untitled

Should this page be renamed, e.g., Unification (Prolog) or Unification (Computer Programming)? --Nate—Preceding undated comment added at 00:41, 24 November 2002 (UTC)[reply]

No; this is essentially the same concept as is used in various theorem provers and type checkers, and is not specific to Prolog or to computer programming. (Although in most "mathematical" uses of unification, the example A=f(A) fails rather than returning an infinite term; the extra step in the unification algorithm which checks for this case and makes it fail is called the "occurs check".) Carl—Preceding undated comment added at 04:40, 25 August 2003 (UTC)[reply]
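
For illustration, here is a minimal Python sketch of the occurs check Carl describes, under an invented encoding where variables are strings and compound terms are tuples (symbol, arg1, ..., argn); it is a sketch of the idea, not the article's algorithm:

def occurs(var, term):
    """Return True if variable var occurs anywhere inside term."""
    if isinstance(term, str):
        return term == var
    return any(occurs(var, arg) for arg in term[1:])

print(occurs("A", ("f", "A")))   # True:  A = f(A) must fail under the occurs check
print(occurs("A", ("f", "B")))   # False: A = f(B) succeeds, binding A to f(B)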

We really ought to distinguish between syntactic unification (discussed here) and E-unification (not yet discussed). E-unification is really important in automated theorem proving. Syntactic unification (and matching) also has important applications aside from Prolog; in ML-style languages, for instance. Perhaps some discussion of the problem of coverage analysis would also be useful?--Iwehrman 17:01, 27 Mar 2005 (UTC)

I agree. This means that the dominance of Prolog should somehow be removed from the article. The right context to mention is First-order resolution; note that Prolog uses just a particular form of resolution. Higher-order unification should also be mentioned. --Tillmo 03:57, 24 August 2005 (UTC)[reply]

I have added some more general statements to the article, and corrected some typos. These generalities should lend credence to keeping it as its own topic. —Preceding unsigned comment added by 71.36.181.46 (talk) 08:07, 2 May 2008 (UTC)[reply]

I've had no luck finding a page on this kind of unification that doesn't assume the reader wants to do it in Prolog/Lisp/etc. If there are any good web pages that explain unification on logic with a series of clauses containing nothing but plain ol' variables (not even quantifiers, just variables), as you would do logic problems on paper, I think a link to that would be useful. --I am not good at running 21:32, 23 October 2005 (UTC)[reply]

Move unification in Prolog to the prolog article

The bit on prolog ought to be moved to Prolog and referenced, I think. The Prolog page doesn't deal with it sufficiently on its own, and it matters. 138.38.32.84 14:03, 17 January 2007 (UTC)[reply]

Correct the example given

To say that x^2 can be unified with y^3 is at best misleading because the argument that (z^3)^2 = (z^2)^3 = z^6 is semantic and unification is - as I understand it - purely syntactic. I am not enough of an expert to get this right but it seems to me that as *terms* x^2 and y^3 have to be written as m(x,x) and m(y,m(y,y)) where 'm' denotes multiplication. Can these terms be unified? I think so because they both begin with the same function symbol and have the same number of arguments, which - I believe - can themselves be unified. But the definition of unification given on this page does not cover binary (n-ary) functions. —Preceding unsigned comment added by Kilgore666 (talkcontribs) 03:01, 8 October 2007 (UTC)[reply]
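
As a runnable check of the question above, here is a small Robinson-style syntactic unifier in Python, under an invented encoding with variables as lowercase strings and compound terms as (symbol, args...) tuples; all helper names are illustrative. On m(x,x) and m(y,m(y,y)), decomposition yields x ≐ y and x ≐ m(y,y), hence y ≐ m(y,y), which the occurs check rejects, so the two terms are not syntactically unifiable:

def unify(s, t, subst):
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                       # s is a variable
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):                       # t is a variable
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):         # symbol clash
        return None
    for a, b in zip(s[1:], t[1:]):               # decompose into argument pairs
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

def walk(term, subst):
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    term = walk(term, subst)
    if isinstance(term, str):
        return term == var
    return any(occurs(var, a, subst) for a in term[1:])

# x^2 encoded as m(x,x) against y^3 encoded as m(y,m(y,y)):
print(unify(("m", "x", "x"), ("m", "y", ("m", "y", "y")), {}))  # None: not unifiable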

I also find the polynomial example (in this version) confusing. Suppose x=0 and y=2. If we "unify" x^2 with y^3 as z^6, as shown, we get z=0 to make x=z^3 true, and z=√2 to make y=z^2 true. What?? Is the example wrong or is the article leaving out a crucial piece of information? --Ben Kovitz (talk) 03:55, 27 December 2007 (UTC)[reply]

Answer and important remark

Yes, BenKovitz and Kilgore666, you are right: the article is leaving out a number of important aspects, and in my opinion it is complete rubbish in certain portions. It should be completely rewritten. In some parts the explanations really do hurt, since they simply fail to capture the meaning and focus of the notions. —Preceding unsigned comment added by 93.207.221.50 (talk) 09:36, 1 April 2010 (UTC)[reply]

MGU

What about the Most General Unifier? That term even redirects here, but the page does not mention it -- 200.185.249.203 (talk) 18:07, 16 March 2008 (UTC)[reply]

I have now tried to add the definition of MGU to the article. Hope this helps. Offliner (talk) 00:00, 17 September 2008 (UTC)[reply]

Why use equations in the examples?

Why speak of unifying "A=B, B=abc"? This is simply confusing. It is better to speak of unifying "A,B,abc" or "A=B=abc". There is no reason to mix equal signs and commas like that, when they mean the same thing in this context. I'm changing it to use commas throughout. Halberdo (talk) 00:24, 7 April 2009 (UTC)[reply]

Actually, I see why they are using equal signs. They are "unifying" a list of equations by separately unifying the terms that can be inferred to be equal in those equations. That is, whoever wrote those examples would say that unifying "A=abc, B=xyz" would succeed, unifying A with abc and B with xyz, actually performing two separate unifications. However, I think that this syntactic sugar is unnecessary for an introductory article. Halberdo (talk) 00:35, 7 April 2009 (UTC)[reply]

This is simply Prolog syntax: A = B is the unification operator, and it cannot be chained like A = B = C; one has to write it as several unifications joined by commas (which are really conjunction operators). I do not see any particular reason why a general article on unification should be bound by the syntactic conventions of a specific language, so your change is fine. — Emil J.—Preceding undated comment added at 09:44, 7 April 2009 (UTC)[reply]

It is better to use potential equations rather than . — Preceding unsigned comment added by 92.233.94.70 (talk) 23:06, 26 August 2011 (UTC)[reply]

car

click this link for car because it is a part of a car —Preceding unsigned comment added by 86.14.186.72 (talk) 16:39, 22 March 2010 (UTC)[reply]

Introduction: non-standard notion of unifier?

The introduction seems hopelessly confused to me. For one, it doesn't even mention what constitutes a solution to a unification problem. As far as I'm aware, it's entirely standard to return a *substitution* as a solution to a unification problem, not another term, where applying the substitution to input terms results in two alpha-equivalent terms. The notion of mgu is defined by taking an ordering on substitutions, not on terms. The polynomial example is completely unclear, as whether or not these two polynomials even have a unifier depends entirely on how they are encoded.

Also, nitpicking, but the section on first-order unification is mistitled. It isn't unification for first-order logic that is being described, but first-order unification, a description of the terms themselves to be unified (first-order terms, as opposed to higher-order terms or nominal terms).

The description of higher-order unification is also just ... bizarre. What are ellipses, in this context? IMO, it's easier to describe higher-order unification as "just" the process of unifying higher-order (lambda-) terms. Huet's algorithm is mentioned in the programming language section above, and there it is misplaced: it belongs in the section on higher-order unification, as it's the canonical example of a higher-order unification algorithm, and because virtually every type-inferencing algorithm that I've come across for functional PLs uses Robinson's first-order unification algorithm, not higher-order unification.

If I get time, I plan to rework this article soon. DPMulligan (talk) 16:32, 9 May 2010 (UTC)[reply]

Higher-Order Unification needs help!

From reading the article I have *no* idea what higher-order unification is. DPMulligan has the right idea; this article definitely needs work! Norman Ramsey (talk) 14:52, 12 May 2010 (UTC)[reply]

Hi, I wrote some sentences for HO unification, but it's more a stub than anything else. Explaining HO unification in detail requires lots of technical knowledge, so even an example is very hard to give. If you like, I can add some more stuff, but I don't know if it is useful. Alerion (talk) 00:46, 21 January 2011 (UTC)[reply]

Reworking article

I have started reworking the article, starting with the Introduction, removing all the technical jargon (we do not need to mention lattices in the introduction to a unification article). Any comments (in particular, any technical mistakes) are welcome. DPMulligan (talk) 15:29, 13 May 2010 (UTC)[reply]

Yeah, this article sucks. But before diving into it, perhaps its prerequisites should be written. Term rewriting has no article basically, and neither does equational logic. I see that you started writing nominal terms (computer science), which seems to want to cover unification in nominal equational logic. Before we get to the latest research topic (which you seem to be involved with), please write some of the more basic stuff that's missing. I wrote the abstract rewriting system article and overhauled the Semi-Thue system one a while back, but don't have much time to contribute lately, so the rewriting article still lacks major topics, including term rewriting and unification. You seem very into lambda-calculus stuff, but please (re)write this from a more general perspective. Thanks, Pcap ping 21:07, 13 May 2010 (UTC)

The article is good on syntactic unification but unclear on other forms of unification. — Preceding unsigned comment added by 92.233.94.70 (talk) 23:46, 26 August 2011 (UTC)[reply]

Suggestions for a rework of the article

Hi, I don't have much time to rewrite the article at the moment, but I would ask you to give me feedback on my ideas:

  • Basically, three topics are mixed here: theory-free unification, theory unification (also called equational unification) and higher-order unification. All three topics are large enough for their own articles -- what do you think is the best way to proceed without creating two stub articles which may not be filled for some time?
  • As mentioned by others below, first-order syntax seems better to me than Prolog syntax. Also, the notion of a unification problem should be present.
  • The algorithm is explained very fuzzily -- what do you think of the algorithm of Martelli and Montanari (this is the one with the rules of decomposition, variable elimination, occurs check and symbol clash; hope I did not forget one)? See the sketch after this comment.
  • The applications could be redesigned:
    • Resolution (logic) is the most prominent example, with Prolog, a resolution-based programming language, as an example
    • Automated theorem proving
    • Type inference in functional languages

Thanks for your input! Alerion (talk) 00:57, 21 January 2011 (UTC)[reply]
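
For the Martelli-Montanari rule set mentioned above, here is a rough Python sketch of those rules (delete, decompose, symbol clash, swap, occurs check, variable elimination), under an invented encoding with variables as strings and compound terms as (symbol, arg1, ..., argn) tuples; it is an illustration under these assumptions, not the article's formulation:

def solve(eqs):
    eqs = list(eqs)                # the set of potential equations still to solve
    solved = {}                    # solved part: variable -> term
    while eqs:
        s, t = eqs.pop()
        s, t = apply(s, solved), apply(t, solved)
        if s == t:
            continue                                  # delete: trivial equation
        if isinstance(s, tuple) and isinstance(t, tuple):
            if s[0] != t[0] or len(s) != len(t):
                return None                           # symbol clash
            eqs.extend(zip(s[1:], t[1:]))             # decompose into argument pairs
            continue
        if not isinstance(s, str):
            s, t = t, s                               # swap: variable to the left
        if occurs(s, t):
            return None                               # occurs check
        solved = {v: subst(u, s, t) for v, u in solved.items()}
        solved[s] = t                                 # variable elimination
    return solved

def apply(term, sigma):            # apply a whole substitution to a term
    if isinstance(term, str):
        return apply(sigma[term], sigma) if term in sigma else term
    return (term[0],) + tuple(apply(a, sigma) for a in term[1:])

def subst(term, var, t):           # replace a single variable by t
    if isinstance(term, str):
        return t if term == var else term
    return (term[0],) + tuple(subst(a, var, t) for a in term[1:])

def occurs(var, term):
    if isinstance(term, str):
        return term == var
    return any(occurs(var, a) for a in term[1:])

print(solve([(("f", "x", ("g", "y")), ("f", ("g", "z"), "x"))]))
# {'x': ('g', 'z'), 'y': 'z'}  -- a most general unifier of f(x,g(y)) and f(g(z),x)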

I've done some work, introducing the idea of potential equations and a nice multiset-based unification algorithm. — Preceding unsigned comment added by ODogerall (talkcontribs) 23:02, 1 February 2011 (UTC)[reply]
I agree the topics should not be mixed in the current way. What do you think about the following structure?
1. Unification in general - Defn.: trying to solve a system of equations - Common notions (substitution etc.) - Applications: resolution thm. proving, term rewriting(?), Knuth–Bendix_completion_algorithm, Type_inference by Hindley–Milner, ...
1.1. Unification of ordinary terms
1.1.1. purely syntactically, i.e. wrt. an empty equational background theory - most existing stuff from the current article goes here
1.1.2. "semantically", i.e. wrt. general (narrowing_(computer_science), Constraint_solving wrt. equational constraints) and particular [1] equational background theories, which is currently only mentioned, but not explained
1.2. Unification of lambda-terms, i.e. higher-order unif.
Jochen Burghardt (talk) 23:47, 15 May 2013 (UTC)[reply]
The algorithm section is quite nice. I tried to improve the rules' layout, using table alignment. The rules should possibly be named or numbered to ease referring to a particular one. Maybe a link to an article about presenting algorithms in rule form ("separation of logic and control", mentioned in Algorithm#By_implementation) should be added. Some notation needs to be unified: "no solution" in section Syntactic_unification_problem_on_first-order_terms = "fail" in section Examples_of_syntactic_unification_of_first-order_terms = "⊥" in section A_unification_algorithm; equations are joined by "∧" in Syntactic_unification_problem_on_first-order_terms, but by "," in A_unification_algorithm - maybe some explanations at appropriate places will suffice.
I don't understand why the algorithms needs multisets. If a potential eqn is generated several times, won't it be sufficient to solve it just once? That is, wouldn't the alg. work also with ordinary sets (a notion which needs no additional explanation)?
In the termination proof, what is meant by "non-unique variable"? If the rules had names or numbers, the proof could list for each triple-component NUVN,NLHS,EQN the rules where it is decreased. (By the way: the proof is a good example for proving based on a well-ordering, but not by "structural induction" - both notions seem still to be confused e.g. in Structural_induction#Well-ordering.) Jochen Burghardt (talk) 19:39, 16 May 2013 (UTC)[reply]
  1. ^ e.g. Jörg H. Siekmann, Universal Unification, Proc. 7th Int. Conf. on Automated Deduction, Springer LNCS vol. 170, pp. 1–42, 1984, gives a catalog

Process or operation?

Is Unification the operation that binds two terms, or is it the process that finds a solution to a whole theory? Some sections describe it as the former ("It represents the mechanism of binding the contents of variables and can be viewed as a kind of one-time assignment") and some as the latter (the new lead section, and the Satisfiability article describe it that way). Diego Moya (talk) 16:29, 13 June 2011 (UTC)[reply]

Don't know what you mean by "binding". In practice, unification is done on pairs of terms; in principle, you can take its reflexive, transitive closure, which would make it apply to entire theories. I'll try to modify the article to say this. ... Note: Unification is done with substitution, and that is what the opening paragraph says. Now, in systems which have bound and free variables, and one performs a substitution on a bound variable, then, yes, I guess you could think of it as being kind-of-like binding a value to a variable; but these are different ideas, really. Among other things, you can always make substitutions on free variables as well :-) linas (talk) 17:53, 13 June 2011 (UTC)[reply]
Unification can refer to syntactic unification, i.e. finding a substitution that makes both terms equal, or to theory unification, i.e. unification modulo some (equational) theory. The first is decidable with a fairly simple algorithm. The second is not, in general, decidable, but semi-decidable. There are interesting instances that are decidable (unification modulo associativity and commutativity is a major example with practical relevance). --Stephan Schulz (talk) 18:13, 13 June 2011 (UTC)[reply]
I tried to carefully distinguish between "equal" and "identical" in the article: syntactic unification is about identity, not equality. If there are no equations, there cannot be any equalities, only identities. linas (talk) 18:21, 13 June 2011 (UTC)[reply]
And, as I read the above, I see that I misused the word "theory". Sigh.. my bad. However, this comes back to Diego Moya's initial point: for syntactic unification, we use 'substitution (logic)' to describe what's going on. But as soon as one goes over to equational theories, or talks about satisfiability, and especially Boolean SAT, we use the word 'interpretation (logic)' instead. These are really not the same thing, and some space should be devoted to this, as this is a point of confusion. More generally, I have yet to see any article on WP that really deals with syntax vs. semantics -- semantics could almost do that, but it doesn't... linas (talk) 19:00, 13 June 2011 (UTC)[reply]
p.p.s. I've casually noticed that associativity seems to make a striking difference in decidability (I've noticed it here and there), but I don't really get why, so both details and general remarks on why associativity (and commutativity) seems to tilt things around would be nice. linas (talk) 19:18, 13 June 2011 (UTC)[reply]

Identity/equality?

As far as I understand unification (and I understand it fairly well), unification does not try to show that terms are equal (or identical), but rather determine if they have equal (or identical) instances. As an example, f(a,X) and f(Y,b) are neither equal nor identical, but have the common instance f(a,b) (with most general unifier {X<-b, Y<-a}). Similarly, if the background theory is {g(a)=b}, then g(X) and g(b) have equal instances (g(g(a)) and g(b), respectively), although the terms themselves are neither equal nor identical. --Stephan Schulz (talk) 14:18, 19 October 2011 (UTC)[reply]

I would agree with that yes. —Ruud 12:00, 20 October 2011 (UTC)[reply]
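
As a quick runnable check of Stephan's example (an illustrative Python encoding, with variables as uppercase strings and terms as tuples; the helper name is invented):

def apply(term, sigma):
    if isinstance(term, str):
        return sigma.get(term, term)
    return (term[0],) + tuple(apply(a, sigma) for a in term[1:])

mgu = {"X": "b", "Y": "a"}
print(apply(("f", "a", "X"), mgu))   # ('f', 'a', 'b')
print(apply(("f", "Y", "b"), mgu))   # ('f', 'a', 'b')  -- the common instance f(a,b)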

Delete section "Definition of unification for first-order logic"?

I made some changes in section "Definition of unification for first-order logic":

  • Since there is usually more than one unifier of two terms (or sentences) p and q, the notion "UNIFY(p,q)" made no sense; since it wasn't used elsewhere, I deleted it.
  • The notion of a unifier of a set of sentences had to be defined.
  • The notion of applying a substitution to a set of sentences still remains to be defined.

I now think that this section should be integrated into "Syntactic unification problem on first-order terms", where unification of terms (rather than sentences) is explained, since I don't see anything particular to logical sentences here. Instead, the main content is the explanation of "unifier" vs. "most general unifier", which could better be shifted to "Syntactic unification problem on first-order terms". I also suggest not to define simultaneous unification of more than two terms formally, i.e., to simplify the explications about a set L of sentences/terms.

I made an attempt to change "Syntactic unification problem on first-order terms" accordingly, but I don't dare to delete "Definition of unification for first-order logic" without agreement. An alternative could be to keep it, but to elaborate on the differences between unification of terms and sentences, making connections to Resolution_(logic). Jochen Burghardt (talk) 12:56, 15 May 2013 (UTC)[reply]

Dubious examples

In the E-unification section, it is claimed that unification is decidable over Presburger arithmetic and the theory of real-closed fields. I’ve never heard of these theories in the context of unification, and the given references do not seem to support these claims; they are about decidability of the theories themselves (which is of course well known for both). Next, there is a claim that unification is semidecidable for “Diophantine equations”. I don’t even know what that’s supposed to mean, as Diophantine equations as such are not a theory at all, but again, the given source (Matiyasevich’s paper proving the MRDP theorem) does not say anything about unification. There may be something I’m missing here, but as written, these examples do not make sense, and are unsourced.—Emil J. 13:14, 2 July 2013 (UTC)[reply]

Come to think of it, unification for real-closed fields is UNdecidable. To begin with, E-unification of terms only uses the equational part of the given theory, and the equational fragment of the theory of real-closed fields is the theory of commutative rings. Anyway, since there is a constant in the language, any unifiable set of equations has a ground unifier. Since every term is a polynomial with integer coefficients, any set of equations is equivalent to a set of polynomial equations with integer coefficients, and it has a ground unifier if and only if it has an integer solution. Thus unification over RCF (or for that matter, over any simple extension of the theory of commutative rings that does not prove n = 0 for any positive integer n) is equivalent to solvability of integer polynomial (i.e., Diophantine) equations, and as such it is undecidable.

I suppose unification over Presburger arithmetic is decidable by a similar argument, as it reduces to decidability of existential Presburger formulas. In any case, the equational fragment of Presburger arithmetic is the theory of abelian groups with an additional free constant 1, so it would be better to call it unification with constants for abelian groups. (I observe that unification with constants is not even mentioned in the article.) I still do not know what to make of the “Diophantine equations” example, but I begin to suspect that whatever the author may have intended also boils down to unification in the theory of commutative rings, as in the case of RCF.

I thus propose to ditch the three entries, and possibly replace them with abelian groups with additional constants as an example of decidable unification, and commutative rings as an example of semidecidable unification. (As for references, these are both mentioned on page 486 of Baader&Snyder.)—Emil J. 14:06, 2 July 2013 (UTC)[reply]

It might be also worthwhile to include some examples from modal logic, such as transitive modal algebras (decidable), Ku-algebras (undecidable).—Emil J. 14:20, 2 July 2013 (UTC)[reply]

Concerning the sources, here is a literal quote from Dershowitz.Jouannaud.1990 (see section "References"), p.282, sect. "Semantic unification":
"On the brighter side, many other theories have decidable unification problems, including Presburger arithmetic (Presburger [see footnote 20], Shostak [21]), real closed fields (Tarski [22], Collins [43]) and monoids (Makanin [see footnote 17])."
Concerning your arguments, I'll try later to understand them. Jochen Burghardt (talk) 08:26, 3 July 2013 (UTC)[reply]
(In the preceding quote, I replaced Dershowitz+Jouannaud reference numbers by the article's footnote numbers.)
Concerning the source on Diophantine equations, I inferred the claim about semidecidability from the quote from Dershowitz.Jouannaud.1990, again p.282: "Of course, E-unifiability is semidecidable for recursively-enumerable E. ... 'British-museum' method of interleaving the production of substitutions with the search for equational proofs." I think checking equality of ground instances of polynomial expressions over the integers is even semidecidable. Hence, given any set of Diophantine equations, it is possible to enumerate all possible substitutions, checking for each one whether it is a solution. This would "semidecide" the equation set. Maybe some of the preceding should be said in the article, as an additional explanation?
I don't have sufficient knowledge to follow your argument about undecidability of unification for real-closed fields (e.g. why are only integers admitted as ground unifiers). However, I think if the whole theory is decidable, then at least unifiability is decidable, too; if the theory decision algorithm happens to provide a witness term for each proven existentially-quantified formula, then an answer substitution can be constructed from it. I admit that it may still be a problem to get a complete and minimal solution set.
I didn't understand what you meant by "unification with constants" in general, since you seem not to consider unification in monoids as a special case of that. Anyway, the section on E-unification is admittedly still in a very preliminary state; you're welcome to explain unification with constants in general and/or in particular theories, as well as unification in modal theories (I wasn't aware of the latter at all). Jochen Burghardt (talk) 15:05, 3 July 2013 (UTC)[reply]
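
To make the quoted "British-museum" argument concrete, here is a Python sketch of the enumerate-and-check idea for the Diophantine reading; the function name and the sample equation are illustrative assumptions, not from the article. It halts exactly on solvable inputs, so it is only a semidecision procedure:

from itertools import count, product

def solve_diophantine(eq, nvars):
    """Search growing integer cubes; halts exactly when eq has a solution."""
    for bound in count():
        for vals in product(range(-bound, bound + 1), repeat=nvars):
            if eq(*vals):
                return vals            # a solution was found, so we halt
    # on unsolvable input the loop above runs forever: a semidecision procedure

print(solve_diophantine(lambda x, y: x*x + y*y == 25, 2))   # finds e.g. (-5, 0)
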
The main problem with the “Diophantine equations” line is that saying that E-unification for Diophantine equations is semidecidable makes as much sense as saying that E-unification for apples is semidecidable. What equational theory are you talking about?
As for RCF, it is far from true that unifiability in decidable theories is decidable. The decision algorithm for RCF cannot provide witnessing terms, because (constant) terms in RCF only denote integers, whereas existential quantifiers need to be witnessed by arbitrary algebraic reals.
As for unification with constants, see section 3.2.2 of the Baader and Snyder chapter in the Handbook of Automated Reasoning for an explanation. The basic point is to consider unification problems involving additional free constants outside the signature of the equational theory. I might try writing something about it later, but I don’t have the time at the moment.
No offence, but despite your expertise in other unification topics, none of the above persuades me that you have a clue about this particular issue, so I’ll go ahead with the changes.—Emil J. 15:36, 3 July 2013 (UTC)[reply]
Dershowitz.Jouannaud.1990 gives on p.284 a canonical term rewriting system (TRS) for addition, subtraction, and multiplication of integers, as an example for a theory where the word problem is decidable (by the TRS), but the unification problem is not (since solvability of Diophantine equations is generally undecidable):
x+0 → x        x+s(y) → s(x+y)        x+p(y) → p(x+y)
x-0 → x        x-s(y) → p(x-y)        x-p(y) → s(x-y)
x*0 → 0        x*s(y) → x*y+x         x*p(y) → x*y-x
p(s(x)) → x    s(p(x)) → x
The algebra of ground terms over 0,s,p,+,-,*, factorized by the congruence (say, E) induced by the TRS, is obviously isomorphic to ℤ. On the other hand, a unification oracle wrt. this E would return substitutions mapping to arbitrary (non-ground) terms over 0,s,p,+,-,*; however, each such substitution could be instantiated to a ground substitution. Hence, as a corollary of Matiyasevich, unification w.r.t. this E is undecidable. In this sense, this E can be viewed as the equational theory underlying Diophantine equations, with a grain of salt. You are of course right in that this E can't characterize ℤ up to isomorphism. Anyway, the argument of Dershowitz and Jouannaud about the distinction between word problem and unification problem would be worth explaining in the article.
As for real closed fields, I see your point now: a formula "∃x: s=t" may be satisfied by a value for x that isn't expressible as a term, while a solution of the unification problem "s=t" can only be something expressible as a term. When talking about term algebras, both notions coincide; else (e.g. in RCF, you said), they may differ. I think this point should also be explained in the article - maybe even Dershowitz and Jouannaud (who certainly have a clue about this particular issue) jumped to conclusions when claiming a decidable unification problem (rather than only a decidable theory) for RCF?
After having had a glance at sect.3.2.2 in Baader and Snyder, I think this distinction of elementary / with-constants / general E-unification (which is somewhat new and isn't addressed at all by Dershowitz.Jouannaud.1990) should also be mentioned in the article; perhaps illustrated with Burckert.1989's example of elementary unification being decidable, but unification with constants being undecidable (p.533-534 in his paper):
f(x, g(y, z, v), v) = g(f(x, y, v), f(x, z, v), v)   Dist-r
f(g(x, y, v), z, v) = g(f(x, z, v), f(y, z, v), v)   Dist-l
f(f(x, y, v), z, v) = f(x, f(y, z, v), v)            Ass
f(x, y, a) = a
f(x, y, f(u, v, w)) = a
f(x, y, g(u, v, w)) = a
g(x, y, a) = a
g(x, y, f(u, v, w)) = a
g(x, y, g(u, v, w)) = a
"In this theory every unification problem is trivially solvable by substituting a for each variable, since then all terms will become equal to a. However, if we introduce a new free constant b, a subset of the terms, where the third argument is always b, ..." amounts to DA-unification, "... and hence unification will be undecidable."
Btw: Thanks for fixing the MathJax Problems! Jochen Burghardt (talk) 17:01, 5 July 2013 (UTC)[reply]
The article is too long as is, and I don’t think that artificial examples involving unsightly ad hoc lists of equations are helpful to the reader. The fact that an equational theory can have undecidable unification with constants even though elementary unification is decidable is certainly worth pointing out, but I’d leave out the counterexample, a reference should be enough.
Thanks for explaining the Diophantine equations theory, however I think that what I just said applies to it as well. It is a weaker theory than commutative rings, but it has undecidable unification for exactly the same reasons, so it does not illustrate anything new, and being an artificial example, it would require listing all the equations above. (The undecidability of its unification problem also directly follows from undecidability of unification in commutative rings, since the two theories prove the same variable-free equations.)—Emil J. 18:36, 9 July 2013 (UTC)[reply]

Failed to Parse errors?

It might be my browser, but I seem to get a lot of error messages? Example:

Failed to parse (unknown function '\textcolor'): \{ \textcolor{red}{y \stackrel{?}{=} nil}, \; a.v_3 \stackrel{?}{=} v_4, \; app(v_3,v_4) \stackrel{?}{=} a.nil \}

I think this is related to LaTex. You can delete this section if I've made a mistake :) 88.104.138.194 (talk) 08:16, 5 July 2013 (UTC)[reply]

Indeed, \textcolor and \leadsto are not supported by the default TeX rendering engine. They are supported by MathJax, which is probably why it went unnoticed so far.—Emil J. 11:52, 5 July 2013 (UTC)[reply]
Fixed. Another problem with the default rendering is that spacing is wrong (viz. missing) around the ≟ signs (because for some silly reason, the preprocessor wraps \stackrel’s in { } before passing them to LaTeX; \mathrel and friends are likewise broken). This is a long-standing problem, and I don’t know of a good way to work around this (inserting explicit \, fixes the texvc output, but makes the spacing too large in MathJax).—Emil J. 12:12, 5 July 2013 (UTC)[reply]

Note 4

Note 4 has gone wrong. I do not know how to correct it. — Preceding unsigned comment added by 86.166.162.78 (talk) 20:33, 9 September 2013 (UTC)[reply]

Postfix vs infix

"Applying that substitution to a term t is written in postfix notation as t {x1 ↦ t1, ..., xk ↦ tk}". This is infix, not postfix. — Preceding unsigned comment added by 97.80.110.212 (talk) 13:28, 27 November 2013 (UTC)[reply]

(I moved the preceding remark from page top to here and added a section heading, in order to improve traceability.)
"Postfix" refers to the order of the substitution "{x1 ↦ t1, ..., xk ↦ tk}" as a whole and its operand "t"; the former is written after the latter. See also the notation "tσ" in the same section. You are right if you meant that the operator "↦" is an infix operator; however, it is internal to the substitution and not referred to by the sentence you cited above. - Jochen Burghardt (talk) 14:49, 27 November 2013 (UTC)[reply]

Haskell type inference example

The example in the article says «The Haskell expression 1:['a','b','c'] is not correctly typed, because the list construction function ":" is of type a->[a]->[a] and for the first argument "1" the polymorphic type variable "a" has to denote the type Int whereas "['a','b','c']" is of type [Char], but "a" cannot be both Char and Int at the same time.»

This may sound a bit nit-picking, but the expression «1:['a','b','c']» is not necessarily ill-typed. Number literals are overloaded in Haskell, so «1» isn't of type Int but actually of type Num n => n. Given a type class instance Num Char, «1:['a','b','c']» can be correctly inferred to have type [Char]. Therefore, I suggest using another example here, like «1 : ('a', 'b')», which will fail to unify the types [a] and (b, c). -- PyroPi (talk) 02:53, 28 November 2013 (UTC)[reply]

erroneous definition of "more special substitution"

This statement from the current text is wrong: A substitution σ is more special than, or subsumed by, a substitution τ if xσ is more special than xτ for each variable x. For example, { x ↦ f(u), y ↦ f(f(u)) } is more special than { x ↦ z, y ↦ f(z) }, since f(u) and f(f(u)) are more special than z and f(z), respectively. For instance, the first substitution applied to g(x,z) produces t1 = g(f(u),z), while the second gives t2 = g(z,z). And t1 is not more special than t2. (Hence the two substitutions do not satisfy the standard definition of "more general": theta is more general than tau if tau is the composition of theta with some substitution eta.)

In particular, {x/y} is not more general than {x/a} (Example 2.7, K.Apt "From logic programming to Prolog").

I am going to at least introduce a simple correction, replacing "for each variable x" by "for each term". (This is easier than introducing substitution composition.) Hope this does not introduce any new hidden error :) --Wlodr (talk) 21:49, 24 October 2015 (UTC)[reply]

I am not familiar with non-syntactical unification. I hope that the definition I introduced also works in that case. However, I am unable to add a standard definition for the general case. It possibly should replace = by ≡ in σ = τη. But what is ≡ for substitutions?

So a competent look of someone familiar with general unification is necessary. — Preceding unsigned comment added by Wlodr (talkcontribs) 22:43, 24 October 2015 (UTC)[reply]
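
To make the counterexample from Apt's Example 2.7 mechanical, here is a small Python sketch of substitution composition; the dict encoding and helper names are invented for illustration. Pointwise, xτ = a is an instance of xσ = y, yet no η satisfies ση = τ, because any η that sends x's image y to a must move y as well:

def apply(term, s):
    if isinstance(term, str):
        return s.get(term, term)
    return (term[0],) + tuple(apply(a, s) for a in term[1:])

def compose(sigma, eta):
    """sigma followed by eta: x -> (x sigma) eta, dropping identity bindings."""
    out = {x: apply(t, eta) for x, t in sigma.items()}
    for y, t in eta.items():
        out.setdefault(y, t)
    return {x: t for x, t in out.items() if x != t}

sigma, tau = {"x": "y"}, {"x": "a"}
eta = {"y": "a"}                       # the only candidate that fixes x
print(compose(sigma, eta))             # {'x': 'a', 'y': 'a'}  !=  tau = {'x': 'a'}
print(apply("y", compose(sigma, eta)), apply("y", tau))   # 'a' vs 'y': they differ on y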

Small Mistake

The eliminate step in the algorithm seems wrong. I would suspect it to drop an element from the set in the right-hand side. This definition seems to loop. Does anyone agree? — Preceding unsigned comment added by 86.82.44.193 (talk) 07:50, 17 June 2016 (UTC)[reply]

Your argument is tempting, but wrong. The algorithm is given in a form where the solution substitution is accumulated within the set G itself. For this reason, x=t still has to appear on the rule's rhs. For a variable x, the eliminate rule isn't applicable more than once, since after its first application x is no longer a member of vars(G). I'll add a corresponding note to the article; thanks for the hint. - Jochen Burghardt (talk) 14:55, 24 July 2016 (UTC)[reply]
I too am very perplexed by that, indeed that specification cannot be translated straight to a recursive function: namely, what you say may make sense but then a base case seems to be missing, to say that all equations of the form {x=t} have to be considered "resolved"/have to be dropped from G, otherwise indeed all we get is infinite recursion: alternatively, in the eliminate rule, we should express something like "recurs on G only", i.e., overall, we should rewrite the procedure to make the recursive call explicit, and what we are returning/how we are building the return value. (And, in any case, building it "in place" is per se not satisfactory: we want a functional spec...) LudovicoVan (talk) 08:56, 3 July 2023 (UTC)[reply]
The rules follow the paradigm of separating logic from control, cf. e.g. Logic_programming#Logic_and_control. The rules just represent the "logic" part; adding an arbitrary "control" strategy will lead to a correct unification algorithm (provided the strategy is "fair", i.e. every applicable rule is eventually applied). This is the reason why no explicit recursion strategy is shown.
Rule "eliminate" is applicable only if the equation set contains "x=t" and some other occurrence of x outside of t. After it has been applied, the latter condition is false, so the rule is no longer applicable (on x). Therefore, no infinite recursion is possible.
If that doesn't convince you, try to make up a counter example, i.e. a unification problem and a sequence of rule applications that doesn't terminate. - Jochen Burghardt (talk) 14:05, 3 July 2023 (UTC)[reply]
I am not familiar enough with separation logic to discuss at that level, but you say << [r]ule "eliminate" is applicable only if the equation set contains "x=t" and some other occurrence of x outside of t >>, which even makes sense, indeed closes the alleged "circle", but, unless I have missed it, the article does not say that anywhere.
That said, on a more general note and FWIW, I'd contend that the most generic and readable expression of an algorithm is a functional specification, not a relational/equational one: all the more so in the context of a/any article whose focus is not per se logic programming or equational logic. LudovicoVan (talk) 14:56, 3 July 2023 (UTC)[reply]
Regarding the eliminate rule, Jochen added a note 8 on eliminate which points out that x must appear in G, i.e. outside t. So yes, you missed it.
All the unification algorithms I've read are formulated similarly to the example algorithm, as a series of rules. I would argue the rules are based on a rewriting formalism rather than a relational one - it's the same process of (nondeterministically) applying rules until no more rules are applicable. And of course once you prove the final result is unique (confluent), then the rewriting system defines a function. Rewriting is a natural extension of functional programming, just more powerful.
And I think the focus of the article is in fact logic. For one, the lead sentence says "in logic and computer science". I looked for a bit to find uses for unification besides in logic programming / proof search type applications but there really aren't any. The closest is "matching", which is where you are unifying a pattern with a closed term. That has some applications in higher-order term rewriting. But it's kind of a specialized use. Mathnerd314159 (talk) 03:54, 4 July 2023 (UTC)[reply]
You guys are amazing: 1) it wasn't me who started talking of separation logic, and of course *I* can see it's more rewrite rules; 2) my/our objection remains: you try and implement those rules as written and you get an infinite loop: indeed YOU do try and prove that it's confluent and fail! (and note 8 is totally irrelevant, so what are you you even talking about??); 3) the focus of the article is not "logic", which is not even meaningful... I will fix that definition as soon as I find the time. LudovicoVan (talk) 11:22, 4 July 2023 (UTC)[reply]
Sorry, I have just seen note 8: the thing is so poorly written and formatted that that portion goes off screen. It's a mess and very poorly written, it simply deserves a rewrite that is decently formal and not just that much handwaving. LudovicoVan (talk) 11:27, 4 July 2023 (UTC)[reply]
1) Actually it was you, the first hit for "separation logic" is your comment of 14:56, 3 July 2023 (UTC). Jochen was discussing separation of logic and control, which is completely different from separation logic.
2) There is a sketch of the termination proof in Unification_(computer_science)#Proof_of_termination, it is quite straightforward. Now that I look at it, it does not in fact prove confluence. The lead just refers to Martelli and Montanari's paper to say that a most general unifier (MGU) exists and the algorithm produces that unique answer. I guess if you wanted, you could expand the "proof of termination" to a "proof of termination and uniqueness" based on Martelli and Montanari's paper. It seems like a good idea, maybe you will be able to improve the article that way.
I myself am more interested in higher-order unification though, and there the MGU doesn't exist. It is for this reason that Huet's "algorithm" is only a semi-algorithm and all higher-order unification algorithms can get stuck in infinite loops evaluating possibilities for unification. So I would say that first-order unification being terminating and confluent is a nice property but not really that important to discuss.
3) There is a section "Application: unification in logic programming"... you clearly are in too much of a hurry. I would say to calm down and find some time to understand the subject, before attempting to make "fixes". Certainly the article is not very good, but at least it has some usable references. Mathnerd314159 (talk) 20:36, 4 July 2023 (UTC)[reply]
Sorry again, for misreading a poor reply. Yes, the article is poorly written if at all correct(!), that was indeed the whole point! But it's true that any half-competent programmer understands what was actually meant, surely the first-order part, so I'll just fix that spec when I find the time... and the motivation. LudovicoVan (talk) 21:52, 4 July 2023 (UTC)[reply]
As for confluence, Martelli and Montanari have proved (on p.4 of their 1976 internal note) that the algorithm always produces a "solved form", which, moreover, corresponds to a most general unifier. It is straightforward to show that two different most general unifiers must be variants of each other modulo renaming. So there can't be two essentially different outcomes of the algorithm, hence it must be confluent. - Jochen Burghardt (talk) 08:16, 5 July 2023 (UTC)[reply]
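
For readers following this thread, here is a tiny Python trace of the eliminate rule's one-shot behaviour, under an invented encoding and with invented helper names; the side condition (x must occur in G outside x = t) is the one stated in the article's note, and the trace shows why the rule can fire at most once per variable:

def subst(term, var, t):
    if isinstance(term, str):
        return t if term == var else term
    return (term[0],) + tuple(subst(a, var, t) for a in term[1:])

def eliminate(G, x, t):
    """Apply x = t to every other equation, keeping x = t itself."""
    rest = [(subst(l, x, t), subst(r, x, t)) for l, r in G if (l, r) != (x, t)]
    return rest + [(x, t)]

G = [("x", ("f", "y")), (("g", "x"), "z")]
print(eliminate(G, "x", ("f", "y")))
# [(('g', ('f', 'y')), 'z'), ('x', ('f', 'y'))] -- x now occurs only in x = f(y),
# so the rule's side condition fails and eliminate cannot fire on x again.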

Related to this rule, the text says that it must decrease the number of variables that occur more than once, because it eliminates all other mentions of x. But what if t contains the only mention of another variable y? Wouldn't then this variable occur more than once afterwards, and show that this is not decreasing the measure? Nomeata (talk) 07:15, 2 July 2022 (UTC)[reply]

The term t was already present before rule application (see the rule's left-hand side). So, if it contains the only occurrences of some y, they also were present before, and hence cannot prevent nvar from decreasing. - Jochen Burghardt (talk) 18:00, 3 July 2022 (UTC)[reply]

Citation gives funny dates

In citing the Paterson, Wegman paper, the article gives it a year of 1978, and therefore seems to give more credit to Martelli, Montanari (1976)[15]. But 1978 is the date of the journal version of the Paterson, Wegman paper. There was an earlier publication by Paterson and Wegman in STOC '76 (May 1976) that was almost the same as the journal version. See https://dl.acm.org/doi/10.1145/800113.803646 Note that in the Paterson, Wegman paper there is a description of an otherwise unpublished algorithm due to Robinson, which is significantly easier to describe than the one in this article. That algorithm is almost linear, but might be more appropriate for a Wikipedia article because of its simplicity— Preceding unsigned comment added by MarkWegman (talkcontribs) 01:38, 23 December 2021 (UTC)[reply]

Thanks for the STOC reference; I'll insert it when I have more time. For now, I just see the [note 7]. - Jochen Burghardt (talk) 09:58, 23 December 2021 (UTC)[reply]
 Done I have included your STOC reference, and adapted the historical remarks. Since I don't have access to any of the Paterson, Wegman papers, I can't check their description of Robinson's algorithm. @MarkWegman: maybe, you can publish the description here? - Jochen Burghardt (talk) 18:30, 28 December 2021 (UTC)[reply]
Thanks for the correction.
Sure, can you get to this page? https://doi.org/10.1016/0022-0000(78)90043-0 It's a scanned PDF. I find it easier to read than the Martelli and Montanari reference, but I'm hardly representative of most readers. You may find it easier too, though. It's not important, but the actual first version of that paper was an IBM RC in May '76, though that version had a minor error, which we eventually corrected (a test was placed in the wrong place in the algorithm, which affected correctness :-( but not performance).
Here's a LaTeX version of Robinson's faster algorithm. Its time is $n\alpha(n)$
\begin{lstlisting}[mathescape=true, caption=traditional unification algorithm]
Comment: tests the pair of nodes u, v for unifiability.
Begin
Set u $\equiv$ v.
While (there is a pair of nodes r, s with r $\equiv$ s
       but with the relation r$'$ $\equiv$ s$'$ unknown
       for a pair of corresponding sons r$'$, s$'$):
    set r$'$ $\equiv$ s$'$.
Test the relation `$\equiv$' for conditions (ii) and (iii) of validity.
If `$\equiv$' valid : print (`UNIFIABLE')
else print (`UNUNIFIABLE') and halt.
\end{lstlisting}
In the paper we say:
An equivalence relation on the nodes of a dag is valid if it has the following properties:
(i) if two function nodes are equivalent then their corresponding sons are equivalent in pairs;
(ii) each equivalence class is homogeneous, that is, it does not contain two nodes with distinct function symbols;
(iii) the equivalence classes may be partially ordered by the partial order on the dag.
You use the disjoint set union algorithm to maintain knowledge of equivalence. Actually way back when, I was taught the disjoint set union algorithm under the name "equivalence algorithm", so that's very natural. MarkWegman (talk) 14:55, 20 April 2022 (UTC)[reply]
Rereading our paper, I realized that Robinson and, independently, G. Huet and G. Kahn came up with the simplified algorithm and used the union-find algorithm to implement the equivalence algorithm. So if you are going to credit John Alan Robinson, you should also give credit to Huet and Kahn in the same way. MarkWegman (talk) 15:47, 20 April 2022 (UTC)[reply]
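
To illustrate the union-find ("equivalence algorithm") idea described above, here is a hedged Python sketch: it merges node classes, propagates merges to corresponding sons, tests homogeneity (condition (ii)) on the fly, and tests acyclicity (condition (iii)) at the end. The class and function names are invented for this sketch, and it is the simplified near-linear scheme, not the Paterson-Wegman linear algorithm itself:

class Node:
    def __init__(self, symbol=None, sons=()):
        self.symbol = symbol           # None marks a variable node
        self.sons = list(sons)
        self.parent = self             # union-find parent pointer

def find(n):                           # representative of n's equivalence class
    while n.parent is not n:
        n.parent = n.parent.parent     # path halving
        n = n.parent
    return n

def union(a, b):
    ra, rb = find(a), find(b)
    if ra is not rb:
        if ra.symbol is None:          # prefer a function node as representative
            ra, rb = rb, ra
        rb.parent = ra

def unifiable(u, v):
    work = [(u, v)]
    while work:
        a, b = work.pop()
        r, s = find(a), find(b)
        if r is s:
            continue
        if r.symbol is not None and s.symbol is not None:
            if r.symbol != s.symbol or len(r.sons) != len(s.sons):
                return False           # condition (ii) violated: inhomogeneous class
            work.extend(zip(r.sons, s.sons))   # condition (i): merge corresponding sons
        union(r, s)
    return acyclic(u, {})              # condition (iii): classes partially ordered

def acyclic(n, state):                 # depth-first cycle check on the quotient dag
    r = find(n)
    if state.get(id(r)) == "open":
        return False                   # cycle found: the occurs check fails
    if state.get(id(r)) == "done":
        return True
    state[id(r)] = "open"
    ok = all(acyclic(s, state) for s in r.sons)
    state[id(r)] = "done"
    return ok

x, y = Node(), Node()
t1 = Node("f", [x, x])
t2 = Node("f", [y, Node("g", [y])])
print(unifiable(t1, t2))   # False: x ~ y ~ g(y) forces a cyclic class (occurs-check failure)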

"Narrowing (computer science)" listed at Redirects for discussion

An editor has identified a potential problem with the redirect Narrowing (computer science) and has thus listed it for discussion. This discussion will occur at Wikipedia:Redirects for discussion/Log/2022 September 19#Narrowing (computer science) until a consensus is reached, and readers of this page are welcome to contribute to the discussion. Mdewman6 (talk) 21:24, 19 September 2022 (UTC)[reply]

Added the section 'Logic language representation'

Request to replace the inline reference with an entry in the Reference section -- it is far beyond my abilities. Thanks. February 2024 — Preceding unsigned comment added by Ddccc (talkcontribs) 03:48, 14 February 2024 (UTC)[reply]

What the heck?

Why was the new section 'Logic language representation' deleted? By whom? Why? It reports on a 2022 algorithm that improves the 1965 & 1978 versions AND introduces an OO version of predicate calculus --- allowing meta info to be added. — Preceding unsigned comment added by Ddccc (talkcontribs) 20:33, 14 February 2024 (UTC)[reply]

Courtesy ping to Jochen Burghardt, who removed the content in question. Primefac (talk) 21:21, 14 February 2024 (UTC)[reply]
I didn't remove it; rather, I moved the essence of it (in particular the reference) up to the start of the section talking about complexity (cf. my edit summary). There are literally thousands of papers about unification, so we can't devote an own subsection to each of them. An OO version of predicate calculus certainly doesn't belong to this article. BTW: I doubt that Robinson's algorithm is faster than e.g. Martelli's and Montanari's, even for small terms, but nevertheless, I kept that claim of Champeaux. - Jochen Burghardt (talk) 09:06, 15 February 2024 (UTC)[reply]

To Mr Jochen Burghardt

(1) Thx for adding my paper to the reference section. (2) I didn't remove it;

   Yes you removed a carefully crafted section - in vandalism fashion - in stealth mode/ without any notification -- twice I believe

(3) rather, I moved the essence of it

   that is your value statement - but not mine 

(4) (in particular the reference)

   Yes thx

(5) up to the start of the section talking about complexity

        (cf. my edit summary).
   Yes you did because you did not appreciate the section added. 

(6) There are literally thousands of papers about unification, so

   Hyperbolic, which weakens your stance

(7) we can't devote an own subsection to each of them.

   Who is 'we'? Sorry - you weaken even further

(8) An OO version of predicate calculus certainly doesn't belong

        to this article.  
   Fatal ex-cathedra assertion.
   Unification is >functional< code.  It turns out that an OO
   approach to the >representation< of the logic language makes a key
   difference - and I >believe< that it can be a 'game-changer' -
   google lingo - elsewhere in theorem proving; for example in the
   preprocessors for theorem provers that I wrote about 4 decades ago. 
   Hence my section has the dual purpose of stating the progress of the
   algorithm AND that it depends crucially on using a 'rich' OO
   version of the predicate calculus:
   --- Formulas/ terms/ etc. have a hashtable attribute containing
   variables (used by the parser) -- which helps with the occur check 
   --- Variables contain infrastructure for building substitutions
   and dealing with aliasing of variables exploiting the Union-Find
   algorithm
   Subclassing and/or adding attributes can enrich the semantics and
   'pragmatics' of expressions.

(9) I am 82 and dismayed that I need to spend time on you.

(10) If you think that you can capture my content better how about you rewrite my section and put it somewhere. Otherwise please restore my section. Please notify me at: temp AT ontooo DOT com

Here is my section: <<< Logic language representation

Logic operations typically work with a LISP-like representation for formulas, terms, constants and variables. The Robinson type algorithms exhibit exponential performance on them. The linear unification algorithms by Paterson-Wegman (and others) obtain their better performance by using a richer formalism. This entails first preprocessing their input and postprocessing their output when a substitution is produced. The overhead causes these linear algorithms to be slower on small sized arguments. Using a parser that generates an object-oriented C++/Java type representation can support (meta) annotations on formulas, terms, constants and variables. The linear DC algorithm uses 'object' variables with infrastructure for building substitutions. This makes this version also competitive on small sized input. [de Champeaux, D. Faster Linear Unification Algorithm. J Autom Reasoning 66, 845–860 (2022); https://doi.org/10.1007/s10817-022-09635-1] >>> — Preceding unsigned comment added by Ddccc (talkcontribs) 23:30, 15 February 2024 (UTC)[reply]

Dear Ddccc, thank you for your contributions. I will go through and answer your points:
"you removed a carefully crafted section - in vandalism fashion - in stealth mode/ without any notification -- twice i believe" This is the normal mode of editing on Wikipedia, a section is added in one edit and then it is deleted in the next. This is the art of Wikipedia - to not only make an edit, but to avoid having it reverted.
" 'There are literally thousands of papers about unification' Hyperbolic, which weakens your stance" - This was actually not hyperbolic; see Google Scholar, which says there are hundreds of thousands of papers.
"Who is 'we'?" - "we" is the community of Wikipedia editors. You can see from this table that simply by having made more than 10 edits, you are in the top 5% of Wikipedia users.
"'An OO version of predicate calculus certainly doesn't belong to this article.' Fatal ex-cathedra assertion." Well, Wikipedia does have pillars: in particular WP:NPOV and WP:UNDUE. If there is only one article and it has 0 citations then it is hard to justify giving it more space in the article than a sentence, when there are so many highly-cited articles on unification.
"It turns out that an OO approach to the >representation< of the logic language makes a key difference - and I >believe< that it can be a 'game-changer'" - and do you have sources for this? Independent of the original article ( WP:SECONDARY sources)?
"for example in the preprocessors for theorem provers that I wrote about 4 decades ago." - So then you write things? Are you in fact Dennis de Champeaux, as the d's and c's in your username might suggest? If so then it's a conflict-of-interest to cite your articles. Now per WP:SELFCITE it is allowed within reason (particularly note the "only if it is relevant" clause), but also per WP:CITESPAM an academic author trying to persistently add a citation to their work without going through the proper processes is one of the most common reasons for banning people.
"I am 82 and dismayed that I need to spend time on you." - You don't have to edit Wikipedia... it is your choice to spend time here. Many people have left. There is a page Wikipedia:Expert retention which discusses the fact that some (most?) experts get fed up pretty quickly.
Mathnerd314159 (talk) 02:45, 16 February 2024 (UTC)[reply]

For Mr Jochen Burghardt & Mathnerd314159 aka Karen Oliver

No idea why I have to deal now with two unpaid, unvetted, unaccountable volunteers.

I have rewritten the new section with the better title 'Infrastructure for linear versions'. This deals with linear unification having different algorithms that rely on different data structures. It is NOT specific for my DC algorithm. It will be added to the wiki soon.


I noticed that the wiki has now:

   According to Champeaux, Robinson's algorithm is faster on small
   inputs.

Please remove. I cover this in my new section. Also you are not entitled to make that fuzzy assertion.


A purpose of life is: help other people. Wikipedia is a great example but is vulnerable due to sociopaths. Its solution is using unpaid, unvetted volunteers, who create other problems. Here goes:

      • "you removed a carefully crafted section - in vandalism fashion
    - in stealth mode/ without any notification -- twice i believe"
    - This is the normal mode of editing on Wikipedia, a section is
    added in one edit and then it is deleted in the next. This is  
    the art of Wikipedia - to not only make an edit, but to avoid 
    having it reverted. 

This is NOT normal. Doubling down is even worse. I was totally confused by first seeing the addition and later it had disappeared. I checked the talk section: nothing. Spent way, way too much time tracking down the perpetrator: Jochen Burghardt. A consequence of not being trained, vetted, and not accountable.

      • " 'There are literally thousands of papers about unification'
   Hyperbolic, which weakens your stance" 
   - This was actually not hyperbolic, see Google Scholar which says
   there are hundreds of thousands of papers,

I did. Doubling down again. OK, show me ten papers about vanilla unification with correctness proof, code on GitHub & comparison/ performance tests against 3 competitors. OK, show me five papers about LINEAR unification with correctness proof, code on GitHub & comparison/ performance tests against 3 competitors. FOLLOW UP instead of shouting in the void.

      • "Who is 'we'?"
   - "we" is the community of Wikipedia editors. You can see from
   this table that simply by having made more than 10 edits, you are
   in the top 5% of Wikipedia users. 

Doubling down yet again. This is fatal. As an unvetted volunteer you can not assume a 'WE' authority stance. You are entitled only to '>>I<< believe that XYZ'. Just consider the connection between authority and fascism: https://en.wikipedia.org/wiki/Fascism

      • "'An OO version of predicate calculus certainly doesn't belong to
   this article.' Fatal ex-cathedra assertion." 
   Well Wikipedia does have pillars: in particular there is WP:NPOV
   and in particular WP:UNDUE. If there is only one article and it
   has 0 citations then it is hard to justify giving it more space 
   in the article than a sentence, when there are so many highly- 
   cited articles on unification.

My paper is NOT about vanilla unification. It is about linear unification. I had one sentence about MY paper while I pointed out in the new section that richer languages have been used and are likely relevant also for other logic operations. {BTW I still get notifications from ResearchGate about my papers from 40, 30, 20 years ago and from recent papers about quicksort.}

      • "for example in the preprocessors for theorem provers that I
    wrote about 4 decades ago." 
    - So then you write things? Are you in fact Dennis de Champeaux,
    as the d's and c's in your username might suggest? If so then
    it's a conflict-of-interest to cite your articles. Now per
    WP:SELFCITE it is allowed within reason (particularly note the
    "only if it is relevant" clause), but also per WP:CITESPAM an
    academic author trying to persistently add a citation to their
    work without going through the proper processes is one of the
    most common reasons for banning people.

Outrageous. Did you read my article? Did you see a self reference? I recommend, based on your persona - as shown, that you join law enforcement; you would likely thrive there.

      • "I am 82 and dismayed that I need to spend time on you."
   - You don't have to edit Wikipedia... it is your choice to spend
   time here. Many people have left. There is a page 
   Wikipedia:Expert retention which discusses the fact that some 
   (most?) experts get fed up pretty quickly.

Bizarre ... You admit being useless/ obnoxious/ authoritarian/ ...? Remember your purpose? Helping other people. I try. You two don't. I invited you two to IMPROVE my new section - coming soon. Nothing but wasting our time. How about you do some solid research instead. Start with:

  www.ontooo.com/AIAssessment2.pdf

which I presented Feb 1 at ACDSA.

[[ Let me add: the quicksort wiki is FATALLY wrong with its quadratic worst-case complexity. A 1997 paper by David Musser gave a fix to get O(N log N). The result? Java & Python use mergesort to avoid a quadratic blow-up. Stupid. Slow and not in-place. Ditto with qsort on my ubuntu box. ChatGPT & Bard/Gemini regurgitate the O(N**2) complexity. The disasters get repeated ... and repeated ... and ... There are 2 people mentioned in the qs wiki 12 and 11 times. They made a cyberspace tombstone. Someone needs to clean them out. ]]
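For context, here is a minimal sketch of the kind of fix Musser's 1997 paper describes (introsort: quicksort with a recursion-depth cutoff that falls back to in-place heapsort, bounding the worst case at O(N log N)). This is an illustrative reconstruction in Java, not code from the paper; the class and method names are invented for this sketch.

    // Illustrative sketch (not from Musser's paper): introsort = quicksort
    // with a depth limit; past the limit, fall back to in-place heapsort,
    // which caps the worst case at O(N log N).
    class Introsort {
        static void sort(int[] a) {
            int n = Math.max(a.length, 1);
            int depthLimit = 2 * (31 - Integer.numberOfLeadingZeros(n)); // ~2*log2(n)
            introsort(a, 0, a.length - 1, depthLimit);
        }

        static void introsort(int[] a, int lo, int hi, int depth) {
            if (lo >= hi) return;
            if (depth == 0) { heapsort(a, lo, hi); return; } // quicksort degenerating
            int p = partition(a, lo, hi);
            introsort(a, lo, p - 1, depth - 1);
            introsort(a, p + 1, hi, depth - 1);
        }

        static int partition(int[] a, int lo, int hi) { // Lomuto, last-element pivot
            int pivot = a[hi], i = lo;
            for (int j = lo; j < hi; j++)
                if (a[j] < pivot) swap(a, i++, j);
            swap(a, i, hi);
            return i;
        }

        static void heapsort(int[] a, int lo, int hi) { // max-heap sort of a[lo..hi]
            int n = hi - lo + 1;
            for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, lo, i, n);
            for (int end = n - 1; end > 0; end--) {
                swap(a, lo, lo + end);      // move current max to the end
                siftDown(a, lo, 0, end);
            }
        }

        static void siftDown(int[] a, int lo, int root, int n) {
            while (2 * root + 1 < n) {
                int child = 2 * root + 1;
                if (child + 1 < n && a[lo + child] < a[lo + child + 1]) child++;
                if (a[lo + root] >= a[lo + child]) return;
                swap(a, lo + root, lo + child);
                root = child;
            }
        }

        static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    }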

Go ahead ... do something useful instead.

added sig: user:Ddccc 20:36, 17 February 2024

"two unpaid, unvetted, unaccountable volunteers." "A purpose of life is: help other people." "you not entitled to make that fuzzy assertion." "Spend way, way to much time tracking down the perpetrator" "This is NOT normal." "As an unvetted volunteer you can not assume a 'WE' authority stance. You are entitled only to '>>I<< believe that XYZ'. Just consider the connection between authority and fascism: https://en.wikipedia.org/wiki/Fascism" "Bizarre ... You admit being useless/ obnoxious/ authoritarian/ ...? Remember your purpose? Helping other people." "www.ontooo.com/AIAssessment2.pdf which I presented Feb 1 at ACDSA."

This is all very interesting but as it is out of scope of the article content I have replied on your talk page.

I have rewritten the new section with the better title 'Infrastructure for linear versions'. It deals with the fact that the different linear unification algorithms rely on different data structures. It is NOT specific to my DC algorithm. It will be added to the wiki soon.

I would suggest posting it as a section on the talk page here (click "add topic" in the toolbar at the top), just because given your previous writing style it will probably get reverted if you add it unedited to the main page.

Show me ten papers about unification with correctness proof, code on GitHub & comparison/ performance tests against 3 competitors.

There is the efficient higher-order unification paper by Vukmirović, already in the article. And an older comparison of first-order algorithms here. I wouldn't say any of them are "complete" in the sense of comparing every published algorithm. Such a comparison would be useful but I guess is too tedious for anyone to have done.

My paper is NOT about vanilla unification. It is about linear unification.

I don't really see this distinction in the literature. All the algorithms for first-order unification are discussed equally. Whether they are linear or exponential is a matter of analysis, and if you analyze them empirically you will come to different conclusions on different datasets (and if you analyze them theoretically you will reach a third set of conclusions).
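To make the discussion concrete, here is a minimal Robinson-style recursive unifier in Java (an illustrative sketch only, assuming a toy term representation; it is none of the PW, BS, or DC algorithms from the papers under discussion). Its occurs check and variable walking over a plain tree representation are exactly what can blow up exponentially on adversarial inputs, while the same logical algorithm behaves fine on typical datasets, which is the point about empirical versus theoretical analysis.

    import java.util.*;

    // Toy first-order terms: a variable, or a function symbol applied to
    // arguments (constants are 0-ary functions). Illustrative sketch only.
    sealed interface Term permits Var, Fun {}
    record Var(String name) implements Term {}
    record Fun(String symbol, List<Term> args) implements Term {}

    class Unify {
        // Follow variable bindings in the substitution to a representative.
        static Term walk(Term t, Map<Var, Term> subst) {
            while (t instanceof Var v && subst.containsKey(v)) t = subst.get(v);
            return t;
        }

        // Occurs check: does variable v appear in t under subst?
        static boolean occurs(Var v, Term t, Map<Var, Term> subst) {
            t = walk(t, subst);
            if (t.equals(v)) return true;
            if (t instanceof Fun f)
                for (Term arg : f.args())
                    if (occurs(v, arg, subst)) return true;
            return false;
        }

        // Returns true and extends subst on success; false on clash/occurs failure.
        static boolean unify(Term s, Term t, Map<Var, Term> subst) {
            s = walk(s, subst);
            t = walk(t, subst);
            if (s.equals(t)) return true;
            if (s instanceof Var v) {
                if (occurs(v, t, subst)) return false;  // e.g. x = f(x) fails
                subst.put(v, t);
                return true;
            }
            if (t instanceof Var) return unify(t, s, subst);
            Fun f = (Fun) s, g = (Fun) t;
            if (!f.symbol().equals(g.symbol()) || f.args().size() != g.args().size())
                return false;                           // symbol or arity clash
            for (int i = 0; i < f.args().size(); i++)
                if (!unify(f.args().get(i), g.args().get(i), subst))
                    return false;
            return true;
        }
    }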

I had one sentence about MY paper while I pointed out in the new section that richer languages have been used and are likely relevant also for other logic operations.

I don't think such an observation is correct. One can implement your Java algorithms in C or C++, and while the code might be longer, it will probably also be faster than the Java version. Your discussion of an "object-oriented" approach in the "faster" paper is really just a disguised method to use pointers instead of a hash table. In Java, the only way to use pointers is to make a class and instantiate an object of that class, but this is not the case in other languages. Mathnerd314159 (talk) 22:14, 18 February 2024 (UTC)[reply]

Mathnerd314159 aka Karen Oliver


Dear KO, Thx for removing the floating assertion.

Please remove from the talk page your and this version of: <<< "two unpaid, unvetted, unaccountable volunteers." "A purpose of life is: help other people." "you not entitled to make that fuzzy assertion." "Spend way, way to much time tracking down the perpetrator" "This is NOT normal." "As an unvetted volunteer you can not assume a 'WE' authority stance. You are entitled only to '>>I<< believe that XYZ'. Just consider the connection between authority and fascism: https://en.wikipedia.org/wiki/Fascism" "Bizarre ... You admit being useless/ obnoxious/ authoritarian/ ...? Remember your purpose? Helping other people." "www.ontooo.com/AIAssessment2.pdf which I presented Feb 1 at ACDSA." >>> It has no follow-up here.

<<< This is all very interesting but as it is out of scope of the article content I have replied on your talk page. >>> Sorry, you may have time on your hands. I don't.

<<<

       I would say maybe to post it as a section on the talk page
       here (click "add topic" in the toolbar on the top), just
       because given your previous writing style it will probably 
       get reverted if you add it unedited to the main page. 

>>> What makes you think that I am not willing to change an addition to the wiki if improvements/critique are provided? On what authority is an addition deleted? Who are you anyway? Are you helping other people? Have you published 500-page books?

<<<

           Show me ten papers about unification with correctness
           proof, code on GitHub & comparison/ performance tests
           against 3 competitors.

       There is the efficient higher-order unification paper by
       Vukmirović, already in the article. And an older comparison of
       first-order algorithms here. I wouldn't say any of them are
       "complete" in the sense of comparing every published algorithm.
       Such a comparison would be useful but I guess is too tedious
       for anyone to have done.

>>> Remember the hyperbolic 'there are 1000 unification papers'? You don't show me ten papers about (fol) unification algorithms. You don't show me five papers about (fol) linear unification algorithms. Instead you refer to a paper about >>higher-order unification<<. These algorithms are totally different from fol algorithms.

<<<

           My paper is NOT about vanilla unification. It is about
           linear unification. 
      I don't really see this distinction in the literature. All the
      algorithms for first-order unification are discussed
      equally. 

>>> Indeed, the wiki is ordered along another dimension and currently ignores the complexity (exponential vs linear) of the different fol versions. There is a treatment of the Robinson-type fol version by Montanari, but nothing about the linear versions.

<<<

     Whether they are linear or exponential is a matter of analysis,
     and if you analyze them empirically you will come to different
     conclusions on different datasets (and if you analyze them
     theoretically you will reach a third set of conclusions).

>>> Sorry, this does not make any sense. I doubt that you have a grip on computational complexity. Here is an analogy:

   mergesort:          time N log N,  space 2N
   insertionsort:      time N**2,     space N
   quicksort 1: worst  time N**2,     space N  (average N log N)
   quicksort 2:        time N log N,  space N

Homework: What is the time ratio on a 1024-element array for insertionsort vs mergesort? Homework: What algorithm should be used when the array is 512M and memory is 700M? Homework: Can the empirical results differ from the theoretical complexity analysis when the same language is used? Quicksort is the fastest, but the quicksort wiki is f-ked-up.

Robinson time performance on arguments of size 15-18:

   Size   Timing
    15      0081
    16      0103
    17      0483
    18      1775

The timings of the linear versions are proportional to the Size argument.

<<<

         I had one sentence about MY paper while I pointed out in 
         the new section that richer languages have been used and 
         are likely relevant also for other logic operations. 
    I don't think such an observation is correct. One can implement
    your Java algorithms in C or C++, and while the code might be
    longer, it will probably also be faster than the Java
    version. Your discussion of an "object-oriented" approach in the
    "faster" paper is really just a disguised method to use pointers
    instead of a hash table. In Java, the only way to use pointers 
    is to make a class and instantiate an object of that class, but 
     this is not the case in other languages.

>>> Sorry, things are getting worse. It is not even wrong. Suppose your parent needs a tool that reminds them on their laptop to take medication X at time Y. Would you code that in assembly language? Fortran? C? C#? Java servlets? Are the choices the 'same' from a theoretical perspective? Are the choices the same from a pragmatic perspective? Why do you select a particular language?

The fol algorithms in the wiki were developed pre-OO, and their descriptions >suggest< a LISP style (not code) for the data structures; this is detrimental - exponential complexity - for the Robinson-type versions. PW & BS used similar (richer) structures to get their linear versions - but these are still not competitive on small-sized arguments due to input and output overheads.
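For readers following along: the standard textbook family of inputs showing this exponential behaviour unifies p(x1,...,xn) with p(f(x0,x0), f(x1,x1), ..., f(x_{n-1},x_{n-1})). The mgu binds each x_i to a term whose fully expanded tree doubles in size at every step, so any representation that writes substitutions out as trees (LISP-style) is exponential, while DAG- or union-find-based representations stay linear. A tiny illustrative computation of the sizes:

    // Illustrative only: size of the fully expanded binding of x_i in the
    // classic worst case, where x_i is bound to f(x_{i-1}, x_{i-1}).
    class ExponentialMgu {
        public static void main(String[] args) {
            long size = 1;                    // |x0| = 1 (a bare variable)
            for (int i = 1; i <= 20; i++) {
                size = 2 * size + 1;          // |x_i| = 2*|x_{i-1}| + 1
                System.out.println("x" + i + " expanded tree size = " + size);
            }
        }
    }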

Being familiar with OO (check Google for my books) [and papers with 9000+ reads], having written MBs of Java code, and having an OO version of fol available, I was able to exploit OO features to obtain a better/competitive fol unification version. Could this be done in assembly language too? Sure, but ...

There is a feedback loop between better languages and better applications. That is the reason for:

         I had one sentence about MY paper while I pointed out in
         the new section that richer languages have been used and 
         are likely relevant also for other logic operations.

You get it this time?

[[ I want to add less than 10 pages to the wiki ... and notice how much trouble you create with your lack of ... of experience. You may be smart, but it does not work here. Trying to help me so that I can help improve the wiki? Doubt you can ... No relevant credentials anywhere. Please check my language because English is my 3rd one. ]]

[[ BTW The talk page suggests taking out the Prolog stuff. Agree. Prolog never made sense once OO came on the scene (via Smalltalk). OO has added additional ways to decompose 'engineering' challenges: reversing the order of <functionality & substrate> and providing architectures with mini-ontologies, subconcepts, inheritance, encapsulation, etc. {Hoare logic breaks with the raw-wild-west semantics of pointer assignment. Encapsulation geofences the side-effects.} ]]

Next moves:

(A) The 'Further reading' section has an entry from Franz Baader and Wayne Snyder (2001). How about moving it to the reference section? I want to use it in my new section, and add it to the section 'Unification algorithms'. BTW: I earlier found a defect in the PW version (published) and recently a defect in the BS version.

(B) Please add my linear unification paper (#15 in the references) to the 'Unification algorithms' section after the 'Baader-Snyder' entry of (A).

(C) Here the text for the new section after:

  'Proof of termination'

and before:

  'Examples of syntactic unification of first-order terms'

Please propose improvements as you see fit.

<<< Infrastructure for (linear) versions

Logic operations typically work with a LISP-like representation for formulas, terms, constants and variables. The Robinson-type algorithms exhibit exponential performance on them. The linear unification algorithms by Paterson-Wegman and by Baader-Snyder obtain their better performance - on larger-sized arguments - by using richer formalisms. This requires preprocessing their inputs and postprocessing their output when a substitution is produced. Another approach relies on a 'fat' version of the predicate calculus using an OO architecture for formulas, terms, constants, variables (and axioms, definitions, lemmas, theorems, etc.). A parser can generate these expressions with optional support for (meta) annotations. An example is capturing in a hash table the variables in a formula (and term), which can be used to improve the occur check.

The linear DC algorithm in [de Champeaux] uses 'object' variables with infrastructure for building substitutions and for using the Union-Find algorithm to deal with variable aliasing and with the occur check +REF. This makes this version competitive also on small-sized inputs. Using such a 'fat' version of the predicate calculus likely provides opportunities for other unification versions and for other deductive operations.

Refs: [de Champeaux, D. Faster Linear Unification Algorithm. J. Autom. Reasoning 66, 845–860 (2022); https://doi.org/10.1007/s10817-022-09635-1] 2nd ref already available: [Baader-Snyder] Baader, F. & W. Snyder, Unification Theory, Chapter 8 in Handbook of Automated Reasoning, edited by Alan Robinson and Andrei Voronkov, Elsevier Science Publishers B.V., 2001. https://www.cs.bu.edu/fac/snyder/publications/UnifChapter.pdf [My paper contains a reference to my GitHub unification repos, which has an implementation of a 'fat' predicate calculus.]
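As a hedge against misunderstanding, here is a minimal Java sketch of the union-find idea for variable aliasing mentioned above, invented for this discussion; it is not code from the de Champeaux paper or its GitHub repos. Aliased variables (x = y) are merged into one equivalence class whose representative may carry at most one non-variable binding.

    // Illustrative union-find over variable objects (names invented here).
    class VarNode {
        final String name;
        VarNode parent = null;   // union-find parent; null = class representative
        Object binding = null;   // non-variable term bound to this class, if any

        VarNode(String name) { this.name = name; }

        // Find the class representative, with path compression.
        VarNode find() {
            if (parent == null) return this;
            parent = parent.find();
            return parent;
        }

        // Alias this variable with another, as for the equation x = y.
        static void union(VarNode a, VarNode b) {
            VarNode ra = a.find(), rb = b.find();
            if (ra != rb) ra.parent = rb;  // a full version would union by rank
        }
    }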

Sun Mar 03 15:02:55 2024

Informatics has gone through the following sequence: Turing machines, hand-coded hardware, assemblers, compilers for Fortran, machine instructions for multitasking, recursion for Algol, C, Cobol, OO languages, C++, Java, concurrency & parallelism, and these days ontologies. Each time other types of applications became possible. Unix relied on functional code; GUIs relied on OO. As mentioned earlier: there is a feedback loop between applications and better languages/metaphors.

And what has been added to this wiki by an unvetted, overconfident volunteer (in response to what I want to add)??:

    Factors such as programming language, programming style, input
    size, and whether the equations are unifiable or non-unifiable
    can affect the performance of a unification algorithm.[15] 

Certified low-level trivia garbage. Get out of the way. My linear unification algorithm relies on the availability of an object-oriented version of the predicate calculus (pc) and CANNOT be done effectively on a ... Turing machine.

You get it now? Plus I argue that an enhanced pc that supports meta annotations most likely facilitates future logic reasoning and planning algorithms.

Do NOT dare damage my subsequent additions. Ugh

For Mr Jochen Burghardt & Mathnerd314159 aka Karen Oliver


An earlier addition to this wiki was removed twice - vandalism style - by Jochen Burghardt without any notification on the talk page. Since then I have explained here, page after page, what is behind my linear unification paper (faster than all the competition and also faster than the Robinson versions on small-sized inputs, where the other linear versions are slower). Karen Oliver twice added unacceptable sentences about my paper to the wiki, with references to me. My 3rd contribution - carefully crafted - is again ripped out by Mr Jochen Burghardt - totally ignoring my explanations here and without follow-up on my clarifications. This is not acceptable.

The first sentence of this wiki:

     In logic and computer science, unification is an algorithmic process ...

is already wrong. It is not a process. Unification is an algorithm.


— Preceding unsigned comment added by Ddccc (talkcontribs) 21:10, 28 February 2024 (UTC)[reply] 
Well, I warned you... "given your previous writing style, it will probably get reverted if you add it unedited to the main page." And indeed, this was what happened. Jochen is pretty quick with the reverts, but even in the absence of Jochen and myself, most likely someone else would have eventually done the same, seeing as it is an unsourced section with a quite different writing style from the rest of the page. That is just how Wikipedia works - even the most finely crafted essay on quantum mechanics, if lacking sources, would get deleted as original research. I guess you find this unacceptable, like many experts who have attempted to edit Wikipedia, but it is also one of Wikipedia's core policies and I don't think it is changing anytime soon.
Regarding "unification is an algorithm", there are various sources which define it as a process, e.g. [1]. Since higher-order unification is undecidable, there is in fact no algorithm for doing higher-order unification, so it can't be an algorithm, just like the halting problem is not an algorithm. Pedantically we could distinguish "a unification problem", "a unification algorithm", and "the process of executing a unification algorithm on a unification problem", but this doesn't reflect how people write about unification in practice. Mathnerd314159 (talk) 22:31, 7 March 2024 (UTC)[reply]
@Ddccc: Your recent addition might be acceptable in an article about implementations of unsorted syntactic first-order unification. It is far too detailed for unification (computer science), where we can present only the major improvements. Since your improvement (which I still don't fully understand from your article and discussion contributions; I'd like to read your paper, but didn't find an open-access version of it) didn't affect the computational complexity, I removed your addition; see my edit summary. - Jochen Burghardt (talk) 07:54, 8 March 2024 (UTC)[reply]
For accessing the paper, I found a GitHub link which is in the references. It is also available from the Wikipedia Library, in ProQuest. Mathnerd314159 (talk) 19:21, 8 March 2024 (UTC)[reply]
The more recent addition is more manageable. I did expand it a little with some lines from the paper ("competitive", avoids DAGs). Mathnerd314159 (talk) 19:56, 8 March 2024 (UTC)[reply]

For Mr Jochen Burghardt & Mathnerd314159 aka Karen Oliver


Not done yet. Regarding the 1st sentence:

     In logic and computer science, unification is an algorithmic
     process of solving equations between symbolic expressions, each
     of the form Left-hand side = Right-hand side. 

==> This is all seriously WRONG. The 1st sentence should convey what the >purpose< of unification is; for example, a special case of pattern matching:

    Are two logic terms equal, can they be made equal through a
    substitution, or are they unequal?

Solving equations is a >>side<< story of the >>design/solution<< realm, concocted by Montanari. (And it is not linear at all.) [ This reminds me of programmers who start coding instead of: ascertaining what the purpose is, subsequently making a design, validating it against the reqs, and only then starting to code, etc. ]

Try something else like:

  The Unification operation investigates whether two pc terms are
  equal or can be made equal through a substitution obtained by
  assigning values to variables in the two terms.  The operation
  terminates in fol with a unique 'best' substitution or with the
  conclusion that unification fails.  Higher-order logic unification
  can also fail or can produce more than one unifier.

---

Regarding:

  ... (which I still don't fully understand from your article and
  discussion contributions; ...

Self-knowledge is the best. Excellent! Thank you!! ---

Regarding the current additions:

    ... is slower than the Robinson version on small sized inputs.

==>

    ... is slower than the Robinson version on small-sized inputs due
    to the overhead of preprocessing the inputs and postprocessing
    the output.

---

    de Champeaux (2022) is of linear complexity in the input size
    but is also competitive with the Robinson algorithm on small size
    inputs. Champeaux attributes the speedup to using object-oriented
    programming techniques to avoid preprocessing and construction of
    a DAG.

==> No, no >programming technique<. It relies on a different (OO) representation of the pc. Hence:

    De Champeaux (2022) is also of linear complexity while remaining
    competitive with the Robinson algorithm on small-sized inputs. The
    speedup is obtained by using an object-oriented version of the
    predicate calculus. This avoids the need for pre- and
    post-processing.  Variable >objects< become responsible for
    creating a substitution and for dealing with aliasing. He claims
    that the ability to add functionality to predicate calculus
    >>objects<< provides opportunities for other logic operations.

Alternatively, this paragraph is shortened while the new section about logic representation - see above - is added. Just consider, for example, the problem of dealing with time/situation as in McCarthy's situation calculus by adding a parameter to a term. Instead, the term can be enriched >as an object< with a time/situation attribute. Etc. ...
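A minimal sketch of what such an enriched term object could look like in Java (names and fields invented for illustration; not from the cited papers): instead of threading a situation parameter through every term, the term object carries it, together with other meta annotations such as its cached free variables for a fast occur check.

    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical 'fat' term: meta information lives on the object
    // rather than in extra argument positions.
    record AnnotatedTerm(String symbol,
                         List<AnnotatedTerm> args,
                         Set<String> freeVars,        // cached for a fast occur check
                         Map<String, Object> meta) {} // e.g. meta.put("situation", s0)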


[[ BTW The PW version does NOT use the Union-Find algorithm -- quite remarkable ... ]]

[[ Reminder:

   Let me add: the quicksort wiki is FATALLY wrong with its quadratic
   worst-case complexity. A 1997 paper by David Musser gave a fix to
   get O(N log N).  The result? Java & Python use mergesort to avoid a
   quadratic blow-up.  Stupid.  Slow and not in-place.  Ditto with
   qsort on my ubuntu box.  ChatGPT & Bard/Gemini regurgitate the
   O(N**2) complexity.  The disasters get repeated ... and repeated
   ... and ...  There are 2 people mentioned in the qs wiki 12 and 11
   times.  They made a cyberspace tombstone.  Someone needs to clean
   them out.  They are Sedgewick and Bentley.  They have blocked my
   papers for a decade already.

Please apply your expertise to fix this DISASTER. ]]

Prolog: delenda est. — Preceding unsigned comment added by Ddccc (talkcontribs) 17:17, 14 March 2024 (UTC)[reply]

Regarding the lead, the article has to be at least somewhat accessible to non-technical readers. "Solving equations" is understandable to those with even a basic understanding of mathematics in a way that "investigating substitutions" is not. I think the example conveys more information to such a reader than your sentence "can be made equal through a substitution obtained by assigning values to variables in the two terms." But you are right that the article should explain the differences among forms of unification (first-order, higher-order, E-unification) in the lead, rather than just their application areas, so I have revised that part.
Regarding your other suggestions, they seem great. I've incorporated them almost exactly as you wrote them. It seems like you're really getting the hang of writing in Wikipedia's style, which is fantastic to see. Regarding the "Logic Representation" section, I don't see it fitting in this article, as this article is mainly about the concepts of unification rather than their implementation. There is an existing article on Knowledge representation and reasoning, but perhaps a more focused article on representations used in implementations of logic provers could be written, perhaps with research on the representations AI models learn if more material is needed. I certainly have wondered about the implementation of Z3; most papers treat it as a black box ("we translated the problem to SMTLIB and fed it to Z3"), but I think there is a significant amount of literature around the techniques used. I don't know if you are interested, as creating new articles from scratch is a lot of work, but if you are, I would suggest creating a draft article in your user space, like User:Ddccc/Logic reasoning, or else via the AfC draft process. Mathnerd314159 (talk) 04:41, 18 March 2024 (UTC)[reply]
Oh, and regarding quicksort, that should go on Talk:Quicksort, each talk page is intended for discussion only of its accompanying article. Mathnerd314159 (talk) 04:43, 18 March 2024 (UTC)[reply]

Sun Mar 31 16:27:45 2024

For Mr Jochen Burghardt & Mathnerd314159 aka Karen Oliver


Again the 1st paragraph is TERRIBLE. Referring to solving equations is wrong; that is a metaphor used by Montanari to sketch a (lousy) design. Also using the notation:

    { cons(x,cons(x,nil)) = cons(2,y) } 

is crazy. We are not doing LISP here. Here is a 2nd alternative: <<<

  The Unification operation investigates whether two pc terms are
  equal or can be made equal through a substitution.
  For example, unification of:
     P(a) and P(a) succeeds
     P(a) and Q(a) fails
     P(a) and P(b) fails
     P(a) and P(x) succeeds with the substitution x->a
  The operation terminates in fol with a unique 'best' substitution
  or with the conclusion that unification fails.  Higher-order logic
  unification can also fail or can produce one or more unifiers.

>>> --- Take your ear-plugs out and listen to uncle Dennis, who consulted for Fortune 500 companies to keep them from skimping on their SW requirements homework. Ddccc (talk) 23:40, 31 March 2024 (UTC)[reply]
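For concreteness, the four example cases above can be run through the toy Robinson-style sketch given earlier in this discussion (hypothetical demo code, not from any cited source):

    // Demo of the four lead examples with the Term/Var/Fun/Unify sketch above.
    class LeadExamples {
        static Term cst(String c) { return new Fun(c, java.util.List.of()); }
        static Term app(String p, Term arg) { return new Fun(p, java.util.List.of(arg)); }

        public static void main(String[] args) {
            Term a = cst("a"), b = cst("b");
            Var x = new Var("x");
            System.out.println(Unify.unify(app("P", a), app("P", a), new java.util.HashMap<>())); // true
            System.out.println(Unify.unify(app("P", a), app("Q", a), new java.util.HashMap<>())); // false
            System.out.println(Unify.unify(app("P", a), app("P", b), new java.util.HashMap<>())); // false
            var s = new java.util.HashMap<Var, Term>();
            System.out.println(Unify.unify(app("P", a), app("P", x), s) + " " + s);  // true, with x bound to a
        }
    }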

You say "referring to solving equations is wrong", but I'm not clear on how it is wrong. Doesn't unification start with a set of equations? And doesn't it produce a solution set? It seems quite appropriate to describe the process as solving. Indeed, reviewing equation solving the description is identical other than that "equation solving" is generally over the real numbers and limited to one equation.
Meanwhile, saying "pc terms" ("predicate calculus" terms as used in first-order logic?) is misleading as not all unification problems are first-order logic.
I guess you are right about unification not being associated with Lisp; I have replaced it with a Prolog-based example. I am not sure though about going into detail with multiple different examples in the lead - that seems more suitable for a section. The "Formal definition" section already has a fair number of examples. Mathnerd314159 (talk) 17:34, 1 April 2024 (UTC)[reply]
Imo, the lead example should (1) illustrate first-order syntactic unification, (2) not be too trivial, and (3) be understandable without much explanation. In order to achieve (2), we need an uninterpreted function of arity ≥2. Due to the 2 occurrences of x in the cons example, its solution is not obvious at first glance, so it gives the impression that unification might not be a trivial problem. As for (3): for everybody who has heard of Lisp, it is clear that cons and nil are what we call uninterpreted functions, without the need to be familiar with the latter notion. (I guess everybody who knows the notion of uninterpreted function also knows cons and nil from Lisp, but not vice versa.) Giving an example using Lisp function names does not imply that unification is just a Lisp issue. For these reasons, I'd still prefer the former version. If there are other, similarly popular, uninterpreted functions around (maybe nowadays in languages like Haskell? Or in any other application area, not only in programming?), I have no objection to using them instead of cons, nil. - Jochen Burghardt (talk) 18:14, 2 April 2024 (UTC)[reply]
I think the most common place people will encounter unification is Haskell or Prolog (I say Haskell but also include strict FP languages like Ocaml). The example is a Prolog example, from https://www.dai.ed.ac.uk/groups/ssp/bookpages/quickprolog/node12.html (I would have cited it, but there is nothing to verify for an example.) It is non-trivial because it shows that unification is bidirectional. I guess I could have used "foo" like the source; I have no objection to changing it if you think it's clearer. I think though it is important to mention uninterpreted functions in the lead; the notion of syntactic equivalence doesn't make much sense without it.
With Haskell, there is plenty of unification going on in H-M, but it is a bit complex to make the unification visible - it generally only shows up when something goes wrong. For example, "s" + True says "Couldn't match type ‘Bool’ with ‘[Char]’", which is a failure of unification, but an explanation of how unification is involved would probably take up the majority of the lead. An arity 2+ example is actually pretty rare in Haskell, as most type constructors only take one parameter. Most problems are trivial, like a ~ [b]. Mathnerd314159 (talk) 01:54, 3 April 2024 (UTC)[reply]

Mon Apr 01 12:17:22 2024

For Mr Jochen Burghardt & Mathnerd314159 aka Karen Oliver


The follow-up is hopeless.

     You say "referring to solving equations is wrong", but I'm not
     clear on how it is wrong.

Because you are ignorant.

      Doesn't unification start with a set of equations? 

No. That is just a metaphor. Unification is a left-to-right traversal over the pair that either decides failure or constructs substitutions. Ddccc (talk) 19:30, 1 April 2024 (UTC)[reply]

     And doesn't it produce a solution set? It seems 
     quite appropriate to describe the process as solving. Indeed,
     reviewing equation solving the description is identical other
     than that "equation solving" is generally over the real numbers
     and limited to one equation. 

Give up - go away please.

    Meanwhile, saying "pc terms" ("predicate calculus" terms as used
    in first-order logic?) is misleading as not all unification
    problems are first-order logic. 

pc stands for predicate calculus, which can be any order.

Give up - go away please.

   I guess you are right though about unification not being
   associated with Lisp, I have replaced it with a Prolog-based
   example. 

Even worse. Prolog delenda est. Give up - go away please.

    I am not sure though about going into detail with multiple
    different examples in the lead - that seems more suitable for a
    section. 

Give up.

[Previous text added by Ddccc (talk · contribs) 1 Apr 2024]

I hesitate to become involved in this unnecessarily unpleasant discussion. However, it seems that some of the problem may be due to the article's overly general approach, treating multiple kinds of logic simultaneously. A much better approach would have been to follow the lead of the survey by Kevin Knight, focussing on the simplest case of first-order logic before dealing with more exotic and less familiar logics. Here is a link to the paper: [1] If this approach were taken, then Knight's characterisation of unification would cover the most common and familiar case and would be more faithful to the original notion of unification:
"The unification problem is most often stated in the following context: Given two terms of logic built up from function symbols, variables, and constants, is there a substitution of terms for variables that will make the two terms identical?"
The interpretation of unification as equation solving, which is the way unification is defined in this article, does not come up in Knight's 32-page survey until page 21. Robert Kowalski (talk) 10:31, 2 April 2024 (UTC)[reply]
I found this link to Kevin Knight (Mar 1989) Unification: A Multidisciplinary Survey to be more accessible for me. --Ancheta Wis   (talk | contribs) 02:50, 3 April 2024 (UTC)[reply]

References

  1. ^ Kevin Knight (Mar 1989). "Unification: A Multidisciplinary Survey" (PDF). ACM Computing Surveys. 21 (1): 93–124. CiteSeerX 10.1.1.64.8967. doi:10.1145/62029.62030. S2CID 14619034.

 

@Ddccc: Please do adhere to Wikipedia:Civility in your discussion style, and avoid personal attacks and phrases like "you are ignorant" or "go away please". - Jochen Burghardt (talk) 18:22, 2 April 2024 (UTC)[reply]