$\begingroup$

The completeness and compactness theorems do not hold for full second-order and higher-order logic. What makes these logics incompatible with the theorems? Is it somehow related to Russell's paradox or diagonalization issues?

Thanks.

$\endgroup$
  • $\begingroup$ My intuition on the matter (which may be wrong) is that we put an extra condition on the semantics: we insist that the class of "all first-order predicates" be interpreted as the set-theoretic power set of the domain, and that is the source of most or all of our problems. $\endgroup$ Commented May 7, 2012 at 1:10

4 Answers

$\begingroup$

For the Compactness Theorem:

Take second-order Peano arithmetic, that is, the original Peano arithmetic with the full second-order version of induction. The language of Peano arithmetic has, in particular, a constant symbol $0$ and a unary function symbol $S$ for the successor function. It is not hard to show that second-order Peano arithmetic is categorical: any two models of it are isomorphic.

Add to the language a new constant symbol $B$. Add to second-order Peano arithmetic the infinitely many special axioms $\lnot(B=0)$, $\lnot(B=S(0))$, $\lnot(B=S(S(0)))$, $\lnot(B=S(S(S(0))))$, and so on forever.

Any finite subset $T_{\text{fin}}$ of this theory has a model: just interpret $B$ as a natural number larger than any of the natural numbers "mentioned" in the finitely many special axioms that occur in $T_{\text{fin}}$. But the full theory does not have a model. By categoricity, any model of it would be isomorphic to the natural numbers, so $B$ would have to denote some natural number $n$, contradicting the axiom $\lnot(B=S^n(0))$. Thus the Compactness Theorem is violated.
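The construction can be summarized in one display:

```latex
% Second-order PA together with infinitely many axioms about the new constant B:
T \;=\; \mathsf{PA}_2 \,\cup\, \{\, \lnot(B = S^n(0)) \;:\; n \in \mathbb{N} \,\}
% Each finite subset of T is satisfiable (interpret B as a large enough
% natural number), but T itself has no model.
```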

$\endgroup$
  • $\begingroup$ More abstractly, the compactness theorem implies that any theory with an infinite model has infinite models of arbitrarily large cardinalities (i.e., a form of the upwards Löwenheim–Skolem theorem). This fails for second-order logic, e.g. because it can characterize various infinite structures up to isomorphism (such as arithmetic), or because it can express constraints of the form “the universe has size at most $\kappa$” for various infinite cardinals $\kappa$. $\endgroup$ Commented Nov 24 at 12:15
$\begingroup$

The reason is that the sets of valid formulas of these systems are not c.e. (computably enumerable), and therefore the systems do not have effective proof systems, i.e. there is no c.e. set of axioms that would formalize them.

Here I am using "proof system" in the usual sense: a decidable set of axioms and rules, where a proof is a finite structure built from them; in addition, given a string and a formula, we can decide whether the string is a proof of the formula. If we weaken these requirements we can have a proof system and a completeness theorem. For example, we can add the $\omega$-rule, and then everything true in second-order arithmetic is derivable. We can combine these ideas with computability and consider only those proof objects which have finite representations, e.g. a Turing machine that gives their bits. There is a considerable amount of literature in proof theory on these topics.
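To make the link between "effective proof system" and "c.e. set of theorems" concrete, here is a toy sketch in Python. The proof system is invented purely for illustration (formulas are numerals $0, S0, SS0, \dots$; the only axiom is $0$; the only rule passes from $t$ to $SSt$). Because proof-checking is decidable and proofs are finite sequences, enumerating all candidate proofs enumerates the theorems, which is exactly what fails for full second-order validity:

```python
from itertools import count, product

# Toy "proof system": formulas are the numerals "0", "S0", "SS0", ...
# Axiom: "0".  Rule: from t infer "SS" + t.  (Theorems = even numerals.)

def is_axiom(formula):
    return formula == "0"

def follows(premise, conclusion):
    # The single inference rule: prepend two successors.
    return conclusion == "SS" + premise

def is_proof(lines):
    # Decidable proof-checking: every line is an axiom or follows
    # from some earlier line by the rule.
    for i, f in enumerate(lines):
        if not (is_axiom(f) or any(follows(g, f) for g in lines[:i])):
            return False
    return bool(lines)

def theorems():
    """Enumerate theorems by brute-force search over all finite proofs.

    The theorem set is c.e.: it is the set of last lines of valid
    proofs, and we can systematically generate every candidate proof.
    """
    for n in count(1):
        numerals = ["S" * k + "0" for k in range(n)]
        for candidate in product(numerals, repeat=n):
            if is_proof(list(candidate)):
                yield candidate[-1]
```

The enumeration yields theorems with repetitions, which is harmless for computable enumerability; the essential point is that a decidable notion of "finite proof" always gives a c.e. theorem set, so a logic whose validities are not c.e. cannot have such a proof system.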

$\endgroup$
  • $\begingroup$ What does c.e. mean? Is the point that the second-order quantifiers range over a non-enumerable domain? $\endgroup$ Commented Jun 23, 2013 at 22:57
  • $\begingroup$ @Nick, c.e. stands for "computably enumerable", the modern terminology replacing r.e.; the part after "i.e." in the first paragraph explains what I meant. $\endgroup$ Commented Jun 23, 2013 at 23:23
$\begingroup$

Since this question got bumped, let me add a bit to the existing answers.

The failure of compactness explains why the completeness theorem as usually phrased fails for $\mathsf{SOL}$. However, this leaves open the question of just how bad the set of validities is. For example, first-order logic + the quantifier "There are uncountably many" is not compact$^*$ but has a c.e. set of validities; this was proved by Vaught, but I recommend the presentation by Keisler.

There are lots of ways to see that the set of second-order validities is terrible. Personally, I like the following (see also Paulos, Characterizability of the syntax set, for essentially the same ideas). The key here is Tarski's undefinability theorem, usually stated for first-order logic but in fact far more broadly applicable. Specifically, a consequence of TUT is that as long as $\mathcal{L}$ is a "reasonable" logic (SOL qualifies here) and $\mathfrak{A}$ is a structure such that

  • $\mathfrak{A}$ is "rich enough" to talk about the syntax of $\mathcal{L}$,

  • each element of $\mathfrak{A}$ is defined by an $\mathcal{L}$-formula, and

  • there is an $\mathcal{L}$-sentence $\eta$ which pins down $\mathfrak{A}$ up to isomorphism,

then the set of ("codes of") $\mathcal{L}$-validities is not $\mathcal{L}$-definable over $\mathfrak{A}$. In particular, the semiring of natural numbers serves as such an $\mathfrak{A}$ for the case $\mathcal{L}=\mathsf{SOL}$ (the tricky point is the third bulletpoint which is addressed by Andre's answer), and so second-order validity is not second-order-definable over arithmetic. Since already first-order logic is capable of defining things well beyond the c.e., this kills off any hope for the set of second-order validities being computationally simple.

(Of course even this fails to truly describe the awfulness; in fact, the set of second-order validities isn't even set-theoretically absolute since there is a second-order sentence $\chi$ which is a validity iff the continuum hypothesis holds. But this gets into fairly technical territory.)


$^*$There's an unfortunate ambiguity here: Keisler states that this logic is compact! It obviously isn't, since if $I$ is an uncountable set of constant symbols then the theory $\{c\not=d: c,d\in I, c\not=d\}\cup\{\neg\mathsf{Q}x(x=x)\}$ is finitely satisfiable but not satisfiable. What Keisler means is that $\mathsf{FOL}(\mathsf{Q})$ is countably compact, i.e. every countable set which is finitely satisfiable is satisfiable. In general older texts tended to assume that signatures were countable; e.g. Fuhrken's paper cited by Keisler begins with the sentence "Let $L$ be a first-order language with countably many non-logical constants."

Note that countable compactness of $\mathsf{FOL}(\mathsf{Q})$ is highly nontrivial: a countable theory could assert that certain sets in a putative model are countable while others aren't, and in general (even for first-order logic) such two-cardinal questions are quite complicated despite the Löwenheim–Skolem theorem. Nevertheless, countable compactness does in fact hold. A better example for this question might be a logic which is not even countably compact but has a c.e. set of validities, but I don't offhand know of an example as natural as $\mathsf{FOL}(\mathsf{Q})$.

$\endgroup$
  • $\begingroup$ The intro to the Keisler paper seems to say that logic is compact. $\endgroup$ Commented Nov 25 at 3:12
  • $\begingroup$ @spaceisdarkgreen It's obviously not compact (take $\{c_i\not=c_j: i\not=j, i,j\in I\}\cup\{\neg \mathsf{Q}x(x=x)\}$ with $I$ uncountable). It is, however, countably compact, and I think that's what Keisler means here. $\endgroup$ Commented Nov 25 at 3:27
  • $\begingroup$ @spaceisdarkgreen I've added a bit addressing this. $\endgroup$ Commented Nov 25 at 3:32
$\begingroup$

If we fix a proof system, the completeness theorem says we have the right balance in our collection of models:

  • Every statement that is provable is true in all of our models
  • For any statement that is syntactically consistent, there is a model in which the statement is true. In other words, if a statement is not provable then there is a model in which the statement is false.
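In symbols, the two bullets together are the familiar two halves of soundness and completeness with respect to a class of models:

```latex
% Soundness (left to right) and completeness (right to left):
T \vdash \varphi \quad\Longleftrightarrow\quad T \models \varphi
% Contrapositive of the second bullet: if T \nvdash \varphi, then
% some model of T satisfies \lnot\varphi.
```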

As the comment by user14972 stated, the problem in second-order logic with full semantics is that we do not allow enough models: we only allow those in which the second-order part is the full power set of the first-order part.

That makes the second bullet false, because now we do not have enough models for every syntactically consistent statement to have a model.

Moreover, because of the complexity of the standard model of arithmetic, this remains true no matter which effective proof system we use. No effective proof system can prove exactly the statements that are true in the standard model of arithmetic, and this is the only model of arithmetic allowed in second-order logic with full semantics.

If we allow more models, we arrive at Henkin semantics for second-order logic, which does have sound and complete proof systems. So the issue is not the "second-order" nature of the logic; it is the choice to allow only particular models.

We could obtain the same issue in first-order logic if we arbitrarily limited the collection of models there, e.g. if we only allow finite models. Then again we would have theories that are syntactically consistent but don't have any models in our limited collection.
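As a concrete instance of this last point (a standard example, not from the answer above): the following single first-order sentence, saying that $f$ is injective but not surjective, is consistent but forces an infinite domain, so it has no model in the restricted "finite models only" semantics:

```latex
% Injective but not surjective function: only infinite domains qualify.
\forall x\,\forall y\,\bigl(f(x)=f(y)\rightarrow x=y\bigr)
\;\wedge\;
\exists y\,\forall x\,\bigl(f(x)\neq y\bigr)
```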

$\endgroup$
