The completeness theorem and the compactness theorem do not hold in full second- and higher-order logic. What makes these logics incompatible with the theorems? Is it somehow related to Russell's paradox or diagonalization issues?
Thanks.
For the Compactness Theorem:
Take second-order Peano arithmetic, that is, the original Peano arithmetic, with the full second-order version of induction. The language of Peano arithmetic in particular has a constant symbol $0$ and a unary function symbol $S$ for the successor function. It is not hard to show that second-order Peano arithmetic is categorical: any two of its models are isomorphic.
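To see where full semantics enters, here is a sketch of the key step, writing $S^n(0)$ as informal shorthand for the term obtained by applying $S$ to $0$ exactly $n$ times (a notational convention, not part of the official language). In any model $M$, the set of "standard" elements
$$X = \{\, S^n(0^M) : n \in \mathbb{N} \,\}$$
contains $0^M$ and is closed under $S^M$. Since full semantics lets the induction axiom quantify over *all* subsets of $M$, including $X$ itself, induction forces $X = M$. Sending $S^n(0)$ in one model to $S^n(0)$ in the other then gives the isomorphism.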
Add to the language a new constant symbol $B$. Add to second-order Peano arithmetic the infinitely many special axioms $\lnot(B=0)$, $\lnot(B=S(0))$, $\lnot(B=S(S(0)))$, $\lnot(B=S(S(S(0))))$, and so on forever.
Any finite subset $T_{\text{fin}}$ of this theory has a model. Just interpret $B$ as a natural number larger than any of the natural numbers "mentioned" in the finitely many special axioms that occur in $T_{\text{fin}}$. But the full theory does not have a model: by categoricity, the reduct of any such model to the original language would be isomorphic to the natural numbers, so $B$ would have to denote some $n$-th successor of $0$, contradicting one of the special axioms. Thus the Compactness Theorem is violated.
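Schematically, and again using $S^n(0)$ as informal shorthand, the theory in question is
$$T \;=\; \mathsf{PA}^2 \;\cup\; \{\, \lnot\,(B = S^n(0)) : n \in \mathbb{N} \,\},$$
where $\mathsf{PA}^2$ is just a label here for second-order Peano arithmetic. The argument above shows that every finite $T_{\text{fin}} \subseteq T$ is satisfiable by interpreting $B$ as a large enough natural number, while $T$ itself is not satisfiable at all.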
The reason is that the set of valid formulas of these systems is not c.e., and therefore they do not have effective proof systems, i.e. there is no c.e. set of axioms that would formalize them.
Here I am using the phrase "proof system" in the usual sense: a decidable set of axioms and rules, where a proof is a finite structure built from them. In addition, given a string and a formula, we can decide whether the string is a proof of the formula. If we modify these requirements, we can have a proof system and a completeness theorem. For example, we can add the $\omega$-rule, and then anything in second-order arithmetic is derivable. We can combine these ideas with computability and consider only those proof objects which have finite representations, e.g. those whose bits are given by a Turing machine. There is a considerable amount of literature in proof theory on these topics.
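For illustration, the $\omega$-rule mentioned above is the infinitary inference that passes from all numerical instances of a formula to its universal generalization:
$$\frac{\vdash \varphi(0) \qquad \vdash \varphi(S(0)) \qquad \vdash \varphi(S(S(0))) \qquad \cdots}{\vdash \forall x\,\varphi(x)}$$
A derivation using this rule has infinitely many premises, so it is no longer a finite structure; this is exactly the requirement on proof systems that gets relaxed.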
Since this question got bumped, let me add a bit to the existing answers.
The failure of compactness explains why the completeness theorem as usually phrased fails for $\mathsf{SOL}$. However, this leaves open the question of just how bad the set of validities is. For example, first-order logic + the quantifier "There are uncountably many" is not compact$^*$ but has a c.e. set of validities; this was proved by Vaught, but I recommend the presentation by Keisler.
There are lots of ways to see that the set of second-order validities is terrible. Personally, I like the following (see also Paulos, Characterizability of the syntax set, for essentially the same ideas). The key here is Tarski's undefinability theorem, usually stated for first-order logic but in fact far more broadly applicable. Specifically, a consequence of TUT is that as long as $\mathcal{L}$ is a "reasonable" logic (SOL qualifies here) and $\mathfrak{A}$ is a structure such that
$\mathfrak{A}$ is "rich enough" to talk about the syntax of $\mathcal{L}$,
each element of $\mathfrak{A}$ is defined by an $\mathcal{L}$-formula, and
there is an $\mathcal{L}$-sentence $\eta$ which pins down $\mathfrak{A}$ up to isomorphism,
then the set of ("codes of") $\mathcal{L}$-validities is not $\mathcal{L}$-definable over $\mathfrak{A}$. In particular, the semiring of natural numbers serves as such an $\mathfrak{A}$ for the case $\mathcal{L}=\mathsf{SOL}$ (the tricky point is the third bulletpoint, which is addressed by Andre's answer), and so second-order validity is not second-order-definable over arithmetic. Since first-order logic is already capable of defining things well beyond the c.e. sets, this kills off any hope of the set of second-order validities being computationally simple.
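To spell out one step of this argument (a sketch, writing $\ulcorner\varphi\urcorner$ for a code of $\varphi$, with $\eta$ as in the third bulletpoint): since $\eta$ pins down $\mathfrak{A}$ up to isomorphism,
$$\mathfrak{A} \models \varphi \quad\Longleftrightarrow\quad \eta \rightarrow \varphi \text{ is a validity},$$
so if the set of codes of validities were $\mathcal{L}$-definable over $\mathfrak{A}$, then so would be the truth set $\{\ulcorner\varphi\urcorner : \mathfrak{A} \models \varphi\}$, contradicting Tarski's undefinability theorem.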
(Of course even this fails to truly describe the awfulness; in fact, the set of second-order validities isn't even set-theoretically absolute since there is a second-order sentence $\chi$ which is a validity iff the continuum hypothesis holds. But this gets into fairly technical territory.)
$^*$There's an unfortunate ambiguity here: Keisler states that this logic is compact! It obviously isn't, since if $I$ is an uncountable set of constant symbols then the theory $\{c\not=d : c,d\in I \text{ distinct}\}\cup\{\neg\mathsf{Q}x(x=x)\}$ is finitely satisfiable but not satisfiable. What Keisler means is that $\mathsf{FOL}(\mathsf{Q})$ is countably compact, i.e. every countable set of sentences which is finitely satisfiable is satisfiable. In general, older texts tended to assume that signatures were countable; e.g. Fuhrken's paper cited by Keisler begins with the sentence "Let $L$ be a first-order language with countably many non-logical constants."
Note that countable compactness of $\mathsf{FOL}(\mathsf{Q})$ is highly nontrivial, since a countable theory could assert that certain sets in a putative model are countable while others aren't, and (even for first-order logic) two-cardinal questions are generally quite complicated despite the Löwenheim-Skolem theorem; but it does in fact hold. A better example for this question might be a logic which is not even countably compact but has a c.e. set of validities, but I don't offhand know of an example as natural as $\mathsf{FOL}(\mathsf{Q})$.
If we fix a proof system, the completeness theorem says we have the right balance in our collection of models:

* There are not too many models: every provable statement is true in every model (soundness).
* There are enough models: every syntactically consistent statement is true in some model (completeness).
As the comment by user14972 stated, the challenge in second-order logic with full semantics is that we do not allow enough models: we only allow ones in which the second-order part is the full powerset of the first-order part.
That makes the second bullet false, because now we do not have enough models for every syntactically consistent statement to have a model.
Moreover, because of the complexity of the standard model of arithmetic, this remains true no matter which effective proof system we use. No effective proof system can prove exactly the statements that are true in the standard model of arithmetic, and this is the only model of arithmetic allowed in second-order logic with full semantics.
If we allow more models, we arrive at Henkin semantics for second-order logic, which does have sound and complete proof systems. So the issue is not the "second-order" nature; it is the choice to allow only particular models.
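Schematically (a rough sketch, suppressing relation and function variables): a full model is determined by its first-order part, while a Henkin model carries its own collection of sets,
$$\text{full: } (\mathfrak{A},\, \mathcal{P}(A)) \qquad\qquad \text{Henkin: } (\mathfrak{A},\, \mathcal{S}), \quad \mathcal{S} \subseteq \mathcal{P}(A),$$
where $\mathcal{S}$ is required to satisfy the comprehension axioms. Second-order quantifiers range over $\mathcal{S}$ rather than all of $\mathcal{P}(A)$, and with this wider class of models the usual completeness proof goes through.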
We could obtain the same issue in first-order logic if we arbitrarily limited the collection of models there, e.g. if we allowed only finite models. Then, again, we would have theories that are syntactically consistent but have no model in our limited collection.