Broadly speaking, the idea of “reverse mathematics” is to find, over a weak base theory, equivalents of various standard mathematical statements, in order to gauge the strength of the theories (sets of axioms) required to prove those statements. It also involves constructing models of certain theories over the weak base theory, in order to show that certain sets of axioms do not prove various others. All of this takes place over a fixed weak base theory, and all the theories considered are intermediate between that base theory and a strong full theory; the idea is thus to map this space of intermediate theories.
Now there are two main flavors:
Classical (arithmetic) reverse mathematics (generally simply known as “reverse mathematics”, but in the context of this question I will be adding the adjective “classical” for obvious reasons) operates entirely within classical logic; it uses second-order arithmetic as the full theory, and weakens the induction/comprehension axiom scheme(s) of the latter. The weak base system is typically $\mathsf{RCA}_0$, which is limited to $\Sigma^0_1$-induction and $\Delta^0_1$-comprehension.
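(For concreteness, in the usual formulation the $\Delta^0_1$-comprehension scheme says: for $\varphi$ a $\Sigma^0_1$ formula and $\psi$ a $\Pi^0_1$ formula in which $X$ does not occur free,
$$\forall n\,\bigl(\varphi(n)\leftrightarrow\psi(n)\bigr) \;\to\; \exists X\,\forall n\,\bigl(n\in X\leftrightarrow\varphi(n)\bigr),$$
so, intuitively, only sets whose membership is decidable relative to the parameters are asserted to exist.)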
Intuitionistic reverse mathematics (also known as “constructive reverse mathematics”) takes a wholly different approach: this time, comprehension is left alone and it is the underlying logic that gets weakened. The full theory is typically something like classical higher-order arithmetic (or some variant, like Zermelo set theory, sometimes even $\mathsf{ZFC}$), the weak base system is the intuitionistic version thereof, and intermediate systems typically add various “omniscience principles”: so we weaken, in essence, the law of excluded middle.
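(The archetypal omniscience principle is the “Limited Principle of Omniscience”, which in one standard formulation reads
$$\mathsf{LPO}\colon\quad \forall\alpha\in\mathbb{N}^{\mathbb{N}}\,\bigl(\exists n\,(\alpha(n)\neq 0)\;\vee\;\forall n\,(\alpha(n)=0)\bigr),$$
i.e., excluded middle restricted to $\Sigma^0_1$ statements: trivial classically, but not provable constructively.)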
(Yet another kind of reverse mathematics might be discerned, though it isn't usually given a name: “choice reverse mathematics”, which takes $\mathsf{ZFC}$ as the full system, $\mathsf{ZF}$ as the base system, and weakens the axiom of choice. This is, of course, historically the first to have been studied. I'm not asking about this one, but it can serve as a kind of comparison point.)
Prima facie, classical arithmetic RM and intuitionistic RM are completely different endeavors, even though they follow the same pattern. There is no reason why “weakening the comprehension axiom in classical arithmetic” and “weakening the law of excluded middle with unlimited comprehension” should yield anything similar. But they often do!
For example:
In both flavors of RM, the weak base theory is insufficient to prove the existence of a noncomputable function $\mathbb{N}\to\mathbb{N}$, and in both cases computability theory is key to constructing models of the base theory or of certain intermediate theories (e.g., the $\omega$-model $\mathrm{REC}$ consisting of exactly the computable sets on the classical side, and realizability models on the intuitionistic side).
In both flavors of RM, forms of Kőnig's lemma play an important role: in classical arithmetic RM this is evidenced by the fact that one of the “Big Five” intermediate theories is named $\mathsf{WKL}_0$, where “WKL” stands for “Weak Kőnig's Lemma”; and in intuitionistic RM, see the paper by Berger, Ishihara and Schuster whose title begins with “The Weak Kőnig Lemma”.
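(For reference, in one standard formulation Weak Kőnig's Lemma says that every infinite binary tree has an infinite path:
$$\mathsf{WKL}\colon\quad \forall T\subseteq 2^{<\mathbb{N}}\,\bigl(T\text{ an infinite tree}\;\to\;\exists f\in 2^{\mathbb{N}}\,\forall n\,(f{\upharpoonright}n\in T)\bigr),$$
the “weak” referring to the restriction to binary, rather than arbitrary finitely branching, trees.)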
In both flavors of RM, choice principles play an important supporting role in the story. (So they also have ties with what I called “choice RM” above.) See for example this question.
The theorems being considered are often similar or identical, such as the Heine-Borel covering lemma (“every covering of $[0,1]$ by a sequence of open intervals has a finite subcovering”, which isn't provable in either base theory; of course, this one is strongly related to Kőnig's lemma, but it's just an example).
To give another surprising commonality, consider the uncountability of $\mathbb{R}$: on the classical arithmetic RM side, see this answer by Sam Sanders, and on the intuitionistic RM side, this paper by Andrej Bauer and James Hanson.
I might add, facetiously, that both flavors of RM tend toward excessive modesty in calling things “weak”: on the classical arithmetic RM side, there is a “Weak Weak Kőnig's Lemma”, and on the constructive RM side, there is a “Weak Limited Principle of Omniscience” (and even a “Weak Lesser Limited Principle of Omniscience”).
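(For the record, one common formulation of the unweakened principles, for $\alpha\in\mathbb{N}^{\mathbb{N}}$, is
$$\mathsf{WLPO}\colon\quad \forall\alpha\,\bigl(\forall n\,(\alpha(n)=0)\;\vee\;\neg\forall n\,(\alpha(n)=0)\bigr)$$
$$\mathsf{LLPO}\colon\quad \forall\alpha\,\Bigl(\neg\exists m\,\exists n\,\bigl(m\neq n\wedge\alpha(m)\neq 0\wedge\alpha(n)\neq 0\bigr)\;\to\;\bigl(\forall n\,(\alpha(2n)=0)\;\vee\;\forall n\,(\alpha(2n+1)=0)\bigr)\Bigr)$$
and the “weak” variants weaken these still further.)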
(Of course, there are important differences as well, such as the fact that intuitionistic RM can talk about arbitrary sets, not just sets of integers, so it can ask certain questions that don't even make sense in the framework of subsystems of second-order arithmetic. Also, one can limit both excluded middle and comprehension, and in a sense this is what the “Constructive ZF” $\mathsf{CZF}$ theory does as opposed to “Intuitionistic ZF” $\mathsf{IZF}$. But let me try to focus on the similarities.)
Questions:
Is there a “big picture” explanation of the similarities? Can one explain, for example, why it is that computability plays such a major role in both classical and intuitionistic RM?
Ideally: Can we make this similarity more precise, for example, by defining a procedure that takes a system of constructive mathematics (with full comprehension) and outputs a subsystem of classical second-order arithmetic (with excluded middle but only restricted comprehension) and/or perhaps in the other direction, that might preserve provability of certain kinds of statements?
Has anyone attempted to systematically compare the two flavors of reverse mathematics? (Surely I'm not the first one to notice the similarities, and I'm sure far more competent people have more intelligent things to say than my superficial observations.)
For example: has anyone attempted a “summary chart” of various standard theorems of elementary analysis (Heine-Borel, Bolzano-Weierstraß, etc.) showing where they lie on the “classical arithmetic RM” and on the “intuitionistic RM” side?
From the perspective of math philosophy, is it correct to say that classical arithmetic RM focuses on predicativism whereas intuitionistic RM focuses on constructivism, and that the similarities show that there is some connection between predicativism and constructivism (cf. this past question of mine)?