
A reversible Turing machine ($\mathsf{RTM}$) is a Turing machine ($\mathsf{TM}$) whose transition function is injective, so that each instantaneous description ($\mathtt{ID}$) has at most one predecessor.

In [Ben73], Bennett showed that "Each conventional multitape Turing machine running in time $T$ and space $S$ can be simulated by a reversible input-saving machine running in time $O(T)$ and space $O(S + T)$."

In [Ben89], he improved his simulation as "For any $\epsilon>0$, any multitape Turing machine running in time $T$ and space $S$ can be simulated by a reversible input-saving machine using time $O(T^{1+\epsilon})$ and space $O(S\cdot\log{T})$."

To the best of my understanding, the following is a proof sketch of the [Ben89] simulation.

For parameters $m\approx S$, $n\ge 0$, and $k\ge 2$, divide the entire computation of $T=m\cdot k^n$ steps into $k^n$ segments of $m$ steps each. For $n=0$, simulate each $m$-step segment using [Ben73]. For each $n>0$, the computation consists of exactly $k$ segments of $m\cdot k^{n-1}$ steps each. So, at each level $n>0$, perform $k$ forward sub-simulations, after each of which a checkpoint (the final $\mathtt{ID}$ of that sub-simulation) is stored, followed by $(k-1)$ backward sub-simulations, each of which erases a checkpoint.
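The schedule above can be modelled directly. Below is a runnable sketch (my own toy model, not Bennett's machine-level construction): an $\mathtt{ID}$ is abstracted as an integer, one step of the simulated machine is the bijection `step` with inverse `inv_step`, and `cps` is the checkpoint stack; all names are hypothetical.

```python
def simulate(x, n, m, k, step, inv_step, cps):
    """Return the ID reached after m * k**n steps from x, using the
    stack `cps` for checkpoints; the stack is left as it was found."""
    if n == 0:
        # Base case: one m-step segment, simulated directly
        # (standing in for the [Ben73] history-based method).
        for _ in range(m):
            x = step(x)
        return x
    # Forward phase: k sub-simulations, storing a checkpoint after each.
    y = x
    for _ in range(k):
        y = simulate(y, n - 1, m, k, step, inv_step, cps)
        cps.append(y)
    result = cps.pop()  # final ID of this segment is the output
    # Backward phase: k-1 reverse sub-simulations, each erasing a
    # checkpoint.  (Bennett runs the reversible sub-simulation backward;
    # here this is simplified to stepwise reversal via inv_step.)
    for _ in range(k - 1):
        y = cps.pop()
        for _ in range(m * k ** (n - 1)):
            y = inv_step(y)  # run the sub-segment in reverse
    return result
```

For example, with `step = lambda x: x + 1`, `simulate(0, 2, 3, 2, ...)` performs $m\cdot k^n = 3\cdot 4 = 12$ steps and leaves the checkpoint stack empty.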

Thus, for all $n\ge 1$ and $k\ge 2$, any $\mathsf{TM}$ computation consisting of $k^n$ segments can be simulated by an $\mathsf{RTM}$ using $(2k-1)^n$ segment simulations. The number of checkpoints stored at any time is $\le n\cdot(k-1)=O(\log_k{T})$ for fixed $k$. As the size of each checkpoint is $O(S)$, the $\mathsf{RTM}$ requires $O(S\cdot\log{T})$ space and $O(S\cdot (2k-1)^n)=O(T^{1+(1/\log{k})})$ time.
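These counts are easy to check numerically. The following sketch (an illustrative model with hypothetical names, not code from [Ben89]) replays the recursive schedule symbolically, counting base-level segment simulations and the peak number of checkpoints held at once:

```python
def schedule(n, k, held=0, stats=None):
    """Replay the recursive schedule: k forward sub-simulations (storing
    a checkpoint after each) and k-1 backward ones (erasing them),
    counting base segments executed and peak checkpoints held."""
    if stats is None:
        stats = {"segments": 0, "peak": 0}
    if n == 0:
        stats["segments"] += 1
        stats["peak"] = max(stats["peak"], held)
        return stats
    for i in range(k):              # forward: i checkpoints already held
        schedule(n - 1, k, held + i, stats)
    for i in range(k - 1):          # backward: k-1-i checkpoints remain
        schedule(n - 1, k, held + k - 1 - i, stats)
    return stats

# e.g. n=3, k=2: (2k-1)^n = 27 segment simulations,
# peak n*(k-1) = 3 checkpoints
```

Running `schedule(3, 2)` gives 27 segments and a peak of 3 checkpoints; `schedule(2, 3)` gives $5^2 = 25$ segments and a peak of $2\cdot 2 = 4$, matching the $(2k-1)^n$ and $n\cdot(k-1)$ bounds above.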

My question is about the role of the stored checkpoints after the reversal. It seems that the computation could be reconstructed (possibly in reverse) from those checkpoints in $O(\log{T})$ time, but how? In particular, if the $\mathsf{TM}$ computes a function $f$ on an input $x$, is it possible to validate in $O(\log{T})$ time, from these $O(\log{T})$ checkpoints, that the $\mathsf{RTM}$ has computed $f(x)$ on input $x$ via the [Ben89] simulation? If yes, how? If not, why are the checkpoints stored at all?

Further, irrespective of how the checkpoints are used, if the simulation cannot be reconstructed deterministically backward, why should we call Bennett's simulation reversible? I am unable to see how to do this from those $O(\log{T})$ checkpoints. Saving only the pair $\langle x,f(x) \rangle$ does not by itself establish reversibility.

  • Cross-posted from cs.stackexchange.com/questions/171205/… — Commented Feb 6, 2025 at 16:41
  • @EmilJeřábek These two posts ask two different questions. The other post asks whether the computation can be validated from the stored checkpoints. Anticipating a negative answer, this post asks what the stored checkpoints are used for. In fact, going a step further, I should have asked why we call Bennett's simulation reversible at all if the computation cannot be reconstructed in $O(\log{T})$ time, given that the simulation saves the input-output pair $\langle x,f(x) \rangle$ at the end. — Commented Feb 7, 2025 at 13:58
