Let $M(i)$ denote the matrix $M$ with its $i$th row and column deleted. Then
for the matrix $M$ in the question, the determinant after deletion of the $
mk $th (last) row and column is
$$
\left\vert M(mk)\right\vert =\frac{a_{k}b_{m}\left( 1+ta^{T}1_{k}\right)
^{m-2}\left( 1+tb^{T}1_{m}\right) ^{k-2}f}{\left( \prod a_{i}\right)
^{m}\left( \prod b_{i}\right) ^{k}}
$$
where
\begin{align}
f &=\left( a^{T}1_{k}-a_{k}\right) \left( b^{T}1_{m}-b_{m}\right) \left(
a^{T}1_{k}+b^{T}1_{m}\right) t^{3} \\
&+\left( 3\left( a^{T}1_{k}-a_{k}\right) \left( b^{T}1_{m}-b_{m}\right)
+\left( a^{T}1_{k}-a_{k}\right) ^{2}+\left( b^{T}1_{m}-b_{m}\right)
^{2}+\left( a_{k}+b_{m}\right) \left(
a^{T}1_{k}-a_{k}+b^{T}1_{m}-b_{m}\right) +a_{k}b_{m}\right) t^{2} \\
&+\left( 2\left( a^{T}1_{k}+b^{T}1_{m}\right) -(a_{k}+b_{m})\right) t+1.
\end{align}
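This closed form can be sanity-checked numerically. The sketch below is a verification aid only, not part of the proof; the sizes, random data, and NumPy usage are arbitrary choices:

```python
import numpy as np

# Numerical check of the stated formula for |M(mk)| at arbitrary sizes.
k, m = 3, 4
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, k)   # length-k vector a
b = rng.uniform(0.5, 2.0, m)   # length-m vector b
t = 0.7

# M = D_{1/a} (x) D_{1/b} + t (J_k (x) D_{1/b} + D_{1/a} (x) J_m)
Jk, Jm = np.ones((k, k)), np.ones((m, m))
M = (np.kron(np.diag(1 / a), np.diag(1 / b))
     + t * (np.kron(Jk, np.diag(1 / b)) + np.kron(np.diag(1 / a), Jm)))
direct = np.linalg.det(M[:-1, :-1])   # delete last row and column

A, B = a.sum(), b.sum()               # a^T 1_k and b^T 1_m
ak, bm = a[-1], b[-1]
f = ((A - ak) * (B - bm) * (A + B) * t**3
     + (3 * (A - ak) * (B - bm) + (A - ak)**2 + (B - bm)**2
        + (ak + bm) * (A - ak + B - bm) + ak * bm) * t**2
     + (2 * (A + B) - (ak + bm)) * t + 1)
closed = (ak * bm * (1 + t * A)**(m - 2) * (1 + t * B)**(k - 2) * f
          / (np.prod(a)**m * np.prod(b)**k))

assert np.isclose(direct, closed)
```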
Preliminaries
The approach will be to first find the eigenvectors of a matrix congruent to
$M$, and then convert them to eigenvectors of the submatrix. We will make
frequent use of the mixed-product property of the Kronecker product, $\left(
A\otimes B\right) \left( C\otimes D\right) =AC\otimes BD$. Let $J_{m}$ be
the $m\times m$ matrix of $1$s. We will require the eigenvalues and
eigenvectors of $
D_{b}^{1/2}J_{m}D_{b}^{1/2}=(D_{b}^{1/2}1_{m})(D_{b}^{1/2}1_{m})^{T}$. This is
a rank 1 matrix with $m-1$ zero eigenvalues, and non-zero eigenvalue $
(D_{b}^{1/2}1_{m})^{T}(D_{b}^{1/2}1_{m})=b^{T}1_{m}$ with corresponding
eigenvector $D_{b}^{1/2}1_{m}$.
The matrix of (column) eigenvectors is
chosen in the form
$$
V_{m}=
\begin{bmatrix}
b_{1}^{1/2}b_{m}^{-1/2} & -b_{1}^{-1/2}1_{m-1}^{T}D_{b}^{1/2}(1) \\
b_{m}^{-1/2}D_{b}^{1/2}(1)1_{m-1} & I_{m-1}
\end{bmatrix}
$$
where the first row and column have been partitioned out, $D_{b}^{1/2}(1)$
denotes $D_{b}^{1/2}$ with its first row and column deleted (consistent with
the deletion notation above), and the eigenvectors are scaled so their last
entries are $1$ (first and last eigenvector) or zero (others). The diagonal
matrix of eigenvalues is $
\Lambda _{m}=\operatorname{diag}(b^{T}1_{m},0,\dotsc ,0)$. It is convenient to
orthonormalize this matrix using Gram-Schmidt to give $U_{m}$. $U_{m}$ has
the same zero/nonzero pattern as $V_{m}$ in its last row, namely only the
first and last entries are nonzero. Since the first column is just
normalized in the orthogonalization, its last entry becomes $\beta
_{1}:=b_{m}^{1/2}/(b^{T}1_{m})^{1/2}$. Then considering the normalization of
the last row shows the last entry in the last row is $\beta
_{2}:=(b^{T}1_{m}-b_{m})^{1/2}/(b^{T}1_{m})^{1/2}$. Similarly for $
D_{a}^{1/2}J_{k}D_{a}^{1/2}$, the first and last entries in the last row of $
U_{k}$ are $\alpha _{1}:=a_{k}^{1/2}/(a^{T}1_{k})^{1/2}$ and $\alpha
_{2}:=(a^{T}1_{k}-a_{k})^{1/2}/(a^{T}1_{k})^{1/2}$.
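These last-row entries can be checked numerically. The sketch below (our own construction; any basis of the orthogonal complement of $D_b^{1/2}1_m$ would serve for the null-space columns) builds $V_m$, orthonormalizes via QR (Gram-Schmidt up to column signs), and inspects the last row:

```python
import numpy as np

# Verify the last row of U_m: only the first and last entries are nonzero,
# equal to beta1 = (b_m/B)^{1/2} and beta2 = ((B - b_m)/B)^{1/2}, B = b^T 1_m.
m = 5
rng = np.random.default_rng(1)
b = rng.uniform(0.5, 2.0, m)
v = np.sqrt(b)                          # D_b^{1/2} 1_m, the rank-1 eigenvector

V = np.zeros((m, m))
V[:, 0] = v
for j in range(1, m):                   # null-space eigenvectors, orthogonal to v
    V[0, j] = -np.sqrt(b[j] / b[0])
    V[j, j] = 1.0

Q, R = np.linalg.qr(V)                  # QR = Gram-Schmidt up to column signs
U = Q * np.sign(np.diag(R))             # fix signs to match classical Gram-Schmidt

B = b.sum()
# U diagonalizes D_b^{1/2} J_m D_b^{1/2} = v v^T with eigenvalues (B, 0, ..., 0)
assert np.allclose(U.T @ np.outer(v, v) @ U, np.diag(np.r_[B, np.zeros(m - 1)]))
assert np.allclose(U[-1, 1:-1], 0)                        # interior entries vanish
assert np.isclose(U[-1, 0], np.sqrt(b[-1] / B))           # beta1
assert np.isclose(U[-1, -1], np.sqrt((B - b[-1]) / B))    # beta2
```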
Proof
Consider the matrix $M=D_{1/a}\otimes D_{1/b}+t\left( J_{k}\otimes
D_{1/b}+D_{1/a}\otimes J_{m}\right) $. Pre- and postmultiply it by $
D_{a}^{1/2}\otimes D_{b}^{1/2}$, which multiplies the determinant by $\left(
\prod a_{i}\right) ^{m}\left( \prod b_{i}\right) ^{k}$, to give the matrix
\begin{align}
N &=I_{mk}+t\left( \left( D_{a}^{1/2}\otimes D_{b}^{1/2}\right) \left(
J_{k}\otimes D_{1/b}\right) \left( D_{a}^{1/2}\otimes D_{b}^{1/2}\right) \right. \\
&+ \left. \left( D_{a}^{1/2}\otimes D_{b}^{1/2}\right) \left( D_{1/a}\otimes
J_{m}\right) \left( D_{a}^{1/2}\otimes D_{b}^{1/2}\right) \right) \\
&=I_{mk}+t\left( D_{a}^{1/2}J_{k}D_{a}^{1/2}\otimes I_{m}+I_{k}\otimes
D_{b}^{1/2}J_{m}D_{b}^{1/2}\right).
\end{align}
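This collapse of the congruence via the mixed-product property can be confirmed numerically (a sketch with arbitrary sizes and data):

```python
import numpy as np

# Confirm (D_a^{1/2} (x) D_b^{1/2}) M (D_a^{1/2} (x) D_b^{1/2}) = N and that
# the congruence scales the determinant by (prod a_i)^m (prod b_i)^k.
k, m = 3, 4
rng = np.random.default_rng(2)
a = rng.uniform(0.5, 2.0, k)
b = rng.uniform(0.5, 2.0, m)
t = 0.7

Jk, Jm = np.ones((k, k)), np.ones((m, m))
M = (np.kron(np.diag(1 / a), np.diag(1 / b))
     + t * (np.kron(Jk, np.diag(1 / b)) + np.kron(np.diag(1 / a), Jm)))

C = np.kron(np.diag(np.sqrt(a)), np.diag(np.sqrt(b)))    # D_a^{1/2} (x) D_b^{1/2}
Sa = np.outer(np.sqrt(a), np.sqrt(a))                    # D_a^{1/2} J_k D_a^{1/2}
Sb = np.outer(np.sqrt(b), np.sqrt(b))                    # D_b^{1/2} J_m D_b^{1/2}
N = np.eye(m * k) + t * (np.kron(Sa, np.eye(m)) + np.kron(np.eye(k), Sb))

assert np.allclose(C @ M @ C, N)
assert np.isclose(np.linalg.det(N),
                  np.linalg.det(M) * np.prod(a)**m * np.prod(b)**k)
```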
We carry out a similarity with $I_{k}\otimes U_{m}$ (inverse $I_{k}\otimes
U_{m}^{-1}= I_{k}\otimes U_{m}^{T}$) to give $N_{2}$
\begin{align}
N_{2} &=I_{mk}+t\left( \left( I_{k}\otimes U_{m}^{-1}\right) \left(
D_{a}^{1/2}J_{k}D_{a}^{1/2}\otimes I_{m}\right) \left( I_{k}\otimes
U_{m}\right) \right. \\
&+ \left. \left( I_{k}\otimes U_{m}^{-1}\right) \left( I_{k}\otimes
D_{b}^{1/2}J_{m}D_{b}^{1/2}\right) \left( I_{k}\otimes U_{m}\right) \right)
\\
&=I_{mk}+t\left( D_{a}^{1/2}J_{k}D_{a}^{1/2}\otimes I_{m}+I_{k}\otimes
\Lambda _{m}\right)
\end{align}
which has the effect of diagonalizing the diagonal $m\times m$ blocks
originating from the term $I_{k}\otimes D_{b}^{1/2}J_{m}D_{b}^{1/2}$ and
leaving the other blocks unchanged (these were already diagonal). Now we
carry out a second similarity with $
U_{k}\otimes I_{m}$ to give the diagonal matrix $N_{3}$
\begin{align}
N_{3} &=I_{mk}+t\left( \left( U_{k}^{-1}\otimes I_{m}\right) \left(
D_{a}^{1/2}J_{k}D_{a}^{1/2}\otimes I_{m}\right) \left( U_{k}\otimes
I_{m}\right) \right. \\
&+ \left. \left( U_{k}^{-1}\otimes I_{m}\right) \left( I_{k}\otimes
\Lambda _{m}\right) \left( U_{k}\otimes I_{m}\right) \right) \\
&=I_{mk}+t\left( \Lambda _{k}\otimes I_{m}+I_{k}\otimes \Lambda _{m}\right).
\end{align}
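The eigenvalues and their multiplicities can be confirmed directly (a numeric sketch, using the same arbitrary sizes as above):

```python
import numpy as np

# Eigenvalues of N: 1 + t(A+B) once, 1 + tA with multiplicity m-1,
# 1 + tB with multiplicity k-1, and 1 with multiplicity (k-1)(m-1),
# where A = a^T 1_k and B = b^T 1_m.
k, m = 3, 4
rng = np.random.default_rng(3)
a = rng.uniform(0.5, 2.0, k)
b = rng.uniform(0.5, 2.0, m)
t = 0.7
A, B = a.sum(), b.sum()

Sa = np.outer(np.sqrt(a), np.sqrt(a))
Sb = np.outer(np.sqrt(b), np.sqrt(b))
N = np.eye(m * k) + t * (np.kron(Sa, np.eye(m)) + np.kron(np.eye(k), Sb))

expected = np.sort(np.concatenate([
    [1 + t * (A + B)],
    np.full(m - 1, 1 + t * A),
    np.full(k - 1, 1 + t * B),
    np.ones((k - 1) * (m - 1)),
]))
assert np.allclose(np.sort(np.linalg.eigvalsh(N)), expected)
```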
The eigenvalues are in the diagonal entries. The identity $I_{mk}$
contributes $1$ to every diagonal entry, every $m\times m$ diagonal block
has an additional $b^{T}1_{m}$ in its $(1,1)$ entry, and the first block has
an additional $a^{T}1_{k}$ in each diagonal entry. The eigenvalues are $
\left( 1+ta^{T}1_{k}+tb^{T}1_{m}\right) $ with multiplicity $1$, $\left(
1+ta^{T}1_{k}\right) $ with multiplicity $m-1$, $\left( 1+tb^{T}1_{m}\right)
$ with multiplicity $k-1$, and $1$ with multiplicity $km-m-k+1$. (The
eigenvalues of the Kronecker sum $\Lambda _{k}\otimes I_{m}+I_{k}\otimes
\Lambda _{m}$ are the $mk$ possible sums of the eigenvalues of $\Lambda _{k}
$ with the eigenvalues of $\Lambda _{m}$ [1].) The product of the
eigenvalues of $N$ (or $N_{3}$) gives the determinant of $N$, which leads to
the determinant of $M$ previously deduced. The orthogonal matrix of
eigenvectors is the product of the two similarity matrices, $\left(
I_{k}\otimes U_{m}\right) \left( U_{k}\otimes I_{m}\right) =U_{k}\otimes
U_{m}=:U$.
We know by Cauchy interlacing [2] that $N(mk)$ must have eigenvalues $
\left( 1+ta^{T}1_{k}\right) $ with multiplicity $m-2$, $\left(
1+tb^{T}1_{m}\right) $ with multiplicity $k-2$, and $1$ with multiplicity $
km-m-k$. It remains to find the three remaining eigenvalues. (Actually, we
will only find their product.) As a consequence of its construction, the
matrix $U$ has four columns with their last entry nonzero, namely columns $
1$, $m$, $mk-m+1$, and $mk$; the others have zero as the last entry. These
correspond to eigenvalues $1+ta^{T}1_{k}+tb^{T}1_{m}$, $1+ta^{T}1_{k}$, $
1+tb^{T}1_{m}$, and $1$ respectively. The other columns with last entry zero
correspond to eigenvalues we already know by Cauchy interlacing are
eigenvalues of $N(mk)$. Deleting their last entries makes them eigenvectors
of $N(mk)$.
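This truncation argument can be checked numerically. In the sketch below, `gs_eigvecs` is our own helper reproducing the orthonormalized eigenvector matrices $U_k$, $U_m$ of the preliminaries:

```python
import numpy as np

def gs_eigvecs(c):
    """Orthonormal eigenvectors of D_c^{1/2} J D_c^{1/2}: first column along
    D_c^{1/2} 1, null-space columns after it (see the preliminaries)."""
    n = len(c)
    V = np.zeros((n, n))
    V[:, 0] = np.sqrt(c)
    for j in range(1, n):
        V[0, j] = -np.sqrt(c[j] / c[0])
        V[j, j] = 1.0
    Q, R = np.linalg.qr(V)
    return Q * np.sign(np.diag(R))   # signs fixed to match Gram-Schmidt

k, m = 3, 4
rng = np.random.default_rng(4)
a = rng.uniform(0.5, 2.0, k)
b = rng.uniform(0.5, 2.0, m)
t = 0.7

Sa = np.outer(np.sqrt(a), np.sqrt(a))
Sb = np.outer(np.sqrt(b), np.sqrt(b))
N = np.eye(m * k) + t * (np.kron(Sa, np.eye(m)) + np.kron(np.eye(k), Sb))
U = np.kron(gs_eigvecs(a), gs_eigvecs(b))
lam = np.diag(U.T @ N @ U)                   # eigenvalue of each column of U

special = {0, m - 1, m * k - m, m * k - 1}   # columns 1, m, mk-m+1, mk (1-based)
for j in set(range(m * k)) - special:
    assert abs(U[-1, j]) < 1e-12             # last entry is zero, so ...
    u = U[:-1, j]                            # ... deleting it leaves an
    assert np.allclose(N[:-1, :-1] @ u, lam[j] * u)   # eigenvector of N(mk)
```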
Consider now the matrix $Q_{1}$ below with the same determinant as $N(mk)$,
partitioned to show its last row and column.
$$
Q_{1}=
\begin{bmatrix}
N(mk) & 0 \\
0^{T} & 1
\end{bmatrix}.
$$
It is easy to see that the eigenvectors of $N$ with last entry zero are also
eigenvectors of $Q_{1}$, with the same eigenvalue. Therefore in the
similarity $Q=U^{T}Q_{1}U$ the corresponding diagonal entry is the
eigenvalue and the rest of the entries in that row (and column) are zero.
That eigenvalue is a factor in the determinant. Deleting these rows and
columns leaves a $4\times 4$ submatrix $S$ whose determinant is the
remaining part ($f$ above) of the determinant, i.e. the submatrix lying in
the intersection of the rows and columns $1,m,mk-m+1$, and $mk$ of $Q$. For
two eigenvectors $[u_{i}^{T},z_{i}]^{T}$ and $[u_{j}^{T},z_{j}]^{T}$, $
Q_{ij} $ is explicitly calculated as
$$
Q_{ij}=
\begin{bmatrix}
u_{i}^{T} & z_{i}
\end{bmatrix}
\begin{bmatrix}
N(mk) & 0 \\
0^{T} & 1
\end{bmatrix}
\begin{bmatrix}
u_{j} \\
z_{j}
\end{bmatrix}
=
\begin{bmatrix}
u_{i}^{T} & z_{i}
\end{bmatrix}
\begin{bmatrix}
N(mk)u_{j} \\
z_{j}
\end{bmatrix}
=u_{i}^{T}N(mk)u_{j}+z_{i}z_{j}.
$$
However, $[u_{j}^{T},z_{j}]^{T}$ is an eigenvector of $N$ with eigenvalue $
\lambda _{j}$ so we consider $N[u_{j}^{T},z_{j}]^{T}=\lambda
_{j}[u_{j}^{T},z_{j}]^{T}$ in partitioned form,
$$
\begin{bmatrix}
N(mk) & n \\
n^{T} & w
\end{bmatrix}
\begin{bmatrix}
u_{j} \\
z_{j}
\end{bmatrix}
=
\begin{bmatrix}
N(mk)u_{j}+z_{j}n \\
n^{T}u_{j}+wz_{j}
\end{bmatrix}
=
\begin{bmatrix}
\lambda _{j}u_{j} \\
\lambda _{j}z_{j}
\end{bmatrix}
$$
to find $N(mk)u_{j}=\lambda _{j}u_{j}-z_{j}n$ and $n^{T}u_{j}=\lambda
_{j}z_{j}-wz_{j}$. From orthogonality of the eigenvectors we have $
u_{i}^{T}u_{j}+z_{i}z_{j}=\delta _{ij}$ where $\delta _{ij}$ is the
Kronecker delta. Therefore we may rewrite
\begin{align}
Q_{ij} &=u_{i}^{T}\left( \lambda _{j}u_{j}-z_{j}n\right)
+z_{i}z_{j}=\lambda _{j}\left( \delta _{ij}-z_{i}z_{j}\right)
-z_{j}u_{i}^{T}n+z_{i}z_{j} \\
&=\lambda _{j}\left( \delta _{ij}-z_{i}z_{j}\right) -z_{j}\left( \lambda
_{i}z_{i}-wz_{i}\right) +z_{i}z_{j} \\
&=\lambda _{i}\delta _{ij}+z_{i}z_{j}(1+w-\lambda _{i}-\lambda _{j}).
\end{align}
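The entrywise formula can be verified numerically for the full matrix $Q=U^{T}Q_{1}U$ (a sketch; `gs_eigvecs` is our own helper rebuilding the $U_k$, $U_m$ of the preliminaries):

```python
import numpy as np

def gs_eigvecs(c):
    # Orthonormal eigenvectors of D_c^{1/2} J D_c^{1/2} (see preliminaries).
    n = len(c)
    V = np.zeros((n, n))
    V[:, 0] = np.sqrt(c)
    for j in range(1, n):
        V[0, j] = -np.sqrt(c[j] / c[0])
        V[j, j] = 1.0
    Q, R = np.linalg.qr(V)
    return Q * np.sign(np.diag(R))

k, m = 3, 4
rng = np.random.default_rng(5)
a = rng.uniform(0.5, 2.0, k)
b = rng.uniform(0.5, 2.0, m)
t = 0.7

Sa = np.outer(np.sqrt(a), np.sqrt(a))
Sb = np.outer(np.sqrt(b), np.sqrt(b))
N = np.eye(m * k) + t * (np.kron(Sa, np.eye(m)) + np.kron(np.eye(k), Sb))
U = np.kron(gs_eigvecs(a), gs_eigvecs(b))

lam = np.diag(U.T @ N @ U)      # eigenvalues lambda_j, column by column
z = U[-1, :]                    # last entries z_j of the eigenvectors
w = N[-1, -1]                   # last diagonal entry of N

Q1 = N.copy()                   # Q_1 = blockdiag(N(mk), 1)
Q1[-1, :] = 0.0
Q1[:, -1] = 0.0
Q1[-1, -1] = 1.0
Q = U.T @ Q1 @ U

# Q_ij = lambda_i delta_ij + z_i z_j (1 + w - lambda_i - lambda_j)
predicted = np.diag(lam) + np.outer(z, z) * (1 + w - lam[:, None] - lam[None, :])
assert np.allclose(Q, predicted)
assert np.isclose(w, 1 + t * (a[-1] + b[-1]))   # w = 1 + t(a_k + b_m)
```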
The last entries in $U$ for the eigenvectors with nonzero last entries may
be found from the definition of the Kronecker product and are tabulated
below. The last entry of $N$ is $w=1+t(a_{k}+b_{m})$. The information
required to calculate $S$ using the above formula is given in the table.
$$
\begin{array}{c|cccc}
\text{index in } S & 1 & 2 & 3 & 4 \\
\hline
\text{index in } Q & 1 & m & mk-m+1 & mk \\
\lambda \text{ (in } N\text{)} & 1+ta^{T}1_{k}+tb^{T}1_{m} & 1+ta^{T}1_{k} &
1+tb^{T}1_{m} & 1 \\
z & \alpha _{1}\beta _{1} & \alpha _{1}\beta _{2} & \alpha _{2}\beta _{1} &
\alpha _{2}\beta _{2}
\end{array}
$$
Explicit calculation of the $4\times 4$ determinant gives the expression for $f$ given
above. Finally, division by the product $\left( \prod a_{i}\right) ^{m}\left( \prod
b_{i}\right) ^{k}$ to convert from $\left\vert N(mk)\right\vert $ to $
\left\vert M(mk)\right\vert $ would wrongly include the last diagonal element $
a_{k}b_{m}$ of $D_{a}\otimes D_{b}$, which the deletion removes from the
congruence, so the result is multiplied by $a_{k}b_{m}$ to compensate, giving
the final result shown.
[1] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press,
1985, Thm. 4.3.8, p. 185.
[2] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University
Press, 1991, Thm. 4.4.5, p. 268.