
Let $C: \mathbb{C}^m \to \mathbb{C}^m$ be a sparse Hermitian operator that I can only access through matrix-vector products. Using Chebyshev filtering, I obtain the $k$ largest eigenvalues $\lambda_i$ and eigenvectors $X_i$, and split the operator into a deterministic and a stochastic part. Let \begin{equation*} C_{det} = \sum_{i = 1}^k \lambda_i | X_i \rangle \langle X_i |, \end{equation*} which is the truncated spectral representation. The remainder is then approximated stochastically: \begin{equation*} C | v \rangle \approx C_{det} |v \rangle + \frac{1}{N} \sum_{n=1}^N \langle r_n | (C- C_{det})v \rangle \, r_n. \end{equation*} I'm taking this idea from random-vector approximations of smooth functions: if $r_z \in \mathbb{R}^m$ has entries drawn from $[-1, 1]$ with an appropriate normalization factor, then $E[r_z(i)\, r_z(j)] = \delta_{ij}$, and hence $E[\langle r_z | f \rangle r_z ] = f$. I'm trying to do the same thing for a matrix whose elements I don't actually have. I already have code for this and it converges nicely, but I want to understand whether the construction is mathematically sound, or whether the convergence of the stochastic part is a fluke.
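For concreteness, here is a minimal NumPy sketch of the construction (not my actual code; all sizes and names are illustrative). It uses a small dense Hermitian matrix as a stand-in for $C$, takes the top-$k$ eigenpairs directly in place of Chebyshev filtering, and estimates the remainder with Rademacher probes, i.e. entries $\pm 1$, which satisfy $E[r(i)\,r(j)] = \delta_{ij}$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, N = 100, 10, 20000  # dimension, retained eigenpairs, probe count (illustrative)

# Dense Hermitian stand-in for the sparse operator C; we pretend we can
# only touch it through matvec().
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
C = (A + A.conj().T) / 2
matvec = lambda v: C @ v

# Deterministic part: top-k eigenpairs by magnitude (stand-in for
# Chebyshev filtering), giving C_det = sum_i lam_i |X_i><X_i|.
w, V = np.linalg.eigh(C)
idx = np.argsort(np.abs(w))[-k:]
lam, X = w[idx], V[:, idx]
C_det = lambda v: X @ (lam * (X.conj().T @ v))

v = rng.standard_normal(m) + 1j * rng.standard_normal(m)
resid = matvec(v) - C_det(v)  # (C - C_det) v, the part estimated stochastically

# Rademacher probes: E[r r^T] = I, so E[<r | (C - C_det) v> r] = (C - C_det) v.
R = rng.choice([-1.0, 1.0], size=(N, m))   # each row is one probe r_n
est = (R * (R @ resid)[:, None]).mean(axis=0)

approx = C_det(v) + est
rel_err = np.linalg.norm(approx - matvec(v)) / np.linalg.norm(matvec(v))
baseline = np.linalg.norm(resid) / np.linalg.norm(matvec(v))  # error of C_det alone
print(baseline, rel_err)
```

Since the probe covariance is the identity, the stochastic term is an unbiased estimator of $(C - C_{det})v$; the catch is that its variance scales roughly like $\|(C - C_{det})v\|^2 \, m / N$, so the Monte Carlo error decays like $O(\sqrt{m/N})$ and the split only pays off when $C_{det}$ already captures most of $C v$.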
