For an input symmetric matrix W with zero diagonal, I have the following PyTorch implementation. I was wondering whether it can be improved in terms of efficiency.
P.S. Would the current implementation break backpropagation?
import torch

# Symmetric weight matrix with zero diagonal
W = torch.tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0],
                  [1, 0, 1, 0, 0, 1, 0, 0, 0],
                  [0, 1, 0, 3, 0, 0, 0, 0, 0],
                  [0, 0, 3, 0, 1, 0, 0, 0, 0],
                  [0, 0, 0, 1, 0, 1, 1, 0, 0],
                  [0, 1, 0, 0, 1, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 0, 0, 0, 0, 1, 0]])

n = len(W)
C = torch.empty(n, n)
I = torch.eye(n)

for i in range(n):
    for j in range(n):
        # Zero out the (i, j) / (j, i) pair, invert n*I - B,
        # and keep only the (i, j) entry of the inverse.
        B = W.clone()
        B[i, j] = 0
        B[j, i] = 0
        tmp = torch.inverse(n * I - B)
        C[i, j] = tmp[i, j]
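
For context, one direction I considered is batching all n*n inverses into a single call, roughly as in the sketch below (it reuses W, n and I from above, and assumes torch.linalg.inv with batched inputs and torch.meshgrid with indexing="ij" are available). I have not benchmarked it or checked whether it behaves the same for autograd:

# Sketch of a possible batched variant (untested):
# build every perturbed matrix n*I - B at once, then invert them in one batched call.
Wf = W.to(torch.float64)                          # inverse needs a floating-point dtype
A = (n * I.to(Wf.dtype) - Wf).expand(n, n, n, n).clone()

idx = torch.arange(n)
ii, jj = torch.meshgrid(idx, idx, indexing="ij")
# Undo the -W term at (i, j) and (j, i), i.e. emulate B[i, j] = B[j, i] = 0.
# (The diagonal of W is zero, so the i == j case just adds 0 twice.)
A[ii, jj, ii, jj] += Wf[ii, jj]
A[ii, jj, jj, ii] += Wf[jj, ii]

inv = torch.linalg.inv(A.reshape(n * n, n, n))    # one batched inverse
C_batched = inv.reshape(n, n, n, n)[ii, jj, ii, jj]

The trade-off is memory: it materializes n*n matrices of size n x n (so storage grows like n^4), in exchange for replacing the Python double loop with one batched linear-algebra call.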