Increase efficiency of loops and element-wise operations in a PyTorch implementation

For any input symmetric matrix W with a zero diagonal, I have the following implementation in PyTorch. I was wondering whether it can be made more efficient.

P.S. Would the current implementation break backpropagation?

import torch

# Symmetric input matrix with a zero diagonal.
# Stored as float, since torch.inverse requires a floating-point input.
W = torch.tensor([[0,1,0,0,0,0,0,0,0],
                  [1,0,1,0,0,1,0,0,0],
                  [0,1,0,3,0,0,0,0,0],
                  [0,0,3,0,1,0,0,0,0],
                  [0,0,0,1,0,1,1,0,0],
                  [0,1,0,0,1,0,0,0,0],
                  [0,0,0,0,1,0,0,1,0],
                  [0,0,0,0,0,0,1,0,1],
                  [0,0,0,0,0,0,0,1,0]], dtype=torch.float32)

n = len(W)
C = torch.empty(n, n)
I = torch.eye(n)
for i in range(n):
    for j in range(n):
        # Zero out entry (i, j) and its symmetric counterpart (j, i).
        B = W.clone()
        B[i, j] = 0
        B[j, i] = 0

        # Invert the perturbed matrix n*I - B ...
        tmp = torch.inverse(n * I - B)

        # ... and keep only its (i, j) entry.
        C[i, j] = tmp[i, j]
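
For context, one direction I have sketched, but not benchmarked or checked for gradient correctness, is to build all n*n perturbed matrices at once and invert them in a single batched call. This is only a sketch: it assumes a PyTorch version where torch.linalg.inv accepts batched input, and it relies on W being symmetric with a zero diagonal, so that re-adding W[i, j] at positions (i, j) and (j, i) reproduces the zeroing done inside the loop.

# Sketch of a batched alternative (assumes batched torch.linalg.inv is available).
idx = torch.arange(n)

# A[i, j] starts out as n*I - W for every pair (i, j).
A = (n * torch.eye(n) - W).expand(n, n, n, n).clone()

# Re-add W[i, j] at positions (i, j) and (j, i) of matrix A[i, j]; because W is
# symmetric with a zero diagonal, this equals zeroing those entries of W
# before forming n*I - B, as the loop above does.
A[idx[:, None], idx[None, :], idx[:, None], idx[None, :]] += W
A[idx[:, None], idx[None, :], idx[None, :], idx[:, None]] += W

# Invert all n*n matrices in one batched call, then pick out entry (i, j)
# of the inverse computed for pair (i, j).
inv = torch.linalg.inv(A.reshape(n * n, n, n)).reshape(n, n, n, n)
C_fast = inv[idx[:, None], idx[None, :], idx[:, None], idx[None, :]]

# Sanity check against the loop version:
# assert torch.allclose(C, C_fast)

Note that C_fast is produced by indexing rather than by element-wise in-place writes into a pre-allocated tensor, which may also be relevant to the backpropagation question above.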