GZ0

A few hints for improving efficiency:

  • When W[i, j] == 0, B equals W, so tmp remains unchanged. In this case, the corresponding C values can be computed once outside the loop instead of repeatedly inside it.

  • torch.nonzero / torch.Tensor.nonzero can be used to obtain the indices of all non-zero values in a tensor.

  • Since W is symmetric, C is also symmetric and only half of its values need to be computed.

  • All repeated computation can be moved out of the loop to improve efficiency.
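A quick illustration of the second hint, on a toy tensor (this is standard `torch.nonzero` behaviour: it returns a 2-D tensor with one row of indices per non-zero element, in row-major order):

```python
import torch

t = torch.tensor([[0., 1.],
                  [2., 0.]])
idx = t.nonzero()   # one row of indices per non-zero element
print(idx)          # tensor([[0, 1], [1, 0]])
```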

Improved code:

n = len(W)
nIW = n * torch.eye(n) - W   # n*I - W, shared by every iteration
nIB = nIW.clone()            # working copy that gets one pair zeroed at a time
C = nIB.inverse()            # covers all (i, j) with W[i, j] == 0 in one shot

for i, j in W.nonzero():
    if i < j:                # W is symmetric, so handle each pair once
        nIB[i, j] = nIB[j, i] = 0
        C[j, i] = C[i, j] = nIB.inverse()[i, j]
        nIB[i, j] = nIB[j, i] = nIW[i, j]   # restore the entry for the next pair

Further performance improvement may be achieved according to this.
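One concrete way to get that improvement (a sketch of mine, not part of the original answer, and the function name `offdiag_inverse_elements` is made up): zeroing the pair (i, j), (j, i) of nIW is a rank-2 update, so each required element of the inverse can be obtained from the precomputed full inverse via the Woodbury identity, replacing the O(n^3) re-inversion per edge with a 2x2 solve per edge.

```python
import torch

def offdiag_inverse_elements(W):
    """For each non-zero pair (i, j) of symmetric W, compute element (i, j) of
    inv(n*I - W with that pair zeroed) without re-inverting the full matrix."""
    n = W.shape[0]
    A = n * torch.eye(n, dtype=W.dtype) - W
    C0 = A.inverse()                 # full inverse, computed once
    C = C0.clone()
    for idx in W.nonzero():
        i, j = idx[0].item(), idx[1].item()
        if i < j:
            w = W[i, j].item()
            # Zeroing entries (i, j), (j, i) of A adds w*(e_i e_j^T + e_j e_i^T),
            # a rank-2 update.  Woodbury identity:
            #   inv(A + U K U^T) = C0 - C0 U inv(inv(K) + U^T C0 U) U^T C0
            # with U = [e_i, e_j] and K = [[0, w], [w, 0]].
            Kinv = torch.tensor([[0.0, 1.0 / w], [1.0 / w, 0.0]], dtype=W.dtype)
            M = Kinv + C0[[i, j]][:, [i, j]]   # inv(K) + U^T C0 U  (2x2)
            left = C0[i, [i, j]]               # row i of C0 U
            right = C0[[i, j], j]              # column j of U^T C0
            C[i, j] = C[j, i] = C0[i, j] - left @ torch.linalg.solve(M, right)
    return C
```

For a symmetric W with zero diagonal and entries in [0, 1), n*I - W stays diagonally dominant, so every inverse in sight exists.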

As for backpropagation, I have no idea how it can be done on elements of matrix inverses.
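One option worth noting here: torch.inverse is itself differentiable, so if the pair-zeroing is expressed with out-of-place operations (a mask multiply rather than in-place writes on tensors in the autograd graph), gradients through elements of the inverse come for free. A minimal sketch, where the mask construction is my own illustration:

```python
import torch

n = 4
W = torch.rand(n, n)
W = (W + W.T) / 2       # make W symmetric
W.fill_diagonal_(0)
W.requires_grad_()

i, j = 0, 1
mask = torch.ones(n, n)
mask[i, j] = mask[j, i] = 0           # zero out one symmetric pair

nIB = (n * torch.eye(n) - W) * mask   # no in-place writes on W -> differentiable
c_ij = nIB.inverse()[i, j]            # torch.inverse supports autograd
c_ij.backward()                       # W.grad now holds d c_ij / d W
```

Note that W.grad[i, j] comes out exactly zero, since the masked entry never influences nIB.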
