I'm working on a project that heavily relies on computation graph manipulations but isn't directly in the field of machine learning. However, we are using PyTorch due to its flexibility and support for dynamic computation graphs.
Our challenge lies in visualizing a model that doesn’t have traditional input parameters, as its functionality is not tied to optimizing weights in the usual ML sense. The forward pass involves operations over internal state variables, and we want to debug the graph to ensure all operations remain connected and gradients propagate correctly.
Most visualization tools I’ve come across (like torchviz) require a forward method with an input parameter to trace the graph. Is there a way to visualize or debug the computation graph of such a model where the operations are driven entirely by internal states?
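For concreteness, here's a minimal stand-in for our setup (the class and variable names are hypothetical, not our real code). Calling the model with no arguments still builds an autograd graph rooted at the output's `grad_fn`, so one workaround I'm aware of is walking that graph manually via `grad_fn.next_functions` instead of tracing from an input:

```python
import torch

class StateDrivenModel(torch.nn.Module):
    """Hypothetical model: forward takes no inputs, only internal state."""
    def __init__(self):
        super().__init__()
        self.state = torch.nn.Parameter(torch.randn(4))

    def forward(self):
        # Operations driven entirely by internal state.
        return (self.state * 2).sin().sum()

def walk_graph(fn, depth=0, seen=None):
    """Recursively print the autograd graph below a grad_fn node."""
    seen = set() if seen is None else seen
    if fn is None or fn in seen:
        return
    seen.add(fn)
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        walk_graph(next_fn, depth + 1, seen)

model = StateDrivenModel()
out = model()          # no input argument needed
walk_graph(out.grad_fn)  # prints SumBackward0 -> SinBackward0 -> MulBackward0 -> AccumulateGrad
```

This at least confirms every operation is connected back to the leaf parameters (the `AccumulateGrad` nodes), but it's a long way from a proper visualization, which is why I'm asking about better tooling.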
Any pointers, best practices, or alternative tools to explore would be greatly appreciated. Thanks!