0 votes · 1 answer · 59 views

PyTorch supports Intel GPU through torch.xpu, but PyTorch Lightning does not currently have built-in XPU accelerator support. Because NeuralForecast uses Lightning under the hood, that also blocks ...
asked by Marek Ozana
4 votes · 2 answers · 167 views

I am using PyTorch Lightning to train my model. I use the Lightning ModelCheckpoint callback with the following settings: ModelCheckpoint(dirpath="path/to/dir", monitor="...
asked by JacobM
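Since the excerpt above is truncated, a typical shape for the ModelCheckpoint setup it describes might look like the following sketch (the monitored metric name, mode, and filename pattern are assumptions, not taken from the question):

```python
# Hedged sketch of a common ModelCheckpoint configuration: save the best
# checkpoints ranked by a logged validation metric. All values here are
# illustrative placeholders.
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(
    dirpath="path/to/dir",            # where .ckpt files are written
    monitor="val_loss",               # must match a key logged via self.log(...)
    mode="min",                       # lower val_loss is better
    save_top_k=3,                     # keep only the 3 best checkpoints
    filename="{epoch}-{val_loss:.3f}",
)

# The callback is then passed to the Trainer:
# trainer = pl.Trainer(callbacks=[checkpoint_cb])
```

A frequent pitfall with this callback is a `monitor` key that does not match any metric actually logged in `validation_step`, in which case no "best" checkpoint is tracked.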
1 vote · 0 answers · 107 views

I’m using PyTorch Lightning and trying to implement a simple callback. The code works at runtime, but the ty type checker reports invalid-method-override errors for on_train_start and on_train_end. ...
asked by Nasrul Huda
0 votes · 1 answer · 141 views

I'm trying to load a pre-trained PyTorch Lightning model from the DiffProtect repository (published in 2023) in Google Colab, but I'm encountering a numpy compatibility error. Environment: Google ...
asked by Avi
Best practices
0 votes · 0 replies · 33 views

I work with multiple datasets and I repeat the same preprocessing for every dataset. A convenient way of working with multiple datasets in PyTorch is to use the ...
asked by exch_cmmnt_memb
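The excerpt is cut off before naming the mechanism, but the pattern it describes can be sketched framework-free: put the repeated preprocessing in one base class and let each dataset supply only its raw data. (With PyTorch Lightning the same idea is usually expressed as a shared LightningDataModule base class; all names below are illustrative.)

```python
# One place for the shared preprocessing; each dataset subclass only
# provides its raw samples, so the transform is never duplicated.
class PreprocessedDataset:
    def __init__(self):
        self.samples = [self._preprocess(s) for s in self.raw_samples()]

    def raw_samples(self):
        raise NotImplementedError  # each dataset overrides this

    @staticmethod
    def _preprocess(x):
        # stand-in for the repeated preprocessing (scaling, tokenizing, ...)
        return x * 2


class DatasetA(PreprocessedDataset):
    def raw_samples(self):
        return [1, 2, 3]


class DatasetB(PreprocessedDataset):
    def raw_samples(self):
        return [10, 20]
```

Changing the preprocessing then means editing one method instead of every dataset.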
0 votes · 0 answers · 59 views

I'm fine-tuning T5-small using PyTorch Lightning and encountering a strange issue during validation and test steps. The Problem: During validation_step and test_step, model.generate() consistently ...
asked by GeraniumCat
6 votes · 1 answer · 529 views

My project uses PyTorch and Lightning. Since PyTorch is system-dependent, users need to install it manually for their platform, using the platform-specific pip command provided by the PyTorch ...
asked by SRobertJames
1 vote · 1 answer · 123 views

I've been trying to train some basic models using PyTorch Lightning on an M4 Max Mac Studio. While the training itself goes without a hitch, there appears to be a problem when attempting to terminate ...
asked by Rangumi
0 votes · 0 answers · 104 views

I'm conducting research with temporal graph data using PyTorch Geometric. I'm running into memory-usage issues when building PyG data in dense format (with to_dense_batch() and to_dense_adj()). I have ...
asked by Vincent Tsai
0 votes · 1 answer · 35 views

I'm implementing a differentially private recommendation system using PyTorch Lightning and Opacus, but I'm encountering a RecursionError during training. Here's my setup: Problem When I run my ...
asked by drey_1
0 votes · 1 answer · 193 views

Description I'm working with LightningDataModule and wanted to ensure that a method (_after_init) runs only once after full initialization, regardless of subclassing. For that, I implemented a custom ...
asked by Aditya Khedekar
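The "run `_after_init` exactly once after full initialization, regardless of subclassing" behavior the question describes is commonly achieved with a metaclass. A minimal, framework-free sketch of that technique (the question's actual LightningDataModule code is truncated, so the class names here are illustrative):

```python
# A metaclass's __call__ runs once per instantiation, *after* the most
# derived __init__ chain has completed -- so _after_init fires exactly
# once even when subclasses extend __init__.
class AfterInitMeta(type):
    def __call__(cls, *args, **kwargs):
        obj = super().__call__(*args, **kwargs)  # full __init__ chain runs here
        obj._after_init()
        return obj


class Base(metaclass=AfterInitMeta):
    def __init__(self):
        self.calls = []

    def _after_init(self):
        self.calls.append("after_init")


class Child(Base):
    def __init__(self):
        super().__init__()
        self.calls.append("child_init")
```

Instantiating `Child()` records `child_init` before `after_init`, confirming `_after_init` waited for the whole constructor chain.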
1 vote · 0 answers · 111 views

I'm using a PyTorch Lightning Trainer with a LightningModule. I create the trainer with: trainer = pl.Trainer(max_epochs=3) Each training epoch has 511 steps (total = 1533) and each validation epoch has 127 steps. I use ...
asked by user3668129
1 vote · 1 answer · 123 views

I am trying to log the loss and AUC for all 3 of my datasets: train, validation, and test. The datamodule defines the 3 loaders, and I finally invoke the model as: trainer.fit(model, datamodule) trainer....
asked by Apurva
0 votes · 1 answer · 59 views

I'm using an IterableDataset because I have massive amounts of data. Since an IterableDataset does not store all data in memory, we cannot directly compute the min/max of the entire dataset before ...
asked by Saffy
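One common workaround for the problem sketched above (assumed here, since the excerpt is truncated before stating the goal) is a first streaming pass that accumulates min/max in O(1) memory, followed by a normalization pass using those stats:

```python
# Pass 1: fold min/max over the stream without materializing it.
def streaming_min_max(stream):
    lo, hi = float("inf"), float("-inf")
    for x in stream:
        lo, hi = min(lo, x), max(hi, x)
    return lo, hi


# Pass 2: lazily min-max normalize using the accumulated stats.
def normalize(stream, lo, hi):
    span = (hi - lo) or 1.0  # guard against constant data
    for x in stream:
        yield (x - lo) / span
```

With a PyTorch IterableDataset this means iterating the dataset twice (the iterator is recreated per pass); when two passes are too expensive, stats from a sample or running estimates are the usual fallback.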
2 votes · 3 answers · 458 views

In the configuration management library Hydra, it is possible to only partially instantiate classes defined in configuration using the _partial_ keyword. The library explains that this results in a ...
asked by Felix Benning
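Hydra's `_partial_: true` makes `hydra.utils.instantiate` return a `functools.partial` with the configured kwargs bound instead of a constructed object, so missing arguments can be supplied later. A plain-Python sketch of that behavior (the class and config values here are illustrative, not from the question):

```python
# Equivalent of a config like:
#   _target_: SGD
#   _partial_: true
#   lr: 0.1
# which instantiate() would turn into functools.partial(SGD, lr=0.1).
from functools import partial


class SGD:
    def __init__(self, params, lr):
        self.params, self.lr = params, lr


make_sgd = partial(SGD, lr=0.1)  # "partially instantiated" optimizer

# Later, once the missing argument exists (e.g. model parameters):
opt = make_sgd(params=[1.0, 2.0])
```

This is why `_partial_` is convenient for objects like optimizers, whose final arguments only exist after the model is built.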
