Best practices
0 votes · 0 replies · 39 views

I am training WGAN-GP on the EuroSAT dataset, split into train/val/test sets of 18,900/4,050/4,050 images. Since FID scores are widely used to evaluate GANs for image generation, I based my hyperparameter search on ...
asked by oskocak-cell
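The excerpt suggests ranking hyperparameter configurations by FID against the validation split. A minimal sketch of that idea, assuming PyTorch and torchmetrics (the loader contents, latent size, and generator output range below are illustrative assumptions, not the asker's code):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

def fid_for_config(generator, val_loader, latent_dim=128, n_batches=64, device="cuda"):
    """Score one trained generator by FID against real validation images."""
    fid = FrechetInceptionDistance(feature=2048).to(device)
    for real_batch in val_loader:                 # assumed: uint8 RGB tensors of shape (N, 3, H, W)
        fid.update(real_batch.to(device), real=True)
    generator.eval()
    with torch.no_grad():
        for _ in range(n_batches):
            noise = torch.randn(64, latent_dim, device=device)
            fake = generator(noise)               # assumed tanh output in [-1, 1]
            fake_u8 = ((fake + 1) * 127.5).clamp(0, 255).to(torch.uint8)
            fid.update(fake_u8, real=False)
    return fid.compute().item()                   # lower FID = better configuration
```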
Best practices
0 votes · 2 replies · 54 views

I use Google Colab to train my model. I have trained the model in fp32 and used random grid search for the hyperparameters. The training phase is slow; it runs at around 3.24 it/s. I want to ask: can I use ...
asked by Suebpong Pruttipattanapong
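The question is cut off, but a common follow-on is whether lower precision can speed training up. A hedged sketch of mixed-precision training with PyTorch AMP, assuming that is what is being asked (the model, optimizer, and data here are placeholders):

```python
import torch

# Placeholder model/optimizer; only the AMP wrapping is the point of this sketch.
model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()            # scales the loss to avoid fp16 underflow

def train_step(x, y):
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():             # forward pass runs in mixed precision
        loss = loss_fn(model(x.cuda()), y.cuda())
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```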
Advice
0 votes · 0 replies · 37 views

I'm trying to reproduce an ML pipeline (built in SAS Viya Model Studio, but the question is general) consisting of: Robust PCA with automatic/optimal component selection; XGBoost with auto-tuned ...
asked by Grace Gonzalez
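A hedged sketch of one way to approximate that pipeline in Python, with ordinary PCA standing in for Robust PCA and RandomizedSearchCV standing in for the auto-tuning (all ranges and names are illustrative, not the original SAS configuration):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95, svd_solver="full")),   # keep components for 95% variance
    ("xgb", XGBClassifier(eval_metric="logloss")),
])

param_distributions = {
    "pca__n_components": [0.90, 0.95, 0.99],
    "xgb__n_estimators": [200, 400, 800],
    "xgb__max_depth": [3, 5, 7],
    "xgb__learning_rate": [0.01, 0.05, 0.1],
}
search = RandomizedSearchCV(pipe, param_distributions, n_iter=20, cv=5, scoring="roc_auc")
# search.fit(X_train, y_train)   # X_train / y_train are placeholders
```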
1 vote · 1 answer · 139 views

I'm using Optuna to optimize LightGBM hyperparameters, and I'm running into an issue with the variability of best_iteration across different random seeds. Current setup: I train multiple models with ...
asked by invalid syntax
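A minimal sketch of the kind of setup described: lgb.train with early stopping inside an Optuna objective, repeated over a few seeds so the spread of best_iteration is visible (the data, ranges, and seed list are placeholders, not the asker's configuration):

```python
import numpy as np
import optuna
import lightgbm as lgb
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=20, random_state=0)   # stand-in data
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def objective(trial):
    params = {
        "objective": "regression",
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.2, log=True),
        "num_leaves": trial.suggest_int("num_leaves", 15, 127),
        "verbosity": -1,
    }
    scores, best_iters = [], []
    for seed in (0, 1, 2):                       # repeat to expose seed-to-seed variability
        params["seed"] = seed
        booster = lgb.train(
            params,
            lgb.Dataset(X_tr, label=y_tr),
            num_boost_round=2000,
            valid_sets=[lgb.Dataset(X_val, label=y_val)],
            callbacks=[lgb.early_stopping(stopping_rounds=50, verbose=False)],
        )
        best_iters.append(booster.best_iteration)
        scores.append(booster.best_score["valid_0"]["l2"])
    trial.set_user_attr("mean_best_iteration", float(np.mean(best_iters)))
    return float(np.mean(scores))

study = optuna.create_study(direction="minimize")
# study.optimize(objective, n_trials=30)
```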
0 votes · 0 answers · 53 views

I'm running Optuna to tune hyperparameters for a TabM regression model (10 trials) on Kaggle (GPU: Tesla P100) to minimize RMSE. The optimization runs fine (all trials complete), but right after ...
asked by GursimranSe
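The excerpt cuts off before the failure, so the sketch below is only a generic guess at the surrounding code: an Optuna study minimizing RMSE, with explicit GPU memory cleanup per trial and retrieval of the best parameters afterwards (the objective body and trial count are placeholders, not the asker's TabM setup):

```python
import gc
import optuna
import torch

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    # ... build and train the model here, compute validation RMSE ...
    rmse = abs(lr - 1e-3) * 1000          # placeholder score so the sketch runs
    # Free GPU memory between trials; long studies on Kaggle GPUs can otherwise run out of memory.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return rmse

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
print(study.best_trial.number, study.best_params)   # inspect results after the study
```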
0 votes · 0 answers · 136 views

I am using Optuna for hyperparameter tuning. I get messages as shown below: Trial 15 finished with value: 6.226334123011727 and parameters: {'iterations': 1100, 'learning_rate': 0.04262148853587423, '...
asked by Quiescent (1,196 rep)
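Those lines are Optuna's default per-trial INFO logs. If the (truncated) question is about quieting or reshaping them, which is an assumption on my part, the logging module and a trial callback are the usual levers:

```python
import optuna

# Show only warnings and errors from Optuna instead of one INFO line per trial.
optuna.logging.set_verbosity(optuna.logging.WARNING)

# Optionally print a compact summary per trial via a callback instead.
def log_trial(study, trial):
    print(f"trial {trial.number}: value={trial.value:.4f} params={trial.params}")

study = optuna.create_study(direction="minimize")
study.optimize(lambda t: (t.suggest_float("x", -10, 10) - 2) ** 2,
               n_trials=5, callbacks=[log_trial])
```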
2 votes · 1 answer · 158 views

I want to undersample 3 cross-validation folds from a dataset, using, say, RandomUnderSampler from imblearn, and then optimize the hyperparameters of various GBMs using those undersampled folds as ...
asked by Sole Galli (1,144 rep)
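A hedged sketch of one common way to combine the two steps: put RandomUnderSampler inside an imblearn Pipeline so the undersampling is re-fitted on the training part of each of the 3 folds during the search (the estimator, grid, and data are illustrative; the asker may instead want to pre-materialize the undersampled folds):

```python
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)  # stand-in data

pipe = Pipeline([
    ("under", RandomUnderSampler(random_state=0)),   # applied only to the training split of each fold
    ("gbm", GradientBoostingClassifier(random_state=0)),
])

grid = {
    "gbm__n_estimators": [100, 300],
    "gbm__max_depth": [2, 3],
    "gbm__learning_rate": [0.05, 0.1],
}
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
search = GridSearchCV(pipe, grid, cv=cv, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```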
1 vote · 0 answers · 88 views

I used TPESampler and set it as follows while optimizing with Optuna: sampler=optuna.samplers.TPESampler(multivariate=True, n_startup_trials=10, seed=None). But during the 10 startup trials, it ...
asked by YYYC (11 rep)
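For reference, a minimal study with that exact sampler configuration. During the first n_startup_trials Optuna samples parameters randomly; the TPE model only takes over afterwards, which may be relevant to whatever the truncated excerpt describes (the objective is a toy placeholder):

```python
import optuna

sampler = optuna.samplers.TPESampler(multivariate=True, n_startup_trials=10, seed=None)
study = optuna.create_study(direction="minimize", sampler=sampler)

def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    y = trial.suggest_float("y", -10.0, 10.0)
    return (x - 2.0) ** 2 + (y + 1.0) ** 2   # toy objective

# Trials 0-9 are random startup trials; TPE-based proposals start at trial 10.
study.optimize(objective, n_trials=30)
```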
0 votes · 0 answers · 39 views

My impression is that every trial is run for one step, then some trials are pruned and the remaining ones continue for another step, and so on. However, the logs show: Trial 0 completed, Trial 1 completed, ...
asked by Baymax Lim
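Assuming this is about Optuna's pruners (the excerpt does not name the library; schedulers such as Ray Tune's ASHA behave differently), trials are not interleaved step by step: each trial runs all of its steps before the next trial starts, and pruning happens inside a trial through trial.report / trial.should_prune, as in this sketch (the training function is a synthetic stand-in):

```python
import optuna

def train_one_step(lr, step):
    """Stand-in for one epoch of training; returns a synthetic validation loss."""
    return (lr - 0.01) ** 2 + 1.0 / (step + 1)

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    loss = float("inf")
    for step in range(20):
        loss = train_one_step(lr, step)
        trial.report(loss, step)          # intermediate value for the pruner
        if trial.should_prune():          # compares against earlier trials' curves
            raise optuna.TrialPruned()
    return loss

study = optuna.create_study(direction="minimize",
                            pruner=optuna.pruners.MedianPruner(n_warmup_steps=5))
study.optimize(objective, n_trials=20)    # logs show each trial finishing (or pruned) in order
```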
0 votes · 1 answer · 96 views

If I am using stratified 10-fold cross-validation for classification/regression tasks, where do I need to define the logic for hyperparameter tuning using scikit-learn or Wandb? Should it be inside the loop or outside? I ...
asked by Ayesha Kiran
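One common answer, offered here only as a sketch, is nested cross-validation: the tuner runs inside each outer stratified fold, so the outer loop measures how well the whole tuning procedure generalizes (dataset and grid below are placeholders):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)      # stand-in dataset

inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)   # tuning folds
outer_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # evaluation folds

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
search = GridSearchCV(pipe, {"logisticregression__C": [0.01, 0.1, 1, 10]}, cv=inner_cv)

# The tuning logic lives inside each outer fold: cross_val_score refits `search`
# (including its inner grid search) on every outer training split.
scores = cross_val_score(search, X, y, cv=outer_cv)
print(scores.mean(), scores.std())
```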
0 votes · 1 answer · 69 views

If you use cosine decay, for example, and you have a starting learning rate and a final learning rate, can you tune those hyperparameters so that the final learning rate is some ratio of the starting learning ...
asked by ict (1 rep)
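One way, sketched here under the assumption of a Keras schedule and an Optuna search, is to parameterize the schedule by a base learning rate plus a ratio rather than two independent rates; CosineDecay's alpha argument is exactly the final/initial ratio (the ranges and decay_steps are illustrative):

```python
import optuna
import tensorflow as tf

def objective(trial):
    base_lr = trial.suggest_float("base_lr", 1e-4, 1e-1, log=True)
    ratio = trial.suggest_float("final_lr_ratio", 0.0, 0.1)     # final_lr = ratio * base_lr

    schedule = tf.keras.optimizers.schedules.CosineDecay(
        initial_learning_rate=base_lr,
        decay_steps=10_000,
        alpha=ratio,                 # final LR expressed as a fraction of the initial LR
    )
    optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
    # ... build, compile, and fit a model with this optimizer, return validation loss ...
    return abs(base_lr - 0.01) + ratio   # placeholder so the sketch is runnable

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
```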
2 votes · 1 answer · 76 views

I am trying to use keras-tuner to tune hyperparameters, like: !pip install keras-tuner --upgrade; import keras_tuner as kt; from tensorflow.keras.models import Sequential; from tensorflow.keras.layers ...
asked by ThomasIsCoding
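A small, self-contained keras-tuner example in the spirit of that excerpt (layer sizes, search ranges, and the random dataset are placeholders, not the asker's model):

```python
import keras_tuner as kt
import numpy as np
import tensorflow as tf

def build_model(hp):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=5, directory="kt_dir", project_name="demo")

X = np.random.rand(500, 20).astype("float32")       # stand-in data
y = np.random.randint(0, 2, size=(500,))
tuner.search(X, y, epochs=3, validation_split=0.2)
best_hp = tuner.get_best_hyperparameters(1)[0]
print(best_hp.values)
```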
0 votes · 1 answer · 100 views

I'm trying to forecast a time series using the Prophet model in Python, and I would like to find the optimal tuning parameters (like changepoint_range, changepoint_prior_scale, ...
asked by Arun Raaj Rajendhiran
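A hedged sketch of the usual grid-search-plus-cross-validation recipe for Prophet, scoring each parameter combination by RMSE (the CSV path, horizon windows, and grid values are placeholders):

```python
import itertools
import pandas as pd
from prophet import Prophet
from prophet.diagnostics import cross_validation, performance_metrics

df = pd.read_csv("series.csv")          # placeholder; must have 'ds' and 'y' columns

grid = {
    "changepoint_prior_scale": [0.001, 0.05, 0.5],
    "changepoint_range": [0.8, 0.9, 0.95],
}
results = []
for cps, cr in itertools.product(grid["changepoint_prior_scale"], grid["changepoint_range"]):
    m = Prophet(changepoint_prior_scale=cps, changepoint_range=cr).fit(df)
    df_cv = cross_validation(m, initial="730 days", period="90 days", horizon="180 days")
    rmse = performance_metrics(df_cv)["rmse"].mean()
    results.append({"changepoint_prior_scale": cps, "changepoint_range": cr, "rmse": rmse})

best = min(results, key=lambda r: r["rmse"])
print(best)
```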
-2 votes · 1 answer · 136 views

I am trying to tune some hyperparameters for my neural network for an image segmentation problem. I set up the tuner as simply as possible, but when I run my code I get the following error: 2025-02-...
asked by Adam Bencsik
0 votes · 1 answer · 91 views

I'm working on training a model that predicts which cache way to evict based on cache features, access information, etc. However, I have millions and millions of data samples. Thus, I cannot ...
asked by Saffy (13 rep)
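The constraint is truncated, but if it is that the full dataset will not fit in memory (an assumption), one standard workaround is incremental learning over chunks, for example with SGDClassifier.partial_fit (the file name, label column, class count, and chunk size below are placeholders):

```python
import pandas as pd
from sklearn.linear_model import SGDClassifier

N_WAYS = 8                                   # placeholder: number of cache ways / classes
clf = SGDClassifier(loss="log_loss")
classes = list(range(N_WAYS))                # partial_fit needs the full class list up front

# Stream the data in chunks instead of loading millions of rows at once.
for chunk in pd.read_csv("cache_traces.csv", chunksize=100_000):
    X = chunk.drop(columns=["evicted_way"]).to_numpy()   # placeholder feature columns
    y = chunk["evicted_way"].to_numpy()
    clf.partial_fit(X, y, classes=classes)
```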
