Timeline for Is my use of tf.distribute.MirroredStrategy and strategy.scope() correct and safe for multi-GPU training in Keras?
Current License: CC BY-SA 4.0
Post Revisions
4 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Jan 6 at 9:56 | comment added | Ahmed | | @Sagar Thank you so much for your help! Much appreciated! |
| Jan 5 at 6:22 | comment added | Sagar | | Hi @Ahmed, yes, your code is safe; keeping stateful objects like the model, optimizer, metrics, and checkpoints inside strategy.scope() is the correct and standard approach. One minor suggestion: use .batch(batch_size, drop_remainder=True) so the last uneven batch does not crash the GPUs, and add .prefetch(tf.data.AUTOTUNE) so your training does not wait for data (see the sketch below the timeline). |
| Dec 30, 2025 at 16:08 | audit | | | Suggested edits |
| Dec 27, 2025 at 22:27 | history asked | Ahmed | CC BY-SA 4.0 | |
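
To make the advice in Sagar's comment concrete, here is a minimal sketch of the pattern it describes: stateful objects created inside `strategy.scope()`, plus an input pipeline with `drop_remainder=True` and `prefetch`. The model architecture, batch size, and synthetic data below are illustrative assumptions, not part of the original question.

```python
import tensorflow as tf

# Illustrative assumption: global batch size, split evenly across replicas.
batch_size = 64

# MirroredStrategy replicates the model on all visible GPUs and keeps
# variables in sync via all-reduce after each step.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Stateful objects (model, optimizer, metrics) must be created inside
# strategy.scope() so their variables are mirrored across devices.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),   # assumed feature width, for illustration
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Synthetic data stands in for the real dataset.
x = tf.random.normal((1_000, 20))
y = tf.random.normal((1_000, 1))

# drop_remainder=True avoids a smaller final batch that cannot be split
# evenly across replicas; prefetch overlaps input loading with training.
dataset = (
    tf.data.Dataset.from_tensor_slices((x, y))
    .shuffle(1_000)
    .batch(batch_size, drop_remainder=True)
    .prefetch(tf.data.AUTOTUNE)
)

# model.fit handles per-replica distribution of the dataset automatically.
model.fit(dataset, epochs=2)
```

Note that `model.fit` does not need to run inside the scope; only the creation of variables (model, optimizer, metrics, checkpoint objects) does.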