0 votes
0 answers
59 views

I am implementing Soft Actor-Critic (SAC) and I am confused about the policy update step. What I want is: when I update the policy (actor), I do not want the parameters of the Q-networks (critics) to ...
user32396289
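A common way to handle this (a minimal PyTorch sketch, not taken from the question; the `Linear` networks, shapes, and optimizer settings are illustrative assumptions) is to toggle `requires_grad` on the critic parameters around the actor update, so gradients flow *through* the critic into the actor while the critic weights themselves receive none:

```python
import torch

# Illustrative stand-ins for the real actor/critic networks.
actor = torch.nn.Linear(4, 2)
critic = torch.nn.Linear(6, 1)
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

state = torch.randn(8, 4)

# Freeze critic weights so the actor update cannot touch them and
# backward() skips computing their gradients entirely.
for p in critic.parameters():
    p.requires_grad_(False)

action = actor(state)
actor_loss = -critic(torch.cat([state, action], dim=1)).mean()
actor_opt.zero_grad()
actor_loss.backward()   # gradients flow *through* the critic into the actor
actor_opt.step()

# Unfreeze before the next critic update.
for p in critic.parameters():
    p.requires_grad_(True)
```

Since `actor_opt` only holds `actor.parameters()`, the freeze is not strictly required for correctness, but it avoids wasted gradient computation and accidental `.grad` accumulation on the critic.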
1 vote
1 answer
157 views

import gymnasium as gym; import dmc2gym; gymenv = gym.make("CartPole-v0"); gymenv.reset(seed=42, options=None)  # this works fine; dmcenv = dmc2gym.make(domain_name="quadruped...
Xingrui Zhuang
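The `reset(seed=..., options=...)` signature is Gymnasium's; dmc2gym targets the older Gym API, where `reset()` takes no keyword arguments and returns only the observation. One workaround is a small compatibility shim (the `OldAPICompat` class below is hypothetical, sketched for any old-style env object):

```python
class OldAPICompat:
    """Adapt an old-Gym-style env (reset() takes no kwargs, returns only obs)
    to the Gymnasium signature reset(seed=..., options=...) -> (obs, info)."""

    def __init__(self, env):
        self.env = env

    def reset(self, *, seed=None, options=None):
        if seed is not None and hasattr(self.env, "seed"):
            self.env.seed(seed)      # old-style seeding, if the env supports it
        obs = self.env.reset()       # old API: no kwargs, returns obs only
        return obs, {}               # new API: (obs, info)

    def __getattr__(self, name):     # delegate everything else to the inner env
        return getattr(self.env, name)
```

Usage would be `env = OldAPICompat(dmc2gym.make(...))`, after which `env.reset(seed=42, options=None)` matches the Gymnasium call that works for `gym.make`.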
1 vote
0 answers
78 views

I'm trying to train a PPO agent using stable-baselines3 in a simple physics-based projectile environment built with Pymunk. The goal is to find the launch angle that makes the projectile land as close ...
Jhj • 43
0 votes
0 answers
19 views

I'm using the Basilisk spacecraft dynamics simulator (v2.2.1) on macOS (Apple M1, Python 3.9, SWIG 4.0.2). I'm running a reinforcement learning wrapper (bsk_rl) around a Basilisk-based simulation (...
jeena john
0 votes
1 answer
74 views

As the code and its output show, the deepcopy of this environment does not copy aspects of the environment such as the 'action_space' and the attribute 'continuous'. How can this be resolved? ...
PerceptualRobotics
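Without seeing the environment, one frequent cause is an attribute that `deepcopy` cannot traverse (e.g. a native simulator handle), which pushes authors toward a custom `__deepcopy__` that silently skips fields. A sketch of a `__deepcopy__` that copies the plain attributes (`action_space`, `continuous`) while sharing only the genuinely uncopyable handle; all names here are illustrative, not from the question:

```python
import copy

class Env:
    def __init__(self):
        self.continuous = True
        self.action_space = [0, 1, 2]
        self._handle = object()      # stand-in for an uncopyable native resource

    def __deepcopy__(self, memo):
        # Copy every copyable attribute; share (or re-create) what is not.
        clone = self.__class__.__new__(self.__class__)
        memo[id(self)] = clone
        for k, v in self.__dict__.items():
            if k == "_handle":
                clone._handle = self._handle          # shared, not copied
            else:
                setattr(clone, k, copy.deepcopy(v, memo))
        return clone
```

The key point is that `__deepcopy__` must enumerate `self.__dict__` rather than hand-pick fields, so attributes like `action_space` cannot be forgotten when the class grows.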
1 vote
0 answers
119 views

I'm trying to make my own checkers bot to teach myself reinforcement learning. I decided to try using Gymnasium as a framework and have been following the tutorials at https://gymnasium.farama....
kitfox • 5,708
1 vote
0 answers
70 views

I'm currently working with the code of someone else to replicate some of their work (https://github.com/romanlee6/multi_LLM_comm). This code contains a custom OpenAI gym environment. I've managed to ...
Emma van Zoelen
0 votes
1 answer
77 views

I am trying to set up a virtual environment using Conda to code a lab activity regarding RL, but it is proving to be quite a nightmare due to incompatibilities among different library versions. The ...
Javier • 189
1 vote
1 answer
150 views

My working env: System: Ubuntu 18.04. Versions: Python 3.6/3.7/3.8 (3 environments, all with the same result), gym 0.25.2, gym-retro 0.8.0. What did I do? I followed this guide to install gym-retro, and I ...
Chuckie Zhu
0 votes
0 answers
49 views

I have a custom gym environment with 3 continuous action dimensions and 1 discrete one. I would like to apply a reinforcement-learning algorithm, but I am not sure which to use. Below you can find ...
oakca • 1,588
10 votes
4 answers
11k views

I'm a beginner trying to run this simple code, but it raises the exception "module 'numpy' has no attribute 'bool8'", as you can see in the screenshot below. Gym version is 0.26.2 & ...
Jitender • 345
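`np.bool8` was deprecated in NumPy 1.24 and removed in NumPy 2.0, while Gym 0.26.2 still references it in its environment checker. The clean fixes are pinning `numpy<2` or migrating to Gymnasium (the maintained fork); a stopgap shim restores the alias before `gym` is imported:

```python
import numpy as np

# np.bool8 was an alias of np.bool_ until its removal in NumPy 2.0.
# Restoring it lets old Gym code run unchanged; do this BEFORE `import gym`.
if not hasattr(np, "bool8"):
    np.bool8 = np.bool_
```

This is a workaround, not a fix: the alias is gone upstream for a reason, so treat it as a bridge until the code moves to Gymnasium.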
0 votes
1 answer
690 views

I ran "pip install gym==0.21.0" and got: Collecting gym==0.21.0 Using cached gym-0.21.0.tar.gz (1.5 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error ...
cy lin • 13
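A commonly reported cause: gym 0.21.0's `setup.py` contains a dependency specifier (`opencv-python>=3.`) that newer setuptools versions reject as invalid, so the metadata step fails. The usual workaround (exact pins below are one known-working combination, not the only one) is to build with an older setuptools/wheel:

```shell
# Downgrade the build tooling to versions that accept gym 0.21's metadata,
# then retry the install.
pip install "setuptools==65.5.0" "wheel==0.38.4"
pip install gym==0.21.0
```

Longer term, moving to `gymnasium` (which packages correctly on current tooling) avoids the pin entirely.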
0 votes
1 answer
68 views

I'm trying to integrate a RecurDyn simulation model with Gym for reinforcement learning training. The simulation model communicates with RecurDyn through an FMU file (FMI 2.0) in Python. However, when I'...
sequoia00 • 111
0 votes
1 answer
147 views

I have made a game simulation with a REST API available, and I would like to create a reinforcement learning AI in Python using gym from OpenAI. So, is it fine to make API calls inside the step ...
pandoux • 49
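Blocking HTTP calls inside `step()` are legal as far as Gym is concerned; they just make each transition cost a network round-trip, which dominates training wall-clock time. A sketch with an injected client so the env stays unit-testable (the `RemoteGameEnv` class, the `/reset` and `/step` routes, and the payload fields are all invented for illustration; `client` could be a thin wrapper over `requests` in production):

```python
class RemoteGameEnv:
    """Sketch of a Gym-style env whose dynamics live behind a REST API.
    `client` is any object exposing post(path, json) -> dict."""

    def __init__(self, client):
        self.client = client

    def reset(self):
        state = self.client.post("/reset", json={})
        return state["obs"], {}

    def step(self, action):
        # A blocking network call per step is fine for turn-based games;
        # if training is slow, batch requests or run several envs in parallel.
        state = self.client.post("/step", json={"action": int(action)})
        return state["obs"], state["reward"], state["done"], False, {}
```

Injecting the client also means the environment can be exercised against a fake server in tests, with the real HTTP client swapped in only for training runs.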
0 votes
1 answer
120 views

I am trying to use the pokerenv library for a reinforcement learning project, but even the example code provided by the documentation itself produces the following error: ...
darth momin
