1,004 questions
0
votes
0
answers
59
views
SAC Implementation
I am implementing Soft Actor-Critic (SAC) and I am confused about the policy update step.
What I want is:
When I update the policy (actor), I do not want the parameters of the Q-networks (critics) to ...
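The standard fix is to build the actor's optimizer from the actor's parameters only: gradients may flow *through* the critic when computing the actor loss, but the optimizer never touches the critic weights (in PyTorch, `actor_opt = Adam(actor.parameters())`; some implementations additionally toggle `requires_grad=False` on critic parameters during the policy step). A toy sketch of this idea in plain Python, with illustrative linear actor/critic rather than real networks:

```python
# Toy linear actor/critic illustrating the SAC policy step (illustrative only).
# Actor: a = w_a * s;  Critic: Q(s, a) = w_c * a.
# Actor loss: L = -Q(s, actor(s)) = -w_c * w_a * s.
w_a, w_c = 0.5, 2.0   # actor and critic parameters
s = 1.0               # a single state
lr = 0.1

# The gradient of L w.r.t. w_a flows THROUGH the critic (it uses w_c)...
grad_w_a = -w_c * s

# ...but only the actor parameter is updated; w_c is left untouched,
# exactly as when the actor optimizer holds actor.parameters() alone.
w_a -= lr * grad_w_a

print(w_a, w_c)  # actor weight changed, critic weight unchanged
```

The critic is still trained, of course, but in its own update step with its own optimizer.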
1
vote
1
answer
157
views
How can I properly add seed/options to a dmc2gym environment with Gymnasium? [closed]
import gymnasium as gym
import dmc2gym
gymenv = gym.make("CartPole-v0")
gymenv.reset(seed=42, options=None)  # works fine, no error
dmcenv = dmc2gym.make(domain_name="quadruped&...
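dmc2gym targets the old Gym API, where the seed is typically fixed at `make()` time and `reset()` takes no `seed`/`options` keyword arguments, which is why the Gymnasium-style call fails. One workaround is a small adapter that rebuilds the underlying env when a seed is requested. The sketch below uses a hypothetical `OldDmcStyleEnv` stand-in (names invented for illustration) instead of a real dmc2gym env:

```python
import random

class OldDmcStyleEnv:
    # Hypothetical stand-in for a dmc2gym env: the seed is fixed at
    # construction time and reset() accepts no seed/options kwargs.
    def __init__(self, seed=None):
        self._rng = random.Random(seed)

    def reset(self):
        return self._rng.random()


class GymnasiumSeedAdapter:
    # Adapts the old interface to Gymnasium's reset(seed=..., options=...).
    def __init__(self, make_fn):
        self._make_fn = make_fn
        self._env = make_fn(None)

    def reset(self, *, seed=None, options=None):
        if seed is not None:
            # The underlying env only takes a seed at construction, so rebuild it.
            self._env = self._make_fn(seed)
        return self._env.reset(), {}  # Gymnasium reset returns (obs, info)


env = GymnasiumSeedAdapter(lambda s: OldDmcStyleEnv(seed=s))
obs1, _ = env.reset(seed=42)
obs2, _ = env.reset(seed=42)
# same seed -> same first observation
```

With the real library you would pass the seed through `dmc2gym.make(..., seed=...)` inside `make_fn`, assuming your dmc2gym version exposes that parameter.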
1
vote
0
answers
78
views
Stable-Baselines3 PPO agent doesn't learn in custom projectile environment (reward/action constant)
I'm trying to train a PPO agent using stable-baselines3 in a simple physics-based projectile environment built with Pymunk. The goal is to find the launch angle that makes the projectile land as close ...
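A constant reward usually means the environment gives the agent no usable signal before any PPO tuning matters. A quick sanity check, independent of Pymunk, is to verify that the reward actually varies with the action. The sketch below uses the ideal projectile-range formula and an invented dense reward (negative distance to a hypothetical target), purely for illustration:

```python
import math

def landing_distance(angle_rad, v=20.0, g=9.81):
    # Ideal projectile range on flat ground: R = v^2 * sin(2*theta) / g
    return v**2 * math.sin(2 * angle_rad) / g

def reward(angle_rad, target=30.0):
    # Dense reward: negative distance between landing point and target.
    # A sparse or constant reward gives PPO nothing to improve on.
    return -abs(landing_distance(angle_rad) - target)

# Sanity check: different actions must produce different rewards,
# otherwise the bug is in the env, not the agent.
print(reward(math.radians(45)), reward(math.radians(10)))
```

If the real env passes this kind of check, the next usual suspects are action scaling (SB3 expects a `Box` roughly in [-1, 1]) and episode termination logic.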
0
votes
0
answers
19
views
How to assign a SWIG-wrapped std::vector<double> to a C++ attribute in a Python-bound Basilisk module?
I'm using the Basilisk spacecraft dynamics simulator (v2.2.1) on macOS (Apple M1, Python 3.9, SWIG 4.0.2).
I'm running a reinforcement learning wrapper (bsk_rl) around a Basilisk-based simulation (...
0
votes
1
answer
74
views
Python function deepcopy does not copy gym environment LunarLanderContinuous-v2 correctly
As the code and its output show, the deepcopy of this environment does not copy aspects such as the 'action_space' and the 'continuous' attribute.
How can this be resolved?
...
1
vote
0
answers
119
views
Cannot instantiate custom environment with OpenAI Gymnasium
I'm trying to make my own checkers bot to try and teach myself reinforcement learning. I decided to try using Gymnasium as a framework and have been following the tutorials at https://gymnasium.farama....
1
vote
0
answers
70
views
How to use Matplotlib Renderer class in combination with custom OpenAI gym env
I'm currently working with the code of someone else to replicate some of their work (https://github.com/romanlee6/multi_LLM_comm). This code contains a custom OpenAI gym environment. I've managed to ...
0
votes
1
answer
77
views
Install TensorFlow and Gym with Conda for usage with Jupyter Notebook in macOS Sonoma, Intel processor
I am trying to set up a virtual environment using Conda to code a lab activity regarding RL, but it is proving to be quite a nightmare due to incompatibilities among different library versions. The ...
1
vote
1
answer
150
views
Unable to use all (or most) of gym-retro games
My working environment
System: Ubuntu 18.04
Versions: Python 3.6/3.7/3.8 (three environments, all with the same result), gym 0.25.2, gym-retro 0.8.0
What did I do?
I followed this guide to install gym-retro, and I ...
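A common cause of "most games don't work" in gym-retro is that the library only bundles a handful of free, non-commercial ROMs; every other game integration requires you to import ROM files yourself. A typical setup sketch (assumption: your legally obtained ROM files live under `./roms`):

```shell
# gym-retro ships only a few free ROMs; other games need ROMs imported first.
python3 -m retro.import ./roms

# Then list which games retro can actually create:
python3 -c "import retro; print(retro.data.list_games())"
```

If a game still fails after importing, check that the ROM's checksum matches the one retro's integration expects.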
0
votes
0
answers
49
views
Applying reinforcement learning to an environment with combined continuous and discrete action spaces
I have a custom gym environment with 3 continuous action dimensions and 1 discrete one. I would like to apply a reinforcement learning algorithm, but I am not sure which one to use.
Below you can find ...
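Gymnasium can express such a hybrid action space as `spaces.Dict({"continuous": Box(...), "discrete": Discrete(...)})`, but note that Stable-Baselines3 does not support `Dict` *action* spaces, so the usual workarounds are flattening the discrete choice into the continuous vector or using a library with hybrid-action support. A dependency-free toy of what sampling such a hybrid action looks like (the space shape here, 3 continuous dims in [-1, 1] plus a 4-way discrete choice, is an assumption for illustration):

```python
import random

def sample_hybrid_action(rng):
    # Hypothetical hybrid action: 3 continuous dims in [-1, 1]
    # plus 1 discrete choice out of 4 options.
    continuous = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    discrete = rng.randrange(4)
    return {"continuous": continuous, "discrete": discrete}

action = sample_hybrid_action(random.Random(0))
print(action)
```

The same dict structure is what an env's `step()` would receive if the action space were a Gymnasium `Dict` space.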
10
votes
4
answers
11k
views
Module 'numpy' has no attribute 'bool8' in CartPole problem (OpenAI Gym)
I'm a beginner trying to run this simple code, but it raises the exception "module 'numpy' has no attribute 'bool8'", as you can see in the screenshot below. Gym version is 0.26.2 & ...
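The `np.bool8` alias was deprecated in NumPy 1.24 and removed in NumPy 2.0, while Gym 0.26.x still references it. The clean fixes are migrating to Gymnasium (the maintained fork) or pinning `numpy<2`; if neither is possible right now, a temporary shim (an assumption, not an official fix) restores the alias before Gym is imported:

```python
import numpy as np

# NumPy 2.0 removed aliases like np.bool8 that Gym 0.26.x still uses.
# Temporary shim (assumption: upgrading to Gymnasium is not an option yet);
# run this BEFORE importing gym.
if not hasattr(np, "bool8"):
    np.bool8 = np.bool_
```

Treat the shim as a stopgap: it papers over the removed alias but does not make old Gym NumPy-2.0 compatible in general.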
0
votes
1
answer
690
views
Error while running "pip install gym==0.21.0" in WSL 2: "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'"
I ran "pip install gym==0.21.0" and got this:
Collecting gym==0.21.0
Using cached gym-0.21.0.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
...
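This failure has two separate causes: `pkgutil.ImpImporter` was removed in Python 3.12, so gym 0.21 cannot be built there at all, and gym 0.21's package metadata is invalid under modern build tooling. A commonly reported workaround (assumption: a Python ≤ 3.10 interpreter is available in your WSL environment) is to pin older build tools first:

```shell
# gym 0.21 predates modern setuptools; pin older build tooling first.
pip install "setuptools==65.5.0" "wheel<0.40.0"
# pip 24.1+ rejects gym 0.21's invalid dependency metadata.
pip install "pip<24.1"
pip install gym==0.21.0
```

The longer-term fix is migrating to Gymnasium, whose current releases install cleanly on recent Python versions.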
0
votes
1
answer
68
views
reset() problem when integrating a RecurDyn simulation model with Gym for reinforcement learning training
I'm trying to integrate a RecurDyn simulation model with Gym for reinforcement learning training. The simulation model communicates with RecurDyn through an FMU file (FMI 2.0) in Python. However, when I'...
0
votes
1
answer
147
views
Is it fine to make an API call inside a reinforcement learning program?
I have made a game simulation with a REST API available, and I would like to create a reinforcement learning AI in Python using gym from OpenAI.
So, is it fine to make API calls inside the step ...
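Nothing in the Gym interface forbids network calls inside `step()`; it only needs to return the usual observation/reward/termination tuple. The practical concerns are latency (a remote call per step dominates training throughput, so cache or batch where possible) and failure handling. A dependency-free sketch with a hypothetical injected `call_api` (in production this would be, e.g., an HTTP request; names here are invented for illustration):

```python
import time

class RemoteGameEnv:
    """Hypothetical env whose step() delegates to a remote game API.
    The API call is injected so it can be a real HTTP request in
    production and a cheap stub in tests."""

    def __init__(self, call_api):
        self._call_api = call_api

    def step(self, action):
        t0 = time.monotonic()
        try:
            payload = self._call_api(action)  # e.g. POST /step on the game server
        except Exception:
            # Fail safe: end the episode instead of crashing the training loop.
            return None, 0.0, True, False, {"error": True}
        info = {"latency_s": time.monotonic() - t0}
        return payload["obs"], payload["reward"], payload["done"], False, info

# Usage with a stubbed API:
env = RemoteGameEnv(lambda a: {"obs": [0.0], "reward": 1.0, "done": False})
obs, reward, terminated, truncated, info = env.step(0)
```

Injecting the call also makes it easy to measure per-step latency before committing to training against the live API.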
0
votes
1
answer
120
views
TypeError: unsupported operand type(s) for >>: 'list' and 'int' in pokerenv
I am trying to use the pokerenv library for a reinforcement learning project, but even the example code provided by the documentation itself produces the following error:
------------------------------...