Closed
Labels: P1 (issue that should be fixed within a few weeks), QS (Quantsight triage label), bug (something that is supposed to be working, but isn't), rllib (RLlib related issues), windows
Description
What happened + What you expected to happen
Expectation: CartPole training runs to completion.
What happens: Windows fatal exception: access violation (the process crashes).
D:\ML\test_RLlib\TF_Env\Scripts\python.exe D:/ML/test_RLlib/test/main.py
2022-05-19 10:49:33,916	INFO services.py:1456 -- View the Ray dashboard at http://127.0.0.1:8265
D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\tune\tune.py:455: UserWarning: Consider boosting PBT performance by enabling `reuse_actors` as well as implementing `reset_config` for Trainable.
  warnings.warn(
2022-05-19 10:49:36,775	WARNING trial_runner.py:1489 -- You are trying to access _search_alg interface of TrialRunner in TrialScheduler, which is being restricted. If you believe it is reasonable for your scheduler to access this TrialRunner API, please reach out to Ray team on GitHub. A more strict API access pattern would be enforced starting 1.12s.0
2022-05-19 10:49:36,900	INFO trial_runner.py:803 -- starting DQNTrainer_CartPole-v0_9caae_00000
(pid=1516) 
(DQNTrainer pid=7004) 2022-05-19 10:49:43,322	INFO trainer.py:2295 -- Your framework setting is 'tf', meaning you are using static-graph mode. Set framework='tf2' to enable eager execution with tf2.x. You may also then want to set eager_tracing=True in order to reach similar execution speed as with static-graph mode.
(DQNTrainer pid=7004) 2022-05-19 10:49:43,322	INFO simple_q.py:161 -- In multi-agent mode, policies will be optimized sequentially by the multi-GPU optimizer. Consider setting `simple_optimizer=True` if this doesn't work for you.
(pid=22604) 
(pid=10440) 
(pid=22456) 
(RolloutWorker pid=23604) Setting the path for recording to D:\ML\test_RLlib\test\results\DQNTrainer_2022-05-19_10-49-36\DQNTrainer_CartPole-v0_9caae_00000_0_2022-05-19_10-49-36\
(RolloutWorker pid=18268) Setting the path for recording to D:\ML\test_RLlib\test\results\DQNTrainer_2022-05-19_10-49-36\DQNTrainer_CartPole-v0_9caae_00000_0_2022-05-19_10-49-36\
(RolloutWorker pid=15504) Setting the path for recording to D:\ML\test_RLlib\test\results\DQNTrainer_2022-05-19_10-49-36\DQNTrainer_CartPole-v0_9caae_00000_0_2022-05-19_10-49-36\
(RolloutWorker pid=23604) 2022-05-19 10:49:49,852	WARNING rollout_worker.py:498 -- We've added a module for checking environments that are used in experiments. It will cause your environment to fail if your environment is not set upcorrectly. You can disable check env by setting `disable_env_checking` to True in your experiment config dictionary. You can run the environment checking module standalone by calling ray.rllib.utils.check_env(env).
(RolloutWorker pid=18268) 2022-05-19 10:49:49,864	WARNING rollout_worker.py:498 -- We've added a module for checking environments that are used in experiments. It will cause your environment to fail if your environment is not set upcorrectly. You can disable check env by setting `disable_env_checking` to True in your experiment config dictionary. You can run the environment checking module standalone by calling ray.rllib.utils.check_env(env).
(RolloutWorker pid=15504) 2022-05-19 10:49:49,846	WARNING rollout_worker.py:498 -- We've added a module for checking environments that are used in experiments. It will cause your environment to fail if your environment is not set upcorrectly. You can disable check env by setting `disable_env_checking` to True in your experiment config dictionary. You can run the environment checking module standalone by calling ray.rllib.utils.check_env(env).
(RolloutWorker pid=23604) 2022-05-19 10:49:49,938	DEBUG rollout_worker.py:1704 -- Creating policy for default_policy
(RolloutWorker pid=23604) 2022-05-19 10:49:49,938	DEBUG catalog.py:805 -- Created preprocessor <ray.rllib.models.preprocessors.NoPreprocessor object at 0x0000020B927EB100>: Box([-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], [4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38], (4,), float32) -> (4,)
(RolloutWorker pid=23604) 2022-05-19 10:49:49,953	DEBUG worker_set.py:457 -- Creating TF session {'intra_op_parallelism_threads': 2, 'inter_op_parallelism_threads': 2, 'gpu_options': {'allow_growth': True}, 'log_device_placement': False, 'device_count': {'CPU': 1}, 'allow_soft_placement': True}
(RolloutWorker pid=18268) 2022-05-19 10:49:49,938	DEBUG rollout_worker.py:1704 -- Creating policy for default_policy
(RolloutWorker pid=18268) 2022-05-19 10:49:49,938	DEBUG catalog.py:805 -- Created preprocessor <ray.rllib.models.preprocessors.NoPreprocessor object at 0x000002AB85A7A100>: Box([-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], [4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38], (4,), float32) -> (4,)
(RolloutWorker pid=18268) 2022-05-19 10:49:49,953	DEBUG worker_set.py:457 -- Creating TF session {'intra_op_parallelism_threads': 2, 'inter_op_parallelism_threads': 2, 'gpu_options': {'allow_growth': True}, 'log_device_placement': False, 'device_count': {'CPU': 1}, 'allow_soft_placement': True}
(RolloutWorker pid=15504) 2022-05-19 10:49:49,938	DEBUG rollout_worker.py:1704 -- Creating policy for default_policy
(RolloutWorker pid=15504) 2022-05-19 10:49:49,938	DEBUG catalog.py:805 -- Created preprocessor <ray.rllib.models.preprocessors.NoPreprocessor object at 0x000001A0229FA100>: Box([-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], [4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38], (4,), float32) -> (4,)
(RolloutWorker pid=15504) 2022-05-19 10:49:49,953	DEBUG worker_set.py:457 -- Creating TF session {'intra_op_parallelism_threads': 2, 'inter_op_parallelism_threads': 2, 'gpu_options': {'allow_growth': True}, 'log_device_placement': False, 'device_count': {'CPU': 1}, 'allow_soft_placement': True}
(RolloutWorker pid=23604) 2022-05-19 10:49:50,623	INFO tf_policy.py:166 -- TFPolicy (worker=1) running on CPU.
(RolloutWorker pid=23604) 2022-05-19 10:49:50,692	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `q_values` to view-reqs.
(RolloutWorker pid=23604) 2022-05-19 10:49:50,693	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_dist_inputs` to view-reqs.
(RolloutWorker pid=23604) 2022-05-19 10:49:50,693	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_logp` to view-reqs.
(RolloutWorker pid=23604) 2022-05-19 10:49:50,694	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_prob` to view-reqs.
(RolloutWorker pid=23604) 2022-05-19 10:49:50,694	INFO dynamic_tf_policy.py:718 -- Testing `postprocess_trajectory` w/ dummy batch.
(RolloutWorker pid=23604) 2022-05-19 10:49:50,695	DEBUG dynamic_tf_policy.py:752 -- Initializing loss function with dummy input:
(RolloutWorker pid=23604) 
(RolloutWorker pid=23604) { 'action_dist_inputs': <tf.Tensor 'default_policy_wk1/action_dist_inputs:0' shape=(?, 2) dtype=float32>,
(RolloutWorker pid=23604)   'action_logp': <tf.Tensor 'default_policy_wk1/action_logp:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'action_prob': <tf.Tensor 'default_policy_wk1/action_prob:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'actions': <tf.Tensor 'default_policy_wk1/action:0' shape=(?,) dtype=int64>,
(RolloutWorker pid=23604)   'agent_index': <tf.Tensor 'default_policy_wk1/agent_index:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'dones': <tf.Tensor 'default_policy_wk1/dones:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'eps_id': <tf.Tensor 'default_policy_wk1/eps_id:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'new_obs': <tf.Tensor 'default_policy_wk1/new_obs:0' shape=(?, 4) dtype=float32>,
(RolloutWorker pid=23604)   'obs': <tf.Tensor 'default_policy_wk1/obs:0' shape=(?, 4) dtype=float32>,
(RolloutWorker pid=23604)   'prev_actions': <tf.Tensor 'default_policy_wk1/prev_actions:0' shape=(?,) dtype=int64>,
(RolloutWorker pid=23604)   'prev_rewards': <tf.Tensor 'default_policy_wk1/prev_rewards:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'q_values': <tf.Tensor 'default_policy_wk1/q_values:0' shape=(?, 2) dtype=float32>,
(RolloutWorker pid=23604)   'rewards': <tf.Tensor 'default_policy_wk1/rewards:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   't': <tf.Tensor 'default_policy_wk1/t:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'unroll_id': <tf.Tensor 'default_policy_wk1/unroll_id:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'weights': <tf.Tensor 'default_policy_wk1/weights:0' shape=(?,) dtype=float32>}
(RolloutWorker pid=23604) 
(RolloutWorker pid=18268) 2022-05-19 10:49:50,627	INFO tf_policy.py:166 -- TFPolicy (worker=3) running on CPU.
(RolloutWorker pid=18268) 2022-05-19 10:49:50,697	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `q_values` to view-reqs.
(RolloutWorker pid=18268) 2022-05-19 10:49:50,697	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_dist_inputs` to view-reqs.
(RolloutWorker pid=18268) 2022-05-19 10:49:50,698	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_logp` to view-reqs.
(RolloutWorker pid=18268) 2022-05-19 10:49:50,698	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_prob` to view-reqs.
(RolloutWorker pid=18268) 2022-05-19 10:49:50,698	INFO dynamic_tf_policy.py:718 -- Testing `postprocess_trajectory` w/ dummy batch.
(RolloutWorker pid=15504) 2022-05-19 10:49:50,617	INFO tf_policy.py:166 -- TFPolicy (worker=2) running on CPU.
(RolloutWorker pid=15504) 2022-05-19 10:49:50,687	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `q_values` to view-reqs.
(RolloutWorker pid=15504) 2022-05-19 10:49:50,688	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_dist_inputs` to view-reqs.
(RolloutWorker pid=15504) 2022-05-19 10:49:50,688	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_logp` to view-reqs.
(RolloutWorker pid=15504) 2022-05-19 10:49:50,689	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_prob` to view-reqs.
(RolloutWorker pid=15504) 2022-05-19 10:49:50,689	INFO dynamic_tf_policy.py:718 -- Testing `postprocess_trajectory` w/ dummy batch.
(RolloutWorker pid=23604) 2022-05-19 10:49:51,114	DEBUG tf_policy.py:742 -- These tensors were used in the loss functions:
(RolloutWorker pid=23604) { 'action_dist_inputs': <tf.Tensor 'default_policy_wk1/action_dist_inputs:0' shape=(?, 2) dtype=float32>,
(RolloutWorker pid=23604)   'action_logp': <tf.Tensor 'default_policy_wk1/action_logp:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'action_prob': <tf.Tensor 'default_policy_wk1/action_prob:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'actions': <tf.Tensor 'default_policy_wk1/action:0' shape=(?,) dtype=int64>,
(RolloutWorker pid=23604)   'dones': <tf.Tensor 'default_policy_wk1/dones:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'new_obs': <tf.Tensor 'default_policy_wk1/new_obs:0' shape=(?, 4) dtype=float32>,
(RolloutWorker pid=23604)   'obs': <tf.Tensor 'default_policy_wk1/obs:0' shape=(?, 4) dtype=float32>,
(RolloutWorker pid=23604)   'q_values': <tf.Tensor 'default_policy_wk1/q_values:0' shape=(?, 2) dtype=float32>,
(RolloutWorker pid=23604)   'rewards': <tf.Tensor 'default_policy_wk1/rewards:0' shape=(?,) dtype=float32>,
(RolloutWorker pid=23604)   'weights': <tf.Tensor 'default_policy_wk1/weights:0' shape=(?,) dtype=float32>}
(RolloutWorker pid=23604) 
(DQNTrainer pid=7004) 2022-05-19 10:49:51,371	INFO worker_set.py:154 -- Inferred observation/action spaces from remote worker (local worker has no env): {'default_policy': (Box([-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], [4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38], (4,), float32), Discrete(2)), '__env__': (Box([-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], [4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38], (4,), float32), Discrete(2))}
(RolloutWorker pid=23604) 2022-05-19 10:49:51,360	DEBUG rollout_worker.py:779 -- Created rollout worker with env <ray.rllib.env.vector_env.VectorEnvWrapper object at 0x0000020B9A35B340> (<Monitor<TimeLimit<CartPoleEnv<CartPole-v0>>>>), policies {}
(RolloutWorker pid=18268) 2022-05-19 10:49:51,368	DEBUG rollout_worker.py:779 -- Created rollout worker with env <ray.rllib.env.vector_env.VectorEnvWrapper object at 0x000002AB95C6A340> (<Monitor<TimeLimit<CartPoleEnv<CartPole-v0>>>>), policies {}
(RolloutWorker pid=15504) 2022-05-19 10:49:51,364	DEBUG rollout_worker.py:779 -- Created rollout worker with env <ray.rllib.env.vector_env.VectorEnvWrapper object at 0x000001A032B8B340> (<Monitor<TimeLimit<CartPoleEnv<CartPole-v0>>>>), policies {}
(DQNTrainer pid=7004) 2022-05-19 10:49:51,437	DEBUG rollout_worker.py:1704 -- Creating policy for default_policy
(DQNTrainer pid=7004) 2022-05-19 10:49:51,437	DEBUG catalog.py:805 -- Created preprocessor <ray.rllib.models.preprocessors.NoPreprocessor object at 0x000002747AA78130>: Box([-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], [4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38], (4,), float32) -> (4,)
(DQNTrainer pid=7004) 2022-05-19 10:49:51,437	DEBUG worker_set.py:457 -- Creating TF session {'intra_op_parallelism_threads': 8, 'inter_op_parallelism_threads': 8, 'gpu_options': {'allow_growth': True}, 'log_device_placement': False, 'device_count': {'CPU': 1}, 'allow_soft_placement': True}
(DQNTrainer pid=7004) 2022-05-19 10:49:51,922	INFO tf_policy.py:166 -- TFPolicy (worker=local) running on CPU.
(DQNTrainer pid=7004) 2022-05-19 10:49:51,978	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `q_values` to view-reqs.
(DQNTrainer pid=7004) 2022-05-19 10:49:51,979	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_dist_inputs` to view-reqs.
(DQNTrainer pid=7004) 2022-05-19 10:49:51,979	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_logp` to view-reqs.
(DQNTrainer pid=7004) 2022-05-19 10:49:51,980	INFO dynamic_tf_policy.py:709 -- Adding extra-action-fetch `action_prob` to view-reqs.
(DQNTrainer pid=7004) 2022-05-19 10:49:51,980	INFO dynamic_tf_policy.py:718 -- Testing `postprocess_trajectory` w/ dummy batch.
(DQNTrainer pid=7004) 2022-05-19 10:49:51,981	DEBUG dynamic_tf_policy.py:752 -- Initializing loss function with dummy input:
(DQNTrainer pid=7004) 
(DQNTrainer pid=7004) { 'action_dist_inputs': <tf.Tensor 'default_policy/action_dist_inputs:0' shape=(?, 2) dtype=float32>,
(DQNTrainer pid=7004)   'action_logp': <tf.Tensor 'default_policy/action_logp:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'action_prob': <tf.Tensor 'default_policy/action_prob:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'actions': <tf.Tensor 'default_policy/action:0' shape=(?,) dtype=int64>,
(DQNTrainer pid=7004)   'agent_index': <tf.Tensor 'default_policy/agent_index:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'dones': <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'eps_id': <tf.Tensor 'default_policy/eps_id:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'new_obs': <tf.Tensor 'default_policy/new_obs:0' shape=(?, 4) dtype=float32>,
(DQNTrainer pid=7004)   'obs': <tf.Tensor 'default_policy/obs:0' shape=(?, 4) dtype=float32>,
(DQNTrainer pid=7004)   'prev_actions': <tf.Tensor 'default_policy/prev_actions:0' shape=(?,) dtype=int64>,
(DQNTrainer pid=7004)   'prev_rewards': <tf.Tensor 'default_policy/prev_rewards:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'q_values': <tf.Tensor 'default_policy/q_values:0' shape=(?, 2) dtype=float32>,
(DQNTrainer pid=7004)   'rewards': <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   't': <tf.Tensor 'default_policy/t:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'unroll_id': <tf.Tensor 'default_policy/unroll_id:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'weights': <tf.Tensor 'default_policy/weights:0' shape=(?,) dtype=float32>}
(DQNTrainer pid=7004) 
(DQNTrainer pid=7004) 2022-05-19 10:49:52,371	DEBUG tf_policy.py:742 -- These tensors were used in the loss functions:
(DQNTrainer pid=7004) { 'action_dist_inputs': <tf.Tensor 'default_policy/action_dist_inputs:0' shape=(?, 2) dtype=float32>,
(DQNTrainer pid=7004)   'action_logp': <tf.Tensor 'default_policy/action_logp:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'action_prob': <tf.Tensor 'default_policy/action_prob:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'actions': <tf.Tensor 'default_policy/action:0' shape=(?,) dtype=int64>,
(DQNTrainer pid=7004)   'dones': <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'new_obs': <tf.Tensor 'default_policy/new_obs:0' shape=(?, 4) dtype=float32>,
(DQNTrainer pid=7004)   'obs': <tf.Tensor 'default_policy/obs:0' shape=(?, 4) dtype=float32>,
(DQNTrainer pid=7004)   'q_values': <tf.Tensor 'default_policy/q_values:0' shape=(?, 2) dtype=float32>,
(DQNTrainer pid=7004)   'rewards': <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>,
(DQNTrainer pid=7004)   'weights': <tf.Tensor 'default_policy/weights:0' shape=(?,) dtype=float32>}
(DQNTrainer pid=7004) 
(DQNTrainer pid=7004) 2022-05-19 10:49:52,579	INFO rollout_worker.py:1727 -- Built policy map: {}
(DQNTrainer pid=7004) 2022-05-19 10:49:52,579	INFO rollout_worker.py:1728 -- Built preprocessor map: {'default_policy': <ray.rllib.models.preprocessors.NoPreprocessor object at 0x000002747AA78130>}
(DQNTrainer pid=7004) 2022-05-19 10:49:52,580	INFO rollout_worker.py:666 -- Built filter map: {'default_policy': <ray.rllib.utils.filter.NoFilter object at 0x000002747C501FA0>}
(DQNTrainer pid=7004) 2022-05-19 10:49:52,580	DEBUG rollout_worker.py:779 -- Created rollout worker with env None (None), policies {}
== Status ==
Current time: 2022-05-19 10:49:52 (running for 00:00:15.84)
Memory usage on this node: 14.6/15.8 GiB: ***LOW MEMORY*** less than 10% of the memory on this node is available for use. This can cause unexpected crashes. Consider reducing the memory used by your application or reducing the Ray object store size by setting `object_store_memory` when calling `ray.init`.
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 4.0/12 CPUs, 0/1 GPUs, 0.0/2.27 GiB heap, 0.0/1.14 GiB objects
Result logdir: D:\ML\test_RLlib\test\results\DQNTrainer_2022-05-19_10-49-36
Number of trials: 3/3 (2 PENDING, 1 RUNNING)
+------------------------------------+----------+----------------+----------+-------------+
| Trial name                         | status   | loc            |    gamma |          lr |
|------------------------------------+----------+----------------+----------+-------------|
| DQNTrainer_CartPole-v0_9caae_00000 | RUNNING  | 127.0.0.1:7004 | 0.934952 | 0.000708551 |
| DQNTrainer_CartPole-v0_9caae_00001 | PENDING  |                | 0.976634 | 0.000561509 |
| DQNTrainer_CartPole-v0_9caae_00002 | PENDING  |                | 0.940114 | 0.000492675 |
+------------------------------------+----------+----------------+----------+-------------+
2022-05-19 10:49:52,620	INFO trial_runner.py:803 -- starting DQNTrainer_CartPole-v0_9caae_00001
(DQNTrainer pid=7004) 2022-05-19 10:49:52,605	WARNING util.py:60 -- Install gputil for GPU system monitoring.
(DQNTrainer pid=7004) 2022-05-19 10:49:52,659	WARNING trainer.py:1083 -- Worker crashed during call to `step_attempt()`. To try to continue training without the failed worker, set `ignore_worker_failures=True`.
(DQNTrainer pid=7004) 2022-05-19 10:49:52,664	ERROR worker.py:92 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): ray::RolloutWorker.par_iter_next() (pid=18268, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002AB859C89D0>)
(DQNTrainer pid=7004) ModuleNotFoundError: No module named 'pyglet'
(DQNTrainer pid=7004) 
(DQNTrainer pid=7004) During handling of the above exception, another exception occurred:
(DQNTrainer pid=7004) 
(DQNTrainer pid=7004) ray::RolloutWorker.par_iter_next() (pid=18268, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002AB859C89D0>)
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 656, in ray._raylet.execute_task
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 697, in ray._raylet.execute_task
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 663, in ray._raylet.execute_task
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 667, in ray._raylet.execute_task
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 614, in ray._raylet.execute_task.function_executor
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\_private\function_manager.py", line 701, in actor_method_executor
(DQNTrainer pid=7004)     return method(__ray_actor, *args, **kwargs)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(DQNTrainer pid=7004)     return method(self, *_args, **_kwargs)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\util\iter.py", line 1186, in par_iter_next
(DQNTrainer pid=7004)     return next(self.local_it)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 404, in gen_rollouts
(DQNTrainer pid=7004)     yield self.sample()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(DQNTrainer pid=7004)     return method(self, *_args, **_kwargs)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 815, in sample
(DQNTrainer pid=7004)     batches = [self.input_reader.next()]
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 116, in next
(DQNTrainer pid=7004)     batches = [self.get_data()]
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 289, in get_data
(DQNTrainer pid=7004)     item = next(self._env_runner)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 668, in _env_runner
(DQNTrainer pid=7004)     unfiltered_obs, rewards, dones, infos, off_policy_actions = base_env.poll()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\env\vector_env.py", line 291, in poll
(DQNTrainer pid=7004)     self.new_obs = self.vector_env.vector_reset()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\env\vector_env.py", line 227, in vector_reset
(DQNTrainer pid=7004)     return [e.reset() for e in self.envs]
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\env\vector_env.py", line 227, in <listcomp>
(DQNTrainer pid=7004)     return [e.reset() for e in self.envs]
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\wrappers\monitor.py", line 56, in reset
(DQNTrainer pid=7004)     self._after_reset(observation)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\wrappers\monitor.py", line 241, in _after_reset
(DQNTrainer pid=7004)     self.reset_video_recorder()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\wrappers\monitor.py", line 267, in reset_video_recorder
(DQNTrainer pid=7004)     self.video_recorder.capture_frame()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\wrappers\monitoring\video_recorder.py", line 132, in capture_frame
(DQNTrainer pid=7004)     frame = self.env.render(mode=render_mode)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\core.py", line 295, in render
(DQNTrainer pid=7004)     return self.env.render(mode, **kwargs)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\envs\classic_control\cartpole.py", line 179, in render
(DQNTrainer pid=7004)     from gym.envs.classic_control import rendering
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\envs\classic_control\rendering.py", line 17, in <module>
(DQNTrainer pid=7004)     raise ImportError(
(DQNTrainer pid=7004) ImportError: 
(DQNTrainer pid=7004)     Cannot import pyglet.
(DQNTrainer pid=7004)     HINT: you can install pyglet directly via 'pip install pyglet'.
(DQNTrainer pid=7004)     But if you really just want to install all Gym dependencies and not have to think about it,
(DQNTrainer pid=7004)     'pip install -e .[all]' or 'pip install gym[all]' will do it.
(DQNTrainer pid=7004) 2022-05-19 10:49:52,664	ERROR worker.py:92 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): ray::RolloutWorker.par_iter_next() (pid=15504, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000001A0229489D0>)
(DQNTrainer pid=7004) ModuleNotFoundError: No module named 'pyglet'
(DQNTrainer pid=7004) 
(DQNTrainer pid=7004) During handling of the above exception, another exception occurred:
(DQNTrainer pid=7004) 
(DQNTrainer pid=7004) ray::RolloutWorker.par_iter_next() (pid=15504, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000001A0229489D0>)
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 656, in ray._raylet.execute_task
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 697, in ray._raylet.execute_task
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 663, in ray._raylet.execute_task
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 667, in ray._raylet.execute_task
(DQNTrainer pid=7004)   File "python\ray\_raylet.pyx", line 614, in ray._raylet.execute_task.function_executor
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\_private\function_manager.py", line 701, in actor_method_executor
(DQNTrainer pid=7004)     return method(__ray_actor, *args, **kwargs)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(DQNTrainer pid=7004)     return method(self, *_args, **_kwargs)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\util\iter.py", line 1186, in par_iter_next
(DQNTrainer pid=7004)     return next(self.local_it)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 404, in gen_rollouts
(DQNTrainer pid=7004)     yield self.sample()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(DQNTrainer pid=7004)     return method(self, *_args, **_kwargs)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 815, in sample
(DQNTrainer pid=7004)     batches = [self.input_reader.next()]
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 116, in next
(DQNTrainer pid=7004)     batches = [self.get_data()]
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 289, in get_data
(DQNTrainer pid=7004)     item = next(self._env_runner)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 668, in _env_runner
(DQNTrainer pid=7004)     unfiltered_obs, rewards, dones, infos, off_policy_actions = base_env.poll()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\env\vector_env.py", line 291, in poll
(DQNTrainer pid=7004)     self.new_obs = self.vector_env.vector_reset()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\env\vector_env.py", line 227, in vector_reset
(DQNTrainer pid=7004)     return [e.reset() for e in self.envs]
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\rllib\env\vector_env.py", line 227, in <listcomp>
(DQNTrainer pid=7004)     return [e.reset() for e in self.envs]
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\wrappers\monitor.py", line 56, in reset
(DQNTrainer pid=7004)     self._after_reset(observation)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\wrappers\monitor.py", line 241, in _after_reset
(DQNTrainer pid=7004)     self.reset_video_recorder()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\wrappers\monitor.py", line 267, in reset_video_recorder
(DQNTrainer pid=7004)     self.video_recorder.capture_frame()
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\wrappers\monitoring\video_recorder.py", line 132, in capture_frame
(DQNTrainer pid=7004)     frame = self.env.render(mode=render_mode)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\core.py", line 295, in render
(RolloutWorker pid=23604) 2022-05-19 10:49:52,649	INFO rollout_worker.py:809 -- Generating sample batch of size 4
(RolloutWorker pid=23604) 2022-05-19 10:49:52,650	DEBUG sampler.py:609 -- No episode horizon specified, setting it to Env's limit (200).
(RolloutWorker pid=18268) 2022-05-19 10:49:52,650	DEBUG sampler.py:609 -- No episode horizon specified, setting it to Env's limit (200).
(RolloutWorker pid=15504) 2022-05-19 10:49:52,650	DEBUG sampler.py:609 -- No episode horizon specified, setting it to Env's limit (200).
(DQNTrainer pid=7004)     return self.env.render(mode, **kwargs)
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\envs\classic_control\cartpole.py", line 179, in render
(DQNTrainer pid=7004)     from gym.envs.classic_control import rendering
(DQNTrainer pid=7004)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\gym\envs\classic_control\rendering.py", line 17, in <module>
(DQNTrainer pid=7004)     raise ImportError(
(DQNTrainer pid=7004) ImportError: 
(DQNTrainer pid=7004)     Cannot import pyglet.
(DQNTrainer pid=7004)     HINT: you can install pyglet directly via 'pip install pyglet'.
(DQNTrainer pid=7004)     But if you really just want to install all Gym dependencies and not have to think about it,
(DQNTrainer pid=7004)     'pip install -e .[all]' or 'pip install gym[all]' will do it.
(pid=14788) 
(DQNTrainer pid=11332) 2022-05-19 10:49:57,976	INFO trainer.py:2295 -- Your framework setting is 'tf', meaning you are using static-graph mode. Set framework='tf2' to enable eager execution with tf2.x. You may also then want to set eager_tracing=True in order to reach similar execution speed as with static-graph mode.
(DQNTrainer pid=11332) 2022-05-19 10:49:57,976	INFO simple_q.py:161 -- In multi-agent mode, policies will be optimized sequentially by the multi-GPU optimizer. Consider setting `simple_optimizer=True` if this doesn't work for you.
(pid=11748) 
(pid=13188) 
(pid=16516) 
(pid=) [2022-05-19 10:50:04,680 E 9068 23852] (raylet.exe) agent_manager.cc:107: The raylet exited immediately because the Ray agent failed. The raylet fate shares with the agent. This can happen because the Ray agent was unexpectedly killed or failed. See `dashboard_agent.log` for the root cause.
(bundle_reservation_check_func pid=21264) 
(bundle_reservation_check_func pid=10124) 
(bundle_reservation_check_func pid=15300) 
(pid=2700) 
(RolloutWorker pid=23604) 
(RolloutWorker pid=18268) 
(RolloutWorker pid=15504) 
(DQNTrainer pid=7004) 
(RolloutWorker pid=13824) Stack (most recent call first):
(RolloutWorker pid=13824)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\_private\utils.py", line 116 in push_error_to_driver
(RolloutWorker pid=13824)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\worker.py", line 449 in main_loop
(RolloutWorker pid=13824)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers\default_worker.py", line 235 in <module>
(RolloutWorker pid=600) Stack (most recent call first):
(RolloutWorker pid=600)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\_private\utils.py", line 116 in push_error_to_driver
(RolloutWorker pid=600)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\worker.py", line 449 in main_loop
(RolloutWorker pid=600)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers\default_worker.py", line 235 in <module>
(RolloutWorker pid=12964) Stack (most recent call first):
(RolloutWorker pid=12964)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\_private\utils.py", line 116 in push_error_to_driver
(RolloutWorker pid=12964)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\worker.py", line 449 in main_loop
(RolloutWorker pid=12964)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers\default_worker.py", line 235 in <module>
(pid=) 2022-05-19 10:50:06,397	INFO context.py:67 -- Exec'ing worker with command: "D:\ML\test_RLlib\TF_Env\Scripts\python.exe" D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=58524 --object-store-name=tcp://127.0.0.1:64691 --raylet-name=tcp://127.0.0.1:63689 --redis-address=None --storage=None --temp-dir=C:\Users\peter\AppData\Local\Temp\ray --metrics-agent-port=64921 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:61713 --redis-password=5241590000000000 --startup-token=19 --runtime-env-hash=213246870
(pid=) 2022-05-19 10:50:06,397	INFO context.py:67 -- Exec'ing worker with command: "D:\ML\test_RLlib\TF_Env\Scripts\python.exe" D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=58524 --object-store-name=tcp://127.0.0.1:64691 --raylet-name=tcp://127.0.0.1:63689 --redis-address=None --storage=None --temp-dir=C:\Users\peter\AppData\Local\Temp\ray --metrics-agent-port=64921 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:61713 --redis-password=5241590000000000 --startup-token=17 --runtime-env-hash=213246870
(pid=) 2022-05-19 10:50:06,412	INFO context.py:67 -- Exec'ing worker with command: "D:\ML\test_RLlib\TF_Env\Scripts\python.exe" D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=58524 --object-store-name=tcp://127.0.0.1:64691 --raylet-name=tcp://127.0.0.1:63689 --redis-address=None --storage=None --temp-dir=C:\Users\peter\AppData\Local\Temp\ray --metrics-agent-port=64921 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:61713 --redis-password=5241590000000000 --startup-token=18 --runtime-env-hash=213246870
(pid=) 2022-05-19 10:50:07,241	INFO context.py:67 -- Exec'ing worker with command: "D:\ML\test_RLlib\TF_Env\Scripts\python.exe" D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=58524 --object-store-name=tcp://127.0.0.1:64691 --raylet-name=tcp://127.0.0.1:63689 --redis-address=None --storage=None --temp-dir=C:\Users\peter\AppData\Local\Temp\ray --metrics-agent-port=64921 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:61713 --redis-password=5241590000000000 --startup-token=14 --runtime-env-hash=213246870
(pid=) 2022-05-19 10:50:07,272	INFO context.py:67 -- Exec'ing worker with command: "D:\ML\test_RLlib\TF_Env\Scripts\python.exe" D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=58524 --object-store-name=tcp://127.0.0.1:64691 --raylet-name=tcp://127.0.0.1:63689 --redis-address=None --storage=None --temp-dir=C:\Users\peter\AppData\Local\Temp\ray --metrics-agent-port=64921 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:61713 --redis-password=5241590000000000 --startup-token=15 --runtime-env-hash=213246870
(pid=) 2022-05-19 10:50:07,303	INFO context.py:67 -- Exec'ing worker with command: "D:\ML\test_RLlib\TF_Env\Scripts\python.exe" D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=58524 --object-store-name=tcp://127.0.0.1:64691 --raylet-name=tcp://127.0.0.1:63689 --redis-address=None --storage=None --temp-dir=C:\Users\peter\AppData\Local\Temp\ray --metrics-agent-port=64921 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:61713 --redis-password=5241590000000000 --startup-token=13 --runtime-env-hash=213246870
(pid=) 2022-05-19 10:50:07,881	INFO context.py:67 -- Exec'ing worker with command: "D:\ML\test_RLlib\TF_Env\Scripts\python.exe" D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=58524 --object-store-name=tcp://127.0.0.1:64691 --raylet-name=tcp://127.0.0.1:63689 --redis-address=None --storage=None --temp-dir=C:\Users\peter\AppData\Local\Temp\ray --metrics-agent-port=64921 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:61713 --redis-password=5241590000000000 --startup-token=12 --runtime-env-hash=213246870
2022-05-19 10:50:33,288	WARNING worker.py:1382 -- The node with node id: d581586b7c7e0633fb90264635b2f193775bd304c5a049fce7f81e2a and ip: 127.0.0.1 has been marked dead because the detector has missed too many heartbeats from it. This can happen when a raylet crashes unexpectedly or has lagging heartbeats.
2022-05-19 10:50:33,303	WARNING resource_updater.py:51 -- Cluster resources not detected or are 0. Attempt #2...
== Status ==
Current time: 2022-05-19 10:50:33 (running for 00:00:56.53)
Memory usage on this node: 12.3/15.8 GiB
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 8.0/12 CPUs, 0/1 GPUs, 0.0/2.27 GiB heap, 0.0/1.14 GiB objects
Result logdir: D:\ML\test_RLlib\test\results\DQNTrainer_2022-05-19_10-49-36
Number of trials: 3/3 (1 PENDING, 2 RUNNING)
+------------------------------------+----------+----------------+----------+-------------+
| Trial name                         | status   | loc            |    gamma |          lr |
|------------------------------------+----------+----------------+----------+-------------|
| DQNTrainer_CartPole-v0_9caae_00000 | RUNNING  | 127.0.0.1:7004 | 0.934952 | 0.000708551 |
| DQNTrainer_CartPole-v0_9caae_00001 | RUNNING  |                | 0.976634 | 0.000561509 |
| DQNTrainer_CartPole-v0_9caae_00002 | PENDING  |                | 0.940114 | 0.000492675 |
+------------------------------------+----------+----------------+----------+-------------+
(DQNTrainer pid=11332) Stack (most recent call first):
(DQNTrainer pid=11332)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\_private\utils.py", line 116 in push_error_to_driver
(DQNTrainer pid=11332)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\worker.py", line 449 in main_loop
(DQNTrainer pid=11332)   File "D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers\default_worker.py", line 235 in <module>
(pid=) 2022-05-19 10:50:33,366	INFO context.py:67 -- Exec'ing worker with command: "D:\ML\test_RLlib\TF_Env\Scripts\python.exe" D:\ML\test_RLlib\TF_Env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=58524 --object-store-name=tcp://127.0.0.1:64691 --raylet-name=tcp://127.0.0.1:63689 --redis-address=None --storage=None --temp-dir=C:\Users\peter\AppData\Local\Temp\ray --metrics-agent-port=64921 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:61713 --redis-password=5241590000000000 --startup-token=16 --runtime-env-hash=213246870
2022-05-19 10:50:33,803	WARNING resource_updater.py:51 -- Cluster resources not detected or are 0. Attempt #3...
2022-05-19 10:50:34,303	WARNING resource_updater.py:51 -- Cluster resources not detected or are 0. Attempt #4...
2022-05-19 10:50:34,819	WARNING resource_updater.py:51 -- Cluster resources not detected or are 0. Attempt #5...
2022-05-19 10:50:35,319	WARNING resource_updater.py:64 -- Cluster resources cannot be detected or are 0. You can resume this experiment by passing in `resume=True` to `run`.
2022-05-19 10:50:35,319	WARNING util.py:171 -- The `on_step_begin` operation took 2.016 s, which may be a performance bottleneck.
2022-05-19 10:50:35,319	INFO trial_runner.py:803 -- starting DQNTrainer_CartPole-v0_9caae_00002
Windows fatal exception: access violation
Process finished with exit code -1073741819 (0xC0000005)
Versions / Dependencies
ray, version 1.12.0
Python 3.9.12
gym 0.21.0
Installation commands:
pip install ray
pip install "ray[rllib]" tensorflow torch
pip install ray[default]
pip install ray[tune]
pip install gym
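For recreating the environment in one step, the following should be equivalent to the commands above with the versions listed pinned (tensorflow and torch were not pinned in my setup):
pip install "ray[rllib,tune,default]==1.12.0" tensorflow torch gym==0.21.0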
Reproduction script
import ray
from ray import tune
from ray.rllib.agents.dqn import DQNTrainer
from ray.tune.schedulers import PopulationBasedTraining
import gym
import random

config = {
    "env": "CartPole-v0",
    "num_workers": 3,
    "record_env": True,  # wrap worker envs with gym's Monitor to record episodes
    "num_gpus": 0,
    "framework": "tf",
}

if __name__ == "__main__":
    # PBT scheduler mutating lr and gamma
    pbt = PopulationBasedTraining(
        time_attr="time_total_s",
        perturbation_interval=7200,
        resample_probability=0.25,
        hyperparam_mutations={
            "lr": lambda: random.uniform(1e-3, 5e-5),
            "gamma": lambda: random.uniform(0.90, 0.99),
        },
    )
    import tensorflow as tf  # imported here but not used directly below

    ray.init()
    tune.run(
        DQNTrainer,
        scheduler=pbt,
        config=config,
        num_samples=3,
        metric="episode_reward_mean",
        mode="max",
        local_dir="./results",
        sync_config=tune.SyncConfig(syncer=None),
        checkpoint_freq=500,
        keep_checkpoints_num=20,
    )
    ray.shutdown()
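For what it's worth, the first errors in the log above are gym's Monitor/video recorder failing to import pyglet (pulled in by "record_env": True), and the access violation shows up afterwards. An untested variant of the config for checking whether the crash is tied to that recording path (everything else in the script unchanged):

# Hypothetical variant: same experiment, but with env recording disabled, so the
# gym Monitor wrapper and its video recorder (the pyglet import) are never used.
config = {
    "env": "CartPole-v0",
    "num_workers": 3,
    "record_env": False,  # skip episode recording
    "num_gpus": 0,
    "framework": "tf",
}

Installing pyglet (pip install pyglet, as gym's hint in the traceback suggests) would be the other way to exercise the same path with recording still enabled.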
Issue Severity
High: It blocks me from completing my task.