Best of luck to you in monetizing your efforts; I hope this good publicity will help your cause. Thank you very much for open-sourcing reference implementations of state-of-the-art reinforcement learning algorithms.
One thing that would make playing with this tech more interesting to me and other newcomers is a guide on how to create a new environment for Gym or Universe: a sort of crash course on what steps need to be taken to apply your algorithms to my existing problems.
Thanks for your kind words, and thanks for the suggestion. I agree it makes sense to provide information on how to connect your problem space to our library. We have more blog posts on the roadmap and might add that one as well (the documentation used to cover this, but that section is outdated as of now). Until then, I would suggest you take a look at the source of our OpenAI Gym connector:
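For orientation, here is a minimal sketch of the adapter pattern such a connector uses. The class name `GymAdapter` and the exact method names and return ordering are illustrative assumptions, not the library's actual API; read the real connector source for the version you use.

```python
# Minimal sketch of a Gym connector (adapter pattern). The class name
# GymAdapter and the method/property names are illustrative assumptions,
# not the library's actual API -- check the real connector source.
import gym


class GymAdapter(object):
    """Wraps a gym.Env behind the interface a runner expects."""

    def __init__(self, gym_id):
        self.gym = gym.make(gym_id)

    def reset(self):
        # Start a new episode and return the initial observation.
        return self.gym.reset()

    def execute(self, action):
        # Apply one action; gym's step() returns (obs, reward, done, info).
        # The return ordering expected by the runner is an assumption here.
        state, reward, terminal, _ = self.gym.step(action)
        return state, terminal, reward

    @property
    def states(self):
        # Describe the observation space (Box observations assumed) so the
        # agent can size its network inputs.
        return dict(shape=self.gym.observation_space.shape, type='float')

    @property
    def actions(self):
        # Describe the action space (discrete case shown).
        return dict(type='int', num_actions=self.gym.action_space.n)

    def close(self):
        self.gym.close()
```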
I'm currently working on an RL project based on an OpenAI Gym environment and have been reviewing the different frameworks available. So far I’ve come across:
- OpenAI Baselines (more a collection of algorithms than a framework)
- Keras-RL (looked ideal but has been abandoned)
- TensorFlow Agents (an 'official'? TensorFlow library, but very basic: only one algorithm at present)
- rllab (Developed by OpenAI people but seems to be abandoned)
- OpenAI Lab (?)
- TensorForce
My main concerns are: 1. Soundness of the algo implementations. 2. Modularity, ease-of-use, compatibility.
I first looked at Baselines, as it seemed to best address the first concern, but ran into frustrations when, for example, the DeepQ implementation didn't work if my Gym's action_space was a Tuple space. I am working with a team unfamiliar with RL, so I want something that is as plug-and-play as possible, like Keras. So far TensorForce looks promising. Can anyone add anything more? Thanks!
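To make that failure concrete, the kind of composite action space I mean looks something like this (the exact member spaces are just an example):

```python
import gym.spaces as spaces

# A composite action space: one discrete choice plus a continuous knob.
# Baselines' DeepQ expects a single Discrete action space, so a Tuple
# like this didn't work for me.
action_space = spaces.Tuple((
    spaces.Discrete(3),
    spaces.Box(low=-1.0, high=1.0, shape=(1,)),
))
```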
At least in terms of integration, TensorForce aims to be a "plug and play" library. However, RL is not at a stage right now where you can just plug an algorithm into any kind of problem and expect it to learn. Hyperparameter tuning is always necessary.
Still, TensorForce does provide pluggable implementations of state-of-the-art algorithms as well as runner utilities and environment abstractions to make it easy to connect your learning problem to it.
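For illustration, a rough sketch of how those pieces fit together. The import paths and constructor arguments below are assumptions loosely modeled on the 0.x quickstart and may differ in the release you install, so treat this as a shape rather than a recipe:

```python
# Rough sketch of wiring agent, environment, and runner together.
# Import paths and constructor arguments are assumptions (loosely
# modeled on the tensorforce 0.x quickstart) -- consult the docs
# for the release you actually install.
from tensorforce.agents import PPOAgent                # assumed path
from tensorforce.contrib.openai_gym import OpenAIGym   # assumed path
from tensorforce.execution import Runner               # assumed path

# The environment abstraction wraps the Gym environment.
environment = OpenAIGym('CartPole-v0')

# The agent is configured from the environment's state/action specs.
agent = PPOAgent(
    states_spec=environment.states,
    actions_spec=environment.actions,
    network_spec=[dict(type='dense', size=32),
                  dict(type='dense', size=32)],
)

# The runner drives the agent-environment interaction loop.
runner = Runner(agent=agent, environment=environment)


def episode_finished(r):
    # Log progress every 100 episodes; returning True keeps training going.
    if r.episode % 100 == 0:
        print('Episode {}: reward {}'.format(r.episode, r.episode_rewards[-1]))
    return True


runner.run(episodes=3000, max_episode_timesteps=200,
           episode_finished=episode_finished)
```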