This repository contains the implementation accompanying our work comparing several deep reinforcement learning frameworks: Dopamine, Horizon, and Ray.
- Install Docker following the instructions from here.
- Install `nvidia-docker` from here to run experiments with GPU inside the container.
- Build the necessary Docker images by executing the following script inside the repository folder: `bash ./scripts/docker/init.sh`.
- Once the images are built, start the Postgres and project containers by running `bash ./scripts/docker/start.sh`. When the script is done, you'll have access to an interactive terminal inside the project container.
- Once inside the container, you can validate the setup by running `bash ./scripts/test.sh`.
- Alternatively, proceed with the experiments by executing `bash ./scripts/evaluation/{environment}/eval_all.sh` for the complete evaluation, or `bash ./scripts/evaluation/{environment}/eval_{framework}.sh` to evaluate a single framework.
- Run `tensorboard --logdir=results` inside the container and view the TensorBoard summary of the evaluation results in your local browser at `localhost:6006`.
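The Docker steps above can be collected into a single script. This is a sketch, not a repository file: the `RUN` guard is an addition of this sketch and defaults to `echo`, so the commands are only printed; clear it (`RUN=""`) to execute them for real, which requires Docker and `nvidia-docker` to be installed.

```shell
# Dry-run recap of the Docker workflow above. RUN defaults to
# `echo` so each command is printed rather than executed; set
# RUN="" to run the steps for real.
RUN="${RUN:-echo}"

$RUN bash ./scripts/docker/init.sh    # build the Docker images
$RUN bash ./scripts/docker/start.sh   # start Postgres + project containers

# The remaining steps are meant to be run inside the project container:
$RUN bash ./scripts/test.sh           # validate the setup
$RUN tensorboard --logdir=results     # summaries served at localhost:6006
```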
To set up the project locally without Docker:

- Install Anaconda from here (make sure to download the Python 3 version). Leave Anaconda's installation directory at the default (`home/miniconda3`).
- Check Anaconda's version by executing `conda -V`. If the version is `<=4.6.8`, run `conda update conda`.
- Create the project environment by running `bash ./scripts/setup_local_env.sh`.
- You can validate your setup by running `bash ./scripts/test.sh`.
- If you'd like to evaluate the frameworks against Park's Query Optimizer environment, perform the following extra steps:
  - edit the `/etc/hosts` file and add the following alias: `127.0.0.1 docker-pg`;
  - edit the `/etc/hosts` file and add the following alias: `127.0.0.1 drl-fw`;
  - build the Postgres Docker image used by the environment: `docker build -t pg park/query-optimizer/docker/`;
  - start the Postgres container: `docker start docker-pg || docker run --name docker-pg -p 0.0.0.0:5432:5432 --net drl-net --privileged -d pg`.
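The two `/etc/hosts` edits above can be scripted idempotently. This is a sketch, not a repository script: `add_alias` and `HOSTS_FILE` are hypothetical names introduced here, and the target defaults to a local scratch file so the sketch is safe to run as-is; point `HOSTS_FILE` at `/etc/hosts` (as root) to apply the real aliases.

```shell
# Idempotently add the host aliases required by the Query Optimizer
# environment. HOSTS_FILE defaults to a scratch file for safety;
# use HOSTS_FILE=/etc/hosts (with root privileges) for the real setup.
HOSTS_FILE="${HOSTS_FILE:-./hosts.local}"
touch "$HOSTS_FILE"

add_alias() {
  # append "ip name" only if the alias is not already present
  grep -qw "$2" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}

add_alias 127.0.0.1 docker-pg
add_alias 127.0.0.1 drl-fw
add_alias 127.0.0.1 docker-pg   # repeated call is a no-op
```

The same `A || B` short-circuit pattern is what makes the `docker start docker-pg || docker run ...` step above idempotent: the fallback only runs when starting the existing container fails.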
- Run experiments for a specific environment by executing `bash ./scripts/evaluation/{environment}/eval_all.sh`. Alternatively, you can run experiments for individual frameworks by running `bash ./scripts/evaluation/{environment}/eval_{framework}.sh`.
- View evaluation results in TensorBoard (`localhost:6006`) after running `tensorboard --logdir=results`. You may need to activate the proper environment first (`conda activate drl-frameworks-env`).
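The per-framework scripts can also be looped over. A sketch under stated assumptions: the environment name `cartpole` and the lowercase framework names `dopamine`, `horizon`, `ray` are guesses derived from the frameworks listed at the top of this README, not verified script names — substitute the actual directory and script names from the repository.

```shell
# Hypothetical loop over the per-framework evaluation scripts.
# "cartpole" and the lowercase framework names are assumptions;
# check ./scripts/evaluation/ for the real names.
environment="cartpole"
for framework in dopamine horizon ray; do
  script="./scripts/evaluation/${environment}/eval_${framework}.sh"
  echo "would run: bash $script"   # replace echo with bash to execute
done
```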