# Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access

**Repository Path**: helloMRDJ/Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access

## Basic Information

- **Project Name**: Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access
- **Description**: Uses multi-agent deep Q-learning with LSTM cells (DRQN) to train multiple users in a cognitive radio network to share a scarce resource (channels) equally, without communicating with each other
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2020-04-02
- **Last Updated**: 2021-12-23

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access

## Dependencies

1. tensorflow>1.0
2. matplotlib
3. python 3.x (Windows)

### To install TensorFlow, follow the link

[tensorflow - windows](https://www.tensorflow.org/install/install_windows)

I recommend installing with Anaconda.

### To train the DQN, run in a terminal

```bash
git clone https://github.com/shkrwnd/Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access.git
cd Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access
python train.py
```

To understand the code, I have provided Jupyter notebooks:

1. How to use environment.ipynb
2. How to generate states.ipynb

To run the notebooks, run in a terminal:

```bash
jupyter notebook
```

Your default browser will open; just open the .ipynb files and run them.

This work is inspired by the paper:

```
O. Naparstek and K. Cohen, "Deep multi-user reinforcement learning for dynamic
spectrum access in multichannel wireless networks," in Proc. IEEE Global
Communications Conference (GLOBECOM), Dec. 2017.
```
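## What the agents are learning (illustrative)

In each time slot, every user independently picks one of the shared channels (or stays silent); a transmission succeeds only if no other user picked the same channel. The toy step function below is a minimal sketch of that collision rule under assumed conventions; it is not the environment from `How to use environment.ipynb`, just an illustration of what the agents must learn to avoid without communicating:

```python
# Toy multichannel access step: reward 1.0 for a lone transmitter on a
# channel, 0.0 on a collision or when idle. All conventions here are
# illustrative assumptions, not the repository's actual environment.
from collections import Counter

def step(actions):
    """actions[i] is user i's chosen channel (an int >= 0), or -1 to stay idle."""
    load = Counter(a for a in actions if a >= 0)  # transmitters per channel
    # A user earns a reward only if it transmitted and nobody else chose its channel.
    return [1.0 if a >= 0 and load[a] == 1 else 0.0 for a in actions]

# Three users, two channels: users 0 and 1 collide on channel 0, user 2 succeeds.
print(step([0, 0, 1]))  # [0.0, 0.0, 1.0]
```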
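## A DRQN in miniature (illustrative)

The "DQN with LSTM cells" from the description is typically built as an LSTM run over each user's own observation history, with a dense head that outputs one Q-value per action (transmit on a given channel, or stay silent); the recurrence is what lets a user act on its past observations alone. Below is a minimal TensorFlow 1.x sketch of such a network; the layer sizes, observation encoding, and names are all assumptions for illustration, not the code in this repository:

```python
# Minimal DRQN-style Q-network sketch (TensorFlow 1.x). Hypothetical
# sizes and names; not the network defined in train.py.
import numpy as np
import tensorflow as tf

NUM_CHANNELS = 2                   # assumed number of shared channels
OBS_SIZE = 2 * NUM_CHANNELS + 2    # assumed per-step observation encoding
NUM_ACTIONS = NUM_CHANNELS + 1     # pick a channel, or stay silent
LSTM_UNITS = 32

# [batch, time, features]: each user feeds its own observation sequence.
obs = tf.placeholder(tf.float32, [None, None, OBS_SIZE], name="obs")

# An LSTM over the observation history gives the agent memory of past slots.
cell = tf.nn.rnn_cell.LSTMCell(LSTM_UNITS)
outputs, _ = tf.nn.dynamic_rnn(cell, obs, dtype=tf.float32)

# Q-values are read from the LSTM output at the last time step.
q_values = tf.layers.dense(outputs[:, -1, :], NUM_ACTIONS, name="q_values")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    dummy = np.zeros((1, 5, OBS_SIZE), dtype=np.float32)  # one 5-step history
    print(sess.run(q_values, {obs: dummy}))  # one Q-value per action
```

Training such a network would then follow the usual DQN recipe (epsilon-greedy action selection, experience replay over sequences, a target network), which is what `train.py` orchestrates.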