Deemah Tashman, a postdoctoral fellow at Polytechnique Montreal and a member of the Lincs lab, has had her journal paper "Optimizing Cognitive Networks: Reinforcement Learning Meets Energy Harvesting Over Cascaded Channels" published in the IEEE Systems Journal, in collaboration with Soumaya Cherkaoui and Walaa Hamouda.
This article presents a reinforcement learning-based approach to improving the physical layer security of an underlay cognitive radio network over cascaded channels. Such channels arise in highly mobile networks, for example cognitive vehicular networks (CVNs), and in the considered scenario an eavesdropper attempts to intercept the communications between secondary users (SUs). The SU receiver has full-duplex and energy harvesting capabilities, which it uses to generate jamming signals that confound the eavesdropper and enhance security. In addition, the SU transmitter harvests energy from ambient radio frequency signals to power subsequent transmissions to its intended receiver. To optimize the privacy and reliability of the SUs in a CVN, a deep Q-network (DQN) strategy is employed, with one DQN agent assigned to each SU transmitter. Each SU must determine the optimal transmission power and decide, in every time slot, whether to harvest energy or transmit messages so as to maximize its secrecy rate. A DQN approach is then proposed to maximize the throughput of the SUs while respecting the interference threshold acceptable at the primary user's receiver. According to the findings, the authors' proposed strategy outperforms two baseline strategies in terms of both security and reliability.
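To illustrate the kind of decision problem the paper describes, the toy sketch below has an SU transmitter choose, in each time slot, between harvesting energy and transmitting at one of several power levels, subject to an interference cap at the primary receiver. This is not the authors' implementation: it uses plain tabular Q-learning as a stand-in for the paper's deep Q-network, and the reward, power levels, battery model, and interference threshold are all hypothetical placeholders for the true secrecy-rate objective.

```python
import random

# Illustrative sketch only: tabular Q-learning stands in for the paper's DQN,
# and every constant below is a hypothetical placeholder.
random.seed(0)

POWERS = [0.5, 1.0]                      # candidate transmit powers (arbitrary units)
ACTIONS = ["harvest"] + [f"tx@{p}" for p in POWERS]
INTERFERENCE_CAP = 0.8                   # max power tolerated at the primary receiver
BATTERY_LEVELS = 4                       # discretized battery state

def step(battery, action):
    """Hypothetical environment: returns (next_battery, reward)."""
    if action == "harvest":
        return min(battery + 1, BATTERY_LEVELS - 1), 0.0
    power = float(action.split("@")[1])
    if battery == 0 or power > INTERFERENCE_CAP:
        return battery, -1.0             # no stored energy, or interference violated
    # Toy secrecy-rate proxy: reward grows with the (admissible) transmit power.
    return battery - 1, power

# Q-table over (battery state, action) pairs; a DQN would replace this table
# with a neural network approximating Q(state, action).
Q = {(b, a): 0.0 for b in range(BATTERY_LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2        # learning rate, discount, exploration

battery = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(battery, a)])
    nxt, reward = step(battery, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(battery, action)] += alpha * (reward + gamma * best_next - Q[(battery, action)])
    battery = nxt

# With an empty battery, the learned policy should choose to harvest first.
policy_at_empty = max(ACTIONS, key=lambda a: Q[(0, a)])
print(policy_at_empty)
```

In this toy setup the agent learns to harvest when the battery is empty and to transmit at the highest admissible power otherwise, mirroring the harvest-or-transmit trade-off the paper optimizes.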
For more details about the paper, visit: Optimizing Cognitive Networks: Reinforcement Learning Meets Energy Harvesting Over Cascaded Channels | IEEE Journals & Magazine | IEEE Xplore