Cognitive Radio Power Control Based on Deep Reinforcement Learning

CHEN Ling-ling, HUANG Fu-sen, YU Yue

Computer & Telecommunication ›› 2024, Vol. 1 ›› Issue (10) : 10. DOI: 10.15966/j.cnki.dnydx.2024.10.006


Abstract

With the rapid development of technology, the demand for wireless spectrum keeps growing. Because spectrum resources are limited, using them efficiently has become a major challenge in the radio field. To address this issue, we establish a cognitive wireless network model in which primary and secondary users share the same spectrum resources and operate in a non-cooperative manner, with the goal of improving the throughput of secondary users. We then apply the SumTree Sampling Deep Q-Network (ST-DQN) algorithm to power control, which ensures both priority and diversity in sample selection. Finally, a series of simulation experiments is conducted in Python to compare the reward, loss function, and secondary-user throughput of ST-DQN with those of traditional Q-learning and free-exploration algorithms. The results show that the ST-DQN algorithm performs better in power control.
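As named in the abstract, ST-DQN pairs a Deep Q-Network with a SumTree so that replay transitions are drawn with probability proportional to a stored priority (typically derived from the TD error) rather than uniformly. The paper does not include code; the sketch below is a minimal, illustrative SumTree in Python, with class and method names (SumTree, add, update, sample) chosen here for illustration rather than taken from the paper.

import random

class SumTree:
    """Minimal SumTree: leaves hold sample priorities, internal nodes hold
    partial sums, so drawing a uniform value in [0, total) selects a leaf
    with probability proportional to its priority."""

    def __init__(self, capacity):
        self.capacity = capacity            # maximum number of stored transitions
        self.tree = [0.0] * (2 * capacity)  # binary tree stored as an array, root at index 1
        self.data = [None] * capacity       # transitions (s, a, r, s')
        self.write = 0                      # next leaf slot to overwrite
        self.size = 0

    def total(self):
        return self.tree[1]                 # root holds the sum of all priorities

    def add(self, priority, transition):
        idx = self.write + self.capacity    # leaf index for this slot
        self.data[self.write] = transition
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def update(self, idx, priority):
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        idx //= 2
        while idx >= 1:                     # propagate the change up to the root
            self.tree[idx] += change
            idx //= 2

    def sample(self, value):
        idx = 1
        while idx < self.capacity:          # descend until a leaf is reached
            left = 2 * idx
            if value <= self.tree[left]:
                idx = left
            else:
                value -= self.tree[left]
                idx = left + 1
        return self.tree[idx], self.data[idx - self.capacity]

# Usage: store transitions with priorities, then draw a prioritized minibatch.
tree = SumTree(capacity=8)
for i in range(8):
    tree.add(priority=float(i + 1), transition=("state", "power_level", i, "next_state"))
batch = [tree.sample(random.uniform(0.0, tree.total())) for _ in range(4)]

In a training loop of the kind the abstract describes, each sampled transition's priority would typically be refreshed from its latest TD error after the network update, so informative samples are replayed more often while low-priority ones still retain a nonzero chance of selection.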

Key words

Deep reinforcement learning / Cognitive radio / Power control

Cite this article

CHEN Ling-ling, HUANG Fu-sen, YU Yue. Cognitive Radio Power Control Based on Deep Reinforcement Learning[J]. Computer & Telecommunication, 2024, 1(10): 10. https://doi.org/10.15966/j.cnki.dnydx.2024.10.006
