Lecturer, Department of Electronics & Telecommunications Engineering
Email: dovinhquang@tdtu.edu.vn
Office: M306
Education
- Ph.D. in Electrical Engineering, University of Ulsan, Ulsan City, South Korea, 2017 – 2020.
- M.E. in Electronic and Computer Engineering, RMIT University, Melbourne, Australia, 2011 – 2013.
- B.E. in Electrical and Electronics Engineering, Ho Chi Minh City University of Technology, Vietnam, 2004 – 2009.
Professional experience
- 2009 – 2011: Operation and Maintenance Engineer, Vietnam Telecom Service Company (Vinaphone), Vietnam.
- 2013 – 2014: Test Engineer, Jabil Vietnam Company Limited, Vietnam.
- 2014 – 2017: Lecturer, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Vietnam.
- 2020 – 2021: Postdoctoral Research Fellow, Multimedia Communication Systems Laboratory, University of Ulsan, South Korea.
- 2021 – 2023: Postdoctoral Research Fellow, Artificial Intelligence Research Center, Pusan National University, South Korea.
- 2023 – Present: Lecturer, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Vietnam.
Areas of Interest
Deep learning, deep reinforcement learning, energy harvesting, radio resource management, cognitive radio networks, mobile edge computing, unmanned aerial vehicles, non-orthogonal multiple access, and reconfigurable intelligent surfaces.
Publications
A. Journals
- S.-G. Jeong, Q. V. Do, and W.-J. Hwang, “Short-term photovoltaic power forecasting based on hybrid quantum gated recurrent unit,” in ICT Express, Dec. 2023.
- Q. V. Do, Q.-V. Pham, and W.-J. Hwang, “Deep Reinforcement Learning for Energy-Efficient Federated Learning in UAV-Enabled Wireless Powered Networks,” in IEEE Communications Letters, vol. 26, no. 1, pp. 99-103, Jan. 2022.
- Q. V. Do and I. Koo, “Deep Reinforcement Learning Based Dynamic Spectrum Competition in Green Cognitive Virtualized Networks,” in IEEE Access, vol. 9, pp. 52193-52201, Mar. 2021.
- Q. V. Do and I. Koo, “A Transfer Deep Q-Learning Framework for Resource Competition in Virtual Mobile Networks With Energy-Harvesting Base Stations,” in IEEE Systems Journal, vol. 15, no. 1, pp. 319-330, Mar. 2021.
- P. Viet Tuan, P. Ngoc Son, T. Trung Duy, S. Q. Nguyen, V. Q. B. Ngo, Q. V. Do, and I. Koo, “Optimizing a Secure Two-Way Network with Non-Linear SWIPT, Channel Uncertainty, and a Hidden Eavesdropper,” in Electronics, vol. 9, no. 8, p. 1222, Jul. 2020.
- Q. V. Do and I. Koo, “Actor-critic deep learning for efficient user association and bandwidth allocation in dense mobile networks with green base stations,” in Wireless Networks, Nov. 2019.
- Q. V. Do, T. N. K. Hoan and I. Koo, “Optimal Power Allocation for Energy-efficient Data Transmission Against Full-duplex Active Eavesdroppers in Wireless Sensor Networks,” in IEEE Sensors Journal, vol. 19, no. 13, pp. 5333-5346, Jul. 2019.
- Q. V. Do, V. H. Vu, and I. Koo, “An efficient bandwidth allocation scheme for hierarchical cellular networks with energy harvesting: an actor-critic approach,” in International Journal of Electronics, vol. 106, no. 10, pp. 1543-1566, Apr. 2019.
- Q. V. Do and I. Koo, “Learning Frameworks for Cooperative Spectrum Sensing and Energy-efficient Data Protection in Cognitive Radio Networks,” in Applied Sciences, vol. 8, no. 5, p. 722, May 2018.
- Q. V. Do, T.-N.-K. Hoan, and I. Koo, “Energy-Efficient Data Encryption Scheme for Cognitive Radio Networks,” in IEEE Sensors Journal, vol. 18, no. 5, pp. 2050-2059, Mar. 2018.
- Q. V. Do and I. Koo, “FPGA Implementation of LSB-Based Steganography,” in Journal of Information and Communication Convergence Engineering, vol. 15, no. 3, pp. 151-159, Sep. 2017.
B. Conferences
- S.-G. Jeong, Q. V. Do, H.-J. Hwang, M. Hasegawa, H. Sekiya, and W.-J. Hwang, “UWB NLOS/LOS Classification Using Hybrid Quantum Convolutional Neural Networks,” in 2023 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Busan, Republic of Korea, Oct. 2023, pp. 1-2.
- Q. V. Do and I. Koo, “Dynamic Bandwidth Allocation Scheme for Wireless Networks with Energy Harvesting Using Actor-Critic Deep Reinforcement Learning,” 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Okinawa, Japan, 2019, pp. 138-142.