Energy-Efficient Sensor Calibration Based on Deep Reinforcement Learning

AUTHORS

Akm Ashiquzzaman, School of Electronics and Computer Engineering, Chonnam National University, Gwangju, South Korea
Tai-Won Um, Department of Information and Communication Engineering, Chosun University, Gwangju, South Korea
Jinsul Kim, School of Electronics and Computer Engineering, Chonnam National University, Gwangju, South Korea

ABSTRACT

The ongoing development of IoT-based sensor networks is opening a new era of network communication with wide-ranging and diverse applications. Sensor calibration to reduce power usage is essential for lowering the energy consumption of sensors and improving device efficiency. Reinforcement learning (RL) has received much attention from researchers and is now widely applied in many fields to achieve intelligent automation. Although various types of sensors are widely used in the IoT field, little research has been conducted on resource optimization. In this research, a new style of power conservation is explored with the help of RL to create a new generation of IoT devices with calibrated power sources that maximize resource utilization. Our proposed model, based on deep Q-learning (DQN), enables IoT sensors to maximize their resource utilization. This research focuses solely on energy-efficient sensor calibration, and simulation results demonstrate the performance of the proposed method. The proposed model achieves a state-of-the-art accuracy of 96% in predicting and learning the game, providing a novel solution for efficient sensor calibration.
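To illustrate the idea behind the approach, the sketch below trains a tabular Q-learning agent on a hypothetical toy sensor model: the agent chooses between a low-power and a high-power sampling mode, trading measurement quality against battery drain. The environment, states, actions, and reward values here are illustrative assumptions, not the paper's model, and a tabular Q-table stands in for the paper's DQN (which would replace the table with a neural network):

```python
import random

STATES = range(3)       # battery level: 0 = low, 1 = medium, 2 = high
ACTIONS = (0, 1)        # 0 = low-power sampling, 1 = high-power sampling
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical transition model: return (next_state, reward)."""
    if action == 1:                       # high-power: better data, drains battery
        reward = 1.0 if state > 0 else -1.0
        next_state = max(state - 1, 0)
    else:                                 # low-power: modest data, battery recovers
        reward = 0.3
        next_state = min(state + 1, 2)
    return next_state, reward

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        for _ in range(20):               # short episode horizon
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            # Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
```

After training, the learned policy should avoid high-power sampling when the battery is low (state 0), since that action is penalized there, which is the kind of energy-aware calibration behavior the paper targets.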


KEYWORDS

Deep Learning, Optimization, Reinforcement Learning, Simulation.


CITATION

  • APA:
    Ashiquzzaman, A., Um, T. W., & Kim, J. (2019). Energy-Efficient Sensor Calibration Based on Deep Reinforcement Learning. International Journal of Artificial Intelligence and Applications for Smart Devices, 7(1), 7-14. http://dx.doi.org/10.21742/IJAIASD.2019.7.1.02
  • Harvard:
    Ashiquzzaman, A., Um, T. W. and Kim, J. (2019). "Energy-Efficient Sensor Calibration Based on Deep Reinforcement Learning". International Journal of Artificial Intelligence and Applications for Smart Devices, 7(1), pp. 7-14. doi: http://dx.doi.org/10.21742/IJAIASD.2019.7.1.02
  • IEEE:
    [1] A. Ashiquzzaman, T. W. Um and J. Kim, "Energy-Efficient Sensor Calibration Based on Deep Reinforcement Learning," International Journal of Artificial Intelligence and Applications for Smart Devices, vol. 7, no. 1, pp. 7-14, Nov. 2019.
  • MLA:
    Ashiquzzaman, Akm, Tai-Won Um, and Jinsul Kim. "Energy-Efficient Sensor Calibration Based on Deep Reinforcement Learning." International Journal of Artificial Intelligence and Applications for Smart Devices, vol. 7, no. 1, Nov. 2019, pp. 7-14, doi:http://dx.doi.org/10.21742/IJAIASD.2019.7.1.02

ISSUE INFO

  • Volume 7, No. 1, 2019
  • ISSN (print): 2288-6710
  • ISSN (online): 2383-7292
  • Published: Nov. 2019
