
References

1. M. J. Yeo and Y. P. Kim, “Trends of the PM10 Concentrations and High PM10 Concentration Cases in Korea,” Journal of Korean Society for Atmospheric Environment, vol. 35, no. 2, pp. 249-264, April 2019.
2. J. Baek, S. Lee, B. Lee, D. Kang, M. Yeo, and K. Kim, “A Study on the Relationship between the Indoor and Outdoor Particulate Matter Concentration by Infiltration in the Winter,” Journal of the Architectural Institute of Korea, vol. 31, no. 9, pp. 137-144, September 2015.
3. X. Querol, T. Moreno, A. Karanasiou, C. Reche, A. Alastuey, M. Viana, O. Font, E. de Miguel, and M. Capdevila, “Variability of levels and composition of PM10 and PM2.5 in the Barcelona metro system,” Atmospheric Chemistry and Physics, vol. 12, no. 11, pp. 5055-5076, 2012.
4. T. Moreno, N. Perez, C. Reche, V. Martins, E. de Miguel, M. Capdevila, S. Centelles, M. C. Minguillon, F. Amato, A. Alastuey, X. Querol, and W. Gibbons, “Subway platform air quality: assessing the influences of tunnel ventilation, train piston effect and station design,” Atmospheric Environment, vol. 92, pp. 461-468, 2014.
5. H. Lim, T. Yin, and Y. Kwon, “A Study on the Optimization of the Particulate Matter Reduction Device in Underground Subway Station,” 2019 Spring Conference of the Korean Institute of Industrial Engineers, p. 3786, April 2019.
6. S. Park, Y. Lee, Y. Yoon, M. Oh, M. Kim, and S. Kwon, “Prediction of Particulate Matter (PM) using Machine Learning,” Proceedings of the Korea Society for Railway Conference, pp. 499-500, May 2018.
7. Y. Kim, B. Kim, and S. Ahn, “Application of spatiotemporal transformer model to improve prediction performance of particulate matter concentration,” Journal of Intelligent Information System, vol. 28, no. 1, pp. 329-352, 2022.
8. J. Kim, K. Lee, and J. Bae, “Construction of Real-time Measurement and Device of Reducing Fine Dust in Urban Railway,” Proceedings of the Korea Society for Railway Conference, pp. 101-102, 2020.
9. Y. Lee, Y. Kim, H. Lee, Y. J. Kim, B. H., and H. Kim, “Analysis of the Correlation between the Concentration of PM2.5 in the Outside Atmosphere and the Concentration of PM2.5 in the Subway Station,” Journal of Korean Society for Atmospheric Environment, vol. 38, no. 1, pp. 1-12, 2022.
10. M. S. Kim, “Research & Trends for Converged AI Technology based on Unsupervised Reinforcement Learning,” Journal of Korean Society of Computer Information, vol. 28, no. 1, June 2020.
11. R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, Machine Learning: An Artificial Intelligence Approach, Springer, 2013 (reprint of the 1983 edition).
12. K. Kwon, S. Hong, J. Heo, H. Jung, and J. Park, “Reinforcement Learning-based HVAC Control Agent for Optimal Control of Particulate Matter in Railway Stations,” The Transactions of the Korean Institute of Electrical Engineers, vol. 70, no. 10, pp. 1594-1600, 2021.
13. R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed., The MIT Press, 2018.
14. J. R. Norris, Markov Chains, Cambridge University Press, 1997.
15. M. Minsky and S. A. Papert, Perceptrons: An Introduction to Computational Geometry, MIT Press, 1987.
16. C. M. Bishop, Neural Networks for Pattern Recognition, Clarendon Press, Oxford, 1995.
17. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with Deep Reinforcement Learning,” arXiv preprint arXiv:1312.5602, 2013.
18. B. Recht, “A tour of reinforcement learning: The view from continuous control,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 2, no. 1, pp. 253-279, 2019.
19. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529-533, February 2015.
20. L.-J. Lin, “Self-improving reactive agents based on reinforcement learning, planning and teaching,” Machine Learning, vol. 8, no. 3, pp. 293-321, May 1992.
21. Keras, https://github.com/fchollet/keras, accessed 2021-08-27.