COMPARISON OF RANDOM FOREST AND XGBOOST MODELS FOR PREDICTING CRIMES AGAINST DECENCY IN WEST JAVA PROVINCE
References
L. Mauro and G. Carmeci, “A poverty trap of crime and unemployment,” Review of Development Economics, vol. 11, no. 3, pp. 450–462, 2007.
E. Gracia, A. López-Quílez, M. Marco, S. Lladosa, and M. Lila, “Exploring neighborhood influences on small-area variations in intimate partner violence risk: A Bayesian random-effects modeling approach,” International Journal of Environmental Research and Public Health, vol. 11, no. 1, pp. 866–882, 2014.
N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: Synthetic minority over-sampling technique,” Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.
J. Han and M. Kamber, Data Mining: Concepts and Techniques, 2nd ed. San Francisco, CA: Morgan Kaufmann, 2006.
X. Zhang, L. Liu, M. Lan, G. Song, L. Xiao, and J. Chen, “Interpretable machine learning models for crime prediction,” Computers, Environment and Urban Systems, vol. 94, p. 101789, 2022.
S. Parthasarathy and A. R. Lakshminarayanan, “Naïve Bayes–AdaBoost Ensemble Model for Classifying Sexual Crimes,” in Data Intelligence and Cognitive Informatics, Springer, 2022, pp. 393–405.
A. Apicella, F. Isgrò, R. Prevete, and G. Tamburrini, “Middle-level features for the explanation of classification systems by sparse dictionary methods,” International Journal of Neural Systems, vol. 30, no. 08, p. 2050040, 2020.
L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
T. Hastie, R. Tibshirani, and J. H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. New York, NY: Springer, 2009.
J. Ali, R. Khan, N. Ahmad, and I. Maqsood, “Random Forests and Decision Trees,” International Journal of Computer Science Issues (IJCSI), vol. 9, Sep. 2012.
B. Sartono and U. D. Syafitri, “Metode pohon gabungan: Solusi pilihan untuk mengatasi kelemahan pohon regresi dan klasifikasi tunggal,” in Forum Statistika dan Komputasi, 2010, vol. 15, no. 1.
T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785–794.
S. M. Lundberg, G. G. Erion, and S.-I. Lee, “Consistent Individualized Feature Attribution for Tree Ensembles.” arXiv, 2018. doi: 10.48550/ARXIV.1802.03888.
D. R. Velez et al., “A balanced accuracy function for epistasis modeling in imbalanced datasets using multifactor dimensionality reduction,” Genetic Epidemiology, vol. 31, no. 4, pp. 306–315, 2007.
G. King and L. Zeng, “Logistic regression in rare events data,” Political Analysis, vol. 9, no. 2, pp. 137–163, 2001.
E. F. Schisterman, D. Faraggi, B. Reiser, and J. Hu, “Youden Index and the optimal threshold for markers with mass at zero,” Statistics in Medicine, vol. 27, no. 2, pp. 297–315, 2008.
Y. Ma and H. He, Eds., Imbalanced Learning: Foundations, Algorithms, and Applications. Wiley, 2013.
DOI: http://dx.doi.org/10.26798/jiko.v7i2.799
Copyright (c) 2023 Adlina Khairunnisa