This article is part of the supplement: Proceedings of the 23rd International Conference on Genome Informatics (GIW 2012)
Phenotype prediction from genome-wide association studies: application to smoking behaviors
1 Interdisciplinary Program in Bioinformatics, Seoul National University, Seoul, 151-742, Korea
2 Center for Immunology and Pathology, National Institute of Health, Osong, Chungchungbuk-do, 363-951, Korea
3 Center for Genome Science, National Institute of Health, Osong, Chungchungbuk-do, 363-951, Korea
4 Department of Statistics, Seoul National University, Seoul, 151-742, Korea
BMC Systems Biology 2012, 6(Suppl 2):S11. doi:10.1186/1752-0509-6-S2-S11. Published: 12 December 2012
The success of genome-wide association studies (GWAS) has drawn attention to the personal genome and to clinical applications such as diagnosis and disease risk prediction. However, previous prediction studies using known disease-associated loci have not been successful (area under the curve, AUC, of 0.55~0.68 for type 2 diabetes and coronary heart disease). Several factors account for this poor predictability: the small number of known disease-associated loci, simple analyses that do not account for phenotypic complexity, and the limited number of features used for prediction.
In this research, we thoroughly investigated the effects of feature selection and prediction algorithms on prediction performance. In particular, we considered the following feature selection and prediction methods: regression analysis, regularized regression analysis, linear discriminant analysis, non-linear support vector machine, and random forest. For these methods, we studied the effects of feature selection and of the number of features on prediction. Our investigation was based on the analysis of 8,842 Korean individuals genotyped on the Affymetrix SNP Array 5.0, with smoking behaviors as the prediction target.
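The comparison described above can be sketched with off-the-shelf classifiers. The following is a minimal, hypothetical illustration using scikit-learn on synthetic genotype data (additive 0/1/2 coding); the sample sizes, hyperparameters, and the elastic-net/SVM/LDA/RF/LR instantiations are assumptions for illustration, not the study's actual implementation.

```python
# Illustrative sketch (NOT the paper's pipeline): fit the five classifier
# families compared in the study on synthetic genotype data and score by AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 500, 200                                     # toy scale: individuals x SNPs
X = rng.integers(0, 3, size=(n, p)).astype(float)   # additive genotype coding 0/1/2
# Synthetic binary phenotype driven by the first 5 SNPs plus noise.
y = (X[:, :5].sum(axis=1) + rng.normal(0, 2, n) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR":  LogisticRegression(max_iter=2000),        # simple logistic regression
    "EN":  LogisticRegression(penalty="elasticnet", solver="saga",
                              l1_ratio=0.5, max_iter=5000),  # elastic net
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", probability=True),      # non-linear SVM
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
}

aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, round(aucs[name], 3))
```

The common `predict_proba`/AUC interface is what makes a head-to-head comparison of such heterogeneous classifiers straightforward.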
To assess the effect of feature selection methods on prediction performance, the selected features were used for prediction and the AUC was measured. For feature selection, support vector machine (SVM) and elastic net (EN) outperformed linear discriminant analysis (LDA), random forest (RF), and simple logistic regression (LR). For prediction, SVM showed the best performance by AUC overall. With fewer than 100 SNPs, EN was the best prediction method, whereas SVM was best when more than 400 SNPs were used for prediction.
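The dependence of AUC on panel size can be sketched as follows. This is a hedged stand-in: `SelectKBest` with an F-test plays the role of per-SNP association filtering, and the panel sizes (50, 100, 400) echo the thresholds mentioned above; data and settings are synthetic assumptions, not the study's results.

```python
# Illustrative sketch (NOT the paper's results): filter SNPs by univariate
# association, then compare EN vs SVM AUC as the SNP panel grows.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, p = 600, 500
X = rng.integers(0, 3, size=(n, p)).astype(float)   # additive genotype coding
y = (X[:, :10].sum(axis=1) + rng.normal(0, 3, n) > 10).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

results = {}
for k in (50, 100, 400):                            # small vs large SNP panels
    sel = SelectKBest(f_classif, k=k).fit(X_tr, y_tr)   # univariate SNP ranking
    Xk_tr, Xk_te = sel.transform(X_tr), sel.transform(X_te)
    en = LogisticRegression(penalty="elasticnet", solver="saga",
                            l1_ratio=0.5, max_iter=5000).fit(Xk_tr, y_tr)
    svm = SVC(kernel="rbf", probability=True).fit(Xk_tr, y_tr)
    results[("EN", k)] = roc_auc_score(y_te, en.predict_proba(Xk_te)[:, 1])
    results[("SVM", k)] = roc_auc_score(y_te, svm.predict_proba(Xk_te)[:, 1])
    print(k, round(results[("EN", k)], 3), round(results[("SVM", k)], 3))
```

Sweeping `k` this way is what lets one observe crossovers like the one reported, where one method dominates with small panels and another with large ones.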
Across combinations of feature selection and prediction methods, SVM showed the best performance for both feature selection and prediction.