1 이민식, "A Stock Price Prediction Method Using Category-Neutral Terms: A Text Mining Approach" Korea Intelligent Information Systems Society 23: 123-138, 2017
2 이민식, "Improving the Classification Accuracy of Useful Reviews through Neutrality-Based Selective Word Removal" Korea Intelligent Information Systems Society 22: 129-142, 2016
3 Sahlgren, M., "The distributional hypothesis" 20: 33-53, 2008
4 Joachims, T., "Text categorization with support vector machines" University of Dortmund, 1997
5 Yu, L.C., "Refining word embeddings for sentiment analysis" 545-550, 2017
6 Jolliffe, I.T., "Principal Component Analysis" Springer-Verlag, 1989
7 Duda, R. O., "Pattern Classification" Wiley, 2000
8 Rapp, M., "PMSE dependence on aerosol charge number density and aerosol size" 108: 1-11, 2003
9 Roweis, S.T., "Nonlinear dimensionality reduction by Locally Linear Embedding" 290: 2323-2326, 2000
10 Lewis, D.D., "Naive (Bayes) at forty: The independence assumption in information retrieval" 4-15, 1998
11 Sahami, M., "Learning limited dependence Bayesian classifiers" 334-338, 1996
12 Barkan, O., "Item2Vec: Neural Item Embedding for Collaborative Filtering" 2016
13 Landauer, T.K., "Introduction to Latent Semantic Analysis" 25: 259-284, 1998
14 Deerwester, S., "Indexing by latent semantic analysis" 41: 391-407, 1990
15 Zhu, L., "Improved information gain feature selection method for Chinese text classification based on word embedding" 72-76, 2017
16 Pennington, J., "GloVe: Global Vectors for Word Representation" EMNLP 2014
17 Mika, S., "Fisher discriminant analysis with kernels" 1999
18 Mika, S., "Fisher discriminant analysis with kernels" 1999
19 Peng, H., "Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy" 27: 1226-1238, 2005
20 Lewis, D.D., "Feature selection and feature extraction for text categorization" 212-217, 1992
21 Li, J., "Feature Selection: A Data Perspective" 50: 94:1-94:45, 2017
22 Azhagusundari, B., "Feature Selection based on Information Gain" 2: 18-21, 2013
23 Bojanowski, P., "Enriching Word Vectors with Subword Information" 2017
24 Mikolov, T., "Efficient estimation of word representations in vector space" 2013
25 Frome, A., "DeViSE: A Deep Visual-Semantic Embedding Model" 26: 1-11, 2013
26 Peters, M., "Deep contextualized word representations" NAACL 2018
27 Kim, Y., "Convolutional neural networks for sentence classification" 1746-1751, 2014
28 Barkan, O., "Bayesian Neural Word Embedding" 2017
29 Zhou, P., "Attention-based bidirectional long short-term memory networks for relation classification" 207-213, 2016
30 Zhang, R., "An information gain-based approach for recommending useful product reviews" 26: 419-434, 2011
31 Mohan, P., "A study on impact of dimensionality reduction on Naive Bayes classifier" 10: 2017