RISS (Research Information Sharing Service)


KCI-indexed

      Application of AIG Implemented within CLASS Software for Generating Cognitive Test Item Models


      https://www.riss.kr/link?id=A108320604


Additional Information

Multilingual Abstract

Scale scores for cognitive domains have been used as an important indicator for both academic achievement and clinical diagnosis. In education, for example, the Cognitive Abilities Test (CogAT) has been used to measure students' capability for academic learning, while in clinical settings the Cognitive Impairment Screening Test uses items measuring cognitive ability as a dementia screening test. We demonstrate a procedure for generating cognitive ability test items similar to those in CogAT, although the theory underlying the generation is entirely different. In creating the items, we applied automatic item generation (AIG), which reduces errors in predictions of cognitive ability while attaining higher reliability. We selected two cognitive ability item types: a time-estimation item measuring quantitative reasoning and a paper-folding item measuring visualization. Because CogAT has been widely used as a cognitive measurement test, developing AIG-based cognitive test items will contribute greatly to the education field. Since CLASS is the only LMS that includes AIG technology, we used it as the AIG software for constructing item models. The purpose of this study is to demonstrate the item generation process using AIG implemented within CLASS, along with showing the quantitative and qualitative strengths of AIG. As a result, we confirmed in the quantitative aspect that more than 10,000 items could be generated from a single item model, and in the qualitative aspect that the validity of the items could be assured by a procedure based on ECD and AE. This reliable item generation process based on item models would be key to developing accurate cognitive measurement tests.
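The quantitative result, one item model yielding more than 10,000 items, follows from the combinatorics of a template's variable slots. Below is a minimal Python sketch of template-based item generation; the stem, slot names, and value ranges are hypothetical illustrations, not the models the authors actually built in CLASS.

```python
from itertools import product

# Hypothetical item model for a time-estimation (quantitative reasoning) item.
# The stem has three variable slots; each slot has a fixed set of admissible values.
stem = ("A {vehicle} travels {distance} km at a constant speed of "
        "{speed} km/h. About how many minutes does the trip take?")

slots = {
    "vehicle": ["train", "bus", "car", "ferry", "truck",
                "tram", "van", "scooter", "boat", "drone"],  # 10 values
    "distance": list(range(60, 301, 5)),                     # 49 values
    "speed": list(range(40, 121, 4)),                        # 21 values
}

def generate_items(stem, slots):
    """Instantiate the stem with every combination of slot values."""
    names = list(slots)
    for values in product(*slots.values()):
        binding = dict(zip(names, values))
        # The answer key is computed from the same variables that fill the stem.
        key = round(binding["distance"] / binding["speed"] * 60)
        yield stem.format(**binding), key

items = list(generate_items(stem, slots))
print(len(items))  # 10 * 49 * 21 = 10,290 distinct items
```

With only three modest slots, the model already exceeds 10,000 items; each added slot multiplies the count again, which is why a single well-specified item model can supply a large, parallel item pool.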


References

1 박희주 ; 류현석 ; 권종겸 ; 류지훈, "A paradigm shift in the LMS for online education: A learning-analytics-based LMS with embedded online assessment" Education Research Institute 35 (35): 49-72, 2022

2 Taherdoost, H., "Validity and reliability of the research instrument; how to test the validation of a questionnaire/survey in a research" 2016

      3 Warnimont, C., "The relationship between students' performance on the cognitive abilities test (COGAT) and the fourth and fifth grade reading and math achievement tests in Ohio" Bowling Green State University 2010

      4 Osborne, C., "Statistical calibration: A review" 309-336, 1991

      5 "Riverside Insights"

      6 Carmines, E. G., "Reliability and Validity Assessment" SAGE 1979

7 Wesnes, K., "Practice effects on cognitive tasks: A major problem?" 1 (1): 473-, 2002

      8 Goldberg, T. E., "Practice effects due to serial cognitive assessment: implications for preclinical Alzheimer's disease randomized controlled trials" 1 (1): 103-111, 2015

9 Jutten, R. J., "Monthly at-home computerized cognitive testing to detect diminished practice effects in preclinical Alzheimer's disease" 13: 2021

10 Lakin, J. M., "Making the Cut in Gifted Selection: Score Combination Rules and Their Impact on Program Diversity" 62 (62): 210-219, 2018


11 "LaTeX"

      12 Bejar, I. I., "Item generation for test development" Erlbaum 199-217, 2002

      13 Irvine, Sidney H., "Item generation for test development" Routledge 2002

      14 Stanek, K. M., "Improvements in cognitive function following cardiac rehabilitation for older adults with cardiovascular disease" 121 (121): 86-93, 2011

      15 Thompson, B, "Homeschool Handbook" 2011

      16 Embretson, S., "Handbook of statistics, 26" 747-768, 2006

17 Mislevy, R. J., "Focus article: On the structure of educational assessments" 1 (1): 3-62, 2003

      18 Flanagan, D. P., "Essentials of cross-battery assessment" John Wiley & Sons, Inc 2013

      19 Drasgow, F., "Educational measurement" American Council on Education/Praeger Publishers 2006

      20 Ryoo, J. H., "Development of a new measure of cognitive ability using automatic item generation and its psychometric properties" 12 (12): 21582440221095016-, 2022

21 Schneider, W. J., "Contemporary Intellectual Assessment: Theories, Tests, and Issues" Guilford Publications 99-144, 2012

      22 Schneider, W. J., "Contemporary Intellectual Assessment. Theories, Tests, and Issues" The Guilford Press 73-163, 2018

      23 "CLASS"

24 McGrew, K. S., "CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research" 37 (37): 1-10, 2009

25 Bloom, B., "Bloom's taxonomy"

      26 Gierl, M. J., "Automatic item generation: Theory and practice" Routledge 2012

      27 Behrens, J. T., "An evidence centered design for learning and assessment in the digital world" National Center for Research on Evaluation, Standards, and Student Testing (CRESST) 2010

      28 Gierl, M. J., "Advanced methods in automatic item generation" Routledge 2021

      29 Anderson, L. W., "A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives" Longman 2001

30 Bryan, V. M., "A meta-analysis of the correlations among broad intelligences: Understanding their relations" 81: 2020

31 Embretson, S. E., "A cognitive design system approach to generating valid tests: Application to abstract reasoning" 3 (3): 380-, 1998

32 Mislevy, R. J., "A brief introduction to evidence-centered design" US Department of Education 2004

