RISS Academic Research Information Service

      • Construction and Utilization of a MicroPACS Using Open Source

        유연욱, 김용근, 김영석, 원우재, 김태성, 김석기 (You, Yeon-Wook; Kim, Yong-Keun; Kim, Yeong-Seok; Won, Woo-Jae; Kim, Tae-Sung; Kim, Seok-Ki), 대한핵의학기술학회, 2009, 핵의학 기술 Vol.13 No.1

        Purpose: Most hospitals have been introducing full PACS, and use of the system continues to expand. Meanwhile, a small-scale PACS known as MicroPACS can already be built with open-source programs. The aim of this study is to prove the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. This study covers setting up a MicroPACS with open-source programs and assessing its storage capability, stability, compatibility, and the performance of operations such as query and retrieve. Materials and Methods: 1. To start with, we searched for open-source DICOM server software meeting the following criteria: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must have no storage-capacity limit; (6) it must support DICOM. 2. (1) To evaluate data-storage performance, we compared the time needed to back up one patient's data (¹⁸F-FDG torso PET/CT) and the time needed to retrieve it to a workstation, for the MicroPACS versus the optical media (CD, DVD-RAM) currently used for back-up. (2) To estimate work efficiency, seven nuclear medicine technologists were given one patient's record number and examination date, and the time spent locating the data was measured on the MicroPACS and on the optical media respectively. 3. To evaluate stability, we examined whether any data loss occurred while the system was maintained for a year, and, for comparison, loaded 500 CDs randomly selected from the 2004-2006 back-up archive and counted the loading errors. Results: 1. Among 11 open-source packages offering both server and DICOM-viewer functions, we chose the Conquest DICOM Server. 2. (1) Back-up and retrieval times (min) were as follows: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) for GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) for GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) for SIEMENS. (2) Time to locate a study (sec): CD (156±46), DVD-RAM (115±21), Conquest DICOM Server (13±6). 3. There was no data loss (0%) over one year, during which 12,741 PET/CT studies were stored in 1.81 TB; among the 500 CDs, on the other hand, 14 (2.8%) failed to load. Conclusions: A full PACS, now being adopted by many hospitals, can be reproduced on a small scale with open-source software, and its performance proved excellent. The system built with open source was more efficient and more robust than the back-up process using CDs or DVD-RAMs. We believe the MicroPACS can serve as an effective data-storage device as long as its operators continue to develop and systematize it.
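
        The abstract describes pushing PET/CT studies to a Conquest DICOM Server instead of burning optical discs. As a minimal sketch of that storage step, assuming the pydicom/pynetdicom libraries and Conquest's commonly used defaults (port 5678, AE title CONQUESTSRV1) rather than anything stated in the paper, a single object could be stored like this:

        ```python
        # Minimal sketch: send one DICOM object to a Conquest DICOM Server.
        # Host, port, and AE titles are illustrative assumptions, not values
        # taken from the paper.
        from pydicom import dcmread
        from pynetdicom import AE
        from pynetdicom.sop_class import PositronEmissionTomographyImageStorage

        ds = dcmread("pet_slice.dcm")          # hypothetical PET/CT file

        ae = AE(ae_title="MICROPACS_TEST")
        ae.add_requested_context(PositronEmissionTomographyImageStorage)

        # Conquest commonly listens on port 5678 with AE title CONQUESTSRV1.
        assoc = ae.associate("127.0.0.1", 5678, ae_title="CONQUESTSRV1")
        if assoc.is_established:
            status = assoc.send_c_store(ds)    # C-STORE request
            print(f"C-STORE status: 0x{status.Status:04X}")
            assoc.release()
        ```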

      • Research and Application of Multi-Source Data Integration Based on Ontology

        Hongyan Yun, Jianliang Xu, Craig A. Knoblock, Ruibo Xu, 보안공학연구지원센터, 2016, International Journal of u- and e- Service, Science and Technology Vol.9 No.9

        In view of existing structural heterogeneity and semantic heterogeneity, a multi-source data integration approach based on ontology is proposed. In this approach, constructed domain ontologies are used to describe data sources, and semantic integration of multi-source data is realized by using an information-integration toolkit named KARMA to map multiple datasets to RDF data. By analyzing the Food and Agriculture Organization (FAO)'s indicator datasets and the World Health Organization (WHO)'s datasets, the main concepts are extracted to construct an indicator ontology, a food security ontology, a health ontology, and a human security ontology. KARMA models are created to map the FAO and WHO datasets to RDF data. Based on the constructed ontologies and the published RDF data, an application system named Food Security Indicators Management System (FSIMS) is developed to implement food-security data query, statistical analysis, and comparison functions. FSIMS has positive effects on food-security risk management; it also demonstrates the applicability of the proposed semantic-integration method and the validity of the constructed domain ontologies and published RDF data.
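
        The paper performs the dataset-to-RDF mapping with the KARMA toolkit. As a loose hand-written illustration of the same idea, without KARMA, the sketch below maps one FAO-style indicator record to RDF triples with rdflib; the ontology namespace and property names are invented for the example:

        ```python
        # Hand-rolled sketch of mapping one indicator record to RDF triples.
        # The paper uses KARMA for this; the namespace and property names
        # below are invented for illustration only.
        from rdflib import RDF, Graph, Literal, Namespace
        from rdflib.namespace import XSD

        ONT = Namespace("http://example.org/food-security#")  # hypothetical IRI

        record = {"country": "Kenya", "indicator": "UndernourishmentRate",
                  "year": 2014, "value": 21.2}                 # illustrative data

        g = Graph()
        g.bind("fs", ONT)

        obs = ONT[f"obs/{record['country']}/{record['year']}"]
        g.add((obs, RDF.type, ONT.IndicatorObservation))
        g.add((obs, ONT.country, Literal(record["country"])))
        g.add((obs, ONT.indicator, ONT[record["indicator"]]))
        g.add((obs, ONT.year, Literal(record["year"], datatype=XSD.gYear)))
        g.add((obs, ONT.value, Literal(record["value"], datatype=XSD.decimal)))

        print(g.serialize(format="turtle"))
        ```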

      • KCI-indexed

        The Effect of Source Data Quality on the Usefulness and Utilization of Big Data Analytics Results

        박소현, 이국희, 이아연, 한국데이터전략학회, 2017, Journal of Information Technology Applications & Management Vol.24 No.4

        This study sheds light on source data quality in big data systems. Previous studies of big data success have called for further examination of quality factors and of the importance of source data. This study extracted the quality factors of source data from the user's viewpoint and empirically tested the effects of source data quality on the usefulness and utilization of big data analytics results. Based on previous research and a focus-group evaluation, four quality factors were established: accuracy, completeness, timeliness, and consistency. After setting up 11 hypotheses on how source data quality contributes to the usefulness, utilization, and ongoing use of big data analytics results, an e-mail survey was conducted at the level of independent departments using big data in domestic firms. The results of the hypothesis tests identified the characteristics and impact of source data quality in big data systems and yielded some meaningful findings about big data characteristics.

      • Data Source Management using weight table in u-GIS DSMS

        Kim, Sang-Ki; Baek, Sung-Ha; Lee, Dong-Wook; Chung, Warn-Il; Kim, Gyoung-Bae; Bae, Hae-Young, Korea Spatial Information System Society, 2009, 한국공간정보시스템학회 논문지 Vol.11 No.2

        The emergence of GeoSensors and research on GIS have promoted much research on u-GIS. Disaster applications coupled with u-GIS can be applied to monitoring accident areas and preventing the spread of accidents. Such applications need u-GIS DSMS techniques to acquire and process GeoSensor data and to integrate it with GIS data. A u-GIS DSMS must process large-volume data streams such as spatial data and multimedia data. Owing to this feature of the data stream, query processing in a u-GIS DSMS can be delayed; moreover, as the input rate of data rises in an area generating events, network traffic increases. To solve this problem, this paper describes a TRIGGER ACTION clause for continuous queries (CQ) in the u-GIS DSMS environment and proposes data-source management based on a weight table. The data-source weight table controls GES information and the incoming data rate, increasing the weight of sensors in a disaster area. Consequently, it can improve query-processing rate and accuracy.
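
        The abstract gives no concrete syntax for the weight table, so the following is only a schematic Python sketch of the idea it describes: each GeoSensor source carries a weight, a trigger raises the weight of sources in a disaster area, and the admitted input rate scales with that weight. All names and the rate policy are assumptions.

        ```python
        # Schematic sketch of a data-source weight table for a DSMS: sources
        # in an event (disaster) area get a higher weight, which raises the
        # share of their incoming tuples admitted for query processing.
        # All identifiers and the rate policy are illustrative assumptions.

        class SourceWeightTable:
            def __init__(self, base_rate: int):
                self.base_rate = base_rate           # tuples/sec at weight 1.0
                self.weights: dict[str, float] = {}  # GeoSensor id -> weight

            def register(self, source_id: str, weight: float = 1.0) -> None:
                self.weights[source_id] = weight

            def on_disaster_event(self, source_id: str, factor: float = 4.0) -> None:
                # TRIGGER ACTION analogue: boost a source covering the area.
                self.weights[source_id] = self.weights.get(source_id, 1.0) * factor

            def admitted_rate(self, source_id: str) -> int:
                return int(self.base_rate * self.weights.get(source_id, 1.0))

        table = SourceWeightTable(base_rate=100)
        table.register("ges-041")              # hypothetical GeoSensor id
        table.on_disaster_event("ges-041")     # accident detected in its area
        print(table.admitted_rate("ges-041"))  # -> 400 tuples/sec
        ```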

      • Development of an Open-Source Data Analysis Program to Promote the Use of Healthcare Big Data

        박형득, 이상수, 한국보건의료기술평가학회, 2015, 보건의료기술평가 Vol.3 No.1

        Objectives: An era of open and transparent information in Korean healthcare is now underway. The Health Insurance Review and Assessment Service has disclosed National Patient Sample claims data since 2009, and the National Health Insurance Service announced that it would disclose a nine-year national health insurance claims cohort database to healthcare stakeholders. Since Korea uses fee-for-service as the basic payment system for all medical treatments, except for seven common disease groups run under Diagnosis Related Groups, it is easy to identify treatment practice and resource-utilization information for individual medical procedures. SAS is the generally accepted analysis tool, as the average size of national health insurance claims data easily exceeds 30 gigabytes. However, data analysis using SAS is labor-intensive and time-consuming and has low accessibility due to its costly license fees. As the need to analyze healthcare big data quickly and appropriately rises, demand for a new analysis tool is also increasing significantly. Methods: An open-source big data analysis program named BigPy was developed using Python, a high-level object-oriented programming language. BigPy's design philosophy emphasizes code readability and reusability, and its syntax allows users to express concepts in fewer lines of code than would be possible in statistical software such as SAS or R. Results: The BigPy program is composed of a series of data-analysis macros and functions. The functions in BigPy can easily read, trim, sort, and merge healthcare big data in database format and convert large datasets to Hierarchical Data Format Version 5 (HDF5) files. Conclusion: Healthcare stakeholders now have access to the promising new source of knowledge called big data. Efforts in big data analysis can address problems related to variability in healthcare quality and consequently improve healthcare treatments. Open-source data analysis is a noteworthy and promising methodology for handling healthcare big data in a rapid and cost-effective way.
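
        BigPy's own API is not shown in the abstract, so the snippet below is not BigPy; it is a generic Python sketch of the core technique the abstract describes: streaming claims data too large for memory into an HDF5 store chunk by chunk. File and column names are invented.

        ```python
        # Generic sketch (not BigPy's API): stream a large claims CSV into an
        # HDF5 store chunk by chunk so it never has to fit in memory.
        # Requires pandas with PyTables; file/column names are invented.
        import pandas as pd

        CHUNK_ROWS = 1_000_000

        with pd.HDFStore("claims.h5", mode="w", complib="blosc") as store:
            for chunk in pd.read_csv("claims_2015.csv", chunksize=CHUNK_ROWS):
                # Index a few columns so later queries can filter inside HDF5.
                store.append("claims", chunk, data_columns=["patient_id", "dx_code"])

        # Later: pull one diagnosis code without loading the full table.
        with pd.HDFStore("claims.h5", mode="r") as store:
            subset = store.select("claims", where="dx_code == 'E11'")
            print(len(subset))
        ```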

      • KCI-indexed

        An Open-Source Implementation Architecture: A Data Visualization Method on Heterogeneous IoT Platforms

        임정현, 여요쉰, 산가오양, 노병희, 한국차세대컴퓨팅학회, 2024, 한국차세대컴퓨팅학회 논문지 Vol.20 No.4

        With the development of wireless communication technology, research in the field of IoT is actively progressing. However, the IoT platforms covered in many studies focus on increasing transmission methods and collection volume rather than on data visualization. Even if a large amount of data is collected, the speed and efficiency of processing and analyzing it vary greatly depending on how processing and visualization are performed. In this paper, we propose a method of linking an open-source visualization tool to an open-source-based IoT platform, focusing on data visualization and on increasing the efficiency of data analysis and processing. The proposed method collects data from heterogeneous sensors over a Wi-Fi network and outputs the results on web pages using a separately implemented dashboard. We also introduce the process of developing and operating the platform in a real environment.
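
        The platform and dashboard code are not given in the abstract; as a minimal sketch of the pattern it describes (heterogeneous sensors push readings over Wi-Fi, a dashboard page pulls them back for visualization), here is an illustrative Flask service. Route names and the payload shape are assumptions.

        ```python
        # Minimal sketch of the described pattern: sensors POST readings over
        # the network, and a dashboard front end GETs them back as JSON.
        # Route names and payload shape are illustrative assumptions.
        from collections import deque
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        readings = deque(maxlen=1000)   # keep only the newest readings

        @app.route("/ingest", methods=["POST"])
        def ingest():
            # Expected payload, e.g. {"sensor": "dht22-1", "temp": 23.4}
            readings.append(request.get_json(force=True))
            return jsonify(status="ok")

        @app.route("/data")
        def data():
            # The dashboard polls this endpoint and renders charts from it.
            return jsonify(list(readings))

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)  # reachable over the Wi-Fi network
        ```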

      • KCI-indexed

        A Comparative Study of Analysis Tool Software Performance through Multivariate Analysis

        김준혁, 조창섭, 최용락, 한국IT정책경영학회, 2017, 한국IT정책경영학회 논문지 Vol.9 No.6

        With the development of the Internet environment, SNS, search portals, and various internet media have begun to produce unstructured data. In addition, as information technology has developed, storage disks have become easy to obtain thanks to the sharp price drop of storage media. This has led to the era of big data, in which structured and unstructured data can be collected, analyzed, utilized, and used for prediction on large storage disks. The collected data are used in many fields, from distribution and marketing to medical services. As the accumulated data diversify, the reliability of predictions increases and the value of the data grows. Now that sufficient storage is widely available, it has become important to understand how quickly and accurately the same data can be analyzed. For this purpose, companies and organizations develop various analysis tools, but the performance of those tools when analyzing large amounts of data is rarely treated as important. Representative analysis tools include SPSS among commercial software and R among open-source software, yet research on the performance of the two is lacking. In this paper, we compare data-analysis performance through multivariate analysis of large datasets for a commercial package and an open-source analysis tool, to help in selecting the more useful tool.
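
        The study itself benchmarks SPSS against R; purely to illustrate the kind of timing methodology such a comparison involves (not the authors' procedure), the sketch below times one multivariate technique, PCA, over synthetic datasets of growing size in Python.

        ```python
        # Illustrative timing harness (not the authors' SPSS/R procedure):
        # time one multivariate analysis, PCA, as the dataset grows.
        import time

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)

        for n_rows in (10_000, 100_000, 500_000):
            X = rng.normal(size=(n_rows, 50))   # synthetic 50-variable data
            t0 = time.perf_counter()
            PCA(n_components=10).fit(X)
            print(f"{n_rows:>7} rows: {time.perf_counter() - t0:6.2f} s")
        ```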

      • KCI-indexed

        Big Data Architecture Design for the Development of Hyper Live Map (HLM)

        문수정, 편무욱, 배상원, 이도림, 한상원, 한국측량학회, 2016, 한국측량학회지 Vol.34 No.2

        The demand for spatial-data service technologies is increasing lately with the development of realistic 3D spatial-information services and ICT (Information and Communication Technology). Research is being conducted on the real-time provision of spatial-data services through a variety of mobile and web-based contents. Big data and cloud computing can be presented as alternatives for constructing spatial data and making effective use of large volumes of data. In this paper, the process of building an HLM (Hyper Live Map) from multi-source data, acquiring stereo CCTV and various other data, is presented, and a big data service architecture is proposed that uses flexible and scalable cloud computing to handle the big data created by users through media such as social network services and black boxes. The provision of spatial-data services in real time using big data and cloud computing will enable navigation systems, vehicle augmented reality, real-time 3D spatial information, and single-picture-based positioning beyond the single-GPS level using low-cost image-based position-recognition technology in the future. Furthermore, big data and cloud computing are also used for data collection and provision in U-City and Smart-City environments, and the big data service architecture will provide users with information in real time.

      • KCI-indexed

        Indication Characteristics of Source Acupoints Based on a Clinical Research Database Using Data Mining

        최다현 (Dha-hyun Choi), 이서영 (Seoyoung Lee), 이인선 (In-seon Lee), 류연희 (Yeonhee Ryu), 채윤병 (Younbyoung Chae), 경락경혈학회, 2021, Korean Journal of Acupuncture Vol.38 No.2

        Objectives: The Source acupoint is one of the representative acupoints used to treat various diseases in each meridian. We aimed to identify the patterns of Source-acupoint selection and their associations with diseases using clinical trials data. Methods: We extracted the frequency of Source-acupoint use across 30 diseases from a clinical trials database. Acupuncture treatment regimens were retrieved from the Cochrane Database of Systematic Reviews. The frequency of Source-acupoint use was calculated as the number of studies using a certain acupoint divided by the total number of included studies. Using hierarchical clustering and multidimensional scaling, the characteristics of Source acupoints were analyzed based on the similarity of the relationships between the Source acupoints and the diseases. Results: A total of 421 clinical trials were included in this analysis. The LR3, HT7, KI3, and LI4 acupoints were most frequently used across the 30 diseases. Cluster analysis showed that LR3 and LI4 were grouped together, as were HT7 and KI3. Multidimensional scaling revealed that LR3, LI4, HT7, and KI3 have intrinsic properties in the two-dimensional space. Conclusions: This study identified the selection patterns of Source acupoints using clinical trials data. Our findings provide an understanding of the characteristics of Source acupoints.
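
        As a small sketch of the two methods named in the abstract, the snippet below runs hierarchical clustering and multidimensional scaling on a toy acupoint-by-disease frequency matrix; only the four acupoint labels come from the abstract, and the numbers are made up.

        ```python
        # Sketch of the two methods named in the abstract, applied to a toy
        # acupoint-by-disease frequency matrix (values are NOT from the study).
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist
        from sklearn.manifold import MDS

        acupoints = ["LR3", "LI4", "HT7", "KI3"]
        # Rows: acupoints; columns: usage frequency across toy disease groups.
        freq = np.array([[0.8, 0.7, 0.1, 0.2],
                         [0.7, 0.8, 0.2, 0.1],
                         [0.1, 0.2, 0.9, 0.6],
                         [0.2, 0.1, 0.6, 0.9]])

        # Hierarchical clustering on distances between usage profiles.
        Z = linkage(pdist(freq), method="average")
        labels = fcluster(Z, t=2, criterion="maxclust")
        print(dict(zip(acupoints, labels)))   # e.g. {'LR3': 1, 'LI4': 1, ...}

        # MDS embeds the same profiles into 2-D for visual inspection.
        coords = MDS(n_components=2, random_state=0).fit_transform(freq)
        print(coords)
        ```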

      • KCI-indexed

        A Brief Introduction to Text Data Analysis Techniques in the Era of Data Science and Suggestions for Institutionalization: Focusing on Dimension-Reduction Techniques for Text Data

        백영민 (Young Min Baek), 박미사 (Misa Park), 연세대학교 사회과학연구소, 2018, 社會科學論集 Vol.49 No.1

        Due to the advent of digitalization, the amount and scope of textual data have exploded, giving social scientists many opportunities to exploit the increased volume of text data. However, traditional manual content analysis is hardly comprehensive or feasible, because of the expense of hiring human coders and the limited time available relative to the volume of text. In this sense, an algorithmic understanding of textual data (i.e., treating text as a matrix in which documents are on the rows and tokens, usually words, are on the columns) provides theoretical and practical solutions for the analysis of textual data, in terms of topic detection and sentiment analysis. This study overviews a variety of algorithmic approaches to textual data in three groups: (1) the lexicon-based approach, (2) the unsupervised machine learning approach, and (3) the supervised machine learning approach. These three approaches are introduced in plain words for social scientists, along with how they can be used to understand large-scale textual data and how the predicted meanings of texts can feed associational or causal analyses for social-scientific theory building and testing. The discussion section offers practical suggestions on how these methods can be applied and argues that open-source computer languages such as Python or R should be adopted as basic tools in social-science methodology education for the algorithmic approach to take root.
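
        As a compact illustration of the unsupervised branch described above (topic detection as dimension reduction of a document-term matrix), the sketch below fits an LDA topic model with scikit-learn; the toy corpus and parameter values are invented.

        ```python
        # Sketch of the unsupervised branch: reduce a document-term matrix
        # to topics with LDA. The toy corpus and parameters are invented.
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.feature_extraction.text import CountVectorizer

        docs = [
            "parliament passed the budget after a long debate",
            "the election campaign focused on the economy and jobs",
            "the striker scored twice in the championship final",
            "the team signed a new goalkeeper before the season",
        ]

        vec = CountVectorizer(stop_words="english")
        dtm = vec.fit_transform(docs)           # documents x tokens matrix

        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        doc_topics = lda.fit_transform(dtm)     # documents x topics (reduced)

        terms = vec.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = [terms[i] for i in weights.argsort()[-5:][::-1]]
            print(f"topic {k}: {top}")
        ```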
