RISS Academic Research Information Service

      • KCI-indexed candidate

        MPI-GWAS: a supercomputing-aided permutation approach for genome-wide association studies

        Paik, Hyojung,Cho, Yongseong,Cho, Seong Beom,Kwon, Oh-Kyoung Korea Genome Organization 2022 Genomics & informatics Vol.20 No.1

        Permutation testing is a robust and popular approach to significance testing in genomic research, with the advantage of reducing inflated type 1 error rates; however, its computational cost is notorious in genome-wide association studies (GWAS). Here, we developed a supercomputing-aided approach to accelerate permutation testing for GWAS, based on the message-passing interface (MPI) on a parallel computing architecture. Our application, called MPI-GWAS, conducts MPI-based permutation testing on our supercomputing system, Nurion (8,305 compute nodes and 563,740 central processing units [CPUs]). With MPI-GWAS, 10^7 permutations of one locus were computed in 600 s using 2,720 CPU cores, and 10^7 permutations of ~30,000-50,000 loci in over 7,000 subjects took a total elapsed time of ~4 days on the Nurion supercomputer. Thus, MPI-GWAS makes permutation-based GWAS feasible within a reasonable time by harnessing parallel computing resources.
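
        A minimal sketch (in C with MPI, assuming toy data and a placeholder association statistic) of the work division the abstract describes: each rank evaluates its own share of the phenotype permutations for a single locus, and the per-rank exceedance counts are reduced into one empirical p-value. The constants, data layout, and statistic below are illustrative assumptions, not the authors' MPI-GWAS implementation.

        /* Sketch only: rank-parallel permutation testing for one locus. */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define N_SUBJECTS     1000
        #define N_PERMUTATIONS 1000000L   /* scaled down from the paper's 10^7 */

        /* Placeholder association statistic: genotype-phenotype cross product. */
        static double stat_gp(const double *g, const double *p, int n)
        {
            double s = 0.0;
            for (int i = 0; i < n; ++i) s += g[i] * p[i];
            return s;
        }

        /* Fisher-Yates shuffle of the phenotype vector. */
        static void shuffle(double *x, int n)
        {
            for (int i = n - 1; i > 0; --i) {
                int j = rand() % (i + 1);
                double t = x[i]; x[i] = x[j]; x[j] = t;
            }
        }

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double geno[N_SUBJECTS], pheno[N_SUBJECTS];
            srand(12345);                        /* identical toy data on every rank */
            for (int i = 0; i < N_SUBJECTS; ++i) {
                geno[i]  = rand() % 3;           /* dosage 0/1/2 */
                pheno[i] = rand() % 2;           /* case/control */
            }
            double observed = stat_gp(geno, pheno, N_SUBJECTS);

            srand(777 + rank);                   /* independent permutation streams */
            long my_perms  = N_PERMUTATIONS / size + (rank < N_PERMUTATIONS % size);
            long my_exceed = 0;
            for (long k = 0; k < my_perms; ++k) {
                shuffle(pheno, N_SUBJECTS);
                if (stat_gp(geno, pheno, N_SUBJECTS) >= observed) ++my_exceed;
            }

            long exceed = 0;
            MPI_Reduce(&my_exceed, &exceed, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("empirical p-value = %g\n",
                       (exceed + 1.0) / (N_PERMUTATIONS + 1.0));

            MPI_Finalize();
            return 0;
        }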

      • KCI-indexed

        A Dynamic Co-Scheduling Scheme for MPI-Based Parallel Programs on Linux Clusters

        김혁(Hyuk Kim),이윤석(Yunseok Rhee) 한국컴퓨터정보학회 2008 韓國컴퓨터情報學會論文誌 Vol.13 No.1

        For efficient message passing in parallel programs, the two communicating processes, which run on different nodes, must be scheduled at the same time; this is called co-scheduling. However, each node of a cluster system is built on a general-purpose multitasking OS that autonomously manages its local processes, so it is not easy to co-schedule two (or more) processes in such an environment. This work proposes a co-scheduling scheme for MPI-based parallel programs that exploits the message-exchange information between the two parties. We implement the scheme on a Linux cluster, which requires slight kernel hacking and a modification of the MPI library. Experiments with the NPB parallel benchmark suite show that the scheme reduces execution time by 33-56% compared to the typical scheduling case, with especially better performance in more communication-bound applications.

      • KCI-indexed

        Parallel implementation of finite volume based method for isoelectric focusing

        심재술,Prashanta Dutta,Cornelius F. Ivory 대한기계학회 2009 JOURNAL OF MECHANICAL SCIENCE AND TECHNOLOGY Vol.23 No.12

        A message passing interface (MPI) based parallel simulation algorithm is developed to simulate protein behavior in non-linear isoelectric focusing (IEF). The mathematical model of IEF is formulated based on mass conservation, charge conservation, the ionic dissociation-association relations of amphoteric molecules, and the electroneutrality condition. First, the concept of parallelism for isoelectric focusing is described, and the IEF model is implemented for 96 components: 94 ampholytes and 2 proteins. Parallelism is applied to two equations (the mass conservation equation and the electroneutrality equation). CPU times are presented as the number of processors increases (1, 2, 4, and 8 nodes). The maximum reduction of CPU time was achieved when four CPUs were employed, regardless of the number of input components. A speed-enhancement metric was defined to compare parallel efficiency; computational speed was enhanced by a maximum of 2.46 times when four CPUs were used with 96 components in isoelectric focusing.
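
        A minimal sketch, again in C with MPI, of the component-wise parallelism the abstract outlines: the 96 components are divided among ranks, each rank advances a heavily simplified placeholder mass-conservation update for its own components, and MPI_Allreduce assembles the total charge density that every rank needs for the electroneutrality step. Grid size, valences, and transport physics below are assumptions, not the paper's finite-volume model.

        /* Sketch only: components split over ranks, charge density reduced globally. */
        #include <mpi.h>
        #include <stdio.h>
        #include <string.h>

        #define N_COMP 96      /* 94 ampholytes + 2 proteins in the paper          */
        #define N_CELL 200     /* finite-volume cells of a 1-D channel (assumed)   */
        #define N_STEP 100
        #define D      1.0e-3  /* placeholder diffusion coefficient                */

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Block distribution of components over ranks. */
            int base = N_COMP / size, rem = N_COMP % size;
            int my_n = base + (rank < rem);
            int my_first = rank * base + (rank < rem ? rank : rem);

            static double c[N_COMP][N_CELL];          /* concentrations            */
            double rho_local[N_CELL], rho[N_CELL];    /* charge-density pieces     */

            for (int s = 0; s < N_COMP; ++s)          /* uniform initial condition */
                for (int i = 0; i < N_CELL; ++i) c[s][i] = 1.0;

            for (int step = 0; step < N_STEP; ++step) {
                memset(rho_local, 0, sizeof rho_local);

                /* Mass conservation (placeholder: explicit diffusion only). */
                for (int k = 0; k < my_n; ++k) {
                    int s = my_first + k;
                    double cn[N_CELL];
                    for (int i = 1; i < N_CELL - 1; ++i)
                        cn[i] = c[s][i] + D * (c[s][i+1] - 2.0*c[s][i] + c[s][i-1]);
                    cn[0] = c[s][0]; cn[N_CELL-1] = c[s][N_CELL-1];
                    memcpy(c[s], cn, sizeof cn);

                    double z = (s % 2 ? +1.0 : -1.0); /* placeholder valence       */
                    for (int i = 0; i < N_CELL; ++i) rho_local[i] += z * c[s][i];
                }

                /* The electroneutrality step needs the sum over ALL components. */
                MPI_Allreduce(rho_local, rho, N_CELL, MPI_DOUBLE, MPI_SUM,
                              MPI_COMM_WORLD);
                /* ...an electric-field update using rho would follow here... */
            }

            if (rank == 0) printf("mid-channel net charge: %g\n", rho[N_CELL/2]);
            MPI_Finalize();
            return 0;
        }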

      • KCI-indexed

        LES Analysis of the Viscous Flow Field around a Ship Hull Using a Parallel Computing Technique

        최희종(HEE-JONG CHOI),박종천(JONG-CHUN PARK),윤현식(HYUN-SIK YOON),전호환(HO-HWAN CHUN),강대환(DAE-HWAN KANG) 한국해양공학회 2006 韓國海洋工學會誌 Vol.20 No.4

        The large-eddy simulation (LES) technique, based on a message passing interface (MPI) method, was applied to investigate the turbulent flow phenomena around a ship. The Smagorinsky model was used in the present LES to model the turbulent flow around the ship. The SPMD (single program multiple data) technique was used to parallelize the program with MPI. All computations were performed on a 24-node PC-cluster parallel machine, composed of 2.6 GHz CPUs, installed at the Advanced Ship Engineering Research Center (ASERC). Numerical simulations were performed for the Wigley hull and the Series 60 hull (CB = 0.6) using 1/4-, 1/2-, 1- and 2-million grid systems, and the computational results were compared with experimental ones.
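
        The SPMD structure mentioned above can be illustrated with a short C/MPI skeleton, assuming a 1-D slab decomposition: every rank runs the same program on its own sub-domain and exchanges ghost values with its neighbours each step via MPI_Sendrecv. The stencil update is a placeholder; the actual LES solver, Smagorinsky model, and ship-hull grids are not reproduced.

        /* Sketch only: SPMD slab decomposition with ghost-cell exchange. */
        #include <mpi.h>
        #include <stdio.h>

        #define NX_LOCAL 64               /* cells per rank, excluding 2 ghost cells */

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double u[NX_LOCAL + 2];       /* u[0] and u[NX_LOCAL+1] are ghost cells  */
            for (int i = 0; i <= NX_LOCAL + 1; ++i) u[i] = (double)rank;

            int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

            for (int step = 0; step < 10; ++step) {
                /* Exchange ghost planes with both neighbours. */
                MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  0,
                             &u[NX_LOCAL+1], 1, MPI_DOUBLE, right, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Sendrecv(&u[NX_LOCAL],   1, MPI_DOUBLE, right, 1,
                             &u[0],          1, MPI_DOUBLE, left,  1,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

                /* Placeholder smoothing standing in for the LES time step. */
                for (int i = 1; i <= NX_LOCAL; ++i)
                    u[i] = 0.25 * (u[i-1] + 2.0*u[i] + u[i+1]);
            }

            if (rank == 0) printf("u[1] on rank 0 after 10 steps: %g\n", u[1]);
            MPI_Finalize();
            return 0;
        }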

      • KCI-indexed candidate

        A Study on MPI-Based Parallel Sequence Similarity Search in a Cluster Environment

        홍창범(Chang-Bum Hong),차정호(Jeoung-Ho Cha),이성훈(Sung-Hoon Lee),신승우(Seung-Woo Shin),박근준(Keun-Joon Park),박근용(Keun-Young Park) 한국컴퓨터정보학회 2006 韓國컴퓨터情報學會論文誌 Vol.11 No.6

        In bioinformatics, searching for similar sequences in biological databases plays an important role in predicting functional or structural information. Biological sequence data have increased dramatically since the Human Genome Project. Because search speed is a critical factor in predicting function or structure, SMP (Symmetric Multi-Processor) computers or clusters are used to handle large volumes of sequence data. As a method to improve the search time of BLAST (Basic Local Alignment Search Tool), which is used for similarity searches, we propose the nBLAST algorithm, which runs in a cluster environment. Because nBLAST uses MPI (Message Passing Interface), a parallel library, to distribute queries to each node and run them in parallel without modifying the existing BLAST source code, BLAST can be parallelized easily without complicated procedures such as configuration. In addition, experiments running nBLAST on a 28-node Linux cluster confirmed that performance improves as the number of nodes increases.
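
        A minimal C/MPI sketch of the query-splitting idea described above: the query set is partitioned across ranks so each node searches only its own slice against the full database, leaving the BLAST code itself untouched. The query count and the run_blast() stub are hypothetical placeholders for writing a per-rank FASTA slice and invoking the stock BLAST executable.

        /* Sketch only: static partitioning of queries across MPI ranks. */
        #include <mpi.h>
        #include <stdio.h>

        /* Hypothetical placeholder: a real driver would write the slice to a
         * temporary FASTA file and spawn the unmodified BLAST binary on it. */
        static void run_blast(int rank, int first, int count)
        {
            printf("rank %d: searching queries %d..%d against the database\n",
                   rank, first, first + count - 1);
        }

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            int n_queries = 1000;                      /* assumed query count      */
            int base = n_queries / size, rem = n_queries % size;
            int my_count = base + (rank < rem);
            int my_first = rank * base + (rank < rem ? rank : rem);

            run_blast(rank, my_first, my_count);       /* independent searches     */

            /* Barrier stands in for collecting the per-rank result files. */
            MPI_Barrier(MPI_COMM_WORLD);
            if (rank == 0)
                printf("all %d ranks finished; merge per-rank outputs\n", size);
            MPI_Finalize();
            return 0;
        }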

      • SCIE, SCOPUS, KCI-indexed

        Initial Design Domain Reset Method for Genetic Algorithm with Parallel Processing

        Lim, O-Kaung,Hong, Keum-Shik,Lee, Hyuk-Soo,Park, Eun-Ho The Korean Society of Mechanical Engineers 2004 JOURNAL OF MECHANICAL SCIENCE AND TECHNOLOGY Vol.18 No.7

        The Genetic Algorithm (GA), an optimization technique based on the theory of natural selection, has proven to be a relatively robust means of searching for the global optimum. It converges to the global optimum without auxiliary information such as the derivative of the objective function. For a complex problem, the GA involves a large population and requires a lot of computing time. To improve the process, this research used parallel processing with several personal computers. Parallel processing techniques are classified into two methods according to the size and number of subpopulations: the fine-grained method (FGM) and the coarse-grained method (CGM). This study selected the CGM as the parallel processing technique because the load is divided equally among several computers. The given design domain should be reduced according to the degree of feasibility, because mechanical system problems have constraints. The reduced domain, which consists of the feasible domain and the infeasible domain around the feasible-domain boundary, is used as the initial design domain. The parallel process used the Message Passing Interface library.
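
        A minimal C/MPI sketch of the coarse-grained model mentioned above: each rank evolves its own sub-population and periodically migrates its best individual around a ring of processes. The one-variable fitness, the crude mutation-only "evolution", and all constants are placeholders; the paper's design-domain-reset step is not modelled.

        /* Sketch only: island-model (coarse-grained) GA with ring migration. */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define POP           40
        #define GENS          200
        #define MIGRATE_EVERY 20

        static double fitness(double x) { return -(x - 3.0) * (x - 3.0); } /* peak at 3 */
        static double rnd(void) { return (double)rand() / RAND_MAX; }

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            srand(1234 + rank);

            double pop[POP];
            for (int i = 0; i < POP; ++i) pop[i] = 10.0 * rnd() - 5.0;

            for (int g = 1; g <= GENS; ++g) {
                /* Crude stand-in for selection/crossover: keep better mutants. */
                for (int i = 0; i < POP; ++i) {
                    double child = pop[i] + 0.1 * (rnd() - 0.5);
                    if (fitness(child) > fitness(pop[i])) pop[i] = child;
                }

                if (g % MIGRATE_EVERY == 0) {
                    int best = 0, worst = 0;
                    for (int i = 1; i < POP; ++i) {
                        if (fitness(pop[i]) > fitness(pop[best]))  best  = i;
                        if (fitness(pop[i]) < fitness(pop[worst])) worst = i;
                    }
                    /* Ring migration: best goes right, a migrant arrives from
                     * the left and replaces the worst local individual. */
                    double incoming;
                    MPI_Sendrecv(&pop[best], 1, MPI_DOUBLE, (rank + 1) % size, 0,
                                 &incoming,  1, MPI_DOUBLE, (rank + size - 1) % size, 0,
                                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    pop[worst] = incoming;
                }
            }

            double my_best = fitness(pop[0]), best;
            for (int i = 1; i < POP; ++i)
                if (fitness(pop[i]) > my_best) my_best = fitness(pop[i]);
            MPI_Reduce(&my_best, &best, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
            if (rank == 0) printf("best fitness over all islands: %g\n", best);

            MPI_Finalize();
            return 0;
        }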

      • KCI-indexed

        An Application-Level Fault-Tolerant Computation System for Synchronous Parallel Computation

        박필성 ( Pil-seong Park ) 한국인터넷정보학회 2008 인터넷정보학회논문지 Vol.9 No.5

        The MTBF (mean time between failures) of large-scale parallel systems is known to be on the order of only several hours, and large computations sometimes end in failure, wasting a huge amount of CPU time. However, MPI (Message Passing Interface), the de facto standard for message-passing parallel programming, offers no means of handling such a problem. In this paper, we propose an application-level fault-tolerant computation system, built purely on the current MPI standard without using any non-standard fault-tolerant MPI library, that can be applied to general synchronous parallel computations.
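
        A minimal sketch, in C and using only standard MPI calls, of application-level fault tolerance for a synchronous iteration: each rank periodically writes its local state and the iteration counter to a per-rank checkpoint file, and on start-up it resumes from the last checkpoint if one exists. The file naming, checkpoint period, and dummy computation are assumptions, not the paper's system.

        /* Sketch only: application-level checkpoint/restart with standard MPI. */
        #include <mpi.h>
        #include <stdio.h>

        #define N_LOCAL    1000
        #define N_ITER     10000
        #define CKPT_EVERY 500

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            char fname[64];
            snprintf(fname, sizeof fname, "ckpt_rank%04d.bin", rank);

            double x[N_LOCAL];
            int start = 0;

            /* Try to resume from an existing checkpoint. */
            FILE *fp = fopen(fname, "rb");
            if (fp && fread(&start, sizeof start, 1, fp) == 1
                   && fread(x, sizeof(double), N_LOCAL, fp) == N_LOCAL) {
                if (rank == 0) printf("resuming from iteration %d\n", start);
            } else {
                for (int i = 0; i < N_LOCAL; ++i) x[i] = 1.0;   /* fresh start */
                start = 0;
            }
            if (fp) fclose(fp);

            for (int it = start; it < N_ITER; ++it) {
                for (int i = 0; i < N_LOCAL; ++i) x[i] *= 0.9999;  /* dummy work  */
                MPI_Barrier(MPI_COMM_WORLD);                       /* synchronous */

                if ((it + 1) % CKPT_EVERY == 0) {
                    int next = it + 1;
                    FILE *out = fopen(fname, "wb");
                    if (out) {
                        fwrite(&next, sizeof next, 1, out);
                        fwrite(x, sizeof(double), N_LOCAL, out);
                        fclose(out);
                    }
                }
            }

            if (rank == 0) printf("done; x[0] = %g\n", x[0]);
            MPI_Finalize();
            return 0;
        }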

      • A Parallel Algorithm of String Matching Based on Message Passing Interface for Multicore Processors

        Jiaxing Qu,Guoyin Zhang,Zhou Fang,Jiahui Liu 보안공학연구지원센터 2016 International Journal of Hybrid Information Technology Vol.9 No.3

        Multicore has long been considered an attractive platform for string matching. However, some existing traditional string matching algorithms do not adapt to multicore platforms, which poses new challenges for parallel design. In this paper, we introduce a multicore architecture with a message passing interface to address these challenges. We use the popular Aho-Corasick algorithm for the string matching engine, and data parallelism is exploited to design the optimization technique for string matching. The experiments show that an implementation on an 8-core system achieves up to 10.5 Gbps throughput on average.
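
        A minimal C/MPI sketch of the data-parallel decomposition the abstract relies on: the input text is split into per-rank chunks that overlap by (pattern length - 1) characters so boundary-crossing matches are not lost, each rank counts the matches that start in its own chunk, and the counts are reduced on rank 0. A naive matcher stands in for the Aho-Corasick engine, and the text and pattern are toy assumptions.

        /* Sketch only: data-parallel string matching with overlapping chunks. */
        #include <mpi.h>
        #include <stdio.h>
        #include <string.h>

        #define TEXT_LEN 4096

        /* Naive matcher used as a placeholder for an Aho-Corasick automaton. */
        static long count_matches(const char *text, long len, const char *pat)
        {
            long m = (long)strlen(pat), hits = 0;
            for (long i = 0; i + m <= len; ++i)
                if (memcmp(text + i, pat, (size_t)m) == 0) ++hits;
            return hits;
        }

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const char *pat = "abca";                /* assumed search pattern      */
            long m = (long)strlen(pat);

            static char text[TEXT_LEN + 1];          /* toy text, same on all ranks */
            for (long i = 0; i < TEXT_LEN; ++i) text[i] = "abc"[i % 3];
            text[TEXT_LEN] = '\0';

            /* Chunk boundaries with an (m - 1)-character overlap to the right. */
            long base = TEXT_LEN / size, rem = TEXT_LEN % size;
            long lo = rank * base + (rank < rem ? rank : rem);
            long n  = base + (rank < rem);
            long hi = lo + n + (rank < size - 1 ? m - 1 : 0);
            if (hi > TEXT_LEN) hi = TEXT_LEN;

            long local = count_matches(text + lo, hi - lo, pat);
            long total = 0;
            MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0) printf("total matches: %ld\n", total);

            MPI_Finalize();
            return 0;
        }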

      • KCI-indexed

        Comparison of MPI and Hybrid Parallel Methods for Solving the Pressure Equation on a Distributed Memory System

        전병진(Byoung Jin Jeon),최형권(Hyoung Gwon Choi) 대한기계학회 2015 大韓機械學會論文集B Vol.39 No.2

        The message passing interface (MPI) and hybrid programming models for the parallel computation of a pressure equation were compared on a distributed memory system. Both models were based on domain decomposition, and two numbers of sub-domains were selected considering the efficiency of the hybrid model. The parallel performance for various problem sizes was measured using up to 96 threads. It was found that, in addition to the cache-memory size, the overhead of MPI communication and OpenMP directives affected the parallel performance. For small problems, the parallel performance was low because the share of the MPI communication/OpenMP directive overhead increased as the number of threads increased, and MPI was better than the hybrid model because it had a smaller communication overhead. For large problems, the parallel performance was high because, in addition to the cache effect, the share of the communication overhead was relatively low, and the hybrid model was better than MPI because the MPI communication overhead was more dominant than that of the OpenMP directives in the hybrid model.
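
        A minimal sketch of the two models being compared, written as a hybrid MPI+OpenMP Jacobi sweep for a 1-D model pressure (Poisson) equation: MPI decomposes the domain into slabs and exchanges ghost values, while an OpenMP parallel-for threads the sweep inside each rank; built without OpenMP it reduces to the pure-MPI variant. Grid size, iteration count, and the right-hand side are illustrative assumptions, not the paper's solver.

        /* Sketch only: hybrid MPI+OpenMP Jacobi sweep for a 1-D model pressure equation. */
        #include <mpi.h>
        #include <stdio.h>
        #include <string.h>
        #ifdef _OPENMP
        #include <omp.h>
        #endif

        #define N_LOCAL 1024              /* interior points per rank (assumed) */
        #define N_ITER  500

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double p[N_LOCAL + 2], pn[N_LOCAL + 2], rhs[N_LOCAL + 2];
            memset(p, 0, sizeof p);
            for (int i = 0; i <= N_LOCAL + 1; ++i) rhs[i] = 1.0;  /* model source */

            int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

            for (int it = 0; it < N_ITER; ++it) {
                /* Ghost exchange between neighbouring slabs (MPI level). */
                MPI_Sendrecv(&p[1],         1, MPI_DOUBLE, left,  0,
                             &p[N_LOCAL+1], 1, MPI_DOUBLE, right, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Sendrecv(&p[N_LOCAL],   1, MPI_DOUBLE, right, 1,
                             &p[0],         1, MPI_DOUBLE, left,  1,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

                /* Jacobi sweep threaded with OpenMP (hybrid level); grid spacing h = 1. */
        #ifdef _OPENMP
                #pragma omp parallel for
        #endif
                for (int i = 1; i <= N_LOCAL; ++i)
                    pn[i] = 0.5 * (p[i-1] + p[i+1] + rhs[i]);

                memcpy(&p[1], &pn[1], N_LOCAL * sizeof(double));
            }

            if (rank == 0) {
        #ifdef _OPENMP
                printf("threads per rank: %d\n", omp_get_max_threads());
        #endif
                printf("ranks = %d, p[1] = %g\n", size, p[1]);
            }
            MPI_Finalize();
            return 0;
        }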

      • KCI-indexed

        A Numerical Study of Three-Dimensional Subsonic Turbulent Cavity Flows

        최홍일(Hongil Choi),김재수(Jaesoo Kim) 한국전산유체공학회 2008 한국전산유체공학회지 Vol.13 No.1

        Flight vehicles generally have many cavities, such as wheel wells, bomb bays, and windows, on their external surfaces, and the flow around these cavities produces separation, vortices, shock and expansion waves, reattachment, and other complex flow phenomena. Cavity flow generates abnormal, three-dimensional noise and vibration even when the aspect ratio (L/D) is small. A cavity that strongly affects the flow may generate loud noise, cause structural damage or breakage, degrade aerodynamic performance and stability, or damage sensitive devices. In this study, numerical analysis of cavity flows was performed using the unsteady compressible three-dimensional Reynolds-Averaged Navier-Stokes (RANS) equations with Wilcox's k-ω turbulence model. The MPI (Message Passing Interface) parallelized code was run on a PC cluster. Three-dimensional cavities with aspect ratios of 2.5, 3.5, and 4.5 and a W/D ratio of 2 were considered. Sound Pressure Level (SPL) analysis was performed with an FFT to identify the dominant frequencies of the cavity flow, which were analyzed and compared with the results of Rossiter's formula and the experimental data of Ahuja and Mendoza.
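
        Rossiter's semi-empirical formula referred to above predicts the resonant mode frequencies of open cavity flow as f_m = (U/L) * (m - alpha) / (M + 1/kappa). The short C example below evaluates the first few modes with the classic constants alpha = 0.25 and kappa = 0.57 and assumed flow conditions; these values are for illustration only and are not the paper's cases.

        /* Worked example: Rossiter mode frequencies for an assumed cavity flow. */
        #include <stdio.h>

        int main(void)
        {
            const double U = 170.0;   /* freestream velocity, m/s (assumed)  */
            const double L = 0.10;    /* cavity length, m (assumed)          */
            const double M = 0.5;     /* freestream Mach number (assumed)    */
            const double a = 0.25;    /* phase-lag constant alpha            */
            const double k = 0.57;    /* vortex convection ratio kappa       */

            for (int m = 1; m <= 4; ++m) {
                double f = (U / L) * (m - a) / (M + 1.0 / k);
                printf("Rossiter mode %d: f = %7.1f Hz (St = %.3f)\n",
                       m, f, f * L / U);
            }
            return 0;
        }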
