RISS Academic Research Information Service

      • KCI-indexed

        Parallel and Distributed Computing: Flexible Multi-Threshold Based Server Power Mode Control for Handling Rapidly Changing Loads in an Energy-Saving Server Cluster

        안태준 (Tae June Ahn), 조성철 (Sung Choul Cho), 김석구 (Seok Koo Kim), 천경호 (Kyong Ho Chun), 정규식 (Kyu Sik Chung) 한국정보처리학회 2014 정보처리학회논문지. 컴퓨터 및 통신시스템 Vol.3 No.9

        An energy-aware server cluster aims to reduce power consumption as much as possible while providing the same QoS (quality of service) as an energy-unaware server cluster. Existing methods calculate the minimum number of active servers needed to handle the current user requests and control the server power mode at a fixed time interval so that only the needed servers are on. When the load changes rapidly, the QoS of these methods degrades because they cannot increase the number of active servers quickly enough. To solve this QoS problem, we classify load-change situations into five types (rapid growth, growth, normal, decline, and rapid decline) and apply five different thresholds, one per situation, when calculating the number of active servers. In addition, we use a flexible scheme that adjusts this classification criterion for the multiple thresholds, considering not only the load change but also the remaining capacity of the servers to handle user requests. We performed experiments with a cluster of 15 servers, using the SPECweb benchmarking tool to generate rapidly changing load patterns. Experimental results show that the QoS of the proposed method is improved to the level of an energy-unaware server cluster while power consumption is reduced by up to about 50 percent, depending on the load pattern.
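
        As a rough illustration of the multi-threshold control described in this abstract, the following Python sketch classifies the load trend into the five situations and applies a per-situation headroom value when computing how many servers to keep powered on. The class boundaries, headroom values, and the way the remaining-capacity ratio adjusts the boundaries are placeholders chosen for the example, not the paper's actual parameters.

```python
import math

# Illustrative headroom factors per load-trend class; the paper's actual
# threshold values are not given in the abstract, so these are placeholders.
HEADROOM = {
    "rapid_growth": 0.55,   # keep the most spare capacity
    "growth": 0.65,
    "normal": 0.75,
    "decline": 0.85,
    "rapid_decline": 0.90,  # allow servers to run closer to full
}

def classify_trend(prev_load, curr_load, remaining_ratio):
    """Classify the load trend into one of the five situations.

    The boundaries are scaled by the cluster's remaining-capacity ratio,
    mimicking the paper's idea of flexibly adjusting the classification
    criterion (the exact rule here is an assumption).
    """
    delta = (curr_load - prev_load) / max(prev_load, 1e-9)
    big, small = 0.20 * remaining_ratio + 0.05, 0.05  # hypothetical boundaries
    if delta > big:
        return "rapid_growth"
    if delta > small:
        return "growth"
    if delta < -big:
        return "rapid_decline"
    if delta < -small:
        return "decline"
    return "normal"

def servers_needed(curr_load, prev_load, per_server_capacity, total_servers):
    """Return how many servers to keep powered on for the next control period."""
    cluster_capacity = per_server_capacity * total_servers
    remaining_ratio = max(0.0, 1.0 - curr_load / cluster_capacity)
    trend = classify_trend(prev_load, curr_load, remaining_ratio)
    target_utilization = HEADROOM[trend]
    n = math.ceil(curr_load / (per_server_capacity * target_utilization))
    return min(max(n, 1), total_servers), trend

if __name__ == "__main__":
    n, trend = servers_needed(curr_load=900, prev_load=600,
                              per_server_capacity=100, total_servers=15)
    print(trend, n)  # rapid_growth 15 (17 would be needed, capped at cluster size)
```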

      • KCI-indexed

        Estimation-Based Load-Balancing with Admission Control for Cluster Web Servers

        Saeed Sharifian, Seyed Ahmad Motamedi, Mohammad Kazem Akbari 한국전자통신연구원 2009 ETRI Journal Vol.31 No.2

        The growth of the World Wide Web and web-based applications is creating demand for high-performance web servers that offer better throughput and shorter user-perceived latency. This demand has led to the widespread use of cluster-based web servers in the Internet infrastructure. Load balancing algorithms play an important role in boosting the performance of cluster web servers. Previous load balancing algorithms suffer a significant performance drop under dynamic and database-driven workloads. We propose an estimation-based load balancing algorithm with admission control for cluster-based web servers. Because it is difficult to accurately determine the load of web servers, we propose an approximate policy. The algorithm classifies requests based on their service times and tracks the number of outstanding requests of each class in each web server node to dynamically estimate the load state of each web server. The available capacity of each web server is then computed and used for the load balancing and admission control decisions. The implementation results confirm that the proposed scheme improves both the mean response time and the throughput of clusters compared to rival load balancing algorithms and prevents clusters from being overloaded even when request rates exceed the cluster capacity.
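
        A minimal sketch of the estimation-based idea in this abstract: requests are grouped into service-time classes, the balancer tracks outstanding requests per class on each node, estimates each node's load as a weighted sum, and either dispatches to the node with the most spare capacity or rejects the request (admission control). The class names, weights, and capacity units are hypothetical; the paper's actual estimation model is not reproduced here.

```python
from dataclasses import dataclass, field

# Hypothetical request classes and service-time weights; the paper's real
# classification and estimation model are not reproduced here.
CLASS_WEIGHT = {"static": 1.0, "dynamic": 4.0, "db": 10.0}

@dataclass
class Server:
    name: str
    capacity: float  # abstract load units this node can carry
    outstanding: dict = field(default_factory=lambda: {c: 0 for c in CLASS_WEIGHT})

    def estimated_load(self):
        # Weighted count of outstanding requests approximates the node's load.
        return sum(CLASS_WEIGHT[c] * n for c, n in self.outstanding.items())

    def available(self):
        return self.capacity - self.estimated_load()

def dispatch(servers, req_class):
    """Send the request to the node with the most spare capacity, or reject it
    (admission control) when the whole cluster is saturated."""
    best = max(servers, key=lambda s: s.available())
    if best.available() < CLASS_WEIGHT[req_class]:
        return None  # reject: request rate is beyond cluster capacity
    best.outstanding[req_class] += 1
    return best

def complete(server, req_class):
    server.outstanding[req_class] -= 1  # called when the reply has been sent

if __name__ == "__main__":
    cluster = [Server("web1", 40.0), Server("web2", 40.0)]
    for c in ["static", "db", "db", "dynamic"]:
        target = dispatch(cluster, c)
        print(c, "->", target.name if target else "rejected")
```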

      • KCI-indexed

        A Dynamic Load Balancing Scheme Based on Host Load Information in a Wireless Internet Proxy Server Cluster

        곽후근 (Hukeun Kwak), 정규식 (Kyusik Chung) 한국정보과학회 2006 정보과학회논문지 : 정보통신 Vol.33 No.3

        A server load balancer is used to accept client requests and distribute them to one of the servers in a wireless internet proxy server cluster. LVS (Linux Virtual Server), a software-based server load balancer, supports several load balancing algorithms in which client requests are distributed to servers in a round-robin way, in a hashing-based way, or by assigning each request to the server with the fewest concurrent connections to LVS. An improved load balancing algorithm that considers server performance has been proposed, in which upper and lower limits on the number of concurrent connections allowed within each server's maximum performance are determined in advance and these static limits are applied to load balancing. However, such schemes do not dynamically apply run-time server load information to load balancing. In this paper, we propose a dynamic load balancing scheme in which the load balancer keeps each server's CPU load information at run time and assigns each new client request to the server with the lowest load. Using a cluster consisting of 16 PCs, we performed experiments with static content (images and HTML). Compared to the existing schemes, the experimental results show a performance improvement for client requests requiring CPU-intensive processing and for a cluster consisting of servers with different performance.
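
        The scheduling rule itself is simple to sketch: each node reports its CPU load at run time and the balancer assigns the next request to the least-loaded node. The sketch below fakes the reporting path with random values and contrasts the choice with static round robin; it is an illustration of the policy, not the authors' LVS implementation.

```python
import itertools
import random

class LeastCpuLoadBalancer:
    """Pick the node whose most recently reported CPU load is lowest.

    In the paper the balancer collects CPU load from each node at run time;
    here the reporting path is faked with random numbers.
    """
    def __init__(self, servers):
        self.cpu_load = {s: 0.0 for s in servers}

    def report(self, server, load):  # each node would call this periodically
        self.cpu_load[server] = load

    def pick(self):
        return min(self.cpu_load, key=self.cpu_load.get)

if __name__ == "__main__":
    random.seed(0)
    nodes = ["proxy1", "proxy2", "proxy3"]
    lb = LeastCpuLoadBalancer(nodes)
    rr = itertools.cycle(nodes)  # static round robin, for contrast
    for _ in range(5):
        for s in nodes:  # pretend each node pushed a fresh CPU reading
            lb.report(s, random.uniform(0.0, 1.0))
        print("round-robin:", next(rr), " least-load:", lb.pick())
```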

      • A High Availability Clusters Model Combined with Load Balancing and Shared Storage Technologies for Web Servers

        A. B. M. Moniruzzaman, Md. Waliullah, Md. Sadekur Rahman 보안공학연구지원센터 2015 International Journal of Grid and Distributed Comp Vol.8 No.1

        This paper designs and implements a high-availability cluster combined with a load-balancing infrastructure for web servers. The described system can provide full facilities to website hosting providers and large business organizations. It can provide continuous service even when system components fail unexpectedly, using Linux Virtual Server (LVS) load-balancing cluster technology combined with virtualization and shared storage technology to achieve a three-tier web server cluster architecture. This approach not only improves availability but also affects the security and performance of the application services being requested. Benefits of the system include overcoming node failure, network failure, and storage limitations, as well as load distribution.
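
        A toy sketch of the failover side of such a design, assuming the director drops real servers that fail a TCP health probe and rotates new connections over the survivors. The addresses and port are hypothetical, and a production deployment would rely on LVS/keepalived rather than application-level Python like this.

```python
import itertools
import socket

def healthy(host, port=80, timeout=0.5):
    """TCP connect probe, standing in for an LVS/keepalived health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def alive_pool(real_servers):
    """Drop failed nodes from the rotation (node failover)."""
    return [s for s in real_servers if healthy(s)]

if __name__ == "__main__":
    # Hypothetical real-server addresses sitting behind one virtual IP.
    reals = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
    pool = alive_pool(reals) or reals  # fall back if every probe fails
    rr = itertools.cycle(pool)
    for _ in range(4):
        print("forward next connection to", next(rr))
```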

      • Performance Analysis of NIC Caching for Web Server Clusters on System Area Network

        Jin-Ha Kim, 최규상 에스케이텔레콤 (주) 2009 Telecommunications Review Vol.19 No.3

        High-performance web servers have become a necessity for an increasing number of network-based applications and services. To meet this need, System Area Networks (SANs) have been designed to free up valuable server resources, such as CPU cycles, to improve the performance of applications running on the server. In this paper, we examine the architecture of web server clusters based on SANs and exploit the abundant local memory available in programmable Network Interface Cards (NICs) for SANs. We explore two NIC caching schemes, namely exclusive caching and inclusive caching, to reduce disk accesses and communication cost (in terms of data transfer latency), respectively. In addition, we explore the impact of the I/O interconnection architecture, cache replacement policy, and scalability on performance with the NIC caching schemes in web server clusters. To maximize the performance benefit from these caching schemes, we analyze the performance implications of NIC caching on the web server under several web workloads with different characteristics. Moreover, we developed a simulator, validated against an 8-node prototype implementation, for performance evaluation. We conduct extensive experiments to compare the performance of web servers with and without NIC caching. Our results show that the proposed NIC caching schemes significantly increase throughput over the original system, which does not use NIC caching. The exclusive caching scheme becomes beneficial when a workload shows low popularity skewness, while the inclusive caching scheme is particularly effective for workloads with large file sizes and high popularity skewness. The results provide important guidance for tuning NIC caching schemes in web server clusters.
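
        To make the two caching schemes concrete, here is a small Python sketch in which the NIC cache either receives victims evicted from the host cache (treated here as the "exclusive" scheme) or duplicates objects that hit in host memory (the "inclusive" scheme). The LRU caches, trace, and victim-cache interpretation of exclusive caching are simplifications for illustration, not the paper's simulator.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache over object names; put() returns the evicted victim, if any."""
    def __init__(self, size):
        self.size, self.d = size, OrderedDict()

    def get(self, key):
        if key in self.d:
            self.d.move_to_end(key)
            return True
        return False

    def put(self, key):
        self.d[key] = True
        self.d.move_to_end(key)
        if len(self.d) > self.size:
            return self.d.popitem(last=False)[0]
        return None

def serve(obj, host, nic, policy):
    """Return where the object was found and fill the caches per policy."""
    if nic.get(obj):
        return "nic"                  # served straight from NIC memory
    if host.get(obj):
        if policy == "inclusive":
            nic.put(obj)              # duplicate the hot object onto the NIC
        return "host"
    victim = host.put(obj)            # miss: fetch from disk into host memory
    if policy == "exclusive" and victim is not None:
        nic.put(victim)               # NIC kept disjoint from host memory
    return "disk"

if __name__ == "__main__":
    trace = ["a", "b", "a", "c", "a", "b", "a"]
    for policy in ("exclusive", "inclusive"):
        host, nic = LRUCache(2), LRUCache(2)
        print(policy, [serve(obj, host, nic, policy) for obj in trace])
```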

      • KCI-indexed

        A Simulation Analysis for Web Cluster Configuration

        강성열 (Kang, Sung-Yeol), 송영효 (Song, Young-Hyo) 한국디지털정책학회 2008 디지털융복합연구 Vol.6 No.2

        High-volume web sites often use clusters of servers with load balancing as a way to increase the performance, scalability, and availability of the sites. Load balancing, usually performed by a load balancer in front of such clusters, is a technique for spreading workload between several computers or resources in order to obtain optimal resource utilization or response time. In this paper we examine the performance of several configurations of cluster-based web servers using a simulation approach. We investigate two types of buffering scheme (common and local) for web clusters and three load balancing policies (uniformly random, round robin, and least queue first), using response time as the performance measure. We also examine two basic approaches to scaling web clusters: adding more servers of the same type or upgrading the capacity of the servers in the clusters.
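
        The three dispatching policies compared in this abstract can be sketched in a few lines; the toy loop below dispatches synthetic requests under each policy and prints the resulting queue lengths. The arrival/service model (one random completion per arrival) is an arbitrary stand-in for the paper's simulation model.

```python
import random
from itertools import cycle

def pick(policy, servers, queue_len, rr):
    """Select a target server under one of the three dispatching policies."""
    if policy == "uniformly random":
        return random.choice(servers)
    if policy == "round robin":
        return next(rr)
    return min(servers, key=lambda s: queue_len[s])  # least queue first

if __name__ == "__main__":
    random.seed(1)
    servers = ["s1", "s2", "s3", "s4"]
    for policy in ("uniformly random", "round robin", "least queue first"):
        queue_len, rr = {s: 0 for s in servers}, cycle(servers)
        for _ in range(1000):                      # 1000 synthetic arrivals
            queue_len[pick(policy, servers, queue_len, rr)] += 1
            busy = [s for s in servers if queue_len[s] > 0]
            queue_len[random.choice(busy)] -= 1    # one request completes somewhere
        print(policy, queue_len)
```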

      • KCI-indexed

        Design of a Server-Clustering-Based Intrusion Tolerance System Considering Service Security Levels

        권현, 이용재, 윤현수 육군사관학교 화랑대연구소 2016 한국군사학논집 Vol.72 No.2

        The Internet is an open space in which a large number of computer systems are connected to one another. Unfortunately, as systems provide many functionalities to users, they contain vulnerabilities that can be exploited by malicious users attempting to intrude into a system. Although such malicious activities by internal or external adversaries can be countered by conventional security systems such as an Intrusion Detection and Prevention System (IDPS), it is not always possible to defend a target system against attacks completely. For this reason, the Intrusion Tolerance System (ITS) has been proposed to maintain service provision even in threatening environments where some attacks partially succeed. In this paper, we propose an ITS based on a server clustering scheme in which servers are grouped into clusters according to security level, because each service requires a different security level. The proposed scheme allocates spare servers to a cluster called the security cluster, since more resources should be kept ready to be assigned to the security cluster to maintain a higher security level. In this way, the proposed system can maintain the performance of services provided to clients. Using the CloudSim simulator, the security level and the number of required virtual machines are compared with those of SCIT (Self-Cleansing Intrusion Tolerance), an existing intrusion tolerance system.
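
        A minimal sketch of the resource-allocation idea, assuming the security cluster is sized as its demand plus a spare margin and the normal cluster takes what remains. The spare_ratio knob and the sizing rule are hypothetical illustrations, not the scheme evaluated in the paper.

```python
import math

def allocate(total_servers, security_demand, normal_demand, spare_ratio=0.5):
    """Split a server pool into a security cluster and a normal cluster.

    spare_ratio is a hypothetical knob: the fraction of extra standby servers
    kept ready for the security cluster so that its service level survives
    partially successful intrusions.
    """
    security = security_demand + math.ceil(security_demand * spare_ratio)
    remaining = total_servers - security
    if remaining < 0:
        raise ValueError("pool too small for the requested security level")
    normal = min(normal_demand, remaining)
    return {"security": security, "normal": normal,
            "idle_pool": total_servers - security - normal}

if __name__ == "__main__":
    print(allocate(total_servers=20, security_demand=6, normal_demand=8))
    # {'security': 9, 'normal': 8, 'idle_pool': 3}
```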

      • KCI-indexed candidate

        A Content-Aware Load Balancing Technique Using Histogram Transformation on a Cluster Web Server

        홍기호 (Gi Ho Hong), 권춘자 (Chun Ja Kwon), 최황규 (Hwang Kyu Choi) 한국인터넷정보학회 2005 인터넷정보학회논문지 Vol.6 No.2

        As the number of Internet users increases rapidly, cluster web server systems are attracting the attention of many researchers and Internet service providers. The cluster web server has been developed to efficiently support a larger number of users as well as to provide a highly scalable and available system. To provide high performance in a cluster web server, efficient load distribution is important, and many content-aware request distribution techniques have recently been proposed. In this paper, we propose a new content-aware load balancing technique that can evenly distribute the workload to each node in a cluster web server. The proposed technique is based on a hash histogram transformation, in which each URL entry of the web log file is hashed and the access frequency and file size are accumulated into a histogram. Each user request is assigned to a node by the [hash value - server node] mapping of the histogram transformation. In the proposed technique, the histogram is updated periodically, so an even distribution of user requests can be maintained continuously. In addition to load balancing, our technique can exploit the cache effect to improve performance. The simulation results show that the performance of our technique is considerably better than that of the traditional round-robin method and that it improves performance by more than 10% compared with the existing workload-aware load balancing (WARD) method.
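
        The histogram-transformation mapping can be sketched as follows: URLs are hashed into buckets, each bucket's weight accumulates frequency times transferred bytes from the log, and buckets are then assigned greedily to the least-loaded node. The bucket count, hash function, and greedy assignment are assumptions made for the example; the paper's exact mapping construction and periodic update policy are not shown.

```python
import hashlib

BUCKETS = 64  # number of hash buckets; the paper's value is not stated here

def bucket(url):
    return int(hashlib.md5(url.encode()).hexdigest(), 16) % BUCKETS

def build_mapping(access_log, nodes):
    """Build the [hash value -> server node] mapping from a web log.

    access_log is an iterable of (url, bytes_sent); each bucket accumulates
    frequency times transferred size, and buckets are then dealt out greedily
    so that every node receives roughly the same accumulated weight.
    """
    hist = [0.0] * BUCKETS
    for url, size in access_log:
        hist[bucket(url)] += size
    load, mapping = {n: 0.0 for n in nodes}, {}
    for b in sorted(range(BUCKETS), key=lambda b: -hist[b]):
        target = min(load, key=load.get)  # least-loaded node so far
        mapping[b] = target
        load[target] += hist[b]
    return mapping

def route(url, mapping):
    return mapping[bucket(url)]

if __name__ == "__main__":
    log = [("/a.html", 2_000), ("/big.iso", 700_000), ("/a.html", 2_000),
           ("/img/logo.png", 30_000), ("/b.html", 5_000)]
    m = build_mapping(log, ["node1", "node2", "node3"])
    print(route("/a.html", m), route("/big.iso", m))
```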

      • KCI-indexed candidate

        A High-Availability Web Server Cluster Using a Self-Healing Technique

        정지영, 김영로 (사)디지털산업정보학회 2009 디지털산업정보학회논문지 Vol.5 No.1

        Although the web is becoming a widely accepted medium, it provides relatively poor performance and low availability. A cluster consists of a collection of interconnected stand-alone computers working together and provides a high-availability solution in application areas such as web services or information systems. Web server clusters require a high-availability service with proactive and practical fault management. However, as system complexity grows, it is not easy to meet this requirement. Therefore, web server clusters must have a self-fault-management capability to meet the high-availability requirement. In this paper, we propose a high-availability web server cluster that uses a self-healing technique with minimal human intervention. Our experimental results show that the proposed method can be used to improve the availability of web server clusters.
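
        A self-healing loop of the kind this abstract describes might look like the sketch below: probe each node, and when a probe fails, run an automated recovery action instead of paging an operator. The probe URL, SSH restart command, and period are placeholders, since the paper does not spell out its healing mechanism at this level of detail.

```python
import subprocess
import time
import urllib.request

def alive(url, timeout=2):
    """HTTP probe used as the failure detector."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def heal(node):
    """Hypothetical recovery action: restart the web service on the node.

    The command is a placeholder, not the paper's actual mechanism."""
    subprocess.run(["ssh", node, "systemctl", "restart", "httpd"], check=False)

def watchdog(nodes, period=10):
    """Periodically probe every node and try to heal the ones that fail."""
    while True:
        for node in nodes:
            if not alive(f"http://{node}/health"):
                heal(node)
        time.sleep(period)

# Example (would run forever): watchdog(["web1.example", "web2.example"])
```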

      • A Power and Performance Management Simulation Platform for Web Application Server Cluster

        Zhi Xiong, Zhongliang Xue, Weihong Cai, Lingru Cai, Juan Yang 보안공학연구지원센터 2016 International Journal of Future Generation Communi Vol.9 No.12

        Web application server clusters have been widely used to improve the performance of web application servers. Because web load is highly variable, we need to dynamically manage the cluster's deployment so as to reduce power consumption while still satisfying the load's performance demand. To help researchers evaluate a management strategy or choose key parameters for it, we propose a CloudSim-based simulation platform in this paper. It can simulate different cluster deployment algorithms, request scheduling algorithms, and load features, where the cluster's deployment includes the on/off state, CPU frequency, and request scheduling parameter(s) of each server. With the aid of the HookTimer component, the platform supports periodic and conditional deployment trigger modes and can calculate some common performance indicators. The use of interfaces, a dynamic proxy technique, and an XML configuration file gives the platform good extensibility and configurability. In addition, a request-number-triggered management strategy is proposed and simulated on the platform. The simulation results demonstrate the feasibility of the platform.
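
        The request-number-triggered strategy mentioned at the end of this abstract can be illustrated with a simple control rule: power a server on when outstanding requests per active server exceed an upper threshold and off when they drop below a lower one. The thresholds and step-by-one adjustment are assumptions for the sketch; the platform itself is Java/CloudSim-based and is not reproduced here.

```python
# Hypothetical thresholds: power a server on when outstanding requests per
# active server exceed UPPER, power one off when they fall below LOWER.
UPPER, LOWER = 80, 30

def manage(active_servers, total_servers, outstanding_requests):
    """Request-number-triggered deployment decision for one control period."""
    per_server = outstanding_requests / max(active_servers, 1)
    if per_server > UPPER and active_servers < total_servers:
        return active_servers + 1  # switch one more server on
    if per_server < LOWER and active_servers > 1:
        return active_servers - 1  # switch one server off
    return active_servers

if __name__ == "__main__":
    n = 4
    for load in (100, 380, 500, 420, 90, 60):
        n = manage(n, total_servers=8, outstanding_requests=load)
        print(load, "->", n, "active servers")
```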
