RISS (Academic Research Information Service)

      Cache replacement and data placement for reducing data layer overhead in multi-cloud online social network services


      https://www.riss.kr/link?id=T14858852

Additional Information

Multilingual Abstract

Numerous Online Social Network (OSN) service providers have introduced cache systems and manipulated their databases with data replication or data sharding techniques to improve service performance in a multi-cloud (multiple cloud servers) environment. The cache system and the database manipulation techniques can mitigate the bottleneck at the data layer. However, existing cache algorithms cannot distinguish data that should remain in cache memory for an extended period from data that can be evicted relatively quickly, which reduces the cache efficiency of the system. Existing data replication techniques not only generate tremendous synchronization traffic between replicas but also store considerable redundant data, thereby incurring large storage costs. In addition, they do not provide dynamic load balancing that considers the resource status of each cloud server, so they cannot cope with the performance degradation caused by resource contention at the data layer. Moreover, existing data sharding techniques do not consider the locations of users, the locations of cloud servers, or service characteristics, and thus cannot reduce latency efficiently.

Therefore, this dissertation introduces novel cache algorithms and a novel database manipulation technique to resolve these limitations. First, to improve the efficiency of the cache system, memory space is divided and separately allocated to each user, and the size of each user's memory space is adjusted according to that user's usage. The dissertation introduces two ways to predict each user's service usage: (1) exploiting the statistical characteristic that the more friends a user has on the OSN service, the more frequently the user uses the service, and (2) predicting each user's usage with a machine learning technique trained on the logs of each user's actions on the service. Second, we introduce an adaptive data placement technique that can replace the existing data replication and data sharding techniques. This approach reduces resource contention at the data layer with a data balancing technique that relocates data from one cloud server to another according to the amount of traffic. To keep latency acceptable, it also considers the relationships between users and the distance between each user and each cloud server when transferring data.

To validate these approaches, we experimented with actual user data collected from Twitter. The results show that the cache algorithms improve cache efficiency by an average of over 24% and reduce execution delay by an average of over 2000 ms. Further, the data placement approach reduces resource contention by an average of over 59%, reduces storage volume by at least 50%, and keeps latency under 50 ms.
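The first idea above — a cache partitioned per user, with each partition sized by predicted usage — can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the class name `PerUserCache`, the proportional-share formula, and the plain LRU policy inside each partition are all assumptions; the friend-count heuristic from the abstract stands in for the usage predictor.

```python
from collections import OrderedDict

class PerUserCache:
    """Sketch: total cache memory is partitioned per user, and each
    partition's capacity is proportional to the user's predicted usage.
    Here, predicted usage is approximated by friend count (more friends
    -> more frequent service use -> a larger share of the cache)."""

    def __init__(self, total_slots, friend_counts):
        total_friends = sum(friend_counts.values())
        self.capacities = {}
        self.partitions = {}
        for user, friends in friend_counts.items():
            # Proportional share of the cache, at least one slot per user.
            self.capacities[user] = max(1, round(total_slots * friends / total_friends))
            self.partitions[user] = OrderedDict()  # LRU order within a partition

    def get(self, user, key):
        part = self.partitions[user]
        if key in part:
            part.move_to_end(key)  # mark as most recently used
            return part[key]
        return None  # cache miss

    def put(self, user, key, value):
        part = self.partitions[user]
        if key in part:
            part.move_to_end(key)
        part[key] = value
        # Evict only from this user's partition, never from another user's.
        while len(part) > self.capacities[user]:
            part.popitem(last=False)
```

Because eviction is confined to each user's own partition, a burst of activity from one heavy user cannot flush the cached data of light users — which is the property the per-user division is meant to provide.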
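The adaptive data placement step described in the abstract — relocating data between cloud servers according to traffic while weighing user relationships and user-to-server distance — might look like the sketch below. All names (`choose_target`, `rebalance`, the score weights `w_dist` and `w_rel`, the dict-based server records) are illustrative assumptions, not the dissertation's actual interface.

```python
def choose_target(user, servers, friends_of, distance, w_dist=1.0, w_rel=2.0):
    """Score candidate servers for a user's data: prefer servers that are
    close to the user and that already hold the user's friends' data."""
    def score(srv):
        colocated = sum(1 for f in friends_of[user] if f in srv["users"])
        return w_rel * colocated - w_dist * distance[user][srv["name"]]
    return max(servers, key=score)

def rebalance(servers, friends_of, distance, traffic, capacity):
    """Move the highest-traffic users off any server whose total traffic
    exceeds `capacity`, one user at a time, to the best-scoring server."""
    def load(srv):
        return sum(traffic[u] for u in srv["users"])

    moves = []
    for src in servers:
        # Relocate the hottest users first until the server fits again.
        for user in sorted(src["users"], key=lambda u: -traffic[u]):
            if load(src) <= capacity:
                break
            dst = choose_target(user, [s for s in servers if s is not src],
                                friends_of, distance)
            src["users"].remove(user)
            dst["users"].add(user)
            moves.append((user, src["name"], dst["name"]))
    return moves
```

The scoring function captures the two latency considerations named in the abstract: co-locating data of related users (so a user's feed is served from one server) and keeping data near the user; the traffic threshold drives the contention-reducing balancing step.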
