고선규 한국지방정치학회 2016 한국지방정치학회보 Vol.6 No.1
This study has two main purposes: to examine the characteristics of big data and to draw out its political implications for Korea. The big data phenomenon is spreading rapidly across business, public decision making, and politics, and parties and candidates, recognizing its impact on elections, are adopting it aggressively. A central difficulty in applying big data is securing reliable data; alongside its quantitative attributes (volume, velocity, and variety), it is the qualitative attributes of the data that determine its impact on an election. The paper therefore reviews the content and characteristics of big data, the institutional arrangements needed to make it usable in Korean elections, and the political implications of doing so. Big data took off as advances in IT, the Internet, and data-processing technology made it possible to extract features and patterns from very large data sets, and it attracted attention in elections because it allows election outcomes to be predicted and voters' needs to be identified and met in real time with tailored responses. In the 2012 US presidential election, the Obama campaign computed a Democratic-preference index, a volatility index, and a turnout index for each region to classify voters, and field workers carried out micro-targeting based on a dashboard. The same techniques were also used for the Democratic Party's fundraising, and voters' media and SNS data were analyzed not only statistically but also with methods from behavioral science, behavioral economics, and psychology. Campaigns in Korea's 18th presidential election of 2012 also turned to big data, but its use differed markedly from the United States, and the scope for big data in Korean elections remains very limited. For big data to be used in earnest, vote-count data, opinion-poll data, campaign-pledge data, and population and housing data need to be opened up. The main implication of big data for Korean elections lies in micro-targeting: data-driven micro-targeting could fundamentally change Korea's campaign culture, and big data elections would contribute to making elections more scientific, rational, and efficient.
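The index-based classification sketched in the abstract above can be illustrated with a minimal, hypothetical example: compute a support score, a volatility score, and a turnout score per voter and map each voter to a campaign action. The column names, thresholds, and segment labels below are illustrative assumptions, not the Obama campaign's actual models.

```python
# A minimal voter-segmentation sketch using three hypothetical indices per voter;
# the field names and cutoffs are illustrative, not the campaign's real models.
import pandas as pd

voters = pd.DataFrame({
    "voter_id":     [1, 2, 3, 4],
    "party_pref":   [0.90, 0.40, 0.55, 0.20],  # modeled support for the candidate (0-1)
    "volatility":   [0.10, 0.60, 0.50, 0.20],  # likelihood of changing preference (0-1)
    "turnout_prob": [0.80, 0.70, 0.30, 0.90],  # modeled probability of voting (0-1)
})

def segment(row):
    """Map the three indices to a campaign action for micro-targeting."""
    if row.party_pref > 0.7 and row.turnout_prob < 0.5:
        return "mobilize"      # supporter unlikely to vote -> get-out-the-vote contact
    if row.volatility > 0.4 and 0.3 < row.party_pref < 0.7:
        return "persuade"      # undecided and movable -> persuasion messaging
    if row.party_pref > 0.7:
        return "reinforce"     # reliable supporter -> volunteering and fundraising asks
    return "deprioritize"      # unlikely supporter -> no targeted contact

voters["segment"] = voters.apply(segment, axis=1)
print(voters[["voter_id", "segment"]])
```

In a dashboard-driven campaign of the kind described, such segments would simply determine which contact list a field worker sees; the scoring itself would come from far richer models.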
황홍섭 한국사회과교육연구학회 2019 사회과교육 Vol.58 No.1
The Fourth Industrial Revolution, with big data and artificial intelligence at its core, is moving toward an intelligent information society and demands a paradigm shift in our lives and in education as a whole. The purpose of this study is to explore a social studies teaching and learning model that uses big data. To that end, the study first examines the complex and supercomplex character of present and future society and discusses the significance of education that uses big data; second, it explores a big data-based social studies teaching and learning model on the basis of keyword searches and a literature review on big data, teaching and learning, and future education; and third, it develops an example lesson that applies the model. The search terms were "big data," "social studies teaching and learning," and "future education," and the sources analyzed were three portal sites (Naver, Daum, Google), SNS (Twitter, Facebook), and domestic journal articles and theses in RISS. Text mining, word cloud analysis, and network analysis were used as analysis techniques. The results are as follows. First, the society brought about by the Fourth Industrial Revolution is becoming increasingly complex and supercomplex, and big data captures these complex and supercomplex phenomena well. In social studies education, big data can serve variously as an information source, material, tool, means, analysis and evaluation instrument, mentor, helper, and collaborator, and it should be used actively. Second, because big data reflects the characteristics of complex and supercomplex systems, a problem-solving learning model is appropriate for a big data-based approach: it enables learners to create value and act optimally by solving problems of present and future society. In particular, a teaching and learning model that uses big data allows self-directed, deductive learning built on an inductive procedure in which complex problems are simplified and solved by mining data that approaches a complete enumeration.
Third, the example lesson developed here suggests a direction for how big data can be used in classroom teaching. In conclusion, the social studies teaching and learning model using big data is an appropriate model for creative problem solving in present and future societies characterized by complex system and supercomplex system phenomena, and it is useful for realizing the social studies core competencies required by the 2015 revised national curriculum. By supporting learner-customized instruction through cooperation and personalization strategies, the model is expected to contribute to cultivating creative, convergence-oriented talent.
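As a rough illustration of the keyword analysis pipeline named in the abstract (text mining feeding a word cloud, plus a co-occurrence network), here is a minimal sketch on toy documents; the real study analyzed search results from three portals, SNS, and RISS, and the terms and weights below are placeholders.

```python
# A minimal sketch of keyword frequency (the basis of a word cloud) and a word
# co-occurrence network on toy documents; the real corpus came from portal, SNS
# and RISS search results for the study's three search terms.
from collections import Counter
from itertools import combinations
import networkx as nx

docs = [
    "big data social studies teaching learning future education",
    "big data problem solving learning complex system",
    "future education personalization big data learning",
]
tokens = [d.split() for d in docs]

# Text mining: term frequencies, which a word cloud would visualize by size
freq = Counter(w for doc in tokens for w in doc)
print(freq.most_common(5))

# Network analysis: words co-occurring in the same document share an edge
G = nx.Graph()
for doc in tokens:
    for a, b in combinations(sorted(set(doc)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality flags the hub terms a network visualization would highlight
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])
```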
An Analysis of the Global Development and Prospects of Big Data Convergence Art
태혜신 한국무용과학회 2023 한국무용과학회지 Vol.40 No.1
Big data is the first of the ten emerging core technologies described by the World Economic Forum. The Internet of Things (IoT), robotics, 3D printing, artificial intelligence (AI), new materials, 5G mobile communication, big data analytics, gene editing, virtual reality (VR), and augmented reality (AR) are all products of big data, and these products are being put to use across society as a whole and in the arts. However, there are still few studies of big data convergence art as creative art. This study therefore aims to provide basic reference material on big data convergence art and to forecast its future by analyzing how it is developing worldwide. To this end, a literature study was conducted covering the three years from December 2019 to November 2022, drawing on books, theses, journal articles, newspaper articles, and Internet sources such as websites and blogs. The results are as follows. Internet of Things art (IoT Art) is still at an early stage and was hardly found in Korea, although some IoT artworks based on disruptive technologies are being produced. By contrast, AI creative art based on big data algorithms is developing rapidly and innovatively in each art field at home and abroad. AI works in the visual arts and music are currently indistinguishable from human works, and they surpass humans in that large numbers of works can be generated in an instant. On the big data side, the visual arts need labeled image datasets that can further raise the qualitative performance of AI creation, while music needs optimal parameter settings and the development of new AI algorithms. In literature, AI is expanding from generating phrases and sentences to scenarios, poems, and novels, using existing literary works as big data to create new ones.
It is harder to say that AI engages in independent creative activity here than in the visual arts and music, but an era has already arrived in which AI takes on part of the role of producing finished works and of the writer; on the big data side, larger datasets and more finely tuned AI algorithms are likely to yield more interesting works. The dance field has progressed from dancing robots to choreography AI. Robots' dance movements are now more natural, delicate, and dynamically human-like than in the early stages. AI choreography is at an initial stage and currently produces samples of varied movements, but AI-assisted choreography is expected as performance improves. On the big data side, the most urgent task at present is turning dancers' movements into big data.
동성혜 미국헌법학회 2019 美國憲法硏究 Vol.30 No.2
This paper focuses on the usefulness of political big data in the political process, specifically election strategy and the formation and analysis of public opinion, as part of the essence of political communication, and on that basis analyzes how the Obama campaign used political big data in the 2012 US presidential election. Big data, a core technology of the Fourth Industrial Revolution era, is the link between social change and technological innovation: it is the aggregate of all the information in the hyper-connected networks created by combining technology with people, society, nature, and things. The future can be predicted from such vast data only on the basis of insight into what to analyze and how to interpret it through collection, analysis, and sharing; the same is true of big data in politics. The development and spread of IT is changing the paradigm of the political process by affecting the political perceptions and behavior of parties, politicians, and voters alike. This study approaches big data in politics from the perspective of political communication, in terms of "information about people" and "interaction," and treats the use of political big data as political activity aimed at acquiring and maintaining political power within the political process, namely elections and the formation of public opinion. In particular, social media, which produce information on the basis of the participation, sharing, and openness of Web 2.0, operate through two-way communication and show the potential to maximize, routinize, and activate political influence, from generating political opinion and issues to organizing political forces. In this context, political big data is defined as the process of collecting and storing information for political purposes or political activities, finding politically meaningful insights, and extracting new forms of political value.
Political big data has two distinguishing features: it supports grasping social reality and anticipating the direction of social change so that appropriate policies or political directions can be set, and it serves as a channel of political participation through which individuals express their political demands via social media. The study confirms the political usefulness of political big data through the Obama campaign's election strategy and use of opinion polling in the 2012 US presidential election, widely regarded as a successful case. The analysis examines the process of aggregating voter data and cases of customized election strategy built on that data, together with a comparison of opinion polls and political big data by platform. It finds that political big data is useful both as election strategy and as public opinion analysis, and that the process of aggregating voter data is itself a full election campaign in its own right. Aggregation and analysis also open up micro-targeting strategies by generation, age, and region, and comparing polls with platform-level political big data made it possible to read public opinion and develop election strategy. This was useful not only for settling policy pledges but also for changing strategic direction; offline it shaped organizational campaigning down to each local level, and online it was used in various ways in social media campaigns. For political big data to be used actively in the future, several things are required: securing voter data while putting measures in place to prevent violations of personal information, accumulating experience and experts who can analyze it accurately, and a change in perception that moves beyond equating political big data with whether social media is used in a campaign to recognizing that it is producing a paradigm shift in election strategy.
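One concrete reading of "comparing polls and political big data by platform" is to line up traditional poll support against sentiment aggregated from each social platform by demographic group. The sketch below does this with invented numbers and column names; it illustrates the idea only and is not the study's actual data or method.

```python
# A minimal sketch (illustrative numbers only) of comparing poll support with
# net sentiment aggregated from social platforms, broken down by age group.
import pandas as pd

poll = pd.DataFrame({
    "age_group": ["18-29", "30-44", "45-64", "65+"],
    "poll_support": [0.58, 0.52, 0.47, 0.41],            # share supporting the candidate
})

social = pd.DataFrame({
    "age_group": ["18-29", "18-29", "30-44", "45-64", "65+"],
    "platform":  ["twitter", "facebook", "twitter", "facebook", "facebook"],
    "net_sentiment": [0.31, 0.22, 0.12, -0.05, -0.18],    # positive minus negative share
})

# Average sentiment per age group across platforms, then join with the poll numbers
by_age = social.groupby("age_group", as_index=False)["net_sentiment"].mean()
merged = poll.merge(by_age, on="age_group")

# Convert poll share to a net margin (-1..1) so the two signals are comparable
merged["gap"] = merged["net_sentiment"] - (merged["poll_support"] - 0.5) * 2
print(merged)
```

Groups where online sentiment runs well ahead of (or behind) polled support become natural candidates for the mobilization or persuasion messaging that the micro-targeting strategies above describe.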
Linpei Zhai, Jae Eun Lee 위기관리 이론과 실천 2021 Crisisonomy Vol.17 No.9
The purpose of this study is to review how big data can be used to improve the government's crisis and emergency management capability in responding to public health crises, and to suggest future directions for improving the scientific application of big data analysis. The study classifies the specific forms big data took during the COVID-19 epidemic and analyzes the advantages of using it. Using big data to improve the government's crisis management capability is mainly reflected in the following aspects: more precise responses to the epidemic; better cooperation inside and outside government; a stronger ability to respond to online public opinion; and the transformation of public decision making from one based on traditional experience to one that is intelligent and scientific. To better integrate big data with public crisis governance, the study concludes with several suggestions: improving big data application capabilities; strengthening big data governance; staying people-oriented and attentive to online public opinion; and innovating public decision-making methods for big data governance.
권영진(Young jin Kwon), 정우진(Woo-Jin Jung) 한국지능정보시스템학회 2019 지능정보연구 Vol.25 No.2
According to a recent IDC (International Data Corporation) report, by 2025 the total volume of data is estimated to reach 163 zettabytes, roughly ten times that of 2016, and the main generators of data are shifting from consumers toward corporations. The so-called wave of big data is arriving, and its aftermath affects entire industries and individual firms alike, so effective management of vast amounts of data is more important than ever for the firm. However, while a number of previous studies quantitatively measure the effects of IT investment, no previous study has measured the effects of big data investment. We therefore quantitatively analyze the effects of big data investment in order to assist firms' investment decision making. This study applied the event study methodology, which rests on the efficient market hypothesis, to measure how market investors respond to firms' big data investments. In addition, sub-variables were set to analyze this effect in more depth, including firm size, industry (finance and ICT), investment completion status, and vendor involvement. To measure the impact of big data investment announcements, 91 announcements from 2010 to 2017 were used, and the investment effect was observed empirically through changes in corporate value immediately after disclosure. The announcement data were collected from the 'News' category of Naver, the largest portal site in Korea, and the target companies were restricted to firms listed on the KOSPI and KOSDAQ markets. During collection, the search keywords were 'big data construction', 'big data introduction', 'big data investment', 'big data order', and 'big data development'. The results of the empirical analysis are as follows. First, the market value of the 91 listed firms that announced big data investments increased by 0.92%; in particular, the market value of financial firms, non-ICT firms, and small-cap firms increased significantly. This can be interpreted as market investors perceiving firms' big data investments positively. Second, the increase in the market value of financial and non-ICT firms after a big data investment announcement is statistically significant. Third, the study measured the investment effect by firm size, comparing the top 30% and bottom 30% by market capitalization and excluding the middle range in order to maximize the contrast between groups; the analysis showed that the investment effect was larger for the smaller firms, with a clear difference between the two groups. Fourth, one of the most distinctive features of this study is that the announcements were classified by vendor involvement: the investment effect for the group with a specialist vendor involved was very large, indicating that market investors view the participation of big data specialist vendors very positively.
Last but not least, it is also notable that market investors evaluate an investment more positively when the announcement concerns a system that is planned rather than already completed. Applied in practice, it would therefore be effective, in terms of raising market value, for a company to disclose its big data investment at the point it decides to invest. Our study also has an academic implication, as prior research on the impact of big data investment has bee
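For readers unfamiliar with the event study methodology the abstract relies on, the sketch below shows its core steps on synthetic returns: fit a market model over an estimation window, compute abnormal returns in the event window around the announcement, and test the cumulative abnormal return. The window lengths, returns, and test statistic are illustrative assumptions, not the paper's exact specification.

```python
# A minimal event-study sketch: market-model estimation, abnormal returns and a
# cumulative abnormal return (CAR) around an announcement. Synthetic data only;
# the study itself used 91 Korean big data investment disclosures (2010-2017).
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 250)                      # daily market returns
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.008, 250)   # firm returns

est_win = slice(0, 200)        # estimation window before the event
event_win = slice(200, 211)    # 11-day window around the announcement

# Market model: R_it = alpha + beta * R_mt + e_it, fitted by OLS
beta, alpha = np.polyfit(market[est_win], stock[est_win], 1)

# Abnormal return = actual return minus the market-model prediction
ar = stock[event_win] - (alpha + beta * market[event_win])
car = ar.sum()                 # cumulative abnormal return over the event window

# Simple t-test of the CAR against the estimation-window residual variance
resid = stock[est_win] - (alpha + beta * market[est_win])
t_stat = car / (resid.std(ddof=2) * np.sqrt(len(ar)))
print(f"CAR = {car:.4f}, t = {t_stat:.2f}")
```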
Design of a Client-Server Model for Effective Processing and Utilization of Big Data
박대서(Dae Seo Park), 김화종(Hwa Jong Kim) 한국지능정보시스템학회 2016 지능정보연구 Vol.22 No.4
Recently, big data analysis has developed into a field of interest not only to companies and professionals but also to individuals and non-experts, and it is being used for marketing and for solving social problems by analyzing data that is already open or collected directly. In Korea, various companies and individuals are taking on big data analysis, but they struggle from the initial stage because disclosure of big data is limited and collection is difficult. System improvements for promoting big data and big data disclosure services are being pursued in various ways in Korea and abroad, mainly services that open public data such as the domestic Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are running, but it is hard to find useful data because little is actually shared. Moreover, big traffic problems can occur because the entire dataset must be downloaded and examined just to grasp the attributes of, and simple information about, the shared data. A new system for big data processing and utilization is therefore needed. First, big data pre-analysis technology is needed as a way to solve the sharing problem. Pre-analysis is a concept proposed in this paper: the data are analyzed in advance and the results are provided to users. Through pre-analysis, the usability of big data improves because users searching for data can grasp its properties and characteristics, and by sharing the summary data or sample data generated in the pre-analysis, the security problems that can arise when raw data is disclosed are avoided, enabling sharing between the data provider and the data user. Second, appropriate pre-processing results must be generated quickly, according to the disclosure level of the raw data and the network status, and delivered to users through distributed big data processing with Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when pre-processing the data a user requests, it reduces the data to a size the current network can handle before transmission, so that no big traffic occurs. This paper presents different data sizes according to the disclosure level obtained through pre-analysis; compared with the conventional approach of sharing only raw data across many systems, this method is expected to produce low traffic volumes. The paper describes how to solve the problems that occur when big data is released and used, and how to facilitate sharing and analysis. The proposed client-server model uses Spark for fast analysis and processing of user requests and consists of a Server Agent and a Client Agent, deployed on the server and client sides respectively. The Server Agent is required on the data provider's side; it performs the pre-analysis of the big data to generate a Data Descriptor containing information on the sample data, summary data, and raw data, performs fast and efficient pre-processing through distributed processing, and continuously monitors network traffic. The Client Agent is the agent placed on the data user's side.
It can search for big data through the Data Descriptor produced by the pre-analysis, quickly locate the desired data, and request it from the server, downloading the big data according to its disclosure level. The model separates the Server Agent, used when the data provider publishes data, from the Client Agent, used when the data is consumed. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, and construct the detailed module of
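The Data Descriptor is the heart of the proposed pre-analysis step, so a minimal sketch may help: the provider-side agent reduces a dataset to summary statistics plus a small sample, and the consumer-side agent inspects that descriptor before deciding whether to request the raw data. The field names and plain-function structure below are assumptions for illustration; the paper's actual agents run on Spark and also monitor network traffic.

```python
# A minimal sketch of pre-analysis: summarize a dataset into a "Data Descriptor"
# (summary statistics plus a small sample) that can be shared instead of raw data.
# Field names are illustrative, not the paper's exact schema.
import json
import pandas as pd

def build_data_descriptor(df: pd.DataFrame, name: str, disclosure_level: str) -> dict:
    """Pre-analyze a dataset and return a lightweight descriptor for sharing."""
    return {
        "name": name,
        "disclosure_level": disclosure_level,            # e.g. "summary_only", "sample", "raw"
        "rows": len(df),
        "columns": list(df.columns),
        "summary": json.loads(df.describe().to_json()),  # summary data (numeric columns)
        "sample": df.head(5).to_dict(orient="records"),  # sample data
    }

# Provider side (server agent): pre-analyze once, publish only the descriptor
raw = pd.DataFrame({"region": ["A", "B", "A", "C"], "visits": [120, 85, 40, 230]})
descriptor = build_data_descriptor(raw, "visitor_log_2016", "sample")

# Consumer side (client agent): decide from the descriptor whether the raw data
# is worth downloading, avoiding the big-traffic cost of fetching everything.
print(descriptor["rows"], descriptor["summary"]["visits"]["mean"])
```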
Handling Endogeneity Challenge in Big Astronomical Data
Sumedha Arora, PankajDeep Kaur 보안공학연구지원센터 2015 International Journal of Signal Processing, Image Vol.8 No.7
Using big data in statistically valid ways poses a great challenge. The main misconception in using big data is the belief that sheer volume can compensate for any other deficiency in the data, so standards and transparency are needed when big data is used in survey research. Surveys based on big data tend to introduce additional complications and complexity, such as important variables correlating with erroneous data; this correlation of regressors with residual noise causes the endogeneity problem. It must be addressed, since the main aim of research is to answer questions, which can only be done when the data is analyzed fully and correctly and all available information is used. This paper addresses endogeneity specifically for astronomical data sets and provides solutions and techniques for handling endogeneity in such data. Finally, it couples big data, i.e., whole-sky data, with the time domain.
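The abstract does not spell out its remedies, but the standard textbook response to a regressor correlated with the error term is instrumental variables estimation; the sketch below shows two-stage least squares on synthetic data as a generic illustration of that idea, not as the paper's specific technique for astronomical data.

```python
# A minimal two-stage least squares (2SLS) sketch of the generic endogeneity remedy:
# when a regressor x is correlated with the error term, replace it with its
# projection on an instrument z that is correlated with x but not with the error.
# Synthetic data; not the paper's astronomical dataset.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
z = rng.normal(size=n)                         # instrument
u = rng.normal(size=n)                         # unobserved confounder
x = 0.8 * z + 0.6 * u + rng.normal(size=n)     # endogenous regressor (depends on u)
y = 2.0 * x + 1.5 * u + rng.normal(size=n)     # outcome (true effect of x is 2.0)

def ols_slope(a, b):
    """Slope of b regressed on a (with intercept)."""
    A = np.column_stack([np.ones_like(a), a])
    return np.linalg.lstsq(A, b, rcond=None)[0][1]

print("naive OLS estimate:", ols_slope(x, y))  # biased upward by the confounder

# Stage 1: project x on the instrument; Stage 2: regress y on the fitted values
x_hat = ols_slope(z, x) * z
print("2SLS estimate:", ols_slope(x_hat, y))   # close to the true value 2.0
```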
An Analysis of Domestic Thesis Research Trends in Landscape Design Based on Big Data
박혜경,이재호 한국공간디자인학회 2023 한국공간디자인학회논문집 Vol.18 No.7
(Background and Purpose) Research based on big data is being actively conducted in various fields such as cities, architecture, landscapes, and design. The scope of big data use is gradually expanding and the number of cases using it in the landscape design field is steadily increasing, but trend analyses of this work are still scarce. This study therefore surveys and analyzes the trends of studies that use big data in order to identify the types of big data and the analysis techniques used according to research characteristics and targets in the landscape design field. The aim is to provide a foundation so that the most used big data analysis techniques, by type and by field or target of research, can be incorporated into the landscape design process, and to contribute insights for proposing new research and for follow-up studies. (Method) This study conducted a first and a second survey, with restricted search categories, of domestic master's and doctoral dissertations related to design that use big data. In the first survey, seven terms related to big data and design (design, landscape, city, architecture, product, vision, design + landscape) were combined to search for papers, the results were narrowed to the three terms central to this study (city, landscape, design), and the research fields and applied big data analysis techniques of these studies were identified. In the second survey, "methodology" and "process" were added and recombined as subject terms to extract research applied to design, building on the first survey. In this process a total of 47 papers were identified, the final four were selected, and their contents were analyzed. (Results) The surveys confirmed that interest in, and use of, big data research is continuously increasing across "landscape", "design", and "city". Text mining was the most basic method of big data analysis, and particular phenomena were examined from multiple angles by using text mining together with, or alongside, additional techniques depending on the topic. (Conclusions) Research in the landscape design field using big data analysis techniques is expected to continue. In particular, the use of opinion mining (sentiment analysis) is on the rise in landscape design-related fields as a way of addressing user-centered problems, and there is a need to revitalize research that can discover new formative qualities and aesthetics through emotion. In addition, if guidelines and processes are developed by applying and combining various big data techniques, the level of the related fields is expected to rise, for example by minimizing errors in carrying out tasks and securing quality above a certain level.
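As a small illustration of the opinion mining (sentiment analysis) the conclusions point to, the sketch below scores toy visitor reviews of a landscape site with a hand-made lexicon; the word lists, reviews, and scoring rule are placeholders, far simpler than the techniques used in the surveyed theses.

```python
# A minimal lexicon-based opinion mining sketch on toy landscape-site reviews;
# the word lists and reviews are illustrative placeholders.
POSITIVE = {"beautiful", "peaceful", "clean", "pleasant", "accessible"}
NEGATIVE = {"crowded", "noisy", "dirty", "dark", "unsafe"}

reviews = [
    "the riverside park is beautiful and peaceful at sunset",
    "parking area felt unsafe and the paths were dark and dirty",
    "clean, accessible walkway but quite crowded on weekends",
]

def score(text: str) -> int:
    """Net sentiment: count of positive words minus count of negative words."""
    words = set(text.lower().replace(",", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

for r in reviews:
    print(score(r), "->", r)
```

Aggregating such scores by place or design element is what surfaces the user-centered problems (e.g. lighting, crowding) that the reviewed studies try to address.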
황홍섭(Hong-Seop Hwang) 한국사회과교육연구학회 2016 사회과교육 Vol.55 No.3
The purpose of this study is to explore ways to build big data infrastructure and to use big data in social studies education. To this end, the study first reviews the concepts, processing technologies, and techniques associated with big data, then examines how to build a big data infrastructure as a prerequisite for its use, and finally proposes ways of using it. The results propose two approaches to building big data infrastructure: first, establishing an educational big data infrastructure together with an educational management and analysis system; and second, a flipped classroom model supported by a learning analytics system. Three ways of using big data are proposed: first, systematic development of the social studies curriculum and selection of content by building a social studies big data infrastructure; second, examining the suitability of pedagogical content knowledge (PCK) construction in social studies through big data analysis and reconstructing lessons accordingly; and third, elaborating PCK by actively combining knowledge of content and of teaching methods while using web-based and web-GIS-based big data. In addition, new courses are needed to train big data educators who will actively use these technologies in future social studies classes.