RISS (Academic Research Information Service)

      • A Study on Interactive Multimedia Music Production through Research on Indian Music: Focusing on the Multimedia Music Work <Drawing Down the Moon>

        강현우, Dongguk University, 2017, domestic master's thesis


        This study explores the composition of a multimedia work based on Indian music, using elements such as a real-time reverse effect applied to guitar and vocals, projection mapping, and interactive sound visualization. Because Indian music places a strong religious emphasis on the celebration of the gods, much of it is mythical in content. The phrase “Drawing Down the Moon” refers to a ritual in which priests summon the power of the moon through invocation, commonly known as an “Esbat,” which follows the phases of the moon. The message the work seeks to convey is that these phases, in which the moon changes shape before returning to its original form, reflect the ever-changing nature of the universe itself. Using the real-time reverse effect, the original pitch of the vocals could be maintained, while certain consonant sounds (like spitting) from the vocals gave a different character to the overall texture. Applied to the guitar, the same reverse effect produced subtle pitch changes, whose overtones were maximized in order to emulate the timbre of the sitar, an instrument commonly used in India. The real-time reverse effect was built on the Fast Fourier Transform (FFT), a useful tool for distinguishing fine differences between musical intervals. In this piece, the Max/MSP object pfft~, designed to simplify spectral audio processing, was used to apply the FFT. Reversing a musical sound requires reversing the recording itself; but if the raw recording is reversed, the temporal order of the pitches is inverted as well. To produce a reverse effect in which the temporally ordered pitches remain constant, the FFT is of the utmost importance. For the live performance, this sound-effect patch was run inside Live 9 through Max for Live. For sound visualization, objects were installed and used as projection-mapping surfaces.
        Motion graphics of the moon, matched to the rhythm of the tabla, were mapped onto the installed objects. Following the volume of the vocals and guitar, interactive sound visualization was produced by communication between programs over Open Sound Control (OSC), a protocol for networking computers, instruments, and other multimedia tools. Using OSC, a patch was created through which images changed according to the performer's volume; these images were then mapped onto a scrim drop. Because this method is largely reliable, with little latency, the images moved intuitively, matched to the sound of the performance. The most important aspect in creating a successful multimedia piece is the method of direction: for the direction to be harmonious, the content as well as the visual and auditory elements must feel natural to the audience. Going forward, for multimedia works to continue to stand out, continuous research into both technological development and direction is needed.
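        The thesis includes no code, but the reversal idea above can be sketched: instead of reversing the raw samples (which inverts the temporal order of the pitches), the signal is cut into short overlapping frames whose order is reversed while each frame keeps its internal, forward sample order. A minimal stdlib-only Python sketch of this frame-order reversal (the spectral pfft~ processing itself is omitted; the function name and parameters are illustrative, not from the thesis):

```python
import math

def frame_reverse(signal, frame_size=256, hop=128):
    """Reverse a signal's timeline frame-by-frame.

    Each frame keeps its internal (forward) sample order, so local
    pitch content is preserved while the global time order runs
    backwards -- the core idea behind the FFT-frame reversal
    described above, minus the spectral-domain processing.
    """
    # Hann window for smooth overlap-add (sums to 1 at 50% overlap)
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame_size)
              for n in range(frame_size)]
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frames.append([signal[start + n] * window[n]
                       for n in range(frame_size)])
    frames.reverse()               # reverse frame ORDER, not samples
    out = [0.0] * len(signal)
    for i, frame in enumerate(frames):
        start = i * hop
        for n in range(frame_size):
            if start + n < len(out):
                out[start + n] += frame[n]
    return out
```

        In the real patch this reordering happens on FFT frames inside pfft~, which is what lets the effect run in real time on a live input.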

      • A Study on the Production of a Multimedia Music Work Using Real-Time Sound Processing of the Bass Guitar: Focusing on the Multimedia Music Work <Overcome>

        이도희, Dongguk University Graduate School of Imaging (영상대학원), 2022, domestic master's thesis


        Centering on the work <Overcome>, this study focuses on producing effects that create new timbres through real-time sound processing of the bass guitar, and on producing video that responds to that processing, as a study of multimedia music production. Multimedia music is a combination of technology and art: art can be expressed through the artist's individual aesthetic sense and reconstructed in various forms based on aesthetic value, while technology creates new media genres as a means of expressing such art over time. As it becomes possible to visualize sounds that previously could not be visualized, multimedia is constantly evolving beyond the traditional practice of composing music for a given video or scene, toward forms such as interactive media art, in which music and images change in response to the surrounding environment. This thesis studies the production of a multimedia music work using real-time sound processing of a bass guitar performance. By producing video that interacts with real-time sound processing, which only computer music makes possible, it aims to narrow the gap between computer music and the general public and to create intuitive, interactive multimedia music that is easier for audiences to appreciate. For the real-time sound processing, Max for Live, part of Max/MSP, was used together with pre-produced tape music, and effects such as reverb, chorus, delay, and a phase vocoder were applied in series to granular synthesis to create sounds that could not occur naturally on the bass guitar. This allowed new acoustic and artistic expression beyond the physical limitations of the instrument. The visuals were also generated in real time through Arena, whose video parameters and image effects could be changed to visualize what the music was meant to express and to help the audience understand the work.
        Using data transmitted over the OSC protocol, Arena could also create intuitive interactions between the music and the video, responding in real time with images and effects that the audience could feel, which helped them understand the work. The multimedia music work <Overcome> used the attack and volume data of the bass guitar performance, which could be applied to the interactions and visualizations without strain. However, music offers a wide variety of data beyond volume and attack; future studies of multimedia music and of methods for visualizing music can use more of this data to organize works in more individual and colorful ways and to deliver the messages contained in them in various forms.
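        The OSC traffic described above can be illustrated with a minimal message encoder. This is a hedged, stdlib-only sketch of the OSC 1.0 wire format (address and type-tag strings null-terminated and padded to 4-byte boundaries, big-endian float32 arguments); in practice a library such as python-osc would handle this, and the /volume address is an invented example:

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message carrying float32 arguments,
    e.g. a bass-guitar volume value sent to a visual program."""
    def pad(b):
        # OSC strings are null-terminated, padded to 4-byte blocks
        b += b"\x00"
        while len(b) % 4:
            b += b"\x00"
        return b
    msg = pad(address.encode("ascii"))                     # address pattern
    msg += pad(("," + "f" * len(floats)).encode("ascii"))  # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)                        # big-endian float32
    return msg

# Example: encode the current volume for a hypothetical /volume address
packet = osc_message("/volume", 0.5)
```

        Such a packet would then be sent over UDP to the port on which the visual application listens.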

      • A Study on Multimedia Music Creation through Real-Time Control: Focusing on the Multimedia Music Work "~사이(間)" (Between)

        이희준, Dongguk University Graduate School of Visual Information and Communications (영상정보통신대학원), 1998, domestic master's thesis


        I analyzed a multimedia work based on real-time expression and the character of its sound, and studied the aims of multimedia art realized through real-time control. First, I explained the structure of the work and described the modes of expression, the sound, and the images used in it. In short, the character of a multimedia work under real-time control is as follows. First, the subject matter of modern music moves with its time, and the expression of multimedia art through real-time control can match this essential character of modern music. Second, the mode of expression of a work can vary when a computer is used; this can serve as a device for making a deeper impression on a larger audience in a limited space. Third, the synesthetic combination of music and image must be developed continuously, because it arouses people's curiosity and interest and can give rise to new artistic movements. The development of the computer enables diverse methods of musical expression and can realize an unlimited imagination, so audiences can find varied meanings and values as they approach progressive modern music. Consequently, this paper should help in grasping new methods of presenting multimedia works through real-time control and in operating them creatively.

      • A Study on Interactive Multimedia Music Production through Real-Time Processing of Drum Sounds: Focusing on the Multimedia Music Work <The Palpitation>

        이승지, Dongguk University, 2017, domestic master's thesis


        <The Palpitation> is a work of music that expresses the suspense of a heartbeat through the sound of a drum. The work expands the artistic role of the buk, a traditional Korean drum, by bringing it into multimedia music, and creates a kind of multimedia music unlike existing works through the combination of a traditional instrument and computer music. Although there is only one drummer, using the drum as an interface produces the effect of many players, replaying various processed sounds each time the drum is struck. In traditional Korean music the drum mainly keeps the rhythm, which makes fully improvised performance difficult; to compensate, the system lets the drummer add rhythmic improvisation at any time, unlike in existing traditional performance. The video was projected onto a net screen installed in front of the player and onto a triangular structure, the projection onto the structure being driven in real time by the volume value extracted from the drum. Although the drum itself is round, stacked triangular shapes were chosen, because they express the big, powerful sound more forcefully than the soft impression of circular shapes. With the net screen installed in front of the player, the projected video created a dreamlike impression, as if the drum were being played in smoke. In this work, the drum sound and its volume value were combined with computer music using these techniques: the drum triggers differently processed sounds according to its volume each time it is struck, so that a single drum can sound like several, and the big, broad sound of the drum is made visible to the audience through projection-mapped visualizations.
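        The volume value the patch extracts from the drum can be sketched as a simple per-block RMS measurement. A stdlib-only Python illustration (the function name and block size are assumptions, not from the thesis):

```python
import math

def block_rms(samples, block_size=512):
    """Return one volume (RMS) value per block of audio samples --
    the kind of amplitude data mapped to the projected visuals."""
    values = []
    for start in range(0, len(samples) - block_size + 1, block_size):
        block = samples[start:start + block_size]
        # root of the mean of the squared samples in this block
        values.append(math.sqrt(sum(s * s for s in block) / block_size))
    return values
```

        In the actual work this value is computed inside Max/MSP and routed to the projection, but the measurement itself is the same.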

      • A Study on a Real-Time Interactive Audio-Visual System for Multimedia Music Production after Postmodernism: Focusing on the Multimedia Music Work <Anthropocene Desolation II>

        김연주, Dongguk University Graduate School of Imaging (영상대학원), 2023, domestic master's thesis


        An audio-visual work of art requires technical achievement, aesthetics, and the very ideas of the person who wishes to create it. This study explores the technical side of the work along with the ideas and cultures behind it. Computer music has earned remarkable achievements as today's computer technology prevails and expands into other technologies and cultures. As an art form, computer music is recognized as serious art that can be studied in an academic context and has been discussed for decades in the field of academic music; yet because of its experimental, ‘artistic’ nature, it alienates itself from the general public. Meanwhile, electronic music constantly expands into and absorbs minority scenes and even mass culture. Strictly speaking, computer music belongs to the context and history of electronic music, and electronic music in turn depends on the achievements of computer technology. To clarify this point, however, this study defines computer music as the counterpart of today's electronic music, since there seem to be ‘objective’ but unseen differences between the two concepts; that is, an invisible relation of power exists between computer music and today's electronic music. This study brings today's electronic music into the academic field of computer music by way of audio visualization. Audio-visual art describes music in visual form. As 3D graphics has advanced with the development of computer technology, audio-visual art has become capable of real-time interaction and more detailed visual expression. Unlike a music video, audio-visual work requires sophisticated data filtering and data visualization techniques. This study proposes the FFT as an advanced data-filtering method for audio-visual work. FFT stands for Fast Fourier Transform, an algorithm introduced by James Cooley and John Tukey in 1965.
        The Fourier transform converts a signal from the time domain to the frequency domain and vice versa. The quality of data filtering depends on an elaborate data-analysis system design, and the FFT can be an optimized data-filtering solution for this task. The study constructs two key systems in Max/MSP and Jitter: 1) an audio data-filtering system and 2) an audio visualization system. To establish a data-controllable audio system, two different data-filtering paths were used: one transforms the signal data into absolute values, the other into frequency data. For the visual expression, 3D models generated in Jitter are used. Sound design also plays a large part in this study. The multimedia piece for this study, <Anthropocene Desolation II>, has its musical background in ‘leftfield bass’ and ‘breakcore’. These genres are hard to define by musical structure, but they generally sound ‘violent’ and ‘metallic’; therefore, to make the piece sound more like them, this study employs comb filtering, wavefolding, and granular synthesis, techniques known for their brutal, distinctive sonic character. The study aims to build a visualization system that renders subculture-based electronic music in three-dimensional space in real time, by detecting specific audio data with an FFT-based analysis system and combining the processed data with 3D models.

      • A Study on Real-Time Interactive Multimedia Music Production Using Bumpae (Buddhist Ritual Chant): Focusing on the Multimedia Music Work <Aruna>

        강신애, Dongguk University, 2016, domestic master's thesis


        Bumpae is a kind of Buddhist music performed when monks hold ancestral rites for the Buddha; it includes dancing, singing, and the playing of instruments. The songs of Bumpae have a unique feature in which vowels connect each letter of the scriptures, so the vowels take up the largest part of the songs. In this study, the song ‘Bokcheongge’ was selected. The study focuses on this feature of Bumpae and develops a sound effector and a video system for it. The sound effector produces sound with the same vocal quality as the original by using real-time formant analysis and resynthesis; it can therefore apply effects to the voice, such as delays, that could not be produced naturally. The video system was built to convey the meaning of the Bumpae lyrics, using sound visualization techniques. All colors and shapes were selected on the basis of the Buddhist world view, and variations in pitch and amplitude affect the shape, size, and color of the objects in the system. The combination of the sound effector and the video system creates a synergy effect and leaves a new impression on the audience.

      • A Study on the Production of a Multimedia Work Using Real-Time Sound Processing of a Piano Performance: Focusing on the Multimedia Music Work <Matter Flow>

        최아영, Dongguk University Graduate School, 2019, domestic master's thesis


        <Matter Flow> is a multimedia music work made using real-time sound processing of a piano performance, combining two media: audio and video. It is an artwork that studies sound visualization: music and video interact through real-time volume values taken from the piano performance. The work aims to communicate deeply and artistically with the audience by conveying visual and aural experience at the same time. The piece consists of a music system and a video system. Max was used for the sound processing and to create the control system for both music and video; in addition, Processing and After Effects were used to create the videos, and Arena was used to arrange and mix them. The video scenarios were constructed to fit the flow of the music. The system was meant to be simple enough for one person to operate alone, which took considerable effort because it was initially very complicated; unnecessary parts were therefore removed and other parts automated. The work is built on the principle that the music interacts with the images through real-time volume values from the piano performance, though other kinds of data from the instrument deserve study as well. This research implements a multimedia music work that combines music and video, controlling the imaging system with sound input data: the volume, intensity, and continuous sound length of the piano performance. Many parts of the system had to be controlled, but the simplification made it possible for one person to operate every function.
        The interaction and harmony of the music and videos made the work more intuitive and lucid for the audience to enjoy. The remaining challenge is to study the various kinds of data that can be extracted from sound and to create a variety of visualizations that touch and impress viewers.
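        The ‘continuous sound length’ data mentioned above can be estimated from the same volume stream the work already extracts: count consecutive analysis blocks above a noise threshold and convert the run length to seconds. A stdlib-only Python sketch (the function name, threshold, and block duration are assumptions for illustration):

```python
def note_lengths(volume_values, threshold=0.05, block_sec=0.01):
    """Estimate continuous-sound lengths from per-block volume values:
    a note lasts as long as the volume stays above the threshold."""
    lengths, run = [], 0
    for v in volume_values:
        if v >= threshold:
            run += 1            # extend the current note
        elif run:
            lengths.append(run * block_sec)  # note ended
            run = 0
    if run:  # close a note still sounding at the end of the stream
        lengths.append(run * block_sec)
    return lengths
```

        The resulting durations could then drive a visual parameter alongside the raw volume, as the work's imaging system does.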

      • A Study on the Production of a Multimedia Music Work Using Real-Time Sound Processing of the Udu Drum: Focusing on the Multimedia Music Work <Cosmos>

        김진우, Dongguk University, 2021, domestic master's thesis


        This thesis studies a multimedia work produced with the aim of fusing two different kinds of art. Real-time sound processing was applied to a performance on the udu drum, an African percussion instrument, and video that responds to the music in real time was produced, so that the interaction takes place through two channels: the auditory and the visual.

      • A Study on Interactive Multimedia Music Production through Real-Time Percussion Performance: Focusing on the Multimedia Music Work <振天> (Jin-Cheon)

        이미보, Dongguk University Graduate School of Imaging (영상대학원), 2007, domestic master's thesis


        The work <Jin-Cheon> is performed with tape music and live percussion, accompanied by images. This multimedia music work realizes interaction by relating the music and the images in real time. Volume values are received from the percussion instruments: while the tape music and percussion play, the music and the visual images are linked to show the interaction between the real-time percussion performance and its visual reflection. Two microphones are set in front of four percussion instruments, and the volume values are obtained through Max/MSP. Each received volume value controls a visual image through Jitter; Max/MSP and Jitter are the central programs of this interactive system. The images are controlled in real time by the volume values from the tape music and the percussion performance. When a pre-set value is reached, an image is output automatically: Max/MSP receives the volume values within a defined range and sends them to Jitter to drive the imagery. Through <Jin-Cheon>, in which images are controlled in real time before the audience, this thesis points to a new genre called multimedia music.
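        The triggering rule described above (‘an image is output automatically when a pre-set value is reached’) can be sketched as a rising-edge threshold detector in Python; the names and values are illustrative, not from the thesis:

```python
def threshold_trigger(volume_values, threshold):
    """Emit True once each time the volume rises to the pre-set
    threshold; re-arm only after the level falls back below it,
    so a sustained loud passage fires a single image change."""
    triggers, armed = [], True
    for v in volume_values:
        if armed and v >= threshold:
            triggers.append(True)   # crossing detected: output image
            armed = False
        else:
            triggers.append(False)
            if v < threshold:
                armed = True        # level dropped: re-arm
    return triggers
```

        In the work itself, the equivalent comparison happens in Max/MSP, with the trigger routed to Jitter to switch the projected image.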

      • A Study on Multimedia Music Production Using Max/MSP and Generative Art: Focusing on the Multimedia Music Work <Trans-Human>

        라지웅, Dongguk University, 2018, domestic master's thesis


        The purpose of this study is to fuse two art genres and ultimately broaden the scope of music. To that end, the music was composed using glitch noise, a sound unfamiliar to the public, and computer graphics that change shape and color with the volume of the music were added. The thesis consists of two parts: a sound-design system for glitch noise and a real-time image control system. There are various ways to make glitch sounds artificially, such as recording a broken speaker with a microphone, or using excessive amplitude modulation (AM) or ring modulation (RM). In this study, however, most of the glitch sounds used in <Trans-Human> were made in Sound Forge, an audio editing program by Magix; further glitch sounds were made through sound processing in Max/MSP. The generative computer graphics were made with Processing, which is based on Java. To change the graphics' shapes and colors, the oscP5 library was used; its role is crucial for communication between Max for Live and Processing. Following the volume of the music, the graphics were changed by communication between the programs over OSC. The real-time-controlled computer graphics offset the awkward atmosphere the audience might feel from the noise music, added a playful element to a multimedia performance that could otherwise be boring, and held the audience's interest. Further research on how to creatively combine music with new technologies is still needed.
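        The volume-driven color changes described above amount to an interpolation between two color states on the Processing side. A hypothetical Python sketch of such a mapping (the palette values and function name are invented for illustration):

```python
def volume_to_color(volume, quiet=(20, 20, 60), loud=(255, 80, 0)):
    """Linearly interpolate an RGB color between a 'quiet' and a
    'loud' palette entry from a normalized volume in [0, 1]."""
    v = max(0.0, min(1.0, volume))  # clamp out-of-range volume values
    return tuple(round(q + (l - q) * v) for q, l in zip(quiet, loud))
```

        In the actual work the volume arrives over OSC and the color is applied to the generative shapes inside Processing; the interpolation itself is the same idea.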
