Wednesday, May 16, 2007

3D Models

Using 3D Studio Max as part of the Workshop module, we created a number of potential device and environment designs.
devices
3D (Metal)

and 2D (Paper)


performance environments
Semi-dome - Ideally we would like the user to feel surrounded by the performance.


Enclosed Space - Probably a more realistic and practical setup than the semi-dome.



3 Screens - Another option


We designed a menu system in Director to view animations of these models. http://richie.idc.ul.ie/~hackettc/Thesis/3DMenu.exe

Monday, May 14, 2007

Lo-Fi Director Prototype


Here is a simple lo-fi prototype we created in Director as a proof of design for the Workshop module. Click on the image to open the Director .exe file. It consists of 25 synthetic marimba sounds taken from Logic and 25 gifs created in Photoshop. We mapped images to the rising notes on the 3rd and 4th octaves of the marimba. As the notes get higher, the colour, sharpness, size, speed, and number of circles all change.

http://richie.idc.ul.ie/~hackettc/Thesis/Prototype.exe
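The actual mapping lives inside the Director movie, but the logic is simple enough to sketch. Here is a rough Python version for illustration; the specific parameter ranges (hue, radius, speed, circle count) are assumptions, not the values used in the prototype:

```python
# Sketch of the note-to-visuals mapping idea from the Director prototype.
# The low/high notes correspond to the 3rd and 4th octaves used here
# (MIDI C3..C5); all output ranges are illustrative guesses.

def note_to_visuals(midi_note, low=48, high=72):
    """Map a MIDI note number to circle parameters: as pitch rises,
    the circles get brighter in hue, smaller, faster, and more numerous."""
    t = (midi_note - low) / (high - low)  # 0.0 at lowest note, 1.0 at highest
    return {
        "hue": 240 * (1 - t),      # blue for low notes through to red for high
        "radius": 100 - 70 * t,    # circles shrink as pitch rises
        "speed": 1 + 4 * t,        # animation speeds up
        "count": int(5 + 20 * t),  # more circles for higher notes
    }
```

Each of the 25 notes then simply looks up its parameter set when triggered, rather than storing 25 hand-made animations.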

Presentation May 4th

Here's our Thesis presentation for CS6022 Principles of Interactive Media. The main points it presents include Origins of Design Concept, Background Reading, Empirical Work, Planned Objective, Technological Requirements, Planned Schedule of Activities from now until the end, and Major Concerns.
Thesis Presentation


Liam, Annette and Mikael were quite happy with our rationale.
Liam and Annette seemed to get lost in the technical detail, and emphasised the need to look at usability issues and scenarios: who's the user and how will they interact with the device?
Mikael advised us to look up what I think is a PD plugin called Speer - must get back onto him about this. He also recommended we contact Øyvind Brandtsegg.

Sunday, May 6, 2007

Ryuichi Sakamoto - MPI X IPM - "Music Plays Images X Images Play Music"(Bibo no Aozora (Live '97) )

Ryuichi Sakamoto (piano), Toshio Iwai (images), at The Yebisu Garden Hall, 22nd December 1997.

Schedule for Summer

Here is the schedule for the summer. Originally the evaluation had only one small block towards the end of August. As the design is an iterative process, we are constantly evaluating, so the diagram below now contains continuous evaluation stages.

Our next steps are to speak to musicians, music academics, classmates, etc. to gather info such as what is lacking in traditional musical instruments (the marimba), features that could be added through technology, and ideas on a sound-to-visuals model.






Tuesday, May 1, 2007

Projectors

According to Mikael, it's probably best to use three projectors rather than trying to split one projection (loss of quality). The choice of projectors comes down to DLP (Digital Light Processing) or LCD (Liquid Crystal Display) - DLP is cheaper for equal power output, with better blacks but worse with moving images. LCD has better resolution but is more expensive, and the blacks are not as deep.

Screens

Our original idea was to have three rectangular screens located in front of the performance. Currently we are thinking of a semi-dome shaped enclosure. 'Audio-Visual Art & VJ Culture' (Faulkner, M.) warns not to fall into the trap of always using traditional 4:3 or 16:9 screens in a front-of-the-classroom arrangement. If you build your own screens you can experiment with polygons or other shapes, and even create 360-degree immersive environments. Almost any flat reflective material can be used for the screen, but screen fabrics will normally give you the best reflectance and good viewing angles.

NIME(New Interfaces for Musical Expression)

The International Conference on New Interfaces for Musical Expression is currently in its 5th year. Researchers and musicians from all over the world gather to share their knowledge and late-breaking work on new musical interface design. The conference started out as a workshop at the Conference on Human Factors in Computing Systems (CHI) in 2001. Since then, international conferences have been held around the world, hosted by groups dedicated to research on New Interfaces for Musical Expression.

Website: http://www.nime.org/index.html
Definitely worth exploring in finalising our concept!

Øyvind Brandtsegg

Øyvind Brandtsegg is a composer and musician. His field of interest lies in the use of compositional techniques for improvisation. This has led to the design of custom musical instruments with computers and sensor technology. Musical knowledge or intelligence is implemented as part of the computer instrument; this is sometimes referred to as Compositionally Enabled Instruments. Sound installations have a natural place in his work, and he views an installation as an autonomous musical instrument. Øyvind has performed with the groups Krøyt and Motorpsycho throughout Europe. He has written music for interactive dance, theatre and TV, in addition to writing compositional frameworks for improvisation and composition for sound installation.

Evelyn Glennie

Dame Evelyn Elizabeth Ann Glennie, DBE (born July 19, 1965 in Aberdeen) is a Scottish virtuoso percussionist. She was the first full-time solo professional percussionist in 20th-century Western society. She will be coming to the Gleneagle Hotel, Killarney for conferences on the 12th, 13th and 14th of July.

Website: http://www.evelyn.co.uk

Gary Burton

Gary Burton (b. Anderson, IN, January 23, 1943) is an American jazz vibraphonist and composer who, perhaps unusually, credits jazz pianist Bill Evans as a main inspiration for his approach to the vibraphone.

Check his website at http://www.garyburton.com/

John Cage

John Milton Cage Jr. (September 5, 1912 - August 12, 1992) was an American composer, philosopher, writer and printmaker. He is perhaps best known for his 1952 composition 4'33", whose three movements are performed without playing a single note.
Cage was an early composer of what he called "chance music"—referred to by others as aleatoric music—where some elements are left to be decided by chance; he is also well known for his non-standard use of musical instruments and his pioneering exploration of electronic music. His works were sometimes controversial, but he is generally regarded as one of the most important composers of his era, especially in his raising questions about the definition of music.
John Cage put his Zen Buddhist beliefs into practice through music. He described his music as "purposeless play", but "this play is an affirmation of life—not an attempt to bring order out of chaos, nor to suggest improvements in creation, but simply to wake up to the very life we are living, which is so excellent once one gets one’s mind and desires out the way and lets it act of its own accord." Hence comes his favorite saying nichi nichi kore kōnichi or, every day is a good day.
Cage was also an avid amateur mycologist and mushroom collector: he co-founded the New York Mycological Society with three friends and his mycology collection is presently housed by the University of California, Santa Cruz. He was a long-term collaborator and romantic partner of choreographer Merce Cunningham.
Cage is also known as the inventor of the mesostic, a type of poem.

Acousmatic Music

Acousmatic music is a specialised sub-set of electroacoustic music. It is created using non-acoustic technology, exists only in recorded form in a fixed medium, and is composed specifically to be heard over loudspeakers. The musical material is not restricted to the sounds of musical instruments or voices, nor to elements traditionally thought of as 'musical' (melody, harmony, metrical rhythms, and so on), but rather admits any sound, acoustic or synthetic, and any way of combining or juxtaposing sounds, as potentially musical.
The term acousmatique was first used by the French composer Pierre Schaeffer, in his book Traité des Objets Musicaux (1966). It is said to be derived from akusmatikoi, the outer circle of Pythagoras' disciples who only heard their teacher speaking from behind a veil. In a similar way, one hears acousmatic music from behind the 'veil' of loudspeakers, without seeing the source of the sound.
Acousmatic composers use this invisibility of sound sources as a positive aspect of the creative process, in one of two ways. The first is to separate the listener from the visual and physical context of the sounds being used, in order to permit a more concentrated and abstract form of listening unencumbered by the real-world associations or 'meaning' of the sounds. This form of listening is known as reduced listening (Schaeffer), and it allows both acoustic and synthetic sounds to be used to create an abstract musical discourse the focus of which is the detail of individual sounds, and the evolution and interaction of these sounds. The second approach is to deliberately evoke real-world associations by using identifiable sounds (real world objects, voices, environments) to create mental images in sound.
Although these two contrasting approaches are in some ways diametrically opposed, they may nevertheless be combined in order to exploit the tension that exists between them.

Max Mathews - Radio Baton

Max Vernon Mathews was a pioneer in the world of computer music. He studied electrical engineering at the California Institute of Technology and the Massachusetts Institute of Technology, receiving a Sc.D. in 1954. Working at Bell Labs, Mathews wrote MUSIC, the first widely-used program for sound generation, in 1957. For the rest of the century, he continued as a leader in digital audio research, synthesis, and human-computer interaction as it pertains to music performance.
"Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language."
The software package Max/MSP is named after him.

Keyboard Expression, Velocity / Pressure Sensitivity, Aftertouch

Keyboard expression, often shortened to expression, is the ability of the keyboard of a keyboard instrument to respond to the dynamics of the music.
For example, the piano responds exceptionally well to the force with which the keys are initially pressed; it is velocity sensitive. Several of its predecessors, such as the harpsichord, were less velocity sensitive than the piano, which is one of the key advantages of the piano.
The clavichord and some electronic keyboards also respond to the force with which a key is held down after the initial impact; they are pressure sensitive. This can be used by a skilled clavichord player to slightly correct the intonation of the notes, and/or to play with a form of vibrato known as bebung.
There is some confusion relating to the term pressure sensitive, with some using it as a synonym for velocity sensitive. To avoid this confusion, pressure sensitivity is sometimes called aftertouch.
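In MIDI terms the distinction is concrete: strike velocity, per-key aftertouch and channel-wide pressure arrive as three different message types. A minimal sketch of how raw MIDI bytes separate them (illustrative only; a real handler would also deal with running status and note-off):

```python
# Distinguish the three expression messages discussed above by the high
# nibble of the MIDI status byte: 0x9 = note-on (carries strike velocity),
# 0xA = polyphonic key pressure (per-key aftertouch), 0xD = channel
# pressure (a single value for the whole keyboard).

def describe_midi(msg):
    """Return (kind, note, value) for one raw MIDI channel message."""
    status, data = msg[0] & 0xF0, msg[1:]
    if status == 0x90:  # note-on; velocity 0 would conventionally mean note-off
        return ("velocity", data[0], data[1])
    if status == 0xA0:  # polyphonic aftertouch: note number plus pressure
        return ("poly aftertouch", data[0], data[1])
    if status == 0xD0:  # channel pressure: one data byte, no note number
        return ("channel pressure", None, data[0])
    return ("other", None, None)
```

So a keyboard advertised as "pressure sensitive" may be sending either 0xA0 or 0xD0 messages, which is exactly the ambiguity the aftertouch terminology tries to resolve.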

Pure Data

It has been argued that Pure Data is the most effective software for controlling anything from a simple sine wave to an audio device. Pure Data can also be used to trigger events between devices.
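The "simple sine wave" case in Pd is the [osc~] object, which computes audio in blocks of 64 samples by default. A rough Python sketch of what one such block contains (simplified: no phase carry-over between blocks, no wavetable lookup as Pd actually uses):

```python
import math

def sine_block(freq, sr=44100, n=64):
    """One block of a sine oscillator, roughly what Pd's [osc~] computes
    per DSP block (Pd's default block size is 64 samples)."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]
```

In Pd itself none of this is written by hand, of course; the point is just that the patching environment is doing ordinary block-based DSP under the hood.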

GEM
"GEM" stands for Graphics Environment for Multimedia and is an external (plugin) for the computer-music software PD.

Flash
Serial Proxy (an XML socket server) is a small program that runs on your Mac/PC and keeps a live connection between the serial port and Flash, allowing Flash sockets to read from or write to an Arduino.

Max/MSP Jitter
A graphical environment for music, audio and multimedia. Jitter extends the programming environment to support real-time manipulation of video, 3D graphics and other data sets within a unified processing architecture.

VVVV
vvvv is a toolkit for real time video synthesis. It is designed to facilitate the handling of large media environments with physical interfaces, real-time motion graphics, audio and video that can interact with many users simultaneously.