
openFrameworks Lectures

4. Interactive audio

Denis Perevalov perevalovds@gmail.com

See in-depth details in my book Mastering openFrameworks. The book's examples are free, see masteringof.wordpress.com

What is Digital Sound

What is sound?


Sound, in the broad sense, is elastic waves propagating longitudinally through a medium and creating mechanical vibrations in it; in the narrow sense, it is the subjective perception of these vibrations by the special sense organs of animals or humans. Like any wave, sound is characterized by amplitude and frequency. (Wikipedia)

http://blog.modernmechanix.com/mags/qf/c/PopularScience/9-1950/med_sound.jpg

Representation of sound in digital form


Real sound is captured by a microphone and then passed through analog-to-digital conversion. The digital signal is characterized by its temporal resolution, the sampling rate (the procedure is called sampling), and its amplitude resolution, the bit depth (the procedure is called quantization).

(Figure: a digitized signal, amplitude vs. time)

http://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Digital.signal.svg/567px-Digital.signal.svg.png

Sampling frequency
8,000 Hz - telephone; enough for speech.
11,025 Hz - games, samples for electronic music.
22,050 Hz - same uses as 11,025 Hz.
44,100 Hz - many synthesizers and sample libraries; Audio CD.
48,000 Hz - recording studios, live instruments and vocals; DVD.
96,000 Hz - DVD-Audio (MLP 5.1).
192,000 Hz - DVD-Audio (MLP 2.0).

Bit depth
Bit depth is the number of bits used to represent each signal sample during quantization (in our case, quantization of the amplitude).

8 bits - samples for electronic music.
12 bits - studio sound effects.
16 bits - computer games and players, samples, Audio CD.
18 bits - studio sound effects.
24 bits - live sound, vocals, DVD-Audio.
32 bits - floating-point representation, so accuracy is not lost when processing sounds; used for internal sound processing.
64 bits - floating-point audio processing as well.
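As an aside (my addition, not from the slides): internally, 16-bit integer samples are typically converted to floats in [-1, 1] and back; the scaling constants below follow one common convention.

#include <cstdint>

// Convert a 16-bit PCM sample to a float in [-1, 1) for internal processing.
float sampleToFloat(int16_t s) {
    return s / 32768.0f;
}

// Convert a float sample back to 16-bit PCM, clamping to the valid range.
int16_t floatToSample(float f) {
    if (f >  1.0f) f =  1.0f;
    if (f < -1.0f) f = -1.0f;
    return (int16_t)(f * 32767.0f);
}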

Representation of sound in memory


Example: one second of 16-bit audio at 44,100 Hz can be represented as a vector X = (x_1, x_2, ..., x_44100), where 0 <= x_i <= 2^16 - 1 = 65535. Representing sound in this way, as a vector of samples, is called PCM (Pulse Code Modulation). It is the most common representation. It is analogous to the pixel representation of images.
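As a concrete illustration (my addition, not from the slides), the sketch below builds one second of 16-bit PCM at 44,100 Hz as a vector of samples; the 440 Hz sine is just an arbitrary test signal.

#include <cmath>
#include <cstdint>
#include <vector>

int main() {
    const int sampleRate = 44100;            // Samples per second
    const float freq = 440.0f;               // Test tone, A of the first octave
    const float PI = 3.14159265f;
    std::vector<int16_t> pcm(sampleRate);    // One second of 16-bit PCM

    for (int i = 0; i < sampleRate; i++) {
        float t = (float)i / sampleRate;                       // Time in seconds
        pcm[i] = (int16_t)(32767 * sinf(2 * PI * freq * t));   // Sample value in [-32767, 32767]
    }
    return 0;
}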

The fundamental difference between sound and image


Images are very convenient to operate on at the level of pixels. In particular:
1. We consider two images the same if their pixel values are close.

2. We can change an image based on the values of neighboring pixels (for example, a smoothing operation).
For audio in PCM format, neither of these approaches works.

The fundamental difference between sound and image


1. A (la) of the first octave, 440.00 Hz

2. the same note, phase-shifted

3. E (mi) of the second octave, 659.26 Hz

1.

+ 3.

2. + 3. (Audacity was used to generate the sounds)

The last two sounds sound identical, yet their amplitude functions are quite different. Thus the human ear perceives the spectrum of a sound, that is, its frequency composition, not its amplitude representation.
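To make "frequency composition" concrete, here is a small sketch (my addition, not from the slides) that measures how strongly a single frequency is present in a mono PCM buffer by computing one bin of the discrete Fourier transform; two sounds that differ only in phase give nearly the same value.

#include <cmath>
#include <vector>

// Magnitude of the frequency `freq` (Hz) in a mono PCM buffer of floats in [-1, 1].
float frequencyMagnitude(const std::vector<float>& pcm, float freq, int sampleRate) {
    const float PI = 3.14159265f;
    float re = 0, im = 0;
    for (size_t i = 0; i < pcm.size(); i++) {
        float phase = 2 * PI * freq * i / sampleRate;
        re += pcm[i] * cosf(phase);
        im += pcm[i] * sinf(phase);
    }
    return sqrtf(re * re + im * im) / pcm.size();
}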

What is easy / hard to do "directly" with sound in PCM


Easy: changing and rearranging individual samples without regard to their neighbors:
- rearranging pieces of the sound,
- changing the volume of pieces,
- reversing the sound from end to beginning,
- mixing several sounds,
- mixing and swapping the stereo channels,
- simple compression,
- adding a simple echo.
Samplers, portastudios, and studio programs do this masterfully.

Hard: anything that needs to take neighboring samples into account:
- comparing two sounds for similarity,
- suppressing low and high frequencies,
- adding reverb.
This is usually done not directly in PCM but via the spectral representation of the sound (the windowed Fourier transform).
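As an illustration of one of the "easy" operations, here is a minimal sketch (my addition; the 0.3-second delay and 0.5 feedback are arbitrary choices) that adds a simple echo directly to a PCM buffer of float samples.

#include <vector>

// Add a simple echo directly in PCM: each sample is mixed with
// a delayed, attenuated copy of the signal.
void addEcho(std::vector<float>& pcm, int sampleRate) {
    const int delay = (int)(0.3f * sampleRate);   // 0.3-second delay (assumed)
    const float feedback = 0.5f;                  // Echo attenuation (assumed)
    for (size_t i = delay; i < pcm.size(); i++) {
        pcm[i] += feedback * pcm[i - delay];      // May need normalization to avoid clipping
    }
}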

Storage formats of sound


WAV: wav = header + bytes of PCM. Stores sound without quality loss (image analog: BMP).
MP3: lossy; well suited for storing music (image analog: JPG).
AMR: lossy; suited for storing speech; used in mobile telephony (2011) (image analog: PNG).
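To show literally that "wav = header + bytes of PCM", below is a hedged sketch (my addition) of writing a minimal mono 16-bit WAV file; it follows the standard 44-byte RIFF/WAVE header layout and assumes a little-endian machine.

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write a minimal mono 16-bit WAV file: a 44-byte RIFF/WAVE header followed by raw PCM.
void writeWav(const std::string& path, const std::vector<int16_t>& pcm, uint32_t sampleRate) {
    std::ofstream f(path, std::ios::binary);
    uint32_t dataSize = pcm.size() * sizeof(int16_t);
    uint32_t riffSize = 36 + dataSize;
    uint16_t channels = 1, bitsPerSample = 16, audioFormat = 1;   // 1 = uncompressed PCM
    uint32_t byteRate = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t fmtSize = 16;

    f.write("RIFF", 4); f.write((char*)&riffSize, 4); f.write("WAVE", 4);
    f.write("fmt ", 4); f.write((char*)&fmtSize, 4);
    f.write((char*)&audioFormat, 2); f.write((char*)&channels, 2);
    f.write((char*)&sampleRate, 4); f.write((char*)&byteRate, 4);
    f.write((char*)&blockAlign, 2); f.write((char*)&bitsPerSample, 2);
    f.write("data", 4); f.write((char*)&dataSize, 4);
    f.write((const char*)pcm.data(), dataSize);      // The PCM bytes themselves
}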

Ways to generate digital audio



That is, how to build the PCM representation of a sound or piece of music:
1. Sampling. Used for producing most music. Devices: samplers.
2. (Subtractive) synthesis. Used mainly for modern electronic music. Devices: keyboard synthesizers.
3. FM synthesis.
4. Additive synthesis.
5. Granular synthesis.
6. S&S (Sample & Synthesis): sampling, analysis, and subsequent synthesis; today one of the best technologies for reproducing "live" instruments.

Sampling
Recording: "Live Sound" - a microphone - ADC - PCM-format. Playback: PCM-format - DAC - speaker.

Additional options: you can change the playback speed, then increase the tone and speed of the sample. Modern algorithms also enable you to change the tone of the sample without changing its speed, and vice versa.
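The simplest way to change a sample's playback speed (and with it the pitch) is to resample the PCM buffer; here is a small sketch of that idea (my addition, using plain linear interpolation), not how any particular sampler implements it.

#include <vector>

// Naive speed change by resampling: speed = 2.0 plays twice as fast and an octave higher.
std::vector<float> changeSpeed(const std::vector<float>& pcm, float speed) {
    std::vector<float> out;
    for (float pos = 0; pos + 1 < pcm.size(); pos += speed) {
        int i = (int)pos;
        float frac = pos - i;
        out.push_back(pcm[i] * (1 - frac) + pcm[i + 1] * frac);   // Linear interpolation
    }
    return out;
}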

Sampler Akai MPC1000

http://josephdbferguson.com/uploads/akai_mpc1000.jpg

(Subtractive) Synthesis
In pre-computer times: a few simple waveforms (square, sine, triangle) were processed by a set of filters (bass, treble, cutting out chosen frequencies), and the result was sent to the speakers. Now this is done digitally. There are difficulties: you have to handle carefully the problems associated with the digital representation of sound ("aliasing").
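A minimal digital sketch of the subtractive idea (my addition): generate a harmonically rich square wave, then remove high frequencies with a one-pole low-pass filter; the filter coefficient is an arbitrary choice, and the naive square wave deliberately ignores the aliasing issue mentioned above.

#include <vector>

// Subtractive synthesis in miniature: a rich waveform minus its high frequencies.
std::vector<float> squareWithLowpass(float freq, int sampleRate, int numSamples) {
    std::vector<float> out(numSamples);
    float phase = 0, lp = 0;
    const float alpha = 0.05f;   // Filter coefficient (assumed); smaller = darker sound
    for (int i = 0; i < numSamples; i++) {
        phase += freq / sampleRate;
        if (phase >= 1) phase -= 1;
        float square = (phase < 0.5f) ? 1.0f : -1.0f;   // Simple square wave (ignores aliasing)
        lp += alpha * (square - lp);                    // One-pole low-pass filter
        out[i] = lp;
    }
    return out;
}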

Synthesizer Minimoog

http://www.jarrography.free.fr/synths/images/moog_minimoog.jpg

Sample playback in openFrameworks

The project "soundscape"


The user clicks the mouse in different parts of the screen, and a sound starts playing.
http://www.freesound.org/samplesViewSingle.php?id=221

// Declare variables
ofSoundPlayer sample;    // Sample player
ofPoint p;               // Point and radius - to draw a circle
float rad;

void testApp::setup() {
    sample.loadSound("sound.wav");   // Load the sample from the folder bin/data
    sample.setVolume(0.5f);          // Volume [0, 1]
    sample.setMultiPlay(true);       // Allow several copies of the sample to play at once
    ofSetFrameRate(60);              // Frame drawing rate
    ofSetBackgroundAuto(false);      // Turn off background erasing
    ofBackground(255, 255, 255);
}

The project "soundscape"


void testApp::update() {
    ofSoundUpdate();    // Update the state of the sound system
}

void testApp::draw() {
    // If the sound is playing, draw a transparent circle
    ofEnableAlphaBlending();
    if (sample.getIsPlaying()) {
        // Random color
        ofSetColor(ofRandom(0, 255), ofRandom(0, 255), ofRandom(0, 255), 20);
        ofCircle(p.x, p.y, rad);
    }
    ofDisableAlphaBlending();
}

The project "soundscape"


// The mouse was clicked
void testApp::mousePressed(int x, int y, int button) {
    float h = ofGetHeight();         // Screen height

    // Compute the desired playback speed of the sample;
    // here 1.0 is the original speed of the sample
    float speed = (h - y) / h * 3.0;

    if (speed > 0) {
        sample.play();               // Start a new copy of the sample
        sample.setSpeed(speed);      // Set the playback speed

        // Remember the point and the radius of the circle to draw
        p = ofPoint(x, y);
        rad = (3 - speed);
        rad = 20 * rad * rad;
    }
}

The project "soundscape"

Additive synthesis
Additive synthesis builds a sound by summing a set of harmonics (that is, sine waves of different frequencies) whose volumes change over time.
Any sound can be represented with arbitrary accuracy as the sum of a large number of harmonics with varying volumes. In practice, however, working with a large number of harmonics requires large computational resources. Nevertheless, several hardware and software additive synthesizers exist today.
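Here is a minimal sketch of additive synthesis itself (my addition, not the project code): summing sine harmonics with individual volumes into one PCM buffer; the 100 Hz spacing of the harmonics matches the project described below.

#include <cmath>
#include <vector>

// Additive synthesis: the output is a sum of sine harmonics with individual volumes.
std::vector<float> additive(const std::vector<float>& volumes, int sampleRate, int numSamples) {
    const float PI = 3.14159265f;
    std::vector<float> out(numSamples, 0.0f);
    for (size_t k = 0; k < volumes.size(); k++) {
        float freq = 100.0f * (k + 1);   // Harmonics at 100 Hz, 200 Hz, ...
        for (int i = 0; i < numSamples; i++) {
            // Divide by the number of harmonics to keep the sum inside [-1, 1]
            out[i] += volumes[k] * sinf(2 * PI * freq * i / sampleRate) / volumes.size();
        }
    }
    return out;
}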

Project scenario "Additive Synthesizer"


A user stands in front of the camera against a white background and moves their hands. There are n harmonics. The screen is divided into n vertical strips; in each strip we count the number of pixels whose brightness is below a certain threshold. This number sets the volume of the corresponding harmonic. We use n = 20 sinusoidal harmonics with frequencies 100 Hz, 200 Hz, ..., 2000 Hz.

The harmonics are played as looped samples whose volumes are simply changed.

Synth Code 1 / 4
// Declare variables
ofVideoGrabber grabber;      // Video grabber for capturing camera frames
int w;                       // Frame width
int h;                       // Frame height
const int n = 20;            // Number of harmonics
ofSoundPlayer sample[n];     // Samples of the harmonics
float volume[n];             // Volume of each harmonic
int N[n];                    // Number of pixels counted for each harmonic
ofSoundPlayer sampleLoop;    // Drum loop sample

Synth Code 2 / 4
// Initialization
void testApp::setup() {
    w = 320;
    h = 240;
    grabber.initGrabber(w, h);    // Connect the camera

    // Load the harmonic samples
    for (int i = 0; i < n; i++) {
        int freq = (i + 1) * 100;
        sample[i].loadSound(ofToString(freq) + ".wav");   // Files are named 100.wav, 200.wav, ...
        sample[i].setVolume(0.0);   // Initial volume
        sample[i].setLoop(true);    // Loop the sound
        sample[i].play();           // Start playing
    }
}

Synth Code 3 / 4
// Update the state
void testApp::update() {
    grabber.grabFrame();                // Grab a frame
    if (grabber.isFrameNew()) {         // If a new frame has arrived
        // Reset the harmonics
        for (int i = 0; i < n; i++) { volume[i] = 0; N[i] = 0; }

        unsigned char *input = grabber.getPixels();   // Pixels of the input image
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Input pixel (x, y):
                int r = input[3 * (x + w * y) + 0];
                int g = input[3 * (x + w * y) + 1];
                int b = input[3 * (x + w * y) + 2];
                int result = (r + g + b > 400) ? 0 : 1;   // Threshold
                int i = (x * n / w);    // Which harmonic this pixel contributes to
                volume[i] += result;
                N[i]++;
            }
        }

        // Set the new volumes of the harmonics
        for (int i = 0; i < n; i++) {
            if (N[i] > 0) { volume[i] /= N[i]; }   // Normalize the volume to [0, 1]
            sample[i].setVolume(volume[i] / n);    // Volume; divide by n,
                                                   // otherwise the output sound will distort
        }
    }
    ofSoundUpdate();    // Update the state of the sound system
}

Synth Code 4 / 4
// Drawing
void testApp::draw() {
    ofBackground(255, 255, 255);     // Set the background color
    float w = ofGetWidth();          // Screen width and height
    float h = ofGetHeight();
    ofSetColor(255, 255, 255);       // Otherwise the camera frame is drawn tinted
    grabber.draw(0, 0, w, h);        // Draw the frame

    // Draw the harmonics
    ofEnableAlphaBlending();         // Enable transparency
    ofSetColor(0, 0, 255, 80);       // Blue with opacity 80
    for (int i = 0; i < n; i++) {
        float harmH = volume[i] * h; // Bar height for harmonic i
        ofRect(i * w / n, h - harmH, w / n, harmH);
    }
    ofDisableAlphaBlending();        // Disable transparency
}

Performance on the additive synthesizer

http://www.youtube.com/watch?v=y70Oxk1RAOM

Sound synthesis in openFrameworks

Introduction
In openFrameworks, sound synthesis is performed at the lowest, "byte" level.
It is therefore well suited to experimental projects with sound. In complex projects it is more convenient to use a specialized library such as SndObj (see the addon ofxSndObj), or a separate program such as PureData or Max/MSP connected to openFrameworks via the OSC protocol.

Program Structure
For sound synthesis, the usual program structure is extended with the function audioRequested(). It is called by the sound driver whenever the next chunk of the sound buffer needs to be filled.

Program Structure
In testApp.h, add to class testApp:

void audioRequested(float *output, int bufferSize, int nChannels);

In setup(), add:

ofSoundStreamSetup(2, 0, this, 22050, 256, 4);
// 2 output channels,
// 0 input channels,
// 22050 - sampling rate, samples per second
// 256 - buffer size
// 4 - number of buffers; affects the latency.
// The buffer size and the number of buffers set the balance between latency and
// audible "glitches" in the sound, which appear when the computer is not fast enough.

Program Structure
In testApp.cpp, add:

void testApp::audioRequested(
        float *output,     // Output buffer
        int bufferSize,    // Buffer size
        int nChannels      // Number of channels
) {
    // Example: "white noise" output on two channels
    for (int i = 0; i < bufferSize; i++) {
        output[i * nChannels]     = ofRandomf();   // Random value in [-1, 1]
        output[i * nChannels + 1] = ofRandomf();
    }
}

Example
See the example audioOutputExample in openFrameworks.

Moving the mouse: 1. up and down changes the pitch of the sound; 2. left and right changes the panning. Clicking the mouse generates noise.
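In the same spirit as audioOutputExample, here is a hedged sketch (my addition, not the actual example code) of generating a sine tone with adjustable pitch and panning inside audioRequested(); the member variables frequency, pan, and phase are assumed to be declared in testApp.h and set elsewhere (for example from the mouse, or from a pendulum position for the homework).

// Assumed member variables (declared, e.g., in testApp.h):
//   float frequency;   // Pitch in Hz
//   float pan;         // Panning in [0, 1]
//   float phase;       // Running phase of the oscillator

void testApp::audioRequested(float *output, int bufferSize, int nChannels) {
    const float PI = 3.14159265f;
    const int sampleRate = 22050;                   // Must match ofSoundStreamSetup()
    for (int i = 0; i < bufferSize; i++) {
        phase += 2 * PI * frequency / sampleRate;   // Advance the oscillator
        float sampleValue = sinf(phase);
        output[i * nChannels]     = sampleValue * (1.0f - pan);   // Left channel
        output[i * nChannels + 1] = sampleValue * pan;            // Right channel
    }
}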

Example of synthesis: RubberGravity


Stretching the rubber squares generates sound.

http://www.youtube.com/watch?v=Pz6PO4H1LT0

Homework
Using the example audioOutputExample, add sound generation to the swinging pendulum example. Namely: let the pendulum's Y position control the pitch, and its X position control the panning.
