An Introduction to Music Technology by Dan Hosken - PDF download

Since the compression has begun to move, or propagate, via chain reaction through the instrument, the absence of airflow allows a rarefaction to form.

Also, due to the reduced airflow over the reed, there is no more lift on the reed and the reed opens back up, allowing the air to flow freely into the instrument again. A double-reed instrument such as an oboe works similarly, except that there are two reeds that close together due to the airflow. In the case of a flute, air blown across the blowhole is split by the edge of the blowhole and some of the air enters the flute, causing a compression see Figure 1.

This build-up of pressure then deflects the entering air out of the blowhole. The compression moves down the inside of the flute and, with no air entering the mouthpiece, a rarefaction forms. As with the reed instruments, a steady stream of air from the performer is converted by the mouthpiece of the instrument into very fast puffs—compressions and rarefactions.

Figure 1. (caption): Air flowing over the reed causes the reed to rise and reduce the airflow; the reduced airflow lets the reed open back up and the cycle starts again. Air blown across the blowhole of the flute is deflected into the flute, creating a compression; a rarefaction forms as the compression propagates down the pipe.

Flutes that you blow directly into, such as toy flutes, whistles, ocarinas, and recorders, work similarly in that when you blow into them the air is split by the edge of the vent.

The air that goes into the instrument creates a compression. This pressure build-up causes the air to be redirected out of the vent thereby creating a rarefaction. With the backpressure relieved, the air split by the edge of the vent again enters the pipe. In brass instruments, such as trumpets, trombones, French horns, and tubas, lip buzzing creates compressions and rarefactions.

The performer starts with closed lips and no air flowing into the instrument see Figure 1. When the performer blows, the pressure forces the lips open and air flows into the instrument, creating a compression. Once the compression moves into the instrument and the pressure behind the lips drops, the lips close again and the cycle repeats. Vocal production is similar in many ways to the production of sound in brass instruments: air pressure from the lungs forces the vocal folds apart, and they then come back together. This repeated opening and closing creates the compressions and rarefactions necessary for sound. As with each of these physical descriptions, vocal production is actually somewhat more complex.

The vocal folds can vibrate while that same stream of air is also used to buzz a brass mouthpiece, vibrate a reed, or excite the air column of a flute.

Singing-while-playing is a common contemporary classical performance technique used in music from the twentieth century to today. Much of the music we listen to comes out of loudspeakers and headphones. Although this music may have originally been created in a variety of ways, it reaches us through the motion of a speaker. The simplest kind of speaker has a cone that moves back and forth in response to an analog electrical signal from an electric guitar, stereo, or iPod see Figure 1.

An electromagnet attached to the speaker converts this electrical signal into the physical movement of the cone. When the cone moves forward, a compression forms and when it moves backward a rarefaction forms.

Resonance

In the description above of sound generation by musical instruments, the discussion stopped once the mouthpiece, vocal cords, or string had produced a series of compressions and rarefactions.

However, there is a reason that these instruments all have bodies, barrels, or pipes: resonance. Once the initial vibration is started, the sound wave passes into the body of the instrument, which can determine pitch and overall timbre. The pitch of stringed instruments, percussive instruments, and voices is determined by the vibrating elements of strings, membranes or bars, and vocal cords.

The resonators for those instruments—the body of a violin, the shell of a drum, and the throat, mouth, and nasal cavities of a human—are responsible for shaping the timbre of the sound. This should not be thought of as a trivial task. For example, in order to speak, we must shape our resonators continuously to produce different vowels.

To make a violin sound beautiful, the sound wave created by the bow and strings must be modified by the materials and overall shape of the body of the instrument. A bowed string without a resonator can sound quite thin and unimpressive. The pitch of brass and woodwind instruments is determined by a combination of the mouthpiece and the resonator. For brass instruments, the pitch is determined by the length of the air column and the pitch of the buzzing from the mouthpiece.

For woodwinds, the pitch is almost entirely determined by the length of the air column as controlled by the keys. In addition to strongly influencing the pitch of brass and woodwind instruments, the resonator also shapes the timbre as it does with strings, percussion, and voice.

The Medium

Thus far, it has been assumed that the forward and backward activity of a vibrating object is taking place in air. For sound to propagate, it requires an elastic medium; the molecules in air fulfill this requirement, as do the molecules in water, but where there is no medium at all, sound cannot propagate.

The classic example of the latter is the vacuum of space. Space is not a perfect vacuum; there are scattered molecules between stars and planets. However, the density of the molecules in between these celestial bodies is not high enough to allow the chain reaction of compressions and rarefactions to form. The next step is for someone, or something, to receive this series of compressions and rarefactions and interpret them.

In other words, we need ears with which to hear. The ear can be divided into three basic parts: the outer ear, the middle ear, and the inner ear see Figure 1. The outer ear consists of the fleshy part on the outside of your head and a canal that funnels sound waves into your head.

The flesh of the ear, or pinna, helps us locate the sound source, because it subtly changes (filters) the incoming sound depending on what direction the sound is coming from. The two ears working together also provide directional cues through the time difference between when sound reaches one ear and the other and through an intensity difference if sound arriving at one ear is partially blocked by the head.
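To get a feel for the size of the time difference involved, here is a rough back-of-the-envelope sketch; the head width and speed of sound used below are assumed round numbers, not values from the book:

```python
# Rough estimate of the maximum interaural time difference (ITD).
# Assumed values: ear-to-ear distance ~0.2 m, speed of sound ~343 m/s.
speed_of_sound = 343.0   # meters per second in air
head_width = 0.20        # meters (an assumption for illustration)

max_itd = head_width / speed_of_sound
print(f"maximum interaural time difference: {max_itd * 1000:.2f} ms")  # ~0.58 ms
```

Differences on the order of half a millisecond or less are enough for the brain to tell which side a sound is coming from.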

The shape and length of the ear canal influences the frequency balance of the sound waves that pass through it by emphasizing frequencies between about 2,000 and 5,000 Hz, just as speaking into a tube changes the quality of your voice by emphasizing certain frequencies. As a result, our hearing is most acute around those frequencies. (Figure 1. is based on a drawing in Chittka, L., "Perception Space—The Final Frontier," PLoS Biol 3(4).) The middle ear consists of the tympanic membrane, or eardrum, and a series of three bones, collectively referred to as the ossicles, which connect the eardrum to the inner ear.

When a sound wave reaches the eardrum, it vibrates in sympathy. To get a sense of how this works, try pointing a trumpet, or some other loud instrument, at the head of a timpani drum and playing loudly. The drumhead will vibrate madly without ever being struck. Similarly, you can also sing into a piano while holding the damper pedal down and hear the strings vibrate in sympathy with your voice.

With your eardrum moving back and forth, the energy that a voice or instrument originally imparted to the air has now been turned into a vibration in your body. The vibration of the eardrum is next passed to the ossicles. The individual ossicles are called the malleus, the incus, and the stapes, and are known colloquially as the hammer, the anvil, and the stirrup due to their respective shapes. These three bones work together to amplify mechanically the relatively small movement of the eardrum; this is one of the reasons why our hearing is so sensitive.

The middle ear also connects to your Eustachian tubes, which connect at the other end to your throat and allow your body to keep the air pressure in the middle ear matched with the air pressure outside of your head. The last of the three ossicles connects to the oval window of an organ called the cochlea, which makes up your inner ear. The cochlea is a fluid-filled tube that is coiled up like a snail. So far all of the changes in energy have been mechanical: vibrating string to vibrating air to vibrating eardrum to vibrating ossicles to vibrating fluid.

The vibrating fluid in the cochlea causes various parts of the basilar membrane, which runs down the middle of the cochlea, to vibrate as well see Figure 1.

On this membrane are thousands of tiny hair cells and corresponding nerve receptors that are part of the organ of Corti. As different parts of the basilar membrane are set in motion by the vibrating fluid, the moving hair cells cause nerves to fire, sending signals down the auditory nerve to the brain. The movement of the basilar membrane is dependent on the frequencies present in the incoming sound wave, which causes different hair cells to fire for different frequencies.

As a result, different parts of the basilar membrane are sensitive to different frequencies: high frequencies nearest to the oval window, low frequencies toward the center of the spiral. In this way, the basilar membrane separates the incoming sound wave into its component frequencies. This is one reason that we can identify multiple simultaneous pitches in music: each pitch causes the most movement in a different part of the basilar membrane, and the nerves that fire at each location are sent separately to the brain.
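The way the basilar membrane separates a sound into its component frequencies is loosely analogous to a Fourier analysis. The sketch below, which assumes NumPy and a made-up two-pitch signal (440 Hz plus 660 Hz), only illustrates that idea; it is not a model of the cochlea:

```python
import numpy as np

# A made-up mixture of two "pitches": 440 Hz and 660 Hz, one second at 8 kHz.
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
mixture = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

# A Fourier transform separates the mixture into its component frequencies,
# loosely analogous to what the basilar membrane does mechanically.
spectrum = np.abs(np.fft.rfft(mixture))
freqs = np.fft.rfftfreq(len(mixture), d=1 / sample_rate)
strongest = freqs[np.argsort(spectrum)[-2:]]
print(sorted(strongest))   # -> [440.0, 660.0]
```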

Figure 1. (caption): The frequency range of the piano is given for reference.

Your cochlea is a powerful but sensitive organ, and damage to it is irreversible. Hearing loss will be discussed in the next chapter, where the amplitude of sounds is considered. Once the cochlea has done its job, the nerve signals are sent to the brain.

The study of our auditory system and the way the brain decodes and analyzes the resultant nerve impulses is referred to as music perception, or psychoacoustics.

Some aspects of this field of study will be taken up in future chapters as they relate to sound, audio, sampling, and synthesis. The study of mental processes and mental representation in music is referred to as music cognition. Naturally, there can be a great degree of overlap between the study of music perception and the study of music cognition.

So if a tree falls in the forest and no one is there to hear it, the tree still causes a pattern of compressions and rarefactions that propagate through the air, but with no ears to receive the disturbances and no brain to process the signals, there is no sound. Do squirrels count?

If the sound you are hearing is a musical sound, there are a variety of questions that you might have about it. What is its pitch? How loud is it? What does it actually sound like?

How is it articulated? What is its rhythm? These questions can be boiled down to a list of musical sound properties that are of interest: pitch, loudness, timbre, articulation, and rhythm. However, while these properties are present in some way in all sounds, they are not as well defined for non-musical sounds as they are for musical ones.

The waveform view is a graph of the change in air pressure at a particular location over time due to a compression wave. If you were measuring the air pressure right in front of a vibrating string, the pressure would change as shown in Table 2. This change in air pressure over time can be graphed on a simple x-y graph, with time on the x-axis and air pressure on the y-axis. This gives us the waveform view of sound see Figure 2.

Each of the steps in Table 2. corresponds to a point on the waveform. Pitch is something we perceive when the brain interprets these changes in air pressure, so pitch can be described as a perceptual property of sound. The waveform view shows what is happening to air molecules when they are disturbed by something that vibrates, not what is happening in your brain. The waveform view, then, is a physical representation, not a perceptual one.

Figure 2. (caption): The numbers above the graph correspond to the numbered string motions in Table 2.

Each of the sound properties mentioned above—pitch, loudness, timbre, articulation, and rhythm—is a perceptual property. In order to use the physical waveform view to understand something about these perceptual properties, we need to identify physical properties that are related to them. At first, this may seem like a meaningless distinction.

However, in the act of perception, our ears change the physical properties of sound in various ways, such as emphasizing certain frequencies, and our brain analyzes the perceived sound properties in relation to perceptions it has encountered before, such as identifying a sound as the harmonic interval of a perfect fifth played on a piano. In addition, the ear can be fooled. The physical property that is related to pitch is frequency. In the string example, frequency is the rate at which a string moves through a full cycle of motions from center, to forward, to center, to backward, to center and then repeats.

These cycles of motion in turn create compression and rarefaction cycles in the air that repeat at the same rate, or frequency. As discussed in the previous chapter, various musical instruments and other sound sources have different physical motions, but they all produce the necessary compression and rarefaction cycles at a rate related to the rate of their physical motions. Frequency is measured in cycles per second, or cps, which is also referred to as hertz, or Hz, after the German physicist Heinrich Hertz.

This rate can be determined from the waveform view by measuring the amount of time the sound wave takes to go through a compression-rarefaction cycle. This measurement is called the period of the waveform and is measured in seconds per cycle. Since the period gives us the number of seconds per cycle, we can obtain the number of cycles per second by inverting the period. The frequency, f, is equal to the reciprocal of the period, T.
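As a quick sketch of that relationship (the period value below is hypothetical rather than taken from the book's figures):

```python
# Frequency is the reciprocal of the period: f = 1 / T.
period = 0.004            # seconds per cycle (a hypothetical T)
frequency = 1 / period    # cycles per second (Hz)
print(frequency)          # -> 250.0 Hz

# Doubling the period halves the frequency.
print(1 / (2 * period))   # -> 125.0 Hz
```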

Period T2 is twice as long as Period T1, resulting in half the frequency. If you think about the vibrating string, this makes sense. The longer it takes the string to move back and forth, the slower the string is moving and the lower the frequency.

If the string takes less time to move back and forth, the string must be moving faster and the frequency will be higher. The frequencies for the notes on a piano keyboard are given in Figure 2. There is a wide range of frequencies that occur in the world, but we are only sensitive to a certain range.
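The piano-key frequencies referred to above can be computed with the standard equal-temperament formula, assuming the tuning A (A4) is 440 Hz; both the formula and the 440 Hz reference are common conventions assumed here, not details quoted from the book's figure:

```python
# Equal-tempered note frequencies, assuming A4 = 440 Hz.
# MIDI-style note numbers are used only for convenience (A4 = 69, C4 = 60).
def note_frequency(midi_note, a4=440.0):
    return a4 * 2 ** ((midi_note - 69) / 12)

print(round(note_frequency(21), 2))    # A0, lowest piano key   -> 27.5 Hz
print(round(note_frequency(60), 2))    # C4, middle C           -> 261.63 Hz
print(round(note_frequency(69), 2))    # A4, the tuning A       -> 440.0 Hz
print(round(note_frequency(108), 2))   # C8, highest piano key  -> 4186.01 Hz
```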

The frequency range of human hearing is about 20 Hz to 20,000 Hz (20 kHz). This forms one of the distinctions between frequency and pitch. As we age, our sensitivity to high frequencies gradually diminishes. One company has put this fact to interesting use by marketing an anti-loitering device that emits a relatively loud high-frequency tone of approximately 17 kHz.

Theoretically, this tone is audible only to younger listeners, whose high-frequency hearing is still intact. A dog whistle is a classic example of a related phenomenon: it is theoretically too high for humans to hear at all, but well within the hearing range of dogs (at least of small dogs).

Frequencies like this that lie above our hearing range are referred to as ultrasonic. Elephants, by contrast, can hear frequencies below our hearing range, referred to as infrasonic, and some organ pipes also produce infrasonic frequencies that are felt rather than heard. It is possible to fool the ear in various ways with regard to the relationship between frequency and pitch. For example, the cognitive psychologist Roger Shepard developed an illusion in which a series of tones appears to rise endlessly, but never leaves a relatively narrow range of frequencies.

In his honor, these are referred to as Shepard tones. The tones in this illusion are made up of a number of frequencies spaced in octaves. As all the frequencies rise, the higher frequencies gradually fade out and frequencies below them gradually fade in. As the loudness relationships between the octaves change, our ears shift smoothly from the octaves that are fading out to the octaves that are fading in without us being consciously aware of it.
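A minimal sketch of the idea behind Shepard tones is shown below, assuming NumPy; the number of octave components, the base frequency, and the bell-shaped loudness curve are arbitrary choices, and playback or file output is left out:

```python
import numpy as np

sample_rate = 44100
cycle_seconds = 8.0     # time for every component to rise one octave
n_octaves = 6           # number of simultaneous octave-spaced components
f_low = 55.0            # lowest component frequency (arbitrary)

t = np.arange(int(sample_rate * cycle_seconds)) / sample_rate
tone = np.zeros_like(t)

# Each component rises by one octave over the cycle. Its loudness follows a
# fixed bell curve over its position in the frequency range, so components
# fade out as they near the top while new ones fade in at the bottom.
for i in range(n_octaves):
    position = i + t / cycle_seconds                   # 0 .. n_octaves
    freq = f_low * 2.0 ** position                     # instantaneous frequency
    amp = np.exp(-0.5 * ((position - n_octaves / 2) / 1.5) ** 2)
    phase = 2 * np.pi * np.cumsum(freq) / sample_rate  # integrate frequency
    tone += amp * np.sin(phase)

tone /= np.max(np.abs(tone))   # normalize; looping `tone` gives an "endless" rise
```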

This illusion has been likened to the visual illusions of M. C. Escher, particularly his Ascending and Descending, which was inspired by a design by Lionel and Roger Penrose.

The physical property that is related to loudness is amplitude. Amplitude is determined by how much the air pressure in a compression or rarefaction deviates from the normal air pressure. In the case of a stringed instrument, the harder the string is plucked or bowed, the farther from the normal position the string moves and the greater the deviation in air pressure from the norm is in a compression or rarefaction.

Struck instruments such as percussion generate greater amplitude in the same way. For instruments driven by breath, the greater the airflow, the more the air molecules get packed together before the reed, lips, or vocal cords close, and the greater the amplitude of the resultant sound wave.

On the waveform view, amplitude is measured from the x-axis to the peak or the trough so that it represents the deviation of air pressure from normal see Figures 2. This amplitude measurement is usually given in relation to a reference value, resulting in a sound pressure level. This level is expressed in units known as decibels (dB), or specifically decibels of sound pressure level (dB SPL). As with frequency, the range of human hearing for loudness is limited to just a part of the full range of possible sound pressure levels.

The quietest sound we can possibly hear is given as 0 dB SPL and is referred to as the threshold of hearing. Any compression wave with a lower pressure or intensity would be measured in negative dB SPL and would not be perceivable by humans. The loudest sound that we can bear is approximately 120 dB SPL and is referred to as the threshold of pain. Anything above this is both physically painful and damaging to our hearing. However, it is important to note that prolonged exposure to sound pressure levels significantly lower than this can still cause hearing damage.
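For reference, sound pressure level in dB SPL is conventionally computed relative to a reference pressure of 20 micropascals, the nominal threshold of hearing; that reference value is a standard convention rather than something stated in this excerpt. A small sketch:

```python
import math

P_REF = 20e-6  # pascals; the conventional dB SPL reference pressure

def db_spl(pressure_pa):
    return 20 * math.log10(pressure_pa / P_REF)

print(round(db_spl(20e-6)))   # threshold of hearing         -> 0 dB SPL
print(round(db_spl(1.0)))     # 1 pascal                     -> 94 dB SPL
print(round(db_spl(20.0)))    # around the threshold of pain -> 120 dB SPL
```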

As we get older, our hearing naturally gets worse, with the higher frequencies slowly falling away. However, our modern noisy lifestyle can cause our hearing to deteriorate faster.

Many people have had the experience of leaving a rock concert and noticing that their hearing is dull or muted. This sensation, referred to as temporary threshold shift, often goes away after a few hours or days! Every time you subject your ears to extremely loud sounds, or even to merely loud sound over a period of time, you hasten the deterioration of your hearing.

The dullness comes from over-exciting the hair cells in the cochlea that are responsible for high frequency sounds. If the sonic abuse is severe enough, the dull sensation may be a permanent threshold shift instead of a temporary one; in other words: permanent hearing damage.

Loud sounds can also produce a ringing sensation in the ears, known as tinnitus. The ringing can be temporary, like the post-concert muffled sensation discussed above, but it can also eventually become permanent. There are mechanical devices that help reduce the effects of such hearing loss, ranging from hearing aids to cochlear implants, but there is currently no mechanical device that hears with the sensitivity of your natural hearing system.

The best solution to hearing loss is to prevent it in the first place. The first and best way to prevent loudness-induced hearing loss is to avoid loud sounds. It may not be necessary to have your car stereo so loud that the entire neighborhood can hear it. The 85 dB mark is often used in workplace settings to determine whether hearing protection is required.

If you compare these sound pressure levels to those in Table 2. There has also been some concern that personal listening devices (PLDs), such as iPods, can contribute to hearing loss, and guidelines have been published that suggest volume limitations and a time limit on PLD listening per day.
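To illustrate how such time limits typically scale with level, the sketch below uses an 85 dB, 8-hour baseline with the allowed time halved for every additional 3 dB, following a common occupational guideline; the specific numbers in the guidelines the book cites may differ:

```python
# Allowed listening time under a 3 dB "exchange rate" rule:
# 8 hours at 85 dB, halved for every 3 dB above that (illustrative only).
def allowed_hours(level_db, base_level=85.0, base_hours=8.0, exchange_db=3.0):
    return base_hours / 2 ** ((level_db - base_level) / exchange_db)

for level in (85, 94, 100):
    print(level, "dB ->", round(allowed_hours(level), 2), "hours")
# 85 dB -> 8.0 hours, 94 dB -> 1.0 hours, 100 dB -> 0.25 hours
```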

Rough guidelines are listed in Table 2. The greater the number of dB SPL that a PLD outputs at a given percentage of maximum volume, the lower the exposure limit would be at that volume. For situations involving everyday noise, such as noise from a bus, subway, lawnmower, or chainsaw, inexpensive earplugs can reduce the loudness of these sounds before they reach your hearing system.

However, inexpensive earplugs tend to change the balance of frequencies, and hence the timbre, of sounds that reach your middle and inner ears by reducing higher frequencies more than lower ones.

This is problematic for music. In musical situations, more expensive earplugs that are molded specifically to fit your ears can reduce the loudness of sounds evenly across all frequency ranges. These custom-molded earplugs are naturally more expensive than the cheap earplugs; you can consider it an investment in the longevity of your career.

Amplitude and Loudness Perception

Decibels are used when expressing sound pressure levels because they reduce a wide range of numbers down to a manageable range. Our hearing is very sensitive: the ratio of the intensity of a sound at the threshold of pain to the intensity of a sound at the threshold of hearing is about 1,000,000,000,000 (one trillion) to 1.
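Expressed in decibels, that trillion-to-one intensity ratio collapses to a manageable number; the standard formula for an intensity ratio is dB = 10 log10(I / I_ref):

```python
import math

def intensity_ratio_to_db(ratio):
    return 10 * math.log10(ratio)

print(intensity_ratio_to_db(1e12))         # trillion-to-one range  -> 120.0 dB
print(round(intensity_ratio_to_db(2), 2))  # doubling the intensity -> 3.01 dB
```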

Small changes in decibel values, then, can reflect rather large changes in the actual intensity of a sound. A change of 3 dB SPL indicates a doubling of the physical measurement of intensity. (Figure 2. caption: Different filters can be inserted to reduce sound by 9 dB, 15 dB, or 25 dB.) Another discrepancy between physical measurements and perception is the difference in perceived loudness levels for sounds at different frequencies.

We are more sensitive to frequencies between about 1 kHz and 5 kHz, so those sounds require less intensity, and hence fewer dB SPL, to sound as loud as lower frequency sounds. Our sensitivity to this frequency range makes some sense given that a number of consonants in our language have significant energy in that range.

TIMBRE

The perceptual property of timbre is related to the physical property of the shape of the wave, or the waveform.

Thus far in the discussion of sound, it has been assumed that the vibrating object is moving in the simplest possible way. The shape produced by that simple back and forth motion is called a sine wave after the mathematical function that produces such a shape. Figures 2. A real-world vibrating object seldom moves in such a simple fashion. Typically, the back and forth motion will be more complicated, resulting in an equally complicated graph of the changing amplitude over time.

(See Figure 2.) In addition, timbre is a complicated phenomenon and can be influenced by the other sound properties (pitch, loudness, articulation) and by the overall sonic context (what other instruments are playing, whether it is noisy, etc.). The discussion of the spectrum in the next chapter will provide us with more tools for analyzing timbre.

However, there is a collection of largely artificial waveforms that can be used as a sort of rudimentary timbral vocabulary. The simplest is the sine wave mentioned above.

The other standard waveforms are the triangle wave, the sawtooth wave, the square wave, and a version of the square wave called a pulse wave. A single cycle of each of these waveforms is shown in Figure 2. These waveforms are primarily relevant because they formed the basis for early analog synthesizer sounds. This may appear to be historical trivia, but many current software synthesizers and some hardware synths use analog-modeling techniques to create sound. There are even some actual analog hardware synthesizers that are still being made.
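A sketch that generates one cycle of each of these standard waveforms with NumPy is shown below; rendering them as bipolar signals in the range -1 to +1, and the 25 percent duty cycle for the pulse wave, are conventional choices rather than details from the book:

```python
import numpy as np

n = 1024                      # samples in one cycle
phase = np.arange(n) / n      # 0.0 up to (just under) 1.0

sine     = np.sin(2 * np.pi * phase)
sawtooth = 2 * phase - 1                       # linear ramp from -1 to +1
triangle = 1 - 4 * np.abs(phase - 0.5)         # up to +1 at mid-cycle, back to -1
square   = np.where(phase < 0.5, 1.0, -1.0)    # equal time at +1 and -1
pulse    = np.where(phase < 0.25, 1.0, -1.0)   # square wave with a 25% duty cycle

# Repeating any one of these cycles f times per second produces a tone at f Hz.
```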

Figure 2. (caption): Period and amplitude are indicated.

Another standard waveform is one that has no regular pattern at all: noise. Noise is, of course, all around us in the form of traffic sound, jackhammers, and ocean waves, but it is also very much present in musical sounds. Noise is an important sound component during the attack phase of almost all instruments, such as flutes, guitars, and violins, and is a prominent component in most percussion sounds, such as cymbals and snare drums.

Noise was also an important sound source in analog synthesizers and is used today in analog and analog-modeling synthesis. Noise can have a variety of qualities that are usually described using colors, such as white noise (very harsh and grating) and pink noise (still noisy, but more pleasant). Representative waveforms for white noise and pink noise are given in Figure 2. These are only representative, because noise does not have a predictable amplitude pattern.
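A sketch of the difference, assuming NumPy: white noise is just independent random samples (equal power at all frequencies), while pink noise has power that falls off as 1/f; one simple way to approximate pink noise is to shape a white spectrum by 1/sqrt(f) and transform back:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16

# White noise: independent random samples, power spread evenly over frequency.
white = rng.standard_normal(n)

# Pink noise: power proportional to 1/f. Shape a white spectrum by 1/sqrt(f)
# (so that power falls off as 1/f) and transform back to the time domain.
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
spectrum[1:] = spectrum[1:] / np.sqrt(freqs[1:])   # leave the DC bin alone
pink = np.fft.irfft(spectrum, n)
pink /= np.max(np.abs(pink))                       # normalize to -1..+1
```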

Articulation describes the way the loudness of a sound changes over its duration. For example, the loudness of an accented note rises more quickly from silence, and to a higher maximum loudness, than that of a note that is not accented. A note that is staccato will have a quick rise and a quick fall-off at the end. Articulation is not just limited to musical notes: the loudness of the non-musical sounds around us also changes over time. A thunderclap has a sudden jump in loudness.

Figure 2. (caption): The duration of each noise waveform shown is about the same as the period of a Hz periodic waveform.

A motorcycle roaring toward you has a long, slow increase in loudness followed by a long, slow decrease as it passes you and roars on. Each of these sounds has its own articulation. When loudness was discussed above, it was related to the amplitude of an individual cycle of a waveform, whose duration is quite short: the period of A440, the tuning A, is just over 0.002 seconds.

The changes in loudness referred to as articulation take place over much larger spans of time. An eighth note at a moderate tempo lasts a good fraction of a second, so you could fit well over a hundred cycles of a mid-range waveform into that single eighth note. Even at the lowest frequency that humans can hear, 20 Hz, the period is only 0.05 seconds. The physical property that is related to articulation is referred to as an amplitude envelope because it contains, or envelops, many repetitions of the waveform see Figure 2.
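To make that comparison of time scales concrete, here is a small worked example; the tempo of quarter note = 120 and the 440 Hz tuning A are assumed for illustration and are not necessarily the figures used in the book:

```python
# How many waveform cycles fit inside a single note?
tempo_bpm = 120                            # assumed tempo: quarter note = 120
eighth_note_seconds = 60 / tempo_bpm / 2   # -> 0.25 s
period_a440 = 1 / 440                      # -> about 0.00227 s per cycle

print(eighth_note_seconds / period_a440)   # -> 110 cycles in one eighth note
print(1 / 20)                              # period of a 20 Hz tone -> 0.05 s
```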

To represent the amplitude envelope, we will continue to use the waveform view (amplitude versus time), but zoomed out far enough that many cycles of the waveform are visible. This envelope also has its roots in analog synthesis and is widely found in various forms on hardware and software synthesizers.

Only the top of the envelope is usually shown because many waveforms are the same on the top and on the bottom.

Figure 2. (caption): The frequency is extremely low—20 Hz—so you can still see the individual waveforms within the envelope.

The difference in amplitude between the peak and the sustain levels reflects the degree of initial accent on the note. A strong accent will have a greater peak and a greater fall-off from the peak to the sustain level, whereas a note that is attacked more gently will have a lower peak and a smaller difference between those two levels.

The sustain segment is the most characteristic segment for this envelope model. Only instruments in which the performer continuously supplies energy to the instrument by blowing, bowing, or some other means, will have such a sustain segment. These instruments include flutes, clarinets, trumpets, trombones, violins, cellos, and organs. The release portion of the envelope reflects how the sound falls away from the sustain level to silence.
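A minimal sketch of an attack-decay-sustain-release envelope of the kind described above, built from linear segments with NumPy; the segment times and sustain level are arbitrary choices, and many synthesizers use exponential rather than linear segments, as noted later:

```python
import numpy as np

def adsr_envelope(total_seconds, sample_rate=44100,
                  attack=0.02, decay=0.1, sustain_level=0.6, release=0.3):
    """Linear attack-decay-sustain-release envelope; times are in seconds."""
    n = int(total_seconds * sample_rate)
    a, d, r = (int(x * sample_rate) for x in (attack, decay, release))
    s = max(n - a - d - r, 0)                            # remaining time sustains
    return np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),            # attack up to the peak
        np.linspace(1.0, sustain_level, d, endpoint=False),   # decay to the sustain level
        np.full(s, sustain_level),                            # sustain while energy is supplied
        np.linspace(sustain_level, 0.0, r),                   # release back to silence
    ])

# Applying the envelope to a tone shapes its articulation.
sr = 44100
t = np.arange(sr) / sr                      # one second of time points
note = np.sin(2 * np.pi * 440 * t)          # a plain 440 Hz sine tone
shaped = note * adsr_envelope(1.0, sr)      # a sustained, organ-like note
```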

With a struck or plucked instrument, the performer initially imparts energy to the instrument and then allows the vibrations to damp down naturally. Examples of this type of instrument include drums, cymbals, vibraphones, guitars, and pizzicato violins. The duration of the attack segment reflects the force with which the instrument is activated—how hard the string is plucked or the drum is hit. It can also reflect the materials that impart the initial impulse.

A drum hit with a big fuzzy mallet will likely have a somewhat longer attack than one hit with a hard mallet, and a guitar string plucked with the flesh of the finger will have a somewhat longer attack than one plucked by a pick. The duration of the release portion of the envelope is related to how hard the instrument was struck or plucked: the harder the strike, the longer the release.

The size and material of the vibrating object also impact the release: a longer string or larger drumhead is likely to vibrate longer than short strings and small drumheads. Of course, these envelope models are not necessarily mutually exclusive. In struck or plucked notes, the release of one note can overlap the attack of the next, but the envelope will still be articulated for each note.

Many hardware and software synthesizers do not have separate controls for these two envelope types. Exponential line segments are common in many synths.

Rhythm, unlike the properties discussed so far, is a property of a group of sounds rather than of a single sound; in this, rhythm is similar to melody, which also consists of multiple notes. In addition, rhythm is often perceived in a hierarchical fashion with individual events combining to form beats and beats combining to form meter.

At the level of a group of notes, aspects of rhythm can be seen in the waveform view by identifying patterns in the attacks of the notes, referred to as transient patterns. The term transient is used because the attack-decay portions of an envelope form a short-term transition from no sound to the sustain or release of a sound. Some sound types, such as drums, form patterns that have strong transients and no sustain, whereas other sounds, such as slurred woodwinds, brass, or strings, form patterns in which the transient can be difficult to see.

Viewing transients in the waveform view involves even more zooming out than with amplitude envelopes see Figure 2. The analysis of transients as a pattern of beats and bars is a standard feature in many recording programs and is generally referred to as beat detection see Figure 2. This process allows you to manipulate audio as separate logical chunks, the same way you can manipulate notes.
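A very crude, energy-only sketch of the kind of transient detection just described is shown below (assuming NumPy); the frame size and threshold are arbitrary, and real beat-detection features in recording programs are considerably more sophisticated:

```python
import numpy as np

def detect_onsets(samples, sample_rate, frame=512, threshold=1.5):
    """Return times (in seconds) where short-term energy jumps sharply.

    Energy-only detection works best on sounds with strong transients (drums)
    and poorly on smooth, legato material, as described in the text.
    """
    n_frames = len(samples) // frame
    energy = np.array([np.sum(samples[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    return [i * frame / sample_rate
            for i in range(1, n_frames)
            if energy[i] > threshold * energy[i - 1] + 1e-9]

# Two short bursts roughly a second apart are detected as two onsets.
sr = 44100
audio = np.zeros(2 * sr)
audio[512 * 20:512 * 21] = 0.5
audio[512 * 105:512 * 106] = 0.5
print(detect_onsets(audio, sr))   # -> [~0.232, ~1.219] seconds
```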

Figure 2. (caption): The vertical lines indicate identified beats. Courtesy of Digidesign, Inc.

Sound that consists of clearly defined transients in a regular pattern is easier for humans and for software to parse into beats and bars. Legato passages in strings, winds, and brass, where the notes are not re-attacked and hence have fewer transients, are more difficult for software that relies solely on transient detection to parse.

However, our perceptual system is more successful here because, in addition to transient detection, we can also bring pitch detection and other techniques to bear on the problem. Software that also utilizes such a multi-faceted approach will be similarly more successful.

This table of physical and perceptual properties will be further refined in the following chapter. The waveform representation is useful in many instances, but it falls short with regard to timbre.

The physical property related to timbre found in the waveform view is the waveform itself. However, the small collection of standard waveforms sine, triangle, sawtooth, square, pulse is of limited use when discussing real-world timbres.

A better representation of sound for investigating timbre is the spectrum view. To understand the spectrum view it is useful first to consider the more familiar overtone series. The overtone series represents the frequencies that are present in a single note, including the fundamental, and is usually shown using traditional music notation see Figure 3.

This traditional notation is somewhat misleading, because it implies that every note is really a chord and that all the frequencies are distinct pitches. There is really just one pitch associated with an overtone series—the pitch related to the fundamental frequency.

However, there are some useful features about the frequencies in the overtone series that can be seen with traditional notation. For example, for brass players, the fundamental frequencies of the pitches available at each valve fingering or slide position are found in the overtone series that is built on the lowest note at that fingering or position.
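Because the frequencies in the series are whole-number multiples of the fundamental, they are easy to list; the 110 Hz fundamental below is chosen only for illustration:

```python
# The overtone series: whole-number multiples of the fundamental frequency.
fundamental = 110.0   # Hz (A2), an arbitrary choice for illustration
for n in range(1, 9):
    print(f"harmonic {n}: {fundamental * n:7.1f} Hz")
# harmonic 1 is the fundamental; harmonic 2 is an octave higher;
# harmonic 3 is an octave plus a fifth higher; and so on.
```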

The terms harmonics and partials are also used alongside overtones, and the distinction between these terms is subtle: the harmonics of a note are the whole-number multiples of the fundamental frequency, with the fundamental itself counted as the first harmonic; the overtones are the same frequencies counted starting above the fundamental (the fundamental frequency plus the first overtone, the second overtone, etc.); and the partials are any of the component frequencies of a sound, whether or not they are whole-number multiples of the fundamental.

The website accompanying the book is up to date and provides a large collection of resources, such as audio examples, YouTube videos and URLs to relevant online content. The online component is a helpful expansion of the material covered in the book and allows dissemination of various types of information conveniently gathered in one place, which is practical in a teaching environment.

Hosken is pragmatic in his approach and prioritises the importance of the presented material to students operating in the contemporary music production landscape. An example of this method is the choice of topics for the appendices. The decision to include the discussion on computer hardware and software in the appendices was dictated by the fact that, while important, these topics are intuitively understood by current generations of students who grew up with computer technologies.

While a good understanding of the fundamentals of computer hardware and software is very helpful in troubleshooting technical problems, as well as in efficient day-to-day work with music technology, the priority in the volume is given to sound-related topics, which is a sensible choice.

The second edition of the book, reviewed here, sees a restructure of some of the content as well as new additions. The new edition offers several references to mobile platforms, particularly iOS-based apps facilitating music creation and performance as well as computer-assisted instruction. This discussion of mobile apps is a welcome update, as since the launch of the iOS App Store in 2008 musicians have gained access to an ever-growing range of tactile apps with unparalleled music capabilities.

In addition to the information on iOS apps, the updated text features references to hardware accessories relevant to the iOS platform. The book helps to facilitate an understanding of the key principles that lie behind the technology, rather than discussing specific software or hardware tools.

While the text does not focus on specific software, there are numerous references to popular plugins and digital audio workstation (DAW) programs. These examples offer a fairly broad overview of available software options, with the most frequently mentioned DAW applications being Pro Tools, Reason and Logic Pro, while other popular DAWs such as Cubase and Ableton are mentioned only in passing.

This approach helps to avoid the dangers of analysing minute details of technological tools that change at a rapid pace, which would quickly render such analysis obsolete. In my own tertiary teaching practice, I found the segments of the book discussing the properties of sound, MIDI, synthesis, sampling and bit rates to be of particular value to students new to these aspects of music technology.

Such content has proven to be an excellent resource for introductory information on these topics. The book is designed for makers and creators who want to use technology in their present or future professional activities and for whom, Hosken argues, it is important to understand how music technology works. He does not discuss technology separately from music, which, I believe, is a step that can help practitioners who frequently wear the hats of musician and sound technician at the same time.

In my practice as a music producer I have found that it is often easy to fall into the trap of reaching for technological solutions to problems encountered in a mix and to forget about the musical ones. An example of a discussion just as important to music performers as it should be to music technologists is Chapter Three, featured in the first section of the book, which is focused on Sound, where topics such as harmonics, overtones and timbre modification are discussed.

A limitation of the book is the lack of more in-depth explanation of some complex topics or processes that might be challenging for beginners.

Author: David Byrne. Publisher: S. But how exactly does music work and what effect does it have, acoustically, economically, socially and technologically? With verve and wit, Byrne takes readers along on an inspiring journey.

Music Technology in Education lays out the principles of music technology and how they can be used to enhance musical teaching and learning in primary and secondary education. It has been completely updated to reflect mobile technologies, social networks, rich media environments, and other technological advances. Topics include:
- Basic audio concepts and recording techniques
- Enhanced music instruction with interactive systems, web-based media platforms, social networking, and musicianship software
- Administration and management of technology resources
- Distance education and flexible learning
Music Technology in Education provides a strong theoretical and philosophical framework for examining the use of technology in music education while outlining the tools and techniques for implementation in the classroom.

Reflective Questions, Teaching Tips, and Suggested Tasks link technology with effective teaching practice. The companion website provides resources for deeper investigation into the topics covered in each chapter, and includes an annotated bibliography, website links, tutorials, and model projects. This practical music technology workbook enables students and teachers to get the best possible results with the available equipment.

The workbook provides step-by-step activities for classroom-based and independent project work, covering the skills and techniques used in modern music production. The activities are supplemented with basic concepts, hints and tips on techniques, productions skills and system optimisation to give students the best possible chance of passing or improving their grade.

The book includes screenshots throughout from a variety of software including Cubasis, Cubase SX, Logic and Reason, though all activities are software- and platform-independent.

This title deals with both the practical use of technology in music and the key principles underpinning the discipline. It targets both musicians exploring computers, and technologists engaging with music, and does so in the confidence that both groups can learn tremendously from the cross-disciplinary encounter. The Routledge Companion to Music, Technology, and Education is a comprehensive resource that draws together burgeoning research on the use of technology in music education around the world.

Rather than following a procedural how-to approach, this companion considers technology, musicianship, and pedagogy from a philosophical, theoretical, and empirically-driven perspective, offering an essential overview of current scholarship while providing support for future research.

The 37 chapters in this volume consider the major aspects of the use of technology in music education. Part I examines the historical and philosophical contexts of technology in music.

This section addresses themes such as special education, cognition, experimentation, audience engagement, gender, and information and communication technologies. Part II.



