There are six interactive touchscreens where you can experiment. Put the headphones on and touch the screen to choose a language.
The Shepard tone or Shepard scale, introduced in 1964 by psychologist Roger Shepard, is the illusion of an infinitely ascending or descending scale that never seems to leave the range of one's hearing. The effect is achieved with sounds whose partials are spaced in octaves and shaped by a bell-shaped filter curve: the partials are always strongest in the middle of the spectrum and weakest at the upper and lower margins. As a result, the scale seems to rise or fall endlessly without any tone ever audibly crossing the upper or lower hearing threshold.
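The bell-shaped weighting of octave-spaced partials can be sketched in a few lines. The function below is a minimal illustration, not Shepard's exact 1964 parameters: it places partials an octave apart and assigns each an amplitude from a Gaussian curve on a log-frequency axis, so the middle partials are loudest and the outermost ones fade toward silence.

```python
import math

def shepard_partials(base_freq, num_octaves=8, sigma=1.5):
    """Amplitudes of octave-spaced partials under a bell-shaped
    (Gaussian) envelope on a log-frequency axis.
    Illustrative sketch only; parameters are arbitrary."""
    freqs = [base_freq * 2 ** k for k in range(num_octaves)]
    logs = [math.log2(f) for f in freqs]
    center = (logs[0] + logs[-1]) / 2.0     # centre of the spectrum
    amps = [math.exp(-((l - center) ** 2) / (2 * sigma ** 2)) for l in logs]
    return list(zip(freqs, amps))

partials = shepard_partials(20.0)
# Middle partials are strongest; the lowest and highest are nearly
# inaudible, so one can enter at one edge while another leaves.
```

Because the envelope is fixed while the partials glide upward (or downward) together, a partial fading in at one edge always replaces the one fading out at the other, which is what sustains the illusion.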
The ear is one of the earliest functioning sensory organs of a developing human being, active even before the heart and brain. Already between weeks 20 and 24, an auditory stimulus can trigger movement in the fetus, and from week 28 onwards a healthy fetus can be assumed to hear. It perceives signals and sound through its skin, the air, and its bones. It can hear the heartbeat and voice of the pregnant person, as well as their digestive sounds and stomach rumbling. Conversations, music, and ambient sound arrive through a filter: it is predominantly the darker, lower tones that penetrate. Vowels tend to be perceived more clearly than consonants, which is why the fetus primarily hears the speech melody rather than the percussive elements of speech.
A waveform is the visual representation of an oscillation's varying amplitude over time. The four basic periodic waveforms are sine, square, triangle, and sawtooth. In acoustics, the waveform refers to the shape of acoustic oscillations as displayed in an oscillogram, for example in the graphical user interfaces of audio software.
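The four basic waveforms can each be written as a simple function of phase (one full cycle runs from 0 to 1). This is a minimal sketch; the exact starting phase of each shape is an arbitrary convention.

```python
import math

# One cycle of each basic periodic waveform, phase in [0, 1).
# Output ranges from -1 to 1; starting phases are arbitrary choices.

def sine(phase):
    return math.sin(2 * math.pi * phase)

def square(phase):
    return 1.0 if (phase % 1.0) < 0.5 else -1.0

def triangle(phase):
    # Linear ramp down then up: 1 at phase 0, -1 at phase 0.5.
    return 4.0 * abs((phase % 1.0) - 0.5) - 1.0

def sawtooth(phase):
    # Linear ramp from -1 up to +1, then an instant jump back down.
    return 2.0 * (phase % 1.0) - 1.0
```

Sweeping the phase from 0 to 1 and plotting the output of each function reproduces the familiar shapes seen in an oscillogram.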
A filter changes the amplitude and phase of an audio signal depending on frequency. The cutoff parameter is used, for example, to reduce or remove unwanted signal components above or below the cutoff frequency. With the resonance parameter, the frequencies around the set cutoff frequency can additionally be emphasized.
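Both parameters appear in the classic Chamberlin state-variable filter, sketched below as a low-pass. This is one common digital formulation, not the only one; the parameter ranges are illustrative. Cutoff sets the corner frequency above which the signal is attenuated, and resonance boosts a narrow band around that corner.

```python
import math

def make_lowpass(cutoff_hz, resonance, sample_rate=44100):
    """Chamberlin state-variable low-pass filter (a common textbook
    form, sketched here). cutoff_hz sets the corner frequency;
    resonance (0 .. ~0.9) emphasizes frequencies around the cutoff."""
    f = 2.0 * math.sin(math.pi * cutoff_hz / sample_rate)
    q = 1.0 - resonance          # less damping -> stronger resonant peak
    low = band = 0.0
    def step(x):
        nonlocal low, band
        high = x - low - q * band    # high-pass residue
        band += f * high             # band-pass state
        low += f * band              # low-pass output
        return low
    return step
```

Feeding samples through `step` one at a time filters a signal in place; a steady (DC) input passes through unchanged, while components far above the cutoff are strongly attenuated.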
An LFO (Low Frequency Oscillator) is used to modulate a sound parameter continuously and rhythmically according to a recurring pattern that depends on the selected waveform. An LFO creates movement in the sound, making it livelier and literally more vibrant: it makes sounds "breathe," "shimmer," or "pulsate."
The human ear can perceive frequencies ranging from about 16 Hz to 18,000 Hz. The lower the frequency, the lower the sound; the higher the frequency, the higher the sound. The limits of our hearing, however, are very individual. The older we get, the more our ability to hear high frequencies decreases (statistically, we lose about 1,000 Hz every ten years). Hearing is also often impaired by music that is too loud, regardless of the musical genre. High frequencies are affected most, because their receptors are located at the beginning of the cochlea. If these fine sensory cells are damaged, certain acoustic signals are no longer transmitted to the brain.
How music sounds is influenced to a great degree by the room or place in which it is played. The main reasons for this are sound reflection and absorption. Sound propagates spherically as a sound wave and hits surfaces that partly reflect and partly absorb it; depending on the size of the space and the surrounding surfaces, the sound waves are reflected with varying intensity. In general, smooth surfaces reflect strongly, while uneven surfaces reduce reflection, particularly of the high frequencies. In room acoustics, sound waves are often treated as sound rays that, like light, are reflected at the same angle at which they arrive. In this way, sound can be directed systematically through reflection (e.g. to avoid echoes). With every propagation and reflection, part of the sound energy is lost, converted into heat. The sound that takes the shortest possible path from source to listener is called "direct sound". Soon after, the sound waves reflected from walls and ceiling arrive at the listener ("first reflections"). They are followed by an ever-growing number of further reflections, whose amplitudes decrease continually and which are perceived as a gradually fading "reverberation".
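The chain of direct sound, reflections, and fading reverberation can be imitated with a toy impulse response. The sketch below is a deliberately simplified model (a single feedback delay, with made-up numbers): the direct sound arrives first, and each round trip to a reflecting surface returns an echo attenuated by a fixed factor, standing in for the energy absorbed and converted into heat.

```python
def toy_impulse_response(length, delay_samples=1500, reflectivity=0.6):
    """Toy room impulse response: direct sound at t=0, then echoes
    whose amplitude drops by 'reflectivity' per round trip, fading
    into reverberation. Illustrative numbers, not a real room model."""
    h = [0.0] * length
    h[0] = 1.0                                   # direct sound
    for n in range(delay_samples, length):
        # Each echo is a delayed, attenuated copy of earlier sound.
        h[n] += reflectivity * h[n - delay_samples]
    return h
```

Real rooms superimpose many such delay paths of different lengths, which is why the discrete early reflections blur into a dense, continuously decaying reverberant tail.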
Human sound perception differs greatly from the animal world of sound. We can only guess how spoken word and music are received by other human beings, let alone by animals. Can a turtle hear us talk? What does a fish hear in the aquarium? Would a bat appreciate Chopin’s piano concertos? Is pop music torture to a cat’s ears? In a project started in 2002 at the Institute for Research in Music Education at the Hanover University of Music, Drama and Media, a team under the supervision of Prof. Dr. Reinhard Kopiez began studying these questions. The aim was to make the audiograms of certain species audible through methods of digital signal processing. The foundation of the project was the vast data collection in Richard R. Fay (1988): “Hearing in Vertebrates: A Psychophysics Databook”, Winnetka, IL: Hill-Fay Associates. This book is presumably the largest collection of data on vertebrates’ hearing capacity; its data, obtained through animal behavior experiments, describe the hearing thresholds of particular animals and the resolution of their directional hearing. But how does a carp hear a Beethoven symphony under water? One can only guess. Whether animals perceive sound in the same way humans do cannot be verified, but from Fay’s audiograms specific information can be drawn about the frequency spectrum perceivable by each animal. This is what Kopiez and his team managed to make audible and comprehensible with their experiments.