Worked examples

The folder slab.experiments contains the full code from actual psychoacoustic experiments in our lab. We provide this folder mainly to make the code available and to enable easy replication. The examples are well documented and should give you an idea of the typical structure of such experiments. To run an experiment, import it from slab.experiments:

from slab.experiments import motion_speed
motion_speed.main_experiment(subject='test')

Currently available are:

slab.experiments.room_voice_interference.main_experiment(subject=None, do_jnd=True, do_interference=True)

Interference between room and voice processing. Pre-recorded voice recordings are presented in different simulated rooms (the large stimulus set is not included). Just-noticeable differences for changes in room volume and voice parameters (glottal pulse rate and vocal tract length) are measured first; then 3-alternative forced-choice trials are presented with the reference in a larger room. Does a simultaneous voice change impede the detection of the room change? The experiment requires a set of recorded spoken-word stimuli, each of which was manipulated offline to change speaker identity (using the STRAIGHT algorithm) and then run through a room acoustics simulation to add reverberation consistent with rooms of different sizes. The filenames of the recordings encode the word and the voice and room parameters, so that the correct file is loaded for presentation.

This experiment showcases participant data handling, AFC trials, and prerecorded stimulus handling, among other techniques.
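The actual filename scheme of the recordings is not part of slab; as an illustration of encoding stimulus parameters in filenames, a minimal sketch could parse hypothetical names like 'table_gpr120_vtl1.05_room40.wav' (the pattern and field names here are assumptions, not the experiment's real scheme):

```python
import re
from collections import namedtuple

# Hypothetical filename scheme: word_gpr<rate>_vtl<length>_room<size>.wav
StimulusInfo = namedtuple('StimulusInfo', ['word', 'gpr', 'vtl', 'room'])
pattern = re.compile(r'(\w+)_gpr(\d+)_vtl([\d.]+)_room(\d+)\.wav')

def parse_filename(name):
    """Extract word, glottal pulse rate, vocal tract length, and room size."""
    match = pattern.fullmatch(name)
    if match is None:
        raise ValueError(f'Unrecognized stimulus filename: {name}')
    word, gpr, vtl, room = match.groups()
    return StimulusInfo(word, int(gpr), float(vtl), int(room))
```

With a parser like this, the experiment script can select the file matching the trial's voice and room condition instead of hard-coding paths.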

slab.experiments.motion_speed.main_experiment(subject=None)

A complex, spatially extended moving sound is generated ('moving_gaussian'). This stimulus simulates the acoustics of a free-field loudspeaker arc. A Gaussian level profile moves from left to right or right to left across the virtual speaker array; the speed of the movement and the modulation depth (across space) can be varied. Detection thresholds for motion direction are measured at different motion speeds. Then the experiment measures how adaptation to a long moving adapter at one speed affects the detectability of motion at other speeds.

This experiment showcases complex stimulus generation and staircases, among other techniques.
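The idea of a Gaussian profile moving across a virtual speaker array can be sketched with plain NumPy (this is an illustrative sketch, not the slab implementation; the sampling rate, array geometry, and parameter values are assumptions):

```python
import numpy as np

fs = 8000              # sampling rate in Hz (assumption)
duration = 1.0         # stimulus duration in seconds
n_speakers = 9         # virtual speakers spanning -90 to 90 degrees (assumption)
speed = 180            # motion speed in degrees per second, left to right
sigma = 20             # width of the Gaussian profile in degrees
depth = 1.0            # modulation depth across space (0 = flat, 1 = full)

t = np.arange(int(fs * duration)) / fs
speaker_pos = np.linspace(-90, 90, n_speakers)
center = -90 + speed * t                 # moving center of the Gaussian profile
# amplitude envelope for each speaker over time: shape (n_speakers, n_samples)
envelopes = np.exp(-0.5 * ((speaker_pos[:, None] - center[None, :]) / sigma) ** 2)
envelopes = 1 - depth + depth * envelopes  # apply the spatial modulation depth
noise = np.random.randn(n_speakers, t.size)  # independent noise per speaker
signals = envelopes * noise              # per-speaker signals for spatial rendering
```

Summing the per-speaker signals after applying each speaker's spatial cues (for instance, head-related transfer functions) would yield the binaural moving stimulus; varying `speed` and `depth` varies the motion parameters described above.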

Quick standard experiments:

Audiogram

Run a pure tone audiogram at the standard frequencies 125, 250, 500, 1000, 2000, 4000 Hz using an adaptive staircase:

import slab
from matplotlib import pyplot as plt
freqs = [125, 250, 500, 1000, 2000, 4000]
threshs = []
for frequency in freqs:
    stimulus = slab.Sound.tone(frequency=frequency, duration=0.5)
    stairs = slab.Staircase(start_val=50, n_reversals=18)
    print(f'Starting staircase with {frequency} Hz:')
    for level in stairs:
        stimulus.level = level
        stairs.present_tone_trial(stimulus)
        stairs.print_trial_info()
    threshs.append(stairs.threshold())
    print(f'Threshold at {frequency} Hz: {stairs.threshold()} dB')
plt.plot(freqs, threshs) # plot the audiogram

Temporal modulation transfer function

Measure temporal modulation transfer functions via detection thresholds for amplitude modulation. The test parameters replicate Fig. 2 in Viemeister [1979]: sinusoidal modulations from 2 to 4000 Hz in a 77-dB wideband noise carrier, measured with an adaptive staircase.

import slab
from matplotlib import pyplot as plt
mod_freqs = [2, 4, 8, 16, 32, 64, 125, 250, 500, 1000, 2000, 4000]
threshs = []
base_stimulus = slab.Sound.pinknoise(duration=1.)
base_stimulus.level = 77
for frequency in mod_freqs:
    stairs = slab.Staircase(start_val=0.8, n_reversals=16, step_type='db',
                            step_sizes=[10, 5], min_val=0, max_val=1, n_up=1, n_down=2)
    print(f'Starting staircase with {frequency} Hz:')
    for depth in stairs:
        stimulus = base_stimulus.am(frequency=frequency, depth=depth)
        stairs.present_afc_trial(stimulus, base_stimulus)
    threshs.append(stairs.threshold(n=14))
    print(f'Threshold at {frequency} Hz: {stairs.threshold(n=14)} modulation depth')
plt.plot(mod_freqs, threshs) # plot the transfer function