Portfolio comes with a set of predefined instrument templates that are stored on the system disc in the audio directory. These default templates are described individually in Instrument Templates, in the 3DO Music and Audio Programmer's Reference. Because these instruments are all designed to run on the DSP, their names all end with the extension .dsp.
A task can create a surprisingly large variety of instrumental sounds using the default instrument templates. To do so, the task creates different instruments, connects them so that one instrument can process the output of another, and then sets knobs accordingly. If you want more variety within a single instrument, you can create your own instrument templates using the development tool ARIA. Custom instrument templates can be stored wherever you wish.
Table 1: Sampled sound instruments.

Instrument Name         Sample Size  Sample Storage Format    Playback Sample Rate  Stereo/Mono
----------------------  -----------  -----------------------  --------------------  -----------
sampler.dsp             16-bit       Literal                  Variable              Mono
samplerenv.dsp          16-bit                                Variable
samplermod.dsp          16-bit                                Variable
varmono8.dsp            8-bit        Literal                  Variable              Mono
varmono8_s.dsp          8-bit                                 Variable              Mono
varmono16.dsp           16-bit       Literal                  Variable              Mono
fixedmonosample.dsp     16-bit       Literal                  44100 Hz              Mono
fixedmono8.dsp          8-bit        Literal                  44100 Hz              Mono
fixedstereosample.dsp   16-bit       Literal                  44100 Hz              Stereo
fixedstereo16swap.dsp   16-bit       Literal (little endian)  44100 Hz              Stereo
fixedstereo8.dsp        8-bit                                 44100 Hz              Stereo
halfmonosample.dsp      16-bit       Literal                  22050 Hz              Mono
halfmono8.dsp           8-bit        Literal                  22050 Hz              Mono
halfstereo8.dsp         8-bit        Literal                  22050 Hz              Stereo
halfstereosample.dsp    16-bit                                22050 Hz              Stereo
dcsqxdmono.dsp          8-bit        SQXD 2:1                 44100 Hz              Mono
dcsqxdstereo.dsp        8-bit        SQXD 2:1                 44100 Hz              Stereo
dcsqxdhalfmono.dsp      8-bit        SQXD 2:1                 22050 Hz              Mono
dcsqxdhalfstereo.dsp    8-bit        SQXD 2:1                 22050 Hz              Stereo
dcsqxdvarmono.dsp       16-bit       SQXD                     Variable              Mono
adpcmvarmono.dsp        16-bit       ADPCM                    Variable              Mono
adpcmmono.dsp           4-bit        ADPCM Intel/DVI 4:1      44100 Hz              Mono
adpcmhalfmono.dsp       4-bit        ADPCM Intel/DVI 4:1      22050 Hz              Mono
Although sample data used by the Audio folio is typically stored in the AIFC format (an unsupported variation of Apple Computer Inc.'s AIFF format), the data stored in an AIFC file can be compressed with several different compression formats, or it can be uncompressed, using literal sample values. The dcsqxd instruments are designed to play sample data compressed with square/xact/delta compression; the adpcm instruments are designed to play sample data compressed with ADPCM compression; the other instruments are designed to play literal data.
A sampled sound instrument's input sample size is the size of the sample it expects to read: 4 bits, 8 bits, or 16 bits. The instrument's output is in 16-bit samples, so if it reads 8-bit original samples it must convert them to 16-bit values. The instruments designed to read literal sample data simply append 8 less-significant zero bits to the 8-bit value (10010111 becomes 10010111 00000000, for example). The instruments designed to read sample data compressed in square/xact/delta format use a decompression technique that converts the 8-bit samples to 16-bit values with significant information in both the high- and low-order bytes.
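The literal 8-bit-to-16-bit conversion described above amounts to a left shift by eight bits. A minimal sketch (the function name is illustrative, not an Audio folio call):

```c
#include <stdint.h>

/* Convert a signed 8-bit literal sample to 16 bits by appending
   eight less-significant zero bits, as the literal-format
   instruments do: 10010111 becomes 10010111 00000000. */
int16_t expand8to16(int8_t sample)
{
    return (int16_t)((int16_t)sample << 8);
}
```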
Every sampled sound instrument has an output of 44,100 16-bit samples per second (44,100 Hz), a frequency designed for high-fidelity sound reproduction. Some sample data may have been originally recorded at 22,050 Hz, a frequency with less fidelity that requires only half the storage space for samples. If the instruments read those half-frequency tables at a 44,100 Hz playback rate, the sampled sound plays twice as fast and sounds an octave higher than it was recorded. To compensate, several sampled sound instruments have a playback sample rate of 22,050 Hz. When they read 22,050 Hz recorded sample data, they interpolate an intermediate value between each sample read to produce a 44,100 Hz final audio signal that does not change the original sample's pitch or duration.
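The half-rate playback scheme can be sketched as simple linear interpolation: for each 22,050 Hz source sample, the instrument emits the sample plus a value halfway to the next sample, producing a 44,100 Hz stream without changing pitch or duration. This is an illustrative sketch, not the DSP's actual microcode:

```c
#include <stdint.h>

/* Upsample a 22050 Hz buffer of n samples to 44100 Hz by inserting
   one interpolated sample between each pair of source samples.
   out must hold 2*n samples; the last sample is repeated. */
void upsample2x(const int16_t *in, int n, int16_t *out)
{
    for (int i = 0; i < n; i++) {
        int16_t next = (i + 1 < n) ? in[i + 1] : in[i];
        out[2 * i]     = in[i];
        out[2 * i + 1] = (int16_t)(((int32_t)in[i] + next) / 2);
    }
}
```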
The instruments sampler.dsp, varmono8.dsp, varmono16.dsp, dcsqxdvarmono.dsp, and adpcmvarmono.dsp have variable playback sample rates. By playing sample data at a rate higher or lower than its original recording rate, these instruments can shift the pitch of the sample above or below the pitch at which it was originally recorded. Keep in mind that you can also use the other instruments to change the original pitch if their fixed playback sample rate is different from the original recording's sample rate. For example, using a 44,100 Hz instrument to play back a voice recording made at 22,050 Hz produces voices that sound an octave higher and speak twice as fast, a cheap way to produce chipmunk voices.
Portfolio sampled sound voices come in mono and stereo varieties. Mono instruments read sample data so that all samples go in succession to a single output channel. Stereo instruments read sample data so that odd samples go in succession to the left output and even samples go in succession to the right output.
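The stereo reading order described above can be sketched as a deinterleave loop: within each stereo frame, the first (odd-numbered) sample goes to the left output and the second (even-numbered) sample goes to the right. The function name is illustrative:

```c
#include <stdint.h>

/* Split an interleaved stereo stream the way the stereo instruments
   read it: per frame, the first sample goes to the left output and
   the second to the right. in must hold 2*frames samples. */
void deinterleave(const int16_t *in, int frames,
                  int16_t *left, int16_t *right)
{
    for (int i = 0; i < frames; i++) {
        left[i]  = in[2 * i];
        right[i] = in[2 * i + 1];
    }
}
```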
The Music library (described in Music Library Calls, in the 3DO Music and Audio Programmer's Reference) includes a call named SelectSamplePlayer() that returns the name of an appropriate sampled sound instrument to play a given sample.
Sound Synthesis Instruments. Portfolio's sound synthesis instruments generate their own audio signals instead of reading them from sample data. Those instruments are:

triangle.dsp generates a triangle-wave signal.

sawtooth.dsp generates a sawtooth-wave signal (which has a grittier sound than a triangle-wave signal).

sawenv.dsp generates a sawtooth-wave signal and includes a built-in envelope player.

sawenvsvfenv.dsp generates a sawtooth-wave signal and includes two built-in envelope players.

pulser.dsp generates a pulse-wave signal that is modulated by a triangle wave, creating a siren sound.

noise.dsp generates a white-noise signal.

rednoise.dsp generates a grittier noise signal than noise.dsp does.

filterednoise.dsp generates a filtered noise signal. Changing the frequency of the filter moves the frequency range of the noise up or down, which is useful for producing wind or other whooshing sounds.

impulse.dsp generates impulse waveforms.

pulse.dsp generates pulse waveforms.

square.dsp generates square waveforms.
Mixers. If two or more mixer instruments operate, their outputs to the DAC are added together. If an added sample frame goes over 0x7FFF, it is clipped to 0x7FFF, which can cause severe distortion, so it is important to keep the final result at an acceptable level. System amplitude allocation is a technique that helps a task avoid clipping distortion (see Allocating Amplitude).
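The summing and clipping behavior can be sketched for one sample frame as follows. The positive clip at 0x7FFF is from the text above; the symmetric negative clamp at -0x8000 is an assumption about the 16-bit range, and the function name is illustrative:

```c
#include <stdint.h>

/* Sum n instrument outputs for one sample frame and clip the result
   to the 16-bit sample range, as the DAC path clips at 0x7FFF. */
int16_t mixAndClip(const int32_t *inputs, int n)
{
    int32_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += inputs[i];
    if (sum >  0x7FFF) sum =  0x7FFF;   /* clip: audible distortion */
    if (sum < -0x8000) sum = -0x8000;   /* assumed symmetric clamp  */
    return (int16_t)sum;
}
```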
These mixers accept audio signals and feed a final audio signal to the DAC:
directout.dsp accepts a left input and a right input, and feeds those audio signals directly to the left and right DAC channels.

mixer2x2.dsp accepts two audio inputs and mixes them into a stereo signal. It feeds the stereo signal directly to the left and right DAC channels.

mixer8x2.dsp accepts eight audio inputs and mixes them into a stereo signal. It sends the stereo signal directly to the left and right DAC channels.

mixer8x2amp.dsp is like mixer8x2.dsp, but adds a master gain control.

mixer12x2.dsp accepts 12 audio inputs and mixes them into a stereo signal. It sends the stereo signal directly to the left and right DAC channels.
Keep in mind that you must connect an instrument to a mixer before the instrument can be heard. Only mixers send their output to the DAC, where it is turned into an analog audio signal for reproduction.
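A hedged sketch of connecting an instrument to a mixer so it can be heard, using Audio folio calls. The port name "Output", the mixer input name "Input0", and the knob names "LeftGain0"/"RightGain0" are assumptions based on common Audio folio conventions; check each instrument's specification for the exact names:

```c
/* Load a sound source and a mixer, wire them together, set the
   mixer gain for that input, and start the source playing. */
Item saw   = LoadInstrument("sawtooth.dsp", 0, 100);
Item mixer = LoadInstrument("mixer2x2.dsp", 0, 100);

ConnectInstruments(saw, "Output", mixer, "Input0");

Item left  = GrabKnob(mixer, "LeftGain0");
Item right = GrabKnob(mixer, "RightGain0");
TweakKnob(left,  0x2000);   /* modest gain leaves amplitude headroom */
TweakKnob(right, 0x2000);

StartInstrument(saw, NULL);
```

Error checking is omitted for brevity; each call that returns an Item should be tested for a negative (error) result.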
Submixers. Submixers, unlike mixers, do not send their mixed stereo signal directly to the DAC. Instead, they provide a left output and a right output that can be sent to another instrument. Portfolio submixer instruments are:

submixer2x2.dsp accepts two inputs and mixes them into two outputs.

submixer4x2.dsp accepts four inputs and mixes them into two outputs.

submixer8x2.dsp accepts eight inputs and mixes them into two outputs.
Effects Instruments. Effects instruments typically accept an audio signal, alter it, and pass the altered signal out. Delay-effects instruments accept an audio signal and pass it out through DMA to memory, where it can be altered by another instrument. Portfolio's effects instruments are:

deemphcd.dsp is a standard "feed-forward, feed-back" filter designed by Ayabe-san of MEI for CD de-emphasis.

svfilter.dsp accepts a signal, filters it, and sends the result to its output. It is a state-variable filter with knobs that control frequency, resonance, and amplitude, and it has lowpass, bandpass, and highpass outputs.

delaymono.dsp accepts a signal and writes it directly to a sample buffer, where it can be reread to create a reverb loop. (This is discussed in Adding Reverberation.)

delay1tap.dsp writes samples to an output, then reads them back on another FIFO and mixes them. It has a separate "effects send" mix and a separate output mix.

delaystereo.dsp writes input to an output FIFO. It is used as a building block for reverberation and echo effects.
Portfolio provides the following important dedicated control-signal instruments:
envelope.dsp accepts envelope contour values through its knobs. It uses the values to create an envelope-control output signal.

pulse_lfo.dsp uses extended-precision arithmetic to reach lower frequencies than pulse.dsp, and it has better resolution at the same frequency. Its frequency range is 256 times lower than that of its high-frequency counterpart. It is useful as a modulation source for controlling other instruments, or for bass instruments.

add.dsp performs a signed addition of its two inputs.

multiply.dsp accepts two signals connected to its two knobs, multiplies one signal by the other, and sends the result through its output. The result is that of a traditional ring modulator. (Typically an audio signal is connected to one knob and a much lower frequency control signal is connected to the other knob.)

timesplus.dsp accepts signals connected to each of its three knobs (A, B, and C). It multiplies the A signal by the B signal, adds the C signal to the result, and sends the final result through its output. (Typically an audio signal is connected to A, and constant control values are connected to B and C.)

subtract.dsp accepts two inputs and outputs the difference between the two. The output is clipped.

minimum.dsp accepts two inputs and outputs the smaller of the two. This can be used for clipping.

maximum.dsp accepts two inputs and outputs the larger of the two. This can be used for clipping.

envfollower.dsp tracks the positive peaks of an input signal. It outputs a fairly smooth signal that can be used to control other signals.

randomhold.dsp generates new random numbers at a given rate and holds each value steady until a new number is chosen.

triangle_lfo.dsp is a triangle-wave generator that uses extended-precision arithmetic to reach lower frequencies than triangle.dsp, with better resolution at the same frequency.

square_lfo.dsp is a square-wave generator that uses extended-precision arithmetic to reach lower frequencies than square.dsp, with better resolution at the same frequency.

envelope.dsp is used with instruments that do not contain their own envelope player. Once connected to another instrument's knob, envelope.dsp applies an envelope to the connected instrument by changing the knob values.

Other Instruments.
directin.dsp outputs the stereo Audio Input signal in Anvil hardware. Each task that intends to use this instrument must successfully enable audio input with EnableAudioInput() before the instrument can be loaded.

tapoutput.dsp permits reading the accumulated stereo output from all currently running output instruments.

benchmark.dsp outputs the current tick count.
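The arithmetic performed by the control-signal instruments described earlier is simple to state in C. A sketch of what timesplus.dsp computes (A times B plus C) and of how minimum.dsp and maximum.dsp can be combined to clamp a signal between bounds; the scaling and function names are illustrative, not the DSP's fixed-point microcode:

```c
#include <stdint.h>

/* timesplus.dsp: multiply A by B, add C, output the result. */
int32_t timesPlus(int32_t a, int32_t b, int32_t c)
{
    return a * b + c;
}

/* Clamp x into [lo, hi] the way minimum.dsp and maximum.dsp can be
   chained: maximum(lo, minimum(x, hi)). */
int32_t clampWithMinMax(int32_t x, int32_t lo, int32_t hi)
{
    int32_t m = (x < hi) ? x : hi;   /* minimum.dsp */
    return (m > lo) ? m : lo;        /* maximum.dsp */
}
```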
Instrument Resources. The specifications also list the resources required for each instrument: memory requirements, hardware requirements, and a value measured in DSP ticks. DSP ticks are DSP time units used during each frame of the DSP output. (A DSP frame is the time the DSP takes to put out one pair of samples to the DAC, which usually takes place 44,100 times per second.) The DSP can, at this writing, execute 565 ticks per 44,100 Hz frame.
The Audio folio allocates resources as instruments are created from templates and it totals the DSP ticks required for each instrument. If the total number of ticks goes above the possible frame total or the system runs out of other resources necessary for instrument allocation, the Audio folio refuses to allocate any more instruments. It is important to keep track of how many DSP ticks you are using with each instrument creation, because instruments are most likely to use up DSP ticks before using up other system resources.
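A task can do the same bookkeeping itself before creating instruments. The following sketch checks a list of planned instruments against the per-frame tick budget; the per-instrument tick costs are hypothetical (the real values appear in each instrument's specification):

```c
/* DSP tick budget per 44,100 Hz frame, per the text above. */
#define DSP_TICKS_PER_FRAME 565

typedef struct { const char *name; int ticks; } InsCost;

/* Returns 1 if the listed instruments fit in one DSP frame, 0 if
   not; optionally reports the total tick count via totalOut. */
int fitsTickBudget(const InsCost *list, int n, int *totalOut)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += list[i].ticks;
    if (totalOut)
        *totalOut = total;
    return total <= DSP_TICKS_PER_FRAME;
}
```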
Item LoadInsTemplate( char *Name, Item AudioDevice )

LoadInsTemplate() accepts two arguments: *Name, which points to a string containing the filename of the file that contains the instrument template; and AudioDevice, which is the item number of the device on which you want the instrument to be played. Pass 0 for the AudioDevice number if you want to use the system audio device, currently the DSP. In this release, the DSP is the only device available for instrument playback.

When the call executes, it uses the specified instrument template file to create an instrument template item. The call returns the item number of the template if successful; if unsuccessful, it returns a negative number (an error code).
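A hedged sketch of the call in use, including the negative-result error check described above (the error-reporting style is illustrative):

```c
/* Load the sawtooth template on the system DSP (device 0). */
Item sawTmpl = LoadInsTemplate("sawtooth.dsp", 0);
if (sawTmpl < 0)
{
    /* A negative return value is an Audio folio error code. */
    PrintError(NULL, "load instrument template", "sawtooth.dsp", sawTmpl);
}
```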
Item DefineInsTemplate( uint8 *Definition, int32 NumBytes, Item Device, char *Name )

DefineInsTemplate() accepts four arguments: *Definition, a pointer to the beginning of the instrument template file image; NumBytes, the size of the instrument template file image in bytes; Device, the item number of the device on which you want the instrument to be played; and *Name, a pointer to the name of the instrument template file image. At this writing, you should use 0 as the device number to specify the DSP, the only audio device currently available.
When executed, DefineInsTemplate() uses the instrument template file image to create an instrument template item. If successful, it returns the item number of the instrument template. If unsuccessful, it returns a negative value (an error code).
Item CreateInstrument( Item InsTemplate, const TagArg *tagList )

CreateInstrument() creates an instrument item defined by the instrument template and allocates the DSP and system resources the instrument requires. The call returns the item number of the new instrument if successful; if unsuccessful, it returns a negative value (an error code).

Note that you can create as many instruments as you like from a single loaded instrument template. In fact, a task can be set up to create new instruments whenever it does not have enough voices to play the desired notes. For example, if a task needs to play a four-voice chord but has created only three instruments of the appropriate kind, it can create one more instrument to play the chord.
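A hedged sketch of creating several voices from one loaded template, as the paragraph above describes. Error handling is minimal for brevity:

```c
/* One template, four voices for a four-voice chord. */
Item tmpl = LoadInsTemplate("sawtooth.dsp", 0);
Item voice[4];
int32 i;

for (i = 0; i < 4; i++)
{
    voice[i] = CreateInstrument(tmpl, NULL);
    if (voice[i] < 0)
        break;   /* out of DSP ticks or other system resources */
}
```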
Item LoadInstrument( char *Name, Item AudioDevice, uint8 Priority )

LoadInstrument() accepts *Name, which points to a string containing the filename of the file that contains the instrument template, and AudioDevice, the item number of the device on which the instrument is to be played. Both of these arguments are the same as those used in the call LoadInsTemplate(). The third argument, Priority, is a value from 0 to 200 that sets the priority of the allocated instrument; it is the same as the Priority argument to AllocInstrument().

When LoadInstrument() executes, it loads the specified instrument template and creates an instrument from that template. It returns the item number of the instrument if successful, or a negative number (an error code) if unsuccessful.
The instrument template loaded with this call remains in memory, but there is no item number you can use to unload it from memory. To unload it, you must use the UnloadInstrument() call (see Deleting an Instrument and Its Template). Before unloading an instrument, be sure to disconnect it with the DisconnectInstruments() call (see Disconnecting One Instrument From Another). See also "CreateInstrument" in Audio Folio Calls.
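The teardown sequence just described can be sketched as follows. The port names "Output" and "Input0" are assumptions based on common Audio folio conventions, and voice and mixer stand for items obtained earlier from LoadInstrument():

```c
/* Stop the voice, disconnect it from its mixer, then unload it,
   which also frees the template loaded by LoadInstrument(). */
StopInstrument(voice, NULL);
DisconnectInstruments(voice, "Output", mixer, "Input0");
UnloadInstrument(voice);
```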