Audio applications are ubiquitous nowadays and play a significant part in our daily lives. Like any software, audio apps have their own specifications, which have to be tested thoroughly so that playback works flawlessly and the user experience stays excellent.
In this article we discuss the theoretical aspects of audio files and audio testing, building the foundation every tester needs before starting the actual process of testing devices and applications that render audio files.
We shall start by talking a bit about the audio apps that can be installed on mobile devices like a tablet or a phone, on different operating systems (Android, iOS etc.). These apps must be able to recognize and play quite a few file formats, ranging from uncompressed audio formats (AIFF and WAV) and lossless audio formats (ALAC, FLAC, WMA Lossless) to compressed, lossy audio formats like MP3 and AAC. Which formats are supported depends entirely on what the owner of the app specifies and actually wants from the end product.
All audio files come with their own characteristics, among which the most important would be the sample rate or frequency, bit rate and bit depth.
Before we go any further, we should offer a few definitions about these technical terms and expressions.
Let’s start with the obvious question: what is an audio file format? Basically, it is a standardized way to encode sound information for storage in a computer file. Each audio format has its own specifications and limitations regarding the three parameters described below.
The sample rate can be defined as how many times per second a sound is sampled. Sampling means taking snapshots of (“reading”) an audio signal at a very fast rate, measured in Hertz (Hz): 1 Hz = 1 sample per second, so 44100 Hz means 44100 samples per second. The higher the sample rate, the more faithfully the original signal is captured, although the perceived quality also depends on the bit rate and bit depth.
The bit rate simply refers to how much data (bits) is transmitted in a given amount of time. 320 kbps (kilobits per second) means that 320000 bits are transmitted in a second.
The bit depth defines the number of bits of information stored in each sample (see sample rate above). A bit depth of 16 or 24 bit means that the device uses 16 or 24 bits per sample.
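For uncompressed audio, the three parameters above are tied together by simple arithmetic: the bit rate of an LPCM stream is just sample rate × bit depth × channel count. The sketch below illustrates this; the function names are ours, not part of any audio API.

```python
# Size/bit-rate arithmetic for uncompressed (LPCM) audio.
# Function names here are illustrative, not part of any library.

def lpcm_bit_rate(sample_rate_hz: int, bit_depth: int, channels: int) -> int:
    """Bits per second of an uncompressed audio stream."""
    return sample_rate_hz * bit_depth * channels

def lpcm_file_size_bytes(sample_rate_hz: int, bit_depth: int,
                         channels: int, seconds: int) -> int:
    """Approximate payload size in bytes (ignores container headers)."""
    return lpcm_bit_rate(sample_rate_hz, bit_depth, channels) * seconds // 8

# CD-quality stereo: 44100 Hz, 16 bit, 2 channels
print(lpcm_bit_rate(44100, 16, 2))             # 1411200 bits/s (~1411 kbps)
print(lpcm_file_size_bytes(44100, 16, 2, 60))  # 10584000 bytes (~10 MB/minute)
```

Roughly 10 MB per minute of CD-quality stereo is why compressed formats exist at all.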
As apps have to face such a wide range of file types, each audio app must be able to play at least the most common of them, which would be MP3, AAC and WAV or, sometimes, FLAC.
MP3 is a file format that contains an audio stream of MPEG-1 Audio or MPEG-2 Audio encoded data. It is a lossy data compression format, meaning that during the encoding process some of the data is discarded. This is done because certain components of the sound are considered to be beyond the hearing capabilities of most humans, and because of space-efficiency concerns. A 320 kbps MP3 file can be in excess of 75% smaller than a WAV file (see below).
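The 75% figure is easy to verify with the bit rates alone, since CD-quality LPCM runs at about 1411 kbps:

```python
# Comparing a 320 kbps MP3 against CD-quality uncompressed audio.
wav_kbps = 44100 * 16 * 2 / 1000  # 1411.2 kbps for 16-bit stereo LPCM
mp3_kbps = 320
savings = 1 - mp3_kbps / wav_kbps
print(f"{savings:.0%}")  # roughly 77% smaller
```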
AAC (Advanced Audio Coding) was designed to succeed the MP3 format by generally having a higher sound quality at the same bit rate. It is still a lossy data compression format like its predecessor. Though MP3 is an almost universal standard format, AAC is the default audio format for iOS-based devices and others.
WAV stands for Waveform Audio File Format and was developed by Microsoft and IBM for raw and typically uncompressed audio. Its usual encoding format (linear pulse-code modulation, or LPCM) retains all of the samples of an audio track, which is why professional users and audio experts commonly use WAV-LPCM audio for maximum quality.
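Because WAV is such a simple container, its header parameters (channels, bit depth, sample rate) can be inspected directly. A minimal sketch using Python's standard-library `wave` module, which writes a short silent file and reads its header back:

```python
import wave

# Write a one-second silent stereo test file ("test.wav" is an arbitrary name).
with wave.open("test.wav", "wb") as w:
    w.setnchannels(2)       # stereo
    w.setsampwidth(2)       # 2 bytes per sample = 16 bit
    w.setframerate(44100)   # 44.1 kHz
    w.writeframes(b"\x00\x00\x00\x00" * 44100)  # one second of silence

# Read the header parameters back.
with wave.open("test.wav", "rb") as w:
    channels, bits, rate = w.getnchannels(), w.getsampwidth() * 8, w.getframerate()

print(channels, bits, rate)  # 2 16 44100
```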
FLAC (Free Lossless Audio Codec) is another lossless audio encoding format but it is an open format with royalty-free licensing.
Finally, audio files can have one, two or more audio channels (mono, stereo and multi-channeled files), meaning that they can be played through devices that have one speaker, two speakers or surround speakers. But that doesn’t mean that a mono audio file cannot be played through a surround system or a multi-channeled file cannot be played through a single speaker. It only means that these files are designed specifically, not exclusively, for a certain amount of speakers. It also largely depends on the hardware used to distribute correctly each channel to each speaker, in the case of files with more than one channel. Some file formats support more than two channels, but not all of them, as most are designed to output stereo sound given that we, as humans, have only two ears.
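Playing a stereo file through a single speaker typically works because the player downmixes the channels. One common downmix strategy, sketched here with made-up function names, is simply averaging the interleaved left and right samples:

```python
import array

def stereo_to_mono(samples: array.array) -> array.array:
    """Average interleaved L/R 16-bit samples into a single mono channel.
    Illustrative sketch only; real players may weight or clip differently."""
    mono = array.array("h")
    for i in range(0, len(samples), 2):
        mono.append((samples[i] + samples[i + 1]) // 2)
    return mono

stereo = array.array("h", [100, 200, -100, 300])  # two L/R frames
print(list(stereo_to_mono(stereo)))  # [150, 100]
```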
We know this amount of tech talk can be a bit overwhelming, but these notions are required to understand what comes next.
Testing begins by ensuring we have the right test tracks to play in the app. The most common sample rates for MP3 files are 44100 Hz (44.1 kHz) and 48000 Hz (48 kHz), while the bit rates are usually between 256 and 320 kbps. It is best to use files with a constant bit rate, as they behave more predictably during testing. The bit depth for such files is usually 16 or 24 bit, rarely reaching 32 bit.
To give an example of the efforts necessary to ensure that most aspects of playback have been covered, a tester should use MP3 test tracks that have the following specifications:
- 16 bit, 44100 Hz, 256 kbps;
- 16 bit, 44100 Hz, 320 kbps;
- 16 bit, 48000 Hz, 256 kbps;
- 16 bit, 48000 Hz, 320 kbps;
- 24 bit, 48000 Hz, 320 kbps.
We will come back to them later on.
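Source material for such test tracks can even be generated programmatically. The sketch below writes a 16-bit sine test tone as WAV at each sample rate; the helper name and file names are ours. Converting these to MP3 at 256 or 320 kbps would require an external encoder (e.g. LAME), which is outside the standard library.

```python
import math
import struct
import wave

def write_test_tone(path, sample_rate, seconds=1, freq=440.0, amplitude=0.5):
    """Write a mono 16-bit LPCM sine tone (hypothetical helper for test assets)."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16 bit
        w.setframerate(sample_rate)
        peak = int(amplitude * 32767)
        frames = b"".join(
            struct.pack("<h", int(peak * math.sin(2 * math.pi * freq * n / sample_rate)))
            for n in range(sample_rate * seconds)
        )
        w.writeframes(frames)

# One tone per sample rate used in the spec list above.
for rate in (44100, 48000):
    write_test_tone(f"tone_{rate}.wav", rate)

params = wave.open("tone_44100.wav", "rb").getparams()
print(params.framerate, params.nframes)  # 44100 44100
```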
As the main purpose of an audio app is to render audio files as sound, testing such applications should always involve a good pair of headphones, since playback must work beyond the device’s internal speakers. If the device also has Bluetooth capabilities, the tester should have a Bluetooth speaker at hand. The tester should also know exactly which types of devices the owner wants the app to run on. Usually, audio applications are meant for most Android or iOS devices, but some can have specific filters or effects designed for particular tablets or phones.
In the second part of this article we will discuss the practical aspects of audio testing in more detail: how to create the necessary audio files, how to use them in apps, and what to look for when running tests with them.