Compression in Digital Audio - Implications? (Long!)

DevillEars
Apart from the sampling rate and sample size, there are two key factors that influence the perceived sound quality of music reproduced from a digital source medium:

a) Data loss errors (also referred to as bit drop-out)
b) Timing errors (also known as 'jitter')

In the early days of CD, the general opinion was that only data loss errors could negatively impact CD sound quality, and that error detection and correction mechanisms such as CIRC were adequate to all but eliminate them - which resulted in the infamous claim of "Perfect Sound, Forever".  Within a few years, it became apparent that jitter was as important as, if not more important than, data loss in achieving good sound quality, and efforts were dedicated to bringing timing errors down below 130 picoseconds (the limit of measurement at the time).
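To give a feel for why picosecond-level timing matters, the worst-case noise contribution of sampling-clock jitter on a full-scale sine wave is commonly estimated as SNR = -20*log10(2*pi*f*tj).  A quick back-of-the-envelope sketch of that rule of thumb (my own illustration, not a measurement):

```python
import math

def jitter_limited_snr_db(freq_hz: float, jitter_rms_s: float) -> float:
    """Approximate SNR ceiling imposed by RMS sampling-clock jitter
    on a full-scale sine wave: SNR = -20*log10(2*pi*f*tj)."""
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_rms_s)

# 130 ps of jitter - roughly the measurement limit mentioned above -
# evaluated at two audio frequencies.
print(f"{jitter_limited_snr_db(20_000, 130e-12):.1f} dB")  # ~95.7 dB at 20 kHz
print(f"{jitter_limited_snr_db(10_000, 130e-12):.1f} dB")  # ~101.8 dB at 10 kHz
```

At 20 kHz, 130 ps of jitter caps the achievable SNR at roughly 96 dB, which is in the same neighbourhood as the dynamic range usually quoted for 16-bit CD audio - hence why figures of that order were treated as meaningful targets.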

In the more than 25 years since CD's introduction, we have seen great improvements in CD sound quality, achieved by reducing data loss errors at source (so that less CIRC correction 'effort' is required) and by steadily reducing jitter.

Then along came 'compressed digital audio' formats.....

Data compression emerged from the IT industry, where it was aimed at reducing data storage requirements and minimising the bandwidth consumed when moving data across networks.  Initial efforts produced "lossy" compression, in which the algorithms used could result in lost data bits, and error detection/correction schemes were introduced to minimise (but not eliminate) the resulting data loss.  Obviously, any compression mechanism that could not guarantee accuracy was not viable for financial data (in either storage or transmission), so 'lossless' compression algorithms were introduced that guaranteed bit-for-bit accuracy for financial transaction storage and transmission.
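To make the 'lossless' guarantee concrete, here is a minimal sketch using Python's standard zlib module (a general-purpose lossless codec, not an audio codec) showing that decompression hands back the input byte for byte:

```python
import zlib

# Any byte sequence will do; imagine this is a block of 16-bit PCM samples.
original = bytes(range(256)) * 1000  # 256,000 bytes of repetitive "data"

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

# A lossless codec must return the input exactly - every bit intact.
assert restored == original
print(f"original:   {len(original)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(original):.1f}% of original)")
```

That exact-recovery property is what separates lossless codecs from MP3-style lossy coding: the decompressed output is identical to the input, not merely 'close enough'.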

As these things do, these compression techniques crossed-over from IT to digital audio....

Both Philips (with DCC) and Sony (with MiniDisc) initially invested heavily in technologies to minimise the impact of 'lossy' compression on the sound quality of these two source media.  These technologies were based on psychoacoustics (Philips with PASC - 'Precision Adaptive Sub-band Coding' - and Sony with ATRAC - 'Adaptive TRansform Acoustic Coding') - a belief system which held that, under certain conditions, the missing sonic elements would be masked and therefore inaudible.  DCC did not last long; MiniDisc lasted somewhat longer.

The introduction of MP3 (a lossy compression format), together with the facilities offered by the Internet to search for and download music, heralded a resurgence of compressed digital audio.  Apple, conscious of the past failures of compressed formats and wanting to differentiate its offering, selected a lossless codec (ALAC).  Other lossless compression-based codecs followed.

This marked the arrival of music on a lossless compressed digital audio format, and this is where the story really starts...

JUST HOW LOSSLESS IS LOSSLESS???

ALAC is claimed to achieve between 40% and 60% compression and, on decompression, to return ALL OF THE DATA (as one would expect from a lossless compression/decompression algorithm, or 'codec').
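One way to test that claim for yourself - a sketch rather than a definitive procedure, and it assumes ffmpeg (with its built-in ALAC encoder) is installed and that the hypothetical file names below are replaced with your own 16-bit rip - is to encode a WAV file to ALAC, decode it back to raw PCM, and compare the decoded samples against the original:

```python
import hashlib
import subprocess

def pcm_md5(path: str) -> str:
    """Decode an audio file to raw 16-bit PCM and return an MD5 of the samples."""
    raw = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path,
         "-f", "s16le", "-acodec", "pcm_s16le", "-"],
        check=True, capture_output=True).stdout
    return hashlib.md5(raw).hexdigest()

# Hypothetical file names - substitute your own (16-bit) source file.
subprocess.run(["ffmpeg", "-v", "error", "-y", "-i", "original.wav",
                "-acodec", "alac", "roundtrip.m4a"], check=True)

print("original :", pcm_md5("original.wav"))
print("roundtrip:", pcm_md5("roundtrip.m4a"))
# Matching hashes mean every sample came back unchanged.
```

If the two hashes match, the data really is all there; whether the timing integrity survives the extra decoding work at playback time is, of course, the question posed below.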

SO, DATA LOSS IS ELIMINATED IN THE CODEC, BUT THAT'S ONLY ONE OF THE TWO FACTORS THAT IMPACT ON DIGITAL SOUND QUALITY!

THE BIG QUESTION, FROM MY PERSPECTIVE, IS JUST HOW MUCH OF THE TIMING INTEGRITY IS RETAINED???

 
