Home Theater Forum and Systems


Registered · 38 Posts · Discussion Starter · #1
With all the talk about how many modern recordings are very poor in regard to dynamics, distortion, and clipping, can anyone walk me through how to analyze the quality of a recording using Audacity? Are there any good tutorials you have saved?
 

Registered · 2,261 Posts
This makes for a great thought experiment, as well as a real-world one!

What makes a recording high-quality to you? Is it the same as what other people think? Can it be measured? How do you know? Do you believe there are aspects to quality that can't be measured, or is that too far down the rabbit-hole? Music "mastering" is an art/science that can, in the right hands, turn a good recording into an excellent one. Do records mastered by industry legends have measurable differences?

It's generally agreed that wide dynamic range, low noise-floor, and minimal limiting/compression (high headroom), etc. bring recorded music to life. I once researched poor recordings and believe I have links stored to many interesting articles. I can try to dig them up if you're interested.

In the meantime, you might try looking for digital clipping (flat-topped waveforms pinned at full scale) and abrupt brickwall filtering, which is easily betrayed by frequency analysis. Also look at how the amplitude swings between minimum and maximum scale. If the music shows up as a solid horizontal band, somebody ran hot compression and squashed the life out of it--made it really loud, but tore at its artistic value. Radio stations like to do that a lot. :unbelievable:
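If you'd like to automate those two checks outside of Audacity, here's a rough Python sketch (numpy assumed; the synthetic clipped sine at the end is just a stand-in for a real track you'd load yourself, e.g. via the stdlib wave module):

```python
import numpy as np

def clipped_runs(samples, threshold=0.999, min_run=3):
    """Count runs of at least min_run consecutive samples pinned at/near
    full scale -- a telltale sign of digital clipping (isolated peaks
    touching 0dBFS can be legitimate)."""
    hot = np.abs(samples) >= threshold
    runs, current = 0, 0
    for flag in hot:
        current = current + 1 if flag else 0
        if current == min_run:  # count each run once, when it reaches min_run
            runs += 1
    return runs

def spectrum_db(samples, rate):
    """Windowed magnitude spectrum in dB. An abrupt cliff well below rate/2
    betrays brickwall filtering or a lossy-codec cutoff."""
    mag = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / rate)
    return freqs, 20 * np.log10(mag + 1e-12)

# Demo on one second of a synthetic, deliberately clipped 1kHz sine:
rate = 44100
t = np.arange(rate) / rate
clipped = np.clip(1.5 * np.sin(2 * np.pi * 1000 * t), -1.0, 1.0)
print(clipped_runs(clipped) > 0)  # True -- clipping detected
```

Within Audacity itself, View > Show Clipping and Analyze > Plot Spectrum do the interactive equivalents of these two checks.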
 

Registered · 16 Posts
BlueRockinLou hit the nail on the head by posing the question of how one defines "quality". For example, virtually all popular music genres rely to a greater or lesser extent on distortion. Take the distortion away from an electric guitar and what you're left with is pretty much unrecognisable as an electric guitar! So then the question becomes: how much distortion, which types, and under what circumstances will it be perceived as good or bad? Unfortunately, when it comes to "quality" in this regard there is usually no method of measuring it, because it is a human perception rather than an intrinsic, measurable property of the actual sound waves.

It's generally agreed that wide dynamic range, low noise-floor, and minimal limiting/compression (high headroom), etc. bring recorded music to life.
Mmmm, with the exception of the "low noise floor", the other assertions in this quote are either incorrect or correct only within certain limits and/or under certain circumstances. For example, limiting/compression does not necessarily affect headroom, and within normal ranges the amount of headroom makes no difference at all. I'm assuming you know that "headroom" is the difference between the peak level of the recording and the maximum level permissible by the format. In the case of digital audio the maximum level permissible is 0dBFS, and whether the recording peaks at, say, -2dBFS or -10dBFS (i.e. has 2dB or 10dB of headroom) makes no difference. It only potentially matters at the extremes: extremely little or no headroom can, under certain circumstances, cause clipping, and a very large amount of headroom can cause noise-floor-related issues.
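To make that definition of headroom concrete, here's a minimal sketch (numpy assumed; the half-scale sine is just a stand-in signal):

```python
import numpy as np

def headroom_db(samples):
    """Headroom = distance from the recording's peak level up to 0dBFS."""
    peak = np.max(np.abs(samples))
    return -20 * np.log10(peak)  # positive dB for anything below full scale

# A signal peaking at exactly half of full scale has ~6dB of headroom:
sig = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 1000))
print(round(headroom_db(sig), 1))  # 6.0
```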

Also, a wide dynamic range (DR) is only desirable within fairly narrow limits, depending on the type/genre of music. Even for the genre which generally has the most DR (symphonic music), it would not be desirable to have a DR wider than about 60dB, which is 60dB less than the maximum perceivable DR offered by CD (the 16bit format)--roughly a factor of 1,000 in amplitude!

Let's say we made a recording using the full perceivable DR of 16bit audio (120dB). The listener would need to adjust their amp so the quietest parts of the recording were audible above the noise floor of their listening environment; let's be generous and say that noise floor is at 40dBSPL. The peak sections of this theoretical recording would therefore be at 160dBSPL (120dB above the 40dB noise floor), which is far in excess of the pain threshold, well within the range likely to cause permanent hearing damage, and not far from the levels which can cause death! Of course, it's practically impossible to build a sound system with 120dB of DR capable of 160dBSPL peak output, so a listener would simply not hear any of the quieter sections of our theoretical recording.

From this, it's hopefully obvious that the higher the noise floor of the listening environment, the smaller the desirable DR. In most cases, the noise floor of the average listening environment, coupled with the average consumer's comfortable peak listening level, dictates a DR of 40dB or less. This also explains why many radio stations traditionally apply additional compression/limiting, as a significant percentage of their listeners may be listening whilst driving, and cars are obviously a very high noise floor listening environment.
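The arithmetic behind that worked example can be laid out as a tiny sketch, using the same figures:

```python
def peak_spl(noise_floor_spl, recording_dr):
    """Peak playback level needed so the quietest passages sit
    just above the room's noise floor."""
    return noise_floor_spl + recording_dr

# The theoretical full-DR 16bit recording from the post above:
print(peak_spl(40, 120))  # 160 (dBSPL) -- far beyond the pain threshold
# A more typical domestic target:
print(peak_spl(40, 40))   # 80 (dBSPL)
```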

Adding some or even a considerable amount of limiting/compression to reduce DR is therefore generally desirable. The problem with what's called "the loudness war" is when a completely excessive amount of compression/limiting is applied, far more than would be reasonable to overcome even very poor listening environments. In some cases DR may have been reduced to as little as 12dB or even less, but even then that might not noticeably reduce quality for some genres/productions specifically designed for it.

G
 

Registered · 2,261 Posts
With all the talk about how many modern recordings are very poor in regard to dynamics, distortion, and clipping, can anyone walk me through how to analyze the quality of a recording using Audacity? Are there any good tutorials you have saved?
An older tutorial, but may be helpful.

Info on digital audio quality from Audacity

This primer on professional audio quality measurement software may lend some clues as to how to proceed using Audacity instead.
 

Registered · 2,261 Posts
Mmmm, with the exception of the "low noise floor" the other assertions in this quote are either incorrect or correct only within certain limits and/or under certain circumstances. [...] Adding some or even a considerable amount of limiting/compression and reducing DR is therefore generally desirable.

G
Thank you for that wealth of info. You and I may be saying almost the same thing. I should have cited supporting levels and specifications as you did. I've been trying to learn more about these issues and came across some articles by mastering engineer Bob Katz here and here. Bob is but one expert in the field, and you can't believe everything you read. But I do believe that choices between signal-to-noise ratio, dynamic range, loudness, and headroom should be made with sound "quality" as the highest priority. It should not be compromised in the name of average or noisy listening environments and sound systems. The format can always be dumbed-down from there. Not so the other way 'round. Give us the best "quality" possible, or at minimum, release an alternate high-quality version. I know, I know, it's been tried (e.g. SACD & DVD-A). Mass-market economics takes over and, voilà, poor recordings prevail. Thank the stars for quality-conscious CD labels like Audioquest, Mapleshade, Higher Octave, JVC XRCD, and Audio Fidelity; or LP labels like Analogue Productions and Mobile Fidelity.

I like to listen at realistic levels, but not actual ones. I can think of a few hard rock songs that have some content below the noise floor of most rooms, but I know of none that have enough DR to make me care that it's slightly obscured. I suppose such hard rock material exists, but in no case would I drive my gear into self-destruction for the sake of testing practical enjoyment. The higher-quality (less compressed) recordings I listen to are, as you say, a challenge. It would be delightful to have the quietest passages far above the noise floor, but again as you say, not practical. So we compromise by coaxing them out of the depths by raising volume. But there's not much difference between a quiet sound just under the noise threshold and one just over it. So where do we draw the line? Mine is at a high enough volume to enjoy subtle details without cringing at peaks. Your mileage may vary, but I find loud digital becomes fatiguing quickly. Not so with loud analog--makes me want to spin album after album. Don't get me wrong, I like both for their unique virtues.
 

Registered · 16 Posts
120dB does represent true arena rock concert levels, but do I really listen that loud.
Live arena rock concert levels do sometimes reach as high as 120dBSPL, from an audience listening position. However, that 120dBSPL would be the peak level including the noise floor of the venue, not 120dBSPL above the venue's noise floor!

Bob Katz is certainly a great audio expert but there are a few points in the article you linked to which are incorrect, simply because they are significantly out of date.

G
 

Registered · 2,261 Posts
POINT #1:
Let's say we made a recording using the full perceivable DR of 16bit audio (120dB); the listener would need to adjust their amp so the quietest parts of the recording were audible above the noise floor of their listening environment and let's be generous and say that noise floor is at 40dBSPL. The peak section/s of this theoretical recording would therefore be at 160dBSPL (120dB above the 40dB noise floor), which is far in excess of the pain threshold, well within the range likely to cause permanent hearing damage and not far from the levels which can cause death!
G
POINT#2:
Live arena rock concert levels do sometimes reach as high as 120dBSPL, from an audience listening position. However, that 120dBSPL would be the peak level including the noise floor of the venue, not 120dBSPL above the venue's noise floor!
G
I'm confused. First you say consider 120dB above the noise floor to prove one point, then you say not to add it above the noise floor to prove the same point. I must have missed something.

POINT #3:
In most cases, the noise floor of the average listening environment, coupled with the average consumer's comfortable peak listening level dictates a DR of 40dB or less. This also explains why many radio stations traditionally apply additional compression/limiting, as a significant percentage of their listeners may be listening whilst driving and cars are obviously a very high noise floor listening environment. Adding some or even a considerable amount of limiting/compression and reducing DR is therefore generally desirable.
G
I'll agree 40dB recorded dynamic range may be sufficient for the masses, but not for a high-end system in a dedicated listening room--and it certainly doesn't dictate anything but compromised dynamics. I also agree a "judicious" amount of compression/limiting benefits final recording quality.

Bob Katz is certainly a great audio expert but there are a few points in the article you linked to which are incorrect, simply because they are significantly out of date.
G
The copyright notice at the bottom of the webpage reads "2013." Digital technology advances quickly, so I'll give you benefit-of-the-doubt on that one, but I do doubt that digital recording & mastering techniques have changed so drastically during the same period as to render them obsolete.
 

Registered · 16 Posts
I'm confused. First you say consider 120dB above the noise floor to prove one point, then you say not to add it above the noise floor to prove the same point. I must have missed something.
Yes you did, though maybe that was due to the way I explained it. The point I was trying to make is that a CD (16bit recording) is capable of a perceived DR of 120dB, meaning 120dB above the noise floor of your listening environment. An arena rock concert with a peak of 120dBSPL has a max perceivable DR of around 60dB, and probably less in practice, due to the listening environment having a noise floor of probably around 60dBSPL.

I'll agree 40dB recorded dynamic range may be sufficient for the masses, but not for a high-end system in a dedicated listening room--and it certainly doesn't dictate anything but compromised dynamics. I also agree a "judicious" amount of compression/limiting benefits final recording quality.
I would make two counter-arguments: 1. A 40dB DR is not going to benefit many genres of popular music, which are specifically designed for a lower DR. Even before the "loudness wars", most rock and popular music had a DR of less than 40dB. 2. For certain genres, and for those with high-end systems in a room with well-treated acoustics/isolation, I agree that more than 40dB of DR can be desirable (60dB or even slightly more), and indeed there are some symphonic recordings which employ 60dB of DR. However, from an economic standpoint, a product aimed not only at a minority genre but at a tiny fraction of that minority genre is not a product which offers much opportunity for profit, as you're obviously aware from your previous post.

The copyright notice at the bottom of the webpage reads "2013." Digital technology advances quickly, so I'll give you benefit-of-the-doubt on that one, but I do doubt that digital recording & mastering techniques have changed so drastically during the same period as to render them obsolete.
The copyright of the webpage may say 2013, but at a guess I would say the article itself is probably around a decade old, and parts of it appear considerably older than that! There have been developments which invalidate some of what's stated in the article, an example being a standardised method for measuring loudness, which has been a legal requirement for TV in North America for the last few years and which is also making inroads into the music industry. This prompted Bob Katz to write an article a couple of years ago stating that an end to the loudness wars was in sight.

G
 

Registered · 16 Posts
Thanks, it was an interesting article, although it only really approached the issue of designing an amp capable of the 120dB of dynamic range, rather than the practicality of actually listening to the resultant system. The author used a typical listening environment noise floor of 50dBSPL, which would put the peaks of our 120dB of DR at 170dBSPL! We're talking about a level which will cause severe hearing damage and is pretty close to the figure (180dBSPL) usually quoted as causing death!!

As an aside, this also demonstrates that as a distribution format, so-called HD (24bit) audio is completely useless/pointless, except as a marketing gimmick! CD (16bit), with its perceived DR of 120dB, already exceeds the playback systems available and human hearing limitations, so the benefit of adding a further 24dB of DR (144dB DR with 24bit) to the already completely excessive DR of 16bit is ...?
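For reference, the standard rule of thumb behind those bit-depth figures is roughly 6.02dB per bit plus 1.76dB; a quick sketch (note the ~120dB "perceived" figure for 16bit quoted above goes beyond this raw formula, relying on noise-shaped dither pushing quantisation noise into less audible bands):

```python
def quantization_dr_db(bits):
    """Rule-of-thumb SNR of an ideal N-bit quantiser driven by a
    full-scale sine: about 6.02dB per bit plus 1.76dB."""
    return 6.02 * bits + 1.76

print(round(quantization_dr_db(16), 1))  # 98.1 -- raw 16bit/CD figure
print(round(quantization_dr_db(24), 1))  # 146.2 -- raw 24bit figure
```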

G
 

Registered · 16 Posts
Bits is bits. Perfect sound forever. Yadda, yadda, yadda.
Not sure I understand? Bits are bits and are effectively perfect; what we store in those bits is not, though. Until we can invent perfect mics, perfect mic pre-amps, perfect recording acoustics, perfect speakers and listening environments, not to mention perfect musicianship and perfect music/sound production aesthetics, we're never going to get "perfect sound". Maybe I missed your point?

G
 

Registered · 2,261 Posts
The OP wants to correlate objective audio measurements with subjective audio quality. Our discussion has started to veer in and out of "O" vs. "S" territory--a place which has been covered in many forums ad nauseam (here's just one). You seem intelligent with decent audio knowledge. You also have some strong opinions which are very different from my own. For instance, I believe death-by-audio is perfectly acceptable :hsd: :yikes:

Though the terms "Bits is Bits" and "Perfect Sound Forever" are often used in derogatory fashion, I meant no disrespect. I merely meant to signal my reluctance to engage in a progressively subjective argument (i.e. I do not believe most serious music listeners set volume to synch recorded noise floor with that of their listening environment. I also do not believe that dynamic range is the end-all, be-all indicator of sound quality. Increasing bit-depth benefits SNR and affords recording/mastering engineers greater flexibility in making decisions regarding headroom vs. DR).

"Bits is/are Bits" (a.k.a. "They're Just Ones and Zeros")
Refers to the attitude that digital audio is unflawed and supreme. Digital=Good. Analog=Bad. What many don't realize--or want to admit--is that their precious digital medium does not store a musical signal; it stores a representation of it. That is key and worth repeating. CDs store only guesses about a given signal's amplitude. No timing information is supplied! A "one" or "zero" interpreted at the wrong time is just as bad as misinterpreting a "one" as a "zero." Many myths surround digital, as well as analog:
Some Digital Myths (particularly Item-8)
Popular Audio Myths
Science vs Audio?

"Perfect Sound Forever"
Refers to the marketing hype cooked up by Sony/Philips to introduce Redbook CD to the world in 1982. It means those inherently unflawed bits described above were yours forever as supplied on indestructible compact discs. Speaks for itself = No Comment = ROFLOL.
This article says it better than I can

In the end, I try to recognize when I am wrong, learn from others--but above all--keep an open mind. If digital was so perfect when it was introduced, why does today's digital sound so much better? More importantly, why is everyone so convinced digital audio is a flat-lined technology with no promise of future improvement? DSD anyone? See, there's that subjective dragon again. :devil:
 

Registered · 2,261 Posts
With all the talk about how many modern recordings are very poor in regard to dynamics, distortion, and clipping, can anyone walk me through how to analyze the quality of a recording using Audacity? Are there any good tutorials you have saved?
Still trying to help...
Here's an analysis of Pink Floyd's Dark Side of the Moon. Different recordings/masterings and different digital media were compared using SOUNDFORGE. Not exactly what you asked for, but I'm hoping it might help.
 

Registered · 16 Posts
In the end, I try to recognize when I am wrong, learn from others--but above all--keep an open mind. [1] If digital was so perfect when it was introduced, why does today's digital sound so much better? [2] More importantly, why is everyone so convinced digital audio is a flat-lined technology with no promise of future improvement?
To answer your last question [2] first: because digital audio reached a point of accuracy a few years ago which very significantly exceeds the ability of human hearing. In other words, while measurable improvements to digital audio can be made, they are many times beyond the capability of even someone with theoretically perfect hearing to detect. Why then would anyone bother to continue developing new digital audio products? That brings us to your first question [1]: it all comes down to marketing.

"Perfect sound forever" was a marketing slogan at the time and bore no relationship to the actual science. The actual science of the time would have been "more accurate sound for longer", and today the science can state "more sonic accuracy than you can even hear"! And this is the problem for consumers: what is marketing hype or marketing pseudo-science, and what is the actual truth? If, as the science contends, digital audio has exceeded human hearing, what are all the audio equipment manufacturers to do? Their only two choices are to file for Chapter 11 and quietly go out of business, or to continue creating new products and convince consumers that they are better--and they've developed some relatively simple and very effective ways of doing just that, even with products/formats which actually perform measurably poorer than previous versions!! Unless one is involved in this area professionally and dealing with these issues almost daily, separating the hype/pseudo-science from the truth is near impossible for the consumer, even the diligent consumer, as of course it's designed to be!

The articles you linked to are again very out of date; some of what they contain is true and will be for the foreseeable future (laws of physics, limits of human hearing), and some of it is simply no longer valid. There are also several points in your last post which are wrong--not subjective but factual points. I would be happy to explain further, but I suppose the question is why you should believe what I have to say, especially if it contradicts what you think you already know? I suppose I could state my 20+ years as a pro audio engineer, the most impressive parts of my CV, and that I am not affiliated with any manufacturer, but would that really make much difference? It generally wouldn't to me; I know a couple of audio engineers who've been in the biz for 20-odd years and still managed to remain blissfully ignorant about how digital audio actually works :)

G
 

Registered · 2,261 Posts
Apologies to the moderators if this post has teetered off-topic, or has otherwise become non-useful. I am trying to keep my response within the OP's focus on audio quality.

To answer your last question [2] first; because digital audio reached a point of accuracy a few years ago which very significantly exceeded the ability of human hearing. [...] Unless one is involved in this area professionally and dealing with these issues almost daily, separating the hype/pseudo-science from the truth is near impossible for the consumer, even for the diligent consumer, as of course it's designed to be!
For many, music is treated as a destination activity rather than as aural wallpaper. It's an art form that elicits emotional response through the ear/brain mechanism, a response that can be partially--but not totally--characterized through scientific measurement and physical law. That's a FACT no lynch mob of engineers, physicists, or acousticians can change. These professionals can recite formulae ad infinitum and thrust "proof" galore of why people cannot possibly hear a difference between component/recording "A" and "B," yet people repeatedly claim they can.

Irrespective of whether or not those listeners can prove their claims, the FACT they think they hear differences is enough to validate their claims to themselves. It doesn't matter what anyone else says or thinks--your reference to "blissful ignorance" if you will. It only matters that the difference they think they hear is real to them. The hobby is supposed to be fun, yet subjective phenomena seem to elicit their own (usually negative) emotions from objectivists.

As an electrical engineer, I recognize the importance of scientific methodology in listening evaluation (e.g. level-matching, single-variable change, environmental control, etc.). Do I always choose to wear that hat? When listening for pleasure, my answer is a resounding "NO!" In that case, scientific knowledge and "audiophile" experience is a huge detriment to listening pleasure--the "man behind the curtain" if you will. It's a spoiler that diverts attention from the listening experience. As a music lover, I know what moves me and how I want my system to sound (hint: it's not neutral and distortion-free). You're right: no experiment or data published by the audio community, professional or otherwise, will sway me from my feelings--I choose to remain "blissfully ignorant" as far as my enjoyment of music is concerned.

The articles you linked to are again very out of date, some of what they contain is true and will be for the foreseeable future (laws of physics, limits of human hearing) and some of it is simply no longer valid. There are also several points in your last post which are wrong, not subjective but factual points. [...]
G
The last thing I want to do is perpetuate false information, so please enlighten me; and feel free to provide references which support your viewpoints.
 

Registered · 16 Posts
For many, music is treated as a destination activity rather than as aural wallpaper. It's an art form that elicits emotional response through the ear/brain mechanism whose response can be partially--but not totally--characterized through scientific measurement and physical law. That's a FACT no lynch mob of engineers, physicists, or acousticians can change.
It's extremely important to make a distinction here! Digital audio is NOT directly concerned with music, musicality, emotional response or qualitative subjective judgements such as good or bad. It is concerned ONLY with the accuracy of converting and storing analogue audio signals. I'm no expert in neuroscience, but as far as I am aware there are no available measurement techniques for subjective qualities such as musicality, or even for determining whether an audio signal is music. The distinction we therefore need to make is between digital audio as a container format (or formats) and what exactly those formats are used to contain. Failing to make that distinction is both the underlying cause of many/most arguments between some hardcore audiophiles and the "scientists", and the basis of one of the most powerful weapons used by marketing departments.

For example, it's a relatively trivial exercise to take a so-called HD music mix (pre-master) and create two masters, one of which we deliberately, very marginally degrade in some way. We could put that marginally degraded master on whatever HD format/media you (or anyone else) consider to be the best/highest-quality digital audio format, and the un-degraded master on a CD. What would happen if we stated or directly implied they were the same version, just on different formats? Anyone comparing the CD with the HD version, and with good enough hearing/equipment to tell the difference, would likely conclude that CD is better than HD. Of course, in practice what happens is the other way around: the degraded version is put on CD, to demonstrate the superiority of HD equipment/products and therefore the need to upgrade or pay more for an HD version. In other words, we need to make a distinction between the actual format (CD/HD) and the content which is stored on that format, because in this simple (and unfortunately common) example, the difference is entirely in the mastering process and has nothing to do with any inherent quality differences between the formats!

These professionals can recite formulae ad infinitum and thrust "proof" galore of why people cannot possibly hear a difference between component/recording "A" and "B," yet people repeatedly claim they can. Irrespective of whether or not those listeners can prove their claims, the FACT they think they hear differences is enough to validate their claims to themselves. It doesn't matter what anyone else says or thinks--your reference to "blissful ignorance" if you will. It only matters that the difference they think they hear is real to them.
I don't accept this statement! Or rather, I only accept it under certain circumstances and not as a blanket statement. Referring back to my example above: some "people" may not hear any differences between the CD and HD versions, some would claim to hear differences but not have a preference, some would claim that the CD is better (and, with sufficient unrelated examples, claim that CD is the superior format), and no doubt there would be the odd one or two who feel the deliberately degraded HD version sounds better. In either of the two latter cases, regardless of what they think they hear, or of what they can prove or validate to themselves, how valid would you (or any reasonable person) consider their conclusion to be? In this example you and I both know that the ones who thought the CD was better were the closest to the actual truth, but due to ignorance (of the fact that the HD version had been deliberately doctored) they arrived at an entirely invalid conclusion. This is of course just one type of example, but there are many others where some ignorance of digital audio itself, or of the processes of creating material which is contained as digital audio, leads to similarly invalid conclusions. Arguments that more bits equal more resolution or higher quality are a typical example.

In the case of analogue (such as vinyl) versus digital, digital is many times more accurate in every respect. In this sense, then, digital=good and analogue=bad. As I mentioned before, though, distortion (inaccuracy) is not always perceived as bad, and in some cases it's a required and deliberately added component of the music. Therefore, in the sense of the subjective perception of individuals, there can be no absolute determination of good or bad which is applicable to all individuals. It's therefore entirely valid for an individual to state that they prefer analogue to digital, although it's far more logically problematic for that individual to make the blanket statement that analogue is better than digital.

There's one other case to consider: the case which requires science to prove that something does not exist, which of course it cannot. All science can do is provide evidence that, to a degree of certainty, something does not exist. Arguments based on audibility fall into this category. For example, high sample rates (those above 44.1/48kHz, such as 96kHz or 192kHz) don't provide any more "resolution" than standard sample rates; they provide the same resolution over a wider frequency band. The question therefore becomes: what is an acceptable band of frequencies, or put another way, what are the frequency limitations of human hearing? A sample rate of 44.1kHz allows the recording of audio frequencies up to 22.05kHz, and science cannot prove that no one exists who can hear above that. However, frequency-response hearing tests have been around for nearly a century, and all the evidence suggests that perfect hearing for humans is 20Hz-20kHz, and only at a young age; by the later teenage years 20Hz-18kHz would be an excellent response. I believe there have been a few cases of individuals perceiving up to 22kHz, but I know of no cases of anyone hearing beyond that. This doesn't mean, of course, that no one does or could exist who can hear higher, but it does mean that a sampling rate of 44.1kHz is more than sufficient for all, with the potential exception of an individual with some rare and as yet unmeasured genetic hearing mutation. More widely (i.e. not specifically frequency-related), inaccuracies in digital audio generally fall outside the consensus limits of human hearing, often by a large factor and in some cases by over a thousand times. I believe we're therefore on pretty safe ground to assume that any perceived inaccuracies/imperfections in current digital audio technology are due to some other cause, bias or deficiencies elsewhere in the recording/playback chain being just two of many potential candidates.
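To make the sample-rate arithmetic concrete, here's a minimal sketch (the list of rates is just illustrative) of the audio band each common sample rate can capture:

```python
# Nyquist-Shannon: a sample rate of fs can capture frequencies up to fs / 2.
def nyquist_khz(fs_hz):
    """Highest representable audio frequency, in kHz, for a given sample rate."""
    return fs_hz / 2 / 1000

for fs in (44100, 48000, 96000, 192000):
    print(f"{fs:>6} Hz -> audio band up to {nyquist_khz(fs):.2f} kHz")

# 44100 Hz already covers 22.05 kHz, above the ~20 kHz limit of even young
# human hearing; higher rates widen the band rather than adding "resolution".
```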

Increasing bit-depth benefits SNR and affords recording/mastering engineers greater flexibility in making decisions regarding headroom vs. DR.
A high bit depth (such as 24bit) effectively eliminates SNR as a limitation of the recording format. However, this doesn't afford the recording engineer any more SNR; it just means that the SNR is defined by limitations other than the recording media. For example, the noise floors of the recording environment, the mics and the mic pre-amps become the limiting SNR factors rather than the recording format. As the dynamic range of this other equipment (and indeed of what we're recording) is so much less than what is theoretically offered by 24bit, this gives us the opportunity of a very considerable amount of headroom while recording, before we have to worry about recorder SNR issues. This is extremely useful, as of course when we start recording we have relatively little idea of how high the peaks will reach. However, this is not the case for the mastering engineer, because the recording has obviously already been made and we know exactly both where and what the peaks are. Headroom is therefore no longer required, and what the mastering engineer produces effectively has no headroom, so there are no headroom decisions! There are DR decisions the mastering engineer has to make, but these aren't related to headroom (as there is none!).
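As a rough sketch of that arithmetic (the -110dB analogue-chain figure below is a hypothetical illustration, not a measurement from the post):

```python
import math

def format_noise_floor_db(bits):
    """Theoretical noise floor of a bits-deep format: ~6.02 dB per bit."""
    return -20 * math.log10(2 ** bits)

# Hypothetical combined noise floor of room + mic + preamp:
chain_floor_db = -110.0

floor_24 = format_noise_floor_db(24)    # about -144.5 dB
margin_db = chain_floor_db - floor_24   # recorder noise sits this far below the chain's

print(f"24-bit format floor: {floor_24:.1f} dB")
print(f"Margin beneath the analogue chain: {margin_db:.1f} dB")
# Tracking engineers can leave generous headroom and still keep the format's
# own noise irrelevant; at mastering, the peaks are already known, so no
# headroom is needed at all.
```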

What many don't realize--or want to admit--is that their precious digital medium does not store a musical signal; it stores a representation of it. That is key and worth repeating.
Agreed, to an extent (!), and I will therefore repeat it: Digital audio only stores extremely accurate audio signals, whether those audio signals are perceived by the listener as "musical" is beyond the scope of digital audio.

CD's store only guesses about a given signal's amplitude. No timing information is supplied!
Now both of these statements I will have to disagree with! Firstly, timing information is supplied. Is the result perfect? No ... but it greatly exceeds the limitations of human hearing and generally exceeds to an even greater extent the timing limitations/imperfections of other equipment in the recording, mixing and playback chain. Secondly, CDs do not store guesses about a signal's amplitude, they store extremely precise values, albeit with a built-in amount of pre-determined error. It's the nature of this error which makes all the difference and which leads those who don't understand digital audio to invalid/incorrect conclusions!

Each bit of data we add halves the amount of error, so a 4bit recording has half the error of a 3bit recording. The first vital piece of information most are ignorant of/missing is that all the error is converted to noise (specifically, uncorrelated noise). So what you have at any bit depth is a perfect signal plus an amount of noise which is defined by that bit depth. You do not have both noise and error; once the error is converted into noise there is no longer any error. Therefore, both our 4bit and 3bit recordings contain a perfect signal, but our 3bit recording has twice the amount (+6dB) of accompanying noise. Error-free 16bit audio (CD) therefore has a noise floor of -96dB, while error-free 24bit audio has the much lower noise floor of -144dB, because the amount of error with 24bit is proportionately lower. It's a simple calculation: each additional bit halves the amount of error, so when converted to noise each additional bit provides 6dB more DR (a 6dB lower noise floor). So: 16bit × 6 = 96dB, 24bit × 6 = 144dB, and SACD (DSD), which is a 1bit digital audio format, therefore has a DR of just 6dB. However, a 6dB DR is terrible, and SACD does not sound thousands of times noisier than CD!

Which brings us to the second vital piece of information most are ignorant of/missing: Noise-Shaped Dither. During the conversion of error to noise (dithering), we can't do anything to reduce the amount of noise, but we can move it to areas of the frequency spectrum where we are insensitive to it. This is called Noise-Shaping, and products which allowed audio engineers to apply it started coming onto the market about 20 years ago; although expensive to start with, by about a decade ago it was relatively cheap and ubiquitous. Noise-Shaping is what makes DSD possible; without it, SACD would be so noisy you'd hardly be able to hear any signal! The huge amount of noise is still there, but it's been moved above 20kHz, where you can't hear it. Noise-Shaping is also what gives CD its quoted perceptual DR of 120dB, although in the most critical/sensitive hearing band it could theoretically be as high as 150dB. Noise-Shaping is not generally used on 24bit audio for obvious reasons.
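A small sketch of the error-to-noise idea, using plain rounding versus TPDF dither at 8 bits (all parameters here are illustrative, not from the post):

```python
import math
import random

BITS = 8
STEP = 2.0 / (2 ** BITS)      # LSB size for a full-scale range of [-1.0, 1.0)
rng = random.Random(0)        # seeded so the sketch is repeatable

def quantize(x):
    """Round to the nearest 8-bit level (mid-tread quantizer)."""
    return round(x / STEP) * STEP

def tpdf(rng):
    """Triangular-PDF dither spanning +/-1 LSB: two uniform +/-0.5 LSB draws summed."""
    return (rng.random() - 0.5) * STEP + (rng.random() - 0.5) * STEP

# A low-level sine (a few LSBs in amplitude), where undithered quantization
# error is most obviously correlated with the signal (i.e. heard as distortion).
n = 20000
sig = [4 * STEP * math.sin(2 * math.pi * 440 * i / 44100) for i in range(n)]

plain_err = [quantize(s) - s for s in sig]
dither_err = [quantize(s + tpdf(rng)) - s for s in sig]

def power(err):
    """Mean-square power of an error sequence."""
    return sum(x * x for x in err) / len(err)

# Dither slightly raises the error power (step^2/4 vs step^2/12 in theory),
# but as described above it converts the error into benign, signal-independent
# noise, which noise-shaping can then push out of the audible band.
print(f"undithered error power: {power(plain_err):.3e}")
print(f"dithered   error power: {power(dither_err):.3e}")
```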

I do not believe most serious music listeners set volume to synch recorded noise floor with that of their listening environment.
I believe that everyone does, at least to an extent. How many listeners would happily sit for more than a few seconds in silence, knowing that they were not hearing what was going on because their volume was too low? I'm sure few would happily sit there and miss entire quiet sections. What they would do after a while is turn the volume up, but then of course the loud sections might be uncomfortably loud, in which case they'd have to turn it down again. What they are doing is reducing the DR: a crude manual version of exactly what an audio compressor does!
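That manual volume-riding can be sketched as the static curve of a downward compressor (the threshold and ratio below are illustrative values, not from the post):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Above the threshold, the output rises only 1 dB per `ratio` dB of
    input; below it, the signal passes through unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

quiet, loud = -40.0, 0.0
print(compress_db(quiet))   # quiet passage untouched: -40.0
print(compress_db(loud))    # loud peak pulled down:   -15.0
# An input DR of 40 dB becomes 25 dB -- the same trade the listener makes by
# turning quiet sections up and loud sections down.
```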

Sorry I haven't provided any references; I don't have the time. But you can look up dithering and noise-shaping easily enough if you're so inclined, and a book like Digital Audio Explained (Nika Aldridge), while quite old, will give you a very usable grounding which is manufacturer-hype free!

G
 