Everything posted by pkane2001
-
That's not how David was using Duhem-Quine. I quoted the one point he deemed necessary to highlight in this discussion to demonstrate that measurements can't be trusted. That's just being used as an excuse to ignore anything that inconveniently opposes his opinion and perception. I don't know why you bring up DSD or HQPlayer. I've done plenty of testing and measurements with HQP and DSD, and none of them were limited to 20kHz. HQP has some nice features, and Jussi has certainly done an amazing job of creating high-performing software. But just because something is hard to do and requires more hardware and software to run doesn't necessarily make it any better at sound reproduction. And, by the way, I developed my own DSD modulators and created and tested many dozens of filters for my own software.
-
Unlike you invoking Quine to dismiss all possible measurements? You're right, that's funny.
-
Fully agree. But here, the fun seems to be at the expense of others. I'm not defending ASR: it is a huge and diverse community that is, overall, measurement-centric and often very intolerant of any other views. ASR doesn't do Science, just like AS doesn't do Style. We agree. But what seems to be happening here is that any mention of science, objectivity, or even measurements elicits the same, predictable reaction. It happens in nearly every thread in this Objectify forum, usually proceeding from "measurements don't matter" to "science can't possibly be used for audio" to "science can't be trusted" to ASR bashing. That makes it a bit hard to have an intelligent conversation, since it all appears to be about proving someone else wrong at all costs, even to the point of bringing in Quine to dismiss all scientific conclusions and all measurements. Like this gem when talking about all of science (since no individual finding can be considered independent of all others): I'd expect any followers of Duhem-Quine to not use computers, but rather stone tablets. After all, science could have gotten everything wrong, and every scientific finding is just as shaky as a straw hut in a hurricane. Nothing in science can be known for certain, therefore all of it can be safely ignored... if it doesn't fit your perception of audio.
-
That would be an exaggeration, as there are a decent number of scientists on ASR from audio and audio-adjacent fields of study. But that's certainly not who predominantly posts there. Measurements in audio are not scientific research; they are just plain old engineering. But good engineering is driven and powered by science. This thread has gone way off course into fringe philosophy discussions and completely unnecessary jokes and attacks at the expense of others who are not here to defend themselves. As usual, not a shred of anything objective discussed in the Objectify forum. Predictable.
-
Of course. And the issues brought up in the video are all well-known setup and calibration issues, not a reason to "not trust measurements", simply issues to be aware of. There's nothing in the video that I found newsworthy, except maybe the clickbait title ;)
-
You lost me. Falsification is an adversarial process and doesn't require bias.
-
No, but scientists do.
-
Absolutely. Science progresses as better instruments become available and better precision becomes possible. Any properly done measurement includes error ranges and calibration results that put an upper limit on the precision of the measurement. A valid scientific measurement includes this error analysis. Any interpretation of such a result that relies on a higher resolution than the capability of the measurement system can't be trusted until proven through better measurements or through other means. The AP isn't the most precise, lowest-noise instrument out there, but it is calibrated, and the precision and accuracy are known for all measurements. What's with the FFT fetish? ;) What resolution do you need for proper FFT analysis, and what are you trying to analyze? I routinely run 4-16M-point FFTs on a general-purpose PC that is 5 or 6 years old. But more often than not, a 32k FFT is just as effective for my purposes.
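To put some numbers on the FFT-size question: the frequency resolution of an N-point FFT is simply sample rate divided by N, so even a 32k FFT resolves better than 2 Hz per bin at common audio rates. A minimal sketch (the 48 kHz sample rate and 1 kHz test tone are my own illustrative assumptions, not from the post):

```python
import numpy as np

fs = 48_000  # assumed sample rate in Hz, for illustration only

# The frequency resolution of an N-point FFT is fs / N:
for n in (32_768, 4 * 2**20, 16 * 2**20):
    print(f"{n:>10}-point FFT -> {fs / n:.6f} Hz per bin")

# Locating a 1 kHz test tone with a 32k FFT and a Hann window:
n = 32_768
x = np.sin(2 * np.pi * 1000 * np.arange(n) / fs)
spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
peak_bin = int(np.argmax(spectrum))
print(f"peak at bin {peak_bin}, about {peak_bin * fs / n:.1f} Hz")
```

The multi-million-point sizes buy sub-millihertz bins, but they also require proportionally longer captures; for spotting a tone and its harmonics, the 32k transform already lands within one bin width of the true frequency.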
-
Again, I didn't claim this, so why do you keep bringing this up? I said, and I'll repeat: any interpretation of scientific results must itself be objective, repeatable and falsifiable. Science is an adversarial system, sure. That's what makes it effective. But that's hardly a reason to conclude that "measurements can't be trusted". This is not how science works.
-
That’s the interpretation you seem to be getting wrong. I’m not at all confident about science supporting me; I’m confident that science can find and refine explanations for natural phenomena better than any other method developed by humans. As I said, I’m confident in the scientific process.
-
I have confidence in the scientific process. Anything else you seem to be reading into my statements is your interpretation and generalization and not reality.
-
David, surprisingly, we come to a common disagreement yet again 😉
-
Objectively confirmed is not "often a matter of some dispute"; it is a matter of requirement. Science has been doing this for hundreds of years, with great success. Sure, there have been mistakes and even fraud perpetrated by those less studied or less scrupulous, but these are exceptions that prove the rule. Objectivity is a necessary condition for any self-correcting system of accumulated knowledge, science being a prime example. Subjective conclusions, beyond pure logic, merit additional experimentation only when supported by objective evidence.
-
Let's modify the topic, then: "you can't trust measurements" should be "you can't trust someone's interpretation of measurements until it is objectively confirmed". Repeating a measurement is such old hat that it's easy to assume everyone knows it is a necessary and required condition for trusting it. An interpretation of measurements can also be confirmed. Objectively. And if it's not confirmed, sure, then "you can't trust that interpretation". Again, nothing new.
-
Sometimes that's true. But calling something "consistently wrong" requires objective evidence, and not just a personal opinion.
-
Good thing measurements can be reproduced and confirmed by others. It's one of those little things that make them objective ;)
-
A more interesting paper (IMHO) than the one in the OP is this: "Acoustic structure of the five perceptual dimensions of timbre in orchestral instrument tones" by Taffeta M. Elliott, Liberty S. Hamilton, and Frederic E. Theunissen (https://doi.org/10.1121/1.4770244). The five-dimensional timbre model in this paper is what is used to provide "subjective" inputs to the neural net in the brain activation study.
-
True. AI is starting to exhibit some of the same issues humans have ('hallucinations' in generative AI). All it is is a pattern match that isn't accurate, causing a missed recognition or an incorrect prediction. These are often the result of skewed/biased data, incomplete training, or over-training of a model, as well as insufficient model size. The human brain contains about 100 trillion connections. ChatGPT-4 is getting close at 1.8 trillion :)
-
Agree. And that's why the results of the study, while interesting, aren't impressive: a 63% recognition rate between different orchestral pieces is just not great. But then, the neural net isn't just recognizing the music, it's mapping the music to predicted areas of neuronal activation in the brain, a much more complex task. Perhaps because they were not designed for this purpose?
-
Sorry, couldn't answer in detail earlier (and still really can't from this d*mn tiny screen), but neural networks are hardly an objective way to measure anything. Training them, selecting the right inputs, and choosing training and testing data is an art rather than a science. What's more, there is no guarantee they will make as accurate a prediction on a wider data set, for example one that includes the same exact piece played through two different amplifiers. The accuracy of the timbre model (63%, +/-1%) at predicting a brain activation pattern, while maybe a few percent better than the previous, competing STM model (60%, +/-1%), is still very low and not a major improvement, IMHO. Also remember, this is while trying to differentiate between completely different orchestral recordings. I think I could come up with a 100% accurate method of differentiating diverse orchestral pieces using any number of existing measurements, without resorting to subjective descriptors or fMRI :) PS: the numbers quoted may be slightly off; I'm not looking at the article right now. These are from memory.
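The "few percent better" point can be made concrete by propagating the stated +/-1% uncertainties. A quick sketch (using the accuracies recalled from memory in the post above, so treat the values as placeholders):

```python
# Reported accuracies and their stated uncertainties (from memory, per the post):
a, a_err = 0.63, 0.01  # five-dimensional timbre model
b, b_err = 0.60, 0.01  # competing STM model

# Difference, with independent uncertainties combined in quadrature:
diff = a - b
diff_err = (a_err**2 + b_err**2) ** 0.5
print(f"difference = {diff:.3f} +/- {diff_err:.3f}")
```

The gap (about 0.030 +/- 0.014) is nominally larger than the combined error, but it is still only a ~3-point gain on a ~60% baseline, which supports the "not a major improvement" reading.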