djembeweaver wrote:Next I found the note I was singing on a piano, and compared that to the frequency of my tone fundamental from the spectrum analysis. Here's where the discrepancy lies: I was singing a major third above the fundamental frequency.
It's interesting to hear you say that, because the same thing happens to me occasionally too. I often sing the fundamental, particularly when comparing pitches of different drums or trying to tune a drum to a target pitch. Every now and then, I find myself in the exact same quandary: I sing a note, stop, and try again because I realize "no, that's not the fundamental, that's a harmonic", and find myself singing the fundamental exactly a major third below the first attempt.
So that raises the question of why I was singing a major third above the fundamental, and why I couldn't hear the fundamental as given by Audacity...
I have no idea. Maybe it has something to do with hearing a harmonic an octave above the fundamental and then transposing that down? I'm just guessing here.
I know all my intervals and can sing them.
Same here. I'm pretty sure that there is nothing wrong with your understanding or perception of pitch. This is bound to be some psycho-acoustic phenomenon, where harmonics suggest a fundamental that is a major third above the true fundamental. It's not the step up from (0,1) to (1,1) because that's a factor of 1.59, which is too much; a major third is 1.26. (1.59 is an augmented fifth.)
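The interval arithmetic above is easy to check. A quick sketch (the 1.59 mode-step ratio is the figure quoted above for the ideal membrane; the interval ratios are standard equal temperament):

```python
# Equal-tempered interval ratios vs. the (0,1) -> (1,1) membrane-mode step.
# The 1.59 mode-step figure is taken from the discussion above.

def et_ratio(semitones):
    """Frequency ratio of an equal-tempered interval."""
    return 2 ** (semitones / 12)

major_third = et_ratio(4)        # ~1.26 (4 semitones)
augmented_fifth = et_ratio(8)    # ~1.59 (8 semitones)
mode_step = 1.59                 # (0,1) -> (1,1) for an ideal circular membrane

print(f"major third:     {major_third:.3f}")
print(f"augmented fifth: {augmented_fifth:.3f}")
print(f"mode step:       {mode_step:.2f}")
```

The mode step lands almost exactly on the augmented fifth, well clear of the major third, which is why that step can be ruled out.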
The second possibility is that there is a major third partial hidden in there somewhere. Indeed, when I looked at a spectrum of a tone again and cranked the resolution up to maximum, a small peak appeared at the major third. It was very quiet though... I even downloaded a sine wave generator and, yup, I can hear a major third in a sine wave of 392 Hz (so surely this can't be a pure sine wave... Michi, what do you think?)
I honestly don't know. Possibly, there are beat effects too. If you play frequencies f1 and f2 simultaneously, there are beat effects that generate f1 - f2 as real audible sound. It's possible that the major third we are hearing has something to do with this.
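The beat effect itself is just trigonometry: two sines at f1 and f2 add up to a tone at the mean frequency whose amplitude envelope oscillates at |f1 - f2|. A minimal sketch with NumPy (the frequencies here are illustrative, not taken from any drum recording):

```python
import numpy as np

# Two sines at f1 and f2: their sum is a carrier at (f1 + f2) / 2
# whose amplitude envelope oscillates at |f1 - f2| (the beat rate).
fs = 44100
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 500.0, 400.0

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Sum-to-product identity: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2)
carrier = 2 * np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = np.cos(2 * np.pi * (f1 - f2) / 2 * t)

assert np.allclose(mix, carrier * envelope)
print("beat rate:", abs(f1 - f2), "Hz")
```

Whether the beating envelope is perceived as an actual tone at f1 - f2 is a separate (and partly nonlinear, psycho-acoustic) question, but the arithmetic of where that frequency comes from is as above.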
One observation that may be relevant is that the bandwidth of the peaks gets progressively smaller as the frequency increases. Does that mean that the note becomes more 'pure' at higher frequencies?
No. A Fourier transform decomposes a sound into its constituent frequencies. There is no other sound contained in the signal beyond what a Fourier transform reveals. If you take all the harmonics shown in a Fourier transform and mix them together in the correct proportions of loudness, you re-create the original sound.
The frequency intervals get narrower as we go up the series simply because that's how the series works. (The logarithmic scale of the diagram visually exaggerates this effect. On a linear scale, the steps are much wider.)
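The "mix the harmonics back together and you get the original sound" point can be demonstrated in a few lines. A sketch using NumPy's FFT (the partial frequencies and amplitudes here are made up for illustration):

```python
import numpy as np

# Build a signal from a few partials, decompose it with an FFT,
# then re-mix the components with the inverse FFT: the result
# is the original signal, confirming nothing else is "in there".
fs = 8192
t = np.arange(fs) / fs  # one second of samples
signal = (1.00 * np.sin(2 * np.pi * 392 * t)     # "fundamental"
          + 0.50 * np.sin(2 * np.pi * 624 * t)   # a higher partial
          + 0.25 * np.sin(2 * np.pi * 1040 * t)) # another partial

spectrum = np.fft.rfft(signal)                   # the decomposition
rebuilt = np.fft.irfft(spectrum, n=len(signal))  # re-mix the components

assert np.allclose(signal, rebuilt)
```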
If this is the case, then a tone actually contains a range of frequencies and Audacity reports the mean. Alternatively, this could be an artifact of the analysis. Indeed, Audacity gives slightly different values for M1 across successive tones (ditto the other partials).
A tone does contain a range of frequencies. Assuming an ideal and perfect membrane, the only frequencies you should find are those in the series. However, because a goat skin is far from perfect, a drum is far from circular, and the shell and the skin are coupled resonators, the mathematical model can describe reality only so far. There are other frequencies present (even though the harmonics in the series dominate).
Moreover, the sampling has its own sources of error, in both the time and the frequency domain. The values reported by Audacity depend on the sampling rate of the source. For CD source material at a 44.1 kHz sampling rate, if you ask Audacity to use 1024 samples, you get 512 frequency bins. Each bin has a center frequency, but with an error band of 43 Hz: the actual value of the harmonic may be up to 21.5 Hz lower or higher. Double the samples, and you get twice as many frequency bins, with a frequency error of 21.5 Hz (about 10.8 Hz up or down). But now, the error in the time domain doubles. An FFT is only as good as its input data. It can't infer information that wasn't there in the first place.
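The bin-width arithmetic above is just the sampling rate divided by the FFT size. A quick sketch (assuming CD-rate material, as above):

```python
# Frequency resolution of an FFT: bin width = sampling rate / FFT size.
# Each reported peak can be off by up to half a bin in either direction.
fs = 44100  # CD sampling rate in Hz
for n in (1024, 2048):
    bins = n // 2          # usable frequency bins (up to Nyquist)
    width = fs / n         # Hz per bin
    print(f"{n} samples: {bins} bins, {width:.1f} Hz wide "
          f"(+/- {width / 2:.1f} Hz)")
```

This is the trade-off mentioned above: doubling the FFT size halves the frequency error but doubles the stretch of time being averaged over.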
Anyway, one last thing I did was get two other people to try to sing the note of the tone. They both found it really difficult (even though they are both good musicians), and each sang a different note.
Difficult to know. Even a different position in the room of the respective listeners can have a major effect, due to standing waves.
My co-teacher has a very large living room with a gabled roof, probably around 6 m (20 ft) high at the tallest point. We recently were practicing with the dunduns on their side. For a few rhythms that need ballet style, we put the sangban upright. The pitch of the sangban was noticeably different when upright, about a semitone lower. We actually put the drum on its side and back upright several times to compare. We both agreed that there was about a semitone difference.
What's happening here is that the sound of the drum reflects off different parts of the room in the respective positions, which can lead to certain frequencies being amplified by resonance more than others. The drum doesn't project all frequencies with the same intensity in all directions, so the pitch (or, rather, the received harmonics) change as the position of the drum changes…
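For a rough feel of where those standing waves sit, the axial room-mode frequencies for a single room dimension are f_n = n * c / (2 * L). A hedged sketch using the roughly 6 m ceiling height mentioned above and a nominal speed of sound of 343 m/s:

```python
# Axial standing-wave (room-mode) frequencies for one room dimension.
# An axial mode fits n half-wavelengths between the two surfaces:
#   f_n = n * c / (2 * L)
# The 6 m height is the approximate figure from the story above.
c = 343.0      # speed of sound in m/s (room temperature, assumed)
height = 6.0   # floor-to-ridge height in metres (approximate)

for n in range(1, 5):
    f = n * c / (2 * height)
    print(f"mode {n}: {f:.1f} Hz")
```

These are only the floor-to-ceiling modes; the room's length and width contribute their own series, so the full set of resonances is dense, and moving the drum (or the listener) changes which ones get reinforced.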