Some Google folks wrote about how they do the Now Playing feature offline with very little resource impact. It's a great paper and quite easy to understand: https://arxiv.org/abs/1711.10958
https://youtu.be/b6xeOLjeKs0?si=sDBTe7CCKxoT5-k5
https://youtu.be/ev0Ay1m4MWs?si=OFprAycQpipTXaTP
Long story short: they turn songs into digital fingerprints that can be stored in a tiny amount of space. When you start recording, your phone makes a fingerprint of that audio too, and then it just tests yours against the database.
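For anyone curious, the core idea can be sketched in a few lines of numpy. This is a toy version of landmark hashing (the approach the original Shazam paper describes), not Google's actual pipeline; every function name and parameter here (`win`, `hop`, `fan_out`) is made up for illustration:

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    # Magnitude spectrogram via a simple windowed FFT (no external deps).
    window = np.hanning(win)
    frames = [np.abs(np.fft.rfft(signal[s:s + win] * window))
              for s in range(0, len(signal) - win + 1, hop)]
    return np.array(frames)  # shape: (time, freq)

def fingerprint(signal, fan_out=5):
    # Toy landmark fingerprint: take the loudest frequency bin in each
    # frame as a "peak", then hash pairs of peaks as (freq1, freq2, dt).
    # Real systems pick many peaks per frame; one is enough for a demo.
    spec = spectrogram(signal)
    peaks = [(t, int(np.argmax(frame))) for t, frame in enumerate(spec)]
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            hashes.add((f1, f2, t2 - t1))
    return hashes

def match_score(query, reference):
    # Fraction of the query's hashes that also occur in the reference.
    q, r = fingerprint(query), fingerprint(reference)
    return len(q & r) / max(len(q), 1)
```

The hashes are tiny (three small ints each), which is what makes the database compact enough to ship on-device, and set intersection is what makes lookup fast.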
Wasn’t SoundHound doing it way before Shazam? Man, SoundHound gets no love just because they didn’t wheel & deal their way onto the control center :-p
I want to say I had both apps on my iPhone 3G or 4 back in the day. I recall that early on, Shazam returned results more often than SoundHound.
Around 10 years ago I was trying to write something like it for a company. It takes a lot of time, so there's probably a lot of AI involved now, but 10 years ago the path was to build a heatmap of the binary and try to find matches in it.
I always felt like it had something to do with calculating the BPM and key, and maybe the words or something, and that they kept a database of this data to filter things down.
It’s just relative upbeats and downbeats. Easy to calculate on the fly and no language recognition necessary.
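Estimating beats really is cheap to do on the fly. Here's a toy sketch of the idea (not what Shazam actually ships): flag frames whose energy jumps well above the local average, then turn the median gap between those onsets into BPM. The function name and all thresholds are my own invention:

```python
import numpy as np

def estimate_bpm(signal, sample_rate, frame=1024):
    # Toy beat detector: frame-level energy, flag frames that rise well
    # above a moving average ("onsets"), then convert the median gap
    # between onsets into beats per minute.
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    threshold = 1.5 * np.convolve(energy, np.ones(8) / 8, mode="same")
    onsets = np.flatnonzero(energy > threshold)
    gaps = np.diff(onsets)
    gaps = gaps[gaps > 1]  # merge adjacent frames from the same beat
    if len(gaps) == 0:
        return None
    seconds_per_beat = np.median(gaps) * frame / sample_rate
    return 60.0 / seconds_per_beat
```

On a clean click track this lands right on the true tempo; real music needs a smarter onset detector, but the principle is the same.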
It probably calculates the similarity of soundwaves between your unidentified song and a bunch of songs from their database.
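Taken literally, comparing soundwaves would mean cross-correlating the raw samples. That does work for a handful of songs, but it's far too slow and fragile (noise, speed changes) to scale to a real catalog, which is why fingerprinting wins in practice. Still, a minimal sketch of the literal version (function name is hypothetical):

```python
import numpy as np

def waveform_similarity(query, reference):
    # Normalized cross-correlation: slide the query along the reference
    # and report the best match (1.0 = identical segment, ~0 = unrelated).
    q = query - query.mean()
    q = q / (np.linalg.norm(q) + 1e-12)
    best = 0.0
    for offset in range(len(reference) - len(query) + 1):
        seg = reference[offset:offset + len(query)]
        seg = seg - seg.mean()
        best = max(best, float(np.dot(q, seg) /
                               (np.linalg.norm(seg) + 1e-12)))
    return best
```

Note the cost: one dot product per offset per song, versus a handful of constant-time hash lookups for the fingerprint approach.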
After all, what is music if not math?
I too, like Tool
What is life if it isn’t math. Everything is fucking math.