That's a good point, Kenny. I think with music they can actually analyse the 'sound' and 'style' of a song very well, because the fundamentals of harmonics and rhythm are mathematical, so they lend themselves to saying 'X' is similar to 'Y'.
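To make that concrete: at its crudest, a system might reduce each track to a vector of numbers (tempo, distortion, rough spectral profile and so on) and call two songs 'similar' when the vectors point the same way. Here's a toy sketch with entirely made-up features and numbers, using plain cosine similarity rather than whatever any real service actually does, just to show why the sonic comparison is the easy half:

```python
import numpy as np

# Purely illustrative: two tracks reduced to invented feature vectors
# (tempo, distortion, spectral brightness, etc.). Real systems use far
# richer representations, but the underlying idea is the same.
christian_metal = np.array([0.90, 0.80, 0.95, 0.70, 0.85])
satanic_metal   = np.array([0.88, 0.82, 0.97, 0.72, 0.90])

def cosine_similarity(a, b):
    """Similarity of two feature vectors; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Prints something close to 1.0: musically these are near-identical,
# even though the lyrics would point in opposite directions.
print(cosine_similarity(christian_metal, satanic_metal))
```

Nothing in a vector like that knows what the words are about, which is exactly the gap I'm getting at below.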
But I wonder if, absent textual tags (which there would always be, so it's a moot point), their recommendation algorithm would, in musical terms, recommend satanic metal to someone who listened to Christian metal with a very similar musical style.
The music bit is quite easy; the words, less so.
And that's an extreme example - capturing the subtleties of the 'voice' of a writer is far harder.
One of my early experiments with ChatGPT, for example, was to ask it to write the lyrics for 'a song in the style of Leonard Cohen, about an old priest comforting a boy whose father has been killed at war'. That example just popped into my head for some reason, I guess because of Ukraine, and because Cohen seemed a useful choice as a poetical songwriter for whom lyrics were supremely important. Neil Simon could have worked as well, maybe.
It came up with a plausible, if hamfisted, attempt.
The chorus involved repetitions of 'Hallelujah', which was kind of clever, given the religious element, but also very copy-paste.
I then asked it to do the same in the style of Bernie Taupin and The Beastie Boys.
It hardly changed anything (except swapping out the borrowed trademark 'Hallelujah' for something else), and the Beastie Boys version wasn't even a viable rap.
Musically, I imagine AI at the same stage of development could do a much better job of reproducing the style of the three artists (Elton John in the Taupin case).
The feel and underlying voice of language are hard to perceive and replicate algorithmically, I think.
At least I hope so...