Why the quality of audio analysis metadatasets matters for music
I've been thinking for some time about the derived metadata that Spotify and other digital streaming services construct from the music on their platforms. Spotify's current business revolves around providing online streaming access to music and podcasts, as well as related content like playlists, to users.
Like any good SaaS business, their primary goal is to acquire and keep customers. As a digital streaming service business, the intertwined goal is to provide quality content to those customers. The best way to do both is to derive and collect metadata, not only about customer usage patterns but also about the content being delivered to those customers. The more you know about the content being delivered, the more you can create new distribution mechanisms for the content and make informed deals to acquire new content.
Creating metadatasets from the intellectual property of artists #
Today, when labels and distributors provide music to digital streaming services (artists can't provide it directly), they grant those services permission to make the music tracks available to users of the services. Based on my review of the Spotify Terms and Conditions of Use, the Spotify for Artists Terms and Conditions, and the distribution agreement for a commonly-used distribution service, DistroKid, artists don't grant explicit permission for what services do next: create metadata about those tracks. A relevant exception in the DistroKid distribution agreement is that if artists sign up for an additional service, DistroLock, they are bound by an additional addendum granting the service permission to create an audio fingerprint that uniquely represents the track, so that it can be used for copyright enforcement and possibly to pay out royalties.
In his book Metadata, Jeffrey Pomerantz defines metadata as "a means by which the complexity of an object is represented in a simpler form." In this case, streaming services like Spotify create different types of metadata to represent the complexity of music with various audio features, audio analysis statistics, and audio fingerprints. The services also gather "use metadata" about how customers use their services: at what point in a song a person hits skip, what devices they use to listen, their location when listening, and other data points.
Creating metadatasets is crucial to delivering content #
Pandora has patents for the types of music metadata that they create, the data behind the "Music Genome Project". Spotify also has patents to do the same (including a crucial one from their acquisition of the Echo Nest), as well as many that cover the various applications of those metadata.
These companies can use these metadatasets as marketing tools, as we've seen with the #SpotifyWrapped campaign; to correlate the music metadata with use metadata, such as to create new music marketing methods like contextual playlists; to select advertising that matches up sonically well with the tracks being listened to; and to provide these insights to artists and labels, making them more reliant on their service as a distribution and marketing mechanism.
Spotify currently provides a subset of the insights they derive from the combination of use metadata with music track metadata to artists with the Spotify for Artists service. The end user license agreement for the service makes it clear that it's a free service and Spotify cannot be held responsible for the relative accuracy of the data available. Emphasis mine:
> Spotify for Artists is a free service that we are providing to you for use at our discretion. Spotify for Artists may provide you with the ability to view demographic data on your fans and usage data of your music. While we work hard to ensure the accuracy of the data, we do not guarantee that the Spotify for Artists Service or the data that we collect from the Service will be error-free or that mistakes, including mistakes in the data insights that we provide to you, will not happen from time to time.
It's likely that some labels have already negotiated access to various insights and metadata that Spotify creates and collects.
Other valuable insights that can be derived from these metadatasets include: the types of music that people listen to in certain cities, which tracks are most popular in certain cities, what types of music people tend to listen to in different seasons, and even what types of music people of different ages, genders, education levels, and classes tend to listen to.
These insights, provided to artists, labels, and distributors, guide marketing campaigns, tour planning, artist-specific investments, and even music production styles. Thing is, it's tough to decipher exactly how these companies create the metadatasets that all these valuable insights rely on, and how (if at all) the accuracy of that metadata is validated.
How the metadatasets get made #
In an episode of Vox Earworm, the journalist Matt Daniels of The Pudding and Estelle Caswell of Vox briefly discuss how the metadatasets of Spotify and Pandora were created, pointing out that Spotify's catalog has 35 million songs, with a metadataset that is algorithmically generated, while Pandora has only 2 million songs, with 450 total attributes defined and applied to those songs by a combination of trained musicologists and algorithms. Their discussion starts at 1:45 in the episode and continues for about 90 seconds.
The features in the metadatasets are defined by algorithms, which could have been written by trained musicologists, amateur musicians, or data scientists without musical training or expertise. The specific features collected by Spotify are publicly documented in their audio features API and audio analysis API endpoints, and both include metadata that objectively describes each track, such as duration, as well as more subjective features such as acousticness, liveness, valence, and instrumentalness.
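To make the shape of that metadata concrete, here is a minimal Python sketch of calling the audio features endpoint for a single track. It assumes you already have a valid OAuth access token and a track ID (the values below are placeholders and an example ID from Spotify's documentation), and it uses the `requests` library; the endpoint path and field names come from Spotify's public Web API documentation.

```python
import requests

# Minimal sketch: fetch Spotify's derived audio features for one track.
# ACCESS_TOKEN is a placeholder; you must supply your own OAuth token.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
TRACK_ID = "11dFghVXANMlKmJXsNCbNl"  # example track ID from Spotify's docs

resp = requests.get(
    f"https://api.spotify.com/v1/audio-features/{TRACK_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
features = resp.json()

# Objective metadata sits next to the subjective, modeled features.
print("duration (ms):", features["duration_ms"])
for name in ("acousticness", "liveness", "valence", "instrumentalness", "danceability"):
    print(name, features[name])
```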
The more detailed audio analysis API splits each track into sections and segments, and computes features and confidence levels for each section and segment.
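As a rough illustration of what that response looks like once fetched, here is a hedged sketch that walks the `sections` and `segments` arrays of an audio analysis response. It assumes `analysis` is the already-parsed JSON returned by the audio analysis endpoint (see the previous example for authentication); the field names follow the documented response shape.

```python
# Sketch: summarize an audio-analysis response already fetched from
# GET https://api.spotify.com/v1/audio-analysis/{id}.
def summarize_analysis(analysis: dict) -> None:
    sections = analysis.get("sections", [])
    segments = analysis.get("segments", [])
    print(f"{len(sections)} sections, {len(segments)} segments")

    for i, section in enumerate(sections):
        # Each section carries its own feature estimates plus a confidence score.
        print(
            f"section {i}: start={section['start']:.1f}s "
            f"tempo={section['tempo']:.1f} "
            f"loudness={section['loudness']:.1f}dB "
            f"confidence={section['confidence']:.2f}"
        )

    # Segments are much finer-grained; flag the ones the analysis is least sure about.
    low_confidence = [s for s in segments if s["confidence"] < 0.2]
    print(f"{len(low_confidence)} segments below 0.2 confidence")
```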
Spotify, building off the Echo Nest technology, relies on web scraping and algorithms to create these metadatasets. According to a patent filed by the Echo Nest in 2011, three different types of metadata are created:
- Acoustic metadata, which is the "numerical or mathematical representation of the sound of a track",
- Cultural metadata, which "refers to text-based information describing listener's reactions to a track or song", and
- Explicit metadata, which "refers to factual or explicit information relating to music".
The explicit metadata is information such as "track name" or "artist name" or "composer", while the acoustic metadata can be an acoustic fingerprint to represent the song, or can include features like "tempo, rhythm, beats, tatums, or structure, and spectral information such as melody, pitch, harmony, or timbre." The cultural metadata is where the more subjective features come from, and it can come from a variety of different subjective sources: "expert opinion such as music reviews", "listeners through Web sites, chat rooms, blogs, surveys, and the like", as well as information "generated by a community of listeners and automatically retrieved from Internet sites, chat rooms, blogs, and the like." The patent gives other examples such as "sales data, shared collections, lists of favorite songs, and any text information that may be used to describe, rank, or interpret music." It can also build off of existing databases made available by companies like Gracenote, AllMusic (referenced as AMG, now RhythmOne, in the patent), and others.
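One way to picture how those three categories relate, purely as an illustration, is a small per-track data structure grouping them together. The class and field names below are my own, not taken from the patent.

```python
from dataclasses import dataclass, field

# Hypothetical grouping of the three metadata categories the Echo Nest
# patent describes. Names and fields are illustrative, not from the patent.
@dataclass
class ExplicitMetadata:
    track_name: str
    artist_name: str
    composer: str

@dataclass
class AcousticMetadata:
    fingerprint: bytes                  # acoustic fingerprint representing the track
    tempo: float
    beats: list[float] = field(default_factory=list)
    timbre: list[float] = field(default_factory=list)

@dataclass
class CulturalMetadata:
    # Text-based descriptions of listener reactions: reviews, blog posts,
    # survey responses, sales data, and so on.
    sources: list[str] = field(default_factory=list)
    descriptors: list[str] = field(default_factory=list)

@dataclass
class TrackMetadata:
    explicit: ExplicitMetadata
    acoustic: AcousticMetadata
    cultural: CulturalMetadata
```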
Pandora doesn't share an API for their Music Genome Project data, but they do mention that it contains 450 total attributes, or features, in the data. I dug into their patents, and it is clear that the number of features used varies depending on the type of music; the features given as examples in the patents include vocalist gender, distortion in electric guitar, type of background vocals, genre, era, syncopation, and whether a lead vocal is present in the song. Pandora uses a combination of musicologists and algorithms to assign values.
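The matching patent listed in the Pandora background section below (US7003515B1) describes comparing tracks across those hand-assigned attribute values. A hypothetical sketch of that idea, with invented attribute names, scales, and weights (the real Music Genome Project uses on the order of 450 attributes), might look like this:

```python
import math

# Illustrative weighted-distance comparison between two tracks described by
# hand-assigned attribute values; smaller distance means more similar.
def weighted_distance(a: dict, b: dict, weights: dict) -> float:
    return math.sqrt(sum(weights[k] * (a[k] - b[k]) ** 2 for k in weights))

track_a = {"syncopation": 0.7, "electric_guitar_distortion": 0.9, "vocal_prominence": 0.4}
track_b = {"syncopation": 0.6, "electric_guitar_distortion": 0.2, "vocal_prominence": 0.5}
weights = {"syncopation": 1.0, "electric_guitar_distortion": 2.0, "vocal_prominence": 1.0}

print(weighted_distance(track_a, track_b, weights))
```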
Representation in the metadatasets #
We know a little about how Spotify and Pandora create their metadatasets. We know less about how representative those metadatasets are, both in terms of feature coverage and music coverage.
We barely know which features are available for Pandora, and even with a decent idea of what Spotify has available, it's possible that the features that exist in the metadatasets are incomplete. The features in the metadatasets could be limited to those that were the easiest to compute at the time, those that are deemed interesting by the creators, or even those that are highly correlated with profitable user behavior. It's expensive to create, store, and apply new metadata features, so businesses must have a clear value proposition before developing new models or tasking more musicologists with the creation of a new audio feature.
Based on the locations of Spotify, Pandora, and the companies informing their metadatasets, it's likely that the datasets that these metadatasets and their features are built on aren't representative of music worldwide, but are instead biased toward music that is easily available in those geographic locations.
While the size of the datasets that underpin the metadata creation varies (Pandora has 2 million tracks, Spotify has 35 million), the representativeness of the data sample is more important than its size. And that is a variable that we have almost no information about.
I haven't done (and can't do) the data analysis to determine the distribution of tracks in those giant datasets. Without that I can only speculate:
- It's possible that both of them have a disproportionate concentration of artists that create and record music in the United States and Western Europe.
- It's almost certain that both of those datasets contain only music recorded in the digital or digital-adjacent eras. Music recorded in analog tape eras that hasn't been digitized can't be represented in the datasets.
- It's unlikely that the datasets include music by artists lacking the internet connection necessary to digitally distribute their music, even if it is digitized.
We could learn more about the representativeness of the datasets used to create the metadatasets if we knew more about how the metadatasets themselves are validated. But again, that's another area that lacks clarity.
How the metadatasets get validated… or not #
The uniqueness of their businesses is built on these metadatasets, but it doesn't seem like there are processes in place to validate the features developed and used by Pandora and Spotify across the industry. There's no central database of tracks that I know of, a "Tom's Diner" of audio feature validation, that can be used to tune the accuracy of audio features that exist in multiple industry metadatasets. Instead, much like the lossy compression of an MP3, there is just a "close enough for our purposes" approximation for validation.
Pandora uses its musicologists to validate the features assigned to tracks by other musicologists and by algorithms, and uses a selection and ranking module to arrive at a "wisdom of the crowd of experts" result for the eventual list of features associated with a track. The accuracy of a feature is a relative score based on how many other experts associated that same feature with a track.
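As a toy illustration of that relative scoring idea (not Pandora's actual implementation), a feature's score could be the fraction of experts who independently assigned it to the same track. The function and variable names below are mine.

```python
from collections import Counter

# Hypothetical "wisdom of the crowd of experts" scoring: each expert supplies
# a set of feature labels for the same track, and a feature's relative score
# is the share of experts who assigned it.
def feature_agreement(expert_labels: list[set[str]]) -> dict[str, float]:
    counts = Counter(feature for labels in expert_labels for feature in labels)
    n_experts = len(expert_labels)
    return {feature: count / n_experts for feature, count in counts.items()}

labels = [
    {"minor key", "syncopation", "breathy vocals"},
    {"minor key", "syncopation"},
    {"minor key", "acoustic guitar"},
]
scores = feature_agreement(labels)
consensus = {f for f, score in scores.items() if score > 0.5}  # majority agreement
print(scores, consensus)
```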
Spotify uses a prediction model to predict the subjective (and harder-to-compute) features such as liveness, valence, danceability, and presence of spoken word lyrics. In the patent filing, they disclose the validation methods used for the features predicted by that model:
- Comparing the results of the model to a "ground truth dataset" created from already-labeled data sourced in part from "crowdsourced online music datasets such as SOUNDCLOUD, LAST.FM, and the like" [sic].
- Evaluating the percentage of true positives, false negatives, and true negatives returned by the model predictions for features with a binary value (true or false).
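For the binary-valued features, that evaluation amounts to comparing predictions against ground-truth labels and counting the outcomes. Here is a minimal sketch with made-up data; the patent does not specify thresholds or datasets.

```python
# Count confusion-matrix outcomes for a binary feature (e.g. live vs. not live)
# predicted by a model, against already-labeled ground truth.
def confusion_counts(predicted: list[bool], actual: list[bool]) -> dict[str, int]:
    counts = {"tp": 0, "fp": 0, "fn": 0, "tn": 0}
    for p, a in zip(predicted, actual):
        if p and a:
            counts["tp"] += 1
        elif p and not a:
            counts["fp"] += 1
        elif not p and a:
            counts["fn"] += 1
        else:
            counts["tn"] += 1
    return counts

predicted = [True, True, False, False, True]
actual = [True, False, False, True, True]
counts = confusion_counts(predicted, actual)
total = sum(counts.values())
print({k: v / total for k, v in counts.items()})  # percentage of each outcome
```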
The patent then describes taking appropriate steps to bolster training data and improve coverage of the datasets to produce more accurate results in response to the validation results. However, since this is a patent filing rather than a blog post describing their data science practices, we don't know how often the prediction models and training datasets are updated, or what other methods are used to compile and validate the training datasets themselves.
Lacking an objectively true value for many of these audio features, it's difficult for services to reliably validate their metadatasets. In fact, rather than comparatively validating their metadatasets, many of the metadatasets are built on top of each other. The Spotify patent for the prediction model makes it clear that the "ground truth dataset" used for validation is partially sourced from other metadatasets. The Echo Nest patent that I discussed earlier makes it clear that different types of metadata can come from pre-existing metadatasets.
Without large-scale understanding of metadata validity across these existing metadatasets, it's likely that errors and biases in the metadata proliferate as new metadatasets are created. Eventually, that lack of quality metadata can have a disproportionate effect on the artists creating the music that this metadata is derived from.
Why metadata quality matters #
Spotify and Pandora both rely extensively on these metadatasets to deliver valuable streaming services to customers and to create engaging content like playlists and stations for their listeners. Spotify has positioned itself as a valuable distribution and marketing mechanism for artists, to the point that they've devised a new scheme where artists and labels can pay for privileges like prominent playlist placement or spotlights in Spotify.
Metadata underpins the business model of these companies, shaping our experience of music by directly affecting how music is distributed and consumed. But we donât know how valid the metadata is, we donât know if itâs biased, and we donât know how much of a feedback loop is involved in its interpretation to create new distribution and consumption mechanisms.
If these companies donât do more to improve the quality of metadata, artists can lose revenue and miss out on distribution opportunities. Listeners can get bored by the sameness of playlists, or the inaccurate interpretations of their radio station requests, and stop using Spotify and Pandora to discover new music. Without representative and valid metadata, music loses.
What went into writing this #
Over the past few months I read a lot that informed my thinking in this essay, or some of the points that I made, without being something I quoted or linked directly in the text. I'm also grateful for the conversations I had with my former colleague Jessica about this topic, and for the feedback that my former colleague Neal gave me on an earlier version of this post.
Spotify background #
- I read the Spotify API documentation for the audio features and audio analysis endpoints.
- The Spotify for Artists FAQ was informative, especially the following questions.
- How do I get my music on Spotify?
- My music is mixed up with another artist
- Whatâs a unique link?
- How does Fans Also Like work?
- How often are my stats updated in Spotify for Artists?
- How far back do my stats go?
- How does Spotify process my audio files?
- My track doesnât sound as loud as other tracks on Spotify. Why?
- Posts on the Spotify engineering blog, Spotify Labs.
- The Winding Road to Better Machine Learning Infrastructure Through Tensorflow Extended and Kubeflow
- Views From The Cloud: A History of Spotify's Journey to the Cloud, Part 1
- Spotify's Event Delivery – The Road to the Cloud (Part II)
- Spotify's Event Delivery – The Road to the Cloud (Part III)
- Spotify's Event Delivery – Life in the Cloud
- Analytics at Spotify
- Spotify Unwrapped: How we brought you a decade of data
- Big Data Processing at Spotify: The Road to Scio (Part 1)
- Big Data Processing at Spotify: The Road to Scio (Part 2)
- Scio 0.7: a deep dive
- Patents filed by Spotify or The Echo Nest, in an attempt to learn how they create music metadata.
- A Hacker Noon article by Sophia Ciocca: Spotify's Discover Weekly: How machine learning finds your new music
- This Hypebot article by Bruce Houghton: Spotify’s Paid Promotion Tool Is Called Marquee and Artists, Indie Labels Can’t Afford To Use It
Pandora background #
- An article in the New York Times Magazine by Rob Walker: The Song Decoders at Pandora
- A sponsored article in Forbes Insights by their Insights Team: Forbes Insights: How Pandora Knows What You Want To Hear Next
- Two essays in the East Bay Express:
- By Chris Parker: Personal Shoppers
- By Kara Platoni: Pandora’s Box
- Several Pandora patents in an attempt to learn about some of the features that they create and how they create them:
- US7003515B1 - Consumer item matching method and system
- US20160253416A1 - https://patents.google.com/patent/US20160253416A1/en
- US10088978B2 - Country-specific content recommendations in view of sparse country data
- US8306976B2 - Methods and systems for utilizing contextual feedback to generate and modify playlists
- US9729910B2 - Advertisement selection based on demographic information inferred from media item preferences
- US20160379274A1 - Relating Acoustic Features to Musicological Features For Selecting Audio with Similar Musical Characteristics
- US10129314B2 - Media feature determination for internet-based media streaming
- US10387489B1 - Selecting songs with a desired tempo
Other content #
- An article on Billboard by Emily White: Predicting What You Want To Hear: Music And Data Get It On
- A few Penny Fractions email newsletter missives by David Turner.
- A Water & Music Patreon post written by Cherie Hu: Decoding 8tracks' demise, and what it reveals about the state of music streaming
- An essay on Music Business Worldwide by Cherie Hu: Spotify Needs To Make A Decision About Its Future, Based On Whether It Actually Believes Its Own Mission Statement
- A Water & Music email newsletter missive written by Cherie Hu: Exclusive: Chartmetric’s inaugural six-month data report reveals hidden music trends beyond streaming
- A podcast episode from Chartmetric's podcast, How Music Charts: Global Music Marketing With Christine Osazuwa
- A series of posts on the Chartmetric blog by Jason Joven.
- An essay in The Guardian by Siraj Datoo: How Shazam uses big data to predict music’s next big artists
- An article on Toptalâs engineering blog by Jovan Jovanovic: How does Shazam work? Music Recognition Algorithms, Fingerprinting, and Processing
- A Medium post by Trey Cooper: How Shazam Works
- An essay in The Atlantic by Derek Thompson: The Shazam Effect
- Derek Thompsonâs book: Hit Makers: How to Succeed in an Age of Distraction
- Abe Winterâs blog post: The coming IP war over facts derived from books
- An article on Wired by Eliot Van Buskirk: 4 Ways One Big Database Would Help Music Fans, Industry
- An article on The Verge by Dani Deahl: Metadata is the biggest little problem plaguing the music industry
- An article on MakeUseOf by Dave Parrack: Music Geeks Can Now Edit Spotify’s Metadata (as of 2018, but no longer possible).
- An article on Mediumâs Cuepoint by Cherie Hu: How Has Streaming Affected our Identities as Music Collectors?
- My own essay on avoiding biased data analysis: Unbiased data analysis with the data-to-everything platform: unpacking the Splunk rebrand in an era of ethical data concerns
- This Newsweek article by Brian Moon: From Spotify to Shazam: How Big-Data Remade the Music Industry One Algorithm at a Time
- An essay in Artforum by Jace Clayton: Stream Logic: Jace Clayton on Carl Stone and close listening in the Spotify era
- An article on Music Week by Mark Sutherland: Tech it for granted: Why the music biz still needs instinct as well as data to succeed in the digital age
- An article on Ars Technica by Cathleen O'Grady: Spotify data shows how music preferences change with latitude