You might know it best from your stereo, but alongside its streaming service Spotify operates an API that stores a treasure trove of musical metadata.
This data is powered by The Echo Nest, an early innovator in the world of music intelligence, which Spotify uses to drive features like Discover Weekly, your taste profile, and mood playlists.
Using the Spotify API, you can pull a huge amount of data on individual tracks, artists, and even playlists – all of which provides ample inspiration for your visualisations and analysis.
While the documentation on their developer site may seem daunting for a beginner, the Alteryx community has developed a few tools that simplify the process.
Accessing the Spotify API
To access the Spotify API through Alteryx, you’ll need to download two macros from the Alteryx Gallery:
- Get Token for Spotify API – a macro that automates setting up your authentication.
- Get Audio Features for Several Tracks – a macro that pulls audio feature data for a set of tracks.
You’ll also need to set up a Developer account on Spotify.
Next, it’s simply a process of following the steps outlined in these blogs, and within a few minutes you’ll have a dataset containing information on the songs in your favourite playlist.
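If you're curious about what those macros are doing under the hood (or want to pull the same data outside Alteryx), the two calls can be sketched in Python. This is a minimal sketch assuming Spotify's standard Client Credentials flow and its public token and audio-features endpoints; the `build_*` helper names are my own invention, and no network request is actually sent here – you would pass each request object to `urllib.request.urlopen` yourself.

```python
import base64
import urllib.parse
import urllib.request

TOKEN_URL = "https://accounts.spotify.com/api/token"
FEATURES_URL = "https://api.spotify.com/v1/audio-features"


def build_token_request(client_id, client_secret):
    """Build the Client Credentials POST that Spotify's token endpoint expects.

    The client id/secret come from your Spotify Developer account and are
    sent Base64-encoded in a Basic Authorization header.
    """
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    data = urllib.parse.urlencode({"grant_type": "client_credentials"}).encode()
    return urllib.request.Request(
        TOKEN_URL,
        data=data,
        headers={"Authorization": f"Basic {creds}"},
        method="POST",
    )


def build_features_request(access_token, track_ids):
    """Build a GET for the audio features of several tracks at once.

    Track IDs are joined into a single comma-separated `ids` query parameter,
    and the access token from the first call goes in a Bearer header.
    """
    query = urllib.parse.urlencode({"ids": ",".join(track_ids)})
    return urllib.request.Request(
        f"{FEATURES_URL}?{query}",
        headers={"Authorization": f"Bearer {access_token}"},
    )
```

Passing the first request to `urllib.request.urlopen` returns a JSON body containing an `access_token`, which you then feed into the second helper – essentially what the two Gallery macros automate for you.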
Below you can see a sample of data I recently pulled on LCD Soundsystem using these macros.
Interpreting Song Feature Data
Together, these two macros can turn a playlist link into a rich dataset containing the audio features of each track – allowing for some cool visualisations.
Here’s a quick breakdown of what each measure means (full definitions):
Acousticness is a confidence measure of whether the track is acoustic.
Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity.
Energy represents a perceptual measure of intensity and activity, based on dynamic range, perceived loudness, timbre, onset rate, and general entropy.
Instrumentalness predicts whether a track contains no vocals. “Ooh” and “aah” sounds are treated as instrumental in this context; rap or spoken word tracks are clearly “vocal”.
Key is the key the track is in. Integers map to pitches using standard Pitch Class notation, e.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on.
Liveness detects the presence of an audience in the recording.
Loudness is the overall loudness of a track in decibels (dB). Values are averaged across the entire track and are useful for comparing the relative loudness of tracks.
Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Unlike other measures, which are represented in a range from 0 – 1, this is a Boolean in which major is represented by 1 and minor by 0.
Speechiness detects the presence of spoken words in a track.
Tempo is the overall estimated tempo of a track in beats per minute (BPM).
Time Signature is an estimated overall time signature of a track. The time signature (meter) is a notational convention specifying how many beats are in each bar (or measure).
Valence measures the musical positiveness conveyed by a track.
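Some of these fields are more dashboard-friendly once decoded – key and mode in particular arrive as bare integers. Here's a small sketch in Python that turns the key/mode pair into a readable label, using the Pitch Class mapping described above (Spotify's documentation also notes that key is -1 when no key was detected; the `describe_key` name is my own):

```python
# Pitch Class notation: index 0 = C, 1 = C♯/D♭, 2 = D, and so on.
PITCH_CLASSES = [
    "C", "C♯/D♭", "D", "D♯/E♭", "E", "F",
    "F♯/G♭", "G", "G♯/A♭", "A", "A♯/B♭", "B",
]


def describe_key(key, mode):
    """Translate Spotify's integer key/mode pair into a readable label.

    key: 0-11 pitch class, or -1 when no key was detected.
    mode: 1 = major, 0 = minor.
    """
    if key == -1:
        return "no key detected"
    quality = "major" if mode == 1 else "minor"
    return f"{PITCH_CLASSES[key]} {quality}"
```

Running this over the dataset before it reaches your visualisation tool saves you from building the same lookup in a calculated field.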
Using these features, I recently created a dashboard that shows how the music of LCD Soundsystem has changed across their career.
Going Deeper Into Spotify
Audio track features are just one of the many musical components you can extract with the Spotify API – to dive deeper, you can look up the documentation here.
Junya Wang has also posted a cool Tableau Tip on the Data School Blog showing how to embed dynamic tracks into a dashboard.
Previous Data Schoolers have also put together a range of cool dashboards showcasing different elements of music:
Nicholas Hills: Australian vs UK music taste
Anders Wold: Audio Features of the Top 50 songs for each year 2010-19
Ivy Yin: What music has my colleague been listening to?
Alex Taylor-Jackson: The importance of John Frusciante on the Red Hot Chili Peppers sound
Fairuz Khan: Mapping the Changes in my Top Songs in Spotify Across Different Time Ranges
So, what cool vizzes can you come up with using Spotify data? Post them below.