One of the key features of my recent Future Islands project was pre-rendering waveforms so that we had a cool visual to accompany audio playback. It's a topic I hadn't thought about since I worked at SoundCloud many years ago. Since we didn't stream the audio from Spotify on that project, I ended up extracting the waveform data using Meyda. (See the case study for more info.) However, when I finished up the project, I started to think about how I might actually be able to create waveform images from Spotify tracks.

Spotify doesn't really grant access to the full-length audio files (for good reason), which would be required to extract this data. In addition, from what I can tell, their Web Playback SDK does not expose the audio in a way that would let you generate this in real time using Web Audio. I was ready to give up when I recalled that Spotify's platform provides an Audio Analysis endpoint. This endpoint provides all sorts of interesting analysis of a track's structure and musical content, including rhythm, pitch, and timbre.

The object I was most interested in was Segments. These are sections of the track which contain roughly consistent sound. Each segment has many interesting properties, but I was most interested in three: the start point (in seconds), the duration (in seconds), and the max loudness (in decibels) of the segment. Using this data, I should be able to visualize the audio levels of the track.

Since I'm going to be simplifying this data before I use it for visualizations, I like using curl to download the data locally. This can be done very simply by passing the track ID and a Spotify access token. You can generate a temporary access token using the Spotify Platform console. Once you do, just add --output track.json to the curl command to download the data to a file.

```shell
curl -X "GET" " " \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  --output track.json
```

Preparing the Data

As I mentioned, the Audio Analysis endpoint provides all sorts of interesting data, and this makes the returned data size pretty large to work with in practical applications. I downloaded Future Islands' new track "Thrill" and the data amounted to 378kb. What I want to do is greatly simplify this data to only include an array of loudness levels from 0 to 1. I'll then use this array of levels to generate a waveform using HTML5 canvas or SVG.

I do this by writing a little Node script. First, we'll include the Node file system module and also our downloaded data.

```javascript
const fs = require('fs')
const data = require('./track.json')
```

Next, we'll create a variable for the track's duration, which is part of the data Spotify provides. Then, we'll map the segments data to only include the start, duration, and loudness properties.

```javascript
let segments = data.segments.map(segment => ({
  start: segment.start,
  duration: segment.duration,
  loudness: segment.loudness_max
}))
```

Later, when drawing each vertical line of the waveform, I first check if the x value is divisible by 8. If so, we establish which value from the waveform data we should use for this line.
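To make the simplification concrete, here is a minimal sketch of the whole pipeline: reduce segments to the three properties we care about, normalize the decibel loudness into the 0-to-1 range, and resample to a fixed number of levels. This is not the author's exact script: the field names (`track.duration`, `segments[].loudness_max`) follow Spotify's documented audio-analysis response, the tiny inline object stands in for the downloaded `track.json`, and the level count and min/max normalization are illustrative choices.

```javascript
// Minimal sketch: turn Spotify audio-analysis segments into a fixed
// number of loudness levels in [0, 1]. The inline object below is a
// stand-in for the real `require('./track.json')` payload.
const data = {
  track: { duration: 4 }, // seconds
  segments: [
    { start: 0, duration: 1, loudness_max: -30 },
    { start: 1, duration: 1, loudness_max: -10 },
    { start: 2, duration: 1, loudness_max: -20 },
    { start: 3, duration: 1, loudness_max: -5 }
  ]
}

const duration = data.track.duration

// Keep only the three properties we care about.
const segments = data.segments.map(segment => ({
  start: segment.start,
  duration: segment.duration,
  loudness: segment.loudness_max
}))

// Loudness arrives in decibels (negative numbers; 0 dB is loudest).
// Normalize against the quietest and loudest segments of this track.
const min = Math.min(...segments.map(s => s.loudness))
const max = Math.max(...segments.map(s => s.loudness))

// Resample to a fixed number of levels: for each slice of the track's
// timeline, find the segment playing at that moment and take its
// normalized loudness. LEVEL_COUNT is an arbitrary illustrative choice.
const LEVEL_COUNT = 8
const levels = []
for (let i = 0; i < LEVEL_COUNT; i++) {
  const t = (i / LEVEL_COUNT) * duration
  const segment = segments.find(s => t >= s.start && t < s.start + s.duration)
  levels.push(segment ? (segment.loudness - min) / (max - min) : 0)
}

console.log(levels)
```

An array like this is trivial to feed into a canvas or SVG renderer: each value becomes the height of one vertical bar, which is presumably where the "every 8th x value" check in the drawing loop comes in.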