
The audio features of sleep music: Universal and subgroup characteristics

  • Rebecca Jane Scarratt,

    Roles Data curation, Formal analysis, Investigation, Methodology, Validation, Visualization, Writing – original draft

    Affiliations Radboud University, Nijmegen, The Netherlands, Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark

  • Ole Adrian Heggli,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark

  • Peter Vuust,

    Roles Conceptualization, Project administration, Resources, Supervision, Writing – review & editing

    Affiliation Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark

  • Kira Vibe Jespersen

    Roles Conceptualization, Investigation, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing

    kira@clin.au.dk

    Affiliation Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark

Abstract

Throughout history, lullabies have been used to help children sleep, and today, with the increasing accessibility of recorded music, many people report listening to music as a tool to improve sleep. Nevertheless, we know very little about this common human habit. In this study, we elucidated the characteristics of music associated with sleep by extracting audio features from a large number of tracks (N = 225,626) retrieved from sleep playlists at the global streaming platform Spotify. Compared to music in general, we found that sleep music was softer and slower; it was more often instrumental (i.e. without lyrics) and played on acoustic instruments. Yet, a large amount of variation was present in sleep music, which clustered into six distinct subgroups. Strikingly, three of the subgroups included popular tracks that were faster, louder, and more energetic than average sleep music. The findings reveal previously unknown aspects of the audio features of sleep music and highlight the individual variation in the choice of music used for sleep. By using digital traces, we were able to determine the universal and subgroup characteristics of sleep music in a unique, global dataset, advancing our understanding of how humans use music to regulate their behaviour in everyday life.

Introduction

Despite sleep being essential for human health and well-being, sleep problems are increasing in modern society [1–3]. Although some people seek professional help for their sleep problems, many choose to initiate self-help strategies such as listening to music [4–6]. Indeed, epidemiological studies show that up to 46% of respondents report using music to help themselves fall asleep [6–8], and music listening can significantly improve sleep across adult populations [9–11]. However, it is not well understood what defines the music that people use for sleep. Are there specific universal features characterising music used for sleep? Or may music be used as a sleep aid independently of its musical features? In this study, we address these questions using big data from the global streaming service Spotify.

The habit of using music for sleep improvement may be rooted in the ubiquitous propensity of caregivers to sing lullabies to their babies [12, 13]. Lullabies are often sung to babies to assist with falling asleep, and research indicates that even unfamiliar lullabies from different cultures decrease arousal, heart rate and pupil size in babies [14]. As such, it has been hypothesised that music facilitates sleep by reducing arousal [15–17]. This may occur physiologically, psychologically through a pleasurable emotional response, or through the music acting as a distraction from stressful thoughts. In general, it has been argued that in order to facilitate a relaxation response, music should have simple repetitive rhythms and melodies, small changes in dynamics, slow tempi (around 60–80 BPM), no percussive instruments, and minimal vocalisations [18–20]. However, these claims have not been investigated in relation to sleep.

Previous research on the characteristics of sleep music is limited by the use of qualitative self-reports with relatively small amounts of data, usually from geographically restricted areas. One survey study based in the UK (N = 651) found a large diversity in the music used for sleep and concluded that the choice of music was driven more by individual differences than by any consistent type of music [17]. However, that study only collected information on artists and genres and did not examine the specific characteristics and audio features of the actual music. Similarly, an Australian survey study of students (N = 161) found that music that aided sleep was characterised by medium tempo, legato articulation, major mode and the presence of lyrics [21]. Because that study was restricted to only 167 pieces of music in a local student population, it is unlikely to represent a full picture of the type of music used for sleep. Therefore, a large global sample investigating not only genre but also the audio features of the music is important to understand the characteristics of music used for sleep.

Today, music listening very often takes place via international streaming services, and this allows for the collection of big data on sleep music from around the globe [22]. In 2019, the International Federation of the Phonographic Industry reported that 89% of 34,000 surveyed internet users listened to music via a streaming service such as Apple Music, Spotify or YouTube Music [23]. Among these services, Spotify stands out with over 320 million listeners worldwide in 2020 [24, 25]. In addition, Spotify offers an easily accessible API (application programming interface), allowing users and researchers to pull metadata and pre-calculated audio features from millions of unique tracks [26, 27].

The audio features available from Spotify describe both basic features of recorded music, such as its tempo and loudness, and compound measures indicative of, for instance, a particular track’s Danceability and Acousticness. While the calculations behind these audio features are not publicly available, they nonetheless provide a rich source of perceptually relevant information, in particular for quantifying differences between and within datasets using the same set of audio features. Leveraging these audio features allows us to use Spotify as a platform for investigating sleep-associated music in a representative industrialised population [28, 29].

By amalgamating data from Spotify, we built a large database of music associated with sleep, along with related metadata and audio features. Using multiple analysis approaches, we used this dataset to determine both universal and subgroup characteristics of sleep music.

Materials and methods

Building the Sleep Playlist Dataset

We used Spotify to build a dataset of sleep-associated playlists of musical tracks. We used the playlist search function in the Spotify desktop client, searching for all playlists including a word from the word family of “sleep” (e.g. sleep, sleepy, sleeping) either in the title or in the description. The search also returned results in different languages, such as dormir, dormire, slaap, søve etc. With this inclusive search, we aimed to retrieve all relevant playlists. At the same time, we wanted to ensure that the playlists reflected the use of music for human sleep, and we therefore developed four exclusion criteria: we excluded playlists aimed at dogs or other pets, non-music playlists (e.g. podcasts, ASMR and nature sounds), and playlists where the word sleep did not refer to the use of music for sleep (e.g. band names and soundtracks including the word ‘sleep’). To make sure our dataset was representative of general trends in sleep music, and not just individual idiosyncrasies, we also excluded playlists with fewer than 100 followers.
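
The playlist search itself was performed manually in the Spotify desktop client. Purely as an illustration of the procedure, a roughly comparable programmatic search through the Spotipy library might look like the following sketch; the credentials, the keyword list, and the follower filter shown here are placeholders, and the manual screening against the other exclusion criteria is not reproduced.

    import spotipy
    from spotipy.oauth2 import SpotifyClientCredentials

    # Placeholder credentials; a real client ID/secret from the Spotify
    # developer dashboard would be required.
    sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
        client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

    # Illustrative subset of the 'sleep' word family used in the search.
    keywords = ["sleep", "sleepy", "sleeping", "dormir", "dormire", "slaap"]

    candidate_playlists = {}
    for word in keywords:
        results = sp.search(q=word, type="playlist", limit=50)
        for item in results["playlists"]["items"]:
            # Keep playlists mentioning the keyword in the title or description.
            text = (item["name"] + " " + (item.get("description") or "")).lower()
            if word in text:
                candidate_playlists[item["id"]] = item["name"]

    # Follower counts are not returned by the search endpoint, so they are
    # fetched per playlist before applying the >= 100 followers criterion.
    representative = []
    for pid, name in candidate_playlists.items():
        followers = sp.playlist(pid, fields="followers.total")["followers"]["total"]
        if followers >= 100:
            representative.append((pid, name, followers))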

The search was performed during the fall of 2020. As the exact number of sleep-related playlists on the Spotify platform is only available with access to Spotify’s proprietary database, we stopped data collection at 1,263 playlists. A total of 248 playlists were excluded for having fewer than 100 followers or for meeting one of the other predetermined exclusion criteria (S1 Table). The title, content and purpose of 69 playlists were ambiguous, so a qualitative review was performed by two of the authors, who inspected each playlist’s title, description, visuals and content. Of these, 29 were excluded, leaving the total number of playlists in our dataset at 986. A flow diagram of the procedure can be found in Fig 1. The dataset includes no personal data from Spotify users, and the data collection complies with Spotify’s terms of use [30]. This thorough assessment procedure aimed to ensure that the included playlists were indeed associated with sleep. While we cannot experimentally ascertain their use, the general descriptions of the playlists, such as “Soothing minimalist ambient for deep sleep” or “A series of soothing sounds to softly send you to sweet, sweet slumber”, and the visual illustrations accompanying many of the playlists, such as a photo of a bed, a pillow, or a sleeping person, indicate an association with sleep. Furthermore, all cases in which the mention of the word “sleep” was ambiguous were evaluated individually and excluded if any doubt persisted. While it is possible that some included playlists were not intended for sleep-related purposes, we believe the size of the dataset reduces their potential impact.

Fig 1. Flowchart of the playlist search and exclusion procedure.

We acquired 1,263 playlists by searching Spotify for ‘sleep’ words in the title or description. 248 were excluded based on the four exclusion criteria: non-human (e.g. music to help your dog sleep), non-music (e.g. speech, ASMR, nature sounds), non-sleep (e.g. sleep as part of a band name or soundtrack), or non-representative (fewer than 100 followers). 69 playlists had ambiguous titles (such as “NO SLEEP”) and were qualitatively reviewed, leading to 29 additional exclusions. One playlist had unretrievable metadata and was excluded. The final dataset included 985 playlists.

https://doi.org/10.1371/journal.pone.0278813.g001

For each playlist, the playlist link, title, description, creator, number of followers, number of tracks and duration were noted. We used Spotify’s API through Spotipy in Python to access and extract audio features from the tracks included in the Sleep Playlist Dataset (SPD). For one of the playlists, we were unable to access metadata and audio features, leaving the dataset used for further analysis at 985 playlists holding a total of 225,626 tracks. Of these tracks, 95,476 appeared in multiple playlists, leaving 130,150 unique tracks. In terms of followers, the playlists had a median following of 1,932 users, with a minimum of 102 and a maximum of 3,982,105. The playlists had a median of 434 tracks, with a minimum of 2 and a maximum of 9,991 (S2 Table).
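
As a minimal sketch of this extraction step (assuming an authenticated Spotipy client sp as above and a playlist ID), the track IDs and their precalculated audio features could be pulled roughly as follows; error handling and rate limiting are omitted.

    def playlist_track_ids(sp, playlist_id):
        """Collect all track IDs of a playlist, following pagination."""
        ids = []
        page = sp.playlist_items(playlist_id, additional_types=("track",))
        while page:
            for entry in page["items"]:
                track = entry.get("track")
                if track and track.get("id"):
                    ids.append(track["id"])
            page = sp.next(page)  # returns None when there are no further pages
        return ids

    def playlist_audio_features(sp, playlist_id):
        """Fetch Spotify's precalculated audio features in batches of 100 tracks."""
        ids = playlist_track_ids(sp, playlist_id)
        features = []
        for start in range(0, len(ids), 100):
            features.extend(sp.audio_features(ids[start:start + 100]))
        return [f for f in features if f is not None]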

The precalculated audio features available from Spotify cover a wide range of both basic and compound musical measures. Notably, as the calculation of these audio features is proprietary, we are unable to determine exactly which calculations and transformations underlie each feature. Therefore, we base our interpretation of the audio features on Spotify’s descriptions in their API reference manual [31], a summary of which is provided in Table 1.

Table 1. Overview of the audio features that are accessible through the Spotify API and their descriptions as given by Spotify [31].

https://doi.org/10.1371/journal.pone.0278813.t001

Genre

Spotify provides highly detailed genre descriptions of its tracks, with examples such as “Icelandic post-punk” and “instrumental math rock”, and occasionally returns multiple of these genres. To provide a broader view of the genres included in our dataset, we applied a genre reduction algorithm [32]. This algorithm reduces the list of sub-genres provided by Spotify for a particular track such that G(x) = argmax_{y ∈ Y} Σ_i 1[y is a substring of x_i], where x is the list of sub-genres of a track, Y is the set of pre-determined main genres, and G(x) is the main genre, obtained by counting for each main genre y how many sub-genres x_i contain it as a substring and then choosing the main genre with the most occurrences. A Python implementation is available at GitHub.com/RebeccaJaneScarratt/SpotifySleepPlaylists.
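
As a rough illustration of the reduction described above (not the repository code itself), a minimal Python sketch could look as follows; the genre list shown is only an illustrative subset of the 31 main genres in Table 2.

    # Illustrative subset of the 31 main genres in Table 2.
    MAIN_GENRES = ["sleep", "ambient", "pop", "lo-fi", "classical", "jazz",
                   "rock", "r&b", "rap", "folk", "metal", "country"]

    def reduce_genre(sub_genres, main_genres=MAIN_GENRES):
        """Map a track's list of Spotify sub-genres to a single main genre.

        For each main genre y, count how many sub-genres x_i contain y as a
        substring, and return the main genre with the most occurrences
        (or None if no main genre matches any sub-genre)."""
        counts = {y: sum(y in x.lower() for x in sub_genres) for y in main_genres}
        best = max(counts, key=counts.get)
        return best if counts[best] > 0 else None

    # Example: a track tagged with highly specific sub-genres.
    print(reduce_genre(["icelandic post-punk", "instrumental math rock"]))  # -> "rock"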

Our list of main genres was based on the 23 STOMP genres [33], removing Oldies, adding the 5 genres introduced by Trahan et al. [17], and adding 4 further genres ourselves. For a full overview of the 31 genres, see Table 2.

Table 2. All genre categories used to reduce the genre tags.

https://doi.org/10.1371/journal.pone.0278813.t002

Selecting a control dataset

To assess the specific characteristics of sleep music, we selected the Music Streaming Sessions Dataset (MSSD) as a control dataset. This publicly available dataset was released by Spotify on CrowdAI [34] and contains audio features for approximately 3.7 million unique tracks that were listened to at all hours of the day. The MSSD was collected over multiple weeks in 2019 and is treated here as representative of general music listening on Spotify.

Analyses

To statistically assess the specific characteristics of sleep music, we compared the unique tracks from our Sleep Playlist Dataset (SPD) to the Music Streaming Sessions Dataset (MSSD). First, we compared the individual audio features between sleep and general music using Welch’s t-tests from the rstatix package, which account for unequal sample sizes and variances. All p-values were FDR-corrected. Second, we used linear discriminant analysis (LDA), implemented in the flipMultivariates package distributed by Displayr, to identify each audio feature’s contribution to separating the two datasets. Due to the unequal size of the SPD and the MSSD, data from the former were weighted by a factor of 28.48.
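
The comparison itself was carried out in R (rstatix and flipMultivariates). As a rough illustration of the same steps, the following Python sketch runs feature-wise Welch’s t-tests with FDR correction and computes Cohen’s d, assuming the SPD and MSSD audio features are available as pandas DataFrames with one column per feature; the LDA and its weighting step are not shown.

    import numpy as np
    import pandas as pd
    from scipy.stats import ttest_ind
    from statsmodels.stats.multitest import multipletests

    def cohens_d(a, b):
        """Cohen's d using a pooled standard deviation for unequal group sizes."""
        na, nb = len(a), len(b)
        pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                             (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
        return (np.mean(a) - np.mean(b)) / pooled_sd

    def compare_features(spd: pd.DataFrame, mssd: pd.DataFrame, features):
        """Welch's t-test per audio feature, FDR-corrected, with Cohen's d."""
        rows = []
        for feat in features:
            t, p = ttest_ind(spd[feat], mssd[feat], equal_var=False)  # Welch's test
            rows.append({"feature": feat, "t": t, "p": p,
                         "cohens_d": cohens_d(spd[feat], mssd[feat])})
        out = pd.DataFrame(rows)
        out["p_fdr"] = multipletests(out["p"], method="fdr_bh")[1]
        return out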

To assess the degree to which sleep music can be considered one homogeneous group of music, or whether different subgroups exist within this category, we used a k-means clustering approach. The clustering was performed using R’s built-in kmeans function, with a maximum of 1000 iterations. This approach partitions the data into k clusters by minimising the within-cluster variance. Selecting the optimal k depends on the intended outcome of the clustering, with lower values of k generally capturing larger clusters in the data. To determine the optimal k for our case, we applied the elbow method, wherein the total within-cluster sum of squares is computed for each value of k, here for k = 1 to 17. Inspecting these values revealed an optimal partition of the dataset into seven clusters.
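
The clustering was run with R’s built-in kmeans function; an equivalent sketch of the elbow procedure in Python (scikit-learn) is shown below. The feature matrix X and the standardisation of features are assumptions made for illustration only, as the exact preprocessing is not described here.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    def elbow_curve(X, k_range=range(1, 18), max_iter=1000, seed=0):
        """Total within-cluster sum of squares (inertia) for each candidate k."""
        # Scaling is an assumption here, to keep features with large ranges
        # (e.g. Tempo, Loudness) from dominating the distance computation.
        X_scaled = StandardScaler().fit_transform(X)
        wss = []
        for k in k_range:
            km = KMeans(n_clusters=k, max_iter=max_iter, n_init=10,
                        random_state=seed).fit(X_scaled)
            wss.append(km.inertia_)
        return np.array(wss)

    # After inspecting the elbow, fit the chosen partition (seven clusters here).
    # labels = KMeans(n_clusters=7, max_iter=1000, n_init=10,
    #                 random_state=0).fit_predict(StandardScaler().fit_transform(X))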

All statistical analyses were performed in RStudio version 1.3.959 using R version 4.0.0, running on Windows 10. The scripts used for analysing the dataset can be found at GitHub.com/RebeccaJaneScarratt/SpotifySleepPlaylists. Figures were made using ggplot2 and the RainCloudPlots package [35].

Results

Defining features of sleep music

The comparison between general music in the MSSD and sleep music in our SPD yielded statistically significant differences for all audio features (p < .001), for both the t-test comparisons and the LDA. To better interpret our results, we focus on effect size, as measured by Cohen’s d for the statistical comparisons and r2 for the LDA [36]. The results are illustrated in Fig 2.

Fig 2. Audio feature comparisons between sleep music in the SPD (green) and general music in the MSSD (orange).

The panels show the individual audio features, illustrated as smoothed density plots with an underlying box plot wherein the vertical line represents the median value, with the associated Cohen’s d for the comparison of sleep music versus general music.

https://doi.org/10.1371/journal.pone.0278813.g002

The largest effect sizes were found for a decrease in Loudness (Cohen’s d = -1.25) and Energy (Cohen’s d = -1.46) and for an increase in Acousticness (Cohen’s d = 1.20) and Instrumentalness (Cohen’s d = 1.10) in the SPD compared to the MSSD. Danceability (Cohen’s d = -0.64), Valence (Cohen’s d = -0.93), Tempo (Cohen’s d = -0.47), Liveness (Cohen’s d = -0.34), and Speechiness (Cohen’s d = -0.35) were all significantly lower in the SPD as compared to the MSSD. For a full overview, see Table 3.

Table 3. Statistical comparison and linear discriminant analysis of audio feature between the Music Streaming Sessions Dataset and the Sleep Playlist Dataset.

https://doi.org/10.1371/journal.pone.0278813.t003

The LDA performed at a correct classification rate of 78.61% (MSSD = 79.86%, SPD = 77.36%), well above chance levels. All audio features were found to significantly contribute to the classification. The best discriminator was Loudness (r2 = .09), followed by Energy (r2 = .06), Acousticness (r2 = .04), Instrumentalness (r2 = .04), Danceability (r2 = .02), Valence (r2 = .02), Tempo (r2 = .01), Liveness (r2 < .01), and Speechiness (r2 < .01).

Genre analysis

To paint a better picture of the music present in the Sleep Playlist Dataset, we reduced the many genre tags that Spotify assigns to each track to one single genre from the list in Table 2. The most popular genre was sleep, followed by pop, ambient and lo-fi (Table 4). 45,993 tracks had unknown genres, and 15,816 tracks, corresponding to 789 sub-genres, could not be categorised. However, as the most prevalent uncategorised sub-genre occurred only 817 times, none of the uncategorised sub-genres would appear in the top 20 genres present in the Sleep Playlist Dataset.

Table 4. Number of occurrences of each of the top 20 genre categories in the Sleep Playlist Dataset.

https://doi.org/10.1371/journal.pone.0278813.t004

Subgroup characteristics of sleep music

To assess the degree to which sleep music can be considered one homogeneous group of music, or whether different subgroups exist within this category, we performed a k-means clustering analysis that revealed seven distinct clusters of tracks. We merged two of these clusters, as their mean tempi were close to multiples of each other (140.6 BPM and 76.5 BPM) and their remaining audio features were highly similar. Tempo can be challenging to determine algorithmically, and half-time or double-time tempo is often measured instead of the actual tempo, so this occurrence is not surprising. Thus, our analysis revealed six musically meaningful subgroups of sleep music. Fig 3 illustrates the mean audio feature values for each subgroup in relation to the sample mean.

Fig 3. Overview of subgroups of sleep music.

The six clusters’ audio features are here shown in relation to the grand average value. A positive value indicates that the cluster is characterised by a relative increase in the audio feature’s value, and a negative value indicates a relative decrease.

https://doi.org/10.1371/journal.pone.0278813.g003

These clusters differ both in size and in the distribution of their audio features. To improve interpretability, we inspected the tracks included in each cluster in order to assign it a descriptive tag. The audio features of clusters 1, 2 and 3 are substantially different from the average, with low Instrumentalness, high Energy, high Tempo and high Loudness. Cluster 1 (N = 8,275) is characterised by high Speechiness, hence its name “Speechy Tracks”. It contains mainly rap, R&B or lo-fi tracks (S3 Table). Cluster 2 (N = 30,959) and cluster 3 (N = 30,721) are similar in their audio feature distributions and in the tracks they contain. They mostly contain popular songs of the moment, pop and indie tracks, with some lo-fi and R&B tracks. The main difference between them is that cluster 3 has high Acousticness whereas cluster 2 does not (Fig 3). Cluster 2 is therefore tagged “Radio Tracks”, and cluster 3 is tagged “Acoustic Radio Tracks”. Cluster 4 is the largest cluster (N = 117,237) and contains meditation tracks, healing music with nature sounds, continuous drone music and ambient music (S3 Table); it is tagged “Ambient Tracks”. Cluster 5 (N = 32,651) has higher Danceability than “Ambient Tracks”, and the tracks in this cluster are mostly instrumental compositions, either piano covers, classical or jazz instrumentals (S3 Table). Comparing tracks from cluster 5 and “Ambient Tracks” shows that tracks from cluster 5 usually have a stable pulse, which is expected in instrumental compositions but is often absent or less salient in “Ambient Tracks” due to the continuous and floating feel of ambient music. Cluster 5 is composed of non-ambient instrumental tracks, hence its tag “Instrumental Tracks”. Cluster 6 (N = 5,783) is defined by a high Liveness value, and many of the tracks in this cluster are general pop or Christian tracks, the latter of which tend to be recorded live.

Discussion

By building a large collection of musical tracks associated with sleep, we show that sleep music is characterised by lower Tempo, Loudness and Energy, and is more likely to have high Instrumentalness and Acousticness values, than general music. However, even within sleep music, a large variation in musical features remains. Our results show that sleep music can be divided into six distinct clusters, with half of the clusters mirroring the characteristics of sleep music overall and half having higher Energy and lower Instrumentalness.

Previous studies have focused on the music genres used for sleep. A British survey study found that classical music was the most frequently mentioned genre (32%), followed by rock (11%), pop (8%), acoustic (7%), jazz (6%), soundtrack (6%) and ambient (6%) [17]. Similarly, an Australian survey study found that of the pieces of music participants rated as successfully helping them fall asleep, 18.5% were classical, 12.3% pop, 12.3% ambient, 10.8% folk and 10% alternative, with 11 different genres in total [21]. Interestingly, the most frequent genres in the current study were sleep, pop, ambient and lo-fi, and classical music was only the 7th most frequent. The incongruence of findings could be because both previous studies were based in a single country with a limited number of participants, whereas the current study used a global approach. That being said, the studies agree that no single type of music dominates what the general population listens to in order to fall asleep, accentuating the need for music-based sleep interventions to include many different choices of genre [17, 21].

In addition to genre characteristics, the present study adds to the current knowledge base by examining the audio features of the music. Overall, the characteristics of sleep music revealed by this study are in line with previous research on diurnal fluctuations in music listening behaviour. A recent study found that reduced tempo, loudness and energy were characteristic of music listened to during the night and the early morning [26]. Similarly, the average musical intensity has been found to decrease during the evening hours [27]. In addition, experimental research has highlighted the importance of low tempo and loudness for arousal reduction in response to music [37–39]. The importance of a slow tempo may be explained by the entrainment of autonomous biological oscillators such as respiration and heart rate to external stimuli like the beat of the music [40, 41]. There is also evidence for neural entrainment to musical rhythms at both beat and meter frequencies [42, 43]. Thus, it could be argued that music with a slow tempo may promote sleep by enhancing low-frequency activity in the brain [44].

Even though our findings provide evidence of the general soothing characteristics of sleep music, we also show that there is much more to sleep music than standard relaxation music. Our results reveal that sleep-associated music varies substantially in its audio features and musical characteristics. This large variation is accentuated by the six subgroups we identified based on their audio features. The largest subgroup of the Sleep Playlist Dataset was “Ambient Tracks”, which is the most expected type of music used for relaxation, as it has low Danceability and Energy, and high Instrumentalness and Acousticness. These represent the universal and predominant characteristics of music used for sleep. However, different combinations of audio features were found in the other subgroups (“Acoustic Radio Tracks”, “Radio Tracks”, “Speechy Tracks”). Surprisingly, these subgroups included popular contemporary tracks, which have high Energy and Danceability, and low Instrumentalness and Acousticness. For example, by counting the number of times a given track occurs in the playlists included in our dataset, we find that the most popular track, appearing 245 times, was “Dynamite” by the K-pop band BTS. This track does not match previous descriptions of relaxation music [18–20] and is instead an up-beat track filled with syncopated and groovy melodic hooks and a busy rhythm section. Other popular sleep tracks included “Jealous” by Labrinth and “lovely (with Khalid)” by Billie Eilish and Khalid, which appeared 62 and 60 times respectively (S4 Table). Both these tracks are characterised by a medium-low tempo (85 and 115 BPM respectively, the latter with an emphasis on half-time at 57.5 BPM) and a sparse instrumentation with a focus on long melodic lines.

One could argue that music with high Energy and Danceability would be counterproductive for relaxation and sleep; however, such music may still increase relaxation when considering the interplay between repeated exposure, familiarity and predictive processing. In short, predictive coding is a general theory of brain function which proposes that the brain continuously makes predictions about the world that are compared to sensory input and, when wrong, trigger a prediction error signal used to refine future predictions [45, 46]. Hence, if music contains many surprising elements, this leads to many prediction errors [47–50]. With repeated exposure, the brain becomes increasingly precise at predicting the music. As a piece of music becomes increasingly familiar, there is a corresponding decrease in attentional focus and in general energy use [51]. As such, it may be that familiar music, even with high Energy and Danceability, could facilitate relaxation due to its highly predictable nature. However, this relationship remains to be tested [47, 48, 52, 53]. Similarly, music that is very repetitive and constant over time might also result in increased relaxation due to familiarisation with the piece and the build-up of dynamic expectations [47]. In such a case, even music with, for example, high Tempo or high Energy might induce relaxation. However, the data presented here do not include track dynamics.

In addition to these surprising findings, more expected music is also present in our dataset. For instance, popular relaxation pieces such as “Brahms’ Lullaby”, “Clair de Lune” and “Canon in D” also appeared more than 100 times in the dataset, as did lullabies and nursery rhymes like “Twinkle Twinkle Little Star” and “Incy Wincy Spider”. However, these were often present in several versions with differing instrumentation, and hence different audio features. This is a weakness of using purely data-driven audio features to characterise music, as they are based on the recorded audio and not on the notated music.

One explanation for the wide variety of tracks in our dataset could be the different motivations for listening to music before sleeping. Trahan et al. found four different reasons why people listen to music before bed: (1) to change their state (mental, physical, or relaxation), (2) to provide security, (3) as distraction, or (4) simply out of habit [17]. Certain types of music may be more suitable than others depending on the reason for using music as a sleep aid. For example, music that leads to relaxation is usually linked to slow Tempo, low Energy, and high Instrumentalness, such as the tracks within the “Ambient Tracks” and “Instrumental Tracks” subgroups. However, a different motivation for music use before sleep, such as mood regulation, might be better served by tracks that the listener already likes. Because the motivation of the listener might have a large influence on the type of music people choose to listen to before bed, future research should investigate to what extent different reasons for using music as a sleep aid drive the specific choice of music. Furthermore, research on music used for emotion regulation shows that people do not always choose the music that facilitates a positive effect [54, 55]. Therefore, future studies should clarify whether the different subgroups of sleep music promote sleep equally well while taking music preferences into account.

Overall, the results of this study clearly highlight the variation within sleep music and the need to move beyond genre descriptions towards more specific analyses of the audio features of the music. These results can help inform the choice of music for clinical studies, music therapy or personal use. Previously, some clinical trials have used researcher-selected music [56] while others have given participants a choice among pre-selected playlists [57]. So far, it is not clear to what degree the choice of music influences the effect on sleep [58, 59].

When considering the results of this study, it is worth taking into account the limitation that we do not have demographic information on the specific users of the sleep music. As such, we cannot exclude the possibility that the dataset might be skewed towards a certain demographic, such as younger people or one gender. However, we know that there are Spotify users in 92 different countries, covering many continents [25]. Furthermore, it is known that streamers of online music cover all ages [23]. Therefore, we consider this the most global sleep music dataset to date, with the largest age range and demographic variability.

In summary, our study used the digital traces of music streaming to shed light on the widespread human practice of using music as a sleep aid. Poor sleep is a growing problem in society, and our study contributes to this field by providing new knowledge on both the universality and diversity of sleep music characteristics. This knowledge can help inform future music interventions and brings us a step closer to understanding how music is used to regulate emotions and arousal by millions of people in everyday life.

Supporting information

S2 Table. Descriptive statistics of the Sleep Playlist Dataset.

https://doi.org/10.1371/journal.pone.0278813.s002

(DOCX)

S3 Table. Most frequent musical genres, audio features and tracks based on trackID of the 6 clusters.

https://doi.org/10.1371/journal.pone.0278813.s003

(DOCX)

S4 Table. Top 20 most frequent tracks based on trackID.

https://doi.org/10.1371/journal.pone.0278813.s004

(DOCX)

References

  1. Calem M, Bisla J, Begum A, Dewey M, Bebbington PE, Brugha T, et al. Increased Prevalence of Insomnia and Changes in Hypnotics Use in England over 15 Years: Analysis of the 1993, 2000, and 2007 National Psychiatric Morbidity Surveys. Sleep. 2012;35: 377–384. pmid:22379244
  2. Garland SN, Rowe H, Repa LM, Fowler K, Zhou ES, Grandner MA. A decade’s difference: 10-year change in insomnia symptom prevalence in Canada depends on sociodemographics and health status. Sleep Health. 2018;4: 160–165. pmid:29555129
  3. Pallesen S, Sivertsen B, Nordhus IH, Bjorvatn B. A 10-year trend of insomnia prevalence in the adult Norwegian population. Sleep Med. 2014;15: 173–179. pmid:24382513
  4. Aritake-Okada S, Kaneita Y, Uchiyama M, Mishima K, Ohida T. Non-pharmacological self-management of sleep among the Japanese general population. J Clin Sleep Med JCSM Off Publ Am Acad Sleep Med. 2009;5: 464–469. pmid:19961033
  5. Léger D, Poursain B, Neubauer D, Uchiyama M. An international survey of sleeping problems in the general population. Curr Med Res Opin. 2008;24: 307–317. pmid:18070379
  6. Morin CM, LeBlanc M, Daley M, Gregoire JP, Merette C. Epidemiology of insomnia: prevalence, self-help treatments, consultations, and determinants of help-seeking behaviors. Sleep Med. 2006;7: 123–130. pmid:16459140
  7. Brown CA, Qin P, Esmail S. “Sleep? Maybe Later…” A Cross-Campus Survey of University Students and Sleep Practices. Educ Sci. 2017;7: 66.
  8. Urponen H, Vuori I, Hasan J, Partinen M. Self-evaluations of factors promoting and disturbing sleep: an epidemiological survey in Finland. Soc Sci Med 1982. 1988;26: 443–450. pmid:3363395
  9. Jespersen, Koenig J, Jennum P, Vuust P. Music for insomnia in adults. Cochrane Database Syst Rev. 2015 [cited 2 Dec 2020]. pmid:26270746
  10. Wang C-F, Sun Y-L, Zang H-X. Music therapy improves sleep quality in acute and chronic sleep disorders: A meta-analysis of 10 randomized studies. Int J Nurs Stud. 2014;51: 51–62. pmid:23582682
  11. Cordi MJ, Ackermann S, Rasch B. Effects of Relaxing Music on Healthy Sleep. Sci Rep. 2019;9: 9079. pmid:31235748
  12. Mehr SA, Singh M, Knox D, Ketter DM, Pickens-Jones D, Atwood S, et al. Universality and diversity in human song. Science. 2019;366. pmid:31753969
  13. Trehub SE, Unyk AM, Trainor LJ. Adults identify infant-directed music across cultures. Infant Behav Dev. 1993;16: 193–211.
  14. Bainbridge C, Youngers J, Bertolo M, Atwood S, Lopez K, Xing F, et al. Infants relax in response to unfamiliar foreign lullabies. Nat Hum Behav. 2020. pmid:33077883
  15. Dickson GT, Schubert E. How does music aid sleep? literature review. Sleep Med. 2019;63: 142–150. pmid:31655374
  16. Jespersen, Otto M, Kringelbach M, Van Someren E, Vuust P. A randomized controlled trial of bedtime music for insomnia disorder. J Sleep Res. 2019;28: e12817. pmid:30676671
  17. Trahan T, Durrant SJ, Müllensiefen D, Williamson VJ. The music that helps people sleep and the reasons they believe it works: A mixed methods analysis of online survey reports. PLoS One. 2018;13: e0206531. pmid:30427881
  18. Gaston ET. Dynamic Music Factors in Mood Change. Music Educ J. 1951;37: 42–44.
  19. Holbrook MB, Anand P. Effects of tempo and situational arousal on the listener’s perceptual and affective responses to music. Psychol Music. 1990;18: 150–162.
  20. Tan X, Yowler CJ, Super DM, Fratianne RB. The Interplay of Preference, Familiarity and Psychophysical Properties in Defining Relaxation Music. J Music Ther. 2012;49: 150–179. pmid:26753216
  21. Dickson GT, Schubert E. Musical Features that Aid Sleep. Music Sci. 2020 [cited 13 Jan 2021]. https://doi.org/10.1177/1029864920972161
  22. RIAA. Charting a Path to Music’s Sustainable Success. In: Medium [Internet]. 25 Feb 2020 [cited 13 Jan 2021]. Available: https://medium.com/@RIAA/charting-a-path-to-musics-sustainable-success-12a5625bbc7d
  23. Music Listening in 2019. In: IFPI [Internet]. [cited 13 Jan 2021]. Available: https://www.ifpi.org/ifpi-releases-music-listening-2019/
  24. Mansoor I. Spotify Usage and Revenue Statistics (2020). In: Business of Apps [Internet]. 2020 [cited 13 Jan 2021]. Available: https://www.businessofapps.com/data/spotify-statistics/
  25. Spotify—Company Info. [cited 2 Dec 2020]. Available: https://newsroom.spotify.com/company-info/
  26. Heggli OA, Stupacher J, Vuust P. Diurnal fluctuations in musical preference. 2021. pmid:34804568
  27. Park M, Thom J, Mennicken S, Cramer H, Macy M. Global music streaming data reveal diurnal and seasonal patterns of affective preference. Nat Hum Behav. 2019;3: 230–236. pmid:30953008
  28. Greenberg DM, Rentfrow PJ. Music and big data: a new frontier. Curr Opin Behav Sci. 2017;18: 50–56.
  29. Holtz D, Carterette B, Chandar P, Nazari Z, Cramer H, Aral S. The Engagement-Diversity Connection: Evidence from a Field Experiment on Spotify. Rochester, NY: Social Science Research Network; 2020 Feb. Report No.: ID 3555927. https://doi.org/10.2139/ssrn.3555927
  30. Spotify Developer Terms | Spotify for Developers. [cited 26 Oct 2022]. Available: https://developer.spotify.com/terms/
  31. Spotify for Developers. [cited 13 Jan 2021]. Available: https://developer.spotify.com/documentation/web-api/reference/object-model/
  32. Dammann T, Haugh K. Genre Classification of Spotify Songs using Lyrics, Audio Previews, and Album Artwork. Bachelor Thesis, Stanford University. 2017.
  33. Rentfrow PJ, Gosling SD. The do re mi’s of everyday life: the structure and personality correlates of music preferences. J Pers Soc Psychol. 2003;84: 1236. pmid:12793587
  34. Brost B, Mehrotra R, Jehan T. The Music Streaming Sessions Dataset. The World Wide Web Conference. New York, NY, USA: Association for Computing Machinery; 2019. pp. 2594–2600. https://doi.org/10.1145/3308558.3313641
  35. Allen M, Poggiali D, Whitaker K, Marshall TR, Kievit RA. Raincloud plots: a multi-platform tool for robust data visualization. Wellcome Open Res. 2019;4: 63. pmid:31069261
  36. Bakker A, Cai J, English L, Kaiser G, Mesa V, Van Dooren W. Beyond small, medium, or large: points of consideration when interpreting effect sizes. Educ Stud Math. 2019;102: 1–8.
  37. Bernardi L. Cardiovascular, cerebrovascular, and respiratory changes induced by different types of music in musicians and non-musicians: the importance of silence. Heart. 2005;92: 445–452. pmid:16199412
  38. Bernardi L, Porta C, Casucci G, Balsamo R, Bernardi N, Fogari R, et al. Dynamic Interactions Between Musical, Cardiovascular, and Cerebral Rhythms in Humans. Circulation. 2009;119: 3171–3180. pmid:19569263
  39. Gomez P, Danuser B. Relationships between musical structure and psychophysiological measures of emotion. Emotion. 2007;7: 377–387. pmid:17516815
  40. Juslin PN. From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions. Phys Life Rev. 2013;10: 235–266. pmid:23769678
  41. Khalfa S, Roy M, Rainville P, Dalla Bella S, Peretz I. Role of tempo entrainment in psychophysiological differentiation of happy and sad music? Int J Psychophysiol. 2008;68: 17–26. pmid:18234381
  42. Nozaradan S. Exploring how musical rhythm entrains brain activity with electroencephalogram frequency-tagging. Philos Trans R Soc B Biol Sci. 2014;369: 20130393. pmid:25385771
  43. Nozaradan S, Peretz I, Missal M, Mouraux A. Tagging the Neuronal Entrainment to Beat and Meter. J Neurosci. 2011;31: 10234–10240. pmid:21753000
  44. Ellis RJ, Koenig J, Thayer JF. Getting to the Heart: Autonomic Nervous System Function in the Context of Evidence-Based Music Therapy. Music Med. 2012;4: 90–99.
  45. Friston K. Does predictive coding have a future? Nat Neurosci. 2018;21: 1019–1021. pmid:30038278
  46. Heilbron M, Chait M. Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex? Neuroscience. 2018;389: 54–73. pmid:28782642
  47. Huron D. Sweet Anticipation. The MIT press; 2006. Available: https://mitpress.mit.edu/books/sweet-anticipation
  48. Koelsch S, Vuust P, Friston K. Predictive Processes and the Peculiar Case of Music. Trends Cogn Sci. 2019;23: 63–77. pmid:30471869
  49. Vuust P, Ostergaard L, Pallesen KJ, Bailey C, Roepstorff A. Predictive coding of music–Brain responses to rhythmic incongruity. Cortex. 2009;45: 80–92. pmid:19054506
  50. Vuust P, Witek MAG. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music. Front Psychol. 2014;5: 1111. pmid:25324813
  51. Hansen NC, Dietz MJ, Vuust P. Commentary: Predictions and the brain: how musical sounds become rewarding. Front Hum Neurosci. 2017;11. pmid:28424603
  52. Gebauer L, Kringelbach M, Vuust P. Ever-changing cycles of musical pleasure: The role of dopamine and anticipation. Psychomusicology Music Mind Brain. 2012;22: 152–167. https://doi.org/10.1037/a0031126
  53. Salimpoor VN, Zald DH, Zatorre RJ, Dagher A, McIntosh AR. Predictions and the brain: how musical sounds become rewarding. Trends Cogn Sci. 2015;19: 86–91. pmid:25534332
  54. Carlson E, Saarikallio S, Toiviainen P, Bogert B, Kliuchko M, Brattico E. Maladaptive and adaptive emotion regulation through music: a behavioral and neuroimaging study of males and females. Front Hum Neurosci. 2015;9: 466. pmid:26379529
  55. Garrido S, Schubert E. Music and People with Tendencies to Depression. Music Percept. 2015;32: 313–321.
  56. Harmat L, Takács J, Bódizs R. Music improves sleep quality in students. J Adv Nurs. 2008;62: 327–335. pmid:18426457
  57. Lai H-L, Good M. Music improves sleep quality in older adults. J Adv Nurs. 2006;53: 134–144. pmid:16422710
  58. Jespersen KV, Pando-Naude V, Koenig J, Jennum P, Vuust P. Listening to music for insomnia in adults. Cochrane Database Syst Rev. 2022;8: CD010459. pmid:36000763
  59. Yamasato A, Kondo M, Hoshino S, Kikuchi J, Ikeuchi M, Yamazaki K, et al. How Prescribed Music and Preferred Music Influence Sleep Quality in University Students. Tokai J Exp Clin Med. 45: 207–213. pmid:33300592