UI Design | Entertainment | Mobile

Spotify Application

Adding a Music Recognition Feature

 

Spotify is a digital music service that provides users with music and podcast streaming.

 

MY ROLE

I was the solo UX Designer on the project, and I proposed and designed a music recognition feature integrated into Spotify’s current branding and UI.

PROBLEM

For Spotify users, identifying a song requires downloading a separate app, which is time-consuming and inefficient.

Users have limited options for identifying a song through other music recognition apps, so it’s frustrating to have no alternative when an app fails to detect a song.

GOAL

I wanted to add an in-house feature that identifies a song by playing it, singing/humming it, or typing its lyrics, to make identifying a song easy and frustration-free.

 

BACKGROUND STORY

The idea came to me while I was driving. I heard a song on the radio and didn’t know the title or the singer, and I wanted to find out both while the song was still playing. However, the only app I knew of for that was Shazam, which I didn’t have installed on my phone at that moment. Then I found myself thinking there must be a lot of Spotify users like me who try to identify a song, want to add it to a playlist, and start listening immediately.


So I envisioned Spotify having an in-house feature that identifies not only songs but also hummed sounds and written lyrics. My hypothesis was that this feature would increase the value of the product and add another reason for users to choose Spotify over other music apps.

 

TIMELINE

 

1. RESEARCH

OBJECTIVES

  1. Learn about the target audience and demographics.

  2. Figure out what kinds of sounds can be detected and identified, and whether written input such as song lyrics or podcast subtitles should be supported as well.

  3. Discover which apps users rely on now, and understand their frustrations with and expectations of these apps.

  4. Explore what tools/features users need for an efficient, easy, and accessible music recognition experience.

 

PRIMARY RESEARCH

USER INTERVIEWS to dive deep into users’ expectations and needs regarding a music/sound recognition feature

  • Recruited participants who had used, or were currently using, a digital product to identify a sound

  • Prepared an interview script and questions

  • Conducted 5 online user interviews through Zoom

First, I transcribed my interviews so I could see the big picture while synthesizing my findings. Then, based on the common patterns from the interviews, I created an empathy map and a sample user persona.

 
 
 
 
 

KEY INSIGHTS FROM THE INTERVIEWS:

Users:

  • Feel frustrated by the multiple steps needed to add a song tagged in another music recognition app to a Spotify playlist.

  • Want to identify songs from their lyrics without having to use Google.

  • Wish for alternative ways to identify a song when they are in a loud environment or not close to the music source.

  • Want to add a tagged song to a Spotify playlist easily.

 

SECONDARY RESEARCH

COMPETITIVE ANALYSIS to discover what features and innovative technologies other sound recognition apps use. I conducted a quantitative feature comparison of the most well-known and widely used music recognition apps.

 
 
 

Based on the quantitative feature comparison, Shazam seems to have fewer features than its competitors. However, it was the only app participants named when I asked which app they use for music recognition. So I tried to understand what makes it the most used and popular by comparing its pros and cons with those of other competitors.

 
 

Even though Shazam has the most weaknesses, it is still the most used and popular music recognition app. Based on my competitive analysis, I would say that being easily and quickly accessible with a simple UI explains why Shazam is the most preferred song recognition app. Therefore, I took this as a guide for my feature design and made it easily accessible with a clean and straightforward UI. Also, drawing on competitors’ features, I planned to include lyrics and singing recognition in Spotify’s new feature.

 

2. DEFINE

STORYBOARD

I drew out a story showing my persona identifying a song at a party and adding it to a Spotify playlist by typing its lyrics with the new feature.

 
 
 
 

TASK FLOW

Since the storyboard only showed recognition by lyrics, I created a task flow that includes all three options for completing the task.

 
 

3. DESIGN

WIREFRAME SKETCHES

I started by sketching the pages a user needs to find a song in a loud environment by typing its lyrics, review the matches, preview the best result, and add it to a new playlist.

 
 

A/B TESTING

Before creating high-fidelity wireframes, I wanted to understand users’ preferences for the icon design, the placement of the feature icon, and the name of the feature. To do that, I conducted A/B testing with 27 participants.

ICON DESIGN

The majority of participants, 16 out of 27 (59%), picked the icon design on the right.

NAME OF THE FEATURE

A plurality of participants, 11 out of 27 (41%), wanted to name the feature 'Detect.'

PLACEMENT OF THE ICON

 

Most participants, 14 out of 27 (52%), wanted the feature icon placed in the bottom navigation.

 

HIGH-FIDELITY WIREFRAMES

I followed Spotify’s existing UI and branding to ensure the feature feels like a seamless addition.

 

4. TEST

I conducted usability testing with the high-fidelity prototype.

Scenario: Imagine you are in a loud environment, such as a party or a concert. You hear a song you don’t know and want to identify it. A friend suggests you use Spotify’s new feature.

Task #1: Finding and recognizing the feature icon on the homepage.

Task #2: Detecting the song by successfully using at least one of the given options: singing, playing, or typing lyrics.

  • I tested:

    • The overall usability and user interaction of the feature in terms of navigation.

    • Confusing and/or missing parts, as well as pain points.

    • Whether users have difficulty interacting with the feature when identifying a song.

    • Whether the interaction with the feature feels intuitive to users.

  • High-fidelity prototype of the Spotify mobile app.

  • I tested the high-fidelity prototype with 5 users online through Zoom.

  • Number of Participants: 5

    Age Range: Young Adults (25-35)

    Profile: Participants who use a music app and have used a music recognition app to identify a song.

  • I recruited participants through my contact list, social media, and Slack and Discord channels.

 
 
 
 

AFFINITY MAPPING, PRIORITY MATRIX AND REVISIONS

Based on the synthesis of usability testing with the high-fidelity prototype, I created an affinity map to identify the improvements needed for the playing, singing, and lyrics tabs and the search results pages.

 
 
 

Here are the refined improvements for each page in the prototype. All participants found the feature from the homepage successfully, rated the ease of finding it 5 out of 5, and said the icon, name, and placement made sense to them, so I didn’t iterate on the homepage. I did make revisions to the feature page and the search results page.

 
 
 
 

CHALLENGES, LESSONS LEARNED AND INSIGHTS

As a Spotify user myself, I already had biases and presumptions. However, I needed to conduct the research and improve the user experience with an unbiased, open mind. Even though it was challenging at first not to lead users during interviews, I noticed that they already shared the opinions I held. For example, I assumed that the most used music app is Spotify and the most used music recognition app is Shazam. My hypothesis was that, for Spotify users, using another app to identify a song and then finding it again on Spotify to listen and add it to a playlist is very frustrating. The experience involves multiple steps, and Shazam and other music recognition apps sometimes fail to find the song, so users have to search the lyrics on Google to identify it and then return to Spotify. Therefore, I assumed this feature would increase the product’s value and add another factor for users to choose Spotify over other music apps. I am happy that my research and usability test findings support my assumption.

If I hadn’t had a time constraint on this project, I would have incorporated accessibility considerations and the innovative technologies I discovered. One example is adding voice control through wearables so users can access the feature hands-free and identify a song while driving. Another is adding AI lip-sync technology that reads users’ lips while they sing or hum a song. I hope to work on these ideas further to improve the product as a next step.

 
