
Song Details Analysis

A music analytics dashboard providing lyrics analysis, song data visualization, and video discovery, built on the Spotify design system.

Impact

  • Reduced initial load time by ~40% by balancing RSC/CSR/SSR and integrating TanStack Query/React Cache.
  • Integrated 4+ REST APIs (MusixMatch, Ticketmaster, Perplexity, Invidious) through proxy routes in NextJS.
  • Prototyped the UX/UI in Figma, based on the Spotify design system.

Role

UX/UI Designer, Frontend Engineer

Tools

ReactJS, TypeScript, Python, NextJS, TailwindCSS, Figma, Motion, p5JS, AWS Lambda

Timeline

Jan 2024 - May 2025

Features

Song Details Analysis

The total stream count is derived by scraping mystreamcount.com.

The most-streamed country is derived by scraping kworb.net.

Chart data is derived by scraping mystreamcount.com.

Longevity is derived by an algorithm based on the ratio of recent streams to peak streaming performance.
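
A minimal sketch of the idea, assuming the score is simply the average of recent weekly streams divided by the peak week (the real weighting may differ):

// Illustrative only: longevity as recent streams relative to peak performance
const longevityScore = (weeklyStreams) => {
  const peak = Math.max(...weeklyStreams);
  const recent = weeklyStreams.slice(-4); // assumed 4-week window
  const recentAvg = recent.reduce((sum, n) => sum + n, 0) / recent.length;
  return Math.min(recentAvg / peak, 1); // 1 = still streaming at peak levels
};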

The lyrics score is derived from the Perplexity API based on the criteria of depth, meaning, and complexity.

Popularity is derived from the popularity field returned by the Spotify Web API.
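
The Spotify Web API returns popularity as a 0-100 value on the track object; a minimal sketch of reading it, assuming an access token is obtained elsewhere:

// Read the popularity field from a Spotify track (token handling omitted)
const getPopularity = async (trackId, accessToken) => {
  const res = await fetch(`https://api.spotify.com/v1/tracks/${trackId}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const track = await res.json();
  return track.popularity; // integer from 0 to 100
};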

Song Lyrics Analysis

Lyrics are derived by passing the Spotify ISRC to the MusixMatch API.
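
A sketch of that lookup, assuming Musixmatch's track.get endpoint accepts a track_isrc parameter and lyrics are then fetched by the resolved track ID (parameter and field names should be verified against the Musixmatch docs):

// Hypothetical helper: Spotify ISRC -> Musixmatch track -> lyrics
const MUSIXMATCH = 'https://api.musixmatch.com/ws/1.1';
const getLyricsByIsrc = async (isrc, apiKey) => {
  const trackRes = await fetch(`${MUSIXMATCH}/track.get?track_isrc=${isrc}&apikey=${apiKey}`);
  const trackId = (await trackRes.json()).message.body.track.track_id;
  const lyricsRes = await fetch(`${MUSIXMATCH}/track.lyrics.get?track_id=${trackId}&apikey=${apiKey}`);
  return (await lyricsRes.json()).message.body.lyrics.lyrics_body;
};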

Lyrics analysis is derived by passing the lyrics, artist, and album name into the Perplexity API.
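
Perplexity exposes an OpenAI-compatible chat completions endpoint; a rough sketch of the call (the model name and prompt here are illustrative, not the exact ones used):

// Sketch: send lyrics plus metadata to Perplexity for analysis
const analyzeLyrics = async (lyrics, artist, album, apiKey) => {
  const res = await fetch('https://api.perplexity.ai/chat/completions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'sonar', // illustrative model name
      messages: [{
        role: 'user',
        content: `Analyze the depth, meaning, and complexity of these lyrics from "${album}" by ${artist}:\n${lyrics}`,
      }],
    }),
  });
  return (await res.json()).choices[0].message.content;
};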

Scrolling to the desired lyric is done by storing each lyric element in a ref array and calculating its pixel distance from the top of the container.
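
Roughly how that looks in a client component (a simplified sketch; component and prop names are illustrative):

'use client';
import { useRef } from 'react';
// Sketch: keep a ref to each lyric line and scroll the container to a given line
export default function LyricsScroller({ lines }) {
  const containerRef = useRef(null);
  const lineRefs = useRef([]);
  const scrollToLine = (index) => {
    const container = containerRef.current;
    const line = lineRefs.current[index];
    if (!container || !line) return;
    // Pixel distance of the lyric from the top of the scroll container
    const offset = line.offsetTop - container.offsetTop;
    container.scrollTo({ top: offset, behavior: 'smooth' });
  };
  return (
    <div ref={containerRef} style={{ overflowY: 'auto', maxHeight: '60vh' }}>
      {lines.map((line, i) => (
        <p key={i} ref={(el) => (lineRefs.current[i] = el)} onClick={() => scrollToLine(i)}>
          {line}
        </p>
      ))}
    </div>
  );
}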

Related Media Discovery

Related content is derived by calling the Invidious API.

Filtering is done on the frontend via string matching.
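
A sketch of the call and the client-side filter, assuming a public Invidious instance and matching against video titles:

// Sketch: search an Invidious instance, then keep videos whose titles mention the song
const searchRelatedVideos = async (songName, artistName) => {
  const query = encodeURIComponent(`${artistName} ${songName}`);
  const instance = 'https://yewtu.be'; // illustrative public instance
  const res = await fetch(`${instance}/api/v1/search?q=${query}&type=video`);
  const videos = await res.json();
  return videos.filter((v) => v.title.toLowerCase().includes(songName.toLowerCase()));
};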

Performance Optimizations

Debouncing API Calls

export const debounce = (fn, wait) => {
  let timeout;
  let pendingPromise = null;
  return function (...args) {
    // eslint-disable-next-line @typescript-eslint/no-this-alias
    const context = this;
    // If there's already a pending promise for this debounce, return it
    if (pendingPromise) return pendingPromise;
    // Create a new promise that will resolve with the result of fn
    pendingPromise = new Promise((resolve) => {
      if (timeout) clearTimeout(timeout);
      timeout = setTimeout(() => {
        const result = fn.apply(context, args);
        pendingPromise = null;
        resolve(result);
      }, wait);
    });
    return pendingPromise;
  };
};

By debouncing API calls, I reduce requests to only the most recent change in state. This is a common technique for improving performance and reducing server load by preventing excessive API requests during rapid user interactions like typing.

TanStack Query

// Example: Using TanStack Query for API calls
const { data, isLoading, error } = useQuery({
  queryKey: ['songData', songId],
  queryFn: () => fetchSongData(songId),
  staleTime: 5 * 60 * 1000, // 5 minutes
  cacheTime: 10 * 60 * 1000, // 10 minutes
});

TanStack Query provides intelligent caching, background refetching, and optimistic updates. By setting appropriate stale and cache times, I reduce redundant API calls and improve user experience with instant data loading from cache.

React Cache

import { cache } from 'react';
// Cache expensive operations across component tree
const getCachedSongAnalysis = cache(async (songId) => {
  const response = await fetch(`/api/analysis/${songId}`);
  return response.json();
});
// Multiple components can call this without duplicate requests
const analysis = await getCachedSongAnalysis(songId);

React Cache eliminates duplicate requests during server-side rendering, ensuring expensive operations like AI analysis are only performed once per request cycle.

RSC Rendering + CSR Rendering

// Server Component (RSC) - runs on the server
export default async function SongPage({ params }) {
  // Pre-fetch data on the server
  const songData = await fetchSongData(params.id);
  return (
    <div>
      <SongDetails data={songData} />
      <InteractiveLyrics songId={params.id} />
    </div>
  );
}

// Client Component (CSR) - runs in the browser, in its own file
'use client';
import { useState } from 'react';

export default function InteractiveLyrics({ songId }) {
  const [selectedLine, setSelectedLine] = useState(null);
  // Interactive features here
}

By strategically combining Server Components for initial data fetching and Client Components for interactive features, we achieve optimal performance with fast initial loads and rich interactivity.

Spotify Design Flow

Dynamic URLs

// Dynamic routing with Next.js
// app/song/[id]/page.js
export default function SongPage({ params }) {
  const songId = params.id;
  return (
    <div>
      <SongDetails id={songId} />
      {/* URLs like /song/4uLU6hMCjMI75M1A2tKUQC */}
    </div>
  );
}

// Generate URLs from Spotify track IDs
const generateSongUrl = (trackId) => `/song/${trackId}`;

Following Spotify's URL pattern, each song gets a unique URL based on its Spotify track ID, enabling direct sharing and bookmarking of specific songs.

Dynamic Color Palette

// Extract colors from album artwork
const getColorPalette = async (imageUrl) => {
  const img = new Image();
  img.crossOrigin = 'anonymous';
  img.src = imageUrl;
  return new Promise((resolve) => {
    img.onload = () => {
      const canvas = document.createElement('canvas');
      const ctx = canvas.getContext('2d');
      canvas.width = img.width;
      canvas.height = img.height;
      ctx.drawImage(img, 0, 0);
      // Extract dominant colors using color quantization
      const colors = extractDominantColors(ctx.getImageData(0, 0, canvas.width, canvas.height));
      resolve(colors);
    };
  });
};

Like Spotify, I extract dominant colors from album artwork to create dynamic color schemes that change based on the currently viewed song, creating a cohesive visual experience.
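
The extractDominantColors helper isn't shown above; one simple way to implement it is coarse color quantization, bucketing pixels into wide RGB bins and keeping the most frequent ones (a sketch, not the exact implementation):

// Naive quantization: group pixels into 32-step RGB bins and return the top colors
const extractDominantColors = (imageData, count = 3) => {
  const buckets = new Map();
  const { data } = imageData;
  for (let i = 0; i < data.length; i += 4) {
    const r = Math.round(data[i] / 32) * 32;
    const g = Math.round(data[i + 1] / 32) * 32;
    const b = Math.round(data[i + 2] / 32) * 32;
    const key = `${r},${g},${b}`;
    buckets.set(key, (buckets.get(key) || 0) + 1);
  }
  return [...buckets.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, count)
    .map(([key]) => `rgb(${key})`);
};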

Challenges

Problem: Search element losing focus across page re-renders

Solution: Use NextJS route groups and have a unified layout.tsx file

There was an issue where the search input lost focus every time the search element rendered new results. The cause was the entire page re-rendering when new results were fetched, which rebuilt the DOM and discarded focus state. Moving the search input into a shared layout keeps it mounted while only the results change.
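
Roughly what the fix looks like: the search input lives in a layout that wraps the result routes inside a route group, so it stays mounted while only the results subtree re-renders (file and component names below are illustrative):

// app/(search)/layout.tsx - shared layout for the search routes (sketch)
import type { ReactNode } from 'react';
import SearchInput from '@/components/SearchInput'; // hypothetical component path
export default function SearchLayout({ children }: { children: ReactNode }) {
  return (
    <div>
      <SearchInput /> {/* stays mounted across result re-renders */}
      {children}      {/* only this subtree is replaced on navigation */}
    </div>
  );
}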

Problem: Python runtime not available alongside Node for Vercel hosting

Solution: Host Python scrapers as AWS Lambda functions

Vercel's serverless functions don't support mixed runtimes in a single deployment. Since the scraping logic was written in Python but the web app used Node.js, I had to separate the Python scrapers into AWS Lambda functions with API Gateway endpoints.
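
The Next.js app then reaches those Lambdas through its own proxy routes; a sketch of one, with the API Gateway base URL in a hypothetical environment variable:

// app/api/streams/[id]/route.js - proxy a request to the Python scraper on Lambda (sketch)
export async function GET(request, { params }) {
  // SCRAPER_API_URL is a hypothetical API Gateway endpoint for the Lambda scrapers
  const res = await fetch(`${process.env.SCRAPER_API_URL}/streams/${params.id}`);
  const data = await res.json();
  return Response.json(data);
}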

Problem: API rate limiting causing failed requests

Solution: Implement exponential backoff and request queuing

Multiple APIs (MusixMatch, Perplexity, Invidious) had different rate limits. I implemented a request queue system with exponential backoff to handle rate limiting gracefully and ensure data consistency.
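
The backoff wrapper, roughly (retry count and delays here are illustrative):

// Sketch: retry a request with exponential backoff when rate limited
const fetchWithBackoff = async (url, options = {}, retries = 4) => {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429) return res;
    // Wait 1s, 2s, 4s, 8s... before retrying
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
  throw new Error(`Rate limited after ${retries} retries: ${url}`);
};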