A real-time audio analysis toolkit for musicians with visualization, tuning, chord detection, and spectrum analysis.
Resonate is a web app built with HTML, Tailwind CSS, JavaScript, and Flask for Axiom (Hack Club YSWS). It uses signal processing algorithms such as the Fast Fourier Transform (FFT) and autocorrelation to analyze audio in real time from the user's microphone or uploaded audio files. It also ships with professional musician tools, including tuners, chord detection, a spectrum analyzer, and a metronome with automatic BPM (beats per minute) detection, while processing everything on the client side with the Web Audio API for near-zero latency.
- Resonate is live at resonate.pythonanywhere.com
- Demo Video: Watch on Google Drive
- Real-time Audio Visualizer - four visualization modes (bars, waveform, circular, line) with customizable FFT size and sensitivity
- Multi-Instrument Tuner - supports guitar, bass, ukulele, and violin with multiple tuning presets (standard, drop D, DADGAD, etc.)
- Chord Detection - real-time chord recognition supporting 18+ chord types including extended chords (maj7, dom7, dim7, etc.)
- Spectrum Analyzer - frequency spectrum visualization with 7 distinct frequency bands from sub-bass to brilliance
- Smart Metronome - variable BPM, multiple time signatures, tap tempo, and automatic BPM detection from audio
- Audio Recording - record, save, and manage practice sessions with support for multiple audio formats
- Format Converter - convert recordings between WebM, WAV, and MP3 formats for download
- Practice Tracker - automatic session tracking with daily and weekly statistics
- Theme Support - dark and light modes with system preference detection
- Keyboard Shortcuts - efficient navigation and control with comprehensive hotkeys
- User Authentication - secure and easy Firebase authentication with Google Sign-In
Frontend
- HTML
- Tailwind CSS (via CDN)
- Vanilla JavaScript - Modular architecture with classes
- Web APIs:
- Web Audio API (AudioContext, AnalyserNode, OscillatorNode)
- MediaRecorder API - to record audio
- MediaDevices API (getUserMedia) - to get mic access
- Canvas API - for real time visualizations
- LocalStorage API - for practice data persistence
Backend
- Flask (Python)
- Werkzeug - for secure file handling
- Python-dotenv - for environment variable management
Authentication & Storage
- Firebase Authentication - secure user authentication with Google OAuth
- Server-side File Storage - recording management with metadata tracking (JSON)
Audio Processing Algorithms
- Fast Fourier Transform (FFT) - for frequency domain analysis
- Autocorrelation - for pitch detection
- Peak Detection - for frequency identification
- Energy-based Beat Detection - for BPM analysis
- Microphone Access: first request user permission via navigator.mediaDevices.getUserMedia()
- Audio Context Setup: then create an AudioContext and connect the microphone stream to an AnalyserNode
- FFT Analysis: a configurable FFT size (512-16384) transforms time-domain audio into frequency-domain data
- Logarithmic Scaling: maps frequency bins to a perceptually linear scale for better visualization
- Rendering: a requestAnimationFrame loop draws visualizations to an HTML5 Canvas, throttled to 30fps (good for low-end PCs too!); a minimal sketch of this pipeline follows the list
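For orientation, here is a minimal sketch of that visualizer pipeline, assuming a hypothetical `<canvas id="viz">` element; the real visualizer.js layers modes, sensitivity, and bar-count settings on top of this:

```javascript
// Minimal bar visualizer: mic -> AnalyserNode -> canvas (illustrative sketch only)
const canvas = document.getElementById('viz');   // hypothetical canvas element
const ctx = canvas.getContext('2d');

async function startVisualizer() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;                       // default FFT size
  analyser.smoothingTimeConstant = 0.8;          // default smoothing
  audioCtx.createMediaStreamSource(stream).connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  let lastFrame = 0;

  function draw(now) {
    requestAnimationFrame(draw);
    if (now - lastFrame < 1000 / 30) return;     // throttle to ~30fps
    lastFrame = now;

    analyser.getByteFrequencyData(bins);         // frequency-domain magnitudes (0-255)
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = '#22c55e';

    const barCount = 64;                         // default bar count
    const barWidth = canvas.width / barCount;
    for (let i = 0; i < barCount; i++) {
      // Logarithmic bin mapping: spread low frequencies across more bars
      const bin = Math.min(bins.length - 1, Math.floor(Math.pow(bins.length, i / barCount)));
      const h = (bins[bin] / 255) * canvas.height;
      ctx.fillRect(i * barWidth, canvas.height - h, barWidth - 1, h);
    }
  }
  requestAnimationFrame(draw);
}
```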
- Waveform Capture: capture raw audio samples using getByteTimeDomainData()
- Autocorrelation: find periodic patterns in the waveform to determine the fundamental frequency
- Peak Finding: identify the strongest correlation peak above a 0.9 threshold
- Frequency Calculation: convert the sample offset to a frequency: f = sampleRate / offset
- Note Mapping: map the frequency to a musical note via its semitone offset from A4 (440 Hz): noteNum = 12 · log₂(f / 440)
- Cents Deviation: calculate tuning accuracy in cents for precise instrument tuning (a condensed sketch follows this list)
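A condensed sketch of those steps, assuming `analyser` is the AnalyserNode from the capture stage (the real pitch_detector.js normalizes the correlation and applies the 0.9 peak threshold mentioned above):

```javascript
// Simplified autocorrelation pitch detector (illustrative sketch)
function detectPitch(analyser, sampleRate) {
  const raw = new Uint8Array(analyser.fftSize);
  analyser.getByteTimeDomainData(raw);                    // raw waveform, 0-255 centred at 128
  const buf = Float32Array.from(raw, v => (v - 128) / 128);

  // Search offsets covering roughly 40 Hz - 1000 Hz
  const minOffset = Math.floor(sampleRate / 1000);
  const maxOffset = Math.floor(sampleRate / 40);
  let bestOffset = -1, bestCorrelation = 0;

  for (let offset = minOffset; offset < maxOffset; offset++) {
    let correlation = 0;
    for (let i = 0; i < buf.length - offset; i++) correlation += buf[i] * buf[i + offset];
    if (correlation > bestCorrelation) { bestCorrelation = correlation; bestOffset = offset; }
  }
  if (bestOffset === -1) return null;

  const f = sampleRate / bestOffset;                      // f = sampleRate / offset
  const semitones = 12 * Math.log2(f / 440);              // note mapping relative to A4
  const nearest = Math.round(semitones);
  const target = 440 * Math.pow(2, nearest / 12);         // frequency of the nearest note
  const cents = Math.round(1200 * Math.log2(f / target)); // cents deviation for tuning
  return { frequency: f, cents };
}
```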
- Multi-Peak Detection: Identifies multiple frequency peaks simultaneously (up to 6 notes)
- Note Extraction: Converts each peak frequency to its corresponding musical note
- Interval Analysis: Calculates semitone intervals between detected notes
- Template Matching: Compares interval pattern against 18 chord templates
- Confidence Scoring: Rates match quality based on exact interval matches and extra notes penalty
- Chord Identification: Returns best match with root note, chord type, and confidence percentage
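An illustrative sketch of that interval-template idea, using a hypothetical subset of templates (the real chord_detector.js holds 18 templates and a fuller confidence model):

```javascript
// Match detected notes against chord interval templates (illustrative sketch)
const CHORD_TEMPLATES = {        // hypothetical subset; the app uses 18 templates
  'maj':  [0, 4, 7],
  'min':  [0, 3, 7],
  'dom7': [0, 4, 7, 10],
  'maj7': [0, 4, 7, 11],
  'dim7': [0, 3, 6, 9],
};

// `noteNumbers` are semitone values of the detected peaks (e.g. MIDI-style numbers)
function identifyChord(noteNumbers) {
  const root = Math.min(...noteNumbers);
  // Intervals relative to the lowest note, deduplicated and folded into one octave
  const intervals = [...new Set(noteNumbers.map(n => (n - root) % 12))].sort((a, b) => a - b);

  let best = null;
  for (const [type, template] of Object.entries(CHORD_TEMPLATES)) {
    const matched = template.filter(iv => intervals.includes(iv)).length;
    const extras = intervals.filter(iv => !template.includes(iv)).length;
    const confidence = matched / template.length - 0.1 * extras;   // penalty for extra notes
    if (!best || confidence > best.confidence) best = { type, confidence };
  }
  return best && { root, type: best.type, confidence: Math.round(best.confidence * 100) };
}
```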
- Frequency Filtering: Isolates low frequency range (60-250 Hz) where beats are strongest
- Energy Calculation: Computes RMS energy of filtered signal
- Threshold Detection: Compares current energy against rolling average (43-sample window)
- Beat Timing: Records timestamps when energy exceeds 1.5x average with 300ms minimum spacing
- Interval Analysis: Calculates average time between beats with variance-based confidence
- BPM Calculation: Converts the interval to BPM: BPM = 60000 / avgInterval (see the sketch below)
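A rough sketch of that energy-threshold loop; `getLowBandEnergy()` is a hypothetical helper standing in for the RMS of the filtered 60-250 Hz band:

```javascript
// Energy-based beat detection (illustrative sketch)
const energyHistory = [];   // rolling window of recent frame energies
const beatTimes = [];       // timestamps of detected beats (ms)

function onAudioFrame(now) {
  const energy = getLowBandEnergy();                       // hypothetical: RMS of 60-250 Hz band
  energyHistory.push(energy);
  if (energyHistory.length > 43) energyHistory.shift();    // 43-sample rolling window

  const avg = energyHistory.reduce((a, b) => a + b, 0) / energyHistory.length;
  const lastBeat = beatTimes[beatTimes.length - 1] ?? -Infinity;

  // Beat: energy spikes above 1.5x the rolling average, at least 300ms after the last beat
  if (energy > 1.5 * avg && now - lastBeat > 300) beatTimes.push(now);
}

function estimateBPM() {
  if (beatTimes.length < 2) return null;
  const intervals = beatTimes.slice(1).map((t, i) => t - beatTimes[i]);
  const avgInterval = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  return Math.round(60000 / avgInterval);                  // BPM = 60000 / avgInterval
}
```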
- MediaRecorder: Captures microphone audio to WebM/Opus format chunks
- Blob Creation: Combines audio chunks into single Blob object
- Server Upload: Sends Blob via FormData POST request with unique filename
- Format Conversion:
- WAV: Decodes audio buffer, creates PCM data, writes RIFF/WAVE headers
- MP3: Uses LameJS encoder to compress audio with 128kbps bitrate
- Download: Generates temporary Object URL for browser download
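A trimmed sketch of the capture-and-upload flow described above; the `/upload` endpoint name is an assumption for illustration (check app.py for the actual route):

```javascript
// Record mic audio with MediaRecorder and upload the result (illustrative sketch)
async function recordAndUpload(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });
  const chunks = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    const blob = new Blob(chunks, { type: 'audio/webm' });     // combine chunks into one Blob

    // Upload via FormData with a unique filename
    const form = new FormData();
    form.append('audio', blob, `recording-${Date.now()}.webm`);
    await fetch('/upload', { method: 'POST', body: form });    // hypothetical endpoint

    // Offer a local download through a temporary Object URL
    const url = URL.createObjectURL(blob);
    const a = Object.assign(document.createElement('a'), { href: url, download: 'recording.webm' });
    a.click();
    URL.revokeObjectURL(url);
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}
```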
- Session Monitoring: Tracks start/end timestamps for each tool usage
- Duration Calculation: Computes session length in seconds
- LocalStorage Persistence: Stores up to 100 most recent sessions as JSON
- Statistics Aggregation: Calculates daily and weekly totals from stored sessions
- Activity Feed: Displays recent practice sessions with formatted timestamps
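A small sketch of that persistence step, assuming a hypothetical 'practiceSessions' localStorage key (the key name and session shape in practice_tracker.js may differ):

```javascript
// Persist practice sessions to localStorage and aggregate totals (illustrative sketch)
function saveSession(tool, startMs, endMs) {
  const sessions = JSON.parse(localStorage.getItem('practiceSessions') || '[]');
  sessions.push({ tool, start: startMs, durationSec: Math.round((endMs - startMs) / 1000) });
  // Keep only the 100 most recent sessions
  localStorage.setItem('practiceSessions', JSON.stringify(sessions.slice(-100)));
}

function totalSecondsBetween(fromMs, toMs) {
  const sessions = JSON.parse(localStorage.getItem('practiceSessions') || '[]');
  return sessions
    .filter(s => s.start >= fromMs && s.start < toMs)   // daily or weekly window
    .reduce((sum, s) => sum + s.durationSec, 0);
}
```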
Resonate/
├── static/
│ ├── assets/ # Images, icons, and static files
│ │ ├── logo.png
│ │ ├── favicon.ico
│ │ └── ...
│ ├── css/ # Stylesheets
│ │ ├── styles.css # Global styles
│ │ ├── theme.css # Theme variables
│ │ ├── visualizer.css
│ │ ├── tuner.css
│ │ ├── chords.css
│ │ ├── spectrum.css
│ │ ├── metronome.css
│ │ ├── pitch.css
│ │ └── profile.css
│ ├── js/ # JavaScript modules
│ │ ├── audio.js # AudioCapture class
│ │ ├── visualizer.js # Visualization engine
│ │ ├── pitch_detector.js # Autocorrelation pitch detection
│ │ ├── chord_detector.js # Chord recognition
│ │ ├── fft_processor.js # FFT analysis utilities
│ │ ├── bpm_detector.js # Beat detection
│ │ ├── tuner_ui.js # Tuner interface
│ │ ├── chords.js # Chord detector UI
│ │ ├── spectrum.js # Spectrum analyzer UI
│ │ ├── metronome.js # Metronome logic
│ │ ├── recorder.js # Audio recording
│ │ ├── converter.js # Format conversion (WAV/MP3)
│ │ ├── practice_tracker.js # Practice session tracking
│ │ ├── settings.js # Settings manager
│ │ ├── theme.js # Theme switching
│ │ ├── home.js # Home page logic
│ │ ├── profile.js # Profile management
│ │ └── main.js # Firebase auth & app init
│ └── uploads/ # User recordings storage
│ └── metadata.json # Recording metadata
├── templates/ # HTML templates
│ ├── index.html # Landing page
│ ├── auth.html # Authentication page
│ ├── home.html # Visualizer page
│ ├── tuner.html # Tuner tool
│ ├── chords.html # Chord detector
│ ├── spectrum.html # Spectrum analyzer
│ ├── metronome.html # Metronome
│ ├── pitch.html # Pitch detection
│ ├── profile.html # User profile
│ ├── 404.html # 404 error page
│ └── 500.html # 500 error page
├── app.py # Flask application
├── requirements.txt # Python dependencies
├── .env # Environment variables (not in repo)
├── .env.example # Environment variables template
├── .gitignore # Git ignore rules
├── LICENSE # MIT License
└── README.md # This file hehe -_-
- Python 3.8 or higher (the app is mostly front-end heavy with a lightweight Flask backend; developed on 3.12.10)
- pip (to install required packages)
- A web browser with Web Audio API support
- Firebase account (for authentication setup)
- Clone the repository

```bash
git clone https://github.com/Rexaintreal/Resonate.git
cd Resonate
```

- Create a virtual environment

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Configure environment variables

Create a .env file in the root directory (use .env.example as a template):

```
SECRET_KEY=your-secret-key-here
FIREBASE_API_KEY=your-firebase-api-key
FIREBASE_AUTH_DOMAIN=your-project.firebaseapp.com
FIREBASE_PROJECT_ID=your-project-id
FIREBASE_STORAGE_BUCKET=your-project.appspot.com
FIREBASE_MESSAGING_SENDER_ID=your-sender-id
FIREBASE_APP_ID=your-app-id
```

- Set up Firebase
  - Create a new Firebase project at firebase.google.com
  - Enable Google Authentication in Firebase Console -> Authentication -> Sign-in method
  - Get your Firebase config from Project Settings -> General
  - Add your Firebase config values to .env

- Run the application

```bash
python app.py
```

- Access the application

Open your browser and navigate to http://localhost:5000
If you need temporary HTTPS, pass ssl_context='adhoc' to the app in the main block:

```python
app.run(ssl_context='adhoc')
```
Audio Settings (adjustable in-app):
- FFT Size: 512, 1024, 2048 (default), 4096, 8192, 16384
- Smoothing: 0.0 - 1.0 (default: 0.8)
- Sensitivity: 0% - 200% (default: 100%)
- Visualization Bar Count: 16 - 128 (default: 64)
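These settings map onto Web Audio analyser properties roughly as sketched below; sensitivity and bar count are assumptions about where the drawing code applies them rather than node properties:

```javascript
// Apply in-app audio settings (illustrative sketch)
function applyAudioSettings(analyser, settings) {
  analyser.fftSize = settings.fftSize;                  // 512 ... 16384, default 2048
  analyser.smoothingTimeConstant = settings.smoothing;  // 0.0 - 1.0, default 0.8
  // Sensitivity (0%-200%) scales bin magnitudes before drawing,
  // and bar count (16-128) controls how many bins are grouped per bar.
}
```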
Upload Limits:
- Max file size: 50 MB
- Supported formats: WebM, WAV, MP3, OGG, M4A, AAC, FLAC, OPUS
- Hosted on PythonAnywhere's free plan, so total storage is capped at 500 MB
| Shortcut | Action |
|---|---|
| `Space` | Start/Stop audio capture |
| `Esc` | Stop everything |
| `?` or `H` | Show keyboard shortcuts |
| `1` | Navigate to Visualizer |
| `2` | Navigate to Pitch Detection |
| `3` | Navigate to Tuner |
| `4` | Navigate to Metronome |
| `5` | Navigate to Chord Detector |
| `6` | Navigate to Spectrum Analyzer |
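A sketch of how such a global shortcut map can be wired up; the routes and handler names (`toggleCapture`, `stopEverything`, `showShortcutsOverlay`) are hypothetical stand-ins for the app's own functions:

```javascript
// Global keyboard shortcuts (illustrative sketch; routes and handlers are assumed)
const NAV = { '1': '/home', '2': '/pitch', '3': '/tuner', '4': '/metronome', '5': '/chords', '6': '/spectrum' };

document.addEventListener('keydown', (e) => {
  if (/INPUT|TEXTAREA/.test(e.target.tagName)) return;            // don't hijack typing
  if (e.key === ' ') { e.preventDefault(); toggleCapture(); }      // Start/Stop audio capture
  else if (e.key === 'Escape') stopEverything();                   // Stop everything
  else if (e.key === '?' || e.key.toLowerCase() === 'h') showShortcutsOverlay();
  else if (NAV[e.key]) window.location.href = NAV[e.key];          // Navigate between tools
});
```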
Visualizer
- Click microphone icon or press Space to start
- Choose visualization mode (Bars, Wave, Circular, Line)
- Adjust settings via gear icon (FFT size, smoothing, bar count)
- Record audio sessions with the record button
Tuner
- Select your instrument (Guitar, Bass, Ukulele, Violin, Chromatic)
- Choose tuning preset if available (Standard, Drop D, Open G, etc.)
- Play a note and tune until the needle centers
- Click reference notes to hear target pitches
Chord Detector
- Start detection with microphone button
- Play chords on your instrument
- View detected chord name, notes, and confidence
- Check chord history for progression tracking
Spectrum Analyzer
- Start analysis to view real-time frequency spectrum
- Monitor frequency bands from sub-bass to brilliance
- View dominant frequency and peak detection
- Track energy distribution across frequency ranges
Metronome
- Set BPM (40-240) manually or use tap tempo
- Choose time signature (2/4, 3/4, 4/4, 5/4, 6/8, 7/8)
- Use auto-detect BPM to match music tempo
- Sync detected BPM to metronome automatically
- LameJS - MP3 encoding library
- Firebase - Authentication and user management
- Flask - Python web framework
- Tailwind CSS - Utility-first CSS framework
- Font Awesome - Icon library
- WAV Format Documentation - soundfile.sapp.org
- Web Audio API - MDN Web Docs
- FFT Algorithm - Fast Fourier Transform for frequency analysis
- Autocorrelation - Pitch detection algorithm
- Landing Page Design - CodePen by techgirldiaries
- Visual Elements - CodePen by andyfitz
- LeetCohort - Free Competitive Python DSA Practice Platform
- Sorta - Sorting Algorithm Visualizer
- Ziks - Physics Simulator with 21 Simulations
- Eureka - Discover Local Hidden Spots
- DawnDuck - USB HID Automation Tool
- Lynx - OpenCV Image Manipulation WebApp
- Libro Voice - PDF to Audio Converter
- Snippet Vision - YouTube Video Summarizer
- Syna - Social Music App with Spotify
- Apollo - Minimal Music Player
- Notez - Clean Android Notes App
Feel free to submit a pull request!
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
Saurabh Tiwari
- Portfolio: saurabhcodesawfully.pythonanywhere.com
- Email: saurabhtiwari7986@gmail.com
- Twitter: @Saurabhcodes01
- Instagram: @saurabhcodesawfully
- GitHub: @Rexaintreal