How to master a song according to AI engines

Hi guys. I just wanted to take the temperature of modern AI engines, so I asked Perplexity, Gemini 3 Pro, GROK and ChatGPT to teach me how to master a song. ChatGPT misunderstood my intention and explained how to create a song instead of how to master one.
I should probably have asked them to master for a specific genre.

Get Perplexity Pro free for 12 months if you have a PayPal account with a credit card tied to it (only works today). Just remember to cancel the subscription next December or you're on the hook. https://www.perplexity.ai/join/p/paypal-subscription

Happy New Year :slight_smile: :palm_tree: :sun:

------ PERPLEXITY.AI ------

Mastering a song polishes your final mix for professional loudness, balance, and consistency across playback systems. As a patient tutor, we’ll break this down into foundational steps, building your skills through practice and critical listening rather than just presets. Follow these sequentially in your DAW, starting with a well-mixed stereo file peaking at -6dB to -3dB for headroom.

Preparation Steps

Set up an accurate listening environment first—use treated room acoustics or reference monitors/headphones to avoid misleading decisions from bass buildup or harsh reflections. Export your mix as a 24-bit WAV at your project’s sample rate, take a day-long break to refresh your ears, and import 2-3 reference tracks from similar genres for comparison. Check levels on multiple systems (phones, cars, hi-fi) and note issues like muddiness or dullness before processing.
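If you like to sanity-check the exported premaster outside your DAW, here is a minimal Python sketch of the headroom check described above (assuming the numpy and soundfile packages are installed; "mix.wav" is a placeholder file name):

```python
# Quick headroom check before mastering (assumes: pip install numpy soundfile,
# and a premaster called "mix.wav" in the working directory).
import numpy as np
import soundfile as sf

data, rate = sf.read("mix.wav")          # float samples in [-1.0, 1.0]
peak = np.max(np.abs(data))              # highest sample magnitude
peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")

print(f"Sample peak: {peak_dbfs:.2f} dBFS")
if peak_dbfs > -3.0:
    print("Hot mix: consider re-exporting with more headroom (-6 to -3 dBFS).")
```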

Tonal Balance with EQ

Apply subtle linear-phase EQ first: high-pass filter rumble below 20-30Hz, cut muddy lows around 200-400Hz by 1-3dB with wide Q, and boost air above 10kHz sparingly for sparkle. Use your reference tracks to match spectral curves—tools like Ozone’s Master Assistant analyze and suggest curves visually. Listen in mono to ensure no phase issues, then A/B with references, adjusting no more than 3dB total to preserve mix integrity.
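As a rough illustration of those two EQ moves (a rumble high-pass plus a broad mud cut), here is a hedged Python sketch using scipy biquads; in practice you would do this with your DAW's EQ, and the file name, frequencies and gains below are only example assumptions:

```python
# Toy "mastering EQ" pass (assumes numpy, scipy, soundfile; "mix.wav" is a
# hypothetical premaster). Illustrates two moves: rumble high-pass + broad cut.
import numpy as np
import soundfile as sf
from scipy import signal

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ-cookbook peaking EQ, returned as a single second-order section."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array([[b[0] / a[0], b[1] / a[0], b[2] / a[0], 1.0, a[1] / a[0], a[2] / a[0]]])

data, rate = sf.read("mix.wav")
hp = signal.butter(2, 25, btype="highpass", fs=rate, output="sos")  # rumble filter
mud = peaking_biquad(300, -1.5, 0.7, rate)                          # wide 1.5 dB mud cut

out = signal.sosfilt(np.vstack([hp, mud]), data, axis=0)
sf.write("mix_eq.wav", out, rate)
```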

Dynamics Control

Add gentle compression (2-4dB gain reduction) with slow attack (10-30ms) and auto-release to glue elements without squashing transients. Follow with multiband compression if needed for taming bass peaks or harsh mids, keeping ratios under 4:1. This connects to EQ by stabilizing tones before enhancement, teaching you how dynamics affect perceived loudness.
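To make the attack/ratio/gain-reduction idea concrete, here is a toy bus-compressor sketch in Python (illustrative only, not a mastering-grade detector; numpy and soundfile are assumed, file names and the threshold are placeholders):

```python
# Minimal "glue" compressor sketch: slow attack, 2:1 ratio, aiming at a dB or two
# of gain reduction on the loudest hits. Assumes numpy/soundfile; toy code only.
import numpy as np
import soundfile as sf

def bus_compress(x, fs, thresh_db=-18.0, ratio=2.0, attack_ms=30.0, release_ms=250.0):
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    detector = np.max(np.abs(x), axis=1) if x.ndim == 2 else np.abs(x)  # linked L/R detector
    env, gain = 0.0, np.ones(len(detector))
    for n, d in enumerate(detector):
        coeff = atk if d > env else rel
        env = coeff * env + (1.0 - coeff) * d            # smoothed level
        level_db = 20 * np.log10(max(env, 1e-9))
        over = level_db - thresh_db
        gr_db = over * (1.0 - 1.0 / ratio) if over > 0 else 0.0
        gain[n] = 10 ** (-gr_db / 20)
    return x * gain[:, None] if x.ndim == 2 else x * gain

data, rate = sf.read("mix.wav")
sf.write("mix_comp.wav", bus_compress(data, rate), rate)
```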

Enhancement and Limiting

Excite harmonics subtly (under 2%) with saturation or exciters for warmth and presence, focusing on tape or tube emulations. Finish with a brickwall limiter: set the ceiling to -0.5 to -1dB true peak, aim for -14 to -9 LUFS integrated loudness (streaming targets), and push the input gain until you see 3-6dB of gain reduction without distortion. Reference pros like those using Ozone macros for vocal balance and width.
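The saturation step can be pictured as gentle waveshaping blended in at a low mix amount; the following Python sketch stands in for a tape/tube plugin (tanh soft clipping is only an approximation of what those plugins do, and the drive/mix values and file names are arbitrary examples):

```python
# Hedged sketch of gentle tape-style saturation via tanh waveshaping.
import numpy as np
import soundfile as sf

def saturate(x, drive_db=3.0, mix=0.2):
    drive = 10 ** (drive_db / 20)
    wet = np.tanh(x * drive) / np.tanh(drive)   # normalized so peaks stay comparable
    return (1 - mix) * x + mix * wet            # blend in only a little harmonic content

data, rate = sf.read("mix.wav")
sf.write("mix_sat.wav", saturate(data), rate)
```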

Final Checks and Export

Dither to 16-bit for CD/streaming, sequence album tracks for consistent loudness, and embed metadata (ISRC, artwork). Test on 5+ systems, compare to references, and iterate—mastery comes from repeating this cycle on 10+ songs. Practice by mastering one mix weekly, noting what changes work across genres.
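Dithering is normally just a checkbox in your limiter or export dialog, but as an illustration of what it does, this sketch applies TPDF dither manually before truncating to 16-bit (numpy and soundfile assumed; file names are placeholders):

```python
# Manual TPDF dithering to 16-bit (most DAWs/limiters do this for you).
import numpy as np
import soundfile as sf

data, rate = sf.read("master_24bit.wav")       # float data in [-1, 1]
lsb = 1.0 / 32768.0                            # one 16-bit quantization step
tpdf = (np.random.uniform(-0.5, 0.5, data.shape)
        + np.random.uniform(-0.5, 0.5, data.shape)) * lsb   # triangular-PDF noise, about +/-1 LSB
dithered = np.clip(data + tpdf, -1.0, 1.0)

sf.write("master_16bit.wav", dithered, rate, subtype="PCM_16")
```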

------ GOOGLE GEMINI 3 PRO --------

Mastering is the final step of audio post-production.[1] Its goal is to make a song sound balanced, cohesive, and “competitive” (loud enough) across all playback systems—from expensive studio monitors to cheap phone speakers.

Here is a comprehensive guide to mastering a song, tailored for 2025 standards.

Phase 1: Preparation (Before You Touch a Plugin)

You cannot master a bad mix.[1][2] Ensure your mix is ready before starting.

  • Headroom: Your mix should peak between -6dB and -3dB.[2] This gives you “room” to add processing without instantly distorting.[1][2]

  • Clean Export: Bounce your mix as a high-quality WAV file (same sample rate/bit depth as your project, typically 24-bit/44.1kHz or 48kHz).[2] Do not put a limiter on your master bus when exporting the mix.[1][2]

  • Fresh Ears: Never master immediately after mixing. Take at least a few hours (ideally a day) off so your ears reset.[2]

  • Reference Tracks: Import a professionally mastered song in the same genre into your project. Turn its volume down by about 3-6dB so it matches the volume of your unmastered mix. You will use this to compare tonal balance (bass/treble) and dynamics.[2]
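Rather than guessing the 3-6dB figure, you can level-match the reference by measuring integrated loudness; a small sketch, assuming the pyloudnorm and soundfile Python packages and placeholder file names:

```python
# Level-match a reference against your unmastered mix by integrated loudness.
import soundfile as sf
import pyloudnorm as pyln   # ITU-R BS.1770 loudness meter

mix, mix_rate = sf.read("my_mix.wav")
ref, ref_rate = sf.read("reference_master.wav")

mix_lufs = pyln.Meter(mix_rate).integrated_loudness(mix)
ref_lufs = pyln.Meter(ref_rate).integrated_loudness(ref)

print(f"Turn the reference by {mix_lufs - ref_lufs:+.1f} dB to match your mix.")
```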


Phase 2: The Signal Chain (The “Recipe”)

Mastering typically follows a specific order of plugins.[1][2] Here is the standard chain:

1. Subtractive EQ (Cleaning)

  • Goal: Remove “invisible” junk frequencies that eat up headroom.

  • Action:

    • Low Cut (High Pass): Cut everything below 20–30 Hz with a steep curve (12–24dB/octave). These sub-frequencies are inaudible but trigger your compressor/limiter unnecessarily.[2]

    • Corrective Cuts: If the track sounds “muddy,” try a small cut (1–2dB) around 250–400 Hz.[2] If it sounds “harsh,” look at 2–4 kHz.

    • Technique: Use a narrow “Q” setting to surgically remove bad frequencies.[2]

2. Compression (The Glue) [1][2][3]

  • Goal: Tame the loudest peaks so the track feels consistent and “glued” together.

  • Settings:

    • Ratio: Low (1.5:1 to 2:1).[1][2][3] Mastering compression should be gentle.[2][3]

    • Attack: Slow (30ms+). This lets the “punch” of drums pass through before clamping down.[1]

    • Release: Auto (or timed to the groove of the song).

    • Gain Reduction: Aim for only 1–2dB of reduction on the loudest hits. If the needle is moving wildly, you are over-compressing.[1]

3. Additive EQ (Sweetening)

  • Goal: Shape the overall tone.

  • Action:

    • Broad Strokes: Use wide curves (low Q).

    • High Shelf: A gentle boost (+1dB) above 10 kHz adds “air” and expensive-sounding sheen.[2]

    • Low Shelf: A small boost below 100 Hz can add weight, but be careful not to make it boomy.[2]

4. Saturation (Optional but Recommended)

  • Goal: Add harmonic richness and perceived loudness without raising the actual peak volume.

  • Action: Use a tape or tube saturation plugin.[2] Dial it in until you hear it, then back it off slightly. It should feel “thicker,” not distorted.

5. Limiting (Loudness)

  • Goal: Make the track loud without clipping (going into the red).

  • Settings:

    • Ceiling: Set to -1.0 dBTP (True Peak). This is the modern standard for streaming services like Spotify to prevent distortion during transcoding.[1][2]

    • Threshold/Gain: Lower the threshold (or boost gain) until you hit your target loudness.

    • Target: For 2025, a safe competitive target is -14 to -12 LUFS (Integrated).[2]

    • Gain Reduction: If your limiter is reducing more than 3–4dB, your mix might need fixing, or you need to compress more in stage 2.
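To verify the ceiling and LUFS targets above, a short metering sketch may help (it assumes the pyloudnorm, soundfile and scipy Python packages; the 4x oversampling is only a rough true-peak estimate, and "master.wav" is a placeholder):

```python
# Check the two numbers the limiting stage cares about: integrated LUFS and an
# approximate true peak via 4x oversampling.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln
from scipy.signal import resample_poly

data, rate = sf.read("master.wav")

lufs = pyln.Meter(rate).integrated_loudness(data)
oversampled = resample_poly(data, 4, 1, axis=0)          # crude inter-sample peak estimate
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))

print(f"Integrated loudness: {lufs:.1f} LUFS (target around -14 to -12)")
print(f"Approx. true peak:   {true_peak_db:.2f} dBTP (keep at or below -1.0)")
```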


Phase 3: Critical Listening & Ear Training

This is where amateurs are separated from pros.[1][2] You need to train your ears to hear what the tools are doing.[1][2][4]

  • The “Vowel” Technique:

    • Muddy/Boomy: Sounds like “Ouh” or “Oh” (200–500 Hz).[2]

    • Boxy/Honky: Sounds like “Aw” or “Eh” (500–1000 Hz).[2]

    • Nasal/Tinny: Sounds like “Ee” (1–2 kHz).[2]

    • Harsh/Piercing: Sounds like “Sss” (4–8 kHz).[2]

    • Exercise: Sweep an EQ boost across your track and listen for these vowel sounds to identify problem areas.

  • Check for Over-Compression:

    • Listen to the drums. Do they sound “punchy” (good) or like they are being swallowed/flattened (bad)?

    • Does the track “pump” (volume sucks in and out) on every kick drum hit? If so, your release time is too fast or threshold is too low.


Phase 4: The Environment (Budget Home Studio)

You can’t fix what you can’t hear.[1][2]

  • Speaker Placement: Your head and the two speakers should form an equilateral triangle.[1][2][5][6]

  • Budget Treatment: If your room echoes (clap your hands to test), hang thick blankets or heavy curtains behind you and on the walls to your left and right.[1][2]

  • The Car Test: The ultimate reality check.[1][2] Listen to your master in a car. If the bass rattles the trunk too much or the vocals disappear, go back and adjust.

Phase 5: Free Tools to Start With (2025)

You don’t need to spend money to learn.[1][2]

  • EQ: TDR Nova (Dynamic EQ) or Tokyo Dawn SlickEQ.

  • Compressor: TDR Kotelnikov (Very transparent mastering compressor).[2][7]

  • Limiter: VladG Limiter No6 or Kilohearts Limiter.

  • Metering (Essential): Youlean Loudness Meter (Free version is perfect for checking LUFS).[2]

Summary Checklist

  1. Prep: Mix peaks at -6dB, clean WAV export.
  2. EQ 1: Cut subs (<30Hz) and mud.
  3. Compress: 1–2dB reduction, slow attack.
  4. EQ 2: Gentle high shelf boost for “air.”
  5. Limit: Ceiling at -1.0 dBTP, target -14 LUFS.
  6. Check: Compare volume to reference track; check in car.

------ GROK --------

What is Audio Mastering?

Mastering is the final step in music production, where you take a finished stereo mix and polish it for release. The goals are:

  • Balance the overall tone (frequency response).
  • Control dynamics for consistency.
  • Increase perceived loudness without distortion.
  • Ensure the track translates well on different playback systems (phones, speakers, clubs).
  • For albums/EPs: Make all songs consistent in volume and feel.

A great mix makes mastering easier—mastering can’t fix a bad mix, but it enhances a good one.

Step-by-Step Guide to Mastering a Song

Here’s a beginner-friendly process. Use a DAW (Digital Audio Workstation) like Ableton Live, Logic Pro, FL Studio, Reaper, or GarageBand.

  1. Prepare Your Mix

    • Export your final mix as a high-quality stereo file: WAV or AIFF, 24-bit (or higher), at least 44.1kHz sample rate.
    • Leave headroom: Peaks should hit around -6dB to -3dB (no clipping).
    • No processing on the master bus during mixing (remove any limiters/EQ if added).
    • Trim silence at start/end, add fades if needed.
  2. Set Up Your Mastering Session

    • Create a new project in your DAW.
    • Import the stereo mix file.
    • Work at a higher sample rate if possible (e.g., 96kHz) for better processing.
    • Use good monitoring: Studio headphones/monitors in a treated room (or decent ones to start).

(Examples of DAW sessions during mastering.)

  3. Reference Tracks

    • Import 2-3 professional songs in the same genre.
    • Match their volume to your mix.
    • A/B compare constantly: How does your track stack up in bass, clarity, width, loudness?
  4. Typical Mastering Chain Order

    • Build this on your master channel with plugins.

(A typical mastering processing chain: corrective EQ → compression → saturation/exciter → stereo imaging → limiter.)

  • Subtractive/Corrective EQ first: Cut problem frequencies (e.g., mud around 200-300Hz, harshness 3-5kHz). Use broad, gentle cuts (1-3dB).
  • Compression (optional/light): Glue the track, 1-3dB gain reduction. Slow attack (>20ms), medium release.
  • Additive EQ: Boost for warmth/air if needed (subtle!).
  • Saturation/Exciter (optional): Add harmonic warmth.
  • Stereo Widening (optional): Enhance width, but keep low end mono.
  • Limiter last: Push loudness. Aim for -14 to -10 LUFS integrated (for streaming). True peak max -1dBTP.

(Use a loudness meter like Youlean Loudness Meter (free version available) to check LUFS and true peaks.)

  5. Key Tips During Processing

    • Make subtle changes—mastering is about polish, not overhaul.
    • Listen at low volumes to check balance.
    • Take breaks: Fresh ears are crucial.
    • Check mono compatibility (collapse to mono and listen for phase issues); see the quick check sketch after this list.
    • Test on multiple systems: Car, phone, earbuds.
  6. Export the Master

    • 16-bit/44.1kHz WAV for most platforms (dither when reducing bit depth).
    • For streaming: Target -14 LUFS (Spotify normalizes anyway, but avoid over-limiting).
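For the mono-compatibility tip above, a quick offline check might look like this (a sketch assuming numpy and soundfile, a stereo file, and a placeholder file name; your DAW's correlation meter does the same job in real time):

```python
# Rough mono-compatibility check: fold to mono, compare level, and report a
# simple L/R correlation figure (near 1.0 = safe, near or below 0 = phase trouble).
import numpy as np
import soundfile as sf

data, rate = sf.read("master.wav")       # assumes a stereo file (N, 2)
left, right = data[:, 0], data[:, 1]

mono = 0.5 * (left + right)
stereo_rms = np.sqrt(np.mean(data ** 2))
mono_rms = np.sqrt(np.mean(mono ** 2))
corr = np.corrcoef(left, right)[0, 1]

print(f"Mono fold-down level change: {20 * np.log10(mono_rms / stereo_rms):+.2f} dB")
print(f"L/R correlation: {corr:+.2f}")
```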

Tools to Get Started (Free Options in 2025)

  • Free Plugins: Youlean Loudness Meter 2 (loudness analysis), Frontier (free limiter), Voxengo SPAN (spectrum analyzer), stock DAW plugins.
  • All-in-One Free: Many DAWs have built-in tools; or try free trials of iZotope Ozone Elements.
  • AI/Online Free Options: LANDR (free previews/masters), or other browser-based tools for quick results.

Path to Mastery

Practice on your own tracks and references. Start simple—EQ + Limiter only. Over time, your ears will improve. If possible, send to a pro engineer for feedback or final polish (affordable online services exist).

You’ve got this—start with one track and iterate! If you have a specific DAW or genre, I can give more tailored advice.

------ CHATGPT --------

Here’s a practical, beginner-to-intermediate guide to mastering a song for release, written DAW-agnostic so you can follow it in Logic, Ableton, FL Studio, Reaper, etc.


0. What mastering is (and isn’t)

Mastering = final polish + translation + loudness compliance.
It’s not fixing bad mixes. If something bothers you in mastering, 80% of the time it should be fixed in the mix.


1. Prepare your mix (this matters more than plugins)

Before touching a master chain:

  • Export format:

    • WAV or AIFF
    • 24-bit (or 32-bit float)
    • Same sample rate as your session
  • Headroom:

    • Peak around -6 dBFS
    • No limiter on the mix bus
  • No clipping, no master EQ/limiter

  • Listen on multiple speakers/headphones
    Fix obvious issues before mastering.


2. Set up your mastering session

  • One stereo track (your mix)
  • Insert plugins on the master channel
  • Reference 1–3 commercial tracks in a similar style
    Level-match them (very important!)

3. Typical mastering chain (simple & effective)

:one: EQ (subtle, corrective)

Goal: clean and balance, not reshape.

  • High-pass at 20–30 Hz (gentle)

  • Fix resonances:

    • Mud: 200–400 Hz
    • Harshness: 2–5 kHz
    • Air: +0.5–1 dB at 10–16 kHz if needed
  • Use small moves (±0.5–1.5 dB)

If you’re boosting more than 2 dB → stop and revisit the mix.


:two: Compression (glue, not punch)

Goal: control dynamics slightly.

  • Ratio: 1.5:1 – 2:1
  • Attack: slow (20–40 ms)
  • Release: auto or 100–300 ms
  • Gain reduction: 1–2 dB max

If the mix loses life → back off.


:three: Saturation (optional but powerful)

Adds perceived loudness and warmth.

  • Very subtle
  • Tape or soft clip style
  • Drive until you barely hear it, then pull back

:four: Stereo imaging (use carefully)

  • Do NOT widen the low end

    • Keep <120 Hz mono
  • Slight widening above 3–5 kHz if needed

  • If the mix already feels wide → skip this
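For the "keep the low end mono" rule, the usual trick is mid/side processing: high-pass only the side signal. A minimal sketch, assuming numpy, scipy and soundfile and a placeholder file name (a DAW stereo imager normally does this for you):

```python
# Keep everything below ~120 Hz mono by high-passing only the side channel.
import numpy as np
import soundfile as sf
from scipy import signal

data, rate = sf.read("master.wav")       # assumes a stereo file (N, 2)
left, right = data[:, 0], data[:, 1]

mid = 0.5 * (left + right)
side = 0.5 * (left - right)

hp = signal.butter(2, 120, btype="highpass", fs=rate, output="sos")
side = signal.sosfilt(hp, side)                  # remove stereo content below 120 Hz

out = np.column_stack([mid + side, mid - side])  # decode back to L/R
sf.write("master_monobass.wav", out, rate)
```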


:five: Limiter (final loudness)

This is where release-ready loudness happens.

  • True Peak ON

  • Ceiling: -1.0 dBTP (safe for streaming)

  • Push input until:

    • 2–4 dB gain reduction on loudest parts
  • If it pumps or distorts → stop
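As an illustration of "push input gain into a fixed ceiling", here is a toy peak limiter in Python (instant attack, smoothed release, no lookahead or true-peak detection, so treat it purely as a sketch; the file names and settings are placeholders):

```python
# Toy peak limiter: boost the input, then ride a gain envelope that clamps fast
# and recovers slowly so sample peaks never exceed the ceiling.
import numpy as np
import soundfile as sf

def limit(x, fs, input_gain_db=4.0, ceiling_db=-1.0, release_ms=80.0):
    x = x * 10 ** (input_gain_db / 20)                  # push level into the limiter
    ceiling = 10 ** (ceiling_db / 20)
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    det = np.max(np.abs(x), axis=1) if x.ndim == 2 else np.abs(x)
    gain, g = np.ones(len(det)), 1.0
    for n, d in enumerate(det):
        need = min(1.0, ceiling / d) if d > 0 else 1.0  # gain needed to stay under ceiling
        g = need if need < g else rel * g + (1 - rel) * need  # clamp fast, recover slowly
        gain[n] = g
    y = x * (gain[:, None] if x.ndim == 2 else gain)
    return np.clip(y, -ceiling, ceiling)                # safety clip for anything left over

data, rate = sf.read("premaster.wav")
sf.write("master_limited.wav", limit(data, rate), rate)
```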


4. Loudness targets (very important)

Streaming platforms (Spotify, Apple Music, YouTube)

  • -14 LUFS integrated
  • -1.0 dBTP max

Club / EDM / loud genres

  • -9 to -7 LUFS (accepting less dynamics)

Acoustic / jazz / classical

  • -16 to -18 LUFS

Louder ≠ better. Streaming will turn loud masters down.
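The turn-down is simple arithmetic, which is why chasing extra loudness buys nothing on normalized platforms; a tiny example with assumed numbers:

```python
# Back-of-the-envelope view of streaming normalization (example figures only).
measured_lufs = -8.0        # an aggressively loud master
platform_target = -14.0     # e.g. Spotify's default normalization level
print(f"Playback gain applied: {platform_target - measured_lufs:+.1f} dB")  # -6.0 dB
```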


5. Quality control checklist

Before exporting:

:check_mark: No clipping
:check_mark: No harsh highs or boomy lows
:check_mark: Vocals sit naturally
:check_mark: Translates on:

  • Headphones
  • Phone speaker
  • Car / small speakers

6. Export settings (final master)

  • WAV
  • 16-bit with dither (for distribution)
  • 44.1 kHz (unless distributor specifies otherwise)
  • Name clearly:
    Artist – Song Title (Master).wav

7. Common beginner mistakes

:cross_mark: Over-EQing
:cross_mark: Crushing with limiter
:cross_mark: Stereo widening bass
:cross_mark: Mastering without references
:cross_mark: Fixing mix problems in mastering


8. Recommended beginner plugins (free & paid)

Free

  • TDR Nova (EQ)
  • Youlean Loudness Meter
  • Limiter No6
  • Voxengo SPAN

Paid (popular)

  • FabFilter Pro-Q / Pro-L
  • iZotope Ozone
  • Waves L2 / L3

Want a next step?

I can:

  • Build you a mastering chain preset
  • Walk you through mastering your own song step-by-step
  • Tailor this for a specific genre (EDM, hip-hop, rock, acoustic)

Just tell me what you want to do next :headphone:

Don’t want to sound mean, but why copy-paste everything here? I mean, if we’re interested we can prompt them ourselves, and it’s a whole lot of text to read for nothing.
Maybe if you want to start a conversation, you could summarize it for people (preferably not using LLMs) and ask open or specific questions?

1 Like

Probably @Garf just wanted to show the different results of those AI engines.
They all have in common that they suggest exporting the mix and doing the mastering afterwards, which is something that I don’t get. Anyway, imho “Google Gemini 3 Pro” delivers the most accurate result in comparison.

Some notes:

I prefer to export the finished (mixing and mastering done) song in 16 bit. Maybe I’ll do 24 bit in the future, let’s see. I just don’t see (or rather hear) the benefits.

You can, but you can’t fix a crappy mix by mastering. Crap remains crap.

No! Use a DAW like Renoise, dammit!

Compression isn’t part of the mastering process, so there’s no need to put a compressor in the mastering chain.

Why is that “important”? To make everything sound the same? Personally I’ve never ever used a reference track. And I would claim that my stuff doesn’t sound that bad. :wink:

Summary: I’m doing it all wrong. So don’t listen to me! :sweat_smile:

1 Like

Let me be polite

If you are not interested, then don’t read it; I didn’t tell you to. If you are ONLY interested in a summary, then ask your AI to summarise it and quit moaning.

Haha I swear I read Garbageband.

I tried once but got bored with it, maybe I’ll try if I have a good track deserving a final polish some day.

I have the SSL Bus Compressor 2 on my chain; it’s definitely good.

Well, there is no single correct answer; we still listen to great old songs that were mixed and mastered totally differently, right?

Yeah, what’s the problem? I’m sure he didn’t read any of it because he’s not interested, so it makes no sense to complain about it; just skip it and move on to the next post. If he wants a summary he could copy it all, ask his AI to summarise it, and post that instead.

Whether we like it or not, AI will be everywhere; I’m getting more sure of that.
I just don’t think it will be fully creative, but it will make perfectly catchy songs we all want to hear. It’s annoying actually, but it is what it is, and unless governments put rules on it we all need to use it, maybe not creatively but certainly for research.

Sorry for expecting an online forum to be about humans discussing things together; guess I’m too old and I should go talk to a random LLM instead.

1 Like

Happy new year to you,

interesting experiment, I would conclude these things:

  1. Internet and Youtube are filled with 98% garbage and trivial content.
  2. Most mastering studios actually do master human played music only, so they even separate composition from mixing. But this is not useful for synthetic music.
  3. Most “knowledge” about mixing and mastering still comes from the early 80s, when there were mixing consoles etc., but time has actually moved on.
  4. AI chat bots are dumb as a brick, if the internet is flooded with nonsense, it will tell you the most common nonsense.
  5. Using linear-phase EQ on lows is a very bad idea, as it alters the punch and adds pre-ringing; it only makes sense on very high frequencies, and even there it is usually fine to just alter the phase. Just don’t use a cutoff filter for lows on the master; use a slight low shelf instead (e.g. the Renoise EQ). You commonly don’t need linear phase.
  6. The professional mastering studios of modern pop music often seem to use stems, not a stereo mixdown, and process each single track, so you could even call it “post-mixing”. Most useful action: riding the volume faders of each track to get more expression from the song and emphasize song phases. At least I’ve seen that in Pensado’s Place videos when they are mastering the charts stuff.

People here have completely different approaches. E.g. if you mostly use a very similar sound set and drums in each song, like TNT, mastering will be easy and transferable. If you use completely different synths, tunings and drums in your songs, mixing or mastering to a consistent sound will be hard. Why even do albums these days anymore? Nobody is selling whole albums anymore. I would conclude that most mastering tips are not helpful at all if you are not used to the composition itself or didn’t understand the intention behind it.

Sorry if this reads as too negative and maybe arrogant to you, but I really don’t like AI shit, and the bros featuring it all the time. Currently the content quality is drastically lowered by that, worldwide. And if there is one thing the world does not need, it’s this: even more low-quality content.

Some AI mastering tools/VSTs actually seem to produce usable results now, e.g. LANDR mastering or so. Often the result is crappy, too. Might be worth a try to find good settings for your “album”, whatever.

BTW, if you really publish tracks, you should of course render in 32-bit float, as the platforms can easily post-process the material then, which is heavily done (or also publish as “lossless”). 16 bit is not a good choice for the current platforms, IMHO. Apple Music uses 48kHz, and I think most others do, too. At least YouTube will keep 48kHz if the original was 48kHz (not 100% sure here), so 48kHz seems to be the best choice for a very clear-sounding mix. Surely 44.1kHz is very common, too. Not sure about resampling to 48kHz then.

I get the AI hate, but I’m sure it’s here to make a meaningful change for most industries as it matures. I also think it will make excellent music in high fidelity; it just won’t do something 100% uniquely creative, but who knows what some prompt magician can make it do.

Agreed. Gemini got this wrong, and ChatGPT said a gentle high-pass would be fine, but a low shelf is the most transparent way, which is also what I use. Anyway, I think you are too negative; in my opinion most of these tips are perfectly valid and at least provide a good starting point.

But what for? For the “glue”? Aren’t there compressors on each track in order to tame peaks, and additional compressors on the busses for the “glue”?

Even if you compare songs within the same genre, there are significant differences in terms of mixing and mastering, especially in terms of mixing. Even if it’s the same artist. Imho it’s only important to have the same kind of sound in case you’re creating an album; those sounds should be normalized, of course.

I fully agree. I would even say it’s bad. Composing and mixing are kind of inseparable when it comes to electronic music.

Hell no! Use them in each track instead.

That’s the only way that makes sense when it comes to exporting something unfinished. Nevertheless it’s more useful to do the mastering internally imho.

I would say that using the same type of sounds (respectively making music within the same genre) would speed up the process, but it’s as “hard” as mastering anything else. In the end it’s always the same process, the steps that you need to do don’t differ.

44.1 kHz is the industry standard when it comes to music; 48 kHz is the industry standard when it comes to video etc. The higher the sample rate, the more CPU and space is required. So if you’re doing music that’s not supposed to be used in a video, video game or whatever, 44.1 kHz is the way to go. The performance while composing will be much better too when using 44.1 kHz. I stick with 44.1 kHz.

Yes, the +2dB glue is working well for me; even when I have compressors on all tracks and busses, it still squeezes out a little bit.