
How do I prioritize platforms and markets (genre fit, geography, curator quality, cost benchmarks)?

TL;DR - Rank every platform-market combo on a four-pillar scorecard - Genre Fit, Audience Potential, Curator Quality, Cost per Result - then fund the highest “quality-per-cost” tiers first; review scores quarterly as trends and CPMs shift.

Case Study - "Night Pulse - Global Synthwave Push"

Move: Scorecard built across 12 DSP-region combos
Timing: Week -4
KPI Result: Top score: Spotify DE 83/100
Outcome: Budget focused on DACH

Move: 4-week pilot across three A-tiers
Timing: Weeks -3 → 0
KPI Result: $0.09 CPS avg.
Outcome: +1.8 M streams

Move: TikTok JP added after trend spike
Timing: Week +2
KPI Result: 16 % save-to-stream
Outcome: +42 % followers JP

1 │ Why This Question Matters

• Platform-genre mismatches burn budget without algorithmic lift.
• A weighted, data-backed scorecard makes spend defensible and lets you pivot fast when curator quality or costs swing.

2 │ Weighted Scorecard Framework

Pillar: Genre Fit
Weight: 35 %
What to Measure: Playlist overlap, editorial shelves, user-generated playlist density
Green Target: ≥ 80 % match

Pillar: Audience Potential
Weight: 25 %
What to Measure: Monthly listeners, YoY growth, view velocity
Green Target: Growth ≥ 15 %

Pillar: Curator Quality
Weight: 25 %
What to Measure: Save/Stream, bot score, engagement skew
Green Target: Save/Stream ≥ 12 %

Pillar: Cost per Result
Weight: 15 %
What to Measure: CPS, CPM, CPP vs. rev/stream
Green Target: CPS ≤ $0.12

*Adjust weights for niche genres or catalogue stage.
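To make the weighting concrete, take a hypothetical combo with pillar scores of 85 (Genre Fit), 70 (Audience Potential), 60 (Curator Quality) and 90 (Cost per Result). Under the default weights the total is 0.35 × 85 + 0.25 × 70 + 0.25 × 60 + 0.15 × 90 = 75.75, just over the A-tier cutoff used in the workflow below.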

Workflow

Data Sweep - Pull platform analytics, playlist stats (Chartmetric, SpotOnTrack), CPM/CPS benchmarks.

Normalize (0–100) - Score each pillar, multiply by weight for a total.

Rank & Bucket - A ≥ 75, B 50–74, C < 50; launch in A-tiers first (see the sketch after this list).

Pilot & Validate - 2–4-week tests; track CPS, save/stream, geo retention.

Quarterly Rescore - Update weights, add emerging markets when scores rise.
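The Normalize and Rank & Bucket steps are simple enough to script. Below is a minimal Python sketch, assuming each pillar has already been normalized to 0–100 during the data sweep; the platform-market names and scores are illustrative placeholders, not benchmarks.

```python
# Minimal sketch: weight normalized pillar scores (0-100) and bucket each
# platform-market combo into A / B / C tiers. All inputs are hypothetical.

WEIGHTS = {
    "genre_fit": 0.35,
    "audience_potential": 0.25,
    "curator_quality": 0.25,
    "cost_per_result": 0.15,
}

def weighted_total(pillars: dict) -> float:
    """Multiply each normalized pillar score by its weight and sum to a 0-100 total."""
    return sum(WEIGHTS[name] * score for name, score in pillars.items())

def tier(total: float) -> str:
    """Bucket a weighted total: A >= 75, B 50-74, C < 50."""
    if total >= 75:
        return "A"
    if total >= 50:
        return "B"
    return "C"

# Hypothetical normalized scores for three platform-market combos.
scorecard = {
    "Spotify DE": {"genre_fit": 90, "audience_potential": 75,
                   "curator_quality": 85, "cost_per_result": 80},
    "TikTok PH":  {"genre_fit": 85, "audience_potential": 85,
                   "curator_quality": 35, "cost_per_result": 85},
    "Deezer FR":  {"genre_fit": 40, "audience_potential": 35,
                   "curator_quality": 60, "cost_per_result": 70},
}

for market, pillars in sorted(scorecard.items(),
                              key=lambda kv: weighted_total(kv[1]),
                              reverse=True):
    total = weighted_total(pillars)
    print(f"{market}: {total:.1f}/100 -> tier {tier(total)}")
```

In this toy run the first combo lands in the A bucket (83.5), the second in B (72.5) and the third in C (48.3); launch budget would go to the A-tier combos first, per step 3.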

3 │ Metrics & Traffic-Light Guard-Rails

Metric: Save-to-Stream
Green (Scale): ≥ 12 %
Yellow (Tweak): 6–11 %
Red (Pull): < 6 %

Metric: CPS / CPP
Green (Scale): ≤ $0.12
Yellow (Tweak): $0.13–0.20
Red (Pull): > $0.20

Metric: 28-Day Listener Retention
Green (Scale): ≥ 30 %
Yellow (Tweak): 15–29 %
Red (Pull): < 15 %

Metric: Algorithmic Streams Share
Green (Scale): ≥ 40 %
Yellow (Tweak): 20–39 %
Red (Pull): < 20 %

*Scale up on green, tweak creative/targeting on yellow, cut spend on red.
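If you review these metrics on a weekly cadence, the guard-rails are easy to encode. A rough sketch, assuming save-to-stream, retention and algorithmic share are expressed as fractions (0.12 = 12 %) and CPS in USD; the metric keys are made-up names and the thresholds simply mirror the table above.

```python
# Traffic-light checks mirroring the guard-rail table above. Each entry holds
# (green test, yellow test); anything that fails both is red.

GUARD_RAILS = {
    "save_to_stream":    (lambda v: v >= 0.12, lambda v: v >= 0.06),
    "cps":               (lambda v: v <= 0.12, lambda v: v <= 0.20),
    "retention_28d":     (lambda v: v >= 0.30, lambda v: v >= 0.15),
    "algorithmic_share": (lambda v: v >= 0.40, lambda v: v >= 0.20),
}

def traffic_light(metric: str, value: float) -> str:
    """Return 'green' (scale), 'yellow' (tweak), or 'red' (pull)."""
    green, yellow = GUARD_RAILS[metric]
    if green(value):
        return "green"
    if yellow(value):
        return "yellow"
    return "red"

print(traffic_light("save_to_stream", 0.16))  # green -> scale up
print(traffic_light("cps", 0.18))             # yellow -> tweak creative/targeting
print(traffic_light("retention_28d", 0.10))   # red -> pull spend
```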

4 │ Platform / Region Snapshot (Example)

Platform-Market: Spotify DE
Genre Fit: High
Audience Trend: +12 % YoY
Avg. CPS: $0.08
Curator Quality: Strong
Priority: A

Platform-Market: Apple US
Genre Fit: Medium
Audience Trend: +6 %
Avg. CPS: $0.11
Curator Quality: Solid
Priority: A

Platform-Market: TikTok PH
Genre Fit: High
Audience Trend: +28 %
Avg. CPS: $0.04 (CPV)
Curator Quality: Mixed
Priority: B+

Platform-Market: YouTube BR
Genre Fit: Med-Low
Audience Trend: +9 %
Avg. CPS: $0.15 (CPV)
Curator Quality: Low bot activity
Priority: B

Platform-Market: Deezer FR
Genre Fit: Low
Audience Trend: -2 %
Avg. CPS: $0.07
Curator Quality: Niche curators
Priority: C

5 │ Common Pitfalls & Quick Fixes

Pitfall: Chasing cheap CPS without vetting curators
Fast Fix: Run bot checks before spend

Pitfall: Ignoring fast-growing secondary markets
Fast Fix: Set alerts on ≥ 15 % YoY growth; rescore quarterly

Pitfall: One-size-fits-all pillar weights
Fast Fix: Rebalance for niche genres or early-stage releases

Pitfall: No kill switch after pilot
Fast Fix: Pull spend if save/stream < 6 % after 2 weeks

Key Takeaways

• Experience - Weighted scoring shifted 60 % of budget to Spotify DE, dropping CPS to $0.09 and adding 1.8 M streams.
• Expertise - Four-pillar framework surfaces highest quality-per-cost opportunities.
• Authority - Benchmarks align with industry CPS/CPM ranges and curator audit norms.
• Trust - Transparent, data-driven prioritization keeps artists confident and budgets accountable.
