
Transparent Lead Scoring: What It Is, Why It Matters, and How Alchemize Does It

Josh White

CTO, Alchemize

3 min read

“This lead scored 74 — why?”

That’s the question that kills trust in most CRMs. The score exists. The reasoning doesn’t. Your rep is left deciding whether to believe a number generated by a model they can’t interrogate.

We built Alchemize’s scoring engine differently. Every score surfaces the signals that drove it. Here’s how.

Why black-box scoring fails at agencies

Agency sales cycles are short and opinionated. A rep who’s been in the industry for five years has a calibrated instinct for what a good lead looks like. A score they can’t explain doesn’t augment that instinct — it competes with it.

The result is usually one of two failure modes:

  • Reps ignore the score and revert to gut feel, making the scoring infrastructure worthless
  • Reps over-index on the score and miss signals the model hasn’t been trained to weight

Transparent scoring sidesteps both. When the rep can see “scored 81 because: budget ≥ £5k, timeline ≤ 30 days, managing 10+ clients, currently using manual follow-up”, they can agree, disagree, and act — rather than just react to a number.

The four signal categories

Alchemize’s qualifier uses four weighted signal groups:

1. Intent signals (40%)

What the prospect says about urgency and purchase readiness. Questions like “how soon are you looking to move?” and “have you evaluated other tools?” produce intent markers that weigh heavily.

2. Fit signals (30%)

Whether the prospect matches the agency profile Alchemize is built for: 3+ client accounts, running paid social or SEO campaigns, some form of existing CRM or follow-up process (even if it’s a spreadsheet).

3. Budget signals (20%)

We don’t ask for an exact figure — we ask for a range and cross-reference it against typical per-seat pricing. A prospect who says “we’re allocating £500/month” scores differently than one who says “budget isn’t a constraint right now.”

4. Engagement signals (10%)

Conversation depth, questions asked, and whether the prospect proactively asked about integration. A prospect who asks “does this connect to Slack?” is further along than one who just answers questions.
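To make the weighting concrete, here’s a minimal sketch of how the four category percentages above could combine into a single score. The function and variable names are illustrative assumptions, not Alchemize’s production code:

```python
# Hypothetical sketch of category-weighted lead scoring -- illustrative only.
# Weights match the post: intent 40%, fit 30%, budget 20%, engagement 10%.
CATEGORY_WEIGHTS = {
    "intent": 0.40,
    "fit": 0.30,
    "budget": 0.20,
    "engagement": 0.10,
}

def score_lead(category_scores: dict[str, float]) -> tuple[int, dict[str, float]]:
    """Combine per-category scores (each 0-100) into a weighted total,
    returning the final score plus each category's contribution so the
    breakdown can be surfaced next to the number."""
    contributions = {
        category: category_scores.get(category, 0.0) * weight
        for category, weight in CATEGORY_WEIGHTS.items()
    }
    return round(sum(contributions.values())), contributions

# Example: strong intent and fit, moderate budget, light engagement.
total, parts = score_lead({"intent": 95, "fit": 80, "budget": 70, "engagement": 50})
# total == 81; parts shows exactly where those points came from
```

Returning the contributions alongside the total is the whole point: the rep sees not just "81" but that 38 of those points came from intent signals.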

How it surfaces in the portal

Every lead in your Alchemize dashboard shows a breakdown panel alongside the score. You’ll see which signals fired, the weight applied, and the raw transcript snippet that generated each one.

The intent is to make the score feel like a colleague’s recommendation, not an oracle’s verdict.
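One way to picture the data behind that panel — field names here are hypothetical, not Alchemize’s actual API — is a row per fired signal, carrying the weight applied and the transcript snippet that triggered it:

```python
from dataclasses import dataclass

# Hypothetical shape for one row of the breakdown panel -- illustrative
# field names, not Alchemize's actual schema.
@dataclass
class SignalBreakdown:
    signal: str    # which signal fired, e.g. "timeline <= 30 days"
    category: str  # intent / fit / budget / engagement
    weight: float  # category weight applied to this signal
    snippet: str   # raw transcript excerpt that generated it

breakdown = [
    SignalBreakdown("timeline <= 30 days", "intent", 0.40,
                    "We'd want to be live before the end of the month."),
    SignalBreakdown("managing 10+ clients", "fit", 0.30,
                    "We run paid social for about a dozen accounts."),
]
```

Keeping the raw snippet attached to each signal is what lets a rep check the model’s reasoning against the actual conversation.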

What we don’t score

We deliberately don’t score:

  • Company size alone — agency headcount is a poor proxy for deal value
  • Domain name — too noisy
  • LinkedIn presence — out of scope for conversational qualification

The model is narrow by design. A focused score you trust beats a comprehensive score you don’t.

Calibrating over time

Every time a rep marks a meeting as “qualified” or “not qualified” after the call, that signal flows back into the model. After 50 feedback loops, the scoring weights adjust to your specific client profile.

This is the part that matters most in the long run. The model on day one is a good default. The model on day 100 is trained on your data.
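A simplified way to picture that feedback loop — a sketch under assumed names and a hypothetical agreement threshold, not the actual training pipeline — is to nudge a category’s weight up when it agreed with the rep’s verdict, down when it didn’t, then renormalise:

```python
# Simplified sketch of feedback-driven recalibration -- illustrative only.
# After each "qualified" / "not qualified" verdict, nudge the weights of
# categories that agreed with the rep, then renormalise to sum to 1.0.
def recalibrate(weights: dict[str, float],
                category_scores: dict[str, float],
                qualified: bool,
                rate: float = 0.02) -> dict[str, float]:
    adjusted = {}
    for category, weight in weights.items():
        predicted_hot = category_scores[category] >= 50  # hypothetical threshold
        agreed = predicted_hot == qualified
        adjusted[category] = weight * (1 + rate if agreed else 1 - rate)
    total = sum(adjusted.values())
    return {c: w / total for c, w in adjusted.items()}

weights = {"intent": 0.40, "fit": 0.30, "budget": 0.20, "engagement": 0.10}
# Rep confirms the meeting was qualified; intent and fit agreed, so they
# gain weight, while budget and engagement lose a little.
weights = recalibrate(weights,
                      {"intent": 90, "fit": 85, "budget": 40, "engagement": 30},
                      qualified=True)
```

Small per-verdict nudges are why it takes on the order of 50 feedback loops before the weights meaningfully diverge from the defaults.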
