
Product update (Jan 2026)

SalesHUD is now focused on next-step certainty: wrap-up drift detection → next-step lock → proof metrics. Some older posts may reflect earlier positioning.

Company

Building with pilot users: What we've learned

7 min read

Three months ago, we started working with our first five pilot users—early customers who agreed to use SalesHUD daily and give us ruthlessly honest feedback. Here's what surprised us about building a tool focused on next-step certainty: the biggest problem wasn't objection handling or discovery—it was calls ending with "I'll think about it" instead of booked meetings or decision dates.

Why we chose pilot users over beta testers

We could have launched publicly and collected feedback from hundreds of users. Instead, we chose to work deeply with five companies. Here's why:

Surface-level feedback is easy to get, but not always useful. Most beta testers will tell you what they liked or didn't like. Pilot users will tell you why they didn't use a feature, what workflow it broke, and what they tried instead.

We wanted partners who would:

  • Use SalesHUD on real calls with real prospects (not just kick the tires)
  • Have weekly sync calls to walk us through their workflow
  • Share screen recordings of calls where SalesHUD helped—or got in the way
  • Challenge our assumptions and push back when features didn't make sense

What we got right

1. The core HUD concept resonated immediately

Every pilot user immediately understood the value prop: guidance in the moment, not analysis after the fact. No one questioned whether this was a problem worth solving.

One Head of Sales put it this way: "I've tried call recording tools, conversation intelligence platforms, coaching software. They all tell me what went wrong. You're the first tool that tries to prevent it."

2. Auto-generated notes were unexpectedly crucial

We built SalesHUD primarily for real-time cues and prompts. But the feature that got the most love? Auto-generated notes.

Reps told us they used to spend 5-10 minutes after every call typing up notes and action items. With SalesHUD, that's done automatically. They can review, edit if needed, and push to CRM in seconds.

The insight: Reducing post-call friction is just as valuable as improving in-call execution.

3. Lightweight beats comprehensive

In our initial builds, the HUD would show detailed talk tracks—full paragraphs pulled from playbooks. Reps found this overwhelming. They didn't want to read while talking; they wanted nudges.

We pivoted to cue cards: short, one-line prompts like "Ask about decision-making process" or "Probe on timeline." That was the sweet spot. Enough to jog memory, not so much that it felt like reading a script.

What surprised us

1. Different personas wanted different levels of guidance

We assumed all reps would want the same thing. Turns out:

  • New reps wanted more prescriptive prompts. They needed reminders for every key question.
  • Experienced AEs wanted minimal cues. They found constant prompts distracting.
  • Managers wanted full playbook enforcement—they wanted to see (and coach on) whether reps were hitting all the beats.

This led us to build modes: Coaching Mode (full playbook enforcement), Standard Mode (smart cues), and Notes-Only Mode (just capture notes, no prompts). Users could toggle based on their experience level.
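For readers who think in code, here's a minimal sketch of how per-rep guidance modes like these could be modeled. The type names and fields (GuidanceMode, HudSettings, enforcePlaybook) are illustrative assumptions, not our actual schema.

```typescript
// Hypothetical sketch: guidance modes and what the HUD surfaces in each.
// Names and fields are illustrative, not SalesHUD's internal schema.
type GuidanceMode = "coaching" | "standard" | "notes-only";

interface HudSettings {
  showCues: boolean;        // one-line prompts like "Probe on timeline"
  enforcePlaybook: boolean; // surface every playbook beat, for coaching and review
  captureNotes: boolean;    // auto-generated notes stay on in every mode
}

const MODE_PRESETS: Record<GuidanceMode, HudSettings> = {
  coaching:     { showCues: true,  enforcePlaybook: true,  captureNotes: true },
  standard:     { showCues: true,  enforcePlaybook: false, captureNotes: true },
  "notes-only": { showCues: false, enforcePlaybook: false, captureNotes: true },
};

// A rep (or their manager) toggles the mode per profile or per call.
function settingsFor(mode: GuidanceMode): HudSettings {
  return MODE_PRESETS[mode];
}
```

The design choice that mattered was keeping note capture constant across modes and only dialing prompt density up or down.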

2. Integrations mattered more than we thought

We originally planned to build CRM integrations later. But partners kept asking: "Can it pull data from Salesforce?" "Can I sync notes to HubSpot automatically?"

We realized the HUD is only as useful as the context it has. Partners wanted playbook search and deal-specific prep notes accessible mid-call. So we prioritized platform integrations much earlier than planned. (Note: HubSpot and Salesforce integrations are currently on our roadmap.)

3. Privacy concerns were top of mind

Multiple partners asked: "Does the prospect know they're being recorded?" "Where is the transcript stored?" "Can reps turn this off for sensitive calls?"

We'd thought about security, but not privacy controls at the user level. Based on feedback, we added:

  • Clear in-meeting indicators when SalesHUD is active
  • One-click pause/resume for sensitive topics
  • Admin controls to enforce recording consent best practices
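As a rough illustration only (not our actual configuration surface), workspace-level privacy controls along these lines might look like the following; every field name here is a hypothetical stand-in.

```typescript
// Illustrative sketch of workspace-level privacy controls; field names are hypothetical.
interface PrivacySettings {
  showActiveIndicator: boolean;     // visible in-meeting badge when SalesHUD is active
  allowRepPause: boolean;           // one-click pause/resume for sensitive topics
  requireRecordingConsent: boolean; // block capture until consent is confirmed
  consentPromptText?: string;       // admin-supplied consent language, if required
}

const workspaceDefaults: PrivacySettings = {
  showActiveIndicator: true,
  allowRepPause: true,
  requireRecordingConsent: true,
  consentPromptText: "This call is being transcribed to generate notes.",
};
```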

Read more about our security practices →

What we changed based on feedback

Here are the biggest pivots we made in the last three months:

Feature removals

  • Real-time sentiment analysis: We thought it would be cool to show prospect sentiment (interested, skeptical, confused). Partners found it distracting and inaccurate. Removed.
  • Competitor mention alerts: We had a pop-up that would trigger when a competitor was mentioned. Too aggressive. We moved it to a subtle badge in the context panel instead.

Feature additions

  • Playbook upload: Originally we required teams to manually input talk tracks. Partners wanted to upload existing Google Docs or Notion pages. We built a parser that extracts key sections automatically.
  • Custom cue triggers: Managers wanted to define their own triggers ("If the prospect mentions budget constraints, show pricing flexibility options"). We added a rule builder for this (sketched after this list).
  • Post-call summary emails: Reps wanted to send follow-up emails with summaries of what was discussed. We auto-generate a draft now, pulling from the structured notes.
  • Microsoft Teams support: Many partners used Teams for customer calls. We added full support alongside Zoom and Google Meet.
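To make the rule builder idea concrete, here's a hypothetical sketch of what a custom cue trigger could look like as data, using the budget-constraints example above. The shape and the naive keyword matching are ours for illustration; they are not the product's internal format.

```typescript
// Hypothetical shape for a custom cue trigger; illustration only.
interface CueTrigger {
  id: string;
  keywords: string[];  // phrases that fire the trigger when heard in the transcript
  cue: string;         // the one-line prompt shown on the HUD
  cooldownSec: number; // don't re-fire the same cue back-to-back
}

const budgetTrigger: CueTrigger = {
  id: "budget-constraints",
  keywords: ["budget constraints", "too expensive", "no budget"],
  cue: "Show pricing flexibility options",
  cooldownSec: 300,
};

// Naive matcher over a transcript snippet (real matching would need to be fuzzier).
function firedCues(snippet: string, triggers: CueTrigger[]): string[] {
  const text = snippet.toLowerCase();
  return triggers
    .filter((t) => t.keywords.some((k) => text.includes(k.toLowerCase())))
    .map((t) => t.cue);
}
```

The point of expressing triggers as data rather than code is that managers can add or edit rules without waiting on us to ship anything.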

Lessons for other founders

If you're building a tool for teams (not end consumers), here's what worked for us:

1. Choose partners who represent your ICP, not just anyone willing to try

We intentionally chose companies with 5-20 person sales teams doing complex B2B deals. If we'd picked early-stage startups with one founder doing all the sales, we would have built the wrong thing.

2. Ask to watch, not just hear

Feedback calls are useful, but nothing beats watching screen recordings of real usage. We asked partners to record calls (with prospect consent) so we could see where they looked, what they clicked, and what they ignored.

3. Ship fast, but don't ship everything

We pushed updates weekly, but we also said no to a lot of feature requests. The goal wasn't to build everything partners asked for—it was to identify patterns across all five and solve those.

4. Incentivize honesty

We offered partners free access for a year. But we also told them: "If this isn't working, tell us. We'd rather you stop using it and explain why than silently churn."

That openness led to some of the most valuable feedback. One partner told us, "Your notes feature is great, but honestly, we still use Gong for call reviews." That pushed us to integrate with Gong instead of trying to replace it.

What's next

We're opening up to a second cohort of pilot users in early 2025. If you're interested in shaping the product, reach out. We're looking for teams who:

  • Have 5+ reps doing live calls daily (across Zoom, Meet, or Teams)
  • Already have some form of playbook or process
  • Want to improve consistency and execution (not just analytics)

Explore all use cases → or see how it works.

Stop 'circle back.' Lock the next step before hang-up.

Join pilot users testing next-step certainty (wrap-up drift detection → next-step lock → proof metrics).