Getting more out of data: the science behind successful sales

Posted October 28, 2020

As technology has become more accessible to sales teams in recent years, it has driven a push for “more data.” COVID-19 reaffirmed the importance of data and insights, but the truth has been there all along: too many sales orgs settle for a black box of data.

Today, it’s clear we’d all benefit from innovative ways of thinking about our approach to sales data.

“Having data is only mildly valuable,” said Mark Kosoglow, vice president of sales at Outreach. “The insights you draw from it are what’s valuable.”

Mark shared this observation with Pavel Dmitriev, our vice president of data science, during the first session of a three-part webinar series, New Ways to Measure Engagement.

The pair explored how organizations can make the most of their data to boost sales efficiency and overall success. Read on for some of the key takeaways.

What a Data Scientist Does for Sales Teams

Pavel broke down the role of a data scientist into four parts. He said they provide sales insights that are 1) descriptive, 2) diagnostic, 3) predictive, and 4) prescriptive.

In the first case, he said, data scientists want to “summarize what’s going on in a few key metrics.” An example: compiling a list of accounts and their statuses.

Diagnostic insights add benchmarks that indicate how well things are going. Predictive insights “show what’s going to happen in the future if you don’t make any changes.”

Finally, prescriptive insights recommend changes to help sales teams improve their most important metrics.
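To make the first two categories concrete, here is a minimal sketch of descriptive versus diagnostic metrics over a list of accounts. The records, field names, and the 30-day staleness benchmark are all hypothetical illustrations, not taken from the webinar:

```python
from collections import Counter

# Hypothetical account records; field names are illustrative only.
accounts = [
    {"name": "Acme", "status": "open", "days_in_stage": 12},
    {"name": "Globex", "status": "open", "days_in_stage": 45},
    {"name": "Initech", "status": "closed-won", "days_in_stage": 0},
    {"name": "Umbrella", "status": "closed-lost", "days_in_stage": 0},
]

# Descriptive: summarize what's going on in a few key metrics.
status_counts = Counter(a["status"] for a in accounts)

# Diagnostic: compare against a benchmark to see how well things are going.
STALE_THRESHOLD_DAYS = 30  # assumed benchmark for illustration
stale_open = [
    a["name"]
    for a in accounts
    if a["status"] == "open" and a["days_in_stage"] > STALE_THRESHOLD_DAYS
]

print(dict(status_counts))  # counts by status
print(stale_open)           # open accounts lagging behind the benchmark
```

Predictive and prescriptive insights would build on summaries like these, projecting the metrics forward and recommending changes.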

Spotting the Good Data

Pavel and Mark agreed that a common industry complaint is “my data is bad.” Data quality is never perfect because systems, approaches, and other contributing factors change too often.

Still, teams should always strive to make it better.

“Ensuring good data quality, in my opinion, is best left to tools,” Pavel said. “It’s very hard to obtain good data quality if humans are involved.”

Asking sales reps to log their activities over the course of a day could be helpful. But if their responses are nondescript, like “phone calls,” the data won’t do much for you.

Automating such data — by providing a list of curated, specific responses, perhaps — is one way to generate consistency and better data quality.
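One way to picture that approach: accept only entries from a curated list of specific activity labels, rejecting vague ones like “phone calls.” This is a hypothetical sketch; the labels and the `log_activity` helper are invented for illustration:

```python
# Hypothetical curated list of specific, consistent activity labels.
CURATED_ACTIVITIES = {
    "cold call - connected",
    "cold call - voicemail",
    "follow-up email sent",
    "discovery meeting held",
}

def log_activity(entry: str) -> str:
    """Accept only curated, specific activity labels; reject vague free text."""
    normalized = entry.strip().lower()
    if normalized not in CURATED_ACTIVITIES:
        raise ValueError(f"Vague or unknown activity: {entry!r}")
    return normalized

print(log_activity("Cold Call - Voicemail"))
# log_activity("phone calls") would raise ValueError: too nondescript
```

Constraining inputs this way trades a little flexibility for data that is consistent enough to analyze.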

“Very soon I think this is going to be automated,” Pavel told Mark, “like what we’re doing with Kaia: trying to automatically capture more and more action items and notes.”

Going Granular with Sales Data

Getting all of your data in one place is challenging. It’s distributed across various systems and stores, and in different formats. Then, you have to narrow it down to what might serve you best.

“There’s a bit of an art in data science to defining good success metrics,” Pavel said. The general approach he likes to implement is one of “decomposition.”

Some deals take months or longer to close. Many different episodes play out in the interim. So knowing whether a sale was finalized is only part of the story.

Pavel suggested taking the entire sales process and splitting it into more granular stages so it’s less daunting and more accurate than a too-holistic approach.

From there, you can analyze reasons for mini-successes along the way.

“For example, you take a prospecting stage and we could split it into two steps,” Pavel said. “You could say that the first step is we want to get a reply from a prospect, and the second step is we want to convert that reply into a meeting.”
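The two-step decomposition Pavel describes can be sketched with funnel arithmetic. The counts below are hypothetical, purely to show how each step gets its own conversion rate instead of one opaque number for the whole stage:

```python
# Hypothetical funnel counts for the prospecting stage, decomposed into
# the two steps described: outreach -> reply -> meeting.
prospects_contacted = 500
replies = 60
meetings_booked = 15

reply_rate = replies / prospects_contacted          # step 1: get a reply
reply_to_meeting_rate = meetings_booked / replies   # step 2: convert reply to meeting

print(f"Step 1 (reply rate): {reply_rate:.1%}")
print(f"Step 2 (reply -> meeting): {reply_to_meeting_rate:.1%}")
```

With the stage split this way, a weak overall number can be traced to the specific step that needs attention.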

Pavel measures three characteristics in each step: success, time, and effort. These dimensions move the needle of productivity, and focusing on them across the sales experience leads to greater efficiency and more desired outcomes.

Our data science team examines the amount of time it takes to move through each step. The number of manual versus automated tasks carried out along the way makes up the “effort” measurement.

“Success,” he admits, “is trickier to quantify.” There are varying approaches to it that could all be viable.

But in the case of the “getting a reply from a prospect” step, instead of broadly measuring “reply rates,” data scientists should look at “positive reply rates.” How teams might reasonably measure positivity is also up to them.

“If you have this success, time, and effort measured, then you can understand pretty well what’s happening in that stage,” Pavel said. “You can look at the content, the decisions being used and evaluate them, and you can start making changes.”
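Putting the three dimensions together for a single step might look like the sketch below. All numbers are hypothetical, and the “positive reply” counts are assumed to come from whatever sentiment-labeling approach a team chooses:

```python
from dataclasses import dataclass

@dataclass
class StepMetrics:
    """Success, time, and effort for one step of a decomposed sales stage."""
    positive_replies: int   # success numerator: positive replies, not raw replies
    total_outreach: int
    avg_days_in_step: float  # time dimension
    manual_tasks: int        # effort dimension: manual vs. automated work
    automated_tasks: int

    @property
    def success(self) -> float:
        # Positive reply rate rather than the broader raw reply rate.
        return self.positive_replies / self.total_outreach

    @property
    def effort_ratio(self) -> float:
        # Share of the work in this step done manually.
        return self.manual_tasks / (self.manual_tasks + self.automated_tasks)

step = StepMetrics(positive_replies=24, total_outreach=400,
                   avg_days_in_step=6.5, manual_tasks=30, automated_tasks=90)
print(f"success={step.success:.1%}, time={step.avg_days_in_step}d, "
      f"effort={step.effort_ratio:.0%} manual")
```

Tracking these three numbers per step is one way to see which content and decisions to evaluate first.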
