How to Build a Data Science Strategy for Any Team Size


Create a culture and practice that is fast moving and resilient to change

Sean Easter
Towards Data Science
Photo by Maarten van den Heuvel on Unsplash

If you’re a data science leader who has been asked to “build our data science strategy” with much freedom and little direction, this post will help you out. We’ll cover:

  • What we mean by strategy: Is it just a plan? A roadmap? Something more, or less? In this section we’ll get specific and adopt a working definition of what we’re building when we build a strategy.
  • How does this concept apply to a data science team in a practical organizational context? Here we’ll examine how our concept of strategy applies to data science, and get specific on what our strategy applies to.
  • How to actually author that strategy.

Throughout, we’ll borrow heavily from strategy approaches to R&D, which shares key challenges with data science: The mission to innovate, and the increased uncertainty that comes with seeking discovery. When we conclude, you’ll come away with one clear definition of strategy, and a helpful process for authoring one for an organization of any size.

If, like myself, you lack a fancy MBA and have never taken a business strategy seminar, you might puzzle at what exactly someone wants when they ask you to develop a “data science strategy.” And you might not find initial searches very helpful. Classic, powerful frameworks like the Three C’s model (customers, competitors, company) make perfect sense at the level of a corporation determining where it should compete. Apply it to a function or team, and you find yourself feeling you’re stretching the concepts more than they can bear.

If you’re really like me, it’ll send you down a pretty deep rabbit hole of reading books like Lords of Strategy and The McKinsey Way. (Affiliate links.) The first is a delightful work of business history, and the second is a helpful collection of techniques pulled from the experience of successful consultants at the prestigious firm. Neither offers a quick answer to the question. One very beneficial side effect of reading Lords of Strategy is learning that data scientists are not alone here: “[I]t’s easy to conflate strategy with strategic planning, but it’s also dangerous. […] still today, there are many more companies that have a plan than there are that have a strategy. Scratch most plans, and you’ll find some version of, ‘We’re going to keep doing what we’ve been doing, but next year, we’re going to do more and/or better.’” This confusion of definitions has shown up in my experience, where several times an ask for a strategy boiled down to, “What’s your plan for the next few months?”

One very helpful definition of strategy, and the one we’ll adopt through the rest of this article, is thanks to this working paper on R&D strategy by Gary Pisano: “A strategy is nothing more than a commitment to a pattern of behavior intended to help win a competition.” The beauty of this definition is that it can apply across any and all levels and purposes of an organization. All teams, of all types and sizes, contribute to the organization’s competitive efforts, and all teams can define and declare the patterns of behavior they use to focus those efforts.

A strategy is nothing more than a commitment to a pattern of behavior intended to help win a competition.

—Gary Pisano

Pisano offers three requirements of a good strategy: Consistency, coherence and alignment. A strategy should help us make consistent decisions that contribute, cumulatively, toward a desired objective; should aid all corners of an organization in cohering their far-flung tactical decisions; and should align local actions with a larger collective effort.

And finally, they’re all founded on core hypotheses, bets about what will provide advantage in a competition. Pisano’s helpful example is that of Apple, whose strategy “to develop easy-to-use, aesthetically-pleasing products that integrate seamlessly with a broader system of devices in the consumer’s digital world” rests on a core hypothesis “that customers will be willing to pay a significantly higher price for products with these attributes.”

In essence, under this definition all strategies are bets that package the logic of decision-making: They give all parties a means to determine which actions aid a collective effort.

We will adopt this definition of strategy, and strive to define our own core strategic hypothesis for how data science will add value to our organization, along with the patterns we’ll commit to in pursuit of that value. Further, we’ll assume that our parent organization has a developed strategy of its own; this input will be crucial when we apply the third test, alignment. Having defined the form our final strategy should take, we’ll now turn our attention to bounding its scope.

To remind my friends how much fun I am, I sent several the same text message, “What do you think of when you hear ‘data science strategy’?” The answers ranged from very thoughtful points on data infrastructure and MLOps, past healthy bristling at the vagueness of the question (I feel seen), to the colorful, “Nonsense,” and “My ideal job.”

Small sample, but the diverse array of responses from this group — which included experienced product managers at both startups and large companies, a data science lead, and a consultant — speaks to how muddled definitions of this term can get. Worse, data scientists suffer from a second prong of confusion: what’s billed as “data science” in practice often follows from whatever skill set a firm wants to recruit for, gussied up with whatever title is in vogue.

To fix one of these degrees of freedom in our analysis, we will first adopt a common definition of data science for the rest of this article: The function devoted to creating value and competitive advantage from modeling an organization’s available data. That can take a few typical forms:

  • Building machine learning models that optimize customer-facing decisions in production
  • Building models that aid staff at all levels in completing their work, perhaps in customer-facing human-in-the-loop applications
  • Building interpretable models for inferences that aid business decision making

Note that we’re excluding BI and analytics, solely for the sake of focus and not because they’re less valuable than modeling work. Your analytics shop and your data science shop should be working together smoothly. (I’ve written about this here.)

Some, like my friend and Google PM Carol Skordas Walport, would suggest that data science strategy includes “How to get the data and infrastructure in a good enough state to do analysis or machine learning. I would say it’s how do you enable the team to get all the work done.” We’ll purposefully exclude these items of broader data strategy from scope. (Sorry, Carol.) We will, though, discuss navigating data and infrastructure limitations, and how developing your data science strategy can positively guide your broader data strategy.

Now we have bounds: We’re building a set of core strategic hypotheses on how machine learning and/or AI can add maximum value to an organization, with its own defined strategy or objectives, and a set of patterns a team will commit to in the pursuit of that value. How do we start?

Experienced machine learning product managers, engineers and data scientists will often remark that machine learning products are different from traditional software. An organization has to account for risk of model errors, data drift, model monitoring and refitting — hence the emergence of modern MLOps. And it’s fabulously easy to commit sins of engineering that wade ML applications into swamps of technical debt. (See “Machine Learning: The High Interest Credit Card of Technical Debt” for a great read on this topic.) So with all this cost, why do we do it?

Ultimately, we consider AI solutions because sophisticated models have a demonstrated track record of being able to detect valuable patterns. These can be anything from clusters of customer preference that imply novel segmentations, to the latent representations that a neural network finds to optimize predictions. Any given machine learning build relies on a case, or expectation, that a model can detect patterns that can improve a process, uncover actionable findings, or improve valuable predictions.

In defining the core strategic hypothesis for a data science team of any size, we can start with this McKinsey example description of how AI-enabled companies think differently. From “Winning with AI is a state of mind”:

If we choose the right use cases and do them the right way, we will learn more and more about our customers and their needs and continuously improve how we serve them.

This is an enormously helpful lens in the effort to build a data science strategy: It focuses us on maximum learning, and all we have to do is land on our organization’s definition of “right.” But what are the “right” use cases for us?

Here Pisano is helpful again, defining four elements of an R&D strategy that carry nicely to data science:

  • Architecture: The organizational (centralized, distributed) and geographic structure of our data science function.
  • Processes: The formalities and informalities of managing our work.
  • People: Everything from what mix of skills we seek to attract and our value proposition to our talent.
  • Portfolio: How we allocate resources across project types, and “the criteria used to sort, prioritize and select projects.”

We’ll start with the last concept, and turn our focus to defining the ideal portfolio of projects for our organization, the mix that we can convince ourselves will drive the most value. Given the great variation across organizations, we’ll start with one challenge every organization faces: Risk.

Modeling work has uncertain outcomes. “ML can do better” is an argument we often make based on history and intuition, and it often turns out to be true. But we never know at the start how well it will work, until we prove by construction how well ML can solve a problem. Learning the answer for any given use case can take variable levels of effort, and thus carry varying levels of cost. The uncertainty of that answer also varies, based on how widely our models have been applied before and how well we understand our data.

A friend and healthcare analytics product leader, John Menard, defined risk as an explicit part of data science strategy, “How are you maintaining a pipeline of small and larger bets, while maintaining healthy expectations that that is all they are? What is your strategy for killing a project when the data doesn’t pan out, or pivoting the deliverable should it not meet requirements?”

It’s wise for organizations to be principled and specific about the level of resourcing they can afford, and for how long. Here are a few useful questions to ask of any individual modeling effort (a rough scoring sketch follows the list):

  • Estimated likelihood of success: What are the odds this model use case will pan out?
  • Expected range of returns: If successful, will this project deliver a tiny improvement in a process that can produce huge returns at scale? Will a breakthrough differentiate you from competitors?
  • Expected time to discover failure: How long will it take to learn whether a project’s hypothesized value prop will materialize? What is the minimum amount of resources you can spend before learning this project won’t work out?
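To make these questions concrete, here is a minimal, hypothetical sketch of how candidate projects might be scored against them. The CandidateProject record, the expected_value_ratio helper, and every project name and number below are illustrative assumptions, not a prescribed method; the point is only that these three questions can be turned into an explicit, comparable ranking.

```python
from dataclasses import dataclass

@dataclass
class CandidateProject:
    """Hypothetical record for one modeling opportunity."""
    name: str
    p_success: float        # estimated likelihood the use case pans out (0-1)
    upside: float           # expected return if successful, in dollars
    cost_to_learn: float    # minimum spend before we know whether it works
    months_to_signal: int   # expected time to discover success or failure

def expected_value_ratio(p: CandidateProject) -> float:
    """Rough expected return per dollar spent before the outcome is known.

    Deliberately crude: it ignores the time value of money, correlation
    between projects, and the option value of what failures teach us.
    """
    return (p.p_success * p.upside) / p.cost_to_learn

portfolio = [
    CandidateProject("churn model", 0.7, 250_000, 40_000, 2),
    CandidateProject("dynamic pricing", 0.3, 2_000_000, 300_000, 9),
    CandidateProject("support ticket triage", 0.8, 80_000, 15_000, 1),
]

# Rank by expected return per dollar at risk, surfacing time-to-signal so
# slow-to-fail bets are a conscious choice rather than an accident.
for proj in sorted(portfolio, key=expected_value_ratio, reverse=True):
    print(f"{proj.name}: EV per dollar at risk = {expected_value_ratio(proj):.1f}, "
          f"months to signal = {proj.months_to_signal}")
```

A ranking like this is only a starting point; your organization’s risk appetite and the tradeoffs discussed next decide how much weight each input deserves.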

Hopefully, these principles are straightforward, and all are consensus good things. The ideal project is likely to pan out, with huge returns on investment, and if it fails, fails early. This heavenly triumvirate never materializes. The art is in making tradeoffs that fit your organization.

An early stage startup focused on disrupting a particular domain with AI could have investors, leadership and staff that accept the company as a single large bet on a particular approach. Or, it could prefer small projects that get to production fast and allow for fast pivots. Conversely, if we’re in a large, established company and well-regulated industry with ML-skeptics for stakeholders, we might choose to bias our portfolio toward low-LOE projects that deliver incremental value and fail fast. This can help build initial trust, tune stakeholders to the uncertainty inherent in DS projects, and align teams around more ambitious projects. Successful early small projects can also bolster the case for larger ones around the same problem space.

Here are a few examples of how to define your target portfolio in terms of project scope, duration, and expected returns:

  • “Being early in our collective data science journey, we’re focused on small, low-LOE, fast-failure use cases that will uncover opportunities without risking large amounts of staff time.”
  • “We’ve identified a portfolio of three large machine learning bets, each of which could unlock tremendous value.”
  • “We aim for a balance of small-, medium- and high-effort projects, with corresponding levels of return. This lets us deliver frequent victories while pursuing game-changing potential disruption.”

As a final principle to apply across the complete portfolio, aim for a collection of projects with non-correlated successes. That is, we want to look at our portfolio and sense that projects will succeed or fail independently. If multiple projects rest on a common assumption, or if we sense that they’re so closely related that they’ll succeed or fail together, then we should revisit our selection.

We’re done with this stage when we have:

  • Surveyed our data science and machine learning opportunities
  • Plotted them by investment, return and likelihood of success
  • Selected a rough cut priority list that’s consistent with our objectives and risk tolerance

Now that we’ve settled on our target portfolio, we’ll turn to ensuring that our processes position us to identify, scope and deliver valuable projects fast.

The question of whether to build or buy is perennial, and it often wades into complicated organizational dynamics. There’s no shortage of vendors and startups looking to deliver AI solutions. Many are snake oil; many work. Many internal tech and DS teams view the former as a joke, the latter as competitors, and the time spent separating the two as a huge waste. That view has merit, since time spent evaluating a vendor doesn’t advance a modeler’s skills, and if an organization doesn’t reward the effort, it’s a cost the data scientist pays without career reward. And this interpersonal complication compounds an already complicated business case: none of the typical software-solution concerns go away. You still have to worry about things like vendor lock-in and cloud integrations. Nevertheless, we should all be willing to buy vendor products that deliver higher ROI, and you can cut through distractions by considering your internal team’s unique advantages over boxed solutions.

In particular, your internal team can, in general, have governed access to much more of (perhaps all of) your organization’s proprietary data. This means that an internal team can probably understand it in more depth, and enrich it with other sources more easily, than could a single-purpose vendor solution. Given enough time and compute resources, a capable in-house team can probably beat a single-purpose vendor solution. (There’s a PAC theory joke in here somewhere.) But is it worth it?

Standard ROI and alternatives analysis here is key, with a focus on your time to internal market. Say we’re optimizing ad placements on an e-commerce site. We’ve winnowed a list of vendors down to one front-runner that uses a multi-armed bandit, a common method among leading marketing optimization vendors at time of this writing. We estimate the time to vendor integration at one month. Or, we could build our own MAB, and estimate that to take six. Would we expect that a MAB we build will outperform the one under the vendor’s hood, and sufficiently so to justify the delay?

Depends. Using Thompson sampling for a MAB buys you logarithmic bounds on expected regret, a jargon bomb that means it explores options without leaving much value on the table. That statement remains provably true regardless of whether it’s implemented by your in-house team or a vendor. Conversely, your in-house team is closer to your data, and taking a use case like this in-house amounts to a bet that you’ll find rich enough signals in that data to beat a vendor product. And perhaps that your team can inject domain knowledge an off-the-shelf solution lacks, providing a valuable edge. Finally, consider your in-house team’s opportunity cost: Is there another high-value item they could work on instead? If so, one option is to test the vendor, work on the other item, and reassess after you have measurable vendor results.
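For concreteness, here is a minimal sketch of the kind of model under discussion: a Beta-Bernoulli multi-armed bandit that uses Thompson sampling to choose among ad placements. The class, the number of placements, and the simulated click-through rates are all illustrative assumptions; a vendor's or in-house production system would layer contextual features, guardrails, and monitoring on top of this core loop.

```python
import random

class ThompsonSamplingBandit:
    """Minimal Beta-Bernoulli Thompson sampling over a set of ad placements."""

    def __init__(self, n_arms: int):
        # Beta(1, 1) priors: one (successes, failures) pair per placement.
        self.successes = [1] * n_arms
        self.failures = [1] * n_arms

    def choose(self) -> int:
        # Sample a plausible click-through rate for each arm from its posterior
        # and play the arm with the highest draw.
        draws = [random.betavariate(s, f)
                 for s, f in zip(self.successes, self.failures)]
        return max(range(len(draws)), key=lambda i: draws[i])

    def update(self, arm: int, clicked: bool) -> None:
        # Update the chosen arm's posterior with the observed outcome.
        if clicked:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1

# Toy simulation with made-up click-through rates per placement.
true_ctr = [0.02, 0.035, 0.05]
bandit = ThompsonSamplingBandit(n_arms=len(true_ctr))
for _ in range(10_000):
    arm = bandit.choose()
    bandit.update(arm, clicked=random.random() < true_ctr[arm])

print("Posterior mean CTR per placement:",
      [round(s / (s + f), 4) for s, f in zip(bandit.successes, bandit.failures)])
```

Note that nothing in this loop is proprietary; the build-versus-buy bet is really about whether your data and domain knowledge let you enrich it enough to beat the vendor's version.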

We’re done with this stage when we have:

  • Reviewed our opportunities from the prior step and, for each, answered, “Could we buy this?”
  • For each purchasable solution, answered whether we have a unique known or hypothetical advantage in-house
  • For each space with genuine trade-offs to be made, performed a trade-off analysis

Having defined our internal team’s strategic competitive advantages, we’ll now account for our internal processes, tooling and data capabilities.

I’ve discussed the topic of time-on-task with plenty of experienced data scientists, and every one cites the discovery, processing, cleaning, and movement (to a suitable compute environment) of data as the bulk of their time spent on the job. As another group of McKinsey authors write on AutoML and AI talent strategy, “Many organizations have found that 60 to 80 percent of a data scientist’s time is spent preparing the data for modeling. Once the initial model is built, only a fraction of his or her time — 4 percent, according to some analyses — is spent on testing and tuning code.” This isn’t what draws most of us into the game. In most of our minds it’s the cost we pay for the joy of building models with impact. For this reason, we often talk about the “foundations” that data scientists require to be successful. In my experience, this framing can quickly get in our way, and I’m going to challenge us to think of ourselves as a model factory, subject to constraints of tooling and an elaborate, often problematic, data supply chain.

Confession: I’ve never bought into these “foundation” talking points when platforms are under discussion.

“Data and ML platforms are the foundations successful machine learning rest on,” goes a bolded statement in countless slide decks and white papers. “And without a strong foundation,” some consultant concludes, paternalistically, “everything falls apart.”

Here’s the rub, though: Very few things “fall apart” without machine learning. Start your house on a bad foundation and your garage might collapse on itself, and you. Start a machine learning project without the benefit of developed data and ML platforms, and your model build will…take longer. And without that fancy new machine learning model, chances are your business will persist in the same way it has, albeit without some competitive advantage that ML aimed to deliver. But persisting in mediocrity isn’t doomsday.

That’s where this cliche loses me. It seeks to scare executives into funding platform efforts — valuable ones, it’s worth stressing — as though the world will end without them, and it will not. We scream that the sky is falling, and then when a stakeholder encounters the same old rain they’re used to, we lose credibility.

Nevertheless, I’d wager that firms with strong ML capabilities will outperform competitors that don’t — it’s not lost on me that my career as a modeling lead is exactly such a bet — and modern data and MLOps capabilities can greatly reduce AI capabilities’ time to market. Consider this excerpt from the McKinsey paper “Scaling AI like a tech native: The CEO’s role,” emphasis mine:

We frequently hear from executives that moving AI solutions from idea to implementation takes nine months to more than a year, making it difficult to keep up with changing market dynamics. Even after years of investment, leaders often tell us that their organizations aren’t moving any faster. In contrast, companies applying MLOps can go from idea to a live solution in just two to 12 weeks without increasing head count or technical debt, reducing time to value and freeing teams to scale AI faster.

Your data science strategy needs to account for your organizational and tooling constraints, and adopt patterns that produce models or units of knowledge that are actionable within those constraints. That is, modeling projects should always have (see the brief sketch after this list):

  1. A clear line of sight to minimum-viable modeling data. Your data science team should know where the source data is, and have a rough sketch of how it’ll need to be transformed.
  2. A straightforward and realistic path to realized value. How will you get a sufficiently performant model live, or otherwise apply model results?
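One lightweight way to enforce these two checks is to capture them in a structured intake record before a project enters the portfolio. The sketch below is purely illustrative; the UseCaseIntake fields and the example use case are assumptions, not a required template.

```python
from dataclasses import dataclass

@dataclass
class UseCaseIntake:
    """Hypothetical intake record for a proposed modeling project."""
    name: str
    source_data: str             # where the minimum-viable modeling data lives
    transformation_sketch: str   # rough outline of how it must be reshaped
    path_to_value: str           # how a performant model (or its findings) gets used
    deployment_constraints: str  # e.g. legacy integration, explainability, review cycles

    def is_ready_to_scope(self) -> bool:
        # Don't admit a project to the portfolio until both checks have answers.
        return bool(self.source_data.strip()) and bool(self.path_to_value.strip())

proposal = UseCaseIntake(
    name="claims triage model",
    source_data="claims warehouse: claims and adjuster_notes tables",
    transformation_sketch="join on claim_id, derive text features from notes",
    path_to_value="route high-risk claims to senior adjusters via the existing queue",
    deployment_constraints="decisions must be explainable to auditors",
)
print(proposal.name, "ready to scope:", proposal.is_ready_to_scope())
```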

Early-stage companies or teams with full, greenfield freedom over architecture and tooling are well-positioned to adopt a modern MLOps practice, which will make it easier to quickly prototype, deploy and monitor models to gauge their impact in the real world. Teams working alongside or within longstanding legacy tech might find that it wasn’t built with ML integration in mind, and that deployment is a large, heavyweight exercise. Firms in tightly regulated industries will likely find that many applications require high levels of explainability and risk control.

None of these challenges are insurmountable. We just have to be principled and savvy about timeline implications, and build this into our decision-making.

We’re finished with this stage when we have:

  • Surveyed our planned use cases to determine the path to data for each to get started
  • Determined each use case’s path to realized value if it were to succeed
  • Factored this into our expected investment and adjusted it from step one
  • Refined our prioritization in light of any changes we’ve discovered

Having refined our ideas of where to deploy data science, we’ll consider working models to ensure alignment.

Pisano defines architecture as “the set of decisions around how R&D is structured both organizationally and geographically.” Designing this includes mindful decisions about how to integrate our data scientists with a business unit. Are they fully centralized with a formal intake? Reporting to varied business units? Centralized and embedded? Reporting structures and decision-making authorities may not be under your control, particularly if you’ve been tasked with building a strategy for a unit with defined reporting lines. But if these points are under discussion, here are a few things to consider to maximize the value of DS outputs.

Will your data scientists be well-supported and appropriately measured? Consider the pipeline of junior data science talent. Data scientists join the field from a variety of quantitative backgrounds, typically with a mix of theoretical and practical skills. A typical MS grad spent those formative years building skills and understanding, and demonstrating that understanding to experts in their field. This generally doesn’t include an abundance of training in communicating technical findings to non-experts.

Contrast this with the experience they’ll have in a business setting, where they’ll likely have less domain knowledge and be one of the few with methods knowledge. They’ll be asked to apply techniques that few outside their function understand. Their projects will necessarily include more uncertainty than standard software builds. Their success will hinge on many more factors, many outside of the data scientist’s control, and they will have very little experience articulating the requirements that maximize the chances of success. Put all this together, and we start to see a thrown-in-the-deep-end situation emerge.

This can lead to challenges for other functional leaders during their first experience leading data science teams. This lesson from McKinsey’s “Building an R&D strategy for modern times” carries to our field as well:

Organizations tend to favor “safe” projects with near-term returns — such as those emerging out of customer requests — that in many cases do little more than maintain existing market share. One consumer-goods company, for example, divided the R&D budget among its business units, whose leaders then used the money to meet their short-term targets rather than the company’s longer-term differentiation and growth objectives.

In our field, this tends to play out with junior data scientists being asked by their non-technical supervisors to write whatever SQL query will answer the question(s) of the day. This is usually helpful, but usually not the sort of value an enterprise is looking to drive by recruiting savvy modelers.

This problem is much more easily solved when you have leaders who have managed DS or ML projects before. Regardless of function, success hinges on having people who can listen to a problem, scope analytical and modeling approaches to solving it, and manage the risks and ambiguity. Plenty of early-career data scientists thrive in these situations. In my experience they’re outliers with gifts in both communication and dealing with ambiguity. I’ve been lucky enough to hire a few by accident — hi Zhiyu! Bank on your ability to screen for these talents, and compete for them, at your peril.

All this would seemingly argue for centralizing your data science function. That’s one approach, and it brings us to our next important question.

Will your data scientists be close enough to the business to focus on the right problems? A central data science functional group is likely to get less exposure to the business problems you’d like solved, compared to hyper-local teams that report directly to a business team. Big, monolithic, functional teams with formal intakes can struggle to get the business input they need, largely because many stakeholders aren’t really sure what to ask for. If you’ve heard a horror story or two about data science teams turning out “science projects nobody asked for,” this is often a root cause. And again, resist the urge to stereotype: This is rarely because the data science team has too academic a mindset, and much more often because two different functions don’t know how to converse in a shared language.

What options does this leave us? It’s one reason embedded models have worked in my experience. In this model, your data science team is offered access to all of the forums you routinely discuss business problems in. They are responsible for seizing this opportunity to understand the problems a business team wants to solve, and for proposing approaches that can add value. They report to data science leaders, who ensure they are doing methodologically sound work, support them in getting what their projects need for success, and mentor and coach their growth.

Sometimes data science projects fail because of shoddy methodology; more often they fail because the available features aren’t predictive enough. Knowing the difference can be very difficult for someone outside a quantitative function.

We’ve finished with this step when we have:

  • Defined crisp ways of communicating the scope of data scientists or teams
  • Defined engagement patterns

As in all practical decisions, there are trade-offs everywhere and no silver bullets to be found. Completely autonomous local teams will maximize focus on local outcomes, at the cost of duplicated effort across the organization. A centralized function will minimize duplication, with an increased risk of deviating from practical, impactful outcomes.

Let’s review what we’ve accomplished so far:

  1. Defined a strategic hypothesis, the large bet on how we’ll add value with data science and machine learning.
  2. Defined a target portfolio that aligns with our organization’s risk appetite, accounts for our process and tech constraints, and focuses our team on the problems we can’t buy our way through.
  3. Filtered our use cases based on data access and how they’ll drive value.
  4. Possibly, developed reporting structures and project-sourcing methods that support our data scientists and focus their talents on their unique advantages.

More plainly, we’ve laid out the criteria for finding our right use cases, and filtered our use case opportunities to find the first right set.

The next things to do are:

  1. Step back and look at everything together. Viewed as a whole, is it sensible?
  2. Communicate this strategy, and the initial plan that emerged from it.
  3. Communicate how would-be stakeholders can engage your functional team.
  4. Iterate: Revisit your strategy whenever the assumptions or circumstances that led to it change, and commit to a cadence for reviewing how circumstances have changed.

To conclude, this process is a sobering amount of effort. But it comes with a great reward: a clear articulation of the risks you want to take, how you’ll manage them, and how they’ll support your target outcomes if they pay off. A clear alignment of purpose, and the ease of keeping activities consistent with that purpose, is an incredibly empowering thing for a functional team. Deliver that, and results will follow.


