Our Four Questions to Validate an AI/ML Startup Idea

Over the last 15 years, we’ve seen the first wave of machine learning (ML) and artificial intelligence (AI) companies go from startup to success (or shutdown). Today, the field is as exciting as it has ever been, with critical innovations like GPT-3 and DALL-E becoming household names (depending on your household, of course).


At PSL, we continue to invest in ML/AI startups like Super.AI, Reserved.ai, and Panda.ai out of our venture fund, and to co-found them in our studio (as with Recurrent and Attunely). Along the way, we have worked to codify our own learnings. PSL Principal Data Scientist Sean Robinson and Principal Engineer Adam Loving put together the four key questions we work through as we build ML/AI businesses. These questions are supplementary to the many other important questions facing a company (i.e., necessary but not sufficient).

  1. Are we commercializing at the right time?

Since the knowledge frontier is ever-changing, we believe that one of the keys to starting a successful ML/AI company lies in timing your takeoff with the wave of technology adoption and implementation. But how do you know if your timing is right? At PSL, we spend significant resources keeping an eye on the research and innovation frontier. But we’ve found that this bleeding edge, and the hype that surrounds it, isn’t always the right moment to commercialize.

Below is a chart of the “hype cycle”, an imperfect but useful plot of the social response to innovation developed by the advisory firm Gartner.



This graph is a useful starting point for our analysis of commercial readiness. At the peak of hype, the technology may not be ready; in the trough of disillusionment, you could have a unique moment to launch.

So, here’s how we’d mark it up:


It is always tempting to operate in the earliest stages of the hype curve. While possible, this can require untenable effort from a development team. At this stage, you’re more likely to find research results that don’t yield flexible implementations: often papers with brittle or obscure code and techniques that only work well in specific cases, rather than technologies ready for deployment. This isn’t always the case, but it’s common enough to approach this area with caution.

On the other hand, once you’re in the “slope of enlightenment,” you may already be too late to the game. Other companies may already have technical and data moats, and those moats compound as customers use their products. That leaves us “surfing the edge of the hype curve,” with an eye always on new developments as they become implemented and usable. The goal is to be among the front-runners who bring new tech to the “plateau of productivity,” which means keeping an eye on technology concepts on the rise and knowing the exact moment to connect new technology to business needs. Very often, that moment is when a research technology has a credible and flexible implementation that can become the heart of a larger product.

For a case study, look at this curve of Google search interest in GPT-3. 



GPT-3 is unusual in that it was first announced as a fully baked tool (though access was gated). Still, it proved a shiny object: lots of narrow demos hinted at its full power, but as usual, the devil is in the details. Now that the wave of hype has crested, it’s time to build. (Though we are probably already a few months past the sweet spot!)
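
Part of what makes this moment buildable is that the implementation is no longer a research artifact but an API call. Below is a minimal, hedged sketch using OpenAI’s original Python client; the engine name, prompt, and key are illustrative placeholders, not a recommendation.

```python
# A minimal sketch of calling GPT-3 via OpenAI's legacy Python client.
# The engine name, prompt, and API key are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Completion.create(
    engine="davinci",  # the original GPT-3 engine name
    prompt="Suggest three startup ideas that use text summarization:",
    max_tokens=100,
)
print(response.choices[0].text.strip())
```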

  2. Does the business hand the right tasks to the machine?

This is closely related to question one, but dives more specifically into whether the task you are training your model to accomplish falls within the technology’s zone of excellence. At this moment of technology adoption, we find the following question to be illuminating:

“If you had access to infinite willing-and-able workers that had a modest level of training, what would you ask them to do?” 

This question captures the most appropriate use cases for AI/ML. In particular, that can mean:

  • Minimizing time to decision: Using classification or “grading” of incoming data to dramatically shorten the time it takes a human to reach a decision or find the “needle in a haystack” (see the sketch after this list)
  • Suggesting next steps: Guiding existing human tasks by showing “likely best next steps” for existing business or legal processes 
  • Making repetitive workflows faster and cheaper: Automating manually intensive processes and improving efficiency
  • Automated monitoring: Providing near-human levels of analytical insight, but at scale, with consistently updated monitoring of large and changing data streams
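
To make the first of these patterns concrete, here is a minimal sketch of decision triage: train a classifier on items a human has already judged, then rank the incoming queue so reviewers see the likely “needles” first. The data, labels, and model choice are hypothetical placeholders, not a production recipe.

```python
# Sketch: rank an incoming queue by predicted importance so a human
# reviews the likely "needles" first. Data and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Items a human has already judged (1 = needed attention).
past_items = ["urgent: server is down", "newsletter signup",
              "refund request escalation", "weekly digest"]
past_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_items, past_labels)

# Score the new queue and surface the highest-priority items first.
incoming = ["outage reported in eu-west", "please unsubscribe me"]
scores = model.predict_proba(incoming)[:, 1]
for item, score in sorted(zip(incoming, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {item}")
```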

But on the flip side, here are the areas where AI/ML is less well-suited:

  • Replacing experts entirely. Like most tools throughout history, AI products will improve human productivity. Still, we believe the right mental model is “one person doing the work of ten,” not “a computer replacing the work of a person.”
  • Operating without expertise. AI/ML models still need to be constantly tuned, trained, and customized. There are plenty of emerging commoditized tools on the market, which will increase ease of access. But in most cases, no-knowledge-required implementations are not here yet.
  • Providing a moat all by itself. There was a distinct time when applying even basic ML classification techniques to business data was new, and data scientists (often from the research world) were comparatively rare. Rest assured that if your approach can be summarized as “apply this ML library as-is to a non-proprietary data set,” you won’t have found yourself a sustainable competitive advantage in AI.

  3. Is it a high-quality, exclusive, and improving data source?

There are three key ingredients to a great ML/AI data source: quality, exclusivity, and improvement over time.

High quality: While the data may still require restructuring or formatting to be convenient to work with, the important thing is that the source reliably represents the dynamics the ML model will operate on. This is a combination of data set size (how many customers, how many people in each customer segment, how often we can get a new image, etc.) and the presence of a usable signal. Meeting these requirements often takes a bit of exploratory work, e.g., testing whether a classifier can be made accurate on existing data.
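
As a hedged illustration of that exploratory step, one common pattern is to cross-validate a simple model against a naive baseline before committing to the data source. The feature matrix and labels below are random placeholders standing in for data you would extract yourself.

```python
# Sketch: a quick "is there usable signal?" check. X and y are random
# placeholders; in practice they come from your candidate data source.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 10)        # placeholder features
y = np.random.randint(0, 2, 200)   # placeholder labels

baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5)
model = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)

# If the model can't beat the naive baseline, the usable signal isn't there yet.
print(f"baseline: {baseline.mean():.2f}  model: {model.mean():.2f}")
```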

Exclusivity: This can come either from proprietary access (e.g., a key partnership) or from ownership, with the latter being preferable. While you might not be able to start with one, you should be able to envision a path to having one.

Improvement over time: This means that additional data adds predictive power, additional useful dimensions, or greater precision. The magic ingredient is a product where every customer relationship grows the data set.
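
One hedged way to sanity-check “improvement over time” before you have much data is a learning curve: hold the model fixed and watch whether validation accuracy is still climbing as the training set grows. Again, X and y below are placeholders for your own data.

```python
# Sketch: does more data keep helping? Print a learning curve.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X = np.random.rand(500, 10)        # placeholder features
y = np.random.randint(0, 2, 500)   # placeholder labels

sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(n_estimators=100), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

# A validation score still rising at the largest size suggests that
# additional data will keep adding predictive power.
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"n={n:4d}  validation accuracy={score:.2f}")
```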

Here’s a useful, encompassing evaluation criterion for these two related questions: “After operating for a year, will we have a large, clean dataset no one else does?”

  4. Can we actually complete the first turn of the crank?

Every now and then, we catch ourselves engaging in “perfect future” thinking. The fallacy goes something like this: “Once I have all the data, I’ll be able to create plenty of value, and surely capture some of it myself.”

But as any startup practitioner knows, the real skill is in charting a course to the perfect future, not just envisioning it.

To fight this thinking, we’ll typically ask ourselves a few questions:

  • Can we create value with less and messier data than we’d like?
  • How good is “good enough” for a customer to want the product, and can we get there with little money and little time?
  • If the AI/ML effort fails and must be replaced with a less cutting-edge solution, do we still have a business model?

Let's build

We continue to be astonished and excited by the pace of innovation in ML/AI. Want to chat about an idea? Reach out at hello@psl.com.