Can we finally stop launching "minimum viable products"? The fact is, users hate them.
(Steven Gottlieb / Getty Images)


In the past decade, the “lean startup” movement has had, arguably, a greater impact on the approach that product designers, engineers, and entrepreneurs take than any other. The fundamental concept is undeniably compelling: craft a version of your product (and company) that requires the least amount of time and effort to validate whether the problem you’re solving is an important one that real customers will pay for or use. This “minimum viable product” will help you learn faster, iterate faster, and survive longer on less money. It’s a powerful way to overcome many of the problems that plague (and often prematurely kill) young companies and new product efforts. But it also leads people to create a lot of crappy, barely useful products.

We’ll Learn a Lot Once the MVP’s Out

In late 2014 and early 2015, I worked with Moz’s big data and data science teams designing a minimum viable product (MVP) to help people identify websites that Google might consider to be spam. We started with a few assumptions about the field of web spam and SEO that we then validated through research and customer interviews:

  • Getting links from spam sites can potentially harm rankings and visibility in Google.
  • Knowing which sites are spam is hard because Google won’t label them (if they did, spammers could easily see what passes Google’s filters and what doesn’t).
  • If you take the time to perform searches in Google related to a website, and see that it doesn’t rank for any terms and phrases that it obviously should (e.g., if Moz.com didn’t rank on the first page in Google for “Moz” or “Moz com,” we’d know something was wrong), there’s a good chance Google’s penalized or banned that site for spam.
  • If spam does link to you, Google recommends using the “disavow tool” system in Google Search Console, but you must be incredibly cautious, because disavowing non-spam links to your site can result in massive traffic losses. (Cyrus Shepard, Moz’s head of SEO at the time, tested this by disavowing all the links to a site he owned, and it plummeted in the rankings . . . lesson learned).
  • Many of our interview subjects said that fear of Google penalties and the constant need to identify and validate spam versus nonspam links was driving them up the wall and consuming a lot of their SEO work time.
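The brand-query check in the list above can be reduced to a simple heuristic: if a site doesn't appear on the first page of results for its own brand name, something is likely wrong. Here's a minimal, runnable sketch of that logic; `first_page_urls` stands in for the first-page results from whatever rank-tracking source you use (actually fetching Google results is out of scope here, and this is my illustration, not Moz's code):

```python
from urllib.parse import urlparse

def looks_penalized(domain, first_page_urls):
    """True if `domain` is absent from the first page of results
    for a query on its own brand name (e.g., searching "Moz")."""
    # Normalize hostnames so www.moz.com and moz.com compare equal.
    hosts = {urlparse(u).netloc.removeprefix("www.") for u in first_page_urls}
    return domain.removeprefix("www.") not in hosts

# A query for "Moz" whose first page omits moz.com would trip the check:
results = ["https://www.example.com/", "https://spam.example.net/moz"]
looks_penalized("moz.com", results)  # → True
```

In practice you'd also want to check a couple of brand variants ("Moz", "Moz com") before concluding anything, since a single missing result can have innocent explanations.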

Based on these learnings (and others I’ve excluded for brevity), we decided to build a “Spam Score” into our web index. This score would help indicate the degree to which a site might be perceived as spam by Google, and thus a potentially risky place from which to get or have links.

The MVP process used a clever bit of research from our head of data science Matt Peters. Long story short, Matt and I dreamed up nearly a hundred potential factors that might be correlated with sites Google had penalized or banned. We then generated a large list of websites that didn’t rank for their own brand or domain name (indicating they had been flagged by Google) and looked at the relative connections between all those hundred factors and the penalized or banned websites. In the end, we found seventeen factors that were relatively good predictors of whether a site had drawn Google’s ire.

We called them “spam flags” and saw in our research that the more flags a website had, the more likely it was to be penalized in Google’s rankings. The flags included things like the length of the domain name (turns out spammers often have very long domain names) or the presence of many external links with very little content. Having a few flags wasn’t a particularly bad thing—most websites triggered at least two or three. But if a website triggered eight or more of the seventeen flags, it was more likely than not to be penalized.

The great part about Spam Score, at least for us at Moz, was that it required a relatively small amount of additional work (all in, about three months of effort from five people, though it spread out across almost a year due to overlapping priorities) to include in our data sets and to publish in our tools. We knew that initially, it would receive some fair and justified criticism, and we expected folks to have concerns like:

  • Moz’s web index wasn’t large enough (at the time) to cover all of the domains that may be spam, and thus couldn’t provide a comprehensive list of sites to disavow with Google.
  • The percent-risk model can be confusing. Many people would prefer a model that simply showed whether or not a domain was penalized by Google (rather than a percent-chance tied to a count of features), but we didn’t have the bandwidth to make that happen.
  • Spam flags could be misconstrued as a potential problem for one’s own website rather than a filter system for reviewing links from other spammy websites.
  • The scores of five through eleven (out of seventeen) could be particularly vexing because they indicate a higher risk of penalization, but could also be totally innocuous.
  • The flags weren’t actually the spam signals Google uses (we don’t know what those are because Google doesn’t disclose them). They’re simply well correlated with sites that have drawn penalties in the search engine.

At launch, we figured, despite these issues, our MVP would still help a lot of people and, like all good MVPs, it would help us learn more about our customers and what they wanted from a spam-identifying product long term.

But here’s the kicker: Our research had already revealed what customers wanted. They wanted a web index that included all the sites Google crawled and indexed, so it would be comprehensive enough to spot all the potential risky links. They wanted a score that would definitively say whether a site had been penalized by Google. And they wanted an easy way of knowing which of those spammy sites linked to them (or any other site on the web) so they could easily take that list and either avoid links from it or export and upload it to Google Search Console through a disavow file to prevent Google from penalizing them.

That would be an exceptional product.

But we didn’t have the focus or the bandwidth to build the exceptional product, so we launched an MVP, hoping to learn and iterate. We figured that something to help our customers and community was better than nothing.

I think that’s my biggest lesson from the many times I’ve launched MVPs over my career. Sometimes, something is better than nothing. Surprisingly often, it’s not.

Spam Score launched on March 30, 2015, and while we did receive a good bit of positive feedback, we also got a lot of criticism, confusion, and questions. The score’s design was suboptimal. The way the flags aligned to a percentage risk model wasn’t intuitive. Many users focused on the flag count for their own website rather than the flags of the incoming links to their sites. These were things we knew would happen in the design and construction phase but pushed to the back burner in favor of a faster release.

Marie Haynes, one of the world’s foremost experts in the field of web spam and Google penalty issues, left a comment in the launch blog post that summed up a lot of the sentiment around the release:

I wanted to like this tool, but I am really concerned that it could do more harm than good. Perhaps I have misunderstood its purpose. If used as an adjunct to a manual link audit, it could be helpful. But to me, it came across as an all in one solution to link problems. I think that other people are going to assume this as well.

We’d talked to Marie during the development of the metric. We knew her concerns. We knew she was massively influential in the space and that her approval and support (and others like her) were a great barometer for our success at solving the problem, but we chose to launch while we were still “embarrassed” by our first version of the product, rather than waiting until we could develop something better. Perfect is the enemy of done, right?

Six months after launch, looking at our product performance metrics, we noted that Spam Score had become mildly popular with a small group of our customers (about 5 percent of the folks who regularly used Open Site Explorer visited the Spam Score section), but it had no observable impact on free trials, on vesting rate, on retention, or on growth of the Moz Pro subscription overall. In other words, we'd probably have seen exactly the same performance in our customer base and growth rate if we'd never launched Spam Score.

Great use of (at least) $500,000 in data collection, research, and engineering time, eh? Thank god I’m the founder . . . otherwise I might have been shown the door.

Do MVPs Have to Be So Minimally Viable?

The problem with MVPs, and with the “something > nothing” model, is that if you launch to a large customer base or a broad community, you build brand association with that first version. To expect your initial users (who are often the most influential, early-adopter types you’ll attract—the same ones who’ll amplify the message about what you’ve put out to everyone else in your field) to perceive an MVP as an MVP is unrealistic.

In my experience, our customers (and potential customers) don’t see new things and think: “Oh, this must be their initial stab, and while it’s not exactly what I want or need, I can see that it’s a product I should pay attention to and help support, because eventually I can imagine it getting to the place where it really is useful and helpful to me.”

Instead, they (usually) see new things and think: “Is this interesting? Does it do what I need? Is it way better than what I already use? Is it worth the hassle of learning something new and switching away from what I’ve always done?” and if the answer to those questions is a “no,” or even “Well, maybe, but I’m not quite sure,” your product is unlikely to have substantive impact.

Worse, I’ve found that when we launch MVPs, the broad community of marketers and SEOs who follow Moz perceive our quality to be shoddy and our products to be inferior. I’ve termed this brand reputation that follows an initially incomplete, minimally viable product’s launch the “MVP hangover.” It seems to follow the product and even the broader brand around for years, long after we’ve iterated and improved to make the product truly exceptional and best-in-class.

My theory about MVPs applies differently to different stages of your organization, based mostly on reach:

For an early-stage company with little risk of brand damage and a relatively small following and low expectations, the MVP model can work wonderfully. You launch something as early as possible, you test your assumptions, you learn from your small but passionate audience, and then you iterate until you’ve got something extraordinary. Along the way, your (tiny) organization is associated with an ever-improving product, and by the time large groups of influencers and potential customers hear about you, you’re in great shape to be perceived as a leader and innovator.

Conversely, if you already have a big following with high expectations, publicly launching a traditional MVP (one that leans more to the “minimum” side of the acronym than the “viable” side) can be disastrous. If you’ve reached a certain scale (which could vary depending on the reach of your organization versus the size of your field), perception and reputation are huge parts of your current and future success. A not-up-to-par product launch can hurt that reputation in the market and be perceived as a reason to avoid your company/product by potential customers. It can carry an MVP hangover for years, even if you do improve that product. And it can even drag down perception of your historic or currently existing products by association.

Rand Fishkin is the author of Lost and Founder: A Painfully Honest Field Guide to the Startup World, from which this article is excerpted.



David N'DRI

Consultant-business developer

1y

Many people are starting to think that MVPs are often more minimal than viable, which leads to damaged customer relationships and poor product quality. As more and more products flood the market, settling for the bare minimum is no longer a way to capture the attention of potential consumers. The minimum viable product (MVP) approach is the minimal, or "lean," way of giving consumers what they want without it necessarily being a fully realized idea. Given how cloud computing works and its unprecedented capacity for testing incomplete ideas, the MVP approach has become the dominant methodology for launching ideas into the world. Still, the definition of "viable" is debatable, and rightly in the eye of the beholder, because the people who actually build software systems usually come from an engineering background where the "V" means being reliable and having as few defects as possible. They want to build a bridge that won't spontaneously collapse: what good is decorating it with a pretty floral pattern if it can't bear the weight of more than one vehicle?

Mame fatou Gueye

Incubated at FRTN Technologie | digital communication student

1y

Very interesting, but it does mean taking the time to understand your target audience and its needs. That gives you the opportunity to develop a product that not only meets their minimum requirements but also exceeds their expectations in various respects.


You have to launch with a simplified first-version prototype to test the product quickly on the market. Potential customers will engage, or not, depending on the value the product or service brings them, or on their disappointment. All of those remarks and observations are welcome input for improving the solution. This process of building a minimum viable product (MVP) is unavoidable for anyone entering the market.

Adolphe FOSSI TCHATAGNE

Senior Electrical Engineering Technician at GBA MALI

2y

"Can we finally stop launching 'minimum viable products'? The fact is, users hate them." The name alone can frighten entrepreneurs, who are always ready to put their MVP on the market. And yet it is a very important tool for improving the minimum product. The author's point is that the focus on the MVP lets you create a version of your product (and your company) that requires the least time and effort. It is simply a way of reminding us that we must take the time needed to be confident in our MVP before putting it on the market. It is a powerful way to overcome many of the problems that plague (and often prematurely kill) young companies and new products. It gives us tools to avoid being treated as spam when launching our product on the market. Its weakness is limiting MVP promotion to social networks.

Levi Wilson

Improving humanity by building robust social connections one Cube at a time.

4y

Hi, Mr. Fishkin. Thank you for adding more meat to the defense of this argument. I've mentioned some of these ideas to my development team and consultants, but it's difficult to break through their MVP mantra. Do you have any suggestions on how I can improve my communication with them?

