
Lotus Technologies

The Minimum Viable Product has been a game-changing concept in the start-up world. For most of technology’s history, companies simply mimicked the approach of large corporations when deciding product development and go-to-market strategies. But books like The Lean Startup and The Startup Owner’s Manual introduced a simple but powerful alternative: start by building a product with the minimum functionality required to test it in the market. In this approach, the focus isn’t on the product itself but on the feedback you get from potential users. From there, you run a cycle of building a feature, gathering feedback from users, and iterating on that feedback. The idea is simple and powerful: you spend less time building features people don’t want, and you learn about the market far more quickly than if you launched a full-bodied product and hoped for the best.

But in recent years, people have abused this concept, creating a glut of MVPs that float around in a graveyard of bad projects in the app stores.

Here are three common mistakes we see far too often.

Not Caring About UX/UI

It’s easy to assume you shouldn’t care much about the appearance of your product, since an MVP’s principal job is to prove market viability. However, far too many people take this to an extreme, shipping apps that not even the entrepreneurs behind them would use.

Part of the problem is that, in the past, you could get away with bad UX/UI.

During the first wave of apps and Web 2.0 products, looks didn’t matter as much as the value proposition; everything was new. But things have since changed. Users have expectations, and if you want to build something people will adopt, you have to consider what they’d be comfortable adopting.

Consider UX/UI a part of the value proposition. Invest in the patterns, aesthetics, and user flows that make your app impactful. You will see the return on it.

Bad Metrics For Success

Another start-up killer is metrics that don’t make sense.

For example, when a company decides, “In X amount of time, we’re going to get to Y number of users.”

It seems like a simple enough metric. The problem is that it leaves too much room for an unreliable result.

For example, you could simply spend a lot of money on paid advertising and hit Y users. But does that mean your company has really progressed?

The Solution?

Choose metrics that genuinely indicate shifts in momentum, and keep them reasonable.

For example: “In X amount of time, we will have Y daily active users.”

(An even better example: “In X amount of time, we will have Y users who signed up via organic search.”)

Iterating Too Quickly

Now, I know I talked about the beauty and power of a proper consumer-feedback loop. But a common mistake is iterating too often and too quickly.

It may seem natural to react to every user twitch. But if you don’t put sufficient time and effort into your experiments before iterating, you can easily miss out on great opportunities.

Here’s a rough example.

Imagine you’ve built a SaaS product, and after some hopeful changes to your landing page, your website gets 43 visits. But no one converts.

Your first instinct may be to ask why 43 people didn’t care enough to convert. But 43 is too small a number to call a test group. With average conversion rates floating around 3–5%, even a perfectly healthy site would only expect one or two conversions out of 43 visits, so you can’t say whether your site works at scale. Even if your site had an above-average conversion rate, you’d probably miss it if you iterated after only 43 visits.

Use test groups of a reasonable size so your data is reliable.
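To make the arithmetic concrete, here is a minimal sketch (the 4% rate and 5% threshold are illustrative assumptions, not figures from this article) showing why 43 visits tells you almost nothing: it treats each visit as an independent coin flip at the assumed conversion rate and asks how likely zero conversions would be.

```python
import math

def prob_zero_conversions(visits: int, rate: float) -> float:
    """Probability of seeing no conversions at all, assuming each
    visit converts independently with the given rate."""
    return (1 - rate) ** visits

def visits_needed(rate: float, max_zero_prob: float = 0.05) -> int:
    """Smallest number of visits at which zero conversions becomes
    unlikely (probability below max_zero_prob) for a healthy site."""
    return math.ceil(math.log(max_zero_prob) / math.log(1 - rate))

# At an assumed 4% conversion rate, a healthy site still has roughly a
# 17% chance of showing zero conversions across 43 visits.
print(prob_zero_conversions(43, 0.04))

# You would need around 74 visits before zero conversions starts to be
# meaningful evidence (under a 5% threshold).
print(visits_needed(0.04))
```

In other words, under these assumptions zero conversions at 43 visits is well within what chance alone produces, which is exactly why iterating at that point risks discarding a landing page that actually works.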

Hope You Found This Valuable.