A/B Pricing with Allan Wille

Today’s conversation is with Allan Wille, President and CEO of Klipfolio, about pricing strategy.

Welcome, Allan! Start out by telling us a little bit about Klipfolio and your role as CEO.

Klipfolio makes real-time dashboard software for teams that want to continuously monitor the health of their business. We're fortunate to have roughly 8,000 customers globally and about 95 employees. For the most part, our customers are small and midsize businesses monitoring marketing or sales metrics.

Really, every company should be monitoring the things that matter to its business, and it should be in control of what the business is telling it about its health and performance.

The topic for the day is A/B pricing, and specifically a cool experiment you ran at Klipfolio that's seen amazing results. Do you mind going through that a little bit?

Pricing is one of the most difficult and poorly understood parts of the product mix for any company. If you did a quick search on how to price a product, you'd get a million articles, each one preaching a different idea or strategy.

So, when we initially dug into our pricing strategy, our model was a monthly cost of $X per user. With that user-based model, it always felt like we had one foot on the brake pedal, because we couldn't say, "Hey Dan, thanks for signing up. You should invite your buddies to try the product." Under user-based pricing, each of those friends would have to pay.

Now, this was 4-5 years ago, when user-based models seemed to be the standard way to price software, but like I said, it felt like we had one foot on the brake. We were in a spot where, no matter how much research we did, we couldn't answer the question of how much to charge for the product, or under which model.

Then, one day, I was chatting with an investor about pricing and they introduced me to a former General Manager at Netflix (Netflix did a ton of testing on their pricing). I gave him a call, and the advice I got was, "I can't tell you … and by the way, nobody can tell you."

That didn't help me much, but what did help was when he started talking about building a testing harness and framework. It does take some set-up time and initial investment, but having a solid A/B framework and the ability to test is absolutely worth it.

So that's how we started down this road. We’re a company that values being able to test things, so this resonated with us strongly.

Looking at it from a technical standpoint, how was this implemented?

At its core, it's a cookie-based system. As soon as you hit our website you get a cookie that says "X, Y, and Z," and that cookie gets passed over into our app. That way, each user sees consistent pricing every time they view our pricing page.

Behind the scenes, a few more things happen beyond the cookie-based system. We’re integrated with Salesforce so our reps can see which price/pricing model each prospect saw. So, generally speaking, our reps check Salesforce before reaching out to clients so they are prepared going into the conversation.

Then there's our live chat, which is very active, but reps there don't have time to check Salesforce beforehand, so we focus a lot on training so they know the right questions to ask. It's worked quite well as we've moved through different pricing tests.
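
To make the mechanics concrete, here is a minimal sketch of how a cookie-based pricing cohort might be assigned so a visitor always sees the same price, both on the website and inside the app. The function names, cohort labels, and visitor ID format are illustrative assumptions, not Klipfolio's actual implementation.

```python
# Illustrative only: a hash-based cohort assignment like the cookie system
# described above. Cohort names and the visitor ID format are assumptions.
import hashlib

PRICING_COHORTS = ["pricing_a", "pricing_b"]  # e.g. two prices, or two models

def assign_cohort(visitor_id: str) -> str:
    """Deterministically map a visitor to a pricing cohort so they see
    the same pricing page on every visit."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return PRICING_COHORTS[int(digest, 16) % len(PRICING_COHORTS)]

def pricing_cookie(visitor_id: str) -> dict:
    """Value stored in the pricing cookie and passed through to the app
    (and on to the CRM), so reps can see which price a prospect saw."""
    return {"visitor_id": visitor_id, "cohort": assign_cohort(visitor_id)}

if __name__ == "__main__":
    print(pricing_cookie("visitor-12345"))
```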

What kind of variability did you have from price to price? Are we talking a couple dollars different or a significant difference?

The thing to remember here is that not only were we testing the price to charge, but sometimes the entire pricing model as well.

The first test we ran is a good example of that: we tested the hypothesis that a resource-based pricing model would drive better customer behavior than the existing user-based model. Any time you're testing behavior, it has to be a longer-term test.

If it’s just price that you’re testing, it’s a little easier. The main thing to look at is conversion rates for cohort A who saw price A and compare that with cohort B who saw price B. If you have enough volume, you can see very quickly what your answer is with a great deal of accuracy.

Keep in mind, anytime you’re A/B testing anything, you want to keep the A/B test exactly the same other than the one thing you're testing.
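
To illustrate the "enough volume" point, a simple way to compare the two cohorts' conversion rates is a standard two-proportion z-test. This is a generic statistical sketch with made-up numbers, not Klipfolio's actual analysis.

```python
# Generic two-proportion z-test on made-up numbers: did cohort B convert
# at a different rate than cohort A?
from math import sqrt
from statistics import NormalDist

def conversion_difference(conv_a, total_a, conv_b, total_b):
    """Return both conversion rates and a two-sided p-value for the gap."""
    rate_a, rate_b = conv_a / total_a, conv_b / total_b
    pooled = (conv_a + conv_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_a, rate_b, p_value

if __name__ == "__main__":
    rate_a, rate_b, p = conversion_difference(conv_a=120, total_a=4000,
                                              conv_b=156, total_b=4000)
    print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  p-value: {p:.3f}")
```

With thousands of visitors per cohort, even a modest gap in conversion rate produces a small p-value, which is why high traffic lets a pure price test close quickly.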

How did that first test go? What were the main things you were looking at to see if the new resource-based model was better than the existing model?

The first thing we looked at was conversion rates – is the new model better or worse than the existing model at converting customers? Then over the longer term, it’s looking at how those converted accounts behave. What’s engagement like? What’s the retention rate? How is the behavior different between the two models and which one is better?

At Klipfolio, we have a lot of lead volume and website activity, so we can run tests quite quickly with enough volume to have statistical accuracy. We ran the test for two months where we measured conversion, then closed the test, but kept track of the cohorts to see how they behaved after they converted.
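
Once the test closes, the longer-term comparison is essentially cohort bookkeeping: of the accounts that converted under each pricing model, how many are still active later on? A rough sketch, with hypothetical fields and data:

```python
# Hypothetical bookkeeping: retention rate per pricing cohort for accounts
# that converted during the test window (fields and data are made up).
from collections import defaultdict

def retention_by_cohort(accounts):
    """accounts: iterable of (cohort, still_active) pairs."""
    converted = defaultdict(int)
    retained = defaultdict(int)
    for cohort, still_active in accounts:
        converted[cohort] += 1
        retained[cohort] += int(still_active)
    return {cohort: retained[cohort] / converted[cohort] for cohort in converted}

if __name__ == "__main__":
    sample = [("user_based", True), ("user_based", False),
              ("resource_based", True), ("resource_based", True)]
    print(retention_by_cohort(sample))  # e.g. {'user_based': 0.5, 'resource_based': 1.0}
```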

So after you ran the initial test, did you end up switching pricing models?

The interesting thing is that there wasn't much change in the conversion rates, but what we did see was healthier long-term customers, shown by higher retention rates.

Once we had the winner, we switched all of the customer traffic to that new pricing page and have been on the resource-based pricing model ever since.

How much experimentation have you done to find that optimal price under that resource-based pricing model?

We've tested increasing the price (nothing dramatic), but it didn't get the results we were hoping for. There was a big drop in conversion rates, a surprisingly big drop. So, obviously, we didn't proceed with that.

We've got a test going right now with a slightly lower price. With this one, we want to see whether the additional converted prospects make up for the drop in price. We don't have the full results on that yet, but it will be interesting to see.
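
The underlying arithmetic is a break-even question: revenue per visitor is roughly price times conversion rate, so a lower price has to lift conversion enough to compensate. A back-of-the-envelope check, with entirely made-up numbers:

```python
# Hypothetical break-even check: how much must conversion rise at a lower
# price for revenue per visitor (price x conversion rate) to stay flat?
def breakeven_conversion_rate(current_price, current_rate, new_price):
    """Conversion rate the new, lower price must reach to match current
    revenue per visitor."""
    return current_price * current_rate / new_price

if __name__ == "__main__":
    # Made-up numbers: $100/month at a 3% trial-to-paid rate vs. an $80 test price.
    needed = breakeven_conversion_rate(current_price=100, current_rate=0.03, new_price=80)
    print(f"The $80 price needs at least a {needed:.2%} conversion rate to break even.")
```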

How do you keep track of all these different pricing models and which customers are getting charged what?

We use a tool called Zuora, which is our billing and pricing framework system.

Inside of Zuora, we have all of the legacy pricing tests that still have customers on them. We have a lot of customers on some of the original pricing plans. Then we have customers here and there who converted on certain tests where the price was lowered, and that's okay. Sure, they may be paying less than the current pricing, but the overall value we get from testing is well worth it.

You mentioned that everything runs off a cookie-based system. What happens when someone looks at pricing on their computer, then checks on their phone and sees something different?

Yeah, so this is something that can happen and has happened, but not very often. They're not necessarily going to see different prices if there's a test going on, but there is a chance that they get grouped into a different cohort.

Our strategy? Ask them which one they prefer.

Having the ability to strike up that conversation with a prospect is very valuable. Which price would make them more compelled to sign up? Going back to that first test, would they rather have a user-based price or resource-based? Once you get that answer, ask why. Ask how they’d use the product differently under each pricing model.

At Klipfolio, we’re a research-based company, and most prospects can respect that.

Now, same sort of situation, but this time it's two companies in the same region talking and figuring out they're on two different pricing models, or getting charged different prices?

It’s the same sort of thing … our strategy is to simply ask them which one they prefer and offer it to them.

When it comes to getting charged different prices, we try to get ahead of that issue. Let’s say we have a price plan that’s $100 per month and there are 100 customers that are currently on that plan. Then, we decide to test how a price point of $50 would work. If the $50 price tag proves to be optimal, what happens to those customers paying $100?

One option is that we'd continue cashing in on them until they notice, but we're not a fan of that option. We'd rather move all of our customers over to that $50 price point. It's one of those situations where, if you handle it wrong and try to take advantage, things could go very badly.

On the flip side, if we decide on a higher price moving forward, those who have current plans with a lower price will continue on with the current price as long as they stay a customer. They get that benefit for signing up early.

All these things and scenarios we’ve talked about … are they things you had a plan for going into all this or did you come up with things as you went?

A lot of it was things we learned on the go. With any business, you know, most decisions are made with 70% of the data or less. Some of the obvious ones, like what if a prospect sees two different prices, we asked early on and made a plan for.

In other cases, like training sales reps to handle the circumstances that come up, or figuring out the minimum sample size for statistical relevance, we worked things out as we went.

So, looking back before you started all this, how do you think about pricing strategy differently now?

The biggest thing I've learned through all this is that, amid all the information (or misinformation) that's out there, every business is unique, every customer base is unique, and every product is unique. Read as much as you can, and use that information to influence your thinking about pricing, but above all else, test it.

Pricing is one of those things where, unless you test it, you never know for sure. You can't really ask people, "Hey, what do you think about this price?" because they're not actually standing at the edge of the diving board, about to make the choice.

Being able to study behavior through pricing A/B tests is the only way to accurately see how people react.

Now I’d like to grab a few of your favorite resources:

A must-follow on Twitter?

Tomasz Tunguz from Redpoint Ventures is a must. The amount of data that he produces and the studies that he does are truly amazing.

Other names that come to mind are Dave Kellogg and Brad Feld.

Then the next one: a favorite blog?

I think it’s got to be those same three names.

Tomasz Tunguz - Daily Blog 

Dave Kellogg - Kellblog

Brad Feld - Feld Thoughts

A book you’re currently reading or have just finished?

I just finished a book which I was really impressed with called Lead By Greatness by David Lapin. I saw him talk in Toronto a while ago. Really interesting guy — he's a consultant and a rabbi.

What is your favorite '90s dance song?

I had to think about this one because originally I was thinking it’d be something by Pet Shop Boys, but they’re from the late ‘80s.

So, I did some more thinking and settled on Cosmic Girl by Jamiroquai.

Nice answer. Thanks a lot for the time today Allan.

Anytime.


Interested in more expert interviews? Visit the FunnelCake blog.
