Should we use surveys to validate our product hypotheses?

(Ajay Mishra) #1

Hi All,

Is it advisable to use surveys to validate specific assumptions about the problem we are solving?

I have been reaching out to our target customers by email to seek their input on our basic tenets. I have received responses from 3 of the 20 people I contacted. While all of them are positive and they want to be our beta users (I was so happy to have some sort of validation from likely users), this process doesn’t seem scalable to me. Should I instead formulate the questions into a survey and send it out to a larger group of people on SurveyMonkey?

I am sorry if the answer is common knowledge, in which case please point me to the right reference to learn about this topic.


(Dane Madsen) #2

It would depend on the number of responses you think you need to form a clear picture. I have no idea what the response rate for a blind survey to an opt-in list would be, but I cannot see it being much better than the email outreach you have already done. 15% is actually a good response rate for an email.

(Leon Zucchini) #3

When I worked in a market research company, B2B customer satisfaction surveys by large west coast tech companies would get 5-10% response rate (depending on timing, topic, etc.). Response rates above 15% were pretty rare. B2C satisfaction surveys tended to get more like 2-5%.

So given they’re not yet customers, I think you can be quite happy with 3/20 (though the sample size is small). Obviously, more data is better.

Great to hear about your positive feedback! When you put those numbers in a BC, please do remember response bias, i.e. people who answered might be systematically more interested than those who didn’t.

HTH - good luck with your further testing

(Ajay Mishra) #4

Thanks, Dane! I am also struggling to define the minimum number of responses. Borrowing from what I remember from my statistics courses, I am thinking of getting at least 10 samples for each tenet. What do you suggest?
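As a rough illustration of why the "at least 10" number matters (a sketch, not advice from the thread: the normal approximation below is my own addition, and it is coarse for small samples), here is the margin of error you would get on a yes/no question at different sample sizes:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion, using the normal approximation (rough for small n).
    p=0.5 is the worst case (widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 30, 100):
    print(f"n={n:3d}  margin ~ +/-{margin_of_error(n):.0%}")
# n=10 gives roughly +/-31%, n=30 about +/-18%, n=100 about +/-10%
```

So with 10 responses per tenet, a "70% agreed" result could plausibly be anywhere from about 40% to nearly 100% — fine for qualitative signal, thin for quantitative claims.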

(Ajay Mishra) #5

Thanks for all the data in your reply, Leon! While my response rate is currently better than average, I am sure it will drop toward 5% as I send out more emails.

Could you speak to whether you found the information gathered in these surveys reliable enough to base critical business decisions on?
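The 5% figure above implies some back-of-the-envelope outreach math (my own illustration, not a method anyone in the thread prescribes): how many emails you would need to send to expect a given number of responses.

```python
import math

def emails_needed(target_responses, response_rate):
    """Emails to send so the *expected* number of responses
    reaches the target (simple expectation, no safety margin)."""
    return math.ceil(target_responses / response_rate)

print(emails_needed(10, 0.05))   # 200 emails for 10 responses at 5%
print(emails_needed(10, 0.15))   # 67 at the current 15% rate
```

In other words, at a 5% rate each tenet needing 10 responses costs roughly 200 emails — which is the scalability pressure driving the original question.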

(Dane Madsen) #6

Are you looking for only quantitative data, or is qualitative data more valuable for vetting the idea? If the latter, engaging directly with the 3 who did respond, probing their responses, and asking them for other contacts they could direct you to could be more valuable than 1,000 emails or surveys responding to queries that, unavoidably, carry your bias in how they were drafted.

(Leon Zucchini) #7

Generally yes… but I was working for a specialist MR agency gathering data for large corporates from many thousands of respondents for each study.

That said, response bias is always an issue in customer surveys (less so in panel surveys where it’s more about selection bias), but if you have time and money you can try to get around it by comparing results over time and between competitors.

(Leon Zucchini) #8

Agreed. Qualitative surveys are valuable in themselves and if you have low data volume they can be a good alternative. They will definitely give you more depth.

(Leon Zucchini) #9

Depending on what market you’re considering, you might consider using Google surveys. Last time I checked they provided reasonably cheap panel-type information, the surveys were easy to set up, and they had really nice online analysis interfaces.

(Ajay Mishra) #10

Thanks for your lovely advice! My current focus is on qualitative assessments. I actually engaged further with the respondents, but I was afraid they would consider me too pushy. Fortunately, they were responsive throughout the exchange.

(William Doom) #11

Check out the GV Research Sprint. We found this super useful; it includes a day-by-day playbook: The GV research sprint: a 4-day process for answering important startup questions

(Ajay Mishra) #12

Thanks for the reference! I read the book and loved the process. My main problem with using it for early-stage hypothesis testing, however, is how to recruit the right candidates for the interviews on Friday. Are there effective ways to do this?

(Chris Ruder) #13

I would do full length, Jobs To Be Done-style interviews with people that you think could be future customers (since it sounds like you aren’t in market yet). I would not share any details on your product or solution and would focus entirely on trying to learn what the person is using today and why it works or doesn’t work for them. Do a handful of these interviews (not surveys) and then determine if your product/service addresses any of the gaps.

If you’re not familiar with Jobs to Be Done, Google “Jobs to Be Done Bob Moesta.”

Good luck

(Ajay Mishra) #14

Thanks, Chris! I will read up on it.
BTW, I watched your pitch on Shark Tank. Congratulations on all the success you are having with Spikeball!

(Sebastien Roy) #15

Customer validation is a set of layers around building a growth sales engine.

So first off, the goal of customer surveys and interviews (assuming pre-product, ideation stage, though even later I think such a process is mandatory) is NOT to validate but rather to invalidate.

It is an epistemological question, really. If you state something like “There are people willing to pay X dollars to get problem Y solved”, you would basically have to ask everybody in the universe to settle the statement, so you can never finish proving it. If on the other hand you state “Y is not a problem”, then as soon as you find one person who thinks it is not only a major hurdle but the number-one hurdle they lose sleep over, you’re done with the proving.

Once you understand that, you can create an iterative feedback loop where you go from assumption to product to sales to assumption to product to sales etc.

Surveys, interviews, and landing pages are basically methods to hack that loop on the cheap (i.e. without spending massive amounts of $$ to build a prototype to validate).

The strongest sign of product validation is paying customers.

So now, about your question: you basically have to figure out what can eliminate wrong assumptions as cheaply and as quickly as possible.

You can check out Justin Wilcox’s stuff, it’s pretty good:

(Tanmayi Sai) #16

Hi Ajay, I’m a PM at Zynga working on launching a new product (a game) next year, so I’m at a similar stage to yours from what I understand. At this stage, when we don’t have a product or lots of users, we do long qualitative interviews with our intended audience. If a product is at a stage where you have a significant number of users, then it makes sense to turn the questions into a survey and send it to a big group. A great article I read recently about finding insights is this one-

I hope this helps

(Ajay Mishra) #17

Thanks, Sebastien! I like how you flipped the script. While locating someone who loses sleep over that problem isn’t easy, finding one can truly be a game changer. We have extended our pool of prospective users by a few more but I am yet to find the kind you are talking about. We will keep looking!

(Ajay Mishra) #18

Thanks for sharing the article! I stumbled upon this article and read it last week. It’s really good in how it segments and analyzes the user feedback. We aren’t there yet, but I hope to get there soon.

(Laura Levy) #19

I like your “get out of the building” approach. You could also do remote user research with qualitative tools like Appsee (for mobile apps) or Hotjar (for web). These tools don’t require you to be face-to-face with someone. They enable you to actually watch users interact with your product/website, with user session recordings and touch heatmaps, so you see what grabs your audience’s attention, what leads to confusion, and what makes them convert.