Nick Brodeur
@ncale.eth
Do any founders have a good method of applying A/B testing when doing feature iteration? How do you split your test groups and collect feedback? Do you always test with people directly, or is there a smart way to implement it into the normal user experience?
3 replies
0 recast
5 reactions
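
[Editor's note: on the "implement it into the normal user experience" part of the question, one common pattern is deterministic bucketing: hash a stable user id together with an experiment name so each user always lands in the same variant, with no separate opt-in flow. A minimal sketch in TypeScript; the experiment name, user id, and 50/50 split are illustrative, not anything from this thread.]

```ts
import { createHash } from "crypto";

// Deterministically assign a user to a variant by hashing
// (experimentName + userId). The same user always gets the
// same variant, so the split lives inside the normal UX.
function assignVariant(
  experimentName: string,
  userId: string,
  variants: string[] = ["control", "treatment"],
): string {
  const digest = createHash("sha256")
    .update(`${experimentName}:${userId}`)
    .digest();
  // Use the first 4 bytes as an unsigned int and map onto variants.
  const bucket = digest.readUInt32BE(0) % variants.length;
  return variants[bucket];
}

// Usage: render whichever UI the bucket calls for, and log the
// assignment alongside the conversion event so results can be split later.
const variant = assignVariant("onboarding-copy-v2", "user-123");
```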

slobo
@slobo.eth
a/b testing is harmful sub scale. talk to customers.
1 reply
0 recast
1 reaction

Nick Brodeur
@ncale.eth
I could see over-analysis being a waste. But surely a/b testing provides better info on actual user preferences than their direct feedback, right?
1 reply
0 recast
0 reaction

Nick Brodeur
@ncale.eth
Actually lmk if I’m on track with your thought process: direction on overall features is better gathered from talking to customers, while specific UI decisions could be better settled by an empirical test. When doing product iteration, the feature decisions have the higher priority.
1 reply
0 recast
0 reaction

slobo
@slobo.eth
for context i used to manage an 8 figure paid search budget. at that scale a/b testing makes sense, but still super easy to fuck it up. you need thousands, if not 10s of thousands, of visits per variation to get comfort in what's right. you're not getting those numbers early.
1 reply
0 recast
1 reaction
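
[Editor's note: the "thousands per variation" figure follows from standard power analysis for a two-proportion test: n per arm ≈ (z_α/2 + z_β)² · (p₁(1−p₁) + p₂(1−p₂)) / (p₁ − p₂)². A rough sketch at the usual 95% confidence / 80% power; the baseline and lift numbers are made up for illustration.]

```ts
// Approximate visitors needed per variation to detect a difference
// between two conversion rates (two-proportion z-test).
// zAlpha = 1.96 (95% confidence), zBeta = 0.84 (80% power).
function sampleSizePerArm(p1: number, p2: number): number {
  const zAlpha = 1.96;
  const zBeta = 0.84;
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p1 - p2) ** 2);
}

// e.g. baseline 5% conversion, hoping to detect a lift to 6%:
console.log(sampleSizePerArm(0.05, 0.06)); // ~8,146 visits per arm
```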

slobo
@slobo.eth
also if you want more data, get something like hotjar & track individual user journeys. @kp3556.eth probably has more up to date tool suggestions; it's been a couple of years since i was in that world.
1 reply
0 recast
1 reaction
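
[Editor's note: if Hotjar-style session tooling is more than needed, tracking individual journeys can start as a timestamped event stream per user. A minimal sketch; the event names and in-memory store are placeholders, not a recommendation from the thread.]

```ts
interface JourneyEvent {
  userId: string;
  name: string;      // e.g. "viewed_pricing", "clicked_signup"
  timestamp: number; // ms since epoch
}

const events: JourneyEvent[] = []; // swap for a real store in practice

function track(userId: string, name: string): void {
  events.push({ userId, name, timestamp: Date.now() });
}

// Reconstruct one user's journey, in order.
function journeyFor(userId: string): string[] {
  return events
    .filter((e) => e.userId === userId)
    .sort((a, b) => a.timestamp - b.timestamp)
    .map((e) => e.name);
}
```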

Nick Brodeur
@ncale.eth
Heard - doesn’t make sense to make major product decisions based on the click-through of a small set of potentially random customers. Feel out the preferences from the source and decide from there.
0 reply
0 recast
0 reaction