Image credit: Michael Longmire
While working on our wireframes for RI2, Astrid and I briefly discussed how we would go about collecting user feedback if we were to turn the artefact into a business venture.
Initially, I proposed the idea of beta testing with a small demographic. In previous roles, I have supplemented automated tests and my own QA with feedback from the userbase, an approach corroborated by Matt Heusser of TechTarget (Heusser, 2019).
After further consideration, I realised that I was able to do this because the products in question were built bespoke for SMEs, or were B2B SaaS solutions. In both cases, the users had consented to participating in research. They had either paid for the software development directly, or had signed a contract for the software, with a clause for their involvement in UX research.
With a consumer-facing project like ours, Astrid and I would need to be particularly careful in our approach to gathering feedback.
As a thought experiment, I applied the exercise from this week’s challenge activity to our product:
In order to gain foundational feedback on our product before it enters the market, I would ask a small sample to trial the app for a period of two weeks. At the end of the trial period, I would present them with a questionnaire. This questionnaire would allow me to explore their experience of the app and uncover any pain points to be resolved before taking the app to market.
I came to the realisation that, despite the app's irrefutably wholesome USPs of helping users be more physically healthy, save money on train fares, and be more environmentally conscious with transport, there were undeniable ethical issues.
The first ethical issue was in asking participants to engage in daily exercise. As a humble UX designer, I'm not an authority on what is physically healthy. While it is reasonable to suggest that a person would typically benefit from regular activity, on a per-participant level I couldn't say with any confidence that exercising every day would actually be positive for them. Even if I had access to each participant's medical history (which would be a study-breaking ethical issue in itself), I don't have the medical knowledge to approve the exercise plan for them. As suggested in Coolican's Research Methods and Statistics in Psychology (Coolican, 2009), I would need to seek advice from professional colleagues to mitigate any potential physical discomfort.
Granted, the participants wouldn't be obligated to go on runs each day, but they may still exercise beyond a comfortable level due to demand characteristics (Orne, 1962).
Gathering the feedback with a questionnaire could also be subject to scrutiny. Leading questions could mislead me into thinking the app had achieved product-market fit: the notion that I have delivered an appropriate or valuable product to my target audience (Hotjar, 2020).
What I would consider more important than research credibility, however, is the ethical risk of leading questions. Allen (2017) states that "using questions that lead respondents can negatively affect objectivity and ethics of both the researcher and the study". A leading question could plant an idea in a participant's head that may be uninformed or wrong, becoming a source of anxiety for them.
Those were my main considerations for my imagined research. I’ve carried out product research with early adopters in the past, but now I’d like the opportunity to devise a new study with a more refined research method.
In part, I think that this new perspective has arisen from this week's content. I also think that RI2 has encouraged me to think about our project operationally, almost as a product owner. Talk of technical excellence – focusing on the craft of producing better software (Morris, 2016) – always comes up in agile environments. Better research seems to fall into the remit of operational excellence: something that, in my experience, isn't considered quite so often (Hanna, 2019).
I'll carry this attention to operational excellence forward in my studies and my career.
ALLEN, M. 2017. ‘Survey: Leading Questions’. The SAGE Encyclopedia of Communication Research Methods. Available at: https://methods.sagepub.com/reference/the-sage-encyclopedia-of-communication-research-methods/i14288.xml [accessed 22/12/2020].
COOLICAN, H. 2009. Research Methods and Statistics in Psychology. 5th edn. Oxfordshire: Routledge.
HANNA, L. 2019. ‘What it Takes to Achieve Operational Excellence’. KaiNexus. Available at: https://blog.kainexus.com/improvement-disciplines/operational-excellence/what-it-takes-to-achieve-operational-excellence [accessed 22/12/2020].
HEUSSER, M. 2019. ‘6 ways to tighten Agile feedback loops’. TechTarget. Available at: https://searchsoftwarequality.techtarget.com/tip/6-ways-to-catch-defects-in-software-tighten-feedback-loops [accessed 22/12/2020].
HOTJAR. 2020. ‘Chapter 3: how to find product/market fit’. Available at: https://www.hotjar.com/grow-your-saas-startup/product-market-fit [accessed 22/12/2020].
MORRIS, B. 2016. ‘How do you foster technical excellence in an agile development culture?’ Ben Morris. Available at: ben-morris.com/how-do-you-foster-technical-excellence-in-an-agile-development-culture [accessed 22/12/2020].
ORNE, M.T. 1962. ‘On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications’. American Psychologist, 17, 776-783.