A couple of weeks ago, I wrote about defining use-cases. Y'all have since asked me some really good questions as a follow-up: how do we prioritize one use case over another, which framework works best for identifying a user persona, which prerequisites we need to tick off, and more. (I love office hours 🙂)
So, I've decided to write a 4-part series over the next 4 weeks, where I decode what goes into a successful use-case rollout. Let's start with part 1 today! 🚀
When you are designing a use case, it is only natural to start with the problems that YOU can see. This step is important: as the champion of an initiative in your organisation, you probably have a bird's-eye view of the key challenges you want to solve for your end users. However, the biggest pitfall I've seen with this approach is champions treating their understanding of end-user problems as the holy grail and rolling out a use case without any end-user validation. Instead, I recommend treating your understanding of user problems as a hypothesis (and approaching it like an experiment), then collecting data from your end users to validate and prioritise it.
One great tool for reaching your end users is… user surveys. They let you zoom in on the areas that stand out and dig deeper into them by conducting follow-up 1:1 interviews with representative users.
The next important thing to remember is that what you learn from a survey depends on how well you design it. So, in this week's digest I want to share some best practices to follow while designing user surveys:
1. Define a clear goal for the survey
Your goal is your anchor, the reference point that helps you prioritize the top questions you want to ask. Keep it clear and specific. For example, your goal could be to understand the data challenges of different users in the org.
2. Focus on using closed-ended questions
Closed-ended questions are the ones that use pre-populated answer choices for the respondent to choose from, like multiple-choice or checkbox questions. They are not only easier for respondents to answer but also give you quantitative data to use in your analysis.
3. Keep your answer choices balanced
Remember that since you are asking respondents to choose from a few options, it is crucial not to bias those options. For example, if the prompt is to rate an experience, don't just offer "great", "good", and "neither good nor bad". Also add "bad" and "extremely bad" as options. Answer choices that lean a certain way can push respondents into giving inauthentic feedback.
4. Don't ask double-barreled questions
Double-barreled questions ask for feedback on two separate things within a single question. An example: “How do you find the process of locating the right data and understanding the context behind it?” In such a situation, split it into two separate questions.
5. Don’t let your survey get too long
A user survey is a way to understand broader problems, not an opportunity for a deep dive into specifics. Keep it short and focused; long surveys lead to drop-offs and rushed answers, and you can always follow up in 1:1 interviews.
Want to start soon? Our awesome community team worked with some of our DataOps champions to create a generic user survey template that you can easily customize for your goals as you roll out Atlan!
As always, if you have any questions or want to discuss this in detail, shoot me an email. I'd be happy to help over an office hours session!
P.S.: Want to read more about user surveys as a tool to understand your users better? A detailed article is also up on the community: link.