Difference between revisions of "Akademy/2013/UsabilityWorkshop"

=Usability workshop notes=
Bjorn started with an introduction on usability testing.

The goal of this workshop is to get input from a user about the applications being tested. The test works by putting the user in front of the application, giving them a task and letting them execute it, guided by the developer asking questions.
==Process==

The process works as follows: the user is put in front of the opening screen of the application/tool. The developer or usability expert guiding the process explains to them what they are supposed to do. Then the following protocol is executed:
# Question: What is your first impression?
# Question: What do you think you can do here?
# Question: What would you do to reach your goal?
# Question: What do you expect to happen when you do it?
# Please do it... (let the user execute the action)
# Question: Did it do what you expected?
# Question: (if not: was it better or worse?)
# If the task is not completed yet, go back to 1
 
 +
If the task is finished, asked what the user thought of it all, discuss improvements etc.
 +
 
 +
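The protocol above is essentially a loop: repeat the fixed question sequence until the user completes the task. As a purely illustrative sketch (the function name, structure, and callbacks below are my own, not part of the workshop notes), it could be written as:

```python
# Hypothetical helper for a moderator's note-taking tool, sketching the
# workshop's question loop. Not an actual tool from the workshop.
QUESTIONS = [
    "What is your first impression?",
    "What do you think you can do here?",
    "What would you do to reach your goal?",
    "What do you expect to happen when you do it?",
    "Please do it... (let the user execute a SINGLE action)",
    "Did it do what you expected?",
    "(if not: was it better or worse?)",
]

def run_session(task_done, answer, max_rounds=20):
    """Repeat the question protocol until the task is completed.

    task_done -- callable returning True once the user has finished the task
    answer    -- callable taking a question and returning the user's reply
    Returns a list of (round_number, question, reply) notes.
    """
    notes = []
    for round_no in range(1, max_rounds + 1):
        if task_done():          # step 8: stop once the task is done
            break
        for question in QUESTIONS:   # steps 1-7, in order, no skipping
            notes.append((round_no, question, answer(question)))
    return notes
```

A session then amounts to calling `run_session` with callbacks that ask the real user; the returned notes are the raw material for the discussion afterwards.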
==Notes==
* Try to stick closely to the protocol, without skipping steps. Of course, if a user has already answered a question while answering another one, there is no need to ask it again.
* Make clear that the user is ONLY to execute a SINGLE action in step 5. This means that most of the time you are checking the user's expectations and ideas instead of watching him/her click around.
* As the guide, you are not supposed to give any hints as to what button the user should click or what he/she should do, other than perhaps reminding him/her of the end goal he/she has been given.
* Despite the above point, there is of course no reason to let the user helplessly waste his/her time. If the user gets stuck, help them get unstuck.
* Yes, it can be a bit hard to stay disciplined: not giving hints or telling the user what to do, and sticking to the protocol instead of just quickly letting the user click to the end. Try to keep to the structure anyway; it works best.
* Extra tip: it is very valuable to record the sessions. Often you notice things later on that you were not aware of during the session.
 
==Results==

* We tested KScreen. It turned out to be very confusing ;-)
* Plasma Media Center was also tried out. See [[Plasma/Plasma_Media_Center/Akademy2013#PMC%20Usability%20test%20feedback|here]] for PMC feedback.
* The network manager plasmoid session was even put on video: [www.youtube.com/watch?v=FjuVND3VAoM part 1] and [www.youtube.com/watch?v=5BJ_e4jGpRA part 2]

Revision as of 20:31, 16 August 2013
