Yesterday, I completed all of my user studies and prototype testing. Two types of testing took place over the past two weeks:
I tested with 6 people who are my intended audience (i.e. college seniors)
I also tested with 8 people who are industry experts (i.e. people who work in education, technology, or both) during the science fair
When testing with my intended audience, here's the flow I followed:
Ask their permission to record
Interview the participant for general background info and familiarity with the topic, paying attention to key attributes (e.g. desire to learn, tech empathy, etc.)
Ask for a self-reported financial knowledge score (1-10)
Have participant do FINRA’s financial literacy test
Chat with them about their current pain points, goals, etc.
Play the prototype (introduce "Hannah," the Financial Coach by iGrad)
Give a short assessment of how much they learned and ask for prototype feedback
Thank them with a Starbucks card and send a thank-you email
When testing with industry experts, here's the flow I followed:
Ask their permission to record
Introduce my idea to them through the deck I prepared for the science fair
Play the prototype demo
Give a short assessment of how much they learned from interacting with Hannah
Ask for prototype feedback
Thank them for their inputs
A snapshot of my takeaways:
Quantitative data
In terms of financial literacy, FINRA's National Financial Capability Study puts the national average score at 53% and New York State's average at 49%. Using the same survey with my intended audience, participants scored an average of 69%, above both the national and New York State averages.
The post-assessment of prototype testing shows a 91% accuracy rate on the knowledge participants gained from interacting with Hannah. On the Likert scale, all participants found Hannah helpful; half said they would likely consult Hannah for future financial questions, while the other half were undecided.
Qualitative data
Direct feedback (Quotes)
"After the second call, talking to Hannah was a lot more natural. She seemed to give advice that could appeal emotionally instead of scientific breakdown."
"I would talk to Hannah because it is a more interactive experience than say reading the same information on a website or a book. I think what would make me not talk to Hannah is knowing that it's not a real person so she might be able to answer more specific concerns or questions but this isn't often so it should be fine."
"I feel like she's just giving me bullet pointed information that a website would give. The good thing is asking questions and getting answers"
"Speaks slowly and robotically, would prefer a messenger chat bot type thing."
"Speaking with Hannah raised points that were irrelevant for my current financial needs; however, without being able to get a thorough explanation of topics presented I would feel more comforted than less."
"Phone calls are not a common part of my daily life, I tend to avoid them. Also, the content feels more like something from a weekly email of the type format rather than a call."
"I did not know that Grad loans are 1% higher than undergraduate loans."
"The biggest takeaway was realizing that I still have so much to learn about being financially responsible and how little I currently know, like with the student loan's part."
"Budget tips were good. I did not also know that grad school has higher loan rates."
"Loan forgiveness programs vs paying off early"
"The topic of grad vs undergrad loan programs"
"There's a bit more simple final tips/tricks as in, I don't have to understand the whole system/process of loans in order to make informed budgets"
"The robotic voice actually didn't bother me. I'd feel more intimidated if it were a real human voice"
Observations
When Hannah asked a question, participants sometimes couldn't tell whether it was a question or a statement.
Participants wanted to carry on micro-conversations
Not every participant began the call with excitement; some seemed stressed
The information Hannah currently provides is not suitable for experts (i.e. people with more advanced financial literacy)
Challenges in designing the experiment & conducting tests
Prototype development
Consolidating articles and videos to create useful content for test participants
Building decision/option trees for micro-conversations to mimic natural human conversation (a minimal sketch of what such a tree might look like follows this list)
Finding a good text-to-speech program that can mimic a friendly, warm, female voice (see the TTS sketch after the challenges list)
Finding a good voice recording application that provides a "list view" of all the audio clips, to minimize time spent switching screens
Finding a good MP3 player that will NOT auto-play the next clip in a playlist
Finding a good Bluetooth speaker that plays clips in real time without any lag
Finding good Bluetooth earbuds that play clips in real time without any lag
Finding the most efficient and quiet way to click to the next audio file
Crafting pre-test and post-test surveys
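For anyone curious what those decision/option trees might look like, here is a minimal sketch in Python. This is my own illustration, not the prototype's actual code: the node names, clip filenames, and keyword triggers are all hypothetical, and in the Wizard-of-Oz setup the "lookup" happens in my head while I click the matching pre-recorded clip.

```python
# Hypothetical sketch of a micro-conversation decision/option tree.
# Each node maps keywords heard in the participant's reply to the next clip to play.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Node:
    clip: str                                               # pre-recorded clip Hannah "says" here
    options: Dict[str, str] = field(default_factory=dict)   # keyword in reply -> next node name


TREE: Dict[str, Node] = {
    "greeting":    Node("clips/greeting.mp3",
                        {"loan": "loan_intro", "budget": "budget_tips"}),
    "loan_intro":  Node("clips/loan_intro.mp3",
                        {"grad": "grad_rates", "forgive": "forgiveness"}),
    "grad_rates":  Node("clips/grad_rates.mp3"),
    "forgiveness": Node("clips/forgiveness.mp3"),
    "budget_tips": Node("clips/budget_tips.mp3"),
}


def next_node(current: str, reply: str) -> str:
    """Pick the next clip from keywords in the participant's reply,
    falling back to the greeting when nothing matches."""
    for keyword, target in TREE[current].options.items():
        if keyword in reply.lower():
            return target
    return "greeting"
```

Keeping every node tied to one short clip is what makes the quiet-clicking workflow above manageable; it also means adding new micro-conversations is mostly a matter of recording (or generating) more clips rather than improvising new dialogue live.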
Prototype testing
Recruiting test participants who are my intended audience (i.e. college seniors)
Designing incentives for participating in the tests
Booking conference rooms for testing
Scheduling testing sessions across different locations
Setting up a friendly environment for testing
Seeking permission to record
Post-assessment of the recording files
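Looping back to the text-to-speech challenge above, here is a minimal sketch assuming the Python library pyttsx3 (one of several off-the-shelf options, not necessarily the tool I settled on) for generating a clip with a slower rate and a female system voice.

```python
# Hypothetical sketch: generate one of Hannah's clips with an offline TTS engine.
# Voice names vary by operating system, so the "female" match below is best-effort.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slow the default speaking rate slightly

# Pick the first installed voice whose name hints at a female speaker, if any.
for voice in engine.getProperty("voices"):
    if "female" in voice.name.lower() or "zira" in voice.name.lower():
        engine.setProperty("voice", voice.id)
        break

# Save the clip to disk instead of speaking it live, so it can join the playlist.
engine.save_to_file("Hi, this is Hannah, your financial coach.", "clips/greeting.wav")
engine.runAndWait()
```

Even with the rate and voice tuned, system voices still tend to sound robotic, which lines up with the "speaks slowly and robotically" feedback above.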
Future development
Prototype development
Expand decision trees for users with different emotions
Add missed-call scenarios (e.g. text, voicemail, email, etc.)
Build more micro-conversations and pauses
Consider a menu for people with more advanced financial knowledge
Prototype testing
Prepare microphones for participants to use during tests, so that I don't need to be in the same room
Prepare another tripod to record the participant from a different angle
Build a phone case that the earbuds can be inserted into, to further mimic an actual phone conversation
I'm excited to share these results with Kate today and Matthew tomorrow. For the remainder of the week, I'll be working on writing my paper. I will also be attending the Academic Writing Workshop hosted by the Center for the Advancement of Teaching Program this Friday. Once the paper draft is ready, I'll start working on my thesis defense presentation, where I aim to create a video to showcase my prototype.
In my search to better understand machine learning and artificial intelligence, I also came across a well-written Medium post by Josh Lovejoy and Jess Holbrook called Human-Centered Machine Learning. The article discusses techniques to use when designing with machine learning algorithms. Interestingly, even before reading it, I had already used quite a number of the methods it mentions, such as Wizard of Oz experiments!
P.S. This week, I also scheduled my thesis defense for November 30th, 2017, and received confirmation from Elana Blinder, a Learning Experience Designer and education research expert, that she will join me as my external expert during the defense. Time to start counting down!