Bloom is an iOS mobile application designed to assist with plant care management. Bloom focuses on three main challenges: keeping track of plant care requirements, keeping users positive and informed about their plants, and reducing the stress of maintaining a care routine.
Research, Design, Prototyping
iOS Mobile App
This project started in Spring 2020 in my Senior Project class for my Interactive Design degree. In my position on a four-member team, I scheduled most of the interview and testing sessions, conducted the competitive audit, and produced most of the high-fidelity prototype. If you'd like to read more about this, click View the Group Project.
In addition to the group project, I also individually studied pedagogical agents and how to integrate one into the group app as my Honors thesis, which involved several stages of research and further prototyping. In Fall 2020, I continued the solo project to refine the pedagogical agent integration through user interviews, iterative design drafts, and user testing. If you'd like to read more about my research, click View the Solo Project.
Click the phone to view the full prototype!
GROUP PROJECT: PLANT CARE MANAGEMENT APP
Katie Anne Flood
The team, which worked cohesively without a defined team leader, followed the Goal-Directed Design methodology, which kept us focused on our users' goals throughout the project. Goal-Directed Design begins with a strong foundation of research before any design decisions are made, ensuring the app appropriately helps its users achieve their goals. After completing our research, we created representations of our users to better define their goals and behaviors, which gave us more accurate constraints and priorities when defining the app's user, business, and technology requirements. From there we designed the app's prototype, tested it with users, and refined it iteratively to improve the product.
The steps of Goal-Directed Design.
Research is essential for Goal-Directed Design; a design team must have an understanding of their users, the product's constraints, and the business goals directing the design. We began our research by gathering information on plants and plant care and reviewing available information from the field to scope what topics to focus on during the interview process. We started with questions we found pertinent to our app’s concept, such as the most common plant care mistakes and the most impactful resources for plant owners. We summarized the primary sources we found and analyzed them for their usefulness. This literature review helped show the team which day-to-day aspects were theoretically most impactful for users.
We also identified six competitors across the Android and iOS platforms, which we sorted into a chart identifying their key features, system, and download count. This competitive audit helped us define what was missing in the field, as well as what users might expect of Bloom due to previous experiences with other plant care apps.
Competitive Audit Results
We then began interviewing current and prior plant owners—six in total. These interviews, which typically ranged from thirty to forty-five minutes in length, focused on what participants experienced in their plant care process: what they struggled with, what they appreciated, and what they wished they had done differently. By level of plant care knowledge, we had two beginners, one intermediate, and three experts. We gained a variety of valuable information from our interviews.
"When I first got it, I didn’t realize how much water they needed."
"Knowing which plant is good for which season is important."
"I went online to get information."
"I think having something to care for improves your energy."
"Notifications like 'water me!' would be great."
Using the data we gathered from our research and interviews, we mapped users’ behavioral variables. These included aspects such as users' goals, frustrations, activities, and attitudes. We placed these on a whiteboard and mapped them by their similarities to establish patterns (called an affinity map). Three primary spectrums emerged: socially-oriented versus introspection-oriented, skilled versus unskilled, and looking for more information versus satisfied with their current knowledge level.
To ensure we were keeping the audience and their goals in mind as we designed, we created personas, or example users based on the real users we met in our research, that represent the goals and needs of the overall audience. Goal-Directed Design prioritizes the use of personas because they provide design teams with a focus point to think about how users behave, how they think, what they want to achieve, and why.
Our primary persona (the main target of our design) became Iris, who represents the highest-priority and most common goals of our user base. A secondary persona is someone who is mostly satisfied with the primary persona’s interface but has additional needs that can be accommodated without undermining the product’s ability to serve the primary persona. Our secondary persona is Rowan, who represents goals that were less common among our participants but still needed attention to create a positive app experience for users.
MEET OUR PERSONAS
Primary Persona | Skill Level: Beginner
Iris is a 24-year-old college student who has recently moved closer to the city. By downloading this app, she’s able to receive notifications to keep her new plants alive while she handles many new responsibilities.
She wants to maintain her new plants.
She wants an enjoyable and happy apartment.
She wants to receive push notifications while she’s developing this new habit.
Based on our research, interviews, and personas, we created context scenarios—daily routines the personas would hypothetically follow—to understand how the app could meet their expectations and to prioritize the app’s information and features. Our team collaborated in multiple work sessions, basing our scenarios on the affinity mapping and personas. This helped form a set of persona expectations and design requirements for our app prototype.
Iris Collin's Context Scenario (Above)
Design Requirements Preview (Right)
Once we had our personas set with their goals and scenarios, we worked to ensure we were meeting those needs. We created a list of potential pages and the features we would see within those pages.
This brought us to the app's key path and validation scenarios—common and uncommon paths a user would take while navigating through the application. These ensured we paid attention to the flow of the app and its sequence of pages. One such validation scenario is displayed here.
This brought us to our prototype, which we completed in several stages. We began with low-fidelity prototyping on a whiteboard, sketching different ideas for the layout of each page. Each team member created designs, which we discussed and compared to ensure our different designs would function cohesively. A few samples of my low-fidelity prototyping are available below.
Following the scenario creation and low-fidelity prototyping, I moved onto medium-fidelity prototyping. Based on what we created with our low-fidelity product, I began prototyping in the software Figma. As I developed pages, I communicated regularly with my teammates to discuss changes that we should make, ensuring the design still made sense and met the users’ goals.
Once we had the basis of our prototype created, we conducted four usability tests. Goal-Directed Design holds that usability testing should come only after the product is detailed enough to give users something concrete to respond to, and that such tests are good both for identifying major problems with the interaction (like button labels, activity order, and priority) and for fine-tuning behaviors like response time to user actions.
We asked participants to complete scenarios within the app to test its functions and to gauge their feelings and responses to the process. The users reacted with primarily positive comments, but we had more to expand on. Some of our findings are below.
Social media integration isn't needed or prioritized by our users.
Users enjoy the schedule page and the plant addition process.
The profile page should be replaced by the badges page, which shows achievements users have earned.
Plants' mood feature is especially helpful to users.
Following our initial usability tests, my team and I moved onto high-fidelity prototyping. The overall wireframe and a few page samples are available below. We applied the changes that users emphasized in the usability testing and improved on other aspects of the prototype simultaneously.
Midway through the semester, our university shut down due to the COVID-19 pandemic. Our group continued to communicate online, and we completed two final usability tests remotely. Some final improvements included an increase in yellow throughout the app to vary the color scheme, layout modifications, and consistency adjustments.
THE FINAL PRODUCT
Though the switch to a fully-online platform because of COVID-19 added challenges to the team's communication and collaboration, we all put in effort to minimize its impact. I have to thank Katie, Breann, and Wendy - this could not have succeeded without my teammates. To view the app without the integration of my solo project work on pedagogical agents, click here.
Click the phone to view the full prototype!
SOLO PROJECT: INCREASING USER MOTIVATION
In addition to this group work, in Spring and Fall 2020, I individually researched pedagogical agents and how to integrate one into the group's app to increase user motivation. I did this for my Honors thesis to receive my designation of Honors Research Scholar.
Pedagogical agent: a virtual character created to facilitate learning or motivation.
The need for a pedagogical agent arose from the user interviews we conducted during the group project; a characterized representation of the app was brought up several times. One interview subject suggested a “bitmoji-like thing” to represent the user’s plant, referring to the cartoonish avatars used on platforms such as Snapchat, while another mentioned having a virtual seed grow as the user completed achievements. These comments sparked the idea of Bloom’s pedagogical agent.
Building on the work I did with the group, I added a further literature review, six more research interviews, and twenty-five design surveys. Through this process, I refined the pedagogical agent’s integration. Though I did not follow the distinct steps of Goal-Directed Design, I kept the methodology’s core priorities of research and user goals at the forefront of the process.
Since I conducted this project to achieve my Honors Research Scholar designation, research remained my core focus throughout my individual project. What would later become my applied research paper began with a detailed literature review to learn about theories in the field, common misconceptions, and gaps in research. The most essential information found through the review was that two “roles” exist within pedagogical agents: the expert and the co-learner. The expert agent serves as a mentor and possesses a higher amount of knowledge. The co-learner agent serves as a partner, starting with a low level of knowledge and gaining more as time passes. Comparing similar apps with agents also helped me identify common strengths and weaknesses of existing agents. To read more about my research, you can click here to view my applied research paper.
Research interviews added detailed information about what experiences people wanted to have with the app and how an agent could increase the effectiveness of those experiences. I interviewed seven current or prior plant owners and found similarities between their responses. All participants described plant care as a less “passive” activity, primarily because of the sentimental or nurturing emotions it evoked. This helped me know what emotional foundations to target through the agent. Additionally, all four participants who were directly asked about the presence of an agent in the Bloom app saw it as an added benefit, but each gave a different opinion on which role they would prefer. I therefore designed the app to offer a choice between the two roles.
Once it was established through those qualitative interviews that users desired a pedagogical agent, quantitative data was gathered through design surveys to ensure the agent would satisfy the largest possible audience. 25 participants completed an 11-question survey that focused on four potential agent designs through questions about aesthetics, color scheme, and emotional effectiveness. The results showed Design D as the favored design, with 45.83% of participants ranking it highest.
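A ranked-choice survey like this can be summarized by tallying each participant's top-ranked design. The sketch below illustrates the tallying step only; the response data is hypothetical, not the actual survey results.

```python
from collections import Counter

# Hypothetical first-choice rankings (illustrative, not the real survey data):
# each entry is the design a participant ranked highest.
top_choices = ["D", "A", "D", "B", "D", "C", "D", "A",
               "D", "B", "D", "C", "D", "A", "D", "B",
               "D", "C", "D", "A", "D", "B", "A", "C"]

counts = Counter(top_choices)
total = len(top_choices)

# Percentage of participants ranking each design highest
for design, n in counts.most_common():
    print(f"Design {design}: {n}/{total} ({100 * n / total:.2f}%)")
```

With this made-up data, Design D is ranked highest by 11 of 24 respondents (45.83%), matching the kind of margin reported above.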
The favored model of the agent, Design D, performed highly enough that I changed very little about the design. To satisfy the desired co-learner role, I created a second design to serve as the role’s visual: a seed to grow as the user advanced in the app. I used the traits of dark grey eyes and blushing cheeks to capture the positivity respondents preferred, while its patterning took after Design D’s leaves to look thematically similar.
The pages that the agent inhabited followed the style guidelines of Bloom to ensure the feature was well-integrated with the overall app, and Bloom’s interface avoided competitors’ common mistakes and improved on their ideas. The chat page gave more screen space to the agent to allow for higher levels of bonding between user and agent. Though the menu and function bars followed the style guide set by Bloom, the chat page was distinct from others in the app. The user’s speech bubbles took a dark background to bring more attention to the agent’s replies.
The agent’s onboarding had four essential stages: introduction, agent role selection, user focus selection, and naming. Each page’s arrow button pointed down, and clicking caused the new page to push upwards onto the screen and leave the former page behind, giving the user a sense of progression. The agent role selection page explained the two roles in audience-friendly terms. The user was given two options: Partner (“I start small and learn with you. When your plants grow, I grow too.”) and Mentor (“I start fully formed with all of my knowledge. I’m focused on being a steady presence.”) to represent the co-learner and expert roles, respectively. Subsequent pages asked about user goals (motivation, routine, etc.) and the user’s chosen name for the agent.
I walked six participants through the prototype virtually, allowing them to explore both the group app and the agent feature to understand its integration. During and following the prototype experience, I asked questions about any confusion, obstacles, likes, and expectations they had about the app. I also directly asked them about the chatbot feature for Sprout, as chatbots can evoke strong likes and dislikes from users. Their answers delivered several majority opinions:
The agent’s onboarding (introduction for the user) was clear, simple, and enjoyable.
The agent’s button location needed to move for clarity.
The participants liked the idea of the agent jumping in to offer help throughout the app, but wanted the option to enable or disable it.
The chatbot feature was received poorly by some participants, and all participants saw potential for user frustration or confusion caused by the feature.
With the results of the user interviews in mind, I revised the prototype across several weeks. I added a page onto the onboarding that gave users the option between enabling or disabling in-app agent warnings, created sample notifications for the app, and moved the agent’s feature button to the bottom-level navigation. In-app agent warnings turn the bottom-navigation button from green to yellow to signify an alert, which can then be clicked on or ignored. My biggest task was redesigning the agent interface to move away from the standard chatbot experience and towards something more functional.
The agent interface now has two tabs: Chat and Alerts. Although there were some negative opinions about the chatbot, I wanted to accommodate the users who viewed it favorably, so I kept it on the condition that I would modify it based on their feedback. Though I kept the chatbot feature, the agent would theoretically explain in its first conversation with the user that it is limited in its available responses and can reply more accurately to preset questions. If the user types a custom response, the agent provides resources based on keywords it identifies in the message. This sets expectations at a more appropriate level and encourages users to rely more on the preset replies, reducing frustration and confusion with the chatbot feature. The Alerts tab keeps a log of all prior alerts the agent tried to give, so a user who didn't want to read an alert in the moment but was interested in it later could find it in the log.
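The preset-plus-keyword-fallback behavior described above can be sketched in a few lines. The preset questions, keyword map, and resource titles below are illustrative placeholders, not Bloom's actual content.

```python
# Illustrative preset questions and canned answers (hypothetical content).
PRESET_REPLIES = {
    "How often should I water?": "Check your plant's schedule page for its watering interval.",
    "Why are the leaves yellow?": "Yellowing often means overwatering; check the plant's mood page.",
}

# Illustrative keyword-to-resource map used for free-typed messages.
KEYWORD_RESOURCES = {
    "water": "Resource: Watering basics",
    "light": "Resource: Light requirements guide",
    "repot": "Resource: When and how to repot",
}

def agent_reply(message: str) -> str:
    """Answer preset questions exactly; otherwise fall back to keyword-matched resources."""
    if message in PRESET_REPLIES:
        return PRESET_REPLIES[message]
    found = [res for kw, res in KEYWORD_RESOURCES.items() if kw in message.lower()]
    if found:
        return "I'm not sure, but these might help:\n" + "\n".join(found)
    # No keyword hit: steer the user back toward the preset questions.
    return "I can answer preset questions best. Try one from the suggestion list!"
```

A free-typed message like "my water schedule is off" would surface the watering resource, while an unrecognized message nudges the user toward the preset replies, which is the expectation-setting behavior described above.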
I showed the revised prototype to five participants. After completing the five tasks, I asked participants to complete a System Usability Scale (SUS) and select five Product Reaction Cards (PRCs).
The SUS, a quick and reliable tool for measuring the usability of a product, consists of ten questions about aspects such as the complexity, clarity, and enjoyment of the product. On a scale of 0 to 100, the five participants gave the Bloom app an average score of 91. This was well above the average of 68 and showed a positive outlook on the app.
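The SUS score follows a standard scoring formula: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the 0-40 sum is multiplied by 2.5. A short sketch, using a hypothetical response set rather than any participant's actual data:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical response set (not a real participant's data):
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0
```

Averaging the per-participant scores produced in this way gives the study-level figure reported above.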
The PRCs, originally created by Microsoft, are a list of positive and negative words that can be used to describe a product. I asked participants to choose the most accurate five words from the list of 60 cards to describe Bloom. The chart displays the results, with similar choices color-coded to highlight patterns. The most common words were "helpful/useful," "organized," "easy to use," and "convenient."
Participants' explanations of their choices showed similarities in reasoning as well. They often noted the intuitive navigation, pleasant visuals for the agents, straightforward structure, and motivational energy. Interestingly, even the word choice "dull" was framed in a positive light—the participant liked that the app was "mellow" and didn't have too many distractions.
The test results were overwhelmingly positive, and showed an overall cheerful disposition toward the pedagogical agent and its accompanying features.
THE FINAL PRODUCT
I have provided both my applied research paper (written three months into the process, before receiving user feedback) and my prototype. To view my applied research paper, click the button below. To view my prototype, click the phone.
My solo project on pedagogical agents expanded my knowledge of user motivation, motivational theory, cohesive design, and the iterative design process. With the background context of COVID-19, I adapted my project from a face-to-face process to a virtual one, giving me more experience with virtual research and user interviews/tests. I created an app and pedagogical agent that I am proud to show.
Click the phone to view the full prototype!