
Prototype Testing - Time Consolidation Engine (TCE)

Design a new platform UI that users can easily navigate to identify and resolve payroll errors for clients across the business. The current UI had failed UAT many times before the Design team was brought in to assess why and determine how to fix it.

Core Team

Product Designer

User Researcher (me)

Product Owner

Tools

Figma

Miro

Teams

TL;DR

These usability tests were aimed at improving the design of payroll consolidation software being built by the business. The software had failed User Acceptance Testing (UAT) nine times, so the business pulled in the Product and User Experience teams to assess why and to get the software to MVP.

After designing a prototype and testing usability, we were able to identify over 40 user needs and requests. To prioritize these, I pulled in UX Design and the Product Lead to plot each point on a matrix based on severity and frequency, a methodology borrowed from Erika Hall's book Just Enough Research. The result was 10 recommendations for MVP and a clear path into the next phase of prototype design and usability testing.

  • Prioritized Recommendations​

    • Platform functionality

      • 6 recommendations based on what users need to be able to do inside the platform​

    • Data needs​

      • 4 recommendations for what users need to be able to do with data inside the platform. Data needs are critical to the success of the platform in improving efficiency.​

Continue reading for the full case study! 

Research Plan

Business Objective

Decrease time to resolve payroll errors to pay Associates and bill Clients accurately and on time.

 

Research Goal

Validate TCE platform UX updates to improve usability and adoption. The research will focus on selecting a bridge, uploading a payroll file, executing it, resolving errors, and sending it to AMT.

 

Methodology

Medium fidelity prototype testing

Designing the Prototype

Step 1 - Borrowing from Design Sprint methodologies

The Product Designers and I each sketched some solution ideas for redesigning the TCE platform. We then hosted a prototype planning workshop using activities that we borrowed and modified from Jake Knapp's Design Sprint methodology. 

 

Attendees
  • Design, Product, and Engineering stakeholders.

 

Activities  
  1. Art museum: Uploading our sketches and writing brief annotations for each.

  2. Speed critique: Walking through the highlights of each sketch as a team, discussing and answering questions along the way.

  3. Heat map: Each participant placed dots on any of the elements within the solution sketches that they found interesting.

  4. Supervote: Product and Design leaders were given a set number of supervote dots and voted on which solution ideas to move into prototyping.

 

Outcome
  • Product Designers got to work building a medium fidelity prototype to be tested with end users.


(Screenshot of workshop activities inside Miro)

Step 2 - Prototype demo with stakeholders

Based on feedback from the prototype planning workshop, the Product Designers set out to build a prototype in Figma, while I broke off and drafted a formal research plan. We then scheduled a platform demo with all of our stakeholders.

 

Attendees
  • Design, Product, Engineering, and Development stakeholders.

 

Activities  
  1. Design walkthrough: The Product Designers walked through each screen of the prototype, showing how it was laid out and the functionality they were proposing to add in the platform redesign.

  2. Research walkthrough: Directly inside Figma, I added stickies above each wireframe with the task we'd be asking users to complete and what we were trying to learn.  After the design walkthrough, we did a research walkthrough hitting on all of these points.

  3. Discussion: Stakeholders provided their feedback on the designs and the research objectives, adding in their questions and making suggestions around feasibility and what they wanted to learn.  

 

Outcome
  • Product Designers got to work finalizing the medium fidelity prototype for testing with end users. I got to work finalizing the research plan, building a discussion guide, and recruiting participants.

(Zoomed out image of prototype and research cards inside Figma)

(Zoomed in snippet of prototype screen with research card inside Figma)

Testing the Prototype

Remote User Testing Sessions 

We focused our testing on one site use case - running IP Bridges - and on the users who run those bridges.

Participants
  • 1 - Payroll admin

  • 5 - Payroll managers and specialists

  • Each session also included a note taker and an observer, who watched silently.

 

Session Flow  
  1. Screen share on Teams. After intros and session setup, I sent each participant a link to the prototype and had them share their screen.

  2. Think aloud while completing tasks. Users were given tasks to complete and asked to share their thoughts as they completed (or attempted to complete) the tasks.

  3. Value ratings. After certain tasks, users were asked to rate how valuable specific features inside the prototype would be for them.

(Blurred preview of the prototype; image blurred for privacy and data protection.)

Debriefing

Immediately after each user testing session, I led a debrief with the note taker and observer. Using Miro, we went task by task, noting user pains, surprises, key insights, feature and functionality requests, and whether users passed or failed each task.

(Sessions debrief board inside Miro with debrief activities panel)

Analyzing the Results

Synthesis Exercise One
Exercise

As a team, we completed the first frame's (task 1) activities together. Then we broke out into three groups of two and divided the remaining frames among us. We came back together as a bigger team, reviewed where everyone landed, and gave everyone the opportunity to weigh in on what the other groups found.

Set up
  1. Created frames for each of the tasks users were asked to complete

  2. Pulled in screenshots of the screens users encountered during the tasks

  3. Noted how many users passed with ease, passed with difficulty, or failed each task

  4. Copied over the corresponding sticky notes for each task from our interview debriefs

 

Activity One
  1. Clustered the insights, labeled them, and gave them each a brief summary

  2. Copied the insight cluster summaries, pain point stickies, and user request stickies down to section two of the board

Activity Two
  1. Sorted the stickies into columns: Problems & user needs, What went well, Requests from users

  2. Assigned a frequency tag and a severity tag to each insight (see the sketch after this list)

Activity Three 
  • Revisited the assumptions related to the user tasks/screens for this section and marked them as validated, invalidated, or needs further study
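
To make the tagging step concrete, here is a minimal Python sketch of the data shape behind Activity Two. The stickies, column names, and tag values shown are hypothetical stand-ins, not the actual study data.

```python
from dataclasses import dataclass

@dataclass
class Sticky:
    task: str       # task frame the sticky came from
    summary: str    # insight cluster summary
    column: str     # "problems & user needs", "what went well", or "requests"
    severity: str   # "low", "medium", or "high"
    frequency: str  # "low", "medium", or "high"

# Hypothetical stickies standing in for the real (confidential) study data
stickies = [
    Sticky("Task 1", "Bridge selector is hard to find", "problems & user needs", "high", "high"),
    Sticky("Task 2", "Upload confirmation is unclear", "problems & user needs", "medium", "high"),
    Sticky("Task 3", "Wants a copy of the submitted file", "requests", "medium", "medium"),
]

# Activity Two, step 1: sort the stickies into columns
by_column = {}
for s in stickies:
    by_column.setdefault(s.column, []).append(s)

for column, items in by_column.items():
    print(column, "->", [s.summary for s in items])
```

Each sticky's severity and frequency tags are what carry forward into the prioritization matrix in Synthesis Exercise Two.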

Synthesis Exercise Two
Exercise

With our shared understanding of what the user problems and user needs are, we plotted each of those stickies on a prioritization matrix.  The initial framework for this activity was borrowed from Erika Hall's book Just Enough Research.  The goal of this activity was to walk away with focused recommendations for the Product, Development, and Engineering teams. 

Set up
  • Built two 3x3 matrices

    • X-axis: Low, medium, high severity​

    • Y-axis: Low, medium, high frequency

  • Copied over the tagged problem stickies and user need stickies from each of the frames in Synthesis Exercise One.

    • Problem stickies went in matrix 1​

    • User needs stickies went in matrix 2

  • Sorted the stickies into the corresponding cells on the matrices

 

Activity One
  1. As a team, we went cell by cell through the matrix, moving stickies from left to right by relative severity

  2. We then moved those stickies up and down according to how frequently each issue or need came up

Activity Two
  1. After completing Activity One for both the problems matrix and the user needs matrix, we layered the two matrices over each other to see where all of the stickies landed

  2. We then drew a rectangle around the matrix cells that were medium to high in both frequency and severity

    1. These became the focus of our recommendations (see the sketch below)

(Final plot of the prioritization matrix in Synthesis Exercise Two)
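
To make the sorting logic concrete, here is a minimal Python sketch of the 3x3 plot and the medium-to-high rectangle. The stickies and tag values are hypothetical, not the actual study data.

```python
LEVELS = ("low", "medium", "high")

# Hypothetical tagged stickies; the real entries came from Synthesis Exercise One
stickies = [
    {"summary": "Bridge selector is hard to find", "severity": "high", "frequency": "high"},
    {"summary": "Upload confirmation is unclear", "severity": "medium", "frequency": "high"},
    {"summary": "Minor label typo", "severity": "low", "frequency": "low"},
]

# Plot each sticky into its (severity, frequency) cell of the 3x3 matrix
matrix = {(sev, freq): [] for sev in LEVELS for freq in LEVELS}
for s in stickies:
    matrix[(s["severity"], s["frequency"])].append(s["summary"])

# Activity Two's rectangle: keep only cells medium-to-high on both axes
focus = [
    summary
    for (sev, freq), cell in matrix.items()
    if sev in ("medium", "high") and freq in ("medium", "high")
    for summary in cell
]
print(focus)  # candidates for the final recommendations
```

In the actual exercise this filtering happened visually in Miro; the sketch just shows why the medium-to-high cells are the ones that became recommendations.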

Making Recommendations

Telling the full story 

Given the very tight timeline of this initiative, we decided to produce our final recommendations inside Miro. This saved time over creating an in-depth final report and made for a more relaxed atmosphere at the final readout (that is, stakeholders were more comfortable speaking up and asking questions about the findings).

Section One: Background
  • Team

  • Business Objectives

  • Research Goal

  • Methodology

  • Helpful Links

  • Pre-prototype activities

Section Two: Findings
  • Overall findings

  • Quotes

  • Validated/invalidated assumptions with supporting evidence

    • 5 assumptions were validated​

    • 8 assumptions were invalidated

    • 1 needed further study

Section Three: Recommendations
  • Prioritization matrix 

    • Used as supporting evidence. We talked through the methodology but didn't spend too much time talking through every sticky note.​

  • Recommendations​

    • Platform functionality requirements​

      • 6 recommendations based on what users need to be able to do inside the platform​

    • Data needs​

      • 4 recommendations for what users need to be able to do with data inside the platform. Data needs are critical to the success of the platform in improving efficiency.​

    • Example recommendation 

      • Recommendation 

        • Provide users with access to a copy of what they send to LPM. 

      • Observation 

        • Users want a copy of what was sent to LPM for their records because there have been issues with incomplete or incorrect data transfers when submitting to LPM in the past.

Section Four: Next Steps 
  • We came out of this initiative with a few next steps. The primary takeaways were that we needed to:

    • Align on what's next given the outcomes of this research​

    • Get approval for further research

      • Generative and evaluative​​

Outcomes

Development and Engineering leads initially agreed with the findings and recommendations. However, after weighing the full scope of the recommendations against their internal resources, Engineering, Product, and Design needed to realign on what was feasible and reallocate stories across a few sprints.

This resulted in two more research studies for the team to conduct.

  1. Prototype testing new designs based on this round of user feedback. You can read the case study here.

  2. A discovery initiative to understand the end-to-end process of getting Associates paid, including who all the players are and what tools they're using. You can read the case study here.

Professional Learnings

  1. There is such a thing as too many stickies: not every single detail needs to be brought into synthesis. This is where researchers need to be able to step back and look at everything at a macro level, then zoom back in to the micro level.

  2. Plan for prototype malfunctions. Be prepared to guide a user through a malfunction without leading them or escalating any frustration they may have.

  3. I need to set realistic time expectations, especially for synthesis exercises. Nailing learning number one about the stickies has also helped with this.
