Patient Triage Dashboard
accuRx’s mission is to build a world-leading healthcare communication platform. Established in 2016, today over 98% of GP practices use accuRx to communicate with their patients.
Async communication between healthcare providers and patients became even more relevant during lockdown. This is why, back in April 2020, we built Patient Triage: a way for patients to contact their GP practice via an online consultation.
Framing the problem
GP practices across the UK are currently facing an uphill battle when it comes to meeting patient demand. A combination of increased demand and decreased capacity has led to an overworked and overstressed workforce, as well as decreased patient satisfaction.
With Patient Triage, accuRx has made it easy for patients to get in contact with their practice, but we have not fully delivered on making it easy for practices to action these requests. One reason this isn’t easy today is because we haven’t given practices the data they need to optimise their workflows around Patient Triage, and appropriately manage their staff to meet patient demand.
Our usage report page at the time did not offer enough data or insight, showing just a list of the Patient Triage requests a practice had received in the past 90 days. Our analytics also confirmed low usage of the page, as shown in the second graph below.
I joined a cross-functional team just in time for the kick-off of this project. I focused on the design of the dashboard over the course of a week, working closely with the rest of the team to prioritise features for our MVP and to set clear expectations on what future iterations might look like.
The team had never had a full-time designer before, so I was keen to involve them in design decisions, finding a balance between advocating for design and not being too disruptive of established ways of working.
In order to understand where we could best place our efforts to give users a greater sense of control, we needed to understand which data was in highest demand. A first round of research calls had already been conducted by our Product Manager, Clinical Lead and Product Ops Manager, giving us a clear understanding of our users' struggles and needs:
1️⃣ Historical data
Practices need to know what demand they can expect at specific hours of the day/days of the week. This way, they can roughly plan their staffing to align with the anticipated demand.
2️⃣ Live data
Practice managers need to understand the team’s live capacity so that they can appropriately manage (assign and keep track of) requests.
The project kicked off with an ideation session involving people in the team and some key stakeholders. The goal of the session was to build a shared understanding of our users, their pain points and their needs, so that we could ideate together on a number of possible solutions to the problem.
Once the design was finalised, it was time to bring the team together to agree on what was necessary for our MVP. Previous conversations with our PM and Tech Lead had already revealed that our team would have little front-end capacity during the rest of the cycle, so it was crucial to commit to something we could deliver in time while still providing value to our users.
I decided to run a design critique with the others, something I used to do weekly with my previous team. The new team had never joined such a session before, so for me the most important outcome was for everyone to start feeling comfortable being involved in design decisions, and to feel they could give input.
I shared some documentation beforehand so that, if needed, people knew what to expect. The session itself was incredibly helpful, and the team agreed collectively on what to prioritise. I received useful input to tweak my design and finalise our MVP.
Once the UI was finalised, I went on holiday 😅. It was important to hand off the designs with clear expectations on what was included in the MVP, what was nice to have, and what was coming next.
Our MVP has been built and released to our users. Reusing existing components and libraries helped us work around the team's limited front-end capacity.
It was clear to us that our MVP was only a first iteration of the dashboard, so collecting data and feedback from users after release was the only way to understand how to improve the product moving forward. Jynn, the very first User Researcher to join the team, did great work embedding a survey in the product so that we could gather feedback while users were interacting with it in real life.
The results we collected were definitely positive, and consistent with what we'd expected to iterate on next (the ability to set custom time frames for comparing data, rather than just comparing the current week to the previous one).
The team didn't limit themselves to qualitative feedback: we also set clear OKRs around retention for this piece of work. Again, the data collected was definitely positive, indicating strong potential for future iterations of the product.