Case Study / Research · Strategy · Evidence

Mixed Methods · Maze Research · 3 Countries · 174 Responses

Annual Leave
Experience Study

A 174-response mixed-methods research programme spanning the UK, India, and US — transforming colleague and people leader frustrations into a strategy that reshaped HR platform priorities.

Organisation
Barclays
Platform
HR Hub / Workday
Year
2026
Reach
98,000+ colleagues
Role
Research Lead

The Problem

Annual leave should be
simple. It wasn't.

Annual leave booking is one of the most fundamental HR interactions a colleague has — yet it was generating disproportionate frustration, helpdesk contacts, and manager overhead at Barclays. The experience had been built around process logic rather than human logic, and it showed.

Before we could fix anything, we needed to understand precisely what was broken, for whom, and why — across three countries with very different employment contexts, cultural expectations, and technical environments.

01
No research baseline
Decisions about the annual leave experience had been made on assumptions and stakeholder opinion rather than evidence from the people actually using it.
02
Two distinct user groups
Colleagues and people leaders had fundamentally different needs and frustrations — but the platform treated them almost identically, optimising for neither.
03
Global complexity
Annual leave rules, entitlements and cultural norms differ significantly across the UK, India and the US — yet the research needed to surface insights that could drive a single, coherent platform strategy.

The Approach

Research that can't be acted on isn't research — it's theatre. Every design decision in this study was made in service of producing insights that stakeholders could actually use.

— Research principle that shaped the programme design

The programme was designed as a mixed-methods study — combining quantitative scale with qualitative depth. The quantitative layer gave us statistical confidence; the qualitative layer gave us the why behind the numbers. Neither was sufficient alone.

174
Unmoderated Maze study
A structured unmoderated test deployed via Maze — measuring task completion, navigation patterns, comprehension, and satisfaction across both colleague and people leader paths.
Split: Separate paths for colleagues and people leaders, with role-specific tasks reflecting real annual leave scenarios in each country context.
1:1
Moderated interviews
In-depth moderated sessions with a representative sample of participants — exploring the emotional experience, workarounds, and unmet needs that quantitative data alone can't surface.
Structure: Semi-structured interview guide with scenario-based prompts, allowing participants to direct the conversation toward what mattered most to them.
🌍
Global sampling
Participants recruited across UK, India and US — stratified by role level, tenure, team size (for people leaders), and country — to ensure findings were representative rather than dominated by any single population.
Challenge: Controlling for country-specific leave rules while surfacing platform-level issues that applied universally.
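The stratified recruitment described above can be sanity-checked with a simple per-stratum count. This is an illustrative sketch only — the participant fields (`country`, `role`) and the data shown are hypothetical, not the study's actual recruitment plan:

```python
# Illustrative sketch: checking sample balance across recruitment strata
# (e.g. country x role). Field names and example data are hypothetical.
from collections import Counter

def strata_counts(participants, keys=("country", "role")):
    """Count participants in each stratum, keyed by the given fields."""
    return Counter(tuple(p[k] for k in keys) for p in participants)

participants = [
    {"country": "UK", "role": "colleague"},
    {"country": "UK", "role": "people_leader"},
    {"country": "India", "role": "colleague"},
    {"country": "US", "role": "colleague"},
]

counts = strata_counts(participants)
print(counts)  # one entry per (country, role) combination
```

A check like this, run during fieldwork, flags early when one country or role is dominating the sample — before it is too late to adjust recruitment.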
📐
Navigation label study
An embedded study testing navigation label comprehension — specifically "Managing Teams" vs "Leading Teams" — providing additional directional data to inform information architecture decisions.
Method: First-click testing embedded within the Maze study, with confidence ratings to distinguish guesses from confident choices.
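The first-click analysis above reduces to two numbers per label variant: accuracy and mean confidence. A minimal sketch of that summary, assuming hypothetical field names (`variant`, `correct`, `confidence`) rather than the real Maze export schema:

```python
# Hypothetical sketch: summarising first-click results per label variant.
# Field names and example responses are illustrative, not real study data.
from collections import defaultdict

def summarise_first_clicks(responses):
    """Return first-click accuracy and mean confidence (1-5) per variant."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "confidence": 0})
    for r in responses:
        s = stats[r["variant"]]
        s["n"] += 1
        s["correct"] += 1 if r["correct"] else 0
        s["confidence"] += r["confidence"]
    return {
        variant: {
            "accuracy": s["correct"] / s["n"],
            "mean_confidence": s["confidence"] / s["n"],
        }
        for variant, s in stats.items()
    }

responses = [
    {"variant": "Leading Teams", "correct": True, "confidence": 5},
    {"variant": "Leading Teams", "correct": True, "confidence": 4},
    {"variant": "Managing Teams", "correct": False, "confidence": 2},
    {"variant": "Managing Teams", "correct": True, "confidence": 3},
]

print(summarise_first_clicks(responses))
```

Pairing confidence with accuracy is what distinguishes a lucky guess from a confident correct choice — the distinction the embedded study was designed to surface.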

Global reach

Three countries.
One platform.

🇬🇧
United Kingdom
Largest cohort · 28 days statutory
🇮🇳
India
Distinct leave types · complex entitlements
🇺🇸
United States
PTO model · different cultural norms
174
Survey responses
3
Countries represented
2
Distinct user paths
4
Embedded studies

Key Findings

What the data
actually said.

01
Navigation was the primary failure point. The majority of task failures occurred before participants reached any functional element — they couldn't find the right section of the platform, not because the feature was broken, but because the information architecture didn't match their mental model.
02
People leaders needed a fundamentally different experience. The overlap in needs between colleagues and people leaders was far smaller than assumed. Managers needed team-level visibility, approval workflows, and conflict management — none of which were treated as primary use cases in the current design.
03
"Leading Teams" significantly outperformed "Managing Teams" as a navigation label — with higher first-click accuracy and higher confidence ratings across all three countries. This gave the information architecture team clear directional evidence for a previously contested decision.
04
India-specific leave complexity was underserved. The variety of leave types available to India-based colleagues — casual leave, privilege leave, sick leave and more — created navigation and comprehension challenges not experienced by UK or US colleagues, pointing to a need for localised content and potentially localised IA.
05
The emotional cost was higher than expected. Qualitative sessions revealed genuine anxiety around leave management — colleagues uncertain about entitlements, managers worried about making mistakes. The platform was amplifying rather than reducing stress.

Outcomes

From data to
decisions.

🗺️
Research that shaped platform strategy
Findings were presented as a stakeholder-ready deck with bento-grid visual design — translating 174 responses into a clear strategic narrative that influenced HR platform priorities for the following cycle.
🏷️
Navigation label decision resolved
"Leading Teams" was adopted as the navigation label based on clear evidence — ending a debate that had previously been driven by opinion rather than data.
👥
People leader experience prioritised
The research established a clear case for treating people leader workflows as a distinct design problem — with their own IA, task flows, and content requirements.
📊
Measurement framework established
The study became the baseline for an ongoing measurement programme — giving the team a repeatable methodology to track experience improvements over time.

Reflection

What good research
actually looks like.

The thing I'm proudest of in this study isn't the sample size or the methodology — it's that the findings changed something. Too much research gets filed. This one got presented, debated, and acted on.

That happens when you design research with the stakeholder conversation in mind from the beginning. What decision are we trying to make? What would change our minds? What format will make the findings impossible to ignore? These questions shaped every choice in the programme design.

The navigation label finding is a small but perfect example. A contested, opinion-led debate — resolved in one study. That's what evidence-based design looks like in practice.


Next case study

EasyJet Holidays Discovery

Conversational UX before chatbots existed.

Read the case study →
← Back to all work