Improving the feedback loop: A strategy for higher completion rates

📘 How I Helped Make Feedback Work Again, Not Just Exist

This all started when my manager turned to me and asked, ā€œHow can we get users to actually fill in our feedback form?ā€

The form was there. It was live inside our app. But barely anyone was using it. And when they did, the responses were rushed and often skipped key questions. That left us in the dark, building features with no clear idea whether they were helping users or just adding noise.

So I was given the task to figure it out: Make feedback useful again.

At first, it sounded like a simple UX tweak. But I quickly realized this wasn’t just about how the form looked. It was about how it fit into someone’s day. Why would a user pause what they’re doing to give feedback, unless it felt effortless or rewarding?

🧩 Problem

The feedback form wasn’t broken, but it wasn’t working.

It existed, sure. But few users completed it, and when they did, we got shallow insights. That meant we had no reliable way to validate whether our features were solving real problems. We needed feedback not just to exist, but to work, to become a part of the experience, not a chore.

So I set out to:

  • Rethink when and how we ask for feedback

  • Design an experience users would want to engage with

  • Uncover insights the team could act on, fast

This wasn’t about making a form more visible. It was about making it valuable for users, and for us.

💡 The Solution

I followed the Double Diamond process. I started with curiosity. What really drives someone to give feedback inside an app? When do they feel the urge to speak up and when do they stay silent?

To move past assumptions, I took a research-first approach, combining quantitative and qualitative methods to see both the big picture and the personal nuance behind user behavior.

My goal was simple: Make giving feedback feel less like an interruption and more like a natural, integrated part of the product experience.

🔧 Deliverables

  • Survey Results (10 responses)
  • A/B Testing (10 responses)
  • Streamlined Feedback Process: Simplified the user interface for providing feedback, making it quick and easy to engage without interrupting the user experience.
  • Contextual Feedback Prompts: Integrated feedback directly within the app, triggered after key interactions with the main feature, ensuring relevant and timely input.
  • Pitch Deck summarizing the project, key results, and recommendations
    Miro board: https://miro.com/app/board/uXjVKuarxrw=/
    Confluence: Exciting Research Insights! 🌟
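The contextual-prompt idea boils down to a small piece of trigger logic: only ask for feedback after the user has completed a key interaction, and never more than once per session. A minimal illustrative sketch in Python (the class, event names, and threshold are hypothetical, not the shipped implementation):

```python
# Illustrative sketch of a contextual feedback trigger. The names
# (FeedbackTrigger, "key_action") and the threshold are assumptions.

class FeedbackTrigger:
    """Show the feedback prompt only after enough key interactions,
    and at most once per session."""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold   # key interactions required before prompting
        self.key_actions = 0
        self.prompt_shown = False

    def record(self, event: str) -> bool:
        """Record an in-app event; return True when the prompt should appear."""
        if event == "key_action":
            self.key_actions += 1
        if not self.prompt_shown and self.key_actions >= self.threshold:
            self.prompt_shown = True
            return True
        return False


trigger = FeedbackTrigger(threshold=2)
results = [trigger.record(e) for e in ["key_action", "key_action", "key_action"]]
# The prompt fires exactly once, on the second key interaction:
# results == [False, True, False]
```

Gating the prompt this way is what makes the ask feel relevant: it arrives right after the user has done something worth commenting on, rather than interrupting them at random.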

Starting With a Clear Intention

Before jumping into design or opening Figma, I took a step back to outline the strategy and objectives. I wanted to be clear about what I was trying to achieve and why it mattered.

Objectives

  • Improve the in-app feedback loop to support faster validation

  • Enhance the design of the feedback form to increase completion and ease of use

⚔ Discover & Define

1. Learning From What’s Out There

To kick things off, I started with a literature review. I wanted to understand how others have tackled feedback systems, what worked, what didn’t, and where the gaps were. This helped me shape my approach and avoid reinventing the wheel.

I also studied how big brands designed their feedback forms. Not just for inspiration, but to learn the design principles behind forms that actually work, ones that are easy to use, respectful of the user’s time, and still gather useful data.

2. Listening First: What People Really Think

But I knew looking at other people’s solutions wasn’t enough. To truly understand the problem, I had to hear directly from users.

I wanted to know: Why do people avoid giving feedback and what would make them change their mind?

So I created a short survey focused on feedback preferences. This wasn’t about testing any design. It was about listening. I needed to understand what people feel when they’re asked for feedback, and what stops them from responding.

What This Gave Me

By combining insights from industry practices and real user voices, I could begin designing a feedback experience that’s not only effective but something users actually want to engage with.

⚔ Develop: Testing What Works

With key patterns in hand, I moved into ideation and lightweight validation. I translated what I had learned into design concepts and began prototyping solutions to test how they held up in practice.

One of the main decisions I needed to make was: What kind of feedback form would actually get completed?

So I created two different versions of the form and ran a small A/B test to see which one performed better.

A/B testing result

Out of 10 participants, the votes were split evenly between option A and option B.

This led to a group critique session with team members from other departments to help us decide which version to move forward with. After some discussion, everyone agreed: Option A felt more aligned with our goals and user expectations.

šŸ› ļø Note on implementation:
After talking with the dev team, we discovered some limitations in the Typeform API, which meant we couldn’t fully implement the custom UI I had designed. While the overall user experience and flow stayed true to the intent, the final interface had to be adapted slightly to fit Typeform’s built-in constraints.

📈 Design That Drove Results

I designed an updated feedback experience that was later implemented in two public-facing enterprise apps. While the user base was more focused than a typical consumer product, the improvements were both measurable and meaningful.

Even with a limited audience, the impact was clear and sustained:

Survey for JSM

Within the first 2 months:

  • 35.4% completion rate
  • 186 views → 48 starts → 17 submissions

After 6 months:

  • 35.6% completion rate
  • 588 views → 135 starts → 48 submissions

Autodesk for Miro

Within the first 2 months:

  • 44.4% completion rate
  • 16 views → 9 starts → 4 submissions

After 6 months:

  • 62.2% completion rate
  • 68 views → 45 starts → 28 submissions

Even though both apps serve enterprise-only audiences, the improved UX design led to:

  • Higher completion rates across both platforms

  • Faster response times (especially early on)

  • Clearer signals for product teams to act on

These results exceeded the industry average of 23.04%, showing that even in niche, high-context tools, thoughtful, user-centered improvements can have lasting impact.
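For transparency, the completion rates reported above are simply submissions divided by starts (views measure reach, not completion). A few lines of Python reproduce each figure from the funnel counts:

```python
# Completion rate = submissions / starts, to one decimal place.
def completion_rate(starts: int, submissions: int) -> float:
    """Percentage of started forms that were actually submitted."""
    return round(100 * submissions / starts, 1)

jsm_2mo  = completion_rate(48, 17)    # 35.4 — JSM survey, first 2 months
jsm_6mo  = completion_rate(135, 48)   # 35.6 — JSM survey, after 6 months
miro_2mo = completion_rate(9, 4)      # 44.4 — Autodesk for Miro, first 2 months
miro_6mo = completion_rate(45, 28)    # 62.2 — Autodesk for Miro, after 6 months
```

Measuring against starts rather than views keeps the metric honest: it isolates how well the form itself performs once a user has decided to engage.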

💬 When Research Sparks Recognition

The research gained attention across teams – executive directors, a senior lead product manager, and product managers said it guided their work and could be applied to other projects.

šŸ” Reflection

One of the biggest things I learned was that people are much more likely to answer multiple-choice questions than open-ended ones. Even though the overall completion rate was high, most users skipped the open-ended part. This matched what I found in my early research and helped confirm that keeping things quick and easy really matters.

I also learned how important it is to work closely with developers early on. Because of some limits with the Typeform tool, we couldn’t fully follow my original design. But we still made sure the experience felt smooth and easy for users. It reminded me that good UX isn’t just about how it looks, it’s also about how it works within real-world limits.

These lessons will help me design better feedback forms and user flows in future projects.

Conclusion

In the end, this wasn’t about redesigning a form. It was about rethinking the experience of giving feedback, making it timely, easy, and worth a user’s time. I didn’t just boost numbers. I built something that gave users a voice and gave our team clarity. There’s still room to refine, but now we have a strong foundation and real data to guide us forward.