Stakeholder Management 101: An Interview with Amanda Gelb and Jen Blatz

We sat down with two fantastic experts who shared stories, tips, and tricks. Read the full interview below.

Amanda Gelb is the founder of Aha! Studio. She spent a decade in-house, mostly in tech, before branching out on her own. Amanda is a user researcher who conducts end-to-end research for clients and offers trainings and workshops for in-house teams.

Jen Blatz is a user experience researcher at BECU, one of the largest credit unions in the United States. She is also a co-founder of UX Research and Strategy, one of the biggest UX groups in the world. Jen has a YouTube channel called BlatzChatz.

Check out the full episode on Spotify and Apple Podcasts, too.

What are the most common challenges people face when managing stakeholder expectations?

Amanda: What I've seen and experienced firsthand is navigating misalignment. That could mean unclear objectives, conflicting priorities, or misunderstanding what research as a discipline can deliver. Another unfortunate perception is that user research is sometimes seen as a nice-to-have rather than a business-critical function, which can lead to stakeholders undervaluing its role.

I like to think about building alignment through a strong intake process as a way to counter that. It really starts at the beginning: prioritize and clarify those foundational questions that we're great at asking as researchers, and turn them on our stakeholders. What decisions will this research inform? What risks are we trying to mitigate? Make sure everyone's on the same page about those goals and expectations. And if they're not, then you as the researcher have that data and can be strategic about how to bring those people in, or how to speak with them to understand who you're dealing with.

Jen: I'm going to play a little bit off of Amanda and talk about the intake process. There's so much more to that than just filling in a form. One of the challenges I've found with stakeholders is they come to you saying what method they want to do. "I want you to do a survey," or "I want you to do a focus group." Those are my two least favorites!

They bring the method to you. To Amanda's point about asking "What are the risks involved in your project?", I like to ask, "What does success look like? What will you do with this data?" Questions like that get past the method, because nobody is going to say that the result they want is a completed survey. They're going to tell you what data they want and what they're going to learn.

So one of the most interesting challenges is them bringing that method to you and saying, "Just do this," rather than "This is the answer I am seeking" or "This is the information I am seeking." It's a little bit of education—getting them to think about the outcome of this research, rather than coming to you with a method that they're familiar with.

Who to involve and when: How do you map out or identify stakeholders?

Jen: When I'm joining a project or a company, I like to do a stakeholder map—understanding who is who, what their roles are, and what their responsibilities are. And I tie a RACI to that: Responsible, Accountable, Consulted, Informed. That way, I start to get a feel for the political landscape.

When you're new especially, it's difficult to do on your own. So bring your managers in, bring your teammates in, let them help you navigate those new waters. A good way to do that is to get a stakeholder map, just so you understand who's who in this game.

Amanda: Anytime I'm at a new company or even switch roles at different companies, I see it as an excuse to do this kind of research. And it is—researching your stakeholders is research. There's a method called stakeholder interviews, and that's what I do.

I might not always say, "I want to formally interview you for 45 minutes," but I absolutely have questions in advance that I want to ask folks. Like Jen said, it's brilliant to bring other trusted people, like a manager, into the process. I'll look around an organization or a line of business and say, "Who are the decision-makers here and what do they care about?"

Part of it is social listening—what are people saying on Slack? What are the metrics everyone's talking about? What's the jargon? When we're having company standups or all-hands meetings, what's being shared there in terms of language that I can start to replicate in my own research studies?

Then for the actual interviews, I'll schedule time with everyone. I don't care how senior you are—I've often interviewed C-suites of 5,000-person companies. If it feels important to understand their perspective on a particular product or on research in general, I am not shy about putting time on their calendar and being really clear about what I want to learn. One of those top questions is, "What's your relationship to research? How have you worked with research? How do you want to work with research?" You get so much data that way.

This is also amazing to do when people join your company. Maybe you've been at a company for a while, but then there's a new VP of Engineering—I am the first on their calendar. "Hey, welcome to the company! I'm the researcher around here. Let me chat with you." Again, asking "What's your relationship to research? How do you want to be involved?" So many of those folks become research champions because you get to onboard them to the company.

How do you keep people involved throughout the research process?

Amanda: Going back to what we were talking about earlier, that question of what decisions the research is going to inform—find out who's going to be impacted by those decisions and who has the influence to act on those findings.

I'm at the point in my career where I'm not doing validation research just for the quick checkbox. Who is available to make changes based on our findings? Who's lined up designer time or QA time after this study? I involve those people directly—the people who are going to be working on the next steps of whatever we find. If a project is going to shape a product roadmap, then the PM and the design lead are essential, but the broader team may only need periodic updates.

I think not just about which players will be impacted by the research, but about the degree to which they should be involved. I treat everyone's time as precious, and I learned that the hard way. I always used to bring engineers into research, and a lot of them loved getting that front-row seat to customers. But then I heard from a few people that it was a huge time suck for engineers to sit in on research interviews and help with synthesis.

I'm very much a collaborative spirit, but that was a hard lesson—maybe there are more strategic touchpoints for certain disciplines. They don't have to be cut out, but maybe they're not attending every session with me.

Jen: This has been something I've struggled with in my career. Most of my roles have been more "go off and research the thing, then come back and show the results." Sometimes stakeholders want to sit in if it's an in-person lab session, but often they're just not interested in participating.

I would love to talk about how you set that precedent at the beginning. Because it can be difficult—if it's not established early, how do you then say, "You're now going to come observe research and help me with analysis"? I've really struggled with that in my work process.

Amanda: This has been the bread and butter of my practice. It hasn't been without resistance, but my framing was around co-creation. I presented the rapid research approach we took at Lyft—I don't think it's for everyone, but it was born out of me being the only researcher in an office with 80 other people who didn't know they needed research. Then they became overly excited once they saw research's impact. I was overwhelmed and said, "You're all going to help me."

As an experiential educator, I hosted Lunch and Learns, ran trainings, and created a process for others to be involved. I did it out of necessity so I could survive and accommodate all the requests. But a lovely bonus was that I had introduced this way of co-creating with everyone from individual contributors to VPs. They had all experienced what it was like to engage in research this way.

For projects outside the rapid research program, it gave me permission to make requests: "Before we meet next, I want you to come up with three things..." And people showed up because they got that training. It built not just organizational learning, but showed them how to be a partner to research. Research isn't just putting in a ticket and getting results.

It wasn't a magical experience for everyone, but it worked for the majority of folks. For those it didn't work for, I was still able to engage in conversation and meet them where they were. If it was taking up too much of their time or they weren't sure what a research question was—that was another finding! No one actually knows what a research question is. That was a mid-career mistake of mine, asking "What questions do you want answered? What are your hypotheses?" People don't know how to answer those questions—they don't have our training or experience.

How do you find the right balance between involving stakeholders and working independently?

Jen: It's about experimentation and getting a feel for the team. I'm often in more of a consultant role, so I don't have the deep context or long-term relationship to know whether Bob, my PM, doesn't want to participate or is eager to. It comes back to learning about your stakeholders and building those relationships.

Also, showing the value is key. Once I was doing interviews for a B2B product, and instead of personas, I created mini-profiles summarizing what each person said—what they value in their workday, their motivations, and pain points. I built it in a digital whiteboard, highlighting key points. I did a couple of interviews every day and drip-fed that information to the team. They loved it: "Wow, we hadn't thought of this pain point!"

If you give that information to designers early on, rather than after completing all interviews, it gets their creative juices flowing. Debriefs are also really helpful—not only to share what you learned, but also to see how different people hear different things. Everyone listens through their own professional lens. Sometimes PMs are listening for confirmation of what they expect to hear. Being there to have those discussions, instead of making conclusions on my own, can change perspectives: "Oh, I hadn't thought about that!"

How do you navigate conflict with stakeholders?

Amanda: I was once pulled into an executive's pet project. A very well-meaning executive got a big idea and sent me a Slack message (partly my fault, because I had conducted a stakeholder interview with them a few months earlier). They wanted me to pull information on a demographic we'd never researched so they could build a new thing—but I didn't know why it was a good idea to build it.

With executives, you can't just schedule a coffee chat—you don't have much of their time. I said, "Great, let me see what I can do for you. Who else should I be chatting with?" I got a few more people to speak to, and I was curious: does this executive have buy-in from their broader team? The answer was no—the team saw this as a waste of time, but the executive was adamant about pushing it forward.

It was gutsy, but I scheduled one-on-one conversations with all the key stakeholders the executive mentioned. I said, "This is on the record, but I want it to be no-holds-barred. We're going to talk about this initiative before we go off and build it, and I want you to tell me exactly what your fears are and what you think this gets us as a company."

I love that we get to do that as researchers. Other disciplines have power dynamics where everyone needs to look good. Research should look good too, but we have permission to be unbiased third parties asking hard questions.

I asked these tough questions and then facilitated a workshop with the executive about what success did and didn't look like for this project. It required delicate moderation and bringing up different perspectives. I went back to everyone I interviewed and said, "This is what I heard and what I'm going to share. Do you have any problems with that?" I was building trust with the executive's team as well.

The workshop ended up being contentious, but everyone knew where they stood from the beginning. The executive was able to see everyone's hesitations and started to play defense—going back to the numbers and figuring out how to make this well-meaning idea make sense.

Jen: I have a fun story about working with developers at a security company. I was the UX team of one, surrounded by engineers. I would talk to our users, hear about pain points, and bring feature requests to the engineers. I had strong relationships with them—they were like my brothers—so we could have playful yet real conversations.

When I'd ask if we could implement a feature, they'd say no. So I'd ask, "What degree of no is this? Is this a 'fuck no, this cannot be done in the universe'? Or is it a 'heck no, it could be done but it's a lot of work'? Or is it 'no, I'm just telling you no because I'm lazy'?"

Then I'd say, "My technical savviness is in the negative—you're going to have to explain to me why it won't work, as if I'm 10 years old." Sometimes, to avoid having to explain it to "dumb Jen," they'd just switch to yes! I learned a lot, but I also made them reveal the truth. You hear that you should know enough code to talk to a developer—I think knowing enough to call out nonsense is really helpful.

How do you handle situations where research doesn't provide clear answers?

Amanda: I think there are two possible paths forward. Most importantly, acknowledge it: "Hey, we wanted to find out this thing. The results were inconclusive." That happens in science, research, A/B tests—it should be okay to say out loud.

The first path is to ask: Should we invest more time to figure this out? Should we try another approach or method? Should we talk to a different target demographic or region?

The second path is to say: "I don't have time or interest in conducting more research. What do we do now?" We as researchers should be equipped to have that conversation with teams. Help them figure out whether there's something else we can try or whether we're going to go forward with our best guess—and what risks that entails. What are we possibly missing? What's the most catastrophic thing that could happen?

Sometimes that conversation leads to more research. Other times, it means no build—we're not going forward. Maybe it was someone's idea or pilot project, we did a little research, and there isn't time for more, but the risks are too big or there's too much unknown.

Some of my greatest successes were when research led to the conclusion that we should not build something or that we needed to go back to the drawing board. Those kinds of decisions are really important too.

Jen: I'll add that if we all agreed on the research plan, the questions, and what we were trying to learn from the beginning, then I'm not going to surface answers to questions we never scoped as the project goes on.

One example: I'll find some data and they'll say, "Oh, did you dig deeper into that?" And I'll respond, "No, I didn't. I didn't realize that was important to you. If you'd sat in on the research with me, you could have asked me to go deeper. Now the opportunity is gone." I'm catty like that! The opportunity is gone, and we can revisit it, but I didn't know that needed to be explored deeply.

How do you prove the value of research?

Jen: I have a "golden ticket" story. I was working at a pet hospital, and the product owner told me, "We're going to development in a couple of days for a mobile app for doctors because doctors go from room to room to check on their pet patients."

When I asked what was going into the app, he listed many features. I asked, "How do you know that's what doctors need?" He said, "Oh, I just know."

I said, "Just give me a couple of hours. I'd like to confirm we're going in the right direction." I had a handful of doctors I could call, so I asked them, "What are the top two things you need the most?" They all said exactly the same things.

Because I did that quick research, we eliminated probably 75-80% of the features. We finished the app six weeks early and way under budget. That's the huge impact research can have in re-scoping a product to be completed faster and cheaper.

Amanda: To sum up Jen's story, my one-liner is: "I collect data to help people make decisions." That's it—insights in, action out.

What are three key tips on stakeholder management for first researchers, solo researchers, or researchers on small teams?

Amanda:

  1. Prioritize ruthlessly. As a solo researcher or a small team, you cannot say yes to every request. Figure out which high-impact projects align with your organization's goals. Sometimes that's doing executional research, like usability tests, to show how research influenced a product directly. It's not always a big, lofty, strategic thing.

  2. Create a simple, scalable framework. What repeatable processes can you put into place and then plug and play? This way, you're not developing the process under tight deadlines while also handling all these requests.

  3. Build relationships beyond the actual project or product. Think about how to build rapport outside of context-specific projects—grab coffee with a key stakeholder, participate in cross-functional meetings. You want to be seen as a trusted partner rather than just a service provider. People are more likely to collaborate with you when you show up to the table consistently.
