The Many Problems with Future Predictions About UX
In April, I’m focusing my content (my YouTube video, my newsletter, Instagram, and this blog) on the broad topic of the future of UX. It’s a theme I’ve been exploring in preparation for a free talk I’ll be giving on May 8th, 2025, about where our field might be headed.
As part of that preparation, I’ve been digging into what others are writing, predicting, and saying about what comes next.
Why the extra effort? Because a lot of what’s being shared under the label “future of UX” doesn’t appear to be based on research—or even direct experience.
Instead, it often serves as a tool to grab attention, build reach, or sell something. Especially on LinkedIn. This contributes to a broader issue in our field: treating predictions as surface-level engagement material, rather than the directional signals they really are.
Why Future Predictions Deserve Higher Standards
Speculating about the future is natural. Anyone can do it. But in a professional context, predictions do more than express opinions.
They shape priorities.
They influence what people choose to study, what skills they develop, and which parts of the field they decide to move away from.
When someone claims, “AI will take over UX,” or “Tool X will be irrelevant in five years,” they’re not simply reflecting; they’re implying what others should do today.
These kinds of statements often get attention because they offer certainty in a field that currently has very little of it. But that’s also what makes them potentially harmful. If we don’t treat future predictions with the weight they deserve, we risk misleading the people who trust our expertise.
What’s Going Wrong in the Discourse?
During my research, I noticed something interesting. On platforms like Reddit and Medium, discussions about the future of UX were often thoughtful and well-argued. People questioned one another and themselves, added context, and acknowledged complexity. But on LinkedIn, the tone was different: less careful, more confident, and in many cases, less grounded.
Here are the patterns that stood out most:
1. Emotion Over Evidence
Many predictions are written in urgent, dramatic, or emotionally charged language. It’s not uncommon to come across lines like “adapt or get left behind,” or even more extreme phrasing. These kinds of statements may draw engagement, but they don’t actually inform, let alone help.
What’s missing, in most cases, is data. For instance, if a tool is “the next big thing”, how often does it show up in job listings? Are there actual examples of teams implementing it successfully? What kind of outcomes have been observed?
Without that context, a solid-sounding prediction is just an opinion, and we can’t even tell whether it’s an informed one.
2. Repetition Over Original Thinking
Something else I noticed: especially on LinkedIn, people bring up very similar lists of “emerging trends” in UX and UX Writing. Like, very similar. These lists include trends such as “Accessibility & Inclusion”, “Voice UI”, “Emotional Content Design”, and “Localization”.
Weird, I thought.
Initially, these posts seemed to be just… consensus. But many of these trends don’t align with what I see in my day-to-day work. Others aren’t trends at all, but essential quality standards, like inclusion.
That raised a question: where are these predictions really coming from?
Simple:
when I asked ChatGPT for emerging trends in our field, I got back nearly identical answers to the ones I’d seen in those posts. And after going back and forth with GPT about it, I learned that the newest data it had used to surface these trends was from 2023.
To put it more plainly: it’s not that ChatGPT learned from our predictions and therefore names the same trends. It’s that people claim to know the truth about the future of their field when they’ve actually just asked ChatGPT and haven’t even questioned its answers.
This doesn’t mean AI can’t support idea generation. But if a prediction comes straight from a language model, it should be treated as a starting point, not published as the product of professional expertise.
3. Authority Without Proximity
Another issue: many high-visibility predictions are coming from people who are no longer involved in day-to-day UX work—or never were.
To be clear, this isn’t to say that only practitioners should make predictions about our field. If you’ve moved into a role that no longer involves hands-on work, you might see things from a strategic, organizational, or even educational level, and that perspective can be incredibly valuable. But it also means your insights should be framed accordingly.
As someone who has actively worked as a UX Writer for almost 8 years, I can say that most trends discussed as hot topics—for example, on LinkedIn and in the media—have not yet shown up in corporate or startup reality.
4. Visibility Over Responsibility
Predictions that are loud and definitive tend to perform well on platforms like LinkedIn. But the ones that perform well aren’t always the ones that serve the field or community.
As soon as someone positions themselves as a thought leader and succeeds at it, their words carry influence. Their posts shape the discourse. Other professionals value their opinions, and beginners look up to them.
That’s a big responsibility. And when predictions are shared without nuance, context, or sound reasoning, they can do more harm than good. Unfortunately, when predictions fall apart or are challenged, they’re rarely revisited or corrected. The post stays up. The impressions roll in. And for many, that’s all that matters.
5. A Narrow Focus
Perhaps the most limiting pattern I’ve seen in my research is the sheer narrowness of the discussion.
Almost all future-of-UX conversations are centered on AI. And while AI is undoubtedly a huge factor, it’s not the only one.
What about economic shifts? Political change? Education reforms? New hardware categories? The UX implications of aging populations? The needs of users in emerging markets?
All of these forces can shape our work in significant ways. Still, they’re largely missing from most predictions. Even within AI discourse, the conversation often stays shallow. Most posts offer the same single-line takeaway: “AI won’t replace us, but our jobs will change.”
That’s a fine starting point. But why not be more precise about how our jobs will change: what tasks will shift, what skills will become more important, what new ethical questions we’ll face? What should we learn more about? Without that depth, we’re repeating platitudes rather than actually preparing for the future.
How We Can Do Better
We don’t need perfect answers. No one has perfect answers about the future. But we do need a better process for asking the right questions. Here are a few things I’ve found helpful during my research:
Filter Out the Noise
With so many predictions being shared, not all of them will be useful or thoughtful. I now tend to skip posts that lead with dramatic claims like “brand voice is dead” (and, yes, this is a real claim I’ve come across). If something feels designed more to provoke than to inform, it’s probably not worth your time.
Listen to Practitioners
Look for people who are close to the work we do. If someone is making strong claims about where UX is going, it’s worth asking whether they actively work in UX today, or whether they left the craft years ago. Of course, this isn’t about excluding voices; it’s about weighting them appropriately.
Build Your Own Picture
If every prediction you see is about AI, pause and ask: what’s missing? What else might impact the field that no one’s talking about?
Dig a little deeper. Look into hiring patterns, trends in job ads, tech policy, emerging markets, or hardware innovation. Talk to peers. Ask questions. Don’t just accept the story – actively contribute to building it.
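To make that digging a bit more concrete, here’s a minimal sketch of what a simple reality check could look like, written in Python: it counts how many job listings mention a given tool. The file name job_listings.json, its structure, and the example tool list are hypothetical placeholders; you’d swap in whatever data source and tools you actually want to check.

```python
# Minimal sketch: count how many job listings mention each tool.
# Assumes a hypothetical local file "job_listings.json" containing a
# JSON array of objects, each with a "description" field.
import json
from collections import Counter

# Example tools to check; replace with whatever is being hyped right now.
TOOLS = ["Figma", "Framer", "Webflow", "Voiceflow"]

def count_tool_mentions(listings):
    """Return how many listings mention each tool at least once."""
    counts = Counter()
    for text in listings:
        lowered = text.lower()
        for tool in TOOLS:
            if tool.lower() in lowered:
                counts[tool] += 1
    return counts

if __name__ == "__main__":
    with open("job_listings.json", encoding="utf-8") as f:
        listings = [entry["description"] for entry in json.load(f)]
    for tool, n in count_tool_mentions(listings).most_common():
        print(f"{tool}: mentioned in {n} of {len(listings)} listings")
```

Even a rough count like this tells you more than a confident LinkedIn post: if a “next big thing” barely appears in actual hiring data, that’s worth knowing before you reorient your career around it.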
Choose Platforms Thoughtfully
Where a prediction is published often affects how it’s written. Medium posts, in my experience, tend to include more context and case studies. Reddit threads often show a variety of perspectives. LinkedIn posts are frequently designed for reach.
That doesn’t make any platform inherently bad. It just means we should bring different expectations to each space.
Help Shape the Conversation
If you have questions, dare to ask them out loud. If a claim doesn’t sound right, challenge it. If you’ve observed something meaningful in your work, share it with others – even if it contradicts the dominant narrative.
Keep in mind: the future of our field is not some ominous, dark wave about to roll onto the shore of our careers. Think of it more as a soup, with different chefs adding ingredients and spices. It will be served to all of us, but we also get to stand at the stove.