Artificial intelligence is everywhere in the narrative. But when you look at how it’s actually used, the reality is far more nuanced. Today, a few words are enough to prompt a machine to imagine, build, refine, deploy, and document at a speed even the most ambitious teams struggle to match.
Somewhere between the vision of an “intelligent co-creator” and the reality of a highly efficient assistant, innovation teams are still navigating both opportunity and structural limitations.
In March 2026, Yumana hosted a User Club in Paris, bringing together innovation leaders from multiple organizations around a central question:
How is AI concretely enhancing the performance of Innovation teams?

The AI use cases that truly drive innovation performance
Beyond isolated use cases, we wanted to stay grounded in reality. During the Yumana User Club, we asked a simple question: where does AI actually make a difference for innovation teams?
The answers reveal a contrasting landscape. Some use cases are already becoming standard, while others are still struggling to gain traction.

When asked where AI truly boosts innovation performance, participants quickly aligned. The conclusion is clear: value is primarily created upstream in the innovation cycle, where ideas emerge and become structured.
Scenario and persona generation leads by far (86%), confirming that AI excels at exploring possibilities and expanding thinking. Close behind, structuring ideas into actionable concepts (71%) and brainstorming (64%) reinforce this trend: whenever it comes to turning intuition into tangible material, AI becomes a key enabler.
But the most interesting insights lie in the contrasts.
More critical, decision-oriented use cases, such as market appetite testing (14%) or prototyping (36%), remain significantly less adopted. The same applies to business needs analysis (14%), despite its central role in many innovation processes.
On the other hand, some intermediate use cases stand out, such as user feedback analysis (64%) and, more surprisingly, weak signal analysis (57%), despite acknowledged limitations.
These insights lead to a clear first conclusion: in 2026, AI is a powerful engine for exploration and structuring, but still falls short when it comes to validation and decision-making. As stakes become more critical, trust begins to erode.
To better understand these dynamics, here is what participants shared across each stage of the innovation process.
AI and weak signals: a promise yet to be fulfilled
Among all the topics discussed, weak signals generated both the most excitement and the most frustration. In theory, AI promises to help organizations anticipate trends, detect emerging patterns, and uncover what is not yet visible. In practice, the reality is more complex.
LLM-based models are extremely effective at processing existing information: analyzing large datasets, structuring fragmented inputs, and producing clear, actionable insights. But when it comes to identifying truly novel signals, performance drops significantly, and for structural reasons.
These models are trained on historical data, largely sourced from the web. As a result, they are fundamentally pattern-recognition systems. They excel at highlighting what is already somewhat visible but difficult to access, yet struggle to uncover what does not yet exist in the data.
This gap reframes their role: AI is not an autonomous weak-signal detector, but rather an analytical accelerator that enhances human intuition, which remains essential today.
Brainstorming: AI as a creativity copilot
If there is one area where AI already delivers tangible, immediate value, it is creativity. Far from acting as an autonomous co-creator, it is increasingly positioned as a copilot, capable of turning vague discussions into structured, actionable outputs.
In brainstorming sessions, its role is clear: structure and formalize where ideas remain unshaped. It captures conversations, organizes them in real time, generates summaries, and helps teams overcome the blank-page effect.
The result is both more fluid and more productive sessions.
Some organizations are already taking this further by integrating AI as a full participant. It no longer just observes or reformulates: it contributes, challenges assumptions, and opens new directions.
However, this shift raises a key question: where should the line be drawn? Can artificial intelligence truly be considered a creative intelligence?
The risk is now cognitive. Relying on the same models, sources, and underlying logic can lead to homogenized thinking and biases.
The challenge is to integrate AI into brainstorming without diluting what makes innovation valuable in the first place.
From idea to concept: where AI becomes a true accelerator
Once ideation is complete, AI’s impact becomes even more tangible. Where discussions often produce fragmented insights, it acts as a structuring engine, turning raw ideas into near-actionable concepts.
In just a few iterations, AI helps clarify problems, reduce complexity, and lay the groundwork for prioritization. This transition is critical: it marks the shift from intuition to execution.
This is where its value becomes most evident. However, this capability has its limits. As soon as projects move beyond generic contexts into highly constrained environments (such as industry, engineering, or complex systems), the credibility of outputs declines. AI still struggles to fully integrate real-world constraints and technical subtleties.
The limitation becomes clear. AI structures quickly, but does not always deeply understand. It helps formalize thinking but does not replace expertise or judgment.
In these contexts, its value lies not in making decisions, but in preparing them.
Can AI reliably test a market?
AI appears ideal for quickly testing market ideas: generating personas, simulating behaviors, exploring usage scenarios, all without fieldwork or significant investment. For innovation teams seeking speed and early signals, this is highly attractive.
In practice, these use cases remain exploratory and widely debated.
Persona and scenario generation is now common. It helps structure hypotheses and expand thinking upstream. But these outputs are rarely considered reliable on their own. The reason is simple: they lack grounding in real-world data.
Generated personas are often coherent, but overly generic and disconnected from actual market dynamics. They create an illusion of understanding without delivering true validation. Moreover, they inherit biases from their underlying datasets.
In other words, they help teams think but not yet decide with certainty.
Some organizations are beginning to move beyond this stage.
They are experimenting with more advanced approaches, leveraging orchestrated agent systems that combine multiple areas of expertise (UX, product marketing, etc.) to deliver richer, more contextualized insights.
These initiatives are still emerging, but they point to a key evolution: moving from standalone generative AI to structured, decision-support systems. The real challenge is preparing what must ultimately be validated in the field.
Governance and adoption: the real challenge
As use cases multiply, one reality becomes clear: while AI is evolving rapidly, organizational structures are not keeping pace.
In practice, teams are testing, experimenting, sometimes hacking their way forward. They unlock quick wins and boost productivity. But friction comes fast: how do you turn these scattered uses into something coherent, scalable, and under control?
The same barriers consistently appear:
- Unclear prioritization: too many initiatives, with limited visibility on those that truly create value
- Scaling challenges: effective local use cases that are difficult to roll out across the organization
- Limited integration: AI tools often remain peripheral, poorly connected to existing business processes
- Pervasive shadow IT: practices evolve faster than governance, creating gaps between usage and control
- Lack of structure and oversight: few clear rules or KPIs to guide, measure, and structure AI adoption
This creates a paradox: AI is already widespread in practice, yet still underrepresented in governance.
The advantage lies in usage, not in the tool
This point is still underestimated. AI performance depends less on the technology itself than on how it is used.
With similar tools, performance gaps are not driven by platforms, but by team capabilities. AI skills make the difference.
These skills go far beyond prompt writing. They involve framing problems, structuring thinking, formulating hypotheses, interpreting outputs, and understanding their limits.
AI does not do the work for us, but it amplifies strong thinking and quickly exposes weak or poorly structured reasoning. This is where competitive advantage is built. Not in access to increasingly commoditized tools, but in the ability to extract real value from them.
The challenge is therefore scaling these capabilities. As long as they remain limited to a few advanced users, their impact will remain constrained.
The organizations pulling ahead are those that successfully scale adoption by equipping their teams, sharing best practices, and establishing clear usage standards.
Over time, AI skills will become what digital skills were ten years ago: a foundational capability expected across all levels of the organization.
Those who invest in this transition now will gain a lead that others will struggle to catch up with.
Key takeaways
The discussions from the User Club highlight more than just the rise of AI in innovation. They signal a shift in the center of gravity.
The conversation has evolved. The question is no longer whether innovation teams will adopt AI. Instead, a more demanding question emerges: where does it truly create value? Under what conditions? And how far can it be trusted without compromising judgment?
On this point, feedback is consistent. AI stands out as a powerful thought partner in early-stage innovation. It challenges assumptions, enriches brainstorming sessions, and surfaces blind spots that would otherwise go unnoticed. But it remains a support, not a substitute.
This is likely where the future will be decided. The winners will not be those who resist AI, nor those who adopt it blindly. True maturity will lie in the ability to precisely define what should be entrusted to it and what should not.