Ideas are never in short supply. Whether in large organizations or more agile structures, they emerge, populate collaborative platforms or notebooks, and are shared over a coffee break or during a strategic committee meeting. However, behind this abundance lies another reality: every company eventually faces the same questions.
What to keep? What to discard? And, most importantly, how to decide?
This decision-making phase, often regarded as just another step in the process, is in fact a pivotal moment: it is where high-impact initiatives are separated from appealing yet unrealistic concepts.

Qualify Before Scoring
Not all submissions arising from a corporate innovation program qualify as ideas. Some are expressions of frustration, others are incremental suggestions, and some are merely unstructured aspirations.
To avoid wasting time and resources, an initial screening is essential: differentiate true ideas from simple observations or complaints, then identify those that truly fall within the scope of innovation.
What Defines an Innovative Idea?
An innovative idea is not solely defined by its originality. It must create new value, whether economic, social, or environmental, be aligned with the organization’s strategy, and be concretely feasible. An idea too far ahead of its time or out of context remains an appealing but sterile concept.
For further reading: How to Spot Innovative Ideas?
It can be useful to position the idea on a spectrum to tailor the evaluation grid.
Ask yourself: Is this idea…
→ …a quick win (simple and rapid improvement)?
→ …incremental (evolution of an existing product or service)?
→ …adjacent (extension into a related market)?
→ …disruptive (a breakthrough that revolutionizes a market)?
→ …or transformational (creation of an entirely new market or need)?
Each category requires its own set of criteria and level of rigor, from immediate feasibility for quick wins to strategic vision and informed intuition for breakthrough innovations.
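To make this concrete, here is a minimal sketch in Python of what such a tailored grid might look like. The five categories come from the spectrum above; the rigor levels and focus criteria are illustrative assumptions, not a prescribed model.

```python
# Illustrative mapping of each category to the kind of evaluation it calls for.
# The labels come from the spectrum above; the criteria listed are examples only.
EVALUATION_FOCUS = {
    "quick_win":        {"rigor": "light",    "focus": ["immediate feasibility", "implementation cost"]},
    "incremental":      {"rigor": "standard", "focus": ["business relevance", "expected ROI"]},
    "adjacent":         {"rigor": "standard", "focus": ["market potential", "strategic alignment"]},
    "disruptive":       {"rigor": "deep",     "focus": ["strategic vision", "risk tolerance"]},
    "transformational": {"rigor": "deep",     "focus": ["long-term vision", "informed intuition"]},
}

def evaluation_grid(category: str) -> dict:
    """Return the evaluation emphasis for a given category of idea."""
    if category not in EVALUATION_FOCUS:
        raise ValueError(f"Unknown category: {category!r}")
    return EVALUATION_FOCUS[category]

print(evaluation_grid("disruptive")["focus"])
# ['strategic vision', 'risk tolerance']
```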
The Art of Evaluation: Between Rigor and Complexity
Once this initial filtering is complete, the actual evaluation phase begins. For most companies, this is structured around rational criteria:
→ Business relevance: Does the idea align with the company’s activities and objectives?
→ Strategic alignment: Does it fit into the medium-term roadmap?
→ Technological maturity: Is the necessary technology available?
→ Market potential: How large is the target market, and to what extent does it demonstrate interest or readiness for the solution?
→ Available resources: Does the organization have the skills and means to implement it?
→ Expected return on investment (ROI): Short, medium, or long term?
→ Risk: Financial, operational, reputational.
These dimensions can be consolidated into scorecards or multi-criteria analysis frameworks, enabling consistent comparison across projects. This approach streamlines the evaluation of hundreds of ideas and supports more objective decision-making.
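As an illustration, a weighted scorecard of this kind can be sketched in a few lines. The criteria mirror the list above; the weights and the 0–10 rating scale are illustrative assumptions that each organization would calibrate for itself.

```python
# Illustrative weights for a multi-criteria scorecard -- calibrate to your own strategy.
WEIGHTS = {
    "business_relevance": 0.20,
    "strategic_alignment": 0.20,
    "technological_maturity": 0.15,
    "market_potential": 0.20,
    "available_resources": 0.10,
    "expected_roi": 0.10,
    "risk": 0.05,  # scored so that a higher value means lower risk
}

def weighted_score(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (0-10) into a single comparable score."""
    missing = set(WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"Missing ratings for: {sorted(missing)}")
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

# Example: comparing two ideas on the same grid.
idea_a = {"business_relevance": 8, "strategic_alignment": 7, "technological_maturity": 9,
          "market_potential": 6, "available_resources": 8, "expected_roi": 5, "risk": 7}
idea_b = {"business_relevance": 6, "strategic_alignment": 9, "technological_maturity": 4,
          "market_potential": 9, "available_resources": 5, "expected_roi": 8, "risk": 4}

print(f"Idea A: {weighted_score(idea_a):.2f}")  # 7.20
print(f"Idea B: {weighted_score(idea_b):.2f}")  # 6.90
```

The weighting itself is a strategic choice: shifting weight from expected ROI toward strategic alignment will favor a different kind of idea, which is precisely why the grid should be designed deliberately rather than inherited by default.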
However, innovation is not an exact science. Some ideas may meet all the criteria and still fail, while others, seemingly unlikely on paper, go on to disrupt entire industries.
Ideas That Do Not Fit Within Traditional Assessment Models
This is especially true for so-called “creative innovations”, projects that do not address an expressed need but instead give rise to entirely new markets. The smartphone has become the quintessential example: no one anticipated that a device initially positioned alongside the personal computer would become so essential.
Such projects cannot be assessed solely through conventional criteria like immediate relevance or feasibility. They require a different approach, one that makes room for intuition, experimentation, and a measured tolerance for uncertainty.
→ Read also – Cracking the code: Overcoming the corporate innovation paradox
Leveraging Collective Intelligence
To enhance the evaluation process, some organizations choose to open it up to the broader community. Through community voting, they assess an idea’s ability to generate collective support. Thinking with 3,000 minds can sometimes surface intuitions that a committee of five experts might overlook. This internal engagement can become a valuable indicator, especially in intrapreneurship programs, where a project leader’s ability to rally support is often a key success factor.
That said, a balance must be struck. Too much weight given to the popular vote can introduce biases: favoritism, the pull of trends, or internal influence networks. Conversely, relying exclusively on expert evaluation may mean missing weak signals that only emerge at scale.
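One pragmatic way to strike that balance is to blend the two signals with an explicit, capped weight for the community vote. The sketch below is a simple illustration; the 30% community weight and the 0–10 scale are assumptions to be tuned, not a recommended setting.

```python
def blended_score(expert_score: float, community_votes: int,
                  max_votes: int, community_weight: float = 0.3) -> float:
    """
    Blend an expert score (0-10) with community support, normalized to 0-10.
    Capping community_weight limits how far the popular vote can sway the
    outcome, which mitigates favoritism and internal influence networks.
    """
    if not 0.0 <= community_weight <= 1.0:
        raise ValueError("community_weight must be between 0 and 1")
    community_score = 10 * community_votes / max_votes if max_votes else 0.0
    return (1 - community_weight) * expert_score + community_weight * community_score

# An idea rated 6.5 by experts but strongly backed by the community:
print(blended_score(expert_score=6.5, community_votes=240, max_votes=300))  # 6.95
```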
A Continuous Evaluation Process
Evaluation should not be seen as a one-time event, but rather as an ongoing process that supports a project throughout its development. At each stage (feasibility study, prototype, market test), the same questions must be re-evaluated:
- Is the market still ready?
- Has the context evolved?
- Does the project remain aligned with the strategic vision?
This regular reassessment helps secure investments and prevents the continuation of initiatives whose potential may have diminished over time.
Anticipating to Avoid Saturation
Some ideation campaigns have been so successful that they became counterproductive: hundreds of ideas submitted, yet no evaluation framework in place to process them, leaving innovation teams buried under the volume.
To prevent this, one guiding principle is essential: design the evaluation process at the outset of the program, not afterward. Without it, the success of an ideation campaign can quickly turn into organizational gridlock.
Evaluating Without Stifling
To evaluate is to make choices, but it also means knowing when to give a chance to unconventional ideas that may seem fragile today, yet hold the potential to become major growth drivers tomorrow.
At Yumana, we advocate for a combined approach, where analytical rigor coexists with openness to the unexpected. Achieving this balance requires the right tools.
On our platform, idea evaluation is designed to be progressive and modular, so the level of scrutiny can be adjusted to each project’s stage and nature:
Self-evaluation by the idea owner
Upon submission, the contributor answers a custom-designed questionnaire. This provides immediate feedback and allows them to refine their project accordingly.
Evaluation by the experts
Employees identified for their specific skills provide their insights on feasibility, impact, or strategic relevance.
Collaborative voting
The community can be invited to express their preferences, offering a valuable complementary perspective to identify ideas that can truly resonate.
Advanced Scorecards
Evaluation grids that synthesize key criteria (market potential, expected ROI, risks, required resources) and generate a score to facilitate decision-making.
Darwin Artificial Intelligence
As a co-pilot, AI analyzes innovation potential by cross-referencing multiple dimensions: value creation, and the quality and dynamics of the entities involved. It also detects weak signals (projects losing momentum, initiatives progressing rapidly, or ideas that deserve to be revisited).
Yumanists Support
Beyond technology, Yumana experts help structure the framework, define the most relevant evaluation criteria, and adjust methods based on the company’s culture and ambitions.
This combination of evaluation methods enables the management of portfolios containing hundreds of ideas without becoming overwhelmed by detail, while still preserving the ability to identify the rare “gem” that defies conventional patterns.
Another key advantage: evaluation is not static. The platform allows scores to be updated as projects evolve, new data to be incorporated, and assessments to be revisited at every critical milestone.
A system that mirrors the true nature of innovation: fluid, iterative, and never fully predictable.
Because innovation is not just about sorting through the past, it’s also about remaining open to the unexpected.
Making the Right Choices Means Embracing a Degree of Intuition
Effective evaluation is never entirely rational. Striking the right balance between objective criteria and strategic intuition is essential.
For conventional ideas, rely on rigorous assessment frameworks.
For creative or disruptive concepts, focus on long-term vision and your organization’s willingness to take bold bets.
And to capture weak signals, involve your communities, while maintaining safeguards to mitigate bias and undue influence.
So, are you ready to structure your evaluation process?
Don’t wait any longer, get in touch with our experts to discuss your projects and join the Yumana community!

CTO & Co-Founder, Yumana