AI in 2025: Between Promises and Upheaval

In recent years, artificial intelligence has gone beyond transforming our tools and methods; it is profoundly shaking all our reference points. By permeating every area of our society — economy, education, healthcare, culture, social bonds — it is silently redefining the ways we think, act, and make decisions.

This profound upheaval demands a response that matches its scale. It is not about rejecting the technology, but about consciously, humanely, and critically reclaiming it. AI should not be seen as a blind promise, nor as an insurmountable threat. In reality, it is a mirror, one that reflects our collective choices… or abdications.

At Schoolab, we have made a clear choice: to place AI at the service of discernment. Faced with increasingly autonomous systems, we reject full autonomy without human oversight while promoting the use of AI where it can solve important problems and contribute to responsible development and wellbeing.

An Exponential Technology

AI doesn’t just evolve. It accelerates. And in this acceleration, it transforms. It no longer simply responds; it acts. It no longer merely equips; it decides.

AI is not monolithic. There is no single artificial intelligence, but rather a range of uses, each of which demands awareness, responsibility, and safeguards:

  • When AI generates text or responds to prompts, its effects are limited.
  • When it is integrated into processes that impact the real world, it must be monitored.
  • But when it acts alone, decides alone, without any direct supervision, we’re dealing with an AI agent. And in that case, vigilance is not optional.

These often-invisible evolutions shift the boundary between human and machine, without its conditions or limits ever being explicitly set.

And this is precisely where the urgency lies: to weigh the value of this progress against its downsides and risks, and to prioritize essential human-centered progress over dangerous gadgetization.

Our Stance

Our approach to AI is neither theoretical nor speculative. It stems from our unique standpoint: we do not position ourselves as an elite strike force for AI deployment and massive automation, but as strategic advisors guiding organizations through responsible digital transformation. Our expertise lies in methodology and in humble, practical experimentation. This is precisely what allows us to take a clear-eyed, demanding, and thoughtful stance on current and future uses.

We reject magical, sensationalist narratives that depict an all-powerful AI, and we are equally wary of those that standardize, homogenize, or replace without reflection. We oppose uses of AI that disempower, impose, or freeze human agency. AI that decides on recruitment without explanation or recourse does not advance humanity. Neither does AI that homogenizes cultural content to maximize engagement at the cost of diversity, nor AI that surveils without debate and undermines democratic foundations.

At Schoolab, we advocate for technology that strengthens human capabilities, that enlightens without imposing, and that complements without diminishing or impoverishing. We support AI that enriches pedagogy in classrooms without mechanizing it; that aids medical discernment in hospitals without biasing it; that informs public decisions without reducing citizens to mere data. We firmly believe in AI that supports and enhances our creativity, intuition, and collective capacity to imagine a better future.

In other words, augmented intelligence — not automation of our humanity.

Supporting the Transition

Our vision is inherently embodied in our actions. Understanding AI also means learning how to integrate it thoughtfully: at Schoolab, we act as a catalyst for responsible transitions.

This is why we guide organizations through a gradual and critical adoption of these technologies. By combining user-centricity, strategy, foresight, governance, and ethical thinking, we build bridges between technological imaginaries and human realities.

This also means acknowledging the limits of autonomous agents: overconsumption of resources, loss of traceability, unjustified actions — these are potential pitfalls we strive to anticipate through a responsible and documented approach.

At Schoolab, we believe in AI as a lever for accessible innovation.

Not in the logic of fierce competition, but as a tool serving those who build — entrepreneurs, intrapreneurs, and citizen collectives alike. A tool to create new solutions with limited resources. This vision — that innovation should be accessible to all — has guided our values from the very beginning.

And in this role, a central mission emerges: to ask the questions that the technology — and especially autonomous agents — will not ask itself. What is the agent doing? Why? How far should we trust it? When should it be held accountable?

Understand Before Acting

Before widely deploying any technology, we must understand its underlying logic and limitations. Today, too many actors rush into implementation without grasping the deeper implications, risks, or limits. At Schoolab, we believe in first awakening critical and systemic thinking, fostering real debate, and building a shared culture.

To that end, we have developed immersive and participatory formats that combine methodology with hands-on practice: murals, design fiction, workshops, simulations… Spaces where leaders, civil servants, employees, and citizens can explore the issues, express preferences, and envision the future.

Our goal is not to train technical experts. We train informed actors, capable of making judgments, expressing preferences, and articulating a vision.

Analyze, Control, Govern

Understanding is not enough. We must also frame usage, govern it, define clear rules, and implement safeguards. Letting things run unchecked means relinquishing all control and accepting that fundamental decisions escape democratic debate.

That’s why, at Schoolab, we advocate for Artificial Intelligence that is transparent, traceable, explainable. And above all, governed by those who are often voiceless: citizens, users, communities… This democratic imperative is not a luxury — it is the very foundation of trust.

This involves implementing governance principles specific to AI agents: human oversight, nuanced access and role management, autonomy limits, and intervention protocols such as “human-in-the-loop.” It is essential to include all stakeholders in defining what an agent is authorized to do.
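To make these principles concrete, here is a minimal sketch of what a "human-in-the-loop" intervention protocol can look like in practice: agent actions carry a risk level agreed on by stakeholders, high-risk actions cannot run without explicit human approval, and every decision is logged for traceability. All names and the risk taxonomy are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class AgentAction:
    """An action an agent wants to take. The risk level ("low" or "high")
    is an autonomy limit defined by stakeholders, not by the agent."""
    name: str
    risk: str
    execute: Callable[[], str]

@dataclass
class HumanInTheLoopGate:
    """Gate between an agent and the real world: high-risk actions
    require human approval; every outcome is recorded for audit."""
    approver: Callable[[AgentAction], bool]  # the human decision point
    audit_log: list = field(default_factory=list)  # traceability

    def run(self, action: AgentAction) -> Optional[str]:
        if action.risk == "high" and not self.approver(action):
            self.audit_log.append((action.name, "blocked"))
            return None  # agent autonomy stops here
        self.audit_log.append((action.name, "executed"))
        return action.execute()

# Usage: a cautious gate whose human approver refuses high-risk actions
gate = HumanInTheLoopGate(approver=lambda action: False)
gate.run(AgentAction("draft_reply", "low", lambda: "draft ready"))    # runs
gate.run(AgentAction("send_payment", "high", lambda: "payment sent")) # blocked
print(gate.audit_log)
```

The design choice worth noting is that the approval function sits outside the agent entirely: the agent cannot widen its own autonomy, and the audit log makes every blocked or executed action explainable after the fact.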

Because AI must not escape us. On the contrary, it must reflect us.

For an AI that Serves the Common Good

Today, in a world shaken by major ecological, democratic, and social crises, AI cannot merely optimize. It must help build a more just, sustainable, and inclusive future.

Our stance at Schoolab: a vision of artificial intelligence that is deeply open, accessible, and sustainable. We advocate for AI that also benefits those too often left out of major digital revolutions. In this spirit, we work with tech for good actors, NGOs, and public entities to ensure inclusive impact.

When well-directed, AI can be a powerful lever for social transformation.

The Schoolab France teams

Co-Creating a New Vision of AI

Twenty years ago, Schoolab was founded on the conviction that innovation should serve the great transitions of our time. Today, we extend that ambition to the field of artificial intelligence.

Not to follow a trend.
But to take a stand.
Not to announce what we will do with AI, but to clearly state how it should be framed.

We don’t claim to know everything. But we do know this: AI will change the world. And we have the power — and the responsibility — to choose in which direction, so it contributes to a fairer, more sustainable, and inclusive future.