
CONSCIOUS MACHINES? HOW VALAWAI EXPLORES AWARENESS AND VALUES

What does it mean for a machine to be aware? For Luc Steels, it is not just a philosophical question, but a practical challenge that shapes the very future of artificial intelligence.

With the VALAWAI project, Luc is exploring how AI systems are not only intelligent but also value-aware, capable of understanding context, ethical boundaries, and even aspects of self-awareness. “Generative AI is amazing, but it doesn’t grasp meaning,” he says. “It’s a prediction machine. It can complete text, generate images, or compose music, but it has no clue what it is about.” VALAWAI seeks to go beyond that, tackling the deeper questions of awareness, ethics, and understanding in human-centric AI. 

Luc Steels’ journey in AI stretches back to the early 1970s, giving him decades of insight into both the field’s rapid advances and its persistent challenges. He put himself at the centre of VALAWAI by writing the proposal and bringing together a consortium of partners, and he also oversees communication and interaction across the portfolio.

“Finding the right partners? That was actually easy.”

“The real difficulty is the bureaucracy. Most of my time goes into documents, negotiation, and evaluation processes. In AI, things move fast, while bureaucracy can drag you back,” he reflects.

The inspiration for VALAWAI stems from the limitations of current AI. While generative models capture patterns across massive datasets, they fail to handle meaning and values. “We want AI that can reason about consequences, understand ethical constraints, and act within a moral framework,” Steels explains. The project builds on his previous work on human-centric AI, exploring how systems can be aware not only of their environment but also of their own actions, a form of machine self-awareness.

“Awareness is intuitive for humans. When you wake up, you’re aware. When you sleep, you’re not. How can we give machines a glimpse of that?” 

The EIC Pathfinder–funded project demonstrates ambition both conceptually and in its applications. One area involves hospitals, where VALAWAI tools support doctors in critical decision-making scenarios. During the COVID-19 pandemic, overwhelmed hospitals faced a scarcity of equipment and resources, making ethical and life-or-death decisions even more complex. “We’re not replacing doctors,” Steels stresses. “We’re providing tools that help them weigh options and make informed choices.” Another application is in social robotics, where humanoid robots act as companions for seniors, offering conversation, guidance, and assistance. “Maybe people would prefer human helpers, but the reality is there aren’t enough,” he says. “Robots can fill a gap, and we’re learning how to make them socially aware and responsive.” 

VALAWAI also addresses the growing influence of AI on social media. Systems now create content, manipulate narratives, and even deceive, unintentionally or otherwise. The project explores how value-aware AI can introduce ethical guardrails, giving users tools to protect themselves and helping platforms reduce misuse. Each of these applications could be a project in itself, yet Steels’ team is tackling them simultaneously, blending fundamental research with applied technology. 

Despite its successes, the project faces unique challenges. AI evolves at breakneck speed, often rendering proposals obsolete before funding is secured. “In AI, new things happen almost weekly,” Steels says. “It’s like being on a football field where the game keeps changing while you’re still writing your playbook.” Stability is rare in research teams, and short-term funding adds pressure to focus on immediate applications rather than deep, foundational questions. Yet VALAWAI has navigated these obstacles, balancing the need for innovation with the practical realities of European project bureaucracy. 

The human element remains central to the project. Awareness, after all, is a profoundly human concept. Steels draws inspiration from the Society of Mind theory, proposed by Marvin Minsky in the 1970s, which sees the mind as a collection of interacting agents. In VALAWAI, this idea is embodied in humanoid robots equipped with multiple “agents” that perceive the environment, act, and communicate. At the Biennale di Venezia 2025, a robot installation engages hundreds of thousands of visitors, demonstrating these principles in real-time.

“Children are so spontaneous when they interact with the robot. It’s partly a science experiment, partly an art installation, but fundamentally, it’s about exploring awareness.”

Installation part of the VALAWAI project at the Biennale di Venezia 2025

Looking ahead, the impact of VALAWAI is both immediate and long-term. Hospitals are already using prototypes to support decision-making. Social robots and ethical AI tools are being tested with real users. And the theoretical insights into awareness and value in AI continue to grow, forming a foundation for future innovations. “We’re building tools, but we’re also shaping understanding,” Steels says. “AI isn’t just about what it can do, it’s about what it should do, and how it interacts with human values.”

For Luc, the project is as much a personal journey as a scientific one. “It’s crazy, yes, but the material we’re collecting, the experiments, the interactions, it’s invaluable,” he reflects. VALAWAI shows that AI research can be human-centered, ethically informed, and profoundly exploratory, bridging the gap between cutting-edge technology and the timeless quest to understand consciousness, awareness, and value in our machines.


Photo by Gabriele Malaspina on Unsplash

18 Dec 2025
WRITTEN BY Caterina Falcinelli