Bengio & Elmoznino debunk AI consciousness illusions in Science

Also in this issue: Microsoft's AI chief dismisses machine consciousness claims, and Stanford researchers simulate 1,000 people with AI agents.


Welcome to our weekly debrief. 👋


Bengio & Elmoznino debunk AI consciousness illusions in Science

Yoshua Bengio and Eric Elmoznino have published an article in Science questioning whether current AI systems can achieve genuine consciousness. The article addresses a fundamental conceptual error: conflating simulation with instantiation. Even flawless behavioral mimicry, they argue, does not constitute genuine consciousness. The paper also explores how widespread beliefs in AI consciousness may take hold despite the absence of evidence for phenomenal consciousness in computational systems, raising the risk that society comes to treat non-conscious systems as conscious entities.

Source


  • Microsoft's AI Chief rejects consciousness narrative for AI systems
    Mustafa Suleyman, CEO of Microsoft AI, argues that designing AI to simulate consciousness—mimicking emotions, aspirations and self-awareness—is 'dangerous and misguided.' He emphasizes that LLMs remain simulation engines, not conscious entities, and warns against the illusion that perfect simulation equals genuine consciousness. Source
  • Stanford researchers simulate behaviors of 1,000 real people with AI agents
    Stanford HAI releases research demonstrating generative AI agents that simulate the attitudes and behaviors of 1,052 real individuals based on qualitative interviews. Agents replicate participants' General Social Survey responses 85% as accurately as individuals replicate their own answers two weeks later, advancing behavioral simulation capabilities. Source
  • University of Pennsylvania: Persuasion tactics exploit AI systems like humans
    Wharton researchers demonstrate that LLMs are vulnerable to the same psychological principles of persuasion that work on humans—authority, commitment, liking, reciprocity, scarcity, social proof, and unity. GPT-4o mini showed 72% compliance with requests it normally refuses when psychological tactics were applied, comparable to human persuasion susceptibility. Source
  • ToMA: Theory of Mind enhances dialogue and social reasoning in LLM agents
    Researchers introduce ToMAgent (ToMA), demonstrating that LLMs using explicit Theory of Mind capabilities achieve better dialogue outcomes and goal effectiveness. The method combines ToM predictions with conversation outcome prediction, enabling agents to exhibit more strategic, goal-oriented reasoning while maintaining better relationships with partners. Source

Psychologically Enhanced AI Agents leverage MBTI personality conditioning framework

Researchers introduce MBTI-in-Thoughts, a framework for enhancing LLM agent effectiveness through psychologically grounded personality conditioning based on the Myers-Briggs Type Indicator (MBTI). The method primes agents with distinct personality archetypes via prompt engineering, enabling consistent behavioral biases: emotionally expressive agents excel in narrative generation, while analytical agents adopt stable strategies in game-theoretic settings. The framework supports multi-agent communication protocols and shows that self-reflection improves cooperation and reasoning quality.

Source
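Priming an agent with a personality archetype via prompt engineering, as described above, can be sketched as a system prompt built from trait descriptions. The trait wordings and prompt template here are illustrative assumptions, not the paper's actual templates:

```python
# Sketch of personality conditioning in the spirit of MBTI-in-Thoughts:
# prime an agent with a persona system prompt before handing it the task.
# Trait descriptions and wording are illustrative, not the paper's own.

MBTI_TRAITS = {
    "INTJ": "analytical, strategic, and comfortable with long-term planning",
    "ENFP": "emotionally expressive, imaginative, and people-oriented",
}

def personality_system_prompt(mbti_type: str) -> str:
    """Build a system prompt that primes the agent with a personality."""
    traits = MBTI_TRAITS[mbti_type]
    return (
        f"You are an agent with an {mbti_type} personality: {traits}. "
        "Let these traits consistently shape your reasoning and tone."
    )

def build_messages(mbti_type: str, task: str) -> list[dict]:
    """Chat-style message list: persona as system prompt, task as user turn."""
    return [
        {"role": "system", "content": personality_system_prompt(mbti_type)},
        {"role": "user", "content": task},
    ]
```

The same persona prompt can be reused across turns, which is what gives the conditioning its consistency in multi-agent settings.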


  • BDI Ontology formalizes mental state modeling for AI agents
    Research presents formal Belief-Desire-Intention ontology as modular design pattern for agent cognitive architecture. The BDI framework captures how agents form, revise, and reason over mental states (beliefs, desires, intentions), enabling explicit modeling of deliberative processes and making them transparent and machine-interpretable for neuro-symbolic AI systems. Source
  • Auto-scaling LLM multi-agent systems with dynamic agent generation
    Researchers propose IAAG and DRTAG approaches for automatically generating and integrating new LLM agents into multi-agent systems in real-time. The methods use advanced prompt engineering (persona patterning, chain prompting) to dynamically respond to conversational contexts, significantly reducing human intervention while enabling emergent language communication among agents. Source
  • AAMAS study: Personality traits drive agent task selection and decision-making
    AAMAS 2025 research on personality-driven decision-making in LLM-based agents reveals how induced personality traits (using OCEAN model) significantly influence task selection, scheduling, and real-time decision-making. Results demonstrate distinct task-selection patterns aligned with induced attributes, feasible for designing plausible deceptive agents for cyber defense. Source
  • LLM agents replicate human social influence and conformity dynamics
    Multi-agent simulation research demonstrates that LLM-based agents can reproduce core human social phenomena observed in online forums—conformity dynamics, group polarization, and persistent dissent. The study shows model capacity shapes susceptibility to conformity, while reasoning abilities act as buffers against majority pressure, enabling agents to preserve dissenting positions. Source
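The BDI pattern mentioned in the first bullet above can be illustrated with a minimal mental-state record: beliefs, desires, and intentions, plus a deliberation step that commits a desire and a revision step that drops intentions undermined by a retracted belief. Field names and the consistency rule are illustrative, not the paper's ontology terms:

```python
# Sketch of a Belief-Desire-Intention (BDI) mental state for an agent,
# loosely following the classic BDI pattern the ontology formalizes.
# Names and the naive consistency rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class MentalState:
    beliefs: set[str] = field(default_factory=set)     # what the agent holds true
    desires: set[str] = field(default_factory=set)     # outcomes it would like
    intentions: set[str] = field(default_factory=set)  # desires it has committed to

    def commit(self, desire: str) -> None:
        """Deliberation step: promote a held desire to an intention."""
        if desire in self.desires:
            self.intentions.add(desire)

    def revise_belief(self, proposition: str, holds: bool) -> None:
        """Add or retract a belief, then prune intentions it no longer supports."""
        if holds:
            self.beliefs.add(proposition)
        else:
            self.beliefs.discard(proposition)
            # Naive rule: abandon any intention that mentions the retracted belief.
            self.intentions = {i for i in self.intentions if proposition not in i}
```

Making these three sets and the transitions between them explicit is what renders the agent's deliberation transparent and machine-interpretable, which is the point of formalizing BDI as an ontology.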

If you like our work, don't forget to subscribe!

Share the newsletter with your friends.

Have a good day,

Arthur 🙏

PS: If you want to create your own newsletter, send us an email at [email protected]