
AI, Defence, Sovereignty: Lessons from PDSF 2026

A roundtable brought together industry leaders, researchers, and institutional representatives to discuss the challenges of artificial intelligence in defence.

Odysseus Defense Partners · 8 min read

Introduction: AI enters doctrine

The Paris Forum for Defence and Strategy (PDSF 2026) brought together, under the leadership of Jean Peeters (Institut des hautes études de défense nationale – IHEDN), six individuals who embody the diversity of actors now engaged in the AI transformation of defence: Patrick Aufort (Agence de l'innovation de défense), Eric Papin (Naval Group), Julien Bzowski (Safran.AI, formerly Preligens), Karl Neuberger (Capgemini Invent), Bruno Carron (Airbus) and Stéphan Clémençon (Télécom Paris).

Prime contractors, industrial champions, civil-sector actors turned defence professionals, academics specialising in machine learning: the diversity of perspectives was representative of an ecosystem in the midst of deep transformation.

What emerged from their exchanges goes far beyond a simple technological stocktake. The underlying question is more profound: is defence AI just another tool, or is it the symptom of a systemic transformation that forces us to rethink doctrines, industrial architectures, sovereignty, and the relationship between humans and decision-making?


1. A fast-growing market, a reshaping ecosystem

Over the past two years, the defence AI market has undergone a marked acceleration. Major industrial players — Naval Group, Airbus, Safran, among others — have created dedicated entities, building in-house capabilities that until recently resided in research departments or external providers. At the same time, startups and civil-sector actors have entered the ecosystem, bringing with them a product culture, short iteration cycles, and an appetite for experimentation.

This dual dynamic — institutionalisation within large groups, openness to agile actors — is redrawing the competitive landscape. It also raises a central question for programme owners: how to integrate these new capabilities into organisations and acquisition processes designed for a very different pace?


2. AI as a force multiplier: where and why it changes everything

AI is not a universally relevant tool. The roundtable identified the conditions under which it truly becomes a force multiplier:

  • Abundant data: volumes that no human can process alone
  • Heterogeneous information: massive production of data to be analysed under time pressure
  • Difficult interpretation: weak signals and non-intuitive correlations
  • Critical time constraints: situations where processing speed makes the operational difference

Under these conditions, two profound transformations are underway. The first is the shift from siloed reasoning to multi-domain fusion: crossing ground, air, electromagnetic, and cyber data in real time to produce a coherent situational picture. The second is the move from delayed analysis to real-time action: no longer analysing after the fact, but acting within the window of opportunity.
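As a toy illustration of multi-domain fusion under a time constraint, the sketch below merges recent detections from different domains into a single situational picture, discarding stale observations and combining per-domain confidences. The domain names, track schema, time window, and confidence rule are all assumptions made for the example, not a description of any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    domain: str       # e.g. "ground", "air", "electromagnetic", "cyber" (illustrative)
    track_id: str     # hypothetical shared track identifier
    timestamp: float  # seconds since the start of the observation period
    confidence: float

def fuse(detections, window=5.0, now=100.0):
    """Group recent detections by track and combine per-domain confidences."""
    picture = {}
    for d in detections:
        if now - d.timestamp > window:
            continue  # stale: outside the window of opportunity
        entry = picture.setdefault(d.track_id, {"domains": set(), "confidence": 0.0})
        entry["domains"].add(d.domain)
        # Combine confidences as if independent: 1 - prod(1 - c_i)
        entry["confidence"] = 1 - (1 - entry["confidence"]) * (1 - d.confidence)
    return picture

detections = [
    Detection("air", "T1", 98.0, 0.6),
    Detection("electromagnetic", "T1", 99.5, 0.7),
    Detection("ground", "T2", 90.0, 0.9),  # too old: dropped from the picture
]
picture = fuse(detections)
```

The point of the sketch is the structure, not the arithmetic: detections that arrive too late contribute nothing, and a track seen by several domains ends up with a higher combined confidence than any single sensor could justify on its own.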


3. Over 400 use cases identified by the DGA: combat and support applications

France's Directorate General of Armaments (DGA) now catalogues more than 400 AI use cases in the defence domain. They fall into several major categories:

  • Processing heterogeneous intelligence streams (satellite imagery, electromagnetic signals, OSINT data)
  • Smart sensors integrating on-board processing
  • Autonomous navigation for drones and unmanned systems
  • Decision support for command staffs
  • Predictive maintenance of equipment

What stands out in this landscape is the growing maturity of logistics and support applications. Less visible than combat systems, they are equally decisive: supply chain optimisation, spare parts management, failure prediction — all domains where AI produces measurable and immediate operational gains.
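Predictive maintenance can be as simple in principle as trend extrapolation: fit a line to a degradation signal and estimate how many cycles remain before a failure threshold is crossed. The sketch below does exactly that; the readings and threshold are invented, and real systems would use richer models with uncertainty bounds.

```python
def remaining_useful_life(readings, threshold):
    """Fit a least-squares line to degradation readings (one per cycle)
    and extrapolate the number of cycles until the failure threshold
    is crossed. Returns None if no upward degradation trend is found."""
    n = len(readings)
    xs = list(range(n))
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # stable or improving signal: no failure predicted
    return (threshold - readings[-1]) / slope  # cycles remaining

# Vibration level rising by 0.2 per cycle, failure assumed at 3.0 (invented numbers)
rul = remaining_useful_life([1.0, 1.2, 1.4, 1.6, 1.8], threshold=3.0)
```

Even this crude estimator captures why the operational gain is measurable: a part flagged six cycles before failure can be ordered, shipped, and swapped during planned downtime rather than after a breakdown.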


4. Trustworthy AI: human oversight and adversarial robustness

Defence AI does not operate in a neutral environment. It operates against an adversary actively seeking to make it fail. This creates requirements radically different from those of civilian systems:

  • Training data contamination through the injection of corrupted data
  • Adversarial attacks designed to fool perception models
  • Jamming and spoofing of the sensors on which autonomous systems rely
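To make the adversarial threat concrete, here is a minimal sketch of an evasion attack on a toy linear classifier, in the spirit of the fast gradient sign method: a small, bounded perturbation of the input flips the model's decision. The weights, input, and perturbation budget are invented for illustration.

```python
# Toy linear classifier: score(x) = w.x + b, positive score => "threat"
w = [1.0, -2.0, 0.5]
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

x = [0.5, -0.2, 0.3]        # correctly classified as "threat" (score > 0)

# FGSM-style perturbation: for a linear model the gradient of the score
# with respect to x is just w, so stepping against sign(w) lowers the score.
eps = 0.4                   # perturbation budget per feature
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
```

Each feature moves by at most `eps`, yet the classification flips. Deep perception models are far more complex, but the underlying vulnerability the roundtable flagged is the same: a capable model can be steered by perturbations an operator would never notice.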

Added to this is a paradox well known to practitioners: the more capable a model, the less interpretable it tends to be. Explainability — the ability to account for an AI system's decisions — remains an unsolved challenge, particularly in state-of-the-art deep learning architectures.

These two constraints, robustness and explainability, underpin the requirement for human oversight in defence AI systems. Not as a regulatory constraint, but as an operational and ethical necessity. The question is not "should there be a human in the loop?" but "what exact role should that human play, and how do we equip them to genuinely fulfil it?"
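As one illustration of what "equipping the human" might mean in practice, the sketch below routes a model recommendation either to automatic execution or to an operator, attaching the reasons for escalation so the human is not asked to approve blindly. The schema, threshold, and field names are assumptions for the example, not an operational design.

```python
def route_recommendation(rec):
    """rec: dict with 'action', 'confidence', and 'inputs_degraded'
    (a list of sensors flagged as possibly jammed or spoofed).
    Execute automatically only when confidence is high AND no input
    is degraded; otherwise escalate to the operator with reasons."""
    reasons = []
    if rec["confidence"] < 0.9:          # illustrative threshold
        reasons.append("low confidence")
    if rec["inputs_degraded"]:
        reasons.append("degraded inputs: " + ", ".join(rec["inputs_degraded"]))
    if reasons:
        return {"route": "human", "reasons": reasons}
    return {"route": "auto", "reasons": []}

clean = route_recommendation(
    {"action": "track", "confidence": 0.95, "inputs_degraded": []})
jammed = route_recommendation(
    {"action": "track", "confidence": 0.95, "inputs_degraded": ["radar"]})
```

The design choice worth noticing is that escalation carries its rationale with it: the operator sees *why* the system deferred, which is a precondition for exercising informed judgement rather than rubber-stamping.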


5. Sovereignty in concentric circles: between strategic autonomy and accepted dependencies

The roundtable reached a clear position on a much-debated question: sovereignty cannot be conceived as a quest for total autonomy. Building on Jensen Huang's framing of the "five layers of AI" (applications, models, infrastructure, chips, and energy), a vision of sovereignty structured as concentric circles emerged:

  • A sovereign core under national control: critical algorithms, models trained on sensitive data, command architectures
  • An intermediate layer of accepted dependency within a European framework: shared computing capacity, pooled development tools
  • An off-the-shelf periphery: generic components where dependency is acceptable because it is substitutable

This structuring requires clear political and industrial choices. It means knowing what one truly wants to control and accepting the costs of that control.


6. Dependence on American GPUs: a structural vulnerability

Dependence on American-made graphics processing units (GPUs) for systems deployed in the field is identified as a fundamental vulnerability. In a context of geopolitical tensions and export controls, the availability of these components during overseas operations or in times of crisis cannot be taken for granted.

The data question compounds this: classified or sensitive information cannot pass through generic cloud architectures. The emerging response, advocated during the roundtable, is a layered architecture:

  • Generic models pre-trained on open data
  • Specific fine-tuning carried out in sovereign environments on operational data
  • Graduated trusted clouds tiered according to classification levels, from non-sensitive data to defence secrecy

This approach allows organisations to benefit from the power of open foundations while preserving the confidentiality of strategic assets.
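A minimal sketch of how such graduated tiering could be encoded as policy; the tier names, classification levels, and environment labels are invented for illustration and do not correspond to any official scheme.

```python
# Illustrative mapping: classification level -> processing environment.
TIERS = {
    "open":       "commercial_cloud",        # generic, substitutable
    "restricted": "european_trusted_cloud",  # accepted European dependency
    "secret":     "sovereign_on_premise",    # national control
}
ORDER = ["open", "restricted", "secret"]     # ascending sensitivity

def environment_for(level):
    """Return the environment where data at this level may be processed."""
    if level not in TIERS:
        raise ValueError(f"unknown classification level: {level}")
    return TIERS[level]

def can_process(env_level, data_level):
    """An environment cleared for env_level may handle data at that
    level or below, never above."""
    return ORDER.index(data_level) <= ORDER.index(env_level)
```

Encoding the tiering as explicit policy, rather than leaving it implicit in deployment habits, is what makes the "generic pre-training, sovereign fine-tuning" split auditable.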


7. Defence AI as a socio-technical system

It would be reductive to treat defence AI as a purely technological question. The roundtable converged on a more complete vision: defence AI is a socio-technical system that simultaneously engages:

  • Industrial choices: which actors, which architectures, which partnership models?
  • Employment doctrines: how to integrate AI into command chains, rules of engagement, and operational procedures?
  • Data architectures: how to collect, qualify, secure, and leverage operational data?
  • Training challenges: how to prepare operators, officers, and decision-makers to work with AI systems?
  • Talent attraction: how can the defence sector attract and retain engineers and data scientists in competition with the private sector?
  • A political vision of what Europe wants to master over the long term

None of these dimensions can be addressed in isolation. Their overall coherence will determine the quality — and resilience — of the systems produced.


Conclusion: why Lean Tech thinking is exceptionally well positioned to meet these challenges

Faced with the complexity of these issues, our conviction at Odysseus Defense Partners is that Lean Tech thinking offers a framework for design and delivery that is particularly well suited to defence AI systems. Not as just another method, but as a philosophy of action aligned with the specific constraints of this domain.

First, Lean Tech places value and use-case thinking at the heart of the process. Before writing the first line of code or training the first model, the task is to define precisely what the system must accomplish for the operator, in what situation, and under what constraints. In a domain where use cases are numerous and resources limited, this discipline prevents building capabilities that are technically impressive but operationally useless.

Second, short and intense learning loops are structurally adapted to operational uncertainty. Defence AI operates in non-cooperative environments, against adversaries who adapt. In this context, building on long cycles means delivering systems that are already obsolete. The iterative loops of Lean Tech instead enable teams to confront hypotheses with reality early, adjust models to real data, and integrate end-user feedback before design errors become irreversible.

Third, value stream mapping and defect analysis reveal systemic waste. In complex processing chains — from raw data collection to operational decision — bottlenecks, redundancies, and non-value-adding steps are ubiquitous. The Lean approach, applied to information flows and AI pipelines, makes it possible to identify these structural defects and build systems that are genuinely fluid rather than merely sophisticated in appearance.
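A toy value-stream snapshot of such a processing chain, computing the standard lean metric of process cycle efficiency (value-adding time over total lead time). The step names and durations are invented purely to show the calculation.

```python
# Illustrative chain from raw collection to operational decision:
# (step name, duration in minutes, value-adding?)
steps = [
    ("ingest raw sensor data",    5, True),
    ("wait in transfer queue",   30, False),  # non-value-adding wait
    ("qualify / label data",     15, True),
    ("duplicate manual review",  10, False),  # redundant step
    ("model inference",           1, True),
    ("format report for staff",   4, True),
]

lead_time = sum(d for _, d, _ in steps)
value_add = sum(d for _, d, va in steps if va)
efficiency = value_add / lead_time            # process cycle efficiency

waste = [(name, d) for name, d, va in steps if not va]
```

In this invented example most of the lead time is waits and redundancy, not processing, which is precisely the pattern value stream mapping is designed to surface: the slowest part of an AI pipeline is rarely the model.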

Fourth, and perhaps most importantly, Lean Tech conceives of the human as master of the system, not merely a "loop" supervisor. This distinction is fundamental. A "human in the loop" can approve or reject a decision proposed by a machine, but remains passive — dependent on what the system presents. A human who is master of the system understands the assumptions on which the model rests, identifies its failure conditions, knows when not to trust it, and can intervene proactively. They define the context in which the model will operate. Training that operator, designing interfaces that give them genuine visibility into system behaviour, structuring processes that enable them to exercise informed judgement: this is a considerable programme of work. It is precisely what Lean Tech makes it possible to formalise and operationalise.

AI · Sovereignty · Lean Tech
