Marta Walega-Durand
Operational crises · Institutions · AI-informed design
AI Project · Concept & System Thinking

Cross-Border Incident Intelligence System

A system-level concept for using AI in real operational crises: inspections, accidents, disputes and institutional logic across borders. No “magic chatbot” – a structured way to make incidents less chaotic and more predictable.

Incident Intelligence Assistant
Serious incidents are messy. Institutions are slower than reality.

A single incident can involve drivers, clients, insurers, police, inspectors, lawyers and different legal systems. Information is scattered across e-mails, PDFs, portals and “someone remembers something”. Humans handle judgement well. They handle fragmented information badly.

Real-world pattern
Scenario: a driver is stopped for a complex roadside control in another country. Some documents are incomplete, the load is sensitive, timing is tight, and everyone expects an answer: “what happens now and what do we risk?”.
Today: the operator scrolls through dozens of e-mails, screenshots, PDFs and calls, while trying to remember “that one case two years ago in a similar region”. Important details are easy to miss, and no one sees the full picture.
Design goal
With the system: the incident has a single structured file: people, vehicles, institutions, documents, deadlines, previous similar cases and draft strategies – all in one place, ready to support a human decision.
Key idea: AI does not replace legal counsel or management. It compresses context and makes institutional logic visible, so humans can think faster and with fewer blind spots.
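
As a rough sketch of that structured file (every field name below is hypothetical, not a finished schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentFile:
    """One incident gathered into a single structured record."""
    incident_id: str
    country: str
    people: list[str] = field(default_factory=list)         # drivers, clients, contacts
    vehicles: list[str] = field(default_factory=list)       # plates or fleet IDs
    institutions: list[str] = field(default_factory=list)   # police, inspectorate, insurer
    documents: list[str] = field(default_factory=list)      # references into the document store
    deadlines: dict[str, date] = field(default_factory=dict)
    similar_cases: list[str] = field(default_factory=list)  # IDs of past incidents
    draft_strategies: list[str] = field(default_factory=list)

# Invented example values, for illustration only.
incident = IncidentFile(
    incident_id="INC-2024-017",
    country="DE",
    people=["driver: J. K."],
    institutions=["roadside inspection authority"],
    deadlines={"submit missing transport document": date(2024, 5, 3)},
)
```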
An “intelligence layer” on top of incidents, not an auto-pilot.

The system combines structured data, document search and controlled use of large language models. It is designed for traceability: every suggestion points back to concrete documents or past cases.

Incident knowledge graph

Each incident is a node connected to people, vehicles, institutions, locations, procedures and documents. This makes it possible to see not only “what happened now”, but how it relates to previous cases.

Entities & relations · Cross-border structure · Institution profiles
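
A minimal sketch of such a graph, assuming the networkx library; every ID and attribute here is invented:

```python
import networkx as nx

# A tiny slice of the incident knowledge graph.
g = nx.MultiDiGraph()
g.add_node("INC-2024-017", kind="incident", country="DE")
g.add_node("INC-2022-041", kind="incident", country="DE")
g.add_node("vehicle:WX-1234", kind="vehicle")
g.add_node("authority:DE-roadside", kind="institution")

g.add_edge("INC-2024-017", "vehicle:WX-1234", relation="involves")
g.add_edge("INC-2024-017", "authority:DE-roadside", relation="handled_by")
g.add_edge("INC-2022-041", "authority:DE-roadside", relation="handled_by")

# "Have we dealt with this institution before?" becomes a one-hop query:
earlier = [
    n for n in g.predecessors("authority:DE-roadside")
    if g.nodes[n]["kind"] == "incident" and n != "INC-2024-017"
]
print(earlier)  # ['INC-2022-041']
```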
Document store with semantic search

All relevant material – e-mails, attachments, reports, inspection notes – is stored in a way that allows both exact filtering and semantic search (“find similar incidents to this one”).

E-mail parsing · PDF & scan ingestion · Similarity search
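
A toy sketch of similarity search. The bag-of-words embed() below is a deliberate stand-in for a real sentence-embedding model; store contents and IDs are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a plain bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical incident summaries in the document store.
store = {
    "INC-2022-041": "roadside inspection germany missing cmr sensitive load",
    "INC-2023-009": "insurance dispute warehouse damage claim france",
}

query = "driver stopped roadside inspection germany incomplete documents"
ranked = sorted(store, key=lambda k: cosine(embed(query), embed(store[k])), reverse=True)
print(ranked[0])  # INC-2022-041 – the most similar past case
```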
Retrieval-augmented reasoning

When the assistant answers questions or drafts a message, it is forced to work with retrieved context: selected documents, past cases and internal rules. The goal is to reduce speculation and keep answers grounded in real data.

Grounded answers · Citations to sources · Configured boundaries
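
A sketch of how a grounded prompt could be assembled. Source tags, texts and the exact instruction wording are all assumptions, and the model call itself is left out:

```python
# The model only sees retrieved context, and is told to cite it
# or admit the data is insufficient. (Retrieval is sketched above.)
retrieved = [
    ("INC-2022-041/report.pdf",
     "Authority released the vehicle after the missing document was couriered within 48h."),
    ("internal/playbook.md",
     "For roadside controls in this country, always request the written control protocol."),
]

context = "\n".join(f"[{src}] {text}" for src, text in retrieved)
prompt = f"""You are an incident assistant. Answer ONLY from the sources below.
Cite the source tag in brackets for every claim. If the sources do not
answer the question, say so instead of guessing.

Sources:
{context}

Question: What are the realistic next moves of the authority?"""
print(prompt)  # passed to whichever LLM the organisation approves
```

The design choice that matters: the model never sees anything the retrieval layer did not explicitly select.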
Human-in-the-loop control

The operator remains responsible. Drafts can be accepted, edited or discarded. Only reviewed content is stored as part of the organisation’s playbook for future cases.

Operator review · Audit trail · Explicit limits
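
A minimal sketch of the review gate, assuming an append-only audit log; function and field names are illustrative:

```python
from datetime import datetime, timezone

audit_log: list[dict] = []  # append-only trail of every decision

def review(draft: str, operator: str, decision: str,
           final_text: str | None = None) -> str | None:
    """Operator decision gate: only 'accept' or 'edit' lets content through."""
    assert decision in {"accept", "edit", "discard"}
    stored = {"accept": draft, "edit": final_text, "discard": None}[decision]
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "operator": operator,
        "decision": decision,
        "draft": draft,
        "stored": stored,
    })
    return stored  # None means nothing enters the playbook

result = review("Draft client update ...", operator="mw", decision="edit",
                final_text="Client update with corrected dates ...")
```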
From “What is going on?” to “Here are realistic options.”

The most important part of the system is not the technology but its behaviour: what it does when an incident starts, when it escalates and when it is being closed.

Typical questions the assistant can support

  • “What is the exact status of this incident right now?”
  • “Which documents are still missing or inconsistent?”
  • “Have we seen a similar pattern in this country or with this institution?”
  • “What are the realistic next moves of the authority involved?”
  • “What are three strategic options, with trade-offs?”
  • “Draft a calm, precise update for the client explaining the situation.”

What it deliberately does not do

  • Does not claim to give binding legal advice.
  • Does not automatically send messages without human review.
  • Does not invent rules or “fake certainty” where data is unclear (see the sketch after this list).
  • Does not over-collect personal data just because it is technically possible.
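
One way to honour the “no fake certainty” rule is a grounding gate in front of the model. A hedged sketch, with an invented function and an arbitrary threshold:

```python
def grounded_answer(question: str, sources: list[tuple[str, float]],
                    min_score: float = 0.35) -> dict:
    """Let a question through only if retrieval found real support.

    `sources` are (document_id, relevance_score) pairs from the search layer;
    the threshold is a tunable safeguard, not a magic number."""
    usable = [doc for doc, score in sources if score >= min_score]
    if not usable:
        return {"status": "insufficient_data",
                "message": "No sufficiently relevant documents found – escalate to a human."}
    return {"status": "ok", "question": question, "cite_only": usable}

print(grounded_answer("What fine applies here?", sources=[("news-clip.txt", 0.12)]))
# -> insufficient_data: the assistant declines rather than inventing a rule
```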
From mindset to tool, not the other way around.

This concept is intentionally staged. Before any code, there is structure: how incidents are described, how decisions are made, how knowledge is stored and shared.

Phase 1
Manual incident playbook

Define a standard way of describing incidents, collecting documents and writing timelines. Capture repeat patterns and institutional behaviours in a simple, human-readable format.
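
Phase 1 deliberately lives in shared documents, not code, but the shape of a playbook entry can still be sketched as plain structured data (all values invented):

```python
# One captured pattern – in Phase 1 this lives in a shared document;
# the shape matters, not the technology.
playbook_entry = {
    "pattern": "roadside control, incomplete transport documents",
    "typical_timeline": ["vehicle held", "protocol issued", "documents requested"],
    "institution_behaviour": "usually releases within 48h once documents arrive",
    "checklist": ["request written protocol",
                  "courier missing documents",
                  "notify client same day"],
    "based_on": ["INC-2022-041"],
}
```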

Phase 2
Semi-manual AI support

Use AI tools to help search, summarise and draft, but keep everything in a controlled space. Measure where AI helps most and where it needs stricter constraints.

Phase 3
Integrated assistant

Connect to mailboxes, document storage and internal systems with clear access control. Evaluate the assistant on real incidents and refine rules, prompts and safeguards.

This page describes a system concept and a way of thinking. The exact technical stack can be adjusted to the organisation and its constraints.

Turning chaos, constraints and institutions into a system.

I do not compete with engineers. I work with them. My value lies in knowing how incidents actually unfold in the real world: documents, human reactions, institutional logic and the way people break under pressure. This project is a blueprint for using AI there in a controlled way.

What I bring

  • Years of work inside cross-border operations and incidents.
  • Fluency in the institutional logic of different countries.
  • Ability to see patterns in messy, incomplete information.
  • Structured thinking that translates into workflows and system behaviour.

How to work with me

  • Use this concept as a base for an internal prototype.
  • Involve your technical team – I speak “constraints”, not just “ideas”.
  • Start with one real incident type and build from there.