A system-level concept for using AI in real operational crises: inspections, accidents, disputes and institutional logic across borders. No “magic chatbot” – a structured way to make incidents less chaotic and more predictable.
A single incident can involve drivers, clients, insurers, police, inspectors, lawyers and different legal systems. Information is scattered across e-mails, PDFs, portals and “someone remembers something”. Humans handle judgement well. They handle fragmented information badly.
The system combines structured data, document search and controlled use of large language models. It is designed for traceability: every suggestion points back to concrete documents or past cases.
Each incident is a node connected to people, vehicles, institutions, locations, procedures and documents. This makes it possible to see not only “what happened now”, but how it relates to previous cases.
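The graph idea above can be sketched as a minimal data model. All type and field names here are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    kind: str   # e.g. "person", "vehicle", "institution", "location", "document"
    ref: str    # a stable identifier, e.g. "vehicle:PL-1234"

@dataclass
class Incident:
    incident_id: str
    links: list[Entity] = field(default_factory=list)

    def related(self, kind: str) -> list[str]:
        """All linked entity refs of a given kind."""
        return [e.ref for e in self.links if e.kind == kind]

def shared_entities(a: Incident, b: Incident) -> set[str]:
    """Entities two incidents have in common -- the basis for asking
    how a new case relates to previous ones."""
    return {e.ref for e in a.links} & {e.ref for e in b.links}
```

Two incidents linked to the same vehicle or institution surface each other automatically, which is exactly the "how it relates to previous cases" view.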
All relevant material – e-mails, attachments, reports, inspection notes – is stored in a way that allows both exact filtering and semantic search (“find similar incidents to this one”).
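A sketch of that dual access pattern, with toy vectors standing in for a real embedding model: exact metadata filters narrow the set first, then the survivors are ranked semantically.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(docs, filters, query_vec, top_k=3):
    """docs: list of {'id', 'meta': {...}, 'vec': [...]}.
    Keep only docs whose metadata matches every filter exactly,
    then rank the survivors by semantic similarity."""
    kept = [d for d in docs
            if all(d["meta"].get(k) == v for k, v in filters.items())]
    kept.sort(key=lambda d: cosine(d["vec"], query_vec), reverse=True)
    return [d["id"] for d in kept[:top_k]]
```

In production the vectors would come from an embedding model and the store from a database; the shape of the query stays the same.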
When the assistant answers questions or drafts a message, it is forced to work with retrieved context: selected documents, past cases and internal rules. The goal is to reduce speculation and keep answers grounded in real data.
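One way to enforce that grounding is to number every retrieved excerpt and require the answer to cite it. The helper below is a hypothetical sketch of such prompt assembly, not a specific product's API:

```python
def build_prompt(question, retrieved):
    """retrieved: list of (doc_id, excerpt) pairs. Each excerpt is
    numbered so the answer can cite [1], [2], ... back to concrete
    documents, keeping every claim traceable."""
    context = "\n".join(
        f"[{i}] ({doc_id}) {text}"
        for i, (doc_id, text) in enumerate(retrieved, 1)
    )
    return (
        "Answer ONLY from the numbered context below. "
        "Cite sources as [n]. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```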
The operator remains responsible. Drafts can be accepted, edited or discarded. Only reviewed content is stored as part of the organisation’s playbook for future cases.
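The review gate can be as small as this sketch (decision names are assumptions): nothing enters the playbook without an explicit operator decision.

```python
PLAYBOOK = []  # only reviewed content becomes organisational knowledge

def review(draft, decision, edited_text=None):
    """decision: 'accept', 'edit' or 'discard'.
    Returns what was stored, or None if the draft was discarded."""
    if decision == "discard":
        return None
    final = edited_text if decision == "edit" else draft
    PLAYBOOK.append(final)
    return final
```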
The most important part of the system is not technology, but behaviour: what it does when an incident starts, when it escalates and when it closes.
This concept is intentionally staged. Before any code, there is structure: how incidents are described, how decisions are made, how knowledge is stored and shared.
Define a standard way of describing incidents, collecting documents and writing timelines. Capture repeat patterns and institutional behaviours in a simple, human-readable format.
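As an illustration of such a format, here is one possible record shape with a tiny validator; every field name is an assumption, not a prescribed standard.

```python
# Minimal well-formedness check for a standard incident record.
REQUIRED = {"incident_id", "opened_at", "summary", "timeline", "documents"}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record
    is well-formed enough to file and compare across cases."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - record.keys())]
    for i, entry in enumerate(record.get("timeline", [])):
        if not {"when", "what"} <= entry.keys():
            problems.append(f"timeline[{i}] needs 'when' and 'what'")
    return problems
```

The point is not the specific fields but the discipline: every incident is described the same way, so patterns become comparable.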
Use AI tools to help search, summarise and draft, but keep everything in a controlled space. Measure where AI helps most and where it needs stricter constraints.
Connect to mailboxes, document storage and internal systems with clear access control. Evaluate the assistant on real incidents and refine rules, prompts and safeguards.
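Clear access control can be sketched as a role-to-source map checked before any data reaches the assistant. The roles and source names below are invented for illustration:

```python
# Hypothetical role-to-source permissions; in reality this would be
# backed by the organisation's identity and audit systems.
ACCESS = {
    "operator": {"mailbox", "documents"},
    "assistant": {"documents"},   # e.g. no raw mailbox access
    "auditor": {"mailbox", "documents", "audit_log"},
}

def can_read(role: str, source: str) -> bool:
    return source in ACCESS.get(role, set())

def fetch(role: str, source: str, store: dict):
    """Only return data the role is allowed to read; refuse otherwise."""
    if not can_read(role, source):
        raise PermissionError(f"{role} may not read {source}")
    return store[source]
```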
This page describes a system concept and a way of thinking. The exact technical stack can be adjusted to the organisation and its constraints.
I do not compete with engineers. I work with them. My value is in how incidents actually unfold in the real world: documents, human reactions, institutional logic and the way people break under pressure. This project is a blueprint for using AI there in a controlled way.