Agentik.md Launches Open-Source AI Safety Specifications Ahead of 2026 EU and Colorado AI Regulations
WellStrategic has launched the AI Agent Safety Stack — twelve open-source Markdown file specifications designed to help developers and organisations define safety boundaries, shutdown protocols, and accountability standards for autonomous AI agents.
The specifications are available at https://killswitch.md, with full templates and documentation at https://github.com/killswitch-md/spec. All twelve specifications are released under the MIT licence at no cost.
Background
Autonomous AI agents — software systems that can plan, decide, and act without continuous human direction — are entering enterprise environments at pace. Industry analysts project that a significant proportion of enterprise applications will embed AI agents by the end of 2026. These systems can call APIs, modify files, send messages, and incur costs at machine speed. However, there is currently no widely adopted, version-controlled format for documenting their safety boundaries alongside project code.
The AI Agent Safety Stack addresses this gap by providing a set of plain-text Markdown files, each covering one safety concern, that can be placed in a project's repository root. The approach follows the pattern established by AGENTS.md, a file convention for AI agent project instructions now used in over 60,000 open-source repositories.
Regulatory Context
Several AI governance frameworks are scheduled to take effect in 2026:
- Provisions of the EU AI Act (Regulation (EU) 2024/1689) relating to high-risk AI systems — including requirements for human oversight and the ability to interrupt or stop AI systems — are scheduled to apply from August 2, 2026.
- The Colorado Consumer Protections for Artificial Intelligence Act (SB 24-205), which requires impact assessments and risk management documentation for high-risk AI systems, begins enforcement on June 30, 2026.
- Additional AI governance legislation is active or pending in California, Texas, Illinois, and other US states.
The AI Agent Safety Stack is designed to help organisations document their AI safety controls in a format that is version-controlled, auditable, and co-located with project code. The specifications do not guarantee compliance with any regulation and should not be treated as a substitute for qualified legal or compliance advice.
The Twelve Specifications
The specifications are organised into four categories:
Operational Control:
- THROTTLE.md — Rate limiting, cost ceilings, and automatic slow-down protocols
- ESCALATE.md — Human-in-the-loop approval and notification workflows
- FAILSAFE.md (https://failsafe.md) — Safe fallback states and recovery procedures
- KILLSWITCH.md — Emergency shutdown triggers and escalation paths
- TERMINATE.md — Permanent shutdown with evidence preservation

Data Security:
- ENCRYPT.md (https://encrypt.md) — Data classification, secrets handling, and transmission rules
- ENCRYPTION.md — Cryptographic standards, key management, and compliance mapping

Output Quality:
- SYCOPHANCY.md — Output bias detection and disagreement protocols
- COMPRESSION.md — Context compression rules and coherence verification
- COLLAPSE.md — Model drift detection and recovery checkpoints

Accountability:
- FAILURE.md — Failure mode mapping and incident response procedures
- LEADERBOARD.md — Agent performance benchmarking and regression detection
Each specification is a plain-text Markdown file designed to be read by AI agents on startup, reviewed by engineers during development, referenced by compliance teams during audits, and inspected by regulators if required. The specifications are framework-agnostic and can be used with any AI agent implementation.
Availability
All twelve specifications are available immediately under the MIT licence. Full documentation is at https://agentik.md.
About WellStrategic
WellStrategic is an Australian technology and virtual tour company. The AI Agent Safety Stack is an open file-convention specification published and maintained by WellStrategic. The project is designed to complement existing open standards including AGENTS.md and the llms.txt proposal.
Note: The AI Agent Safety Stack is an open-source project provided under the MIT licence, "as-is" and without warranty of any kind. It does not constitute legal, regulatory, or compliance advice. Organisations should consult qualified professionals to determine their regulatory obligations. Use of these specifications does not guarantee compliance with any law, regulation, or standard.
Media Contact
Company Name: WellStrategic
Contact Person: Craig
Phone: +61 1800 360 888
Country: Australia
Website: https://agentik.md
Press Release Distributed by ABNewswire.com

