TGA, 27 December 2025 – The Digital Transformation Agency (DTA) has officially published the AI Impact Assessment Tool (v1.0) and the updated Policy for the responsible use of AI in government. While this framework is designed for Australian Government agencies, it acts as a mandatory “gatekeeper” for procurement. If a government agency intends to use an AI-enabled medical device, it must now conduct a rigorous impact assessment to verify the system’s safety, fairness, and transparency.
This publication effectively introduces a new compliance layer for manufacturers: to sell to the government, your device must provide the data necessary for agencies to pass this assessment.
AI IMPACT ASSESSMENT COMPLIANCE: TIMELINE FOR MEDICAL DEVICE MANUFACTURERS
The publication of the updated policy triggers the following implementation timeline:
- Effective Immediately (1 December 2025): The policy and assessment tool are now live. Agencies may begin using the tool for current procurements.
- 15 December 2026: Agencies are required to fully implement the AI impact assessment requirement for all in-scope use cases by this date.
What this means for you: While agencies have until late 2026 to make this mandatory for all uses, DTA guidance encourages agencies to “implement them sooner if practicable”. Manufacturers should expect procurement teams to request this compliance data immediately for any new contracts.
KEY IMPACT AREAS IN AI COMPLIANCE FOR MEDICAL DEVICE MANUFACTURERS
The Assessment Tool evaluates AI systems across several key categories. Medical device manufacturers must be prepared to provide evidence for the following “modules” of the assessment:
- Reliability and Safety (Section 6)
  - The Requirement: Agencies must confirm the AI is “reliable and safe” and that “clear processes for human intervention” exist.
  - Manufacturer Action: You must provide proof of “circuit breaker” mechanisms that allow clinicians to safely disengage the AI without disrupting patient care.
- Transparency and Explainability (Section 8)
  - The Requirement: Agencies must be able to “offer appropriate explanations” for AI-generated decisions, especially those affecting individuals.
  - Manufacturer Action: “Black box” algorithms will likely fail the assessment. You must demonstrate how your system explains its outputs in plain language to the user.
- Fairness and Non-Discrimination (Section 5)
  - The Requirement: The assessment specifically targets the risk of “unfair discrimination” and “bias” in training data.
  - Manufacturer Action: You will need to disclose the “lineage, provenance and volume” of your training data to prove it is representative of the Australian population and does not replicate existing inequalities.
- Liability and Contracts (Section 6.3)
  - The Requirement: Agencies are instructed to review the “suitability of procured AI systems,” including specific contract terms for testing, support, and warranties.
  - Manufacturer Action: Expect stricter contract clauses that push liability for model failure or drift back onto the manufacturer.
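To make the “circuit breaker” concept concrete for engineering teams, the following is a minimal sketch of one way such a mechanism might be structured. It is illustrative only: the class and method names (`ClinicalCircuitBreaker`, `disengage`, `advise`) are our own, not terms from the DTA tool, and the confidence threshold is an assumed design choice, not a policy requirement.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Recommendation:
    """An AI-generated clinical suggestion with a model confidence score."""
    text: str
    confidence: float


class ClinicalCircuitBreaker:
    """Wraps an AI inference call so a clinician can disengage it at any
    time, allowing care to continue on the manual pathway (illustrative
    pattern only; not a DTA-specified design)."""

    def __init__(self, model: Callable[[dict], Recommendation],
                 confidence_floor: float = 0.7):
        self.model = model
        self.confidence_floor = confidence_floor  # assumed threshold
        self.engaged = True  # clinician-controlled switch

    def disengage(self) -> None:
        """Clinician override: all subsequent calls bypass the AI."""
        self.engaged = False

    def advise(self, patient_record: dict) -> Optional[Recommendation]:
        if not self.engaged:
            return None  # AI bypassed; manual workflow continues
        rec = self.model(patient_record)
        if rec.confidence < self.confidence_floor:
            return None  # low-confidence output suppressed, routed to a human
        return rec
```

The key property an agency assessor would look for is that disengaging the AI is a one-way, clinician-initiated action that leaves the rest of the care workflow intact.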
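For the training-data disclosure, a machine-readable lineage record can make the “lineage, provenance and volume” evidence easy for procurement teams to consume. This is a hypothetical sketch of such a record; the field names (`source`, `collection_period`, `population_notes`, and so on) are our own suggestions, not a schema published by the DTA.

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class DatasetProvenance:
    """Minimal, machine-readable record of training-data lineage,
    provenance and volume for disclosure to a procuring agency
    (illustrative fields; not a DTA-defined schema)."""
    name: str
    source: str               # where the data originated (provenance)
    collection_period: str    # when the data was gathered (lineage)
    record_count: int         # volume
    population_notes: str     # representativeness of the Australian population
    known_limitations: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the record for inclusion in a tender response."""
        return json.dumps(asdict(self), indent=2)
```

For example, a record might be built as `DatasetProvenance(name="imaging-train-v1", source="de-identified hospital imaging archive", collection_period="2019-2024", record_count=120000, population_notes="covers all states and territories", known_limitations=["underrepresents remote communities"])` and exported with `to_json()`.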
RESOURCES
To help your regulatory and sales teams prepare, the following resources are available:
- Official Policy: Australian Government Policy for the responsible use of AI in government
- Contact DTA: For questions on the tool’s application, email [email protected]