Responsible AI at AI Minds
Aligned with the NIST AI Risk Management Framework (AI RMF 1.0)
At AI Minds, we build TerraVision AI, a geospatial intelligence platform that analyses terrain, orthophotos, blueprints and GIS layers to support better decisions in urban planning, infrastructure, environmental management and defence.
Because our AI can influence real-world decisions, we align our governance and engineering practices with the NIST AI Risk Management Framework (AI RMF 1.0). We use its four core functions — Govern, Map, Measure, and Manage — to identify, assess and manage AI risks across the lifecycle of TerraVision.
1. GOVERN – AI Governance, Accountability and Oversight
We maintain an internal governance structure to ensure TerraVision is developed and operated responsibly.
- Clear accountability: We assign roles for AI product ownership, data stewardship, security, and risk management. Founders and technical leads remain accountable for how AI is used in TerraVision.
- Policies and procedures: We maintain policies covering AI use, data protection, access control, incident response, and model change management. These policies are reviewed and updated as our product and risk profile evolve.
- Risk-based approach: We classify TerraVision use cases (e.g. planning support, environmental risk overlays, defence-adjacent analysis) by risk level and apply additional controls for higher-risk applications.
- Ethical principles: We commit to human-centric, fair, transparent and safe AI, consistent with international best practices and local guidance.
- Training and awareness: Team members working on AI features receive training on responsible AI, data handling and security expectations.
Governance ensures that AI in TerraVision is not an afterthought but integrated into our strategy, design and daily operations.
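The risk-based approach above — classifying use cases by risk level and attaching stronger controls to higher tiers — can be sketched as a simple tier-to-controls mapping. This is a minimal illustration; the tier names and controls are hypothetical examples, not TerraVision's actual policy:

```python
# Illustrative mapping from use-case risk tier to minimum required controls.
# Tier names and control lists are examples only.
RISK_CONTROLS = {
    "standard": ["access control", "output logging"],
    "elevated": ["access control", "output logging",
                 "human review of outputs"],
    "high":     ["access control", "output logging",
                 "human review of outputs",
                 "pre-deployment risk review", "restricted access"],
}

def required_controls(tier: str) -> list[str]:
    """Return the minimum controls for a given risk tier."""
    if tier not in RISK_CONTROLS:
        raise ValueError(f"Unknown risk tier: {tier}")
    return RISK_CONTROLS[tier]
```

The point of making the mapping explicit is that reviewers can check a feature's tier against its deployed controls mechanically, rather than relying on case-by-case judgement.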
2. MAP – Understanding Our AI Systems and Their Context
We “map” our AI systems by documenting what they do, where they are used, and what risks they may pose.
- AI system register: We keep an internal inventory of TerraVision’s AI features (e.g. Q&A terrain insights, suitability scores, risk overlays, orthophoto analysis). For each system we record its purpose, inputs, outputs, intended users and dependencies.
- Context and stakeholders: We consider how our outputs may affect planners, communities, landowners, infrastructure operators and other stakeholders, especially in high-impact domains like environmental risk or defence.
- Data understanding: We document the types of data we process (terrain models, imagery, GIS layers, zoning and environmental overlays), their sources, and known limitations or uncertainties.
- Use and misuse scenarios: We assess both intended use (e.g. decision support for experts) and potential misuse or over-reliance (e.g. treating outputs as definitive without expert review).
- Legal and regulatory landscape: We monitor relevant laws and guidance in the jurisdictions where our customers operate, including privacy, data protection and emerging AI regulations.
Mapping helps us understand where AI is used in TerraVision and what can go wrong, so we can design appropriate safeguards.
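The kind of inventory record the AI system register describes could be sketched as a small structured entry. The fields and sample values below are illustrative, not TerraVision's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    """One row in a hypothetical AI system register (illustrative fields)."""
    name: str
    purpose: str
    inputs: list
    outputs: list
    intended_users: list
    risk_level: str                      # e.g. "standard" or "high"
    dependencies: list = field(default_factory=list)

register = [
    AISystemEntry(
        name="risk-overlays",
        purpose="Environmental risk overlays for planning support",
        inputs=["terrain model", "GIS layers", "environmental overlays"],
        outputs=["risk overlay", "summary notes"],
        intended_users=["urban planners", "environmental analysts"],
        risk_level="high",
    ),
]

# Higher-risk entries can be pulled out for additional review.
high_risk = [e for e in register if e.risk_level == "high"]
```

Keeping the register as structured data (rather than free text) makes it easy to answer questions like "which high-risk features depend on this data source?" during reviews.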
3. MEASURE – Evaluating Performance, Risks and Impacts
We “measure” our AI systems to understand their performance, limitations and potential impacts on people and environments.
- Technical performance: We evaluate model behaviour on representative test data and scenario-based checks (e.g. different terrain types, environmental conditions, data quality levels).
- Reliability and robustness: We assess how sensitive outputs are to noise, missing data or changes in input quality and clearly communicate when data quality limits reliability.
- Fairness and bias considerations: For use cases that may impact specific communities or regions, we look for patterns that might systematically disadvantage or misrepresent certain areas, and we work with domain experts where appropriate.
- Explainability: We focus on producing interpretable outputs — such as highlighting key layers, criteria or factors that contributed to a score or classification — rather than only opaque scores.
- Uncertainty and limitations: We explicitly acknowledge uncertainty and important limitations in our documentation and user experience, especially for risk or suitability outputs.
Measurement is an ongoing process. As TerraVision evolves, we continue to test, review and improve our AI components.
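One way to make the uncertainty and data-quality caveats above concrete is to return every score together with an explicit reliability note, so the interface can surface limitations instead of a bare number. The function and thresholds below are an illustrative sketch, not production logic:

```python
def suitability_with_caveats(score: float, input_coverage: float) -> dict:
    """Pair a suitability score with an explicit reliability note.

    input_coverage: fraction (0-1) of required input layers present.
    Thresholds are illustrative, not production values.
    """
    if input_coverage >= 0.9:
        note = "High input coverage; score considered reliable."
    elif input_coverage >= 0.6:
        note = "Partial input coverage; treat score as indicative only."
    else:
        note = "Low input coverage; expert review required before use."
    return {"score": round(score, 2),
            "coverage": input_coverage,
            "note": note}
```

Bundling the caveat with the score means downstream consumers cannot accidentally strip the limitation away from the number it qualifies.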
4. MANAGE – Continuous Monitoring, Improvement and Incident Handling
We “manage” AI risks over time by monitoring systems in operation and responding quickly to issues.
- Lifecycle management: We treat AI features as living systems with defined phases: design, development, testing, deployment, monitoring and retirement. We do not deploy high-impact features without appropriate controls and review.
- Monitoring and logging: We log system activity and key events to help detect anomalies, investigate incidents and improve our models and prompts over time.
- Change management: We follow change control processes when updating models, prompts or infrastructure. We assess potential impacts before changes are deployed and roll back if necessary.
- User feedback and complaints: We provide channels for users to report concerns or unexpected behaviour. Reports are reviewed, prioritised and used to improve TerraVision’s design and safeguards.
- Risk mitigation: For higher-risk use cases, we implement additional safeguards such as stronger human-in-the-loop review, more conservative defaults, or tighter access controls.
- Retirement and decommissioning: If an AI feature is no longer suitable, safe or aligned with our principles, we update, restrict or retire it.
Managing AI risk is not a one-time exercise; it is an ongoing commitment embedded in how we operate TerraVision.
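The change-management and rollback practice described above can be sketched as a small version-pinning pattern: every deployed configuration is kept in history, deployment requires explicit approval, and any change can be reverted to the previous known-good state. This is an illustrative pattern, not our actual deployment tooling:

```python
class ModelConfig:
    """Keeps a history of deployed configurations so changes can be rolled back."""

    def __init__(self, initial: dict):
        self._history = [initial]

    @property
    def current(self) -> dict:
        return self._history[-1]

    def deploy(self, new_config: dict, approved: bool) -> bool:
        # Changes are only deployed after impact assessment and approval.
        if not approved:
            return False
        self._history.append(new_config)
        return True

    def rollback(self) -> dict:
        # Revert to the previous known-good configuration if issues emerge.
        if len(self._history) > 1:
            self._history.pop()
        return self.current
```

The invariant worth noting is that the first configuration can never be popped, so rollback always lands on a state that was once approved.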
Our Ongoing Commitment
The NIST AI Risk Management Framework provides a structured way for us to build trustworthy, accountable and secure AI systems. We treat it as a living guide and will continue to refine our practices as:
- New standards and regulations emerge,
- Our product and data sources evolve, and
- We learn from our customers, partners, and the communities affected by AI-supported decisions.
If you have questions or concerns about how TerraVision uses AI, or if you believe an output may be inaccurate or harmful, please contact us at: admin@aiminds.ai