
UN DPI Safeguards as Callable AI Blocks

The Universal DPI Safeguards Framework identifies 18 principles and 13 risks across the DPI life cycle. This page maps them to callable AI functions — safeguards that can be inserted into any DPI Workflow, audited, and replaced without rebuilding service logic.

🇺🇳 Source: Universal DPI Safeguards Framework — United Nations Office for Digital and Emerging Technologies (ODET) & UNDP, 2023–2025. Used under open knowledge principles.
⚠️ Theoretical exercise — The safeguard AI Blocks on this page are a conceptual mapping of the UN framework into callable functions. They are not yet production implementations. We invite the United Nations, UNDP, ODET, and the broader DPI community to build on this model and develop these blocks as open, globally reusable components.
13 Interrelated Risks

What the UN framework protects against

The framework identifies 13 interrelated risks across three categories. Each safeguard AI Block is designed to detect or mitigate one or more of these risks at workflow runtime — not as a policy document, but as a callable, auditable function.

🔴 Safety
Privacy vulnerability
Digital insecurity
Physical insecurity
Lack of recourse
🟣 Inclusion
Discrimination
Unequal access
Exclusion
Disempowerment
🟡 Structural Vulnerabilities
Digital distrust
Weak rule of law
Weak institutions
Technical shortcomings
Unsustainability
The DPI-AI approach: Rather than addressing these risks only through policy documents and governance frameworks, the DPI-AI Framework treats each safeguard as a callable function — invokable at any step in a DPI Workflow, returning structured outputs, and logging to an immutable audit trail.
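Concretely, a "callable safeguard" could follow a pattern like the sketch below: every block returns a structured result and appends it to an audit trail. The `SafeguardResult` shape, its field names, and the in-memory trail are illustrative assumptions, not part of the UN framework or an official DPI-AI API; a production system would use tamper-evident, append-only storage.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SafeguardResult:
    block: str      # which safeguard produced this result, e.g. "privacy_check"
    proceed: bool   # may the workflow continue past this step?
    details: dict   # block-specific structured outputs

# Illustrative append-only audit trail (in-memory for the sketch only).
AUDIT_TRAIL: list[dict] = []

def log_to_audit_trail(result: SafeguardResult) -> None:
    """Record every safeguard invocation, whether or not it allowed the step."""
    AUDIT_TRAIL.append({
        "block": result.block,
        "proceed": result.proceed,
        "details": result.details,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
```

The key property is that blocking and allowing decisions are logged identically, so the trail can be audited independently of the service logic.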
Safeguard AI Block Catalog

9 callable safeguard functions

Each block maps to one or more UN DPI Safeguard Principles. Blocks marked NEW are derived directly from the UN framework's 18 principles and 13 risk categories.

NEW
🔐
privacy_check()
Evaluate data fields against privacy-by-design principles — checks data minimization, purpose limitation, retention period, and observability constraints before processing proceeds.
Inputs: data_fields · purpose · retention_period
Outputs: privacy_risk_score · violations · proceed
📐 UN Principle O3 — Ensure data privacy by design · Mitigates: Privacy vulnerability
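As a sketch, `privacy_check()` might enforce data minimization with a per-purpose allow-list and a retention ceiling. The policy table, the `benefit_eligibility` purpose, and the 0.25-per-violation scoring are invented for illustration; a real block would consult the deployment's own data-protection policy.

```python
# Assumed per-purpose data-minimization policy (illustrative values only).
ALLOWED_FIELDS = {
    "benefit_eligibility": {"national_id", "income_band", "household_size"},
}
MAX_RETENTION_DAYS = 365  # assumed retention ceiling

def privacy_check(data_fields: list[str], purpose: str, retention_period: int) -> dict:
    """Flag fields outside the declared purpose and over-long retention."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    violations = [f for f in data_fields if f not in allowed]
    if retention_period > MAX_RETENTION_DAYS:
        violations.append(f"retention_period>{MAX_RETENTION_DAYS}d")
    risk = min(1.0, 0.25 * len(violations))
    return {"privacy_risk_score": risk, "violations": violations, "proceed": not violations}
```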
NEW
🌍
exclusion_audit()
Check whether a service outcome or channel configuration risks excluding or discriminating against specific demographic groups, low-connectivity users, or people with disabilities.
Inputs: service_output · channel_config · population_context
Outputs: exclusion_risk · affected_groups · channel_gaps
📐 UN Principles F2 & F3 — Do not discriminate · Do not exclude · Mitigates: Discrimination, Exclusion, Unequal access
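A minimal `exclusion_audit()` sketch could compare the channels a service offers against the channels specific groups depend on. The group and channel names are hypothetical; `service_output` is kept for signature parity but unused in this simplified version, which audits channel coverage only.

```python
def exclusion_audit(service_output: dict, channel_config: list[str],
                    population_context: dict) -> dict:
    """population_context maps a group to the channel it depends on,
    e.g. {"low_connectivity": "ussd", "no_smartphone": "in_person"}."""
    offered = set(channel_config)
    gaps = {g: ch for g, ch in population_context.items() if ch not in offered}
    return {
        "exclusion_risk": len(gaps) / max(len(population_context), 1),
        "affected_groups": sorted(gaps),
        "channel_gaps": sorted(set(gaps.values())),
    }
```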
NEW
⚖️
recourse_check()
Verify that a complaint, appeal, or grievance redress path exists and is accessible to the citizen before a workflow decision is finalised. Issues a recourse reference and logs it to the audit trail.
Inputs: service_id · citizen_id · decision_ref
Outputs: recourse_available · appeal_path · recourse_ref
📐 UN Principle F8 — Ensure effective remedy and redress · Mitigates: Lack of recourse, Disempowerment
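One way to sketch `recourse_check()`: look up a registered appeal path for the service and mint a recourse reference the citizen can cite later. The registry, service name, and `ombudsman://` path format are invented for illustration; a real block would query a grievance-redress service.

```python
import uuid

# Hypothetical recourse registry (service_id -> appeal path).
APPEAL_PATHS = {"cash_transfer": "ombudsman://social-protection/appeals"}

def recourse_check(service_id: str, citizen_id: str, decision_ref: str) -> dict:
    """Confirm an appeal path exists and issue a citable recourse reference."""
    appeal_path = APPEAL_PATHS.get(service_id)
    available = appeal_path is not None
    recourse_ref = f"RC-{uuid.uuid4().hex[:8]}" if available else None
    return {
        "recourse_available": available,
        "appeal_path": appeal_path,
        "recourse_ref": recourse_ref,
    }
```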
NEW
📜
legal_basis_verify()
Confirm that the service has a clear legal or regulatory mandate before data is processed. Checks jurisdiction, applicable law, and data-use authority. Blocks workflow execution if no valid legal basis is found.
Inputs: service_id · jurisdiction · data_use_purpose
Outputs: legal_basis_confirmed · applicable_law · compliance_status
📐 UN Principle F5 — Uphold the rule of law · Mitigates: Weak rule of law, Digital distrust
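Because this block must stop the workflow when no mandate exists, one sketch is to raise rather than return on failure; raising is a design choice of this sketch, not prescribed by the framework. The register entries, law name, and jurisdiction code are entirely invented.

```python
# Hypothetical legal-basis register; a real deployment would query an
# authoritative legal/regulatory source.
LEGAL_REGISTER = {
    ("cash_transfer", "KE"): {
        "law": "Social Assistance Act",
        "purposes": {"benefit_eligibility"},
    },
}

class LegalBasisError(RuntimeError):
    """Raised to block workflow execution when no valid legal basis exists."""

def legal_basis_verify(service_id: str, jurisdiction: str,
                       data_use_purpose: str) -> dict:
    entry = LEGAL_REGISTER.get((service_id, jurisdiction))
    if entry is None or data_use_purpose not in entry["purposes"]:
        raise LegalBasisError(
            f"no legal basis for {service_id}/{data_use_purpose} in {jurisdiction}")
    return {"legal_basis_confirmed": True,
            "applicable_law": entry["law"],
            "compliance_status": "ok"}
```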
NEW
♻️
sustainability_assess()
Evaluate the long-term operational and financial sustainability signals of the service — including dependency risk, vendor lock-in, environmental impact, and institutional capacity flags.
Inputs: service_id · dependency_map · cost_model
Outputs: sustainability_score · risk_flags · recommendations
📐 UN Principles F9 & O8 — Future sustainability · Financial viability · Mitigates: Unsustainability, Technical shortcomings
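A toy `sustainability_assess()` might derive flags from two signals named in the entry above: proprietary dependencies (lock-in) and an uncovered cost base. The flag names, 0.4-per-flag weighting, and cost-model keys are assumptions made for this sketch.

```python
def sustainability_assess(service_id: str, dependency_map: dict,
                          cost_model: dict) -> dict:
    """dependency_map maps a dependency to its kind, e.g. "proprietary"."""
    flags = []
    proprietary = [d for d, kind in dependency_map.items() if kind == "proprietary"]
    if proprietary:
        flags.append("vendor_lock_in")
    if cost_model.get("annual_cost", 0) > cost_model.get("secured_funding", 0):
        flags.append("funding_gap")
    score = round(max(0.0, 1.0 - 0.4 * len(flags)), 2)
    recs = [f"replace proprietary dependency: {d}" for d in proprietary]
    return {"sustainability_score": score, "risk_flags": flags, "recommendations": recs}
```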
🤝
consent_verify()
Verify citizen consent is recorded, valid, and scoped correctly before data is processed. Blocks processing if consent is absent or expired.
Inputs: session_id · data_types
Outputs: consent_valid · scope · timestamp
📐 UN Principle O5 — Data protection during use · F6 — Promote autonomy and agency
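The blocking behavior described above, refuse processing when consent is absent, expired, or too narrow, can be sketched against a simple consent store. The store layout and session IDs are assumptions; a real block would query a consent registry.

```python
from datetime import datetime, timedelta, timezone

# Assumed in-memory consent store: session_id -> (granted data types, expiry).
CONSENT_STORE = {
    "sess-1": ({"income_band", "national_id"},
               datetime.now(timezone.utc) + timedelta(days=30)),
}

def consent_verify(session_id: str, data_types: list[str]) -> dict:
    record = CONSENT_STORE.get(session_id)
    now = datetime.now(timezone.utc)
    if record is None or record[1] < now:          # absent or expired
        return {"consent_valid": False, "scope": [], "timestamp": now.isoformat()}
    scope, _expiry = record
    valid = set(data_types) <= scope               # every requested type consented
    return {"consent_valid": valid, "scope": sorted(scope),
            "timestamp": now.isoformat()}
```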
👤
human_review()
Route a low-confidence or high-risk decision to a human caseworker. Issues a case ID, pauses the workflow, and resumes on review completion.
Inputs: decision_context · confidence_score · citizen_id
Outputs: case_id · review_status · reviewer_decision
📐 UN Principle F8 — Effective remedy · F6 — Autonomy and agency · Human oversight by default
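The pause-and-resume behavior can be approximated by returning a pending case that a workflow engine waits on. The confidence floor, case-ID format, and in-memory queue are assumptions of this sketch; a real engine would persist the case and resume on a reviewer callback.

```python
import itertools

_case_counter = itertools.count(1)
CASE_QUEUE: dict[str, dict] = {}   # assumed in-memory caseworker queue
CONFIDENCE_FLOOR = 0.8             # assumed threshold requiring a human decision

def human_review(decision_context: dict, confidence_score: float,
                 citizen_id: str) -> dict:
    if confidence_score >= CONFIDENCE_FLOOR:
        return {"case_id": None, "review_status": "not_required",
                "reviewer_decision": None}
    case_id = f"CASE-{next(_case_counter):05d}"
    CASE_QUEUE[case_id] = {"context": decision_context, "citizen_id": citizen_id}
    return {"case_id": case_id, "review_status": "pending",
            "reviewer_decision": None}
```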
🔬
bias_check()
Detect and flag algorithmic bias or demographic disparity in AI decisions. Returns a bias score, flagged attributes, and a plain-language explanation.
Inputs: decision_output · demographic_attrs
Outputs: bias_score · flagged · explanation
📐 UN Principles F2 — Do not discriminate · O6 — Respond to gender, ability or age
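One common disparity measure a `bias_check()` could use is the demographic-parity gap: the spread in approval rates between groups. The 0.2 threshold and the choice of demographic parity (over, say, equalized odds) are assumptions of this sketch.

```python
PARITY_THRESHOLD = 0.2  # assumed acceptable gap in approval rates

def bias_check(decision_output: list[bool], demographic_attrs: list[str]) -> dict:
    """Flag groups whose approval rate trails the best-served group."""
    counts: dict = {}  # group -> (approvals, total)
    for approved, group in zip(decision_output, demographic_attrs):
        ok, n = counts.get(group, (0, 0))
        counts[group] = (ok + int(approved), n + 1)
    by_group = {g: ok / n for g, (ok, n) in counts.items()}
    gap = max(by_group.values()) - min(by_group.values())
    worst = min(by_group.values())
    flagged = [g for g, r in by_group.items() if r == worst] if gap > PARITY_THRESHOLD else []
    return {
        "bias_score": round(gap, 3),
        "flagged": flagged,
        "explanation": f"largest approval-rate gap between groups is {gap:.0%}",
    }
```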
🚩
anomaly_flag()
Flag anomalous patterns in workflow outputs and route for human review. Supports evidence-based continuous improvement per principle O2.
Inputs: output_data · baseline_rules
Outputs: is_anomaly · risk_score · escalate
📐 UN Principle O2 — Evolve with evidence · Mitigates: Digital distrust, Weak institutions
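In its simplest form, `anomaly_flag()` could check workflow metrics against min/max baseline bounds. The rule format (metric to `(low, high)` pairs), the metric names, and the 0.5 escalation cutoff are assumptions; real deployments would likely use statistical or learned baselines.

```python
def anomaly_flag(output_data: dict, baseline_rules: dict) -> dict:
    """baseline_rules maps a metric to its (min, max) acceptable range."""
    breaches = [
        metric for metric, value in output_data.items()
        if metric in baseline_rules
        and not (baseline_rules[metric][0] <= value <= baseline_rules[metric][1])
    ]
    risk_score = len(breaches) / max(len(baseline_rules), 1)
    return {
        "is_anomaly": bool(breaches),
        "risk_score": risk_score,
        "escalate": risk_score >= 0.5,  # route to human review past this point
    }
```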
5 Life Cycle Stages

When to invoke each safeguard

The UN framework maps safeguards to five iterative DPI life cycle stages. In a DPI-AI workflow, each stage corresponds to specific callable blocks that should be invoked before proceeding.

Stage 1
Conception & Scoping
legal_basis_verify() · sustainability_assess() · exclusion_audit()
Stage 2
Strategy & Design
privacy_check() · exclusion_audit() · bias_check()
Stage 3
Development
consent_verify() · privacy_check() · anomaly_flag()
Stage 4
Deployment
recourse_check() · human_review() · legal_basis_verify()
Stage 5
Operations & Maintenance
anomaly_flag() · bias_check() · sustainability_assess()
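The stage-to-safeguard mapping above can be expressed as a lookup that a workflow engine consults before advancing a service to its next stage. The stage keys are illustrative slugs; the block lists follow the five stages listed above, and the fail-closed behavior for unknown stages is a design choice of this sketch.

```python
# Life cycle stage -> safeguard blocks to invoke before proceeding
# (slugs for the five UN DPI life cycle stages are illustrative).
LIFECYCLE_SAFEGUARDS = {
    "conception_scoping":     ["legal_basis_verify", "sustainability_assess", "exclusion_audit"],
    "strategy_design":        ["privacy_check", "exclusion_audit", "bias_check"],
    "development":            ["consent_verify", "privacy_check", "anomaly_flag"],
    "deployment":             ["recourse_check", "human_review", "legal_basis_verify"],
    "operations_maintenance": ["anomaly_flag", "bias_check", "sustainability_assess"],
}

def required_safeguards(stage: str) -> list[str]:
    # Fail closed: an unrecognised stage gets no free pass.
    if stage not in LIFECYCLE_SAFEGUARDS:
        raise ValueError(f"unknown life cycle stage: {stage}")
    return LIFECYCLE_SAFEGUARDS[stage]
```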
18 Principles

Foundational & Operational

The UN framework's 18 principles are divided into Foundational (the non-negotiable basis for any DPI) and Operational (context-sensitive principles that vary across implementations). Each safeguard AI Block is anchored in one or more of these principles.

Foundational Principles — F1 to F9
F1
Do no harm
Anticipate, assess, and mitigate human rights harms across the full DPI life cycle.
F2
Do not discriminate
Ensure unbiased access and equal opportunity for all individuals regardless of intersecting identities.
F3
Do not exclude
Provide a choice of channels — digital and non-digital — based on individual capacity. Access must not be conditional or mandatory.
F4
Reinforce transparency & accountability
Develop DPI with democratic participation, public oversight, and fair market competition. Avoid vendor lock-in.
F5
Uphold the rule of law
Introduce DPI with a clear legal basis and embed regulatory aspects into its design from the outset.
F6
Promote autonomy & agency
Enable everyone to take control of their data, exercise choice, and contribute to their society's well-being.
F7
Foster community engagement
Centre all life cycle stages on the needs and interests of individuals and communities at risk.
F8
Ensure effective remedy & redress
Complaint response, avenues for appeal, and grievance mechanisms must be accessible to all, equitably.
F9
Focus on future sustainability
Anticipate and limit long-term harms. Mitigate environmental impact and minimise resource needs.
Operational Principles — O1 to O9
O1
Leverage market dynamics
Foster inclusive public and private innovation so market players can compete and address emerging needs.
O2
Evolve with evidence
Independent, transparent, continuous assessments and audits — rapidly cease or initiate activities based on evidence.
O3
Ensure data privacy by design
Embed data minimization, provisions to delink, and ability to limit observability into legal and technical architecture.
O4
Assure data security by design
Incorporate encryption and pseudonymization. Legal frameworks should fill gaps where technical design is insufficient.
O5
Ensure data protection during use
Process or retain personal data lawfully and transparently only by authorized personnel within a legal framework.
O6
Respond to gender, ability or age
DPI must not exacerbate existing challenges or introduce new barriers for those facing structural inequalities.
O7
Practice inclusive governance
Establish a robust legal, regulatory, and institutional framework promoting transparent multi-stakeholder governance.
O8
Sustain financial viability
Establish diversified, phased, and sustainable financing models. Governments lead build; partners lead operations.
O9
Build and share open assets
Share and reuse open protocols, specifications, and DPGs to prevent proprietary systems from limiting safety.

Add safeguards to your workflow

Use the AI Block Composer to add privacy_check(), recourse_check(), and other safeguard blocks to any DPI Workflow. Or open the Service Architect to see how safeguards are pre-selected for each use case.

Source & Attribution
The safeguard principles and risk taxonomy on this page are derived from the Universal DPI Safeguards Framework, published by the United Nations Office for Digital and Emerging Technologies (ODET) and the United Nations Development Programme (UNDP). The framework is built through global consultations and used here under open knowledge principles. The translation of these principles into callable AI Block specifications is an original contribution of the DPI-AI Framework by CDPI and does not represent an official position of the United Nations.