
Responsible Data Framework

A responsible data framework codifies the ethical obligations that govern how organisations collect, use, share, and dispose of data about people. These obligations extend beyond legal compliance to encompass moral duties arising from the relationship between data holders and the individuals whose information they possess. For mission-driven organisations working with vulnerable populations, refugees, survivors of violence, or communities in crisis, the framework addresses the fundamental asymmetry between institutional power and individual agency that characterises most data relationships.

Responsible data
The duty to ensure that data practices throughout the information lifecycle respect the rights, dignity, and safety of the people whose data is collected, while balancing legitimate operational needs.
Data subject
Any identified or identifiable natural person whose personal data is processed. In humanitarian contexts, this includes beneficiaries, community members, staff, and partners.
Power asymmetry
The imbalance between an organisation’s capacity to collect, analyse, and act on data and an individual’s ability to understand, control, or benefit from that same data.
Informational self-determination
The principle that individuals should control what information about themselves is collected and how it is used, to the extent possible given operational constraints.
Data harm
Any negative consequence to individuals or communities resulting from data collection, use, sharing, or retention, including physical harm, discrimination, loss of dignity, economic loss, or psychological distress.

Foundational principles

The responsible data framework rests on five principles that apply across all data activities. These principles function as decision criteria when operational needs conflict with protection obligations, providing a structured basis for determining how to proceed.

Do no harm requires that data activities do not create or exacerbate risks to individuals or communities. This principle operates prospectively through risk assessment before data collection and retrospectively through monitoring for emergent harms. The standard is not zero risk but rather that anticipated benefits justify accepted risks, with particular weight given to risks borne by vulnerable individuals who cannot meaningfully consent or withdraw.

Purpose limitation restricts data use to the specific purposes communicated at collection time. Data collected for programme registration cannot be repurposed for research without additional consent. Data collected for needs assessment cannot be shared with donors for fundraising. Each new use requires evaluation against the original purpose and, where the new use differs materially, fresh consent or a documented justification for proceeding without it.

Data minimisation requires collecting only data necessary for the stated purpose. The impulse to collect additional fields “in case they become useful” violates this principle because speculative collection imposes certain privacy costs against uncertain future benefits. Minimisation applies not only to what data is collected but also to who can access it, how long it is retained, and where it is stored. Each expansion requires justification.

Informed consent means that individuals understand what data is collected, why, how it will be used, who will access it, and what choices they have. Consent is not a checkbox but a process of communication appropriate to the context, the individual’s capacity, and the power dynamics at play. In humanitarian contexts where assistance may depend on registration, consent is structurally compromised, requiring additional safeguards rather than treating consent as sufficient authorisation.

Accountability places responsibility for data practices on the organisation, not the individual. Accountability means that data harms can be traced to decisions, that decision-makers can be identified, and that remediation mechanisms exist. Accountability operates through governance structures, documentation requirements, and feedback channels that enable affected individuals to raise concerns.

The responsible data lifecycle

Data passes through distinct phases from collection to disposal, each presenting specific ethical considerations. The lifecycle model provides a framework for identifying where responsible data principles apply and what safeguards each phase requires.

RESPONSIBLE DATA LIFECYCLE

  COLLECTION ---> USE ---> SHARING ---> RETENTION ---> DISPOSAL

  Decision points at each phase:

    Collection:  necessity, consent, method, quality
    Use:         purpose alignment, access controls, analytics
    Sharing:     recipient capacity, onward transfer, agreement
    Retention:   duration, security, review, archive
    Disposal:    method, verification, documentation, backups

Figure 1: Responsible data lifecycle with decision points at each phase

The collection phase determines what data enters organisational systems. Every field collected creates obligations: storage security, access control, retention management, and eventual disposal. Collection decisions should answer three questions: Is this data necessary for the stated purpose? Can the purpose be achieved with less sensitive data? Have individuals been informed in a manner they can understand?

The use phase governs how collected data serves operational purposes. Use must align with the purpose communicated at collection. Internal use includes programme delivery, monitoring, reporting, and operational decision-making. Each use case requires access controls proportionate to data sensitivity. Analytics and aggregation require additional consideration because patterns derived from data can reveal information beyond what individuals disclosed.

The sharing phase covers data transfer to parties outside the collecting organisation. Sharing multiplies risk because control transfers to entities with different capacities, interests, and obligations. Sharing decisions require assessment of recipient capability, contractual protections, and the consequences if shared data is misused or breached. The data sharing decision framework below provides structured guidance for these assessments.

The retention phase determines how long data remains in active systems. Retention beyond operational necessity increases risk without corresponding benefit. Retention decisions balance operational needs (programme continuity, audit requirements, legal obligations) against protection interests (minimising exposure, enabling deletion). Review mechanisms ensure retention does not continue indefinitely through inertia.

The disposal phase permanently removes data from organisational systems. Disposal requires verified deletion across all storage locations, including backups and archives. Disposal decisions must account for legal holds, ongoing investigations, and contractual obligations that may require extended retention. Documentation of disposal provides accountability evidence.
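The phases and their decision points lend themselves to a simple checklist structure that a periodic review can walk through. The sketch below is illustrative only: the phase and checkpoint names come from Figure 1, but the data structure and function are hypothetical, not part of the framework.

```python
# Illustrative sketch: the lifecycle phases from Figure 1 as an ordered
# checklist. Phase and checkpoint names follow the framework text; the
# structure and review function are hypothetical examples.
LIFECYCLE = {
    "collection": ["necessity", "consent", "method", "quality"],
    "use":        ["purpose alignment", "access controls", "analytics"],
    "sharing":    ["recipient capacity", "onward transfer", "agreement"],
    "retention":  ["duration", "security", "review", "archive"],
    "disposal":   ["method", "verification", "documentation", "backups"],
}

def open_checkpoints(completed: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return, per phase, the decision points not yet signed off."""
    return {
        phase: [c for c in checks if c not in completed.get(phase, set())]
        for phase, checks in LIFECYCLE.items()
    }
```

A review that has signed off all collection checkpoints but nothing else would see an empty list for collection and the full checklist for every later phase.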

The consent spectrum

Consent functions differently across contexts depending on the relationship between the data collector and the individual, the consequences of declining, and the individual’s capacity to understand the implications of their choice. A consent spectrum describes the range from fully informed voluntary consent to situations where consent is structurally impossible or inappropriate.

CONSENT SPECTRUM

  FULLY INFORMED, VOLUNTARY <-------------------------------> IMPLIED

    1. Research with no service link
    2. Donor reporting (aggregate data)
    3. Programme registration (service conditional)
    4. Emergency response (immediate needs)
    5. Public health monitoring (population)
    6. Vital interests (life-saving)

  SAFEGUARDS INCREASE AS CONSENT QUALITY DECREASES ----------------->

    - Full disclosure, withdrawal option, no consequences
    - Aggregation, purpose limits, anonymisation
    - Alternative channels, extra review
    - Strict minimisation, time limits

Figure 2: Consent spectrum showing how safeguard requirements increase as consent quality decreases

At the fully informed voluntary end, individuals understand exactly what data is collected, have genuine alternatives, face no negative consequences for declining, and can withdraw consent at any time. Research participation with no connection to service delivery exemplifies this category. Safeguards beyond standard consent are minimal because consent itself provides meaningful protection.

Programme registration occupies an intermediate position. Individuals typically understand basic data collection but may not grasp how data flows through systems, who accesses it, or how long it persists. The link between registration and service access compromises voluntariness because declining means forgoing assistance. Safeguards include limiting data to programme necessities, providing alternative registration channels where feasible, and ensuring service delivery does not depend on consent to non-essential data collection.

Emergency response contexts constrain consent further. Individuals in crisis have diminished capacity to evaluate data implications. Urgency limits time for explanation. Power dynamics intensify when survival needs are immediate. Safeguards include strict minimisation (collect only data essential for immediate response), time-limited retention, and enhanced security given the heightened vulnerability of individuals in emergencies.

At the implied consent end, individual consent is absent or impossible. Public health surveillance operates on populations rather than individuals. Vital interests processing occurs when data use is necessary to protect life and seeking consent is impractical. These categories require the strongest safeguards: independent review, strict purpose limits, automatic deletion timelines, and enhanced accountability mechanisms.

The consent spectrum operates as a decision framework. Identify where a proposed data activity falls on the spectrum. Recognise that positioning determines required safeguards. Implement safeguards proportionate to consent quality. Document the reasoning for both the positioning and the safeguard selection.
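The positioning step can be made explicit with a lookup from spectrum position to required safeguards. This is a minimal sketch assuming the categories and safeguards named in the text and Figure 2; the exact mapping is an illustration, not a prescribed table.

```python
# Illustrative sketch of the consent spectrum as a lookup. Category and
# safeguard names follow the framework text; the pairing of categories
# to safeguard lists is an assumption for illustration.
SPECTRUM = [
    ("research, no service link",   ["full disclosure", "withdrawal option", "no consequences"]),
    ("donor reporting (aggregate)", ["aggregation", "purpose limits", "anonymisation"]),
    ("programme registration",      ["data limited to programme necessities", "alternative channels"]),
    ("emergency response",          ["strict minimisation", "time-limited retention", "enhanced security"]),
    ("public health monitoring",    ["independent review", "strict purpose limits"]),
    ("vital interests",             ["independent review", "automatic deletion timelines", "enhanced accountability"]),
]

def required_safeguards(category: str) -> list[str]:
    """Safeguards for one spectrum position; later positions need more."""
    for name, safeguards in SPECTRUM:
        if name == category:
            return safeguards
    raise KeyError(category)
```

Documenting which entry a proposed activity maps to, and why, satisfies the final step of the decision framework.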

Data sharing decisions

Data sharing extends organisational data obligations to external parties. Every share creates potential for harm beyond organisational control. A structured decision framework ensures sharing decisions receive appropriate scrutiny and documentation.

DATA SHARING DECISION FRAMEWORK

  Sharing proposed
      |
      v
  Is sharing necessary for the stated purpose?
      |
      +-- NO  --> Do not share; explore alternatives
      |
      +-- YES --> Can the purpose be achieved with less data?
                      |
                      v
                  Minimise to essential fields only
                      |
                      v
                  Assess recipient protection capacity
                      |
                      +-- ADEQUATE   --> Execute sharing agreement
                      |
                      +-- INADEQUATE --> Can capacity be built?
                                             |
                                             +-- YES --> Build capacity,
                                             |           then share
                                             |
                                             +-- NO  --> Do not share;
                                                         consider aggregation

Figure 3: Data sharing decision flow ensuring necessity, minimisation, and recipient capacity

The necessity assessment asks whether sharing is required to achieve a legitimate purpose. Legitimate purposes include coordinated service delivery, mandated reporting, research with ethical approval, and operational coordination. Sharing for convenience, relationship maintenance, or speculative future use fails the necessity test.

Data minimisation for sharing operates independently of collection minimisation. Even if an organisation legitimately holds extensive data, sharing should include only fields necessary for the recipient’s stated purpose. A referral to a health partner requires health-relevant information, not complete programme registration data. A donor report requires aggregate figures, not beneficiary lists.
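An allow-list per sharing purpose is one simple way to enforce this. The sketch below is hypothetical: the purpose names, field names, and the idea of keying allow-lists by purpose are illustrative assumptions, not prescribed by the framework.

```python
# Illustrative sketch: minimise a record to the fields a recipient's
# stated purpose actually needs, via an explicit allow-list per purpose.
# All purpose and field names here are hypothetical examples.
ALLOWED_FIELDS = {
    "health referral": {"name", "date_of_birth", "registration_number", "health_needs"},
    "donor report":    set(),  # donors receive aggregates, never beneficiary records
}

def minimise_for_sharing(record: dict, purpose: str) -> dict:
    """Keep only the fields allow-listed for this sharing purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}
```

A full registration record passed through `minimise_for_sharing` for a health referral keeps health-relevant fields and silently drops everything else, such as GPS coordinates; an unknown purpose yields an empty record, which fails safe.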

Recipient capacity assessment evaluates whether the receiving organisation can protect shared data adequately. Capacity includes technical security measures, policy frameworks, staff training, and accountability mechanisms. An organisation with strong data collection practices but weak security controls may be unable to protect received data. Capacity assessment is not a judgment of organisational worth but a practical evaluation of protection capability.

When recipient capacity is inadequate, sharing may proceed only if capacity can be built before transfer or if aggregation removes individual identifiability. Capacity building requires time and resources; urgent operational needs may preclude this option. Aggregation requires sufficient population size to prevent re-identification; small programmes may not support meaningful aggregation.
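The decision flow in Figure 3 can be sequenced as a small function. The judgements themselves (necessity, recipient adequacy, feasibility of capacity building) remain human assessments; the sketch only encodes the order in which Figure 3 applies them.

```python
# Illustrative sketch of the Figure 3 decision flow. Inputs are human
# judgements the framework requires; the function only sequences them.
def sharing_decision(necessary: bool, recipient_adequate: bool,
                     capacity_can_be_built: bool) -> str:
    if not necessary:
        return "do not share; explore alternatives"
    # Minimisation to essential fields happens before any transfer,
    # regardless of which branch is taken below.
    if recipient_adequate:
        return "execute sharing agreement"
    if capacity_can_be_built:
        return "build capacity, then share"
    return "do not share; consider aggregation"
```

Encoding the flow this way makes the two "do not share" outcomes explicit: one for failed necessity, one for unprotectable recipients.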

Risk assessment for data activities

Every data activity carries risk. Risk assessment identifies potential harms, evaluates their likelihood and severity, and determines whether proposed safeguards adequately address them. Assessment should occur before initiating data activities and periodically for ongoing activities.

RISK ASSESSMENT MATRIX

                                   SEVERITY
  LIKELIHOOD        Minor        Moderate     Serious      Severe
  ---------------   ----------   ----------   ----------   ----------
  Almost certain    MEDIUM       HIGH         HIGH         CRITICAL
  Likely            LOW          MEDIUM       HIGH         HIGH
  Possible          LOW          LOW          MEDIUM       HIGH
  Unlikely          NEGLIGIBLE   LOW          LOW          MEDIUM
  Rare              NEGLIGIBLE   NEGLIGIBLE   LOW          LOW

  RISK LEVEL    ACTION REQUIRED
  -----------   ----------------------------------------------------
  CRITICAL      Do not proceed without executive approval and
                extraordinary safeguards
  HIGH          Requires senior management approval and enhanced
                safeguards beyond standard practice
  MEDIUM        Proceed with documented safeguards and monitoring
  LOW           Proceed with standard safeguards
  NEGLIGIBLE    Proceed with minimal documentation

Figure 4: Risk assessment matrix mapping likelihood and severity to required actions

Severity categories reflect the magnitude of harm if a risk materialises. Minor harms include inconvenience, minor financial loss, or embarrassment with limited duration. Moderate harms include significant financial loss, reputational damage, or distress requiring support. Serious harms include physical injury, severe psychological impact, discrimination, or loss of livelihood. Severe harms include loss of life, grievous injury, persecution, or destruction of community.

Likelihood reflects probability based on threat environment, vulnerability, and historical precedent. Rare events have less than 5% probability over the assessment period. Unlikely events have 5-20% probability. Possible events have 20-50% probability. Likely events have 50-80% probability. Almost certain events exceed 80% probability.

A worked example illustrates risk assessment in practice. An organisation proposes collecting GPS coordinates of beneficiary households for programme monitoring. Severity assessment: if the coordinates were obtained by hostile actors, beneficiaries could face targeted violence, which constitutes severe harm. Likelihood assessment: the organisation operates in a conflict zone where data breaches have occurred, and government actors have previously requested programme data, so the likelihood is possible to likely. Risk level: crossing severe severity with possible or likely likelihood yields HIGH risk. Action required: senior management approval and enhanced safeguards beyond standard practice, or reconsidering whether GPS collection is necessary at all.
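The Figure 4 matrix reads naturally as a lookup table. The sketch below encodes it directly; the category names and cell values come from the figure, while the function itself is an illustrative convenience.

```python
# Illustrative sketch of the Figure 4 risk matrix as a lookup table.
# Severity columns and likelihood rows follow the figure exactly.
SEVERITY = ["minor", "moderate", "serious", "severe"]

MATRIX = {
    "almost certain": ["MEDIUM", "HIGH", "HIGH", "CRITICAL"],
    "likely":         ["LOW", "MEDIUM", "HIGH", "HIGH"],
    "possible":       ["LOW", "LOW", "MEDIUM", "HIGH"],
    "unlikely":       ["NEGLIGIBLE", "LOW", "LOW", "MEDIUM"],
    "rare":           ["NEGLIGIBLE", "NEGLIGIBLE", "LOW", "LOW"],
}

def risk_level(likelihood: str, severity: str) -> str:
    """Map a likelihood/severity pair to the matrix risk level."""
    return MATRIX[likelihood][SEVERITY.index(severity)]
```

For the GPS example, `risk_level("possible", "severe")` returns `"HIGH"`, triggering the senior-management-approval action from the table.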

Risk assessment informs but does not dictate decisions. An activity with HIGH risk may proceed if benefits are substantial and safeguards are robust. An activity with LOW risk may be rejected if benefits are marginal. The assessment provides structured input to decision-making, not an automatic outcome.

Balancing operational needs and protection

Mission-driven organisations face genuine tension between data activities that support programme effectiveness and protection obligations that constrain those activities. This tension is not resolvable through formula but requires case-by-case judgment informed by structured analysis.

OPERATIONAL VALUE VS PROTECTION RISK

                            PROTECTION RISK
  OPERATIONAL
  VALUE            LOW               MEDIUM            HIGH

  HIGH             ZONE C            ZONE D            ZONE E
                   Enhance           Careful           Extraordinary
                   safeguards;       balance           justification
                   proceed           required          required

  MEDIUM           ZONE A            ZONE B            ZONE C
                   Standard          Enhanced          Enhance
                   practice;         safeguards;       safeguards;
                   proceed           proceed           proceed

  LOW              Reconsider        ZONE A            ZONE B
                   necessity:        Standard          Enhanced
                   low-value         practice;         safeguards;
                   activities        proceed           proceed
                   warrant
                   scrutiny

Figure 5: Framework for balancing operational value against protection risk

Operational value measures how much a data activity contributes to organisational mission. High-value activities directly enable service delivery, programme quality, or life-saving response. Medium-value activities support operational efficiency, reporting, or decision-making. Low-value activities provide marginal convenience or speculative future utility.

Protection risk measures potential harm to individuals from the data activity. High-risk activities involve sensitive data, vulnerable populations, hostile threat environments, or limited safeguard options. Medium-risk activities involve personal data with standard safeguards available. Low-risk activities involve non-sensitive data with minimal harm potential.

Zone A represents the straightforward case: medium operational value, low protection risk. Standard practices suffice. Most routine data activities fall here: programme registration in stable contexts, aggregate reporting, and internal operational data.

Zone B requires enhanced safeguards beyond standard practice. Either the operational value justifies accepting moderate risk with additional protection, or lower-value activities proceed only with stronger safeguards. Examples include sharing data with vetted partners under data sharing agreements or collecting sensitive data with enhanced security controls.

Zone C applies enhanced safeguards to higher-value or higher-risk activities. The organisation accepts elevated risk because operational value justifies it, but invests in commensurate protection. Examples include biometric registration for large-scale assistance programmes or data collection in moderate-risk environments.

Zone D demands careful balance. High operational value accompanies substantial protection risk. Neither value nor risk clearly dominates. These decisions require senior review, explicit documentation of the balancing rationale, and ongoing monitoring. Examples include protection case management in conflict zones or beneficiary tracking where diversion risk is high.

Zone E requires extraordinary justification. High protection risk combines with high operational value, but the risk level demands exceptional scrutiny. Proceeding requires executive approval, exhaustive safeguard implementation, and clear articulation of why the operational need outweighs the protection risk. These cases should be rare.

Low operational value combined with any significant risk warrants reconsideration. If an activity provides marginal benefit, even moderate risk may be unjustified. The impulse to collect data “because we can” or “in case it’s useful” fails when subjected to value-risk balancing.
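The Figure 5 grid can also be encoded as a lookup. This is an illustrative sketch: the zone letters and actions come from the figure, while representing the low-value/low-risk cell as a zoneless "reconsider necessity" outcome is an interpretive assumption.

```python
# Illustrative sketch of the Figure 5 value-risk grid. Keys are
# (operational value, protection risk); values are (zone, action).
# The zoneless low/low cell is modelled as None per the figure text.
ZONES = {
    ("high", "low"):      ("C", "enhance safeguards; proceed"),
    ("high", "medium"):   ("D", "careful balance required"),
    ("high", "high"):     ("E", "extraordinary justification required"),
    ("medium", "low"):    ("A", "standard practice; proceed"),
    ("medium", "medium"): ("B", "enhanced safeguards; proceed"),
    ("medium", "high"):   ("C", "enhance safeguards; proceed"),
    ("low", "low"):       (None, "reconsider necessity"),
    ("low", "medium"):    ("A", "standard practice; proceed"),
    ("low", "high"):      ("B", "enhanced safeguards; proceed"),
}

def zone(value: str, risk: str) -> tuple:
    """Look up the zone and required action for a value/risk pairing."""
    return ZONES[(value, risk)]
```

As the text notes, the lookup is an input to judgment, not a substitute for it: a Zone D or E result signals the level of review required, not an automatic verdict.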

Community engagement

Data activities affect communities as well as individuals. Community engagement ensures that data practices reflect community perspectives, address community concerns, and provide mechanisms for community input. Engagement is not consultation theatre but substantive involvement in decisions that affect communities.

Engagement mechanisms vary by context and community. Focus groups provide qualitative input on proposed data activities. Community advisory committees offer ongoing oversight. Feedback hotlines enable individual concerns to surface. Public information sessions explain data practices. The appropriate mechanism depends on community structure, communication norms, and the nature of the data activity.

Engagement timing matters. Consultation after decisions are finalised reduces engagement to notification. Meaningful engagement occurs during design phases when input can shape outcomes. For ongoing activities, periodic engagement ensures practices remain aligned with evolving community perspectives.

Engagement content should address questions communities actually have. How is data protected? Who can access it? How long is it kept? What happens if something goes wrong? Can individuals see their data? Can they correct errors? Can they request deletion? Technical explanations should be translated into accessible language without condescension.

Power dynamics affect engagement quality. Community members may tell organisations what they think organisations want to hear. Anonymity mechanisms, third-party facilitation, and multiple engagement channels help surface genuine concerns. Engagement should not assume community homogeneity; different groups within communities may have different perspectives.

Engagement must have consequences. If community input is routinely ignored, engagement becomes extractive rather than participatory. Organisations should explain how input influenced decisions, including when input did not change the decision and why. Closing the feedback loop demonstrates that engagement matters.

Data rights of affected communities

Individuals and communities possess rights regarding data about them. These rights exist independently of legal frameworks, though legal frameworks may codify and enforce them. Recognising data rights shifts the framing from organisational permission to individual entitlement.

The right to information means individuals can know what data is held about them, why it was collected, how it is used, and who has accessed it. Exercising this right requires organisations to maintain records adequate to respond to inquiries and to provide information in accessible formats.

The right to access enables individuals to obtain copies of their personal data. Access requests should be fulfilled within 30 days. Data should be provided in commonly readable formats. Where data is held in systems that make extraction difficult, organisations should invest in access capabilities rather than treating technical constraints as justification for denying rights.

The right to correction allows individuals to fix inaccurate data. Correction requests should be evaluated promptly, and where corrections are made, downstream systems and sharing partners should be notified. Where organisations disagree with requested corrections, individuals should be able to record their disagreement alongside the data.

The right to erasure permits individuals to request deletion of their data when retention is no longer necessary, consent is withdrawn, or data was collected unlawfully. Erasure may be limited by legal obligations, legitimate interests, or technical constraints, but limitations should be exceptions rather than defaults.

The right to object allows individuals to challenge specific uses of their data, particularly automated decision-making and profiling. Where objections are sustained, the challenged use should cease. Where objections are rejected, reasons should be documented and communicated.

The right to portability enables individuals to transfer their data to other organisations in structured, commonly used formats. Portability supports individual agency by reducing lock-in to specific service providers.

These rights require operationalisation. Organisations must have processes to receive, track, and respond to rights requests. Staff must be trained to recognise and route rights requests. Systems must support the technical operations rights require. Response timelines must be monitored and met.

Implementation considerations

Implementation of a responsible data framework varies with organisational capacity, operating context, and data activity profile. The guidance below addresses implementation across different situations.

Organisations with limited resources

Organisations with minimal IT or data management capacity can implement responsible data practices through prioritisation and simplification. Focus on the highest-risk data activities first. Beneficiary data, protection data, and data collected in high-risk environments warrant attention before routine operational data.

Documentation need not be elaborate. A simple register of data activities, their purposes, and their safeguards provides accountability without bureaucratic overhead. A one-page data sharing checklist ensures sharing decisions receive consistent scrutiny. A basic consent script ensures individuals receive key information even without comprehensive privacy notices.
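Even a spreadsheet-compatible CSV file satisfies the register described here. The sketch below is illustrative: the column names and example activities are hypothetical, chosen to match the register's three elements (activity, purpose, safeguards).

```python
# Illustrative sketch of a minimal data-activity register written to
# CSV. Column names and example rows are hypothetical.
import csv

REGISTER = [
    {"activity": "programme registration", "purpose": "service delivery",
     "safeguards": "access controls; encrypted storage"},
    {"activity": "monitoring survey", "purpose": "programme quality",
     "safeguards": "aggregation before reporting"},
]

def write_register(path: str, rows: list[dict]) -> None:
    """Write the register as a CSV with one row per data activity."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["activity", "purpose", "safeguards"])
        writer.writeheader()
        writer.writerows(rows)
```

A file like this, kept current, provides the accountability trail without any dedicated governance tooling.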

Training can be integrated into existing staff development rather than requiring dedicated data protection courses. Brief modules covering core principles, common scenarios, and escalation paths equip staff to handle routine situations and recognise when to seek guidance.

Technical safeguards should match capacity. Encryption, access controls, and backup procedures provide meaningful protection even without sophisticated data governance infrastructure. Cloud services with built-in security features may offer better protection than on-premises systems that organisations cannot adequately secure.

Organisations with established data functions

Organisations with dedicated data or IT staff can implement more comprehensive frameworks. Data governance structures should include explicit responsible data accountability, either within existing roles or through dedicated positions. Data stewards should receive training on ethical dimensions of their stewardship function.

Systematic data inventory enables framework application across all data activities. Cataloguing data assets, their purposes, retention periods, and sharing arrangements provides the foundation for lifecycle management. Automated classification tools can assist with large data estates.

Impact assessments should be conducted for high-risk data activities. Assessment templates guide consistent evaluation of necessity, proportionality, and safeguards. Thresholds determine which activities require assessment, balancing thoroughness against overhead.

Monitoring and audit verify framework implementation. Regular reviews of consent practices, sharing activities, and retention compliance identify drift from intended practices. Metrics such as rights request response times, consent rate variations, and data quality indicators provide operational visibility.

High-risk operating contexts

Organisations operating in conflict zones, authoritarian environments, or contexts with elevated surveillance face intensified implementation requirements. Threat modelling should explicitly consider hostile state actors, armed groups, and criminal enterprises as potential adversaries.

Data minimisation becomes more aggressive in high-risk contexts. The question shifts from “what data would be useful” to “what data is essential, and can we defend holding it if challenged.” Collecting data that would endanger individuals if compromised is itself a harm, regardless of whether compromise occurs.

Technical security must match threat sophistication. Encryption standards, access controls, and operational security practices should assume capable adversaries. External security assessments provide independent evaluation of organisational defences.

Contingency planning addresses scenarios where data must be destroyed or where staff may face coercion. Pre-positioned destruction capabilities, secure communication channels, and protocol training prepare for crisis scenarios. These preparations must themselves be secure; discovery of destruction capabilities or security protocols may trigger the crises they are designed to address.

Data sharing agreement template

The following template provides a starting structure for data sharing agreements. Organisations should adapt the template to their specific context, legal requirements, and risk profile. Legal review is advisable before finalising agreements.


DATA SHARING AGREEMENT

Between:

[Organisation A name and address] (the “Disclosing Party”)

and

[Organisation B name and address] (the “Receiving Party”)

1. Purpose

This Agreement governs the sharing of personal data between the parties for the following purpose: [Specify the exact purpose for which data is shared, e.g., “coordinated delivery of health and nutrition services to registered beneficiaries in [location]”].

Data shared under this Agreement shall not be used for any purpose other than that specified above without prior written consent from the Disclosing Party.

2. Data description

The following categories of personal data will be shared: [List specific data fields, e.g., “Name, date of birth, household location (village level only), registration number, and programme enrolment status”].

The following categories of data are explicitly excluded from this Agreement: [List excluded fields, e.g., “GPS coordinates, health records, protection case information, and biometric data”].

The estimated number of data subjects is: [Number or range].

3. Legal basis

The Disclosing Party confirms that data was collected with appropriate consent or legal basis for sharing as described in this Agreement.

The legal basis for this sharing is: [Specify, e.g., “consent obtained at registration for inter-agency coordination” or “legitimate interests of the data subjects in receiving coordinated services”].

4. Receiving Party obligations

The Receiving Party shall:

a) Process shared data only for the purpose specified in Section 1.

b) Implement technical and organisational security measures at least equivalent to [specify standard, e.g., “ISO 27001 controls” or “the measures detailed in Annex A”].

c) Limit access to shared data to personnel with a legitimate need to access it for the stated purpose.

d) Not share data with any third party without prior written consent from the Disclosing Party.

e) Notify the Disclosing Party within [48/72] hours of any actual or suspected data breach affecting shared data.

f) Delete or return all shared data within [30/60] days of the termination of this Agreement or upon request from the Disclosing Party.

g) Permit the Disclosing Party or its designated auditor to verify compliance with this Agreement upon reasonable notice.

5. Data subject rights

Both parties shall cooperate to respond to data subject rights requests, including access, correction, and erasure requests. The Disclosing Party shall serve as the primary contact point for data subject inquiries regarding shared data.

6. Duration

This Agreement shall remain in force from [start date] until [end date/event], unless terminated earlier by either party with [30] days written notice.

7. Liability

Each party shall be liable for data protection breaches caused by its own acts or omissions. [Expand as appropriate for jurisdiction and risk level.]

8. Governing law

This Agreement shall be governed by the laws of [jurisdiction].

9. Signatures

For the Disclosing Party:

Name: _______________ Title: _______________ Date: _______________

For the Receiving Party:

Name: _______________ Title: _______________ Date: _______________

Annexes:

Annex A: Security measures required
Annex B: Data field specifications
Annex C: Consent documentation

