Responsible AI Policy

Effective March 1, 2026

This Responsible AI Policy ("Policy") sets out the principles, governance structures, and operational safeguards that Subduxion B.V. ("Subduxion", "we", or "our") applies to the development, deployment, and continuous monitoring of artificial intelligence systems within the Blake platform (the "Services").

Subduxion is committed to the responsible and ethical use of AI in commercial settings. This Policy reflects our obligations under Regulation (EU) 2024/1689 (the "EU AI Act"), Regulation (EU) 2016/679 (the "GDPR"), and the OECD Principles on Artificial Intelligence, and is informed by the guidelines issued by the European Commission's High-Level Expert Group on Artificial Intelligence ("AI HLEG").

1. Scope and Classification

This Policy applies to all AI systems integrated into the Services, including systems used for lead identification, prospect scoring, content generation, behavioural pattern analysis, and predictive analytics.

Subduxion has conducted a risk classification assessment of its AI systems in accordance with Chapter III of the EU AI Act. Based on this assessment, the AI systems deployed within the Services are classified as limited-risk AI systems subject to the transparency obligations of Article 50 of the EU AI Act. None of the AI systems used in the Services are classified as high-risk within the meaning of Annex III of the EU AI Act. Subduxion does not deploy prohibited AI practices as defined in Article 5 of the EU AI Act.

This classification is subject to periodic review and will be updated in response to changes in the regulatory framework, the deployment context, or the capabilities of the underlying systems.

2. Governing Principles

The following principles guide our approach to AI development and deployment:

  • Human Agency and Oversight. AI systems within the Services are designed to augment human decision-making, not to replace it. All AI-generated outputs, including prospect recommendations, outreach content, and scoring assessments, are subject to human review and approval before any action with external effect is taken. Users retain full authority to accept, modify, or reject any AI recommendation.
  • Transparency. We are committed to ensuring that users understand when they are interacting with AI-generated content and how AI influences the information presented to them. AI-generated outputs are identified as such within the platform interface. Users may request information about the general logic and parameters underlying AI-driven features.
  • Fairness and Non-Discrimination. Subduxion designs, tests, and monitors its AI systems to minimise the risk of unfair bias in outputs. We employ structured evaluation procedures to identify and mitigate bias related to protected characteristics as defined in Directive 2000/78/EC and national equality legislation.
  • Accuracy and Reliability. We apply rigorous testing, validation, and monitoring procedures to ensure that AI systems perform consistently and reliably. AI outputs are probabilistic in nature; Subduxion does not warrant their accuracy, and users are responsible for verifying outputs before relying on them.
  • Privacy and Data Governance. All AI processing is conducted in compliance with the GDPR and our Privacy Policy. Personal data used for AI functionality is processed on a lawful basis, with appropriate safeguards including data minimisation, purpose limitation, and pseudonymisation where feasible.
  • Security and Resilience. AI systems are subject to the same security standards as all other components of the Services, including access controls, encryption, monitoring, and incident response procedures as described in our Security Policy.
  • Accountability. Subduxion maintains internal governance structures to ensure accountability for AI-related decisions. Responsibility for AI governance is assigned at the management level, with defined escalation procedures for incidents, complaints, and regulatory inquiries.

3. AI Governance Framework

3.1 Internal Governance

Subduxion maintains an AI governance function responsible for:

  • Conducting and maintaining risk assessments for all AI systems in accordance with the EU AI Act.
  • Defining and enforcing internal standards for AI development, testing, and deployment.
  • Reviewing and approving changes to AI systems that materially affect their behaviour, outputs, or risk profile.
  • Monitoring AI system performance and investigating anomalies, bias, or complaints.
  • Maintaining documentation of AI system design decisions, training data governance, and evaluation results.
  • Coordinating with legal, data protection, and product functions on AI-related matters.

3.2 Risk Assessment

Prior to deployment or material modification of any AI system, Subduxion conducts a structured risk assessment that evaluates:

  • The intended purpose and deployment context of the system.
  • The categories of persons affected and the potential impact on their rights and interests.
  • The risk of bias, inaccuracy, or unintended consequences.
  • The adequacy of human oversight mechanisms.
  • Compliance with applicable legal requirements, including the EU AI Act, GDPR, and the ePrivacy Directive.

4. Data Governance for AI

The quality and governance of data used in AI systems are fundamental to responsible AI. Subduxion applies the following data governance practices:

  • Training Data. AI models are developed using carefully curated datasets. Subduxion does not use individual customer data to train models shared across customers without explicit, informed consent. Where customer data is used for model improvement, it is aggregated and anonymised in accordance with Recital 26 GDPR.
  • Data Quality. Subduxion implements procedures to assess and maintain the quality, relevance, and representativeness of data used in AI systems, with particular attention to the identification and mitigation of bias in training data.
  • Data Minimisation. AI systems are designed to process only the data necessary for their intended purpose, in accordance with Article 5(1)(c) GDPR.
  • Retention. Data processed by AI systems is retained only for the period necessary for the stated purpose, subject to the retention periods specified in our Privacy Policy.

5. Transparency Obligations

In accordance with Article 50 of the EU AI Act and our commitment to transparency, Subduxion ensures that:

  • Users are informed that AI systems are used within the Services and are provided with a general description of their functionality.
  • AI-generated content (including outreach drafts, summaries, and scoring) is clearly identified as AI-generated within the platform interface.
  • Users are informed of the general logic underlying AI-driven features upon request, without disclosure of proprietary methods, model architectures, or trade secrets.
  • Where AI-generated content is intended for communication with third parties, it is the user's responsibility to ensure appropriate disclosure in accordance with applicable law.

6. Human Oversight

Subduxion implements the following human oversight measures:

  • All AI-generated outreach content (emails, messages, call scripts) requires explicit user approval before transmission. The Services do not autonomously send communications to third parties.
  • Lead scoring and prospect recommendations are presented as decision-support tools. Scoring criteria are documented and available to users. Users may override any AI-generated score or recommendation.
  • Automated workflows that incorporate AI outputs include configurable checkpoints where human review is required before proceeding.
  • Users may disable specific AI features at the workspace or individual level without affecting access to the core functionality of the Services.

7. Bias Monitoring and Mitigation

Subduxion recognises the risk that AI systems may reflect or amplify biases present in training data or system design. To address this risk, Subduxion:

  • Conducts periodic bias audits of AI outputs, with particular attention to outcomes that may disproportionately affect individuals on the basis of protected characteristics.
  • Maintains evaluation datasets designed to test for common forms of bias in language generation and scoring systems.
  • Documents and tracks bias-related findings and the corrective actions taken in response.
  • Provides a reporting mechanism for users who identify potentially biased or discriminatory AI outputs (legal@subduxion.com).

8. Prohibited Uses

The following uses of AI within the Services are expressly prohibited:

  • Social scoring of individuals or the assessment of individuals based on personal characteristics unrelated to legitimate commercial criteria.
  • Real-time biometric identification or emotional recognition of individuals.
  • Manipulation of individuals through subliminal techniques or exploitation of vulnerabilities.
  • Automated decision-making that produces legal effects or similarly significant effects on individuals without meaningful human intervention, within the meaning of Article 22 GDPR.
  • Processing of AI outputs in a manner that circumvents or undermines data protection rights.
  • Use of AI to generate content that is intentionally misleading about its AI-generated nature in contexts where disclosure is required by law.

9. Incident Management

Subduxion maintains procedures for the identification, investigation, and remediation of AI-related incidents, including:

  • AI systems producing systematically inaccurate, biased, or harmful outputs.
  • Unintended data exposure through AI-generated content.
  • Failures in human oversight mechanisms.
  • Complaints from users or third parties regarding AI behaviour.

AI incidents are subject to the incident response procedures described in our Security Policy. Where an AI incident involves Personal Data, the breach notification procedures in our Data Processing Agreement apply.

10. Regulatory Engagement

Subduxion is committed to constructive engagement with regulatory authorities on AI governance matters. We:

  • Monitor developments in EU and national AI regulation and update our practices accordingly.
  • Will register AI systems with the EU database established under Article 71 of the EU AI Act when and to the extent required.
  • Cooperate with supervisory authorities in the exercise of their functions under the EU AI Act and the GDPR.
  • Participate in industry initiatives and standards development related to responsible AI, including ISO/IEC 42001 (AI Management Systems) and the forthcoming CEN/CENELEC harmonised standards under the EU AI Act.

11. Review and Updates

This Policy is reviewed at least annually and updated as necessary to reflect changes in our AI systems, applicable law, regulatory guidance, or industry best practices. Material changes will be communicated through the Services or by notice to the primary contact associated with your account.

12. Contact

For questions, concerns, or reports relating to the responsible use of AI within the Services, please contact:

Subduxion B.V.
Attn: AI Governance
High Tech Campus 5
5656 AE Eindhoven
The Netherlands
Email: legal@subduxion.com
