How Affinity uses AI

This article explains how Affinity uses artificial intelligence, what data it processes, and how you maintain control over AI-generated journeys. Affinity is designed with transparency and human oversight as core principles.

AI models and technologies

Affinity is an AI-first feature designed to assist you in building marketing automations. It helps create journeys (including segmentation), generate and assemble email content, and insert personalization elements.

Multi-agent architecture

Affinity uses multiple AI agents that collaborate to deliver journey creation capabilities. These agents work together, leveraging other Bloomreach features.

Base models

The system primarily relies on Gemini models from Google DeepMind, with certain components using OpenAI models. Both providers apply model-level safety and bias-mitigation mechanisms, which serve as the foundation for responsible AI use within Affinity.

Self-optimization functionality

Affinity includes self-optimization functionality that uses AI and data-driven algorithms to automatically adjust the duration of wait nodes within a journey for each customer. This dynamic optimization analyzes journey performance to maximize engagement or conversions without requiring manual input, and only uses data collected within a single project.
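The idea behind this self-optimization can be sketched as follows. This is a conceptual illustration only, not Bloomreach's actual algorithm; the data structure and function names are hypothetical:

```python
# Conceptual sketch only -- not Bloomreach's actual optimization algorithm.
# Chooses the wait duration with the best observed conversion rate,
# using only performance data collected within a single project.
def best_wait_duration(project_stats):
    """project_stats maps wait duration (hours) -> (conversions, sends)."""
    def rate(item):
        hours, (conversions, sends) = item
        return conversions / sends if sends else 0.0
    # Pick the duration whose historical conversion rate is highest.
    return max(project_stats.items(), key=rate)[0]

# Hypothetical single-project data: three candidate wait durations.
stats = {4: (12, 200), 24: (30, 210), 48: (22, 205)}
chosen = best_wait_duration(stats)
# chosen == 24: the 24-hour wait converted best in this project's data
```

In practice such a system would update these statistics continuously and adjust per customer, but the core idea is the same: the wait duration is selected from journey performance data rather than set manually.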

Data usage and privacy

Default PII handling

By default, Affinity doesn't access or process personally identifiable information (PII) in connection with customer profiles. The system can process customer data in anonymized and aggregated form, but never in direct connection with a specific customer profile.

For example, if the Loomi Analytics Assistant is prompted to create a gender segmentation, it accesses the list of values for customer attributes and returns the aggregated number of customers matching the segmentation. Importantly, this data is anonymized, meaning individual identities are never exposed.
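The aggregation step above can be sketched in a few lines. This is a simplified, hypothetical illustration (the function and sample profiles are not part of the product) showing why aggregated counts expose no identities:

```python
from collections import Counter

# Hypothetical, simplified illustration: the assistant reads only
# attribute values and returns aggregate counts, never customer identities.
def aggregate_segment_counts(profiles, attribute):
    """Return anonymized, aggregated counts for one customer attribute."""
    # Only the attribute value is read from each profile; IDs are ignored.
    values = (p.get(attribute, "unknown") for p in profiles)
    return dict(Counter(values))

profiles = [
    {"id": "c1", "gender": "female"},
    {"id": "c2", "gender": "male"},
    {"id": "c3", "gender": "female"},
]
counts = aggregate_segment_counts(profiles, "gender")
# counts == {"female": 2, "male": 1} -- no customer IDs appear in the result
```

The output contains only value-to-count pairs, so individual customers cannot be identified from it.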

User-submitted data

If you include personal data in a prompt, the large language models (LLMs) will process that information as part of fulfilling the request. It's your responsibility to ensure that any personal data you share is handled properly and in accordance with relevant laws and regulations.

Data isolation

User-provided data isn't used to train or improve the model for other clients, and no cross-client data sharing occurs.

Permissions for personal data

Affinity follows the same data access and privacy model as existing Bloomreach Engagement email campaign functionality. Only users with Personal Data Viewer permissions are able to see personal data.

Potential sources of bias

Bias may arise from:

  • Model training data: Can reflect societal or cultural biases.
  • Customer datasets: May include attributes such as skin tone, gender, or audience segmentation.
  • User prompts: Phrasing or context can influence the tone or fairness of generated content.

As a result, outputs may occasionally contain unintentional bias or unbalanced language.

Mitigation measures

Affinity implements multiple safeguards:

Built-in safety features

Affinity relies on Gemini's and OpenAI's built-in safety and fairness mechanisms, which filter harmful content during generation.

Gemini safety filters

Affinity's primary AI provider, Gemini, includes built-in safety filters across four categories:

  • Harassment: Filters negative or harmful comments targeting identity or protected attributes
  • Hate speech: Blocks content that's rude, disrespectful, or profane
  • Sexually explicit content: Restricts references to sexual acts or lewd content
  • Dangerous content: Prevents promotion or facilitation of harmful acts

Each category assigns content a probability rating (high, medium, low, or negligible). By default, Gemini blocks content with medium or higher probability of being unsafe.
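The threshold behavior described above can be sketched as follows. The category and rating names mirror the Gemini safety settings, but the decision logic here is a simplified illustration, not the provider's code:

```python
# Illustrative only: how a "block at medium or above" threshold works.
# Ratings are ordered from least to most likely to be unsafe.
PROBABILITY_ORDER = ["negligible", "low", "medium", "high"]

def is_blocked(probability, threshold="medium"):
    """Block content whose unsafe probability meets or exceeds the threshold."""
    return PROBABILITY_ORDER.index(probability) >= PROBABILITY_ORDER.index(threshold)

# Hypothetical per-category ratings for one piece of generated content.
ratings = {"harassment": "low", "hate_speech": "medium"}
blocked = [category for category, p in ratings.items() if is_blocked(p)]
# blocked == ["hate_speech"]: only ratings of medium or higher are filtered
```

With the default threshold, a "low" or "negligible" rating passes through, while "medium" and "high" are blocked, which matches the default behavior described above.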

Learn more about Gemini safety settings.

Mandatory human review

You must review all AI-generated journey content from Affinity before it's sent or published.

User education

Affinity informs users within product documentation about potential bias risks and the importance of reviewing outputs before use.

Your responsibilities

You're responsible for ensuring that generated content is fair, accurate, and compliant with applicable laws and ethical standards. Before publishing or sending, you should:

  • Review outputs for potential bias or inappropriate content.
  • Avoid prompts or inputs that could reinforce stereotypes or discriminatory language.
  • Ensure any personal data included in prompts is handled appropriately.

❗️ Important

Always review and confirm all journey content before launch. You're responsible for accuracy, tone, and compliance.

Disclaimer

While Bloomreach and its model providers take measures to promote fairness, safety, and data protection, no AI system can guarantee complete neutrality or accuracy. You should always apply human oversight before publishing or using content generated by machines. Review all journey elements in the Canvas before launch, as explained in Editing, refining, and launching.