Audit Log
Bloomreach treats security as a top priority and provides customers with audit logs to monitor all user activity in Bloomreach Engagement. Audit logs help you detect and investigate suspicious activity by recording user actions.
This guide explains what audit logs are, how Bloomreach structures them, and how you can use them to maintain security and compliance in your organization.
Note
Bloomreach currently offers audit logs for Bloomreach Engagement and Data Hub. To learn how to access audit logs for both products, see the Audit logs implementation guide.
What is an audit log?
An audit log records user activity chronologically, helping you investigate unauthorized access, data leaks, or GDPR incidents.
Audit logs answer four key questions:
- Who? — The actor: public access, an authenticated user, or an API key.
- Did What? — A unique combination of request method and path. Some records include resource references and service data.
- When? — A timestamp when the action occurred.
- Where? — The application scope (cloud organization, workspace, or project), service name, and referrer URL for browser actions.
Organizational structure
The audit log organizational structure differs between Data Hub and Engagement.
- Data Hub uses a hierarchical structure with a cloud organization at the top to organize your resources.
- The Engagement audit log structure is simpler and follows the account > project structure.
Understanding this hierarchy is essential for navigating how audit logs are organized and stored.
Organizational structure for Data Hub
```
Cloud Organization
├── Workspace (EU region)
│   ├── Engagement
│   │   ├── Project 1
│   │   └── Project 2
│   └── Discovery
│       ├── Account 1
│       └── Account 2
└── Workspace (US region)
    ├── Engagement
    │   ├── Project 1
    │   └── Project 2
    └── Discovery
        ├── Account 1
        └── Account 2
```
Cloud organization for Data Hub
Your company's top-level container represents your entire relationship with Bloomreach. One cloud organization can contain multiple workspaces in different regions.
Important
All audit logs for your organization are consolidated at the cloud organization level, regardless of which workspace or project generated them.
Workspace
A container within your cloud organization tied to a single geographic or regulatory region (for example, EU or US). Multiple Engagement projects and Discovery accounts can link to a single workspace.
Workspaces help separate your data by region for compliance purposes. For example, GDPR-related data might be stored in an EU workspace, while other data resides in a US workspace.
Projects and accounts
Engagement Project: Used to differentiate between businesses or business units. Each project has its own project token (ID) for event tracking. Projects are independent: each has its own customers, events, analyses, and campaigns.
Discovery Account: A container used to segment business units within Discovery. A Bloomreach customer can have multiple accounts under a single workspace.
Note
For more details on the organizational structure, see the Unified user management overview.
How this affects audit logs
Understanding your organizational structure helps you:
- Locate the correct audit logs for specific projects or workspaces.
- Understand log consolidation (all logs appear under your Cloud Organization folder).
- Identify the scope of user actions (whether they affected a project, workspace, or organization).
- Plan your log retention and analysis strategy.
Audit log storage architecture
Bloomreach stores audit logs in Google Cloud Storage (GCS) with a structured, time-based organization that makes it easy to locate and retrieve specific events.
Regional storage
Audit logs are stored in regional GCS buckets. Each region (EU, US, etc.) has a dedicated storage bucket containing audit logs for all Cloud Organizations in that region.
Key principle: Logs are stored in the region where the activity occurred, supporting data residency and compliance requirements.
Folder structure
Within each regional bucket, logs are organized hierarchically by Cloud Organization and time:
```
{regional-bucket}/
└── cloud-org-{your-cloud-org-id}/
    └── {YYYY}/
        └── {MM}/
            └── {DD}/
                └── {HH}/
                    └── {timestamp}-{index}.jsonl.gz
```
| Component | Description |
|---|---|
| Regional bucket | The GCS bucket name (e.g., `us-auditlog-storage` or `eu-auditlog-storage`) |
| Cloud Organization folder | Your unique Cloud Organization ID creates a dedicated folder containing all your logs |
| Time hierarchy | Logs are organized by year (YYYY), month (MM), day (DD), and hour (HH) in the UTC time zone |
| Log files | Individual files use a timestamp and index in the format `YYYYMMDDTHHMMSS-{index}.jsonl.gz` |
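As a sketch of the layout above, the following Python helper builds the hourly object prefix for a given cloud organization and UTC time. The function name and the `cloud_org_id` parameter are illustrative; only the `cloud-org-{id}/{YYYY}/{MM}/{DD}/{HH}/` layout comes from the table.

```python
from datetime import datetime, timezone

def audit_log_prefix(cloud_org_id: str, when: datetime) -> str:
    """Build the GCS object prefix for one hour of audit logs (UTC)."""
    utc = when.astimezone(timezone.utc)
    return f"cloud-org-{cloud_org_id}/{utc:%Y}/{utc:%m}/{utc:%d}/{utc:%H}/"

# Example: list objects under this prefix with your GCS client of choice.
prefix = audit_log_prefix("abc123", datetime(2022, 4, 6, 13, 5, tzinfo=timezone.utc))
print(prefix)  # cloud-org-abc123/2022/04/06/13/
```

Passing an aware `datetime` (as above) avoids surprises: `astimezone` on a naive value would assume local time.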
File format
Audit log files use gzipped JSON Lines format (.jsonl.gz):
- Each line contains exactly one complete audit log record.
- Records are in JSON format for easy parsing.
- Files are compressed with gzip to reduce storage and transfer costs.
- Each file represents approximately one hour of activity.
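Assuming the format described above, a minimal Python reader can stream records one at a time without decompressing the whole file into memory (the helper name is illustrative):

```python
import gzip
import json

def read_audit_records(path):
    """Yield one parsed audit record per line from a .jsonl.gz file."""
    with gzip.open(path, mode="rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # skip blank lines defensively
                yield json.loads(line)
```

Because each line is a complete record, this generator works equally well on a single hourly file or piped over every file under a day's prefix.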
Key storage principles
- Consolidated logs: Your Cloud Organization folder consolidates all audit logs from all your Engagement Projects across all Workspaces.
- Hourly organization: The system organizes logs by hour (UTC), making it easier to locate specific events within a timeframe.
- Immutable storage: Audit logs are append-only: you can read the data, but you can't edit or delete it, which prevents tampering.
- Retention policy: Files are available for download for at least 60 days. After this period, the system moves them to archive storage where they are no longer accessible for download.
Audit log schema
Log records follow a predefined schema using JSON format. The top-level schema is fixed, allowing you to import audit logs into Security Information and Event Management (SIEM) systems such as Splunk, Graylog, or Kibana.
Fixed fields
Fixed fields are always present in every audit log record. These fields are reliable for indexing and alerting in SIEM systems.
| Field | Description |
|---|---|
| `timestamp` | ISO 8601 datetime in UTC when the operation occurred |
| `request.method` | HTTP method used |
| `request.path` | API endpoint or resource path |
| `status` | HTTP status code |
| `serviceName` | Internal service name within the Bloomreach architecture |
| `scopeType` | Level where the action occurred |
| `scopeID` | Unique identifier for the project, account, or organization |
| `requestID` | Unique ID for correlating related records |
Note
Scope-related fields (`scopeType`, `scopeID`) appear only for methods that require authorization. These fields don't appear for public methods or methods that require only authentication.
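If you index these records yourself, a small sanity check on the fixed fields can catch malformed input before it reaches your SIEM. This is an illustrative sketch based on the table above, not an official validator:

```python
# Fields the schema table lists as always present (scope fields excluded:
# they are absent for public methods and authentication-only methods).
REQUIRED_FIXED_FIELDS = ["timestamp", "status", "serviceName", "requestID"]

def missing_fixed_fields(record: dict) -> list:
    """Return the names of fixed fields absent from a parsed record."""
    missing = [f for f in REQUIRED_FIXED_FIELDS if f not in record]
    request = record.get("request", {})
    for sub in ("method", "path"):
        if sub not in request:
            missing.append(f"request.{sub}")
    return missing
```

An empty return value means the record carries every field that is safe to index on.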
Optional fields
Optional fields appear only for certain operations, providing additional context when relevant.
Authentication information
| Field | Description |
|---|---|
| `identity` | The actor performing the operation (email address, API key identifier, or system account) |
| `type` | Actor type: USER, SYSTEM_SERVICE_ACCOUNT, or BASIC_AUTH |
Authorization information
| Field | Description |
|---|---|
| `allowed` | Whether the action was authorized (`true`/`false`) |
| `permission` | The specific permission required for operations that need authorization |
Metadata
| Field | Description |
|---|---|
| `clientIP` | Remote IP address where the request originated |
| `host` | Hostname (domain) for external requests |
| `referrer` | Browser-reported document referrer (useful for identifying the origin screen) |
| `session` | Session ID for requests within an authenticated user session |
| `userAgent` | Browser-reported User-Agent string |
Resource ID
| Field | Description |
|---|---|
| `resource_id` | Resource identifier for operations on specific resources with internal IDs (customer, scenario, banner, experiment). Useful for filtering logs related to specific resources |
ServiceData types
ServiceData provides additional context specific to the service or operation. There are three types:
1. GenericServiceData
Provides additional informational data. The info field isn't fixed and may change over time.
| Field | Description |
|---|---|
| `@type` | `auditlog.GenericServiceData` |
| `info` | JSON-encoded string with operation-specific details (the schema isn't fixed) |
| `versionID` | Version identifier for resources with versioning history (reports, scenarios) |
2. PolicyChange
Records changes to user permissions and roles.
| Field | Description |
|---|---|
| `@type` | `iam.PolicyUpdate` |
| `policyChanges` | List of role changes with `member`, `roleId`, and `type` (ADD, UPDATE, REMOVE) |
| `oldExpireTime` | Previous role expiration timestamp (if applicable) |
| `newExpireTime` | New role expiration timestamp (if applicable) |
| `updateTime` | When the policy change occurred |
| `userId` | User who made the change |
3. AnonymizationServiceData
Records customer data anonymization requests.
| Field | Description |
|---|---|
| `@type` | `auditlog.AnonymizationServiceData` |
| `requests` | List of anonymization requests with customer IDs |
| `ids` | Customer identifiers: `name` is the ID type (e.g., "registered", "_id"), `value` is the actual identifier |
Understanding log records
Correlating multiple records
Some actions generate multiple records on different levels of the application. Different components add component-specific information as service data. The system generates a unique request ID at the first touchpoint and propagates it to subsequent events.
Use case: When a user updates a scenario, you might see records from the portal application, the workflow engine, and the data processing service—all sharing the same requestID.
How to correlate: Filter logs by requestID to see all records related to a single user action.
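As a sketch, grouping parsed records by `requestID` in Python might look like this (the helper name is illustrative):

```python
from collections import defaultdict

def group_by_request(records):
    """Group audit records that share a requestID (one user action)."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec.get("requestID", "<none>")].append(rec)
    return dict(groups)
```

Each resulting group shows the same user action as seen by every component that logged it (for example, the portal application and the workflow engine).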
Identifying scope
All events contain fields that define the layer where the system created the event, connecting it with a specific project, account, or instance.
Project-level action:

```
"scopeType": "PROJECT",
"scopeID": "a25641b8-dd39-11ea-b199-ae02a0152881"  // project token
```

Account/Workspace-level action:

```
"scopeType": "ACCOUNT",
"scopeID": "workspace-abc123"  // account or workspace ID
```

Cloud Organization-level action:

```
"scopeType": "CLOUD_ORGANIZATION",
"scopeID": "org-xyz789"  // cloud organization ID
```

Instance-level action:

```
"scopeType": "INSTANCE"
// no scopeID field
```
Use case: Filter by scopeType and scopeID to analyze activity within a specific project or workspace.
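A minimal sketch of such a filter, handling the INSTANCE case where `scopeID` is absent (the function name is illustrative):

```python
def in_scope(record, scope_type, scope_id=None):
    """True if a record matches a scope level and, optionally, a scope ID.

    INSTANCE-level records carry no scopeID, so scope_id is ignored there.
    """
    if record.get("scopeType") != scope_type:
        return False
    if scope_type == "INSTANCE" or scope_id is None:
        return True
    return record.get("scopeID") == scope_id
```

For example, `in_scope(rec, "PROJECT", "<project token>")` keeps only activity within one Engagement project.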
Correlating user activity
Use the sessionID to correlate and analyze the flow of user activity during a session. The system creates a new unique sessionID for each login session and discards it after logout or session expiration.
Important
Methods that don’t require authentication don’t have a sessionID.
Use case: Track all actions a user performed during a single work session by filtering on their sessionID.
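Assuming the session ID appears in the `session` metadata field described earlier, a sketch of reconstructing one session's timeline (ISO 8601 UTC timestamps sort correctly as plain strings):

```python
def session_timeline(records, session_id):
    """Return one session's records in chronological order."""
    hits = [
        r for r in records
        if r.get("metadata", {}).get("session") == session_id
    ]
    # ISO 8601 timestamps in UTC sort lexicographically, so a plain
    # string sort yields chronological order.
    return sorted(hits, key=lambda r: r.get("timestamp", ""))
```

The ordered list reads as the user's activity from login to logout or session expiration.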
Example: Reading a complete record
Here's a real audit log record with interpretation:
```json
{
  "authorizationInfo": {
    "allowed": true
  },
  "instanceName": "c1d",
  "logName": "audit.v1.Log",
  "metadata": {
    "clientIP": "178.143.40.201",
    "host": "cloud.exponea.com",
    "referrer": "https://cloud.exponea.com/",
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:98.0) Gecko/20100101 Firefox/98.0"
  },
  "request": {
    "@type": "http",
    "method": "GET",
    "params": {},
    "path": "/login/google"
  },
  "requestID": "c25d06bc-a464-4364-aba2-b4a5e99bd429",
  "serviceData": {
    "@type": "auditlog.LoginServiceData",
    "final": true,
    "identity": "[email protected]",
    "info": {},
    "provider": "google",
    "twoFactor": "none"
  },
  "serviceName": "portal-app",
  "status": 302,
  "timestamp": "2022-04-06T13:05:31.095757Z"
}
```
Interpretation: User [email protected] successfully logged in via Google authentication from IP address 178.143.40.201 at 2022-04-06T13:05:31 UTC. The action was authorized (allowed: true) and returned HTTP status 302 (redirect). The login was completed without two-factor authentication (twoFactor: none). This was an instance-level action handled by the portal application (portal-app).
Use cases & integration
Security monitoring
Unauthorized access attempts: Filter logs for "authorizationInfo.allowed": false to identify blocked access attempts.
Anomalous activity: Track unusual patterns such as:
- Multiple failed login attempts from different IP addresses
- Access from unexpected geographic locations
- Actions performed outside normal business hours
- Bulk data exports by users without typical export patterns
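As an illustrative sketch of the first two patterns, the function below flags identities with denied requests from multiple IP addresses. Where the identity lives in your records may differ (the login example earlier in this guide carries it in `serviceData`); the `authenticationInfo` object used here is an assumption you should adapt to your own logs:

```python
from collections import defaultdict

def multi_ip_denials(records, threshold=2):
    """Identities with denied requests from at least `threshold` distinct IPs."""
    ips_by_identity = defaultdict(set)
    for rec in records:
        if rec.get("authorizationInfo", {}).get("allowed") is False:
            # NOTE: assumed field layout; adjust to where your records
            # actually carry the actor identity.
            identity = rec.get("authenticationInfo", {}).get("identity", "<unknown>")
            ip = rec.get("metadata", {}).get("clientIP")
            if ip:
                ips_by_identity[identity].add(ip)
    return {i for i, ips in ips_by_identity.items() if len(ips) >= threshold}
```

Raising `threshold` or restricting the input to a time window turns this into a simple alerting rule.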
Data leak investigation: Use AnonymizationServiceData and export-related logs to track sensitive data access and movements.
Compliance & auditing
GDPR compliance: Track customer data access, modifications, and anonymization requests to demonstrate compliance with data protection regulations.
Policy verification: Monitor PolicyChange records to verify that role assignments and permission changes follow company policies.
Access reviews: Generate reports of who accessed what resources during specific time periods for periodic access reviews.
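A sketch of a simple access-review report that counts requests per actor and endpoint; the `authenticationInfo` location of the identity is an assumption to adapt to your records:

```python
from collections import Counter

def access_report(records):
    """Count requests per (identity, method, path) for periodic access reviews."""
    counts = Counter()
    for rec in records:
        # NOTE: assumed field layout for the actor identity.
        identity = rec.get("authenticationInfo", {}).get("identity", "<unknown>")
        req = rec.get("request", {})
        counts[(identity, req.get("method"), req.get("path"))] += 1
    return counts
```

Feeding the counter one day's files at a time keeps memory flat; `counts.most_common()` then surfaces the heaviest accessors first.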
Operational analysis
User behavior analysis: Track feature usage patterns by analyzing request paths and frequencies.
Performance troubleshooting: Correlate user-reported issues with backend actions using timestamps and requestIDs.
Change management: Review configuration changes, scenario updates, and system modifications to understand when and why changes occurred.
SIEM integration
Audit logs are designed for import into Security Information and Event Management (SIEM) systems. The fixed schema supports:
- Automated indexing based on reliable field names
- Real-time alerts triggered by specific events or patterns
- Long-term retention and historical analysis
- Cross-system correlation, when combined with other security logs
Common SIEM platforms compatible with Bloomreach audit logs include Splunk, Graylog, and Elastic Stack (Kibana).
Next steps
Now that you understand what audit logs are, how they're structured, and how they're organized in storage, you're ready to access them.
Refer to the Audit logs implementation guide for step-by-step instructions.