Product Recommendations in the Bloomreach Engagement platform have performance and algorithm/model limitations that you need to be aware of. Here is the list of restrictions:

  1. Max 500 products/items are returned in a [metric-based](🔗) engine in a single request.

  2. Max 500 products/items are returned in a [metric-based category](🔗) engine in a single request with one category in the `categoryNames` attribute.

  3. The catalog filter supports neither regex filters nor list attributes and filters.

  4. The catalog filter is applied as post-filtering: the recommendation model might return 10 suitable items, but a very strict catalog filter can then remove some of those recommended products, so you may receive fewer items than requested (see the sketch after this list).

  5. In the _Filter-based template_, the dynamic catalog filter from the get-recommendation request is ignored.

  6. In the _Advanced template_, request latency stacks up as more recommendation engines are added to a single advanced template instance. Consider this when designing a recommendation engine.

  7. In all AI models that use events for training, there is a minimum threshold of 2 events per customer to be considered for training. For example, a customer with 1 `purchase_item` event during the last 90 days will not be considered, while a customer with 1 `purchase_item` and 1 `view_item` event will be considered for model training. This mechanism mitigates model skewness (see the sketch after this list).

  8. Max 1000 products can be requested in the `size` parameter for AI models.

  9. Textual similarity is trained only on a model with fewer than 1 million rows.

  10. Combining multiple recommendation engines inside the advanced engine can lead to prolonged response times.
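
Point 4 above describes post-filtering, which can make the engine return fewer items than requested. The sketch below is a minimal Python illustration of that order of operations only; the `post_filter` helper and the item structure are hypothetical and do not represent the platform's internal implementation.

```python
# Minimal sketch of catalog-filter post-filtering (hypothetical data and helper,
# not the platform's internal implementation).

def post_filter(recommended_items, catalog_filter):
    """Apply the catalog filter AFTER the model has produced its candidates."""
    return [item for item in recommended_items if catalog_filter(item)]

# The model returns 10 suitable items...
model_recommendations = [
    {"item_id": f"sku-{i}", "brand": "acme" if i % 2 == 0 else "other"}
    for i in range(10)
]

# ...but a strict catalog filter (here: brand == "acme") removes some of them,
# so fewer items than requested end up in the response.
strict_filter = lambda item: item["brand"] == "acme"
filtered = post_filter(model_recommendations, strict_filter)

print(len(model_recommendations))  # 10 candidates from the model
print(len(filtered))               # 5 items survive the post-filter
```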

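The 2-event training threshold from point 7 can be sketched as a simple filter over per-customer event counts. The customer names, event lists, and constant name below are made up for illustration; only the rule that customers with fewer than 2 events are excluded comes from this page.

```python
# Minimal sketch of the per-customer training threshold (illustrative data only).
MIN_EVENTS_FOR_TRAINING = 2  # customers below this threshold are excluded

customer_events = {
    "customer_a": ["purchase_item"],               # 1 event  -> excluded
    "customer_b": ["purchase_item", "view_item"],  # 2 events -> included
    "customer_c": ["view_item"] * 5,               # 5 events -> included
}

training_customers = {
    customer: events
    for customer, events in customer_events.items()
    if len(events) >= MIN_EVENTS_FOR_TRAINING
}

print(sorted(training_customers))  # ['customer_b', 'customer_c']
```
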
# Challenges & Trade-offs

When developing or using recommendations, you deal with several challenges:

  • **Exploitation/Exploration:** We need to recommend items the user has already interacted with (exploitation), but we also need to show something new (exploration). A generic illustration follows this list.

  • **Popularity bias:** Most users view very few items, and most items are seen by only a minority of users. This creates huge “data gaps” that we need to cover.

  • **Cold-start problem:** When a new client starts using Bloomreach Engagement, there are usually not enough historical user-item interactions, so the personalized recommender system cannot perform well at first. The more data is collected, the better the performance becomes. The same applies when a new product is added to the catalog.
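
One common way to handle these trade-offs, shown here only as a generic illustration and not as the algorithm used by Bloomreach Engagement, is to blend items the user already knows with randomly sampled unseen items, and to fall back to popular items for customers with no history. The `familiar_items`, `popular_items`, and `catalog` inputs below are hypothetical.

```python
# Generic illustration of exploration/exploitation mixing and a cold-start
# fallback (not the platform's algorithm; all inputs are hypothetical).
import random

def blend_recommendations(familiar_items, catalog, size, explore_ratio=0.2):
    """Exploitation + exploration: mostly items the user already interacted
    with, plus a few randomly sampled unseen items from the catalog."""
    n_explore = max(1, int(size * explore_ratio))
    n_exploit = size - n_explore
    unseen = [item for item in catalog if item not in familiar_items]
    return familiar_items[:n_exploit] + random.sample(unseen, min(n_explore, len(unseen)))

def recommend(familiar_items, popular_items, catalog, size):
    """Cold-start fallback: with no interaction history, recommend popular items."""
    if not familiar_items:
        return popular_items[:size]
    return blend_recommendations(familiar_items, catalog, size)

catalog = [f"sku-{i}" for i in range(100)]
print(recommend([], ["sku-1", "sku-2", "sku-3"], catalog, size=3))          # cold-start user
print(recommend(["sku-5", "sku-6", "sku-7"], ["sku-1"], catalog, size=4))   # warm user
```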