QA & Monitoring

Monitoring keeps your integration healthy and makes performance visible. Froomle provides dashboards and metrics to validate data quality and measure impact.

Integration Monitoring Dashboard

The dashboard is organized into three panels:

  • Activity - confirms that traffic levels match your internal analytics.

  • API errors - must be kept at zero for a stable integration.

  • QA warnings - highlights missing events, inconsistent IDs, or broken flows.

If more than 5% of API calls return errors, or critical QA warnings are present, the integration is not ready for go-live. QA warnings are disabled by default in development environments and can be enabled once a full staging integration is in place.
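The go-live gate above can be sketched as a simple check over your own request logs. The log shape here (a flat list of HTTP status codes) is an illustrative assumption, not a Froomle API:

```python
# Minimal sketch: compute the API error rate from your own request logs
# and compare it against the 5% go-live threshold described above.

def error_rate(status_codes):
    """Fraction of API calls that returned an error (status >= 400)."""
    if not status_codes:
        return 0.0
    errors = sum(1 for code in status_codes if code >= 400)
    return errors / len(status_codes)

def ready_for_go_live(status_codes, threshold=0.05):
    """Not go-live ready while more than 5% of calls error out."""
    return error_rate(status_codes) <= threshold

codes = [200] * 97 + [500, 400, 200]
print(error_rate(codes))         # 0.02
print(ready_for_go_live(codes))  # True
```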

QA Tests

The Froomle platform automatically runs a suite of quality assurance tests to ensure data integrity and integration health.

Each test is listed below with a description of what it verifies.

check_anonymous_traffic_clicks

Checks that the percentage of clicks on recommendations from the anonymous/blacklist user_group does not exceed the threshold.

check_anonymous_traffic_pageviews

Checks that the percentage of pageviews (detail/page visits) from the anonymous/blacklist user_group does not exceed the threshold.

check_anonymous_traffic_requests

Checks that the percentage of recommendation requests from the anonymous/blacklist user_group does not exceed the threshold.

check_clicks_without_following_pageview

Checks that for every "click on recommendation" Froomle receives, there is also a corresponding "detail_pageview" (same user, page type, request id, list name, user group & item within 120 sec).

check_clicks_without_preceding_impression

Checks that for every "click on recommendation" Froomle receives, there is also a corresponding "impression" (same user, page type, request id, list name, user group & item within 120 sec).

check_clicks_without_preceding_recommendation

Checks that for every "click on recommendation" Froomle receives, there is also a corresponding "recommendation" (same user, page type, request id, list name, user group & item within 120 sec).
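The three correlation checks above all match a click against its preceding or following event on the same set of keys. A sketch of that matching, with field names assumed from the descriptions rather than the literal Froomle event schema:

```python
# Keys on which a click must agree with its impression, recommendation,
# or follow-up pageview (assumed field names, per the test descriptions).
CORRELATION_KEYS = ("user_id", "page_type", "request_id", "list_name",
                    "user_group", "item_id")

def matches(click, other, max_gap_seconds=120):
    """True if `other` corresponds to `click`: same keys, within 120 sec."""
    same_keys = all(click[k] == other[k] for k in CORRELATION_KEYS)
    in_window = abs(click["timestamp"] - other["timestamp"]) <= max_gap_seconds
    return same_keys and in_window

click = {"user_id": "u1", "page_type": "article", "request_id": "r42",
         "list_name": "top", "user_group": "loggedin", "item_id": "i9",
         "timestamp": 1000}
impression = dict(click, timestamp=950)
print(matches(click, impression))  # True
```

If any key differs, or the events are more than 120 seconds apart, the click counts as orphaned and trips the corresponding check.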

check_device_consistency

Checks whether device tracking is consistent. If this check fails, something is wrong with tracking devices, and personalization is not working on all integrations.

check_duplicate_click_on_recommendation

Duplicate clicks: Verifies that the percentage of duplicates in the click_on_recommendation events is below an acceptable threshold. Issues skew reporting.

check_duplicate_detail_pageview

Duplicate pageviews: Verifies that the percentage of duplicates in the detail pageview events is below an acceptable threshold. Issues skew reporting.

check_duplicate_events

Monitors for duplicate event logs which can skew metrics and analysis.

check_duplicate_impression

Duplicate impression: Verifies that the percentage of duplicates in the impression events is below an acceptable threshold. Issues skew reporting.
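The three duplicate checks above can usually be kept green with a client-side guard that skips exact repeats before they reach the events API. A minimal sketch; `send_event` is a placeholder for your actual transport and the event shape is an assumption:

```python
# Remember a fingerprint of every event already sent and drop exact repeats,
# so duplicate impressions, clicks, and pageviews never skew reporting.
_sent_fingerprints = set()

def send_once(event, send_event):
    """Forward `event` unless an identical event was already sent."""
    fingerprint = tuple(sorted(event.items()))
    if fingerprint in _sent_fingerprints:
        return False  # duplicate: skipped
    _sent_fingerprints.add(fingerprint)
    send_event(event)
    return True

delivered = []
impression = {"type": "impression", "item_id": "i1", "request_id": "r1"}
send_once(impression, delivered.append)
send_once(impression, delivered.append)  # second call is dropped
print(len(delivered))  # 1
```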

check_enough_events

Enough events: Checks if we receive sufficient events to train collaborative filtering algorithms. Issues lead to lower quality.

check_enough_items

Checks that the item catalog has a sufficient number of items to generate meaningful recommendations.

check_enough_recommendable_items

Enough recommendable items: Checks whether enough items (minimum 10) are available to be recommended. Issues lead to lower quality.

check_events_api_responses

Events API responses: Verifies success rate of events API responses. Issues lead to lower quality.

check_events_shared_click_on_recommendation

Checks whether a minimum number of click_on_recommendation events are sent to our events API.

check_events_shared_detail_pageviews

Checks whether a minimum number of detail_pageview events are sent to our events API.

check_events_shared_impressions

Checks whether a minimum number of impression events are sent to our events API.

check_events_shared_pagevisits

Checks whether a minimum number of page_visit events are sent to our events API.

check_events_shared_requests

Checks whether a minimum number of requests are sent to our recommendations API.

check_extreme_users_in_detail_pageviews

Extreme users (pageviews): Verifies that the 5 users with the most pageviews do not account for too large a share of total traffic. Issues skew reporting and possibly recommendation quality. Usually caused by bots.

check_extreme_users_in_recommendations

Extreme users (requests): Verifies that the 5 users with the most recommendation requests do not account for too large a share of total traffic. Issues skew reporting and possibly recommendation quality. Usually caused by bots.

check_extreme_users_pageviews

Detects users with an abnormally high number of pageviews, often a sign of bots or scrapers.

check_extreme_users_recommendations

Monitors for users who request an excessive number of recommendations.
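The extreme-user checks above all measure the same quantity: the share of total traffic produced by the five heaviest users. A sketch of that measurement (any concrete threshold would be an assumption, so none is hard-coded here):

```python
from collections import Counter

def top5_share(user_ids):
    """Fraction of all events produced by the 5 most active users."""
    counts = Counter(user_ids)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(n for _, n in counts.most_common(5)) / total

# One bot generating 80% of traffic dominates the top-5 share.
traffic = ["bot"] * 80 + [f"u{i}" for i in range(20)]
print(top5_share(traffic))  # 0.84
```

A share this high almost always points at bots or scrapers that should be filtered out before events are sent.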

check_impressions_without_preceding_recommendation

Checks that for every "impression" Froomle receives, there is also a corresponding "recommendation" (same user, page type, request id, list name, user group & item within 120 sec).

check_items_api_responses

Item API responses: Verifies success rate of item API responses. Issues lead to outdated recommendations.

check_metadata_available_for_visited_items

Metadata availability: Verifies that detail_pageview events occur on items present in the Froomle platform. Issues lead to lower quality, since Froomle models cannot be trained on items without metadata, nor can these items be recommended.
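The metadata-availability check boils down to a set difference: items seen in detail_pageview events must exist in the item catalog. How you obtain the two id collections is up to your integration; plain sets are assumed here:

```python
def items_missing_metadata(pageview_item_ids, catalog_item_ids):
    """Item ids that were visited but are absent from the catalog."""
    return set(pageview_item_ids) - set(catalog_item_ids)

# 'a3' was visited but never pushed to the catalog, so it can neither
# train the models nor be recommended.
missing = items_missing_metadata(["a1", "a2", "a3"], ["a1", "a2"])
print(missing)  # {'a3'}
```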

check_ratio_logged_in_users_events

Logged in users (events): Verifies that a minimum percentage of users in the events are logged in. Issues can indicate a problem with user identifiers, impacting quality.

check_ratio_logged_in_users_recos

Logged in users (requests): Verifies that a minimum percentage of users in the recommendation requests are logged in. Issues can indicate a problem with user identifiers, impacting quality.
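Both logged-in checks look at the same ratio: the share of events or requests carrying a real user id rather than only a device id. A sketch, with the event shape assumed for illustration:

```python
def logged_in_ratio(events):
    """Fraction of events sent for a logged-in user."""
    if not events:
        return 0.0
    logged_in = sum(1 for e in events if e.get("user_id"))
    return logged_in / len(events)

events = [{"user_id": "u1"}, {"device_id": "d1"},
          {"user_id": "u2"}, {"device_id": "d2"}]
print(logged_in_ratio(events))  # 0.5
```

A sudden drop in this ratio usually means logged-in users are being sent without their user identifier, not that your audience changed.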

check_recommendations_api_responses

Recommendation API responses: Verifies success rate of recommendation API responses. Issues lead to empty recommendation modules.

check_requests_without_pageviews

Checks that detail_pageview and/or page_visit events are not missing. Froomle detects this when it receives recommendation requests without the corresponding detail_pageview or page_visit event one would expect. If the status is MAJOR, the missing events are expected to significantly degrade the quality of the model. If the status is MINOR, the missing events only hinder analysis and reporting and are expected to have no significant impact on model quality.

check_user_consistency

User consistency: Checks if something is wrong with tracking logged-in users. The impact of a status different from PASSED is that cross-device personalization is not working on all page_types. If the status is MINOR, recommendations are still personalized, but not cross-device. If the status is MAJOR, recommendations are most likely not personalized.

enough_recent_items_available

Enough recent items: Verifies that enough items have been added or updated in the lookback period. Issues indicate potential problems in the item integration.

not_too_many_recent_items

Not too many recent items: Verifies that the number of new items is at a normal level. Issues indicate potential problems in the item integration.

Common mistakes to watch for

  • Item IDs not aligned across items, events, and requests.

  • Missing impression or click events.

  • Stale item catalogs or missing deactivation of removed content.

  • Inconsistent user_id / device_id usage across channels.
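The first of these mistakes, misaligned item IDs, can be caught with a quick audit before go-live: every item id used in events and recommendation requests should resolve against the item catalog. Inputs are plain id collections; the report keys are illustrative:

```python
def id_mismatches(catalog_ids, event_item_ids, request_item_ids):
    """Item ids referenced in events or requests but missing from the catalog."""
    catalog = set(catalog_ids)
    return {
        "events_not_in_catalog": set(event_item_ids) - catalog,
        "requests_not_in_catalog": set(request_item_ids) - catalog,
    }

report = id_mismatches(catalog_ids=["i1", "i2"],
                       event_item_ids=["i1", "i3"],
                       request_item_ids=["i2"])
print(report["events_not_in_catalog"])  # {'i3'}
```

Non-empty sets here mean the three channels disagree on item IDs, which surfaces as metadata-availability and correlation warnings on the dashboard.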