## How to check consistency between your custom evaluation and the Evaluate tab?
To check how the definitions in the **Evaluate** tab in [Email Campaigns](🔗) are built and compare them with your custom evaluation:

a. Go to the **Evaluate** tab in **[Scenarios](🔗)** and open **Results**.
b. Click the three dots in the top right corner of the page (next to the **Start** button) to view more options.
c. Select the **Copy email campaign to dashboard** option (**Copy email campaign evaluation to dashboard** in scenarios).
d. Check and compare the definitions between the **Evaluate** tab and your custom evaluation in the new [dashboard](🔗). You can view and edit this dashboard like any other regular dashboard.
## How to check consistency between your custom evaluation and the New Evaluate tab?
You can compare the results with the email campaign results tab by manually building metrics such as:

- Average order value

Please bear in mind that the purchase event must contain the following attributes:

- Hours between campaign click and purchase
- Last clicked `campaign_id`
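For illustration, a metric like average order value can be rebuilt from raw purchase events. The following is a minimal sketch, not the platform's actual schema: the field names `total_price`, `hours_since_campaign_click`, and `last_clicked_campaign_id`, and the 24-hour attribution window, are assumptions for the example.

```python
# Sketch: compute average order value for purchases attributed to one
# campaign. Field names and the attribution window are illustrative
# assumptions, not the exact definitions used by the Evaluate tab.

def attributed_to_campaign(purchases, campaign_id, window_hours=24):
    """Keep purchases last-click-attributed to the given campaign
    within the attribution window."""
    return [
        p for p in purchases
        if p["last_clicked_campaign_id"] == campaign_id
        and p["hours_since_campaign_click"] <= window_hours
    ]

def average_order_value(purchases):
    """Average revenue per purchase event."""
    if not purchases:
        return 0.0
    return sum(p["total_price"] for p in purchases) / len(purchases)

purchases = [
    {"total_price": 50.0, "hours_since_campaign_click": 2,  "last_clicked_campaign_id": "c1"},
    {"total_price": 30.0, "hours_since_campaign_click": 5,  "last_clicked_campaign_id": "c1"},
    {"total_price": 99.0, "hours_since_campaign_click": 40, "last_clicked_campaign_id": "c1"},
]

attributed = attributed_to_campaign(purchases, "c1")
print(average_order_value(attributed))  # → 40.0 (third purchase falls outside the window)
```

When comparing against the Evaluate tab, make sure your window and attribution rule match its definitions, otherwise small discrepancies are expected.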
## How to build a report for multiple campaigns that calculates revenue or other properties using a running aggregate in rows?
If a [running aggregate](🔗) is used in the rows and the [report](🔗) covers multiple campaigns, discrepancies between the original evaluation for a single campaign and the custom multi-campaign report are expected: customer journeys vary, and 100% correct attribution across campaigns cannot be achieved.
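To see why the numbers can diverge, consider a customer who clicks two campaigns before purchasing. The sketch below uses assumed event shapes (hypothetical `customer`, `campaign`, `hour`, and `revenue` fields) to show that summing standalone single-campaign evaluations double-counts the purchase, while a combined last-click report credits only one campaign:

```python
# Illustrative sketch (assumed event shapes, not the platform's schema):
# one purchase preceded by clicks on two different campaigns.

clicks = [
    {"customer": "alice", "campaign": "c1", "hour": 1},
    {"customer": "alice", "campaign": "c2", "hour": 3},
]
purchase = {"customer": "alice", "hour": 5, "revenue": 100.0}

# Standalone evaluations: each campaign saw a click before the purchase,
# so each single-campaign evaluation claims the full revenue.
standalone_totals = {
    c["campaign"]: purchase["revenue"]
    for c in clicks
    if c["customer"] == purchase["customer"] and c["hour"] < purchase["hour"]
}

# Combined multi-campaign report with last-click attribution:
# only the most recent click before the purchase gets credit.
last_click = max(
    (c for c in clicks
     if c["customer"] == purchase["customer"] and c["hour"] < purchase["hour"]),
    key=lambda c: c["hour"],
)
combined_totals = {last_click["campaign"]: purchase["revenue"]}

print(sum(standalone_totals.values()))  # 200.0 — double-counts the one purchase
print(combined_totals)                  # {'c2': 100.0}
```

The 100.0 of revenue exists only once, so the per-campaign evaluations and the combined report cannot both be reproduced exactly by one custom report.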