# Audience Performance

### How do I measure the performance of my audience?

To check the performance of an audience, you can go directly to the *Audience* tab and click on the audience you are interested in.

#### Overview Tab

The Overview tab allows you to view various **snapshots** and **trends** about your audience.

By default, you will always have:

* The **Global size**, which corresponds to the size of your audience (treatment and control groups included)
* The **Share of "Contacts"**, which corresponds to the percentage of all known profiles that belong to this audience
* The **Daily turnover**, which corresponds to the percentage change in audience size from one day to the next

All metrics that you have decided to track are also summarized in this Overview tab.
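For intuition, the daily turnover metric described above can be sketched as a simple relative change between two consecutive daily snapshots (the function and values below are illustrative, not DinMo's internal computation):

```python
def daily_turnover(size_yesterday: int, size_today: int) -> float:
    """Percentage change in audience size from one day to the next."""
    if size_yesterday == 0:
        raise ValueError("cannot compute turnover from an empty audience")
    return (size_today - size_yesterday) / size_yesterday * 100

# An audience growing from 10,000 to 10,500 profiles between two days
print(daily_turnover(10_000, 10_500))  # 5.0 -> +5% daily turnover
```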

#### Analysis Tab

In the *Analysis* tab, you will find two tools that help you better understand your audience.

* **Breakdown**

Understanding your audience composition is essential for optimizing your targeting strategy. With the **Breakdown Insights**, you can analyze how different criteria shape your segments in real time.

This feature allows you to visualize field distributions directly within the Audience, making it easier to refine your audience selection.

<figure><img src="https://3204318043-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxzBTp1t4OfqV67nXkVse%2Fuploads%2FM869hi3acMH6Rn59kt4E%2Fimage.png?alt=media&#x26;token=4e7d8129-240d-4b3c-88ed-650b178e434d" alt=""><figcaption></figcaption></figure>

* **Overlap analysis**

The Audience Overlap Analysis feature helps you identify the common population between two audiences. This is crucial for two primary reasons:

1. **Optimizing Activation Budgets**: Prevent unnecessary spending by avoiding targeting highly similar populations with separate campaigns.
2. **Analyzing Rule Similarities**: Understand if different segmentation rules unintentionally capture the same audience, revealing hidden correlations.
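Conceptually, the overlap between two audiences is the intersection of their member sets. A minimal sketch of the idea (the profile IDs below are hypothetical, and this is not DinMo's actual computation):

```python
audience_a = {"u1", "u2", "u3", "u4", "u5"}
audience_b = {"u4", "u5", "u6", "u7"}

# Profiles present in both audiences
overlap = audience_a & audience_b

# Express the overlap relative to each audience's size
share_of_a = len(overlap) / len(audience_a) * 100
share_of_b = len(overlap) / len(audience_b) * 100

print(f"{len(overlap)} shared profiles "
      f"({share_of_a:.0f}% of A, {share_of_b:.0f}% of B)")
# 2 shared profiles (40% of A, 50% of B)
```

A high overlap percentage suggests that two segmentation rules are capturing largely the same population, which is exactly the signal you would use to consolidate campaigns.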

<figure><img src="https://3204318043-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxzBTp1t4OfqV67nXkVse%2Fuploads%2Fdin8V7VYzbz35hfVuWEE%2Fimage.png?alt=media&#x26;token=c0ec893d-5c29-4c73-a22f-63449dec2261" alt=""><figcaption></figcaption></figure>

#### Experiments Tab

{% hint style="info" %}
The *Experiments* tab is only available if the control group has been activated.
{% endhint %}

Our *Experiments* rely on a treatment/control experimental framework—commonly referred to as frequentist A/B testing. This method measures the incremental impact of your campaigns on business outcomes and validates the results through rigorous statistical significance testing.

Because the DinMo platform sits directly on top of your data warehouse, we can compare the performance of the treatment and control groups across any metric, at the audience level.
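To illustrate what frequentist significance testing means in this context, here is a simplified two-proportion z-test on a conversion metric. This is a textbook sketch for intuition only, not DinMo's exact methodology; the sample counts are made up:

```python
import math

def two_proportion_z_test(conv_t: int, n_t: int, conv_c: int, n_c: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical campaign: 450/5000 treatment users converted vs. 380/5000 control users
p_value = two_proportion_z_test(450, 5000, 380, 5000)
print(f"p-value: {p_value:.4f}")  # below 0.05 -> statistically significant lift
```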

**Understanding Adjusted Control**

It's common for the Treatment and Control groups to have different sizes. In such cases, we must **normalize our metrics by group size** to ensure a fair comparison.

Take this example: if the Treatment group has 70 users and the Control group has 30, and every user spends 10€, the total spend is 700€ for Treatment and 300€ for Control. You cannot claim that the Treatment group performs better, since the average spend per user is identical in both groups.

{% hint style="info" %}
The adjusted group is only necessary for certain types of metric calculations: *sum*, *count*, *distinct count*.
{% endhint %}

<figure><img src="https://3204318043-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxzBTp1t4OfqV67nXkVse%2Fuploads%2FCEm4Bh3Y4eiFtsHuzThJ%2Fimage.png?alt=media&#x26;token=3e29ce08-2e3d-46bd-9cc9-c185530bf7ed" alt=""><figcaption></figcaption></figure>

To properly compare results, we adjust the Control metric by first normalizing it per user, then **scaling it to the size of the Treatment group**.

This gives us a **consistent basis for comparison** and clearly highlights the true incremental effect of the campaign.
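The adjustment from the example above can be sketched in a few lines (a minimal illustration of the scaling logic, not DinMo's internal implementation):

```python
def adjusted_control(control_total: float, control_size: int,
                     treatment_size: int) -> float:
    """Normalize a control-group total per user, then scale it to the treatment group's size."""
    per_user = control_total / control_size  # average metric value per control user
    return per_user * treatment_size         # what control would total at treatment scale

# Example from above: 30 control users spending 10€ each, vs. 70 treatment users
print(adjusted_control(300, 30, 70))  # 700.0 -> equal to treatment: no incremental effect
```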
