# Export data to Azure Blob Storage

#### Overview <a href="#overview" id="overview"></a>

This destination service allows DinMo to insert new files or update existing ones in an **Azure Blob Storage Container**, based on your DinMo models or segments.

To use this service, follow these three steps:

1. **Create an Azure Storage destination**. Follow the [step-by-step guide](https://docs.dinmo.io/integrations/destination-platforms/azure-storage) to establish this connection using a SAS token.
2. **Create your DinMo model or segment** representing the data you want to send to your Azure Blob Storage container.
3. **Activate your model or segment** with the Azure Storage destination to start synchronization.

Every time the activation runs:

* If the file does **not yet exist**: DinMo will create it with all the rows in the query results.
* If the file **already exists**: DinMo will **overwrite it** with all the rows in the query results.

{% hint style="info" %}
If you don't want to overwrite existing files, we recommend using a **timestamp in your file name**.
{% endhint %}

### Activation Setup <a href="#activation-setup" id="activation-setup"></a>

Once the Azure Storage destination is configured, create an activation to begin syncing your data.

{% hint style="info" %}
In the destination configuration, you are asked to specify the target container that will receive the data.
{% endhint %}

To do so:

1. Go to the **Activations** tab.
2. Click on **New activation**.
3. Select the **model or segment** you want to export.
4. Choose your **Azure Storage destination** from the list.

You’ll then configure the activation:

* **File Name**: Indicate the desired name for your file.

If you don't want to overwrite existing files, we recommend including timestamp variables in the file name. Surround each variable with `{}`. DinMo supports these timestamp variables:

* **`{YYYY}`**: Full year (e.g., 2025)
* **`{YY}`**: Last two digits of the year (e.g., 25)
* **`{MM}`**: Month (01-12)
* **`{DD}`**: Day of the month (01-31)
* **`{HH}`**: Hour (00-23)
* **`{mm}`**: Minute (00-59)
* **`{ss}`**: Second (00-59)
* **`{ms}`**: Millisecond (000-999)
* **`{X}`**: Unix timestamp in seconds
* **`{x}`**: Unix timestamp in milliseconds

For example, `{YY}-{MM}-{DD}_export` will produce `25-04-14_export.csv` (with the CSV format selected) for an upload on April 14, 2025.

{% hint style="warning" %}
All placeholder values are based on **Coordinated Universal Time** (UTC).
{% endhint %}

* **File Format**: Choose between CSV, JSON, XML, or Apache Parquet.
  * For CSV only: select a delimiter and whether to include headers.
* **Run Type**: Indicate the type of run you would like, based on the result you expect in your file.

{% hint style="info" %}
See [this section](#run-types-and-sync-modes) to learn more about Run Types and Sync Modes.
{% endhint %}

* **Attribute Mapping**: Map any fields from your model/segment to custom column names in the destination file. You can rename fields freely.

<figure><img src="https://3204318043-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxzBTp1t4OfqV67nXkVse%2Fuploads%2FrfRbWe3D1MP37NfO74L2%2Fimage.png?alt=media&#x26;token=aa9711b7-4565-4536-aaf5-7c72773a5f30" alt=""><figcaption></figcaption></figure>

The example above shows how to export the `age`, `name`, `phone_number`, and boolean `is_active` columns. They are mapped to new fields in the destination file as `age`, `last name`, `phone`, and `is_active`.

{% hint style="warning" %}
DinMo ignores all other columns from your model/segment.
{% endhint %}
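The mapping behavior can be sketched like this, using the column renames from the screenshot above (a minimal illustration, not DinMo's actual code; note that any source column absent from the mapping, such as `email` in the usage example, is dropped):

```python
import csv
import io

def write_mapped_csv(rows: list[dict], mapping: dict[str, str],
                     delimiter: str = ",") -> str:
    """Render rows as CSV, keeping only mapped columns and renaming
    them per `mapping` (source field -> destination column name)."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=delimiter)
    writer.writerow(mapping.values())            # header row, renamed columns
    for row in rows:
        writer.writerow(row[src] for src in mapping)
    return buf.getvalue()

# Mapping from the example: `name` becomes `last name`, `phone_number` becomes `phone`.
MAPPING = {"age": "age", "name": "last name",
           "phone_number": "phone", "is_active": "is_active"}
```

The `delimiter` argument mirrors the CSV-only delimiter option in the activation setup.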

#### **Run Types and Sync Modes**

When configuring your sync, you will be asked to choose the Run Type and the Sync Mode.\
For Azure activations, here are the available options:

<table data-full-width="true"><thead><tr><th>Run type</th><th>Description</th><th>Use Case</th><th>Behavior</th></tr></thead><tbody><tr><td><strong>FULL ONLY</strong></td><td>Every sync processes <em>all</em> records from the source and exports a complete file <em>(or several, if size limit is reached).</em></td><td>When the exported file must always contain a full extract, and incremental updates are not required.</td><td><p>​</p><ul><li>No delta logic</li><li>Every sync rebuilds the full export</li><li>Recommended for Snapshot mode</li></ul></td></tr><tr><td><strong>FULL THEN DELTA</strong></td><td>The first sync exports all records. All following syncs export only changed records, based on DinMo’s delta detection logic.</td><td>When exporting large datasets frequently and wanting to reduce file size or processing time.</td><td><p>​</p><ul><li>Sync 1 → Full export</li><li>Next syncs → Only changed records (new, updated).</li><li>Compatible with INSERT, UPSERT, and DIFFERENCE modes</li></ul></td></tr></tbody></table>

Sync modes determine how data is synchronized between your data source and destination. They control whether to insert new records, update existing ones, or both, and how to handle the synchronization process.

For the Azure destination, here are the available options:

<table data-full-width="true"><thead><tr><th>Sync Mode</th><th>Description</th><th>Use Case</th><th>Behavior</th></tr></thead><tbody><tr><td><strong>SNAPSHOT</strong></td><td>Exports a complete snapshot of all records at the time of sync. Each sync can generate a new file with a timestamped name.</td><td>Useful for backups or systems expecting full “point-in-time” extracts.</td><td><p>​</p><ul><li>Always exports the full segment / model</li><li>Creates a new file with timestamp</li><li>No incremental logic</li></ul></td></tr><tr><td><strong>UPSERT</strong></td><td>Writes all updated or new records into the exported file.</td><td>Keeps files up-to-date with the latest source data.</td><td><p>​​</p><ul><li>New records → included in the exported file</li><li>Existing records → included <strong>if, and only if</strong>, there are updated values</li><li>Deleted records → simply not present in the file</li></ul></td></tr><tr><td><strong>INSERT</strong></td><td>Adds only new records to the exported file. Existing records are never modified.</td><td>Useful for append-only files, such as historical logs or event tracking.</td><td><p>​</p><ul><li>New records → added to file</li><li>Existing records → ignored</li><li>Missing records → no action</li></ul></td></tr><tr><td><strong>DIFFERENCE</strong></td><td>Generates separate files for added, updated, and removed records between syncs.</td><td>For audit trails, incremental processing, or systems needing change-specific files.</td><td><p>​</p><ul><li>Creates <code>_added.csv</code>, <code>_updated.csv</code>, <code>_removed.csv</code> <em>(or other extensions)</em></li><li>Each file contains only the relevant change type</li></ul></td></tr></tbody></table>
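The change detection behind DIFFERENCE mode can be sketched as a comparison of two snapshots keyed by a record id (an illustrative sketch, not DinMo's delta logic; the function name and key choice are assumptions):

```python
def diff_snapshots(previous: dict[str, dict], current: dict[str, dict]):
    """Split records into added / updated / removed between two syncs,
    mirroring the three files produced by DIFFERENCE mode."""
    added = [rec for key, rec in current.items() if key not in previous]
    updated = [rec for key, rec in current.items()
               if key in previous and rec != previous[key]]
    removed = [rec for key, rec in previous.items() if key not in current]
    return added, updated, removed
```

Each of the three lists would be written to its own file (`_added`, `_updated`, `_removed`), containing only that change type.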

#### **Scheduling**

Define how frequently your data is exported to your Azure Blob Storage Container.

For each run, DinMo performs a **full run**, meaning that 100% of the people in the model/segment will be present in the file (even if they were already in the previous one).

#### Warnings

In this section, specify whether you want to receive warnings for your Azure Storage activation.

{% hint style="info" %}
Consult the specific section to [learn more about sync warnings](https://docs.dinmo.io/activations/troubleshooting-syncs/sync-warnings).
{% endhint %}
