Cloud migration changes with BC19 wave 2

With BC19 wave 2, Microsoft improved the Cloud Migration functionality: the migration flow was redesigned, the errors reported by partners over the summer were cleaned up, a larger set of GP tables can now be migrated by installing one additional extension, and the version 18 size limitations were removed.

Improved migration flow

  • search for “Cloud Migration Management”
  • go through “Cloud Migration Setup”
    • before version 19, the replication and the data upgrade were done in one step
    • now there are two separate steps, triggered from the Cloud Migration Management page actions:
  1. “Run Migration Now” – copies/replicates the data; at the end, the last “Cloud Migration Management” entry shows the status “Upgrade Pending”
  2. “Run Data Upgrade Now” – once the data is copied, it has to be validated/upgraded; the status moves through “Upgrade in Progress” to “Completed”

Splitting the single “Run Migration Now” process into two steps was introduced to better handle cases where the data upgrade fails.

If the data upgrade fails, you can restore the environment in the admin center to the point in time at the end of the replication step and repeat the data upgrade.

Cloud migration resiliency

  • use of upgrade tags: use Upgrade Tags to avoid running the data upgrade multiple times, or to skip the upgrade completely (see the sketch after this list). More here.
  • less locking when running cloud migration (more companies per run). There is still a limitation if the schema of the migrated tables exceeds the JSON object limit of 4 MB. To avoid locking, migrate companies in smaller chunks, especially if you get an error like this:
    • The size of the lookup activity result exceeds the limitation …
  • data repair after Cloud Migration: in the previous version, some data could be missing even though the migration was reported as successful
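To illustrate the upgrade tag pattern mentioned above, here is a minimal AL sketch: an upgrade codeunit checks a tag before doing any work and sets it afterwards, so a repeated data upgrade run skips what has already completed, and registering the tag for new companies skips the upgrade there entirely. The object IDs, the tag text, and the upgrade work itself are assumptions for the example; the “Upgrade Tag” codeunit is part of the System Application.

```al
// Minimal sketch of the upgrade tag pattern. Object IDs and the tag text
// are made up for the example.
codeunit 50100 "My Data Upgrade"
{
    Subtype = Upgrade;

    trigger OnUpgradePerCompany()
    var
        UpgradeTag: Codeunit "Upgrade Tag";
    begin
        // Skip the work if this tag was already set by a previous run.
        if UpgradeTag.HasUpgradeTag(GetMyUpgradeTag()) then
            exit;

        // ... the actual per-company data upgrade work goes here ...

        UpgradeTag.SetUpgradeTag(GetMyUpgradeTag());
    end;

    // Registering the tag up front means new companies skip the upgrade.
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Upgrade Tag", 'OnGetPerCompanyUpgradeTags', '', false, false)]
    local procedure RegisterPerCompanyTags(var PerCompanyUpgradeTags: List of [Code[250]])
    begin
        PerCompanyUpgradeTags.Add(GetMyUpgradeTag());
    end;

    local procedure GetMyUpgradeTag(): Code[250]
    begin
        exit('MYPRJ-001-SomeDataFix-20211101');
    end;
}
```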

Dynamics GP migration changes

  • Enabled migration of large GP tables (> 500 MB or > 100,000 records)
  • Support for mapping any custom table
  • Added events for GP to BC migration:
    • a new sample Microsoft extension creates BC counterpart tables for GP tables (source code here); this allows a much larger set of tables to be migrated to SaaS
    • mapping between GP and BC tables
    • there is also a PowerShell script that generates an AL table based on a SQL table (a sketch of such a generated table follows this list)
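As a rough idea of what such a generated counterpart looks like, here is a hedged AL sketch of a table mirroring a GP SQL table. The object ID, field numbers, and field lengths are illustrative assumptions, not the actual output of the sample extension or the script; the field names come from GP’s IV00101 item master.

```al
// Hedged sketch of a BC counterpart table for a GP SQL table, in the
// spirit of what the sample extension / PowerShell generator produces.
// Object ID, field numbers, and lengths are assumptions.
table 50101 "GP IV00101 Items"
{
    fields
    {
        field(1; ITEMNMBR; Text[31]) { Caption = 'Item Number'; }
        field(2; ITEMDESC; Text[101]) { Caption = 'Item Description'; }
        field(3; ITMCLSCD; Text[11]) { Caption = 'Item Class Code'; }
    }

    keys
    {
        key(PK; ITEMNMBR) { Clustered = true; }
    }
}
```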

Upload limits

The 80 GB limitation is lifted; any size is supported.

Clean up as much as possible before cloud migration

Some functionality may be disabled after cloud migration if the tenant is too large. For example, you might not be able to create a sandbox from Production if your capacity is 80 GB and Production has already reached 80 GB. Alternatively, you can upgrade the capacity.

Business Central 2021 wave 2: an overview of Data Management

Reasons why data size matters

  • Data size, or rather tenant capacity, is reflected in the price the end user pays
  • Data size influences performance: smaller databases allow for more efficient processing, faster backups, reports, record sorting, faster page display …

To see the current data size, for the entire database and per environment, click on the Capacity blade in the admin center.

To increase the capacity of their environments, or to increase the number of environments, customers can purchase additional capacity add-ons through their partners (check this).

Under Customer subscriptions, their partner can purchase the additional capacity.

To stay within their license capacity, customers and/or their partners need to manage their data size and compression.

Handle and Compress Data

Users can start from the Table Information page and use the Data Administration action, or launch the Data Administration page directly from Search.

This page contains two lists: one displays data per table, and the second summarizes data per company.

Through the Data Administration page, one can note or perform the following:

  • Refresh action: updates the tables’ size and can be scheduled via a job queue
  • Data Cleanup: various reports can be run to delete change logs, document archives, invoiced documents, and so on
  • Data Compression: various reports are used to delete and archive ledger entries and registers
  • A link to the Companies page allows for deleting companies or copying an existing company under a different name
  • A Retention Policies action brings up a list of records from the Retention Policy Setup table
  • Drilling deeper, we can inspect a Retention Policy Setup record via the Retention Policy Setup Card
  • If you drill down on the Table ID, you can see that only a limited number of tables have retention policies defined
  • To add your own table to that list, read Stefano’s blog (a sketch of the pattern follows this list)
  • There is also a Data Administration Guide action: this wizard takes users through the retention policies list, the manage companies page, and the data compression of different tables
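For the retention policy point above, the usual pattern (the one Stefano’s blog covers) is to register your table in the allowed-tables list. A minimal sketch, assuming a custom “My Log Entry” table whose records are filtered by the SystemCreatedAt field; the object IDs and the table itself are assumptions for the example:

```al
// Minimal sketch: register a custom table so a retention policy can be
// defined for it. "My Log Entry" and the object IDs are assumptions.
codeunit 50102 "My Reten. Pol. Setup"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Reten. Pol. Allowed Tables", 'OnRefreshAllowedTables', '', false, false)]
    local procedure AddAllowedTablesOnRefreshAllowedTables()
    var
        RetenPolAllowedTables: Codeunit "Reten. Pol. Allowed Tables";
        MyLogEntry: Record "My Log Entry";
    begin
        // The second argument is the date field the policy filters on.
        RetenPolAllowedTables.AddAllowedTable(Database::"My Log Entry", MyLogEntry.FieldNo(SystemCreatedAt));
    end;
}
```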

A few notes about archiving

  • the archiving mechanism is not a backup/restore system
  • it is integrated in the Data Compression processes: each data compression report has an archiving option
  • archives can be exported to Excel or CSV files: look in the Data Archive list page
  • any data about to be deleted can be archived: start logging, delete the data, stop logging (see the sketch after this list)
  • Example: let’s archive some test vendors
  • A quick test of deleting a test vendor reveals the Data Archive record and the Excel (or CSV) file created
  • archives are stored in media fields in the Data Archive table (Table Fields (json) and Table Data), so they do not count against the database size
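A minimal sketch of the “start logging, delete data, stop logging” flow above, using the System Application’s “Data Archive” codeunit; the object ID, the archive description, and the TEST* vendor filter are assumptions for the example:

```al
// Hedged sketch of "start logging, delete data, stop logging" with the
// Data Archive module. Object ID, description, and filter are assumptions.
codeunit 50103 "Archive Test Vendors"
{
    trigger OnRun()
    var
        Vendor: Record Vendor;
        DataArchive: Codeunit "Data Archive";
    begin
        DataArchive.Create('Test vendors cleanup');

        // Every record deleted while the subscription is active is archived.
        DataArchive.StartSubscriptionToDelete();

        Vendor.SetFilter("No.", 'TEST*');
        Vendor.DeleteAll(true);

        DataArchive.StopSubscriptionToDelete();
        DataArchive.Save();
    end;
}
```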

There is more to check in the video “Data Management in the Business Central Application” by Microsoft’s Tomas Navarro and Bardur Knudsen.