Generate Azure Business Central containers using deployment template and parameter files


As soon as I started working with containers, more specifically with Azure Container Instances, around mid-December 2018, I quickly ran into a few questions: how can I automate container creation? How can I update a container (scale up or down, override settings)? How can I scale out my configuration? For some of these questions I found answers; for others the research is ongoing.

As we established, I am not exactly an expert, but if you're still here: the process of generating your first Azure container loaded with Business Central is a fairly easy one. Check my previous blog post, where I described the process step by step.

I like to mess around, and I did mess around with the tables where the extensions are managed (system tables 2000000150, NAV App*), ending up with a corrupt container, or rather a corrupt Business Central. Because I did not have any important data, I could simply delete the container and run through the steps of creating it manually again. But what if I wanted to automate the process? What if I needed to build 5 distinct containers? How can I speed up the process and make it scalable?

Instead of repeating the exercise from my last blog post (deleting the corrupt container and re-creating it by hand), I decided to investigate Microsoft's documentation on deployment templates and deployment parameter files.

This is what I learnt:

In the portal, go to the container created in the previous post, click on "Automated script" and download the script:

[Screenshot: downloading the deployment template from the Azure portal]

Save the downloaded script into a new Visual Studio Code folder. I chose to name it azuredeploy.json.

[Screenshot: azuredeploy.json opened in Visual Studio Code]

Above is the deployment template I'm going to work with to automate the creation of new containers loaded with a Business Central image. The image currently referenced in the template, microsoft/bcsandbox:latest, won't have data. If you want sample data in your new container(s), use the microsoft/bcsandbox:base image instead. If you need more info about loading Business Central with data, read Waldo's and Roberto's blogs.

[Screenshot: the image with data]

Additionally, create a new file, the deployment script; I named it templatedeploy.ps1:

[Screenshot: templatedeploy.ps1 in Visual Studio Code]
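If the screenshot is hard to read, here is a minimal sketch of what such a script can look like, assuming azuredeploy.json and parameters.json sit next to it (the location value is my choice; the resource group name matches the cleanup command later in this post):

# templatedeploy.ps1 - minimal sketch
az group create -n rg-template -l eastus
# '@parameters.json' is quoted so PowerShell does not treat @ as a splatting operator
az group deployment create -g rg-template --template-file azuredeploy.json --parameters '@parameters.json'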

Before we run this script, we have to take a closer look at the deployment template downloaded from the portal.

[Screenshot: the parameters section of the downloaded template]

I replaced the highlighted section above with this one below:

[Screenshot: my new parameters section]

I'm adding 3 new parameters, but you could parameterize almost any setting in the deployment template and create placeholders for the values:

[Screenshot: parameter placeholders in the template]
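As a rough sketch, assuming the three parameters are the container name, the DNS name label and the image (the names below are mine, the screenshots may differ), a declaration and its placeholders look like this:

# Fragment of azuredeploy.json, held in a here-string purely for illustration
$templateFragment = @'
"parameters": {
  "containername": { "type": "string" },
  "dnsnamelabel":  { "type": "string" },
  "imagename":     { "type": "string", "defaultValue": "microsoft/bcsandbox:latest" }
},
...
"name": "[parameters('containername')]",
"properties": { "image": "[parameters('imagename')]" }
'@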

Next, I needed to create a new file in the project, parameters.json:

[Screenshot: parameters.json]
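A minimal sketch of such a parameters file, written from PowerShell and assuming the same three parameter names (the container name is the one used later in this post; the other values are examples):

@'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "containername": { "value": "d365bc-container-fromtemplate" },
    "dnsnamelabel":  { "value": "d365bc-fromtemplate" },
    "imagename":     { "value": "microsoft/bcsandbox:base" }
  }
}
'@ | Set-Content parameters.json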

Before running the script, the "az group deployment create" command looks like this:

[Screenshot: the az group deployment create command]

Now I'm ready to run the PowerShell script:

[Screenshot: output of running templatedeploy.ps1]

To be able to log in to Business Central we need the admin credentials, which can be obtained with the command:

az container logs -g rg-template -n d365bc-container-fromtemplate

To perform some cleanup (remove the resource group and its contents), run:

az group delete -n rg-template --yes

Let’s now scale out our deployment to 2 containers:

[Screenshot: template changes for scaling out to 2 containers]
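My screenshot shows the changes I made; one standard way to express this in a deployment template (a sketch, not necessarily what the screenshot shows) is the copy element, which stamps out count instances of the resource:

# Fragment of azuredeploy.json for illustration; the copy element creates 2 containers
$scaleOutFragment = @'
{
  "type": "Microsoft.ContainerInstance/containerGroups",
  "name": "[concat(parameters('containername'), '-', copyIndex())]",
  "copy": { "name": "containercopy", "count": 2 },
  "location": "[resourceGroup().location]",
  "properties": { }
}
'@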

After running templatedeploy.ps1, we can go to the Azure portal and see 2 containers under our single deployment:

[Screenshot: 2 containers in the Azure portal]

Check the logs, identify the admin password, and you're ready to log in to your container!

That’s what I learnt. What would you add?


How to generate Azure Container Instances loaded with Business Central in minutes


To start writing extensions for Business Central we have a few choices: install locally one of the release candidates, which come in the same format as any other Dynamics NAV DVD package; create a locally hosted Docker sandbox; or run a sandbox in Azure as a container instance.

As the process of getting your container takes just a few minutes, I prefer to do my extension development and testing in an Azure container.

To generate my Azure container with Business Central I started by installing the Azure CLI for Windows. You can also use Chocolatey to install the Azure CLI on your local machine.

In Visual Studio Code, open a Terminal and, in a PowerShell session, start your Azure work by logging in to your Azure account with:

az login

[Screenshot: az login output]

If you are already logged in and want to check the logged-in account info:

az account show

Next, we need to create a resource group, which is a logical container in Azure, something like an organizational unit in Active Directory or a folder for Windows files.

The command is “az group create” and takes two parameters: group name and location:

[Screenshot: az group create output]
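For example (the group name reappears later in this post; the location is my assumption):

az group create -n svrg -l eastus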

Once the resource group is created, we can create the Azure container instance loaded with the latest Business Central using the following command:

az container create

[Screenshot: the full az container create command]

In the image above (a complete example is sketched after this list):

  • the group in which the container will be created follows the "-g" (group) option: "svrg"
  • the name of the container follows the "-n" (name) option: "d365bc-az-cont-us-cont"
  • the image loaded into this container is "microsoft/bcsandbox:latest"
  • the OS is Windows
  • we can enter at most 5 ports: 80, 7046, 7048, 7049, 8080
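Putting the options together, a sketch of the command; the first five options mirror the list above, while the DNS label, sizing and environment variables are my assumptions (NAV/BC sandbox images typically require accept_eula=Y):

# The DNS label, CPU/memory sizing and environment variables below are assumptions
az container create -g svrg -n d365bc-az-cont-us-cont `
    --image microsoft/bcsandbox:latest `
    --os-type Windows `
    --ports 80 7046 7048 7049 8080 `
    --dns-name-label d365bc-az-cont-us-cont `
    --cpu 2 --memory 8 `
    --environment-variables accept_eula=Y usessl=N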

For a complete list of parameters for "az container create", check the Azure CLI reference.

To check the logs and find the login credentials recorded by Azure for the previous command, run "az container logs" like below:

[Screenshot: az container logs output with the admin credentials]
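With the names used above, that is:

az container logs -g svrg -n d365bc-az-cont-us-cont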

As you can see above, the admin credentials are displayed and the new Azure Business Central instance appears ready for connections. Let's check by browsing to the web client link:

Ctrl + Click on the web client link in the picture above opens the Business Central web client:

[Screenshot: Business Central web client login page]

To see the new container's page in Azure, navigate to the resource group and then to your container:

[Screenshot: container overview page in the Azure portal]

After entering the credentials from the logs we are in:

[Screenshot: Business Central home page]

Good! We’ve got a Business Central instance in Azure running in a container and we’re ready to code and test extensions!

To target this container from Visual Studio Code, generate a new AL project with the AL: Go! command and, in launch.json, change the value of the server setting to the container DNS name created above:

[Screenshot: launch.json pointing at the Azure container]
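A sketch of the relevant launch.json, written from PowerShell; the FQDN follows the dns-name-label.region.azurecontainer.io pattern, and the serverInstance and authentication values are assumptions based on the sandbox image defaults:

# Adjust the DNS name to your own container before running
@'
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "al",
      "request": "launch",
      "name": "Publish to Azure container",
      "server": "http://d365bc-az-cont-us-cont.eastus.azurecontainer.io",
      "serverInstance": "NAV",
      "authentication": "UserPassword"
    }
  ]
}
'@ | Set-Content .\.vscode\launch.json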

In the next blog I’ll go through the steps of deploying an Azure container loaded with a Business Central image using deployment templates with parameters.

If you liked this article, bookmark my blog or follow me for more stuff about NAV and Business Central.

Microsoft Flow, Twitter and Dynamics NAV


As C/AL is my number one coding language, I have wanted since last summer to give Microsoft Flow a try. And as Twitter is one of the top 3 applications I use on my phone, I wanted to see if I could get a flow to bring tweets into my favorite environment, Dynamics NAV.

After a few trials and tweaks, my flow brings tweets into NAV:

[Screenshot: the Tweet table in Dynamics NAV]

If you want to try it out this is what you need:

  • a Dynamics NAV instance with a public IP. I used an Azure machine loaded with Dynamics NAV 2017
    • web services for the entities you want to create or update via the flow (see the sketch after this list)
  • a Microsoft or work account to connect to flow.microsoft.com
  • a Twitter account
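Publishing a page as a web service can be done from the Web Services page in NAV or from the NAV administration shell; a sketch of the PowerShell route (the module path is the NAV 2017 default, the instance name and page id are my assumptions):

Import-Module 'C:\Program Files\Microsoft Dynamics NAV\100\Service\NavAdminTool.ps1'
New-NAVWebService -ServerInstance DynamicsNAV100 -ServiceName Tweets -ObjectType Page -ObjectId 50100 -Published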

To allow Microsoft Flow to talk to both Twitter and NAV, we need to set up the appropriate connections in Flow:

[Screenshot: connections in Microsoft Flow]

The connection to NAV looks like this:

[Screenshot: the Dynamics NAV connection settings]

For the username and password, create a NAV user and set a password. On the instance you want to connect to, set the credential type to NavUserPassword.

For the Twitter connection, use your Twitter user id.

To support the Flow, I needed 2 tables and 2 pages:

  • a table Tweet and a page Tweets (exposed as a web service) to support the tweets
  • a table Last Tweet to record the id of the last tweet brought into NAV, and a page to update this id, so the flow does not collect the same tweets again but only tweets published after the last one already in NAV

[Screenshot: the two NAV tables]

And this is what the flow is doing:

  1. Start with a recurrence action.
  2. Get the last tweet id by using the NAV connection.
  3. Set up the search-for-tweets action.
  4. Insert a new record in the NAV Tweet table by using the NAV connection and mapping Twitter fields to NAV fields.
  5. Update the "Last Tweet" table with the last action in the flow.

And this is what the flow looks like:

[Screenshot: the whole flow]

The C/AL code is included here.

Thanks for reading. Enjoy Flowing!


Building a NAV server performance baseline using Logman and Perfmon Windows utilities


More and more NAV servers are being deployed to the cloud, yet a large number of NAV customers are keeping their NAV installations on premises (either on a physical server or a VM). Those that choose the cloud (and if you didn't, please read this) get the benefit of cloud-specific tools for measuring hardware and database performance.

Unless you have a specialized tool, to monitor the performance of your server you need to establish a baseline. During a server's deployment, administrators can still tweak the server hardware (especially if it's a VM or part of a Hyper-V infrastructure) until the server reaches a satisfactory performance level.

Regular recordings of a server's performance can uncover problems in their early stages, sometimes long before they become obvious in production.

Windows Server 2008 and later server versions, as well as Windows 8/8.1/10, come with two easy-to-use tools for measuring (and scheduling the measurement of) system performance: perfmon and logman.

With perfmon we get a graphical interface in which we can manually create our alerts or counters, run them and analyze the results.

With logman we can manage the counters from the command line.

On my system I created two data collector sets, one for SQL Server counters, the other targeted at hardware performance:

[Screenshot: the two data collector sets in Performance Monitor]

Double-clicking on HardwareBaseline we can manage the counters:

[Screenshot: the counters in the HardwareBaseline collector]

To create these two counters I ran the following script:

[Screenshot: the script creating the data collector sets]
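If the screenshot is hard to read, here is a sketch of such a script; the counter paths, sample interval and output folders below are my choices, not necessarily the ones I used:

logman create counter HardwareBaseline -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 00:00:15 -o C:\PerfLogs\HardwareBaseline
logman create counter SQLBaseline -c "\SQLServer:Buffer Manager\Buffer cache hit ratio" "\SQLServer:SQL Statistics\Batch Requests/sec" -si 00:00:15 -o C:\PerfLogs\SQLBaseline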

To start and stop the counters, run:

[Screenshot: the start/stop commands]
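In its simplest form:

logman start HardwareBaseline
logman stop HardwareBaseline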

Or manually, in Performance Monitor, right-click on the counter and choose Start or Stop.

A few “logman” command switches I use:

-b and -e, to run the collection only within a specific time window.

-v, to add a timestamp to the name of the output file.

-f, to specify the file format used for collecting data.
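For example, a collector that runs only during business hours on a given day and writes a timestamped CSV file (the dates, name and path are placeholders):

logman create counter HardwareDaytime -c "\Processor(_Total)\% Processor Time" -b 7/30/2019 08:00:00 -e 7/30/2019 17:00:00 -v mmddhhmm -f csv -o C:\PerfLogs\HardwareDaytime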

After a few minutes the performance counters graph will look like this:

graph

With a Task Scheduler entry you can control when to start and stop the performance counters collection.
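A sketch with schtasks (the task names and schedule are my choices):

schtasks /create /tn "StartHardwareBaseline" /tr "logman start HardwareBaseline" /sc daily /st 08:00
schtasks /create /tn "StopHardwareBaseline" /tr "logman stop HardwareBaseline" /sc daily /st 17:00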

As for the analysis of the collected data, there are lots of places online where you can find valuable information on interpreting counter results.

Download the package with hardware and SQL counters and a sample script from here.

A baseline case study

One of the processes that stresses all resources on a NAV server instance in our solution is a report that posts sales invoices for all customers for a specific due date. I'll run the report, record the counters, and discuss the results.

With the installation of Microsoft Dynamics NAV Server you get a few NAV-specific counters out of the box:

[Screenshot: the NAV-specific counters]

I created a new data collector set with the following counters:

[Screenshot: my NAV data collector counters]
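The same collector can be scripted with logman; a sketch, with the caveat that the exact counter and instance names vary per installation:

# Counter and instance names below are assumptions; list the real ones with: typeperf -q
logman create counter NAVBaseline -c "\Microsoft Dynamics NAV(dynamicsnav100)\% Primary key cache hit rate" "\Microsoft Dynamics NAV(dynamicsnav100)\% Calculated fields cache hit rate" "\Microsoft Dynamics NAV(dynamicsnav100)\# Open connections" "\Microsoft Dynamics NAV(dynamicsnav100)\# Active sessions" -si 00:00:05 -o C:\PerfLogs\NAVBaseline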

In Performance Monitor, right-click on your new data collector set and choose Start.

Next I went to RTC and ran the report.

After the report finished I came back to Performance Monitor and stopped the data collector set.

[Screenshot: counter values recorded during the report run]

Microsoft Dynamics NAV\.NET CLR\% Time in GC:

If RAM is insufficient for the Microsoft Dynamics NAV Server instance, you might see a spike in "% Time in GC", which measures .NET garbage collection activity. My process shows a maximum of 7% and an average of a bit over 2%, numbers that do not suggest NAV is looking for more RAM.

Microsoft Dynamics NAV\% Primary key cache hit rate:

This counter should be above 90%; otherwise your cache might be too small, either because your cache settings are too low or because the cache is shared between tenants and might need to be increased. In my case it is above 99%:

[Screenshot: primary key cache hit rate]

% Calculated fields cache hit rate averages 63%, which means that in 63% of the cases when we launched a CALCFIELDS command we hit the cache. A decent number!

[Screenshot: calculated fields cache hit rate]

# Open Connections is 5, and refers to the number of open connections made from the service tier to the database hosted on SQL Server. You might be interested in the counter "# Active Sessions", which keeps track of the number of concurrent connections the service tier manages.

[Screenshot: open connections counter]

The rest of the counters report row counts, which may or may not be relevant, considering that the number of rows naturally grows over time.

Having a baseline and regularly comparing performance counter logs against it is not just a proactive measure to keep a NAV server healthy. It is also easy to use and cheap (logman and perfmon are both built into Windows), qualities that appeal to NAV customers and Dynamics VARs.