NAV Upgrade Notes – Turning Comments into Notes using .NET


Upgrading a NAV AddOn is not just merging code and upgrading the database through all the NAV versions between source and target. It also involves analyzing the AddOn’s current features and prototyping them against features that exist in the latest NAV versions. We often ask ourselves: can we replace feature X in our AddOn, hosted on a NAV 2009 installation, with the new feature Y in NAV 2017?

Sometimes the answer is yes: we can merge AddOn functionality into existing standard features. The benefit is major.

Take the Comment Line table, for example. Almost every AddOn has a place where comments or notes are recorded for AddOn entities.

Why keep them in a table in the ISV’s object range when we can take advantage of an existing standard table that is perfect for this purpose? The Record Link table was introduced in NAV 2013 and keeps track of notes and links.

The short demo below assumes the existence of:

  • a table “Generic Entity”
  • a table “Generic Entity Comment”
  • a card page “Generic Entity Card”
  • a list page “Generic Entity Comments”
  • a codeunit “Upgrade Notes 1”

To generate notes for each comment in table “Generic Entity Comment”, run codeunit 90003 “Upgrade Notes 1”.

The text itself resides in a Text field in “Generic Entity Comment”, but the “Record Link” table stores the note text in a BLOB field. Therefore I needed to move the text from the Comment field in “Generic Entity Comment” into the Note BLOB field in “Record Link”.

[Code screenshot: copying the Comment text into the Record Link Note BLOB]

I wanted to use .NET here not only to exercise my rusty .NET muscles; recording Danish characters in a BLOB also proved to be a non-trivial job. So I used the MemoryStream class together with a BinaryWriter, which comes equipped with character encoding capabilities.
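A minimal sketch of that conversion, assuming DotNet variables for System.IO.MemoryStream, System.IO.BinaryWriter and System.Text.Encoding; the helper name and parameters are illustrative, not the exact code from the download:

  // Hypothetical helper: attaches NoteText as a note to the record identified by EntityRecID.
  // DotNet variables: MemoryStream (System.IO.MemoryStream),
  // BinaryWriter (System.IO.BinaryWriter), Encoding (System.Text.Encoding).
  PROCEDURE CreateNote(EntityRecID : RecordID;NoteText : Text);
  VAR
    RecordLink : Record "Record Link";
    OutStr : OutStream;
  BEGIN
    RecordLink.INIT;
    RecordLink."Link ID" := 0;                 // AutoIncrement assigns the next ID
    RecordLink."Record ID" := EntityRecID;     // the note attaches to this record
    RecordLink.Type := RecordLink.Type::Note;
    RecordLink.Created := CURRENTDATETIME;
    RecordLink."User ID" := USERID;
    RecordLink.Company := COMPANYNAME;

    // BinaryWriter.Write(Text) stores a length-prefixed string in the chosen
    // encoding, which is the layout the Notes factbox expects; UTF-8 keeps
    // the Danish characters intact.
    RecordLink.Note.CREATEOUTSTREAM(OutStr);
    MemoryStream := MemoryStream.MemoryStream;
    BinaryWriter := BinaryWriter.BinaryWriter(MemoryStream,Encoding.UTF8);
    BinaryWriter.Write(NoteText);
    MemoryStream.WriteTo(OutStr);              // NAV maps OutStream to System.IO.Stream
    BinaryWriter.Close;

    RecordLink.INSERT;
  END;

Codeunit 90003 then only needs to loop through “Generic Entity Comment” and call a routine like this for each comment.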

For a simple demo follow steps 1-4 below:

[Screenshot: demo steps 1-4]

Code used is available for download here.

Building a NAV server performance baseline using Logman and Perfmon Windows utilities


More and more NAV servers are being deployed to the cloud, yet a large number of NAV customers keep their NAV installations on premises (either on a physical server or a VM). Those who choose the cloud (and if you didn’t, please read this) get the benefit of cloud-specific tools for measuring hardware and database performance.

Unless you have a specialized tool, monitoring the performance of your server starts with establishing a baseline. During deployment, administrators can still tweak the server hardware (especially if it is a VM or part of a Hyper-V infrastructure) until the server reaches a satisfactory performance level.

Regular recordings of the server’s performance can uncover problems in their early stages, sometimes long before they become obvious in production.

Windows Server 2008 and later server versions, as well as Windows 8/8.1/10, come with two easy-to-use tools for measuring (and scheduling the measurement of) system performance: perfmon and logman.

With perfmon we get a graphical interface in which we can manually create our alerts or counters, run them and analyze the results.

With logman we can manage the counters from the command line.
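For example, to list the data collector sets on a machine and inspect the counters inside one of them (the set name is the one created below):

  :: List all data collector sets on this machine
  logman query

  :: Show the counters and settings of a specific set
  logman query HardwareBaseline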

On my system I created two data collector sets: one measuring SQL Server counters, the other targeted at hardware performance:

[Screenshot: the two data collector sets in Performance Monitor]

Double-clicking on HardwareBaseline, we can manage its counters:

[Screenshot: the counters inside the HardwareBaseline collector set]

To create these two collector sets I ran the following script:

[Screenshot: the script that creates the two collector sets]
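The script is not reproduced here, but a sketch of it could look like the following; the counter selection, sample interval and output paths are my assumptions, not necessarily those in the download:

  :: Hardware baseline - CPU, memory and disk (illustrative counter selection)
  logman create counter HardwareBaseline -f bincirc -v mmddhhmm -si 00:00:15 ^
    -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" ^
       "\PhysicalDisk(_Total)\Avg. Disk sec/Read" "\PhysicalDisk(_Total)\Avg. Disk sec/Write" ^
    -o "C:\PerfLogs\HardwareBaseline"

  :: SQL Server baseline (counter names for a default SQL Server instance)
  logman create counter SQLBaseline -f bincirc -v mmddhhmm -si 00:00:15 ^
    -c "\SQLServer:Buffer Manager\Page life expectancy" ^
       "\SQLServer:SQL Statistics\Batch Requests/sec" ^
       "\SQLServer:Locks(_Total)\Lock Waits/sec" ^
    -o "C:\PerfLogs\SQLBaseline"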

To start and stop the counters, run:

[Screenshot: the script that starts and stops the collector sets]
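Something along these lines, using the set names from above:

  logman start HardwareBaseline
  logman start SQLBaseline

  logman stop HardwareBaseline
  logman stop SQLBaseline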

Or manually, in Performance Monitor, right-click the collector set and choose Start or Stop.

A few “logman” command switches I use:

  • -b and -e restrict the collection to a specific time period.
  • -v appends version information (for example a timestamp) to the output file name.
  • -f specifies the file format used for collecting data (bin, bincirc, csv, tsv or sql).
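For instance, to restrict the existing hardware collector set to a working-day window (dates and times are illustrative):

  :: Collect only between the begin (-b) and end (-e) times, version the
  :: output file name (-v) and write a circular binary log (-f)
  logman update counter HardwareBaseline -b 07/24/2017 08:00:00 -e 07/24/2017 18:00:00 ^
    -v mmddhhmm -f bincirc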

After a few minutes the performance counters graph will look like this:

[Screenshot: the performance counters graph in Performance Monitor]

With a Task Scheduler entry you can control when to start and stop the performance counters collection.
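A pair of scheduled tasks would do it; the task names and times below are illustrative:

  schtasks /create /tn "Start NAV baseline" /tr "logman start HardwareBaseline" /sc daily /st 08:00
  schtasks /create /tn "Stop NAV baseline" /tr "logman stop HardwareBaseline" /sc daily /st 18:00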

As for the analysis of the collected data, there are lots of places online where you can find valuable information on interpreting counter results.

Download the package with hardware and SQL counters and a sample script from here.

A baseline case study

One of the processes in our solution that stresses all resources on a NAV server instance is a report that posts sales invoices for all customers with a specific due date. I’ll run the report, record the counters and discuss the results.

With the installation of MS Dynamics NAV Server you get a few NAV-specific counters out of the box:

[Screenshot: the NAV-specific performance counters]

I created a new data collector set with the following counters:

[Screenshot: the counters in the new NAV data collector set]
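The same collector set could be created from the command line along these lines; the counter selection and the .NET CLR instance name are my assumptions:

  :: NAV-specific baseline; (*) collects all NAV server instances
  logman create counter NAVBaseline -f bincirc -si 00:00:05 ^
    -c "\Microsoft Dynamics NAV(*)\% Primary key cache hit rate" ^
       "\Microsoft Dynamics NAV(*)\% Calculated fields cache hit rate" ^
       "\Microsoft Dynamics NAV(*)\# Open Connections" ^
       "\Microsoft Dynamics NAV(*)\# Active sessions" ^
       "\.NET CLR Memory(Microsoft.Dynamics.Nav.Server)\% Time in GC" ^
    -o "C:\PerfLogs\NAVBaseline"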

In Performance Monitor, right-click your new data collector set and choose Start.

Next I went to RTC and ran the report.

After the report finished I came back to Performance Monitor and stopped the data collector set.

[Screenshot: the counters recorded during the report run]

Microsoft Dynamics NAV\.NET CLR\% Time in GC:

If RAM is insufficient for the MS Dynamics NAV Server instance, you might see a spike in “% Time in GC”, which measures the .NET memory garbage collection activity. My process shows a maximum of 7% and an average of a bit over 2% – numbers that do not suggest NAV is looking for more RAM.

Microsoft Dynamics NAV\% Primary key cache hit rate:

This counter should be above 90% – otherwise your cache might be too small, either because your cache settings are set too low or because the cache is shared between tenants and might need to be increased. In my case it is above 99%:

[Screenshot: % Primary key cache hit rate]

% Calculated fields cache hit rate averages 63%, which means that in 63% of the cases when we launched a CALCFIELDS command we hit the cache – a decent number!

[Screenshot: % Calculated fields cache hit rate]

# Open Connections is 5 and refers to the number of open connections made from the service tier to the database hosted on SQL Server. You might also be interested in the counter “# Active Sessions”, which keeps track of the number of concurrent connections the service tier manages.

[Screenshot: # Open Connections]

The rest of the counters report row counts, which may or may not be relevant given that the number of rows grows over time.

Having a baseline and regularly comparing performance counter logs against it is not just a proactive measure to keep a NAV server healthy. It is also easy to use and cheap (logman and perfmon are both built into Windows), qualities that appeal to NAV customers and Dynamics VARs.


An asynchronous processing solution for MS Dynamics NAV


From the earliest to the newest NAV versions, long-running tasks are usually accompanied by a progress bar, or by a window with multiple progress bars. It is the developer’s preferred way to inform the user about the progress of the launched task, and an obvious improvement over an hourglass cursor. There are plenty of business processes in any NAV database that do heavy processing. If we look at a posting task, NAV generates AddOn-specific ledger entries and standard ledger entries, creates invoices and lines, adjacent records and so on. These tasks can take a while and keep the user session busy.

This blog post is about a solution my boss and I designed and used on a few target processes. It allows users to immediately regain control of their session after launching a long-running task, while the processing is assigned to a NAV background session. The results of the processing are recorded and automatically retrieved by the calling page using the PingPong control add-in.

PingPong info: MSDN, Gunnar’s blog and Olof Simren’s blog.

How is our solution implemented?

A table with the entities to be processed, named “Generic Entity” in the code sample. At a minimum the table should have its primary key fields. Our sample has three fields in the primary key: Field_1, Field_2, Field_3.

[Diagram: asynchronous processing in NAV]

A card page for the “Generic Entity” table, named “Generic Entity Card”, whose only action launches the async process.

A factbox, “Queue Summary Factbox”, used on “Generic Entity Card” to reflect the status of the last queued record processed. The status gets updated by using a PingPong control add-in.
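As an illustration of how the factbox can poll for status, here is a sketch of the add-in wiring; the control name, the polling interval and the factbox source table are my assumptions, while the add-in itself is the standard Microsoft.Dynamics.Nav.Client.PingPong:

  // Field on "Queue Summary Factbox" (SourceTable assumed to be "Processing Queue")
  // bound to the PingPong control add-in (ControlAddIn = Microsoft.Dynamics.Nav.Client.PingPong).

  PingPongCtrl::AddInReady()
  BEGIN
    CurrPage.PingPongCtrl.Ping(2000);    // ask for a Pong in two seconds
  END;

  PingPongCtrl::Pong()
  BEGIN
    IF Rec.FIND('+') THEN;               // show the last queued record's status
    CurrPage.UPDATE(FALSE);
    IF Rec.Status IN [Rec.Status::Queued,Rec.Status::Processing] THEN
      CurrPage.PingPongCtrl.Ping(2000);  // keep polling until the background task finishes
  END;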

A new table, “Processing Queue”. At a minimum, add enough fields to be able to later locate the “Generic Entity” record that launched the process. Additionally, add an Option field “Status” with at least four values (Queued, Processing, Error, Success) and a Text field “Error Message” to record the errors that prevented the process from finishing successfully.

A new codeunit, “Process Queue” (whose Rec is “Processing Queue”). In the OnRun trigger of this codeunit, wrap the processing in a TryFunction to be able to roll back if errors are encountered.

If you want, you can add more fields to “Processing Queue”. In our environment we have “Document No.” of the resulting posted sales invoice and “Elapsed Time” to record the duration of the task. You could also enable Lookup on the factbox field to open a list page for the Processing Queue table.
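Putting the pieces together, here is a minimal sketch of the pattern; object numbers are omitted, the “Entry No.” AutoIncrement key and DoHeavyProcessing are placeholders of mine, and the other field names follow the sample above:

  // Action on "Generic Entity Card": enqueue the record, hand it to a
  // background session and return control to the user immediately.
  LOCAL PROCEDURE EnqueueForProcessing(GenericEntity : Record "Generic Entity");
  VAR
    ProcessingQueue : Record "Processing Queue";
    SessionId : Integer;
  BEGIN
    ProcessingQueue.INIT;
    ProcessingQueue."Entry No." := 0;                  // assumed AutoIncrement key
    ProcessingQueue.Field_1 := GenericEntity.Field_1;  // enough to locate the entity later
    ProcessingQueue.Field_2 := GenericEntity.Field_2;
    ProcessingQueue.Field_3 := GenericEntity.Field_3;
    ProcessingQueue.Status := ProcessingQueue.Status::Queued;
    ProcessingQueue.INSERT(TRUE);
    COMMIT;                                            // the background session must see the record
    STARTSESSION(SessionId,CODEUNIT::"Process Queue",COMPANYNAME,ProcessingQueue);
  END;

  // Codeunit "Process Queue" (TableNo = "Processing Queue") - OnRun trigger:
  OnRun()
  BEGIN
    Rec.Status := Rec.Status::Processing;
    Rec.MODIFY;
    COMMIT;
    IF TryProcess(Rec) THEN
      Rec.Status := Rec.Status::Success
    ELSE BEGIN
      Rec.Status := Rec.Status::Error;
      Rec."Error Message" := COPYSTR(GETLASTERRORTEXT,1,MAXSTRLEN(Rec."Error Message"));
    END;
    Rec.MODIFY;
  END;

  [TryFunction]
  LOCAL PROCEDURE TryProcess(VAR ProcessingQueue : Record "Processing Queue");
  BEGIN
    DoHeavyProcessing(ProcessingQueue);  // placeholder for the posting / heavy work
  END;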

If you want to try our solution download it from here.