From BC Telemetry to GitHub Issue — Automatically
A daily Logic App that catches RT0012 database lock timeout events, fetches the offending AL source code, and creates a GitHub Issue with a developer resolution checklist — with zero manual steps.
Database lock timeouts are one of the most common performance problems in Business Central. They’re noisy, hard to reproduce, and easy to miss — unless you’re watching Application Insights full-time. Most teams only find out about them when a user complains.
We wanted to flip that: surface the problem automatically, route it to the right developer, and hand them everything they need to fix it — without anyone having to write a query or read a wall of telemetry.
Here’s what’s been built and how it works.
The AL Extension
The project is a Business Central AL extension called Telemetry2Github.
It contains a deliberately broken demo that reliably triggers the RT0012
telemetry signal — the same signal you’d see in production when two sessions compete
for the same table lock.
The problem is in LockDemoMgt.Codeunit.al. HoldLockForDuration()
calls LockTable() at table level, then sleeps for 65 seconds inside the
same transaction:
```al
procedure HoldLockForDuration(SleepMilliseconds: Integer)
var
    LockDemoTable: Record "Lock Demo Table";
begin
    LockDemoTable.LockTable(); // table-level lock
    if LockDemoTable.FindFirst() then begin
        LockDemoTable.Description := StrSubstNo('Locked at %1', Format(CurrentDateTime()));
        LockDemoTable.Modify();
    end;
    Sleep(SleepMilliseconds); // holds the lock open — this is the problem
    Commit(); // releases lock, far too late
end;
```
This pattern — LockTable() at table scope followed by a long-running operation before Commit() — is the most common cause of RT0012 in production extensions.
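The fix is to keep the transaction, and therefore the lock, as short as possible. The sketch below is not code from the extension, but it shows the shape of a corrected version: commit immediately after the modify, and do any long-running work outside the transaction:

```al
procedure HoldLockForDuration(SleepMilliseconds: Integer)
var
    LockDemoTable: Record "Lock Demo Table";
begin
    LockDemoTable.LockTable();
    if LockDemoTable.FindFirst() then begin
        LockDemoTable.Description := StrSubstNo('Locked at %1', Format(CurrentDateTime()));
        LockDemoTable.Modify();
    end;
    Commit(); // release the lock immediately; the transaction stays milliseconds long
    Sleep(SleepMilliseconds); // long-running work happens with no lock held
end;
```

With the lock released before the sleep, a second session can modify the same record at any point, and RT0012 never fires.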
Triggering RT0012 in Business Central
Reproducing RT0012 only takes two browser tabs:
- Open Lock Demo – RT0012 Trigger → click Seed Data
- Tab 1 → Hold Lock (65 s) — table lock acquired, session sleeps
- Tab 2 → Try Lock within 30 seconds
- Tab 2 times out — BC throws a lock timeout error and emits RT0012 to Application Insights
Because the demo holds the lock for 65 seconds and BC’s server lock timeout is ~30 seconds, the timeout is 100% reproducible on demand.
From Application Insights to GitHub Issue
Once RT0012 lands in Application Insights, a daily Azure Logic App takes over.
It queries Application Insights for new lock timeout events, searches the GitHub repo for
the offending .al source file, calls an AI model with the event data and
actual code, and creates a GitHub Issue with everything the developer needs:
- **Event summary**: Object, lock count, sessions affected, first/last seen
- **Root cause**: One sentence grounded in the actual code
- **Checklist**: Specific, actionable fixes as GitHub checkboxes
- **Code analysis**: Findings across LockTable, long transactions, Commit placement, and SetAutoCalcFields
- **Suspect lines**: The exact lines from the source file
- **Prevention tip**: Guidance to avoid the same class of issue in future
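The Logic App's daily query might look something like the sketch below. BC emits RT0012 as a trace with the event ID and AL object name in customDimensions; the exact summarization columns here are an assumption, not the deployed query:

```kql
traces
| where timestamp > ago(1d)
| where customDimensions.eventId == 'RT0012'
| extend objectName = tostring(customDimensions.alObjectName)
| summarize lockCount = count(),
            sessionsAffected = dcount(session_Id),
            firstSeen = min(timestamp),
            lastSeen = max(timestamp)
          by objectName
```

One row per offending object gives the Logic App everything it needs to search the repo for the matching .al file and build the issue summary.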
The whole pipeline is defined as Bicep and deploys with a single
az deployment group create command.
What’s Next — All Performance Events
Everything we built is event-agnostic. Expanding to all BC performance signals is one KQL change:
```kql
// Current
| where customDimensions.eventId == 'RT0012'

// Expanded
| where customDimensions.eventId startswith 'RT'
```
This covers RT0012, RT0013, RT0014, RT0015, and more — with no changes to the Logic App, AI analysis, or issue structure.
Summary
With a Logic App, a Bicep template, and an AI API call, we turned a noisy Application Insights signal into an actionable GitHub Issue — root cause, code analysis, checklist, and prevention tip, all generated automatically from the real source code.
No manual telemetry review. No copy-pasting KQL results into tickets. The pipeline is self-contained, cheap to run, and takes one deploy command to set up.