Category: Dynamics 365 Business Central

  • Using an Azure Function as an OAuth 2.0 redirect url


    We’ve got some great development tools these days in Business Central, but it’s still not possible to solve every problem with AL. This week I hit such a problem while developing a JavaScript Control Addin to embed a third-party web application in Business Central SaaS. The problem was OAuth 2.0 authentication, or more specifically how to get an access token from a redirect URL.

    The third-party web application required OAuth 2.0 implicit flow; for my use case, this was going to look something like this:

    OAuth 2.0 Implicit flow

    This is a slight simplification, as we only need to authorize if our previously obtained token has expired, but that is not really the point of this blog post. My problem was step 5, the redirect: this is a URL the authentication server will redirect our users’ browser to (in this case the src of the iFrame used by the Control Addin). The data we need is in the URL generated by the authentication server, and it will need to be stripped off (6) and used in subsequent interactions with the third-party web app:

    GET $base_url/api/oauth2/authorize?client_id=$client_id&response_type=token&redirect_uri=$redirect_uri
    
    ==>
    
    Location: $redirect_uri#expires_in=3600&token_type=Bearer&scope=$scope&access_token=$token

    So how do we handle this requirement in Business Central? The authentication server needs a URL ($redirect_uri) from us, which it will use to send the access data after the user has logged in. This redirect service will then need to redirect the user back to our application. Well, the answer is we can’t: there is no way for us to create a web service in Business Central that will handle such a request. We need to build a service that accepts an HTTP GET request and extracts the parameters to use in our application (technically the parameters after the redirect URL are not query parameters; note the use of # rather than ?, more on this later), and then loads the web app we want to authenticate with into our Control Addin iFrame.

    Using an Azure Function as an OAuth 2.0 redirect url

    The great news is Microsoft has an offering (which I’ve been trying to find an excuse to use for some time!) called Azure Functions. Azure Functions allow you to quickly (and cheaply) deploy functions as web services to be used by other applications.

    I needed an Azure Function that, when called, would send the data (hash property) in the URL to my Control Addin:

    Azure Function as redirect url

    1. The iFrame src is set to the OAuth authentication URL, and the user logs in with their credentials.
    2. On success, the authentication server redirects to our Azure Function URL with the access details in the URL hash property.
    3. The Azure Function response loads in the Control Addin iFrame.
    4. The Azure Function sends the hash property back to our application.
    5. The Control Addin decodes the hash property to extract the access token.

    Hash Property vs. Query Parameters

    I made the distinction earlier that what we’re trying to extract from the redirect URL is not query parameters, but the location hash property. This distinction is important because it affects how the data is retrieved by our Azure Function.

    $redirect_uri#expires_in=3600&token_type=Bearer&scope=$scope&access_token=$token

    A hash property in a URL starts with a # character, whilst query parameters follow a ? character. The fundamental difference is that the hash property is only available to the browser and is not sent to the web server. This means we’ll need to use client-side scripting to retrieve the data.

    Note: OAuth 2.0 does not always send data using a hash property; it depends on the flow you’re implementing.
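    For illustration, here’s one way the fragment could be parsed client-side; a minimal sketch using the standard URL and URLSearchParams APIs (the sample URL and token values are made up):

    ```javascript
    // Sketch: extracting values from the URL fragment (hash) client-side.
    // In the browser you'd read window.location.hash; here we parse a sample URL.
    const redirectUrl = 'https://example.com/callback#expires_in=3600&token_type=Bearer&access_token=abc123';

    const hash = new URL(redirectUrl).hash;            // '#expires_in=3600&...'
    const params = new URLSearchParams(hash.slice(1)); // drop the leading '#'

    console.log(params.get('access_token')); // 'abc123'
    console.log(params.get('expires_in'));   // '3600'
    ```

    Note that everything comes back as a string; expires_in would need converting to a number before use.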

    Implementing the Azure Function

    The Azure Function is incredibly simple: all we are doing is receiving a request and sending the hash property to the Control Addin. I used a Node.js based function, simply because I was already using JavaScript and AL for this project and didn’t feel the need to add a third language 🙂

    Step 1 – get the hash property:

    let hashString = window.location.hash;

    Step 2 – send the hash property to the Control Addin for processing:

    This takes a little more thought. You may be tempted down the lines of sending the hash value back to Business Central via a web service. Definitely doable, but as our Azure Function is running inside the iFrame in our Control Addin, we can simply use client-side window messaging to post the value back to our main window for processing:

    window.parent.postMessage(msg, "*");

    The above JavaScript code posts a message to the parent window, which will be our Control Addin.

    The receiving window will need to know what the message is in order to process it. I created the msg variable as a JavaScript object so I can pass through some additional information:

    let msg = {
        type: "xp.authentication",
        hash: hashString
    };

    My message now has a type value and a hash value which I’ll be able to pick up in my Control Addin code. 

    Of course this is client-side JavaScript (remember the hash property is only available to the browser), and it will need to run inside the iFrame when the Azure Function is invoked. This means our Azure Function will need to return this code for the browser to execute. I did this by creating a simple HTML document as a string and passing it back as the response body. The full Azure Function code looks like this:

    module.exports = async function (context, req) {
        const responseMessage = '<html><script>' +
            'let hashString = window.location.hash;' +
            'let msg = {' +
            'type : "xp.authentication", ' +
            'hash : hashString ' +
            '}; ' +
            'window.parent.postMessage(msg, "*"); ' +
            '</script>' +
            '<h1>Getting access token...</h1></html>';
        context.res = {
            headers: { 'Content-Type': 'text/html' },
            body: responseMessage
        };
    }
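    If you want to sanity-check the handler outside Azure, you can invoke it with a stub context object, mimicking how the Functions runtime collects the response. This is just a sketch (the handler body is an inline, slightly condensed copy; the real runtime passes much richer context and req objects):

    ```javascript
    // Inline copy of the handler, invoked locally with a stub context.
    const handler = async function (context, req) {
        const responseMessage = '<html><script>' +
            'let hashString = window.location.hash;' +
            'let msg = { type: "xp.authentication", hash: hashString };' +
            'window.parent.postMessage(msg, "*");' +
            '</script><h1>Getting access token...</h1></html>';
        context.res = {
            headers: { 'Content-Type': 'text/html' },
            body: responseMessage
        };
    };

    const context = {};                  // stub: the runtime normally supplies this
    handler(context, {}).then(function () {
        console.log(context.res.headers['Content-Type']);   // 'text/html'
        console.log(context.res.body.includes('postMessage')); // true
    });
    ```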

    Retrieving the hash in the Control Addin

    The final part of the jigsaw is to pick up the message sent by the Azure Function. This is done using browser events.

    Within our Control Addin code we can add an event listener to a message event as follows:

    window.addEventListener("message", function (pEvent) {
       if (pEvent.source !== iFrame.contentWindow)
           return;

       handleMessage(pEvent.data);
    });

    The above code uses an anonymous function as an event listener for the message event. I’m using message events to communicate with the third-party web app as well, so the above code sends all message event data that has come from our Control Addin iFrame to my handleMessage function:

    function handleMessage(pMessage) {
       //redirect token received?
       if (pMessage.type === "xp.authentication") {
           decodeAuthHash(pMessage.hash);
           iFrame.src = 'https://the-url-of-third-party-app.com';
       }
    // ... more event "types" picked up here
    }

    Now you can see why it was important to give the msg variable a type in my Azure Function. If I find the message is of type xp.authentication then I will try and process the accompanying hash property using the decodeAuthHash() function. I’m then switching the iFrame src to the third-party application url specific to my solution.

    From here we can extract the required fields from the hash string for use in our application. I like to create a JSON object to hold the data as it’s a convenient format to use:

    function decodeAuthHash(authHash) {
       if (authHash === '' || authHash === undefined) {
           return;
       }

       authHash = authHash.replace('#', '');
       let hashPairs = authHash.split('&');
       const hashJson = {};
       hashPairs.forEach(function (hashPair) {
           let splitPair = hashPair.split('=');
           hashJson[splitPair[0]] = splitPair[1];
       });
       tAccessToken = hashJson;
    }

    I’m assigning the JSON object to a global variable tAccessToken to use in further functions. I can then retrieve the access_token as follows:

    let accessToken = tAccessToken.access_token;

    See what I mean about JSON being a convenient format in JavaScript? You can use it like any other object with properties; no need to find the key and get the value as a JsonToken like we do in AL, so the code is much cleaner. That said, I’m no JavaScript expert so please let me know if you have a more elegant solution!
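    As a quick standalone check of the decoding logic, here’s a sketch of the same parsing that returns the object instead of assigning it to a global (the sample hash values are made up):

    ```javascript
    // Standalone sketch of the hash-decoding logic, runnable outside the browser.
    function decodeHash(authHash) {
        const hashJson = {};
        if (!authHash) {
            return hashJson;                      // empty or missing hash: empty object
        }
        authHash.replace('#', '').split('&').forEach(function (hashPair) {
            const splitPair = hashPair.split('=');
            hashJson[splitPair[0]] = splitPair[1]; // key/value pairs become properties
        });
        return hashJson;
    }

    const token = decodeHash('#expires_in=3600&token_type=Bearer&access_token=abc123');
    console.log(token.access_token); // 'abc123'
    console.log(token.token_type);   // 'Bearer'
    ```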

    That’s it, thanks for reading.

  • How-to stop auto-login to Business Central On-prem with Windows Auth

    authentication

    Probably quite an unusual scenario, so I’ll explain why it came up!

    A retail customer using LS Central wants to use their POS machine for both POS and back office functionality.

    The complication is: the POS (Web Client) needs to automatically log in with the POS Windows account (the signed-in Windows account), but for back office tasks they want the user to log in with their own AD accounts.

    The first thing that comes to mind is NavUserPassword for the BO users, right? Well, the users are set up in AD, so it makes sense to use that rather than adding the extra overhead of maintaining users/passwords in two places; plus they already have infrastructure in place using Windows auth. UserName auth? Sounds like it should work, but again it would require an additional BC service, and as the POS machine is on the domain and we’re using the Web Client it doesn’t really fit the brief:

    UserName – With this setting, the user is prompted for username/password credentials when they access Business Central. These credentials are then validated against Windows authentication by Business Central Server. There must already be a corresponding user in Windows. Security certificates are required to protect the passing of credentials across a wide-area network. Typically, this setting should be used when the Business Central Server computer is part of an authenticating Active Directory domain, but the computer where the Dynamics NAV Client connected to Business Central is installed is not part of the domain.


    So anyway, to cut a long story short, you can disable passthrough Windows authentication in Chromium based web browsers (Chrome and Edge) by emptying the authentication server whitelist. This is done by adding a command line switch: --auth-server-whitelist="_"

    In this case this meant creating a new button called Backoffice in LS Start to open Chrome, adding the --auth-server-whitelist switch to the parameter list along with the BC URL.

    disable auto logon

    Of course, you can also add this command line switch to a Windows shortcut:

    auth server whitelist

    You’ll notice this disables passthrough authentication for the open browser window; the problem is that if you open another link in Chrome/Edge, by default it will open in the same window. To get around this we’ll need to add another command line switch to force Chrome/Edge to open the link in its own window: --new-window
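    Putting the two switches together, a shortcut target or command line might look something like this (the Chrome path and Business Central URL here are illustrative, substitute your own):

    ```shell
    "C:\Program Files\Google\Chrome\Application\chrome.exe" --new-window --auth-server-whitelist="_" "https://bcserver/BC/"
    ```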

    If you want to learn more about Chromium command line switches, have a browse over here: List of Chromium Command Line Switches « Peter Beverloo

    Job done.

  • Uninstall all dependent apps in Business Central On-premise

    Uninstall all dependent apps

    Ever tried to uninstall a Business Central app in PowerShell, and failed due to dependencies on that app? I hit that problem and decided to share a script to detect and uninstall all dependent apps.

    I was looking to publish an app from VS Code on a copy of a customer’s database so I could debug and develop locally, and needed to uninstall the app first before I could deploy. The problem was, this app was a dependency for several other apps created for the customer.

    What I found is that we can get a full list of installed apps with the Get-NavAppInfo cmdlet, then for each app iterate through its dependencies until we find our app:

    https://gist.github.com/DanKinsella/b3b4a534fe23204c35affc56b296d3c2.js

    The script prints out a list of app names and versions which can be used to reinstall them afterwards.
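    The core idea of the script, walking each installed app’s dependency list to see whether it directly or transitively depends on the target app, can be sketched generically like this (the app data is illustrative; the real script works with Get-NavAppInfo output in PowerShell):

    ```javascript
    // Generic sketch: given a list of installed apps, find every app that
    // (directly or transitively) depends on a target app.
    function dependentApps(apps, targetName) {
        const dependsOnTarget = (app, seen) => {
            seen = seen || new Set();                       // guard against cycles
            return (app.dependencies || []).some(dep => {
                if (dep === targetName) return true;        // direct dependency
                if (seen.has(dep)) return false;
                seen.add(dep);
                const depApp = apps.find(a => a.name === dep);
                return depApp ? dependsOnTarget(depApp, seen) : false; // transitive
            });
        };
        return apps.filter(app => dependsOnTarget(app)).map(app => app.name);
    }

    const installed = [
        { name: 'Library', dependencies: [] },
        { name: 'Core',    dependencies: ['Library'] },
        { name: 'Sales',   dependencies: ['Core'] },
        { name: 'Reports', dependencies: ['Sales'] }
    ];
    console.log(dependentApps(installed, 'Core')); // [ 'Sales', 'Reports' ]
    ```

    The dependents then need to be uninstalled before the target app itself, deepest dependents first.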

    Hope this is useful for someone else.

  • Developing with the new enhanced email feature

    Enhanced Email Feature

    Yesterday, I had an email notification about a new blog post from my colleague Josh Anglesea. A great post on the new enhanced email feature coming with Business Central v17. Out of pure coincidence, I happened to be looking into the same functionality, but from a development perspective… So inspired by Josh here’s a blog post about developing with the new enhanced email feature.

    Josh already did a great job explaining how to setup the new functionality, so I highly recommend you read his post before continuing as I won’t go into this.

    The major change this new enhanced email functionality offers is the ability to configure multiple email accounts and have Business Central select the correct account based on the scenario. This is in contrast to the single account you can configure per Business Central company using the old SMTP Setup page.

    This brings us to the first new concept: Email Scenario

    When you create a new email account, you can choose which scenarios the account will be used in. The scenarios offered by the base application are defined in an enumextension object, which extends the system app’s Email Scenario enum:

    enumextension 8891 "Base Email Scenario" extends "Email Scenario"
    {
       value(1; "Invite External Accountant")
       {
           Caption = 'Invite External Accountant';
       }
       value(2; "Notification")
       {
           Caption = 'Notification';
       }
   value(3; "Job Planning Line Calendar")
   {
       Caption = 'Job Planning Line Calendar';
   }

   // Document usage
   // ------------------------------------------------------------------------------------------------

       value(100; "Sales Quote")
       {
           Caption = 'Sales Quote';
       }
       value(101; "Sales Order")
       {
           Caption = 'Sales Order';
       }
       value(102; "Sales Invoice")
       {
           Caption = 'Sales Invoice';
       }
       value(103; "Sales Credit Memo")
       {
           Caption = 'Sales Credit Memo';
       }
       value(105; "Purchase Quote")
       {
           Caption = 'Purchase Quote';
       }
       value(106; "Purchase Order")
       {
           Caption = 'Purchase Order';
       }
       value(115; "Reminder")
       {
           Caption = 'Reminder';
       }
       value(116; "Finance Charge")
       {
           Caption = 'Finance Charge';
       }
       value(129; "Service Quote")
       {
           Caption = 'Service Quote';
       }
       value(130; "Service Order")
       {
           Caption = 'Service Order';
       }
       value(131; "Service Invoice")
       {
           Caption = 'Service Invoice';
       }
       value(132; "Service Credit Memo")
       {
           Caption = 'Service Credit Memo';
       }
       value(184; "Posted Vendor Remittance")
       {
           Caption = 'Posted Vendor Remittance';
       }
       value(185; "Customer Statement")
       {
           Caption = 'Customer Statement';
       }
       value(186; "Vendor Remittance")
       {
           Caption = 'Vendor Remittance';
       }
    }



    When building your own functionality on top of this, you’ll probably want to add to the list of scenarios. Good news! This is as simple as extending the Email Scenario enum in our own apps:

    enumextension 50100 "Dan Test Email Scenario DDK" extends "Email Scenario"
    {
       value(50100; "Dan Test 1 DDK")
       {
           Caption = 'Dan Test 1';
       }
       value(50101; "Dan Test 2 DDK")
       {
           Caption = 'Dan Test 2';
       }
    }


    Once you’ve created your enumextension, the new options are automatically included in the Email Scenario list:

    Email Scenario

    Next up, we need to create an email message and send it. This is done using the Email Message and Email Codeunits.

    Note: The enhanced email functionality is part of the system app, so you won’t be able to see the source code from VS Code, or by extracting the base application app file. The system application modules are open source and available on GitHub, where you can view the code and submit your own changes; the email module can be found here: https://github.com/microsoft/ALAppExtensions/tree/master/Modules/System/Email


    The following example shows how you can create an email message and send it using the email scenario functionality:

    procedure SendEmail()
    var
        Email: Codeunit Email;
        EmailMessage: Codeunit "Email Message";
    begin
        EmailMessage.Create('dan@dankinsella.blog', 'My Subject', 'My message body text');
        Email.Send(EmailMessage, Enum::"Email Scenario"::"Dan Test 1 DDK");
    end;

    As you can see, we can pass the email scenario into the send procedure. If an account is associated with this scenario it will be selected for use. If no account is assigned to this scenario the default account will be used.

    Note the Email.Send() procedure is overloaded, meaning it can take different sets of parameters, so have a look at the object here for all the available options.

    Some other cool stuff to check out:

    You can open the new email editor using Email.OpenInEditor() to allow your users to edit the email before sending:

    /// <summary>
    /// Opens an email message in "Email Editor" page.
    /// </summary>
    /// <param name="EmailMessage">The email message to use as payload.</param>
    /// <param name="EmailScenario">The scenario to use in order to determine the email account to use on the page.</param>
    procedure OpenInEditor(EmailMessage: Codeunit "Email Message"; EmailScenario: Enum "Email Scenario")
    begin
        EmailImpl.OpenInEditor(EmailMessage, EmailScenario, false);
    end;

    We can also send the email in the background by putting it on the scheduler using the Enqueue procedure:

    /// <summary>
    /// Enqueues an email to be sent in the background.
    /// </summary>
    /// <param name="EmailMessage">The email message to use as payload.</param>
    /// <param name="EmailScenario">The scenario to use in order to determine the email account to use for sending the email.</param>
    procedure Enqueue(EmailMessage: Codeunit "Email Message"; EmailScenario: Enum "Email Scenario")
    begin
        EmailImpl.Enqueue(EmailMessage, EmailScenario);
    end;

    Attachments can be added to the email message using an InStream, something like this:

    procedure SendEmailWithAttachment(AttachmentTempBlob: Codeunit "Temp Blob")
    var
        Email: Codeunit Email;
        EmailMessage: Codeunit "Email Message";
        AttachmentInStream: InStream;
    begin
        EmailMessage.Create('dan@dankinsella.blog', 'My Subject', 'My message body text');
        AttachmentTempBlob.CreateInStream(AttachmentInStream);
        EmailMessage.AddAttachment('My Attachment name', 'PDF', AttachmentInStream);
        Email.Send(EmailMessage, Enum::"Email Scenario"::"Dan Test 2 DDK");
    end;

  • How-to: Create External POS Commands for LS Central

    POS Command

    In this blog post I show how to create External POS Commands for LS Central, and introduce a snippet to make this task repeatable.

    External POS commands allow you to extend LS Central functionality by creating Codeunits that can be run via POS buttons or actions.

    Before you get started, you might want to find out how to run LS Central in a Docker container.

    The high-level tasks are as follows:

    • Create a Codeunit to contain our new POS Command module.
    • Write the POS Command logic in a procedure.
    • Add code to register the module and POS command(s).
    • Register the new POS command module in LS Central.

    To view the POS commands within Business Central, open the POS Commands list:

    POS Command

    As we can see above, POS commands have a Function Code, which is used to call the command, and a Function Type field, which can be either:

    • POS Internal – Don’t specify a codeunit; these are used internally (hard-coded) within LS Central.
    • POS External – Allows you to specify a Codeunit to extend LS Central functionality.

    We’re going to be focusing on External POS commands, which we can see from the filtered External POS Commands List:

    External POS Commands

    The listed POS Commands can be assigned to buttons on the POS, POS Actions or called directly in AL code.

    Create a POS Command Codeunit

    POS Command Codeunits follow a pattern:

    • Must have the TableNo property set to “POS Menu Line”.
    • OnRun() handles the registration event.
    • OnRun() handles command invocation with a case statement.

    The OnRun trigger of our Codeunit will need to handle one of two scenarios: either the Codeunit is being registered, or a command is being invoked.

    To register our new module and external POS command we make use of the “POS Command Registration” Codeunit, which contains the two procedures we’ll need:

    • RegisterModule – used to store the module information:
      • Module – A code for your Codeunit Module (Code[20]).
      • Description – A description of the module (Text[50]).
      • Codeunit – The Codeunit Object Id being registered.
    • RegisterExtCommand – used to register each of the commands in our module:
      • FunctionCode – A code for the POS Command being registered (Code[20]).
      • Description – A description of the POS command (Text[50]).
      • Codeunit – The Object Id of the Codeunit containing the POS command.
      • ParameterType – The parameter data type, an option field on the POS Command table. 0 for no parameter.
      • Module – The code of the module the POS command belongs to.
      • BackGrFading – Boolean to fade the POS background when this command runs.

    When a POS command is being requested, the OnRun() trigger is invoked with the “POS Menu Line” record’s Command field holding the FunctionCode of our POS command. We use a case statement to determine which (if any) procedure within our Codeunit to run.

    As a personal preference, I like to encapsulate the FunctionCode for each POS command and the Module code value in procedures.

    It’s good practice not to enter literal text values into code generally, but I also find it useful to be able to retrieve these codes in certain circumstances. Note I’ve added the Locked parameter to the Label constants; this stops the code value being translated. The code should stay the same for consistency, whatever language is used.

    Register the POS Command

    Before we can use our new POS command module we’ll need to register it within LS Central. This is done from the External POS Commands page:

    Register POS Command

    Filter to our new Codeunit and hit OK:

    Select POS Command Codeunit

    Check our new POS command is registered:

    Registered POS command

    Note: Every time you add a new POS command to your module Codeunit you’ll need to re-register.

    Get the full demo Codeunit on GitHub.

    Get the Visual Studio Code snippet here.

  • AL Page: show integer or decimal as mandatory

     

    A quick tip, as it’s been a while since my last post.

    The ShowMandatory property for fields on a page object is helpful for drawing the user’s attention to required fields on a page.

    If a field is not populated on a page and the ShowMandatory property for that field is set to true, then a red asterisk (*) will appear:

    Mandatory Decimal

    The problem is, for number-based fields which default to 0, the field is actually populated, so the asterisk will not show up.

    Luckily there is an easy solution: to show integer or decimal fields as mandatory we can also set the field’s BlankZero property to true:

    pageextension 50000 "Item Card DK" extends "Item Card"
    {
        layout
        {
            modify("Net Weight")
            {
                BlankZero = true;
                ShowMandatory = true;
            }
        }
    }
    If you have more complex (or fringe) requirements, such as requiring numbers to be positive, there is also the BlankNumbers page field property. BlankNumbers can be set to the following values:

    • DontBlank (default) – Do not clear any numbers
    • BlankNeg – Clear negative numbers
    • BlankNegAndZero – Clear negative numbers and zero
    • BlankZero – Clear numbers equal to zero
    • BlankZeroAndPos – Clear positive numbers and zero
    • BlankPos – Clear positive numbers

    Note: BlankNumbers is not available when modifying fields from a pageextension object; you can only use this property on a field declaration.

  • Programming Lookup and Numpad controls for LS Central Web POS

    Programming lookups and numpads for LS Central Web POS

    If you’ve been used to working with LS Central Windows POS, you may be in for a bit of a surprise when upgrading your existing solution or programming lookups and numpads for LS Central Web POS.


    LS have implemented the POS user interface with client add-in components embedded within a Business Central page.

    The fundamental difference between the Web and Windows POS is in the interaction between Business Central and the LS client add-in components:

    • LS Windows POS – Uses .NET add-in components which behave synchronously.
    • LS Web POS – Uses JavaScript client add-in components which behave asynchronously.

    So what does this mean for my AL code?

    Synchronous behavior means you can invoke the Numpad or Lookup control and get the result as a return value. Nice and simple in terms of AL code:

    if OpenNumericKeyboard(DepositTxt, '', ValueTxt, false) then
      Evaluate(Amount, ValueTxt);
    
    

    With the LS Web POS, the asynchronous behavior of the JavaScript add-in components requires a bit more work: you must first invoke the control, then wait for an event which will contain the result.

    The event you need to subscribe to will vary depending on the control you’ll be using, but you’ll find the relevant events in Codeunit “EPOS Controler” (LS spelling, not mine!), for example:

    • OnKeyboardResult
    • OnLookupResult
    • OnNumpadResult

    OK, so let’s look at an example.

    Programming a Numpad control

    So the steps are as follows:

    • Open the numeric keyboard control.
    • Subscribe to the OnNumpadResult event to get the response.
    • Process the response, if the result is for you.

    Open the NumericKeyboard

    To invoke the number pad control we use the OpenNumericKeyboardEx() procedure from Codeunit 10012732 “EPOS Control Interface HTML”, which has the following parameters:

    • Caption: The title of the number pad control. Should be instructional to the user.
    • DefaultValue: If required, you can pre-populate the value on the popup using this parameter, or leave it blank.
    • Result: Not used.
    • payload: An identifier we can use when deciding whether to handle the OnNumpadResult() event; for instance, we could use the POS Command code.

        local procedure RecordFootfall()
        var
            EPOSControl: Codeunit "EPOS Control Interface HTML";
            EnterFootfallLbl: Label 'Enter store footfall';
            Result: Action;
        begin
            EPOSControl.OpenNumericKeyboardEx(EnterFootfallLbl, '', Result, RecordFootfallCommand());
        end;

    Subscribe to the OnNumpadResult() event

    Once we’ve opened the numeric keyboard control, we’ll need to wait to pick up the result via an event. We subscribe to the OnNumpadResult() event in Codeunit 10012718 “EPOS Controler”, and test that the raised event is the one we’re interested in using the payload parameter:

        [EventSubscriber(ObjectType::Codeunit, Codeunit::"EPOS Controler", 'OnNumpadResult', '', false, false)]
        local procedure OnNumpadResult(payload: Text; inputValue: Text; resultOK: Boolean; VAR processed: Boolean)
        begin
            case payload of
                RecordFootfallCommand():
                    HandleFootfall(inputValue, resultOK, processed);
            end;
        end;

    Process the result

    The processed parameter needs to be set to true to stop LS from trying to continue and process the result itself. In our case it would actually throw an error, as at some point it will try to evaluate the payload to an integer, with the usual LS error-handling finesse…

    We can use the resultOK parameter to check if the user pressed the OK button. As the return value is a Text variable we’ll have to evaluate this to a Decimal to get the format we require.

        local procedure HandleFootfall(FootfallAmountTxt: Text; ResultOk: Boolean; var Processed: Boolean)
        var
            Footfall: Decimal;
            DataTypeErr: Label 'Footfall amount must be a decimal value.';
        begin
            Processed := true;
            if not ResultOk then
                exit;
            if FootfallAmountTxt = '' then
                exit;
            if not Evaluate(Footfall, FootfallAmountTxt) then
                error(DataTypeErr);
        end;

    That’s it, thanks for reading.
  • Managing AL Language extensions per workspace

    When working on multiple versions of Business Central, you may have got into the habit of managing AL Language extensions by installing and uninstalling or disabling different versions of the extension as you move between projects.

    Microsoft have made this management easier in recent versions of the AL Language extension found on the Visual Studio Code Marketplace by providing multi-version support. This allows the developer to select the target platform version when creating a project workspace:

    Select Business Central platform

     

    But what if you want to develop for a specific on-premise build of Business Central / Dynamics NAV 2018, or a currently unsupported version such as an insider build from the Collaborate programme? You’ll still need to import the VSIX file that ships with that version.

    Managing AL Language extensions

    Visual Studio Code provides functionality to enable/disable extensions on a per-workspace basis.

    So to use this in practice, let’s say you want your default AL Language extension in Visual Studio Code to be the version that comes from the Visual Studio Code Marketplace. If we leave this version alone after install, it will be enabled globally (i.e. available for all projects):

    Globally available VS Code extension

     

    Now let’s create a new project where we want to use a specific AL Language extension shipped with the Business Central version we’re developing for.

    There are a few steps we’ll need to complete as follows:

    1. Obtain the VSIX file for the target AL language extension (found on the product DVD, or output in the terminal if using containers).
    2. Create a new workspace in Visual Studio Code by opening a folder.
    3. Import the VSIX file.
    4. Disable the new AL Language extension, then select Enable (Workspace).
    5. Identify the global AL Language extension and select Disable (Workspace).

    Obtain target VSIX file

    VSIX is the file format for Visual Studio Code extension packages. Each version of Business Central on-premise (and Dynamics NAV 2018) ships with a VSIX file in the product DVD.

    In the Business Central 2019 Wave 2 “DVD” the VSIX package is in the following location (assuming you’ve unzipped to C:\Temp):

    C:\Temp\Dynamics 365 Business Central 2019 Release Wave 2.GB.36649\ModernDev\program files\Microsoft Dynamics NAV\150\AL Development Environment\ALLanguage.vsix

    When using NAV/BC Docker containers, a link to download the VSIX package is printed to the console when creating the container. If you’ve closed your console since creating the container, you can use the docker logs command to display this information for any given container:

    PS C:\WINDOWS\system32> docker logs ALDEMO
    Initializing...
    Starting Container
    Hostname is ALDEMO
    PublicDnsName is ALDEMO
    Using NavUserPassword Authentication
    Starting Local SQL Server
    Starting Internet Information Server
    Creating Self Signed Certificate
    Self Signed Certificate Thumbprint B4342A2900B851600763A08FD1C8B03CC8B28622
    Modifying Service Tier Config File with Instance Specific Settings
    Starting Service Tier
    Registering event sources
    Creating DotNetCore Web Server Instance
    Enabling Financials User Experience
    Creating http download site
    Setting SA Password and enabling SA
    Creating dank as SQL User and add to sysadmin
    Creating SUPER user
    WARNING: The password that you entered does not meet the minimum requirements. 
    It should be at least 8 characters long and contain at least one uppercase 
    letter, one lowercase letter, and one number.
    Container IP Address: 172.30.134.252
    Container Hostname : ALDEMO
    Container Dns Name : ALDEMO
    Web Client : http://ALDEMO/BC/
    Dev. Server : http://ALDEMO
    Dev. ServerInstance : BC
    
    Files:
    http://ALDEMO:8080/al-4.0.192371.vsix

    Just copy the VSIX file URL into your browser to download.
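    If you’d rather script the download, the URL can be pulled out of the log output with a quick shell snippet. This is a minimal sketch that uses the sample log text above in place of a live docker logs call (the container name ALDEMO comes from the example; against a real container you would pipe `docker logs ALDEMO` into grep instead):

    ```shell
    # Sample of the 'Files:' section printed by the container
    # (stands in for: docker logs ALDEMO)
    log_output='Files:
    http://ALDEMO:8080/al-4.0.192371.vsix'

    # Pull the first .vsix URL out of the log text
    vsix_url=$(printf '%s\n' "$log_output" | grep -oE 'http://[^[:space:]]+\.vsix' | head -n 1)
    echo "$vsix_url"

    # With the URL in hand you could fetch the package, e.g.:
    # curl -fSL "$vsix_url" -o ALLanguage.vsix
    ```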

    Create a new workspace in Visual Studio Code

    You could switch the target AL Language extension on an existing project, but you may then need to adjust some parameters in the launch.json and/or app.json files generated by the AL Language extension, as these differ between versions.

    To keep things simple I’m going to create a new project to use by creating a new folder and opening that in VS Code. Once I’ve activated the AL Language version I require, I’ll use that to generate the app.json and launch.json files.

    1. Hit F1 to open the command palette.
    2. Search for and execute Open Folder.
    3. In Open Folder Dialog create new folder and open.
    Import the VSIX file into Visual Studio Code

    The AL language VSIX file can now be imported into Visual Studio Code:

    Import VSIX - VS Code

    Enable new AL Language extension version for current workspace only

    With our new extension installed, we’ll first need to identify it by its version number, disable it, and then enable it for the current workspace only:

    Enable Visual Studio Code extension for workspace

    Disable the global AL Language extension for the current workspace

    Next we need to disable our default AL Language extension for the currently opened workspace.

    Disable VS Code extension for current workspace

    Now we can complete the project setup by creating a new .al file in the workspace, which will prompt us to generate a manifest file (app.json). The launch.json file will be created automatically if one doesn’t already exist in the workspace when you try to download symbols.
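    For reference, a freshly generated app.json for this setup looks roughly like the following (the id, name, publisher and ID range values below are placeholders; the key point is that the runtime value should line up with the AL Language version you enabled for the workspace, e.g. 4.0 for the al-4.0.x VSIX shown in the container log above):

    ```json
    {
      "id": "00000000-0000-0000-0000-000000000000",
      "name": "My Extension",
      "publisher": "My Publisher",
      "version": "1.0.0.0",
      "platform": "15.0.0.0",
      "application": "15.0.0.0",
      "runtime": "4.0",
      "idRanges": [
        {
          "from": 50100,
          "to": 50149
        }
      ]
    }
    ```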

  • Business Central AL Interface type

    The AL Interface Type

    Unfortunately I’m not able to attend NAV TechDays this year, but I am paying attention from afar and saw some very interesting posts on Twitter about a new type available in the forthcoming Business Central 16.x release: the AL interface type.

    The concept of interfaces won’t be new to anyone from the world of object-oriented programming languages. I first used them in Java, and it’s great to see Microsoft expanding the AL language with features that open up a whole new world of software design.

    The AL interface type gives us the ability to use the techniques of abstraction and loose coupling in AL. To explain, let’s look at an interface declaration:

    interface IErrorHandler
    {
        procedure HandleError(ErrorCode : Code[10]; ErrorMessage : Text[1024]);
    }

    As we can see, the interface IErrorHandler declares a procedure but does not have a procedure body. The procedure is not implemented in the interface.

    A codeunit must be created to implement the interface and provide the behaviour. An implementing codeunit must implement every procedure declared by each interface it implements: an important point to remember when designing interfaces.

    To implement an interface, we use the implements keyword followed by the interface name after the codeunit declaration:

    codeunit 50104 "Throw Error" implements IErrorHandler
    {
        procedure HandleError(ErrorCode: Code[10]; ErrorMessage: Text[1024])
        var
            ErrorText: Label 'Error Code: %1\Error Message: %2';
        begin
            Error(ErrorText, ErrorCode, ErrorMessage);
        end;
    }

    An interface can be implemented by many different codeunits, each providing its own behaviour. Let’s create another implementation of IErrorHandler:

    codeunit 50105 "Log Errors in Database" implements IErrorHandler
    {
        procedure HandleError(ErrorCode: Code[10]; ErrorMessage: Text[1024])
        var
            ErrorLog : Record "Error Log";
        begin
            ErrorLog.Validate(Code, ErrorCode);
            ErrorLog.Validate(Description, ErrorMessage);
            ErrorLog.Validate("Logged On", CurrentDateTime);
            ErrorLog.Insert();
        end;
    }
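    The "Error Log" table used by the codeunit above isn’t shown in this post; a minimal sketch of what it might look like (the table ID, field lengths and key are assumptions):

    ```al
    table 50100 "Error Log"
    {
        fields
        {
            field(1; "Entry No."; Integer)
            {
                AutoIncrement = true;
            }
            field(2; Code; Code[10]) { }
            field(3; Description; Text[1024]) { }
            field(4; "Logged On"; DateTime) { }
        }

        keys
        {
            key(PK; "Entry No.")
            {
                Clustered = true;
            }
        }
    }
    ```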

    So now we have two codeunits which both implement IErrorHandler in their own way. The compiler knows that any codeunit that implements IErrorHandler must implement the HandleError procedure, which means we can write generic, loosely coupled code to handle the processing of errors and pass in the implementing codeunit as required:

    codeunit 50103 "Error Creator"
    {
        procedure ProcessError(ErrorHandler : Interface IErrorHandler)
        begin
            ErrorHandler.HandleError('Error1', 'This is an error message!');
        end;
    
        procedure CallErrorHandler(PersistErrors : Boolean)
        var
            ErrorLogCU : Codeunit "Log Errors in Database";
            ThrowErrorCU : Codeunit "Throw Error";
        begin
            if PersistErrors then
                ProcessError(ErrorLogCU)
            else
                ProcessError(ThrowErrorCU);
        end;
    }

    The ProcessError() method above takes a parameter of type Interface IErrorHandler, which means we can pass in any codeunit that implements IErrorHandler, as seen in the CallErrorHandler() method.
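    Interface also works as a variable type, not just a parameter type. A small sketch reusing the codeunits above (the codeunit ID 50106 is arbitrary):

    ```al
    codeunit 50106 "Interface Variable Demo"
    {
        procedure Demo()
        var
            ErrorHandler: Interface IErrorHandler;
            ThrowErrorCU: Codeunit "Throw Error";
            ErrorLogCU: Codeunit "Log Errors in Database";
        begin
            // Any codeunit that implements IErrorHandler can be assigned
            ErrorHandler := ErrorLogCU;
            ErrorHandler.HandleError('Error2', 'Logged via an interface variable');

            // Reassign to swap the behaviour at runtime
            ErrorHandler := ThrowErrorCU;
            ErrorHandler.HandleError('Error3', 'Thrown via an interface variable');
        end;
    }
    ```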

    A codeunit can implement multiple interfaces using comma separation:

    codeunit 50105 "Log Errors in Database" implements IErrorHandler, ISomeOther, ISomeOther2
    {
       // implementation here...
    }

    Note: The I prefix on the interface name is not mandatory but is a common convention used in C#.

    The interface type was revealed during the NAV TechDays 2019 opening keynote, which is now available on YouTube (1:17:30).

  • Add User in Dynamics 365 Business Central Cloud

    Business Central Cloud User

    In one of my earlier blog posts I described how to add a user in Business Central On-premises (formerly Dynamics NAV) using PowerShell. This post shows how to add a user in Business Central Cloud / SaaS.

    The big difference between Business Central On-premises and Business Central Cloud (or SaaS) is that the Cloud version requires Azure Active Directory (AAD) authentication. As a consequence, to create a new user in Business Central Cloud you must first create a user in AAD and assign a Business Central licence to this user.

    There are a number of ways to create an AAD user, for instance via the Office 365 Admin Center or the Azure Portal. In this post we’ll add the AAD user through the office.com admin center.

    Creating an AAD user through the Microsoft 365 Admin Center

    As an Office 365 administrator, open office.com and you’ll have access to the Admin Center:

    Office 365 Administration

    Note: If you can’t see the admin center icon above, try selecting ‘All apps’ and look through the list. If it’s still not there check your Microsoft 365 user permissions with an administrator.

    With the Microsoft 365 admin center open we can create a new user:

    Add Office 365 user

    Fill in the basic user information:

    Add Microsoft 365 user

    After selecting Next, assign the Business Central licence to the user (and any other licences required):

    Assign user licences

    Optionally add additional user rights, then finish:

    Finish adding Microsoft 365 user

    With the Microsoft 365 user created, we can now add the user to Business Central.

    Add User in Dynamics 365 Business Central Cloud

    Once the user has an Azure Active Directory account with a Business Central licence assigned, we can add the user to Dynamics 365 Business Central Cloud via the Users page:

    Find the Users page via the search function (Alt+Q):

    Find Business Central Users page

    Open the Process menu:

    Business Central Add User

    Click Get New Users from Office 365:

    Get New Users from Office 365

    Business Central will now query Azure Active Directory and add any new users it finds with a Business Central licence assigned.

    Add User in Dynamics 365 Business Central Cloud

    With the new Business Central user created, you can now continue the user setup by assigning User Groups and/or Permission Sets to the user record.