Category Archives: Technical

Dreaming about C/AL

Small note: When moving this article to dynamicstailor.eu, I decided to shorten it by about 50%, removing a few wishes that Microsoft has granted us in the meantime, or is about to grant. Two of these were a better editor and/or Visual Studio integration, and version control (any!).

I’ve been programming in Dynamics NAV since 2013, but I wrote my first code 20 years earlier. It was 1993, I was still living with my parents and I hadn’t even met my first girlfriend yet. My father came home with a white box saying “Microsoft Visual Basic 3”. Before I knew it I was determined to build my own games instead of saving money to buy the newest titles.

A few disks, a big stack of books: Coding in Visual Basic was so incredibly simple, a 10 (and-a-half!) year old could get going within weeks!

A few years later, I switched to VB6, and after that VB.NET. Both steps were a lot more complex, but once you grasp the concepts, it’s suddenly possible to write much more efficient code.

C/AL

When I started working at a Microsoft Dynamics NAV partner in 2013, I had my first encounter with C/AL. Twenty years later, and yet I felt like I was in a playground again. C/AL is incredibly versatile, incredibly simple, and incredibly fast (in terms of coding).

With the release of Dynamics NAV 2016, things got even better: suddenly there’s TRY…CATCH functionality (although Vjeko already blogged about its dark side – I’ll get back to this later), and there’s an editor that finally moved past Notepad level! Then we suddenly got VS Code, Extensions, AL… but I’m still missing a few things.

Transaction Management

Somewhere in 2009, I did a project that involved a lot of Transact-SQL programming, and I learned to work with database triggers and transactions. You can imagine my surprise when an NAV developer told me that COMMIT exists in NAV, but that using it is asking for a death sentence. The surprise got even bigger when I heard there is no BEGIN TRAN!

By now, I’ve learned how to use COMMIT, but to this date, I still try to stay away frOMMIT.

Concerning transaction management, I don’t have any ideas on how to implement it exactly, but it would be very cool to control database writes the way we can at the SQL level.
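For anyone who hasn’t bumped into it yet, this is roughly how today’s implicit transaction behaves; a minimal sketch, assuming a local Customer record variable and an existing customer 10000:

  Customer.GET('10000');
  Customer.VALIDATE(Name, 'First change');
  Customer.MODIFY(TRUE);
  COMMIT;  // ends the implicit write transaction; the first change is now permanent

  Customer.VALIDATE(Name, 'Second change');
  Customer.MODIFY(TRUE);
  ERROR('Something went wrong');  // rolls back only to the last COMMIT; the first change survives

A BEGIN TRAN equivalent would let us choose where that rollback boundary sits, instead of making everything before the COMMIT permanent.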

Data Object datatype

I’ve seen so many developers bump their proverbial noses on this one, and there’s such a simple solution. Take a look at tables 36, 37, 110, 111, 112, 113, 114, 115, 5107, 5108, 5126 and probably a few I’m forgetting: the famous Sales tables. These have a lot in common, for example:

  • They all use fields Document Type (an Option) and (Document) No. (Code 20) in their primary key.
  • All the line tables have a Description field (Text 30).

Now, let’s implement a few functions that use this standard (Document) No. field:
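A minimal sketch (the function name and logic are just an illustration), with the standard (Document) No. field re-typed by hand as a Code[20] parameter:

  GetOrderStatus(DocumentType : Option; DocumentNo : Code[20]) : Text[30]
  VAR
    SalesHeader : Record "Sales Header";
  BEGIN
    // Look up the document by the standard primary key fields and return its status.
    SalesHeader.GET(DocumentType, DocumentNo);
    EXIT(FORMAT(SalesHeader.Status));
  END;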

All working fine! We can build similar functions all through the application, and use them without any problems.

Now… let’s imagine Microsoft is getting complaints about the size of the Document No. field. They decide to make it a Code 25 instead of a Code 20.

Still nothing wrong. Your code will compile, and if you also work at the customer that is using this code, you won’t hear anyone complain either. Right until the moment someone actually uses those extra characters: you will see the famous “Overflow under type conversion of Text to Text”.
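In terms of the sketch above (the document number is made up; any value longer than 20 characters will do):

  // SalesHeader : Record "Sales Header" (local variable)
  SalesHeader.GET(SalesHeader."Document Type"::Order, 'SO-2025-000000000000001');  // 23 characters: fits the new Code 25 field
  GetOrderStatus(SalesHeader."Document Type", SalesHeader."No.");                  // fails at runtime: the value no longer fits the Code[20] parameter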

Fixing this is a matter of switching on the debugger, tracing exactly where this field is passed into functions, and then editing the length of the variable in the function parameters. Maybe you’ll need to do some data recovery (if someone else threw that dreadful COMMIT in the code above yours…), but usually nothing terrible happens.

My point is that it can be solved, by Microsoft, fairly easily!

There. Fixed. Picture a new object type in the Object Designer, right below the MenuSuite: the Data Object.

What it does is simple, and could work well in two ways. I’ll explain both in a few steps.

Table-based Data Reference Object

After creating table 36, we create a Data Reference Object for the primary key: Document Type and Document No. This Data Reference Object “remembers” that it’s based on T36, and will continue to mirror vital properties of those fields. As soon as this exists, it’s possible to reference the Data Reference Object in function parameters, as the sweet spot between a Record parameter and a load of hand-built parameters (I’ll sketch today’s two alternatives right after the list). Doing so gives us at least three advantages:

  • We should be able to call the function without first preparing a record, for example: DoSomething(‘Sales Order’, ‘SO00001’);
  • It should be possible to reference option values that come from the referenced object, hence are always the same as in the table;
  • If someone decides to change the design of any of the sales tables fields, it should not compile until all Data Reference Objects are updated as well;
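For contrast, these are the two options we have today; a minimal sketch with illustrative names, showing why neither is that sweet spot:

  // Option 1: hand-built parameters; this keeps compiling even after the design of T36 changes.
  DoSomething(DocumentType : Option; DocumentNo : Code[20])

  // Option 2: a Record parameter; type-safe, but every caller has to prepare a record first.
  DoSomethingWithRecord(SalesHeader : Record "Sales Header")

A Data Reference Object would combine the lightweight call of option 1 with the compile-time safety of option 2.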

This would already be a big advantage. However, there are other ways:

Data (Reference) Object based Tables

One of the first things I learned about databases (in university) was the objective of normalization; third normal form quickly became our holy grail. Let’s turn this thought process around for a second, and go back as close to the flat table as possible. Let’s also ignore the record size limit. Dynamics NAV could probably do with no more than 50 tables. Don’t worry, my next suggestion won’t be to switch to NoSQL and forget about relational databases altogether ;=)

The next step is where it gets interesting: Let’s separate our flat tables from our table structure: We now have ourselves a data model.

Now let’s take the Sales data model (which contains everything from Customers through Sales Headers, Sales Lines, Sales Prices, etcetera), and build a new Sales Header table by simply selecting which fields from our data model should be there. We can then build the Sales Line table by selecting a few different fields. It gets even more interesting if we look at the Sales Price table: that should be in the Item data model as well. Maybe it’s a good idea to allow references from one data model to another as well: table relations will be easy (if not generated automatically), and we’ll have infinite integrity!

Okay, cool, but do we need this?

Well, no. We don’t need it. Then again, we didn’t need a Web Client either, but people use it every day… And if you think nobody ever changes field properties, think again. Or take a good look at that Description field I mentioned earlier. It’s not a Text 30 anymore, it’s a Text 50…


NAV Three-Tier Performance and Load Balancing

Disclaimer: This article is not meant to be a best-practice guide! It’s my first publication on LinkedIn, in which I’ve tried to summarize and share some of the information I found scattered across the interwebs. Also, I hope to gain tips & tricks from you: I still don’t have solid answers to the questions below (although, admittedly, I haven’t directly consulted Microsoft). More advice is always welcome!

In 2008, Microsoft shocked the Navision world by introducing Dynamics NAV 2009. It gave us a new three-tier architecture that has brought a lot of advantages. I started using NAV 2009 R2 Classic in 2012 and was a bit skeptical about all that “newness”, but after working with the three-tier versions for a year (mainly 2013, 2013 R2 and 2015), I can wholeheartedly say it’s a huge improvement.

Even though it’s a really mature environment by now, I’m having difficulty finding knowledge on the topic, especially concerning the RTC client and its service tiers. For example, I asked a few Microsoft partners, consultants and developers whether the middle tier load-balances by itself or whether I should set up a certain number of middle tiers, and I didn’t find the clear, well-documented answers I am used to in the NAV community.

What I’d like to tune, and why

The few large Dynamics NAV three-tier environments I’ve seen had a few things in common:

  • Defining “large”: 100+ concurrent users in one database, and all or most users working in one company;
  • Database server and service-tier(s) on dedicated servers;
  • A fairly well performing database server (fast direct SQL-queries);
  • Low CPU usage and low memory usage on the service tiers; two to three service tiers for over 100 users;
  •  Mediocre to fair performance in the client.

Of course, people can live and work with fair performance, but I’d prefer the user to be the limiting factor! Therefore I started looking into how to performance-tweak two of these environments.

What screws do I turn to make it go faster?

Even though I asked quite a few people, I found only a few things to look at:

  • SQL Server: Like in older versions of Dynamics NAV, the database can make or break performance in an environment. That said, if you were happy with performance before migrating from the Classic client to the RTC, the database wouldn’t be the first place I’d look for gains: in a three-tier environment the middle tier acts as a buffer between the clients and the database, so the load on the database is lower than when clients connect to it directly.
  • Page Design: If your NAV environment is customized, it is worth the hassle to invest time in your page design. I won’t go into detail here, but lots of fields in list pages, FactBoxes, and pages with subpages accessing multiple tables will slow clients down. Of course, users can switch FactBoxes off by themselves to speed the system up… but they can also switch them on to slow it down, if they need the information!
  • Network Protocol: This shouldn’t affect performance on fast LAN networks, but when connecting through a slow network or VPN, fixing the Network Protocol on Sockets should improve performance. Note, however, that this is the protocol used by the service tier to communicate with the database. I didn’t notice any improvement in my test environment.
  • Metadata Provider Cache Size: In Classic environments, one would set the Object Cache (KB) as high as possible to improve client performance. This is a similar setting, but now NAV objects are cached in the service tier. The default setting is 150 (objects), and I really don’t know why it is this low – a dedicated middle-tier server should have at least 16 GB of RAM, so why not use it? Because I didn’t configure different service tiers for different departments (so basically everybody uses all objects), I decided to match the cache size to the number of objects in the database: 5000. Then I started the service tier and opened all pages, reports, XMLports etc. I could think of. This resulted in about a 50 MB difference in used memory on the server and positively impacted performance by a truckload. I’d advise anyone to try upping this parameter, while of course carefully watching memory load on the server.
  • Data Cache Size: Sets the amount of data cached in memory on the service tier, which can be convenient to lower the load on the database server. The value is an exponent of two: the standard setting is 9, which equals 2^9 = 512 MB; 8 = 256 MB, 10 = 1024 MB, 11 = 2048 MB, etcetera.
  • Compression Threshold: This might be an interesting setting. It determines how much data the service tier will send to the client in one go without compression. By default, this is set to 64 kilobytes. I didn’t manage to find any information on how this data is structured and how the service tier behaves around this setting: will it prefer sending a compressed dataset that is larger than the screen? Or does it limit the dataset sent to 64 KB even though there are more records within the filter? It might be worthwhile experimenting with this setting if either your network bandwidth is limited (decrease the threshold) or the processor power on the middle-tier server is limited (increase it to make sure the server doesn’t spend valuable time compressing data). Although the environment I’m testing in is not really limited either way, I am testing whether settings of 24 and 512 are noticeable on different service tiers within the same environment.
  • Multiple service tiers: Although I still don’t know where the sweet spot is in the number of users per service tier, it’s understandable that there is a sweet spot. I’ve had advice ranging from 25 users all the way up to 100 users per service tier, and have guesstimated the sweet spot to be about 30 users. This way, all these users will manage to cache all objects in a short time, so NAV will “wake up” quickly in the morning. Multiple service tiers then make up our “manual multi-threading”.

So, how do I balance this load?

Let’s keep it simple: In most environments, setting up different links during client install will divide users over multiple service tiers. Problem solved?
Yes, it does work. However, in case of maintenance or stability problems, it will be difficult to redirect these users to one of the working service tiers. It isn’t flexible!

After searching for days (okay, hours) and not finding anything useful, I decided to give up and see if I could come up with a solution myself. What I built is a simple command-line launcher that randomly chooses one of the pre-defined service tiers and then starts NAV with those parameters. Although it’s quick and dirty, it does provide me with basic load balancing, and gives me the flexibility to quickly switch off one of the service tiers, or even one of the servers. If I know I’ll have to restart a server for maintenance, I’ll modify the settings XML and then ask users to quit and restart NAV whenever they prefer, for example at any point during the next hour. If you’d like to try or use this tool, get it for free here.

Future ideas for this little tool would be to build in true load balancing (checking the database for active sessions per service tier), (de-)activating service tiers from a GUI and – if I figure out a way to do it – switching active users from one service tier to another. Although it’s fun to build, I’d like to know first: am I the only one running into this problem? Does anyone have a better solution?
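As a starting point for the active-sessions idea, a sketch like the one below could count sessions per service tier from C/AL. I’m assuming the Active Session virtual table (2000000110) and its Server Instance Name field here, so verify the names in your version:

  CountSessionsOnTier(ServerInstanceName : Text[250]) : Integer
  VAR
    ActiveSession : Record "Active Session";
  BEGIN
    // The virtual table lists every session currently connected to this database.
    ActiveSession.SETRANGE("Server Instance Name", ServerInstanceName);
    EXIT(ActiveSession.COUNT);
  END;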

Well, that’s all folks! I hope more information about tuning the three-tier environment will appear in the NAV community – if you find mistakes in my essay, or have any good tips, please let me know.

