ArcherPoint Dynamics NAV Developer Digest – vol 38
The ArcherPoint technical staff, made up of developers, project managers, and consultants, is constantly communicating internally with the goal of sharing helpful information with one another.
As they run into issues and questions, find the answers, and make new discoveries, they post them companywide on Yammer for everyone's benefit. We in Marketing watch these interactions and never cease to be amazed by the creativity, dedication, and brainpower we're so fortunate to have in this group, so we thought, wouldn't it be great to share them with the rest of the Microsoft Dynamics NAV community? So, the ArcherPoint Microsoft Dynamics NAV Developer Digest was born. Each week, we present a collection of thoughts and findings from the ArcherPoint staff. We hope these insights will benefit you, too.
Crystal Tollison asked about slow performance of Dynamics NAV 2009R2:
A client just started using the RTC version of their 2009 R2 database, and they are complaining of early-morning slowness. I recall something about a 'warm-up routine' for this version. Does anyone know more about this?
Have we confirmed that someone set the MetadataProviderCacheSize to something greater than its default? That's definitely where I would start. Here is an excerpt from a Yammer post describing the recommended settings:
For the past two days, the users have been rating performance at an average between 7 and 8. This is a tremendous improvement over the average of 3 they gave for the last two days of the week.
There are a couple of things that have made the improvement possible.
1. They switched all of the users to Kerberos authentication only (no NTLM), which speeds up authentication between SQL Server, the Service Tier, and the RTC. Previously this was set to both.
2. We changed the MetadataProviderCacheSize from 150 objects to 5000 objects. This seems to have been the biggest factor in their performance levels, as we saw an immediate effect for some critical users who had been reporting performance levels of 3 or lower.
Everything else has checked out, and they have now set up some network traces to see if there are any spikes that may cause them issues.
In the end I believe that it comes down to the speed of Authentication and to the cache size.
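For reference, the cache-size change described above is made in the NAV Service Tier's CustomSettings.config file. The fragment below is a sketch; the default value of 150 matches the numbers quoted in the post, but verify the key and default against your own installation:

```
<!-- CustomSettings.config on the NAV Service Tier -->
<!-- Default is 150 objects; the post above raised it to 5000 -->
<add key="MetadataProviderCacheSize" value="5000"></add>
```

The service tier must be restarted for the change to take effect, which also explains the "warm-up" behavior discussed below: the object cache starts empty after every restart.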
My understanding is that the service tier is slow after a reboot because the objects aren't cached until used. So, there should only be "warm-up" slowness when the server running the service tier is rebooted. Do they have a nightly reboot of that server?
There have been several reboots this week for changes being made as issues are resolved. I will discuss with them and see if it is continuing. Thanks Scott!
Jon Long asked: Is there anything wrong with the following code?
Item.RESET;
Item.SETFILTER("No.", SalesLine."No.");
Item.FIND('-');
OK, I'll start a firestorm. I will quite often do this:
rather than this
because I think it is 5000% easier to read and understand.
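The two snippets being contrasted did not survive in this excerpt; judging from the replies that follow, the comparison was presumably between filtering with an explicit option value and filtering with a string, along these lines (field and variable names are assumed from the earlier example):

```
// Readable: the option value is verified at design time,
// and the editor suggests the available options
SalesLine.SETRANGE(Type, SalesLine.Type::Item);

// rather than a string filter, which is only evaluated at run time
SalesLine.SETFILTER(Type, 'Item');
```

A typo in the first form fails to compile; the same typo in the second form only surfaces as a runtime error.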
So I guess the only thing I see offensive about your example is the FIND('-') instead of FINDFIRST or FINDSET.
Unfortunately, that would also error out if the Sales Line Type isn't Item. The example also forgot to filter on Type, as you did.
You are a scholar and a gentleman. I like the explicit Type::Item because it has the added bonus of a design-time compile check, and the editor even suggests the options. The string filter, on the other hand, will cause a runtime error unless you are a perfect programmer, which apparently you are. It still works the same; in fact, behind the scenes, SETRANGE is a wrapper for SETFILTER. The real amateurism in my example was that a GET would do the same thing with less back and forth to SQL. The lesson here: always use a GET when you can. Not only is it easier to read (a one-liner), its performance is better.
Ideally the order of preference is GET, SETRANGE, SETFILTER.
SETFILTER actually takes nearly 60% longer than SETRANGE. It's a minor detail for small routines, but it seriously adds up in long-running jobs (we're talking hours before you know it).
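The preference order above can be sketched as follows (a minimal illustration using the Item/SalesLine names from the earlier snippet):

```
// Best: GET fetches by primary key in a single round trip to SQL
IF Item.GET(SalesLine."No.") THEN
  Description := Item.Description;

// Next: SETRANGE, with a design-time check on the field
Item.SETRANGE("No.", SalesLine."No.");
IF Item.FINDFIRST THEN
  Description := Item.Description;

// Last resort: SETFILTER, a string filter evaluated at run time
Item.SETFILTER("No.", SalesLine."No.");
IF Item.FINDFIRST THEN
  Description := Item.Description;
```

GET applies only when you have the full primary key; for ranges or multiple results, SETRANGE with FINDFIRST or FINDSET remains the right tool.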
Here's a blog I wrote about it a while back:
You have convinced me of the error of my ways.
I agree with Matt. During performance tuning for one company, I had to change code in various posting codeunits in NAV 4.0 SP3 from FIND('-') to FINDFIRST. The change made a marked improvement in speed.
Question on security setup in SQL for NAV to call stored procedures:
Does anybody know what security needs to be set up on SQL in order for NAV to call stored procedures in the three-tier (RTC) infrastructure? Is it on the service tier's Log On As account, or can you send a login string through automation using an account with elevated privileges?
The NST needs to run under a proper AD account (which I think is a good idea in general). However, I have not experimented enough to figure out whether execute rights on the stored procedure have to be granted to that AD account, to Public, or both. You can have the automation (or DotNet) send its own login string, but why do that when you can have it use Windows Authentication?
I'd be concerned about the security implications of granting more than minimal rights to the Log On As account. I'm hoping there is a way to pass an explicit account directly through to SQL.
I found this thread on mibuso:
You can use the DotNet data type with System.Data as your assembly, drill down to System.Data.SqlClient.SqlConnection, and form a connection string that specifies a SQL database user with permission to execute the stored procedure.
That being said, it also depends on whether the SQL Server is in mixed-mode authentication.
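A sketch of that approach in C/AL, using DotNet variables from the System.Data assembly. The server, database, user, and procedure names are placeholders, and the password is deliberately left elided; this assumes SQL Server is in mixed-mode authentication as noted above:

```
// DotNet variables (assembly System.Data):
//   SqlConnection : System.Data.SqlClient.SqlConnection
//   SqlCommand    : System.Data.SqlClient.SqlCommand
// All connection-string values below are hypothetical placeholders.
SqlConnection := SqlConnection.SqlConnection(
  'Server=MySqlServer;Database=MyNavDb;User Id=NavSpUser;Password=...');
SqlConnection.Open;

SqlCommand := SqlCommand.SqlCommand('dbo.MyStoredProc', SqlConnection);
SqlCommand.CommandType := 4; // System.Data.CommandType.StoredProcedure
SqlCommand.ExecuteNonQuery;

SqlConnection.Close;
```

The SQL login used here only needs EXECUTE permission on the stored procedure itself, which avoids widening the rights of the service tier's Log On As account.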
Be sure to check out all of our Development Blogs.