Posts Tagged ‘GP maintenance’

Why is my GP so slow?

Hello all, I would like to provide you with some information on what to look for when Dynamics GP seems to be running slowly. As you may already know, Dynamics GP is a process-driven application, and you may experience slow performance when specific processes are performed in GP. Please take a look:

 

· Slow posting might be due to an oversized PJOURNAL table; as you know, checks post too, and remittances may be printed separately (see the PJOURNAL check after this list)

· Client workstations should have a default printer set up and online; remove any invalid printers

· While opening windows, the AutoComplete feature may cause performance issues; if it is not being used, it can be turned off

· Logging into Dynamics GP, or using 3rd-party dictionaries, can be slow if the Menu Master table (SY07110) has grown too large

· Modified dictionaries located somewhere other than the local workstation (for example, on a network share)

· Certain SmartList reminders might slow down logging into Dynamics GP

· You may have shortcuts to network locations that are no longer mapped or available

· Printing to a file directly on the client/remote computer instead of the hosted server’s user folders

· An invalid or unreachable OLE Notes path in Dex.ini

· The SQL Server AUTO_CLOSE and AUTO_SHRINK database options not set to FALSE (see the settings check after this list)

· Virus scanner setup not excluding the following extensions (CNK, DIC, CHM, SET, INI, DAT, IDX, VBA, LOG, LDF, MDF)

· The Dynamics GP home page SmartList favorites

· The Dynamics GP home page Outlook integration

· Enabling tracing options in DEX.ini

· Badly written user-defined triggers in SQL

· Bad configuration of SQL Server memory allocation (also in the settings check after this list)

· Low available disk space on the SQL Server or Dynamics GP server

· The SQL Server log file is full and not set to autogrow (also in the settings check below)

· TNT*.* files piling up because your %TEMP% folder has not been cleaned

· SQL Server/GP server/client hardware that does not meet system requirements

· A database owner other than DYNSA

· Little or no SQL server maintenance (Table Fragmentation)

· You might be missing table indexes or statistics

· When exporting a budget through the Budget Wizard, GP may seem locked. If you are using the Excel wizard to export, make sure the “Save As” dialog is not in the background; Alt-Tab to it, as it may have opened behind your main GP window

· Too much history (you can archive historical years, especially if you have large tables like Item Master, Customers, or Vendors). Believe me, I once ran Reconcile and it took 6.5 days on a company with more than half a million SKUs and 5 years of sales.
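To illustrate the PJOURNAL item above: a quick row count against a company database shows whether that table has ballooned. This is only a minimal sketch, assuming a company database named TWO (replace it with yours); the cleanup statement is left commented out on purpose, since you should confirm it with your partner and take a backup first.

```sql
-- Hedged sketch: check how large PJOURNAL has grown.
-- TWO is a placeholder; use your own company database name.
USE TWO;
GO

SELECT COUNT(*) AS pjournal_rows
FROM dbo.PJOURNAL;

-- A very large count here is a common cause of slow posting. This table is
-- commonly cleared as part of maintenance, but confirm with your partner
-- and take a backup before running the following:
-- TRUNCATE TABLE dbo.PJOURNAL;
```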
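Several of the SQL-side items in the list (AUTO_CLOSE/AUTO_SHRINK, log autogrow, memory allocation) can be verified from a single query window. A minimal sketch, where TWO is a placeholder company database and the 12 GB memory cap is only an example figure; size it for your own server.

```sql
-- Hedged sketch: verify AUTO_CLOSE/AUTO_SHRINK, log autogrow, and memory.
-- TWO is a placeholder for your company database name.

-- 1. AUTO_CLOSE and AUTO_SHRINK should be OFF for DYNAMICS and company DBs.
SELECT name, is_auto_close_on, is_auto_shrink_on
FROM sys.databases
WHERE name IN ('DYNAMICS', 'TWO');

ALTER DATABASE TWO SET AUTO_CLOSE OFF;
ALTER DATABASE TWO SET AUTO_SHRINK OFF;

-- 2. A growth value of 0 means the file cannot autogrow.
SELECT name, type_desc, size, growth, max_size
FROM TWO.sys.database_files;

-- 3. Check max server memory; the default setting effectively lets SQL
--    Server take all available memory, starving the OS and other processes.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
-- Example figure only: cap SQL at 12 GB on a 16 GB server.
-- EXEC sp_configure 'max server memory (MB)', 12288; RECONFIGURE;
```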

 

I have witnessed a few administrators who, in order to preserve disk space, have a tendency to run SHRINK on the SQL server; this will obviously fragment tables and hurt performance. I have a post that covers that here.

 

If you want us to take a look at your environment, don’t forget to contact us. And as always, when troubleshooting, record answers to the following questions:

1.- Can you replicate the issue? Write down the steps that let you reproduce it.

2.- If it is related to posting, please note the module(s), how many transactions are in the batch, how long the process lasts, and how long it lasted before.

3.- On a server/client install, can you replicate the issue on the server?

4.- Can you reproduce it on all clients, or on other clients?

5.- Are there any 3rd party products running on the same SQL/GP server or together with Dynamics GP?

6.- Are there any customizations in GP?

 

Until my next post, and let us know if we can help!

Francisco G. Hillyer

DYNSA and SQL Maintenance for Dynamics GP

Hello all, it has been quite some time since my last post. I have been kind of missing all of you, especially with the holidays approaching.

This time I would like to share with you something that I recently learned “the hard way,” obviously on a support case.

First of all, I want to stress the importance of validating your information and confirming that the engineers/partners who were involved in your company’s Dynamics GP setup followed the best practices established by Microsoft and supported by many of my colleagues.

In my case, a customer did a side-by-side upgrade of SQL Server, and with it came the issue of not having the DYNSA login in the new SQL instance. We followed certain processes to make sure DYNSA was the owner of the databases Dynamics GP uses.

But you may ask: who or what is DYNSA? My friend Mariano Gomez has a very complete post about this subject, which you can find here: Mariano’s DYNSA Info. Since I am not reinventing the wheel, take a look at Mariano’s blog; it is packed with information for all audiences (GP related!).

So while I was working on this customer’s DYNSA setup, I suddenly remembered another case where I was having issues with a third party. I jumped into their environment (literally) and started investigating the DB configuration; to my surprise, the owner of the databases was an AD account, not DYNSA. I proceeded to replace the owner, and then certain SQL reports started working and producing results. I am still intrigued as to why, but I will do full research in my spare time.
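For reference, the ownership check and fix boil down to a couple of statements. A minimal sketch, assuming a company database named TWO; run the equivalent for DYNAMICS and every company database, ideally during a maintenance window.

```sql
-- Hedged sketch: confirm the owner of the GP databases, then assign DYNSA.
-- TWO is a placeholder for your company database name.
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE name IN ('DYNAMICS', 'TWO');

-- Make DYNSA the owner (repeat for every company database).
ALTER AUTHORIZATION ON DATABASE::DYNAMICS TO DYNSA;
ALTER AUTHORIZATION ON DATABASE::TWO TO DYNSA;
```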

So, back in the game on this customer’s issue: I was trying to archive data. Just imagine a SOP30300 table with 11 million records and a huge base of customers.

The queries being run were taking countless hours to execute, not to mention the impact on the processor, memory, and of course the user experience.

I learned that the customer had a “Maintenance Plan” that executed the Shrink process on SQL Server. As you may know, I am a SQL enthusiast, and I recalled an important blog post from another noted resource, Mr. Pinal Dave, aka “The SQL Authority.” Here is his post about why it is BAD to shrink a DB: Shrink is Bad for you… One section of the article explains that shrinking a DB to reclaim disk space will actually fragment your tables; to reduce the fragmentation, you then rebuild indexes, which grows the files again. So this maintenance plan shrank to reduce disk space, then rebuilt to improve performance, and the disk space was gone again. Wise words from a mentor who prefers to stay in the shadows: “with current prices on storage, why waste time shrinking when you can focus on performance?”

I ended up tweaking some SQL scripts into an automated SQL job that finds fragmented tables in the DB and rebuilds their indexes as part of DB maintenance (a sketch is below). As I said, “go buy another disk drive and add it to your server, move the logs to this new disk, keep data apart from the logs, and you will be better off than you are now.”
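The job I ended up with was along these lines. This is a simplified sketch rather than the exact script: it lists the most fragmented indexes in the current database and generates matching rebuild statements, using a 30% fragmentation threshold as a common starting point.

```sql
-- Hedged sketch: find heavily fragmented indexes in the current database
-- and generate ALTER INDEX ... REBUILD statements for them.
SELECT
    OBJECT_NAME(ips.object_id)         AS table_name,
    i.name                             AS index_name,
    ips.avg_fragmentation_in_percent,
    'ALTER INDEX [' + i.name + '] ON ['
        + OBJECT_NAME(ips.object_id) + '] REBUILD;' AS rebuild_stmt
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30   -- common rebuild threshold
  AND ips.index_id > 0                        -- skip heaps
  AND ips.page_count > 100                    -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
-- Review the generated statements, then run them (or wrap this in a loop
-- inside a scheduled SQL Agent job as part of regular maintenance).
```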

If you need help, let us know; our team at RoseASP and RBS has experience solving these types of issues.

I hope sharing my experience helps you, and makes for a better community.

 

Until my next post

Francisco G. Hillyer

Backup routine

It’s amazing how many times I get calls from people who have a catastrophe with their GP system but no backup. Yesterday I finished a call with a user who had an LDF file (a GP log file) magically disappear. The missing file was a second log file used to speed up transaction log performance, since their database is about 25 GB.

“It didn’t seem like that big of an issue just to have a log file deleted,” said the user. With the disappearance of this log file, users had not been able to get into GP for 3 days. The backups that should have run nightly had not been done since late September: the drive the backups were scheduled to was a removable drive that had been removed in September and never replaced. No backups could be taken, and the log file grew to 27 GB, which used up all the space on the data drive and kicked SQL into single-user mode. After several calls and attempts at restoring the MDF file from early November, we ended up having to restore back to September 28th. They lost an entire month’s work because no backups were being done.

Here are some suggestions for creating a successful backup routine so you can avoid situations like this:

  1. Make checking that backups are being done a weekly (at least) job requirement for the IT department in your company. As IT people come and go, make this a department responsibility so you don’t have the unfortunate and frequent occurrence of new people saying “the person responsible left and no one was checking backups” (see the verification query after this list).
  2. Have the accounting department also check to make sure backups are being done. Surely with two departments checking, this will be monitored sufficiently. Place a shortcut on the accounting person’s desktop so they can verify they see a BAK file in the backup folder on the server.
  3. Place all modified report files on the server so these files can be backed up nightly as well.
  4. Make sure every file that could cause loss of data is being backed up. For example: the actual .BAK files generated by SQL jobs for both the DYNAMICS and company databases; Reports.dic, Forms.dic, and any other modified .dic files for 3rd-party apps; the FRx SysData folder; the Integration Manager IM.MDB file; etc. Ask for recommendations from a consultant who knows your system (a sample backup is sketched after this list).
  5. Make sure backups are stored in a different location than the actual data. The main idea is to avoid storing backups in a place where you would lose both the backup and the data if a server goes down. It is often suggested to take/store backups off site.
  6. Restore a backup to the test company periodically to verify backups are being created successfully. This also gives you a playground with recent data to test things on.
  7. If you are a user, think about saving a backup of the Reports.dic file, the FRx TDB file (an export of your FRx reports), the IM.MDB file, etc. on your local workstation after you make changes to reports. This will hopefully be another help in the event of lost data.
  8. If something happens to your GP system (e.g., the server crashes), contact help immediately. Don’t touch a thing, and don’t wait 3 days before requesting help.
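On item 1: checking does not have to mean hunting for files by hand, because SQL Server records every backup it takes in msdb. A small sketch of the verification query mentioned above; databases with NULL or stale dates are the ones to worry about.

```sql
-- Hedged sketch: when was each database last backed up?
-- D = full database backup, L = transaction log backup.
SELECT
    d.name AS database_name,
    MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
    MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
GROUP BY d.name
ORDER BY last_full_backup;   -- NULL or old dates float to the top
```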
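And on items 4 through 6, the nightly backup itself plus a quick verification can be as simple as the following. A sketch only: TWO and the E:\GPBackups path are placeholders, and in practice you would schedule this as a SQL Agent job writing to a drive or share separate from the data.

```sql
-- Hedged sketch: nightly full backup with checksum, plus a verify pass.
-- E:\ stands in for a drive or share that is NOT the data drive.
BACKUP DATABASE TWO                     -- replace TWO with your company DB
TO DISK = 'E:\GPBackups\TWO_full.bak'
WITH INIT, CHECKSUM, STATS = 10;

-- Verify the backup is readable without actually restoring it.
RESTORE VERIFYONLY
FROM DISK = 'E:\GPBackups\TWO_full.bak'
WITH CHECKSUM;

-- If the database is in FULL recovery, back up the log too, or it will
-- grow without bound (exactly the 27 GB log story above).
BACKUP LOG TWO
TO DISK = 'E:\GPBackups\TWO_log.trn'
WITH INIT, CHECKSUM;
```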

Any other suggestions?
