Developer forum


Slow loading of user page in admin module after upgrade

Daniel Hollmann
Reply

Hi

 

After upgrading a custom solution to version 9.13.11, we have experienced slow rendering of the user page in the admin panel.
The user page takes between 2 and 10 minutes to come back to life.
Our customer has two environments, one hosted on Azure and one self-hosted on a different virtual machine.
We have seen the problem on both environments; however, the self-hosted one is even slower to load again.
The exact same problem is described in this thread: https://doc.dynamicweb.com/forum/dynamicweb-9-0-upgrade-issues/9-13-3-upgrade-causes-slow-loading-of-users-page?PID=1605

But since there is no news on an update, I will try a new thread.
 

This only occurs the first time after the application has either been restarted or been idle. It locks the current user's thread,
so none of the site is accessible for that session while this process is taking place. It does not block the whole site, however: other users with a different session can still access the site.

I have attached a gif that showcases the situation.


Additional Information:
* The original version of the solution is 9.7.4. We first tried upgrading to 9.8.13, where the exact same thing happened, so we made the effort to upgrade to the latest version, and it happens there as well.
* The non-Azure environment is slower to recover than the Azure one, but it also contains significantly more users, so one theory is that the number of access users makes the problem worse.
* Afterwards there are no logs in the event viewer that suggest what the problem is; no error ever occurs, some process just takes a long time to finish.
* The solution uses an index built with the built-in “Dynamicweb.UserManagement.Indexing.UserIndexBuilder”, which is set to rebuild every 15 minutes.

showcase.gif

Replies

 
Scott Forsyth (Dynamicweb Employee)
Reply

Hi Daniel,

We just made a big discovery on this issue this week. We haven't confirmed for sure that it applies to all situations though.

Can you check your /Files/System/Diagnostics/ folder and get a count of subfolders? Do you have a really large number (like hundreds of thousands)? If so, that's probably the issue. Every run of the reindex leaves behind a Status.xml file in a new date-based folder.
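A quick way to get that count without the file explorer choking on the listing — a minimal Python sketch; the path shown is a placeholder, so point it at your own site's Diagnostics folder:

```python
import os

def count_subfolders(path):
    """Count immediate subdirectories, streaming entries instead of
    building the full directory listing in memory."""
    count = 0
    with os.scandir(path) as entries:
        for entry in entries:
            if entry.is_dir():
                count += 1
    return count

# Hypothetical path -- adjust for your site:
# print(count_subfolders(r"D:\Sites\MySite\Files\System\Diagnostics"))
```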

You would think this is unrelated to users, but it appears that the backend treeview makes a call to the repositories, which in turn perform a 'write' (yep, a write) on every one of those folders.

Once it's cached, it doesn't need to do it again until the cache expires or is cleared, which is why it's fast again after the first load.

I'm curious if that is the issue for you too. I can share a PowerShell cleanup script if you confirm that's your issue.

Scott

 
Daniel Hollmann
Reply

Well, that could very well be the case. I stopped the count at 1 million folders.

So can I safely delete the contents of the Diagnostics folder, or would you like to share your PowerShell script?

 
Scott Forsyth (Dynamicweb Employee)
Reply

Hi Daniel,

That sounds like a good confirmation that it's the same issue then.

If you delete all but the most recent (based on the filename and not the last modified date), then the index will continue to run. The rest of the folders are all stale. 

I've attached our Clean-Logs script, which includes this new addition. Make sure to review lines 35, 36, 37 to point to the path on your site. This assumes that it's the folder above your site root so that it can process multiple sites if needed. It also assumes that the /files folder is directly in the site root. If that convention isn't the same for you, the script will need to be tweaked for your needs. 

With that many folders, it may run for a few hours the first time, but then you run this on a schedule every night, and it will keep it in check.
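This is not the actual Clean-Logs script, but a minimal Python sketch of the same idea for anyone without the attachment: keep the N newest date-named folders (sorted by folder name, not last-modified date, since the backend refreshes the timestamps) and never delete the last remaining folder. The function name and default retention are assumptions:

```python
import os
import shutil

def remove_stale_diagnostics_folders(diagnostics_path, keep=3):
    """Delete all but the `keep` newest subfolders of the Diagnostics
    folder. Date-based folder names sort chronologically as strings,
    so sorting by name reliably identifies the newest folders."""
    folders = sorted(
        entry.name
        for entry in os.scandir(diagnostics_path)
        if entry.is_dir()
    )
    keep = max(keep, 1)  # never delete the last folder, or the index fails
    for name in folders[:-keep]:
        shutil.rmtree(os.path.join(diagnostics_path, name))
```

Scheduled nightly (e.g. from Windows Task Scheduler), this keeps the folder count in check without running inside the w3wp process.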

All the best,

Scott 

 

 
Daniel Hollmann
Reply

So if I understand this correctly, I only need to run the "Remove-StaleDiagnosticsFolders" function in the PowerShell script?
And in my case the files folder is a virtual directory and not at the site root, so I can make an adjustment on line 173 to get the correct path to the Diagnostics folder?

 
Scott Forsyth (Dynamicweb Employee)
Reply

Hi Daniel,

Yes, exactly, except for one more thing. Also set the $StatusXmlFolderRetentionAmount variable. 3 should be a good value. That should do it.

Scott

 

 
Imar Spaanjaars (Dynamicweb Employee)
Reply
This post has been marked as an answer

Instead of using a custom PowerShell script, can't you just use auto-purging of log files and include the Diagnostics folder? That seems easier and more visible to me.

Imar

 

Votes for this answer: 1
 
Scott Forsyth (Dynamicweb Employee)
Reply

The problem I've run into is that the backend touches each of the folders when it is first viewed, so the timestamp is always refreshed. So, if you try to delete anything over X days old, nothing is ever deleted.

 
Imar Spaanjaars (Dynamicweb Employee)
Reply

It's not what I see on the solutions I checked; cleanup seems to delete those files and folders just fine.

Imar

 
Scott Forsyth (Dynamicweb Employee)
Reply

Hi Imar,

That's good to know that it may work. There are some areas to consider.

The most exciting part of this is the new finding that keeping /Files/System/Diagnostics under control addresses the slow backend loading of the users page (here). Consider an example with 5 repositories, each with an A and B instance, rebuilding every 15 minutes. That's 960 new folders per day, or roughly 30K per month, or 345K over a year. It's even worse if the index is rebuilt every 5 minutes, or if there are full and partial repositories, each being rebuilt regularly.
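The arithmetic checks out: 5 repositories with an A and B instance each is 10 index builds per cycle, and a 15-minute schedule runs 96 cycles a day:

```python
repositories = 5
instances = 2                     # A and B instance per repository
cycles_per_day = 24 * 60 // 15    # one rebuild every 15 minutes = 96 cycles

folders_per_day = repositories * instances * cycles_per_day
print(folders_per_day)        # 960 new Status.xml folders per day
print(folders_per_day * 30)   # 28,800 per month
print(folders_per_day * 360)  # 345,600 per year
```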

Regarding methods to clean it up, there are a couple things to consider. The performance issue only came up in 9.13, so it may not have been as much of a problem prior to that.

There are some gotchas:

  • You want to clear all but at least one folder, and never clear the last folder, otherwise the index fails. In the past we deleted all files older than X days while the index rebuilding task had been turned off for a few days, so no new folders were being created. That deleted the last folder, the index failed, and the site appeared to be down until the index was rebuilt.
  • That means you may need a different schedule for prod and dev, depending on the scheduled tasks on the dev instance.
  • So you need to balance keeping a fairly low number of retention days against never having the scheduled task runner, or the repository scheduled task, disabled for that many days.
  • Personally, I prefer the PowerShell script, which can be scheduled off-hours and runs outside of the website process. We were seeing high CPU on the w3wp worker process for extended periods, and an external process is better able to throttle that and gives visibility into what is causing the high CPU.
  • With the recent change in 9.13, it appears to update the timestamps, so I'm surprised that it even works; but since I haven't tested the backend cleaning tool recently, I can't say for certain, so it is good to have the backend option you mention as an alternative.

 

 

 
Oleg Rodionov (Dynamicweb Employee)
Reply
This post has been marked as an answer

Hi all,

Thanks a lot for researching. I was able to reproduce the issue on a test environment based on the latest DW 9.13.12/14.0 with 1M users and 50K+ folders/files inside the Diagnostics folder. The issue can be fixed in the regular way using the log clearance feature, as Imar mentioned above. Besides, I've created a new task, 9022, to implement the following DW settings, which would avoid a huge quantity of logs from indexing tasks:

1. The option to disable the logging of tasks and repositories to these files entirely.

2. The option to get log files only when a build or task fails, and not from every successful build.

BR, Oleg QA

 

Votes for this answer: 1
 
Scott Forsyth (Dynamicweb Employee)
Reply

Hi Oleg,

Excellent. I'm glad that you were able to reproduce this and confirm it. Your workaround sounds good, which confirms Imar's suggestion too. The new task recommendations sound good too. 

We're happy to have it figured out now.

Scott

 
Oleg Rodionov (Dynamicweb Employee)
Reply

Hi,

Moreover, I've created a new task, 9026, to investigate the reason for the slow user page loading under these conditions; I hope it will be helpful as well.

BR, Oleg QA

 
Scott Forsyth (Dynamicweb Employee)
Reply

Hi Oleg,

I like it. That sounds like a good idea.

Scott

 
Daniel Hollmann
Reply

Hi again. I tried to run the log clearance that Oleg proposed. This worked well, in that the user module was fast again. However, the problem Scott warned about may have come true for me. My indexes are now unloadable and I can't edit them. I tried to delete them and then upload the .index file again, but that did not fix anything.
This is what I see when I try to access the index through the admin interface. Attached is the console log from the web browser.

I don't see any logs in the event viewer.


 

Any idea on how I should fix this?

2022-09-16_11h06_08.png 2022-09-16_11h11_12.png
 
Daniel Hollmann
Reply

This also happened when I tried to create a new index. It happens in two of my environments.

I have upgraded and downgraded the solution (due to this problem), and that may also be a factor in why I'm getting this error.

I can still build the indexes through the scheduled tasks, and they seem to return data as well when used through the query publisher.

2022-09-16_11h20_57.png

 
