Developer forum

Forum » Dynamicweb 9.0 Upgrade issues » Performance issue with first hit


Søren Heide Larsen

Hi,

A few days ago my team started complaining about long wait times when rebuilding the application. When I investigated, I noticed that the first hit takes a lot of time. The first hit usually takes longer while caches are being built, but not minutes. I also noticed that this is because we reach a SQL Server over VPN and Dynamicweb issues a lot of SQL queries.

So my plea would be to reduce the number of SQL queries across the board on first hit. On a page with some items on it I measured 229 SQL queries, many of them redundant, e.g.

"SELECT * FROM Information_Schema.Tables WHERE Table_Type = 'BASE TABLE'" was called 61 times
"SELECT [Page].[PageNavigation_UseEcomGroups], [Page].[PageNavigationParentType], [Page].[PageNavigationGroupSelector], [Page].[PageNavigationShopSelector], [Page].[PageNavigationMaxLevels], [Page].[PageNavigationProductPage], [Page].[PageNavigationIncludeProducts], [Page].[PageNavigationProvider] FROM [Page] WHERE ( [Page].[PageId] = @p0 )" was called 42 times, etc.

I know that the last example is probably called with different values of @p0, but you may want to consider a strategy that pulls out everything it needs in one go instead of fetching rows one by one (like Prepare on prices).
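To sketch what I mean (the column list is shortened for brevity, and the helper is mine, not an existing Dynamicweb API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class BatchedPageLookup
{
    // Instead of running "SELECT ... WHERE [PageId] = @p0" once per page,
    // build a single parameterized IN-clause query covering all the ids.
    public static string BuildBatchQuery(IReadOnlyList<int> pageIds)
    {
        var parameters = Enumerable.Range(0, pageIds.Count).Select(i => "@p" + i);
        return "SELECT [PageId], [PageNavigationProvider] FROM [Page] " +
               "WHERE [PageId] IN (" + string.Join(", ", parameters) + ")";
    }

    static void Main()
    {
        Console.WriteLine(BuildBatchQuery(new[] { 1, 2, 3 }));
    }
}
```

One round trip over the VPN instead of 42.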

My DW version is 9.3.2. I don't know if any of this is resolved in 9.3.4, but I did not see anything about this issue in the release notes up to the newest version.

Have a nice weekend :)

/Søren

 


Replies

 
Martin Vang

Hi Søren,

We're currently working on caching and database access in Ecommerce (all of it). It will not be a hotfix (that would be too "hot"); expect to see most of it in the 9.4 release.

Whether this will solve all your issues is currently unknown. First-hit performance is not what we optimize for - normal usage is. :)

 

We'll look at first-hit performance once we're done with caching and database access in all levels of our application stack (currently planned for "later").

BR

Martin

 
Søren Heide Larsen

Hi Martin,

I investigated a bit more, and from Dynamicweb 9.2 to 9.3 the number of initial calls has grown massively; in fact 173 of the calls I saw are related to Items, which you just rewrote. Just because you cache your calls does not mean you can ignore the performance hit when the cache is invalidated or the application recycles. You really need to take first-hit implications into consideration both in your development strategy and in your QA process, otherwise you will end up having to reinvest in something you just invested in.

I still urge you to look into this, at least in the Items area, as this is where we take the biggest hit.
It sounds good that you're looking into Ecommerce as well; just keep this in mind when redoing that :)

/Søren

 

 
Martin Vang

Hi Søren,

Items are cached in a lazy-load fashion. If you have a massive performance hit during startup and it is because of items, it's because you access a massive number of items during startup. If you look up 173 items during startup, we need to hit the database that many times.

We currently work with 3 caching strategies:

1. Eager: store everything related to X in memory and ensure synchronization of X to the database. All of X is loaded at application startup.

2. Lazy: load each X on demand and store it in a memory cache with a 15-minute sliding expiration. The first request adds X to the cache, which addresses frequent-access scenarios.

3. None: don't cache X.

Our decision was to use no. 2 for Items, as some of our customers have too many items to feasibly use option 1.
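As a sketch, strategy no. 2 has roughly this shape (using MemoryCache from System.Runtime.Caching; this illustrates the idea, it is not our actual implementation):

```csharp
using System;
using System.Runtime.Caching;

class LazyItemCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    // Lazy (no. 2): the first request loads X and caches it with a
    // 15-minute sliding expiration; subsequent requests hit memory.
    public static T GetOrLoad<T>(string key, Func<T> load) where T : class
    {
        var cached = Cache.Get(key) as T;
        if (cached != null)
            return cached;

        var value = load();
        Cache.Set(key, value, new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(15)
        });
        return value;
    }
}
```

The flip side, as you point out, is that every cache miss (first hit, recycle, invalidation) pays the full database cost.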

If you have some suggestions as to how we can improve this, please let me know. :)

 

I hope this helps explain our considerations regarding caching and lets you come up with feedback that can help us make it even better. :)

BR

Martin

Søren Heide Larsen

Hi Martin,

I think we can both agree that if you issue the exact same SQL query 61 times, you are doing something wrong regardless of which strategy you use.

 

In my opinion items are quite comparable to products, and you actually built a nice strategy for prices with the Prepare option.
We use the same strategy as you do; however, we always strive to ensure two things:

1. We implement a prepare strategy whenever we use the object in relation to something enumerable - in your case that would be paragraphs, the Query Publisher, etc.

2. We recently started implementing a warmup strategy that we can run in a scheduled manner, which ensures the whole site is warmed up in production.
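The warmup itself is nothing fancy - a scheduled job that requests a list of URLs so the first real visitor doesn't pay the first-hit price (the URLs below are placeholders; in practice we build the list from the sitemap):

```csharp
using System;
using System.Net;

class SiteWarmup
{
    static void Main()
    {
        // Placeholder URLs - replace with pages read from the sitemap.
        var urls = new[] { "https://example.com/", "https://example.com/products" };

        foreach (var url in urls)
        {
            using (var client = new WebClient())
            {
                // The warmup job pays the first-hit cost here,
                // so visitors hit already-populated caches.
                try { client.DownloadString(url); }
                catch (WebException ex) { Console.WriteLine(url + ": " + ex.Message); }
            }
        }
    }
}
```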

BR
Søren

 
Martin Vang

To take this talk in another direction, back to your concrete problems.

We're going to look into the bug/issue related to the two recurring database requests you wrote about in your first post (I can see that I was not clear that we were going to look into it).

Regarding your problem with items: can you explain to me what your use case is that results in so many items being needed on first hit? Do you have 173 items in play for a single page rendering?

Edit: I'll note down your suggestion regarding "prepare items" and incorporate it into a future version of our item cache (any lazy-loaded data, really).

BR

Martin

 
Søren Heide Larsen

Hi Martin,

Sounds great.

The 61 calls mentioned in my first post seem to be made initially regardless of which page I request, so this probably represents the number of item types in my database.

The rest of the calls are divided across several item types, where rows are fetched one by one. An example is [ItemType_PageProperties], which we use for pages and which is called many times.

We especially use a lot of pages with paragraphs with items that use item lists, which in the end results in a lot of individual calls. If you want more specifics I would prefer to do this over email to avoid exposing the customer, but I do not think it would be hard to replicate :)

/Søren

 
Nuno Aguiar (Dynamicweb Employee)

Hi Søren,

 

Do you by any chance do multiple GetLoops for the same data? We ran into issues like that and found that if we instantiated a variable instead of duplicating a GetLoop call, we'd get a performance improvement.

 

So we do stuff like this:

 

var productsLoop = GetLoop("Products");

if (productsLoop.Any()) {
    foreach (var product in productsLoop) {
        // Go sell them
    }
}

 

We apply this technique everywhere, so if you are calling the same item list from the page properties multiple times (and/or in every paragraph template), this might be a way to do it.

 

Best Regards,

Nuno Aguiar

 
Martin Vang

Hi Nuno,

Just because I'm curious, I want to clarify what you do. :)

var productsLoop1 = GetLoop("Products");

if (productsLoop1.Any()) {
    foreach (var product in productsLoop1) { [do stuff] }
}

var productsLoop2 = GetLoop("Products");

if (productsLoop2.Any()) {
    foreach (var product in productsLoop2) { [do stuff] }
}

should be faster than

foreach (var product in GetLoop("Products")) { [do stuff] }

foreach (var product in GetLoop("Products")) { [do stuff] }

?

Or did you mean that it's going to be faster if you do it like this:

var productsLoop = GetLoop("Products");

var anyProducts = productsLoop.Any();

if (anyProducts) {

    foreach (var product in productsLoop) { [do stuff] }

}

if (anyProducts) {

    foreach (var product in productsLoop) { [do stuff] }

}

?

 

BR

Martin

 
Nuno Aguiar (Dynamicweb Employee)

Hi Martin,

 

It's the latter. We only call GetLoop once and loop through it as many times as we need.

 

We also use .Any() instead of .Count() because it performs faster as well (although unnoticeable with small record sets), so that even if our record set grows, that line of code performs the same.
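A quick illustration of why (this matters for lazily evaluated sequences; for a list that is already materialized in memory the difference is negligible):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class AnyVsCount
{
    public static int Produced;

    // A lazy sequence that counts how many items actually get enumerated.
    public static IEnumerable<int> Numbers(int n)
    {
        for (var i = 0; i < n; i++)
        {
            Produced++;
            yield return i;
        }
    }

    static void Main()
    {
        Produced = 0;
        Console.WriteLine(Numbers(1000).Any());   // stops after the first element
        Console.WriteLine(Produced);              // 1

        Produced = 0;
        Console.WriteLine(Numbers(1000).Count()); // walks the whole sequence
        Console.WriteLine(Produced);              // 1000
    }
}
```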

 

Best Regards,

Nuno

 
Søren Heide Larsen

Hi Nuno, I completely agree that is the way to go in your examples, and we also spend a lot of energy ensuring our C# is as efficient as possible.

I do not think our performance hit is in the Razor itself though, as that would presumably affect all requests and not just the initial one :)

/Søren

 
