NHibernate Second Level Caching - It's Complicated
NHibernate Second Level Cache is the best friend that you can't trust. It's a mixed bag of hits and misses. We'll analyze some of its issues and nuances in detail here.
Let's assume we have the following hypothetical model: a Customer who can have one or more Addresses, and each Address can have one or more Phones.
Our goal is to load the Customer, its Addresses, and their Phones in a single query.
For our mapping we have Customer with a Bag of Addresses and Address with a Bag of Phones.
The listing below is the abbreviated mapping, using NHibernate 3 mapping-by-code.
public class CustomerMap : ClassMapping<Customer>
{
public CustomerMap()
{
Table("Customer");
Cache(map => {map.Usage(CacheUsage.ReadWrite); map.Region("Customers");});
Id(x => x.Id, map =>
{
    map.Column("id");
    map.Generator(Generators.HighLow,
        g => g.Params(new { table = "NextIdTable", column = "next_id", max_lo = "0", where = "type = 'CU'" }));
});
...
...
...
Bag(x => x.Addresses,
    map =>
    {
        map.Inverse(true);
        map.Cascade(Cascade.None);
        map.Key(k => k.Column("customer_id"));
        map.Fetch(CollectionFetchMode.Join);
    },
    r => r.OneToMany(o => o.EntityName("Ord.App.Domain.Address")));
}
}
Similarly, we have the following mapping for Address:
public class AddressMap : ClassMapping<Address>
{
public AddressMap()
{
Table("Address");
Id(x => x.Id, map =>
{
    map.Column("id");
    map.Generator(Generators.HighLow,
        g => g.Params(new { table = "NextIdTable", column = "next_id", max_lo = "0", where = "type = 'AD'" }));
});
...
...
...
Bag(x => x.Phones,
    map =>
    {
        map.Inverse(true);
        map.Cascade(Cascade.None);
        map.Key(k => k.Column("address_id"));
        map.Fetch(CollectionFetchMode.Join);
    },
    r => r.OneToMany(o => o.EntityName("Ord.App.Domain.Phone")));
}
}
Scenario 1 : Load using session.Get<Customer>(123)
Observations :
- On the first call, NH will generate the query as expected, with joins from the Customer to the Address and Phone tables.
- On the second call (in a different session; we are interested in the second-level cache here), NH loads the Customer and then sends an individual query for each of the Addresses and Phones. We have an N+1 problem here.
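The two-call pattern above can be sketched as follows. This is a sketch only: it assumes an already-built ISessionFactory (here named sessionFactory) with the second-level cache enabled and the mappings shown above; the point is that the second Get happens in a fresh session, so it must go through the second-level cache rather than the session (first-level) cache.

```csharp
using (var session1 = sessionFactory.OpenSession())
{
    // First call: a single SQL statement joining Customer, Address and Phone.
    var customer = session1.Get<Customer>(123);
}

using (var session2 = sessionFactory.OpenSession())
{
    // Second call, in a new session: the Customer entity is served from the
    // second-level cache, but NH issues an individual query for each Address
    // and each Phone -- the N+1 problem described above.
    var customer = session2.Get<Customer>(123);
}
```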
To solve the problem we need to mark Address as cacheable too. Add the following to the AddressMap:
Cache(map => {map.Usage(CacheUsage.ReadWrite); map.Region("Addresses");});
And add the following to the Bag mapping configuration blocks in CustomerMap and AddressMap respectively:
map.Cache(c => { c.Usage(CacheUsage.ReadWrite); c.Region("Addresses");});
map.Cache(c => { c.Usage(CacheUsage.ReadWrite); c.Region("Phones");});
The code above marks the collections themselves as cacheable.
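Putting it together, the Addresses Bag in CustomerMap would then look roughly like this (a sketch combining the original mapping with the cache configuration above):

```csharp
Bag(x => x.Addresses,
    map =>
    {
        map.Inverse(true);
        map.Cascade(Cascade.None);
        map.Key(k => k.Column("customer_id"));
        map.Fetch(CollectionFetchMode.Join);
        // Mark the collection itself as cacheable, in its own cache region.
        map.Cache(c => { c.Usage(CacheUsage.ReadWrite); c.Region("Addresses"); });
    },
    r => r.OneToMany(o => o.EntityName("Ord.App.Domain.Address")));
```

The Phones Bag in AddressMap gets the same treatment, with the "Phones" region.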
Scenario 2 : Load using Criteria Query.
session.CreateCriteria<Customer>().Add(Restrictions.In("Id", new List<int> { 123, 345, 456 }));
Observations :
- For this to be cacheable you first have to enable query caching (while building the SessionFactory).
- Even then the data is not cached (you would think it would be, since the entities are marked cacheable and so are the collection relationships).
- You also need to mark the criteria query itself as cacheable:
criteria.SetCacheable(true).SetCacheRegion("Customers");
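The two steps above, enabling the query cache while building the SessionFactory and marking the criteria cacheable, can be sketched as follows. The configuration shown is minimal and hand-wired for illustration; in a real setup you would also configure a cache provider, connection settings, and the mappings.

```csharp
// While building the SessionFactory: turn the second-level cache and
// the query cache on.
var cfg = new NHibernate.Cfg.Configuration();
cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "true");
// ... cache provider, connection settings and mappings go here ...
var sessionFactory = cfg.BuildSessionFactory();

// Per query: mark the criteria itself as cacheable, in a named region.
var customers = session.CreateCriteria<Customer>()
    .Add(Restrictions.In("Id", new List<int> { 123, 345, 456 }))
    .SetCacheable(true)
    .SetCacheRegion("Customers")
    .List<Customer>();
```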
Now having done all this you'll expect all to work. Unfortunately it still doesn't.
- The counts of the Addresses and Phones on the Customer will be wrong: since the original join query returned duplicate rows, the hydrated entities have duplicates too.
- To fix this you can apply a ResultTransformer to the criteria:
criteria.SetResultTransformer(new DistinctRootEntityResultTransformer());
Unfortunately this only works at the first level. So Customer.Addresses would be correct now, but Address.Phones would still have duplicates, and I couldn't figure out a way to remove that duplication.
- To really fix this issue, you'll need to make the collection fetch Subselect rather than Join.
So in the mappings above, change map.Fetch(CollectionFetchMode.Join); to map.Fetch(CollectionFetchMode.Subselect);
Finally we'll have the correct counts and caching.
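With all the fixes applied, the Phones Bag in AddressMap ends up looking roughly like this (a sketch combining the fetch-mode and cache changes above; the Addresses Bag in CustomerMap changes the same way):

```csharp
Bag(x => x.Phones,
    map =>
    {
        map.Inverse(true);
        map.Cascade(Cascade.None);
        map.Key(k => k.Column("address_id"));
        // Subselect instead of Join: NH loads the collection with a separate
        // query that reuses the original query as a subselect, so the root
        // query returns no duplicate rows and the cached counts are correct.
        map.Fetch(CollectionFetchMode.Subselect);
        map.Cache(c => { c.Usage(CacheUsage.ReadWrite); c.Region("Phones"); });
    },
    r => r.OneToMany(o => o.EntityName("Ord.App.Domain.Phone")));
```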
We are not done here yet, though. In the next post I'll cover more situations, issues, and potential solutions for the NHibernate Second Level Cache.