Most of last week was spent tracking down and fixing the remaining big memory leak in the import process. This turned out to be caused by our cache of resource strings being effectively uncapped, so as an import proceeds the cache gradually takes over all the available memory. I’ve now updated the resource cache so that it has a (configurable) limit set per store that is opened. The setting to configure this is BrightstarDB.ResourceCacheLimit. I’m not totally happy that this cache limit is (due to the implementation) a per-store limit rather than a global limit across all stores, nor that the limit is specified as a number of cache entries rather than an amount of memory used. I’m going to look into a way to change this – possibly by replacing the home-rolled caching with something else, as long as I can find a solution that doesn’t impact the gains in import speed.
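For illustration, here is a minimal sketch of how the new limit might be set, assuming the setting is picked up from the application configuration file in the same way as the other BrightstarDB.* settings; the value shown (a count of cache entries, not a byte size) is just an example:

```xml
<configuration>
  <appSettings>
    <!-- Example only: cap the resource string cache at 1,000,000 entries
         per open store. The limit is a number of entries, not an amount
         of memory, and applies to each store that is opened. -->
    <add key="BrightstarDB.ResourceCacheLimit" value="1000000" />
  </appSettings>
</configuration>
```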

I’ve also had time to extend the Data Objects API and the Entity Framework API so that updates made through these APIs can now be targeted at a particular named graph in a BrightstarDB store. This opens up some really interesting possibilities for using these higher-level APIs to do domain-specific inferencing or data processing that stores its output in a separate graph from the data it operates on – previously this was only achievable with the low-level RDF API, so I think this is a good step forwards in the usability of the APIs. The documentation is a bit terse at the moment but you can find it here for the Data Object Layer, and here for the Entity Framework.
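As a rough sketch of what this looks like from the Data Object Layer – assuming optional updateGraph and defaultDataSet parameters on IDataObjectContext.OpenStore as described in the Data Object Layer documentation, with the graph URIs made up for the example – you might write something like:

```csharp
using BrightstarDB.Client;

// Sketch: open a store so that reads come from one named graph while all
// updates are written into a separate graph for inferred data.
// The updateGraph / defaultDataSet parameters and the graph URIs here are
// illustrative assumptions, not a definitive API reference.
var context = BrightstarService.GetDataObjectContext(
    "type=embedded;storesDirectory=c:\\brightstar");
var store = context.OpenStore("MyStore",
    updateGraph: "http://example.org/graphs/inferred",
    defaultDataSet: new[] { "http://example.org/graphs/source" });

// New and modified data objects are now persisted into the update graph
// rather than the store's default graph.
var person = store.MakeDataObject("http://example.org/people/alice");
person.SetProperty(
    store.MakeDataObject("http://example.org/schema/inferredType"),
    store.MakeDataObject("http://example.org/schema/Employee"));
store.SaveChanges();
```

The Entity Framework side works along the same lines: the generated context can be constructed with equivalent optional graph parameters, but check the Entity Framework documentation for the exact signatures as they may differ from this sketch.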

