Under some circumstances the calls to config.get or config.getint can become performance bottlenecks.
Those configuration values are static data: they do not change without a server restart (even though we briefly considered configuration hot reloading), and thus they should be cached.
The implementation does caching on the module level.
Some values are not cacheable. After fixing the tests, I still wonder whether a cache in our subclass of RawConfigParser would not be more correct.
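A cache at the parser level could be sketched as follows (a hypothetical class, not the actual TrytonConfigParser; it assumes values are static after startup and ignores raw/vars variations in the cache key for simplicity):

```python
from configparser import RawConfigParser

class CachingConfigParser(RawConfigParser):
    """Sketch: memoize get() results, so repeated lookups of the same
    (section, option) pair skip the parser machinery. Since getint()
    goes through get() internally, it benefits from the cache too."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._value_cache = {}

    def get(self, section, option, **kwargs):
        key = (section, option)
        try:
            return self._value_cache[key]
        except KeyError:
            value = super().get(section, option, **kwargs)
            self._value_cache[key] = value
            return value
```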
I'd probably be more in favor of integrating a cache directly in
TrytonConfigParser, but the current approach allows deciding,
value by value, whether we want to cache it or not, which may be
useful in some cases.
* Jean CAVALLO [2019-10-22 10:57:53]:
>
> Jean CAVALLO <jean.cavallo@coopengo.com> added the comment:
>
> I'd probably be more in favor of integrating a cache directly in
> TrytonConfigParser,
Yes, me too, but the fear is that there would be no gain by using a
cache at this level. Would you be ready to test both implementations
and report back?
I agree with msg52605, but I'm wondering whether it would be useful to add this (or at least discuss it) at the Python language level. IIRC, TrytonConfigParser is a default ConfigParser with only some sensible defaults, so any performance improvement in Python will also benefit Tryton.
Do we have a script to test the current performance and the performance with both patches?
I think this information would be very useful to decide which solution is best.
>> I'd probably be more in favor of integrating a cache directly in
>> TrytonConfigParser,
> Yes, me too, but the fear is that there would be no gain by using a
> cache at this level. Would you be ready to test both implementations
> and report back?
I would, my use case only requires the trytond patch, since it's mostly the
cache / record-model-field caches that are taking some time.
> Did we have some script to test: the current performance and the performance with both patches?
I noticed the problem when testing a very demanding API in our custom application,
and got to the point where, out of a 5 second call, about 0.5 second (10%) was
spent in config.getint
AFAIU, this was mainly caused by trytond loading the cache size information
when creating the local cache while instantiating new records (I have about 50k
instantiations in this call).
Basically, manually patching getint to hardcode the cache-related values
with a simple if / elif gave those 0.5 seconds back.
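The manual patch described here might look roughly like the following (section and option names are hypothetical, not trytond's actual ones):

```python
# Short-circuit the hot, cache-related lookups with hardcoded values
# and fall through to the real getint() for everything else.
from configparser import ConfigParser

config = ConfigParser()
config.read_string(
    "[cache]\nrecord = 2000\nfield = 100\n[web]\nnum_proxies = 1\n")

_original_getint = config.getint

def getint(section, option, **kwargs):
    if section == 'cache' and option == 'record':
        return 2000
    elif section == 'cache' and option == 'field':
        return 100
    return _original_getint(section, option, **kwargs)

# Monkey-patch the instance so existing callers pick up the shortcut.
config.getint = getint
```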
> I'm wondering if it would be useful to add (or at least discuss) this at the Python language level.
It could, but we would only benefit from it in 1.5 years (at best). And it would not change the fact that a constant value at the top of the module (a local lookup) is faster than calling a method.
Or in trytond/ir/routes.py (the implementation of get_config computes
the key).
> > I'm wondering if it would be useful to add (or at least discuss)
> > this at the Python language level.
>
> It could, but we would only benefit from it in 1.5 years (at best).
> And it would not change the fact that a constant value at the top of
> the module (a local lookup) is faster than calling a method.
The delay is a very good reason. Also, I think that if we have an
implementation that demonstrates some gains, it would give us
leverage when submitting the patch upstream.
* Jean CAVALLO [2019-10-22 15:18:26]:
>
> Jean CAVALLO <jean.cavallo@coopengo.com> added the comment:
>
> >> I'd probably be more in favor of integrating a cache directly in
> >> TrytonConfigParser,
>
> > Yes, me too, but the fear is that there would be no gain by using a
> > cache at this level. Would you be ready to test both implementations
> > and report back?
>
> I would, my use case only requires the trytond patch, since it's mostly the
> cache / record-model-field caches that are taking some time.
I made a second patch with the cache at the config parser level (not
tested on the modules yet).
Here are some raw results on the trytond tests (little care was
taken when compiling those figures :P):
- without any cache: 34.988s, 35.081s, 34.448s, 34.905s
- cache on the module level: 33.895s, 33.040s, 33.107s, 34.028s
- cache on the config level: 33.048s, 33.714s, 33.782s, 33.703s
So obviously the cache on the module level is a bit faster. But is the
gain worth the introduction of those global variables?
I tried to test your second review, but it does not work properly,
because when updating the configuration through the trytond.conf
file, ConfigParser does not call set but rather directly updates
the internal variables. So the only values I get when starting the
server are the default values.
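The behaviour described here can be checked with a small standalone script: parsing a file updates the parser's internal storage directly, so overriding set() does not intercept values read from trytond.conf (or any file):

```python
from configparser import ConfigParser

class TracingParser(ConfigParser):
    """Records whether set() was ever called."""
    set_called = False

    def set(self, section, option, value=None):
        self.set_called = True
        super().set(section, option, value)

parser = TracingParser()
# Parsing bypasses set(): values go straight into the internal dicts.
parser.read_string("[web]\nlisten = localhost:8000\n")
print(parser.set_called)            # False: set() was never invoked
print(parser.get('web', 'listen'))  # yet the value was stored anyway
```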
I looked at the ConfigParser implementation and I do not see much work done there, except by BasicInterpolation.
So I think it is good to convert the three configuration values of the ORM into constants: in trytond/transaction.py, trytond/backend/__init__.py and trytond/model/modelstorage.py.
And to use a dummy interpolation instead of BasicInterpolation for the ConfigParser.
The other values can stay as they are for now, unless we spot a very frequently used code path.
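As I understand it, "dummy interpolation" corresponds to disabling interpolation, which the stdlib supports by passing interpolation=None (equivalent to the no-op configparser.Interpolation base class), skipping BasicInterpolation's '%'-processing on every get():

```python
from configparser import ConfigParser

# With interpolation disabled, '%' sequences come back verbatim; with
# the default BasicInterpolation, this value would need the '%' escaped
# or a 'user' option defined, and would otherwise raise on get().
raw = ConfigParser(interpolation=None)
raw.read_string("[database]\nuri = postgresql://%(user)s@localhost\n")
print(raw.get('database', 'uri'))  # postgresql://%(user)s@localhost
```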