Issue 11480

Title: Bad performance when computing deletable and writable state
Priority: bug
Status: testing
Nosy list: acaubet, ced, reviewbot, smn
Assigned to: ced
Keywords: backport, review

Created on 2022-05-12.23:43:08 by ced, last changed 1 week ago by reviewbot.

Messages

Author: [hidden] (ced) Tryton committer Tryton translator
Date: 2022-05-13.09:54:15

For me, those are just marketing arguments. What is the point of having users feel a performance degradation for at least 6 months because of a newly added feature?
We are talking here about a performance degradation of 40000% (50 seconds instead of 20 ms to open 80 account move lines).

Author: [hidden] (acaubet)
Date: 2022-05-13.09:47:37

I understand your concern about backporting.
But to be consistent I would prefer not to backport, as we do not do so for other performance improvements. My reasons are:
. It encourages users to migrate to newer series.
. Users can compare performance before and after the migration, so they get a feeling of progression. Otherwise they may think there were server issues or overloading.
. If you have a large database, it is all the more important to stay on recent versions to get the best performance.

Author: [hidden] (ced) Tryton committer Tryton translator
Date: 2022-05-12.23:47:23

As this can be an important performance regression on large databases and because the fix is very simple, I think we should backport it.

Author: [hidden] (ced) Tryton committer Tryton translator
Date: 2022-05-12.23:43:07

Since issue9357 we compute _write and _delete values based on ir.rule by adding a SQL expression to the SELECT clause.
This kills performance when the ir.rule comes from __access__, because it uses a 'where' operator which is converted into an IN subquery. So the database cannot really optimize it, especially when the target table is very large, because it needs to retrieve all the target record ids the user has access to.

For example, with account.move.line we get a subquery expression like:

(("a"."move" IN (SELECT "b"."id" AS "id" FROM "account_move" AS "b" WHERE (("b"."company" IN (1)))))) AS "_delete"

So it needs to materialize a table with all the moves of the company (which can be very large).
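
Here is a minimal sketch, in Python with python-sql (the query builder Tryton uses), of how such an expression can be built. It is only an illustration, not the actual trytond code, and the tables, aliases and selected columns are simplified:

    from sql import Table

    line = Table('account_move_line')   # rendered as alias "a"
    move = Table('account_move')        # rendered as alias "b"

    # The __access__ rule on the move field turns into an IN subquery over
    # the whole account_move table filtered by the rule domain (company 1).
    delete_expr = line.move.in_(
        move.select(move.id, where=move.company.in_([1])))

    # The expression is then selected alongside the regular columns.
    query = line.select(line.id, delete_expr)
    print(*query)  # prints the generated SQL and its parameters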

I do not see how we could rewrite the query to be more efficient.

But we can optimize the case where the rules for write and delete are the same as the rule for read. In that case we know the expressions for _delete and _write are True for all readable records.
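
A minimal sketch of that optimization, assuming a hypothetical helper that receives the rule domains per access mode and the already-built subquery expressions (the names are made up for illustration, not the actual trytond implementation):

    from sql import Literal  # python-sql

    def access_expressions(rule_domains, subquery_expressions):
        # rule_domains: hypothetical mapping mode -> ir.rule domain,
        #     e.g. {'read': [...], 'write': [...], 'delete': [...]}
        # subquery_expressions: mapping mode -> the IN-subquery expression.
        expressions = {}
        for mode in ('write', 'delete'):
            if rule_domains[mode] == rule_domains['read']:
                # Same rule as read: every readable record is also
                # writable/deletable, so a constant TRUE replaces the
                # expensive IN subquery.
                expressions['_' + mode] = Literal(True)
            else:
                expressions['_' + mode] = subquery_expressions[mode]
        return expressions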

History
Date                 User       Action  Args
2022-05-13 10:19:25  reviewbot  set     messages: + msg76623
2022-05-13 09:54:15  ced        set     messages: + msg76621
2022-05-13 09:47:37  acaubet    set     messages: + msg76620; nosy: + acaubet
2022-05-13 09:38:46  smn        set     nosy: + smn
2022-05-12 23:53:07  reviewbot  set     messages: + msg76616; nosy: + reviewbot
2022-05-12 23:47:23  ced        set     keyword: + backport, review; messages: + msg76615; reviews: 419121003; status: in-progress -> testing
2022-05-12 23:43:08  ced        create
