How to repeat it:
1. Create two productions with many input moves (so that assignment doesn't happen immediately)
2. Purchase enough quantity to fulfill one production
3. Assign both productions concurrently, i.e. start the second assignment before the first one has finished (see the sketch below)
Result: both productions are assigned
Expected result: one production fails to assign
The expected result happens when the productions are assigned one by one, but not when they are assigned concurrently.
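For step 3, a minimal sketch of what "concurrently" means here, assuming a hypothetical assign_production() helper that performs the same call as clicking the assign button in the client; the only point is that the two calls overlap in time:

    # Sketch of the concurrent part of the reproduction.
    # assign_production() is a hypothetical helper standing for the
    # client call triggered by the assign button on the given id.
    import threading

    def assign_production(production_id):
        ...  # e.g. the RPC call behind the assign button

    threads = [
        threading.Thread(target=assign_production, args=(production_id,))
        for production_id in (1, 2)
    ]
    for thread in threads:
        thread.start()   # start both assignments without waiting
    for thread in threads:
        thread.join()    # with the bug, both end up assigned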
For me, the issue also comes from the change to repeatable read instead of serializable isolation.
So I guess the only solution, if we want to keep this isolation level, is to use a SELECT ... FOR UPDATE on the moves having the products.
But it is only supported on PostgreSQL.
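For illustration, a sketch of such a row lock with a plain psycopg2 cursor; the table and column names are illustrative, not the actual Tryton code:

    # Sketch: lock the candidate moves for the products before computing
    # the assignation, so a concurrent assign blocks on the same rows.
    import psycopg2

    connection = psycopg2.connect("dbname=tryton")
    cursor = connection.cursor()
    product_ids = [1, 2]  # products required by the production

    cursor.execute(
        "SELECT id, product, quantity"
        " FROM stock_move"
        " WHERE product IN %s AND state = 'draft'"
        " FOR UPDATE",
        (tuple(product_ids),),
    )
    rows = cursor.fetchall()  # the other transaction waits here until commit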
1) Clicking assign on production #1, waiting for it to finish, then clicking assign on production #2 (in another window) and waiting for it to finish (it actually fails because there is not enough quantity). Success scenario.
2) Doing the same thing, but without waiting for #1 to finish. Fail scenario: both assignments succeed.
Indeed, an easier way is not to wait for locks held by other transactions.
This review5901002 will be easier to backport and is also probably better than SELECT ... FOR UPDATE, as it will require fewer writes to disk.
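In PostgreSQL terms, not waiting could look like the following sketch (reusing the cursor from the previous one; this only illustrates the idea, it is not the content of review5901002):

    # Sketch: NOWAIT makes the query fail immediately with
    # LockNotAvailable instead of blocking on rows locked by the
    # other assignation; the caller then reports the failure.
    from psycopg2 import errors

    try:
        cursor.execute(
            "SELECT id FROM stock_move"
            " WHERE product IN %s AND state = 'draft'"
            " FOR UPDATE NOWAIT",
            (tuple(product_ids),),
        )
    except errors.LockNotAvailable:
        connection.rollback()
        raise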
Not waiting for other transactions is likely to end up with the application crashing for the user, so I don't think that's a good solution either.
There's been a patch using advisory locks for some time (review999002), which would also reduce contention, although it is only available for PostgreSQL. Other databases could use a standard lock table.
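For reference, a sketch of how a transaction-scoped advisory lock works in PostgreSQL; the lock key is an arbitrary application-chosen integer and the value below is just a placeholder:

    # Sketch: serialize assignations with a PostgreSQL advisory lock
    # instead of row locks. The key is an arbitrary bigint chosen by
    # the application; 424242 is only a placeholder.
    ASSIGN_LOCK_KEY = 424242

    # Blocking variant, released automatically at commit/rollback:
    cursor.execute("SELECT pg_advisory_xact_lock(%s)", (ASSIGN_LOCK_KEY,))

    # Non-blocking variant, returns true/false instead of waiting:
    cursor.execute(
        "SELECT pg_try_advisory_xact_lock(%s)", (ASSIGN_LOCK_KEY,))
    got_lock, = cursor.fetchone()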
Another option, which would be quite simple, would be to add the possibility to define the isolation level with RPC(serializable=True) and pass that to the database backends. The main problem with this is that assign_try() can only work correctly if the transaction is started as serializable, so anyone calling assign_try() elsewhere (because they're automating it from a write() call or something like that) would have some problems.
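Setting the isolation level itself is plain SQL; a sketch of what the backend could execute when such a serializable=True flag is set (the flag is only the proposal above, not an existing RPC parameter):

    # Sketch: what the backend would have to execute when the proposed
    # serializable=True flag is set; it must run before the first
    # statement of the transaction.
    cursor.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
    # ... assign_try() and the rest of the request run here ...
    # On conflict PostgreSQL aborts with serialization_failure (40001)
    # and the whole transaction must be retried by the caller.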
On 22 Dec 01:04, Albert Cervera i Areny wrote:
> Not waiting for other transactions is likely to end up with the application crashing for the user, so I don't think that's a good solution either.
Yes, and that's exactly what we want: retry until we get a serializable
transaction (over the stock_move table), and if we cannot then it must fail.
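A sketch of the retry loop this implies, assuming psycopg2 and a hypothetical run_assign() wrapping one whole transaction:

    # Sketch: retry the whole transaction on serialization failure,
    # give up after a few attempts.
    from psycopg2.extensions import TransactionRollbackError

    MAX_RETRIES = 5

    def run_assign():
        # Hypothetical: start a transaction, call assign_try(), commit.
        ...

    for attempt in range(MAX_RETRIES):
        try:
            run_assign()
            break
        except TransactionRollbackError:
            # Serialization failure: the transaction was rolled back,
            # so restart it from scratch.
            continue
    else:
        raise RuntimeError("could not assign: too much concurrency")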
> There's been a patch using advisory locks for some time (review999002), which would also reduce contention, although it is only available for PostgreSQL. Other databases could use a standard lock table.
This doesn't help because it just returns true/false and it doesn't
restart the whole transaction.
> Another option, which would be quite simple, would be to add the possibility to define the isolation level with RPC(serializable=True) and pass that to the database backends. The main problem with this is that assign_try() can only work correctly if the transaction is started as serializable, so anyone calling assign_try() elsewhere (because they're automating it from a write() call or something like that) would have some problems.