Created on 2016-01-08.12:14:15 by pokoli, last changed 56 months ago by roundup-bot.
New changeset e279797465b1 by Cédric Krier in branch 'default': Use a list to store feed data in JSONUnmarshaller http://hg.tryton.org/tryton/rev/e279797465b1
As I posted on https://bugs.python.org/msg260970, I don't think the chunk size is the real issue here. But indeed it is the JSONUnmarshaller that creates a new string for each feed call. Here is review19951003, which improves the parser by using a list for feed data. Also, I don't think Python will change its library, because the default XML parser is better than our current JSON parser. A sketch of the pattern is below.
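For context, here is a hedged sketch of the accumulation pattern the review describes; the class and attribute names are illustrative, not the actual Tryton client code. Collecting feed() chunks in a list and joining them once avoids building a new string on every call.

    import json

    class ListFeedUnmarshaller(object):
        """Illustrative sketch: accumulate feed() chunks in a list.

        Doing `self._data += chunk` on a str would copy the whole buffer
        on every call, which is quadratic for large responses.
        """

        def __init__(self):
            self._data = []

        def feed(self, data):
            # O(1) amortized append instead of a full string copy per chunk.
            self._data.append(data)

        def close(self):
            # Join once at the end, then decode the complete JSON payload.
            return json.loads(''.join(self._data))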
On 09/01/16 at 11:28, Cédric Krier wrote: > Cédric Krier <email@example.com> added the comment: > > I think we should wait until a solution is chosen for Python. Totally agree.
I think we should wait until a solution is chosen for Python.
Opened an issue on the Python bug tracker. You can follow http://bugs.python.org/issue26049 if interested.
Indeed I did not see it was a client patch, not a server patch. So now it makes sense to try to get it into trunk.
On 2016-01-08 13:07, Sergi Almacellas Abellana wrote: > > I just tested with the WSGI implementation (the proposed review and the client one) and the observed time is nearly the same. The same as what?
I just tested with the WSGI implementation (the proposed review and the client one) and the observed time is nearly the same.
I don't think we should waste our time improving the current implementation based on xmlrpclib; instead we should focus on getting the WSGI implementation from review20541003 in. That way we will no longer be responsible for HTTP management, because it will be delegated to an external solution.
By default, the Python xmlrpclib parser reads data in chunks of 1024 bytes, which leads to a lot of string concatenation when reading large data, and that is very slow in Python. The attached patch overrides the parse_response function to read all the data directly, so performance is improved. We have done the following test:
1. Create a new database with the ir module, and create an ir.attachment with a file of 20MB.
2. Open the attachment list from Administration -> Models -> Attachments.
We observed the following results:
- with patch: 0.245282 sec
- without patch: 1 min 48.933491 sec
So this is a huge difference in user experience. This not only affects attachments, but also improves the time of opening big reports.
https://hg.python.org/cpython/file/2.7/Lib/xmlrpclib.py#l1479
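A minimal sketch of the idea, assuming Python 2.7's xmlrpclib; this is not the exact attached patch, and the class name and usage are illustrative assumptions. The point is to override Transport.parse_response so the whole response body is read in one call instead of 1024-byte chunks.

    import xmlrpclib  # Python 2.7 standard library

    class WholeBodyTransport(xmlrpclib.Transport):
        """Hypothetical sketch: read the full response body at once."""

        def parse_response(self, response):
            # The stock implementation loops over response.read(1024) and
            # feeds each chunk to the parser; read() with no size argument
            # returns everything in a single call, avoiding repeated
            # concatenation on large payloads.
            data = response.read()
            parser, unmarshaller = self.getparser()
            parser.feed(data)
            parser.close()
            return unmarshaller.close()

    # Usage sketch (the URL is an assumption):
    # proxy = xmlrpclib.ServerProxy('http://localhost:8000/',
    #                               transport=WholeBodyTransport())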
History:
2016-03-05 17:24:25 | roundup-bot | set | status: testing -> resolved; nosy: + roundup-bot; messages: + msg24585
2016-02-28 11:15:07 | reviewbot   | set | reviews: 24711002 -> 24711002, 19951003
2016-02-28 11:10:28 | ced         | set | status: deferred -> testing; assignedto: pokoli -> ced; messages: + msg24404
2016-01-22 11:38:27 | resteve     | set | nosy: + resteve
2016-01-12 11:46:33 | vbastos     | set | nosy: + vbastos
2016-01-09 13:58:43 | pokoli      | set | messages: + msg23623
2016-01-09 11:28:29 | ced         | set | status: testing -> deferred
2016-01-09 11:28:18 | ced         | set | messages: + msg23620
2016-01-08 14:39:34 | pokoli      | set | messages: + msg23615
2016-01-08 13:34:48 | ced         | set | messages: + msg23611