- 03 Sep, 2015 2 commits
  Ivan Tyagov authored
  Ivan Tyagov authored
- 01 Sep, 2015 2 commits
  Ivan Tyagov authored
  Ivan Tyagov authored
- 24 Aug, 2015 1 commit
  Ivan Tyagov authored
- 07 Aug, 2015 1 commit
  Ivan Tyagov authored
- 04 Aug, 2015 1 commit
  Ivan Tyagov authored
- 03 Aug, 2015 2 commits
  Ivan Tyagov authored
  Ivan Tyagov authored
- 30 Jul, 2015 1 commit
  Ivan Tyagov authored
- 29 Jul, 2015 1 commit
  Ivan Tyagov authored
- 16 Jul, 2015 1 commit
  Ivan Tyagov authored
- 15 Jul, 2015 3 commits
  Kirill Smelkov authored
    4dfcdc5f (DataArray: get array slice by HTTP Range Request) added HTTP Range support for DataArray by copying code from ERP5's BigFile implementation. It would be better not to copy the code but to use the base class and hook into it properly (I think it is possible even without a mixin). Add a proper FIXME/TODO so we do not forget to fix it one day. /reviewed-by @Tyagov /cc @klaus
  Ivan Tyagov authored
    DataArray: get array slice by HTTP Range Request. See merge request !5.
  Klaus Wölfel authored
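The Range handling mentioned in the commit above boils down to parsing the HTTP `Range` header and slicing the underlying data. A minimal, self-contained sketch of single-range parsing in plain Python; `parse_range` is a hypothetical name for illustration, not the actual BigFile/DataArray code:

```python
import re

def parse_range(range_header, total_size):
    """Parse a single-range 'Range: bytes=start-end' header.

    Returns (start, stop) suitable for slicing, or None if the
    header is absent or malformed (serve the whole body then).
    """
    if not range_header:
        return None
    m = re.match(r'bytes=(\d*)-(\d*)$', range_header.strip())
    if m is None:
        return None
    start_s, end_s = m.groups()
    if start_s:
        start = int(start_s)
        # 'bytes=start-' means from start to the end of the body
        stop = int(end_s) + 1 if end_s else total_size
    elif end_s:
        # 'bytes=-N' means the last N bytes
        start = max(0, total_size - int(end_s))
        stop = total_size
    else:
        return None
    return start, min(stop, total_size)

data = b'0123456789'
print(data[slice(*parse_range('bytes=2-5', len(data)))])  # b'2345'
```

A real implementation would also handle multi-range requests and emit 206/416 status codes, which is exactly the kind of logic worth inheriting from the base class rather than copying.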
- 13 Jul, 2015 1 commit
  Klaus Wölfel authored
- 03 Jul, 2015 7 commits
  Ivan Tyagov authored
    Allow to pass additional keyword arguments to transformation script. See merge request !4.
  Ivan Tyagov authored
  Ivan Tyagov authored
    Merge https://lab.nexedi.cn/klaus/wendelin
    Conflicts: bt5/erp5_wendelin/SkinTemplateItem/portal_skins/erp5_wendelin/DataStream_transform.xml
  Ivan Tyagov authored
  Ivan Tyagov authored
  Ivan Tyagov authored
  Ivan Tyagov authored
    A simple example of a map-reduce job that stores intermediate results to an Active Process and calculates the average on slices of a ZBigArray.
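The map-reduce described in that last commit can be illustrated outside ERP5: the map step computes a per-slice partial result (the intermediate results the Active Process would hold), and the reduce step combines the partials into the global average. A plain NumPy sketch under that assumption; `map_average` and `reduce_average` are hypothetical names, not the Wendelin API:

```python
import numpy as np

def map_average(big_array, n_slices):
    """Map step: compute (sum, count) per slice, as would be stored
    on an Active Process as intermediate results."""
    partials = []
    for chunk in np.array_split(big_array, n_slices):
        partials.append((chunk.sum(), len(chunk)))
    return partials

def reduce_average(partials):
    """Reduce step: combine the intermediate (sum, count) pairs
    into the global average."""
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / float(count)

a = np.arange(100)
print(reduce_average(map_average(a, 7)))  # 49.5
```

Keeping (sum, count) rather than per-slice averages is what makes the reduction exact even when the slices have unequal lengths.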
- 02 Jul, 2015 1 commit
  Ivan Tyagov authored
- 01 Jul, 2015 5 commits
  Ivan Tyagov authored
    Allow these portal types to be used as Predicates to define grouping of "data documents" within the system.
  Ivan Tyagov authored
  Ivan Tyagov authored
  Ivan Tyagov authored
  Ivan Tyagov authored
    Proxify title in Data Array View (Proxification). See merge request !2.
- 30 Jun, 2015 5 commits
  Klaus Wölfel authored
  Ivan Tyagov authored
  Ivan Tyagov authored
    Display source/destination instead of *_section in the Data Supply Line listbox, to complement the change from source/destination_section to source/destination in Data Supply Line. See merge request !1.
  Klaus Wölfel authored
  Ivan Tyagov authored
- 29 Jun, 2015 4 commits
  Kirill Smelkov authored
    Instead of verifying only min/max/len of the result, we can verify that the result is exactly the array we expect, especially since it is easier and takes fewer lines with just an appropriate arange() and array_equal(). /cc @Tyagov
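The point of this commit: min/max/len can all agree while the contents differ, whereas a single array_equal() pins down the exact array. A small NumPy illustration (variable names are mine, not from the test):

```python
import numpy as np

expected = np.arange(100000)

# Two arrays that the weak min/max/len check cannot tell apart:
swapped = expected.copy()
swapped[0], swapped[1] = swapped[1], swapped[0]  # same elements, wrong order
for a in (expected, swapped):
    assert a.min() == 0 and a.max() == 99999 and len(a) == 100000

# The strong check catches the difference in one line:
assert np.array_equal(expected, np.arange(100000))
assert not np.array_equal(swapped, np.arange(100000))
```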
  Kirill Smelkov authored
    As explained in the previous commit, the real_data tail was ending without \n:
        99988,99989\n99990,99991,99992,99993,99994,99995,99996,99997,99998,99999\n100000
    and because DataStream_copyCSVToDataArray() processes data in full lines only, the tail was lost. Fix it by making sure the last line is always terminated properly with \n. /cc @Tyagov
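The lost-tail behaviour is easy to reproduce: a consumer that only processes complete, \n-terminated lines silently drops whatever follows the last newline. A plain-Python sketch of that behaviour; `process_full_lines` is a hypothetical stand-in for DataStream_copyCSVToDataArray(), not the actual code:

```python
def process_full_lines(data):
    """Consume only complete, newline-terminated lines; anything
    after the last newline is left unprocessed and simply lost here."""
    complete, _, tail = data.rpartition('\n')
    numbers = []
    for line in complete.split('\n'):
        if line:
            numbers.extend(int(x) for x in line.split(','))
    return numbers

data = '0,1,2\n3,4,5\n6'
print(process_full_lines(data))         # [0, 1, 2, 3, 4, 5] -- tail '6' is lost
print(process_full_lines(data + '\n'))  # [0, 1, 2, 3, 4, 5, 6] -- the fix
```

This is why the fix terminates the generated test data with a final \n instead of changing the processing code.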
  Kirill Smelkov authored
    Consider this:
        In [1]: l = range(100000)
        In [2]: min(l)
        Out[2]: 0
        In [3]: max(l)
        Out[3]: 99999
        In [4]: len(l)
        Out[4]: 100000
    So if we assert that zarray min=0 and max=99999, the length should be max+1, which is 100000. NOTE the length is not 100001, as one would guess from the test number sequence created at the beginning of the test:
        def chunks(l, n):
            """Yield successive n-sized chunks from l."""
            for i in xrange(0, len(l), n):
                yield l[i:i+n]
        ...
        number_string_list = []
        for my_list in list(chunks(range(0, 100001), 10)):
            number_string_list.append(','.join([str(x) for x in my_list]))
        real_data = '\n'.join(number_string_list)
    because the processing code "eats" numbers only up to the last \n, and for 100001 numbers the last \n is located before 100000:
        99988,99989\n99990,99991,99992,99993,99994,99995,99996,99997,99998,99999\n100000
    I will fix the input data generation in the following patch. /cc @Tyagov
  Kirill Smelkov authored
    When we conditionally create a new BigArray for appending data, we should create it as empty, because in DataStream_copyCSVToDataArray() creation is done lazily, only when the destination array is not yet initialized, and we append data to the array in the following code block anyway. Creating the BigArray with the initial shape of the part being appended results in the destination array being longer than necessary by the length of the first appended chunk, with the extra head introduced this way reading as all zeros. Fix it. /cc @Tyagov
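The off-by-one-chunk bug and its fix can be reproduced with ordinary NumPy arrays; `append_chunk` and `append_chunk_buggy` below are hypothetical stand-ins for the lazy-creation-plus-append code path, not the actual BigArray implementation:

```python
import numpy as np

def append_chunk(dst, chunk):
    """Grow dst and copy chunk at the end (stand-in for BigArray append)."""
    if dst is None:
        # Correct: create the destination *empty*; the chunk is then
        # appended by the common code path below.
        dst = np.empty(0, dtype=chunk.dtype)
    old = len(dst)
    dst = np.resize(dst, old + len(chunk))
    dst[old:] = chunk
    return dst

def append_chunk_buggy(dst, chunk):
    if dst is None:
        # Bug: creating with the first chunk's shape and then appending
        # that chunk again leaves an all-zeros head of len(chunk).
        dst = np.zeros(len(chunk), dtype=chunk.dtype)
    old = len(dst)
    dst = np.resize(dst, old + len(chunk))
    dst[old:] = chunk
    return dst

chunk = np.arange(1, 4)
print(append_chunk(None, chunk))        # [1 2 3]
print(append_chunk_buggy(None, chunk))  # [0 0 0 1 2 3]
```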
- 26 Jun, 2015 1 commit
  Ivan Tyagov authored
    source_section -> source
    destination_section -> destination
- 25 Jun, 2015 1 commit
  Ivan Tyagov authored