- 13 Jul, 2020 (1 commit)
  - Ivan Tyagov authored
- 08 Jul, 2020 (7 commits)
  - Ivan Tyagov authored (see merge request nexedi/wendelin!54)
  - Eteri authored
  - Eteri authored
  - Ivan Tyagov authored (see merge request nexedi/wendelin!52)
  - Ivan Tyagov authored (see merge request nexedi/wendelin!51)
  - Ivan Tyagov authored
  - Ivan Tyagov authored
- 07 Jul, 2020 (4 commits)
  - Eteri authored
  - Eteri authored
  - Eteri authored
  - Ivan Tyagov authored (see merge request nexedi/wendelin!50)
- 03 Jul, 2020 (4 commits)
  - Roque authored
  - Roque authored
  - Ivan Tyagov authored
  - Ivan Tyagov authored
- 02 Jul, 2020 (11 commits)
  - Ivan Tyagov authored: Since ERP5's 4773fac27bb33f1beec4ed44e5ba05df009a2654, the BigFile component is no longer on the file system, so adjust imports accordingly (and add its bt5); a hedged import sketch follows this list.
  - Ivan Tyagov authored (see merge request nexedi/wendelin!47)
  - Eteri authored
  - Eteri authored
  - Eteri authored
  - Ivan Tyagov authored (see merge request nexedi/wendelin!46)
  - Ivan Tyagov authored (see merge request nexedi/wendelin!48)
  - Ivan Tyagov authored (see merge request nexedi/wendelin!49)
  - Eteri authored
  - Eteri authored
  - Eteri authored: erp5_wendelin_data: add a new "Big Data" category for business applications and a "Stream Ingestion" category to "use".
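The BigFile import adjustment above boils down to importing the class from its ZODB component location instead of the old file-system Products path. A minimal sketch, assuming the component path erp5.component.document.BigFile and the old Products.ERP5.Document.BigFile location; both paths are assumptions, not confirmed by this log:

```python
# Hedged sketch of the import adjustment; both module paths are assumptions.
try:
    # New location: BigFile is provided as a ZODB component (installed by its bt5).
    from erp5.component.document.BigFile import BigFile
except ImportError:
    # Old location: the former file-system Product module, kept only as a
    # fallback for checkouts that predate the componentisation.
    from Products.ERP5.Document.BigFile import BigFile
```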
- 29 Jun, 2020 (1 commit)
  - Roque authored
- 26 Jun, 2020 (5 commits)
- 25 Jun, 2020 (1 commit)
  - Ivan Tyagov authored: Be explicit and set the desired portal_type, as otherwise a Data Stream Bucket is created by default and the tests fail because the expected API is missing (see the sketch after this entry).
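The fix above is the usual ERP5 pattern of passing the type explicitly to newContent() instead of relying on the module default. A minimal test-fixture sketch, assuming a data_stream_module and the "Data Stream" portal type; the module name, reference, and appendData() payload are illustrative assumptions about the failing test:

```python
# Hedged sketch; module name, reference and the appended payload are assumptions.
data_stream = self.portal.data_stream_module.newContent(
    portal_type="Data Stream",            # explicit type instead of the module default
    reference="wendelin.test.reference",  # hypothetical reference
)
# Without the explicit portal_type, a "Data Stream Bucket" would be created by
# default, and calls such as appendData() would fail for the missing API.
data_stream.appendData(b"some raw bytes")
```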
- 24 Jun, 2020 (1 commit)
  - Ivan Tyagov authored (see merge request nexedi/wendelin!45)
- 22 Jun, 2020 (5 commits)
  - Roque authored
  - Roque authored: erp5_wendelin_data_lake_ingestion: split-ingestion validation is done right after the last chunk (EOF) is ingested, instead of waiting for the alarm; better handling of data stream hash calculation and publication.
  - Roque authored
  - Roque authored
  - Roque authored: erp5_wendelin_data_lake_ingestion:
    - Split files are no longer appended/processed and no data streams are removed anymore; all ingestions and data streams corresponding to split parts are kept.
    - The client receives the list of all data streams and is in charge of merging the parts during the download (a merge sketch follows this list).
    - Validate chunk data streams only once the full file has been ingested.
    - Only process split ingestions once the full file has been ingested.
    - Calculate the full split-file size.
    - Calculate the hash and add state control during data stream validation.
    - Stop invalidated ingestions.
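Since every split part now stays around as its own Data Stream and the client merges the parts during download, the client side reduces to concatenating the parts in order and checking the published hash. A minimal sketch, assuming the parts were already fetched to local files listed in order and that the published digest is MD5; the transport and the actual hash algorithm used by the data lake are not specified in this log:

```python
import hashlib

def merge_split_parts(part_paths, output_path, expected_hash=None):
    """Concatenate already-downloaded split parts (in order) into one file.

    part_paths: ordered list of local files, one per split-part Data Stream.
    expected_hash: hex digest published for the full file (MD5 assumed here).
    """
    digest = hashlib.md5()
    with open(output_path, "wb") as output:
        for path in part_paths:
            with open(path, "rb") as part:
                # Stream in 1 MiB chunks to keep memory usage flat.
                for chunk in iter(lambda: part.read(1 << 20), b""):
                    output.write(chunk)
                    digest.update(chunk)
    if expected_hash is not None and digest.hexdigest() != expected_hash:
        raise ValueError("merged file does not match the published hash")

# Example (hypothetical file names):
# merge_split_parts(["data.part.0", "data.part.1"], "data.bin", "deadbeef...")
```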