Build #2714

Build #2714 was successful. Changes by Simon Basle.

Build result summary

Duration: 9 minutes
Revision: c0a7bb8f4fce039dea2e40e7b9f6c06e9762aa28
First to pass since: #2713 (changes by Simon Basle)


Code commits

Author: Simon Basle
Commit: c0a7bb8f4fce039dea2e40e7b9f6c06e9762aa28
Message: fix #1992 Reimplement BoundedElasticScheduler to allow reentrancy
The general idea is to abandon the facade Worker and instead always
submit tasks to an executor-backed worker. In order of preference, when
an operator requests a Worker:

 - if thread cap not reached, create and pick a new worker
 - else if idle workers, pick an idle worker
 - else pick a busy worker

This implies behavior under contention that is closer to parallel(),
but with a pool that is expected to be considerably larger than the
typical parallel pool.
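The preference order above can be sketched in plain Java. This is a simplified model under assumed names (WorkerPicker, BoundedState, markCount, pick), not Reactor's actual internals; note that "prefer an idle worker" and "fall back to a busy worker" collapse into one least-loaded scan, since an idle executor has zero attached Workers:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the selection order described above.
// Names (WorkerPicker, BoundedState, markCount, pick) are illustrative,
// not Reactor's actual internals.
class WorkerPicker {

    static final class BoundedState {
        int markCount; // number of Workers currently tied to this executor
    }

    final int threadCap;
    final List<BoundedState> executors = new ArrayList<>();

    WorkerPicker(int threadCap) {
        this.threadCap = threadCap;
    }

    BoundedState pick() {
        // 1. thread cap not reached: create and pick a new executor-backed worker
        if (executors.size() < threadCap) {
            BoundedState fresh = new BoundedState();
            executors.add(fresh);
            fresh.markCount++;
            return fresh;
        }
        // 2./3. otherwise pick the least-loaded executor: an idle one
        // (markCount == 0) naturally wins over any busy one
        BoundedState best = executors.get(0);
        for (BoundedState s : executors) {
            if (s.markCount < best.markCount) {
                best = s;
            }
        }
        best.markCount++;
        return best;
    }
}
```

Under this model, the first threadCap picks each create a fresh executor; every pick after that reuses whichever executor currently serves the fewest Workers.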

The drawback is that once we get to pick a busy worker, there's no
telling when its tasks (typically blocking tasks for a
BoundedElasticScheduler) will finish. So even though another executor
might become idle in the meantime, the operator's tasks will be pinned
to the (potentially still busy) executor initially picked.

To counter that effect somewhat, we use a priority queue for the busy
executors, favoring executors that are tied to fewer Workers (and thus
fewer operators). We don't yet go as far as factoring in the task
queue of each executor.
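That ordering can be expressed with a plain java.util.PriorityQueue keyed on the number of attached Workers. The class and field names here (BusyQueueDemo, BusyState, workerCount) are hypothetical, chosen only to illustrate the comparator:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of the busy-executor ordering: a priority queue keyed on how many
// Workers are tied to each executor, so the least-shared executor is polled
// first. Names are illustrative, not Reactor internals.
class BusyQueueDemo {

    static final class BusyState {
        final String name;
        int workerCount; // Workers currently tied to this executor
        BusyState(String name, int workerCount) {
            this.name = name;
            this.workerCount = workerCount;
        }
    }

    static BusyState leastShared(PriorityQueue<BusyState> busy) {
        return busy.peek(); // executor tied to the fewest Workers
    }

    public static void main(String[] args) {
        PriorityQueue<BusyState> busy =
            new PriorityQueue<>(Comparator.comparingInt((BusyState s) -> s.workerCount));
        busy.add(new BusyState("exec-1", 3));
        busy.add(new BusyState("exec-2", 1));
        busy.add(new BusyState("exec-3", 2));
        // a new Worker would be pinned to the least-shared executor
        System.out.println(leastShared(busy).name); // prints exec-2
    }
}
```

One caveat of a mutable sort key: if workerCount changes while a state sits in the queue, it must be removed and re-inserted for the ordering to stay correct, since PriorityQueue does not re-heapify on its own.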

Finally, one noticeable change is that the second int parameter in
the API, maxPendingTask, now bounds EACH executor's queue instead of
being a shared counter. This should be safe in the sense that a number
chosen with the previous version in mind is bound to be
over-dimensioned for the new version, but users are advised to
reconsider that number.
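A plain-JDK illustration (not Reactor's code) of what the per-executor bound means for total capacity: if threadCap single-threaded executors each own a bounded queue of maxPendingTask slots, the scheduler as a whole can hold up to threadCap * maxPendingTask pending tasks, rather than maxPendingTask shared across all executors:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Plain-JDK sketch of the new per-executor queue semantics; the numbers
// and the one-executor-per-worker layout are illustrative assumptions.
class PerExecutorQueueDemo {
    public static void main(String[] args) {
        int threadCap = 4;
        int maxPendingTask = 100;

        // one single-threaded executor per worker, each with its OWN bound
        ThreadPoolExecutor[] executors = new ThreadPoolExecutor[threadCap];
        for (int i = 0; i < threadCap; i++) {
            executors[i] = new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<>(maxPendingTask));
        }

        // aggregate pending capacity is now per-executor bound times pool size
        int totalCapacity = threadCap * maxPendingTask;
        System.out.println(totalCapacity); // prints 400

        for (ThreadPoolExecutor e : executors) {
            e.shutdown();
        }
    }
}
```

This is why a value tuned for the old shared counter is over-dimensioned under the new semantics: the same number now applies per executor.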

Reviewed-in: #2040


Fixed tests (1)

Status: Successful
Test: ContextTest defaultPutAllWorksWithParallelStream
Failing since: build #2713 (changes by Simon Basle)
Job: Core
Duration: < 1 sec