Fix own mistake in rBd617de965ea20e5d5 from late December 2015.
Brain melt here: the intention was to reduce the number of tasks in case we do not have many chunks of data to loop over, not to increase it! Note that this only affected dynamic scheduling.
@@ -914,7 +914,7 @@ static void task_parallel_range_ex(
 		state.chunk_size = max_ii(1, (stop - start) / (num_tasks));
 	}
 
-	num_tasks = max_ii(1, (stop - start) / state.chunk_size);
+	num_tasks = min_ii(num_tasks, (stop - start) / state.chunk_size);
 
 	for (i = 0; i < num_tasks; i++) {
 		BLI_task_pool_push(task_pool,
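For illustration only (the numbers below are assumptions, not taken from the commit): with the old max_ii(1, ...) form, a range that splits into more chunks than there are worker threads would raise num_tasks above the thread count instead of leaving it alone, while the intent was only to lower it when there are fewer chunks than threads. A minimal standalone sketch of the arithmetic, using local stand-ins for BLI's max_ii()/min_ii():

#include <stdio.h>

/* Local stand-ins for the max_ii()/min_ii() helpers. */
static int max_ii(int a, int b) { return a > b ? a : b; }
static int min_ii(int a, int b) { return a < b ? a : b; }

int main(void)
{
	/* Illustrative values only: 8 worker threads, a range of 1000
	 * items, and a caller-supplied chunk size of 1. */
	const int num_threads = 8;
	const int start = 0, stop = 1000;
	const int chunk_size = 1;

	int num_tasks = num_threads;

	/* Old (buggy) formula: yields 1000 tasks, far more than threads. */
	printf("old: %d\n", max_ii(1, (stop - start) / chunk_size));

	/* Fixed formula: capped at the thread count (8 here), and only
	 * reduced below it when there are fewer chunks than threads. */
	printf("new: %d\n", min_ii(num_tasks, (stop - start) / chunk_size));

	return 0;
}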