We're increasingly seeing builds which consume a lot of memory, or malformed makefiles with no limit on threads (i.e., no_threads = no_cpus), which often leads to exhausted memory and killed builds. Does it make sense to use used memory in the updateWeight calculation, so that new tasks are not picked up when memory on the host is running low?
Btw, wouldn't the median time be a better estimator than the average? Something like:
```sql
SELECT EXTRACT(EPOCH FROM percentile_cont(0.5)
         WITHIN GROUP (ORDER BY completion_time - start_time))
  FROM build
  JOIN events ON build.create_event = events.id
 WHERE build.pkg_id = %(packageID)i
   AND build.state = %(st_complete)i
   AND build.task_id IS NOT NULL;
```
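To see why the median is the more robust estimator here, a tiny standalone Python example with made-up durations; a single runaway build skews the mean badly while the median stays near the typical build time:

```python
from statistics import mean, median

# Hypothetical completion times (minutes) of past builds of one package.
durations = [12, 14, 13, 15, 11, 240]  # one pathological outlier

print(mean(durations))    # 50.83 -- dominated by the outlier
print(median(durations))  # 13.5  -- close to a typical build
```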
More explanation: updateWeight is run only once, at the beginning of a task. So updating the weight according to the current (whole-builder) memory usage is a) an abuse of what weight means, and b) not very relevant. It would just try to catch the case where resources are possibly close to exhaustion.
Another option would be for kojid to monitor memory independently of tasks and throttle its own task_load when memory runs low. So task_load could be computed as sum(task_weights) * used_memory_coefficient. The coefficient could be something like 1.0 at 100% free memory, infinity at 0%, and 2.0 at 20% (ad hoc numbers; the curve could be exponential, linear, whatever). The coefficient could further (or instead) be modified by the host's CPU/memory ratio. A sketch of this idea follows below.
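A minimal sketch of that throttling idea in Python, purely illustrative: the function names are hypothetical, and the power-law curve is just one ad hoc interpolation through the anchor points above (1.0 at 100% free, 2.0 at 20% free, infinity at 0%):

```python
import math

# Exponent chosen so that free_fraction ** -ALPHA passes through the
# ad hoc anchor points: 1.0 at free=1.0 and 2.0 at free=0.2.
ALPHA = math.log(2) / math.log(5)

def used_memory_coefficient(free_fraction):
    """Hypothetical penalty factor; grows without bound as memory fills up."""
    if free_fraction <= 0.0:
        return math.inf
    return free_fraction ** -ALPHA

def effective_task_load(task_weights, free_fraction):
    # task_load as proposed above: the sum of running task weights,
    # inflated when free memory is scarce so the builder stops
    # accepting new tasks sooner than its capacity would suggest.
    return sum(task_weights) * used_memory_coefficient(free_fraction)
```

For example, two tasks of weight 1.5 each on a host with 25% free memory would give effective_task_load([1.5, 1.5], 0.25) ≈ 5.45 instead of 3.0, pushing the builder toward its capacity limit earlier.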
Metadata Update from @tkopecek:
- Custom field Size adjusted to None
- Issue tagged with: scheduler