[autotest] Split host acquisition and job scheduling II.
This CL creates a stand-alone service capable of acquiring hosts for
new jobs. The host scheduler will be responsible for assigning a host to
a job and scheduling its first special tasks (to reset and provision the host).
Thereafter, the special tasks will either change the state of the host or
schedule more tasks against it (e.g. repair), until the host is ready to
run the job associated with the Host Queue Entry to which it was
assigned. The job scheduler (monitor_db) will only run jobs and the
special tasks created by the host scheduler.
Note that the host scheduler won't go live until we flip the
inline_host_acquisition flag in the shadow config and restart both
services. The host scheduler is dead, long live the host scheduler.
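The split described above can be sketched roughly as follows. This is an
illustrative toy, not the autotest code: the Host/Job classes, the
acquire_hosts function, and the task names are all hypothetical stand-ins
for the real models and the leasing done by the host scheduler, with the
inline_host_acquisition flag gating whether the stand-alone path runs at all.

```python
class Host(object):
    """Hypothetical stand-in for a DUT record with a leased bit."""
    def __init__(self, name):
        self.name = name
        self.leased = False

class Job(object):
    """Hypothetical stand-in for a queued job / Host Queue Entry."""
    def __init__(self, job_id):
        self.job_id = job_id
        self.host = None
        self.special_tasks = []

def acquire_hosts(jobs, hosts, inline_host_acquisition=False):
    """Assign a free host to each unassigned job and queue its first
    special tasks. If inline_host_acquisition is set, do nothing here,
    mirroring the shadow-config gate (monitor_db keeps the old inline
    behavior and this service stays dormant)."""
    if inline_host_acquisition:
        return []
    assigned = []
    free_hosts = (h for h in hosts if not h.leased)
    for job in jobs:
        if job.host is not None:
            continue
        host = next(free_hosts, None)
        if host is None:
            break  # no free hosts left; try again next tick
        host.leased = True  # lease it so no other scheduler grabs it
        job.host = host
        # First special tasks on a freshly acquired host; the job
        # scheduler is what actually runs these.
        job.special_tasks = ['reset', 'provision']
        assigned.append(job)
    return assigned
```

The key design point is that leasing is the only coordination between the two
services: once the host scheduler leases a host and attaches it to a job, the
job scheduler can run that job (and its special tasks) without racing another
acquisition.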
TEST=Ran the schedulers, created suites. Unittests.
BUG=chromium:344613, chromium:366141, chromium:343945, chromium:343937
CQ-DEPEND=CL:199383
DEPLOY=scheduler, host-scheduler
Change-Id: I59a1e0f0d59f369e00750abec627b772e0419e06
Reviewed-on: https://chromium-review.googlesource.com/200029
Reviewed-by: Prashanth B <beeps@chromium.org>
Tested-by: Prashanth B <beeps@chromium.org>
Commit-Queue: Prashanth B <beeps@chromium.org>
diff --git a/scheduler/query_managers.py b/scheduler/query_managers.py
index d6325c7..5893d83 100644
--- a/scheduler/query_managers.py
+++ b/scheduler/query_managers.py
@@ -81,11 +81,14 @@
where=query, order_by=sort_order))
- def get_prioritized_special_tasks(self):
+ def get_prioritized_special_tasks(self, only_tasks_with_leased_hosts=False):
"""
Returns all queued SpecialTasks prioritized for repair first, then
cleanup, then verify.
+ @param only_tasks_with_leased_hosts: If true, this method only returns
+ tasks with leased hosts.
+
@return: list of afe.models.SpecialTasks sorted according to priority.
"""
queued_tasks = models.SpecialTask.objects.filter(is_active=False,
@@ -101,6 +104,8 @@
where=['(afe_host_queue_entries.id IS NULL OR '
'afe_host_queue_entries.id = '
'afe_special_tasks.queue_entry_id)'])
+ if only_tasks_with_leased_hosts:
+ queued_tasks = queued_tasks.filter(host__leased=True)
# reorder tasks by priority
task_priority_order = [models.SpecialTask.Task.REPAIR,
@@ -128,8 +133,8 @@
active=1, complete=0, host_id__isnull=False).values_list(
'host_id', flat=True))
special_task_hosts = list(models.SpecialTask.objects.filter(
- is_active=1, is_complete=0, host_id__isnull=False,
- queue_entry_id__isnull=True).values_list('host_id', flat=True))
+ is_active=1, is_complete=0, host_id__isnull=False,
+ queue_entry_id__isnull=True).values_list('host_id', flat=True))
host_counts = collections.Counter(
hqe_hosts + special_task_hosts).most_common()
multiple_hosts = [count[0] for count in host_counts if count[1] > 1]