Commit 753eddce authored by Morten Rasmussen, committed by Ionela Voinescu

sched/pelt: [HACK] Make PELT trace points unconditional

This is a temporary hack.

Currently, the PELT trace points are only triggered when the PELT metrics
consumed by the scheduler, i.e. util_avg, are actually updated. This
means no trace events are emitted if the update does not cross a 1 ms
period boundary. When reconstructing the PELT signal from this data, the
reconstructed peak can therefore be off by up to 1 ms worth of PELT
accumulation (23 in absolute terms). This discrepancy is large enough to
cause test cases to fail.
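The size of that worst-case error can be estimated from the standard PELT parameters. The sketch below is illustrative only, assuming the usual kernel values (a 1024 us period and LOAD_AVG_MAX = 47742 as the geometric-series limit); the kernel itself uses fixed-point lookup tables rather than this floating-point arithmetic:

```python
# Upper bound on how far a reconstructed util_avg can drift when trace
# events only fire on 1 ms (1024 us) PELT period boundaries.
# Assumed standard PELT constants (illustrative, not taken from this patch):
LOAD_AVG_MAX = 47742          # limit of the PELT geometric series
PERIOD_US = 1024              # one PELT period, ~1 ms
SCHED_CAPACITY_SCALE = 1024   # full-scale utilization

def max_period_step():
    """Largest change util_avg can accumulate within one untraced period."""
    # A task running for a full period adds PERIOD_US to the running sum;
    # scaled to capacity, this is the biggest step between two boundaries.
    return SCHED_CAPACITY_SCALE * PERIOD_US / LOAD_AVG_MAX

print(round(max_period_step()))  # ~22, the same ballpark as the 23 quoted above
```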

This patch ensures that trace events are always emitted, even if the
metrics haven't been updated, which should allow accurate reconstruction
of the PELT signals.
parent 55a0bbb3
@@ -303,6 +303,7 @@ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se)
 		trace_pelt_se_tp(se);
 		return 1;
 	}
+	trace_pelt_se_tp(se);
 	return 0;
 }
@@ -317,6 +318,7 @@ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se
 		trace_pelt_se_tp(se);
 		return 1;
 	}
+	trace_pelt_se_tp(se);
 	return 0;
 }
@@ -332,6 +334,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
 		trace_pelt_cfs_tp(cfs_rq);
 		return 1;
 	}
+	trace_pelt_cfs_tp(cfs_rq);
 	return 0;
 }
@@ -358,6 +361,7 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
 		trace_pelt_rt_tp(rq);
 		return 1;
 	}
+	trace_pelt_rt_tp(rq);
 	return 0;
 }
@@ -384,6 +388,7 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
 		trace_pelt_dl_tp(rq);
 		return 1;
 	}
+	trace_pelt_dl_tp(rq);
 	return 0;
 }