Results processor for LISA
Created by: raw-bin
[Disclaimer: the enhancements here may not be applicable to LISA alone, but I'll mention them here all the same to gather viewpoints. I've also not been through all the issues in the issue list, so there may be some redundancy.]
There's a set of common 'views' of data that can be very useful for examining a given use-case or system, forming a first-order viewpoint for diagnosis.
For example, in the past cr2 produced a fixed set of plots that were very valuable for forming an initial opinion of what's going on. It would be good to have a similar setup here: one 'report' generator that first confirms that all the necessary tokens are available in the input trace and, if so, produces all the views.
It's admittedly easy to get carried away with this sort of thing - one type of view can sometimes be inferred from another, so you don't always want to plot every type of view. However, in my experience, if the generation cost is minimal, having multiple views of interest - some redundancy notwithstanding - is a very useful correlation tool for diagnosing problems.
Some of the views that I can think of are:
- OPP residency break-down for a given trace input.
  - Per-CPU OPP residency break-down in a suitable plot.
  - Great to know where the compute's going. For example, audio playback should show low OPPs on the LITTLE side only.
  - @JaviMerino has an implementation internally which might be reusable.
- Idle state residency break-down.
  - Per-CPU/cluster idle residency break-down in a suitable plot.
  - Similar to the OPP residency view above; can help correlate problems.
- Parallelisation scoring for a given trace input.
  - Aggregate scoring - i.e. what percentage of time were N CPUs active (N in {0, 1, ..., NCPUs}) - in a suitable plot.
  - Most obvious use is for benchmarks where you want to verify, for example, a one-thread-per-CPU kind of dynamic.
  - WA has a result processor that does this and the above. IMHO those kinds of processors are better implemented in LISA.
- IRQ histogram by CPU.
  - Great for confirming who's causing frequent wake-ups on the wrong CPU type, for example.
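To make the residency views above concrete, here is a minimal sketch of the interval accounting involved, assuming the trace has been reduced to a time-sorted list of `(timestamp, cpu, frequency)` tuples (e.g. parsed from `cpu_frequency` ftrace events). The function name and input format are purely illustrative, not an existing LISA API; the same approach works for idle state residency by substituting idle states for frequencies.

```python
# Illustrative sketch, not LISA code: per-CPU OPP residency from
# time-sorted (timestamp, cpu, frequency) samples.
from collections import defaultdict

def opp_residency(samples, end_time):
    """Return {cpu: {freq_khz: seconds}} given samples sorted by time."""
    residency = defaultdict(lambda: defaultdict(float))
    last = {}  # cpu -> (timestamp, freq) of the most recent sample
    for ts, cpu, freq in samples:
        if cpu in last:
            prev_ts, prev_freq = last[cpu]
            residency[cpu][prev_freq] += ts - prev_ts
        last[cpu] = (ts, freq)
    # Account for the tail of the trace after each CPU's final sample.
    for cpu, (ts, freq) in last.items():
        residency[cpu][freq] += end_time - ts
    return {cpu: dict(freqs) for cpu, freqs in residency.items()}

samples = [
    (0.0, 0, 500000),
    (1.0, 0, 1000000),
    (1.5, 1, 500000),
    (3.0, 0, 500000),
]
print(opp_residency(samples, end_time=4.0))
```

The output feeds naturally into a stacked bar plot per CPU, which is the kind of view described above.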
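The aggregate parallelisation score could be computed with a simple sweep over per-CPU busy intervals. Everything below (function name, input format) is a hedged sketch rather than LISA code; in practice the busy intervals would be derived from `sched_switch`-style events.

```python
# Illustrative sketch, not LISA code: fraction of trace time during
# which exactly N CPUs were simultaneously busy.
from collections import defaultdict

def parallelisation_score(busy_intervals, trace_len):
    """busy_intervals: (start, end) spans, one per busy stretch of any
    CPU. Returns {n_cpus_active: fraction_of_trace_time}."""
    events = []
    for start, end in busy_intervals:
        events.append((start, +1))  # a CPU becomes busy
        events.append((end, -1))    # a CPU goes idle
    events.sort()
    score = defaultdict(float)
    active, prev_ts = 0, 0.0
    for ts, delta in events:
        score[active] += ts - prev_ts
        active += delta
        prev_ts = ts
    score[active] += trace_len - prev_ts
    return {n: t / trace_len for n, t in score.items()}

# Two CPUs overlapping between t=1 and t=2 in a 4-second trace:
intervals = [(0.0, 2.0), (1.0, 3.0)]
print(parallelisation_score(intervals, trace_len=4.0))
# -> {0: 0.25, 1: 0.5, 2: 0.25}
```

A histogram of this dict is exactly the "what percentage of time were N CPUs active" plot suggested above.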
Some more items that came up in recent discussions and are worth including in a 'top level diagnostic notebook':
- Per-CPU task context switch count.
  - Could help in contrast analyses for a given use-case between EAS and other strategies (including default mainline behaviour).
- OPP transition rate indication.
  - The idea being that you can see whether scheduler-driven DVFS is being too trigger-happy compared to conventional governors.
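A minimal sketch of such a transition-rate metric, again assuming time-sorted `(timestamp, cpu, frequency)` samples as input - the function name and format are illustrative, not part of LISA. The same per-CPU counting pattern would cover the context switch count view above, with `sched_switch` events in place of frequency samples.

```python
# Illustrative sketch, not LISA code: frequency transitions per second
# per CPU - a high figure suggests a trigger-happy DVFS governor.
from collections import defaultdict

def opp_transition_rate(samples, trace_len):
    """samples: time-sorted (timestamp, cpu, freq) tuples.
    Returns {cpu: transitions_per_second}."""
    transitions = defaultdict(int)
    last_freq = {}
    for ts, cpu, freq in samples:
        if cpu in last_freq and last_freq[cpu] != freq:
            transitions[cpu] += 1
        last_freq[cpu] = freq
    return {cpu: n / trace_len for cpu, n in transitions.items()}

samples = [
    (0.0, 0, 500000),
    (0.5, 0, 1000000),
    (1.0, 0, 1000000),  # repeated sample, not a transition
    (1.5, 0, 500000),
]
print(opp_transition_rate(samples, trace_len=2.0))
# -> {0: 1.0}
```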
Please add to the list as ideas come up.