    cppc_cpufreq: replace per-cpu data array with a list · 05540e8e
    Ionela Voinescu authored
    
    
    The cppc_cpudata per-cpu storage was inefficient (1), in addition to
    causing functional issues (2) when CPUs are hotplugged out, due to
    per-cpu data being improperly initialised.
    
    (1) The amount of information needed for CPPC performance control in its
        cpufreq driver depends on the domain (PSD) coordination type:
    
        ANY:    One set of CPPC control and capability data (e.g. desired
                performance, highest/lowest performance, etc) applies to all
                CPUs in the domain.
    
        ALL:    Same as ANY. Note that this type is not currently
                supported. When it is, information about which CPUs
                belong to a domain will be needed in order for frequency
                change requests to be sent to each of them.
    
        HW:     It's necessary to store CPPC control and capability
                information for all the CPUs. HW will then coordinate the
                performance state based on their limitations and requests.
    
        NONE:   Same as HW. No HW coordination is expected.
    
        Despite this, the previous initialisation code would indiscriminately
        allocate memory for all CPUs (all_cpu_data) and unnecessarily
        duplicate performance capabilities and the domain sharing mask and type
        for each possible CPU.
    
    (2) With the per-cpu structure, when using ANY coordination, the
        cppc_cpudata cpu field is not initialised (it remains 0) for all
        CPUs in a policy other than policy->cpu. When policy->cpu is
        hotplugged out, the driver incorrectly uses the uninitialised (0)
        cpu value of the other CPUs when making frequency changes.
        Additionally, the value previously stored in
        perf_ctrls.desired_perf is lost when policy->cpu changes.
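
    For illustration, the pre-patch pattern behind both (1) and (2)
    looked roughly like this (a condensed sketch, not the exact driver
    code):

      /* one pointer slot and one structure for every possible CPU */
      static struct cppc_cpudata **all_cpu_data;

      for_each_possible_cpu(i)
              all_cpu_data[i] = kzalloc(sizeof(struct cppc_cpudata),
                                        GFP_KERNEL);

      /*
       * Only policy->cpu's entry had its ->cpu field set at init. If
       * policy->cpu is hotplugged out and another CPU takes its place,
       * that CPU's entry still has ->cpu == 0, so the request below
       * targets the wrong CPU:
       */
      cpu_data = all_cpu_data[policy->cpu];
      cpu_data->perf_ctrls.desired_perf = desired_perf;
      ret = cppc_set_perf(cpu_data->cpu, &cpu_data->perf_ctrls);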
    
    Therefore, replace the array of per-CPU cpu_data with a list. The
    memory for each structure is allocated at policy init, where a single
    structure can be allocated per policy rather than per CPU. In order
    to accommodate the struct list_head node in the cppc_cpudata
    structure, the now unused cpu and cur_policy members are removed.
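
    The reshaped structure then looks roughly as follows (a sketch based
    on the description above; field order and comments are illustrative):

      /* one instance per policy, linked on a driver-local list */
      struct cppc_cpudata {
              struct list_head node;          /* replaces the array slot */
              struct cppc_perf_caps perf_caps;
              struct cppc_perf_ctrls perf_ctrls;
              struct cppc_perf_fb_ctrs perf_fb_ctrs;
              unsigned int shared_type;       /* PSD coordination type */
              cpumask_var_t shared_cpu_map;   /* CPUs in the domain */
              /* the old cpu and cur_policy members are gone */
      };

      static LIST_HEAD(cppc_cpu_data_list);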
    
    For example, on an arm64 Juno platform with 6 CPUs: (0, 1, 2, 3) in PSD1,
    (4, 5) in PSD2 - ANY coordination
    
    Memory allocation comparison shows:
    
    Before patch:
    
     - ANY coordination:
   total    slack      req alloc/free  caller
       0        0        0     0/1     _kernel_size_le_hi32+0xffff800008ff7810
       0        0        0     0/6     _kernel_size_le_hi32+0xffff800008ff7808
     128       80       48     1/0     _kernel_size_le_hi32+0xffff800008ffc070
     768        0      768     6/0     _kernel_size_le_hi32+0xffff800008ffc0e4
    
    After patch:
    
     - ANY coordination:
   total    slack      req alloc/free  caller
     256        0      256     2/0     _kernel_size_le_hi32+0xffff800008fed410
       0        0        0     0/2     _kernel_size_le_hi32+0xffff800008fed274
    
    Additional changes:
     - A pointer to the policy's cppc_cpudata is stored in policy->driver_data
     - All data allocation is done from the driver's init function
     - Driver registration is skipped if _CPC entries are not present
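
    Put together, the init path now looks roughly like this (a condensed
    sketch of the approach, with error handling trimmed; helper names
    follow the driver but the exact code differs):

      static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
      {
              struct cppc_cpudata *cpu_data;

              /* one allocation per policy, not per possible CPU */
              cpu_data = kzalloc(sizeof(*cpu_data), GFP_KERNEL);
              if (!cpu_data)
                      return -ENOMEM;

              if (!zalloc_cpumask_var(&cpu_data->shared_cpu_map, GFP_KERNEL)) {
                      kfree(cpu_data);
                      return -ENOMEM;
              }

              /* fill shared_type/shared_cpu_map and the perf capabilities */
              if (acpi_get_psd_map(policy->cpu, cpu_data) ||
                  cppc_get_perf_caps(policy->cpu, &cpu_data->perf_caps)) {
                      free_cpumask_var(cpu_data->shared_cpu_map);
                      kfree(cpu_data);
                      return -ENODEV;
              }

              /*
               * ANY coordination: all CPUs in the domain share this
               * policy and this single cppc_cpudata instance.
               */
              if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
                      cpumask_copy(policy->cpus, cpu_data->shared_cpu_map);

              list_add(&cpu_data->node, &cppc_cpu_data_list);
              policy->driver_data = cpu_data;

              /* ... remaining policy setup unchanged ... */
              return 0;
      }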
    
    Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>