1. 27 Sep, 2019 1 commit
    • mmc: tegra: Implement ->set_dma_mask() · b960bc44
      Nicolin Chen authored
      The SDHCI controller on Tegra186 supports 40-bit addressing, which is
      usually enough to address all of system memory. However, if the SDHCI
      controller is behind an IOMMU, the address space can extend beyond that.
      This happens on Tegra186 and later, where the ARM SMMU has an input
      address space of 48 bits. If the DMA API is backed by this ARM SMMU,
      the top-down IOVA allocator will return I/O virtual addresses that the
      SDHCI controller cannot access.
      
      Unfortunately, prior to the introduction of the ->set_dma_mask() host
      operation, the SDHCI core would set either a 64-bit DMA mask if the
      controller claimed to support 64-bit addressing, or a 32-bit DMA mask
      otherwise.
      
      Since the full 64 bits cannot be addressed on Tegra, this had to be
      worked around in commit 68481a7e ("mmc: tegra: Mark 64 bit dma broken
      on Tegra186") by setting the SDHCI_QUIRK2_BROKEN_64_BIT_DMA quirk,
      which effectively restricts the DMA mask to 32 bits.
      
      One disadvantage of this is that the dma_map_*() APIs will then try to
      use the swiotlb to bounce DMA to addresses beyond the controller's DMA
      mask. This in turn degrades performance and can lead to situations
      where the swiotlb buffer is exhausted, which in turn causes DMA
      transfers to fail.
      
      With the recent introduction of the ->set_dma_mask() host operation,
      this can now be properly fixed. For each generation of Tegra, the exact
      supported DMA mask can be configured. This kills two birds with one
      stone: it avoids the use of bounce buffers because system memory never
      exceeds the addressable memory range of the SDHCI controllers on these
      devices, and at the same time when an IOMMU is involved, it prevents
      I/O virtual addresses from being allocated beyond the addressable range of the
      controllers.
      
      Since the DMA mask is now properly handled, the 64-bit DMA quirk can be
      removed.
      Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
      [treding@nvidia.com: provide more background in commit message]
      Tested-by: Nicolin Chen <nicoleotsuka@gmail.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      Cc: stable@vger.kernel.org # v4.15 +
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
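      For context, the ->set_dma_mask() host operation lets a glue driver pick
      the DMA mask itself instead of the SDHCI core's default 32-/64-bit
      choice. A minimal sketch of such an implementation is below; the per-SoC
      dma_mask field and structure names are assumptions for illustration, not
      a quote of the driver:

          /* Sketch: let each Tegra generation declare the DMA mask it can
           * actually address, e.g. DMA_BIT_MASK(40) on Tegra186. */
          static int tegra_sdhci_set_dma_mask(struct sdhci_host *host)
          {
                  struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
                  struct sdhci_tegra *tegra_host = sdhci_pltfm_priv(pltfm_host);
                  const struct sdhci_tegra_soc_data *soc = tegra_host->soc_data;
                  struct device *dev = mmc_dev(host->mmc);

                  if (soc->dma_mask)
                          return dma_set_mask_and_coherent(dev, soc->dma_mask);

                  return 0;
          }

      The hook would then be wired up as .set_dma_mask in the host's
      sdhci_ops, alongside the removal of SDHCI_QUIRK2_BROKEN_64_BIT_DMA.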
  2. 21 Aug, 2019 1 commit
  3. 10 Jun, 2019 1 commit
  4. 05 Jun, 2019 1 commit
  5. 28 May, 2019 1 commit
  6. 15 Apr, 2019 5 commits
    • mmc: tegra: add sdhci tegra suspend and resume · 71c733c4
      Sowjanya Komatineni authored
      
      
      This patch adds suspend and resume PM ops for tegra SDHCI.
      Acked-by: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
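      A minimal sketch of what such system-sleep PM ops can look like for an
      SDHCI platform host (function names and the clock handling here are
      illustrative assumptions, not the exact driver code):

          #ifdef CONFIG_PM_SLEEP
          static int sdhci_tegra_suspend(struct device *dev)
          {
                  struct sdhci_host *host = dev_get_drvdata(dev);
                  struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
                  int ret;

                  ret = sdhci_suspend_host(host);
                  if (ret)
                          return ret;

                  /* Gate the module clock while the system sleeps. */
                  clk_disable_unprepare(pltfm_host->clk);
                  return 0;
          }

          static int sdhci_tegra_resume(struct device *dev)
          {
                  struct sdhci_host *host = dev_get_drvdata(dev);
                  struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
                  int ret;

                  ret = clk_prepare_enable(pltfm_host->clk);
                  if (ret)
                          return ret;

                  return sdhci_resume_host(host);
          }
          #endif

          static SIMPLE_DEV_PM_OPS(sdhci_tegra_dev_pm_ops,
                                   sdhci_tegra_suspend, sdhci_tegra_resume);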
    • mmc: tegra: fix CQE enable and resume sequence · b7754428
      Sowjanya Komatineni authored
      
      
      The Tegra CQHCI/SDHCI design prevents write access to the SDHCI block
      size register while CQE is enabled and unhalted.
      
      The CQHCI driver enables CQE prior to invoking sdhci_cqe_enable(),
      which violates this Tegra-specific host requirement.
      
      This patch fixes this by configuring the SDHCI block registers prior
      to unhalting CQE.
      
      This patch also adds a retry of the unhalt, due to a known
      Tegra-specific CQE resume bug where the first unhalt might not succeed
      when a clear-all-tasks is performed prior to resume, so a second
      unhalt is needed.
      
      This patch also includes a CQE enable fix for CMD CRC errors that
      happen with a specific SanDisk eMMC device when the status command is
      sent during the transfer of the last data block, due to marginal timing.
      Tested-by: Jon Hunter <jonathanh@nvidia.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
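      A hedged sketch of the ordering described above: temporarily clear
      CQHCI_ENABLE so that sdhci_cqe_enable() can program the SDHCI block
      registers, then restore the enable bit before CQE is unhalted (a
      simplification, not the verbatim driver code):

          static void sdhci_tegra_cqe_enable(struct mmc_host *mmc)
          {
                  struct cqhci_host *cq_host = mmc->cqe_private;
                  u32 cqcfg;

                  /* Tegra forbids SDHCI block size writes while CQE is
                   * enabled and unhalted, so disable CQE around them. */
                  cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
                  if (cqcfg & CQHCI_ENABLE)
                          cqhci_writel(cq_host, cqcfg & ~CQHCI_ENABLE, CQHCI_CFG);

                  sdhci_cqe_enable(mmc);  /* programs block size/count */

                  if (cqcfg & CQHCI_ENABLE)
                          cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
          }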
    • mmc: tegra: add Tegra186 WAR for CQE · c6e7ab90
      Sowjanya Komatineni authored
      
      
      The Tegra186 CQHCI host has a known bug where the CQHCI controller
      sets the DATA_PRESENT_SELECT bit to 1 for DCMDs with an R1b response
      type. Since a DCMD does not trigger any data transfer, the DCMD task
      completes, leaving the DATA FSM of the host controller waiting for
      data.
      
      This affects the data transfer tasks issued after such DCMDs,
      resulting in timeouts.
      
      The SW WAR is to set CMD_TIMING to 1 in the DCMD task descriptor.
      This bug and the SW WAR apply only to Tegra186, not to Tegra194.
      
      This patch implements this WAR through NVQUIRK_CQHCI_DCMD_R1B_CMD_TIMING
      for Tegra186 and also implements update_dcmd_desc of the cqhci_host_ops
      interface to set the CMD_TIMING bit depending on the NVQUIRK.
      Tested-by: Jon Hunter <jonathanh@nvidia.com>
      Reviewed-by: Ritesh Harjani <riteshh@codeaurora.org>
      Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
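      A sketch of the WAR as described above; the Tegra host structure field
      names are assumptions for illustration:

          static void sdhci_tegra_update_dcmd_desc(struct mmc_host *mmc,
                                                   struct mmc_request *mrq,
                                                   u64 *desc)
          {
                  struct sdhci_host *host = mmc_priv(mmc);
                  struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
                  struct sdhci_tegra *tegra_host = sdhci_pltfm_priv(pltfm_host);

                  /* Only DCMDs with a busy (R1b) response need CMD_TIMING = 1,
                   * and only on SoCs that carry the Tegra186 quirk. */
                  if ((tegra_host->soc_data->nvquirks &
                       NVQUIRK_CQHCI_DCMD_R1B_CMD_TIMING) &&
                      (mrq->cmd->flags & MMC_RSP_BUSY))
                          *desc |= CQHCI_CMD_TIMING(1);
          }

      The callback would be hooked up via the update_dcmd_desc member of
      cqhci_host_ops so that cqhci applies it when building DCMD task
      descriptors.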
    • mmc: tegra: update hw tuning process · ea8fc595
      Sowjanya Komatineni authored
      
      
      This patch includes the following HW tuning related fixes:
          - configure tuning parameters as per the Tegra TRM
          - WAR fix for manual tap change
          - HW auto-tuning post-processing
      
      As per the Tegra TRM, SDR50 mode tuning takes up to a maximum of 256
      tuning iterations, and SDR104/HS200/HS400 mode tuning takes up to a
      maximum of 128 tuning iterations.
      
      This patch programs the tuning control register with the maximum number
      of tuning iterations needed based on the timing, along with the start
      tap, multiplier, and step size used by HW tuning.
      
      Tegra210 has a known issue of a glitch on the trimmer output when the
      tap value is changed while the trimmer input clock is running. The WAR
      is to disable the card clock before sending the tuning command, wait
      for 1 us after sending it, issue a SW reset, and then enable the card
      clock again.
      
      This WAR applies when changing the tap value manually as well. The
      Tegra SDHCI driver implements this correctly for manual tap changes
      but is missing the SW reset before enabling the card clock when
      sending the tuning command.
      
      Issuing a SW reset during the tuning command is part of the WAR and
      applies in cases where tuning is performed with a single step size for
      more iterations. This patch includes this fix.
      
      HW auto-tuning finds the largest passing window and sets the tap at
      the middle of that window. With some devices, such as SanDisk eMMC,
      driving fast edges, and due to the high tap-to-tap delay in the Tegra
      chipset, auto-tuning does not detect the failing taps between valid
      windows. This results in a partial or a merged window, and the best
      tap is set at the signal transition, which is actually the worst tap
      location.
      
      The recommended SW solution is to detect whether the best passing
      window picked by HW tuning is a partial or a merged window, based on
      the min and max tap delays found from chip characterization across
      PVT, and to perform a tuning correction to pick the best tap.
      
      This patch implements this post-HW-tuning process for the Tegra hosts
      that support HW tuning, through the callback function
      tegra_sdhci_execute_hw_tuning, and uses the tuned tap delay.
      Tested-by: Jon Hunter <jonathanh@nvidia.com>
      Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
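      A sketch of the tap-change WAR sequence described above (the helper
      name is hypothetical and the sequence is simplified; the real driver
      folds this into its tuning and tap-setting paths):

          static void tegra_sdhci_tap_change_war(struct sdhci_host *host)
          {
                  u16 clk;

                  /* Gate the card clock before touching the tap value. */
                  clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
                  clk &= ~SDHCI_CLOCK_CARD_EN;
                  sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);

                  /* ... change the tap value / send the tuning command ... */

                  udelay(1);      /* let the trimmer output settle */
                  sdhci_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);

                  /* Re-enable the card clock. */
                  clk |= SDHCI_CLOCK_CARD_EN;
                  sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
          }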
    • mmc: tegra: fix ddr signaling for non-ddr modes · 92cd1667
      Sowjanya Komatineni authored
      
      
      ddr_signaling is set to true for the DDR50 and DDR52 modes but is not
      set back to false for other modes. This programs an incorrect host
      clock when the mode changes from DDR52/DDR50 to another SDR or HS
      mode, as happens with mmc_retune, where the host switches from HS400
      to HS DDR, then from HS DDR to HS, and then to HS200.
      
      This patch fixes ddr_signaling so that it is set properly for non-DDR
      modes.
      Tested-by: Jon Hunter <jonathanh@nvidia.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
      Cc: stable@vger.kernel.org # v4.20 +
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
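      The essence of the fix is to recompute ddr_signaling from the timing on
      every mode change rather than only setting it for DDR modes; a sketch
      (structure names assumed for illustration):

          static void tegra_sdhci_set_uhs_signaling(struct sdhci_host *host,
                                                    unsigned int timing)
          {
                  struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
                  struct sdhci_tegra *tegra_host = sdhci_pltfm_priv(pltfm_host);

                  /* Cleared again when leaving DDR50/DDR52, e.g. on retune. */
                  tegra_host->ddr_signaling = timing == MMC_TIMING_UHS_DDR50 ||
                                              timing == MMC_TIMING_MMC_DDR52;

                  sdhci_set_uhs_signaling(host, timing);
          }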
  7. 25 Feb, 2019 3 commits
  8. 17 Dec, 2018 1 commit
  9. 08 Oct, 2018 22 commits
  10. 30 Jul, 2018 1 commit
  11. 16 Jul, 2018 3 commits
    • mmc: tegra: Add and use tegra_sdhci_get_max_clock() · 44350993
      Aapo Vienamo authored
      
      
      Implement and use tegra_sdhci_get_max_clock(), which returns the true
      maximum host clock rate. The issue with the callback used until now is
      that it returns the current clock rate of the host instead of the
      maximum one, which can lead to unnecessarily low clock rates.
      
      This differs from the previous implementation of
      tegra_sdhci_get_max_clock() in that it doesn't divide the result by two.
      Signed-off-by: Aapo Vienamo <avienamo@nvidia.com>
      Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
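      The idea, sketched: ask the clock framework for the highest rate the
      module clock can be rounded to, rather than its currently programmed
      rate (a sketch of the approach, not necessarily the exact code):

          static unsigned int tegra_sdhci_get_max_clock(struct sdhci_host *host)
          {
                  struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);

                  /* Round "as high as possible" to get the true maximum. */
                  return clk_round_rate(pltfm_host->clk, UINT_MAX);
          }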
    • mmc: tegra: prevent ACMD23 on Tegra 3 · 726df1d5
      Stefan Agner authored
      
      
      It seems that SD3.0 advertisement needs to be set for higher eMMC
      speed modes (namely DDR52) as well. The TRM states that the SD3.0
      advertisement bit should be set for all controller instances, even
      for those not supporting UHS-I mode...
      
      When specifying vqmmc-supply as a fixed 1.8V regulator on a Tegra
      SD/MMC instance connected to an eMMC device, the stack enables SD3.0.
      However, enabling it has consequences: if SDHCI 3.0 support is
      advertised, the stack enables Auto-CMD23. Unfortunately, Auto-CMD23
      does not seem to work well with Tegra 3 currently, leading to regular
      warnings:
        mmc2: Got command interrupt 0x00010000 even though no command operation was in progress.
      
      It is not entirely clear why those errors happen. It seems that a
      Linux 3.1 based downstream kernel with Auto-CMD23 support does not
      show these warnings.
      
      Use the SDHCI_QUIRK2_ACMD23_BROKEN quirk to prevent Auto-CMD23 from
      being used for now. With this, the eMMC works stably in high-speed
      mode while still announcing SD3.0.
      
      This allows using mmc-ddr-1_8v to enable DDR52 mode. In DDR52 mode,
      the read speed improves from about 42 MiB/s to 72 MiB/s on an
      Apalis T30.
      Signed-off-by: Stefan Agner <stefan@agner.ch>
      Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
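      The change itself boils down to adding the quirk to the Tegra30
      platform data; a sketch with the other fields elided:

          static const struct sdhci_pltfm_data sdhci_tegra30_pdata = {
                  /* Auto-CMD23 misbehaves on Tegra 3, so keep it off. */
                  .quirks2 = SDHCI_QUIRK2_ACMD23_BROKEN,
                  /* ... remaining quirks, flags and ops ... */
          };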
    • mmc: tegra: fix eMMC DDR52 mode · e300149e
      Stefan Agner authored
      
      
      Make sure the clock is doubled when using eMMC DDR52 mode.
      Signed-off-by: Stefan Agner <stefan@agner.ch>
      Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
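      In DDR modes the Tegra host needs its module clock at twice the card
      clock; a sketch of a set_clock hook doing that when ddr_signaling is
      set (simplified, field names assumed):

          static void tegra_sdhci_set_clock(struct sdhci_host *host,
                                            unsigned int clock)
          {
                  struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
                  struct sdhci_tegra *tegra_host = sdhci_pltfm_priv(pltfm_host);
                  unsigned long host_clk;

                  if (!clock) {
                          sdhci_set_clock(host, clock);
                          return;
                  }

                  /* DDR modes need the module clock at 2x the card clock. */
                  host_clk = tegra_host->ddr_signaling ? clock * 2 : clock;
                  clk_set_rate(pltfm_host->clk, host_clk);

                  sdhci_set_clock(host, clock);
          }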