Verified Commit bf75703d authored by Lukas Wunner, committed by Mark Brown

dmaengine: bcm2835: Avoid accessing memory when copying zeroes

The BCM2835 DMA controller is capable of synthesizing zeroes instead of
copying them from a source address. The feature is enabled by setting
the SRC_IGNORE bit in the Transfer Information field of a Control Block:

"Do not perform source reads.
 In addition, destination writes will zero all the write strobes.
 This is used for fast cache fill operations."

The feature is only available on 8 of the 16 channels. The others are
so-called "lite" channels with a limited feature set and performance.

Enable the feature if a cyclic transaction copies from the zero page.
This reduces traffic on the memory bus.

A forthcoming use case is the BCM2835 SPI driver, which will cyclically
copy from the zero page to the TX FIFO. The idea to use SRC_IGNORE was
taken from an ancient GitHub conversation between Martin and Noralf:

Tested-by: Nuno Sá <>
Tested-by: Noralf Trønnes <>
Signed-off-by: Lukas Wunner <>
Acked-by: Vinod Koul <>
Acked-by: Stefan Wahren <>
Acked-by: Martin Sperl <>
Cc: Florian Kauer <>
Signed-off-by: Mark Brown <>
parent 571e31fa
@@ -42,11 +42,14 @@
  * @ddev: DMA device
  * @base: base address of register map
  * @dma_parms: DMA parameters (to convey 1 GByte max segment size to clients)
+ * @zero_page: bus address of zero page (to detect transactions copying from
+ *	zero page and avoid accessing memory if so)
  */
 struct bcm2835_dmadev {
 	struct dma_device ddev;
 	void __iomem *base;
 	struct device_dma_parameters dma_parms;
+	dma_addr_t zero_page;
 };
 
 struct bcm2835_dma_cb {
@@ -693,6 +696,7 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic(
 	size_t period_len, enum dma_transfer_direction direction,
 	unsigned long flags)
 {
+	struct bcm2835_dmadev *od = to_bcm2835_dma_dev(chan->device);
 	struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
 	struct bcm2835_desc *d;
 	dma_addr_t src, dst;
@@ -743,6 +747,10 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic(
 		dst = c->cfg.dst_addr;
 		src = buf_addr;
 		info |= BCM2835_DMA_D_DREQ | BCM2835_DMA_S_INC;
+
+		/* non-lite channels can write zeroes w/o accessing memory */
+		if (buf_addr == od->zero_page && !c->is_lite_channel)
+			info |= BCM2835_DMA_S_IGNORE;
 	}
 
 	/* calculate number of frames */
@@ -845,6 +853,9 @@ static void bcm2835_dma_free(struct bcm2835_dmadev *od)
 	}
+
+	dma_unmap_page_attrs(od->ddev.dev, od->zero_page, PAGE_SIZE,
+			     DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
 }
 
 static const struct of_device_id bcm2835_dma_of_match[] = {
@@ -927,6 +938,14 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, od);
 
+	od->zero_page = dma_map_page_attrs(od->ddev.dev, ZERO_PAGE(0), 0,
+					   PAGE_SIZE, DMA_TO_DEVICE,
+					   DMA_ATTR_SKIP_CPU_SYNC);
+	if (dma_mapping_error(od->ddev.dev, od->zero_page)) {
+		dev_err(&pdev->dev, "Failed to map zero page\n");
+		return -ENOMEM;
+	}
+
 	/* Request DMA channel mask from device tree */
 	if (of_property_read_u32(pdev->dev.of_node,