32-bit PCI device DMA problem on T1042 platform, 64-bit Linux


antoinedurand
Contributor II


Hi,

on a T1042 platform,
with Linux kernel 4.1.8 from the NXP QorIQ SDK 2.0,
I'm writing a driver for a 32-bit PCI device.

I need to set up a DMA_FROM_DEVICE mapping of one page (one page = 4 KB) with dma_map_single() ... and it fails.

The buffer I want to pass to dma_map_single() is kmalloc'ed with GFP_KERNEL | GFP_DMA, and its physical address is always above 0xF0000000. The PCI device's DMA mask is set with dma_set_mask(dev, DMA_BIT_MASK(32)).
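
To make the failing sequence concrete, here is a minimal sketch of what my driver does (variable names are placeholders, assuming pdev is my struct pci_dev, not my actual code):

    /* Sketch of the failing sequence; names are placeholders. */
    void *buf;
    dma_addr_t handle;

    if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)))
            return -EIO;            /* device can only address 32 bits */

    buf = kmalloc(PAGE_SIZE, GFP_KERNEL | GFP_DMA);
    if (!buf)
            return -ENOMEM;
    /* virt_to_phys(buf) lands above 0xF0000000 here */

    handle = dma_map_single(&pdev->dev, buf, PAGE_SIZE, DMA_FROM_DEVICE);
    if (dma_mapping_error(&pdev->dev, handle)) {
            kfree(buf);
            return -EIO;            /* this is the failure I hit */
    }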

dma_map_single() is defined as dma_map_single_attrs() [include/asm-generic/dma-mapping-common.h], which calls the corresponding swiotlb dma_ops, swiotlb_map_page() [lib/swiotlb.c].
swiotlb_map_page() first tests whether the given buffer's physical address is dma_capable() [include/asm-generic/dma-mapping.h] directly.
As it is not, it tries to bounce the buffer through map_single() [lib/swiotlb.c], tests the new physical address it returns, and that one is still not dma_capable().

dma_capable() checks whether the physical address is below dev->archdata.max_direct_dma_addr (see the sketch below).
dev->archdata.max_direct_dma_addr is set in pci_dma_dev_setup_swiotlb() [arch/powerpc/kernel/dma-swiotlb.c] from the PCI controller's dma_window_base_cur and dma_window_size.
Those values were set to 0x0 and 0xc0000000 in setup_pci_atmu() [arch/powerpc/sysdev/fsl_pci.c] based on the device tree ranges.
What I understood of the inbound/outbound window setup is:
    the inbound window set up in setup_pci_atmu() runs from 0x0 up to the low address of the PCI memory outbound window,
    that is, it covers all the remaining address space between zero and the beginning of the outbound windows.
Is that correct and normal?
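
For reference, the check that fails looks roughly like this on powerpc in this kernel version (paraphrased from memory, not copied verbatim from the source):

    /* Rough paraphrase of the powerpc dma_capable() in 4.1-era kernels. */
    static inline bool dma_capable(struct device *dev, dma_addr_t addr,
                                   size_t size)
    {
            struct dev_archdata *sd = &dev->archdata;

            /* swiotlb limit: anything ending above max_direct_dma_addr
             * (0xc0000000 in my case) is rejected */
            if (sd->max_direct_dma_addr && addr + size > sd->max_direct_dma_addr)
                    return false;

            return addr + size <= *dev->dma_mask;
    }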

I don't know whether dev->archdata.max_direct_dma_addr (= 0xc0000000) is wrong,
or whether map_single() [lib/swiotlb.c] is wrong and should return a physical address below 0xc0000000.

Does someone see what's wrong?

My DRAM occupies physical addresses 0x00000000 to 0x200000000 (8 GB),
and here is my device tree for the PCI nodes:

Thank you for any help,

------------------------------------

    pci0: pcie@ffe240000 {
        reg = <0xf 0xfe240000 0 0x10000>;
             /* 32-bit non-prefetchable memory region: PCI addresses [0xC0000000..0xFFFFFFFF] (1 GB) mapped onto CPU physical addresses [0xC00000000..0xC3FFFFFFF] */
        ranges = <0x02000000 0 0xc0000000 0xc 0x00000000 0x0 0x40000000
             /* I/O region: PCI address 0x0, 64 KB, mapped onto CPU physical address 0xFF8000000 */
              0x01000000 0 0x00000000 0xf 0xf8000000 0x0 0x00010000>;
        pcie@0 {
            ranges = <0x02000000 0 0xc0000000
                  0x02000000 0 0xc0000000
                  0 0x40000000

                  0x01000000 0 0x00000000
                  0x01000000 0 0x00000000
                  0 0x00010000>;
        };
    };

    pci1: pcie@ffe250000 {
        reg = <0xf 0xfe250000 0 0x10000>;
             /* 32-bit non-prefetchable memory region: PCI addresses [0xC0000000..0xDFFFFFFF] (512 MB) mapped onto CPU physical addresses [0xC40000000..0xC5FFFFFFF] */
        ranges = <0x02000000 0x0 0xc0000000 0xc 0x40000000 0x0 0x20000000
              0x01000000 0x0 0x00000000 0xf 0xf8010000 0x0 0x00010000>;
        pcie@0 {
            ranges = <0x02000000 0 0xc0000000
                  0x02000000 0 0xc0000000
                  0 0x10000000

                  0x01000000 0 0x00000000
                  0x01000000 0 0x00000000
                  0 0x00010000>;
        };
    };


antoinedurand
Contributor II

OK, I solved my problem.

My platform-specific initialisation file, arch/powerpc/platforms/85xx/yyyyy.c, was missing the line:

limit_zone_pfn(ZONE_DMA32, 1UL << (31 - PAGE_SHIFT));

as in corenet_generic.c.

I now get buffers under 0x80000000 when using GFP_DMA32!
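
For anyone else hitting this: the call belongs in the platform's setup_arch hook, roughly the way corenet_generic.c does it (the board name below is a placeholder):

    /* Sketch modelled on corenet_generic.c; "my_board" is a placeholder. */
    static void __init my_board_setup_arch(void)
    {
            swiotlb_detect_4g();
    #if defined(CONFIG_FSL_PCI) && defined(CONFIG_ZONE_DMA32)
            /* The PCI inbound window does not cover the full low 4 GB,
             * so cap ZONE_DMA32 at 2 GB so that GFP_DMA32 allocations
             * land inside the inbound window. */
            limit_zone_pfn(ZONE_DMA32, 1UL << (31 - PAGE_SHIFT));
    #endif
    }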

No, it does not work!

As soon as I use a bigger initramfs or usdpaa_mem, it fails again because kmalloc() returns a buffer above 0xc0000000.

I had to add a fixed reserved-memory node in my device tree at 0xbf000000 (for instance)

and use it explicitly as the DMA inbound buffer in my driver, passing __va(0xbf000000) to dma_map_single()!
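
A minimal sketch of how I use that carve-out (the address is just my example value):

    /* Sketch: map the fixed carve-out at 0xbf000000 explicitly.
     * The address is an example matching my reserved-memory node. */
    #define MY_DMA_PHYS 0xbf000000UL

    void *buf = __va(MY_DMA_PHYS);          /* phys -> kernel virtual */
    dma_addr_t handle = dma_map_single(&pdev->dev, buf, PAGE_SIZE,
                                       DMA_FROM_DEVICE);
    if (dma_mapping_error(&pdev->dev, handle))
            return -EIO;    /* 0xbf000000 is below the 0xc0000000 window
                               limit, so this now succeeds */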

This will do until I understand why the software IO TLB works with a 64 MB buffer at 0xfbfff000, which is itself not usable for DMA:

"software IO TLB [mem 0xfbfff000-0xfffff000] (64MB) mapped at ..."


yipingwang
NXP TechSupport

Hello Antoine Durand,

Please refer to the example code in drivers/net/ethernet/intel/e1000/e1000_ethtool.c for how to use dma_map_single().

              

    ... ...

    buf = kzalloc(E1000_RXBUFFER_2048 + NET_SKB_PAD + NET_IP_ALIGN,
                  GFP_KERNEL);
    if (!buf) {
            ret_val = 7;
            goto err_nomem;
    }
    rxdr->buffer_info[i].rxbuf.data = buf;

    rxdr->buffer_info[i].dma =
            dma_map_single(&pdev->dev,
                           buf + NET_SKB_PAD + NET_IP_ALIGN,
                           E1000_RXBUFFER_2048, DMA_FROM_DEVICE);
    if (dma_mapping_error(&pdev->dev, rxdr->buffer_info[i].dma)) {
            ret_val = 8;
            goto err_nomem;
    }
You could also refer to the interfaces dma_map_page()/dma_unmap_page(), which deal with page/offset pairs instead of CPU pointers.
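
A minimal sketch of that pattern (a generic illustration, not code from a specific driver):

    /* Sketch: map a freshly allocated page for device-to-memory DMA. */
    struct page *page = alloc_page(GFP_KERNEL);
    dma_addr_t handle;

    if (!page)
            return -ENOMEM;

    handle = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
    if (dma_mapping_error(&pdev->dev, handle)) {
            __free_page(page);
            return -EIO;
    }

    /* ... the device writes into the page ... */

    dma_unmap_page(&pdev->dev, handle, PAGE_SIZE, DMA_FROM_DEVICE);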

For details about dma_map_single() and dma_map_page(), you could refer to the document Documentation/DMA-API-HOWTO.txt in the Linux kernel source code.


Have a great day,
TIC

