nagi reddy chitta

How to increase vmalloc address space on LS1020A system

Discussion created by nagi reddy chitta on Nov 3, 2016

Hi,

I am working on a system with 2GB of physical RAM, of which we give 512MB to the Linux kernel (through the kernel command-line parameter mem=512M) and use the remaining 1.5GB for local buffers.

We use the "request_mem_region" and "ioremap_nocache" kernel functions to reserve and map our local buffers, and "pci_resource_start" and "pci_ioremap_bar" to map the required regions of PCI devices.
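For reference, our mapping path looks roughly like the minimal kernel-module sketch below (the physical base 0x40000000, the 16MB size, and the "local-buf" name are placeholders for illustration, not our real values):

```c
#include <linux/module.h>
#include <linux/ioport.h>
#include <linux/io.h>

/* Placeholder values -- our real buffers use different bases and sizes. */
#define LOCAL_BUF_PHYS 0x40000000UL
#define LOCAL_BUF_SIZE (16UL << 20)

static void __iomem *local_buf;

static int __init mapdemo_init(void)
{
	/* Reserve the physical range so no other driver claims it. */
	if (!request_mem_region(LOCAL_BUF_PHYS, LOCAL_BUF_SIZE, "local-buf"))
		return -EBUSY;

	/* Map it uncached; each such mapping consumes a chunk of the
	 * vmalloc address space equal to the mapped size. */
	local_buf = ioremap_nocache(LOCAL_BUF_PHYS, LOCAL_BUF_SIZE);
	if (!local_buf) {
		release_mem_region(LOCAL_BUF_PHYS, LOCAL_BUF_SIZE);
		return -ENOMEM;
	}
	return 0;
}

static void __exit mapdemo_exit(void)
{
	iounmap(local_buf);
	release_mem_region(LOCAL_BUF_PHYS, LOCAL_BUF_SIZE);
}

module_init(mapdemo_init);
module_exit(mapdemo_exit);
MODULE_LICENSE("GPL");
```

pci_ioremap_bar() likewise takes its mappings from the same vmalloc region, so every mapped PCI BAR adds to our total address-space requirement.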

 

The total address-space requirement (local buffers + PCI device regions) comes to 1256MB.

I see that the system works fine in this configuration, since the vmalloc address space is 1512MB.

---------------------------------------------------------------------------------------

With 512MB given to Linux via command line argument:

Virtual kernel memory layout:

    vector  : 0xffff0000 - 0xffff1000   (   4 kB)

    fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)

    vmalloc : 0xa0800000 - 0xff000000   (1512 MB)

    lowmem  : 0x80000000 - 0xa0000000   ( 512 MB)

    pkmap   : 0x7fe00000 - 0x80000000   (   2 MB)

    modules : 0x7f800000 - 0x7fe00000   (   6 MB)

      .text : 0x80008000 - 0x80434e60   (4276 kB)

      .init : 0x80435000 - 0x8045e0c0   ( 165 kB)

      .data : 0x80460000 - 0x8049e028   ( 249 kB)

       .bss : 0x8049e030 - 0x804ce42c   ( 193 kB)

-----------------------------------------------------------------------------------------

 

Now we have a requirement to increase Linux memory to 1GB. 

I changed the Linux memory to 1GB in the kernel command line, and below is the resulting virtual kernel memory layout:

---------------------------------------------------------------------------------------

With 1GB given to Linux via command line argument:

---------------------------------------------------------------------------------------

    Virtual kernel memory layout:

    vector  : 0xffff0000 - 0xffff1000   (   4 kB)

    fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)

    vmalloc : 0xc0800000 - 0xff000000   (1000 MB)

    lowmem  : 0x80000000 - 0xc0000000   (1024 MB)

    pkmap   : 0x7fe00000 - 0x80000000   (   2 MB)

    modules : 0x7f800000 - 0x7fe00000   (   6 MB)

      .text : 0x80008000 - 0x80434e60   (4276 kB)

      .init : 0x80435000 - 0x8045e0c0   ( 165 kB)

      .data : 0x80460000 - 0x8049e028   ( 249 kB)

       .bss : 0x8049e030 - 0x804ce42c   ( 193 kB)

 

---------------------------------------------------------------------------------------

 

But now I am hitting an issue when I try to allocate and map address space for the same buffers mentioned above.

The vmalloc allocations fail, and I see that our total address-space requirement for local buffers + PCI device buffers goes beyond 1000MB (the vmalloc size is 1000MB as per the above kernel virtual memory layout).

 

I was going through some mailing lists and saw that we can change the user/kernel virtual split by configuring "VMSPLIT" in the kernel configuration, but when I try to change it, the kernel does not come up and is stuck at "Kernel Loading....".

 

The default split I see is CONFIG_VMSPLIT_2G (i.e. 2G/2G, user/kernel).

I am trying to enable CONFIG_VMSPLIT_1G (i.e. 1G/3G, user/kernel), expecting the vmalloc address space to increase.

 

I have the following doubts, which I am hoping someone on this forum can clarify.

 

a) Do the address ranges in the "Virtual kernel memory layout" change if I change the VMSPLIT configuration in the kernel?

b) What could be the reason the kernel does not boot when I change VMSPLIT to either CONFIG_VMSPLIT_3G or CONFIG_VMSPLIT_1G?

 

Can someone please throw some light on this? I am tired of experimenting and googling for the last 2 days.

 

My board details:

SoC: LS1020A

DDR Size: 2GB

Linux version: 3.12.37-rt51+g43cecda

 

Best Regards,

Nagi
