iMX8QM Multi-GPU Configuration & OpenCL Support


sbertrand
Contributor III

I am looking for further information about multi-GPU configuration on the i.MX8QM.

I would like to expand on this previous post.

In the i.MX Graphics User's Guide, the following configurations are described:

8.2 Multi-GPU configurations
Vivante Multi-GPU IP may be configured into one of the following behavior models through software:
• Combined Mode where two (or more) GPU cores in the multi-GPU design behave in concert. Driver presents multi-GPU to SW application as a single logical GPU. The multiple GPUs work in the same virtual address space and share the same MMU page table. The multiple GPUs fetch and execute a shared Command Buffer.
• Independent Mode where each GPU in the multi-GPU design performs independently. The multiple GPUs work in different virtual address spaces but share the same MMU page table. Each GPU core fetches and executes its own Command Buffer. This enables different SW applications to run simultaneously on different GPU cores.
• OpenCL API allows application to handle the multi-GPU Independent Mode directly, as each GPU core in a multi-GPU design represents an independent OpenCL Compute Device.

8.4 OpenCL on multi-GPU device
OpenCL driver works in bridged mode as single logical compute device. In this configuration, multiple GPUs in the device operate as individual OpenCL Compute Devices. The OpenCL application is responsible to assign and dispatch the compute tasks to each GPU (Compute Device). The following OpenCL APIs return the list of compute devices available on a platform, and the device information.

cl_int clGetDeviceIDs (cl_platform_id platform, cl_device_type device_type, cl_uint num_entries, cl_device_id *devices, cl_uint *num_devices)

cl_int clGetDeviceInfo (cl_device_id device, cl_device_info param_name, size_t param_value_size, void *param_value, size_t *param_value_size_ret)

These Combined and Independent modes can be seen in the kernel driver enum gceMULTI_GPU_MODE; however, these values are not referenced anywhere else.

typedef enum _gceMULTI_GPU_MODE
{
    gcvMULTI_GPU_MODE_COMBINED = 0,
    gcvMULTI_GPU_MODE_INDEPENDENT = 1
}
gceMULTI_GPU_MODE;

From the platform, I get the following OpenCL information:

$ clinfo -l
Platform #0: Vivante OpenCL Platform
`-- Device #0: Vivante OpenCL Device GC7000XSVX.6009.0000  

$ clinfo

Number of platforms 1
    Platform Name Vivante OpenCL Platform
Number of devices 1
    Device Name Vivante OpenCL Device GC7000XSVX.6009.0000
    Device Vendor Vivante Corporation
    Device Vendor ID 0x564956
    Device Version OpenCL 3.0
    Device Numeric Version 0xc00000 (3.0.0)
    Driver Version OpenCL 3.0 V6.4.3.p4.398061
[..]
    Max compute units 2

We have 1 platform with 1 device providing 2 compute units, which indicates the system is configured in Combined Mode.

Overall, I am just a bit confused about the difference between a Compute Device and a Compute Unit.
The OpenCL APIs clGetDeviceIDs and clGetDeviceInfo are used by clinfo.

How would we configure the system in Independent Mode?
I would expect to see 2 devices listed with 1 compute unit each, so that the OpenCL application can assign and dispatch compute tasks to each GPU (Compute Device).

 

The goal is to be able to select the compute device from GStreamer plugins.

Element Properties:
    device : OpenCL device
        flags: readable, writable
            Enum "GstOclDevicesEnum" Default: 0, "Vivante Corporation GPU"
                (0): Vivante Corporation GPU - Vivante OpenCL Device GC7000XSVX.6009.0000

 

2 Replies

sbertrand
Contributor III

Testing Multi-GPU with SoftISP from IMX-GPU-SDK

~# clinfo | grep -i -e name -e unit
Platform Name Vivante OpenCL Platform
Platform Name Vivante OpenCL Platform
Device Name Vivante OpenCL Device GC7000XSVX.6009.0000
Max compute units 2

~# VIV_MGPU_AFFINITY=1:0 clinfo | grep -i -e name -e unit
Platform Name Vivante OpenCL Platform
Platform Name Vivante OpenCL Platform
Device Name Vivante OpenCL Device GC7000XSVX.6009.0000
Max compute units 1

~# VIV_MGPU_AFFINITY=1:1 clinfo | grep -i -e name -e unit
Platform Name Vivante OpenCL Platform
Platform Name Vivante OpenCL Platform
Device Name Vivante OpenCL Device GC7000XSVX.6009.0000
Max compute units 1

The starting interrupt counts are:

~# cat /proc/interrupts | grep gal
137: 29879 0 0 0 0 0 GICv3 96 Level galcore:0
138: 20088 0 0 0 0 0 GICv3 97 Level galcore:3d-1

Running SoftISP on the first GPU core:

~# VIV_MGPU_AFFINITY=1:0 /opt/imx-gpu-sdk/OpenCL/SoftISP/OpenCL.SoftISP
Denoise status: false
CycleNum status: 1000
Initializing device(s)...
Get the Device info and select Device...
# of Devices Available = 1
# of Compute Units = 1
# compute units = 1
Getting device id...
Creating Command Queue...
Creating kernels...
Please wait for compiling and building kernels, about one minute...
Kernel execution time on GPU (kernel: badpixel): 11.126999999999999 ms
Kernel execution time on GPU (kernel: sigma): 1.6630559999999948 ms
Kernel execution time on GPU (kernel: awb): 1.803011000000001 ms
Kernel execution time on GPU (kernel: equalize1): 3.069138000000005 ms
Kernel execution time on GPU (kernel: equalize2): 0.27961299999999895 ms
Kernel execution time on GPU (kernel: equalize3): 2.587928000000001 ms
Kernel execution time on GPU (kernel: debayer): 7.999 ms

~# cat /proc/interrupts | grep gal
137: 39930 0 0 0 0 0 GICv3 96 Level galcore:0
138: 20088 0 0 0 0 0 GICv3 97 Level galcore:3d-1

Running SoftISP on the second GPU core:

~# VIV_MGPU_AFFINITY=1:1 /opt/imx-gpu-sdk/OpenCL/SoftISP/OpenCL.SoftISP
Denoise status: false
CycleNum status: 1000
Initializing device(s)...
Get the Device info and select Device...
# of Devices Available = 1
# of Compute Units = 1
# compute units = 1
Getting device id...
Creating Command Queue...
Creating kernels...
Please wait for compiling and building kernels, about one minute...
Kernel execution time on GPU (kernel: badpixel): 11.206999999999999 ms
Kernel execution time on GPU (kernel: sigma): 1.6622440000000023 ms
Kernel execution time on GPU (kernel: awb): 1.694835000000004 ms
Kernel execution time on GPU (kernel: equalize1): 3.034601999999997 ms
Kernel execution time on GPU (kernel: equalize2): 0.3158639999999996 ms
Kernel execution time on GPU (kernel: equalize3): 2.4953949999999994 ms
Kernel execution time on GPU (kernel: debayer): 8.116 ms

~# cat /proc/interrupts | grep gal
137: 39930 0 0 0 0 0 GICv3 96 Level galcore:0
138: 30080 0 0 0 0 0 GICv3 97 Level galcore:3d-1

Running SoftISP on the combined GPU cores:

~# /opt/imx-gpu-sdk/OpenCL/SoftISP/OpenCL.SoftISP
Denoise status: false
CycleNum status: 1000
Initializing device(s)...
Get the Device info and select Device...
# of Devices Available = 1
# of Compute Units = 2
# compute units = 2
Getting device id...
Creating Command Queue...
Creating kernels...
Please wait for compiling and building kernels, about one minute...
Kernel execution time on GPU (kernel: badpixel): 5.635 ms
Kernel execution time on GPU (kernel: sigma): 1.7863699999999962 ms
Kernel execution time on GPU (kernel: awb): 1.0335089999999996 ms
Kernel execution time on GPU (kernel: equalize1): 2.8791019999999947 ms
Kernel execution time on GPU (kernel: equalize2): 0.2856570000000003 ms
Kernel execution time on GPU (kernel: equalize3): 1.272296 ms
Kernel execution time on GPU (kernel: debayer): 4.1739999999999995 ms

~# cat /proc/interrupts | grep gal
137: 49767 0 0 0 0 0 GICv3 96 Level galcore:0
138: 35092 0 0 0 0 0 GICv3 97 Level galcore:3d-1

 


luisleon
Contributor I

Hi @sbertrand,

From the i.MX Graphics User's Guide, page 63-64, you can find the environment variable `VIV_MGPU_AFFINITY`. The GPU is in combined mode by default (if not set). You can set either:

  1. VIV_MGPU_AFFINITY=1:0, to assign GPU0.
  2. VIV_MGPU_AFFINITY=1:1, to assign GPU1.

In GStreamer, if you have multi-capture, you can set a GPU per pipeline. If you need any combination, you can use gst-interpipe as well.

Best regards,

Leon

MHPC. Luis G. Leon-Vega
RidgeRun Embedded Solutions - NXP Partner
www.ridgerun.com