QMAN portal IRQ affinity mask

Solved

7,612 Views
Simbu
Contributor II

Hi All,

I'm working on the P4080 processor, on which I attempted to change the IRQ affinity mask for a QMan portal as shown below.

root> ls -lrt /proc/irq/106/smp_affinity

-rw------- 1 root root 0 Jun 12 17:22 /proc/irq/106/smp_affinity

root> cat /proc/irq/106/smp_affinity

02

Writing to it fails with an Input/output error:

root> echo 0x01 > /proc/irq/106/smp_affinity

-bash: echo: write error: Input/output error

Any idea how this IRQ can be moved to a different core?

I understand that these portals are dedicated to a single core, but for one specific reason (the performance of other server processes running on this core) I want to change this setting.

Thanks in Advance.
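For reference, smp_affinity takes a hexadecimal bitmask of CPUs (bit N set = core N allowed), so 0x01 targets core 0 and 0x02 targets core 1. A tiny helper to illustrate the encoding (the function name is hypothetical, just for this example):

```shell
# hex_mask_for_core: print the smp_affinity hex mask that pins an IRQ
# to a single core (bit N of the mask corresponds to core N).
hex_mask_for_core() {
    printf '%x\n' $((1 << $1))
}

hex_mask_for_core 0   # → 1  (mask 0x01, core 0)
hex_mask_for_core 1   # → 2  (mask 0x02, core 1)
```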

1 Solution

6,353 Views
vakulgarg
NXP Employee

In the dts file, find and edit the qportal1 node.

Remove the line 'cpu-handle = <&cpu1>;':

    qportal1: qman-portal@4000 {
            cell-index = <0x1>;
            compatible = "fsl,p4080-qman-portal", "fsl,qman-portal";
            reg = <0x4000 0x4000 0x101000 0x1000>;
            cpu-handle = <&cpu1>;                    <-- Remove this line.
            interrupts = <106 0x2 0 0>;
            fsl,qman-channel-id = <0x1>;
            fsl,qman-pool-channels = <&qpool1 &qpool2 &qpool3
                    &qpool4 &qpool5 &qpool6
                    &qpool7 &qpool8 &qpool9
                    &qpool10 &qpool11 &qpool12
                    &qpool13 &qpool14 &qpool15>;
    };

Recompile the dts, use it, and let me know the result (with the boot log).

View solution in original post

21 Replies

6,313 Views
vakulgarg
NXP Employee

Each portal's DQRR is processed by only a single core (called its owner/master core).

The interrupts of QMan portals are affined to their respective owner cores. This affinity cannot be changed directly.

In your case, you do not want a particular core to receive a QMan portal's interrupt.

For this, you can configure that core not to get any QMan portal in owner mode.

In other words, the core in question will not process the portal's DQRR (dequeued messages).

Let me know if this can work for you. I can post the method to achieve this.

Which SDK version are you using?

6,313 Views
Simbu
Contributor II

Thanks, Vakul, for the quick response.

Could you please help me with how to achieve this, e.g. when the need is on core 1?

"For this, you can configure that core not to get any QMan portal in owner mode."


Thanks in Advance.



6,314 Views
vakulgarg
NXP Employee

To prevent QMan portal interrupts from reaching core 1, add the following option to the kernel bootargs U-Boot environment variable:

qportals=s0,2-7

This will make core 0 get a shared portal.

Cores 2 through 7 get their own private portals.

The remaining core, i.e. core 1, will get a slave portal. This means it will share the portal with core 0.

Cores with a slave portal do not participate in the portal's DQRR receive processing.

Let me know if this worked for you.
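As a sketch, assuming a typical setup where bootargs is built from a U-Boot environment variable (exact variable names and boot scripts vary per board; these commands are illustrative, not from the poster's setup):

```
=> setenv bootargs "${bootargs} qportals=s0,2-7"
=> saveenv
=> boot
```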

6,314 Views
Simbu
Contributor II

Thanks again, Vakul.

I updated the bootargs with the provided option (qportals=s0,2-7), but the interrupts are still handled by core 1 (monitored via /proc/interrupts). In addition, I added the bportals option as well (bportals=s0,2-7), and no change was observed.

root> cat /proc/cmdline

root=/dev/nfs rw ip=<IP address> console=ttyS0,9600 nfsroot=$nfs_ip:_tftpboot/cnp/p4080,tcp,v3, panic=5 bportals=s0,2-7 qportals=s0,2-7

root>

Please let me know your view.
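To double-check where the portal IRQ is actually being serviced, the per-CPU counters in /proc/interrupts can be compared before and after traffic. A small sketch (the sample line below is fabricated for illustration; on the target, pipe `cat /proc/interrupts` instead):

```shell
# Given /proc/interrupts-style text, report which CPU column carries
# the bulk of a given IRQ's count. The sample data is made up.
sample='           CPU0   CPU1   CPU2   CPU3
106:          0  51234      0      0   OpenPIC   Edge  QMan portal 1'

busiest=$(printf '%s\n' "$sample" | awk -v irq="106:" '
    NR == 1 { for (i = 1; i <= NF; i++) cpu[i] = $i; next }
    $1 == irq {
        max = -1
        for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++)
            if ($i + 0 > max) { max = $i + 0; best = cpu[i-1] }
        print best
    }')
echo "IRQ 106 is mostly serviced on $busiest"   # → CPU1 in this sample
```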

6,314 Views
vakulgarg
NXP Employee

Can you send me full dump of /proc/interrupts?

6,314 Views
Simbu
Contributor II

Sent the snapshot.

6,314 Views
vakulgarg
NXP Employee

Can you tell whether you are accessing portals from user space (i.e. using USDPAA software)?

6,314 Views
Simbu
Contributor II

We are not accessing/using USDPAA portals from user space, but the USDPAA configuration is enabled on the kernel side.

CONFIG_FSL_USDPAA_SHMEM=y

CONFIG_FSL_USDPAA_SHMEM_LOG4=8

6,314 Views
vakulgarg
NXP Employee

Which sdk version are you using?

6,314 Views
Simbu
Contributor II

I think 1.3. But has any major change gone into the SDK that would alter the behavior of DPAA or smp_affinity?

6,313 Views
vakulgarg
NXP Employee

SDK-1.3 is very old. You won't get good support on this SDK version.

The kernel in sdk-1.3 does not parse the qportals= and bportals= bootargs.

Here the kernel may be relying on static portal binding to the cores.

That is why you do not see any impact.

Can you upgrade to a recent release, sdk-1.5 (or sdk-1.6, which is going to be released this week)?

6,313 Views
Simbu
Contributor II

Unfortunately, we are almost done with all the application porting on top of this SDK, and it will be difficult to upgrade to a new version.

Is there any way out other than upgrading to a new SDK?

Also, is there any way to make smp_affinity work, or is that file no longer usable for QMan portals?

6,313 Views
vakulgarg
NXP Employee

I checked sdk-1.3 kernel source code. It seems it supports 'qportals'.

Please send me kernel bootlog with qportals=s0,2-7 in bootargs.

Also send me kernel file linux/drivers/staging/fsl_qbman/qman_driver.c.

6,314 Views
Simbu
Contributor II

Please find the requested log and file.

6,314 Views
vakulgarg
NXP Employee

Looking at your qman_driver.c, my guess is that you are not using sdk-1.3. Your SDK is as old as SDK-1.0.

SDK-1.0 relies on static portal allocation to cpus and does not have concept of slave portals.

Can you share linux/arch/powerpc/boot/dts/p4080ds.dts? I may be able to suggest something for your case.

6,314 Views
Simbu
Contributor II

Thanks, Vakul.

Please find the requested data attached.

I thought support for shared/slave mode was available in my SDK, based on the code below, which loops through the portal configurations and decides whether each portal is shared or a slave:

----
        list_for_each_entry(pcfg, &cfg_list, list) {
                int is_shared = ((sharing_cpu >= 0) &&
                                (pcfg->cpu == sharing_cpu));
                struct qman_portal *p;

                if (pcfg->cpu < 0)
                        continue;
                p = init_affine_portal(pcfg, pcfg->cpu, NULL,
                                        recovery_mode, is_shared);
                if (p) {
                        if (is_shared)
                                sharing_portal = p;
                        cpumask_clear_cpu(pcfg->cpu, &slave_cpus);
                }
----

6,354 Views
vakulgarg
NXP Employee

In the dts file, find and edit the qportal1 node.

Remove the line 'cpu-handle = <&cpu1>;':

    qportal1: qman-portal@4000 {
            cell-index = <0x1>;
            compatible = "fsl,p4080-qman-portal", "fsl,qman-portal";
            reg = <0x4000 0x4000 0x101000 0x1000>;
            cpu-handle = <&cpu1>;                    <-- Remove this line.
            interrupts = <106 0x2 0 0>;
            fsl,qman-channel-id = <0x1>;
            fsl,qman-pool-channels = <&qpool1 &qpool2 &qpool3
                    &qpool4 &qpool5 &qpool6
                    &qpool7 &qpool8 &qpool9
                    &qpool10 &qpool11 &qpool12
                    &qpool13 &qpool14 &qpool15>;
    };

Recompile the dts, use it, and let me know the result (with the boot log).

6,312 Views
Simbu
Contributor II

The suggested approach configured QMan portal 1 in slave mode as expected, and network traffic was no longer handled by QMan portal 1. However, we observed a few side effects on userland processes running on the core whose QMan portal is configured in slave mode.

6,314 Views
Simbu
Contributor II

Hi Vakul,

Is it possible to exclude QMan portal 1 from the FMan dispatch list? Basically, is there any way to mask the visibility of QMan portal 1 from FMan's view so that it does not dispatch any network traffic to this queue?

6,311 Views
vakulgarg
NXP Employee

To avoid receiving any network traffic on the portal dedicated to cpu1, try removing qpool1 from qportal1.

This will configure the portal dedicated to cpu1 not to receive traffic on pool channel qpool1.

The Ethernet ports (as per the dts file you shared) have been configured to use qpool1 for their FQs.

You need to use:

    qportal1: qman-portal@4000 {
            cell-index = <0x1>;
            compatible = "fsl,p4080-qman-portal", "fsl,qman-portal";
            reg = <0x4000 0x4000 0x101000 0x1000>;
            cpu-handle = <&cpu1>;
            interrupts = <106 0x2 0 0>;
            fsl,qman-channel-id = <0x1>;
            fsl,qman-pool-channels = <&qpool2 &qpool3      <-- Removed qpool1 from this line
                    &qpool4 &qpool5 &qpool6
                    &qpool7 &qpool8 &qpool9
                    &qpool10 &qpool11 &qpool12
                    &qpool13 &qpool14 &qpool15>;
    };
