i.MX8MP-evk GPIO edge-event ISR handler has 300 us lag. We can't get it any earlier.


JesterOfHw
Contributor IV

NXP i.MX Release Distro 5.10-hardknott imx8mpevk

Our test setup uses a function generator to drive an input pin, and the ISR handler sets a GPIO output, so we can measure the propagation delay through the handler on an oscilloscope. It is consistently around 305 us with some jitter. This does not match the expected latency of the ARM hardware interrupt path (< 50 us at most); it appears as if a thread is being used. We have taken steps to unthread the handler, but that doesn't appear to help.

In the .config we see this defined for IRQs:

CONFIG_IRQ_FORCED_THREADING=y

That seems like it could be related, but setting it to =n is forced back to =y when the kernel is built.
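One quick way to check whether a handler actually runs threaded is to look for an `irq/<nr>-<name>` kernel thread, where the name matches the string passed to `request_irq()`. This is a sketch assuming a procps-style `ps`; the `check_irq_threads` helper name is ours:

```shell
# Report whether any threaded IRQ handlers appear in a "ps -eo comm"
# style listing. A forced-threaded handler shows up as "irq/<nr>-<name>";
# with IRQF_NO_THREAD in effect, our handler's name should NOT appear.
check_irq_threads() {
    threads=$(grep '^irq/' || true)
    if [ -n "$threads" ]; then
        printf '%s\n' "$threads"
    else
        echo "no threaded IRQ handlers found"
    fi
}

# On the target, run:  ps -eo comm | check_irq_threads
```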

 

Here is a shortened diff showing how the IRQ handler is registered, plus the handler itself.

Is it true that this is not being handled in ISR context but in a thread? If so, how do we get this to run at true ISR level?

+static int gpio_test_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct gpio_test_data *test_data;
+	int xinput_irq;
+	int ret;
+
+	/* allocate driver data before using it */
+	test_data = devm_kzalloc(dev, sizeof(*test_data), GFP_KERNEL);
+	if (!test_data)
+		return -ENOMEM;
+
+	test_data->xinput_gpio = devm_gpiod_get_optional(dev, "xinput", GPIOD_IN);
+	if (IS_ERR(test_data->xinput_gpio))
+		return PTR_ERR(test_data->xinput_gpio);
+
+	/* map the input GPIO to its interrupt line */
+	xinput_irq = gpiod_to_irq(test_data->xinput_gpio);
+	if (xinput_irq < 0)
+		return xinput_irq;
+
+	/* koutput GPIO requests omitted from this shortened listing */
+
+	/* IRQF_NO_THREAD asks for hard-IRQ context even with forced threading */
+	ret = request_irq(xinput_irq, gpio_xinput_irq_handler,
+			  IRQF_NO_THREAD, "GPIO TEST --->INPUT", test_data);
+	if (ret)
+		return ret;
+
+	platform_set_drvdata(pdev, test_data);
+
+	dev_info(dev, "sos GPIO fan initialized\n");
+
+	return 0;
+}

 

+static irqreturn_t gpio_xinput_irq_handler(int irq, void *dev_id)
+{
+	struct gpio_test_data *testdata = dev_id;
+
+	/* toggle the scope output so the propagation delay is visible */
+	gpiod_set_value(testdata->koutput1_gpio, koutput1val);
+	koutput1val = !koutput1val;
+
+	/* printk(KERN_INFO "-----sos11-------%s---------\n", __func__); */
+
+	return IRQ_HANDLED;
+}


JesterOfHw
Contributor IV

It looks like there is a major kernel latency switchover related to CPU idle. ISR timing holds if we disable the idle states from the shell.

DISABLE:
echo 1 > /sys/devices/system/cpu/cpu0/cpuidle/state0/disable
echo 1 > /sys/devices/system/cpu/cpu0/cpuidle/state1/disable
echo 1 > /sys/devices/system/cpu/cpu1/cpuidle/state0/disable
echo 1 > /sys/devices/system/cpu/cpu1/cpuidle/state1/disable
echo 1 > /sys/devices/system/cpu/cpu2/cpuidle/state0/disable
echo 1 > /sys/devices/system/cpu/cpu2/cpuidle/state1/disable
echo 1 > /sys/devices/system/cpu/cpu3/cpuidle/state0/disable
echo 1 > /sys/devices/system/cpu/cpu3/cpuidle/state1/disable
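The eight echo lines above can be generated with a small loop. A sketch assuming four cores and two idle states per core, matching the sysfs paths above; the `cpuidle_ctl` function name is ours:

```shell
# Emit "echo <val> > .../disable" commands for every core and idle state.
# val=1 disables the state, val=0 re-enables it. Pipe the output to sh
# (as root) on the target to apply it.
cpuidle_ctl() {
    val="$1"
    for cpu in 0 1 2 3; do
        for state in 0 1; do
            echo "echo $val > /sys/devices/system/cpu/cpu$cpu/cpuidle/state$state/disable"
        done
    done
}

cpuidle_ctl 1   # prints the eight "disable" commands
```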

 

Afterwards the ISR lag is 10-15 microseconds, which is what we were really hoping to see originally. Whether this is a good solution in the long run, I'm not sure, but the root cause is far less of a mystery now.

 

ENABLE:
echo 0 > /sys/devices/system/cpu/cpu0/cpuidle/state0/disable
echo 0 > /sys/devices/system/cpu/cpu0/cpuidle/state1/disable
echo 0 > /sys/devices/system/cpu/cpu1/cpuidle/state0/disable
echo 0 > /sys/devices/system/cpu/cpu1/cpuidle/state1/disable
echo 0 > /sys/devices/system/cpu/cpu2/cpuidle/state0/disable
echo 0 > /sys/devices/system/cpu/cpu2/cpuidle/state1/disable
echo 0 > /sys/devices/system/cpu/cpu3/cpuidle/state0/disable
echo 0 > /sys/devices/system/cpu/cpu3/cpuidle/state1/disable

Now the ISR lag is back to ~305 microseconds again.

We can go back and forth between enable and disable, and the lag appears and disappears with this idle control.

ENABLE = ~305 us, DISABLE = ~12 us

For our system, we need to begin processing and timestamping an asynchronous event that we don't know is coming until it arrives. My view is that idle can't work for us, as it adds too much startup lag. However, the system behaves as originally expected when idle is disabled via the shell commands above.

This cpuidle system is new to me, so I'm not sure what else may need to be changed or what else is involved with it.
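For a longer-term option, instead of disabling the states globally, the mainline kernel's PM QoS interface lets a process cap the allowed cpuidle exit latency only while it needs to: write a 32-bit microsecond value to `/dev/cpu_dma_latency` and keep the file descriptor open; the constraint is dropped when the descriptor is closed. A hedged sketch, not tested on this BSP (assumes the PM QoS device node exists and a shell whose `printf` emits `\0` bytes, e.g. bash):

```shell
# Hold /dev/cpu_dma_latency open with a 0-microsecond target, which asks
# cpuidle to stay in the shallowest state. The kernel honors the request
# only while the descriptor stays open, so we park it on fd 3.
hold_latency() {
    dev=/dev/cpu_dma_latency
    if [ ! -w "$dev" ]; then
        echo "cannot write $dev (need root and CONFIG_CPU_IDLE)"
        return 1
    fi
    exec 3> "$dev"
    printf '\0\0\0\0' >&3    # 32-bit value 0 = 0 us exit-latency target
    echo "latency constraint on $dev active (held open on fd 3)"
}

# On the target (as root):  hold_latency
# then run the measurement; "exec 3>&-" releases the constraint.
```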


jimmychan
NXP TechSupport

Hello,

 

Could you tell me which version of the BSP you are using?

Which pin are you using for the test? Could you send me the device tree file for checking?


JesterOfHw
Contributor IV

The BSP is stock hardknott, apart from a DTS change for the pins we used for timing testing. We are using the 8MPLUS-EVK board for all of this. I've included some files for your evaluation if needed.

--------------------------------------------------

NXP Software Content Register

Release - Linux 5.10.35-2.0.0
June 2021

Outgoing License: LA_OPT_NXP_Software_License v24 June 2021 - Additional distribution license granted - Section 2.3 applies
License File: COPYING

Yocto Project recipe manifest:
repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b imx-linux-hardknott -m imx-5.10.35-2.0.0.xml

Release tag: lf-5.10.35-2.0.0

-------------------------------------------

For the pin definitions, here is a DTS snippet: pin 19 is the input using edge-detection events, and the rest are outputs for our scope probes. They are direct pins, not going through serial expansion chips.

+	gpio-test {
+		compatible = "gpio-test";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_gpio_test>;
+		xinput-gpios = <&gpio3 19 GPIO_ACTIVE_HIGH>;
+		koutput1-gpios = <&gpio3 20 GPIO_ACTIVE_HIGH>;
+		koutput2-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
+		koutput3-gpios = <&gpio3 22 GPIO_ACTIVE_HIGH>;
+		koutput4-gpios = <&gpio3 23 GPIO_ACTIVE_HIGH>;
+		koutput5-gpios = <&gpio3 24 GPIO_ACTIVE_HIGH>;
+		status = "okay";
+	};

 
