How Error Correction Code works on e500

NXP Employee

I'm trying to find out how error correction and error handling work on e500 from the DDR point of view. I don't find any support for this in the Linux source.

Can anyone please point out how ECC works on e500 and how MCE (machine check exception) is handled?


NXP Employee

Hi,

By DDR ECC, do you mean how it works on the DDR memory controller of an e500-based SoC? For example, on P2020, below is a quote from its reference manual:

The memory detects all double-bit errors, detects all multi-bit errors within a nibble, and corrects all single-bit errors. Other errors may be detected, but are not guaranteed to be corrected or detected. Multi-bit errors are always reported when error reporting is enabled. When a single-bit error occurs, the single-bit error counter register is incremented, and its value compared to the single-bit error trigger register. An error is reported when these values are equal. The single-bit error registers can be programmed such that minor memory faults are corrected and ignored, but a catastrophic memory failure generates an interrupt.
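The single-bit counting scheme in that quote can be illustrated with a small behavioral model in C. This is only a sketch: the structure and names below are invented for illustration, and the real state lives in the DDR controller's error management registers.

```c
#include <stdbool.h>

/* Behavioral model of the single-bit error reporting described above:
 * each corrected single-bit error increments a counter, and an error
 * is reported only when the counter reaches the programmed trigger
 * value.  Field names are invented for illustration. */
struct sbe_model {
    unsigned count;      /* running single-bit error counter */
    unsigned threshold;  /* single-bit error trigger value */
};

/* Record one corrected single-bit error; returns true when the error
 * should be reported (counter has just reached the trigger value). */
static bool sbe_record_error(struct sbe_model *m)
{
    m->count++;
    return m->count == m->threshold;
}
```

With a high trigger value, minor corrected faults are effectively ignored; with a low one, the first few faults already raise an interrupt, matching the "corrected and ignored vs. catastrophic" behavior the manual describes.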

For MCE, the e500 core reference manual says:

Specific causes of machine check exceptions are implementation-dependent, as are the details of the actions taken on a machine check interrupt.

NXP Employee

Thanks lunminliang for your reply.

Actually, I wanted to provoke a machine check (let's say an L1 data cache parity error) from user space, to see whether Linux machine check handling works.

As per this doc, if I set MSR[ME] and L1CSR0[CPE] from user space, would it provoke a machine check (or how can I do the same)?

http://cache.freescale.com/files/32bit/doc/app_note/AN3532.pdf


NXP TechSupport

Both MSR and L1CSR0 are core, supervisor-modifiable registers. You cannot alter them from Linux user mode.

NXP Employee

Thanks bpe for your reply.

I understood that we can't modify these registers from user space.

But could you please confirm: if I set these registers from kernel space, would it trigger a machine check exception and invoke the Linux machine check handler?


NXP TechSupport

Yes, in kernel mode all SPRs are accessible, including those under discussion, and the idea should work as you intend, although I have not tested it in practice.
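For reference, the kernel-space experiment being discussed would look roughly like this. It is a sketch only, untested: the `L1CSR0_CPE` value should be verified against your core's reference manual, and on non-PowerPC builds the SPR accesses are stubbed out so the control flow still compiles.

```c
/* Sketch only (untested): from kernel context on e500, enable machine
 * checks in the MSR and set the cache parity enable bit in L1CSR0, as
 * discussed above.  Verify L1CSR0_CPE against your core's manual. */
static int armed;   /* just for observing that the call ran */

#ifdef __powerpc__
#include <asm/reg.h>            /* kernel build: mfmsr/mtmsr, mfspr/mtspr */

#ifndef L1CSR0_CPE
#define L1CSR0_CPE 0x00010000   /* cache parity enable -- check the RM */
#endif

static void arm_cache_parity_check(void)
{
    mtmsr(mfmsr() | MSR_ME);    /* ensure machine checks are enabled */
    mtspr(SPRN_L1CSR0, mfspr(SPRN_L1CSR0) | L1CSR0_CPE);
    armed = 1;
}
#else
static void arm_cache_parity_check(void)
{
    armed = 1;                  /* host-side stub, no SPR access */
}
#endif
```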


NXP Employee

Cache error injection is difficult to use properly -- it doesn't just inject a single error, but causes all accesses to get errors until the bit is turned off. If the machine check handler is cacheable, which in Linux it is, you'll get a recursive machine check and hang.

If you want to test machine check handling in general, rather than a specific type of machine check, I suggest accessing a physical address that doesn't match any LAW (local access window), e.g. using /dev/mem.
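A minimal sketch of that suggestion (untested): the physical address passed in is a placeholder that you must replace with one your board's LAW configuration really leaves unmapped, and a "successful" run ends in a machine check, so do this as root on a scratch system only.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Page-align a physical address so it can be used as an mmap offset
 * on /dev/mem (mmap offsets must be page-aligned). */
static uint64_t page_align(uint64_t phys, uint64_t pagesz)
{
    return phys & ~(pagesz - 1);
}

/* Read one word from a physical address via /dev/mem.  If `phys`
 * falls outside every configured LAW, the load should come back as a
 * machine check rather than data -- which is the point of the test.
 * Returns -1 on setup failure; may not return at all if the machine
 * check hangs the system. */
static int poke_unmapped_phys(uint64_t phys)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0)
        return -1;
    volatile uint32_t *p = mmap(NULL, pagesz, PROT_READ, MAP_SHARED,
                                fd, page_align(phys, pagesz));
    if (p == MAP_FAILED) {
        close(fd);
        return -1;
    }
    (void)p[(phys & (pagesz - 1)) / 4];  /* the offending load */
    munmap((void *)p, pagesz);
    close(fd);
    return 0;
}
```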


NXP Employee

Thank you Scott for your reply on this.

I'm trying to inject an MCE (instruction cache parity error) from QEMU, hoping that it will be handled by the Linux MC handler (arch/powerpc/kernel/traps.c). For that, I have come up with the following changes, but I'm not sure this is the right way to do it.

Request you to guide me on the same.

 hmp-commands.hx                      |   13 +++++++
 linux-headers/asm-powerpc/kvm.h      |    2 ++
 linux-headers/asm-powerpc/kvm_para.h |    3 +-
 monitor.c                            |   17 +++++++++
 target-ppc/cpu-qom.h                 |    1 +
 target-ppc/kvm.c                     |   64 ++++++++++++++++++++++++++++++++++
 6 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index e37bc8b..20e81e4 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1496,6 +1496,19 @@ STEXI
 Stop the QEMU embedded NBD server.
 ETEXI
+    {
+        .name       = "mceppc",
+        .args_type  = "",
+        .params     = "",
+        .help       = "inject mceppc",
+        .mhandler.cmd = do_inject_mceppc,
+    },
+STEXI
+@item mceppc @var{cpu}
+@findex mceppc
+Inject an mce on the default CPU (ppc64).
+ETEXI
+
 #if defined(TARGET_I386)
diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
index ab4d473..b9a2f5c 100644
--- a/linux-headers/asm-powerpc/kvm.h
+++ b/linux-headers/asm-powerpc/kvm.h
@@ -50,6 +50,8 @@ struct kvm_regs {
     __u64 sprg7;
     __u64 gpr[32];
+    __u32 l1csr0;
+    __u32 l1csr1;
 };
 #define KVM_SREGS_E_IMPL_NONE    0
diff --git a/linux-headers/asm-powerpc/kvm_para.h b/linux-headers/asm-powerpc/kvm_para.h
index 2abcc46..1bd4b72 100644
--- a/linux-headers/asm-powerpc/kvm_para.h
+++ b/linux-headers/asm-powerpc/kvm_para.h
@@ -56,7 +56,8 @@ struct kvm_vcpu_arch_shared {
     __u32 mas6;
     __u32 esr;
     __u32 pir;
-
+    __u32 l1csr0;
+    __u32 l1csr1;
     /*
      * SPRG4-7 are user-readable, so we can only keep these consistent
      * between the shared area and the real registers when there's an
diff --git a/monitor.c b/monitor.c
index 7e4f605..fa7b109 100644
--- a/monitor.c
+++ b/monitor.c
@@ -73,6 +73,8 @@
 #include "qapi/qmp-event.h"
 #include "qapi-event.h"
+#include "target-ppc/cpu-qom.h"
+
 /* for pic/irq_info */
 #if defined(TARGET_SPARC)
 #include "hw/sparc/sun4m.h"
@@ -2152,6 +2154,21 @@ static void do_acl_remove(Monitor *mon, const QDict *qdict)
     }
 }
+static void do_inject_mceppc(Monitor *mon, const QDict *qdict)
+{
+    CPUState *cs;
+    PowerPCCPU *cpu;
+
+    int cpu_index = 0;
+
+    cs = qemu_get_cpu(cpu_index);
+
+    if (cs != NULL) {
+        cpu = POWERPC_CPU(cs);
+        kvmppc_inject_mce(cpu, mon);
+    }
+}
+
 #if defined(TARGET_I386)
 static void do_inject_mce(Monitor *mon, const QDict *qdict)
 {
diff --git a/target-ppc/cpu-qom.h b/target-ppc/cpu-qom.h
index 6967a80..ed99c37 100644
--- a/target-ppc/cpu-qom.h
+++ b/target-ppc/cpu-qom.h
@@ -112,6 +112,7 @@ static inline PowerPCCPU *ppc_env_get_cpu(CPUPPCState *env)
 PowerPCCPUClass *ppc_cpu_class_by_pvr(uint32_t pvr);
 PowerPCCPUClass *ppc_cpu_class_by_pvr_mask(uint32_t pvr);
+void kvmppc_inject_mce(PowerPCCPU *cpu, Monitor *mon);
 void ppc_cpu_do_interrupt(CPUState *cpu);
 bool ppc_cpu_exec_interrupt(CPUState *cpu, int int_req);
 void ppc_cpu_dump_state(CPUState *cpu, FILE *f, fprintf_function cpu_fprintf,
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 1edf2b5..af1b486 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -32,6 +32,7 @@
 #include "sysemu/device_tree.h"
 #include "mmu-hash64.h"
+#include "target-ppc/cpu-qom.h"
 #include "hw/sysbus.h"
 #include "hw/ppc/spapr.h"
 #include "hw/ppc/spapr_vio.h"
@@ -828,6 +829,8 @@ int kvm_arch_put_registers(CPUState *cs, int level)
     regs.pid = env->spr[SPR_BOOKE_PID];
+    regs.l1csr0 = env->spr[SPR_Exxx_L1CSR0];
+
     for (i = 0; i < 32; i++)
         regs.gpr[i] = env->gpr[i];
@@ -937,6 +940,65 @@ int kvm_arch_put_registers(CPUState *cs, int level)
     return ret;
 }
+//amit
+
+void ppc_booke_set_msr_l1csr0(CPUPPCState *env, target_ulong msr,
+                              target_ulong l1csr0)
+{
+    env->msr = msr_me;
+    env->spr[SPR_Exxx_L1CSR0] = l1csr0 | L1CSR0_CPE;
+}
+
+typedef struct MCEInjectionParams {
+    Monitor *mon;
+    PowerPCCPU *cpu;
+} MCEInjectionParams;
+
+static void do_inject_mce_ppc(void *data)
+{
+    MCEInjectionParams *params = data;
+    CPUPPCState *env = &params->cpu->env;
+    CPUState *cpu = CPU(params->cpu);
+    int ret;
+
+    cpu_synchronize_state(cpu);
+
+    struct kvm_regs regs;
+
+    ret = kvm_vcpu_ioctl(env, KVM_GET_REGS, &regs);
+    if (ret < 0) {
+        return ret;
+    }
+
+    env->msr = regs.msr;
+    env->spr[SPR_Exxx_L1CSR0] = regs.l1csr0;
+
+    ppc_booke_set_msr_l1csr0(env, regs.msr, regs.l1csr0);
+
+    regs.msr = env->msr;
+    regs.l1csr0 = env->spr[SPR_Exxx_L1CSR0];
+
+    kvm_vcpu_ioctl(env, KVM_SET_REGS, &regs);
+}
+
+void kvmppc_inject_mce(PowerPCCPU *cpu, Monitor *mon)
+{
+    CPUState *cs = CPU(cpu);
+    CPUPPCState *env = &cpu->env;
+
+    MCEInjectionParams params = {
+        .mon = mon,
+        .cpu = cpu,
+    };
+
+    run_on_cpu(cs, do_inject_mce_ppc, &params);
+}
+
 static void kvm_sync_excp(CPUPPCState *env, int vector, int ivor)
 {
      env->excp_vectors[vector] = env->spr[ivor] + env->spr[SPR_BOOKE_IVPR];
@@ -981,6 +1043,8 @@ int kvm_arch_get_registers(CPUState *cs)
     env->spr[SPR_BOOKE_PID] = regs.pid;
+    env->spr[SPR_Exxx_L1CSR0] = regs.l1csr0;
+
     for (i = 0; i < 32; i++)
         env->gpr[i] = regs.gpr[i];

 arch/powerpc/include/uapi/asm/kvm.h      |    3 +++
 arch/powerpc/include/uapi/asm/kvm_para.h |    3 ++-
 arch/powerpc/kvm/booke.c                 |    4 ++++
 arch/powerpc/kvm/e500.h                  |    2 --
 arch/powerpc/kvm/e500_emulate.c          |   14 +++++++-------
 arch/powerpc/kvm/e500mc.c                |    9 +++++++++
 6 files changed, 25 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index ab4d473..57e6bb1 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -50,6 +50,9 @@ struct kvm_regs {
     __u64 sprg7;
     __u64 gpr[32];
+
+    __u32 l1csr0;
+    __u32 l1csr1;
 };
 #define KVM_SREGS_E_IMPL_NONE    0
diff --git a/arch/powerpc/include/uapi/asm/kvm_para.h b/arch/powerpc/include/uapi/asm/kvm_para.h
index 91e42f0..93ba5f9 100644
--- a/arch/powerpc/include/uapi/asm/kvm_para.h
+++ b/arch/powerpc/include/uapi/asm/kvm_para.h
@@ -56,7 +56,8 @@ struct kvm_vcpu_arch_shared {
     __u32 mas6;
     __u32 esr;
     __u32 pir;
-
+    __u32 l1csr0;
+    __u32 l1csr1;
     /*
      * SPRG4-7 are user-readable, so we can only keep these consistent
      * between the shared area and the real registers when there's an
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 9b55dec..b3009b5 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1437,6 +1437,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
     regs->sprg6 = kvmppc_get_sprg6(vcpu);
     regs->sprg7 = kvmppc_get_sprg7(vcpu);
+    regs->l1csr0 = vcpu->arch.shared->l1csr0;
+
     for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
         regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
@@ -1465,6 +1467,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
     kvmppc_set_sprg6(vcpu, regs->sprg6);
     kvmppc_set_sprg7(vcpu, regs->sprg7);
+    vcpu->arch.shared->l1csr0 = regs->l1csr0;
+
     for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
         kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index 72920be..5825e03 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -76,8 +76,6 @@ struct kvmppc_vcpu_e500 {
     unsigned int host_tlb1_nv;
     u32 svr;
-    u32 l1csr0;
-    u32 l1csr1;
     u32 hid0;
     u32 hid1;
     u64 mcar;
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index ce7291c..721dcdb 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -109,7 +109,7 @@ static int kvmppc_e500_emul_dcbtls(struct kvm_vcpu *vcpu)
     struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
     /* Always fail to lock the cache */
-    vcpu_e500->l1csr0 |= L1CSR0_CUL;
+    vcpu->arch.shared->l1csr0 |= L1CSR0_CUL;
     return EMULATE_DONE;
 }
@@ -231,12 +231,12 @@ int kvmppc_core_emulate_mtspr_e500(struct kvm_vcpu *vcpu, int sprn, ulong spr_va
         break;
 #endif
     case SPRN_L1CSR0:
-        vcpu_e500->l1csr0 = spr_val;
-        vcpu_e500->l1csr0 &= ~(L1CSR0_DCFI | L1CSR0_CLFC);
+        vcpu->arch.shared->l1csr0 = spr_val;
+        vcpu->arch.shared->l1csr0 &= ~(L1CSR0_DCFI | L1CSR0_CLFC);
         break;
     case SPRN_L1CSR1:
-        vcpu_e500->l1csr1 = spr_val;
-        vcpu_e500->l1csr1 &= ~(L1CSR1_ICFI | L1CSR1_ICLFR);
+        vcpu->arch.shared->l1csr1 = spr_val;
+        vcpu->arch.shared->l1csr1 &= ~(L1CSR1_ICFI | L1CSR1_ICLFR);
         break;
     case SPRN_HID0:
        vcpu_e500->hid0 = spr_val;
@@ -354,10 +354,10 @@ int kvmppc_core_emulate_mfspr_e500(struct kvm_vcpu *vcpu, int sprn, ulong *spr_v
         *spr_val = vcpu->arch.tlbps[1];
         break;
     case SPRN_L1CSR0:
-        *spr_val = vcpu_e500->l1csr0;
+        *spr_val = vcpu->arch.shared->l1csr0;
         break;
     case SPRN_L1CSR1:
-        *spr_val = vcpu_e500->l1csr1;
+        *spr_val = vcpu->arch.shared->l1csr1;
         break;
     case SPRN_HID0:
         *spr_val = vcpu_e500->hid0;
diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
index cda695d..9b88475 100644
--- a/arch/powerpc/kvm/e500mc.c
+++ b/arch/powerpc/kvm/e500mc.c
@@ -143,6 +143,12 @@ static void kvmppc_core_vcpu_load_e500mc(struct kvm_vcpu *vcpu, int cpu)
     mtspr(SPRN_GDEAR, vcpu->arch.shared->dar);
     mtspr(SPRN_GESR, vcpu->arch.shared->esr);
+    asm volatile ("sync");
+    asm volatile ("isync");
+    mtspr(SPRN_L1CSR0, vcpu->arch.shared->l1csr0);
+    asm volatile ("isync");
+
     if (vcpu->arch.oldpir != mfspr(SPRN_PIR) ||
         __this_cpu_read(last_vcpu_of_lpid[get_lpid(vcpu)]) != vcpu) {
         kvmppc_e500_tlbil_all(vcpu_e500);
@@ -169,6 +175,9 @@ static void kvmppc_core_vcpu_put_e500mc(struct kvm_vcpu *vcpu)
     vcpu->arch.oldpir = mfspr(SPRN_PIR);
+    asm volatile ("sync");
+    vcpu->arch.shared->l1csr0 = mfspr(SPRN_L1CSR0);
+
     kvmppc_booke_vcpu_put(vcpu);
 }


NXP Employee

Could you take this to e-mail (preferably on the kvm list)?  These forums aren't the easiest place to do code review or rapid back-and-forth.


NXP Employee

> Cache error injection is difficult to use properly -- it doesn't just inject a single error, but causes all accesses to get errors until the bit is turned off. If the machine check handler is cacheable, which in Linux it is, you'll get a recursive machine check and hang.

If I understand correctly, when I inject a cache error I won't be able to access the machine check handler code, which has already been cached as part of the kernel; but I don't understand how that leads to a recursive machine check and a hang.

Also, would it be helpful if we map the handler code to static pages? Would that stop the machine check handler code from getting cached?

Thanks

Amit.


NXP Employee

You say you understand that you won't be able to access the machine check handler code... so what do you think the CPU is going to do next, when it can't fetch code to run?

What do you mean by "static pages"?


NXP Employee

OK, when the CPU is not able to run the machine check handler code that has been cached (because of the cache parity error), the system would hang.

If the CPU can't fetch code to run, it would generate a page fault, right? So there would be a double-fault situation:

1) We can't access the cache.

2) We can't run the MC handler code.

Now, the page with the MC handler must be mapped in the MMU; is there any way to set an attribute in the MMU to make this page uncached (the cache-inhibited bit controls this for a page's mapping)?

Also, we know that IVOR1 holds the offset of the MC handler, which in our case is machine_check_e500mc; could we make anything out of that?

Thank you.

Amit.


NXP Employee

PowerPC doesn't have anything called a "page fault". It has TLB error interrupts and data storage interrupts, but you'll only get those if the reason you can't execute code has to do with the TLB. If it's due to a cache error, you'll get a machine check.

Yes, it's theoretically possible to map the machine check code uncached, but Linux doesn't do it, and it would be complicated: Linux wants to map all of memory using large cacheable pages, and you can't map the same memory twice with differing cacheability. You'd also need to ensure that everything the machine check handler depends on is uncached.
