LPSPI_Reset causes HardFault (KE18F)

johnadamson
Contributor III

Sorry, this is probably a stupid newbie error for the KE18F or the fsl drivers, but I'm not seeing it.  

I'm trying to bring up the LPSPI.  I installed an example app and have been trying to follow it, but I can't use it directly, since I don't have the TWR-KE18F board.  The code that follows is an attempt to trim it down to a minimum; I'm building for the MKE18F256VLH16.  I enable the port clocks, set the pin mux, and set the clock for LPSPI0, but when I try to reset the SPI, I end up in HardFault_Handler.  If I comment out the LPSPI_Reset call, it runs fine, so the fault doesn't seem to be an inadvertent side effect of earlier code.

So apparently there's something more that must be done for the LPSPI on the KE18F that didn't need to be done on the K60?

int main(void) {
    /* Init board hardware. */

    //====================================================
    // BOARD_InitBootPins();

    /* Clock Control: 0x01u */
    CLOCK_EnableClock(kCLOCK_PortA);
    /* Clock Control: 0x01u */
    CLOCK_EnableClock(kCLOCK_PortB);
    /* Clock Control: 0x01u */
    CLOCK_EnableClock(kCLOCK_PortC);
    /* Clock Control: 0x01u */
    CLOCK_EnableClock(kCLOCK_PortD);
    /* Clock Control: 0x01u */
    CLOCK_EnableClock(kCLOCK_PortE);

    /* PORTB1 (pin 33) is configured as LPSPI0_SOUT */
    PORT_SetPinMux(PORTB, 1U, kPORT_MuxAlt3);

    /* PORTB2 (pin 32) is configured as LPSPI0_SCK */
    PORT_SetPinMux(PORTB, 2U, kPORT_MuxAlt3);

    /* PORTB3 (pin 31) is configured as LPSPI0_SIN */
    PORT_SetPinMux(PORTB, 3U, kPORT_MuxAlt3);

    //=============================================================
    // BOARD_InitBootClocks();
    scg_sys_clk_config_t curConfig;

    /* Init FIRC. */
    CLOCK_CONFIG_FircSafeConfig(&g_scgFircConfig_BOARD_BootClockRUN);
    /* Set HSRUN power mode. */
    SMC_SetPowerModeProtection(SMC, kSMC_AllowPowerModeAll);
    SMC_SetPowerModeHsrun(SMC);
    while (SMC_GetPowerModeState(SMC) != kSMC_PowerStateHsrun)
    {
    }

    /* Init SIRC. */
    CLOCK_InitSirc(&g_scgSircConfig_BOARD_BootClockRUN);
    /* Init SysPll. */
    CLOCK_InitSysPll(&g_scgSysPllConfig_BOARD_BootClockRUN);
    /* Set SCG to SPLL mode. */
    CLOCK_SetHsrunModeSysClkConfig(&g_sysClkConfig_BOARD_BootClockRUN);
    /* Wait for clock source switch finished. */
    do
    {
        CLOCK_GetCurSysClkConfig(&curConfig);
    } while (curConfig.src != g_sysClkConfig_BOARD_BootClockRUN.src);

    /* Set SystemCoreClock variable. */
    SystemCoreClock = BOARD_BOOTCLOCKRUN_CORE_CLOCK;
    /* Set PCC LPSPI0 selection */
    CLOCK_SetIpSrc(kCLOCK_Lpspi0, kCLOCK_IpSrcFircAsync);

    printf("Hello World\n");

    LPSPI_Reset(LPSPI0);

    /* Force the counter to be placed into memory. */
    static volatile int i = 0;
    /* Enter an infinite loop, just incrementing a counter. */
    while (1) {
        i++;
    }
    return 0;
}

3 Replies
johnadamson
Contributor III

Okay, I think I see what the problem is, but it only raises more questions.  The code for CLOCK_SetIpSrc in fsl_clock.h is:

static inline void CLOCK_SetIpSrc(clock_ip_name_t name, clock_ip_src_t src)
{
    uint32_t reg = (*(volatile uint32_t *)name);

    assert(reg & PCC_CLKCFG_PR_MASK);
    assert(!(reg & PCC_CLKCFG_INUSE_MASK)); /* Should not change if clock has been enabled by other core. */

    reg = (reg & ~PCC_CLKCFG_PCS_MASK) | PCC_CLKCFG_PCS(src);

    /*
     * If clock is already enabled, first disable it, then set the clock
     * source and re-enable it.
     */
    (*(volatile uint32_t *)name) = reg & ~PCC_CLKCFG_CGC_MASK;
    (*(volatile uint32_t *)name) = reg;
}

Tracing through it, I realized that the next-to-last line does indeed "first disable it", but the last line does NOT unconditionally "then re-enable it": it just writes back whatever CGC value was in the register when it was read at the top of the function.  If I change the last line to:

(*(volatile uint32_t *)name) = reg | PCC_CLKCFG_CGC_MASK;

the later code doesn't hard fault, because, of course, the clock for LPSPI0 was never getting enabled before and now it is.
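
To spell out what I traced, here's the same sequence annotated (a minimal sketch of the register traffic, assuming CGC starts at 0 because nothing in my code ever enabled the LPSPI0 clock):

    uint32_t reg = (*(volatile uint32_t *)name);               /* CGC bit reads as 0 */
    reg = (reg & ~PCC_CLKCFG_PCS_MASK) | PCC_CLKCFG_PCS(src);  /* only PCS changes; CGC is still 0 in reg */
    (*(volatile uint32_t *)name) = reg & ~PCC_CLKCFG_CGC_MASK; /* writes CGC = 0 */
    (*(volatile uint32_t *)name) = reg;                        /* writes CGC = 0 again; the clock stays gated */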

So...am I missing a major step in my initialization, 'cause otherwise I don't get how anyone's code ever works. 

John

Robin_Shen
NXP TechSupport

Hi John,

CLOCK_EnableClock(kCLOCK_Lpspi0); can be used to enable the clock gate of LPSPI0.

[Attached image: CLOCK_EnableClock.PNG]

[Attached image: _clock_ip_name.PNG]
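
For example, based on the code you posted, the call just needs to come before the first access to an LPSPI0 register (a minimal sketch):

    /* Select the LPSPI0 functional clock source (your existing line). */
    CLOCK_SetIpSrc(kCLOCK_Lpspi0, kCLOCK_IpSrcFircAsync);
    /* Enable the LPSPI0 clock gate so LPSPI0 registers can be accessed. */
    CLOCK_EnableClock(kCLOCK_Lpspi0);
    /* LPSPI0 is now clocked, so this access no longer hard faults. */
    LPSPI_Reset(LPSPI0);

LPSPI_MasterInit() makes this CLOCK_EnableClock() call internally, which is why the stock SDK examples work without an explicit call.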

Best Regards,

Robin

 

johnadamson
Contributor III

Yup, that was it.  In trying to duplicate the functionality of LPSPI_MasterInit, I missed that line.  

Thanks,

John
