iMX6 / XIO2213BZAY / Drive Connection


jalden
Contributor I

I am currently trying to find a solution for the setup below.

 

Setup:

The setup consists of an i.MX6 connected to a XIO2213BZAY chip via PCIe, which in turn connects to removable memory modules via FireWire.

 

Reference:

I have been using the nitrogen6x to model the behavior.

 

Kernel Version:

I have tested a variety of versions.

3.10.17, 3.10.53, 3.14.52, & 4.1.15

 

Behavior:

The expected behavior is that the drives connect every time. Right now the behavior changes depending on the Linux kernel version. We have had the best luck with 3.10.17, which works intermittently: some boards will not bring up the PCIe device at all, some are intermittent, and some connect every time, for roughly a 25% failure rate. With the other kernels, the PCIe device comes up 100% of the time.

 

Linux kernels 3.10.53, 3.14.52, and 4.1.15 were able to bring up the PCIe component; however, loading the firewire_sbp2 driver failed. This occurs 100% of the time with these kernels.

 

Attached:

dmesg logs for failed and successful drive connections. At the end of the captures, I include the output of lsmod and lspci.

Original Attachment has been moved to: rmm_not_connect.cap.zip

Original Attachment has been moved to: rmm_connect.cap.zip


igorpadykov
NXP Employee

Hi Jason

The PCIe PHY settings are defined in the IOMUXC_GPR8 register. You can also look at AN4784, the PCIe Certification Guide for i.MX 6Dual/6Quad and i.MX 6Solo/6DualLite:

http://www.nxp.com/files/32bit/doc/app_note/AN4784.pdf

So if some kernel fails, you can check these settings.
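
For a quick comparison between a kernel that brings the link up and one that does not, a minimal sketch of a module that dumps the register is below (this is not BSP code; it assumes the standard "fsl,imx6q-iomuxc-gpr" syscon binding, and the IOMUXC_GPR8 offset comes from the kernel's imx6q-iomuxc-gpr.h header):

/* Minimal sketch (not from the BSP): dump IOMUXC_GPR8 so the PCIe PHY
 * settings can be compared between a working and a failing kernel.
 * Assumes the standard "fsl,imx6q-iomuxc-gpr" syscon binding. */
#include <linux/module.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>

static int __init gpr8_dump_init(void)
{
        struct regmap *gpr;
        u32 val;

        gpr = syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
        if (IS_ERR(gpr))
                return PTR_ERR(gpr);

        regmap_read(gpr, IOMUXC_GPR8, &val);
        pr_info("IOMUXC_GPR8 = 0x%08x\n", val);
        return 0;
}

static void __exit gpr8_dump_exit(void)
{
}

module_init(gpr8_dump_init);
module_exit(gpr8_dump_exit);
MODULE_LICENSE("GPL");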

Best regards
igor

jalden
Contributor I

Hello, is anyone available here?

igorpadykov
NXP Employee

Hello Jason

This device is not supported in the NXP BSPs; the standard support path would be NXP Professional Services:

http://www.nxp.com/support/nxp-professional-services:PROFESSIONAL-SERVICE

Best regards
igor

jalden
Contributor I

Thanks for your response, Igor. I do know the device is not supported; however, I believe PCIe and DMA are supported. The logging reveals a failure in a DMA read. Additionally, the FireWire driver works as expected as long as the DMA read is successful.

Is there anything I can do to ensure a successful read from DMA? 
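
For reference, the mapping-side check I am adding looks roughly like the sketch below (a hypothetical helper, not the stock firewire-sbp2 code); I want to rule out the streaming mapping itself failing before blaming the read-back:

/* Hypothetical helper (not the stock firewire-sbp2 code): verify that the
 * streaming mapping for the response buffer succeeded before the ORB is
 * handed to the controller. */
#include <linux/dma-mapping.h>

static int map_response_buffer(struct device *dev, void *resp, size_t len,
                               dma_addr_t *bus_addr)
{
        *bus_addr = dma_map_single(dev, resp, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, *bus_addr))
                return -ENOMEM;   /* mapping failed, nothing to read back */
        return 0;
}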

igorpadykov
NXP Employee

Hello Jason

PCIe is supported, but i.MX6 DMA with PCIe is not supported in the NXP BSPs. You can add such support yourself using the patches at:

https://community.freescale.com/docs/DOC-95014

Best regards
igor

jalden
Contributor I

Thanks Igor, I will give it a shot.

Jason

jalden
Contributor I

Hello Igor, 

I have dug deeper into the issue I am having with the FireWire drives. It appears that I am getting inconsistent readings from DMA. I modified the driver to repeat the read continuously if a failure occurs. I have also tried a variety of commands such as 'smp_mb()', 'sync', and "echo 3 > /proc/sys/vm/drop_caches" that in some cases would help establish a connection. Once a connection was established, I was able to connect to the drives just fine. The failure seems to come from an sbp2_login attempt: 'orb->response' is used as the buffer passed into dma_map_single, and the results are mixed. Half of the array is always consistent (elements 0 and 1); meanwhile, elements 2 and 3 come back as 0x00 in some cases.
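
As an experiment, the read-back sequence I am testing looks roughly like the sketch below (a hypothetical helper, not the stock sbp2 code); on a non-coherent ARM core like the i.MX6, the streaming buffer has to be synced back to the CPU before the response words are inspected:

/* Hypothetical read-back helper (not the stock firewire-sbp2 code): give
 * buffer ownership back to the CPU before looking at the response words.
 * The sync is what invalidates stale cache lines on a non-coherent core. */
#include <linux/kernel.h>
#include <linux/dma-mapping.h>

static void dump_response_words(struct device *dev, dma_addr_t response_bus,
                                u32 *response, size_t len)
{
        /* CPU may not touch the buffer until it is synced or unmapped */
        dma_sync_single_for_cpu(dev, response_bus, len, DMA_FROM_DEVICE);

        pr_info("response[2] = 0x%X response[3] = 0x%X\n",
                response[2], response[3]);

        dma_unmap_single(dev, response_bus, len, DMA_FROM_DEVICE);
}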

 

Do you have any ideas what would cause incomplete reading from DMA?

 

I have also attached the dmesg if needed.

Any help is appreciated,

Jason

 

*********************************  Successful Run ************************************ 

sbp2_login: size of response: 16
sbp2_send_management_orb: dma_map_single: DMA_FROM_DEVICE
sbp2_send_management_orb: dev id: 0xD219E868
sbp2_send_management_orb: orb->response_bus: 0x223752B8 ;function: 0x0
sbp2_send_management_orb: lu->tgt->management_agent_address = 0xFFFFF0030000
sbp2_send_management_orb: dma_map_single: DMA_TO_DEVICE
sbp2_send_management_orb: orb->base.request_bus: 0x22375298
sbp2_send_orb 0x22375298
complete_transaction 0x22375298
sbp2_status_write: spin_lock_irqsave
sbp2_status_write: sync
sbp2_status_write: spin_unlock_irqrestore
complete_managment_orb 0x223752B8
sbp2_send_management_orb:wait_for_completion_timeout: 2997 , orb->done = 0 ; timeout = 196611
sbp2_cancel_orbs:
sbp2_send_management_orb: dma_unmap_single: DMA_TO_DEVICE
sbp2_send_management_orb: dma_unmap_single: DMA_FROM_DEVICE
sbp2_send_management_orb: memcpy size = 16
sbp2_send_management_orb: orb->response[0] = 0x1000
sbp2_send_management_orb: orb->response[1] = 0xFFFFC0FF
sbp2_send_management_orb: orb->response[2] = 0x10F0
sbp2_send_management_orb: orb->response[3] = 0x3000000

 

*********************************  Failed Run ************************************ 

sbp2_login: size of response: 16
sbp2_send_management_orb: dma_map_single: DMA_FROM_DEVICE
sbp2_send_management_orb: dev id: 0xD219E868
sbp2_send_management_orb: orb->response_bus: 0x225FFFB8 ;function: 0x0
sbp2_send_management_orb: lu->tgt->management_agent_address = 0xFFFFF0030000
sbp2_send_management_orb: dma_map_single: DMA_TO_DEVICE
sbp2_send_management_orb: orb->base.request_bus: 0x225FFF98
sbp2_send_orb 0x225FFF98
complete_transaction 0x225FFF98
sbp2_status_write: spin_lock_irqsave
sbp2_status_write: sync
sbp2_status_write: spin_unlock_irqrestore
complete_managment_orb 0x225FFFB8
sbp2_send_management_orb:wait_for_completion_timeout: 2997 , orb->done = 0 ; timeout = 196611
sbp2_cancel_orbs:
sbp2_send_management_orb: dma_unmap_single: DMA_TO_DEVICE
sbp2_send_management_orb: dma_unmap_single: DMA_FROM_DEVICE
sbp2_send_management_orb: memcpy size = 16
sbp2_send_management_orb: orb->response[0] = 0x1000
sbp2_send_management_orb: orb->response[1] = 0xFFFFC0FF
sbp2_send_management_orb: orb->response[2] = 0x0
sbp2_send_management_orb: orb->response[3] = 0x0
