MQXUG; I/O drivers "not covered in this book", help understanding _io_fopen

Solved


Jump to solution
3,182 Views
CarlFST60L
Senior Contributor II

Copied from the MQXUG: 

--- 

2.10 I/O drivers
I/O drivers are an optional component at the BSP level. They consist of formatted I/O
[...] drivers are not described in this book.

2.10.1 Formatted I/O
MQX provides a library of formatted I/O functions that is the API to the I/O subsystem.

2.10.2 I/O subsystem
You can dynamically install I/O device drivers, after which any task can open them.

 

---

 

Copied from http://www.freescale.com/webapp/sps/site/overview.jsp?code=MQXRTOS

 

  • Code Reuse – Freescale MQX RTOS provides a framework with a simple API to build and organize the features across Freescale’s broad portfolio of embedded processors.
  • Intuitive API – Writing code for Freescale MQX RTOS is straight forward with a complete API and available reference documentation.

 

Not sure about anyone else, but I thought this meant that there was documentation to support the above statements?

If it's not described in 'this book', where is it?

Is there some document we are missing that covers all the I/O control, drivers, and API functions, and how it all works?

Is the only way to learn the inner workings of MQX to read/reverse-engineer the source code?

 

As an example, the I2C example uses fopen (_io_fopen). Once we start tracing everything back, you end up needing to understand the kernel (the first thing fopen does is call _get_kernel_data, then it starts using queues controlled via the kernel). Are there any details on exactly how MQX handles everything on the back end?

 

 

Can someone clarify what this section of code is doing and, more importantly, why...?

 

//--------------------------

First things to happen inside the _io_fopen function:

...

   /* Serialize access to the kernel's device list */
   _lwsem_wait((LWSEM_STRUCT_PTR)&kernel_data->IO_LWSEM);

   /* Walk the circular list of installed devices, starting after the head */
   dev_ptr = (IO_DEVICE_STRUCT_PTR)((pointer)kernel_data->IO_DEVICES.NEXT);
   while (dev_ptr != (pointer)&kernel_data->IO_DEVICES.NEXT) {
      dev_name_ptr = dev_ptr->IDENTIFIER;
      tmp_ptr      = (char _PTR_)open_type_ptr;
      /* Compare the open name against the device identifier */
      while (*tmp_ptr && *dev_name_ptr && (*tmp_ptr == *dev_name_ptr))
      {
         ++tmp_ptr;
         ++dev_name_ptr;
      } /* Endwhile */
      if (*dev_name_ptr == '\0') {
         /* The whole identifier matched (as a prefix of the open name) */
         break;
      } /* Endif */
      dev_ptr = (IO_DEVICE_STRUCT_PTR)((pointer)dev_ptr->QUEUE_ELEMENT.NEXT);
   } /* Endwhile */

   _lwsem_post((LWSEM_STRUCT_PTR)&kernel_data->IO_LWSEM);

   /* Back at the list head: no installed device matched */
   if (dev_ptr == (pointer)&kernel_data->IO_DEVICES.NEXT) {
      return(NULL);
   } /* Endif */

 

///// 

 

While we're at it, it would be great to have something (it should really be in the MQXRM) on all the ioctl options: how to use them, and how to create your own...

Message Edited by CarlFST60L on 2009-02-19 04:03 AM
Message Edited by CarlFST60L on 2009-02-19 04:57 AM
0 Kudos
1 Solution
1,600 Views
JuroV
NXP Employee

Hi Carl.

The code you have pasted here is quite easy to explain. Imagine that several tasks can install drivers at the same time. Since the structure is unified in the kernel, you need mutual exclusion so that the structure of drivers (the list of drivers) in the kernel is accessed by one task at a time. That is the explanation of the _lwsem_wait at the beginning.

 

Then you have to search for the corresponding driver in the kernel list. The kernel uses QUEUES, but please don't confuse these with message queues. This is a kernel-internal structure for handling a list of structures (a chain of structures). I think for a programmer it is not important how the stuff works, but you can still use CodeWarrior to see how the queue-of-structures macros work. Perhaps one note helps you: the list is created in a circle, so there is no 'last structure'. In the core of the loop, two strings (open_type_ptr and dev_ptr->IDENTIFIER) are compared.

 

At the end, we check whether we found something just by testing that we did not reach the end of the list, i.e. that we are not back at the beginning of the mentioned circle of structures (the beginning of the circle is up to you to choose; in this case it is kernel_data->IO_DEVICES.NEXT).

 

I hope this helped you a bit. If you have further question(s), just ask here.

Message Edited by JuroV on 2009-03-04 08:22 AM
Message Edited by JuroV on 2009-03-04 08:31 AM

View solution in original post

0 Kudos
2 Replies
1,600 Views
CarlFST60L
Senior Contributor II

Thanks for the reply. 

 

We have plenty of questions; however, I will wait until the I/O documentation scheduled for this month is released. Hopefully that will cover our requirements regarding writing SPI drivers and Flash write routines, which have not been covered to date. I would think that as they are core functions of the processor they should be covered in this release.

 

Thanks,

Carl 

0 Kudos