OpenCV on Sabre Lite board + OV5642 camera (Nit6X_5MP) ?


6,669 Views
gpsoft
Contributor I

Hi,

We bought this board and camera from boundarydevices.com. I am using OpenCV 2.4.2, which I successfully compiled on the board. The operating system is Linaro: Linaro 12.11 (GNU/Linux 3.0.35-2026-geaaf30e-02070-g705bf58 armv7l), downloaded from boundarydevices.com as well.

I tried a few examples using the camera (face detection), and they work fine with a USB webcam (a Logitech HD cam). However, it doesn't work with the OV5642 camera. The camera works with gstreamer, but it doesn't work with the standard Linux video utilities, so I guess it's not compatible with video4linux2. I tried to run luvcview and got no picture from it, although it reports that it supports some video modes.

Is it possible to make this camera work in OpenCV, so I can grab uncompressed images in a loop?

Thank you,

Peter

1 Solution
andre_silva
NXP Employee

Hi Peter,

Some hacks in OpenCV will be needed to make it point to the FSL camera driver instead of the default V4L from Linux. I'm going to run some tests and will post the progress here.

Regards,

Andre

16 Replies
EricNelson
Senior Contributor II

Thanks much, Andre!

Ironically, mahyar@boundarydevices.com just got a gstreamer solution working last week. It uses ffmpegcolorspace to do the conversion to YUV.

Now if only we could use the GPU to do that piece...

Do you know if the Canny edge detector (or any of OpenCV) can work natively in the YUV color-space?

andre_silva
NXP Employee

Hi Eric,

We are working to see whether the IPU can do this conversion for us. The piece of code in question I extracted from the OpenCL SDK I am developing, which will be available to customers soon; I do all the colorspace conversions on the GPU. You can see the demo video here:

https://www.youtube.com/watch?v=jErkzxcwOXA

The only thing I am doing on the CPU in this video is putting zeros in every color but blue.
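Numerically, one reading of that CPU step (zeroing every pixel that is not predominantly blue) looks like this on a synthetic BGR frame; the blue-dominance test is an assumption about what the demo does, not the actual SDK code:

```python
import numpy as np

# Synthetic BGR frame: left half blue, right half red
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, :4, 0] = 200  # B channel on the left
img[:, 4:, 2] = 200  # R channel on the right

# Zero every pixel whose blue channel does not dominate
b = img[:, :, 0].astype(np.int32)
g = img[:, :, 1].astype(np.int32)
r = img[:, :, 2].astype(np.int32)
mask = (b > g) & (b > r)
out = np.where(mask[:, :, None], img, 0).astype(np.uint8)
```

This per-pixel test is exactly the kind of embarrassingly parallel work that maps well to a GPU kernel.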

Unfortunately I am not aware of any function working natively in YUV, sorry =(

regards,

Andre

EricNelson
Senior Contributor II

Thanks Andre,

Why use the IPU instead of the GPU? The GPU doesn't have those nagging issues with line length and such.

My experience using the glimagesink gstreamer component showed that it was relatively straightforward to do a color-space conversion to RGB (to the frame buffer in that case).

I look forward to seeing the OpenCL code in action!

Regards,

Eric

andre_silva
NXP Employee

I said IPU because I am using it through V4L and getting frames in YUV format; maybe I could get them directly in RGB.

regards,

Andre

EricNelson
Senior Contributor II

Hi Andre,

Unless I'm missing something, the V4L2 buffers are guaranteed to be in contiguous physical memory, and can be mapped to an OpenGL surface in the same way that VPU frames are co-opted by the glimagesink code.

Once there, the entire OpenGL (and presumably OpenCL) API is available directly.

andre_silva
NXP Employee

I was investigating which changes we would need to make to get OpenCV working, and here is what I found out:

If you go to ../OpenCV-2.4.X/modules/highgui/src, you will find the source code for every capture API that OpenCV can be built with. This means that OpenCV doesn't look for the library's object file (.so); OpenCV uses the library's API to provide its own capture functions.

You can also see that OpenCV has two files called cap_libv4l.cpp and cap_v4l.cpp. These are the files that use the v4l-utils API, which is different from the v4l interface used by the onboard camera (CSI). For this camera, Freescale provides the mfw_v4l2_capture driver, based on the mfw_v4l2 API; it is different from v4l2, but it also creates a /dev/videoX node.

So, if we want to get the onboard camera working with OpenCV, we need to create capture functions based on the mfw_v4l2 API. That means creating two new files in modules/highgui/src, called cap_mfw_libv4l and cap_mfw_v4l, which would be based on Freescale's API.

Those are the changes needed to get the onboard camera working.
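For illustration only, the contract such a capture backend has to satisfy is small: open the device, grab a raw frame, convert it for OpenCV, release. A Python sketch of that shape, with every mfw-specific detail stubbed out; all names here are hypothetical, not Freescale's actual API:

```python
class MfwV4l2CaptureSketch:
    """Illustrative shape of a highgui capture backend; names are hypothetical."""

    def __init__(self, device="/dev/video0"):
        self.device = device
        self.opened = False

    def open(self):
        # Real code would open() the device node and issue the mfw_v4l2
        # format/streaming ioctls here instead of this stub.
        self.opened = True
        return self.opened

    def grab(self):
        # Real code would dequeue a filled buffer from the driver.
        return self.opened

    def retrieve(self):
        # Real code would convert the raw (typically YUV) buffer to BGR
        # so the rest of OpenCV can consume it.
        return b"\x00" if self.opened else None

    def release(self):
        self.opened = False


cam = MfwV4l2CaptureSketch()
cam.open()
got = cam.grab()
frame = cam.retrieve()
cam.release()
```

In the real cap_mfw_* files this split (grab vs. retrieve) matters because highgui calls them separately to keep multi-camera grabs synchronized.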

regards,

Andre


varsmolta
Contributor V

Thanks for this info. I just got my hands on the OV5642 camera. Are you coding up the capture functions needed by OpenCV? If so, would you mind sharing them here? If not, I'll start coding these functions. Thanks.

andre_silva
NXP Employee

I'm not coding it. Since OpenCV is not supported by FSL, this porting should be done by the customer, and on my side it would take too much time to implement. But here is my suggestion: implement the gstreamer pipeline for the camera capture; once you have the camera's frames in a buffer, you can keep using OpenCV to process them. When I worked with the Kinect device, I didn't use OpenCV for the capture, only to play with the images.

regards,

Andre

varsmolta
Contributor V

There is already support for gstreamer i.MX6 image capture:

https://community.freescale.com/docs/DOC-93789

So are you saying to use this in a pipeline with gst-opencv plugin?

http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad-plugins/html/gst-plugins-ba...

andre_silva
NXP Employee

Take a look at this blog: www.imxcv.blogspot.com. There is a post related to streaming video to a texture; I used a gst pipeline to get the frames from the video file. You should do the same, just modifying the pipeline to get the frames from the camera instead.

andre_silva
NXP Employee

I never touched those plugins. What I meant was that you implement the capture pipeline for the OV5642 camera, and once you have the buffer you can use normal OpenCV to process it.

varsmolta
Contributor V

Hi, any progress on getting OpenCV to work with the FSL camera driver? I am aiming to use it with the OV5642 and was wondering if you have been able to make it work? Thanks.

andre_silva
NXP Employee

I was working on installing the latest OpenCV in our latest BSP. I got it working now, and I'm going to start investigating how to get the onboard camera working with it. I will post the progress here.
