Hello, all:
I tested SATA performance on our custom board with an Intel SSD and found a performance difference between kernel versions.
My SSD is INTEL SSDSA2CT040G3.
For reference, I also benchmarked the drive under Windows 7 64-bit with CrystalDiskMark 5.1.2.
The test commands I use on the target board (see the note after the commands):
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
Write test:
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fsync
Read test:
echo 3 > /proc/sys/vm/drop_caches
dd if=tempfile of=/dev/null bs=1M count=1024
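A note on the commands above: the governor line only pins cpu0, and a plain dd read goes through the page cache. A variant worth trying to rule both out, assuming GNU coreutils dd (BusyBox dd may not accept iflag/oflag) on the quad-core i.MX6Q:

# Pin all four cores to the performance governor, not just cpu0
for g in /sys/devices/system/cpu/cpu[0-3]/cpufreq/scaling_governor; do
    echo performance > $g
done
# Direct-I/O tests: bypass the page cache so the numbers reflect
# the SATA/AHCI path rather than caching behaviour
dd if=/dev/zero of=tempfile bs=1M count=1024 oflag=direct conv=fsync
dd if=tempfile of=/dev/null bs=1M count=1024 iflag=direct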
Test results (rootfs is Buildroot):
Linux 3.14.52:
Write:
1073741824 bytes (1.0GB) copied, 26.856296 seconds, 38.1MB/s
Read:
1073741824 bytes (1.0GB) copied, 13.097736 seconds, 78.2MB/s
Linux 3.10.53:
Write:
1073741824 bytes (1.0GB) copied, 28.990223 seconds, 35.3MB/s
Read:
1073741824 bytes (1.0GB) copied, 8.182314 seconds, 125.1MB/s
Linux 3.0.35:
Write:
1073741824 bytes (1.0GB) copied, 36.756555 seconds, 27.9MB/s
Read:
1073741824 bytes (1.0GB) copied, 10.264888 seconds, 99.8MB/s
As you can see, the read speed in Linux 3.14.52 is about 37% lower than in Linux 3.10.53 (78.2 MB/s vs. 125.1 MB/s).
Any ideas?
BR,
Richard
Hi Richard,
You can check which bus master is holding DDR bandwidth with a profiling tool. The Yocto Project recipe imx-test provides a Multi-Mode DDR Controller (MMDC) profiling tool, mxc_oprofile_test. Run it as /unit_tests/mmdc2 <master>, where <master> is one of the AXI IDs described in Table 44-10 ("i.MX 6Dual/6Quad AXI ID") of the i.MX6 Reference Manual.
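For example, a quick way to profile DDR traffic while your read test is in flight (SUM aggregates all masters; the dd line is your read test from above):

# Start the read test in the background, then profile DDR traffic
echo 3 > /proc/sys/vm/drop_caches
dd if=tempfile of=/dev/null bs=1M count=1024 &
/unit_tests/mmdc2 SUM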
Best regards
igor
Hello, Igor:
Thanks for your reply.
I ran the mmdc2 test as follows:
In Linux 3.14.52:
=========================================================
root@edm-fairy-imx6:~# /unit_tests/mmdc2 SUM
i.MX6Q detected.
MMDC SUM
MMDC new Profiling results:
***********************
Measure time: 501ms
Total cycles count: 264084408
Busy cycles count: 69431977
Read accesses count: 873416
Write accesses count: 1657
Read bytes count: 55592984
Write bytes count: 53024
Avg. Read burst size: 63
Avg. Write burst size: 32
Read: 105.82 MB/s / Write: 0.10 MB/s Total: 105.92 MB/s
Utilization: 5%
Overall Bus Load: 26%
Bytes Access: 63
root@edm-fairy-imx6:~# /unit_tests/mmdc2 DSP1
i.MX6Q detected.
MMDC DSP1
MMDC new Profiling results:
***********************
Measure time: 500ms
Total cycles count: 264083472
Busy cycles count: 69405646
Read accesses count: 864288
Write accesses count: 0
Read bytes count: 55314432
Write bytes count: 0
Avg. Read burst size: 64
Avg. Write burst size: 0
Read: 105.50 MB/s / Write: 0.00 MB/s Total: 105.50 MB/s
Utilization: 4%
Overall Bus Load: 26%
Bytes Access: 64
=========================================================
In Linux 3.10.53:
=========================================================
root@edm-fairy-imx6:~# /unit_tests/mmdc2 SUM
i.MX6Q detected.
MMDC SUM
MMDC new Profiling results:
***********************
Measure time: 1000ms
Total cycles count: 528087080
Busy cycles count: 138780500
Read accesses count: 1730322
Write accesses count: 498
Read bytes count: 110729160
Write bytes count: 15936
Avg. Read burst size: 63
Avg. Write burst size: 32
Read: 105.60 MB/s / Write: 0.02 MB/s Total: 105.61 MB/s
Utilization: 4%
Overall Bus Load: 26%
Bytes Access: 63
root@edm-fairy-imx6:~# /unit_tests/mmdc2 DSP1
i.MX6Q detected.
MMDC DSP1
MMDC new Profiling results:
***********************
Measure time: 1001ms
Total cycles count: 528091376
Busy cycles count: 138768053
Read accesses count: 1728000
Write accesses count: 0
Read bytes count: 110592000
Write bytes count: 0
Avg. Read burst size: 64
Avg. Write burst size: 0
Read: 105.36 MB/s / Write: 0.00 MB/s Total: 105.36 MB/s
Utilization: 4%
Overall Bus Load: 26%
Bytes Access: 64
=========================================================
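(Sanity check on the profiler numbers, assuming a 64-bit DDR3 bus that transfers 16 bytes per MMDC clock: for the 3.14.52 SUM run, Overall Bus Load = busy cycles / total cycles = 69431977 / 264084408 ≈ 26%, and Utilization = (read + write bytes) / (busy cycles × 16) = 55646008 / 1110911632 ≈ 5%, matching what the tool reports.)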
As the results show, DSP1 accounts for essentially all of the DDR read bandwidth (about 105 MB/s) in both Linux 3.10.53 and Linux 3.14.52, so the DDR load is the same on both kernels.
It therefore does not look like DDR contention is the root cause of the poor SATA performance.
How can I fix this issue?
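In the meantime I will cross-check the drive and the link from user space, in case the regression is in the AHCI/ATA layer rather than DDR (assuming hdparm is in the rootfs and the drive is /dev/sda; the ata_link node name may differ on your system):

hdparm -t /dev/sda                      # buffered sequential read timing
dmesg | grep -i ahci                    # AHCI init and link negotiation messages
cat /sys/class/ata_link/link1/sata_spd  # negotiated SATA link speed (e.g. 3.0 Gbps)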
Thank you!
BR,
Richard