FATFs SDHC issue (doesn't write past 4GB)


5,210 Views
dachancellor
Contributor II

I have encountered a problem which I believe is related to the implementation of FATFs.

(I use PE components provided by Erich Styger, so I'm not sure if these are ones that come with CW or I have downloaded from Erich's site.)

I have a program that continuously writes 8MB data files.

I've noticed on multiple occasions that once we reach 4GB of data (512 files), the next file fails to write (assertion failed error) and the file system becomes corrupted.

It seems like it is wrapping around and overwriting the FAT instead of correctly continuing to the next location.

Any ideas?

Has anyone written more than 4GB worth of data to an SD card?
I've had this error occur on multiple SD cards, of different brands and sizes (32/64GB).

I have written past the 4GB mark on a card using the SDHC directly without FATFs.

FATFs is also quite popular, so I would assume the 4GB limit is not inherent to that library, and that the issue lies in the implementation of it.

Thanks in advance!

V/R,

Chandler

7 Replies

1,258 Views
dachancellor
Contributor II

I believe I've found the culprit but still need to test the theory.

The problem lies in the driver handling both SD and SDHC cards.

Unfortunately, it is a terrible implementation.

For SD cards, the controller wants the byte address.

For SDHC cards, the controller wants the sector address.

FATFs works with sectors.

The logical solution would be to use the sector number provided, and if the card is not high capacity, then multiply by sector size to get the correct byte address.

Unfortunately, the code passes sector number * sector size (512 here), then tests for high capacity, and if true shifts >> 9 to divide by 512.

Not only is this a waste of cycles, multiplying by 512 only to divide by 512 again, it also causes an issue once you reach sector 8388608, which corresponds to my 4GB limitation.

My solution modifies the call to TransferBlock within disk_write, removing the multiply by 512.

TransferBlock calls ByteToCardAddress.

This is modified to keep the sector address if high capacity, or shift << 9 if not high capacity.

I believe this will work, but will update this post once I have verified.
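The fix described above can be sketched as follows. This is my own illustration of the corrected translation, with a hypothetical function name modeled on the ByteToCardAddress routine mentioned above, not the actual Processor Expert code:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch of the fixed address translation. FATFs always
 * supplies a sector number; only standard-capacity (SDSC) cards need
 * it converted to a byte address before issuing the read/write command. */
static uint32_t SectorToCardAddress(uint32_t sector, bool highCapacity)
{
    if (highCapacity) {
        /* SDHC: the command argument is the block (sector) number itself. */
        return sector;
    }
    /* SDSC: the command argument is a byte address. Standard-capacity
     * cards top out at 2GB, so sector < 4194304 and this cannot overflow. */
    return sector << 9; /* sector * 512 */
}
```

With this shape there is no multiply/divide round trip for SDHC, so no 32-bit overflow is possible on the high-capacity path.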

V/R, Chandler


1,258 Views
dachancellor
Contributor II

Marc,

The FATFs version used is 8a.

That blurb about fixing truncation when the file size is close to 4GB (the FAT32 file size limit) doesn't apply here.

My files are only 8MB, and the 4GB limit is related to overflowing a 32-bit number (explained further below).

Mark,

I understand that FAT works in sectors, as do SDHC cards, whereas the low capacity SD cards work in bytes.

I agree that a multiply by 512 in the low-level routines is required for compatibility with these older cards.

The issue is in the implementation.

I just created a new bareboard project with Processor Expert.

I added a FAT_FileSystem component, which references a FatFsMemSDHC component.

I then generated the source code.

All of the issues can be found in FATM1.c.

Within both the disk_read and disk_write functions, there are calls to SD_TransferBlock, which pass sector*FATM1_BLOCK_SIZE.

Within SD_TransferBlock, there is a call to SD_ByteToCardAddress, which checks if the card is HighCapacity and does >> 9 if it is.

By this logic, the assumption is that the card is low capacity, so a multiply by 512 is done when passing the value to SD_TransferBlock.

Then, if the card is high capacity, a divide by 512 is done.

This is incorrect for two reasons: one logical, one performance-related.

Regarding performance...

In the case of high capacity cards, we are always multiplying by 512, then dividing by 512.

It makes more sense to only multiply by 512 if the card is NOT HighCapacity.

The true logic bug exists once we need a sector past 8388607 on an SDHC.

The initial multiply by 512 overflows the 32-bit value, wrapping around to 0.

The divide by 512 leaves it at 0, and the FAT table gets overwritten.
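The wraparound is easy to reproduce in isolation. This snippet is my own reconstruction of the arithmetic described above, not the driver code itself:

```c
#include <stdint.h>

/* Reconstruction of the buggy sequence: compute sector * 512 as a
 * 32-bit value, then shift >> 9 back down on the SDHC path. At sector
 * 8388608 (8388608 * 512 = 2^32, i.e. the 4GB boundary) the product
 * wraps to 0, so the final sector is 0 and the write lands on the FAT. */
static uint32_t BuggyCardAddress(uint32_t sector)
{
    uint32_t byteAddress = sector * 512u; /* overflows for sector >= 8388608 */
    return byteAddress >> 9;              /* SDHC path divides back down */
}
```

Sector 8388607 still survives the round trip, which is why everything works right up until the 4GB mark.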

I changed the calls to SD_TransferBlock to pass only the sector, not sector * FATM1_BLOCK_SIZE.

I modified SD_ByteToCardAddress to << 9 if not a HighCapacity card.

With these changes, I'm still compatible with low capacity cards, and I remove the unnecessary multiply and divide by 512 for SDHC.

I'm currently filling an SD card, and I suspect it will have no issues getting past the 4GB data limit that I saw previously.

V/R,

Chandler

1,258 Views
mjbcswitzerland
Specialist V

Hi

As I stated, "as well as causing overflows on 32 bit representation as you found". (8388607 + 1) * 512 = 2^32, a 32-bit overflow.

I don't think this is FATFs itself, since this problem is not a known one, and the code you refer to doesn't seem to have anything to do with the FATFs code.

Maybe someone ported a part of it for the Freescale SDHC interface and made a conceptual mistake in the process (?)

Processor Expert gives you something to start with but is maybe not always a silver bullet.

All my FAT work is done with utFAT which has more functionality and supports all Kinetis parts.

Regards

Mark


1,258 Views
dachancellor
Contributor II

I agree.

I never thought it was an issue with FATFs, as an issue like that would have been caught a long time ago.

It was indeed just an issue with implementing the SDHC interface and FATFs.

I have tested my fixes and they solved the issue. (I've currently written over 7GB, and there was no rollover at 4GB like before.)

And I agree that Processor Expert is only a start, though it is a good one.

I have several components frozen (do not regenerate source), as I've had to make small mods and fixes in multiple components.

I know there is an issue with the NFC code, where the condition tested uses < instead of <=, which resulted in the last block throwing an error...
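The NFC code itself isn't shown in this thread, but a bounds check of this shape reproduces that symptom. The names and signature here are purely illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical illustration of the < vs <= off-by-one described above,
 * assuming blocks are numbered 0..lastBlock inclusive. */
static bool BlockInRangeBuggy(uint32_t block, uint32_t lastBlock)
{
    return block < lastBlock;  /* wrongly rejects the final valid block */
}

static bool BlockInRangeFixed(uint32_t block, uint32_t lastBlock)
{
    return block <= lastBlock; /* accepts every block, 0..lastBlock */
}
```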

Thanks, everyone, for all of your input!

V/R,

Chandler


1,258 Views
danielecortella
Contributor V

Hi, I have seen your posts. If I understand correctly, I have the same problem. I'm using FATFs (the latest version, taken from the site). When using SDHC it is necessary to address the card by sector address and not by byte, so you solved this by shifting the sector calculated by FATFs with << 9? Is that right? And what if you pass 4GB? Thanks


1,258 Views
mjbcswitzerland
Specialist V

Hi

Which version of the FATFs were you using? Is it an officially maintained one or one that has been modified for other reasons?

FAT works with a basic unit of a sector, and I have never seen an implementation that worked with bytes (including the FATFs versions that I have seen). The standard method is to multiply the sector number by 512 in the lowest-level routines when reading/writing low capacity SD cards, for compatibility with the older read/write commands, and not do anything at higher layers, since that would also make the code incompatible with other storage media (like NAND Flash drivers or USB-MSD drivers), as well as causing overflows on 32-bit representation, as you found. It sounds as though you had a version that someone messed around with for some reason.

Regards

Mark


1,258 Views
bowerymarc
Contributor V

I wonder which version of FATFs the component is using? I saw this on the elm-chan FatFs site:

R0.10, Oct 02, 2013
Added selection of character encoding on the file. (_STRF_ENCODE)
Added f_closedir().
Added forced full FAT scan for f_getfree(). (_FS_NOFSINFO)
Added forced mount feature with changes of f_mount().
Improved behavior of volume auto detection.
Improved write throughput of f_puts() and f_printf().
Changed argument of f_chdrive(), f_mkfs(), disk_read() and disk_write().
Fixed f_write() can be truncated when the file size is close to 4GB.
Fixed f_open(), f_mkdir() and f_setlabel() can return incorrect error code.