Getting around the FAT32 4GB file size limit
Solution 1:
Unfortunately, there is no way to copy a >4 GB file to a FAT32 file system, and a quick Google search says your PS3 will only recognize FAT32 file systems.
Your only option is to use smaller files: chop them into pieces before moving them, or compress them.
Alternatively, I would try a networked file-sharing solution.
Solution 2:
Natively, you cannot store files larger than 4 GB on a FAT file system. The 4 GB barrier is a hard limit of FAT: the file system uses a 32-bit field to store the file size in bytes, and 2^32 bytes = 4 GiB. (Actually, the real limit is 4 GiB minus one byte, or 4 294 967 295 bytes, because you can have files of zero length.) So you cannot copy a file that is 4 GiB or larger to any plain-FAT volume. exFAT solves this by using a 64-bit field to store the file size, but that doesn't really help you, as it requires reformatting the partition.
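You can check that arithmetic yourself; a one-liner (bash assumed) computes the limit from the field width:

```shell
# Maximum value of an unsigned 32-bit byte counter:
echo $(( (1 << 32) - 1 ))    # 4294967295 bytes = 4 GiB minus one byte
```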
However, if you split the file into multiple files and recombine them later, that will allow you to transfer all of the data, just not as a single file (so you'll likely need to recombine the file before it is useful). For example, on Linux you can do something similar to:
$ truncate -s 6G my6gbfile
$ split --bytes=2GB --numeric-suffixes my6gbfile my6gbfile.part
$ ls
my6gbfile my6gbfile.part00 my6gbfile.part01
my6gbfile.part02 my6gbfile.part03
$
Here, I use truncate to create a sparse file 6 GiB in size. (Just substitute your own file.) Then I split it into segments approximately 2 GB each; the last segment is smaller, but that does not present a problem in any situation I can come up with. Instead of --bytes=2GB, you can also use --number=4 if you wish to split the file into four equal-size chunks; each chunk would then be 1 610 612 736 bytes, or exactly 1.5 GiB (about 1.6 GB).
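As a quick sanity check on that chunk size (bash arithmetic, using the 6 GiB figure from above):

```shell
# 6 GiB divided into four equal chunks:
echo $(( 6 * 1024 * 1024 * 1024 / 4 ))   # 1610612736 bytes per chunk
```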
To combine them, just use cat (concatenate):
$ cat my6gbfile.part* > my6gbfile.recombined
Confirm that the two are identical:
$ md5sum --binary my6gbfile my6gbfile.recombined
58cf638a733f919007b4287cf5396d0c *my6gbfile
58cf638a733f919007b4287cf5396d0c *my6gbfile.recombined
$
This can be used with any maximum file size limitation.
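For FAT32 specifically, you can split right at the limit with --bytes=4294967295. The same round trip at small sizes (so it runs in a second) shows the mechanics, with cmp as a quicker identity check than md5sum; file names here are just examples:

```shell
truncate -s 10M demo.bin                     # small sparse stand-in file
split --bytes=4M --numeric-suffixes demo.bin demo.bin.part
cat demo.bin.part* > demo.bin.joined         # recombine the pieces
cmp demo.bin demo.bin.joined && echo OK      # cmp is silent when identical
```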
Many file archivers also support splitting into multi-part archive files. Earlier this was used to fit large archives onto floppy disks, but these days it can just as well be used to overcome maximum file size limitations like this one. File archivers also usually support a "store" or "no compression" mode, which you can use when you know the contents of the file cannot usefully be compressed any further, as is often the case with already-compressed archives, movies, music and so on. In such a mode the archive simply acts as a container giving you the file-splitting ability: the actual data is copied into the archive file verbatim, saving on processing time.
Solution 3:
Expanding on Michael's idea, many compression utilities/formats support a "store" mode, where they don't actually do any compression. Most of those same utilities also support splitting into multiple archives. Combine the two, and you can split a file without wasting a bunch of time compressing it, especially if it's non-compressible data. I've used this technique myself to overcome the exact problem you're having.
One big advantage of doing it this way is that the compression format acts as a wrapper, keeping you from accidentally doing anything with only one part of the file. It also tends to be simpler for non-technical users. (Not everyone knows how to cat files together, but almost everyone can open a zip.) It's also very obvious that it's a multipart file, since the file is formatted as such. Loose files may not look like a multipart file, especially if they lose their filenames somehow.
Of course, if you actually want to be able to work on the separate files, this doesn't work as well. This may be important if you don't have any "scratch space" to write the final file to. In that case, you should just split the file.
Here's an example of splitting a file using zip on Linux:
zip -0 -s 3g out.zip foobar
# "-0" sets the compression level to 0, or store
# "-s 3g" sets the split size to 3 GB
# Add "-r" if "foobar" is a directory
# The output will be "out.zip", and "out.z01", "out.z02", and so on...
If you're more of a GUI person, my go-to has always been 7-Zip.
Solution 4:
Another option not stated would be to use partitions. A USB flash drive is most often treated by the OS as a hard drive. Resize the FAT32 partition and create an exFAT partition (or one with another filesystem that supports large files) that is big enough to hold the file.
If you ever need to access the large file in place on the USB drive, this is probably the best solution. If all you need to do is transfer the file, and don't mind having to copy it to the hard drive to use it, the splitting solution is probably better.
This won't work if your USB drive is set up as a "super floppy," but this is increasingly uncommon. You can convert a "super floppy" into the hard-drive format using a partitioning tool such as fdisk or gparted, but it will probably involve copying the files off, converting, copying them back, and then making the drive bootable again.
Solution 5:
As answered by others, splitting the file and joining it works. But the easiest solution is to use the ext2/3/4 file system for your USB drive; it is the native filesystem for Linux. On Windows, use Ext2Fsd to read the data; it also supports write mode. Just install the free app on Windows and access the files: no splitting, no joining.
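A minimal sketch of the formatting step, demonstrated on an image file so nothing is destroyed (on the real stick you would target its device node, e.g. /dev/sdX1, with sudo; mkfs.ext4 comes with e2fsprogs):

```shell
truncate -s 64M usb.img     # stand-in for the USB drive
mkfs.ext4 -q -F usb.img     # -F lets mkfs run on a plain file instead of a device
```

Once the real drive is formatted this way, Linux mounts it natively, and on the Windows side Ext2Fsd makes it show up like any other drive.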