SSHFS mount to AWS Transfer for SFTP: cannot create regular file
I'm trying to set up an SSHFS mount point via Amazon's new Transfer for SFTP service. I can sftp into the endpoint just fine and can get/put files.
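For reference, a plain sftp session against the same endpoint (redacted here, same as in the mount command below) behaves normally; a rough sketch of what I mean by get/put working:
~$ sftp [email protected]
sftp> put temp.txt
sftp> get temp.txt
sftp> quit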
I can also mount it using sshfs, but copying and moving files onto the mount fails with errors: every attempt creates a 0-byte file. I have no problem using rm to remove them. What is also strange is that if I issue a second cp of the same file, it asks if I want to overwrite, and then the file is there perfectly.
Here are some examples with SSHFS debug.
Note that I'm doing everything as root:
~$ sshfs -o workaround=all -o reconnect -o delay_connect -o sshfs_sync \
-o sync_readdir -o no_readahead -o debug -o noauto_cache \
-o cache=no [email protected]:/my-bucket /mnt/s3
FUSE library version: 2.9.7
nullpath_ok: 1
nopath: 1
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.26
flags=0x001ffffb
max_readahead=0x00020000
INIT: 7.19
flags=0x00000011
max_readahead=0x00020000
max_write=0x00020000
max_background=0
congestion_threshold=0
unique: 1, success, outsize: 40
unique: 2, opcode: ACCESS (34), nodeid: 1, insize: 48, pid: 2285
access / 04
unique: 2, success, outsize: 16
unique: 3, opcode: LOOKUP (1), nodeid: 1, insize: 47, pid: 2285
LOOKUP /.Trash
getattr /.Trash
X11 forwarding request failed on channel 0
unique: 3, error: -2 (No such file or directory), outsize: 16
unique: 4, opcode: LOOKUP (1), nodeid: 1, insize: 52, pid: 2285
LOOKUP /.Trash-1000
getattr /.Trash-1000
unique: 4, error: -2 (No such file or directory), outsize: 16
This does seem to mount successfully. I can ls the mount and get a quick response.
As you can see, I've turned off all caching and async I/O via the options, and I've enabled all the available workarounds. I tried a bunch of different combinations of these options.
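The simplest baseline to compare against is a bare mount with only sshfs's own verbose flags (debug and sshfs_debug) and none of the cache or workaround tweaks; the options above are just where I ended up after experimenting:
~$ sshfs -o debug -o sshfs_debug [email protected]:/my-bucket /mnt/s3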
But when I try to cp anything:
~$ cp temp.txt /mnt/s3
cp: cannot create regular file './temp.txt': No such file or directory
unique: 86, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 18222
getattr /
unique: 86, success, outsize: 120
unique: 87, opcode: LOOKUP (1), nodeid: 1, insize: 49, pid: 18222
LOOKUP /temp.txt
getattr /temp.txt
unique: 87, error: -2 (No such file or directory), outsize: 16
unique: 88, opcode: LOOKUP (1), nodeid: 1, insize: 49, pid: 18222
LOOKUP /temp.txt
getattr /temp.txt
unique: 88, error: -2 (No such file or directory), outsize: 16
unique: 89, opcode: CREATE (35), nodeid: 1, insize: 65, pid: 18222
create flags: 0x80c1 /temp.txt 0100644 umask=0022
unique: 89, error: -2 (No such file or directory), outsize: 166
What is strange is that it does create a 0-byte file after a short delay:
~$ ls -l /mnt/s3
-rwxr--r-- 1 root root 0 Mar 7 12:19 temp.txt
If I issue a second cp, it will overwrite without issue:
~$ cp temp.txt /mnt/s3
cp: overwrite './temp.txt'? y
unique: 65, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 18131
getattr /
unique: 65, success, outsize: 120
unique: 66, opcode: LOOKUP (1), nodeid: 1, insize: 49, pid: 18131
LOOKUP /temp.txt
getattr /temp.txt
NODEID: 6
unique: 66, success, outsize: 144
unique: 67, opcode: LOOKUP (1), nodeid: 1, insize: 49, pid: 18131
LOOKUP /temp.txt
getattr /temp.txt
NODEID: 6
unique: 67, success, outsize: 144
unique: 68, opcode: OPEN (14), nodeid: 6, insize: 48, pid: 18131
open flags: 0x8001 /temp.txt
open[139699381340688] flags: 0x8001 /temp.txt
unique: 68, success, outsize: 32
unique: 69, opcode: SETATTR (4), nodeid: 6, insize: 128, pid: 18131
truncate /temp.txt 0
getattr /temp.txt
unique: 69, success, outsize: 120
unique: 70, opcode: WRITE (16), nodeid: 6, insize: 539, pid: 18131
write[139699381340688] 459 bytes to 0 flags: 0x8001
write[139699381340688] 459 bytes to 0
unique: 70, success, outsize: 24
unique: 71, opcode: FLUSH (25), nodeid: 6, insize: 64, pid: 18131
flush[139699381340688]
unique: 71, success, outsize: 16
unique: 72, opcode: RELEASE (18), nodeid: 6, insize: 64, pid: 0
release[139699381340688] flags: 0x8001
unique: 72, success, outsize: 16
~$ ls -l /mnt/s3
-rwxr--r-- 1 root root 459 Mar 7 12:26 temp.txt
Using rm is no problem; it works as expected. The mv command has the same issue as cp.
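The closest thing I have to a workaround is to lean on that second-attempt behaviour and just copy twice; this is only a band-aid sketch (the sleep length is a guess for the short delay mentioned above):
~$ cp temp.txt /mnt/s3/ 2>/dev/null || { sleep 2; cp -f temp.txt /mnt/s3/; }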
I do NOT want to use s3fs because it is unreliable, so please don't offer it as a solution.
Solution 1:
Not to necro the thread, but s3ql may do what you want. It presents a completely normal POSIX filesystem wherever you mount it and stores the data in an S3 object store, with deduplication, a choice of compression (I use zlib for speed), and optional encryption.
It really does behave like an ordinary filesystem: I've compiled code on it, run VirtualBox images and Wine apps off it, and used it with sshfs and rsync. I'm using a local backend (point it at a directory and it stores the objects in there; you end up with a directory tree of lots of 10 MB or smaller files), but as the name suggests it's primarily for S3 and supports a few other cloud object stores too. It keeps a cache of the file chunks currently in use (for both reads and writes), which I think defaults to 5 GB; for my use I set it to 50 GB. (I'm using it to extend my storage: thanks to deduplication and compression I have something like 6 TB of files on my 4 TB HDD and still have about 1 TB free.)
One possible fly in the ointment: s3ql requires exclusive access, so you cannot have multiple instances pointing at the same object store. You can mount it on one machine and then share that out over sshfs or NFS or whatever, though.
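Getting started is basically mkfs.s3ql followed by mount.s3ql; a minimal sketch with a local backend (the paths, cache size and compression level are placeholders to adapt, and for S3 proper you'd use an s3:// storage URL with credentials in ~/.s3ql/authinfo2):
~$ mkfs.s3ql local:///srv/s3ql-objects
~$ mount.s3ql --compress zlib-6 --cachesize 52428800 local:///srv/s3ql-objects /mnt/s3ql
~$ umount.s3ql /mnt/s3ql
--cachesize is in KiB, so 52428800 is the roughly 50 GB cache I mentioned, and always unmount with umount.s3ql so the cache gets flushed back to the object store.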