How can I retrieve a directory (folder) from Ubuntu Server 14.04 VM without FTP, SCP, or NFS

Solution 1:

As well as saying all the things it doesn't have, what about saying the things it does have? How big is the folder and how much free disk space is there?

Assuming you have no access to the CLI at all, even from the console:

  1. If it's ESXi 5.0 or above, and the server has VMware Tools installed, use PowerCLI and the Copy-VMGuestFile cmdlet to copy files from it.
  2. It's a VM: restore a backup of it somewhere isolated, boot it, and make whatever changes you like to get at the files.
  3. It's a VM: restore a backup of the disk, then download and mount the VMDK file (see the sketch after this list).
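
For the mount-the-VMDK route, a minimal sketch using the libguestfs tools, assuming guestmount is installed on whatever machine you copied the restored VMDK to (the disk path and mount point are only examples):

guestmount -a restored-disk.vmdk -i --ro /mnt/vmdisk    # inspect the image and mount its filesystems read-only
cp -a /mnt/vmdisk/path/to/directory /some/local/destination
guestunmount /mnt/vmdisk                                # or fusermount -u /mnt/vmdisk on older libguestfs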

Even though you don't have SSH access, assuming you have some kind of access to type commands and manage the server via the VM console:

  1. It's a web server, so download the files through a browser
    1. Maybe gzip them into /tmp (memory) and symbolically link the archive into the webserver folder to avoid changing the Apache config (see the first sketch after this list)
  2. Does it have an FTP client installed? FTP from the server to somewhere else and upload the files.
    1. Does it have an SSH client installed? SCP them from the server to a remote SSH server
    2. Email them to yourself
    3. wget/curl POST them to an upload endpoint on a remote web server (see the second sketch after this list)
    4. TFTP them out
  3. Does it have netcat (nc) installed? You can pipe tar | nc to stream the data to a socket, and use nc | tar on another computer on the network to receive it (Solution 2 shows an example).
  4. If it's a typical Linux install it probably has Python, and Python 2 ships with the SimpleHTTPServer module, which serves the current directory as a website by default; run that on a different port to the main webserver and download the files from there.
  5. Scripting the files into your database as binary blobs in an in-memory table, and then selecting them out from a client, may be relevant.
  6. Does the server syslog to a remote destination? Base64-encode the files and stream them into the syslog... YMMV on reading them back.
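
A minimal sketch of the /tmp-plus-symlink trick from item 1, assuming the Apache document root is /var/www/html and the site is allowed to follow symlinks (adjust the paths to whatever the real site uses):

tar czf /tmp/files.tar.gz /path/to/directory          # archive into /tmp
ln -s /tmp/files.tar.gz /var/www/html/files.tar.gz    # expose it through the existing site
# fetch http://the-server/files.tar.gz in a browser, then delete the symlink and the archive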
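
And a sketch of the curl upload from item 2.3, assuming you control some remote web server with an upload endpoint (the URL here is purely hypothetical):

tar czf - /path/to/directory | curl -T - http://example.com/uploads/files.tar.gz    # HTTP PUT straight from the tar stream
# or, if the endpoint expects a form upload:
tar czf /tmp/files.tar.gz /path/to/directory && curl -F "file=@/tmp/files.tar.gz" http://example.com/upload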

Solution 2:

If the server has Python installed -- which it almost certainly does; Python is used by enough system services that it's pretty much guaranteed to be present -- you can start up an HTTP server to serve files from the current directory using the command:

python -m SimpleHTTPServer 9999

to start up a web server on port 9999.
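
If the box only had Python 3 available (not the case on a stock 14.04 install, but worth noting), the equivalent module is called http.server:

python3 -m http.server 9999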

Keep in mind that there are no access controls on this server, so you may not want to do this if the server is accessible to the public.


Another option is to pipe a tar archive over the network. Assuming that you have another computer (host name "client") at IP address 1.2.3.4 that's reachable from the Ubuntu server, you can do this by running the following command on the client:

nc -lp 9999 > files.tar.gz

and then running this on the Ubuntu server (note that /dev/tcp/... is a bash feature, not a real device, so run it from bash):

tar czf - /path/to/directory > /dev/tcp/1.2.3.4/9999

While this does create an archive, the archive is never written to disk -- it's streamed directly over the network -- so there is no risk of running out of disk space.
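
If you'd rather not keep even the intermediate file on the client, a small variation unpacks the stream as it arrives (run this on the client before starting the transfer on the server):

nc -lp 9999 | tar xz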

Solution 3:

The virtual machine disk is hopefully stored in a way that is backed up regularly - so you can restore the backup to a new location and then use the VM disk as a second disk in a new VM.

If there are no official backups then there should be, and that needs sorting out - but if the disk is stored on an LVM volume, you may be able to create a snapshot volume and then copy the disk from the snapshot as though it were a backup.
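
A minimal sketch of the LVM route, assuming the VM disk lives on a logical volume called /dev/vg0/vmdisk on the storage host (the names and sizes are only examples):

lvcreate --snapshot --size 10G --name vmdisk-snap /dev/vg0/vmdisk    # point-in-time copy-on-write snapshot
dd if=/dev/vg0/vmdisk-snap of=/backup/vmdisk.img bs=4M               # copy the frozen image somewhere safe
lvremove /dev/vg0/vmdisk-snap                                        # drop the snapshot when done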

Solution 4:

Well, there is already an accepted answer, and in addition there is a really good answer making use of the fact that it is a web server.

But just to point out a line of attack that I don't think has been addressed much: if you have command-line access, you are probably using a terminal program, and most of those have some way to log what scrolls off the screen.

So maybe this answer could help someone in the future who doesn't have a web server running. It's basically what might be called a field-expedient file transfer using common Linux tools.

One could send the folder as ASCII: tar the folder, convert the archive from binary to ASCII using any of several tools, then, on the machine running the terminal program, recover the ASCII blob from the scrollback and extract the binary from it.

tar czf - folderpath | uuencode temp.tar.gz

or

tar czf - folderpath | base64

will dump a huge block of text to your terminal. If you can capture that in the scrollback, clean it up a bit in a text editor, then decode it back to binary, you'll have a tar.gz of the folder. Sure, you might have to do something like set your scrollback in PuTTY to a huge number of lines.
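
The decode step on the receiving machine might look like this, assuming you have saved the cleaned-up scrollback to a file (uudecode comes from sharutils, and the file names are only examples):

uudecode scrollback.txt                      # uuencode variant: writes the embedded name, temp.tar.gz
base64 -d scrollback.txt > files.tar.gz      # base64 variant
tar xzf files.tar.gz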

I think you could even use xxd to do the encoding and decoding if you wanted to.
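
For instance (a rough sketch; xxd ships with vim-common, so it is usually around):

tar czf - folderpath | xxd -p                # encode: continuous plain hex dump to the terminal
xxd -r -p scrollback.txt > files.tar.gz      # decode on the other side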

If you couldn't fit it all in the scrollback in one go, you could break it up using head and tail and send the ASCII over a chunk at a time (see the sketch below).
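
A rough sketch of that chunking, with arbitrary line counts:

tar czf - folderpath | base64 > /tmp/blob.b64
wc -l /tmp/blob.b64                              # how many lines need to move
head -n 5000 /tmp/blob.b64                       # chunk 1
head -n 10000 /tmp/blob.b64 | tail -n 5000       # chunk 2, and so on

Concatenate the captured chunks in order on the other side before decoding.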

The details depend on which utilities you have access to and how much effort you want to put into squeezing the data through a console stream.

(OP should use the web server)