Migrate MongoDB database from localhost to a remote server

I created a database on my local Ubuntu machine.

How can I transfer it to my remote server (EC2, Ubuntu)?


Solution 1:

TL;DR

Use mongodump and mongorestore to take (and restore) a full binary backup of your MongoDB database. Compress the backup dump directory to make it faster to copy to your Amazon instance (BSON tends to compress very well).

Best practices

Rather than following ad hoc instructions, I would strongly recommend reading the standard Backup and Restore with MongoDB Tools tutorial in the MongoDB manual.

You can also use a filesystem snapshot, but mongodump and mongorestore export only the data, so your backup will be smaller (i.e. your remote server will not inherit any excess storage allocation due to preallocation).
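A minimal end-to-end sketch of this approach, assuming a local database named mydb and an EC2 instance reachable as ubuntu@ec2-host over SSH (both names are placeholders):

```shell
# 1. Dump the local database to ./dump/mydb (BSON files plus metadata)
mongodump --db mydb --out dump

# 2. Compress the dump (BSON tends to compress well, saving transfer time)
tar -czf mydb-dump.tar.gz dump

# 3. Copy the archive to the EC2 instance
scp mydb-dump.tar.gz ubuntu@ec2-host:/tmp/

# 4. On the EC2 instance, extract and restore into the mongod running there:
#    ssh ubuntu@ec2-host
#    cd /tmp && tar -xzf mydb-dump.tar.gz
#    mongorestore --db mydb dump/mydb
```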

Solution 2:

Auto-sync between two servers

If your local host is reachable from outside, you can use the copydb command as admin to migrate MongoDB data from one machine to another. (Note that copydb was deprecated in MongoDB 4.0 and removed in 4.2, so this only works against older servers; on newer versions use mongodump and mongorestore as in Solution 1.)

user@server:~$ mongo
MongoDB shell version: 2.6.11
connecting to: test
> use admin
switched to db admin
> db.runCommand({ copydb: 1, fromhost: 'your previous host', fromdb: 'Auctions_Data', todb: 'Auctions_Data' })
{ "ok" : 1 }

Solution 3:

In addition to the other solutions, you can create a bash script and perform this very easily.

#!/bin/bash

HOST="somehost.com"
PORT="2345"
REMOTE_DB="some-remote-db"
LOCAL_DB="your-local-db"
USER="remote-user-name"
PASS="passwordForRemoteUser"

## DUMP REMOTE DATABASE
echo "Dumping '$HOST:$PORT/$REMOTE_DB'..."
mongodump --host "$HOST:$PORT" --db "$REMOTE_DB" -u "$USER" -p "$PASS"

## RESTORE DUMP DIRECTORY
echo "Restoring to '$LOCAL_DB'..."
mongorestore --db "$LOCAL_DB" --drop "dump/$REMOTE_DB"

## REMOVE DUMP FILES
echo "Removing dump files..."
rm -r dump

echo "Finished."
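Note that the script above pulls a remote database down to the local machine. For the question's direction (local to remote EC2), the same two tools can be pointed the other way; a minimal sketch, reusing the same placeholder variables:

```shell
#!/bin/bash
# Placeholders: adjust to your environment.
HOST="somehost.com"        # remote EC2 host
PORT="2345"
REMOTE_DB="some-remote-db" # target database on the remote server
LOCAL_DB="your-local-db"   # source database on this machine
USER="remote-user-name"
PASS="passwordForRemoteUser"

# Dump the local database...
mongodump --db "$LOCAL_DB" --out dump

# ...and restore it into the remote one (--drop replaces existing collections)
mongorestore --host "$HOST:$PORT" -u "$USER" -p "$PASS" \
    --db "$REMOTE_DB" --drop "dump/$LOCAL_DB"

rm -r dump
```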

Solution 4:

You can create a database backup and transfer it to a S3 bucket.

First, install s3cmd (the yum command is for Amazon Linux/CentOS; on Ubuntu use sudo apt-get install s3cmd):

sudo yum --enablerepo=epel install s3cmd

# Configure s3cmd
s3cmd --configure

Then create a backup routine in a backup.sh file:

#!/bin/bash

#Force file synchronization and lock writes
mongo admin --eval "printjson(db.fsyncLock())"

MONGODUMP_PATH="/usr/bin/mongodump"
MONGO_HOST="prod.example.com"
MONGO_PORT="27017"
MONGO_DATABASE="dbname"

TIMESTAMP=$(date +%F-%H%M)
S3_BUCKET_NAME="bucketname"
S3_BUCKET_PATH="mongodb-backups"


# Create backup
$MONGODUMP_PATH -h $MONGO_HOST:$MONGO_PORT -d $MONGO_DATABASE

# Add timestamp to backup
mv dump mongodb-$HOSTNAME-$TIMESTAMP
tar cf mongodb-$HOSTNAME-$TIMESTAMP.tar mongodb-$HOSTNAME-$TIMESTAMP

# Upload to S3
s3cmd put mongodb-$HOSTNAME-$TIMESTAMP.tar s3://$S3_BUCKET_NAME/$S3_BUCKET_PATH/mongodb-$HOSTNAME-$TIMESTAMP.tar


#Unlock databases writes
mongo admin --eval "printjson(db.fsyncUnlock())"

When you run bash backup.sh, a new file will be created with a name like mongodb-yourhostname-2013-10-10-1030.tar (matching the %F-%H%M timestamp format).

On the remote server you can use wget to download the file from Amazon S3 (this requires the object to be public or a pre-signed URL). Extract the backup file using tar, like tar -xvf backupname.tar.
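Alternatively, if s3cmd is configured on the remote server as well, you can fetch and unpack the archive there directly (bucket and path names from backup.sh above; the timestamped filename is illustrative):

```shell
# Download the backup archive from the S3 bucket used by backup.sh
s3cmd get s3://bucketname/mongodb-backups/mongodb-myhost-2013-10-10-1030.tar

# Unpack it, producing the mongodb-myhost-2013-10-10-1030/ dump directory
tar -xvf mongodb-myhost-2013-10-10-1030.tar
```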

To restore with a legacy (pre-3.0) mongorestore, which could write directly to the data files via --dbpath while mongod is stopped, you can use:

mongorestore --dbpath <database path> <directory to the backup>

Like this:

mongorestore --dbpath /var/lib/mongo backup_directory_name

Note that --dbpath was removed from modern versions of the tools; there, restore against a running mongod instead, e.g. mongorestore backup_directory_name.

I hope this is enough to help you.

Solution 5:

1. Install the MongoDB software on your remote server.
2. Stop mongod on your local computer.
3. Copy the data files and configuration to the remote computer.
4. Verify that the permissions of the data files are the same as on your local computer.
5. Start mongod on the remote server.
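A minimal sketch of those steps, assuming the default Ubuntu package layout (data files in /var/lib/mongodb owned by the mongodb user, systemd-managed mongod) and a remote host reachable as ubuntu@ec2-host (a placeholder):

```shell
# Stop the local mongod so the data files are in a consistent state
sudo systemctl stop mongod

# Copy the data directory and config to the remote server
# (rsync preserves permissions and timestamps)
rsync -az /var/lib/mongodb/ ubuntu@ec2-host:/tmp/mongodb-data/
scp /etc/mongod.conf ubuntu@ec2-host:/tmp/

# Then, on the remote server:
#   sudo systemctl stop mongod
#   sudo cp -a /tmp/mongodb-data/. /var/lib/mongodb/
#   sudo cp /tmp/mongod.conf /etc/
#   sudo chown -R mongodb:mongodb /var/lib/mongodb
#   sudo systemctl start mongod
```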