How to store files with meta data in LoopBack?

Here's the full solution for storing meta data with files in LoopBack.

You need a container model:

common/models/container.json

{
  "name": "container",
  "base": "Model",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {},
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}

Create the data source for your container in server/datasources.json. For example:

...
"storage": {
    "name": "storage",
    "connector": "loopback-component-storage",
    "provider": "filesystem", 
    "root": "/var/www/storage",
    "maxFileSize": "52428800"
}
...
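The filesystem provider is the simplest option, but loopback-component-storage also supports cloud providers. As a rough sketch, an Amazon S3 data source might look like this (the credential values are placeholders; check the connector docs for your provider's exact options):

...
"storage": {
    "name": "storage",
    "connector": "loopback-component-storage",
    "provider": "amazon",
    "keyId": "YOUR_AWS_ACCESS_KEY_ID",
    "key": "YOUR_AWS_SECRET_ACCESS_KEY"
}
...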

In server/model-config.json, point this model at the storage data source you just defined:

...
"container": {
    "dataSource": "storage",
    "public": true
}
...

You'll also need a file model to store the meta data and handle container calls:

common/models/files.json

{
  "name": "files",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "name": {
      "type": "string"
    },
    "type": {
      "type": "string"
    },
    "url": {
      "type": "string",
      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}

And now connect files with container:

common/models/files.js

var CONTAINERS_URL = '/api/containers/';
module.exports = function(Files) {

    Files.upload = function (ctx, options, cb) {
        if (!options) options = {};
        // All uploads go into a fixed container named 'common'
        ctx.req.params.container = 'common';
        Files.app.models.container.upload(ctx.req, ctx.result, options, function (err, fileObj) {
            if (err) {
                cb(err);
            } else {
                // 'file' is the name of the form field carrying the upload
                var fileInfo = fileObj.files.file[0];
                Files.create({
                    name: fileInfo.name,
                    type: fileInfo.type,
                    container: fileInfo.container,
                    url: CONTAINERS_URL + fileInfo.container + '/download/' + fileInfo.name
                }, function (err, obj) {
                    if (err) {
                        cb(err);
                    } else {
                        cb(null, obj);
                    }
                });
            }
        });
    };

    Files.remoteMethod(
        'upload',
        {
            description: 'Uploads a file',
            accepts: [
                { arg: 'ctx', type: 'object', http: { source:'context' } },
                { arg: 'options', type: 'object', http:{ source: 'query'} }
            ],
            returns: {
                arg: 'fileObject', type: 'object', root: true
            },
            http: {verb: 'post'}
        }
    );

};

To expose the files API, add the files model to model-config.json, remembering to select the correct data source for it:

...
"files": {
    "dataSource": "db",
    "public": true
}
...

Done! You can now call POST /api/files/upload with the file's binary data in a form field named file. You'll get back the id, name, type, and url in return.
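For reference, here's a minimal client sketch in Node.js using the request module (the host, port, and file name are placeholders):

var request = require('request');
var fs = require('fs');

request.post({
    url: 'http://localhost:3000/api/files/upload',
    formData: {
        // 'file' must match the field name read in files.js (fileObj.files.file[0])
        file: fs.createReadStream('./photo.png')
    }
}, function (err, res, body) {
    if (err) return console.error(err);
    console.log(body); // the stored metadata: id, name, type, url
});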


I had the same problem. I solved it by creating my own models to store meta data and my own upload methods.

  1. I created a model File which will store info like name, type, url, userId (same as yours)

  2. I created my own upload remote method because I was unable to do it with hooks. The container model is the one created by loopback-component-storage.

  3. In var fileInfo = fileObj.files.myFile[0];, myFile is the field name used for the file upload, so you will have to change it accordingly. If you don't specify any field name, it will come through as fileObj.files.null[0]. This code lacks proper error checking; add it before deploying to production.

     var CONTAINERS_URL = '/api/containers/'; // base URL of the storage container API

     File.uploadFile = function (ctx, options, cb) {
       File.app.models.container.upload(ctx.req, ctx.result, options, function (err, fileObj) {
         if (err) {
           cb(err);
         } else {
           // 'myFile' is the form field name associated with the upload;
           // change it to match the field name your client actually sends.
           var fileInfo = fileObj.files.myFile[0];
           File.create({
             name: fileInfo.name,
             type: fileInfo.type,
             container: fileInfo.container,
             userId: ctx.req.accessToken.userId,
             url: CONTAINERS_URL + fileInfo.container + '/download/' + fileInfo.name // a hack for creating links
           }, function (err, obj) {
             if (err) {
               console.log('Error in uploading: ' + err);
               cb(err);
             } else {
               cb(null, obj);
             }
           });
         }
       });
     };
    
    File.remoteMethod(
      'uploadFile',
      {
        description: 'Uploads a file',
        accepts: [
          { arg: 'ctx', type: 'object', http: { source: 'context' } },
          { arg: 'options', type: 'object', http: { source: 'query' } }
        ],
        returns: {
          arg: 'fileObject', type: 'object', root: true
        },
        http: { verb: 'post' }
      }
    );
    

For those who are looking for an answer to the question "how to check the file format before uploading a file":

In this case we can use the optional parameter allowedContentTypes.

In the server/boot directory, add a script like this:

// 'filestorage' must match the name of your storage data source in datasources.json
module.exports = function(server) {
    server.dataSources.filestorage.connector.allowedContentTypes = ["image/jpg", "image/jpeg", "image/png"];
};

I hope it will help someone.


Depending on your scenario, it may be worth looking at utilising upload signatures (or similar) to allow clients to upload directly to Amazon S3, TransloadIT (for image processing), or similar services.

Our first decision here was that, as we are using GraphQL, we wanted to avoid multipart form uploads via GraphQL, which would in turn have to be passed through to the LoopBack services behind it. We also wanted to keep those servers efficient, without tying up resources on (large) uploads and the associated file validation and processing.

Your workflow might look something like this:

  1. Create database record
  2. Return the record ID and file upload signature data (the S3 bucket or TransloadIT endpoint, plus any auth tokens) - see the sketch after this list
  3. Client uploads directly to that endpoint
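As a rough sketch of step 2, generating an upload signature with the AWS SDK for JavaScript might look like this (the bucket name, key prefix, and size limit are assumptions for illustration, not our exact implementation):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Step 2: return a presigned POST policy the client can upload against directly.
function getUploadSignature(recordId, cb) {
    var params = {
        Bucket: 'my-upload-bucket',               // hypothetical bucket name
        Expires: 300,                             // signature valid for 5 minutes
        Fields: { key: 'uploads/' + recordId },   // tie the object key to the record
        Conditions: [
            ['content-length-range', 0, 52428800] // cap uploads at 50 MB
        ]
    };
    s3.createPresignedPost(params, function (err, data) {
        if (err) return cb(err);
        // data.url and data.fields are what the client posts as a multipart form
        cb(null, { recordId: recordId, upload: data });
    });
}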

For cases like banner or avatar uploads, the record from step 1 already exists, so we skip that step.

Additionally, you can then add SNS or SQS notifications to your S3 buckets to confirm in your database that the relevant object now has a file attached - effectively a step 4, sketched below.
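As a sketch of that step 4, an SNS-triggered handler (e.g. an AWS Lambda function) could look like the following; markFileAttached is a hypothetical helper standing in for whatever updates the record from step 1:

// Hypothetical step-4 handler: S3 notifies SNS, which invokes this function.
exports.handler = function (event, context, callback) {
    // The SNS message body is the JSON-encoded S3 event
    var s3Event = JSON.parse(event.Records[0].Sns.Message);
    var key = s3Event.Records[0].s3.object.key; // e.g. 'uploads/<recordId>'
    var recordId = key.split('/').pop();
    markFileAttached(recordId, callback);
};

function markFileAttached(recordId, cb) {
    // Placeholder: flag the step-1 record as having its file, e.g. via your model
    console.log('File attached for record', recordId);
    cb(null);
}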

This is a multi-step process, but it works well by removing the need to handle file uploads within your core API. So far this is working well in our initial implementation (early days in this project) for things like user avatars and attaching PDFs to a record.

Example references:

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html

https://transloadit.com/docs/#authentication


For anyone else having the problem with LoopBack 3 and Postman where the connection hangs on POST (or returns ERR_EMPTY_RESPONSE), as seen in some comments here: the problem is that Postman sends the Content-Type "application/x-www-form-urlencoded".

Remove that header and add "Accept" = "multipart/form-data" instead. I've already filed a bug with LoopBack for this behavior.