Sharing code and schema between microservices

If you go for a microservices architecture in your organization, the services can share configuration via ZooKeeper or an equivalent. But how should the various services share a common DB schema? Common constants? Common utilities?

One way would be to place all the microservices in the same code repository, but that would contradict the decoupling that comes with microservices...

Another way would be to have each microservice be completely independent; however, that would cause code duplication, as well as data duplication across the separate databases each microservice would have to maintain.

Yet another way would be to implement functional microservices with no context/state, but that's usually not realistic and would push the architecture toward having a central hub that maintains the context/state, with a lot of traffic to and from it.

What would be a scalable, efficient, practical and hopefully beautiful way to share code and schema between microservices?


Regarding common code, the best practice is to use a packaging system. So if you use Java, use Maven; if you use Ruby, then Gems; if Python, then PyPI; etc. Ideally a packaging system adds little friction, so you may have a (say, git) repository for a common lib (or several common libs for different topics) and publish their artifacts through an artifact repository (e.g. a private Maven/Gems/PyPI). Then at the microservice you add a dependency on the required libs, so code reuse is easy. In some cases packaging systems do add some friction (Maven, for one), so one might prefer using a single git repo for everything and a multi-module project setup. That isn't as clean as the first approach, but it works as well and isn't too bad. Other options are to use git submodule (less desirable) or git subtree (better) in order to include the source code in a single "parent" repository.
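To illustrate the git subtree option, here's a minimal, self-contained sketch (all repo, branch and file names are made up; in real life `common-libs` would be a remote URL):

```shell
# Sketch: vendoring a shared lib into a service repo with git subtree.
set -e
export GIT_AUTHOR_NAME=dev GIT_AUTHOR_EMAIL=dev@example.com
export GIT_COMMITTER_NAME=dev GIT_COMMITTER_EMAIL=dev@example.com
tmp=$(mktemp -d); cd "$tmp"

# a stand-in for the shared common-libs repository
git init -q common-libs
cd common-libs
git checkout -qb main
echo "MAX_RETRIES = 3" > constants.py
git add . && git commit -qm "shared constants"
cd ..

# the microservice repository pulls the shared code in under lib/common
git init -q service-a
cd service-a
git checkout -qb main
echo "service A" > README.md
git add . && git commit -qm "initial commit"
git subtree add --prefix lib/common ../common-libs main --squash

cat lib/common/constants.py   # the shared code is now part of service-a
```

Later, `git subtree pull --prefix lib/common ../common-libs main --squash` brings in upstream changes; unlike submodules, a plain clone of `service-a` already contains the code.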

Regarding schema - if you want to play by the book, then each microservice has its own database. They don't touch each other's data. This is a very modular approach which at first seems to add some friction to your process, but eventually I think you'll thank me. It allows fast iteration over your microservices; for example, you might want to replace the database implementation behind one specific service. Imagine doing this when all your services use the same database! Good luck with that... But if each single service uses its own database, and the service abstracts the database correctly (e.g. it does not accept SQL queries as API calls ;-)), then changing MySQL to Cassandra suddenly becomes feasible. There are other upsides to having completely isolated databases, for example load and scaling, finding bottlenecks, management, etc.
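A small sketch of what "abstracting the database correctly" means in practice (names and the user-store domain are hypothetical): consumers see only domain operations, so the storage engine behind one service can be swapped without anyone noticing.

```python
# Sketch: each service owns its database behind a domain API, so the
# storage engine is an implementation detail of that one service.
import sqlite3


class UserService:
    """Consumers call add_user/get_email -- never SQL."""

    def __init__(self):
        self._db = sqlite3.connect(":memory:")  # this service's private DB
        self._db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

    def add_user(self, user_id: int, email: str) -> None:
        self._db.execute("INSERT INTO users VALUES (?, ?)", (user_id, email))

    def get_email(self, user_id: int) -> str:
        row = self._db.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0]


class InMemoryUserService:
    """Same public API, different store -- swapping backends (say, SQLite
    to Cassandra) never leaks into other services."""

    def __init__(self):
        self._users = {}

    def add_user(self, user_id: int, email: str) -> None:
        self._users[user_id] = email

    def get_email(self, user_id: int) -> str:
        return self._users[user_id]


# Either backend satisfies the same contract:
for svc in (UserService(), InMemoryUserService()):
    svc.add_user(1, "ada@example.com")
    print(svc.get_email(1))  # ada@example.com (printed for each backend)
```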

So in short - common code (utilities, constants, etc.): use a packaging system or some source-code linkage such as git subtree.

Database - you don't touch mine, I don't touch yours. That's the better way around this.

HTH, Ran.


The "purest" approach, i.e. the one that gives you the least amount of coupling, is to not share any code.

If you find that two services (call them A and B) need the same functionality, your options are:

  • split it off as a separate service C, so A and B can use C
  • bite the bullet and duplicate the code

While this may sound awkward, you avoid the (not uncommon) problem of creating a "utility" or "common" or "infrastructure" library which everyone depends on, and which is then really hard to upgrade and change (i.e. which indirectly couples the services).
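The first option can be sketched as a tiny service C that A and B call over HTTP instead of linking a shared library. Everything here is hypothetical (the `/normalize` endpoint, the SKU-normalization function); it only illustrates the shape of the split:

```python
# Sketch: extract shared functionality into its own service C.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


def normalize_sku(raw: str) -> str:
    """The functionality both A and B need."""
    return raw.strip().upper().replace(" ", "-")


class ServiceC(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /normalize?sku=... is service C's public contract
        query = parse_qs(urlparse(self.path).query)
        body = json.dumps({"sku": normalize_sku(query["sku"][0])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), ServiceC)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Service A (or B) calls C over HTTP instead of importing shared code:
url = f"http://127.0.0.1:{server.server_port}/normalize?sku=+acme+widget+"
result = json.loads(urllib.request.urlopen(url).read())
print(result["sku"])  # ACME-WIDGET
server.shutdown()
```

The coupling is now only to C's HTTP contract, which can be versioned and evolved independently of A's and B's internals.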

In practice, as usual, it's a tradeoff.

  • If the shared functionality is substantial, I'd go for a separate service.
  • If it's just constants, a shared library might be the best solution. You need to be very careful about backwards compatibility, though.
  • For configuration data, you could also implement a specific service, possibly using some existing technology such as LDAP.
  • Finally, for simple code that is likely to evolve independently, just duplicating might be the best solution.

However, what's best will depend on your specific situation and problem.


From my project experience

Share a WSDL when using SOAP (not the service-model code, since that should be generated from the WSDL). When using REST, have distinct models (copies, yes, but not shared) for client and server. As soon as the second or third consumer comes into play, you'll get into trouble. Keep them decoupled. In my experience, the operation and usage of a service change more often than its data structures: another client wants to use your service, or a second version has to be operated at the same time.
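The "copy, don't share" idea for REST models can be sketched like this (the `User` payload and its fields are made up): server and client each own their model, and the client maps only the fields it cares about.

```python
# Sketch: distinct (copied, not shared) models for server and client.
from dataclasses import dataclass
import json


@dataclass
class ServerUser:
    """Owned by the service; free to grow new fields."""
    id: int
    name: str
    email: str


@dataclass
class ClientUser:
    """Owned by the consumer; knows only the fields it actually uses."""
    id: int
    name: str

    @classmethod
    def from_json(cls, payload: str) -> "ClientUser":
        data = json.loads(payload)
        # Pick only known fields, so the server can add fields freely
        # without breaking this client.
        return cls(id=data["id"], name=data["name"])


wire = json.dumps(ServerUser(1, "Ada", "ada@example.com").__dict__)
print(ClientUser.from_json(wire))  # ClientUser(id=1, name='Ada')
```

Because each consumer maps the payload itself, a second consumer needing different fields just writes its own model instead of forcing changes into a shared one.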

Some additional thoughts

Sharing is partially contradictory to scalability. Share-nothing and share-some/share-all each have pros and cons. Sharing nothing gives you full flexibility at any time. Microservices are independent components providing particular domain services.

Sharing business domain data models is a common pattern (http://www.ivarjacobson.com/resources/resources/books/#object%20oriented%20software) which prevents duplicating the same model everywhere. But since microservices divide and conquer the business parts, it might get hard to share anything of the business domain data model.

Microservices communicate with each other, so I understand the need to share these communication data models (mostly HTTP-based). Sharing these data models might be OK when you have a one-to-one mapping between service provider and consumer. As soon as you have multiple consumers for one service needing different models/fields within the model, it gets tough.


According to the ISP (interface segregation principle), clients should depend on interfaces, not implementations. I would suggest sharing interfaces rather than implementations where possible; this way the system stays decoupled from any particular implementation.
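A minimal sketch of sharing an interface but not an implementation (the `UserStore` contract and its names are hypothetical):

```python
# Sketch: share only the abstract contract; keep implementations private.
from abc import ABC, abstractmethod


class UserStore(ABC):
    """The shared contract -- this is all that consumers depend on."""

    @abstractmethod
    def get_name(self, user_id: int) -> str: ...


class SqlUserStore(UserStore):
    """One service's private implementation (a stub standing in for a
    real SQL-backed store)."""

    def get_name(self, user_id: int) -> str:
        return {1: "Ada"}.get(user_id, "unknown")


def greeter(store: UserStore, user_id: int) -> str:
    # Depends only on the interface, so the implementation can be
    # swapped without touching this code.
    return f"Hello, {store.get_name(user_id)}"


print(greeter(SqlUserStore(), 1))  # Hello, Ada
```

Only the `UserStore` interface would live in a shared package; each service keeps its concrete store to itself.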