Microservices – best practices for retrieving data related to a specific user from other microservices with minimal memory/time loss

I am trying to create a microservice architecture using Lumen / Laravel Passport.

I have multiple Dockerized services, each running as a separate Lumen app container on different VMs:

  • API Gateway service (integrated with Laravel Passport for authentication & request validation before proceeding further)
  • Chat Service (service for messaging/chat rooms)
  • News Service
  • … (and many other services)

Each of these services has its own separate Redis/MySQL databases, etc.

In a monolithic application, for example, there was a User table in the database, with relations between the tables and so on. I used JOINs and other queries to retrieve data according to the logical selection for the current user id.

But now I have, for example, a general page in the mobile/web app, and I must fetch pieces of information from multiple services for that one currently visible page.

And to receive this data I am sending multiple requests to the different services.

Question:

What is the best/correct practice for storing user information in a microservices architecture, and what is the correct way to retrieve the data related to this user from the other microservices with minimal memory/time loss? And where should user information like id, phone numbers, etc. be stored to avoid data duplication?

Sorry for a possible duplicate; just trying to understand.


Solution 1:

Let's say you have services MS1, MS2, MS3 and MS4. The web app / mobile app hits MS1 for information. Now MS1 needs to return a response containing data that is managed by MS2, MS3 and MS4.

Poor Solution - MS1 calls MS2, MS3 and MS4 synchronously to retrieve their information, aggregates the responses and returns the final aggregated data (sketched below).
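
For illustration, a minimal sketch of that fan-out inside MS1, assuming guzzlehttp/guzzle and hypothetical downstream URLs; note how MS1's response time and availability now depend on every downstream service:

```php
<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

$client = new Client(['timeout' => 2.0]);

// Fan out to the downstream services concurrently (hypothetical URLs).
$promises = [
    'chat' => $client->getAsync('http://chat-service/api/rooms?user_id=42'),
    'news' => $client->getAsync('http://news-service/api/feed?user_id=42'),
];

// Wait for all responses; this throws if ANY downstream call fails,
// so MS1 is only as fast and as available as its slowest dependency.
$responses = Utils::unwrap($promises);

$aggregated = [
    'chat' => json_decode((string) $responses['chat']->getBody(), true),
    'news' => json_decode((string) $responses['news']->getBody(), true),
];
```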

Recommended Solution

  1. Use log-based change data capture (CDC) to generate events from the databases of MS2, MS3 and MS4 as and when the DBs are updated by their respective services

  2. Post the events to one or more topics of a streaming platform (e.g. Kafka)

  3. Using stream processing, process the events and build the aggregated data for each user in the cache and DB of MS1

  4. Serve the requests to MS1 from the cache and/or DB of MS1 (a rough sketch of steps 3–4 follows this list)
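
As a rough sketch of steps 3–4, assuming the php-rdkafka extension, Predis, a hypothetical user-events CDC topic (e.g. produced by a tool like Debezium) and an assumed event shape:

```php
<?php

require 'vendor/autoload.php';

// Consume CDC events and maintain a pre-aggregated per-user record
// in MS1's cache. Topic name, broker address and event fields are
// assumptions for illustration.
$conf = new RdKafka\Conf();
$conf->set('group.id', 'ms1-aggregator');
$conf->set('metadata.broker.list', 'kafka:9092');
$conf->set('auto.offset.reset', 'earliest');

$consumer = new RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['user-events']); // hypothetical CDC topic

$redis = new Predis\Client(['host' => 'redis', 'port' => 6379]);

while (true) {
    $message = $consumer->consume(10 * 1000); // block for up to 10 s

    if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
        $event  = json_decode($message->payload, true);
        $userId = $event['user_id']; // assumed event shape

        // Merge the changed fields into the pre-aggregated record,
        // so reads never have to call MS2/MS3/MS4 at request time.
        $key     = "user_aggregate:{$userId}";
        $current = json_decode($redis->get($key) ?? '{}', true);
        $redis->set($key, json_encode(array_merge($current, $event['data'])));
    } elseif ($message->err !== RD_KAFKA_RESP_ERR__PARTITION_EOF
           && $message->err !== RD_KAFKA_RESP_ERR__TIMED_OUT) {
        throw new \Exception($message->errstr());
    }
}
```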

Note that with this approach the cache or DB will hold pre-aggregated data, kept up to date by the event and stream processing. The updates may lag a little, resulting in occasionally serving stale data, but the delay shouldn't be more than a few seconds under normal circumstances.

If all the user data fits in the cache, you can keep the entire data set there. Otherwise, you can keep a subset of the data in the cache with a TTL, evicting the least recently used entries to make room for new ones. The service will retrieve data from the DB when it is not already available in the cache.
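
A minimal read-through sketch of that cache/DB fallback, assuming Lumen with facades enabled ($app->withFacades()) and a hypothetical user_aggregates table:

```php
<?php

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

function getUserAggregate(int $userId): ?array
{
    // Read-through with a TTL: serve from the cache when present,
    // otherwise load the pre-aggregated row from MS1's DB and cache it.
    // With a Redis store configured with maxmemory-policy allkeys-lru,
    // the least recently used entries are evicted automatically.
    return Cache::remember(
        "user_aggregate:{$userId}",
        600, // TTL in seconds
        function () use ($userId) {
            $row = DB::table('user_aggregates')
                ->where('user_id', $userId)
                ->first();

            return $row ? (array) $row : null;
        }
    );
}
```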

Advantages:

  1. Latency is lower, improving user experience, because the response is pre-computed
  2. The tight coupling with the other microservices is eliminated. Even if MS2, MS3 or MS4 goes down temporarily, your users will still see the data, albeit a bit stale; in most cases that's better than a delayed response or an error message.

Solution 2:

You likely need to investigate a caching layer inside the client application. You don't want to break your encapsulation, but caching this information as close to where it is used as possible will make a huge difference in reducing the chattiness of your microservices. One point though: make sure you end up creating a cache, not a distributed store. Caches still need revalidation and an expiration policy.
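
On the server side you can support such a client cache by making responses explicitly cacheable. A minimal Lumen sketch, with a hypothetical route and helper, that sets Cache-Control for expiration and an ETag for revalidation:

```php
<?php

// routes/web.php (Lumen) - hypothetical aggregate endpoint.
$router->get('/users/{id}/overview', function ($id) {
    $data = getUserAggregate((int) $id); // hypothetical helper
    $etag = '"' . md5(json_encode($data)) . '"';

    // If the client's cached copy is still valid, revalidate cheaply
    // with a 304 instead of resending the body.
    if (request()->header('If-None-Match') === $etag) {
        return response('', 304)->header('ETag', $etag);
    }

    return response()->json($data)
        ->header('Cache-Control', 'private, max-age=60') // expiration
        ->header('ETag', $etag);                         // revalidation
});
```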