Azure Service Bus add get requests to queue
Solution 1:
A queue is especially useful when a publisher sends more messages than a receiver can process and you want to decouple the two so the receiver doesn't get overwhelmed. If your goal is to throttle the requests hitting the downstream endpoint while still letting users of your API fire requests at will, the process you described is a standard way to handle this scenario.

Here is a sample flow, assuming your API allows users to post jobs:

1. Create a UUID for the job, send it back to the user/GUI, and enqueue the message in Azure Service Bus. If you want to enforce FIFO processing, make sure to use sessions.
2. On the receiving end, add an Azure Function with a Service Bus trigger that calls your other endpoint. You can cap the number of instances the Function scales out to by setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT, so that endpoint is not overwhelmed.
3. Once the other endpoint has processed the request and responded, store the result in a database of your choice and complete the message in Azure Service Bus.
4. If you use Cosmos DB, you can use the change feed to listen for new entries and push them to the WebSocket server that maintains the open connections to the users.

Keep in mind that a WebSocket connection can close unexpectedly, so it is a good idea to also give users a URI where they can look up the result of a job they posted.
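The submission side of step 1 can be sketched as follows. This is a minimal illustration, not the real wiring: the in-memory `queue.Queue` stands in for the Azure Service Bus queue (which you would reach through the `azure-servicebus` SDK in production), and `submit_job` is a hypothetical API handler.

```python
import uuid
import queue

# In-memory stand-in for the Azure Service Bus queue; in the real setup
# the message would be sent via the azure-servicebus SDK instead.
job_queue: "queue.Queue[dict]" = queue.Queue()

def submit_job(payload: dict) -> str:
    """Accept a job, enqueue it, and return an id the caller can track."""
    job_id = str(uuid.uuid4())
    # Enqueue the job together with its id so the consumer can
    # store the result under the same id later.
    job_queue.put({"id": job_id, "payload": payload})
    return job_id

job_id = submit_job({"action": "resize", "image": "cat.png"})
print(job_id)             # the UUID handed back to the user/GUI
print(job_queue.qsize())  # 1 message waiting for the consumer
```

The important design point is that the id is generated before the work happens, so the caller can hold on to it and correlate the eventual result with the original request.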
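The scale-out cap in step 2 is an application setting on the Function App. A sketch using the Azure CLI, where the app and resource group names are placeholders:

```shell
# Cap the Service Bus-triggered Function App at 5 concurrent instances
# so the downstream endpoint is not overwhelmed.
# "my-function-app" and "my-resource-group" are placeholder names.
az functionapp config appsettings set \
  --name my-function-app \
  --resource-group my-resource-group \
  --settings WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=5
```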
Solution 2:
A web request is synchronous by nature, while durable messaging is asynchronous. Mixing the two requires bridging that gap, and it is never trivial. Polling is not ideal: as the number of clients grows, it will tax your web server. SignalR is a better option for calling back the specific client that needs the result.
The flip side of this solution is that the client side has to be built so that updates or calculation results are eventual rather than immediate.
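A minimal sketch of that eventual model: the client treats the job as pending until a result appears, and can fall back to polling a lookup endpoint if the push channel (SignalR/WebSocket) drops. Here the `results` dict stands in for the database the consumer writes to, and `check_result` is a hypothetical lookup helper.

```python
import time

# Stand-in for the database the queue consumer writes results into.
results: dict = {}

def check_result(job_id: str):
    """Hypothetical lookup the client can poll if the push channel drops."""
    return results.get(job_id)

def wait_for_result(job_id: str, attempts: int = 5, delay: float = 0.01):
    """Poll until the result eventually shows up, or give up after a few tries."""
    for _ in range(attempts):
        result = check_result(job_id)
        if result is not None:
            return result
        time.sleep(delay)
    return None

# Simulate the consumer finishing the job out of band.
results["job-123"] = "done"
print(wait_for_result("job-123"))  # → done
```

In a real client you would only fall back to this loop after the push connection closes; the point is that the UI must tolerate a window where the job exists but its result does not yet.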