amon


It's a stretch to call gRPC communication an “anti-pattern”. Synchronous communication has the tremendous advantage of simplicity, whereas asynchronous communication is more flexible. Do you really need that flexibility, though?

Event-based models for asynchronous operations

Asynchronous communication means thinking in terms of events rather than in terms of requests or channels. Translating a request–response communication model to an event-based model could look like the following:

  • process A publishes a Request event
  • process B listens for Request events and eventually publishes a Response event
  • process A listens for Response events. If a matching Response event is created, it will eventually be received by process A, which created the original Request event.

The different events can share a correlation ID to indicate that they relate to each other.

I'm talking about Request and Response events generically. In practice, the events should have a meaning within the domain of the software, for example ChatMessageSent, ChatMessageDelivered, and ChatMessageRead events.
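As a sketch of this request–response translation, here is a minimal single-process simulation in Python. The in-memory queues stand in for real message-bus topics (Kafka, RabbitMQ, …), and the names `process_a_call` and `process_b_handle_one` are purely illustrative:

```python
import queue
import uuid

# Hypothetical in-memory "message bus": one topic per event type.
request_topic = queue.Queue()
response_topic = queue.Queue()

def process_b_handle_one():
    """Process B: consume one Request event and publish a Response
    event carrying the same correlation ID."""
    event = request_topic.get()
    response_topic.put({
        "type": "Response",
        "correlation_id": event["correlation_id"],  # ties the events together
        "payload": event["payload"].upper(),        # stand-in for real work
    })

def process_a_call(payload):
    """Process A: publish a Request event, then wait for the Response
    event whose correlation ID matches."""
    correlation_id = str(uuid.uuid4())
    request_topic.put({
        "type": "Request",
        "correlation_id": correlation_id,
        "payload": payload,
    })
    process_b_handle_one()  # would normally run concurrently in another process
    while True:
        event = response_topic.get()
        if event["correlation_id"] == correlation_id:
            return event["payload"]
        # in a real system, unmatched events would be re-queued or ignored
```

In a real deployment the two processes run independently, so process A would typically register a callback per correlation ID instead of blocking on the topic.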

Using HTTP for asynchronous operations

If you have an HTTP/REST API that is a gateway to asynchronous microservices, there are different ways of handling the underlying asynchronous operations.

  • The server could wait, in the hope that a response event is received within some timeout. This can be a good idea if you're pretty sure that the deadline will almost always be met. But in general, waiting is problematic: the client just sees latency and doesn't know whether their HTTP request was even properly received yet.

  • The server can respond immediately to indicate that the request was received, but without directly delivering the result. The result can be retrieved later. Specifically, the following HTTP status codes can be useful:

    • 202 Accepted is used specifically for asynchronous operations. It means that the request has been received, but it doesn't say that the request will succeed. The body of the response could inform the client how the status can be monitored.

    • Alternatively, the response could use a 303 See Other status and redirect to a newly created resource representing the operation. If such a resource can be created directly, this would be my preferred approach.
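A minimal sketch of this server-side decision, combining both options: wait briefly for the response event, and fall back to 202 Accepted when the deadline is missed. Here `publish_request` and `wait_for_response` are hypothetical adapters to the message bus, not part of any real framework:

```python
def handle_request(publish_request, wait_for_response, timeout=2.0):
    """Gateway handler: publish the request to the bus, then wait up to
    `timeout` seconds for the matching response event."""
    operation_id = publish_request()
    response = wait_for_response(operation_id, timeout=timeout)
    if response is not None:
        # The response event arrived in time: behave synchronously.
        return 200, response
    # Deadline missed: acknowledge receipt without promising success,
    # and tell the client where the status can be monitored.
    return 202, {"status": f"/operations/{operation_id}"}
```

Whether the short wait is worth it depends on how often the deadline is actually met; if it rarely is, responding with 202 immediately is simpler.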

The HTTP response can provide a way to monitor the result. For example, it could link to a URL that represents the status of the operation. The HTTP server could maintain a database of all requests and their status. When the HTTP server observes a response event on the message bus, the status can be updated in the database.
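This status database could be sketched as follows. The dictionary stands in for a persistent store shared by the HTTP server and the bus listener; all function and field names here are illustrative, not a real API:

```python
import uuid

# In-memory stand-in for the server's status database.
status_db = {}

def handle_post_message(body, now):
    """HTTP layer: record the new message as sent, and return the
    location of the resource representing it (for the 303 redirect)."""
    message_id = str(uuid.uuid4())
    status_db[message_id] = {"body": body, "sent": now,
                             "received": None, "read": None}
    return 303, f"/messages/{message_id}"

def on_response_event(event):
    """Bus listener: when a response event is observed, update the
    stored status. `kind` would be e.g. "received" or "read"."""
    status_db[event["message_id"]][event["kind"]] = event["timestamp"]

def handle_get_message(message_id):
    """HTTP layer: report the current status of the message."""
    return 200, status_db[message_id]
```

The HTTP layer never talks to the other microservices directly; it only writes the initial record and reads whatever the bus listener has recorded since.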

In a chat application, the HTTP communication could be as follows:

  1. The client sends a new chat message. The server responds with a redirect to a URL that represents the message.

    > POST /messages
    >
    > {"body": "Lorem ipsum."}
    
    < 303 See Other
    < Location: /messages/ff591577-668c-4ffc-88d6-fe160068f93f
    
  2. The client requests the message status.

    > GET /messages/ff591577-668c-4ffc-88d6-fe160068f93f
    
    < 200 OK
    <
    < {"body": "Lorem ipsum."
    < , "sent": 1641405030
    < , "received": null
    < , "read": null}
    
  3. Time passes. The client requests the message status again.

    > GET /messages/ff591577-668c-4ffc-88d6-fe160068f93f
    
    < 200 OK
    <
    < {"body": "Lorem ipsum."
    < , "sent": 1641405030
    < , "received": 1641405032
    < , "read": 1641405079}
    

Of course, having a client poll for the status of a resource is not necessarily a good design. But polling is the only way built into HTTP (no, HTTP/2 server push doesn't address this). If real-time updates are desired, there would have to be some communication channel through which updates can be pushed. In a web context, WebSockets play this role. The server could send a notification via WebSockets that an update for a resource exists, so that the client would know to request that resource again.
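A polling client should at least bound how long it keeps asking. A small sketch, where `fetch_status` is an assumed callable that performs the `GET /messages/<id>` request above and returns the decoded JSON body:

```python
import time

def poll_until_read(fetch_status, interval=1.0, timeout=30.0):
    """Poll the status resource until the message is marked as read,
    or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status["read"] is not None:
            return status
        time.sleep(interval)
    raise TimeoutError("message was not marked as read within the timeout")
```

In practice the interval would also grow between attempts (exponential backoff) so that slow operations don't generate a constant stream of requests.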

Synchronous communication might be good enough

All of this is complicated. If you're not working on an inherently asynchronous problem, sticking with synchronous communication will be a lot simpler. In this context, synchronous communication just means that every request will receive a response within a reasonable timeout. That can be quite practical if you're only interacting with internal services that are hosted in the same data center.