How do you handle cases where your client is awaiting a response from a decoupled pub/sub backend? E.g., a user creates an account and the client needs to know their user id.
Would that user object be the responsibility of one service, or written to many tables in the system under different services, or...?
For one, you could use something like snowflake IDs so that whatever server receives the user data first can generate and return an id for that user before tossing the data on a queue to be processed.
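To make that concrete, here's a minimal sketch of a Snowflake-style generator in TypeScript. The 41-bit timestamp / 10-bit worker id / 12-bit sequence layout, the custom epoch, and all the names are assumptions for illustration, not any particular library's API:

```typescript
// Minimal Snowflake-style ID generator (illustrative, not production-ready).
// Assumed layout: 41 bits of milliseconds since a custom epoch,
// 10 bits of worker id, 12 bits of per-millisecond sequence.
const EPOCH = 1700000000000n; // arbitrary custom epoch (assumption)

class SnowflakeGenerator {
  private lastMs = -1n;
  private sequence = 0n;

  constructor(private readonly workerId: bigint) {
    if (workerId < 0n || workerId > 1023n) {
      throw new Error("workerId must fit in 10 bits");
    }
  }

  next(): bigint {
    let now = BigInt(Date.now());
    if (now === this.lastMs) {
      this.sequence = (this.sequence + 1n) & 4095n; // 12-bit sequence
      if (this.sequence === 0n) {
        // Sequence exhausted for this millisecond; wait for the next one.
        while (now <= this.lastMs) now = BigInt(Date.now());
      }
    } else {
      this.sequence = 0n;
    }
    this.lastMs = now;
    return ((now - EPOCH) << 22n) | (this.workerId << 12n) | this.sequence;
  }
}

// Usage: the API server that first receives the signup mints the id,
// returns it to the client, then enqueues the event carrying the same id.
const ids = new SnowflakeGenerator(1n);
const userId = ids.next();
```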
How would you approach a situation where a client updates a record in service A, then navigates to a page whose data is served by service B, which holds a denormalized copy of service A's records but hasn't yet consumed and processed the "UpdatedARecord" event?
Do we accept that sometimes things may be out of sync until they aren't? That can be a jarring user experience. Do we wait on the Service B event until responding to the client request? That seems highly coupled and inefficient.
I'm genuinely confused as to how to solve this, and it's hard to find good practical solutions online to the problems real apps run into.
I suppose the front end could be smart enough to know "we haven't received an ack from Service B, make sure that record has a spinner/a processing state on it".
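As a rough sketch of that idea (not anyone's actual implementation): assume Service A's write response returns a version number and Service B's read endpoint exposes the version it has processed, so the client can keep the record in a "processing" state until B's copy catches up. The endpoints and field names below are invented for illustration:

```typescript
// Keep a record "pending" until the read-side copy catches up.
type TrackedRecord = { id: string; version: number; pending?: boolean };

async function updateAndTrack(
  record: TrackedRecord,
  changes: object
): Promise<TrackedRecord> {
  // 1. Write through Service A and remember the version it assigned.
  const resp = await fetch(`/service-a/records/${record.id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(changes),
  });
  const { version } = await resp.json();

  // 2. Poll Service B (with backoff) until it reports a version >= the one
  //    we just wrote; until then the UI shows a spinner / processing state.
  for (let delay = 200; delay <= 5000; delay *= 2) {
    const check = await fetch(`/service-b/records/${record.id}`);
    const copy: TrackedRecord = await check.json();
    if (copy.version >= version) {
      return { ...copy, pending: false }; // B has caught up; clear the spinner
    }
    await new Promise((r) => setTimeout(r, delay));
  }
  return { ...record, version, pending: true }; // still pending; keep the spinner
}
```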
You use eventing only when eventual consistency is acceptable. In your scenario, it sounds like it is not. So then you should use synchronous communication to ensure the expected experience is met. However, that also means that now you can't do stuff with service B without service A being up. So you're trading user experience against resiliency.
Also, you should check your domains and bounded contexts and reevaluate whether A and B are actually different services. They might still legitimately be separate. Just something to check.
Some people advocate that microservices own their data and only provide it through an API. In this scenario, Service B would need to query Service A for the authoritative copy of the record. I think the standard way to deal with the query and network time is, yes, to wait until Service A provides the data and time out if it takes "too long".
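A hedged sketch of that pattern from Service B's side, with a made-up internal URL and a plain fetch plus AbortController timeout standing in for whatever HTTP client you actually use:

```typescript
// Service B fetches the authoritative record from Service A,
// giving up after a timeout.
async function fetchAuthoritativeRecord(
  id: string,
  timeoutMs = 2000
): Promise<unknown> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const resp = await fetch(`http://service-a.internal/records/${id}`, {
      signal: controller.signal,
    });
    if (!resp.ok) throw new Error(`Service A responded with ${resp.status}`);
    return await resp.json();
  } finally {
    clearTimeout(timer);
  }
}
```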
At that point your question is about optimizing on top of the usual architecture, which is hopefully an infrequent enough source of pain that the optimization is worth its cost. I could imagine some clever caching, Service A and Service B both subscribing to the same source of events for the data in question, or just combining Service A and B into one component.
I would create the account directly from the initial call and return its ID, then publish an "account created" message. Any other services could receive the message and perform some action such as sending a welcome email or doing some analytics.
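A minimal sketch of that flow, assuming an Express handler with an in-memory store and a console-logging publish() as stand-ins for a real database and message broker:

```typescript
import express from "express";
import { randomUUID } from "crypto";

// Placeholder store and broker client; swap in a real database and message bus.
const accounts = new Map<string, { id: string; email: string }>();
async function publish(topic: string, payload: object): Promise<void> {
  console.log(`publish ${topic}`, payload);
}

const app = express();
app.use(express.json());

app.post("/accounts", async (req, res) => {
  // 1. Create the account synchronously and assign its id here.
  const account = { id: randomUUID(), email: req.body.email };
  accounts.set(account.id, account);

  // 2. Respond to the client immediately with the id it was waiting for.
  res.status(201).json({ id: account.id });

  // 3. Publish the event; other services (welcome email, analytics)
  //    react to it on their own schedule.
  await publish("account.created", account);
});

app.listen(3000);
```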