I have used re-frame to great success, as well as vanilla reagent. I've since switched to om-next for a variety of reasons.
I have my concerns about om-next, but since I'm writing a complex side project, overall it's worth it. If I had to ship something tomorrow I'd still go with reagent, but for longer projects, where you can spend more time and get over the learning curve and the alpha/beta-type bugs, om-next is a good fit. I found re-frame to be generally the best way of working with reagent, but not as good once I had lots of chained events and complex queries. I spent a lot of time optimizing around the way reagent/re-frame does things, whereas in om-next I could tweak things more to my needs. Om-next also offers some additional features many people don't need but that fit my use cases well (e.g. the server story, pulling only the data you need, normalizing the client app db).
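To make the "pull only what you need" and normalization points concrete, here's a minimal sketch (the component and key names are made up, not from my project) of how om-next components declare queries and how a data tree gets normalized into a flat client db:

```clojure
(ns example.ui
  (:require [om.next :as om :refer-macros [defui]]))

;; Each component declares exactly the data it needs.
(defui Person
  static om/Ident
  (ident [_ props] [:person/by-id (:db/id props)])
  static om/IQuery
  (query [_] [:db/id :person/name :person/email]))

(defui Root
  static om/IQuery
  (query [_] [{:app/people (om/get-query Person)}]))

;; Normalizing a denormalized response into the flat client app db.
(om/tree->db Root
             {:app/people [{:db/id 1 :person/name "Ada"}
                           {:db/id 2 :person/name "Grace"}]}
             true)
;; => roughly {:app/people [[:person/by-id 1] [:person/by-id 2]]
;;             :person/by-id {1 {:db/id 1 :person/name "Ada"}
;;                            2 {:db/id 2 :person/name "Grace"}}}
```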
To summarize, re-frame is a nice framework and really great for small projects or projects that need to ship today. It works and is the best approach I've seen on top of reagent (and React). Om-next is good for people who can afford to take some risks and have complex requirements but still want a proper framework on top of React. Om-next will make you do a lot more work up front and has a lot more ceremony, but there is a reason it exists. Generally, I have found om-next powerful once you're able to grasp the way it wants you to do things. For people who want something re-frame-like on top of om-next, there is the "untangled" framework. Anyway, I wouldn't recommend om-next or re-frame/reagent over the other; rather, I'd suggest either and match your requirements accordingly.
I first started a prototype project to try it out, using Datomic. With Datomic, most things are entirely automatic.
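By "automatic" I mainly mean the read side: om-next's query syntax is essentially Datomic's pull syntax, so a parser read can often hand the component's query straight to d/pull. A rough sketch (the attribute and lookup-ref names are hypothetical, and it assumes :person/id is a unique attribute):

```clojure
(ns example.parser
  (:require [datomic.api :as d]))

(defn read-person
  "Resolve an om-next read by forwarding its query to Datomic's pull."
  [db person-id om-query]
  ;; om-query might look like [:person/name {:person/address [:address/city]}]
  (d/pull db om-query [:person/id person-id]))
```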
In the second project, I'm using a bit of Datomic, mainly to save UI-related state and small bits of user state. Mostly I use Cassandra, Redis, Kafka, and Elasticsearch. I should add that in this project I'm also using untangled.
I communicate a lot over websockets as well, so between that and Cassandra I have to do a lot of mapping and transformations on my own; however, once you get it all set up, the rest is mostly magic. Most of my reads and writes over the socket go through Kafka consumers and producers, so mapping om-next data there is just standard Clojure, nothing fancy.
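For example, turning an om-next mutation into a Kafka message is just data munging plus a bit of Java interop; something along these lines (topic and field names are made up for illustration, and it assumes a producer configured with string serializers):

```clojure
(ns example.kafka
  (:import (org.apache.kafka.clients.producer KafkaProducer ProducerRecord)))

(defn mutation->message
  "Plain Clojure: reshape an om-next mutation into a message payload."
  [{:keys [key params]}]
  {:event   (name key)
   :payload params
   :ts      (System/currentTimeMillis)})

(defn send-event!
  "Hand the payload to a Kafka producer."
  [^KafkaProducer producer topic mutation]
  (let [msg (mutation->message mutation)]
    (.send producer (ProducerRecord. topic (:event msg) (pr-str msg)))))
```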
When I do query Cassandra directly, the queries by nature need to be somewhat restrictive, which it turns out plays nicely with om-next, since for reads I mostly just need to worry about the "select" columns. Redis I just use for LRU caching and to buffer some video-like blocks of data before the user tries to access them. Elasticsearch is used only for searches and some kinds of filtering, but it gets populated from Cassandra data anyway, so again no problem with om-next.
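Concretely, the flat part of a component's query maps almost directly onto the column list of a restricted CQL select. A small sketch (table and column names are illustrative, and you'd run the resulting string through whatever driver you use):

```clojure
(require '[clojure.string :as str])

(defn om-query->cql
  "Turn a flat om-next query like [:video/id :video/title] into a CQL select.
   Assumes column names match the key names with namespaces dropped."
  [table query]
  (str "SELECT " (str/join ", " (map name query))
       " FROM " table
       " WHERE id = ?"))

(om-query->cql "videos" [:video/id :video/title :video/duration])
;; => "SELECT id, title, duration FROM videos WHERE id = ?"
```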
I'd say the most annoying thing here was, as I hinted, getting it all set up. Things are clearly built for Datomic, but there are integration points all over the place, and using standard Clojure functions to manipulate vectors and maps is easy. If you're not experienced with datalog or Datomic pull syntax, things might be a bit harder at the start.
Regarding API calls, I should add that since I am using untangled, I use the default websocket lib, which is just sente. I have rigged things up with aleph and yada, but I've also played around with a standard Compojure API and other offerings. Long term I'd like to stick with yada, but I've encountered some documented bugs that are annoying, so I've put my focus elsewhere in the short term when needed.
https://github.com/Day8/re-frame