This is interesting, especially since I just spent the last two weeks setting up a boilerplate for a universal react/redux SPA on spec for a new client. I enjoy the flexibility but the need to develop a deep working knowledge of several independent libraries, transpilers, and build tool configuration files (each of which has several competing options with their own way of doing things) just to get to hello world is cost-prohibitive for most people, I'm sure. At the same time, I'm hesitant to go "all in" on a stack that I haven't heavily researched myself. If the developers are reading, can you go into some details about how you handle routing and data stores? Are you using off the shelf libraries or have you rolled your own?
I'd like to second this by suggesting the docs include a recommendation on how to incorporate this within an existing React app. Apologies if this is available within the Yeoman generator but if so it wasn't obvious from the docs.
FWIW, I've found the isomorphic render examples within the Megaboilerplate examples for React [0] to be instructive.
I've always used React Router for routing. It's super flexible and plays very nicely with Redux and server-side rendering. In terms of syncing up state to views, you might want to look into 'redial' and 'react-redux', which provide higher-order components that ensure your React components can be injected with the necessary state.
This is exactly what I'm talking about though, how many different libraries are there for routing alone? How can anyone make a reasonably informed choice between them? It's madness!
I want a website that rates packages in npm with more possibly useful heuristics:
num dependencies (shallow, deep)
total size of node_modules
installed size (correct use of npmignore etc)
code quality (McCabe, jslint, jshint)
typescript/flow/ jsdoc string types etc
test coverage
frequency of releases
documentation coverage
documentation text analysis: pretentiousness, overly laconic, bro speak
has a F-ing README so we could at least have some idea what the hell it is
number of blog and link references
number of authors
reputation of authors based on their other packages
open/closed issues
Make the stats available so others can build indices and ranking algorithms based on these.
Time since last commit, as well as the average size and type of the last few commits, would also be useful to help ensure meaningful features or bug fixes are actually being made.
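A ranking index over stats like these could start as a simple weighted sum (toy sketch; the metric names and weights are invented for illustration):

```javascript
// Combine per-package heuristics into a single score so packages can be
// ranked. A real index would tune weights empirically; negative weights
// penalize undesirable metrics (e.g. lots of open issues).
function scorePackage(metrics, weights) {
  return Object.keys(weights).reduce(
    (score, key) => score + weights[key] * (metrics[key] || 0),
    0
  );
}
```

Publishing the raw stats, as suggested, would let anyone plug in their own weights.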
See if others are using it, read the documentation, read the issues list, and/or read the source code. React-router-redux only has 216 lines of code! (Not counting tests, etc.)
Yes, this is basically my heuristic too. The part that's maddening is the low signal to noise ratio. It might take you six hours of googling to realize the library you need is called cthulhu-suspenders or something. You might have been able to write your own module in that time, then upload it to NPM, thus perpetuating the vicious cycle. :)
IDK, I'm hoping that there are enough people who don't check stars and choose a repo simply because it was the first one they found or because they think it works better with their existing code or for other reasons. And honestly there are many times where I've been that person.
Make a decision and run with it. Be confident enough in yourself or your team that you can pivot off that decision if it ends up being the wrong one. Often before you have enough information to make the decision at a level of precision that makes you completely comfortable.
I'm coming up on a decade in web application development and I feel like this is one of the most critical skills. It also took me an embarrassingly long time to develop. I think engineering, in allowing transparency downward into each level of the "stack", engenders the opposite in its practitioners.
I wanted to transition from browserify to webpack and test that sweet redux hot reloading. I mostly got webpack working, but I never understood with certainty which hot module reloader was the current one. It seemed everything on GitHub was deprecated and any boilerplate was outdated (one month old? Forget it).
That ecosystem can be impressive but it's madness to keep up with it.
Picking out libraries, for some developers I work with, has become the same as shopping for a TV at Best Buy. They try to get the one with the bestest of everything, and just end up with a POS that no one wants to watch TV on.
What can ya do? This is where we find ourselves today. It's a mess. But what if I told you there was another way...?
For routing of URL paths to "page" objects we use Yahoo's `routr`.
We've tried to keep data store selection out of React Server core. There are so many good options in the React ecosystem that picking one seems limiting.
The data bindings that the framework cares about are at the http request level. We have a wrapper around `superagent` that manages transfer of response data from the server to the browser. So, when the client controller wakes up in the browser and tries to make the same requests that the server just made while _it_ was rendering the page, the data is already present and the requests are short-circuited.
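Conceptually, that handoff might be sketched like this (a toy illustration with invented names, not react-server's actual wrapper):

```javascript
// The server records every response it fetched while rendering,
// serializes the lot into the page, and the browser-side wrapper answers
// matching requests from that cache instead of re-fetching.
function makeDataCache() {
  const cache = {};
  return {
    record(url, data) { cache[url] = data; },          // server side, during render
    dehydrate() { return JSON.stringify(cache); },      // embedded in the HTML
    rehydrate(json) { Object.assign(cache, JSON.parse(json)); }, // browser, on wake-up
    get(url) { return cache[url]; },                    // short-circuits the request
  };
}
```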
If you want a frontend framework that does the gluing for you and comes with a clean CLI to set up new projects and generate views/controllers etc., take a good look at Ember.
You'll get a Hello World up in 30 seconds, upgrades are painless and you'll get the same rendering speed as with React thanks to the glimmer engine.
And thanks to the standardisation, all Ember apps have the same code and logical structure, which makes it easier to share code and for developers to dive into new projects.
No, two weeks to take a deep dive into all the associated dependencies for doing a universal SPA. Learning the details of webpack (which requires more googling than reading the documentation), auditioning several flux implementations (and there's relay if you want to go down the graphql rabbit hole), routing libraries, and server page rendering configurations. There's no single way to do this and everybody has their own boilerplate set up on github. You can just get started with one of those, but if you don't understand what's going on under the hood, you'll be up the creek when something breaks and you're on the clock. I'm extremely uncomfortable working with something professionally until I have a good understanding of the source at least at the highest level of abstraction. Unfortunately this seems to be the only way to survive in the NPM ecosystem, pulling down and spending an hour reading the source of every variation of large dependency you might need. Hence the two weeks. Not that it's such a bad way to do things, reading the source. It's just a lot slower than people generally expect web development to take these days.
I don't think so. I'm a somewhat-novice React dev, and for me, even in my first projects, setting stuff up was about a day. It's true, though, that during the next few weeks you'll have to tweak things while you start to work on "real code".
I imagine your parent didn't literally spend two whole weeks just on that, but I might be wrong if he's very thorough.
In any case, the configuration of all the parts of the "canonical react stack" (let's not forget, react could be used without all of this junk too) is a pain, but I hope once you get the hang of it you can just reuse and don't think about it so much.
I'd say it's quite high. However, I'd echo that if one is coming at building a boilerplate while simultaneously wanting to learn and understand all of the concepts in the stack, I could see the boilerplate taking more than an hour or two.
(For example, I've seen some that implement some custom Redux middleware simply so that there's a conceptual reference right in the boilerplate as to how one would be done).
Are you thinking of getting into it? I heavily advise checking React out. I think it's fantastic!
"Working on spec" is a term used in freelancing to refer to unpaid work done as preparation for paid work. Usually this involves doing some type of proof of concept for a client in order to have your bid accepted, and is generally frowned upon by freelancers. In this case, I just wanted to get to know the tech stack in depth for my own gratification, and figured it could be used in other projects as well.
I'm totally on board with the "contributions welcome" response from open source maintainers, but when it's the only response to questions about documentation it's pretty off-putting. This is your project, how could you expect a new contributor to add docs without your input? I get that good documentation is hard, and takes time, but if you want your project to succeed I think it's almost mandatory for you to write these docs (at least the initial version!) yourself.
I have to agree here. There's nobody better to explain how something really works than the person who wrote it. The rest of us are just fumbling around guessing.
As a dev and a creative person, I understand the desire to get something out there. But documentation and tutorials aren't just something that's nice to have. They are your marketing tool, your adoption driver, and the way to create educated advocates for your project. They are as valuable to your project's success as a splashy "getting started" web site is.
That process of you fumbling around and guessing is actually really extraordinarily helpful. The things a contributor might think to do or to try, and the things that a newcomer might think to do or to try, are completely different. No documentation survives its first encounter with a real newcomer. We can, and will, continue working on the onboarding experience, but we'll need real users to roll up their sleeves and wade into it, and the first few of them will have questions we didn't anticipate, and make assumptions we've never considered, and try use cases we've never thought of. Creating good documentation is something that newcomers and contributors need to collaborate on.
I completely agree. If you take a look at the closed PRs with the label documentation (https://github.com/redfin/react-server/pulls?utf8=%E2%9C%93&...) I hope you get the sense that this is something that I'm thinking about a lot, and if you look at the issues open with the same label (https://github.com/redfin/react-server/issues?q=is%3Aissue+i...), that it's something I'm working on actively. There are lots of ways to document a healthy project, from tests to tutorials to docs sites to good examples, and we've been working hard on markdown docs, tests, and examples because they allow us to scale to more contributors faster.

Striking a balance between kinds of documentation is hard, and it's even harder to strike a balance between documentation and the rest of the project's needs. This is a technology that we run in production, so we have to make sure that it is fast and stable and reliable, and sometimes that means giving short shrift to other priorities. I'm not trying to absolve us of any shortcomings in our documentation -- we have a lot to do in the coming years to improve this project, and we need to spend time writing tutorials and user-testing the onboarding experience for new devs that want to start up their first React Server project -- but I do want you to understand where we're coming from when we say "contributions welcome." It's just pragmatic: we're working on it as hard and as fast as we can, it's a priority I continue to focus on, and if you want it faster, the best way is to chip in.
OK, you definitely get it :) Love what you wrote below:
"No documentation survives its first encounter with a real newcomer."
I hope it didn't seem like I was accusing you of negligence – the docs you have are an awesome start, and I fully appreciate the difficulty of the problem. Best of luck with things going forward, looks like a really neat project.
gigabo's account has been rate limited, but he wants to say:
Oh, hey didn't mean to sound flippant there! We're definitely planning to make tutorials, and we're constantly trying to improve our docs. I guess my point is we want this to happen faster, but we're still a pretty small project with a small team of core contributors. So we're thrilled when we're able to bring in new contributors to help out!
I don't think that's true. It's certainly possible for somebody else to run with tutorial creation, especially if the project gets attention / exposure before the tutorials are ready.
If the response "Contributions welcome" is generic, it's because the complaint is as well.
"Contributions welcome" is often echoed, but I like to remind colleagues that it is not the proper mentality to foster a real community. You can't just dump code on GitHub and assume the world is going to start firing off PRs to do your dirty work. You have to assume that no one else cares about your project, but being open to changes in code and leadership will make your project more robust to other use cases.
How does it handle fetching data from a path on the same origin? For instance, I only need to fetch('/api/users.json') on a certain page (/users). This means that it can either be hydrated in the initial state when performing a full page load of /users or needs to be fetched (using xhr/fetch) when navigating to that page from another page on the site (which shouldn't require a full page load).
So how exactly does the server perform that same fetch when attempting to hydrate the full page without actually making an HTTP request against itself?
The other responses seem to be answering how this would work once the hydration is occurring on the client side. I understood the question to be `Does the http request layer support doing API requests to routes which are on the same webserver as react-server`.
There doesn't appear to be special handling for this use case. However, they use SuperAgent (https://github.com/visionmedia/superagent) for http requests and I expect implementing the behavior that you're describing is relatively easy with their plugin system. Specifically I'd bet that plugins previously designed for testing could be used to accomplish your goal (superagent-mock or superagent-mocker).
What I'd like to be able to do is not incur an additional HTTP request against my own server. Does SuperAgent have a way to avoid opening a new socket when making a request against yourself?
Yeah, I understood what you wanted. I may not have been clear in my line of thinking. My understanding is that react-server does not support this. However, I do not believe that it would be difficult to add support using the testing plugins built for SuperAgent.
Imagine writing tests with mocked requests. Now apply that same logic to all calls to fetch() on the server. Does that make sense?
So to the front end when fetch() is called, it actually makes the request. On the server, when fetch() is called it hits a mock plugin which fetches "mocked" data which is just fetching data from whatever local store you normally would on your backend be it a JSON blob on the file system, a database or an in memory cache.
That's the opposite of what the question is asking for. They want to call fetch('/api/foo/bar') but on the client it does an HTTP call while on the server it recognizes that /api* is on the local system and therefore invoke that local route instead of doing an unnecessary http call.
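A minimal sketch of that dispatch (all names invented; react-server doesn't document this behavior):

```javascript
// A fetch wrapper that, on the server, recognizes same-origin API paths
// and invokes the local route handler in-process instead of making an
// HTTP request against itself. On the client it always goes over HTTP.
function makeUniversalFetch(localRoutes, httpFetch, isServer) {
  return function fetch(path) {
    if (isServer && localRoutes[path]) {
      // Skip the network entirely: call the route handler directly.
      return Promise.resolve(localRoutes[path]());
    }
    return httpFetch(path); // browser, or a path we don't serve locally
  };
}
```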
That's what I'm apparently failing to describe how to accomplish :).
I'd like to know the answer to this too. What I've thought so far is that I just call the function that makes db calls or other operation on the server side but this is clearly not the way do in client side if the app was to be universal.
In my experience [1], you'd probably just use an if statement that does something different when you're on the server. That's totally fine and is still a universal app; universal apps don't have to take every exact same code path on the client and server, they just have to run the same code base.
In React, people typically use the ExecutionEnvironment module [2] for this:
if (ExecutionEnvironment.canUseDOM) {
// make client side request
} else {
// make direct db request
}
...but you could also just do some simpler check, like see if `window` exists, or make up your own `IS_SERVER` global, etc.
Next you'll wonder: won't that still ship the server-side code to the browser, even if it's not run, and pull in any server-only modules you're importing, e.g. database access stuff? No: you'd fix this with (for example) webpack's `DefinePlugin` [3], telling it that `ExecutionEnvironment.canUseDOM` should always be `true` wherever it occurs in your client-side JS bundle, and dead code elimination will then rip out those server-only `else` branches before that code gets shipped to the browser.
Or a similar setup, like wwalser hinted at: write 2 versions of your 'request' module: one for the client, and one for the server. Tell webpack it should point to the client-side one when you generate your client JS bundle. People use webpack because it lets you do all kinds of overrides like this.
[1]: I work at Formidable, we do a ton of React for big companies.
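For reference, the two webpack techniques mentioned above might look roughly like this in the client bundle's config (the file paths and the 'app/request' alias are invented for illustration):

```javascript
const path = require('path');
const webpack = require('webpack');

module.exports = {
  plugins: [
    // Hard-code the environment check so dead code elimination can strip
    // server-only branches out of the browser bundle.
    new webpack.DefinePlugin({
      'ExecutionEnvironment.canUseDOM': JSON.stringify(true),
    }),
  ],
  resolve: {
    alias: {
      // When building the browser bundle, imports of 'app/request'
      // resolve to the client-side implementation.
      'app/request': path.resolve(__dirname, 'src/request.client.js'),
    },
  },
};
```

The server build would simply omit the plugin and the alias (or alias to a server-side module instead).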
I'm actually working on an isomorphic app. My approach is to simply create a PersistenceService interface, with methods returning promises, and to make two classes implementing it, SqlPersistenceService and AjaxPersistenceService. You inject the correct instance when you create the server and when you create the client. No if needed, and as I'm using Typescript, the compiler ensures both SqlPersistenceService and AjaxPersistenceService will keep following the contract of PersistenceService in the future.
Hey, I'm a dev at Redfin, and I've done something like this (with Gigabo's help). It's possible, but it's not well-supported yet.
My team owns one special-snowflake API in React-Server. We want the API to be reachable from the client via HTTP, but also want to execute the code directly during server-side rendering, with no HTTP call. It's your exact use-case.
In order to do that, we
1. Detect if the API is invoked server-side.
2. Tell Superagent* to tell the client "hey, if you want data for $API_URL, don't make an HTTP request; the response will come inline in the page's HTML response."
3. Invoke the API code directly.
4. When the API response is ready, serialize it, and tell Superagent to pass it to the client.
We don't do this kind of thing frequently, so we haven't built any graceful tooling for it.
* I _think_ this is Superagent: https://visionmedia.github.io/superagent/ Somehow, we use it in a way that notifies the client of what HTTP requests the server is performing on its behalf; not sure if that's stock Superagent or if we added some magic to it.
gigabo's account has been rate limited, but he wants to say:
There's no data caching across pages. On first page load the responses from data requests for that page are transferred to the browser and rehydrated. For a client-side transition to another page there are two options: Fetch the data from the server as a bundle (this just works... one of the cooler features of React Server, I think) or make the individual requests via xhr from the browser. Both have their place.
If this works as advertised, this may prove to be a very useful project, that can replace numerous homegrown, half-complete implementations strewn about the internet.
This boilerplate served as a base for a project I worked on, which I then extracted out into an npm module for easier maintenance. You might find it helpful: https://github.com/bdefore/universal-redux
That said, react-server attempts to solve the problem in more of a framework fashion. I hope to look further into it.
I see a bunch of failed attempts in console to connect to slack websocket. Is everything looking ok for you?
After blocking 2 iframes with adblocker I could finally inspect what was going on :)
Anyway, I can definitely feel that it is fast and seamless and worth a deeper look! In the meantime, prefetching all the content in docs or source views upon load generates quite a few requests and might explain your scaling issues. Would you mind sharing statistics on the number of users and the hardware behind it?
That's a result of the Slack badge at the bottom of the page trying to use a WebSocket to update its counts. We discovered getting WebSockets working through ELB was not trivial and have decided instead to disable the real-time updates to the badge. The changes have been made for that, but sadly, not deployed in time for your visit.
Anyway, the errors in the console aren't impacting the behavior of the site (the badge is falling back to polling to get updates, for instance). They are ugly, though, and will be gone in a future deployment.
Based on the documentation and the design principles this seems like a really promising framework.
The data hydration, incremental HTML delivery and incremental code loading are really, really important for creating web apps that aren't load time hogs. Great to see that they were unopinionated about data fetching, too. That's one of the things that has made it difficult to drop Relay into existing applications.
Is this used in production? Are there any performance numbers that you can share?
"We’ve been using it here at Redfin in production for over a year and it serves the three highest traffic pages on the site. We’re serving 1 billion requests a month from our React Server instance; hundreds of requests per second during peak hours."
Would be useful if this project explained the difference between React and ReactServer. It seems they are as similar as Java and Javascript.
Instead of the render() method in React to output JSX, it appears that ReactServer uses getElements() for a similar purpose. So the entire model and object lifecycle is probably different as well?
React Server uses React; the page that has getElements is sort of a meta-React-component that has other specified behavior, as well as returning a React component to be mounted. So it renders the first page you visit on the server (for fast loading), then all subsequent pages are rendered on the client.
Each page is split into sections.
Each section may wait for async data and API responses.
When all the data arrives, the section is rendered as an HTML string.
The server streams each section's static HTML to the client as soon as it's ready, and after all prior sections are streamed.
The server also streams the async data to the browser. This avoids the latency of the client downloading some HTML, then downloading some JS, then making the requisite API calls for the page's data. (Think of it as a hacky version of HTTP Server Push.)
On the client-side, React and your JS are downloaded, React will recycle as much of the static DOM as possible (writing isomorphic JS isn't always easy), then take over and do its thing.
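The in-order streaming described above can be sketched as follows (illustrative only, not react-server's internals):

```javascript
// Stream each section's HTML in document order: a section is written as
// soon as it AND every section before it have resolved, so a slow early
// section holds back later ones but fast sections never arrive out of order.
function streamSections(sectionPromises, write) {
  return sectionPromises.reduce(
    (prev, sectionPromise) =>
      prev.then(() => sectionPromise).then(html => write(html)),
    Promise.resolve()
  );
}
```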
> Each page is split into sections. Each section may wait for async data and API responses. When all the data arrives, the section is rendered as an HTML string.
Where are these waiting sections? On the browser client or the server?
Is the React code that is to be rendered on the browser served up automatically by ReactServer?
I'm sure it's a wonderful framework, but a diagram is really needed to understand any of this.
Good question. No, the data is downloaded only once.
The server tells the client what requests it's making on the client's behalf.
If the client's JS tries to make a request for a URL that's already in-progress, the client's React-Server code will skip the request and return a promise of the server's streamed response instead.
If the client's JS tries to make a request for something, and the server IS NOT already handling it, then the client will send out an HTTP request, and React-Server will step out of the way.
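That skip-and-return-a-promise behavior can be sketched roughly like this (a simplification with invented names, not React-Server's actual API):

```javascript
// Request de-duplication: if the server already announced a request for
// this URL (or a prior caller started one), reuse the pending promise
// instead of opening a new HTTP request.
const inFlight = new Map();

function cachedFetch(url, doFetch) {
  if (inFlight.has(url)) {
    return inFlight.get(url); // already being handled; step out of the way
  }
  const p = doFetch(url); // nothing in flight: fall through to real HTTP
  inFlight.set(url, p);
  return p;
}
```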
Humorously this looks a bit like Apache Wicket, a Java-based client/server UI framework which has been around for about 10 years: https://wicket.apache.org
I just recently made the switch into the Web Dev world (coming from C++/Python, desktop world). Since I knew of Django, I've started using it as the 'Backend/Server' part of my Web app dev stack. Basically using Django to render minimal React/HTML/JS/CSS to bootstrap my single page Web app.
Wondering: what advantage would I get from using react-server instead of Django (aside from using JS across the board)?
I have not looked at how this framework does it, but I have built a nodejs server that does similar things myself.
Some advantages:
- The server can render the full page in html for the client, which means the website is viewable even without js (or before js has loaded, for example on slow network)
- The server can preload all data the client needs. The webapp might need to fetch data from 3 API endpoints. Having a server do this and pass the results to the client on load is much more efficient. If the client (browser) loads it, the following happens: html loaded -> js loaded -> start calling APIs. If the server does it, the client instantly has access to this data.
Rendering all the html can be a bit heavy on a weak server though, but you get the advantage that you quickly notice bottlenecks in your rendering. You can also implement some caching on your server to make it faster.
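That server-side preload pattern is essentially a parallel fetch before render; a sketch with invented endpoint names:

```javascript
// Fetch all the API data the page needs in parallel on the server, then
// embed the result in the page so the client skips those round trips
// (avoiding the html-loaded -> js-loaded -> start-calling-APIs waterfall).
async function preloadData(fetchJson) {
  const [user, listings, notifications] = await Promise.all([
    fetchJson('/api/user'),
    fetchJson('/api/listings'),
    fetchJson('/api/notifications'),
  ]);
  return { user, listings, notifications };
}
```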
I appreciate your answer, which seems to apply to the general concept of server rendering. But Django can also do all what you mention. My question was: what can React-Server do that Django could not? I hope this clarifies my original interrogation.
The difference is that a "react server" can actually render the dynamic react content. I don't know of a way to do this in django, unless you are able to run javascript code somehow. If you do this, you are kind of making your django app a "react server" though, since it renders all of your react components.
What usually happens in a react app is that the server delivers static content (html, js, css). When everything is loaded, react starts working on the browser and renders / fetches data.
If the server that delivers the html doesn't just deliver a static page, but actually fetches data and renders it first, the browser can instantly display the data and react is smart enough to know that it doesn't even need to do a new render, because the components are already rendered to html.
It's just my opinion, but I think this architecture is only advantageous when you need to build something with a complex user-facing CMS. Think lots of forms and controls with real-time feedback, optimistic updating, drag and drop sorting, etc. Across many pages with many object schemas. If this is the case, you'll save time by setting up a flexible module system with predictable data flow. Users will spend less time waiting for pages to load and edits to save. If this is not what you're building, stick with traditional server-side templates and a little bit of JS on the front end. I certainly wouldn't use this stack on a corporate marketing site, for example.
The company that I was consulting for tried to get Spring and React to play nice via Nashorn, but ended up scrapping the idea 6 weeks in because of performance issues and not enough developers knowing the stack. Nashorn was missing a lot of essentials to make this easy out of the box. So looking at this is a breath of fresh air.
Correct me if I'm mistaken, but Hypernova has the javascript rendering siloed as a separate service. Which can be really useful in some cases (especially if you want to render React components from a Rails app).
But if you want it all in one place (reduced complexity and overhead), React Server looks pretty promising.
React-server seems to roll in additional opinions (sane default) about how to go about creating a high performance server-side rendered universal application.
My evaluation is that react-server looks promising for prototyping and first versions of an application where being a monolith is a reasonable approach. Hypernova seems like a good fit for organizations that are already building microservices and want to eke out some additional performance from the initial render of their SPAs.
For some reason this week I stumbled over many great React.js articles, so I started a collection here http://deepreact.com mostly to save&share things with friends, since a lot of us are getting deeper into React now.
The website itself is meant to serve as a demo. :)
When you land on the site the first page is rendered by the server. Then the client controller wakes up and subsequent page views ask the server for data, but render in the browser. That's what we mean by "seamless transitions".
"seamless transitions" in the web design world is often taken to mean "transitions" in the sense of CSS animations, i.e. instead of hard page refreshes elements smoothly morph into their positions in the next page state. See eg. Google's Material Design or Apple's iPhone UI. This is a big, largely-unsolved problem in HTML5, at least in making them pervasive, performant on mobile, and easy for developers. Advertising this may lead to some unwarranted excitement and subsequent disappointment, since the website doesn't really have transitions at all of this sort.
Oh, hadn't considered that interpretation. Our full project description on GitHub is "React framework with server render for blazing fast page load and seamless transitions between pages in the browser". The tag line on the website is shortened from that. I think the full version is a little less ambiguous, but it's a little too wordy in the context of the site.
Will have to think about a better tag line... Thanks!
I share nostrademons's interpretation. I'd expect the first page rendered on the server and the rest rendered on the client in any isomorphic framework. When I hear "transitions", I expect elements morphing between assets, à la neon-animated-pages.
All pages are rendered server-side by default. React Server sends each server-side rendered element as soon as it, and every element before it, are ready, up to a timeout, at which point it sends the late arrivals as they trail in. On the client side, React Server picks up where the server left off, to render client side code as the user interacts with the page.
are you asking to have a static site pre-built from React?
I'd assume when he says "first page" he means any first page but generally from there the rest becomes dynamic react. Depending on how you load your data it should be much quicker to render entirely in the browser once the page has loaded.
Well, I was thinking you could have all pages server-side rendered in the case of a low-power mobile device, and leave the hybrid approach for desktop browsers.
The user-agent string could be used to decide which approach to use.
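A naive sketch of that check (the regex is a rough illustration, not a robust device detector):

```javascript
// Decide per-request whether to serve fully server-rendered pages
// (low-power mobile devices) or the hybrid approach (desktop browsers).
function shouldFullyServerRender(userAgent) {
  return /Mobile|Android|iPhone|iPad/i.test(userAgent || '');
}
```

In practice you'd likely want a maintained user-agent parsing library rather than a hand-rolled regex.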
Are there any numbers comparing pre-rendered React versus React communicating with a JSON Api?
It seems to put a lot more stress on the server, which can negate the (theoretical) speed improvements.
Not sure, but the time cost is the latency of loading an almost-empty index.html, loading and parsing your app, _then_ it starts running and (hopefully quickly) makes some API requests, and then it's off and good to go. Subsequent JSON-API-requests-and-then-rerenders should be pretty fast indeed, but those aren't the point :)
You want to server-side render React apps so that while all that happens, the app already works.
This is something that we're thinking about (https://github.com/redfin/react-server/issues/190), but that we just don't have an answer for right now. React Server was built with running two servers in mind, one for your api and React Server as your front-end. I haven't used Relay much, but I think you could run your Relay server as the api server, then make the request from React Server with ReactServerAgent and it should work?
It's the French word for "born." It's often used for maiden names. In this case, we mean "we called it Triton before we called it React Server," which was more important to understand before https://github.com/redfin/react-server/pull/40 merged.
Well page loads are fast, and clicking links within the docs section does feel super fast, but I agree use of the word 'transitions' is confusing (which the author has now addressed elsewhere).
Good opening for a question: what kind of servers were you using and how much traffic did it take to give them problems? (In other words, what's the performance of react like on the server?)
gigabo's account has been rate limited, but he wants to say:
"Well... we made it through the first hour or so of HN traffic on a single t2.medium instance in ec2 before we started seeing errors. We're now on three m4.larges with good head room. Not too bad, I think?
We meant to get cloud front set up before we got this sort of traffic, but glad to be surprised with an early bump. :)"
This is me saying this: I think we are thoroughly over-provisioned now for the load we are seeing, but we wanted to avoid any more hiccups while you all are trying to check it out.
I honestly don't know, what's the actual requests/second for that?
I don't have a good feel for what HN traffic is like. From one post (https://news.ycombinator.com/item?id=8107658), it seems like a few thousand hits/hour over a fraction of a day, but (1) there's a lot of variation, and (2) it doesn't tell you the peak rate.
We were seeing around 450 active users on the site at the peak. We currently have around 220 active users that are generating about two page views per second, peaking to six page views per second. That's about where we were when we scaled up from a single t2.medium instance.
Update on this: looking through metrics at the end of the day yesterday, we realized that the WebSocket-induced errors discussed in https://news.ycombinator.com/item?id=12271043 were having a much bigger operational impact than we initially thought. That's been dealt with and had a clear and immediate impact on the service health.
It would be nice to have another day of similarly high traffic to verify it, but I think most of our scaling up yesterday was to handle the WebSocket issue.