Hacker News
Tremor – The React library to build dashboards fast (tremor.so)
167 points by spansoa on July 28, 2023 | 81 comments



I started a react component library project that has been going for 4 years now and gets about 40k downloads a month.

If it wasn't for my test suite, things would have been a giant mess long ago and I probably would have abandoned it. I can't think of doing a react project without a test suite because over time, any dependencies you have (including on react) will end up breaking things in ways that you don't know. Without those tests, you can't be sure of upgrades... which you're forced to do because your users will want to do things like upgrade react.

So... any project like this... in order for me to ever add it to my own project or depend on it in any way, really needs a test suite.

The test suite for this project is pretty much non-existent, which is a huge bummer and should be a huge red flag for anyone who thinks about using it. You're just going to end up with broken code.


My personal projects generally have a better test suite than what I do for work. First of all, I do it at my own pace, but more importantly - it makes the whole thing possible, as you said.

The nature of a personal project is such that I may not come back to it for months, sometimes even years. The first thing I always do is run the test suite. It's my lodestar. It helps me understand what state the project is in so I can confidently pick up where I had left it.

It is also my requirements documentation - as code.


People are often given timelines at work that don't include testing. As a believer in agile, I generally push back hard to include testing.

I'd rather dole out incomplete MVPs with tests and fewer features than skip tests entirely just to hit crazy timelines with a buggy product. Experience says this almost always has positive long-term results.


Just don't segregate testing: it is integral to producing quality software, so asking to skip it is like asking to skip using a keyboard in an attempt to save time.


Not a react dev, but having a comprehensive test suite is the only reason I feel comfortable pushing code changes and upgrading dependencies. It is difficult at first to build the habit and discipline to always write comprehensive tests for new code, but it is very worth it.


In JS land it is especially important, and honestly one of the least-done things, which is sad and part of why things are always breaking randomly.


I work primarily with data scientists in python, so I understand a similar pain.


The mind boggles on how we are using python for ML/AI... I bet stable diffusion would get the hands right if we had better unit tests. /s


I have yet to see a good frontend test, but I'm waiting to be enlightened. For the most part, it just seems to be things like "if they click this button, make sure this div is gone." That smells like a test double to me.


We have hundreds of tests for our client side projects (web app, mobile app), and they definitely do more than what you describe.

Some are simple (although very few are as simple as your example) and some are complex, depending on the portion of the app being tested. Some are basically integration tests for components/widgets, where it runs it in isolation to test the inputs and outputs. Others are more like traditional unit tests where it's purely testing business logic, API and model interactions (mocked), form validation logic, etc.

Unless you're talking about very simple client apps, there's always stuff that can be tested that will provide significant business value by preventing bugs and broken code from getting to end users.


Could you give an example? Most UI tests I see either a) are trivial/tautological, b) test implementation details, or c) don't actually test the UI, i.e. "what is visually painted on-screen."


Almost none of the e2e tests for https://www.executeprogram.com meet this description. They're all tests like:

  - Visit a lesson page with a code example that returns a string.
  - Enter the correct answer to that code example, but without the quotes around it.
  - Assert that the text "Almost, but the answer is a string so it needs quotes." now exists on the page.
No implementation knowledge, nothing faked, real UI behavior tested. You can test most app behavior like this. E.g., we test our entire billing system like this, making these kinds of real assertions.

(To make sure that we know when actual rendered pixels change unexpectedly, we use Percy, which is a separate topic.)


At work we use snapshot tests to see visual changes to ui. It helps a lot with code review especially with adding edge case data to tight designs.

We use ui tests specifically for testing accessibility, analytics and data passed along during navigation.

We use fixtures for all of this.

None of these tests really help with bugs. What they do help with is:

- Making sure the ui stays really decoupled from the business logic. This helps a lot with getting the data unit tested which does help a lot with bugs.

- Making sure that analytics and accessibility are not forgotten about. This helps a ton when refactoring.

- Making the intent of the code really clear during review.

It takes a bit of practice and being ruthless about removing/fixing stuff that’s not helping. I have found it makes it much easier for new developers to work, review code and understand how the code is organized. YMMV.



We have UI client tests that cover everything from API, the business logic layer, as well as testing any local logic in higher order and UI components.

Some examples:

- At the API layer, it mocks the network layer and tests that, when triggered, it sends the correct request and given a response, handles it correctly (both for success and fail responses, as well as different expected data).

- business logic layer is some of the most important tests, this layer will test things like how data flows through the app and how it might be transformed along the way for use in the UI. This layer generally sits between the API and UI, so it mocks functions in the API stack and tests various responses, tests streams of data. This layer often is your "traditional unit tests" - testing individual functions to ensure they are idempotent, no (unintended) side effects creep into the code, putting in X gets back Y, etc.

- Higher order UI components - these are components that, while they may have some UI (yah, I know, not purely higher order), they are generally the component that acts as an intermediary between the business logic and more pure UI (display) components. The tests here would mock any calls into the business logic, make sure it properly requests the needed data and then check to see that any local logic is correct. This layer may be handling route params or query params, and adjusting the data request based on that or other data stored in memory or in the app. It may be handling collection of form data and giving it to a business logic function, and handling any errors that may send back, and the tests will check that. It often has functions that are triggered by actions in display components, and rather than try to wire components together in tests (we do have some e2e tests that check at that level), it's better to test those functions in isolation at this layer.

- Display UI components - I don't want to assume, but based on the comments, this is likely what you have thought of when you think of UI testing. At this layer, there should be much less logic in each component. It usually has data inputs, and then renders those into a template. We don't "test the framework" - there's no sense testing that an input prop is "set" correctly. But if the UI component does anything with that prop to prepare it for display, we'll test that. Or if it has UI actions - this is one place we will sometimes tie to the DOM and test things like click events, but most of the time that isn't really necessary (again, we make an assumption that if we properly wire up a click handler, it will fire on click). So many times we will just test the handlers themselves, if they do any logic prior to emitting a value back out to a higher order component or to the business logic layer. We try to keep these simple, but there are sometimes tests here that check that expected DOM elements are visible based on the state of input data, or that the expected text is visible based on user action (i.e. did a panel display text when the user clicked it open). This is the layer where we have to be fairly careful not to write "lazy" tests (testing that the framework rendered text assigned to a variable and interpolated into the template isn't super valuable).

If it matters (and I don't think it should, because the practice of writing good tests, even for client apps, isn't specific to language/framework) our primary apps right now are written in Flutter and Angular. We also use or have used Swift and Kotlin (iOS/Android respectively), Java, React (little bit of React Native but quickly abandoned in favor of Flutter), and Ember (had a pretty big test suite for an Ember app, since shut down though). And, despite all of the above, most of our logic does not actually reside on the client side but rather lives in the API/backend and we are mindful to try to keep as much logic there as possible. And we have good test coverage there as well. Of course, for all our code, the coverage could always be better.

Anyways, I can't really pull code samples as the apps are proprietary (and this is getting long enough as it is), but I hope that answers your question and helps illustrate at least why I believe there can be significant value in client side tests.


> I have yet to see a good frontend test

You haven't written one yourself? Is your theory that since you can't write one, they aren't possible?

All my tests are good enough to ensure that if I upgrade a library or change underlying functionality, something breaks if the output is not expected. That is my primary concern and that is what I test for.


No, that's not what I meant. I mean tests are great on the backend, but I've never seen them prevent a bug on the frontend, especially in React where it's usually obvious what effect your changes will have. In fact, many frontend tests don't even truly test the UI.



> disables input when disabled is passed

That's exactly the kind of test double I'm talking about


And? There was an actual bug filed around that and the issue was resolved with a test that ensures it doesn't happen again. That is all I care about. Make sure we don't make the same mistake twice and that over time things continue to work with new versions.


Can you elaborate on what’s problematic with this test? Testing that the component does what expected based on input is telling me the component is working as expected.


The developer who forgets to make disabled work will also forget to write this kind of test for it. Once the behaviour is added, it'll never be randomly removed again, so the test isn't needed then. It looks nice but never helped find a bug.


Part of the complexity of integrating a form library with a ux library is passing all of the correct properties around between the two. In this case, I wasn't doing that correctly and it resulted in a bug where disabled was not being set correctly. Someone filed a bug. The bug was fixed and a test was written to ensure that this doesn't happen again in the future.

You can read the history here: https://github.com/lookfirst/mui-rff/issues/455

If you're working with people who randomly 'forget' things while they are doing development, then I guarantee that you're working with people who also write buggy code. In fact, in this case, it was ME who wrote that buggy code. I own it. Not all code gets 100% coverage and sometimes things get missed. That is ok. What is not ok is skipping tests because you might forget something and therefore think it isn't worth writing tests at all.

I consider buggy code the act of developers writing the code at least 2x instead of 1x. If you or your company is paying someone $X a year to write code once and they are actually writing code more than once, then I would highly suggest you look for new people to work with because that is a terrible return on investment.

If your developers are writing tests, along with their code, then the code is far more likely to be correct and better thought out and less buggy than code that was just hand tested as they developed it. Speaking of that 2x example, I'd rather pay someone 2x the amount of time to write code, with tests, than the other way around.


> if you're working with people who randomly 'forget' things while they are doing development, then I guarantee that you're working with people who also write buggy code. In fact, in this case, it was ME who wrote that buggy code. I own it.

We all write bugs, we're human and it's hard to think of everything all of the time.

My point was, when we make a thinking error that causes us to write a bug, that same thinking error means we also don't write that test that could find it. If you had thought of writing a test for the disabled thing when you wrote the code, you also would have written it correctly because you would have had that case in mind then.

Now you have a test, but only because someone reported the bug and you fixed it. At this point it's almost certainly not useful anymore.

I believe in automated tests, but for tricky logic mostly.


If you practice test-driven development, the mindset is slightly different.

Humans can't think of everything all the time - TDD guides you to think about one thing at a time.

No one is sitting there adding a disabled prop on their component that then disables an input for no reason. You had intent when writing it. It's possible that a co-worker distracted you, or you went for a break, or you went down another rabbit hole, and ended up skipping disabling the input based on the prop.

If you're practicing TDD, you would have written a failing test before creating the disabled prop. That test will continue to fail until the productive code has been written. It helps to protect against those thinking errors in the first place.


I am all about TDD, but in this case, I'm integrating two third party libraries that have many many features.

There is almost no way to even know what features they have, especially over time. Imagine that MUI added disabled in a new release and now my project isn't implementing it during an upgrade. TDD wouldn't have caught that.


An example where it's helped me personally is when migrating from Angular Material to Ant Design. The test cases gave me confidence the expected behaviour was preserved.


Very good example. Tests are all about 'delta'. Changes over time. They bring confidence over delta.


I know how to make tests for _logic_, that's usually in TypeScript functions (e.g., React hooks).

But for components, that call some of those hooks and then render some JSX, I've never been able to write a test that found a bug later on. Every single time one of them failed, either the change made was so large that the whole build failed to compile anyway, or it was because there was new desired behaviour and the test needed changing, not the code.

If a type of test doesn't help find bugs, it's a waste of space.


I've given an example of a whole repository, that is now 4 years old, full of the types of tests you're talking about. It isn't that hard.

If the desired behavior is changing, then the test should fail and yes, you need to rewrite it. That's how testing works.


It's not hard, but they just make the build of our monorepo take longer and longer without finding actual bugs, in my experience. I feel most unit tests should be commented out once the thing under test is done.


You are arguing against tests. It is not a winnable argument.


No, people are arguing against frontend tests because they're orthogonal to test driven development


I really don't understand this comment at all.


You should take a look at the end-to-end testing tool named Cypress. It goes beyond "click this, check that": it will make sure that your app with its features works seamlessly for the user and that your React components work well together. Cypress is amazingly built and it is super fast to write tests with. It's easier to debug, and seeing the robot doing the tests is pretty cool. For me it's a must-have.

You can check:

https://youtu.be/BQqzfHQkREo
https://youtu.be/OVNjsIto9xM&t=28m06s
https://youtu.be/VvLocgtCQnY


Right, that is the one I was thinking about. Thanks!


I only skimmed the home page, but it seems far too opinionated for general use. If you're starting fresh and plan to use Tremor as your all-encompassing UI kit, it seems fine. But at that point, why not Grafana, Retool, or some other no-to-low-code dashboard solution?


It is fine for a quick one off project, but if you're going to depend on anything over the long term, you have to be able to upgrade it. Especially in JS land where things change almost weekly. 99% of my releases have been simply about dealing with changes of dependencies. I wouldn't be confident to do a release if I didn't have a test suite.

You're right on the point of using another low code solution though... it still opens you up to dependency issues. You might also want this as a product with its own skinned UX/UI that you have full control over. Third party 'commercial' products won't always give that to you.


What does your test suite look like? (What's the library is maybe a better question)


Good question.

`@testing-library/react` with Jest to run it. Snapshots and a bunch of action code too.

There are more modern frameworks now too that can be used for integration tests, but I haven't bothered yet as the snapshots have almost always caught the issue.

It boggles my mind that react.dev doesn't start off by teaching test writing.


Are snapshot tests the things that fail because they can't tell "background-color: #ff0000" and "background-color:#f00" are the same?


It depends on the tool doing the diff on the snapshot and what you're putting into it to begin with. https://jestjs.io/docs/snapshot-testing

The components I test all use classes, so there aren't embedded styles. Occasionally the generated class names change across different versions, but it is a small price to pay and easy enough to just update the shots.

I'd also argue that I'd want that snapshot to fail. I'd want to know why someone made that change, especially if they were going in the other direction... f00 -> ff0000.


Is your point that snapshot tests are not perfect? I don’t think anyone argues against that.

Are they useful and do they prevent bugs is the better question


Yes, that’s right… they are of dubious utility imho.


4 years of maintaining a react project downloaded 40k times a month argues otherwise.


It probably works ok for a solo project but IME with large scale codebases snapshot tests are awful. You update some implementation detail of a common shared component and suddenly 5000 tests break despite the look and behavior being unchanged.


Give me an example. Those components depend on this common shared component... if you do something that changes that shared component in such a way that it causes other snapshots to fail, I'd absolutely want to know about it. That's the whole point of dependency failures.

My dependency is on MUI, which is a massively used common shared component library. If MUI changes, and it breaks my snapshots, I'd absolutely want to know about it.


This has been my experience in the past with a heavily snapshot covered codebase. Class names can change, the structure of your HTML can change, the underlying CSS can even change and the end result is still the same because you were just refactoring. At a large enough scale it can be painful to have hundreds of snapshots break for a simple change - especially when you add required code review by others into the mix.

Currently figuring out a strategy for introducing testing into an already large codebase and being very cautious of snapshot tests because of this. Experimenting with visual regression testing but early indicators suggest it could get very expensive if we're not careful about what is covered.


Give people the tools to easily update the snapshots and read the diffs of PRs (GitHub is quite good at this imho).


I kinda think "just blindly update the snapshots" is teaching the wrong lesson. We removed them from our projects and haven't missed them.

Visual diffing, cypress >>>> snapshots IMO.


Who said blindly update snapshots?

Agreed that cypress today is probably a better solution.


What is your test suite like?


I've gotten so jaded on these "batteries included! everything you'll ever need!" component libraries over the years. Inevitably you'll run into a component that is half-baked or incompatible with your architecture, which forces you to install a 3rd party alternative.

Then it happens again. And again...

Eventually you're left pulling in this huge library with 2MB of CSS just for the four components you're still using from it. Then a new dev joins the team and gets to enjoy the confusion of "should I use $library component here? Or do we have another wrapper for it? Or is there something else entirely being used for it?"

Just avoid the hassle and stick with a curated list of well fleshed out individual components that are tested and have thousands of stars each. Create a barebones wrapper over it to match your design system, and call it a day.


I agree with you. But I think this is a symptom of a different problem: Building admin panels with UI tools instead of config-driven tools.

While component libs like this can be used for consumer-facing UI, when we talk about "dashboards" it's also often backoffice admin panels.

You shouldn't be reaching for hand-writing UI design & logic in that scenario. You should be looking for an admin panel interface tool. Laravel has these in spades: Nova, Backpack, and Filament are the big three. You write some code and some logic tied to it, and the tool creates the UI for you. This lets you get back to writing code for the actual customers instead of for business processes.

These can be used for consumer facing panels too, but with less control over the design, they're not really intended for it.

But the JS ecosystem insists on being fragmented, so everything is rewritten manually.

"Batteries included" needs a lot more than just batteries (the UI components).


I've gotten jaded with that approach too. The same thing happens, just on an individual component scale. (The number of times I've gotten written into a corner with $dropdown_component is too damn high.) My pendulum has swung to do it all yourself with ample links in the source to other projects on GitHub for inspiration.


If you are looking for a dashboard system that is written in vanilla JS, I will be open sourcing my DevBoard in the next month or two. You can see it in action at https://devboard.gitsense.com/microsoft/vscode and learn more about the widget system at https://devboard.gitsense.com/microsoft/vscode?board=gitsens... Note the repo that is mentioned in the intro page hasn't been pushed to GitHub yet, but will be soon.

The server is a very simple node/express app and the front end is written in vanilla javascript. I also use GitHub's primer css (https://github.com/primer/css) and a heavily stripped down version of tabler's css (https://github.com/tabler/tabler)

Note, DevBoard is more geared towards hackers, so Tremor's is probably a much better fit if you are looking for an out of the box solution.


As per the refine.dev blog, these are some alternatives to Tremor:

* Ant design pro (https://github.com/ant-design/ant-design-pro/)
* Material Dashboard React (https://github.com/creativetimofficial/material-dashboard-re...)
* Volt React (https://github.com/themesberg/volt-react-dashboard)

Anyone have more suggestions?


Very tongue in cheek, but the uptime widget is clearly ready for industry-wide use: the numeric value says 99% uptime, but the colored illustration next to it clearly shows worse performance.


On a short enough timeline we're all at 99.999% uptime.


Interesting that it's open source, but they don't seem to highlight it on their home page. I had to search for their GitHub page to find it: https://github.com/tremorlabs/tremor

Then when I came back to the home page and searched for "license", I confirmed it says "Apache-2.0 license" in light-grey, small font in the middle of the page.


Hi, Chris here from Tremor. Many thanks for the feedback! I wasn't aware that our website's hero title doesn't directly convey that it's an open-source library.


I totally missed it as well: got to the bottom of the page, saw something about a pre-release, and assumed it was a paid product. The comment above is what made me go back and bookmark it.


There are plenty of React dashboard libraries. They seem to be useful in building a top-level overview of a database with many tables. Can someone list a few real-world examples (web sites) where the dashboard is more than 10% of the work in building the site? The cases I know, e.g. wise.com or quickbooks.intuit.com would be much less than 10%.


Pretty much any client area of a B2B site. Obviously the actual backend is a significant part of that time, but on many apps, most pages will use components along these lines.


Backend / API companies like Stripe and AWS


I've been using this while prototyping a few projects and absolutely loving it; the charts are beautiful! It's just missing a few essential components like a dialog, toast/snackbar, and notification banner, but it covers at least 80% of the components I need :)


Looks great and I like the variety!

Feature request: I wish the table supported basic sorting and maybe even filtering out of the box, I find these useful in the dashboard context. Perhaps I’m not seeing it, or maybe there’s a way to add this easily?


BE Dev here who knows a little FE. Can this be used easily with something like htmx?


Anyone know what the just-CSS version of this would be? Like Bootstrap for Dashboards?


Regarding the charts, you would have to use low-level chart libraries, such as VisX or D3


There are server-rendered charts, but those have more or less fallen out of vogue since there's no interactivity.


Write the SVG by hand, use DOM manipulation to alter the `d` attribute of an SVG <path>, or use a low-level library like d3-shape to compute the interpolations.


Highcharts also seems really customizable.


This library just wraps Recharts


chartscss.org could cover off basic stuff


I am curious how Vercel or cal.com use this library for their products.


We don’t use Tremor for our product dashboard, but do use it for some starter templates we create to build on.


Another example built by vercel: edge-data-latency.vercel.app


cal.com uses it for its insight section -> cal.com/insights


Things like this really need corresponding figma components as well.


tremor.so/figma



