dominic_cocch's comments

"Orca can capture about 4,000 tons of carbon per year (for scale, that’s equal to the annual emissions of 790 cars).

Now Climeworks is building another facility that makes Orca seem tiny by comparison. The company broke ground on its Mammoth plant this week. With a CO₂ capture capacity of 36,000 tons per year, Mammoth will be almost 10 times larger than Orca."

A lot of negativity in this thread, oddly. This is a 10X improvement over a previous version. Another magnitude or two and this becomes incredible for the environment. Other solutions should also happen, but a problem as big as climate change should have many parallel solutions. We don't have time to put all our eggs in one basket.


> A lot of negativity in this thread, oddly.

It's better to reduce emissions than to try to capture some 0.0X% of the CO2 in the atmosphere. That's obvious to everyone, so everyone agrees and it gets upvoted.

In reality, good luck having cows and steel (=iron+carbon) and cement on this planet without GHG emissions. Even plane fuel (long-distance flights) is basically going to have to do carbon capture to make 'electrofuels' (or biofuels) that they then burn and put back into the atmosphere, at least with how it's currently looking.

It saddens me to see people rambling about teratons (a unit even I hadn't heard thrown around before), and others upvoting that to the current top comment, when it's of course a ridiculous notion. The point of this technology is to neutralize unavoidable emissions in thirty-odd years. We can't, in thirty years, start to develop this tech and hope it works the next week.

It also allows us to put a direct price point on CO2. You pick: remove CO2 or don't emit it. A smart company will choose the cheaper option. Only a few years ago, planting trees or "preventing emissions" magic accounting was considered offsetting. This sets a new standard.

So long as it's within proportion, I really see no downsides to funding the development of this tech. The roll-out to megaton or gigaton scales, yeah we should see about that when we actually have renewable energy to spare, not when the gas, nay, coal plants are still in full operation. But for now, we're struggling to reach a few dozen kilotons economically, and that's why this is necessary work and good news.


> It also allows us to put a direct price point on CO2.

The problem is that the price on this will be severely underestimated. Every year we hear our estimates of climate damage are underestimated.

Then the wildfires around the world and in the Arctic started becoming too frequent to ignore.

The real problem is that the extraction industries will fight tooth and nail against losing their subsidies, much less accepting additional costs - and they have won and continue to win to this day.


How can the price be underestimated if we can pay commercial providers to build more of these plants? At minimum, it will be the cost price of doing this work, which is hard to argue about.

I guess you mean the underestimated impact, i.e. "cost" to society (insofar as human lives have a € value), if we don't do anything? Because that's a different discussion. I presume that lawmakers would be in agreement that we need to curb emissions, otherwise it's an entirely different conversation to be having (and flashbacks from 10-20 years ago).


Well said. Saw in your bio that you're working on capturing carbon dioxide, can you share more?


Thank you. Sadly that's poorly phrased on my part. The bio says that I'm "into" that tech; i.e. it interests me. All I'm doing to capture CO2 at the moment is pay Climeworks monthly. I also took a sneak peek at their Orca plant in Iceland when I was there, but not much to see from the outside aside from a big 'keep out' sign.

I asked them if there would be any sort of tour available, given that the neighboring power plant has an exhibition (which is superb by the way! Easily worth the money, and I spent quite a bit of time geeking out there: on.is/en/geothermal-exhibition/). Initially Climeworks responded, we exchanged a few emails, and one of their marketing guys wanted to give me a call, so I sent my number and... got ghosted. No replies to reminder/follow-up emails or anything. Bit bummed, but oh well, I didn't expect there would be anything available in the first place, so I can't complain.

As for the other climate-related part of my bio, reducing emissions, there's a whole host of things, but mostly things everyone already knows are options: I chose to live in a place where I can commute by public transport, I buy and sell second-hand instead of new when possible, I reduce meat consumption (prioritized by a CO2/kg chart, which unfortunately puts cheese above chicken iirc) and buy veggie/vegan food to vote with my wallet, I vote green in elections since imo basically everything else (short of war-like situations) can wait a few years, etc.


Well, better than nothing! If you have plans to return to Iceland, maybe I can point you to someone helpful on their team. My email is my username @ airminers.org.

And if you do get curious about working on carbon removal, check out the AirMiners Boot Up: https://bootup.airminers.org/


Thanks for the pointer! Definitely checking that out. And thanks for the offer, though Iceland was expensive and required... yup... a flight, so I might not be going back too soon to Iceland specifically. Beautiful place though.

I also love how this picture on your website about sums up this discussion about how to solve the climate problem: https://images.squarespace-cdn.com/content/v1/602c4ede5fdbd7...


memes will save the climate!


Wouldn't be HN without letting perfect be the enemy of good. Kind of a given due to the sort of person it takes to be interested in hanging out here (passionate about tech, somewhat cynical etc :p).


Yes HN is typically the "it doesn't solve the problem perfectly so it shouldn't attempt to solve it at all" types. It really is a drag :|


Common trope everywhere, honestly. Technology Connections speaks of it in the "but sometimes!" video https://www.youtube.com/watch?v=GiYO1TObNz8 where LED lighting in traffic lights meets resistance¹ because sometimes it freezes/snows over. It's efficient, so there's no more waste heat to keep itself snow-free. Oh no, says the general public, we can't install that! Of course a little resistance heater is primitive technology, cheap to add, and it takes about two brain cells to realize that sometimes not being better than incandescent is still better than always using incandescent, yet it's apparently still a thing to work through. Not a problem exclusive to HN.

¹ edit: I had not realized the irony here while writing this :-)


No. It doesn't solve the problem at all.

Billions of tons of carbon will need to be extracted from the atmosphere. But removing it will do no good as long as even more is released. 100kt/y is like breathing on somebody who is thirsty. Yes, there is moisture in your breath. No, it didn't help.

Creating a carbon tax such that emitting carbon costs as much as they spend extracting it, and then handing that over to extractors, could enable scaling up to the point where it could do some good.


It's a 10x improvement by being 10x the size -- it hasn't improved efficiency by 10x or anything remotely like that.


Ya this is an aspect I'm curious about as well. Is this all scaling up/copy pasting or are there other improvements?


Geothermal energy is not infinitely scalable. And I have serious doubts about the CO2 emissions of this thing!


> Geothermal energy is not infinitely scalable.

The tech is supposed to be used (at scale) once we've exhausted the low-hanging fruit of closing (replacing) gas plants and the like. Once we have days with excess sunlight and wind, for example, this can be used anywhere. And I like that they're using renewable energy during this R&D phase, because indeed it doesn't matter if you remove the CO2 next to a USA freeway or in a remote Icelandic place that sounds like hell (Hellisheiði, "ð" as "th") but has excess electricity and heat and suitable rock.

> And I have serious doubts about CO2 emissions of this thing!

For the previous plant, some third party (iirc KPMG, obviously paid by Climeworks so there's a conflict of interest) says it's 90% efficient, meaning that the associated emissions are about 10% of what it captures.

But since Climeworks only published the claim and not the report, I don't know if that includes construction, or if they picked a nice number that only considers plant operation. And construction might have included a lot of R&D, so then it wouldn't really be a fair comparison because building more of the same would not require those R&D-related emissions.

For what it's worth, I'm optimistic given this 90% claim. Even if you apply the marketing department discount and reality might be 75%, it's still a whole lot better than nothing.


> The containers are blocks of fans and filters that suck in air and extract its CO2, which Carbfix mixes with water and injects underground, where a chemical reaction converts it to rock.

It uses A LOT of water! Bitcoin mining is probably more ecological than this.


Anything that actually makes progress on removing atmospheric CO2 will be hated -- because it breaks the argument for the intentional economic destruction and mass poverty creation which has always been the goal of the climate catastrophists.


Very cool. We do something very similar on Instacart's site. Our main Redux store and header component track window size and scroll position so that all components on the site have that info and can react to if they need. Works wonderfully!


Agreed. I didn't show it, but I have a single reducer handle those actions: state.window.{size,position} keeps those values, which several components can then use for rendering.
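As a sketch of that shape (the action names here are made up, since the real reducer isn't shown):

```javascript
// Hypothetical sketch of a single reducer owning state.window.{size,position}.
const initialState = {
  size: { width: 0, height: 0 },
  position: 0, // scroll offset in px
};

function windowReducer(state = initialState, action) {
  switch (action.type) {
    case 'WINDOW_RESIZED':
      return { ...state, size: action.size };
    case 'WINDOW_SCROLLED':
      return { ...state, position: action.position };
    default:
      return state;
  }
}
```

Any connected component can then read window info from the store instead of listening to the DOM itself.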

I'm still curious as to when/if we may see the likes of aphrodite really take off.


For now we use Rails on the server and a single Backbone router on the client side. This isn't how we want it to stay. :)

We do have a couple side apps internally using React Router. We've liked it so far, but it's a pretty basic implementation.


That is definitely one limiting aspect for some places. We actually have a similar issue, since we're a Rails shop first and foremost. Luckily there are a few tools we can use to make it work well with Rails. Maybe a future post!

I also think jQuery and other libraries with large usage are kind of Swiss army knives, so more people have more use for them.

I do believe over time React will become as pervasive as jQuery. Or possibly even the DOM api will eventually allow a similar style of building things natively. This seems to be happening to jQuery now :)


We've slowly been growing our React presence on the site for almost a year now. At this point, we're fairly close to the size of the original Backbone app as far as views are concerned.

We're still somewhat tangled in with Backbone, but it's been a great success internally. Performance-wise, development speed-wise and also developer happiness-wise we haven't hit too many snags.


This should really only be an issue when the children of an element are all the same type of element and are at the same depth level. For example, this is very common for li elements in a ul element.

We try to keep the structure of the html so that wrappers generally keep different UI components separate from each other. In other cases, like in the example in the blog post, we just use the index from the .map() loop to add a key to every element we know will need one. React should also warn you when this is an issue.


> we just use the index from the .map() loop to add a key to every element

Doing this could hide weird bugs. Let's say we render a list of posts with "remove" button for each one. The user clicks on "remove" button, we modify the underlying this.state.posts. And on next re-render removed post key would be assigned to the next post! The next post could inherit just-deleted post handlers, for example.

It's much better to use unique keys like database ids.
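A plain-JS sketch of why this bites (hypothetical post list; this only models the key assignment, not React itself):

```javascript
// How index-based keys shift after a removal, versus stable id keys.
const posts = [{ id: 'a' }, { id: 'b' }, { id: 'c' }];

const indexKeys = list => list.map((post, i) => ({ key: i, id: post.id }));
const idKeys = list => list.map(post => ({ key: post.id, id: post.id }));

const before = indexKeys(posts);
const after = indexKeys(posts.filter(p => p.id !== 'a')); // user removes post "a"

// Key 0 used to identify "a"; now it identifies "b", so React reuses
// "a"'s component instance (state, handlers) for "b".
// With idKeys, "b" keeps key "b" before and after, and "a" simply unmounts.
```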


Great point. This has to be the funniest and most confusing bug when you first encounter it!

If your list is totally static, an index will do the job. However, we usually end up appending something like an id from the db since almost nothing on Instacart is static.


Wait, you're using the loop index as the key? This is React's default behaviour, and it's what it warns you about. Assuming you're looping over some kind of list of models, use the model's ID instead.

Basically, no matter where in the array a value is, it should always have the same key.


Moreover, you'll get a warning in the console if there are ambiguous children telling you to add keys.


What prevents state cross-talk from distinct components? Do we just hope that their structure is sufficiently different that React doesn't mistakenly infer identity?

Even the simplest elements like <p> have state and meaningful identity, like their text selection range.


NO! Don't use your collection's indices as keys; that defeats the whole purpose. Your indices can change without the content changing! You want to use item.id or something not as volatile as a collection index!


As a side note, I use uniqueId() from the venerable underscore.js utility lib (the latest version is only 5.7 kB).


Won't that cause the element to get a different key every time render() is called, causing React to think each is always a new element thus throwing away the state every render?


You are correct. _.uniqueId() will absolutely wreck your list rendering performance if it is called on every render. 99% of the time, you can find a better unique identifier, but if you absolutely need something like _.uniqueId(), it should never be called in the render loop. It should be attached to the data being iterated over, in advance.

But... the other problem with _.uniqueId is that in an isomorphic setup your server will just keep counting up and up while your client will start from 0 on every page load. So the first time you reinflate React on the client side the checksums won't match and will cause a full-page re-render.
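One way to satisfy that constraint (a hypothetical helper, not anyone's actual code): tag the data once when it enters the app, so every render pass sees the same key:

```javascript
// Assign a key once, when data arrives, instead of calling _.uniqueId()
// inside render (which would produce fresh keys on every pass).
let nextKey = 0;
function tagWithKeys(items) {
  return items.map(item => ({ ...item, _key: `k${nextKey++}` }));
}

// render() can then map over tagged items and use item._key directly;
// the keys stay stable across re-renders because they live on the data.
const tagged = tagWithKeys([{ text: 'a' }, { text: 'b' }]);
```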


I only do that on the client, and yes, as a matter of fact, things are re-rendered each time. But we're talking about a couple of uls with <10 elements; frankly the eye doesn't catch anything at all... Honestly, the real damage of this bad habit of mine shows up when I have to explain to my co-workers why it could be bad to do this in every situation.


The backbone model api was made with totally mutable data in mind. React, on the other hand, only works well with immutable props and some mutable state in the top levels of components. That's our first issue, for sure.

Backbone also has a dependency on jQuery, which isn't very useful in React apps since mutating the DOM (one of jQuery's main purposes) is a no-no. So most of the Backbone library isn't very useful to us anymore. I do still really enjoy the Backbone way of fetching and saving models, especially in a Rails context. But I think we can get those features in other ways.


That's pretty much our experience too: in our apps that use React + Backbone, I find myself using a smaller and smaller fraction of the features (very basic API integration (fetching, saving) and the router), both of which could be easily replaced with something else. Not that it would be particularly hard to use Backbone extensively with React; it's just that there seem to be other (better) ways to do the same things too.


I use React and Backbone too. Here's the source code: https://github.com/pixyj/feel/tree/master/client/app

I use jQuery for animations and also for non-crud pixel-level DOM manipulations as in the graph shown in the home page: https://conceptcoaster.com/course/python-tutorial/ In the graph, I need to dynamically calculate height of certain elements based on the height of others. The height cannot be predicted in advance. Can React work for such cases too?


You definitely can achieve it with React sans-jQuery, though it is definitely more boilerplate/code than the equivalent jQuery call. This is an acceptable trade-off for us, but might not be for you, YMMV


Really like the app. I like the top progress bar loading animation. I'm assuming this is the relevant file? https://github.com/pixyj/feel/blob/master/client/app/base/js...

Trying to figure out a way to implement this with React + React Router...


Thank you! Yes, that's right! I'm not familiar with React Router yet though :(


Taking Redux, for example: don't a Redux store's getState, dispatch and subscribe methods roughly map to a Backbone model's get, set and on methods respectively? Isn't the immutability of data in a Redux store an implementation detail of the store? How does this impact the React views?


Unlike Backbone models, Redux state doesn’t have setters. Components can’t just go ahead and change it as they like.

This adds more ceremony around changing the state but in my experience (I’m biased: I wrote Redux) makes such changes easier to debug and test.

Every change comes as an action that the whole reducer tree can handle, so changes are predictable and reproducible if you record actions. You can change how actions are handled without changing the components dispatching them. A good example is how your components can emit a SHOW_NOTIFICATION action but you can refactor your reducers from allowing only a single notification at a time to keeping a stack of notifications, without changing any of the components emitting them. This is what separating actions from state changes gives you.

You can log every action and see the entire state tree before and after the action. This, in my experience, makes it easier to fix incorrect mutations spanning across multiple entities. If the user clicks a button and the state is updated, and then a network response comes, and the state is updated again, you can log diffs between the state changes and have a clear picture of everything that’s happened. In fact Redux DevTools even provide a UI for that, e.g. https://github.com/alexkuz/redux-devtools-inspector.

One feedback I hear particularly often about Redux is that people who never wrote unit tests for front-end apps started writing them because it is just so easy to test reducers. You don’t need to mock any dependencies or simulate AJAX requests because any new data (whether local or remote) comes in the form of actions. So you can just call the reducer with a state and an action, and assert that its output matches what you expect.
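A sketch of what makes that so cheap (a hypothetical notifications reducer, in the spirit of the SHOW_NOTIFICATION example):

```javascript
// Hypothetical notifications reducer: a pure function, so testing it needs
// no mocks and no simulated AJAX.
function notifications(state = [], action) {
  switch (action.type) {
    case 'SHOW_NOTIFICATION':
      return [...state, action.text]; // keep a stack of notifications
    case 'HIDE_NOTIFICATION':
      return state.slice(0, -1);
    default:
      return state;
  }
}

// A "test" is just: state in, action in, assert the output.
const next = notifications([], { type: 'SHOW_NOTIFICATION', text: 'Saved!' });
// next is ['Saved!']
```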

With a single state tree I found it easier to ensure that all state is normalized and you don’t have duplicate data. In Backbone, it is encouraged that models are instantiated as you parse the request which makes it non-trivial to normalize them and merge the changes. In Redux, any merging is going to be explicit, and you can always inspect the single state tree to make sure the structure is normalized and matches what you expect it to be.

By contrast, in Backbone, models are active records so user.followUser(anotherUser) will set() a few flags on itself, fire off an async request, set() fields on another model, then possibly revert if the request failed. This is pretty hard to do consistently because there is no central point to mutations. You also have to build some sort of cache and entity resolve mechanism to make sure you don’t have duplicate out-of-sync models describing the same data.

Finally, in Redux mutations are forbidden, so it is easy for the views to bail out of reconciliation if the corresponding state fields have not changed. In my experience, Backbone doesn't make such optimizations easy because it was designed with complete mutability in mind.

So Redux, in a way, is similar to Backbone, but also is very different.


Personally, I'd say the biggest similarity between Redux and Backbone is that both are simple, minimal, "no-magic" implementations of a few specific concepts, and have communities that have built addons to fit additional use cases.

Backbone is really just an event bus, some smart wrapper logic around updating an object and an array, and a nicer setup syntax for jQuery event handlers for a particular element tree. People have then built addons that handle relational models, derived data, view lifecycles, and so on.

Redux is just putting all/most your data in a single "model", making sure all writes happen explicitly and immutably, and firing a single "change"-like event. From there people have built various ways of organizing side effects, listening for specific changes, and hooking into other libraries.

So, both simple libraries that form the basis for an ecosystem, and let you pick-and-choose the pieces that you specifically need.
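That "single model, explicit writes, one change event" core is small enough to sketch as a toy (this is not the real Redux source, just the shape of the idea):

```javascript
// Toy store: one state value, writes happen only through dispatch,
// and every write fires a single "change"-like notification.
function createTinyStore(reducer, initialState) {
  let state = initialState;
  let listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the only write path
      listeners.forEach(fn => fn());
    },
    subscribe(fn) {
      listeners.push(fn);
      return () => { listeners = listeners.filter(l => l !== fn); };
    },
  };
}
```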


Yes, exactly. I would argue that Redux has an even smaller scope than Backbone (e.g. it doesn’t handle views or AJAX) but the landscape has changed enough that this is not necessary for a state management library. But it definitely fills the same niche.


Thanks for the comprehensive reply. I think this answers my original question well. But in my follow up, I was basically asking if, from the perspective of the view, it makes much of a difference if it's communicating with a Backbone model or a Redux store.

P.S. I actually recently watched your Redux videos. Very nice!


>But in my follow up, I was basically asking if, from the perspective of the view, it makes much of a difference if it's communicating with a Backbone model or a Redux store.

I think it makes a difference. When you work with a Backbone model, you directly modify it:

    model.set('completed', true);
When the logic changes, you need to change the call sites to call a model method or several model methods.

On the other hand, with Redux the view specifies what it wants to happen and not how:

    dispatch({ type: 'TOGGLE_TODO', id })
When you need to change how the whole state tree responds to that change you don’t need to change the components. This is the difference.
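A sketch of where the "how" ends up (a hypothetical todos reducer; the component only ever dispatches the action above):

```javascript
// The component says what happened (TOGGLE_TODO); this reducer decides how
// the state responds. Changing this logic touches no dispatch call sites.
function todos(state = [], action) {
  switch (action.type) {
    case 'TOGGLE_TODO':
      return state.map(todo =>
        todo.id === action.id ? { ...todo, completed: !todo.completed } : todo
      );
    default:
      return state;
  }
}
```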


Thanks for the insightful explanation. Adding https://github.com/atmin/freak to the comparison (disclaimer, I'm the author):

* it supports set operations, `model('fieldName', newValue)`, like Backbone and unlike Redux, which trigger only the necessary update actions in the (implicitly built) dependency graph. set is idempotent, even if your computed properties have side effects

* it has .on() method

* "You can log every action and see the entire state tree before and after the action." In freak, too:

    const model = freak(statePOJO);
    const history = [];
    model.on('change', prop => {
      // change is called exactly once per mutation
      history.push({...model.values});
    });
freak’s README is a literal test suite, consisting of state changes and assertions.

Probably it's a good idea to add 'beforeChange' event, to make it possible to cancel a mutation.

* it's currently implemented internally using mutable data structure, but this is implementation detail and can be changed to immutable data in the future, allowing simply `history.push(model.values)`


Since when does Backbone depend on jQuery?


> Backbone's only hard dependency is Underscore.js ( >= 1.7.0). For RESTful persistence and DOM manipulation with Backbone.View, include jQuery ( >= 1.11.0), and json2.js for older Internet Explorer support. (Mimics of the Underscore and jQuery APIs, such as Lo-Dash and Zepto, will also tend to work, with varying degrees of compatibility.)

It's not a true dependency but it's a default companion and optional dependency (see Backbone.$).

Source: http://backbonejs.org


Yep, it was the actual Flux library. However, we're not totally settled on that choice just yet.


Awesome! I had chosen flux for one project I was on and really enjoyed it!

Thanks for the post!


Definitely agree. In our real code we're much more sensitive to performance. This was just a quick way to show a simple way Backbone can be used with React.

I think it'd be interesting for us to do another post in the future about how we measure and improve performance within our React apps. There aren't a ton of articles about this in relation to Backbone, it seems.

Thanks for giving it a read!


Would definitely like to read that one. We have a backbone/react app and generally use your example pattern, but I can see how it would take a toll on performance at some point. What's the general pattern you use if not forceUpdate? I suppose you could use something like @setState(model: model) but you still have mutable models to deal with in shouldComponentUpdate. Have you guys come up with a good way to reconcile backbone's tendency towards mutation and the performance benefits of immutable props?


Yes and no, really. We mostly keep our backbone models as props and keep them immutable by pure convention. You really could just call set() on one of them and cause some problems. This isn't great long term, of course. Code reviews and code style guides help here for now.

We've also at times converted Backbone models to simple objects at the parent level and passed them down that way. Keeps things simpler, but we lose some of the niceties of the Backbone model API. Once the models are objects, you can use shouldComponentUpdate pretty defensively if you need.
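Once props are plain objects updated immutably, that defensive check can be a cheap shallow comparison (a sketch of the general technique, not Instacart's actual code):

```javascript
// Shallow reference-equality check over props. This is only valid when
// props are never mutated in place, so a changed value is guaranteed to
// arrive as a new object reference.
function shallowDiffers(prev, next) {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  for (const key of keys) {
    if (prev[key] !== next[key]) return true;
  }
  return false;
}

// In a component: shouldComponentUpdate(nextProps) {
//   return shallowDiffers(this.props, nextProps);
// }
```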

We've also used throttled and debounced functions for anything which gets hit very heavily.
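For reference, the throttling idea is small enough to sketch; underscore/lodash ship hardened versions with leading/trailing-call options:

```javascript
// Minimal throttle: run fn at most once per `wait` ms, dropping calls
// that arrive inside the window.
function throttle(fn, wait) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, args);
    }
  };
}
```

Wrapping a scroll or resize handler this way keeps a heavily-hit code path from re-rendering on every event.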


Whoops! Fixed in the repo now.

I can't seem to edit the title of the HN article anymore. Must be past a certain time or something.

