Hacker News | Matumio's comments

Not everyone can afford to just walk away more than once or twice.

And people may perceive the uncertain alternative of not getting that job right now as much worse than it would actually turn out to be, and so agree to things they don't really want. Like the point made in this short comedy scene: https://www.youtube.com/watch?v=-yUafzOXHPE


Energy-plus buildings are a thing: https://en.wikipedia.org/wiki/Energy-plus_building

The question is at what point you consider planet Earth "destroyed". Most likely it will remain blue and keep its atmosphere. Life will continue. It could be "destroyed" in the sense that humans permanently sabotage their own long-term survival, or the survival of other species.

Short of a nuclear war, I don't think humanity will get close to extinction. But I do think we are on a path to losing access to today's cultural knowledge (microchips, vaccines, aviation). If the population is forced to shrink over the next couple of centuries, wars over fertile ground seem more likely than maintaining today's specialized global supply chains.


When I read "evolution strategy" I fully expected to find some variant of the Canonical Evolution Strategy (as in https://arxiv.org/pdf/1802.08842), or maybe CMA-ES or something related. But the implementation looks like a GA. Maybe the term means different things to different people...?
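For contrast, a canonical ES in the sense of that paper keeps a single parent and does weighted recombination of sampled Gaussian perturbations, rather than GA-style crossover and mutation of a population. A minimal sketch (function and parameter names are my own, not from the paper or the library):

```python
import numpy as np

def canonical_es(f, theta, sigma=0.1, pop=50, parents=10, iters=200, seed=0):
    """Minimal canonical (mu/mu, lambda)-ES sketch: one real-valued parent,
    Gaussian perturbations, weighted recombination of the best samples.
    Illustrative only; defaults are arbitrary."""
    rng = np.random.default_rng(seed)
    # log-decreasing recombination weights over the top `parents` samples
    w = np.log(parents + 0.5) - np.log(np.arange(1, parents + 1))
    w /= w.sum()
    for _ in range(iters):
        eps = rng.standard_normal((pop, theta.size))    # perturbations
        scores = np.array([f(theta + sigma * e) for e in eps])
        best = np.argsort(scores)[:parents]             # minimise f
        theta = theta + sigma * (w @ eps[best])         # recombined step
    return theta

# usage: minimise the sphere function starting away from the optimum
theta = canonical_es(lambda x: float(x @ x), np.zeros(5) + 2.0)
```

Note there is no crossover anywhere: the whole population exists only to estimate a single update direction for the one parent, which is the structural difference from a GA.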


Thanks for pointing that out. The current implementation is not self-adapting the parameters (like mutation strength) of the individuals in the population: https://github.com/SimonBlanke/Gradient-Free-Optimizers/issu...
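For reference, classic ES self-adaptation mutates each individual's own step size before applying it, so the mutation strength evolves along with the solution. A rough sketch (names and the tau heuristic are my own choices, not the library's API):

```python
import math, random

def self_adaptive_mutation(x, sigma):
    """Sketch of classic ES self-adaptation: each individual carries its own
    mutation strength sigma, which is itself mutated log-normally before
    being applied to the object variables."""
    tau = 1.0 / math.sqrt(len(x))                 # common heuristic learning rate
    sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))  # mutate sigma first
    child = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
    return child, sigma                           # child inherits the new sigma
```

Because the mutated sigma is inherited, individuals whose step size happens to fit the local landscape produce fitter offspring, and good step sizes spread through the population without any external schedule.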


Then probably you know about NEAT (the genetic algorithm) by now. I'm not sure what has been tried in directly using combinatorial logic instead of NNs (do Hopfield networks count?); any references?

I've tried to learn simple look-up tables (like, 9 bits of input) using the Cross-Entropy Method (CEM), and this worked well. But it was a small problem: the search space was still way too large to enumerate, but the model itself was tiny. I haven't seen the CEM used on larger problems, though there is a cool paper about learning Tetris with the cross-entropy method, using a bit of feature engineering.
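For bit-vector problems like that lookup table, the CEM loop is only a few lines: sample from independent Bernoulli distributions, keep an elite, and nudge the sampling probabilities toward the elite mean. A sketch under my own parameter choices (nothing here is from a particular paper):

```python
import numpy as np

def cem_bits(score, n_bits, pop=100, elite=10, iters=50, alpha=0.7, seed=0):
    """Cross-Entropy Method sketch for bit-vectors: sample candidates from
    per-bit Bernoulli probabilities, keep the elite, and move the
    probabilities toward the elite mean (smoothed by alpha)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                               # start fully uncertain
    for _ in range(iters):
        pop_bits = (rng.random((pop, n_bits)) < p).astype(int)
        scores = np.array([score(b) for b in pop_bits])
        elite_bits = pop_bits[np.argsort(scores)[-elite:]]  # maximise score
        p = alpha * elite_bits.mean(axis=0) + (1 - alpha) * p
    return (p > 0.5).astype(int)

# usage: search for a hidden 16-bit target, scored by Hamming similarity
target = np.array([1, 0] * 8)
best = cem_bits(lambda b: int((b == target).sum()), 16)
```

The smoothing factor alpha keeps the distribution from collapsing to 0/1 probabilities too early, which is the usual failure mode on deceptive bits.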


I am familiar with NEAT; it was very exciting when it came out. But NEAT does not use backpropagation or single-network training at all. The genetic algorithm combines static neural networks in an ingenious way.

Several years prior, in undergrad, I talked to a professor about evolving network architectures with a GA. He scoffed that squishing two "mediocre" techniques together wouldn't make a better algorithm. I still think he was wrong. I should have sent him that paper.

IIRC NEAT wasn't SOTA when it came out, but it is still a fascinating and effective way to evolve NN architectures with genetic algorithms.

If OP (or anyone in ML) hasn't studied it, they should.

https://en.m.wikipedia.org/wiki/Neuroevolution_of_augmenting... (and check the bibliography for the papers)

Edit: looking at the follow-up work on NEAT, it seems they focused on control systems, which makes sense: the evolved network structures are relatively simple.
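The core trick that makes NEAT's crossover work, aligning connection genes by a global innovation number, can be sketched in a few lines. This simplifies genomes to {innovation_number: weight} dicts and leaves out node genes and structural mutation entirely:

```python
import random

def neat_crossover(fit_genome, weak_genome):
    """Sketch of NEAT-style crossover: connection genes are aligned by global
    innovation number. Matching genes take their weight from either parent at
    random; disjoint and excess genes are inherited from the fitter parent."""
    child = {}
    for inno, w in fit_genome.items():
        if inno in weak_genome and random.random() < 0.5:
            child[inno] = weak_genome[inno]   # matching gene, other parent's weight
        else:
            child[inno] = w                   # matching, or disjoint/excess gene
    return child
```

The innovation numbers are what let two arbitrarily different topologies be recombined without re-deriving which connections correspond, which is the part the full paper spends most of its machinery on.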


What I found interesting is that storing almost-pure CO2 (which is what they are doing) looks pretty economical. They are a special case whose process yields nearly pure CO2 as a side-product.

But standard combustion processes output exhaust gas with only a single-digit percentage of CO2, and there seems to be no cheap way to change that, or to separate the CO2 afterwards.


Can confirm from personal experience. It has been years, but Deevad was a joy to work with. He will discover your niche project, test your betas, and give feedback after trying to work around the quirks and spending real time actually using it. He will contribute when possible and promote your software if it gets the job done. I'm glad he keeps doing this, across all these years and projects.


I've played Quern for several hours but couldn't bring myself to care much about the puzzles or, more importantly, about walking around the world.

It was not bad, but my memory of Riven is so much stronger. Maybe I should replay that instead, just to walk through its beautiful world again, even without solving all the puzzles (the puzzles are IMO not why you play it). Riven evoked a constant feeling of wonder, with the sounds and short cut-scenes adding a lot to the atmosphere.

There was this place where you walk down towards the water with a beast sitting there in the sun, and that scene almost has a smell to it. Or maybe my memory is colouring it all rosy now.


Warning, FTL can be addictive. It has a heavy luck dependence that makes you want to try again.

That said, the game mechanics are really well done and give you options for creative problem solving. For example, your pilot increases your chance to evade missiles. Unless he is busy extinguishing a fire in another room. So instead you can open a door to space, power down your own oxygen supply, and use that power to charge a second weapon.


Agree, nice catch. Also, there are many other opportunities in this patch to hide memory safety bugs.

This is the kind of optimization I might have done in C ten years ago. But coming back from Rust, I wouldn't consider it any more. Rust, despite its focus on performance, simply won't allow it (without major acrobatics). And you can usually find a way to make the compiler optimize it out of the critical path anyway.

