
I wrote a bit about this about a year ago: https://blog.usejournal.com/the-next-css-frontier-classless-...

Definitely gave me Zen Garden vibes.


Thank you :) Fair point. I'll add filtering by domain, and then maybe by hashtags. Though it might be tricky to identify the category across different sites.


Any feedback would be most welcome!


> This person is so convinced that they are a good developer that they refuse to take direction or ask questions

While I agree a coding exercise is valuable, this is the actual problem. The way I test for that humility is by asking, "Tell me about a time you personally did a poor job." I then follow up by asking about a time they identified a failure of a colleague. I look to see whether they're empathetic, open to being wrong, and genuinely trying to improve.


A difficulty with this kind of question is that the how-to-beat-the-coding-interview books and sites, which many students and professionals read, already prep readers on how to answer.

People get a step-by-step strategy for this kind of question: what the interviewer is looking for and what notes to hit. Sometimes they're even prepped for the exact question, so they can have a prefab answer ready.

Then we might be selecting for something more like the stereotypical MBA student, or at least for people who know how to play along and say the right corporate going-through-the-motions things.

Some organizations will want to select for test prep specifically, as a good predictor of the kind of employee they want, but I like to think there are other organizations that decidedly don't.


I don’t think this is a great approach. You’re selecting for good talkers. There are many great engineers and coworkers who don’t do well on these types of verbal psychological games.

Folks are given all sorts of interview advice leading them to claim things like "my greatest weakness is that I work too hard".


One way to test for humility is to have them do a programming challenge that isn't too difficult (it's not about whether they remember the algorithm for merge sort), but that has a few common pitfalls and arbitrary choices, and then see how they react when you ask questions about their choices or make suggestions. Can they explain a choice? Can they follow a suggestion (or correctly and politely argue that it's unnecessary)?


I learned this lesson from the videos of the Harvard course on justice. The lecturer got explanations that I thought at the time were laughable, but somehow in his response he rephrased them to be reasonable and a great platform to deepen the conversation. It requires a great deal of empathy and intelligence to do that and I try my darndest to read as much sense as possible from any question directed my way. Here's an example:

https://youtu.be/kBdfcR-8hEY?t=2446


That lecturer was truly amazing. I thought the trolley car thought experiment wasn't worth much more than a few minutes' thought, but he not only proved otherwise, he was patient enough to ask questions and wait for the class to stumble slowly through the arguments.


That's a good idea. Sort of like the good old css zen garden, but with simple, not ugly, css resets.

Here's my attempt: https://www.cssbed.com/

Feel free to add options at https://github.com/ubershmekel/cssbed


Sweet. I really liked the idea of css zen garden. But the designs were mostly absolutely positioned images and css hacks. A lot has happened to css since then...


I'm sure they're good, but better is better.


I felt wrapping added a strategic depth for me to explore and a positive sensation of "outsmarting" the red block.


The undecidability of the halting problem has no practical effect.


That statement is too strong.

The practical effect is that you should not try to build an algorithm that detects infinite loops in general (though you wouldn't need undecidability for this, per se; e.g., if the halting problem had merely been NP-complete, that would probably have been enough to kill attempts at a general algorithm dead in the water).
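The classic diagonalization argument behind that impossibility can be sketched in a few lines of Python. The decider passed in is a stand-in (no real one can exist); the sketch shows that whatever the decider answers about the program built from it, the answer is wrong:

```python
def counterexample(claimed_halts):
    """Given any claimed halting decider, build a program it must get wrong."""
    def gotcha():
        if claimed_halts(gotcha):
            while True:       # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return gotcha

# A naive decider that answers "never halts" for every program:
always_no = lambda prog: False
g = counterexample(always_no)
g()  # ...but g halts immediately, so always_no was wrong about g
```

The symmetric case (a decider that answers "halts") produces a program that loops forever, so no fixed decider survives this construction.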

Of course, it is true in specific cases that you can decide whether a given program halts, and indeed, in almost all practical instances, if you know how to solve a given problem then you should be able to construct an algorithm that solves it and provably halts. (Not including, of course, things that depend on environmental inputs, a program that searches for a counterexample to Goldbach's conjecture, etc.)

There is, of course, enormous theoretical value in the undecidability of the halting problem, and in Gödel's incompleteness theorems for general axiomatic systems. This theoretical value guides future theoretical research, which is likely to lead to further practical value down the line, so there is a practical effect in the long term. It also indicates that an attempt to build a fully general theorem prover is a useless endeavor (though approximate/specialized theorem provers are, of course, possible and indeed exist).

In particular, Gödel's incompleteness has quite real implications for an Artificial General Intelligence, which would have to reason about mathematics in general, and about its own thought processes in particular. Under classical mathematical reasoning, such an AI would be unable to "have confidence" in its own reasoning process. However, if the assumptions are relaxed from statements with 0/1 truth values to statements with probabilities, and time-independence of truth is relaxed, it is possible to define an agent which is able to assign (almost) consistent (actually "coherent") probabilities to what its own beliefs are at time t. See https://arxiv.org/abs/1609.03543 for more details. This is probably the strongest such statement that can be made in this class of arguments.


To be more concise: the undecidability of the halting problem has no practical effect, barring the usual considerations about fundamental research.


Isn't it one of the reasons limiting the power of static analysis, forcing us to use dynamic analysis to get certain information about the correctness of our programs?


Even if the halting problem were merely NP-complete, it would still have limited the practical possibilities of static analysis.


Eh, modern SMT solvers are very practical tools, as are solvers for Mixed Integer Linear Programs. NP-completeness doesn't mean that you can't solve most instances you care about in reasonable time.
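To make that concrete: NP-completeness is a worst-case statement, and even naive brute force handles small instances instantly. A toy SAT check in Python (the function name and the signed-integer clause encoding, where literal 2 means x2 and -2 means not-x2, are my own illustrative choices; real solvers search far more cleverly):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Try all 2^n assignments; exponential in theory, fine for small n."""
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals matches the assignment.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or not x2) and (x2 or x3)
print(brute_force_sat([[1, -2], [2, 3]], 3))
```

SMT and MILP solvers apply the same principle at industrial scale: worst-case hardness, but the instances people actually write are usually tractable.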


Yes. And the halting problem doesn't preclude static program analysis, as long as it answers for only a subset of all possible programs.
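For example, a conservative analyzer can soundly flag one narrow pattern and stay silent on everything else; undecidability only forbids deciding halting for every program. A minimal sketch using Python's `ast` module (the function name and the pattern it checks are my own illustrative choices):

```python
import ast

def obviously_loops_forever(source):
    """Flag `while True:` loops containing no break or return.

    This decides halting only for one trivial pattern and says
    "don't know" (False) about everything else -- exactly the kind
    of subset the halting problem still permits.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            test = node.test
            if isinstance(test, ast.Constant) and test.value is True:
                has_exit = any(isinstance(n, (ast.Break, ast.Return))
                               for n in ast.walk(node))
                if not has_exit:
                    return True
    return False

print(obviously_loops_forever("while True:\n    x = 1"))  # True
print(obviously_loops_forever("while True:\n    break"))  # False
```

Real static analyzers work the same way: sound on the patterns they recognize, silent (or approximate) on the rest.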


It looks like we're in perfect agreement then.


