This reminds me of my favorite article by sportswriter Bill Simmons, titled The Consequences of Caring, in which he describes seeing his daughter embrace a sports team and experience crushing disappointment for the first time, and reflects on his own lifetime of fandom.
I've always wanted to do something similar! I've thought about trying to assess the "obviousness" of certain moves, and then only exploring those branches with a probability proportional to their obviousness.
"Obviousness" would take into account things like the last move played ("ah, your pawn is attacking my bishop; I should move it"), and whether the move is a capture, check, or attacks another piece. A forward move for a knight is more obvious than a backward move, as is moving it towards the center of the board, versus moving it to the edge of board.
As the depth increases, the probability of exploring a branch decreases. I think it would be pretty easy to scale such a system to make it play better or worse by simply adjusting how much the probability of exploring a branch decreases as the depth increases.
Perhaps this could lead to more natural blunders where a line is simply missed?
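The branch-selection idea could be sketched roughly like this. This is a toy sketch, not a real engine: `searchProb`, `explore`, and the move objects' `obviousness`/`apply` fields are all hypothetical names, and the evaluation is a stand-in.

```javascript
// Probability of exploring a branch: proportional to the move's
// "obviousness" (in [0, 1]), decaying as depth increases. A steeper
// decay factor makes the engine play weaker.
function searchProb(obviousness, depth, decay) {
  return Math.min(1, obviousness / Math.pow(decay, depth));
}

// Negamax-style recursion that randomly skips "non-obvious" lines,
// more aggressively at greater depth - which is where the natural
// blunders (a missed line) would come from.
function explore(position, depth, maxDepth, decay, moveGen, evaluate) {
  if (depth >= maxDepth) return evaluate(position);
  let best = -Infinity;
  for (const move of moveGen(position)) {
    if (Math.random() > searchProb(move.obviousness, depth, decay)) continue;
    const score = -explore(move.apply(position), depth + 1, maxDepth, decay, moveGen, evaluate);
    if (score > best) best = score;
  }
  // If every branch was skipped, fall back to the static evaluation.
  return best === -Infinity ? evaluate(position) : best;
}
```

Tuning `decay` is the single skill knob described above: with `decay = 1` every obvious move is always explored; larger values prune deeper lines ever more aggressively.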
I've also wanted to come up with an evaluation system that scales better with skill level. A position may be +3, but only if you play the next 5 moves exactly correctly. In another position, one move might be +1, and another might be +0.5, but the +0.5 requires four moves of perfect play from your opponent to keep it even. Can these subtleties be expressed clearly? Maybe something like "turns until positional advantage is converted to a clear material advantage." When you're just starting out, engine evaluations often don't make any sense.
I've watched a lot of chess on Twitch lately, and it'd also be cool to have scores for all the relatively-subjective terms that the commentators use: "lots of attacking chances" (just pure count of checks possible in next N moves? percentage of lines that lead to checkmate in next N moves?), "very sharp position" (how many moves drastically change the evaluation?), etc.
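For instance, "very sharp position" might be approximated as the fraction of legal moves that swing the evaluation past some threshold. A rough sketch, assuming hypothetical `legalMoves` and `evaluate` helpers that you'd get from an engine:

```javascript
// "Sharpness" score: what fraction of the legal moves change the
// evaluation by more than `threshold` (measured in pawns)?
// 1.0 means every move drastically changes the eval.
function sharpness(position, legalMoves, evaluate, threshold = 1.5) {
  const before = evaluate(position);
  const moves = legalMoves(position);
  if (moves.length === 0) return 0;
  const swingy = moves.filter(
    (m) => Math.abs(evaluate(m.apply(position)) - before) > threshold
  ).length;
  return swingy / moves.length;
}
```

"Lots of attacking chances" could be scored similarly, counting only the moves that are checks or lead to a forced mate within N plies.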
I would like to work on something involving Rust, compilers/interpreters, or Postgres. I'm very interested in programming languages and, in addition to Rust, would be interested in learning Go, Erlang, Swift, OCaml, or "modern" C/C++.
Since getting laid off due to coronavirus, I've created a Ruby gem based on an internal library at my last job:
I really think the negativity in the comments here is overblown and misdirected.
The author has a totally reasonable set of requirements: React + TypeScript, writing posts in Markdown, small amounts of interactivity, and static site generation. It may be an indictment of the JavaScript ecosystem and the current state of the web that nothing exists that can help the user accomplish this out of the box, but that doesn't diminish the validity of the author's requirements. (It's also possible that next.js, Gatsby, or another framework can actually support these with relatively little configuration.) But we should applaud the user for wanting to create a totally statically generated site! (I noted that Gatsby's homepage was not statically generated and clearly takes a second before displaying content.)
Given that the author wants to render to static HTML, I interpret React + TypeScript to primarily mean JSX + TypeScript. Is this really any different from saying Liquid + Ruby, or Go templates + Go (used by Jekyll and Hugo, respectively)? JSX and React's component-based system were literally designed for HTML, and I'd argue it's a definitively better solution than the raw template libraries offered by other languages. You get automatic "syntax"-checking of the HTML by using JSX. By using JavaScript you have proper control flow instead of the awkward inline conditionals and loops in templating libraries. A component-based system makes code reuse significantly easier. And by using TypeScript you can get type-checking for all of this. Unfortunately, using JSX and TypeScript means that you need some sort of build pipeline, and that's where the current state of JavaScript really rears its ugly head and adds a lot of complexity.
Writing posts in Markdown is also totally reasonable, as is wanting to support small amounts of interactivity in future posts. A couple of comments mention that you could just add raw HTML to a Markdown file and call React from there, but that doesn't solve the compilation problem. A lot of comments also missed the desire for interactivity. It's for a developer blog! I'd love an interactive inline demo or explanation!
The blog title might have been a little overstated (the ideal tech stack), and the "zero memory of how we used to build forms in the pre-React times" isn't a great look, but I do think the actual content is solid.
This is so ridiculous coming from someone who did front-end work during the pre-SPA days. I don't even know where to start here. It's a blog, not a complicated backend solution requiring a stack like this. Just throw in something simple to get the posts from the database and update the dates, and you're done. It's over-engineered and symptomatic of major issues in the front-end community.
I'd recommend not being so quick to dismiss the author. Your suggested alternative is actually way more complex. The approach described sets up a static site generator that allows for deployment to a static asset webhost (like S3/CloudFront). What this means is that there is no server to administer or manage. Aside from the cost and performance advantages, this drastically simplifies the operational overhead. There's nothing to patch, no networking to configure, nothing. Just upload and be done. Moreover, there's no database at all either. No need to manage database credentials, no need to pay the costs to keep a database running, no need to patch a database server, etc...
This is the total opposite of overengineering. The stack the author is running is way simpler than one where you have to spin up a webserver, database, caching solution, etc...
Not if your "database" is something like SQLite. That is preferable to most alternatives, if you actually want to manipulate data.
If the blog is not much more than a .PLAN file, then sure, it's overkill. But having access to a DB and a programming language opens up a few possibilities – and cleaner code compared to hacking together some bash scripts doing text-manipulation magic.
Now, starting with a database from day zero? I'd say it is a case of YAGNI. Don't add stuff you don't need now. If you need it in the future, then add it. But don't go paying for a hosted DB if SQLite would suffice.
Oh, and even then, a database would be more useful for the site-generation step. Having the DB called on every page render is a recipe for going down under HN traffic (or Slashdot in the older days).
I was just wishing for a better SQL formatter! I'm wasting too much time pasting queries into a doc just to reformat them.
It would be great if the parser supported question marks as placeholders. The pg_stat_statements view in Postgres saves normalized queries, replacing the literal values with '?' placeholders (e.g., `SELECT * FROM table_name WHERE id = ?`).
The new implementation is still wrong, according to the specification.
Array.prototype.sort does not sort an array of numbers as expected. In JavaScript, [2, 10].sort() results in the array [10, 2].
As MDN points out, "The default sort order is according to string Unicode code points... If [compareFn is] omitted, the array is sorted according to each character's Unicode code point value, according to the string conversion of each element." [0] It has an example showing off the unintuitive behavior right at the top of the page.
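For example, the default sort versus an explicit numeric comparator:

```javascript
// Default sort compares string conversions, so numbers sort
// "alphabetically" by code points: "10" < "2" because "1" < "2".
const a = [2, 10, 1];
a.sort();
console.log(a); // [1, 10, 2]

// Passing a numeric comparator gives the order you'd expect.
const b = [2, 10, 1];
b.sort((x, y) => x - y);
console.log(b); // [1, 2, 10]
```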
This behavior is intended per the original ECMAScript specification[1, pg. 68]:
When the SortCompare operator is called with two arguments
x and y, the following steps are taken:
1. If x and y are both undefined, return +0.
2. If x is undefined, return 1.
3. If y is undefined, return −1.
4. If the argument comparefn was not provided in the call
to sort, go to step 7.
5. Call comparefn with arguments x and y.
6. Return Result(5).
7. Call ToString(x).
8. Call ToString(y).
9. If Result(7) < Result(8), return −1.
10. If Result(7) > Result(8), return 1.
11. Return +0.
Isn't that a separate issue? The article was pointing out that [5, 4, 1, 5] was sorted as [4, 1, 5, 5], which is wrong regardless of whether you're sorting numbers or strings. The bug was the algorithm taking one step too few, and the fix was reducing the end trigger by one, which has no effect on what sort order is used. I'm seeing an actual bug that was actually fixed, and any breaks from the spec -- real though they might be -- are better brought up with the maintainers themselves than posted on something only tangentially related.
It's the worst option except for all the others, I think. JS arrays can contain anything, so if the default behavior was to compare elements as numbers, sorting an array containing mixed types would have unpredictable results (because of comparisons to NaN, etc).
Naturally things are only weird for untyped arrays - calling sort() on e.g. an Int32Array does what you'd expect.
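To illustrate, the same values in a typed array sort numerically by default, since the elements can only be numbers:

```javascript
// %TypedArray%.prototype.sort defaults to numeric ascending order.
const t = new Int32Array([2, 10, 1]);
t.sort();
console.log(Array.from(t)); // [1, 2, 10]
```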
This is unrelated to StatefulSets, but I'm going to take the opportunity to ask a Kubernetes engineer for help, since the kubernetes-users Slack channel sort of feels like shouting into a void.
We deploy a small cluster (1 master, 6 nodes) at our startup that started misbehaving last week. All of a sudden three of the nodes went down - one became unresponsive and two had the error "container runtime is down." We couldn't ssh into the unresponsive one, but according to AWS the machine was fine, still receiving network requests and using CPU.
Since we couldn't diagnose the issue, we spun up an entirely new cluster using kops, but started seeing the exact same behavior later that night, and again over the weekend. Three nodes were in a NotReady state for the same reasons (unresponsive, and "container runtime is down"). Right now our only fix is to manually terminate the EC2 instances and rely on the Auto Scaling Group to create new ones. In the meantime, Kubernetes tells us that it can't schedule all of our desired pods, so half of our jobs aren't running, which is obviously an undesirable situation.
A handful of questions I have about the situation: Why are these nodes going down? What causes a node to go unresponsive? Why does the container runtime go down on a node and why doesn't it get restarted? Why doesn't Kubernetes destroy these nodes when they've been out of commission for 3-4 hours?
Any help would be appreciated!!! I've been looking through half a dozen log files and gotten zero answers.
So first, sorry about the problem. Please come hang out in the sig-aws or kops channels - we're a bit smaller and more focused than kubernetes-users, and can typically get these problems solved pretty quickly together.
IIRC we improved garbage collection settings in the latest kops (1.5.1), so if you were running out of disk, using the latest kops should fix everything. It's also easy to reconfigure to use a bigger root disk if you're churning through containers faster than GC can keep up. But if it's something else we can try to diagnose it as well!
> Why doesn't Kubernetes destroy these nodes when they've been out of commission for 3-4 hours?
We should, I believe. I actually thought we had an issue for this very problem, though I can't find it. I'll open a new one if I can't track it down. There is maybe an argument that we should fix the root cause, but there's an unlimited number of things that can go wrong, so we need to do both.
I ran into something very similar with a cluster almost identical to yours. It turns out the default disk size for kops is 25G, and when your masters run out of space, things start to die with almost no way of telling why.
I rerolled with 100G and I've seen zero problems since.
> Why doesn't Kubernetes destroy these nodes when they've been out of commission for 3-4 hours?
Kubernetes isn't responsible for the lifecycle of its nodes. It can run in a DC where "destroying a node" might mean paging a tech to turn off a server. Something external - in your case, kops & your ASG - is responsible for the nodes that Kubernetes runs on. That's a deliberate design choice.
It should make a correct decision not to schedule work there, which it sounds like it did.
Given that, your other questions are hard to answer. kubelet is a process that runs on the nodes. So is docker. If you can't get into the machine to diagnose the fault, I'd encourage you to set up some monitoring/log shipping off the node so you can see what the state was when it failed.
There's nothing inherently "Kubernetes" about this diagnosis - it's more EC2, node/kernel/OS and Docker troubleshooting, in that order.
Correct, Kubernetes is not responsible for the nodes. I would build a health check into your Auto Scaling Group (I don't know exactly how to do this on AWS, but am happy to show you an example on GCP - aronchick (at) google).
If you can't get to the machine, there are a million reasons why this might be the case, but ssh is a totally separate process and way outside of Kubernetes. VERY commonly, you've run out of memory and processes are fighting among themselves (especially since EVERYTHING seems to be failing), but this is total speculation. OS issues are common too - I've spun up clusters switching from one distro to another, same config, and everything worked great.
It's conceptually very similar to CoreOS' Container Linux, so I might try that if I were looking at Kubernetes elsewhere and wanted a container-only OS.
If I am running an environment with multiple purposes - some container hosts, some regular machines - I'd err on the side of "who is my current vendor/what does my ops team support and know best".
Great, thanks for the valuable info.
We are running SLES12 and a SUSE OpenStack Cloud on bare metal, and SUSE has only recently announced their container strategy (the SLE MicroOS distro), but we haven't had time to evaluate it yet.
At a recent DevConf I saw some interesting talks about immutable container hosts such as Fedora Atomic. It seems there is a lot of work being done in this area.
I think one of the most under-mentioned features of vim is that a line is one of its fundamental units.
Suppose you want to move a few lines of code from one function to another in a regular GUI text editor. If you select these lines with a mouse, you need to be careful about exactly where you click to start highlighting and where you end highlighting. Do you start from the beginning of the first line or from the end of the previous line? Do you include the newline at the end? (Can you even?) Then, when you paste, where do you paste? Do you just put the cursor on the line you want to paste before and paste? Or do you put the cursor at the end of the line you want to paste after, hit enter, and then paste? Oops, you did it wrong, and now the first line isn't indented at all, or you have two lines of code on one line.
After a while you figure out where you need to start and end the selection, and where you need to paste, but it's still easy to mess up.
This isn't an issue in vim. Using Visual Line mode, selecting and pasting is super simple: 'V' to enter visual line mode, 'j' and 'k' to highlight everything you want (and 'o' to switch which end of the block you're extending!), then 'y' to yank (copy) or 'd' to delete. Then put the cursor _anywhere on the line_ that you want to paste after and hit 'p'.
The ease of use is one thing, but I also think it makes more sense to have an entire line be a fundamental unit. When you're editing code, you're usually moving whole lines around or editing single lines. I rarely copy and paste just part of a line somewhere else, and I think in most cases it's easier to just paste the whole line and change the parts I don't want.
The issue is that this includes everyone else's branches that have been pushed to origin. I only want to see the history for local branches. (I guess I'd also want to be able to see origin/master, but I recognize that there's nothing distinguishing that from the other branches on origin I don't want to see.)
I think the issue is that I run `git pull --prune` to get rid of any remote branches that have been deleted. I usually do this after I pull master, so I think I should just be running `git pull origin master --prune` as a single command.
https://grantland.com/features/the-consequences-caring/