Show HN: ChessCoach – A neural chess engine that comments on each player's moves (chrisbutner.github.io)
506 points by cbutner on Oct 5, 2021 | 89 comments



This took about a year and a half – a little over a year coding in between experiments and training.

It's a chess engine with a primary neural network just like AlphaZero or Leela Chess Zero's, but it adds on a secondary "commentary decoder" network based on Transformer architecture to comment on positions and moves. All of the code and data for training and search is from scratch, although it does use Stockfish code to generate legal moves and manage chess positions.
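
If you want a feel for the move-generation plumbing without reading the Stockfish-derived C++, here's a rough Python illustration using the python-chess library (not what ChessCoach actually uses internally):

    # Illustrative only -- ChessCoach does this in Stockfish-derived C++.
    import chess

    board = chess.Board()
    for move in board.legal_moves:   # legal move generation
        board.push(move)             # apply the move to the position
        # ...this is where a position would be handed to the network...
        board.pop()                  # unmake it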

You can watch it play on Lichess here: https://lichess.org/@/PlayChessCoach/tv or challenge it here: https://lichess.org/?user=PlayChessCoach#friend, and see its commentary in spectator chat. It only plays one game at a time, so you may need to wait a little bit. It's fairly strong (~3450 rating, roughly on par with Stockfish 12 or SlowChess Blitz 2.7), but you can set up a position when challenging it so that it's missing a couple pawns or a piece (Variant: From Position).

I ended up writing much more about it than I expected. If you're into the technical side of chess or machine learning, beyond the linked overview, there's:

High-level explanation: https://chrisbutner.github.io/ChessCoach/high-level-explanat...

Technical explanation: https://chrisbutner.github.io/ChessCoach/technical-explanati... (including code pointers)

Development process: https://chrisbutner.github.io/ChessCoach/development-process... (including timelines, bugs and failures)

Data: https://chrisbutner.github.io/ChessCoach/data.html (including raw measurements and tournament PGN files)

And the code is here: https://github.com/chrisbutner/ChessCoach (C++ and Python, GPLv3 or later)

Happy to answer any questions!


This is a fantastic project. Thanks for sharing!

I had a nice long conversation with two of the authors of [0] at ACL.

One thing we discussed was the reverse problem. That is, as a player, could I give commands to the model and have the engine figure out the moves that would best satisfy them?

This ranges from the concrete, like "take the dark-squared bishop" (there is still variability, like which piece should take it, or whether it's even possible), to more complex positional stuff like "set up to attack the kingside."

Any thoughts on this line of research?

[0] Automated Chess Commentator Powered by Neural Chess Engine (Zang, Yu & Wan, 2019) https://arxiv.org/pdf/1909.10413.pdf


SentiMATE[1] looks at one of the reverse problems in a way - training an engine on commentary data - although it's not exactly what you're talking about.

I think this line of thinking could eventually lead to automated metrics for commentary evaluation, which could in turn lead to better methods than top-k/top-p for turning a bunch of sequential logits into a sentence or paragraph - basically treat it like MCTS/PUCT also.
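
(For concreteness, the baseline I mean is per-token top-k/top-p sampling over the decoder's logits - a minimal, illustrative top-p step in Python, nothing ChessCoach-specific:)

    # One step of top-p ("nucleus") sampling: sample from the smallest set
    # of tokens whose probability mass reaches p.
    import numpy as np

    def sample_top_p(logits, p=0.9, temperature=1.0):
        logits = np.asarray(logits, dtype=np.float64) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        order = np.argsort(probs)[::-1]                    # most likely first
        cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
        nucleus = order[:cutoff]
        return np.random.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum())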

The problem is that if you look at high-level commentary - maybe Radjabov-MVL on https://www.chess.com/news/view/2021-champions-chess-tour-fi... (I'm not the best judge, just a quick search) - it's not often possible to predict the move starting with the comment. And if you did, you might end up with very dry metrics and reverse commentary.

But I think this direction has a lot of potential, beyond just chess, as more of an algorithmic/generative support for pure NN-based language models.

[1] https://arxiv.org/pdf/1907.08321.pdf


Where did you source the commentary dataset?


Not the author, but ChessBase sell a product (Megabase) which includes 85,000 annotated games in a more-or-less machine readable format. [0]

To me it's probably OK to train a model on this, at least for hobby purposes, though some GitHub Copilot critics might disagree. And a large part of ChessBase's business model is based on ripping off other people's IP and presenting it as their own [1]. But still, I can see why the author might want to be coy about answering this question.

[0] https://en.chessbase.com/post/new-mega-database-2021

[1] https://lichess.org/blog/YCvy7xMAACIA8007/fat-fritz-2-is-a-r...


Seconded. I looked around the writeup site for a bit and couldn't figure that out. That's arguably the most important piece of info about this project.


Can you have it play more games by giving it less time per turn (~2500 rating is plenty good for an opponent/coach) and playing games concurrently while it waits for the human to play?

How much does a game cost in CPU time / money?

How do I get the commentary for a game I played? Oh, it's on the Analysis page.

It plays chess very well, but the commentary is incoherent and doesn't match the game well -- the attacks described are nonsense and the coordinates are wrong. It seems a little confused about which side is which? It thinks a rook can diagonally attack a bishop, and seems to name squares opposite from their actual names.


That's a good idea. A bigger problem than time-slicing is probably GPU/TPU device ownership issues and GPU/TPU memory usage with multiple games going in parallel. There may be some ways to multiplex it intelligently though.

Costs are difficult to work out - it depends on cloud vs. self-hosting, what kind of TPUs/GPUs, how long you're calculating over.

The advantage that classical/NNUE engines have is that they can more easily spread over distributed frameworks like Fishtest.


> the commentary is incoherent and doesn't match the game well

> The attacks described are nonsense and the coordinates are wrong.

Agreed, this looks superficially like commentary on the game, but honestly it doesn't seem more pertinent to the game score than a Markov chain trained on all the commentary would be (presumably this isn't true, and the author started with something like that Markov chain and the current version is way better in terms of some fitness function).
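
(To make the strawman concrete: a word-bigram Markov chain over a commentary corpus is only a few lines - illustrative code, and obviously not what the author actually did:)

    # Word-bigram Markov chain: fluent-ish, commentary-flavored text with
    # no connection at all to the position on the board.
    import random
    from collections import defaultdict

    def train(sentences):
        chain = defaultdict(list)
        for sentence in sentences:
            words = sentence.split()
            for a, b in zip(words, words[1:]):
                chain[a].append(b)
        return chain

    def babble(chain, start, length=20):
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)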

I wonder if there just is not enough training data available. GPT-3 overcomes this by harvesting a ridiculous amount of training data. AlphaZero, and the chess engine here, which is excellent, overcome it by generating their own training data through self-play. But that's not applicable to the task of generating commentary.


I'm super impressed with what you've managed to create - do you have any further plans for this project? I'm curious: now that it's finished and documented to such an extent, will you try to bring it publicity and actual usage, or was this just a passion project? Thanks!


Thank you! I do get that itch to jump in and improve things whenever I see it lose a game, but I don't have further plans (development or commercial) in the near-term. The goal originally was to see whether I liked ML, to decide on my next industry/career move, but there was a lot of "one more month".

I'm actually hopeful that some search techniques such as SBLE-PUCT[1] or better derivations can make their way into other open source projects, but they've had big teams working for a while on similar, often better ideas, so we'll have to see.

[1] https://chrisbutner.github.io/ChessCoach/high-level-explanat...


So, do you like ML?


Haha - I dislike how much of a black box it is, despite the statistical basis (for example, the back and forth on batch normalization rationale). But lots of interesting problems and tech to dig into.


You estimate it’s rated 3400-ish and it loses games????


It loses some games to Stockfish 13 and 14, and Lc0 - rarely at slow time control, and more often at blitz and bullet (actually, it has losses all the way down to Stockfish 9 in blitz).

Partly because of the way it tries to search more widely to avoid tactical traps, it can also be a little sloppy in holding advantages or minimizing losses (this could use some more work and tuning). This ends up making it a little drawish, so it loses less than you'd expect to Stockfish 14, but also doesn't beat up weaker engines as well as Stockfish 14 does.

You can see some of this in the raw tournament results[1]. At 40 moves per 15 minutes, repeating, each engine draws with the ones above and below it, but starts to win and lose at a distance of 2 or 3.

At 5+3 time control, ChessCoach goes 1-0-29 vs. Stockfish 12, but Stockfish 12 is better at beating Stockfish 8-11 than ChessCoach is, so CC ends up between SF11 and SF12 in the end.

On Lichess, where there's no "free time" to get ready for searches, ChessCoach's naïve node allocation/deallocation makes it waste time, and means it can't ponder for very long on the opponent's time - a big opportunity for improvement (it needs a multi-threaded pool deallocator that can feed nodes back to local pools for the long-lived search threads). I think it's also hitting a bug with Syzygy memory mapping that Stockfish works around via reloading every "ucinewgame" (which I don't trigger on Lichess). So, overall, its performance on Lichess is worse.

Also, you can't read too much into this data - very few games, and no opening book.

[1] https://chrisbutner.github.io/ChessCoach/data.html#appendix-...


> It only plays one game at a time, so you may need to wait a little bit

Why this limitation? Is it fairly computationally expensive to run?


Yes, each bot uses a v3-8 Cloud TPU VM, and tries to be constantly playing a game. The search tree is also very memory-hungry. And right now it's also using the Python API for TensorFlow, which is likely wasting a lot of potential.

Lots of room for improvement!


Could using something like AlphaZero.jl make it more efficient?

https://github.com/jonathan-laurent/AlphaZero.jl


The engine itself is in C++, but it calls into TensorFlow via Python as a portability/distribution vs. performance trade-off.

Next steps could be using one of Lc0's backends for GPU scenarios, or taking the other side of the trade and using the C++ API for TPU.

There are also the typical CPU and memory optimizations that could be made - there's some baseline work there, but nothing targeted.


I see. I guess compute-intensive stuff is usually implemented in C++. By the way, if you don't mind, could you share your experience learning RL? I am struggling through Sutton and Barto's text right now and wondering if I'll progress faster if I just "dive into things." Also, nice project!


I think it always helps to have a project to apply things to as you're learning something, even if it means coming up with something small. While preparing, I found it helpful to read for at least an hour each morning, and then divided the rest of the day into learning vs. "diving in" as I felt like it.

Getting deep into RL specifically wasn't so necessary for me because I was just replicating AlphaZero there, although reading papers on other neural architectures, training methods, etc. helped with other experimentation.

You may be well past this, but my biggest general recommendation is the book, "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" to quickly cover a broad range of statistics, APIs, etc., at the right level of practicality before going further into different areas (for PyTorch, I'm not sure what’s best).

Similarly, I was familiar with the calculus underpinnings but did appreciate Andrew Ng's courses for digging into backpropagation etc., especially when covering batching.


I found "Foundations of Deep Reinforcement Learning - Theory and Practice in Python" by Laura Graesser and Wah Loon Keng quite helpful in that it was somewhat like get a excellent summary course in about 6 years of RL advancements. I will say that it's quite forthcoming with the math. Anyway, I just wanted to know how they (not sure exactly who did it first, I just meant people with machines) got RL to play Atarti Pitfall. So take any recommendation I make with a grain of salt.


I'd like to self-host. Will this run with a GPU?


Yes! I haven't done as much testing with GPU, but did validate running with 4x V100s. You just need to adjust the "search_threads" option to the number of GPUs, but set it to at least 2.

Installation for GPU is covered here: https://github.com/chrisbutner/ChessCoach#installation (a little messy, sorry)


> 3... e6: "A solid move, and I have never seen it before. It has been played by many strong players, including GMs, Zvjaginsev, and others."

It seems quite good overall, but occasionally there are fun AI-isms like this :)


> 1. e4: "This game is a game that I lost because of a tactical oversight on my part. I had a tactical idea in mind, but in this game, black resigned. I would have played on for a while, but I was able to find a way to convert my material advantage into a win."


We used to call the people who stand around your chess board talking about your game “kibitzers”.

https://en.m.wikipedia.org/wiki/Kibitzer


TIL where chibitz comes from in Romanian. To us it's the annoying people who comment on the game (cards, football, or whatever) and usually hope it goes badly for the target of their comments.


Same in Hungarian


Very cool - one bug: it seems to want to show line variations in the chat, but it only shows the last move, which is then linked to the actual move in the game.

e.g. on move 22

I don't know why black played this. As black, I would've played 27...Kf8 to protect f6 and e5.

The 27...Kf8 is a link ... but the game hasn't proceeded to move 27 yet, so the link goes nowhere.

Clearly it's trying to show some line that ends with 27...Kf8?

Anyway, the commentary itself is pretty excellent, great job!


Yeah the auto-linking is just Lichess doing its best, even when the bot's talking nonsense sometimes.

And thank you!


Just tried it out, really interesting to read post-game: https://lichess.org/dKbbqymG/

I don't think this comment was accurate:

> 23. c4: "White tries to get his pawns moving. I am still thinking that I have to move my bishop, but that is too slow. I think white should have moved his king back to c3 to prevent my pawn from becoming a passer."

The idea behind moving the pawn was that black playing c4 would have instantly lost both pawns, since the bishop has nowhere safe to go where it still defends c2 (Bh7 leading to g6 losing the piece). I don't think the king could have made it to the c-file in time to stop that.

> 34... Rxg4: "Now I can't stop him from advancing his pawn."

Is this meant to be speaking from white's perspective (since black just got the unopposed h-file pawn)?

Interestingly, only 10. O-O links with a move; I think it would be really helpful if they all linked to moves. Also, I'm really excited for this kind of analysis! It would be really cool if requesting a computer analysis eventually generated such analyses, trying to figure out each side's ideas.


The gameplay seems solid, but the comments are all over the map:

https://lichess.org/4l1urWeU

I wonder if we are getting snippets of variations in some cases.


It does train on variations too, given the scarcity of data available, so that can hurt accuracy, mood, etc.


This is a really amazing project. I hadn't thought of coaching as a space for NLP, and I'm unaware of any use of a neural net to accomplish this, although I did study some Expert Systems in college that focused on radiology.

There's real substance here. Well done. I hope you keep developing it, you're on to something novel.


One half-baked idea that came to my mind while watching the bot play on Lichess: You could maybe attempt something similar to how Lichess extracts puzzles. The puzzles generally revolve around sharp positions, so Lichess presumably has some metric for identifying those. They also have some auto-tagging for what kind of situation it is (Fork? Pin? Zugzwang?), although from what I gather these are community-corrected.

That's overall less general than "feed position network output into a transformer", but presumably less data-constrained.


I look forward to reading things like “a classic Boris move” - or Fischer, or any of the great chess masters.


It does tend to name-drop: often famous names, but also just "Jeff".

And if you spice up the commentary sampling parameters, it gets even more inventive, making up names, and saying that "the rook is pinning Fischer against the king".


One of my pet peeves is when folks compare human elo ratings from FIDE to (presumably) engine elo ratings from CCRL, a league for chess engines. They're not the same! This is like comparing points between your local youth soccer league and La Liga. All we need to say is that this engine is only 100 elo behind Stockfish. That is akin to occasionally beating God in a game of chess.


I agree with what you're saying. On the flip side, there are multiple systems (Elo, Glicko), anchors, playing pools, etc. in use around the place, and FIDE and CCRL are offset by around 80 points, I heard, compared to the roughly 600-700-point difference between top humans and top engines.

So for a non-technical audience, I feel like it's easier to give a ballpark that they can understand without having to pull in too much context around Stockfish, CCRL, etc. It may have been better to clarify further in the docs though.

The "Data" document does give the relative Elo breakdown in the appendices.


> That is akin to occasionally beating God in a game of chess.

So you're telling me ... There's a chance?


I thought AlphaZero is stronger than Stockfish - or was, if it doesn't exist anymore.


The comparison between SF and AZ is hardware-dependent, so there was never a black-and-white answer. Even so, AZ hasn't seen any further development AFAIK but SF is constantly improving. But for this engine, I'm just relying on the author's methodology:

> It plays chess with a rating of approximately 3450 Elo... [compared to] Stockfish 14 at 3550 Elo.


Not sure about stockfish 10, but stockfish 8 struggled to win any games against alpha zero [1].

[1] https://www.chess.com/news/view/updated-alphazero-crushes-st...


The latest Stockfish is Stockfish 14.


Amazing, incredible idea. Run with it for a bit. Set up a hosted, super-easy-to-use version that's ultimately billed at cost-plus, but in an easy-to-understand way (maybe # of games, that sort of thing).


Responding to myself, but I also think a "why did I lose?" / "where did I first go wrong?" variation would be super popular with people.


Agreed. These two ideas would be killer features! I want to analyze a game, not from where I am now, but from where I was 10 moves ago when everything started to go sideways. I want an AI that can tell me where my instincts or intuitions are off, and the reasoning why - e.g. attacking the middle is generally good, but did I focus too much on that and miss some important lines in other places?


The chess.com system will do that for you today.


Very interesting. For humans, analyzing concrete positions with a player that's >400 rating points above you is fairly useless. You're way better off with a player that's some 200 rating points higher. The reason is that they still, sort of, have the same type of understanding of chess as you do, while the much higher-rated player is playing another game entirely.

A remark about opening preparation: The best metaphor I've seen here is the one about snooker. Ronnie O'Sullivan needs a good safety game because his opponent can clear the table. You don't.


If I click on "Challenge", and set up a game with unlimited time control (or correspondence), the challenge is declined with:

> This time control is too fast for me, please challenge again with a slower game.

Is that expected?


Oh, that message is a little backwards, but the main bot only accepts challenges from 1+0 or 0+1 up to 15+10 time control.

You can challenge https://lichess.org/?user=chesscoachclassical#friend to 30+20.

Unfortunately, neither of them support correspondence.


Hey, this is great, have been waiting for something like this.

One bug found: https://lichess.org/NvbQTf2O/black

> PlayChessCoach 23. Nc7: "The point of White's previous move. The knight on d7 is trapped and the rook on a8 is not protected."

There is no knight on d7, nor any knight that could move there. (Or so it seems to me.)

In that same game this was funny banter:

> PlayChessCoach careful not to get mated in the long run."

Thank you for writing the architectural overview, enjoying it!


So do I understand correctly: This is a new head on top of the AlphaZero model?

That is, in addition to the usual evaluation and policy heads, this takes the intermediate board representation and outputs a seed vector that is fed into a transformer text generator?

Or do other things go into the seed? Like the search tree somehow? Otherwise I suppose the commentary will not be able to comment on deeper tactics?

Or maybe this doesn't work using a seed vector at all, but with a custom integration from the board into the transformer somehow?


The original hope was for this to be a third head on top of the AlphaZero model, but I couldn't think of a way to generate commentary during self-play (such that it would gradually improve), and trying to rotate supervised commentary training into the main schedule ended up hurting both sides because of the disjoint datasets.

So, now the commentary decoder is just trained separately on the final primary model. The previous and current game positions are fed into the primary model, and the outputs are taken from the final convolutional layer, just before the value and policy heads. Then, that data plus the side to play is positionally encoded and fed into a transformer decoder.
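
In rough, illustrative pseudo-TensorFlow (assumed names and shapes - the real code is structured differently):

    # Sketch of the commentary input wiring; all sizes are placeholders.
    import tensorflow as tf

    C, D = 256, 512   # trunk channels, decoder width (illustrative)

    def build_commentary_memory(prev_feat, curr_feat, side_to_play):
        # prev_feat, curr_feat: (batch, 8, 8, C) final-conv outputs from the
        # frozen primary model; side_to_play: (batch,) 0 = white, 1 = black.
        tokens = tf.concat([tf.reshape(prev_feat, (-1, 64, C)),
                            tf.reshape(curr_feat, (-1, 64, C))], axis=1)
        tokens = tf.keras.layers.Dense(D)(tokens)        # project to width D
        side = tf.one_hot(tf.cast(side_to_play, tf.int32), D)[:, tf.newaxis, :]
        memory = tf.concat([tokens, side], axis=1)       # (batch, 129, D)
        memory += tf.keras.layers.Embedding(129, D)(tf.range(129))  # positions
        # A standard transformer decoder then attends over this "memory"
        # while emitting commentary tokens autoregressively.
        return memory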

It would be better for a search tree/algorithm to be used for commentary too so that tactics could be better understood, but that would need some kind of subjective BLEU equivalent, and metrics like those don't work well for chess commentary.

You can see a diagram of the architecture here: https://chrisbutner.github.io/ChessCoach/high-level-explanat...


I think training this as a separate head on top of a frozen AlphaZero model makes a lot of sense. I don't think anyone has figured out how to do language learning with reinforcement training.

Actually, I can't figure out from your explanation why you trained the whole network yourself instead of just using Leela's network and training the commentary head on top?

If you wanted to incorporate the search, maybe you could just take the 1800 or so probabilities output by the MCTS and add some layers on top of that before concatenating with the other data fed into the transformer.
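
Illustratively, something like this (assuming an Lc0-style ~1858-way move distribution; all names made up):

    # Compress the MCTS visit distribution into one extra "token" for the
    # decoder's memory, alongside the existing position features.
    import tensorflow as tf

    def search_summary_token(mcts_probs, width=512):
        # mcts_probs: (batch, 1858) normalized visit counts from the search.
        x = tf.keras.layers.Dense(1024, activation="relu")(mcts_probs)
        x = tf.keras.layers.Dense(width)(x)
        return x[:, tf.newaxis, :]   # (batch, 1, width)

    # memory = tf.concat([position_tokens, search_summary_token(probs)], axis=1)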

In either case, this is a fantastic project and perhaps an even more impressive write-up! Congrats and thank you!


It was partly because I was looking to improve self-play and training tractability on a home desktop with 1 GPU (complete failure), and partly to learn about everything from scratch. I would be interested to see how strong it is with the same search but with Leela's inference backend (for GPU at least) and network.

In terms of search-into-commentary, concatenating like that may be interesting, as long as it can learn to map across - definitely plausible without too much work. I was originally thinking of something more complicated, combining multiple raw network outputs across the tree through some kind of trained weighting, or additional model via recurrence, and punted it.

Ignore my BLEU comment, mixed those up between replies - that was the other potential use of search trees for commentary, an MCTS/PUCT-style alternative to traditional sequential top-k/top-p sampling, once you have logits and are deciding which paragraph to generate.

Thanks!


Very cool project. I'm currently reading The Alignment Problem by Brian Christian on AI (very well written and easy to read), and also (very slowly) My Great Predecessors, Vol 1 by Garry Kasparov. This is a great combination of the two topics!

I'm particularly interested in the second neural net that generates explanations. Is it only a neural net to generate the natural language using artifacts from the original engine net, or is it actually inspecting the state of the engine net to derive the insights?

It's an interesting application of explainability of AI algorithms that Christian talks about in his chapter on transparency. In particular, he discusses "saliency" of algorithms (knowing what parts of the input were most important in producing a prediction) and "multitask nets" that output multiple predictions (so, maybe here, one output is the best move, and another output is the explanation).

The writeups are fantastic reading. I see almost no sources in common with the bibliography in The Alignment Problem (which has a 50-page bib), which makes them nice complements. The only common citations I could find were Sutton 1988 (Temporal Differences) and Silver et al 2016 (AlphaGo), 2017 (AlphaGo Zero) and 2018 (AlphaZero).

There's a note (44) from chapter 5 entitled "Shaping" where Christian talks about "meta-reasoning: the right way to think about thinking. When you play a game–for instance, chess–you win because of the moves you chose, but it was the thoughts you had that enabled you to choose those moves...Figuring out how an aspiring chess player–or any kind of agent–should learn about its thought process seemed like a more important but also dramatically harder task than simply learning how to pick good moves." This note might as well be direct inspiration for this project. It goes on to quote Stuart Russell about "a computation that changes your mind about what is a good move to make...reward that computation by how much you changed your mind...so you could change your mind in the sense of discovering that what was the second-best move is actually even better than what was the best move." That's in the context of a cautionary tale where only optimizing for those "changes of mind" doesn't necessarily find a correct outcome, and that you have to "arrange these internal pseudorewards so that along a path, they add up to the same as true, eventually." This sounds pretty much like the task of a coach.


The commentary net inspects the final state of the engine net, but not internal layers.

Deeper introspection is a really important goal, but by the time you make serious progress there, chess is the least of your worries.

I do really like the work people have put into introspection and visualization so far though: DeepDream comes to mind. There was also another great paper or page that I can't find.


As a person who basically sucks at Chess, it would be great if you keep developing this into something that can help players improve (though I don't have a lot of confidence in my intelligence to expect marked improvement :))


Can only one person play the bot at a time? I tried to challenge it with 15+10 and the bot just leaves the challenge pending. Meanwhile, the bot is playing only one other game.


> White is now down a piece, but he has a strong pawn center, and the bishop pair. White is in trouble. The position is still

With no bishops on the board...


I wonder how it measures up against chess puzzles that are known to be hard for chess engines to solve, and what kind of clever commentary it generates.


Wow. I wanted to see something like this for as far back as I can remember. Thank you for creating this beauty!


This is amazing! This will help a lot of people who want to learn chess and need someone to guide them.


Looking at the screenshot, I was briefly hopeful that they used actual chess commentary and GPT-3 or something. But it looks like a simple rule engine is translating "user did X but I (the AI) would have done Y" into slightly nicer phrasing. Meh. Still cool.


Huh? There's tons of commentary that isn't just translating "what he did vs what I would have done":

"This is a good move for black. It attacks the center and attacks the pawn on e4. It also allows the development of the knight to c6."

"The main line. I don't know why I played this. But, I know that it's not a good idea to block in your own pieces, and that's what I would've done. But, this is a mistake because of what I'm about to do." (This is after its own move, and sounds very GPT3-ish.)

"He decides to kick my knight, but this is not a good move. It is also a very good move because it threatens my knight and threatens my pawn on e5. I am not worried about the knight fork on f2."

"I'm not sure what I'm doing, so I thought that I could castle, and try to defend it with a pawn, but I don't think it's worth it."

"The pawn on e5 is pinned to the king, so it is time to castle."

These samples are from the first ten lines or so of a single game.[1] They're all like that. Don't know what you were reading.

1. https://lichess.org/4l1urWeU#5


It is using a full-sized transformer decoder, trained on about 1 million data samples, but with far fewer neural network parameters and training samples than GPT-2 or GPT-3.


Is there any good training software like this but for poker?


Can this be used as a personal chess trainer?


Remarkably cool. My only bit of feedback is that it's unfortunate so much of the commentary uses masculine pronouns (I found it distracting). Is that just a consequence of the training data?


Yeah, that's a massive problem with the natural language domain all across machine learning.

Unfortunately it's very difficult to track down training data for chess commentary in the first place, let alone trim down biases. For reference, I was able to gather about 1 million samples, but it really needs a billion.

Hopefully through data augmentation and better general intelligence models we can make better progress on bias issues soon, as that's a huge problem when we start trusting AI models too much in life.


You might be able to kludge a fix to tokenize the output and replace he/him/she/her with them/their. It's not as sexy as the engine outputting the correct words, but it should get the job done.
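
(A crude sketch of that kludge - illustrative only; "her" is ambiguous between them/their, and verb agreement like "he is" -> "they are" would still need handling:)

    # Post-process commentary to swap gendered pronouns for singular "they".
    import re

    REPLACEMENTS = {"he": "they", "she": "they", "him": "them",
                    "her": "them", "his": "their", "hers": "theirs"}

    def neutralize(text):
        def swap(m):
            word = m.group(0)
            repl = REPLACEMENTS[word.lower()]
            return repl.capitalize() if word[0].isupper() else repl
        pattern = r"\b(" + "|".join(REPLACEMENTS) + r")\b"
        return re.sub(pattern, swap, text, flags=re.IGNORECASE)

    print(neutralize("He moves his rook."))  # -> "They moves their rook."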


Yes, in this case as long as they still agree when it actually names people, I don't think it would be too difficult. There may be factors I'm not considering though.

Harder would be more general models like GPT-2 and GPT-3.


Singular "they" doesn't care about the gender of the person named, so it should be good.


Appreciate the honesty here. Pretty wild how natural this model feels with 1 million samples.


Sometimes it seems really accurate (like the cherry-picked GIF in the overview docs) and sometimes really off.

I think for the most part, it knows more than it lets on, but finding the right sampling methods (or better yet, generalized search) to generate the best comments is a tough problem because it's difficult to evaluate quality.

There's some info on the sampling methods here: https://chrisbutner.github.io/ChessCoach/high-level-explanat...


What makes it unfortunate that it used masculine pronouns? If it had used purely female pronouns would that make you feel better?


Probably not. The issue with always using one set of pronouns is that people with other pronouns may feel isolated. For example, always assuming software devs are male could cause female devs to feel like they don't belong.

Realistically, it's really hard to fix this encoded bias in language models.


It does. Thanks for expressing this.


Yeah it’s always annoyingly jarring when people assume that. You can even see in the comments on this topic that people attack the suggestion of gender neutral language, which further compounds on the feeling like we don’t belong or are explicitly unwanted.


There are certain people whose disapproval is a badge of honor. They show up in threads like this.


Would 'queen' be considered masculine or feminine? Bishops? Knights (joan of arc excepted)? Are pawns slaves? Ahhh, the sport of kings, reduced to 2021.


Unsure how any of those are relevant? I pointed out the descriptions of the current player being trained - a pretty important part of this project.


[flagged]


Personal attacks aren't cool and will get you banned here. Please review the rules and please don't post that way to HN. We're trying for a quite different quality of conversation here.

https://news.ycombinator.com/newsguidelines.html


Not really, no. Singular "they" is pretty much everywhere now, despite some people's best efforts.


It seems delusional to think there are many institutional efforts that try to prevent singular "they".


I'm talking to several individuals in this very thread.


It seems like you are the only one in this thread who cares so much about pronouns to put effort into using them.



