Always look forward to these so I can compare my solutions afterwards. Sometimes it's almost frustrating to see how clean and concise the mess I made could have been, but I do learn a lot. It's like he gets as close to golfing as possible, without the obfuscation.
Too bad he punted on 20... one of the few difficult problems this year. Usually the hard problems are where you really see the contrast between his solutions and the rest of us plebs. Do I sense a hint of frustration in his comment? ;)
"too tedious for too little reward...."
"sea monster as characters inelegant"
I had a similar feeling with that one: I found the corners by counting edges in part one, and then got really demotivated about part two, actually solving the puzzle and looking for the patterns.
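For anyone curious, that edge-counting trick amounts to roughly this (a rough sketch with made-up helper names, not Norvig's code, and it skips the full assembly you'd still need for part two):

    from collections import Counter

    def edges(tile_rows):
        "The four edges of a tile, canonicalised so an edge and its reversal compare equal."
        top, bottom = tile_rows[0], tile_rows[-1]
        left = "".join(row[0] for row in tile_rows)
        right = "".join(row[-1] for row in tile_rows)
        return [min(e, e[::-1]) for e in (top, bottom, left, right)]

    def find_corners(tiles):
        "tiles: {tile_id: [row strings]}. Corner tiles share exactly two edges with other tiles."
        counts = Counter(e for rows in tiles.values() for e in edges(rows))
        return [tid for tid, rows in tiles.items()
                if sum(counts[e] > 1 for e in edges(rows)) == 2]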
Interesting he thought it was tedious - I thought this one was actually the closest to being like a real world software engineering task! No mathematical tricks or lateral thinking, just a gnarly problem that requires planning and breaking down into more manageable pieces. I actually really enjoyed it for that.
> Family holiday preparations kept me from doing Part 2 on the night it was released, and unfortunately I didn't feel like coming back to it later: it seemed too tedious for too little reward. I thought it was inelegant that a solid block of # pixels would be considered a sea monster with waves.
I punted on part 2 of #20 as well, but came back to it the next day with a reasonable idea. Some of the tedious parts are easier to do by hand, like picking the corner of the map to start with on #20, or solving the uniqueness on #21.
I'm still looking forward to reviewing how Norvig did most of the problems.
I aim for "first thing that I can get to work and doesn't take more than 20 seconds". Usually they run in under 2 (Common Lisp is my language of choice). Then for the part 2 portions I may have to optimize.
I sometimes go back and clean up my solutions, though I aim for a combination of clarity and performance if I do that, clarity first.
Unless I'm really unhappy with my solution, I stick with the first thing I write that spits out the correct answer. Since a lot of the problems require you to do some potentially interesting optimization to get an answer in a reasonable amount of time, I don't bother optimizing the ones that don't.
How do you algorithmically code like this? In my software engineering career, I am a plumber at best. I would fail hard if anyone asked me to code anything more than fizz buzz under time pressure. That said, I've built some incredible, production-worthy, robust systems that move mountains as a plumber. I am damn good at that.
Norvig teaches a really cool, and challenging, class on Udacity called the "Design of Computer Programs"[0] in Python where you can learn his thought process on how to write reliable and beautiful software to solve interesting problems.
I'd recommend taking a look at his book Paradigms of AI Programming [0]. It really shows his thought process in developing solutions to problems. The only other way is practice, and in particular practice with languages that offer functional features. Notice his use of lambdas, assignment of existing functions/constructors to more accurate names (Passport instead of dict, day 4), and higher order functions (quantify).
Those are things that really help in algorithmic code by making the names of things more accurately reflect the application (almost creating a domain-specific language).
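Roughly the flavour of what's meant, as a toy illustration of my own (not lifted from the repo):

    Passport = dict   # day 4: give the plain dict a domain name

    def quantify(items, pred=bool):
        "Count how many items satisfy the predicate (an itertools-recipes staple)."
        return sum(map(pred, items))

    passports = [Passport(byr="1937", eyr="2020"), Passport(byr="1990")]
    print(quantify(passports, lambda p: "eyr" in p))   # 1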
What kind of experience do you need for this book? For most ML you need linear algebra and whatnot, but this doesn't appear necessary here. If you've worked through something like SICP, do you think that would be sufficient?
This is more classical AI, if you have a foundation in algorithms and data structures you can follow along. More experience, it'll be easier. If you've worked through SICP and that's your primary study of programming, then you can approach PAIP, but may need to take your time and look up related material to help you out.
I am a plumber too, and a good one, but I notice (if you forgo the time pressure: my fast solutions are always plumbing) that the language I use matters in my thinking and that is probably how I learnt them. When I have the time, I think solutions up (mostly in my head or on 'paper'(scratch buffer)) in lisp/scheme, haskell/idris or apl/k. The result, when I consider it elegant, is usually very nice after translating to another language.
For Scheme, I think my way of thinking came from working through SICP many years ago, while the other languages in the list basically force you to think about a solution before you start typing.
I am not saying, obviously, that people cannot think these things up straight in Rust, Python, C#, etc.; however, for me personally, it works much better if I first work it out in one of the others. It is way too tempting to grab my plumbing wrench otherwise.
If you're ok with throwing time pressure out of the equation, then I think the main thing is just an aspiration to find a beautiful solution to a problem, where "beautiful" is being concise, efficient, short (and, rarely, original) in some proportion that suits your fancy.
After that, it's just practice interspersed with data structure and algorithm research.
I wouldn't worry about it. Jobs that require this are few and far between - so the necessity isn't there. HN has a wide-on for this leetcode stuff. In reality, unless it's your hobby, you don't need to be able to.
> In reality, unless it's your hobby, you don't need to be able to.
I agree with you completely here. I also feel like I fit the description of a plumber presented here. The most challenging algorithmic problems that I've had to confront in my career, which came when I was working in DSP for an RF company, were things that were tackled by the team -- a group of very talented and very experienced engineers -- and we worked on these problems collaboratively and over a period of time that is not measurable in minutes or hours.
But, having said that, isn't it also undeniable these things, these leetcode-style algorithms, are used to gatekeep entry to many of the "better" jobs in our field, despite the fact that we know that these things are not strictly necessary on the job? At the very least, this has been my experience and I read the same anecdotes on HN frequently.
This is very well-written code. I don't think anyone would bring into question Norvig's technical skills, but I wouldn't think someone like him would spend much time at his job writing code, or have much free time to program as a hobby.
As someone who aspires to a position similar to Norvig's (directing research at an organization whose work is frequently applied in real life), I think it's really cool to see him having time to do stuff like this.
I have been meaning to brush up on my interview skills, and I've found the lack of structure in LeetCode's Monthly problem sets to be quite frustrating. Every time I'd get stuck on a certain problem, I'd dive deep into the techniques behind the solution just to discard that knowledge the next day, as the selection of problems was seemingly random.
That was one of the first courses I took as I was learning to program. Of course most of it was way over my head at the time, but even to a beginner it was clear that he really, really knows what he is doing.
The guy created very clean Python solutions and a very insightful walkthrough (although a bit wordy at times). It was my go-to repo after validating my solutions on AoC (he managed to publish daily during the whole challenge!).
This is another vote of confidence for type annotations. There still seems to be a debate among highly experienced Pythonistas about whether typing is good or not.
In this case I think they work really well, and aren't stuck in places where they're not needed.
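For a taste of what that looks like, something along these lines (my own hypothetical example, not his code): annotate the parsing boundary and the solver's signature, and leave the throwaway locals alone.

    from typing import Set

    def parse_nums(text: str) -> Set[int]:
        "One integer per whitespace-separated token, collected into a set."
        return {int(tok) for tok in text.split()}

    def day1_1(nums: Set[int]) -> int:
        "Find 2 distinct numbers that sum to 2020, and return their product."
        return next(x * y for x in nums for y in nums & {2020 - x} if x != y)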
Probably the most indicative part of Norvig's code beauty is the requirements file [0]. With Python developers being infamous for importing obscure or questionable libraries for even a simple task, I love how Norvig solved all of these with just the standard library and the staples numpy/plt.
For me it followed (other than the pattern matching of the sea monster) directly from my backtracking solution to part 1 (versus his solution). One of the cases where doing the first thing that came to mind ("Well, gotta see where all the tiles go to see which are the corners") was the right thing; poor reading comprehension for the win! I specifically missed this line in the problem description, which led to his (and many other people's) solutions:
> but the outermost edges won't line up with any other tiles.
I thought I spotted a bug on the very first day, part 2.
    def day1_2(nums):
        "Find 3 distinct numbers that sum to 2020, and return their product."
        return first(x * y * z
                     for x, y in combinations(nums, 2)
                     for z in nums & {2020 - x - y}
                     if x != z != y)
The last line (if x != z != y) returns true when x and y are equal and z is different.
But x and y are already constrained to be different since nums is a set and itertools combinations picks distinct elements from the iterable. If the test had been x != y != z it would have been a bug.
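Right, Python chains comparisons pairwise, so a quick throwaway demo of the difference (not from the repo):

    from itertools import combinations

    # x != z != y means (x != z) and (z != y); it says nothing about x vs. y.
    x, y, z = 5, 5, 7
    print(x != z != y)   # True, even though x == y

    # Here that's fine: combinations over a set already yields distinct x and y,
    # so only z needs checking against both.
    print(list(combinations({1, 2, 3}, 2)))   # [(1, 2), (1, 3), (2, 3)] (order may vary)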
This was precisely the kind of thing I was looking for. I've done AoC for a few years, and did OK this year (got 2 stars on all but 3 days, and one star on all but 2), but being able to compare my ramshackle, simplistic code to something like this is a project for a month: break down one a day. I'll do this in February, so thanks for posting that, OP!
Day 5 part 1's solution is interesting. I had implemented it the obvious way, doing a binary search. I didn't realize you could represent that string as binary and get the equivalent integer. That's really cool; however, I am not sure I get the idea behind why that actually works.
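In case it helps: F/L and B/R are just the 0 and 1 bits of the seat ID, so the whole string can be read as a 10-bit binary number. A rough sketch of the idea (not necessarily Norvig's exact code):

    def seat_id(boarding_pass: str) -> int:
        "Each F/L halves toward the low end (bit 0); each B/R toward the high end (bit 1)."
        bits = boarding_pass.translate(str.maketrans("FBLR", "0101"))
        return int(bits, 2)

    print(seat_id("FBFBBFFRLR"))   # 357, the row-44, column-5 example (44 * 8 + 5)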
It's a bit overly clever. I am a huge fan of Peter Norvig (and he was once my skip-level boss), but I'd call this code out in a code review for being obtuse.
Notice that the input, `nums`, is a set. So he's taking the intersection of two sets. One set always has one item, so the result will be a collection with either 0 or 1 items.
It could have also been written as:
    first(x * (2020 - x)
          for x in nums
          if (2020 - x) in nums and x != (2020 - x))
But that has a lot of duplication, so it'd probably be best to just use an imperative approach here and do something like:
    def day1_1(nums):
        "Find 2 distinct numbers that sum to 2020, and return their product."
        for x in nums:
            y = 2020 - x
            if x == y:
                continue
            if y in nums:
                return x * y
I am not sure I would advocate for this, but as of Python 3.8, this is an instance where assignment expressions would allow the first code without the duplication.
    first(x * y
          for x in nums
          if (y := 2020 - x) in nums and x != y)
I agree, tuvistavie! But my default machine has 3.7 installed, and I didn't want to get too far ahead of readers. Sometime next year I'll go to 3.8, and look forward to using ":="
The "x == y" test looks like a bug to me, unless duplicate entries are not allowed. For example [1010,1010,1721,979,366,299,675,1456] should return 1010 * 1010, not 1721 * 299.
    def day1_1(nums):
        "Find 2 distinct numbers that sum to 2020, and return their product."
        return first(x * y
                     for x in nums
                     for y in nums & {2020 - x}
                     if x != y)
`nums` is a set of integers
`nums & {2020 - x}` finds the numbers that are in both the set `nums` and the set `{2020 - x}`; this basically extracts a number, `y`, for which `x + y = 2020`.
So for each x in nums, he extracts a number from nums that, added to x, equals 2020. If there is such a number y, he checks whether it is the same as x, and only if it is different does he yield the product of x and y from his generator.
{2020 - x} is a single item set. nums & {2020 - x} does set intersection with nums, which restricts nums to either the item that sums with x to 2020 or nothing if there is no such item in nums.
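A quick way to see it with the day 1 example numbers:

    nums = {1721, 979, 366, 299, 675, 1456}   # example input from the puzzle statement

    x = 1721
    print(nums & {2020 - x})   # {299}: the unique partner, if one exists
    x = 366
    print(nums & {2020 - x})   # set(): no partner, so the generator yields nothing for this x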
Norvig's code is gold, even back from the times of PAIP (which is still definitely a must read).
His "poker" course (on YouTube) is also golden standard.
Why? The background in math, basic data structures, and logic, so one knows the range and boundaries of what is possible. Plus careful attention to detail, as in any art.
Even as a software engineer who studied mathematics, I just can't get excited about the type of problems found in Advent of Code and other such things, like Project Euler. Those problems amount to little puzzles and tricks and algorithms that have little to no context. What I find interesting about computing and programming is the representation and exploration of ideas.
Also, I can't help but wonder. Why does the Director of Google Research and a computer scientist use his personal time to code, what looks like exclusively, in Python rather than other interesting languages? Is it a bit of marketing for Google and/or his book, or does he just like Python?
>what looks like exclusively, in Python rather than other interesting languages?
probably because he cares more about the problems and less about the language and python is a pleasant general purpose language (and ubiquitous in AI research)
I never understood why many software developers are so focused on languages rather than on problems; at the end of the day the language doesn't really matter, unless the problem is in a very peculiar domain.
It is less about hyper-focus on language and more of a curiosity. Yes, I personally don't like Python, but I would have the same curiosity if he used <x>, had written a book using <x>, and if Google heavily used <x>.
Norvig is a teacher. His first choice was Lisp for a long time. He chooses Python because it is a good enough alternative for his teaching. You can find his motivation here: http://www.norvig.com/python-lisp.html
He has been writing such programs, essays, etc. on his website norvig.com for a long time. I don't think he only likes Python, and it's definitely not about marketing at all.
My favorite essay is http://norvig.com/21-days.html; after reading it you might change your opinion about him finding only Python interesting.
Thanks for those, but I have read them already. His comments on his book and Python are not really relevant in today's world and are more of historical interest. Also, I don't really understand the timeline from his comment and am not familiar enough with the book to know its history. Did the first edition use Python? (I am aware of his and Russell's prior AI book.)
What am I confused about other than what I asked? There are two books. The first is Paradigms of Artificial Intelligence Programming, which used Common Lisp, and the second is Artificial Intelligence: A Modern Approach, which to my knowledge primarily uses or encourages Python. I was trying to place what time Norvig was referring to in the linked comment. He mentions his book, which I assumed to be the Modern Approach one. The first edition came out in 1995 and the second in 2003. Python came out in 1991, and he joined Google in 2001. His comment makes it hard to know what period he was referring to (and thus even which book), and what I don't know is whether the first edition of the Modern Approach book used Python or whether that came later in the second edition (without going and finding copies of the editions myself).
Artificial Intelligence: A Modern Approach is "his and Russell's prior AI book". It seemed as if you were talking about a different book.
In the HN comment he says "the Lisp code that Russell and I had online" so I don't think it's difficult to guess it's indeed AIMA he's talking about.
I don't think that the book, in any of the four editions, "primarily uses or encourages Python". I have not checked them all, but I expect the book to still use pseudocode exclusively. Elsewhere, they provide implementations in different languages: https://github.com/aimacode
It seems my remark that confused you is that I indeed made a mistake about thinking Russell was a co-author on PAIP, which he is not. Although, I thought it was clear I was talking about AIMA and referring to PAIP in my parenthetical comment. I feel it's nicer to just ask a question rather than to say someone is confused.
I still think it's fair to say the book encourages the use of Python if one of the primary authors publishes solutions in that language over existing solutions in another language and promotes Python. Of course, it seems over time other languages have grown into providing solutions. I don't have the book (because of its exorbitant price) but do have PAIP, so I can't say for sure. Even the GitHub repo for the Lisp implementation says it's out of date and was used back in 1995. This latter point is relevant and was what I was curious about. Were Norvig's comments referring to a time before the first edition was published, or was it after the first edition was published and sometime in between the first and second editions? I'm just generally curious about the context he's referring to.
> It seems my remark that confused you [...] I feel it's nicer to just ask a question rather than to say someone is confused.
So you can say that I was confused but I cannot say that you were confused? Ok then.
What do you find unclear, really?
The 1st edition of AIMA is from February 1995.
The "Python for Lisp Programmers" page is from May 2000.
The second paragraph starts: "I looked into Python because I was considering translating the Lisp code for the Russell & Norvig AI textbook into Java."
That's fine, but it addresses a bunch of things I didn't say.
I'm sure I would learn something if I read through the code in more detail, but that's true of a ton of things, and time is what it is. I merely commented on what I personally find interesting and uninteresting. Also, even I, a non-Director of Research at Google and non-computer scientist, don't particularly enjoy using the language I use day-to-day for fun in my personal time.
Mind pointing out where? I asked a pretty simple question and got answers insinuating I said things I didn’t. Of course we all enjoy things for different reasons. I don’t insist on anything. People are free to disagree on whatever and have their own stance. If people like Advent of Code, that’s great. I have looked into it before, and it’s not for me. And I was also simply curious why Norvig seems to like Python so much. He either likes it or it’s convenient as a marketing tool or both (i.e., it fits all his needs). None of those are bad or wrong or whatever.
In a thread about Peter Norvig solving AoC problems in Python, you bashed on 1) Peter Norvig, 2) AoC and 3) Python. You're entitled to your opinion, of course, but your comment reads like someone jumping into an enthusiastic conversation among Star Wars fans to tell everyone how much you dislike Star Wars. I'm sure you didn't intend it that way (based on your later comments), but that's how it comes across.
I think bash is a little strong. There’s a lot of ways to view programming and to identify as a programmer. In many ways, it’s a lament of mine and a feeling of being an imposter that I don’t necessarily enjoy what many programmers seem to. I didn’t attack anything and only presented my personal opinion.
> I was also simply curious why Norvig seems to like Python so much
Why aren't you satisfied with his own explanations that you already knew about?
"I looked around for a language that was closer to the pseudocode in the book, and discovered Python was the closest. [...] Python is an excellent language for my intended use. It is easy to use (interactive with no compile-link-load-run cycle), which is important for my pedagogical purposes."
"I looked for the language that was most like our pseudocode, and found that Python was the best match. Then I had to teach myself enough Python to implement the examples from the textbook. I found that Python was very nice for certain types of small problems, and had the libraries I needed to integrate with lots of other stuff, at Google and elsewhere on the net."
I find it fascinating (in a very good way) that he does find time to code. Python is not a bad language; in all fairness, any programming language which piques your interest enough to spend time working with it is a good choice.
The added benefit of Python is the ease of using something like Jupyter notebooks, which make it trivial to iterate.
(disclaimer: I coded more in Ruby than in Python, but I still enjoy both)
> The added benefit of Python is the ease of using something like Jupyter notebooks, which make it trivial to iterate.
Other languages do have better REPLs, though, allowing easier iteration than Python's REPL. Some languages, such as F#, have both a better REPL and notebook-style programming.
But I was really more curious rather than saying which language would be "better".
Why do we speak in English instead of other interesting languages? Between Java, Python, and Haskell (which are the linguae francae of the papers I tend to read, the first and last having replaced C and Lisp during my lifetime), I think Python is a defensible choice for minimising boilerplate while maximising potential readership.
As someone who has studied maths, I find Project Euler infinitely more interesting, especially the harder questions. For me, most of the time spent on Advent of Code is basically about parsing a blob of text. Project Euler inspires much more interesting ideas; mostly it is number theory and algebra.
The same could be said for actual advent calendars - if you go around any supermarket you could get much better chocolate for a kid than any advent calendar will have and it'll doubtless be cheaper too. But when you're a certain age there's something particularly magical about the experience of waking up in the morning, finding the little door for today, opening it up and having the little (vaguely) christmas-themed chocolate shape.
As an adult, I have to say that for me doing AoC brought back a little of that magic. It helped that the problems unlocked at 6am in my timezone so I'd wake up, hack on the problem for an hour or so then walk my dog and get ready for work. It was also nice having a collective experience, knowing that not only 5-6 of my colleagues but thousands of other people across the globe were all going through the same problem at the same time. I really liked that feeling.
So yeah Project Euler has very compact, well-written, well-designed, challenging problems in spades ... but AoC is a very specific sort of event that I must admit I look forward to more and more each year :-)
I personally don’t think it is because in the face of many modern languages, it doesn’t do any one thing the best and doesn’t present an interesting paradigm. Is there something it does interesting that I can’t find in languages like F#, Racket, or Elixir?
But my question was trying to understand if Python has some hidden mojo that Norvig really likes or if he uses it like this publicly because he’s a top leader at a company that also uses it heavily. People listen to people like that in positions like his, and if he did all his solutions in Julia, F#, or whatever, that’d probably route a good amount of attention to those other languages that Google and his book doesn’t really use or hire for.
> it doesn’t do any one thing the best and doesn’t present an interesting paradigm
That's absolutely true. But once you've been through the language paradigms, written Forth and Prolog and Haskell and APL and Lisp, you don't gain much by immersing in a new language.
At that point you start looking for languages where the things you loathe about them intersect minimally with what you work on. Python has relatively little to loathe and is very convenient. I still reach for it by default as well, and I was at one point best known for my Haskell work.
It has nothing to do with Google marketing. Norvig's established enough where he lends luster to Google, not vice versa.
"too tedious for too little reward...." "sea monster as characters inelegant"