
I’ve heard that Google allows you to type code into a computer now during an interview. A welcome improvement in the state of the art.

Still no access to a compiler or debugger, but baby steps.




I had to write code exclusively on a whiteboard last month. The only laptop in the room was the interviewer's, and he was using it exclusively to furiously copy the code I was writing on the whiteboard.

He said he would compile it and submit a report when he got back to his desk. :/


I've done nearly 100 interviews at Google, at least half of them with me copying code from a whiteboard, and I have never tried to compile a line of code that a candidate wrote.

I also go out of my way to make it clear that I don't care about every hanging parenthesis, indentation, or typo.

I care about whether the candidate asks for clarification or plows ahead with assumptions, whether the overall algorithm works, whether the candidate can identify the limitations of, and bugs in, their solution (everyone has bugs. Everyone. There's nothing wrong with that.), and what their testing strategy is.


I have to ask: do you see yourself as the norm at Google, throughout your whole time employed there?

In my experience as both interviewee and interviewer, there are a lot of power struggles involved, just like in any other interaction between engineers, such as code reviews.


OK, so of the 5 interviewers, 2 did this. The others seemed not to care. Do you know if "must compile" is a Google-wide rule that some interviewers just ignore?


I have personally never heard of 'must compile'.

I have also never received any negative feedback from hiring committees or managers about how I conduct my interviews, what I focus on, or how I analyse candidate performance.


> He said he would compile it and submit a report when he got back to his desk. :/

So, he's asking you to do something he can't do? Why can't he just read your code and know how it will behave?


Same happens at Amazon. They want a syntactically correct program that will actually compile and run, but to verify that, they take a photo of what you've written on the whiteboard and try it on a computer themselves. :|


Someone above him might have decided that’s part of the process.

I am starting to think that most of those decisions are made by bureaucrats who have no accountability for the actual results, especially at Google, where they can afford to lose great candidates left and right.


This process is very well designed to put all the cards in the employer's hands.

Imagine if most highly compensating employers made the barrier to entry so absurdly difficult that you would no longer consider looking for new positions every 1-2 years to grab new offers or seek larger raises. Obviously, this isn't sustainable long term, but it sure would reduce turnover rates if you could get a small cartel to follow suit.


Sounds like a great opportunity to whiteboard an exploit instead.


I'm a moron and did something stupid not too long ago that would be perfect for this.

I'm too lazy to find my actual code but here's the gist:

C#, but the same thing might work in Java/etc. too:

    // Load an Image of some size (System.Drawing)
    Image img = Image.FromFile("big-photo.png");

    for (int y = 0; y < img.Height; y++)
    {
        for (int x = 0; x < img.Width; x++)
        {
            // "cast" Image to Bitmap, since Image doesn't have GetPixel --
            // except this copies the whole image on every iteration
            Bitmap b = new Bitmap(img);
            Color c = b.GetPixel(x, y); // or whatever looks legit
        }
    }

This creates a new bitmap for EVERY pixel in the image. If the image is large enough and the system is 64-bit, bad things happen. On Lubuntu this code burned through 8 GB of RAM in seconds and was well on its way to eating all the virtual memory before I forcibly shut it off.

Maybe this is a Linux issue, but it hard-locked my system. I couldn't switch to a different terminal or anything.
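
The obvious fix (assuming the gist above is roughly what my actual code did) is to do the conversion once, outside the loops, something like:

    // Copy the Image to a Bitmap exactly once, then read pixels from it
    using (Bitmap b = new Bitmap(img))
    {
        for (int y = 0; y < b.Height; y++)
        {
            for (int x = 0; x < b.Width; x++)
            {
                Color c = b.GetPixel(x, y);
            }
        }
    }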


I'm curious why this worked like that, since the GC should consider all those bitmaps orphaned at the end of every loop iteration (possibly even as soon as you get the pixel, since that reference is no longer used). The GC might not run for a while, but it should certainly run long before the entire system starts swapping...

The only thing I can think of is that the actual bitmap data is not tracked as a managed array by a Bitmap instance, but rather is a pointer or handle from some underlying native library. GC might not kick in then, because it doesn't realize how much memory all those bitmap objects are actually hogging. Now, on .NET, when libraries do that kind of thing, they're supposed to use GC.Add/RemoveMemoryPressure to let it know. But perhaps the library that you were using didn't?
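
For reference, roughly what that's supposed to look like in a wrapper that owns native memory (a minimal sketch with made-up names, not how System.Drawing actually implements Bitmap):

    using System;
    using System.Runtime.InteropServices;

    // Sketch: a managed object holding a native buffer tells the GC how much
    // unmanaged memory it really represents, so collection isn't deferred.
    class NativePixelBuffer : IDisposable
    {
        private IntPtr _buffer;
        private readonly long _bytes;

        public NativePixelBuffer(int width, int height)
        {
            _bytes = (long)width * height * 4;              // 32bpp
            _buffer = Marshal.AllocHGlobal((IntPtr)_bytes); // allocation the GC can't see
            GC.AddMemoryPressure(_bytes);                   // now it can weigh it properly
        }

        public void Dispose()
        {
            Marshal.FreeHGlobal(_buffer);
            GC.RemoveMemoryPressure(_bytes);
            GC.SuppressFinalize(this);
        }
    }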


Can confirm. I interviewed last September and was given the choice between laptop and whiteboard. Picked laptop. It seemed this element was new to most interviewers at that time.



