
One of the things I miss most about working for Microsoft is all the great internal tooling. There's stuff that blows away what's available on Linux. I wouldn't trade it for apt-get any day of the week, but it would be nice to see a lot of it get ported.



> One of the things I miss most about working for Microsoft is all the great internal tooling

Like what? I am not trying to be snarky here. I honestly would like your feedback. From my perspective Linux is way better than Microsoft in this regard, but maybe I am just looking at/using the wrong MS tools?


Windows Performance Analyzer is an example of something public. It's like dtrace on steroids. Page Heap is really helpful for finding buffer overflows. I've never used a debugger with a better UI than the VS one. WinDbg's reverse debugging is super powerful and fast enough to be usable.
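
To make the Page Heap point concrete, here's a toy sketch (my own code, not anything internal) of the kind of one-byte heap overrun it catches. With full Page Heap enabled for the image via "gflags /p /enable overrun.exe /full", each allocation sits right before an inaccessible guard page, so the bad write raises an access violation at the exact faulting instruction under WinDbg instead of silently corrupting the heap:

    // overrun.cpp - a one-byte heap overrun that is normally silent.
    // Under full Page Heap the memset below faults immediately.
    #include <cstring>

    int main() {
        char* buf = new char[16];
        std::memset(buf, 'A', 17);  // writes one byte past the allocation
        delete[] buf;
        return 0;
    }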

Not public: the best instrumented profiler I've ever used. Smaller things like assert tags, so you can uniquely ID each one. A massive distributed test system (love/hate with that one). Profiling and optimization tools to answer pretty much any question about how your product behaves.
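
The assert tags aren't public, but the idea is simple enough to sketch. This ASSERT_TAG macro is hypothetical, just to show the shape: every call site carries a stable numeric ID, so a crash report can be matched to one specific assert even after the surrounding code is refactored and line numbers shift:

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical tagged assert: the numeric tag survives refactors, so
    // crash telemetry can be grouped by tag instead of by file and line.
    #define ASSERT_TAG(tag, cond)                                         \
        do {                                                              \
            if (!(cond)) {                                                \
                std::fprintf(stderr, "assert %#x failed: %s (%s:%d)\n",   \
                             (unsigned)(tag), #cond, __FILE__, __LINE__); \
                std::abort();                                             \
            }                                                             \
        } while (0)

    void Resize(int n) {
        ASSERT_TAG(0x1001, n > 0);  // tag 0x1001 names this check forever
    }

    int main() { Resize(1); return 0; }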

I always thought MSVC's dialect of C++ was a little more programmer-friendly. E.g. you can construct an object inline and pass it by non-const reference. The compiler doesn't do ridiculous things once it finds out you've overflowed a signed integer; it's designed to support applications, not to win synthetic benchmarks. PDBs make a lot more sense to me than packing debug info directly into the binary.
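
Both quirks are easy to demonstrate in a small sketch. The reference binding is a documented MSVC language extension (rejected under /permissive- and by GCC/Clang); the overflow check below is technically UB in standard C++, which is exactly why other compilers feel entitled to delete it:

    struct Options { int verbosity = 0; };

    void Configure(Options& opts) { opts.verbosity = 1; }

    bool WillWrap(int x) {
        // Signed overflow is UB, so GCC/Clang may optimize this test to
        // "false"; classic MSVC just emits the naive wrapping comparison.
        return x + 1 < x;
    }

    int main() {
        Configure(Options{});  // MSVC extension: a temporary binds to a
                               // non-const Options&; conforming compilers
                               // (and MSVC with /permissive-) reject this.
        return WillWrap(2147483647) ? 0 : 1;
    }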


> WinDbg's reverse debugging is super powerful and fast enough to be usable.

For this use case on Linux, there's rr: https://github.com/mozilla/rr/
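
The canonical rr workflow, shown on a toy bug (program and variable names made up; the commands are rr's documented CLI): record once, replay the exact same run under gdb, set a watchpoint on the corrupted value, and execute backwards to whoever wrote it.

    // Workflow:
    //   rr record ./corrupt      # capture one deterministic run
    //   rr replay                # reopen that exact run under gdb
    //   (rr) watch -l g_state    # hardware watchpoint on the bad value
    //   (rr) reverse-continue    # execute backwards to the writer
    int g_state = 42;

    void Innocent() { g_state = 0; }  // the "mystery" write rr rewinds to

    int main() {
        Innocent();
        return g_state;  // observed wrong at exit; rr finds the culprit
    }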


Just guessing here:

- Most of the tools by Sysinternals, e.g. Process Explorer, Process Monitor, TCPView

- Tools by NirSoft, e.g. GDIView, HeapMemView, etc.

- Microsoft's own tools, e.g. WinDbg, System Profiler, all the snap-ins for MMC

- Tools for looking at system components related to COM/OLE

That's just guessing, but I find myself respecting Windows more and more the more I learn about it.

I recommend reading the Windows Internals books, because they show the amount of work that has gone into things that are often overlooked, e.g. NTFS (which is miles better than Apple's new APFS, even though APFS seems to get a lot of praise for reasons unknown to me).

You can find out a lot of what is happening under the hood with Sysinternals' tools, including IPC, which I believe would be far harder to trace under Linux.


The problem with Windows was never the core technology but rather policy decisions by those in charge.


It sounds like he is talking about internal tools, i.e. not available to the public.


This wasn't the case until they bought Sysinternals. I think Microsoft has very good dev tools overall, but the utilities created by Mark Russinovich put everything Microsoft had to shame, so they hired him and bought his code.


Also, Azure was used internally (and by large internal clients) for a long while before being exposed as a service to customers.


Sounds a lot like what I hear about Google. The internal tooling is second to none. "If you have a problem, 5 people have already solved it in a really elegant, easy-to-use way."

I wish I was working in such an ecosystem. :(


Google has a lot of tools that are best-in-class, if you spend a large fixed cost setting them up and get everyone to use them. Obviously this is easy for Google to justify: they spent the fixed cost once, ten years ago, and now that everyone uses the tools, it's easy for new products to integrate too. But it makes them hard to spread outside Google itself.

Bazel, Google's build tool, is a good example of this. Google actually open-sourced most of Bazel a few years ago, but as far as I know it hasn't gotten much uptake. It requires doing a bunch of boring configuration work to adopt. But as someone who used it internally, I definitely prefer it to all the alternatives, due to its speed and reliability.
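
For a flavor of that configuration work: every package gets a BUILD file declaring its targets and dependencies explicitly (target and file names below are made up). It's tedious up front, but it's exactly what makes the caching and remote execution reliable.

    # BUILD file for a small C++ package.
    cc_library(
        name = "greeter",
        srcs = ["greeter.cc"],
        hdrs = ["greeter.h"],
    )

    cc_binary(
        name = "hello",
        srcs = ["main.cc"],
        deps = [":greeter"],  # built with: bazel build //:hello
    )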


(I work on Bazel)

Although Bazel is still in beta, more and more companies are using it (e.g. Dropbox, Huawei, SpaceX, Pinterest, Stripe - see https://github.com/bazelbuild/bazel/wiki/Bazel-Users) or are interested enough to attend the recent Bazel conference (see https://conf.bazel.build/about).


I wish you allowed people to be more open about how you build and deploy software at Google. I asked how a change to an Angular website gets to production, and the response was that they didn't fully know, and didn't know how much of what they did know they were allowed to share.

I can imagine there is some fear of espionage or sabotage, but I'm just asking about the boring stuff: deploying a (web) front-end system.


Their distributed build system is better, and they have better tools for debugging distributed system problems.

My impression is that for local stuff their tooling is not much better than what's public. They rely a lot more on logging.

I've never worked there, so this is all just secondhand info.


I find it interesting that feedback like this still abounds (and for good reason), considering the many tools, frameworks, and services available online today. Apparently the field for delivering another tool or service is still wide open :)



