I really don’t know where this misconception keeps coming from. The course very explicitly makes clear the need for profiling and instrumentation, and the lab write-ups require breaking down the impact of each change you made. But that’s all trivial compared to the main content of the class; it can be taught in a single lecture (plus hands-on guidance in recitations/homework). After that is where the “ok, section X is slow, now what?” comes in.



The mistake in your comment - and probably the course - is the assumption that the measurement, which you limit to only profiling and a vague notion of "instrumentation" (which I assume usually means "reading the performance counters"), is the easy part of performance engineering. Actual performance engineering is rarely as simple as "this huge section of code is slow and hot and hasn't been hyper-optimized yet, let me hyper-optimize it." I have done projects like that in real systems, but they are few and far between compared to projects that involve a lot more measurement and a lot less assembly-level code. Usually, performance engineering either means (1) finding someone doing some O(n^2) shit and diplomatically telling them that they were stupid or (2) finding a relatively subtle and diffuse source of slowness in your system and making equally subtle changes that speed things up significantly.
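To make category (1) concrete, here's the shape that code usually takes (a hypothetical example of mine, not from the course or any real codebase):

    #include <string.h>

    /* Quadratic: strcat() rescans dst from the start on every call,
     * so joining n short strings does O(n^2) total work. */
    void join_quadratic(char *dst, const char **parts, size_t n) {
        dst[0] = '\0';
        for (size_t i = 0; i < n; i++)
            strcat(dst, parts[i]);      /* O(strlen(dst)) per call */
    }

    /* Linear: track the write position instead of rescanning. */
    void join_linear(char *dst, const char **parts, size_t n) {
        size_t pos = 0;
        for (size_t i = 0; i < n; i++) {
            size_t len = strlen(parts[i]);
            memcpy(dst + pos, parts[i], len);
            pos += len;
        }
        dst[pos] = '\0';
    }

Any profiler will light up on strcat here; the engineering is in the "diplomatically" part.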

See the following paper for an example of real life performance engineering, where the engineers involved slowed down their own code to create a significant improvement in general application performance: https://www.usenix.org/system/files/osdi21-hunter.pdf

As another example, performance engineering in trading systems often involves figuring out how to do non-invasive measurement of events that cause systemic tail latency so you can find the bottlenecks and slow parts. If you do the hamfisted things that most engineers think of ("let me chuck performance counter checks everywhere"), you will prevent your system from making money and often destroy the usable signal.
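A minimal sketch of what non-invasive can look like (my illustration, assuming x86-64 and GCC/Clang; not from any actual trading system): keep the hot path down to a timestamp and a ring-buffer write, and do all aggregation offline.

    #include <stdint.h>
    #include <x86intrin.h>          /* __rdtsc(); GCC/Clang, x86-64 */

    #define RING_SIZE 65536         /* power of two, preallocated */

    struct event { uint64_t tsc; uint32_t id; };

    static struct event ring[RING_SIZE];
    static uint64_t ring_head;      /* single writer, wraps around */

    /* Hot-path cost: one TSC read and one small store. No locks, no
     * syscalls, no I/O; the buffer is drained and analyzed offline. */
    static inline void trace_event(uint32_t id) {
        struct event *e = &ring[ring_head++ & (RING_SIZE - 1)];
        e->tsc = __rdtsc();
        e->id = id;
    }

The specific mechanism matters less than the principle: the measurement has to be cheap and quiet enough that it doesn't perturb the tail it's trying to observe.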


Both 1 and 2 are solved by profiling. The course involves assembly because it’s taught in C, but the core concepts apply to any language - the particular language is just an implementation detail you (and several others) are getting needlessly hung up on.

Your trading system example sounds like a great topic for a master’s thesis or similar graduate-level work (or better yet, industry), not a core component of an introductory performance engineering class.


> Both 1 and 2 are solved by profiling.

This is a very simplistic view of how software measurement works; it’s pretty pervasive in academia, but it doesn’t translate all that well to practice. If you stick to typical profiling methods (which tell you very little about things like latency and nothing at all about many sources of slowness), you still need to find a low-impact way to apply them if you actually want to measure a software system of any complexity. As an example of another technique, tracing (and even just reading logs) can give you a lot of interesting signal that profilers don’t generally capture.
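As a toy illustration of the kind of signal I mean (my sketch, not the course's): per-request durations recovered from traces or logs expose the tail, which a flat CPU profile averages away.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_u64(const void *a, const void *b) {
        uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
        return (x > y) - (x < y);
    }

    /* Report tail percentiles from per-request durations pulled out
     * of a trace. A profile can look perfectly healthy while p99.9
     * is a disaster. */
    void report_tail(uint64_t *ns, size_t n) {
        qsort(ns, n, sizeof *ns, cmp_u64);
        printf("p50   %llu ns\n", (unsigned long long)ns[n / 2]);
        printf("p99   %llu ns\n", (unsigned long long)ns[n * 99 / 100]);
        printf("p99.9 %llu ns\n", (unsigned long long)ns[n * 999 / 1000]);
    }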

Most commercial software systems have a line count (and a code profile, by the way) comparable to the Linux kernel’s, so if you just apply a simple profiling methodology to them, you’re going to get a lot of crap data and slow things down a lot. Performance engineering is about extracting signal from the crap.

> The course involves assembly because it’s taught in C, but the core concepts apply to any language

I never said I wrote assembly; I said assembly-level code, which you can write in most languages. You can write it in Python if you are skilled enough at Python. Many people do it in Java and Go. I usually do it in C. Most of this kind of code these days is not actually written in assembly.
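To be concrete about the term (a throwaway illustration of mine, not a claim about any particular codebase), "assembly-level code" means source written with the emitted instructions in mind:

    #include <stdint.h>

    /* Written so there is nothing to mispredict: mainstream compilers
     * typically emit a compare and a conditional move here, no branch. */
    static inline uint32_t clamp(uint32_t x, uint32_t max) {
        return x > max ? max : x;
    }

You're reasoning about instruction selection and branch prediction, but the file extension is .c (or .py, or .java).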

> Your trading system example sounds like a great topic for a master’s thesis or similar graduate-level work (or better yet, industry), not a core component of an introductory performance engineering class.

The primary point I am trying to make is that a class called "micro-optimization" should teach you how to micro-optimize code. A class called "performance engineering" should, by the same logic, teach you how to actually do performance engineering, which is not all that related to micro-optimization.


I’ve used techniques I both learned and taught in that class to dramatically speed up many real-world, large-scale systems. Yes, not every aspect taught is applicable to every system (one thing we mentioned many times is that many of the micro-optimizations we explained are done by a modern compiler anyway, which is why we even brought up assembly). But those techniques are still useful historical context for the subject, which is exactly what you’d expect an introductory class to include in the first lecture, along with a live demo that gets people excited about the class and willing to keep going with it.

The class is not about micro-optimization. The first couple of lectures are. But people in this thread love to read the title and stop there; I don’t get it. Here are the lecture notes for measurement and timing; you’ll notice they include what you say about separating signal from noise. I’m sure you, with tons of industry experience behind you, know more than this introductory undergraduate lecture on the topic provides (it’d be quite sad if you didn’t!), but that does not mean the class does not provide an introduction to the topics. One that, again, I have personally built upon to dramatically speed up countless medium-to-large-scale systems in a wide variety of contexts. https://ocw.mit.edu/courses/6-172-performance-engineering-of...



