
The ongoing weaponisation of psychological terms such as narcissist by the unqualified is another tiresome step along the euphemism treadmill. It may never stop as it is too simple a form of entertainment to resist.


I think you should read Herovit's World. haha.


It's been a long time since I've interacted with a software architect. As poor a source as Wikipedia is for a layman like myself, I still use it as a first step to explore topics I am not familiar with.

Would software architects be aware of all the various paradigms applicable within a particular language? Much like me trying to pick the right tool to complete a particular project...

https://en.wikipedia.org/wiki/Programming_paradigm

Is there a good reason why Domain-Driven Design is not linked in the article above, or is it just an oversight?

https://en.wikipedia.org/wiki/Domain-driven_design


DDD is not a programming paradigm, it’s a software design approach. You can combine DDD with various programming paradigms such as OOP and FP.


Thanks for the clarification.

So just to spell it out: neither Domain-driven design, nor Data-driven development per the article title, nor Data-driven design is a programming paradigm; they are software design approaches.

Whereas Data-driven programming, as linked from the Wikipedia article above, is in fact a paradigm and something else entirely. Alright.

They have 'Data-oriented' listed in the Wikipedia paradigm article, but it links to 'Data-oriented design', which is clarified as a software design paradigm, distinct from a programming paradigm.


Software design approaches are frequently closely tied to programming paradigms, and DDD (while, like Object-Oriented Analysis and Design, it can be used with any paradigm) is closely associated with OOP; at least the early writing on it (if there is newer writing this isn't true of, I haven't seen it) is very tightly coupled to OOP.


That is true to a certain extent, but that doesn’t make it a programming paradigm.


Yes, it was a related observation, not a refutation.


This refactoring business, I see it going two ways; there are probably more.

1. You refactor: you change the type declaration in one place, and auto handles the boring work of replacing the characters throughout your project.

2. You refactor: you change the type definition in one place, and auto will handle replacing all the instances in your project; hell, it might even compile afterwards.

I believe you are describing 1. I find that easy enough to do with sed or IDE refactoring tools.

2 is more subtle: the new behaviour could now be worthy of scrutiny throughout the project. I find it difficult then to grep or IDE-search through the project for all instances reliably when auto is in use. It is much easier for me with spelled-out types.

I would trade the benefit of auto in 1 for the safety of spelled-out types in 2 every single time.
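
A contrived sketch of 2, with a made-up function, to show why the grepping gets hard:

    #include <list>
    #include <vector>

    // Hypothetical refactor: load_ids() used to return std::vector<int>.
    std::list<int> load_ids() { return {1, 2, 3}; }

    int main() {
        // Searching the project for 'std::vector' no longer finds this
        // call site, because 'auto' tracked the new type silently. With
        // the type spelled out, this declaration would have flagged the
        // change loudly.
        auto ids = load_ids();
        return static_cast<int>(ids.size());
    }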


If you're replacing types in this manner you don't scrutinize the usages. You design (or choose) the new type such that it actually is compatible with the old type. For example std::vector and std::list can be replaced with each other, because they either have the same functions with analogous behavior, or you get compiler errors if you use any of the non-overlapping functions. What you can't do is replace an std::vector with a class whose clear() fills the container with default-constructed instances of the objects.

In other words, you concentrate your review on the original and new type, instead of the usages.
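
Roughly, and untested:

    #include <list>
    #include <vector>

    int main() {
        std::list<int> c = {1, 2, 3};   // was std::vector<int>
        c.push_back(4);                 // overlapping interface: same
        c.clear();                      // names, analogous behavior
        // c[0];                        // non-overlapping: std::list has no
        //                              // operator[], so this is a compile
        //                              // error, not a runtime surprise
        return 0;
    }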


> You design (or choose) the new type such that it actually is compatible with the old type.

Yes, I agree, in that case sure; but when refactoring the case can arise that a new type must no longer be compatible with the old type, and then auto becomes a hindrance.

std::vector and std::list have different behaviour regarding the validity of iterators after deletion (EDIT: and insertion so it seems!), to pick an example.
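
For instance, a minimal sketch:

    #include <iterator>
    #include <list>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};
        auto vit = v.begin() + 1;
        v.erase(v.begin());          // invalidates vit: *vit is now UB

        std::list<int> l = {1, 2, 3};
        auto lit = std::next(l.begin());
        l.erase(l.begin());          // list::erase only invalidates
                                     // iterators to the erased element
        return *lit;                 // fine here, UB in the vector case
    }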

Maybe I'm just paranoid?


> the case can arise that a new type must no longer be compatible with the old type

Then you make sure that whatever causes an incompatibility also causes a compilation error. You can't rely on text searches and IDEs for something like that.

> std::vector and std::list have different behaviour regarding the validity of iterators after deletion

Fair enough, perhaps not the best-chosen example. I was thinking about them purely as collections, rather than as part of resource management. Checking their behavior with automated tools becomes much more difficult once people start taking pointers into elements. But then again, that's true of any class. If, for example, a member function returns a reference to a member and someone gets its address, now that location is implicitly relying on the internal stability of the class in a way that's invisible to the type system.
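
Something like this, with a made-up class:

    #include <vector>

    // Hypothetical class whose member function hands out a reference
    // to internal storage.
    struct Buffer {
        std::vector<int> data{1, 2, 3};
        int& first() { return data.front(); }
    };

    int main() {
        Buffer b;
        int* p = &b.first();   // caller quietly takes an address
        b.data.resize(1000);   // likely reallocates: p now dangles, and
                               // no signature anywhere recorded the risk
        (void)p;
        return 0;              // dereferencing p here would be UB
    }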


I think the example is perfectly apt, and I would not know where to start wrapping a std::list/std::vector implementation to pick up on runtime iterator invalidation.


Since I was trying to give an example of two compatible classes, no, it's not apt, since their member functions have incompatible side effects.

Hm... Hypothetically, with a lot of effort you could design a dummy class (A) that implements only the members you want to investigate and where necessary returns a different dummy class (B) representing the element type. If someone ever tries to take the address of a B (you have to delete operator&() and/or get() if it's some kind of smart pointer) then you know you might be dealing with iterator invalidation.
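
Very roughly (untested, and std::addressof would still sneak past it):

    #include <cstddef>

    // Hypothetical probe types as described: A stands in for the
    // container, B for the element type.
    struct B {
        B* operator&() = delete;   // any '&element' in client code
                                   // now fails to compile
    };

    struct A {
        B& operator[](std::size_t) { static B b; return b; }
    };

    int main() {
        A a;
        B& elem = a[0];
        // B* p = &elem;           // error: operator& is deleted, which
        //                         // flags possible dependence on
        //                         // element stability
        (void)elem;
        return 0;
    }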


Ah, well I guess I did take it as an example of something that was incompatible. It served well enough as a vehicle to drive the conversation forwards.

The iterator invalidation occurs when push_back(), insert() or erase() are called, presumably among others, so you'd also want to overload the iterator increment and decrement operators (oh, and not to forget end(), or rend() if you are going the other way...). I'm not sure which operators and methods would be called on passing to a std::algorithm like std::find or std::sort. Most likely the only way to find out for certain would be to make everything inaccessible and replace it piecemeal until the compiler was happy to run to completion.
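
Something in this direction, perhaps; a hypothetical, untested probe:

    #include <vector>

    // Privately inherit the real container and re-expose members one at
    // a time until the project compiles again; each 'using' you are
    // forced to add documents an operation the code actually performs.
    template <typename T>
    class ProbeVector : private std::vector<T> {
        using base = std::vector<T>;
    public:
        using base::base;     // constructors
        using base::size;     // re-exposed because the build demanded it
        // using base::begin; // uncomment as the compiler complains
    };

    int main() {
        ProbeVector<int> v{1, 2, 3};
        return static_cast<int>(v.size());
    }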

I'd want to take a closer look where it's instantiated, but if all the uses are 'auto', well let's just say I'd be unhappy to say the least.


Here are some old thoughts on the matter of small firearms, written by someone long ago:

https://www.orwellfoundation.com/the-orwell-foundation/orwel...


rust: std::simd efforts

midi: 2.0

web: local-first as a trend


If you were on Windows you could possibly do something to get this into Audacity:

http://reaper.fm/reaplugs/

https://github.com/nbickford/REAPERDenoiser


On Rosetta 2, from the horse's mouth:

> Rosetta translates all x86_64 instructions, but it doesn't support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions.

I can imagine quite a number of users running into the above situation in multimedia-related code.
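
For example, x86 code has to gate those paths at runtime; a minimal sketch using GCC/Clang's __builtin_cpu_supports, with made-up function names:

    #include <cstdio>

    // Hypothetical scalar and AVX2 code paths.
    void process_scalar() { std::puts("scalar path"); }
    void process_avx2()   { std::puts("AVX2 path"); }

    int main() {
        // Under Rosetta 2 this check reports no AVX2 support, so a
        // program that dispatches like this degrades gracefully instead
        // of dying on an illegal instruction. Code that assumed AVX2
        // unconditionally is what needed patching.
        if (__builtin_cpu_supports("avx2"))
            process_avx2();
        else
            process_scalar();
        return 0;
    }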


That’s still a small minority of all the apps out there, though.


But we're talking about Macs. That's a huge chunk of their userbase. The people who use Macs for actual work can be broadly classified into two groups: devs who need Xcode, and media. Yes, there are exceptions, but that's the majority. For one of those groups AVX is pretty important.


And those apps had to be ready too. You don't build a reliable platform by randomly breaking "a small minority of apps". You yourself are certainly in a "small minority" for at least a few features you rely on.


This is just an endless logic circle.

“Not all apps needed to worry about the CPU change”

“But some did!”

“And Apple made test hardware available for those people”

“But not enough for all apps to be tested”

“Not all apps needed to worry about the CPU change”


To clarify:

The initial claim was, lxgr: "If you were initially fine with software emulation (i.e. Rosetta 2), as were many small and large software projects for macOS or Unix, you had no need whatsoever to get a DTK."

The subsequent claim was, xvector: "Rosetta 2 was straightforward and surprisingly fast, requiring zero tweaking or user interaction. Most people never even noticed."

Posters, myself included, are reacting against these claims, as they both put the cart before the horse, and the second gives only an end-user perspective.

Devs had verified with a DTK that Rosetta 2 ran their programs acceptably. Keep in mind patches had to be issued for programs which did not check for the presence of AVX, AVX2 or AVX512, else they would crash. This invalidates the first claim, and shows why the second claim tells only the second half of the story.

So the logic follows a line rather than a circle.

Also nobody made the claim that: “And Apple made test hardware available for those people, But not enough for all apps to be tested”


Stealing is illegal, so I never lock my front door.


Keeps you from getting a broken window, so that’s probably not a bad idea.

If they want in, a locked door isn’t stopping them.


In Louisiana, if you leave your car unlocked and someone takes it, it isn't GTA, its unauthorized use of a movable.


The locked door makes all the difference for your insurance claim, though.



Wait till you find out some people prefer their faculties dulled by a hangover to being fully present and in the moment for their dreary morning-to-midday routines.


Strangely some of my faculties are actually made keener by a mild hangover. I'm more empathetic, have keener hearing, and am in a reflective, more receptive frame of mind.

Unfortunately these effects are inseparable from less desirable ones: guilt, anxiety, upset guts, hypersensitivity to smells, hair-trigger impatience, flop sweat.

