
I used the models with a good workflow, but found them less than helpful. I do think, though, that if I worked in a less expressive language I'd be very enthusiastic.


Aye. It's not the smartphone that's the problem, it's the async notifications pulling you into social media traps.

I dropped those apps about 8 years ago, but started to do most of my _reading_ on my phone - when I pick up my phone and habitually open something now, it's a book of some sort. It's a good idea to have something to replace habits with, if possible.


DRY is _not a best practice_. Repetition is a "code smell" - it often suggests a missing abstraction that would allow for code reuse (what sort of abstraction depends on the language and context), but "blindly-drying" is in my experience the _single most frequent mistake_ made by mid-to-senior engineers.

My experience is mostly in Ruby though, so I'm not sure how well it generalizes here :-)
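To make that concrete, here's a minimal Ruby sketch (all names hypothetical) of the failure mode I mean:

    def import_users(rows)
      rows.each { |row| User.create!(name: row[:name], email: row[:email]) }
    end

    def import_admins(rows)
      rows.each { |row| Admin.create!(name: row[:name], email: row[:email]) }
    end

    # The "blind" fix: parameterize whatever the copies don't share.
    def import_records(rows, model:, extra: {})
      rows.each { |row| model.create!(row.slice(:name, :email).merge(extra)) }
    end

Each new caller wants one more keyword or hook, and the helper soon reads worse than the duplication it replaced. The repetition was a real smell - but the abstraction it pointed at was probably a per-model importer object, not one parameterized method.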


> "blindly-drying"

Right. It's not an optimization problem!

Remember in school when you learned to turn a truth table into a Karnaugh map and then use it to find the smallest equivalent logic expression? Well, your code is not a Karnaugh map, is it?
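A tiny (hypothetical) Ruby illustration of the point - these two predicates are logically identical, exactly the kind of pair a Karnaugh map would collapse:

    def refundable?(order)
      order.paid? && !order.shipped?
    end

    def cancellable?(order)
      order.paid? && !order.shipped?
    end

    # A logic minimizer would merge these into one method. A maintainer
    # shouldn't: "refundable" and "cancellable" are separate business
    # rules that merely coincide today. The day refunds become allowed
    # for 30 days after shipping, the "minimized" version breaks one
    # rule or silently changes the other.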


Premature DRY can lead to the wrong abstractions. Sometimes code looks similar but actually isn't.


At my first big corporate job, I got to work on a codebase that was nothing but prematurely DRY'd code, but I didn't know it at the time. As someone who was self-taught, and suffered from imposter syndrome as many of us do/did in that situation, I thought I was missing something huge - until I was talking to a senior developer and these strange design decisions came up, to which he said something like

> Yeah, that was written by <ex-engineer> and he couldn't abstract his way out of a paper bag

I guess the real lessons were the crappy decisions that someone else made along the way.


FWIW I completely agree in Python, Java, TypeScript, and Go. I've seen people just parrot dogma about DRY and SOLID principles, where their DRY'd code is completely not open to extension, etc.

Premature DRY'ing is the same as premature engineering. And lest someone go 'oh, so YAGNI is all you need'... no, sometimes you are going to need it, and it's better to at least make your code easily moldable to 'it' now instead of later. Future potential needs can absolutely drive design decisions.

My whole point is that dogma is dumb. If we had steadfast easy rules that applied in literally every situation, we could just hand off our work to some mechanical turks and the role of software engineer would be redundant. Today, that's not the case, and it's literally our job to balance our wisdom and experience against the current situation. And yes, we will absolutely get it wrong from time to time, just hopefully a lower percentage of occasions as we gain experience.

The only dogma I live by for code is 'boring is usually better', and the only reason I stick by it is that it implicitly admits it isn't a real dogma - the 'usually' concedes that it doesn't apply in all cases.

(Okay, I definitely follow more principles than that, but I don't want to distract from the topic at hand)


It would be better to make a class for languages where DRY is not a best practice, then create classes of languages where it is a best practice or may be a best practice through multiple inheritance. To keep things simple.

:)


My experiences are the same in C++ and Python. C++ in particular can get way out of hand in service of DRY.


Yeah I've had so many problems with understanding and working with other people's code bases when the person was obsessed with DRY.

You wrote that code 4 years ago with tons of abstractions designed so that, some day, someone wouldn't have to repeat themselves... but it's been years and they've never been useful. Meanwhile, I've had to dig through a dozen files to make the change I needed - a change which by all rights should have been entirely contained in a few lines.

My most common reaction to a new codebase is "where the hell does anything actually get done" because of silly over-abstraction which aspires to, one day, save a developer five minutes or three lines of copied code.


These types of posts are .. well thought-out, and usually posted by someone with relevant education. But they are not reviewed documents or journal articles, and you can _tell_ when someone is mixing in a lot of their own educated guessing with the research they've done. Which is the case here.

He probably wasn't intending it to be taken as an authoritative source, but that's how most people will _read_ something like this after running into it on the front page of HN. And most of this is just.. guesswork.


The author is presenting a plausible enough theory, but without any evidence that it is more accurate than the actual historical texts.

As one example, the author goes on and on about the importance of the conduits into London - but here's how actual documents from the time describe them:

"A certain conduit was built in the midst of the City of London, so that the rich and middling persons therein might there have water for preparing their food, and the poor for their drink"

Kind of an important bit of context to leave out!


I first ran into this on persistent servers, where the deployed instance knew branch names GitHub had long forgotten about.. Someone pushed a branch named "bugfix", and `git fetch` started erroring.

You get even _more_ interesting problems if part of your team has case-insensitive file-systems!
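For anyone who hasn't hit this: branch names live in a filesystem-style namespace (often literally files under .git/refs/), so a ref named "bugfix" and one named "bugfix/old-fix" can't coexist - the first needs a file exactly where the second needs a directory. Roughly what the stale-server scenario looks like (branch names hypothetical, error text approximate):

    # The long-lived server still has refs/remotes/origin/bugfix/old-fix
    # from a branch GitHub deleted ages ago; now someone pushes "bugfix":
    $ git fetch
    error: cannot lock ref 'refs/remotes/origin/bugfix': 'refs/remotes/origin/bugfix/old-fix' exists; cannot create 'refs/remotes/origin/bugfix'

    # Dropping the stale remote-tracking refs clears it:
    $ git fetch --prune

(The case-insensitivity variant is similar: "Bugfix" and "bugfix" are distinct refs to git, but collide as loose-ref files on a case-insensitive filesystem.)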


Indeed, if I were running that site, I would now implement the ability to turn on intentional non-randomness for _specific people_, and begin embedding messages in the sequences of comics, or selecting the same two comics 28 times in a row on occasion. Heck, stick the referrer in the session and give everyone coming in from _that blog post_ wildly divergent randomness characteristics :-)
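A playful sketch of how the rigging might work in Ruby (everything here is hypothetical - the site, the session key, the blog's domain):

    # Visitors arriving from the blog post get a fixed RNG seed pinned in
    # their session, so every "random" draw replays the same sequence -
    # e.g. the same two comics, visit after visit.
    def pick_comics(session, referrer, comics)
      if referrer.to_s.include?("that-blog-post.example.com")
        session[:rigged_seed] ||= 1234
        rng = Random.new(session[:rigged_seed])
        Array.new(2) { comics[rng.rand(comics.size)] }
      else
        comics.sample(2) # everyone else keeps honest uniform randomness
      end
    end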


While I don't really _disagree_ with any of this, I want to raise a point from the other direction - it's possible (and for some of us, absolutely necessary) to build software such that holding a large "structure of thoughts and possibilities" in your head is not required.

I find myself largely incapable of doing so, open-plan office or not, and have compensated by adopting development approaches that break problems down in consistent enough ways (into small enough pieces) that the structures I have to pick up and put down are never all that complex. Which is good, because they fall out of my head just all the time. ADHD, a very poor memory for detail, and a role that has me responsible for juggling many tasks concurrently would _destroy_ my productivity if this article were universal truth.

Don't get me wrong, I _hate_ working in an open-plan office. But the impact it has on my output is not because the interruptions affect my flow, it's because the constant social contact stresses the heck out of me >.<


I gained the most from gobase.org, just clicking through professional games. The tool you can review/replay games with lets you try to guess the next move - just let them play the first 10-15 moves and then start guessing. Don't spend a ton of time thinking, just _guess_. Guess over and over, and if you don't guess the move after 5-10 tries, have it tell you, try for a few seconds to understand why that might be a good move, and continue.

You should totally do the tactics and puzzles that you can find (that same site has a bunch), but there's a lot more strategic recognition and pattern-matching in go than chess.

I'm also interested to hear whether better tools have appeared in the last .. Christ, twenty years? I'm old now -.-


It's an awkward space - requiring us to pay them to put our profiles there is the main way to keep the content valuable/accurate.. but while that's fairly typical for hirers, it'd feel scammy to job-seekers.

I think there's room for such a service, but growing it into a viable model would be a real project, and probably not a distraction a business would want to take on except as a pivot.


When science doesn't match your personal observations, that doesn't make the science wrong. It also doesn't make your personal observations wrong - it usually (in my experience) means that you're measuring/observing different things (assuming you're reasonably intelligent, which I am doing). In this case, "cognitive development and well being" are complex concepts that probably do not map exactly (or maybe even grossly) between their definitions in the study and their experiential meanings in your head.

My personal observation is that "screen time" is not the problem, but lack of interaction _is_ - letting the kids be in front of a screen for two hours doesn't cause a problem, but letting that be an excuse not to spend time with them or interact with them (which is when it usually ends up happening in large quantities) does. Essentially - you need to control the other variables to understand what's really going on; excessive screen time is generally a symptom and not a cause.


When it comes to social science and studies involving self-reporting, I don't give them the same weight as I would a study where all the variables are quantifiable and controlled.

And in this case the quantifiable, non-self-reported part is the MRI brain scans they're doing - but what does that data indicate? I'm too ignorant of neuroscience to know. Are all changes in behavior and cognition detectable through an MRI scan? Maybe the average kid who spends 8 hours a day playing video games finds it nearly impossible to read a single page of a Harry Potter book without getting distracted, while a kid averaging 2 hours of screen time has no problem with it, yet the difference isn't reflected in an MRI scan? Did they perform these same scans on some Amish kids as a control?

Point being, sometimes studies like this are hard to swallow when their conclusions go against so much anecdotal observation, particularly when the methodology leaves room for all kinds of other interpretations.


Oh I have no problem with doubting the study (I definitely doubt its conclusions myself, since I don't think "screen time" is a meaningfully monolithic concept to study in the first place, without even getting into the details of the methodology). But there are good ways to doubt and bad ways, and "this study is junk because my kids misbehave more when they watch TV" is far on the wrong end.


The concept of "misbehaving when watching TV" is interesting. I have a clear memory that I would get a mild headache and feel bored if I watched too much TV, and when I look at my children I can tell they are watching too much TV by how "active" their bodies become in front of it - that's the signal that boredom is kicking in, and a good time to turn it off. That usually prevents things from getting worse.

The time it goes bad is when I need the full day to, say, pack for a long trip. In that case TV is a necessity, but the consequences are terrible.


The difference is that the science will have tightly defined terms for "cognitive development" and "well being" that are different from what a parent is measuring when they're looking at agitation, misbehavior, impulsiveness... etc.

My wife and I have seen the same thing with our child. More agitation, more self-satisfying behavior ("forgetting" to do as told and playing instead), more temper tantrums. Kiddo still gets screen time, but it's limited.

Bad behavior, mental "fuzziness," and impulsiveness correlate much more strongly with diet than with screen time. Too much processed food, food with preservatives, and food with refined wheat flour... yikes. Night and day difference in some cases. (Turns out a lot of kids have a "wheat allergy" but it isn't just that.)


No, this study is actually garbage. What it considers screen time is way too broad to be useful.

People aren't worried about screens in general, but social media in particular as well as other smartphone applications that are engineered to keep you engaged.

See Jonathan Haidt's literature review here for what an actually critical reading of the literature would lead you to reasonably conclude: https://jonathanhaidt.substack.com/p/social-media-mental-ill...


I don't disagree with you a bit! Science should be doubted in the correct ways, and for appropriate reasons, that's one of the main things I come to HN comments for. My issue with the OP was with his reasons - anecdotes about your children and personal opinions about the world might _trigger_ doubts, but they aren't good reasons to base those doubts on.


I've certainly been guilty of similar acts of pedantry so you'll hear no criticism from me on that front!


And of course, talking about all of this as if 'screen time' is all the same is _insane_. Anyone who thinks watching youtubers screech at each other for hours is equivalent (for the purposes of mental development) to watching an adapted Broadway musical has not put any thought into the matter - and that's entirely setting aside interactive vs. passive entertainment being lumped together..

But hopefully the strategy of "just let the kids blend it all together and hope it washes out in the statistics" is a good enough method -.-


Not "wrong", dubious.

It is quite appropriate to call such declarations dubious.

Your explanation is fine; however, this 'no evidence' refrain that has been used to mislead the public has gone on long enough. At this point, when that phrase appears in print, the assumption may as well be the opposite.

Every sensible person can see that children are not developing as they have in the past, and the clear major difference is the full-attention-grabbing effect of media. But no, no, it isn't the 'screen time'


> Every sensible person can see that children are not developing as they have in the past

Can we? On what evidence?

The "no evidence refrain" happens because people keep advancing claims without even trying to back them up.


If you'd like to analyze their approaches in order to call the study 'dubious', I won't argue with you (it is dubious; the approaches are not strong ones). But making that assertion solely on the grounds that your personal observations of your own children disagree puts you in the same category as my wife's friend who rejects vaccines because her mom took one and still got covid. That's not how individual observations interact with science.

