anthonybsd's comments

>Pretty much every day, I read something to the effect of "lawyers are clueless when it comes to tech lol".

<chuckle> Read the article, seems reasonable.

Connect with Neil on LinkedIn -> 404 page.

All righty then.


In this case, that's because he got rid of his LinkedIn account after that article footer was written (I follow Neil on Mastodon; he discussed it there).

(edited for missing word!)


… and constructing the article footer in a way that lets it go stale would seem to imply a somewhat limited understanding of technology.


Do you take the time to make your blog footers dynamically check if your social media profiles still exist? I sure don't.


If the blog footer is intended to contain links to identity, I'd make that whole footer dynamic and then I'd just keep a single source for that.

Alternatively, arguably better: Keep a master domain for each identity and update social media accounts for that identity on a page in the master domain.
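
For illustration, a minimal sketch of that single-source idea in Python (identity.json and render_footer are hypothetical names; this assumes a static-site build step):

  import json

  def render_footer(path="identity.json"):
      # identity.json maps display names to profile URLs, e.g.
      # {"Mastodon": "https://example.social/@you", "Blog": "https://example.com"}
      with open(path) as f:
          links = json.load(f)
      items = " | ".join(f'<a href="{url}">{name}</a>'
                         for name, url in links.items())
      return f"<footer>{items}</footer>"

Change the JSON once and every page that rebuilds picks up the new links.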


Of course not all lawyers are the same. There are some who really do get tech, or rather the niche they specialize in; the days of being a polymath are over, and the tech world is far too large and complex these days to know it all.

That said, I thought exactly the same thing reading the article. If he thinks he's already one of the more tech-savvy ones, that only proves the point of the article he's criticizing.


Why does everything these days revolve around ChatGPT (etc.)? You don't need LLMs to refute Chomsky's language models. Modern linguistics has pretty much rejected [1] his theories on the basis of evidence.

[1] https://www.scientificamerican.com/article/evidence-rebuts-c...


Thanks for posting; finally some support for his supposed debunking! Interesting reading for sure.

  That work fails to support Chomsky’s assertions. The research suggests a radically different view, in which learning of a child’s first language does not rely on an innate grammar module.

  Instead the new research shows that young children use various types of thinking that may not be specific to language at all—such as the ability to classify the world into categories (people or objects, for instance) and to understand the relations among things. 

  These capabilities, coupled with a unique human ability to grasp what others intend to communicate, allow language to happen. 

The fact that very smart people think this refutes Chomsky makes me quite sad. They basically restated the UG theory in the last sentence, as proof that it's wrong…

Chomsky has been saying for literal decades that language is likely a corollary to the basic reasoning skills that set humans apart, but people still think UG means “kids are born knowing what a noun is” :(


I'm reminded of a 400-level linguistics class I took in undergrad. We had just read Chomsky's Remarks on Nominalization, and one of my classmates remarked, "I don't think this Chomsky guy understands X-bar theory." The joke being that Chomsky was the major developer of X-bar theory; we were just reading an early work of his.

This also reminds me of evolution. Some people looked at discoveries in epigenetics and declared that it disproved Darwinian evolution in favor of Lamarckian evolution.

Sure, Darwin's theory of natural selection combined with random variation at the point of reproduction does not explain 100% of evolution, but it still covers most of it.


Certainly there has been a shift of many applied linguistics researchers away from generative linguistics, but it is still quite common among university linguistics departments and continues to be actively researched (source: took linguistics courses in college a couple of years ago).


Reddit mods are all volunteers....


This is not atypical. Unfortunately this is true for both Coinbase and Gemini. I had Coinbase support tickets that went 8 months without a response. I had Gemini support tickets that were ghosted for a few months and then closed. These companies operate in a different dimension of customer care.

As for Ukrainians: have you considered using PayPal? Since last year it's widely supported in Ukraine and opening an account is fairly easy.


The Palm Pilot was a huge success, both from a sales perspective and from a "getting consumers comfortable with touch screens" perspective. The author of the article makes a very poor choice of metaphors and examples, which really obscures his point.


Missing my favorite apple, Ginger Gold :(


Came to say the same ... Bought a bag today around noon and they are half gone!


You are lucky. Where I am (northern NJ), Ginger Golds have been gone from stores for a few weeks now. Highly seasonal :(


Central PA .... Shouldn't be too different from your seasons.


Late 90s to early 2000s for sure. Tons and tons of backends of various kinds were written in it. If it was Windows, it was typically running on COM/DCOM; on UNIX it was usually CORBA. Sure, the UI was typically written in something else: the dawn of the Web with CGI, and if we are talking desktop, people would use something like Visual Basic, PowerBuilder, or Delphi. But C++ pretty much dominated that space until Java got decent enough and fast enough.


I disagree. The fastest human reaction time is somewhere around 100ms (https://humanbenchmark.com/tests/reactiontime). I've just measured, and mine is 230ms. As such, a 10ms lag wouldn't make any difference. I used a Shadow Tech PC for a while during the pandemic. With good upstream and downstream bandwidth it was a fairly decent experience, even for playing something like competitive Overwatch. I noticed the difference from a normal gaming PC due to other factors (sound quality, etc.). Standard accessories worked seamlessly over USB-over-UDP.


You can easily observe how significant latency is in videos like this: https://youtu.be/vOvQCPLkPt4?t=80 (Microsoft Research presenting its ultra low latency displays for touch interactions). Many mobile games have you drag and drop things, so it's not like it's just first person shooters that suffer from latency.

You're a layperson, you couldn't have known this: you're using words with a very specific meaning in streaming (like latency) and comparing them to human reaction times, which measure something else entirely. You kind of reasoned from first principles in a very Paulgrahamarian way, and it led you deeply astray. That happens. And you're not the only person doing this; this is a comment section full of people who play games and parrot stuff they've seen on YouTube without a concrete grasp of what they're even talking about, so when it's laypeople shouting at laypeople, it's understandable that it's just a bunch of blah.

One of the reasons I hate HN and write in throwaways nowadays is that the comments section is a better example of Knoll's law than actual journalism.


Thanks for posting this. This comment section has been particularly frustrating to read, since it's a mirror of what I've seen in the real world. There are teams at big tech companies making TERRIBLE decisions about the future of gaming because they don't actually understand how latency affects games, and they aren't hardcore gamers so they can't feel the effects themselves.

Even the ~50ms of total latency you get from locally streaming over a 1ms wired network (from buffering/inappropriate firmware design) ruins whole genres of high-level gameplay. You miss tricky shots in FPS games, you can't confirm or link combos in fighting games, etc.
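
Rough arithmetic behind that (a sketch with assumed numbers, not measurements):

  FRAME_MS = 1000 / 60        # ~16.7 ms per frame at 60 fps
  link_window = 2 * FRAME_MS  # a tight 2-frame combo link: ~33 ms
  streaming_overhead = 50     # the figure quoted above

  # The added delay is larger than the entire timing window the player
  # is trying to hit, so all timing has to be re-learned, and any jitter
  # on top makes the window effectively unhittable.
  print(streaming_overhead > link_window)  # True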


Wow. Condescending much?


A bit condescending, yes, but he gave a really good example of how noticeable input lag is.


Appropriate in response to the breathtaking arrogance-in-ignorance of what it was responding to.


This is the most factually true comment that I've ever downvoted.


That comment should be sent out as a blanket text message to everyone who commented on this post about latency IMO

The word needs to get out


Human reaction times have nothing to do with perceived input latency. There is a latency budget, different for every individual, that determines whether or not something will be an acceptable experience. This budget is divided among everything in the signal chain: the input devices, the computer/console, the monitor/TV, and any other processors along the signal path. Streaming games adds additional latency to that chain.

Generously, if your target is 60fps and you have a round-trip latency to their server of 8ms, that's half a frame of added latency. On its own it's almost certainly imperceptible to most people, but it's not working in a vacuum, and most people don't live right next to the datacenter. It can very easily go over the threshold of what is acceptable to most people.
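
To make that budget concrete, a back-of-the-envelope sketch (every figure here is an assumed typical value, not a measurement):

  FRAME_MS = 1000 / 60  # ~16.7 ms per frame at 60 fps

  budget_ms = {
      "input device": 2.0,
      "game simulation + render": FRAME_MS,
      "display processing + scanout": 10.0,
      "stream encode + decode": 10.0,
      "network round trip": 8.0,  # the "half a frame" case above
  }

  total = sum(budget_ms.values())
  print(f"{total:.1f} ms total = {total / FRAME_MS:.1f} frames at 60 fps")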


Humans can detect 10ms of latency easily. The problem is more than just reacting slightly later to events; it's also how quickly the game/system responds to your inputs, because it's a round-trip interaction. That's usually where the latency becomes more noticeable to most people. People can generally adjust to consistent latency, but any added latency is pretty noticeable once you get used to looking for it.

Also, 10ms ends up being close to the average input latency added by a single extra frame at 60fps, and you just have to look at the efforts that have gone into Super Smash Bros. Melee (especially in netplay) to see how far people will go for a single frame.


Practiced musicians begin to feel timing discrepancies at latencies as low as 10ms. I learned this when investigating whether bands could practice live over the internet (spoiler: most of them can't). It turns out that due to the limitations of physics, even absolutely optimal connections still have enough lag/jitter to ruin it for professional instrumentalists.
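
A rough way to put numbers on that (a sketch; sound in air travels about 343 m/s, so one-way network latency maps onto an equivalent distance between players):

  SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

  for one_way_ms in (5, 10, 30):
      meters = SPEED_OF_SOUND * one_way_ms / 1000
      print(f"{one_way_ms} ms one-way ~= standing {meters:.1f} m apart")

10ms already feels like playing from across a large room, and jitter makes it considerably worse.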


The bigger issue is jitter. People can compensate for consistent delay (e.g. by leading shots in an FPS game). But when the delay is inconsistent and varies quickly, it becomes much more difficult to anticipate movements and execute time-sensitive maneuvers.


You're definitely right for most people, but even 10-20ms is noticeable by experienced players and can be very impactful at pro-level -- e.g. some high-level LoL players feel 35ms ping is unacceptably high for competitive play: https://afkgaming.com/esports/news/ls-talks-about-why-35-pin... (though it probably doesn't matter much for Stadia's use cases)


Doubtful. Pro gamers are known primadonnas. If anyone ever tested them with synthetic lag added in a double-blind study, I suspect they wouldn't identify it more accurately than random chance would dictate. Sorry, but the sheer speed of electrical signals/chemicals traveling through the body puts a constraint on that.


It's not even just "pro gamers". The most popular fighting game in the world (Smash Bros U.) is enjoyed by casual players and pros alike, and has an entire mechanic based on "two-framing" for edge guarding.

One absolutely does not need to be a pro to pull it off, and the whole interaction window for that mechanic comes down to being able to react within ~32ms (roughly 1/30th of a second) to edge guard an opponent. It is drastically harder to pull off in online play.


You can literally see the difference: https://www.youtube.com/watch?v=vOvQCPLkPt4&t=90s


Unless cloud gaming companies intend to put servers in every single city across the globe, it's not going to work. Even in Boston with good fiber internet, streamed games have too much lag, and the compression artifacts are horrible.

When there is fast movement the compression is much more noticeable, worse than the lag. Many reviewers doing graphical comparisons do it with static images. It's quite common for the whole screen to become a blur of compressed, pixelated blocks at the slightest network hiccup.

Also, you are misunderstanding what a "reaction time of 100ms" means. It does not mean that any event shorter than 100ms is imperceptible; it absolutely does not. The sound of a clap lasts 22ms and you are able to hear even shorter sounds. You can see light pulses of arbitrarily short length, so long as they are bright enough.

What a 100ms reaction time means is that you can't react to a given stimulus in less than that. Here's the important distinction: you don't react to lag, you perceive it.

To experience this for yourself, go to this lag simulator webpage [1] and experiment with various lag times. You will quite easily be able to feel the difference between 0ms, 100ms, and 200ms of added latency. Keep in mind this is on top of whatever latency the OS layers and browser sandboxing introduce.

1. https://www.skytopia.com/stuff/lag.html
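
For the curious, here's a minimal sketch of how such a demo works, in Python/tkinter (a toy under my own assumptions, not the linked page's code): buffer the cursor positions and replay them LAG_MS later.

  import time
  import tkinter as tk
  from collections import deque

  LAG_MS = 100  # added latency to feel; try 0, 100, 200

  root = tk.Tk()
  canvas = tk.Canvas(root, width=600, height=400, bg="white")
  canvas.pack()
  dot = canvas.create_oval(0, 0, 12, 12, fill="red")
  samples = deque()  # (timestamp, x, y) of the real cursor

  def on_move(event):
      samples.append((time.monotonic(), event.x, event.y))

  def tick():
      cutoff = time.monotonic() - LAG_MS / 1000.0
      pos = None
      # replay the newest sample that is at least LAG_MS old
      while samples and samples[0][0] <= cutoff:
          pos = samples.popleft()
      if pos:
          _, x, y = pos
          canvas.coords(dot, x - 6, y - 6, x + 6, y + 6)
      root.after(5, tick)

  canvas.bind("<Motion>", on_move)
  tick()
  root.mainloop()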


Moore's law is our friend and in the future seamless cloud gaming will be possible.


You cannot reduce streaming latency with smaller transistors, nor with more transistor density. Or maybe there's a new interpretation of Moore's law I'm not aware of that makes the speed of light in the connectivity medium faster?
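
The physics floor is easy to put numbers on (a sketch; light in fiber travels at roughly two-thirds of c, about 200 km per millisecond):

  C_FIBER_KM_PER_MS = 200.0  # ~2/3 the speed of light in a vacuum

  for distance_km in (50, 500, 2000):
      rtt_ms = 2 * distance_km / C_FIBER_KM_PER_MS
      print(f"{distance_km:>5} km to the datacenter: >= {rtt_ms:.1f} ms RTT")

No transistor shrink changes those numbers; only moving the server closer does.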


This is confused. Reaction time is irrelevant; you can still notice very short delays between two events. The fundamental issue is that when you make an input that corresponds to an action in a game, you expect that action to happen near-immediately, and anything else feels terrible.


I tried Shadow and, well, you could really tell they host in a budget datacenter with how often there was stutter or missing keyframes (they host with OVH in Europe). I never had such issues with GeForce Now.

Also, I found it kind of scummy how they will not actually tell you what hardware you'll be getting beyond "4c/8t". Mine turned out to be a low-clocked Haswell, a CPU so outdated that Steam downloads were CPU throttled. I used it for about an afternoon and then immediately cancelled.


Maybe not 10ms, but anything above 50ms is known and proven to degrade pro players' performance in competitive FPS games.


It matters for MMOs. If there are two pro gamers, A and B, both with 100ms reaction time, but gamer A has 10ms ping while gamer B has 30ms ping, gamer A has a consistent advantage. This is not strictly a Stadia problem, but it may be exacerbated if streaming the display adds latency on a slower line.


Pro gamers can definitely feel +/- 10ms of lag. It makes a difference at that level.


Agree. I used Shadow for a bit something like 2 years ago and it was pretty seamless. Sure, not as good as having a PC but it was pretty darn close.


A test on a mouse click? Really? Finger travel time is going to dwarf and eat up whatever reaction time you have.


100ms times are from cheaters; 200ms is probably closer to the absolute lower bound of human reaction time.


I don't know about that. I'm off form and this was my third try after going wired.

https://imgur.com/a/8inHYSf


Wow, this site is cool. I consistently get ~188ms; best was 176ms. I wonder what some of the esports gamers get!

After doing it several times it let me save the score:

Reaction Time: 181ms

74.46th percentile


Top level players don't tend to do better on these than slightly above average, because reaction time is something you train for a specific task.


>From where I'm standing, productivity doesn't look so great. It's an open secret in my remote-heavy, North American west-coast social circle that everyone knocks off at 1-2pm when the east coast people are definitely logged off. Probably the east coast people are doing the same thing, but in the morning. Lots of happily phoning it in.

That hasn't been my experience at all. I work for a multinational company with teams spread across the globe (daily meetings with the West Coast, London, and Mumbai), and productivity has been fantastic in the past 2 years across all R&D teams (can't speak for others). Are you sure this isn't just an issue with your internal culture?


Can confirm like 80% of my Google friends are 3 hour/day types. But then again, it was only 5-6 hours/day before COVID anyway.


I mean, it all depends on what they do. 3 hours a day of coding is a huge amount of work; 3 hours a day of Zoom meetings is meh.


They are living The American Dream that Homer Simpson made us all aspire to. It may not last but at least they were fortunate enough to experience it for now. (I wanna be there :/ )


Homer was kind of a nuclear reactor SRE if you think about it.


He was ahead of the curve in so many ways including quiet quitting.

[1]: https://www.youtube.com/watch?v=jYXzHjbfMDk


has anyone moved US nuclear infrastructure to kubernetes yet?


To be sure, my (observationally biased) sample set includes ~3x more "friends working in tech for multinationals" than coworkers.


I refuse to watch propaganda garbage made by a Russian fascist who is enthusiastically cheerleading the genocide of Ukrainians (1). If you are interested in Anna Karenina, do yourself a favor and just read Tolstoy. It's a fantastic book by one of my favorite writers (although War and Peace appeals more to me).

1. https://www.dailymail.co.uk/news/article-10785295/Russian-TV...


It behooves one to know one's opponent. The mere act of viewing the media does not "support" the filmmaker or the regime, assuming you're not paying for it.

