I left GitHub earlier this year after a decade. I’ve seen mockups, hack week projects, and proofs of concept of this for the last five years (at least). A lot of engineers there knew this was the future PRs needed, but GitHub at this point seems organisationally incapable of delivering these sorts of large improvements (Microsoft is perhaps partly, but definitely not wholly, to blame for this). Instead, they are midway through porting Rails views to use React, keeping most pages looking identical while introducing bugs and regressing previous usability improvements on a weekly basis. A real shame.
> Instead, they are midway through porting Rails views to use React, keeping most pages looking identical while introducing bugs and regressing previous usability improvements on a weekly basis. A real shame
I predicted this the moment I saw the React dev tools icon go blue when browsing GitHub. My comment (which I can’t find right now; I’m on my phone) was along the lines of them going the “Reddit way”: a far worse experience for the end user, just for the happiness of the React fanboys working there.
I already can’t stand the code browsing UI, which randomly closes or opens a sidebar as you navigate back, or the search input, which doesn’t even look good to me. What a total shame they’re messing it up so badly. GitHub had one of the best UIs, in my opinion, and they’re ruining it for the sake of keeping some devs happy.
This would probably explain something else too: a few years ago I had a call with some GH folks about the idea of making the commit message applied to a squash merge part of the review itself.
Apparently this was very common feedback, and I know that at least five other people who were maintaining large-scale open source at the time gave it that week too. It’s never gone anywhere, though, and as a result I have to disable all merge methods except “rebase and merge” for every repo…
More likely to be just standard Mobile IP: https://en.wikipedia.org/wiki/Mobile_IP. Fairly standard stuff, though it can cause some false positives around travel. I’ve seen people get freaked out by things like “This person just logged in from their home state and then less than an hour later logged in from France!” when it was just Mobile IP treating their phone as still in the US while they were in France on a trip, while their laptop, connected over normal internet, was seen as coming from France.
Google asks your kid, and they can pick either way. You can tell them, “Hey, this is a device I bought for you, using a cell phone service I pay for, so either re-enroll in supervision or I’m taking my device back.” A little harsh, but then you still get roughly the same level of control as before.
Looks like you're right. I don't remember it working this way in late 2021, so I wonder if the implementation has changed since then, but it could also be poor attention to detail on my part.
I would also not allow it. I'm saying the problem is that core Go developers say "Go doesn't have exceptions", which is manifestly false, and this causes people to not write exception-safe code.
But you and I aside, I'm saying there's a lot of broken code out there because of this doesn't-but-actually-does misinformation.
And it's very annoying that you have to tell people to do:
    var i int
    func() {
        mu.Lock()
        defer mu.Unlock()
        i = foo[bar]
    }()
Clean code, that is not. (Even if you simplify it by having the lambda return the int.)
This is maybe the biggest thing scaring me away from Go. This half-assed "we don't use exceptions, so you shouldn't have to care about them, except when we do, so you still must write proper defers, which are doubly verbose because nobody considered this a primary use case"... In any other language, a mutex used outside a using/with/try-with-resources/RAII block would be instantly flagged in code review or by linter tools. In many languages it's even hard to write incorrectly, because entering the context is the only way to acquire the lock.
Now this middle ground leaves you writing triply verbose if err != nil on every third line of your code while still not being safe from panics-that-shouldn't-have-been-panics.
As the parent says, the only way panics can ever work is if the top level never catches and recovers from them. I'm no expert in Go, but in such a perfect world, wouldn't that mean defer should hardly ever be needed at all, not even for locks? Only for truly external resources? But now, with popular web servers doing such recovery, the entire ecosystem got polluted and everything needs to handle it?
Yes it does, which is why recovering from a panic can be done in a deferred function. The Go runtime maintains enough metadata to track which deferred functions need to be run while unwinding the stack.
I looked it up, it wasn't (in Denmark). He said he always does it that way. I printed out and gave him a copy of the relevant regulation. I'm sure it made all the difference...
Fortinet is a cybersecurity company that makes a product called FortiGate, which does SSL MITM to decrypt and monitor/filter your traffic. You probably want to poke your IT team, or get very concerned about your ISP...
Where I work, IT owns laptops, and is not a part of engineering. Getting things installed on new laptops is apparently not possible. On the other hand, a cloud instance (either this or something similar) is owned by engineering, so it's much easier to control the base image and such.