I'm a software engineer who does not practice "tech veganism." I use an iPhone, iMessage, Gmail, Google, Google Docs, etc. I'm interested in understanding why people who do practice this feel so strongly that using these products and services is bad. So what if Google reads your emails? Why is it bad if they correlate those and your search results to offer a more personalized service -- even more personalized advertisements -- for things you might actually be interested in buying some day? Is it fear of public embarrassment? Of being blackmailed? Of being discovered doing less than legal things? Have you been slighted by the company before so it's a matter of never doing business with them again out of principle?
No need to worry about explaining it kindly and plainly to me :)
You can choose to depend on them if you wish, but I choose to be responsible for my own data in any way I can. I believe that it is the right thing to do, and I must do it to the best of my ability. It's not a fear thing. It's a constant annoyance thing. I just don't want to play their one-sided games.
> I use an iPhone, iMessage, Gmail, Google, Google Docs, etc. I'm interested in understanding why people who do practice this feel so strongly that using these products and services is bad.
You mean services from American conglomerates? Why don't you use Huawei, Yandex, WeChat, etc.?
For the user, almost none. The difference is for the devs. If I want to make a cross-platform game now, I have to worry about which graphics API I will use (OpenGL, Direct3D, Vulkan, etc.), which sound API I will use, which display and input abstractions I will use, and so on. If I could have a unified API and runtime abstracting all of that for me already, making fully cross-platform games would be easier, assuming I don't want to use Unity. But that is of a much bigger scope than just WebAssembly. We already have that power, but we are chained to the browser. The discussion is about freeing that runtime from the scope of the browser. Decouple it.
The final dream is a unified runtime across desktop, consoles (PS4 and Switch), and the browser, with a unified API (where sensible). One can dream.
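The per-backend plumbing described above can be sketched roughly like this (illustrative Python; the class and function names are made up for this example, and a real engine would cover audio, input, and display the same way):

```python
from abc import ABC, abstractmethod

class Renderer(ABC):
    """One interface the game code targets; each platform supplies a backend."""
    @abstractmethod
    def draw_triangle(self, vertices: list) -> str: ...

class GLRenderer(Renderer):
    def draw_triangle(self, vertices):
        return f"OpenGL draw: {len(vertices)} vertices"

class D3DRenderer(Renderer):
    def draw_triangle(self, vertices):
        return f"Direct3D draw: {len(vertices)} vertices"

def pick_renderer(platform: str) -> Renderer:
    # Every cross-platform game ends up writing a switch like this,
    # once per subsystem; a unified runtime would own it instead.
    return GLRenderer() if platform == "linux" else D3DRenderer()
```

Game code then only ever talks to `Renderer`, which is exactly the layer a browser-free WebAssembly runtime could standardize.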
One does not simply develop cross-browser. Different browsers have different issues with different graphics cards. As soon as you're not drawing a textured triangle anymore, things get hairy. Same for the audio API. And don't get me started on the sad state of WebRTC and websockets.
Yes, it's just not viable to develop for browsers unless you have the maintenance budget for a moving target. I decided on that after living through the Flash era and trying HTML5 for a little while. The browser is backwards compatible only for certain kinds of applications.
That's the thing: you don't escape those problems in native environments. You still have to deal with different GFX drivers doing different crap on different platforms, cards that claim to support a GL extension but don't, etc. And that's just OpenGL.
With the browser, you're just adding another abstraction layer that has even more complexity. WebGL is far from a decent spec. And we're talking just graphics; as I mentioned before, audio and networking have tons of unstable and untested code.
Multimedia programming is already hard on native environments. I don't see the web being a friendly environment anytime soon. There's no silver bullet, "code once, run anywhere" for games and multimedia.
The first thing I do on any new Android phone I get is enable developer options and disable all animations.
I've had many friends compliment my utterly midrange phone for feeling so snappy; it's because the stupid animations that only serve to slow down UI transitions are off.
Turning Low Power Mode on or off makes no difference here; my phone runs at 50% speed regardless.
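For anyone who'd rather script this than tap through Developer Options on every new phone, the same three animation scales can be set over adb (assumes a device with USB debugging enabled and adb installed):

```shell
# 0 disables the animation entirely; 1 is the stock speed
adb shell settings put global window_animation_scale 0
adb shell settings put global transition_animation_scale 0
adb shell settings put global animator_duration_scale 0
```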
Based on reading this it seems like you haven't worked on a large application where there are lots of modules and libraries developed by independent teams all coming together to form a final product (shipping with CD or not).
I work on a platform team developing many different libraries used by many different teams throughout the company. If we didn't leverage semver (or some versioning scheme that at least differentiates between breaking and non-breaking changes), I don't know how we would do it. Either a) we couldn't release 'patch' updates to a particular library that consumers pick up for free/automatically without changing anything, or b) we couldn't release breaking changes without automatically breaking consumers' builds.
Semver may not be useful for the final build or end product that you end up shipping. But it is a very useful tool for all the parts (dependencies) that make up that final product.
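The breaking/non-breaking distinction above is mechanical enough to sketch in a few lines (a minimal illustration, ignoring pre-release precedence rules; the function names are mine, not from any library):

```python
def parse(version: str) -> tuple:
    """Split 'MAJOR.MINOR.PATCH' into an int tuple; pre-release tags are dropped."""
    return tuple(int(part) for part in version.split("-")[0].split("."))

def is_breaking(current: str, candidate: str) -> bool:
    """Under semver, a major-version bump signals a breaking change."""
    return parse(candidate)[0] != parse(current)[0]

def is_safe_upgrade(current: str, candidate: str) -> bool:
    """Same major and strictly newer: consumers can take it automatically."""
    return not is_breaking(current, candidate) and parse(candidate) > parse(current)

assert is_safe_upgrade("1.4.2", "1.4.3")      # patch release: auto-update is fine
assert not is_safe_upgrade("1.4.2", "2.0.0")  # major bump: consumers must opt in
```

This is essentially what a caret range like `^1.4.2` in a package manager encodes: take anything newer within major version 1, never 2.x automatically.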
I have Bose QC35s, and they can be simultaneously connected (not just paired) to two devices. They can be paired to more, and you can cycle through paired devices with a switch on the headphones, or use the iPhone app to select which two devices you want connected.
I almost always have both my MacBook Pro and iPhone connected. The switch from listening to music on the laptop to the iPhone (such as when I leave the office and get on the bus) is seamless: simply stop playing music on the laptop and start playing it on the phone. I highly recommend them.
Your hands don't have to move around as much to select a toolbar item and you don't have to move your hands up to the screen like a surface. I think the touch bar will be great for productivity.
The only part that's similar is the key faces can change. They're still physical keys that press individually. It's not multi touch, a key can't suddenly become a slider.
The issue is that the MBP line previously had zero of these ports and now they have zero of anything else. That means we have to go buy adaptors so that we can use our hard drives, power adaptors, iPhone/iPad charge cables, and other peripherals.
It's nice that these aren't proprietary cables, but I'll have to spend over $100 on adaptors, or wait two years until there are cheaper alternatives.
Lastly, there may be 4 ports in total, but since 1 is used for charging, there are effectively 3. Not a deal-breaker going from 4 to 3, but definitely a problem going from 2 to 1 on the low-end MBP.
> Lastly, there may be 4 ports in total, but since 1 is used for charging
Only if you're using it exclusively for charging. They literally showed it used with a display that gets video out over the same cord that powers the laptop.
> Four standard (non-proprietary) multi-use ports is a complaint?
Well, they're incompatible with almost everything Apple makes. You can't even plug in an iPhone out of the box. You'll need a special adapter for every device you have (monitor, storage, phone, SD reader, etc.).
I hadn't even thought that far yet. All of my Apple cables would need Apple adapters. Every future Apple cable is going to need an adapter back to USB 2.0... facepalm
Kinda reminiscent of the 12-year-old Nintendo DS display. And as with that, I suspect developers will be constantly trying to find a purpose for it, never to much avail beyond some hotkeys that only hunt-and-peck typists could appreciate. Remember the über-lusted-after, über-expensive Optimus keyboards?
I'm surprised you're the only person here who has so far mentioned Art Lebedev/Optimus.
I have an Optimus mini 3 and an Optimus keyboard.
The former I never really got along with, as the OLED drivers made a loud 18 kHz whine. The latter, however, I still use for Photoshop, gaming, coding, etc. Ain't got no poxy touch strip; my entire keyboard is a technicolor discotheque.
That ribbon was updated dynamically depending on the application you were using? Did it allow you to re-arrange the commands on it? Did it support scrubbing/swiping as a method of input?
So did I, but it was simply something that could have used physical keys. Was it dynamic based on the application you were using, or configurable in any way?
> No innovative features? What do you call a ribbon display that no one has ever done before?
A definite step back for the sake of introducing a gimmick, whose consequences have not been given enough thought.
They could have tried buying a Lenovo Carbon Gen.2 and using it for half a day, and realized what a stupid idea it is.
I actually had one of those laptops for a little while - issued by my workplace.
It came with the same concept - a touch-sensitive strip, with (limited) display capabilities and which could change its function depending on the context.
Even if the touchstrip hadn't been an utter piece of junk (which it was - hello lack of feedback and failing to detect extremely deliberate touches), I wouldn't have hated it any less. I don't want to have to constantly look away from the screen when I'm working. I'll avoid mentioning all the other problems of the Lenovo touchstrip (or the rest of the keyboard, which was an utter abomination) because hopefully Apple gets those right - but it doesn't really matter. Give me my ESC key back, and stop breaking usability.
For all the boneheadedness of the move, Lenovo at least had the presence of mind to realize that ESC is really off limits, so they moved it to the row below, at the expense of the ` key. That isn't an optimal solution, but it shows some modicum of reasoning about the usability impact of the gimmick. Apple doesn't seem to care.
I gave that monstrosity back shortly after, and am now a very happy user of a T450s, which even has an ethernet port!!! Lenovo soon retired the Gen 2 and replaced it with a Gen 3 that has an absolutely normal keyboard, issued an apology, and probably fired the idiot who suggested that horrible usability compromise. I'm afraid Apple may have hired that idiot.
I now may need to have the Courage to spend my own money on a non-Apple laptop after so many years.
At my university it was a 3rd or 4th year course (depending on how quickly you were able to knock out the prereqs), and I think that was a great approach. At that point you've used programming languages enough that creating your own interpreter is a fascinating experience.
As someone who works at Workiva, I completely agree.
The post comes off a little more abrasive than I'd like. I think writing it as an "email" to a hypothetical new team member makes it worse.
The point is: we take our code reviews seriously. We may point out things that seem silly or nitpicky, question your approach, and make many suggestions for improvement, but at the end of the day it is nothing personal. It is for the common goal of high code quality.
Code review is typically how I structure my interviews.
I try not to ask arbitrary comp-sci questions, but instead provide the candidate with a block of code and ask them to provide a comprehensive code review (usually through a github pr).
Afterwards I perform my own review and highlight any differences in style or methodology. I'm not really looking for people who have the exact same outlook as me; I'm just looking for people who have some kind of standard they adhere to.