
Take pure caffeine pills. They're easy and cheap to find in any drugstore, on Amazon, or at any specialized online shop. I stopped drinking roasted coffee because I got concerned about acrylamide, and I'm doing much better now on just caffeine pills.


Also, with just caffeine you no longer have the usual intestinal discomfort that coffee causes.


Of course. Green means the service is working for me. Red means the service is not working for me, and clearly not for anybody else either. Orange means it doesn't work for me, but you might have better luck.


Several services haven't been working for me for 20 minutes and the dashboard hasn't been updated. Status dashboards like this are effectively useless.


And the question is... do these "Apple certified server-side network appliances" run on macOS or Linux?


Obviously Linux, since no one uses Macs server side. Even Apple doesn't use Macs for its own online services.


Almost no one.

For example, imgix uses Macs in its data center for image processing.

https://photos.imgix.com/racking-mac-pros

I remember reading somewhere that the performance they got with Core Image was worth the extra hardware cost.


You have to imagine that the design work done for that was based on an assumption that Apple would stick with the trashcan form factor long term, and that the first gen unit would at least get regular bumps to newer CPU and GPU options.

It must have been a heartbreaker to be sitting on all that infrastructure in June 2019, when the cheese grater was announced and the newest trashcans were still selling for full price with a six-year-old processor.

(Here's a comment from 2015 which expresses these concerns in the future tense: https://news.ycombinator.com/item?id=9500301)


Apple gathered a few people from the press to tell them the trashcan wasn't working out in April 2017, and that a redesign was coming. Everyone who cared about the Mac Pro knew then.


The new design fits in a standard rack, so I don't think it's that big of a deal.


Sure, but still a huge sunk cost to have designed and fabbed a bunch of the previous ones.


And? Technology changes over time - not just with Macs.


Regular server racks have not, though. 1U has been 1U for a very long time... Didn't matter if it was an IBM server, Dell, HP, or custom built. This is why we have standards.

These poor saps bet BIG on a non-standard, and seem to have discovered why we like standards after all.


What are they using to run WebObjects then? AFAIK, iTunes and the App Store still run on WO.


The Java implementation of WebObjects runs on Linux.

https://wiki.wocommunity.org/display/documentation/Deploying...


RHEL


I think the better question is whether they run Linux or some BSD.


Exactly, since Netflix is using highly optimized FreeBSD servers for video streaming[1]:

> In the end they are now effectively at 200Gb/s encrypted video streaming from FreeBSD per server.

[1] https://www.phoronix.com/scan.php?page=news_item&px=Netflix-...


200Gbps per _socket_.


I used "nmap -A" on Apple servers that my iPhone downloaded the last OS update from. They were all Linux servers.


They used Akamai infrastructure in the past.


They bought a bunch of Akamai early on


Probably Linux, but don’t forget that AirPorts ran NetBSD, so that’s something else they have internal experience with.


Last I heard most of Apple still runs on Solaris.


I heard Solaris for email, calendaring, etc.

RHEL for WebObjects and other servers.


Why is this an obvious question about an edge cache?


Because some of us hope that Apple might be using high-performance commodity hardware with OS X, and maybe they’ll let us do that one day too, so we don’t have to spend $6,000 on a Mac tower.

Alas, it is but a dream.


That’s irrelevant, though. I run OS X on KVM; people run Hackintoshes. There’s no technical limitation here, and there never has been. It’s a business decision.

Apple obviously isn’t beholden to the licensing terms that they release OS X to their consumers under.

If they want to run OS X on commodity hardware, they can; moreover, it doesn’t change their outward positioning at all.


If Apple uses Apple hardware to host this they pay the manufacturing cost, not the retail price you pay.

It probably wouldn’t look good if they hosted this on third party machines running Mac OS.


You can benchmark the alternatives for two seconds and choose the faster one. Call it "performance detection".
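A minimal sketch of that idea in Python (pick_fastest and the equal split of the two-second budget are my own assumptions for illustration):

    import timeit

    def pick_fastest(candidates, budget_seconds=2.0):
        # Give each candidate an equal slice of the time budget and keep
        # whichever completes the most iterations of the same task.
        per_candidate = budget_seconds / len(candidates)
        best_fn, best_runs = None, -1
        for fn in candidates:
            runs = 0
            start = timeit.default_timer()
            while timeit.default_timer() - start < per_candidate:
                fn()
                runs += 1
            if runs > best_runs:
                best_fn, best_runs = fn, runs
        return best_fn

    # Hypothetical usage with two interchangeable implementations:
    # fastest = pick_fastest([impl_a, impl_b])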


"Flooding with SSIDs" means generating lots of fake SSIDs each second to trick that device into attacking those fake SSIDs and keep it busy. Keept it genrating even more (use several wifi adapters) until the other device goes crazy. You can use some of the tools (like aireplay) from the aircrack-ng suite for generating fake SSIDs


Amazing! I didn't know that. It's like Satoshi Nakamoto: another anonymous genius.


I find this post to be kind of a snake oil pitch. It begins by explaining the problem it wants to solve (the lack of a modern text/console-based browser), then goes directly into a very long story about why the author built it (which is not at all useful for people wanting a tl;dr summary). The first thing I miss here is how this compares to brow.sh, and why I should pay for this instead of deploying my own brow.sh instance.


This! Love your reply. I'm also tired of developers criticizing the Linux desktop for not being friendly enough and running away to proprietary systems instead of trying to help fix what they don't like about the Linux desktop. That's the good thing about open source: don't like it? Then fix it, send a patch, and stop complaining. It's not like proprietary systems, where all you can do is complain (you can't fix anything).


Maybe they want to get stuff done rather than putter around in the tooling. It's a nice option to have, but it would be ruinous to productivity.

Also, design by committee rarely produces good UI. You can patch little UI bugs, but if you really want to holistically improve the UI it's a huge undertaking.

Just because you're a developer doesn't mean you want to hack on every tool you use. Software has gotten way too big for that.


I can attest that using free software is not ruinous to productivity - least of all in the field of software development itself.

You don't have to fix every bug you find. Just fix the odd bug, and work around the rest until somebody else fixes it; many hands make light work. But the work doesn't get done at all if you run away to proprietary platforms.

You don't have to boil the ocean. Just do your civic duty from time to time. Once a year, even. If everybody on HN used desktop Linux and, once a year, invested an hour into fixing a minor, polish-level bug, desktop Linux's polish problem would completely disappear.


> Maybe they want to get stuff done rather than putter around in the tooling.

You mean like that crufty thing called Homebrew?

Or the default versions of shipped software being out of date, sometimes by a long, long time?

https://www.reddit.com/r/bash/comments/393oqv/why_is_the_ver...


> You mean like that crufty thing called Homebrew?

What kind of problems have you faced with Homebrew? Can you please elaborate?

I use Homebrew all the time, and it works for me out of the box without any issues. I've never had to edit configuration files or customize anything to make it do the right thing. It just works.


This attitude really bugs me. Linux is crap because the community around it is dysfunctional and doesn't care about building a working system, not because developers "ran away".

I am typing this on a Mac. I spent years as a desktop Linux user and developer.

For maybe 5-6 years, I invested my evenings and weekends in trying to improve desktop Linux. I fixed bugs in ALSA. I worked on Wine. I fixed bugs in GNOME. I wrote freedesktop.org specs. I did a lot of stuff.

I also watched as lots of other people tried to fix basic problems.

You know what my experience was?

Half the time, attempts to fix things triggered massive flamewars. KDE and GNOME couldn't agree on basic things like how notifications should work; any attempt to come up with a compromise system resulted in the KDE guys screaming and yelling about how everyone should just adopt their own system (which had braindead usability problems). Linux people thought package management was God's Gift to users, even though actual users kept telling them it was awful and they just wanted to download apps from websites. The kernel developers insisted that every driver be GPL'd, even though this was incompatible with the business models of key hardware developers, resulting in those firms working around the GPL and ensuring nobody "won". Common distros couldn't play music or video files because of a refusal to pay for software patents.

The other half of the time attempts to fix anything were rapidly undone by pointless ecosystem churn as things were written, rewritten, thrown away, and rewritten again.

There was no coherent plan, no architecture, and competitive evolution turned out to be a bad way to create an operating system. Developer experience was a nightmare. Any time something deviated from what was laid down by the original UNIX in the 70s, it caused massive schisms, and basic APIs split or stopped working.

Linux on the desktop will never be competitive with macOS, regardless of how badly Apple screws up its QA, because the desktop Linux community is far more dysfunctional.


> That's the good thing about open source: don't like it? Then fix it, send a patch, and stop complaining.

That's simultaneously a good thing and bad thing. It's good because it's possible. It's bad because when everyone is responsible for a platform's software defects, then no one is responsible.

Even if we ignore average users, and just consider developers, the overhead and learning curve is prohibitive. A frontend webdev or even a backend Java dev would probably have a really hard time tracking down and fixing an issue in Xorg or Wayland or in their touchpad driver. The relevant maintainer could likely fix it in a few hours or days if they had the available time and motivation, but the user (who just happens to be a developer in an unrelated field) would likely take many days or weeks.

Even if you match skill-sets -- say a C programmer is having trouble with some GNOME UI issue that would turn out to be a bug in a C library -- you still have a big hump to get over to contribute to a project you're not familiar with. Build system, how to safely test changes on a live system (where the normal software comes from a distro package), code organization and just learning how things work in a new code base, code style, pull request process, review process, etc. -- all of this makes it really difficult to contribute, even if the actual change is small.

I do wish more people would take the time to dig in and scratch their own itches, but I absolutely don't blame them for just wanting to be able to get their work done without having to first fix their OS and tools.

(Credentials: I use Linux as my daily driver, and have gotten frustrated by macOS as a development environment any time I've had to use it as such. I used to be an Xfce core maintainer, a decade ago. These days I mostly do Scala and Java backend dev, and consider myself quite rusty with languages like C.)


Linus Torvalds? He created Linux and git. Arguably the two most successful open source projects ever.


Linus has demonstrated incredible long-term effectiveness as a software developer, both creating and then shepherding two of the most important pieces of software of the last 50 years. But he seems qualitatively different than Bellard.

I'm trying to put into words what the difference feels like. Git and Linux demonstrate, for Linus, great intelligence but not genius the way Bellard's works do. And on Linus' side, Git and Linux demonstrate leadership, pragmatism, and a tremendous understanding of how to actually drive a large project forward over time, which Bellard's works don't.


Torvalds is like Brahms, who published relatively few works but polished, refined, and winnowed them until they were of very high quality. He wrote his duds, but he knew enough not to publish them. Bellard is like Bach: an unbroken series of gems, mostly small to medium-sized, each one an immaculate work of craftsmanship wedded to incandescent genius. Nothing in the entire hoard is inferior work --- there's nothing you can point to and say, "Bach screwed up here."


> Step two: those Huawei phones with a forked version of Android are sold globally. They are less secure and get hacked.

Why would those phones be less secure and therefore easily hacked? What kind of argument is that?

How is a Huawei phone with a forked Android any less secure than a 2-year-old Android phone from $randomanufacturer (no longer receiving any OS updates at all)?


Yes, Huawei phones can be less secure compared to a two-year-old phone running vanilla Android, because Huawei, and sometimes even Samsung, end up making modifications to the kernel that expose the entire device to userland hacks.

Google is trying to move Android to a more secure footing with Titan, Play Protect, verified boot, etc., like ChromeOS. If Huawei becomes the dominant Android phone manufacturer, there is the possibility for things to be worse than they are today.


Might this answer your question?

> Huawei must raise 'shoddy' standards, says senior UK cybersecurity official

> GCHQ technical director says he hasn’t seen anything that reassures him company is taking necessary security steps

https://www.theguardian.com/technology/2019/jun/07/huawei-mu...


Android isn't exactly known for being a paragon of security. The number of unpatched critical CVEs in the wild at any given moment is staggering. At worst this is a step sideways.


Sure thing, but at least Android is open source.

Huawei's drivers, which are what led GCHQ to probe into Huawei's code and write a rather uncharitable report on what their coding practices look like [1], are not. Admittedly, as members of the public we can only take their word for it that they found shoddy code by any reasonable standard. But if the latter is true and is any indicator of how they'll maintain their own fork of Android, it doesn't inspire much confidence.

[1] https://www.theregister.co.uk/2019/03/28/hcsec_huawei_oversi...


> Sure thing, but at least Android is open source.

Some of it. Certainly not many of the hardware drivers. There's a reason that updates are dependent on hardware vendors and mobile network operators and that most phones don't have fully functional Lineage builds.


Yeah, well... I think we can agree that it's more open source than other phone operating systems. And that's beside the more important point here, which is that Huawei's developers reportedly write insecure-looking spaghetti code.


My point is that any security argument is a red herring when the baseline for comparison is a wet paper bag.

The thing about exploits is that it only takes one. It doesn't matter if Huawei adds another one when there are already thousands to choose from.


And mine is that Google has large swaths of OSS code showing that they're competent at writing secure code, whereas there's a report out that Huawei's code is such poorly written spaghetti that even security experts can't make up their minds about whether it's secure, except to say that Huawei needs to get its act together.

