
Why is Swift interop needed for that? C/C++ are first class citizens in Apple's toolchains, and Obj-C++ interop has been a well supported thing for 25+ years.
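
For what it's worth, even from Swift, plain C is directly callable with no wrapper layer. A minimal sketch, assuming macOS, where importing Darwin exposes the C standard library:

    // Calling plain C from Swift: no wrapper or bridging code needed.
    import Darwin   // re-exports the C standard library on Apple platforms

    let root = cbrt(27.0)   // C's cbrt(double) from <math.h>
    let pid  = getpid()     // C's getpid() from <unistd.h>, returns pid_t
    print("cbrt(27) = \(root), pid = \(pid)")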


I work all day every day. This is just a fun project idea, so I would prefer to save time with a dynamic language like Swift.


But you can still pair most types of extra controllers to it (including a set of Joy-Cons), and the eShop is aware of the few games that can only be played on a TV and warns of incompatibility.


It is not USB-C powered.

It needs 100-240V, 50-60Hz AC power.


A missed opportunity, though. If it were USB-C powered, it could be even smaller, and Apple could simplify its BOM by including a MacBook Pro charger with it.


If it were USB-C powered, people would be complaining about the size of the external power supply.


If it only had one USB-C input, how could it supply enough power for 5 USB-C outputs? Well, OK, 3x TB5 + 2x USB-C, but still?


> Nearly all the combined work of humanity has been "lost to time," and society seems pretty okay with that.

Pre-digital age, preserving the combined work of humanity was actually quite difficult. The cost to preserve everything outside of "obviously important" artifacts would've been prohibitive (or even impossible) for society as a whole.

I believe many (if not most) folks native to the digital age think that digital artifacts should be preserved indefinitely by default - as the cost of doing so is comparatively trivial - and that laws in democratic nations will catch up to that.


Hey, I agree 100%. We live in a time and place where we could put about 10 refrigerators' worth of compute and storage into the basement of every library in the world, and fill those drives with every book, painting, movie, song, etc... EVERYTHING, all in one place, replicated around the world a million times over.

We could do this. The technology exists. But we, as humans, as a society and as a race of beings, have collectively decided that we will not do this: It doesn't make anyone any money.

For the first time in history, we could store all of human knowledge in a safe replicable way, world wide, for everyone. But we specifically choose not to do this.


Are you willfully ignoring the Internet Archive, which is doing exactly that and is not a for-profit operation?

We need more of those, agreed, but it makes no sense saying "no one is doing that."


No, I am not ignoring them. I know Archive very well. They do not deliberately preserve copyrighted content; it just gets uploaded, and when no one comes along and complains, it stays up. They remove things ALL the time. All of the Atari 2600 games from Atari itself, for example: Atari's current owners showed up and asked Jason to take those down, and he did. And he thanked them for the privilege and said they were very nice.

I ADORE Archive. But guess what, they're being sued into the ground over doing EXACTLY what we all want them to do: preserving things. If anything, this absolutely 100% proves my point: we have one example of a modern Library of Alexandria, and it is in danger because someone is upset they didn't get paid. This goes beyond choosing, as a society, not to save our information and culture. This is being outright HOSTILE towards the idea.


Yup!

Much of Santa Clara Valley has "R1-8" zoning, which means "detached single family homes, 8 per acre."

43,560 / 8 = 5,445 sqft lots.

7,900 sqft is larger than average for many comparable neighborhoods.
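
Spelled out as a trivial sketch (using only the numbers above):

    // 1 acre = 43,560 sqft; "R1-8" allows 8 detached homes per acre.
    let sqftPerAcre = 43_560.0
    let unitsPerAcre = 8.0
    let minLotSqft = sqftPerAcre / unitsPerAcre   // 5,445 sqft
    print(minLotSqft)                             // 5445.0
    print(7_900.0 / minLotSqft)                   // ~1.45x that minimum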


Safari on iOS supports extensions, and - while I don't know - I'd be shocked if Chrome on Android didn't?


Chrome does not.

But we can install alternatives that are better.


Our universe is a closed system. We cannot change its overall heat content. We can only move heat around within it.

Without tapping into another universe - which we're nowhere close to being able to even theorize about - all we have the ability to do is create a localized heating crisis within our universe.


In about 5 billion years, we'll have a really bad localized heating crisis in our solar system.


The universe is busy doing this itself. Eventually we use up all the low entropy states and end up in high entropy states where we can no longer extract useful energy.


> Are 200-3kb/s sufficient for a realistic attack?

Yes.


As both a software developer and a pilot...

We're (much?) closer to Fully-Self-Flying planes than FSD cars because the problem space is - perhaps counterintuitively - MUCH smaller to tackle. And we have a lot more experience tackling it.

Additionally, there could easily be remote pilots as backup in case of catastrophe (See remote-piloted military and border patrol UAVs).

And pulling a parachute at 1000'+ altitude actually has quite a bit of precedent (See https://en.wikipedia.org/wiki/Cirrus_Airframe_Parachute_Syst...)

Now... There's probably a lot of cultural and regulatory reasons why the "string of automated glider ports" idea will never come to fruition.

But... As far as technical hurdles go, there's not much new technology that would need to be invented here.


As an industry leader on this specific subject matter... I agree: the problem space is much smaller, in theory.

However, certification requirements and safety assurance needs will add both cost and time to realizing fully autonomous aircraft. They will get here, but we are 10-15 years away.

The problem is how to certify machine learning code. Today, you can't. Existing AMCs (accepted means of compliance) are incompatible with the nature of ML. (The breakdown is specifically with assurance architectures focused on code traceability and coverage.)

A new architecture for demonstrating safety assurance with AI/ML is needed, and is being built, but it is still 1.5-2 years away from being released. It will then take another year or two before a CAA (civil aviation authority, like the FAA or EASA) will certify a component with ML code--and that will not be an autonomous pilot. That will come in time, but the industry is conservative--especially on safety-critical matters--and it will take years to develop trust in the technology, human factors, and methodology to work up to autonomously flown passenger aircraft.

From the regulator perspective, EASA has taken pole position in thought leadership. Google their AI Roadmap or their Concept Paper for Level 1 Machine Learning Applications.


> string of automated glider ports

There is a small precedent here: the Space Shuttle was flown almost 100% by computers.

There was only minimal pilot involvement during landing, and some software changes were needed due to the computers' small RAM.

I could imagine "Fully-Self-Flying planes" would start out with cargo planes between areas with low population.


Modern supercomputers are obscenely parallel machines built to chew through embarrassingly parallelizable tasks.


Forgive me if I'm wrong, but I thought supercomputer time was usually not allocated to embarrassingly parallel tasks. While they certainly can do those tasks well, such tasks are a waste of a distributed system with expensive, high-bandwidth fiber connections between nodes.

When I spent (a relatively small amount of) time working with one, this was the main thing the director drilled into my head: use it to solve large, parallel problems that require lots of internode communication of intermediate results. Embarrassingly parallel problems can be solved on cheaper hardware like GPUs.
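
To make that distinction concrete, here is a rough single-machine Swift sketch (illustrative only, not cluster code): the first loop is embarrassingly parallel, while the second is a stencil where every step needs the neighbors' previous values. Split that second array across nodes and the boundary cells must be exchanged on every iteration, which is exactly what the fast interconnect is for.

    import Dispatch

    // 1) Embarrassingly parallel: each element is independent, so no worker
    //    ever needs another worker's result. Cheap hardware handles this fine.
    let inputs = (0..<100_000).map { Double($0) }
    var squares = [Double](repeating: 0, count: inputs.count)
    squares.withUnsafeMutableBufferPointer { out in
        DispatchQueue.concurrentPerform(iterations: inputs.count) { i in
            out[i] = inputs[i] * inputs[i]
        }
    }

    // 2) Communication-bound: a 1-D diffusion stencil. Each step needs the
    //    neighbors' previous values, so a distributed version must swap
    //    boundary ("halo") cells between nodes on every iteration.
    var u = [Double](repeating: 0.0, count: 1_000)
    u[500] = 100.0                          // point heat source
    for _ in 0..<100 {
        var next = u
        for i in 1..<(u.count - 1) {
            next[i] = u[i] + 0.25 * (u[i - 1] - 2 * u[i] + u[i + 1])
        }
        u = next                            // on a cluster: halo exchange here
    }
    print(squares[10], u[500])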


When the data no longer fits inside a compute resource (a node, or even a rack), you are by necessity going to be distributed. Communication is a fact of life when the problem size grows. This is also true for GPU-based computing.


So is a GPU


Might depend on the level of embarrassment.

Interesting that they did this with Julia, with 83% of instructions being AVX-512 (if I'm reading it correctly).

Does anyone know if Julia's GPU capabilities could have been leveraged on, say, a cluster of NVIDIA A100s/V100s?


This work is four years old (with development happening before that), so the Julia GPU capabilities probably weren't good enough at the time. If you wanted to do it today, that'd probably be the way to do it, but would need some benchmarking.


A lot of modern supercomputers use/have GPUs. But for a long time most GPUs had very poor fp64 compute capabilities, so they were not really used for anything requiring precision.

