For humans to have any reasonable presence somewhere else in our solar system (the Moon, Mars, etc.) we need the ability to launch tons (literally tons) of stuff to orbit and to the destination. And we need to launch it often to do anything in a reasonable amount of time. The only way to do that is to make reusable launch systems. SpaceX's Falcon 9 has aced that for satellites (see Starlink and everything else they've launched). The Starship launch system is capable of launching a significantly larger payload, ~20 times more. What they demonstrated yesterday is that a launch system capable of getting us anywhere in the solar system can be reusable. Huge accomplishment.
Went to comments here exactly for this. I was delighted to see one of my favorite authors front-and-center as an incidental detail of an otherwise unrelated tech. demo.
The how it works section says the following when you expand it:
> "What goes around, comes around! Rediscover the delight of tactile scrolling with tinyPod’s physical scroll wheel. And yes, it actually scrolls. How? Through carefully mechanized components inside, tinyPod's wheel makes direct rotation contact with your Apple Watch crown, letting it naturally scroll anything across the OS."
I worked on a team with similar cost optimisation gurus... They abused HTTP code conventions and somehow managed to wedge two REST frameworks into the Django app, which at one point had 1m+ users...
I don't know if this counts, but I had something that I would call parasitic happen to me once.
I administered a VBulletin forum, and naturally, we installed all sorts of gewgaws onto it, including an "arcade" where people could play games, share high scores, etc.
This arcade, somehow, came with its own built-in comment system, one where users could register without registering for a proper VBulletin user account on our instance, and thus without admins being notified.
One day, we discovered this whole underbelly community that had apparently been thriving under our metaphorical floorboards, and promptly evicted them. In hindsight, I probably should have found some way to let them stick around, but several things had recently happened that hardened our stance toward any sort of unwanted users.
If I understand TFA, you'd need to find a way to get S3 (which offers no server-side script execution, only basic file delivery) to emit an error code (403 specifically) alongside a response of useful data. Good luck...
Simple. Just encode all of your app's data and logic as a massive lookup table, each bit of which is represented by an object that either doesn't exist (a zero) or exists but is unauthorized to access (a one).
When you read a sequential series of keys (404 403 403 404 = 0110), it will either tell you the data you were looking for or the next key name to begin reading from.
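A minimal sketch of the decoding side, assuming the convention above (404 = 0, 403 = 1); the status sequences are made up for illustration:

```python
# Toy decoder for the 403/404 "storage" scheme sketched above.
# Assumes the convention from the comment: HTTP 404 -> bit 0, HTTP 403 -> bit 1.

def statuses_to_bits(statuses):
    """Map a sequence of HTTP status codes to a bit string."""
    mapping = {404: "0", 403: "1"}
    return "".join(mapping[s] for s in statuses)

def bits_to_bytes(bits):
    """Pack a bit string (length a multiple of 8) into bytes."""
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(statuses_to_bits([404, 403, 403, 404]))  # the "0110" example above

# A full byte's worth of error responses decodes to one character.
statuses = [404, 403, 404, 404, 403, 404, 404, 403]  # 0b01001001
print(bits_to_bytes(statuses_to_bits(statuses)))     # b"I"
```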
It said "never incur request or bandwidth charges". I assume this means you don't pay to compute the response or for the bandwidth to deliver it.
Seems you could compute the response, store it somewhere (memcached or something), and then return an error. Then have the caller make another call to retrieve the response. (To associate the two requests, have the caller generate a UUID and pass it on both calls.)
That doesn't make it entirely free, but it reduces the compute cost of every request to just reading from a cache.
(This does sound like a good way to get banned or something else nasty, so it's definitely not a recommendation.)
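A rough sketch of that two-request flow, with a plain dict standing in for memcached and ordinary functions standing in for the real request handlers (all names are made up):

```python
import uuid

# Stand-in for memcached/Redis: results keyed by a caller-supplied UUID.
_result_cache = {}

def expensive_computation(payload):
    return payload.upper()  # placeholder for the real work

def handle_compute(request_id, payload):
    """First request: do the work, stash the result, return an 'error'.
    In the scheme above, the error response is what keeps the request free."""
    _result_cache[request_id] = expensive_computation(payload)
    return 403, ""

def handle_fetch(request_id):
    """Second request: hand back the cached result (a cheap cache read)."""
    result = _result_cache.pop(request_id, None)
    return (404, "") if result is None else (200, result)

# Caller side: generate a UUID to tie the two requests together.
request_id = str(uuid.uuid4())
handle_compute(request_id, "hello")
print(handle_fetch(request_id))  # (200, 'HELLO')
```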
Well, you can probably send out one bit at a time by updating your ACLs on a clock (with which your clients are also roughly synchronized) and distinguishing between 403 and 404.
It would take an awful lot of time to get that data out, though.
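The sender side of that could look roughly like this (boto3; the bucket, key name, and tick length are made up), toggling a single key between "exists but private" (403 for anonymous readers) and "missing" (404) once per tick:

```python
import time
import boto3

BUCKET = "covert-bit-bucket"   # hypothetical bucket
KEY = "clock-bit"              # single key the receiver polls anonymously
TICK_SECONDS = 10              # both sides agree on this clock out of band

s3 = boto3.client("s3")

def send_bits(bits):
    for bit in bits:
        if bit == "1":
            # Default (private) object: anonymous GET/HEAD sees 403.
            s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"")
        else:
            # Missing object: anonymous GET/HEAD sees 404.
            s3.delete_object(Bucket=BUCKET, Key=KEY)
        time.sleep(TICK_SECONDS)  # hold the state for one clock tick

send_bits("0110")
```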
It seems to me you could just use static ACLs and create (or not) object names to cause this 403 vs 404 distinction? The drawback is that you'll be paying for the minimum retention of minimum-sized objects, not to mention all the other bucket management traffic you are using.
So you're going to need a lot of consumers of the same bit stream before you've somehow made the covert, "free" egress a net positive versus a regular object. I imagine AWS can trivially put in place some throttling of error responses to make this impractical.
Ignoring these economic issues, imagine a content-addressing scheme like /stream-identifier/bitnumber which you can then poll to fetch one bit per request. Populate an object (which will return 403) for 1 bits and omit an object (which will return 404) for 0 bits.
You also need to know some stream length or "end of stream" limit. Otherwise you can't tell if you've read past the end or are really fetching 0 bits of a longer stream.
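A receiver for that addressing scheme might look roughly like this (the bucket URL and stream name are placeholders, and the bit length is assumed to be known out of band, as noted above):

```python
import requests

BASE_URL = "https://example-bucket.s3.amazonaws.com"  # placeholder public bucket
STREAM = "my-stream"                                  # placeholder stream identifier

def read_stream(length_in_bits):
    """Fetch one bit per request from /stream-identifier/bitnumber keys:
    403 (object exists, access denied) = 1, 404 (no such object) = 0."""
    bits = []
    for i in range(length_in_bits):
        status = requests.head(f"{BASE_URL}/{STREAM}/{i}").status_code
        if status == 403:
            bits.append("1")
        elif status == 404:
            bits.append("0")
        else:
            raise RuntimeError(f"unexpected status {status} for bit {i}")
    return "".join(bits)

print(read_stream(32))  # four bytes' worth of error responses
```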
One strategy might be to use an 8b/10b encoding so you can detect when you're not getting a valid symbol anymore. You could treat that as end of stream if it is supposed to be static, or go into some polling mode to wait for more symbols to be posted.
Hybrid strategies might use regular objects or recursive use of these streams to publish metadata streams that tell you about the available stream names, lengths, and encoding schemes.
I wish this were talked about more. Quantum computing is the biggest long-term threat to crypto imo. What's the plan once elliptic curve cryptography can be broken?
There will be a point in time where there are just a few quantum computers that can break everything before the general public has access to quantum computing. Can crypto work in that scenario? Normal computers wouldn't be able to work with the beastly algorithms a quantum computer could handle.
The first entities likely to achieve practical quantum computers will be either governments or big tech companies like Google. And it will be a big deal, so there would likely be several years of warning before it reached the point where it would make sense to use it to steal someone's bitcoins (I guess the original Satoshi coin address would be the biggest bounty). In the period between when the big development is first announced and when it becomes practical, Bitcoin and other cryptocurrency projects can fork to a new digital signature scheme that is quantum-proof (such as LegRoast), so that anyone who is concerned can move their coins to a new secure address. So while it would certainly be disruptive, it wouldn't necessarily spell the doom of Bitcoin.
Depends on the incentives. If the only interest in quantum computing is to break classically hard encryption, then I think the time between PoC and widespread availability could be relatively short.
I am not a mathematician, but from what I understood, it's basically an extension of ECC using multiple elliptic curves. It allows reusing the Diffie–Hellman key exchange protocol (private keys kept secret, public keys exchanged), and the memory requirements are small. So it would be a perfect replacement in wallets and validation nodes. But I cannot explain why it is safe against an attack using quantum computers.
They're correct. The blockchain just records that the funds were sent to your address. To spend the funds you have to show the public key which hashes to that address, in another transaction signed by the private key.
If the sender wanted to send you a private message, they would need your public key, but that's not what transactions do.
Sending to an address means sending to a "hash" of a public key (or of a more complex script) in all modern formats. That script and data are then revealed on spend.
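For what it's worth, here's a sketch of how a legacy (P2PKH) address is derived from a public key; the address is just a Base58Check-encoded hash, which is why the key itself stays hidden until spend. The example key below is made up, and the ripemd160 call assumes your Python's OpenSSL provides it:

```python
import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Base58 with a 4-byte double-SHA256 checksum appended."""
    data = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    for byte in data:              # each leading zero byte becomes a leading '1'
        if byte != 0:
            break
        out = "1" + out
    return out

def pubkey_to_p2pkh_address(pubkey: bytes) -> str:
    """Address = Base58Check(0x00 || RIPEMD160(SHA256(pubkey)))."""
    sha = hashlib.sha256(pubkey).digest()
    ripemd = hashlib.new("ripemd160", sha).digest()  # needs OpenSSL's ripemd160
    return base58check(b"\x00" + ripemd)             # 0x00 = mainnet P2PKH version

# Illustrative 33-byte "compressed public key" (not a real key pair).
print(pubkey_to_p2pkh_address(bytes.fromhex("02" + "11" * 32)))
```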
While not implemented, I think there are "lattice-based" forms of cryptography that are believed to be QC-resistant, which blockchains could migrate to if QCs begin to show signs of increased fault tolerance and size.
There's a lot of research and practical work on quantum-proof cryptography, which is already in use in some cryptocurrencies - Bitcoin would 'just' need to hard-fork and update when it's ready.
One of the best use cases for this is when you have a backend/internal system and you want other things to start interacting with it. Instead of having to write the API to interface with it, you can just use something like this, and with little effort you have an API and can talk with the database.
I think the point he was making is: why build the API if you just want to talk to the DB? You can connect to a SQL database over the network and protect the data with views and stored procedures.
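For example, a minimal sketch of the view-plus-grants approach in Postgres (the table, role, and connection details are all made up):

```python
import psycopg2

# Hypothetical setup: expose only a restricted view of `orders` to an app role,
# so network clients never get to query the underlying table directly.
conn = psycopg2.connect("postgresql://admin:secret@db.internal:5432/appdb")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE OR REPLACE VIEW public_orders AS
            SELECT id, status, created_at   -- PII columns deliberately omitted
            FROM orders
    """)
    cur.execute("CREATE ROLE api_reader LOGIN PASSWORD 'reader-secret'")
    cur.execute("GRANT SELECT ON public_orders TO api_reader")
    # No grant on `orders` itself, so the view is api_reader's only surface.
conn.close()
```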
That's actually exactly what we're trying to build at Splitgraph [0]. :) We're building a "data delivery network" (DDN), which is like a CDN, except instead of forwarding HTTP requests to upstream web servers, we forward SQL queries to upstream databases.
The premise of the idea is that we can cut out the middle-man for a lot of data distribution use cases. We give you a way to deliver your data in native SQL, using the Postgres wire protocol. We've decoupled authentication from the database, so we can do it in a gateway / LB layer using PgBouncer + Lua/Python scripting. Any SQL client can connect to the public Splitgraph endpoint (as far as a client is concerned, Splitgraph is just a really big Postgres database). You can write queries referencing and joining across any of the 40k datasets on the platform.
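For example, something along these lines works from any stock Postgres client library (the endpoint, credentials, and dataset/table names below are placeholders, not real ones):

```python
import psycopg2

# The DDN speaks the Postgres wire protocol, so a regular client is enough.
conn = psycopg2.connect(
    host="ddn.splitgraph.example",  # placeholder endpoint
    port=5432,
    user="my-api-key",              # placeholder credentials
    password="my-api-secret",
    dbname="ddn",
)
with conn.cursor() as cur:
    # Placeholder query joining two datasets hosted on the platform.
    cur.execute("""
        SELECT a.neighborhood, COUNT(*)
        FROM "some-namespace/dataset-a".table_a a
        JOIN "some-namespace/dataset-b".table_b b ON a.id = b.id
        GROUP BY a.neighborhood
        LIMIT 10
    """)
    print(cur.fetchall())
conn.close()
```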
In fact, just this week we've been working on v0.0.0 of our web client. This lets you do things like share and embed SQL queries on Splitgraph, e.g. [1] (this query actually joins across two live data portals at data.cityofchicago.org and data.cambridgema.gov).
There's also an example here of using an Observable notebook with the Splitgraph REST API [2]. It also works with the Splitgraph DDN configured as a Postgres database, but that's only supported in private notebooks for now (since normally it's a bad idea to expose your DB to the public!)
In general, we like the idea of adding more logic to the database. Tools like OP's are useful in this regard. In fact, at Splitgraph we use Postgraphile internally (along with graphql-codegen for autogenerated types) and we have nothing but good things to say about it.
DreamFactory is basically a paid service for this sort of thing. They support something like 20 types of databases (among many other data sources). They have a lot of features that make the exposed API good enough for the long term. https://www.dreamfactory.com