Towards a Type System for Containers and AWS Lambda to Avoid Failures [pdf] (christophermeiklejohn.com)
180 points by cmeiklejohn on April 3, 2018 | 63 comments



I'm not sure what the point of this paper is.

* It talks a lot about containers, but this is really just about contracts across systems, whether or not they run in containers. So I'm not sure why the word "container" is necessary.

* It says "towards a type system", but then just muses about IDL/REST/Thrift and says "we need cross-system stuff ... so we should use better types". But what does that look like? There are vague assertions that "we've done this", but I don't see any description of what that actually looks like.

* The Zookeeper/Kafka example, while apt, doesn't obviously call for a "cross-system type system". Zookeeper is deliberately, and validly, agnostic about what its clients encode within its file system, so whether the replication data is F/F+1/whatever is meaningless to Zookeeper. So to me the solution is not a cross-system type system where Zookeeper becomes aware of Kafka's invariants; it's Kafka itself correctly, internally interpreting the Zookeeper data. If that means it's an Either[ValidCluster, InvalidCluster] within the Kafka "ClusterInfoApi" abstraction (sketched below, after this list), that's fine, but that's not something the Zookeeper API/IDL is going to care or know about.

* You're never going to get all networked services to agree on the One True networking format/One True type system/IDL, so why even muse about this as a possible approach?
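
To sketch what I mean by Kafka interpreting the data internally (made-up types and field names, TypeScript-ish):

    // Made-up types: Kafka-side interpretation of raw ZooKeeper data.
    // ZooKeeper stays agnostic; the validation lives entirely in Kafka.
    type ClusterInfo =
      | { kind: "valid"; brokers: string[]; replicationFactor: number }
      | { kind: "invalid"; reason: string };

    function interpretZkData(raw: { brokers?: string[]; replicationFactor?: number }): ClusterInfo {
      if (!raw.brokers || raw.brokers.length === 0) {
        return { kind: "invalid", reason: "no live brokers registered" };
      }
      if (raw.replicationFactor === undefined || raw.replicationFactor > raw.brokers.length) {
        return { kind: "invalid", reason: "replication factor exceeds live brokers" };
      }
      return { kind: "valid", brokers: raw.brokers, replicationFactor: raw.replicationFactor };
    }

Nothing here requires Zookeeper to know anything about Kafka; the invariant lives on Kafka's side of the boundary.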

Disclaimer: I don't read academic papers on a regular basis.


“Towards” papers are typically workshop papers which give a preview of ongoing work, usually by PhD students.


This is what we’ve done at StdLib [1] with FaaSlang [2].

FaaSlang uses static analysis and a superset of ESDoc comments to infer types for the API interface to serverless functions on our system. This allows for automatic type coercion from query parameters, automatic error checking, and more - all baked in at the gateway layer before a function is executed.

Zero configuration; just write comments the way you normally would. It’s almost a healthy midpoint between TypeScript and JavaScript, operating above the runtime, but theoretically applicable to any language run within a function container.
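
For a concrete feel, something like this (an illustrative sketch, not the exact comment grammar; see the spec [2] for that):

    /**
    * Converts a temperature reading (hypothetical example function)
    * @param {Number} celsius The reading to convert
    * @returns {Number} The reading in Fahrenheit
    */
    module.exports = async (celsius) => {
      return celsius * 9 / 5 + 32;
    };

The gateway can then coerce "?celsius=21" from a query string into a Number, and reject "?celsius=abc" with a type error, before the function ever executes.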

[1] https://stdlib.com

[2] https://github.com/faaslang/faaslang/


Honestly I stopped reading as soon as I saw the name of your project - really, it's called "stdlib"? What good can possibly come from that name? Either the confusion will prevent your product from ever succeeding; or, less likely but just as annoying, you'll grab meaningful mindshare and everyone here will have to change their vocabulary to accommodate your semantic landgrab.

I've noticed a growing trend of startups trying to do this, and it just annoys me immensely. How hard is it to invest in your own original brand?


We chose the name as a reference or throwback, of course, to <stdlib.h>. We’re <stdlib.com>, and we allow developers to utilize serverless technology to make shipping custom logic (functions) simple and create a Standard Library of APIs within their organization or contribute to a public library.

From the inception of the product we knew we were building a library - well, a publishing and hosting platform combined - but really, a library of APIs. The very first name we came up with was “stdlib” and, through a stroke of fate, “stdlib.com” was available.

We don’t take the name lightly. We understand we stand on the shoulders of giants, and it is our responsibility to create the best “stdlib.com” we can. It motivates and inspires our team members. That is why we chose the name.

It’s unfortunate you have a problem with this, but we’re very thankful to our developer community, team, partners and our investors for their support. :)


> The very first name we came up with was “stdlib” and, through a stroke of fate, “stdlib.com” was available.

maybe everyone who thought of that name before you laughed and said "I'm not that kind of person"


Interesting.

I just went through this a bit and it doesn't appear that the specification allows for polymorphism in the type system. Am I missing something here?

If that's true, it doesn't seem to address the issues highlighted by the paper, although it does handle simple type checking across interfaces of primitive types.


To be clear, it seems you have basic coercions (a la C, or ad-hoc polymorphism) but no general support for subtype polymorphism or parametric polymorphism.
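
To make the distinction concrete (illustrative TypeScript): coercion rewrites a value at a single known type, while parametric polymorphism abstracts over the type itself.

    // Ad-hoc coercion: a "42" arriving as a query parameter becomes 42.
    function coerceNumber(raw: string): number {
      const n = Number(raw);
      if (Number.isNaN(n)) throw new Error(`cannot coerce: ${raw}`);
      return n;
    }

    // Parametric polymorphism: one signature, checked once, for every T.
    function firstOrDefault<T>(xs: T[], fallback: T): T {
      return xs.length > 0 ? xs[0] : fallback;
    }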


Also, I have to sign up for stdlib? Do I have to pay for it? (Never got that far, and the fact I have to sign up is quite misleading for something called stdlib.)


You have to pay for APIs / Functions that the individual vendor charges for (if the functions are not free) - for example, MessageBird charges $0.005 per SMS sent [1] using their sms.create function.

You also pay a nominal fee per ms of compute used when somebody uses an API you’ve built.

But you can get started for free - $5.00 of credits are included, which should cover up to 100,000 requests. If you’re really experimenting, feel free to message our team (my email is in my profile here, I think).

[1] https://stdlib.com/@messagebird/lib/sms


Not yet! But the specification is still under very active development. We try, as best we can, to solve customer needs ahead of nice-to-haves (though the latter are still important!).


But, if you're assuming ESDoc, you're already inherently tied to JavaScript/TypeScript/etc., no?

Part of the point of the paper is assuming non-uniformity between APIs, hence the IDL.


Also, it remains unclear how you expect to support parametric polymorphism if you're assuming JavaScript as a base language for your analysis.


What can I say? We’re a team of hackers. It’s not our goal to solve every problem immediately, just the ones facing us and our developers today. We wanted an abstraction simple enough to ship to our community and customers but robust enough, conceptually, to be built out into a fully expressive language. (Hence FaaSlang.)

ESDoc is JavaScript specific, but the concept of relying on comments to infer types is not, nor is the JSON format we store function execution information in. :)

(Well, I mean, we rely on JavaScript-translatable types — but can create parallels to other runtimes and languages.)


I don't have nearly as much experience developing these systems, so getting started without types was a bit daunting. I wrapped my Lambda functions in protobufs and used a shared common definition repository. Then the Lambda services support either the JSON rep of the protos or full-on binary protos, and the type checking happens on both ends. Curious what y'all think of this as a solution.
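
Roughly like this, for the receiving end (a sketch assuming protobufjs-style generated code from the shared definition repo; names are hypothetical):

    import { SensorReading } from "./generated/sensors"; // from the shared proto repo

    // Lambda handler accepting either binary protos or their JSON rep.
    export async function handler(event: { body: string; isBase64Encoded?: boolean }) {
      const msg = event.isBase64Encoded
        ? SensorReading.decode(Buffer.from(event.body, "base64"))
        : SensorReading.fromObject(JSON.parse(event.body));
      const problem = SensorReading.verify(msg); // type check on this end too
      if (problem) throw new Error(`schema violation: ${problem}`);
      // ... business logic only ever sees well-formed SensorReading values
    }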


We're trying to highlight that most of the work you're doing manually, to make sure the interfaces are well defined and match up, could easily be done by a type checker, just as it would be if you wrote this as a single application.


How are you going to handle updates and migrations of types? Something I've been thinking about in terms of dependently typed APIs is versioning and migration - but more at the code ecosystem level than in terms of running systems that can't be turned off. But there is kind of a similarity there. You can think of the ecosystem of written programs as a kind of distributed system. It would be nice to give library authors the tools to more gracefully migrate their consumers' code in a type-directed way, and to highlight potential problems prior to deployment, but this could equally apply in a distributed computing context.
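
Concretely, the sort of thing I have in mind (a toy TypeScript sketch; names are made up):

    // Version tags plus total migration functions: a consumer can always
    // lift an old payload to the current shape before using it.
    type UserV1 = { version: 1; name: string };
    type UserV2 = { version: 2; firstName: string; lastName: string };

    function migrate(u: UserV1 | UserV2): UserV2 {
      if (u.version === 1) {
        const [firstName, ...rest] = u.name.split(" ");
        return { version: 2, firstName, lastName: rest.join(" ") };
      }
      return u;
    }

A type-directed tool could, in principle, check that every published version has a migration path to the current one.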


Welcome to Erlang/OTP "releases" where the folklore is that Ericsson engineers spent as much time testing releases (read: state migration code) as they did application code.


Interesting. Yeah, from what I understand the Erlang philosophy was to just throw out the idea of a type system and deal with faults dynamically, which is understandable given the time the language was created. But given what we've learned about type systems in the intervening years, it would be super nice to leverage a type system for this, and greatly reduce the testing overhead.

I'd love to see this in an event sourcing context too. This paper, “The Dark Side of Event Sourcing: Managing Data Conversion”, seems to hint at there being some interesting algebraic foundations to splitting and merging streams, adding new fields, etc: http://files.movereem.nl/2017saner-eventsourcing.pdf


I'm really happy more work is being done in this space! Also nice to see nods to various attempts from the past - it's important we learn from their failings, but also their successes. It's too easy to get stuck in the mindset of "Ugh CORBA" and "noes, SOAP", without being able to see an opportunity there. Let's be persistent and figure this stuff out!


I am having SOAP-related WSDL flashbacks.


With the exception that these systems wouldn't have identified the type errors we highlight.


I would take SOAP/WSDL over barely documented RESTful APIs and JSON...


Unfortunately, popular opinion disagrees and that's part of the motivation for this work.


Was about to write "...and we shall call it SOAP".


More like "...we call it the simply typed lambda calculus."


It's a bit like all the folks who recoil in horror when you say 'types' and all they can think of is Java.


I actually don’t think SOAP was that bad. You didn’t want to deal with the API by hand, but it was easy to automate its consumption: Visual Studio took care of all of the plumbing, for instance, and you would consume an API in just a few clicks.


Sounds pretty nice actually. I'm guessing there could be pain over time though when it came to maintaining it. Easy to add, hard to remove or change?


Depends what you mean by change. If the API changed, all you had to do was right-click on the reference to the API in Visual Studio and select update. It would fetch the latest WSDL and recreate the code to consume the API. So it was painless as long as the API was backward compatible.

And on the server side, the WSDL was automatically generated from your code if you used .NET, so every time you updated the API the WSDL stayed up to date.

All that created a lot of plumbing and chatty XML traffic behind the scenes, so the underlying exchange wasn't as neat as a REST API's, but as long as you didn't have to deal with it directly it didn't really matter.


Ah, thanks for the extra info. I'm too young to have dealt with this stuff myself, so first-hand experience is always really interesting! What I have had to deal with is schemaless JSON HTTP APIs - definitely know what a pain that is! GraphQL is a whole bunch nicer from what I hear (but will be interesting to see what folks say in several years time).


I like the premise of the paper, but the motivating examples feel really weak to me.


Care to elaborate? We tried hard to take industry use cases.


kafka/zk bug: I just don’t think this is a compelling example of an underspecified interface being to blame. In my view, it’s squarely on kafka to correctly implement its replication policy, and that likely isn’t something I’d want to bake into an interface layer at the system/network boundary. Also, kafka is a fairly foundational infrastructure-y component from a modern “serverless” perspective, and its interface w/ zk is critical — it just doesn’t exemplify the type of pain you see from underspecified interfaces in large microservice deployments. Unlike between tons of small containers, the kafka/zk interface is worth scrutinizing and warrants a ton of manual testing and verification.

lambdas/kafka: from my perspective, this one is kind of conflating the value of Options with the value of a typed IDL that has some notion of generics baked in. It’s not clear to me that Option[Number] is what I’d want in this scenario; maybe I just want to reject anything that doesn’t validate to a Float — but I think that’s kind of tangential to whether or not generics in a more conventional sense would be useful.
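
To make that concrete (illustrative TypeScript, with a poor man's Option):

    // Option-style: absence propagates, and every consumer must handle it.
    type Option<T> = { some: true; value: T } | { some: false };

    function parseReadingOpt(raw: string): Option<number> {
      const n = Number(raw);
      return Number.isFinite(n) ? { some: true, value: n } : { some: false };
    }

    // Reject-at-the-boundary: downstream code only ever sees a valid float.
    function parseReadingStrict(raw: string): number {
      const n = Number(raw);
      if (!Number.isFinite(n)) throw new Error(`does not validate to a Float: ${raw}`);
      return n;
    }

Whether you want the first or the second is a design question about where failures should surface, and generics in the IDL don't settle it either way.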

edit: I guess I would have liked to see examples that show how this would help with discoverability, code generation, distributed tracing, monitoring, verifying large systems of microservices. It's a nice position to start with though -- are you planning on a follow up or an implementation?


Those use cases weren't super-specific. IoT sensor data gathering can be accomplished in many, many ways using a Kafka platform. I've done enough operations work to see that this would mean nothing to me (as it stands). A well laid out use case explaining what it means to developers and ops would help your case. Good luck!


"CORBA, while successful" ....


What're they going to say? "CORBA was a giant shitpile and created tire fires for several years but since this is an academic paper, we have to present historical context for our ideas"...


Your CORBA experience depended on how you used it. If you used the IDL and messaging and plugged in your own backend it could work quite well. People are still reinventing new IDLs to this day.


To be fair, modern RPC frameworks like Thrift, Protobufs, etc. all use an IDL, and it would be remiss to discount CORBA for this.


Just like RMI, CORBA was fine if you knew the boundaries where distribution started/ended. The issue with CORBA was the transparency, a problem that had been apparent since NFS. If you used CORBA tastefully, it was fine.


People have been reinventing IDLs since forever and a day.

S-expressions are the oldest serialization format that I know of. ASN.1 is a fairly ancient IDL that was used for RPC back in the early 80s (ISODE/ROSE). There's ONC RPC, with XDR as the IDL. There's DCE RPC (and MS-RPC). And many many many many others. There's tons of serialization formats, and they all tend to have one or more related sort of RPC frameworks, ad-hoc or otherwise. Perl5 has several, no?

This is the space of NIH.

Take protocol buffers. It's supposed to be an anti-ASN.1, but it's actually remarkably similar to ASN.1's DER (distinguished encoding rules), which means it has lots of the same mistakes. It's all very sad.


If you actually read the paper, we talk about ASN.1 and why it's insufficient for what we are doing. That's clearly stated, and it's clear you didn't read that paragraph.


If you want people to be supportive and care about your work, this is not the way to go about it.


Thanks for noting this. I was hoping my lengthy reply would get this point across more subtly, but maybe a more explicit note will be better.


I was responding to a comment, not TFA. I had not read TFA at all at that time. Looking at the one paragraph that mentions ASN.1 (skimming the rest for context), I cannot help but wonder how familiar you are with ASN.1. SNMP is not a good example of how to use ASN.1. To be fair, you're referencing a paper that discusses SNMP's use of ASN.1, but... you say nothing at all about how ASN.1 is inadequate, or what might be missing from it! What gives? Did you actually read anything about ASN.1? Did you even skim the ITU-T x.68x and x.69x document series?

[EDIT: I missed the second citation about why ASN.1 is inadequate. I will have to read that, though I'll admit that I am skeptical. Keep in mind that ASN.1 has been extended over time, so that one good question might be: what is missing that cannot be added? This is especially a good question when the alternative is to start from scratch.]

Note that I am [and my earlier comment certainly was] not defending or promoting ASN.1. I am saying that we have had a lot of reinventions of IDLs.

Now I'm also adding that a lot of ASN.1 haters don't really have a clue about ASN.1. Typically what they hate is a) the tag-length-value encoding rules (but not all ASN.1 encoding rules are TLV!), BER/DER/CER, and b) the syntax (which, whatever, it's just syntax).

The ASN.1 haters completely miss out on: the rather vast literature on encoding rules (including PER and OER which most closely resemble XDR), advanced features (e.g., the Information Object System), and especially the related SDL (which, granted, I believe simply hasn't had much use at all, at least that I'm aware of, but it's a very interesting idea that deserves more study). Instead, the ASN.1 haters generally produce or choose alternatives which either have roughly the same darned problems (Protocol Buffers, I'm looking at you), or a completely different set of also-obnoxious problems (e.g., JSON's escaping requirements for strings).

And so the circle of NIH [bad] wheel reinvention goes.

Another common objection to ASN.1 is lack of open source tooling. However, this is not really true anymore, with several decent open source ASN.1 compilers in existence. Of course, if you choose to invent a new IDL, then initially there will be no open source tooling for it!

I'm perfectly happy to not use ASN.1. I'll be perfectly happy if someone else doesn't use it. I'm not happy to see ASN.1 haters toss the baby (ASN.1's good ideas) out with the bathwater only to then reinvent the wheel badly. Your exceedingly light-on-the-evidence dismissal of ASN.1 is par for the course, and does not help.

EDIT: Note that I'm not calling you an "ASN.1 hater".


I have looked at that second reference[0] (your reference [9]). That's a paper from 1994! The ASN.1 Information Object System was brand new, as was parametrization, and the paper does make use of them. That paper does not even reference any of the ASN.1 specifications(!) -- this is infuriating but understandable given that those ITU-T specs were not freely accessible back then, which is even more infuriating. Still, you'd expect academics to include references to them, and even to purchase access.

More importantly, there are NO conclusions in [0] (your [9]) that would support your dismissal of ASN.1. On the contrary, I would think it's the opposite. My conclusion is that you did not read your own references and did not do the research you should have done. This is, of course, par for the course in this sub-field of computer science. Everyone seems to always think they know better without actually making sure that they do. Some of this, of course, is due to the sheer cognitive load of the literature to which you and everyone else seem eager to add without doing anything to make it easier on the rest of us. You've taken the easy, lazy path. I'm not expecting you to do a full survey of IDLs, and I'm not expecting you to look at ASN.1 and say "aha! they have all the answers", but I am expecting you to have more than no idea about it when you reference it, or if you're going to use a reference as authoritative for some statement, make sure that it actually is.

If I've made a mistake chasing down references, please let me know. I'm eager to understand why you reject ASN.1. I'm particularly interested in what it is missing or does badly. Thanks.

[0] http://people.cs.vt.edu/~kafura/PreviousPapers/asn1++-ulpaa9...


I still think the saddest thing I ever saw in the wild was a Smalltalk shop using CORBA. Doing IDL for Smalltalk VM to Smalltalk VM communications is just wrong.


Can you elaborate on this?


If I have a Smalltalk VM and I'm only talking to other Smalltalk VMs, then a native Smalltalk solution without the foolishness of using CORBA is preferable. It's just an insane bit of added plumbing.


Having at least your interfaces defined in something resembling DCE IDL can give a useful amount of confidence that the system could plausibly interop with something else. (RPC, COM, etc.)


First, it's something you don't need at the start (and frankly they didn't need it for at least a decade), and it not only complicates the code, it also complicates the maintenance and support burden. Interop is probably going to be some other mechanism and should be done only when you need it and have a clear understanding of what actually needs to be done.


One can call anything "an insane bit of added plumbing" if it's truly not needed, but that doesn't add much to a discussion of distributed applications.


I think it speaks to the complication people add to distributed applications when something like CORBA is on the table without an actual evaluation of whether it is really needed. It is most definitely not the only solution.


The entire point of CORBA was interop, so it's a bit absurd to say it wasn't actually needed, no?


About 4 years ago, I needed a car to drive into the ground that got good gas mileage. I got a pretty good deal on a Kia Rio and I drove it 100 miles a day and it was good on gas and low maintenance. It is just me, so room wasn't an issue.

I suppose I could have bought a Cadillac Escalade and spent a lot more on gas and maintenance while being able to haul a lot of people, which would have never happened. It also would have cost quite a bit more than my Rio.

So, it's not absurd to buy a Kia if it meets your needs instead of an Escalade, quite the opposite in fact. Just as it is really foolish to use CORBA when all you really need are the facilities already included with most Smalltalks. Just because I'm doing communications doesn't mean I need the big, expensive solution.


Agreed, and that's why Distributed Erlang works the way it does.

However, it's unfair to criticize CORBA for one of its design tenets. It was designed for interop, hence the name.

Many solutions for transparent distribution were around at the same time as Smalltalk. Look at systems like Eden, Emerald, etc.


I think it's totally fair to criticize something, especially if it is part of its design tenets. Just because they designed it for interop doesn't mean I cannot evaluate it for the situation I am in. I've never been one for programming dogma. If I was using NeXTSTEP only at the time, CORBA would never have come up because of PDO.


Well, this is basically the story of Distributed Erlang, which does this out of the box transparently.


PDO on NeXTSTEP was also quite good. I haven't really looked, but I'm pretty sure it survives in macOS.


Why sad?


"Corba, while partially successful..."


As much as I disagree with CORBA, as highlighted by my publication history, it's clear in many cases it was very successful in industry in limited applications.


There's even an implementation for high-assurance systems like INTEGRITY-178B supplied by Objective Interface:

http://www.omg.org/news/meetings/workshops/SBC_2005/SBC_2005...

As I suspected, the slides I dug up indicated they built it with some rigor, but not to high assurance itself. Probably too complex: the very reason I object to CORBA. Like you said, though, CORBA did have successes with this one in a very demanding niche in terms of predictability and security.



