Launch HN: Seed (YC W21) – A Fully-Managed CI/CD Pipeline for Serverless
178 points by jayair on Jan 19, 2021 | 102 comments
Hi HN, we are Jay and Frank from Seed (https://seed.run).

We've built a service that makes it easy to manage a CI/CD pipeline for serverless apps on AWS. There are no build scripts and our custom deployment infrastructure can speed up your deployments almost 100x by incrementally deploying your services and Lambda functions.

For some background, serverless is an execution model where you send a cloud provider (AWS in this case) a piece of code (called an AWS Lambda function). The cloud provider is responsible for executing it and scaling it to meet traffic demands. And you are billed for the exact number of milliseconds of execution.
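To make that concrete, a Lambda function can be as small as a single exported handler. A minimal Node.js example:

    // A minimal Lambda function: AWS invokes the exported handler with an
    // event payload and bills only for the milliseconds it runs.
    exports.handler = async (event) => {
      return {
        statusCode: 200,
        body: JSON.stringify({ message: "Hello from Lambda" }),
      };
    };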

Back in 2016 we were really excited to discover serverless and the idea that you could just focus on your code. So we wrote a guide to show people how to build full-stack serverless applications — https://serverless-stack.com. But once we started using serverless internally, we started hitting all the operational issues that come with it.

Serverless Framework apps are typically made up of multiple services (20-40), where each service might have 10-20 Lambda functions. To deploy a service, you need to package each Lambda function (generate a zip of the source). This can take 3-5 mins. So the entire app might take over 45 mins to deploy!

To fix this, people write scripts to deploy services concurrently. But some might need to be deployed after others, or in a specific order. And if a large number of services are deployed concurrently, you tend to run into rate-limit errors (at least in the AWS case)—meaning your scripts need to handle retries. Your services might also be deployed to multiple environments in different AWS accounts, or regions. It gets complicated! Managing a CI/CD pipeline for these apps can be difficult, and the build scripts can get large and hard to maintain.
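To give a flavor of what these hand-rolled scripts end up looking like, here's a hypothetical sketch (the service names and phases are made up):

    // Hypothetical hand-rolled deploy script of the kind described above:
    // deploy services concurrently in phases, retrying on rate-limit errors.
    const { exec } = require("child_process");
    const { promisify } = require("util");
    const run = promisify(exec);

    // Phase 1 must complete before phase 2 starts; names are made up.
    const phases = [["infra"], ["users-api", "notes-api", "billing-api"]];

    async function deploy(service, attempt = 1) {
      try {
        await run("npx serverless deploy --stage prod", {
          cwd: `services/${service}`,
        });
      } catch (err) {
        if (attempt >= 3) throw err; // give up after 3 attempts
        // Back off before retrying; throttling errors are usually transient.
        await new Promise((r) => setTimeout(r, 5000 * attempt));
        return deploy(service, attempt + 1);
      }
    }

    (async () => {
      for (const phase of phases) {
        await Promise.all(phase.map((s) => deploy(s))); // concurrent within a phase
      }
    })();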

We spoke to folks in the community who were using serverless in production and found that this was a common issue, so we decided to fix it. We've built a fully-managed CI/CD pipeline specifically for Serverless Framework and CDK apps on AWS. We support deploying to multiple environments and regions, using the most common git workflows. There's no need for a build script. You connect your git repo, point to the services, add your environments, and specify the order in which you want your services to be deployed. And Seed does the rest. It'll deploy all your services concurrently and reliably (handling any retries). It'll also remove the services reliably when a branch is removed or a PR is closed.

Recently we launched incremental deploys, which can really speed up deployments. We do this by checking which services have been updated, and which of the Lambda functions in those services need to be deployed. We internally store the checksums for the Lambda function packages and concurrently do these checks. We then deploy only those Lambda functions that have been updated. We've also optimized the way the dependencies (node_modules) in your apps are cached and installed. We download and restore them asynchronously, so they don't block the build steps.
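In a nutshell, the checksum check boils down to something like this (a simplified sketch, not our actual implementation; names are illustrative):

    // Simplified sketch of the checksum idea: hash each function's package
    // and skip deploying it if the hash hasn't changed since the last deploy.
    const crypto = require("crypto");
    const fs = require("fs");

    // previousChecksums is loaded from wherever it was stored last deploy.
    function functionsToDeploy(zipPaths, previousChecksums) {
      return zipPaths.filter((zip) => {
        const hash = crypto
          .createHash("sha256")
          .update(fs.readFileSync(zip))
          .digest("hex");
        return previousChecksums[zip] !== hash; // new or changed => deploy it
      });
    }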

Since our launch in 2017, hundreds of teams have come to rely on Seed every day to deploy their serverless apps. Our pricing plans are based on the number of build minutes you use, and we do not charge extra for the number of concurrent builds. We also have a great free tier — https://seed.run/pricing

Thank you for reading about us. We would love to hear what you think and how we can improve Seed, or serverless in general!




I built my first Lambda 4 years ago and it was great: no servers, no complicated tools. Just one function that I upload, and it works. The amount of tooling that exists now is just daunting. At this point, is it still worth it if the technology is so complex that people are building whole SaaS products for managing it?

PS YC is still bullish on selling shovels I see.


I think that's fair. When we started back in 2016 with Lambda, it was similar to how you describe it.

Now we've got a ton of companies that just use Lambda. So you can imagine a team of 50 developers, working on 40 or so separate services, with 500 or so Lambda functions. It can be hard to manage the tooling for all of this internally.


I see this as the issue with Lambdas.

If you use a few sparingly, great.

When you are at a team of 50 working on 40 separate services with 500 Lambdas or so, I don't see how you are better off compared to writing a self-contained service that's deployed as a single artifact, i.e. a microservice.

You get the advantage of all related code grouped together, easy to instrument / test / run on any platform. All the advantages of a monorepo but at a service level, and none of the orchestration or special tools to deal with 500 disparate Lambdas and wiring up event sources / sinks.


I think the complexity aspect is a fair criticism of Lambda.

The way I look at it is that, as a developer, I want all the advantages of serverless (per ms billing, scaling up instantly, scaling down to 0, etc) while not having to worry about the function level nuances of tooling and deploying.

Having struggled with scaling large systems in a past life, I personally lean more towards taking on the burden of tooling instead.


If you have 50 developers and you can't "manage the tooling for all of this internally" you've got WAY bigger issues.... SMH


To expand on that, you can build this tooling out internally and many teams do. They just have a team dedicated to building and maintaining something like Seed internally.

What we bring to the table (aside from being more cost-effective than staffing for this) is that when we solve an issue for one of our users, it gets solved for everybody else that's using Seed.

In fact, that's how we solved some of the issues around "reliably deploying a large number of serverless services together". One of our users was running up against some errors related to this and we worked with them to figure out the issue and fix it for everybody on Seed.


I use seed.run and it is absolutely outstanding. The UI is incredibly easy to use and I have so much more confidence in my deployments.

These guys have done an outstanding job, definitely take a look. It's an indispensable tool.


Wow thank you! Really appreciate your support!


But honestly, thank you. What you built makes my life better.


I loved reading serverless-stack a couple of years ago; it was really helpful & convinced me to use serverless for a side-project that’s still going (with almost no expenses!).

I’m surprised to hear how many separate Lambda functions each service in your example had. I understand the need to deploy each service independently... but to have 10+ deployments within each service seems crazy to me. Is there a reason each service needs so many Lambdas (vs deploying the service code as a single Lambda function with different branches)?

Fwiw, I found it possible to get quite far with a single monolithic lambda function that defined multiple “routes” within it, similar to how an Express server would define routes & middleware.
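For anyone unfamiliar, that pattern looks roughly like this with the serverless-http package (the routes here are just placeholders):

    // The "monolithic function" pattern: one Lambda runs an entire Express
    // app, with serverless-http translating API Gateway events into requests.
    const serverless = require("serverless-http");
    const express = require("express");

    const app = express();
    app.get("/notes", (req, res) => res.json({ notes: [] }));
    app.get("/notes/:id", (req, res) => res.json({ id: req.params.id }));

    module.exports.handler = serverless(app);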

Anyways, thanks for writing that PDF, and good luck with Seed!


One problem with monolithic functions is that you must grant them a union of all the rights required by every code branch in the monolith.

Obviously this can expand the blast radius of any vulnerability and tends to encourage coarser-grained privilege grants.


This is getting out of hand. Are there "monolithic" and "micro" functions now?


That made me chuckle. But to be fair, in this case a "monolithic" function is just a way to describe the pattern of moving your entire app (Express in this case) inside a Lambda function. When Lambda started to become popular, this was the most common way to migrate to it. Just move your monolithic app to a function, hence "monolithic" functions.


Exactly, this is as opposed to processing the narrowly defined event with a minimally purposed function with least privilege.


"microlithic" a micro service which bundles multiple responsibilites.


That's brilliant!


Thank you for the kind words about Serverless Stack. Frank and I poured ourselves into creating it. So it makes me really happy when I hear that it ended up being helpful.

On the Lambdas-per-service front, the Express server inside a Lambda function does work. But a lot of our customers (and Seed itself) have APIs that need lower response times. And individually packaging the functions using Webpack or esbuild ends up being the best way to do that. So you'll split each endpoint into its own Lambda.
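As a rough sketch (the paths here are hypothetical), packaging a single endpoint with esbuild's JS API looks something like:

    // Bundle one endpoint's handler into a self-contained file with esbuild,
    // so each Lambda ships only the code it actually uses.
    const esbuild = require("esbuild");

    esbuild
      .build({
        entryPoints: ["src/get-note.js"], // one entry point per endpoint
        bundle: true,
        platform: "node",
        target: "node14",
        outfile: ".build/get-note.js",
      })
      .catch(() => process.exit(1));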

I just think the build systems shouldn't limit the architectural choices.


Frank here from Seed. Just wanted to add that when you have a monolithic Lambda, multiple routes share a CloudWatch log group, metrics, and a common node in X-Ray. On the flip side, having a separate Lambda function handle each route lets you leverage other AWS services better.


I have achieved this with AWS Cloudformation/SAM, a template.yml and a makefile. Polyglot too, a mix of Python backend and JS backend across multiple functions.

I’m trying to think of how a service would help me here. However, I do think this is a frontier space where there is a lot of room for improvement. Looks polished though, I’ll take it for a spin on a hobby project soon.


Yeah makes sense. Adding SAM support is on our roadmap.

Looking forward to hearing your feedback when you give it a try! I should've clarified in the post: we support all the runtimes, not just Node.


Do you have any rough plans for how you would support SAM? Would you be transforming the YAML in some proprietary way or just calling off to CloudFormation on the user's behalf?


Yeah, at our core we do CloudFormation deployments, whether that's through Serverless Framework or CDK (using SST https://github.com/serverless-stack/serverless-stack). So in the case of SAM it would be similar, deploying the CF stack on the user's behalf. The deployment process roughly looks like: install dependencies > package functions > generate CF stack > deploy it > monitor progress. We do some optimizations along those steps but that's the gist of how it works.

Hope that helps. Feel free to get in touch if you want to know more jay@seed.run


Looks great. For someone who's not taken the plunge into Serverless yet, how would the costs compare to the more traditional options of hosting an app? i.e. a Rails/React app on Heroku

Of course 'it depends', but roughly speaking?


Yeah, it does depend. But the numbers that get touted are around 70-80% savings.

But here are the caveats: if your usage patterns are 24/7 and very predictable, you can design your infrastructure to be cheaper than Lambda.

However, for most other cases, including us at Seed (we use serverless extensively), it's so much cheaper that we wouldn't do it any other way.

If you have a hobby project, it'll be in the free tier.

Some more details here — https://serverless-stack.com/chapters/why-create-serverless-...


Great reply, thanks! Will give it a go once I learn how!


Oh I'll add, Seed is heavily influenced by Heroku. It's a little like Heroku but for Serverless.


Isn't Heroku serverless? It is a PaaS offering similar to Lambda and Google's various PaaS offerings that generally get branded as serverless.


I should clarify, when I mentioned serverless, I really meant serverless on AWS.

Broadly speaking, PaaS is similar to serverless. The main things I look for as a user are per-millisecond billing, the ability to scale up instantly, and the ability to scale all the way down to zero.


I wish there was something like this for Docker rather than Lambda functions.

I'm new to all of it, but the security groups, route tables, internet gateways and other implementation details of AWS left me feeling overwhelmed and insecure (literally, because roles and permissions are nearly impossible for humans to reason about). AWS also suffers from the syndrome of: if you want to use some of it, you have to learn all of it.

Basically what I need is a sandbox for running Docker containers with any reasonable scale (under 100? what's big these days?). Then I just want to be able to expose incoming port 443 and one or two others for a WebSocket or an SSL port so admins can get to the database and filesystem (maybe). Why is something so conceptually trivial not offered by more hosting providers?

I researched Heroku a bit but am not really sure what I'm looking at without actually doing the steps. I'm also not entirely certain why CI/CD has been made so complicated. I mean conceptually it's:

1) Run a web hook to watch for changes at GitHub and elsewhere

2) Optionally run a bunch of unit tests and if they pass, go to step 3

3) Run a command like "docker-compose --some-option-to-make-this-happen-remotely up"
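As a literal sketch of those three steps (a hypothetical endpoint, with a Docker context standing in for that made-up flag, and no auth or queueing):

    // Webhook -> tests -> deploy, in ~20 lines.
    const express = require("express");
    const { execSync } = require("child_process");

    const app = express();
    app.use(express.json());

    app.post("/webhook", (req, res) => {
      res.sendStatus(202); // 1) GitHub pinged us; ack and build
      try {
        execSync("git pull && npm test", { stdio: "inherit" }); // 2) run tests
        // 3) deploy via a remote Docker context (e.g. the docker/ecs integration)
        execSync("docker --context remote compose up -d", { stdio: "inherit" });
      } catch (e) {
        console.error("build failed:", e.message);
      }
    });

    app.listen(8080);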

So why is a 3 step thing a 3000 step thing? Full disclosure, I did the 3000 steps with Terraform and while I learned a lot from the experience, I can't say that I see the point of most of it. I would not recommend the bare-hands way on any cloud provider to anyone, ever (unless they're a big company or something).

I guess what I'm asking is, could you adapt what you've done here to work with other AWS services like ECS? It's all of the same configuration and monitoring stuff. I've already hit several bugs in ECS where you have to manually run docker prune and other commands in the EC2 instance because the lifetimes are in hours and they haven't finished the rough edges around their cleanup commands. So I've hit problems where even though I've spun down the cluster, the new one won't spin up because it says the Nginx container is still using the port. I can't tell you how infuriating it is to have to work around issues like that which ECS was supposed to handle in the first place. And I've hit similar gotchas on the other AWS services too, to the point where I'm having trouble seeing the value in what they're offering, or even understanding why a service exists in the first place, when I might have done it a different way if I was designing it.

TL;DR: if you could make deploying Docker as "easy" as Lambda, you'd quickly run out of places to store the money.


Yeah I feel your pain in regards to AWS. It was a big reason why we wrote https://serverless-stack.com.

We run some ECS clusters internally and have run into some of the issues you mentioned. We use Seed to deploy them but the speed and reliability bit that I talked about in the post mainly applies to Lambda. So Seed can do the CI/CD part but it can't really help with the issues you mentioned.

Btw, have you tried Fargate?


Ah that's cool, makes sense. We may eventually move to Fargate, but the project has some legacy stuff that somewhat relies on having a host machine because of its shared directory. I've set up a roadmap to gradually remove the restrictions that prevent us from transitioning from EC2 to Fargate.

I've learned a lot more implementation details in this project than I expected. For example, I think stuff like awsvpc network mode is a code smell. I did appreciate some of the work that AWS did, though, for just mounting an EFS filesystem like any other path in the ecs-params.yml file.

I did try it, but EFS latency is too high to run a whole server (at least for PHP). It does work for a storage folder though. Specifically, PHP Composer feels like it will never finish if the whole project directory is on EFS. But if I changed the build system to pre-build all of the Docker images, it might be ok.

To me, Amazon doing their job would look like: no distinction between EC2 and Fargate. They should have provided a host filesystem out-of-the-box (that uses EFS internally) enabled by default with the option to disable it. But that's not the AWS way. In AWS, each service gives you 90% of a typical use case. The other 10% comes from the 10 other services that you must learn in unison.

But hey, this pain could easily be someone else's meal ticket if they automate the worst parts!


We're building something like what you describe (YC S20) - https://layerci.com - it's similar to OP but meant for standard containers instead of serverless.

TL;DR:

1. Install on GitHub https://github.com/apps/layerci/installations/new

2. Create files called 'Layerfile' to configure the pipeline

Docker Compose example for step 3: https://layerci.com/docs/examples/docker-compose

Then just point it at a docker swarm cluster or run the standard docker/ecs integration: https://docs.docker.com/cloud/ecs-integration/


Thank you! I remember in the 90s, if I thought of a website or invention, I figured I had about 2-3 years to make it (certainly less than 5) before someone else did. That number dropped to maybe 6 months by 2010, and today most things are either about to be released or were released 2 weeks ago (minimum). So I'm not sure if I manifested what you made by needing it months ago, is what I'm saying.

Anyway, the value proposition of LayerCI may not exactly be in the CI/CD stuff. What caught my eye was the 12 staging servers with high power CPUs and the layer caching like Docker (which takes multi-minute build times down to seconds). I think if you manage to include backups and monitoring from the start, you'll really have something. And if you've already done them, good job manifesting that.


Have you tried Cloud Run on GCP? It sits in the niche you're describing, between a serverless platform and a managed container orchestration platform like Kubernetes (GKE or EKS).


Does that use Cloud Run? I haven't tried Google Cloud yet because I thought I'd have to learn Kubernetes. I have an aversion to learning Kubernetes because I still can't figure out what problem it's trying to solve. Admittedly, I probably haven't gotten far enough with cloud hosting to know what limitations I'll hit yet. Some OK answers here:

https://stackoverflow.com/questions/55786955/whats-the-value...

The computer science part of me just looks at a Docker swarm as a big graph. We should be able to balance a load if we just know the remaining CPU capacity of each container. But I look at the astonishing complexity of all this stuff (not to pick on k8s too much) and my first thought is: never have I seen so much code do so little!


If you have a simpler implementation then you may have a billion dollar idea on your hands. I look forward to the Show HN!


Cloud Run is exactly what he described. We use Cloud Run and it is great.


K8s on DigitalOcean might be a solution. K8s can be pretty complex but for a single tenant/single app you can probably skip some of the complexity.

Even at 100 containers you're probably going to want health checks (some load balancer integration), rolling deploys, metrics, and aggregated logging.

Amazon also added support for Docker containers to Lambda. You need to make sure your container implements the correct interface so Lambda can start it, which is covered in their docs.


I think you could check out Moncc https://docs.moncc.io/ - you can wrap all of the above in a template (provisioning and orchestration) and run locally or on GCP/AWS.

You can also integrate it with GitHub Actions.



> docker-compose --some-option-to-make-this-happen-remotely

Some of this exists - you can do remote operations like that with Docker contexts, but that doesn't solve the infrastructure issue.

Custom Docker images on Heroku are closer...


Hey Zack, we have a prototype of this that we would love to have you (and anyone else) try out. We just helped a couple of customers migrate their Docker code repos from DigitalOcean to AWS and save $2K a month with our template. It gives you a CI/CD pipeline and deploys on ECS/Fargate.

Please reach out safeer [at] tinystacks.com


Tbh, I haven't run into this problem yet.

Half of my project is being developed as serverless microservices that add to the big monolith application.

I've basically implemented a "monorepo CI/CD" which mostly works fine for our needs (with some limitations/bugs in GitLab CI due to the monorepo design).

For the most part we probably don't get so many functions bundled together, thus avoiding the deployment limitations referred to above.

Only one serverless app is reaching any kind of limits (200 resources per CloudFormation template, if I remember correctly).

https://pedrogomes.medium.com/gitlab-ci-cd-serverless-monore...


Yeah that makes sense. That's basically how Seed started. Thanks for sharing.

What we started noticing with teams that we were talking to (and in our own experience) was that the build process started limiting our architecture choices. For example, we want functions packaged individually because it reduces cold starts. But because the builds took so long, we had to make a trade-off. And that didn't make sense to us.


This looks great! I've been using Serverless Framework for a project and have not been too satisfied with the experience. Could you explain the integration with that framework a little more? I see the two options for services with Seed are the Serverless Framework or Serverless Stack (which I have no experience with, but looks like a compelling alternative). Is Seed just compatible with existing Serverless Framework yml configurations, or does it integrate with your Serverless Framework account somehow? I see you offer an integration with Serverless Pro, which confused me as this appeared (to me) to be a full replacement for Serverless Framework.


Yeah, so if you have a Serverless Framework (the open source project) app in a git repo, you can add that to Seed, and it'll deploy it for you to the environments you configure on Seed.

It doesn't connect to your Serverless Pro (their SaaS offering) account. Serverless Pro offers some similar features to Seed but most of our users just use Seed.

If you want to deploy using Seed, while viewing logs or metrics on Serverless Pro, you'll need to follow those docs you mentioned to create an access key (https://seed.run/docs/integrating-with-serverless-pro). We should clarify the integration in our docs to make it less confusing.

I hope that makes sense!


Just made a quick edit to that doc, I hope it helps:

https://github.com/seed-run/homepage/commit/e5fdd3fb41fedb2b...


I am curious what made you unsatisfied. As a member of the Serverless team I'd love to hear the feedback so we can potentially improve the experience for you and others.


Wow! Congrats to you, Jay and Frank. I've been a fan of your work on both Seed.run & Serverless Stack for a while. Best of luck, and I'm excited to see Seed grow :)


Thank you! I really appreciate the support!


Thank you for your service. I just registered. I previously used the AWS CI/CD tools to do this. They integrated well for my simple use case. Can I trigger a deploy daily?


Currently, there isn't a way to do it directly on Seed. It can be triggered using a git push.

But we've got a CLI in the works, and that should let you control when you want to trigger a deploy.


Looks really neat! I will try it out for my next ~/tmp weekend project. Meanwhile, I noticed that the link to the C# project is broken on this page https://seed.run/docs/adding-dotnet-core-projects . I wanted to try to propose the change but I couldn't find the repository on your GitHub.


Oops sorry about that. I appreciate you taking the time to try and make a change. Does this link work for you?

https://github.com/seed-run/homepage/edit/master/_docs/addin...


Yep, I could fork it without any problems


For some reason that repo is internally set to private, I'll check and see why that is.


Well done, and thanks for Serverless Stack! Awesome tutorial!

I completed it and it was excellent, and a lot of fun.

The only thing I would say is that a section on public user uploads would be amazing (e.g. avatars) as the perms and CDK stuff is a bit knotty for that (I eventually figured it out but it took a bit of trial and error).


Thank you for the kind words!

That's a good point on the avatars idea. We'll need to create a version of the notes app where there's a public aspect to it, so maybe being able to publish notes.


How does this compare to something like AWS CodePipeline with CDK (https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.ht...)?


Most of my post was about Serverless Framework but we support CDK as well (with SST https://github.com/serverless-stack/serverless-stack).

A couple of things that we do for CDK that are different from CodePipeline:

- Setting up environments is really easy, we support PR and branch based workflows out of the box.

- We automatically cache dependencies to speed up builds.

- And we internally use Lambda to deploy CDK apps, which means it's basically free on Seed (https://seed.run/docs/adding-a-cdk-app#pricing-limits)!


It's been a while since I touched anything serverless, but it looks like Seed supports incremental deployments, which was a major pain point when I last worked with the Serverless Framework (an open source library for deploying Lambdas, one of the first ones). Nice job team!


Thank you! We do these checks on the service level (https://seed.run/docs/incremental-service-deploys) and the Lambda level too (https://seed.run/docs/incremental-lambda-deploys).


Just to add, if you have any questions about Seed or need some help with your serverless apps, send me an email: jay@seed.run or just put something on my calendar: https://calendly.com/jayair


This looks great. If you added support for easy/integrated static site hosting, this would be a compelling alternative to Vercel and Netlify. Any plans for that?


While you can deploy static websites as a part of your stack with Serverless Framework and CDK, Seed isn't doing anything specific for it.

Under this scenario, the static site is hosted on the user's AWS account. Is that what you mean when you are thinking about an alternative?

We've talked about this internally, so I'm curious to hear about your use case.


How does Seed compare to AWS CDK Pipelines?

https://aws.amazon.com/blogs/developer/cdk-pipelines-continu...

I know if I go the route of CDK Pipelines I will need to implement my CI/CD pipeline on my own using CDK. I want to know what the other advantages of Seed are.


I talked a little bit about it here: https://news.ycombinator.com/item?id=25838954

But the big one for CDK is that it's faster and basically free on Seed.

Feel free to get in touch if you want more details! jay@seed.run


Do you plan on supporting Google Cloud Functions?


It's definitely on our roadmap, a little bit further down the road.

But I'd love to connect and learn more about the specifics of Google Cloud.

jay@seed.run


Do you have any plans to open source this?

I'm thinking about lock-in -- what if you suddenly deprecated the product? Will my deploys suddenly break?

Are you planning to maintain 1:1 feature parity with Serverless/CDK long-term? Could I fall back to those deployment tools, albeit slower, worst case?

Either way, this is awesome and congrats on the launch!


Yeah we've definitely talked about open sourcing this and it is a long term goal of ours. I think if we were starting over, we would've open sourced it right from the beginning.

> Could I fall back to those deployment tools, albeit slower, worst case?

Yup, that's how we've designed Seed. We deploy it on your behalf. So if we were to go down, you could still deploy your app just as before.


Thank you for making serverless easy and accessible! I really enjoy using Seed for some of my projects.


I really appreciate the kind words and support!


Is this faster than using the deploy function command in Serverless? Looks very interesting.


It's similar in speed on a per-function basis, but the main difference is that we figure out which ones need to be updated and do them all concurrently. Plus we handle any failures and retry them.


Yes that does sound good.

I'm working with Lambda with some great devs, but we are all spending way too much time figuring out how AWS is structured and managed instead of actually writing code.


Yeah that sounds familiar. We went through a lot of that.

Feel free to get in touch if you need any help jay@seed.run


This looks really great! But how do you differ from/complement what serverless.com offers?


Thank you! They offer something similar but there are a couple of differences.

- The big one is the focus on speed, the incremental deploys at a service and Lambda function level (https://seed.run/blog/speeding-up-serverless-deployments-100...).

- We are also focused on reliability, so setting up and tearing down environments while handling all the AWS rate-limit errors, or other timing-related errors. We do this by connecting directly to CloudFormation.

- We also allow you to configure a deployment order for your services (https://seed.run/docs/configuring-deploy-phases).

On the alerts, logs, and metrics front, the critical difference is that we query directly against your CloudWatch Insights or subscribe to your CloudWatch log groups, instead of ingesting all your logs on our side. This allows us to:

- Provide real-time Lambda alerts basically for free (https://seed.run/docs/issues-and-alerts)

- And you don't need to configure anything on your side. You connect your AWS credentials and it works out of the box.

As always feel free to reach out if you need further details jay@seed.run


Awesome, thanks for the info! Do you have more info on what you mean by "We do this by connecting directly to CloudFormation"?


Yeah, for sure. Previously, we relied on the Serverless Framework CLI output. But now we directly monitor the CloudFormation events to figure out the root cause of a failure, then decide if we should retry the deployment, and how long to wait before retrying.
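As a simplified sketch of the idea (not our exact internals):

    // Read the stack's CloudFormation events directly, find the first
    // failure, and decide whether it's worth retrying.
    const AWS = require("aws-sdk");
    const cfn = new AWS.CloudFormation();

    async function shouldRetry(stackName) {
      const { StackEvents } = await cfn
        .describeStackEvents({ StackName: stackName })
        .promise();
      const failure = StackEvents.find((e) =>
        ["CREATE_FAILED", "UPDATE_FAILED"].includes(e.ResourceStatus)
      );
      if (!failure) return false;
      // Throttling is transient and retryable; a template error is not.
      return /Rate exceeded|Throttling/i.test(failure.ResourceStatusReason || "");
    }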


Any plans for supporting BAA/HIPAA companies? Congrats on the launch!


Yup it is on our roadmap. Feel free to get in touch if you'd like to talk further jay@seed.run


Nice. Very real problem. I'm excited to check this out.


Awesome, would love to hear what you think once you get a chance to take a look. jay@seed.run


> "Are you building a Serverless Framework app on AWS?"

Nah, my enterprise customers are mostly on Azure. But yeah, functions are useful sometimes.


Congrats on the launch!


Thank you!


What is the benefit of this over, let's say, GitLab CI?


We've built custom infrastructure just for Serverless Framework and CDK. So it's a lot faster (thanks to the incremental deploys), reliable (we handle failures and retries by connecting directly to your CloudFormation events), and there's a dashboard to set up your environments and deployment order for your services.


Cool product! Any plans to support Azure Functions?


Thanks! We do but it's a bit further down the roadmap.


[flagged]


Just in case anyone's wondering, I made some fine-grained edits to the text above after it was posted, and that included some de-folkification. This was before I saw your comment.

Since I'm now going to get asked wtf I'm doing mucking with people's text:

I help YC startups with their Launch HNs. Mainly I coach them to take out anything that sounds like marketing or PR, and to add things that the community tends to find interesting. Usually we'll agree on a final draft by email, but sometimes we skip the fine-tuning step, and in that case I sometimes do it live, because I'm a compulsive editor. Part of the intention is to sand off sharp edges that might get things snagged in offtopicness, so I'm glad to see your comment as a sort of natural experiment demonstrating that this is useful :)

By the way, I'm happy to help anyone else with this too. That is, if any of you want to present your startup or some other major piece of work to HN, in the style of this post and https://news.ycombinator.com/launches, you can email a draft to hn@ycombinator.com and I'll try to look it over and give you feedback. The only catch is that I can't always reply quickly, and my worst case latency is abominable because the HN inbox undergoes periodic overwhelm. Still, it does mostly work. If you want to do this, you can look at the advice I give YC startups here: https://news.ycombinator.com/yli.html. The logistical aspects only apply to YC startups, but the communication aspects are more important and they are universal.


Thank you dang! I appreciate your service, even after having my comment above flagged :)

Is there a way to let you know about similar sharp edges, in order to avoid writing offtopic comments like mine?


Email us at hn@ycombinator.com. That's the reliable way to get in touch and we always appreciate getting a heads-up about something on the site, since there's far too much content for us to see it all.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


Honestly, it's a weird quirk I developed as I started writing more publicly. Hadn't thought too much of it!


I noticed this too; I personally prefer the usage of "folx".


At least "folks" is a real word.

"Folx" is unquestionably the result of the language war: https://newdiscourses.com/tftw-folx/


What's wrong with trying to include other marginalized groups, including people of color and trans people?


It's wrong to make everything about race and gender. It's racist and sexist to do so.


Come out of the rabbit hole! Most people are not SJWs.


> most people are not SJWs

This is true. However, a few active people are enough to poison the discussion. Examples:

* erasure of gendered language from source code comments

* "master" branch controversy

* BLM banners in numerous open source projects

* Python PEP8 English controversy: https://github.com/python/peps/pull/1470/commits/89b72cf7261...



