
So much this.


It looks really beautiful. Well done!

We use react-admin (https://github.com/marmelab/react-admin/) and it was a total lifesaver for creating a back office for our app in less than 2 weeks.

Hope this takes off and you can provide some code examples showing how to use this project for that purpose.


My gripe with react-admin, and to a lesser extent Semantic UI, is their idea of also writing a full REST API layer. Just give me a good front-end tool and let me handle my data layer!


You should probably just say it's a tutorial on how to install a PWA (i.e. how to click the install button).


This issue is from 2016...


Traefik is really cool for most use cases. I tried to use it in production, and here are some of the shortcomings I found (the first two issues are on GitHub):

- If you use DNS for a blue-green deployment, Traefik does not honor DNS TTLs (because it relies on Go's DNS resolution, which may do some caching), so when you do the switch you might still land on the "old" environment

- A bug in one of the cool features: serving an error page when your backend returns certain HTTP codes

- Some configuration options are not documented (but are easily found using GitHub search)

I still love this software and I will keep looking at it.


By default, Go compiles binaries linked against libc and uses the system DNS resolver. It does re-implement a DNS resolver, but that one is only used if CGO is disabled at compile time.


That's only half right. It does link against libc, but the default behavior on Linux is to use the Go resolver unless certain conditions are met.


Use Go for what? DNS resolution? The default behavior is to use the system DNS resolver. The Go resolver will be used if the system resolver is not available (e.g. if the binary is compiled as pure Go) or if the net.Resolver has the PreferGo flag set (which is false by default).

https://github.com/golang/go/blob/541f9c0345d4ec52d9f4be5913...


My company has 2 big projects running on HHVM.

Most of the libraries we used to start our projects (Symfony and MongoDB, to name the biggest) dropped HHVM compatibility in the middle of development. I can tell you this was a good lesson for me: never bet on a technology that big vendors are dropping support for.

Having to debug a production error in an HHVM library (whose support had been dropped), find a fix, and then see that it had been fixed in the official PHP library 6 months earlier hurt a lot...

Another point not in favor of HHVM: it's maintained solely by Facebook, the roadmap is known only to them, and the number of HHVM alternatives to big libraries is near zero.

We are gradually moving away from HHVM (and PHP in general) in favor of NodeJS and TypeScript.


I've been working with GitLab CI for the last year. Here is some of my feedback:

- 6 months ago we seriously considered moving away because it was really unstable (even when running on private runners), but now it's a lot smoother

- with private runners you can have a very powerful CI without having to manage a master (as with Jenkins), for a fraction of the cost (runners with docker-machine on spot instances)

- beware that if your CI flow is more complex than a simple pipeline that builds and deploys your project (we have a project for our code, which then triggers a project for end-to-end tests, which then triggers a deploy to our environment), you will need a lot of boilerplate (you will need to manually manage artifacts if they need to be shared between jobs)

- variables from a triggered pipeline should be available through the API and made more visible in the UI

- we do not use Kubernetes, so everything CD-related is off the table for us (the environment and monitoring tabs are useless)

- DO NOT USE THE BUILT-IN CACHE, it's super slow and will fail unexpectedly (simply cp to S3 yourself and it will never fail)

- IF YOU USE THE BUILT-IN CACHE, parallelism will be hard (you cannot populate part of the cache from one job, another part from another job, and then use the result of both caches in the next step)

- triggers are weird: it's a curl to an API endpoint, but it does not use the normal auth mechanism and it answers with a useless JSON payload (please add the project id, variables, etc. to the result of the trigger; it's a must-have for anyone who needs to parse the output) - see the sketch just after this list

- the GitLab API is top notch, except for the CI part...

- be ready to restart some jobs 2-3 times if GitLab is deploying a new version ;)

- be ready for some random errors that can be fixed with a retry

- it will seem like a good idea to run gitlab-runner on every laptop of your team to reduce cost. DO NOT DO THAT: if your team has more than 2 people, the person in charge of keeping the CI running (me) will be making you restart your Docker daemon, delete a specific image, restart gitlab-runner, etc... invest 1 day to set up docker-machine on spot instances instead

- please show in some way when a job has triggered another one (maybe a section in the YAML, or even better, let us populate an env var with a link to the triggered pipeline, or anything)

- design your pipeline so that if a part fails you can restart it without breaking everything (I'm looking at you, Terraform)
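
To give an idea of the trigger point above, here is a minimal sketch of that boilerplate as a CI job (the project id 1234, the host gitlab.example.com and the E2E_TRIGGER_TOKEN secret variable are placeholders, not values from a real setup):

    trigger-e2e:
      stage: trigger
      script:
        # POST to the downstream project's pipeline trigger endpoint;
        # each variables[...] form field becomes a CI variable in the triggered pipeline
        - >
          curl --request POST
          --form "token=$E2E_TRIGGER_TOKEN"
          --form "ref=master"
          --form "variables[UPSTREAM_SHA]=$CI_COMMIT_SHA"
          "https://gitlab.example.com/api/v4/projects/1234/trigger/pipeline"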

This list seems really long, but I have worked with Jenkins, and even though Jenkins is more stable, the steady improvements and additions to GitLab CI still make it my first choice for my needs.


> - IF YOU USE THE BUILT-IN CACHE, parallelism will be hard (you cannot populate part of the cache from one job, another part from another job, and then use the result of both caches in the next step)

You can use the `artifacts` and `dependencies` combo to control which artifacts will be downloaded into a particular job.

For instance,

    bundle-install:
      stage: build
      script: ...
      artifacts:
        paths: [bin/*]

    yarn-install:
      stage: build
      script: ...
      artifacts:
        paths: [bin/*]

    rspec:
      stage: test
      script: ...
      dependencies: [bundle-install] # This downloads only the `bundle-install` artifacts into this job

    karma:
      stage: test
      script: ...
      dependencies: [yarn-install] # This downloads only the `yarn-install` artifacts into this job

    eslint:
      stage: test
      script: ...
      dependencies: [] # This downloads nothing

https://docs.gitlab.com/ee/ci/yaml/#dependencies explains how it works


> it will seem like a good idea to run gitlab-runner on every laptop of your team to reduce cost.

Will it?!


Agreed, that's a crazy way to try to reduce cost


Reminds me of the Xcode built-in distcc thing they had back then.


GitLab Runner is really easy to install on Linux. At work, I run GitLab CI jobs on my laptop: the main reason was that the shared runners (provided by my company) were unstable and full. Our GitLab instance now has ~20 shared runners (used by dozens of teams) and they are a lot more stable. I still use my laptop to avoid waiting forever for the Docker images to be downloaded.


> - we do not use Kubernetes, so everything CD-related is off the table for us (the environment and monitoring tabs are useless)

Environments can be useful even without K8S integration. They're useful, e.g., for the review apps feature (https://docs.gitlab.com/ee/ci/review_apps/index.html), which doesn't need to be hosted on K8S. Look at https://gitlab.com/gitlab-org/gitlab-runner/environments, where we're using environments to track our releases, e.g. the download pages hosted on AWS S3. Another example is https://gitlab.com/gitlab-com/www-gitlab-com/environments - here again, our about.gitlab.com website has each MR deployed as a review app without any use of K8S, but the environments feature is used to track all deployments, link them from the MR page, and automatically delete review deployments when the MR is merged or closed.
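
For illustration, a minimal sketch of a review-app job that uses environments without K8S. The deploy and cleanup scripts and the domain are hypothetical placeholders; only the `environment` keywords come from GitLab CI itself:

    deploy-review:
      stage: deploy
      script:
        - ./deploy_review_to_s3.sh  # hypothetical script that syncs the build to an S3 bucket
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        url: https://$CI_COMMIT_REF_SLUG.review.example.com
        on_stop: stop-review

    stop-review:
      stage: deploy
      script:
        - ./remove_review_from_s3.sh  # hypothetical cleanup script
      when: manual
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        action: stop

The `stop-review` job is what GitLab runs to tear the environment down, which is how the automatic deletion of review deployments mentioned above is wired up.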

> - DO NOT USE THE BUILT-IN CACHE, it's super slow and will fail unexpectedly (simply cp to S3 yourself and it will never fail)

Are you referring to the cache configured for Shared Runners on GitLab.com, or to the cache feature in general?

I have to agree that we had many strange problems with the cache for Shared Runners on GitLab.com in the past. Even now the feature does not always work as we would like, and we're already thinking about how we could improve it: https://gitlab.com/gitlab-com/infrastructure/issues/4565.

But in general, I can't agree that the feature doesn't work and should not be used. Most of the time we had no problems using the distributed cache with S3. When the cache servers are stable, the feature just works. I also can't agree that a manual copy to S3 will be faster than the copy to S3 made by the Runner - in the end, both are simple HTTP PUT requests sent to the chosen S3 server.

Also remember that in some cases it's better to use the local cache instead of the remote cache feature. With files stored locally there isn't much that can go wrong, and it's definitely the fastest solution (however, it can't be used for all workflows).

> - IF YOU USE THE BUILT-IN CACHE, parallelism will be hard (you cannot populate part of the cache from one job, another part from another job, and then use the result of both caches in the next step)

Well, it depends :)

Our cache feature was designed with specific workflows in mind. The priority is to allow a particular job to be sped up (but the job should be configured in such a way that it will still work even if the cache is not available). We've made it possible to re-use the cache between parallel jobs, but as usual with more complex designs, it's hard to handle all cases.

But what it was not designed for, and what confuses new users from time to time, is passing things from one job to another. This is where the artifacts feature should be used. The cache feature was simply never designed for this, and we were always loud about that :)

But that doesn't mean the cache can't be used with parallel pipelines. Using configuration features like `key` and/or `policy`, and configuring them properly for different jobs, it's possible to prepare the cache in one job and then re-use it for many parallel jobs in the next stages. This is exactly what's done for the GitLab CE and GitLab EE projects: https://gitlab.com/gitlab-org/gitlab-ce/blob/v11.2.0/.gitlab.... Look for the `default-cache`, `push-cache` and `pull-cache` YAML anchors and check how they are used. In GitLab CE's pipeline, the `setup-test-env` job calls `bundle install` and all downloaded gems are then turned into a cache. In the next stage, where all the tests are executed, the same cache is downloaded, which speeds up the `bundle install` executed in all test jobs.
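
To make that concrete, here is a minimal sketch of the push/pull pattern (the job names, stage names and cache key below are illustrative, not the exact configuration from the linked file):

    setup-test-env:
      stage: prepare
      script:
        - bundle install --path vendor/ruby
      cache:
        key: ruby-gems
        paths:
          - vendor/ruby
        policy: push  # only upload the cache, never download it

    rspec:
      stage: test
      script:
        - bundle install --path vendor/ruby  # fast, the gems come from the cache
        - bundle exec rspec
      cache:
        key: ruby-gems
        paths:
          - vendor/ruby
        policy: pull  # only download the cache prepared by setup-test-env

Every parallel test job can re-use the same `pull` block, so the gems are restored from the cache in each job but fully installed only once, in `setup-test-env`.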

So in the end, it depends on what you're expecting:

- If you want to pass things from one job to another: it's not that the cache doesn't work. You should just use artifacts for this, since the cache was never designed to handle such a workflow.

- If your pipeline is not too complicated, then configuring the cache for parallel usage should not be a big problem.

- If you have a complex pipeline... well, there will definitely be cases where our cache feature won't be very useful. In those cases one needs to choose between refactoring the pipeline to fit how the cache works and finding one's own way to speed up jobs. But I'd say that in most cases it's possible to configure the pipeline in a way that lets it use the cache.


So I am not the only one with slow deployments because rebuilding AMIs and re-provisioning everything takes quite some time.

I also had a difficult time explaining that a 30-minute prod deploy (image creation, blue-green deploy) is normal for this kind of infrastructure... Did you face the same thing?


Rebuilding AMIs is not something you should be doing on every deployment. It sounds like you are on AWS, so use proper containers on ECS or EBS. Docker itself caches pretty aggressively. Decompose your projects as well, so that the independent parts build and deploy without rebuilding everything else in the project that hasn't changed.

At the end of the day, if you're doing continuous deployment, a commit should rebuild only what it touches. We have 4-minute deployments + 1.5-minute tests, and I definitely don't think we're optimizing aggressively.


Things get awkward when versioning standards require all application components to have the same version, because several version numbers floating around is cognitive overhead that engineers can’t afford in many situations. With more than about 10 components I’ve usually seen it turn into “deploy 10 services that have 10 changes, 9 of which consist of one commit that bumps a version number up.”

Many places still keep producing very stateful software (sometimes even very much by choice) that is better off managed through Puppet / Chef rather than an immutable containerized approach. If your software needs an hour and a half to shut down, for example, you have to get a bit creative with your deployment strategies.

