srd's comments | Hacker News

While the concept sounds sound, so far I've never actually seen this setup in the real world. Do you happen to have any resources on how to properly set up such a step in a CI tool?


Check out software like Twistlock and Sonatype, and I think Tenable has a scanner as well that integrates into the pipeline. If you are not using Sonatype to build, you can find good support for this in Jenkins or TeamCity via a plugin. (Full disclosure: I work in this area.)


Using the opportunity to ask other awesome users: How do you configure floating popup windows so that they display decently? The Skype user profile windows, or the firefox ssl certificate warning dialog, are the kind of windows I'm talking about. The few other awesome users I know don't have an answer for this either. This is the last outstanding configuration issue I have with awesome.



I had this back when I used awesome (3.5) https://github.com/gdamjan/dotfiles/blob/master/User/_config...


I have a question about storing the encryption keys. How would one actually securely store them and distribute them among the application servers in a cloud environment?

I don't buy into the 12-factor-application way of storing sensitive data in an environment variable ("ps ae" and a local intruder has the data).
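To make that concrete, a tiny sketch (the variable name and binary name are made up):

    # hypothetical service started with the key in its environment
    DB_ENCRYPTION_KEY=s3cret ./myapp &

    # anyone running as the same user (or root) can read it back out
    ps axe | grep DB_ENCRYPTION_KEY
    tr '\0' '\n' < /proc/<pid>/environ    # environment of a given pid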

Storing it in a secured file on the server requires the file to be distributed by the provisioning service, thus implying the key being stored in a repository (one that just isn't your application code, but a repository nonetheless).

Using an API service with client certificates doesn't really help either, because if the application code can access the required configuration and certificates, so can an intruder with shell access (since the intruder most likely has the user permissions of the running application before doing a privilege escalation attack).

I haven't seen an answer to that question that really satisfied me in the past. Does anyone have a battle tested method for storing database encryption keys?


> Using an API service with client certificates doesn't really help either, because if the application code can access the required configuration and certificates, so can an intruder with shell access (since the intruder most likely has the user permissions of the running application before doing a privilege escalation attack).

The problem here is that if your application requires the use of encryption keys, and the user which runs this application gets compromised, you have a problem anyway - since like most applications, yours probably also doesn't care about the key's security once it's in memory. If they get that far, the only thing you can do is replace/revoke those keys and take the hit.

In more secure non-cloud setups, encryption/decryption is done by something like an HSM, a box with only one interface (usually PKCS#11) which can be used to encrypt, decrypt and sign stuff, and you never see the keys. There are software HSMs, and you could try to apply the same principle in the cloud, where you have an isolated box running only the very well protected and audited soft HSM.

But I'm not sure the cost of learning and maintaining such a system is worth it for most situations, where I would instead use an API service like HashiCorp's Vault. Most compromises of keys and secrets don't happen on your servers, but on your own or some developer's/user's work machine. How much crap is exchanged over email, Dropbox links, Slack, Skype, ...? Keeping keys out of the hands of users and eliminating the need for them to have the keys at all is higher on my priority list.
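A rough sketch of that Vault approach, assuming the KV v2 secrets engine mounted at secret/ (the path and field name here are made up):

    # the application host authenticates to Vault first (AppRole, cloud IAM, ...)
    export VAULT_ADDR=https://vault.internal:8200
    export VAULT_TOKEN=...          # short-lived token from the auth step

    # fetch the key at runtime instead of baking it into config, env or a repo
    vault kv get -field=db_encryption_key secret/myapp/prod

    # the same thing over the raw HTTP API (KV v2 path layout)
    curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
      "$VAULT_ADDR/v1/secret/data/myapp/prod" | jq -r .data.data.db_encryption_key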


Are you talking about the NIC interrupts or all hardware interrupts? If the latter - how do you configure that?


It wouldn't matter because all your interrupts would be handled by a single core, becoming a bottleneck.

It also doesn't matter because heavy I/O will take up both system and user CPU, and the 90/10+ split (is the app even multi-core??) puts you into full utilization, which is fine for bulk jobs but terrible for high-performance requests. Even a single machine at 100% can (in unfortunate circumstances) cause domino effects. Better to build in excess capacity as a buffer for unexpected spikes, which means managing your clusters to not stack jobs that could compete for resources, but also not stack jobs that could unintentionally starve other jobs - this requires intelligent load balancing that's application- and job-specific. Or a cluster dedicated to specific jobs (which they have, ironically).


You can assign different interrupts to different cores. That's another advanced optimization.

e.g. One core per network card.
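A minimal sketch of what that looks like on Linux (the IRQ number and interface name are examples and will differ per machine):

    # find the IRQs used by a given NIC
    grep eth0 /proc/interrupts

    # pin IRQ 45 to CPU 2 (the value is a hex bitmap of allowed CPUs)
    echo 4 > /proc/irq/45/smp_affinity

    # irqbalance will happily move it back, so stop or configure it first
    systemctl stop irqbalance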


And you can go one step further using receive flow steering[1] or transmit flow steering. Most modern performance-oriented network cards (Intel 10G, Solarflare 10G, anything from Mellanox, Chelsio, etc.) expose multiple receive queues, which show up as separate entries in /proc/interrupts (named in the right-hand column). You can distribute said rx/tx queues across different cores, ideally on the same socket as the application (but potentially a different core), for minimum latency.

Linux has some really impressive knobs[2] for optimizing these sorts of weird workloads.

[1] https://lwn.net/Articles/382428/

[2] https://www.kernel.org/doc/Documentation/networking/scaling....
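As a rough illustration of the knobs from [2] (interface name, queue names and CPU masks are just examples):

    # how many hardware queues the driver exposes / how many to use
    ethtool -l eth0
    ethtool -L eth0 combined 8       # not all drivers support changing this

    # spread receive processing for the first rx queue over CPUs 0-3 (RPS)
    echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

    # steer transmits for the first tx queue to CPUs 4-7 (XPS)
    echo f0 > /sys/class/net/eth0/queues/tx-0/xps_cpus

    # the per-queue IRQs show up as separate lines, e.g. eth0-TxRx-0, eth0-TxRx-1
    grep eth0 /proc/interrupts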


Except sometimes later is too late. See climate change.


I'm a bit confused. Installing this on archlinux from the AUR I get version 3.3.1; however the post itself seems to be for 3.0beta (judging from the announcement halfway down the page). Which is the actual current version?


Version 3.3.1 is the latest release. The "3.0" in the post is just used to differentiate it from the existing Chrome-based application (1.0 - 2.0)


That depends on the implementation of the queue. Most queues have a "failed/retry" concept, where they mark failed jobs to be retried at some set point in the future. So one failed job does not hold up your entire queue processing.


Amazon.de just introduced a new translated UI for English. Click the globe icon on the menu bar, next to "My Account", and select English. I haven't tried it myself, so I can't comment on the quality of the translation.


It's good enough that I occasionally slip up and forget whether I'm on the US or DE site, since I check prices at both for used books and use the US site when buying stuff for people back home - sometimes, the only difference in the sites is the currency of the prices.


I guess this is because there are so many more exposed levels of git to learn. The introductory chapters in git books about "this is how a commit hash is computed" and "it's all about the content, not the filename" are good if you want to know what's going on at the lower levels, but for git newbies they're more confusing than helpful.

The git concepts you need to know that are non-obvious are:

- there is no special branch; "master" is just a default name
- there is no special "central" repository on a technical level
- all git commits have one or more parents, but they don't know what branch they're from (i.e. reverting a merge can be a pain - see the sketch below)
- a git commit always represents an entire project tree; you can't version individual files like subversion does
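To make the merge-parent point concrete, a small sketch (branch and commit names are made up):

    # a merge commit records two (or more) parents, but no branch name
    git log -1 --pretty='%H %P' HEAD

    # reverting a merge therefore has to be told which parent is the "mainline"
    git revert -m 1 <merge-commit>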

Unless you're planning on spelunking into the plumbing commands (i.e. the low-level stuff exposed as shell commands), a small part of the porcelain commands (i.e. what the end user should use) is more than enough to work with git and understand the "how I need to work with git" flow:

The commands I use 95% of the time are:

- git init
- git add
- git commit (and git commit --amend when I didn't pay attention)
- git rm
- git checkout
- git branch
- git pull
- git push

and most without any command-line options. You should be able to learn these command invocations within 4 hours, especially if you've used an SCM before. (And the git <command> --help pages are actually well written, once you know what you want to do.)

Sure, it's nice to know about git add --patch, or git rebase --interactive, but do you really need them to work well with git? I don't think so. If you're inclined to learn more about your tools, sure, go ahead. But that's something that comes with years of use and doesn't have to happen up front.


I find that knowing how it works is the only way the commands and flags make any sense. I am constantly seeing people around me fumble through git trying to get by on a handful of commands they've memorized and just live with the fact that they need to re-clone their repo every now and then.

These people scare me. How can they be comfortable not even knowing what they are telling git to do?

I taught myself how the DAG worked, then the low level commands to manipulate it. Now when I read the docs I get nice surprises like pre-built commands for doing the series of operations I plotted out in my head (most recently 'git merge --no-commit' instead of read-tree, update-index, write-tree)...
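Roughly the plumbing sequence that git merge --no-commit replaces, for the clean (no-conflict) case - a sketch from memory, not exactly what git runs internally; 'topic' is a made-up branch name:

    # 3-way merge of the two trees into the index and working tree
    base=$(git merge-base HEAD topic)
    git read-tree -m -u "$base" HEAD topic

    # write the merged index out as a tree object
    # (fails if there are unresolved conflicts; those need git merge-index & co.)
    tree=$(git write-tree)

    # create the merge commit by hand, with both parents, and move HEAD to it
    commit=$(echo "merge topic" | git commit-tree "$tree" \
      -p "$(git rev-parse HEAD)" -p "$(git rev-parse topic)")
    git update-ref HEAD "$commit"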

Oh god, I've become that guy in college who refuses to use a CRC library until he understands the proof.

But seriously, I'm genuinely amazed people can use git at all without understanding it completely. The mnemonics make zero sense without background, and the operations are completely arbitrary looking.


> I'm genuinely amazed people can use git at all without understanding it completely. The mnemonics make zero sense without background, and the operations are completely arbitrary looking.

Remember, people use most things every day without understanding them completely. This is a huge barrier to entry and effective use of git; you can't require people to spend weeks learning it before they commit a single line of code. So yes, people learn sets of runes that work, and know that if you step off the path there's no easy way of working out what happened, let alone undoing it, without blowing away the local copy and going back to the master.


> [...] you can't require people to spend weeks learning it before they commit a single line of code.

But you can expect them to spend two hours understanding the small number of fundamental objects that git works with, and maybe another quarter for the basic operations (fetch, push, merge).
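For example, a few minutes of poking at the objects directly makes the rest fall into place (HEAD is just a convenient example commit here):

    git cat-file -t HEAD            # prints "commit"
    git cat-file -p HEAD            # shows the tree id, parent(s), author, message
    git cat-file -p 'HEAD^{tree}'   # the tree: blobs (file contents) and sub-trees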


The thing is, a lot of git commands are really, really subtle. And almost all of them don't do what you usually want without a couple of switches.

Personally, I'm more of a fan of darcs. It has a really good UI - defaulting to interactive use where it shows you what you're actually doing - and instead of branches you just clone into another folder, reducing a lot of cognitive load around them. It integrates with email as a primary use case - if you have a mailing list and an email client, you have a pretty good replacement for github pull requests.


I only delete everything and start again if I have made the mistake of making any changes to some source code. After a while you learn not to change things.

Lots of projects using git means that I get to spend more time doing things other than coding.


My top git commands from my command-line history are git status, git commit -a, git diff, git gui, and gitk --all. There are a few git pull/pushes in there, but these 5 dominate. I hardly ever use git add, and over the years I've pretty much stopped using the index altogether. The GUI tools are my way of doing the fancy stuff. They work great and expose a lot of the more complex features without having to remember all the command-line options.


That is the chicken-and-egg problem with git: to understand it well, you need to understand the lower-level algorithms, because that is how git was developed in the first place. But most people will complain that it is hard to learn this low-level interface, so they will always get stuck when trying to understand the high-level operations.

It is in a sense the same with C programming. C is extremely easy to understand if you have a good grasp of machine architecture and assembly language. Everything makes sense. But if you look at it from the point of view of a high level language, you will always be amazed at why it does things in that particular way.


As a die-hard vim user I have to ask: Could you elaborate on your decision? What made Atom more appealing to you, and what features does it have that vim doesn't (or does have, but not as easily employable)?

