In that it adds complexity to have two tools instead of one. And in the wild it will add risk of mistakes and wasted time due to mix-ups, since they overlap and have incompatibilities. I’d rather give a team one tool. The only reason this is two tools is that they were independent projects, but they really should be one conceptually.
What you're describing is pretty much contrary to the philosophy behind podman, buildah, skopeo, etc. though which is to have fairly narrowly scoped tools that serve a specific purpose rather than a big application that does everything.
I may not fully understand what the tools can do, but the scoping seems overly narrow. Also, in that Unix philosophy you don’t duplicate functionality between tools, slightly incompatibly, to the point that you need paragraphs and tables to explain when to use which one.
Buildah specializes in building OCI images. Podman allows you to pull/run/modify containers created from OCI images. These are distinctly separate tasks, and it seems a lot more straightforward to me than having a daemon (always running, as root...) that handles both tasks.
Podman does allow you to build containers, but my suspicion is it’s intended for easier transitioning from docker (you can alias docker=podman and it just works). Also the build functionality is basically an alias for “buildah bud” so it’s more of a shortcut to another application than re-implementing the same functionality.
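To make that shortcut concrete, here is a sketch of the drop-in usage described above; it assumes podman and buildah are installed, and the image name is a placeholder:

```
# Existing docker muscle memory works unchanged.
alias docker=podman

# Per the comment above, these two are effectively the same operation:
# podman's build support is a thin wrapper over buildah's
# "bud" (build-using-dockerfile) functionality.
docker build -t myapp:latest .
buildah bud -t myapp:latest .
```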
I think that explanation is a little clearer, however the repos and the article don’t make this clear and the fact that podman also builds images makes it less crisp.
> Some of the commands between the two projects overlap significantly but in some cases have slightly different behaviors. The following table illustrates the commands with some overlap between the projects.
And this makes no sense at all if you’re purposely designing a tool.
See the table of the subtle differences: why, for example, does podman create images that aren’t compatible? Regardless of what Docker does, if you make tools that are each for a specific use case, why blur the lines?
I don’t think you’re reading the article, it says:
> Each project has a separate internal representation of a container that is not shared. Because of this you cannot see Podman containers from within Buildah or vice versa.
> Mounts a Podman container. Does not work on a Buildah container.
^ This here is one of the problems; my interpretation is that the containers are not compatible.
The tool feature sets overlap with subtle differences according to the article, and that blurs the line on what each one is for. They need to pick a direction: if you’re making a build tool and a runtime, then the build tool must only build and the runtime must only run, or just make one tool. Intentional and truthful design (meaning the words mean only what they say) limits the chaos that happens in the wild, and these tools aren’t doing that. It may seem clear to you, but the article is literally about how it’s not clear and how they overlap confusingly. So you’re going to come across a mess at some point due to this mistake. Alternatively, they could explain their rationale for the overlap, but they don’t.
The difference is that buildah's only job in the world is to build OCI images. Podman is more about running containers, so its containers are a lot more generalized.
Buildah containers and buildah run are far different in concept than podman run: buildah run == Dockerfile RUN. So we don't support a lot of the additional commands that are available for podman run, and we have decided to keep the format different. Podman has a large database that we felt would confuse matters when it came to podman run.
I tell people: if you just want to build with Dockerfiles, then just use podman and forget about buildah. Buildah and its library are for building OCI images and, hopefully, for embedding into other tools in addition to podman, like OpenShift Source2Image and ansible-bender, as well as for allowing people to build container images using standard bash commands rather than requiring everyone to use a Dockerfile. Podman build only supports Dockerfiles.
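To illustrate the bash-scripted style (as opposed to a Dockerfile), here is a sketch of a buildah session; it assumes buildah is installed, and the base image, package, and file names are placeholders:

```
# Start a working container from a base image; $ctr holds its name.
ctr=$(buildah from registry.fedoraproject.org/fedora)

# Run a command inside it; buildah run == Dockerfile RUN.
buildah run "$ctr" -- dnf install -y nginx

# Copy content in and set image config, like COPY / EXPOSE / CMD.
buildah copy "$ctr" ./site /usr/share/nginx/html
buildah config --port 80 --cmd "nginx -g 'daemon off;'" "$ctr"

# Commit the working container to a standard OCI image.
buildah commit "$ctr" my-nginx:latest
```

The result is an ordinary OCI image that podman (or anything else OCI-compliant) can run.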
The format shared between the tools is an OCI image. Earlier you stated the images are incompatible, which is false. Then you switched to worrying about the internal representation of a container differing between the tools.
Why are you concerned about buildah’s internal representation of a container, unless you’re contributing to the codebase?
In all fairness, the blog is a bit confusing. I know that podman and buildah both comply with the OCI image spec, and that podman in fact calls buildah. Which makes the various discussion around visibility etc. somewhat confusing to me. It may well be irrelevant, in which case perhaps there’s a clearer way of explaining the relationship.
We get this question all the time, and I totally understand the frustration. In a nutshell, here's the breakdown. I will highlight this in blog entries as RHEL8 comes out and emphasizes podman, buildah and skopeo, so you will see more :-)
If you break containers down into three main jobs, with a sort of fourth meta-job:
If you think about it, that's what made docker special: it was the ability to FIND, RUN, BUILD, and SHARE containers easily. So that's why we have small tools that map to those fairly easily.
The more seasoned manager or exec will say at this point that you should always see everyone as having good intentions. It’s of course not true at all, but it seems to be one of the mental tools they use to avoid acknowledging what someone is really doing. I think what it actually does is give the person a way out, when they come to their senses, without having to admit they went down a selfish dark path. So in that sense it’s practical, although it can take years with some people. Personally, I just fire them if they report to me, or if I have some influence I get them out of the org another way.
They in effect voted for a dictatorship, initially for Pompey on the grounds of a terrorism threat, and then later were supportive of Caesar. Pompey used the democratic tribune system, which had been abolished by a previous generation of the aristocracy. The dictator argued past the traditions of the Roman Republic as exemplified by the aristocrats, and appealed directly to the people.
It does, but there’s also what I call stupid security that doesn’t really add any measurable improvement but does decrease productivity. Users outsmart these systems all the time (e.g. with forced password changes where you can’t reuse the last 10 passwords, users just change their password 11 times so they can continue to use the password that’s already configured on their devices). It’s stupid to force changes like that when there are much better options like MFA.
Smart security allows users to do what they need to do efficiently and safely.
Yeah, I can't stand the forced password changes where you can't reuse the last X passwords, or passwords that expire every 30 days. A lot of the time it's security compliance entities that push this down onto companies; PCI and the like all require those. I think even the new NIST standards address these practices, but the compliance entities are slow and far from pragmatic.
Yep. I can tell you that currently, leading practice involves such measures. I don't agree, but CIS, which is essentially the current gold standard, says so. People who get paid to do this often don't bother griping about it, because they are being paid to harden to a standard and that standard is what it is.
This unfortunately leaves a disconnect between the people who harden (who might actually hear about issues) and the people who write the standards. Even if the writers do hear, it won't be implemented until the next revision.
Yeah, PCI and FedRAMP have this 10-char password requirement, and of course no one can remember a random 10-char password. So companies just make the password a pattern with some variations, effectively reducing the complexity to a fraction of a random 8-char password’s, and then the people who know the pattern leave the company, so it’s effectively public. So much for math.
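A quick back-of-the-envelope version of that math (a sketch, assuming a 94-character printable set for the random password, and that only ~4 characters of the pattern actually vary):

```shell
# Entropy in bits = log2(number of equally likely passwords).
awk 'BEGIN { printf "%.1f bits\n", log(94^8)/log(2) }'  # random 8-char: ~52.4 bits
awk 'BEGIN { printf "%.1f bits\n", log(10^4)/log(2) }'  # pattern with 4 varying digits: ~13.3 bits
```

So a "compliant" 10-char pattern password can carry far less entropy than a shorter random one.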
Right. The first rule of password security: if you have a large enough user base, the odds of a user writing down a password increase, and as passwords become sufficiently difficult to remember, the odds approach 100% at some point that _some_ people are writing down passwords. No amount of defense in depth can protect against the "I have a Post-It note under my keyboard" problem if people can get into your building.
We've handled this by mandating password manager use and pushing length requirements to absurd levels to where it truly is easier to just use the manager, which has two factor.
Absolutely, but I prefer not to leave 22/tcp open to the world. If I do leave it open it is only from a restricted IP set, otherwise it is behind a VPN, probably OpenVPN.
Sure, especially when you VPN into a sacrificial subnet and need MFA to continue elsewhere into locked-down application domains. OTOH, I would leave ssh listening on a nondescript high port with MFA (key and OTP) enabled. No use worrying too much about that.
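A sketch of what that setup can look like in sshd_config; the port number, CIDR range, and username here are placeholder assumptions, not a recommendation:

```
# /etc/ssh/sshd_config (fragment)
Port 49222                                            # nondescript high port
PasswordAuthentication no
AuthenticationMethods publickey,keyboard-interactive  # key + OTP (MFA)
AllowUsers deploy@203.0.113.0/24                      # restricted source IP set
```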
Usability is a component of security because of human factors; if your “secure” process or system is not convenient for use, people will in practice find ways to work around it instead of using it as intended, which will defeat security.
It shifts convenience. It's less convenient for me to have to unlock my door when I get home, but it's more convenient to not carry all my valuables with me during the day. And I really like that tradeoff.
Good security measures are like this. Add sandboxes so you can let users do what they want. Add authentication so people know who they're talking to. Support security keys so people don't have to worry as much about being phished. And so forth.
That's why for any office that has internal wifi I encourage also having a "guest" wifi network (small offices, not using anything with enterprise-level management). People are going to want to connect devices to wifi instead of LTE even if they never approach their billing limits, and if there's an available option that they're allowed to use it cuts down on attempts to use the one they're restricted from (and complaints about "I tried to connect to the wifi but I can't get to the Internet!" "Was it the internal one or the guest? Internal is locked down. Connect to the guest.")
OAuth is going to be hard to use if you don’t create an account though, there’s no way for DS to know what scopes to restrict the user to, it sounds like that was the core issue that broke their workflow.
I think it is very unclear what is real and not real. OP is mostly passing blame to DocuSign without achieving full understanding themselves. That is more an indictment of OP than of DocuSign, even if there is some actual session-security problem with DocuSign (likely from the deprecated API).
Yeah and it sounds like they were trying to do something that makes no sense for DocuSign to support: use a single account to sign all users’ documents.
DocuSign has a legal obligation here to prove authenticity, how are they ever going to be able to do that if everything is behind a single account?
They support oauth and that makes sense and should be the way to do it.
DocuSign should've recognized this and let them know the flaws in their plan, but in my experience they're _way_ too sales-oriented to ever do this.
I had a similar experience. I explored their API and got stuck on how to implement my use-case and how to ensure it's legally binding. Sales and "technical" resources assured me it was possible, didn't explain how, and everyone balked at any sort of legal questions and basically told me that was all on us to sort out.
I decided I didn't need help creating a box for users to scribble on. E-signature isn't a technical challenge at all.
>DocuSign has a legal obligation here to prove authenticity, how are they ever going to be able to do that if everything is behind a single account?
By the signature.
I don't know how it works in DocuSign's internals, but there's no requirement for the signer to have an account. The point of the account is for users to see all of their documents in one place. In OP's case that's everyone's documents because they use a single account.
Yep. I just signed things via docusign 2 weeks ago. There was no 'create an account' first. IIRC, there was something at the end indicating I could create an account after the doc was signed, but I got a copy via email anyway which was all I needed. The last thing I need is yet another account/login for what is essentially a one-time thing.
The argument isn’t correct: what does a user do when the download is damaged by an injection? A re-download results in exactly the same tampered-with file.
Likewise, integrity of the download is the primary reason I’ve switched downloads to HTTPS too. The argument that signed downloads are enough fails to address what the user is supposed to do after the integrity check has failed: a re-download can result in the same tampered-with file. This isn’t hypothetical, btw; it happens in the real world. I’ve had ISPs in Ireland and South Africa damage downloads with their injections, and users don’t care that it’s their ISP. They get pissed off at you, unfortunately.
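To make the failure mode concrete, here is a minimal simulation of the integrity check using local stand-in files (the names and contents are placeholders); when an ISP injects into the transfer, this check fails the same way on every re-download:

```shell
# What the publisher posts alongside the signed release.
printf 'original release\n' > release.tar
sha256sum release.tar > SHA256SUMS

# What actually arrives after an ISP proxy injects into the stream.
printf 'original release\nINJECTED CONTENT\n' > release.tar

# The user's check fails, and re-downloading just repeats this result.
sha256sum -c SHA256SUMS || echo "integrity check failed"
```

The signature proves who built the file; it doesn't give the user any way to get an undamaged copy over a path that keeps injecting.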