I use atool (https://www.nongnu.org/atool/) for unpacking random archives - it abstracts away all the formats (zip/rar/7z, the various compressions) and also guarantees that nothing is ever extracted straight into $CWD.
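Roughly, the front-end looks like this (sketch from memory, archive names made up):

```
# aunpack is atool's extraction command; it won't spray loose files into $CWD
aunpack foobar-1.0.tar.gz    # single top-level dir -> ./foobar-1.0/ as usual
aunpack messy-archive.zip    # loose files -> atool creates ./messy-archive/ for them
```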
Maybe it's just me, but this seems like the more obvious behaviour? Personally I'd typically extract in /tmp/relevant-name, and sometimes that results in /tmp/relevant-name/relevant-name.
Doesn't seem like a big deal to me, or something that requires/causes trust issues.
(And when I create one, I always have to check/look up what happens, so it doesn't surprise me that a variety of things get done at all.)
It's been common convention for decades that if you distribute a source tarball of something, everything in it lives inside a directory named foobar-1.0, where 'foobar' is the project name and 1.0 is the version.
Not everyone does this, of course, but it's nice when they do. Because it means you can just wget the file into a dir and untar it without worrying about it messing up whatever is already there. Also handy for putting different versions of the project side-by-side.
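If you're building one yourself, something like this does it (assuming GNU tar; the file names are just placeholders):

```
# either lay the files out under foobar-1.0/ and tar that directory up...
tar czf foobar-1.0.tar.gz foobar-1.0/
# ...or let GNU tar prefix every member name on the fly
tar czf foobar-1.0.tar.gz --transform 's,^,foobar-1.0/,' src/ README Makefile
```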
Ok, but like you say it's a mixed bag - I 'wget the file into a dir and untar it without worrying about it messing up whatever is already there' because nothing is already there; it's a mktemp -d or a manual equivalent.
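i.e. something along these lines (URL obviously made up):

```
d=$(mktemp -d)           # throwaway dir, nothing in it to clobber
cd "$d"
wget https://example.org/foobar-1.0.tar.gz
tar xf foobar-1.0.tar.gz # wherever it dumps things, it's all inside $d
```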
For a long time, that was typical on DOS/Windows when distributing ZIP archives.
But on *nix systems, the idiom for tarballs usually includes a directory containing all of the contents.
> (And when I create one, I always have to check/look up what happens, so it doesn't surprise me that a variety of things get done at all.)
True - I usually do a `tar tf foo.tar.xz |head` to get a quick peek at the archive. This generally avoids the problem of dumping a bunch of files into the current directory.
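You could even wrap it up, something like this (rough bash sketch, names made up):

```
# peek at the listing first, then decide whether to extract in place
peek_untar() {
    tar tf "$1" | head
    read -rp "extract here? [y/N] " ans
    [ "$ans" = y ] && tar xf "$1"
}
```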