I know this is a case of using the wrong tool for the job, but about once every three months I find myself using dig to troubleshoot an issue, only to discover I had a hosts file entry and that's why it was (or wasn't) working.
I've been trying to force myself to use curl --resolve rather than hosts file entries (where it's suitable), but for some reason I just can't seem to get into the habit of reaching for getent instead of dig.
What would save me wasted time is an asterisk next to an entry, or a reminder at the bottom of the output (or on stderr), indicating that there is a matching hosts file entry that was "ignored" (and even what its value is).
I understand if it's not something you want to support, but it's definitely a pain point for me and quite a few ops people I've shoulder-surfed over the years.
To get a single third-party utility like dig to retrieve entries from both /etc/hosts and zone files, one could run a program like pdns_recursor with the "--export-etc-hosts=on" option.
If pdns_recursor is listening on 127.0.0.1 and /etc/resolv.conf contains "nameserver 127.0.0.1", then
dig ns example.com
should return "localhost" as the NS RR for example.com so we know the A RR came from /etc/hosts not a.iana-servers.net
Additionally, adding the "--trace=on" option will output debugging info that tells you whether the answer came from "local auth storage", e.g. /etc/hosts.
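A minimal sketch of that setup (settings passed as command-line switches; double-check the spellings against your pdns_recursor version):

pdns_recursor --local-address=127.0.0.1 --export-etc-hosts=on --trace=on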
I've fought a number of resolver issues over the years where, for various reasons, the application resolves differently from client tools, or client tools resolve differently from each other. Sometimes they're related to hosts file entries, but those are much easier to figure out than issues with truncated replies related to EDNS, or UDP vs. TCP, or similar (it's been a while, so I forget the details).
I've started just editing an instance of unbound on my router for custom DNS records. I've gone as far as redirecting all my port 53 traffic to it using iptables on my router as well for those insidiously hard coded DNS servers on Google products.
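For anyone wanting to replicate this, a rough sketch (the zone name and addresses are made up):

# unbound.conf: serve a custom record locally
server:
  local-zone: "home.lan." static
  local-data: "nas.home.lan. IN A 192.168.1.50"

# router firewall: catch clients with hard-coded DNS servers
iptables -t nat -A PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 53
iptables -t nat -A PREROUTING -p tcp --dport 53 -j REDIRECT --to-ports 53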
Something that puzzles me while using dig(1) is that I don't really understand the +trace output. In my mind it should be rather easy to trace a DNS request, something like: dig A google.com → /etc/resolv.conf says your nameserver is X.Y.Z.T → made a query to X.Y.Z.T, it recursed → domain google.com has nameserver ns.a.b → the ADDITIONAL section told us ns.a.b is at IP X.C.F.G → made a DNS query to X.C.F.G, it told us it was authoritative and gave RRs [IP1, IP2].
I can never quite get this level of clarity from dig(1); instead, "dig A google.com +trace" gives an output like this: https://gist.github.com/ahmetb/28a3853bef72fbabf4f0d8ac0712a... which makes me think: what the hell, am I actually hitting the root nameservers every time?
If your tool can help developers here, I think it's a major value add.
If it recursed, then X.Y.Z.T gave you the final response. It doesn't tell you how it arrived at that final response, and it might not even know (it only saves the final response in the cache). The output of dig +trace is the steps a recursive resolver would have to follow if it started from scratch (with a cold cache).
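You can walk those steps by hand with iterative (non-recursive) queries, which is essentially what a cold-cache resolver does; for example:

dig +norecurse @a.root-servers.net A google.com    # root: referral to the .com servers
dig +norecurse @a.gtld-servers.net A google.com    # .com: referral to google.com's nameservers
dig +norecurse @ns1.google.com A google.com        # authoritative answer (aa flag set)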
Great work here, interesting project. I get the desire to roll your own versions of things, but personally I'd feel more comfortable using it if it used the standard community packages for DNS and logging, like trust-dns and tracing.
All I'm saying is that lookup.dog was originally going to contain a Shaun of the Dead GIF and nothing else, in the same vein as butt.holdings or cheese.singles. The DNS part came later.
Just took a look at all the tools you listed (besides ripgrep, as I've already been using it). Exa and bat look great! Just installed them and am looking forward to using them. I'm having a hard time convincing myself to leave iTerm for Alacritty, just for a speed boost. I've never had speed issues with iTerm, but I'm wondering if there is anything else you like about alacritty that might change my mind.
> I'm having a hard time convincing myself to leave iTerm for Alacritty, just for a speed boost.
I've never understood why anyone would want a higher-throughput terminal. Better latency I could see, but throughput? I'm not Commander Data. I can only read so fast. If my shell is dumping text at me faster than my terminal can render, it's because I catted something I shouldn't have.
I like a few things about Alacritty: it handles file drag-and-drop better than some, IMO; config takes effect immediately and has extensive options (e.g. arbitrary keyboard mapping); I somehow got shift-enter to work in vim, which I had trouble with in other emulators; and it's noticeably fast.
On the downside it has no tabs and fewer typical GUI features like right click.
If you love your emulator I wouldn’t suggest switching.
Alacritty is more lightweight/cruft-free IMO, and cross-platform. Personally I don't want iTerm's settings menu or tabs; I do want one configuration that I can use on multiple platforms when the need arises.
I haven't used Alacritty as much, but Kitty is faster, easier to configure and very convenient in its splits. I believe there's also some problem that Alacritty has with fonts that Kitty doesn't, but I don't remember exactly what.
> supports the DNS-over-TLS and DNS-over-HTTPS protocols, and can emit JSON
This shows how much our old-school Unix tools need an update.
What if all the CLI tools had an option for JSON output and the shell natively supported a query language like `jq`? Then you could pipe commands together:
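Something like this, say (the --output/--input flags are hypothetical; today's coreutils don't have them):

ls --output=json | jq 'map(select(.size > 1048576))' | rm --input=json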
This assumes the default output would still be the legacy text format. But if the default output changed to JSON then you could get rid of the ugly json-in/out syntax.
I think a better way to handle your approach would be to make it automatic: if a program could detect that its output is not a TTY and then (somehow) make an intelligent determination about what kind of object to pass (i.e. if there were a way for the program on the other end of the pipe to say "send me object data if you want!"), that would be useful, coupled with programs or commandlets to do the filtering.
Having to manually specify "you, output JSON! And you, input JSON! And also filter based on this syntax most people find difficult to remember!" would be cumbersome; much easier to have "commandlets" built into the shell to handle this sort of thing in a composable pipeline. For example:
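e.g. something like this, with made-up commandlet names:

ls | where size -gt 1mb | sort modified | table name, size, modified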
As others have said, Powershell supports this, and it's completely awesome to work with. The fact that (assuming proper metadata on the binaries) I can do queries like this is awesome:
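For instance, listing every python on the PATH with its version (assuming several installs; note Get-Command's -All flag, which includes commands shadowed by earlier PATH entries):

Get-Command python* -All | Sort-Object Version | Select-Object Source, Version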
This lets you, for example, have multiple versions of something installed, and have a script easily and cleanly get a list of all of them and either choose which one to use, or run something on each (e.g. for testing scripts).
There are also a lot of "that's weird but handy" features, like being able to interact with environment variables the same way you do local files:
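# 'env:' is a PowerShell drive, so env vars respond to file-style cmdlets like Test-Path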
if (Test-Path 'env:GIT_COMMIT') {
echo "GIT_COMMIT is $Env:GIT_COMMIT"
$commit_hash = $Env:GIT_COMMIT
}
Extremely impressed with Microsoft on this implementation. Also worth noting: you can install Powershell on your Linux or macOS systems and use it there as well. Semi-tempted to try implementing things in Powershell as the only cross-platform shell available.
Like probably dozens of others, I've been idly dreaming about reimplementing coreutils / textutils etc to be able to emit and consume structured records in some serialization format.
The thing that would make this really useful is a(t least one) global database of serialization formats to be a lingua franca between tools. Designing those formats to be useful cross-platform would be tough, but you could start with a format db that only promises linux compat, and later add bsd, and maybe windows even later.
edit: google had something like this internally, the "protodb", a versioned repository of protobuf definitions so that any other tool could consume your tool's output.
Some shells, such as elvish [0], nushell [1], mash [2], and even Windows' powershell do support natively manipulating structured data. There is quite a gap between the shell supporting it and tools supporting it, though, and it's a bit of a chicken-and-egg problem.
I actually agree: the thing that library has that dog doesn't is that it's been battle-tested. For example, I've already had to fix a case where it was mis-parsing TXT records because I missed part of the relevant RFC!
Ooooh they could have named it Dug. Then you'd have DigDug. On the other hand, this isn't really dig-but-in-rust, so I suppose there is no real reason to relate it.
It does look neat compared to the classic output you get from some other tools. It's not always easy to spot the records when a lot of debug information that looks the same as the 'output' information is stacked on top of them; even without the colouring, this looks like a better fit if you simply want to look at some records.
Is there support for reverse lookups that I'm just overlooking? There's mention of PTR records being used for reverse lookups, but no sign of how to do one without reversing the octets by hand.
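For comparison, dig has both forms:

dig -x 192.0.2.1                    # dig builds the in-addr.arpa name for you
dig PTR 1.2.0.192.in-addr.arpa      # the manual version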
This is my thinking too. I could add the option if there's enough interest, but aren't there already dedicated tools to do this?
As an example: I sometimes run 'curl ifconfig.me' to get my public IP from the command-line, but I wouldn't expect curl to add an '--ip' option to make this specific query easier to run. curl is a general tool, and the fact that you can use HTTP to get your public IP doesn't mean it needs a top-level option in an HTTP client. (I get that reverse lookup is an internet standard, and ifconfig.me is a third-party service, but still.)
I really like the Rust implemented tools like ripgrep, bat, exa, fd, lsd etc.
I just tried `cargo build --release --verbose --target x86_64-unknown-linux-musl` with dog, but it uses the `native-tls` crate, which wraps OpenSSL.
It would be great for tools like this to leverage Rustls [0], so one could compile without requiring OpenSSL and build statically with musl.
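As a sketch of what that could look like for the user, assuming dog grew Cargo features to select the TLS backend (it may not have these today):

cargo build --release --target x86_64-unknown-linux-musl --no-default-features --features rustls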
Awesome, dig was long due for a sane replacement. Its output and command-line syntax are about the most obtuse of any mainstream Unix CLI tool; it even beats ifconfig and find, which is quite an achievement.