Here is a brief demonstration of how "DoH DNS servers" can be useful. Never mind the idea of applications having their own DNS caches.
1. fetch page of html, e.g., hn front page
curl https://news.ycombinator.com > 1.htm
2. extract urls from 1.htm
yyt < 1.htm > 1.txt
(example scanner "yyt" provided below as t.l)
3. convert urls to hostnames
g=1.txt k 1
(example script provided below as "1.k")
4. retrieve json dns data from doh dns server, efficiently, over a single connection
see https://news.ycombinator.com/item?id=17228745
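the linked thread has the details; as one possible sketch, here is step 4 in python using the google json api (dns.google/resolve, name/type query parameters) — one of the doh servers discussed there. the point is that a single https connection is reused for every lookup, so there is no per-name tcp/tls handshake:

```python
# sketch: query a DoH JSON endpoint for many names over one TLS
# connection; dns.google and its /resolve?name=&type= parameters are
# Google's JSON API, used here as an example server
import http.client
import urllib.parse

def doh_path(name, rtype=1):
    # type 1 = A records; type 2 = NS (see step 7)
    return '/resolve?' + urllib.parse.urlencode({'name': name, 'type': rtype})

def fetch_all(hostnames, server='dns.google'):
    # one HTTPSConnection reused for every request: no per-name
    # handshake, and the resolver sees one client session, not many
    conn = http.client.HTTPSConnection(server)
    out = []
    for h in hostnames:
        conn.request('GET', doh_path(h))
        out.append(conn.getresponse().read().decode())
    conn.close()
    return out
```

usage: `fetch_all(open('1.txt').read().split())` returns one json answer per hostname.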
5. convert json dns data to csv
see https://news.ycombinator.com/item?id=17228473
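the linked thread shows one way; a minimal python sketch of step 5, flattening the "Answer" array of the doh json (field names as returned by dns.google and cloudflare-dns.com) into name,type,TTL,data rows:

```python
# sketch: DoH JSON -> csv rows of name,type,TTL,data
import csv
import io
import json

def answers_to_csv(doh_json):
    rows = json.loads(doh_json).get('Answer', [])
    buf = io.StringIO()
    w = csv.writer(buf)
    for a in rows:
        w.writerow([a['name'], a['type'], a['TTL'], a['data']])
    return buf.getvalue()
```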
6. import csv into a database, e.g. sqlite3 or kdb+; export to /etc/hosts; export to a zonefile for a localhost auth dns server; etc.
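step 6 sketched with python's built-in sqlite3, plus the /etc/hosts export (the table and column names here are made up for the example):

```python
# sketch: load name,type,TTL,data csv rows into sqlite3 and emit
# /etc/hosts-style lines for the A records
import csv
import io
import sqlite3

def load(csv_text):
    db = sqlite3.connect(':memory:')  # use a file path to persist
    db.execute('CREATE TABLE dns (name TEXT, type INT, ttl INT, data TEXT)')
    db.executemany('INSERT INTO dns VALUES (?,?,?,?)',
                   csv.reader(io.StringIO(csv_text)))
    return db

def hosts_lines(db):
    # A records only (type 1), trailing dot stripped from the name
    return ['%s %s' % (d, n.rstrip('.'))
            for n, d in db.execute('SELECT name, data FROM dns WHERE type=1')]
```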
now, when user is reading hn front page, no dns lookups are needed. user already has the dns data. there is no network usage for dns requests, increasing hn front page browsing speed for the user. there are no piecemeal dns requests sent, increasing user privacy.
7. track ip address changes over time, compare answers from different caches, etc. retrieve type 2 (NS) instead of type 1 (A) records, then compare to NS records provided in public zonefiles from icann, public internet scans, etc.
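the tracking part of step 7 can be sketched as timestamped answer snapshots in sqlite, diffed between two dates (schema and dates made up for this example):

```python
# sketch: keep timestamped answers, list hostnames whose answer set
# changed between two snapshots
import sqlite3

def diff(db, t0, t1):
    q = 'SELECT DISTINCT data FROM hist WHERE name=? AND snap=?'
    names = [n for (n,) in db.execute('SELECT DISTINCT name FROM hist')]
    return [n for n in names
            if set(db.execute(q, (n, t0))) != set(db.execute(q, (n, t1)))]

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE hist (snap TEXT, name TEXT, data TEXT)')
db.executemany('INSERT INTO hist VALUES (?,?,?)', [
    ('2018-06-01', 'a.example', '192.0.2.1'),
    ('2018-06-02', 'a.example', '192.0.2.9'),     # changed
    ('2018-06-01', 'b.example', '198.51.100.7'),
    ('2018-06-02', 'b.example', '198.51.100.7'),  # unchanged
])
print(diff(db, '2018-06-01', '2018-06-02'))  # -> ['a.example']
```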
cat t.l
%{
#define p printf("%s\n",yytext);
%}
%%
 /* discard stray utf-8 punctuation bytes (curly quotes, dashes) */
\200|\201|\204|\223|\224|\230|\231|\234|\235
http:\/\/[^ \n\r<>"#'|]* p;
https:\/\/[^ \n\r<>"#'|]* p;
ftp:\/\/[^ \n\r<>"#'|]* p;
.|\n
%%
int main(){ yylex(); return 0;}
int yywrap(){ return 1;}
/* compile with something like:
flex -Crfa -8 -i t.l
cc -pipe lex.yy.c -static -o yyt
*/
cat 1.k
/k3 (novice level)
/usage: g=f k 1 where f is list of urls
h0:_getenv "g"
h1:0:h0
h1:{:[(#h1[x] _ss "://")>0;h1[x];_exit 1]}'!#h1
h1:{*((h1[x] _ss "://[^/]")+3) _ h1[x]}'!#h1
h2:{h1[x] _ss "[^a-z^A-Z^0-9^.^-]"}
h3:{*h2[x]}
h1:{h3[x]#h1[x]}'!#h1
h1:?:/h1
h0 0:h1
\\
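for readers without flex or k3, steps 2-3 (extract urls, reduce to hostnames, dedupe) can be approximated in python; the regex mirrors the character classes used in t.l and 1.k:

```python
# rough equivalent of yyt + 1.k: pull urls out of html on stdin,
# keep only the hostname part, dedupe preserving order
import re
import sys

URL = re.compile(r'(?:https?|ftp)://[^ \n\r<>"#\'|]*')

def hostnames(text):
    seen = []
    for url in URL.findall(text):
        # hostname = chars after "://" up to first char outside [A-Za-z0-9.-]
        host = re.split(r'[^A-Za-z0-9.-]', url.split('://', 1)[1], maxsplit=1)[0]
        if host and host not in seen:
            seen.append(host)
    return seen

if __name__ == '__main__':
    print('\n'.join(hostnames(sys.stdin.read())))
```

usage: `python3 hosts.py < 1.htm > 1.txt` in place of steps 2-3.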