Anyone know how it handles ligatures? Depending on font and tooling the word "fish" may end up in various docs as the glyphs [fi, s, h] or [f, i, s, h].
According to a quick check against /usr/share/dict/words, "fi" occurs in about 1.5% of words and "fl" in about 1%. There are other ligatures that sometimes occur, but those are the most common in English, I believe.
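The check was nothing fancy, just counting dictionary words that contain those letter pairs, something like:

    total=$(wc -l < /usr/share/dict/words)
    for pair in fi fl; do
        n=$(grep -c "$pair" /usr/share/dict/words)
        echo "$pair: $n of $total words"
    done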
I don't have any sense of how common ligature usage is anymore (I notice that the word "Office" in the title of this article is not rendered with a ligature by Chrome), but it might be insanity-inducing to end up on the wrong side of a failed search where ligatures were not normalized.
Seems to work well when it's searching the PDF text layer as ligatures are a font rendering effect. You're right — ligatures are not as common in modern books.
Might be iffier in OCR mode: it seems to use Tesseract, which is known to have issues recognising ligatured text.
The (standard) ripgrep regex engine has full Unicode support. My reading of that is that it should handle such equivalences, like matching the decomposed version.
In effect, all of UTS#18 Level 1 is covered, with a couple of caveats. This is already a far cry better than most regex engines, like PCRE2, which has limited support for properties and no way to do subtraction or intersection of character classes. Other regex engines, like JavaScript's, are catching up. While UTS#18 Level 1 makes ripgrep's Unicode support better than most, it does not make it the best. The third-party Python `regex` library, for example, has very good support, although it is not especially fast[1].
Short of building UTS#18 2.1[2] support into the regex engine (unlikely to ever happen), it's likely ripgrep could offer some sort of escape hatch. Perhaps, for example, an option to normalize all text searched to whatever form you want (nfc, nfd, nfkc or nfkd). The onus would still be on you to write the corresponding regex pattern though. You can technically do this today with ripgrep's `--pre` flag, but having something built-in might be nice. Indeed, if you read UTS#18 2.1, you'll note that it is self-aware about how difficult matching canonical equivalents is, and essentially suggests this exact work-around instead. The problem is that it would need to be opt-in and the user would need to be aware of the problem in the first place. That's... a stretch, but probably better than nothing.
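To make the `--pre` work-around concrete, a normalizing hook could be as small as this (a sketch; it assumes ICU's uconv is installed and that the pattern you type is already in the same normalization form):

    #!/bin/sh
    # nfc-pre.sh: ripgrep invokes this with the file path as $1 and searches our stdout.
    # Emit the file's text normalized to NFC (assumes ICU's uconv and UTF-8 input).
    exec uconv -f utf-8 -t utf-8 -x any-nfc "$1"

Then you'd run something like `rg --pre ./nfc-pre.sh 'café' notes/`, with the pattern itself written in NFC.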
Thanks very much for clarifying that. It did seem unlikely: I remember NSString (ask your parents...) supported this level of Unicode equivalence, and it was quite a burden. Normalising does feel like the only tractable method here, and if you have an extraction pipeline anyway (as in rga), maybe it's not so bad.
Yes, rga could support this in a more streamlined manner than rg, since rga has that extraction pipeline with caching. ripgrep just has a hook for an extraction pipeline.
Can you say how that differs from what I suggested in my last paragraph? I legitimately can't tell if you're trying to suggest something different or not.
As UTS#18 2.1 says, it isn't sufficient to just normalize the text you're searching. It also means the user has to craft their regex appropriately. If you normalize to NFC but your regex uses NFD, oops. So it's probably best to expose a flag that lets you pick the normalization form.
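A concrete demo of the mismatch (the octal escapes are the UTF-8 encodings of precomposed 'é' and of 'e' plus a combining acute):

    printf 'caf\303\251\n'  > nfc.txt    # U+00E9, precomposed
    printf 'cafe\314\201\n' > nfd.txt    # U+0065 U+0301, decomposed
    rg café nfc.txt nfd.txt              # only nfc.txt matches, assuming your terminal sends NFC

The two files look identical on screen, but a pattern in one form silently misses text stored in the other.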
And yes, it would have to be behind a CLI flag. Always doing normalization would likely make ripgrep slower than a naive grep written in Python. Yes. That bad. Yes. Really. And it might now become clear why a lot of tools don't do this.
Awesome tool and I use it often. One underutilized feature of rga is its integration with fuzzy search (fzf), which gives you interactive results instead of running the commands and collecting their output in sequence. So in short, use rga-fzf instead of rga on the CLI.
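For reference, the manual wrapper people used before the built-in command is a shell function along these lines (a sketch modeled on the one suggested in the rga README; the exact fzf flags and the use of xdg-open are assumptions):

    rga-fzf() {
        # Re-run rga on every keystroke, list matching files in fzf,
        # and preview the match context for the highlighted file.
        RG_PREFIX="rga --files-with-matches"
        local file
        file="$(
            FZF_DEFAULT_COMMAND="$RG_PREFIX '$1'" \
                fzf --sort \
                    --preview="[[ ! -z {} ]] && rga --pretty --context 5 {q} {}" \
                    --phony -q "$1" \
                    --bind "change:reload:$RG_PREFIX {q}" \
                    --preview-window="70%:wrap"
        )" &&
        echo "opening $file" &&
        xdg-open "$file"
    }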
The built-in rga-fzf command appeared in v0.10 and ostensibly obviates the need for the above shell function, but the built-in command produces errors for me on macOS: https://github.com/phiresky/ripgrep-all/issues/240
According to Reddit [1], you can use the existing rg.el package, and just point it to the rga binary instead of the rg binary, and it is supposed to just work.
Huh, thanks, yeah that does switch the binary to rga, but with rga you need to pass a wildcard for the path parameter in order to search PDFs; otherwise it only searches plaintext files. I'm not sure how to make rg.el's RG function add that... it must be a variable, but I'm not finding it.
To what extent does reading these formats accurately require the execution of code within the documents? In other words, not just stuff like zip expansion by a library dependency of rga, but for example macros inside office documents or JavaScript inside PDFs.
Note: I have no reason to believe such code execution is actually happening — so please don't take this as FUD. My assumption is that a secure design would involve running only external code and thus would sacrifice a small amount of accuracy, possibly negligible.
Also note that it's not necessarily safe to read these documents even if you don't intend on executing embedded code. For example, reading from pdfs uses poppler, which has had a few CVEs that could result in arbitrary code execution, mostly around image decoding. https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=poppler
(No shade to poppler intended, just the first tool on the list I looked at.)
Couldn't or shouldn't each parser be run in a container with systemd-nspawn or LXC or another container runtime? (Even if all it's doing is reading a file format into process space as NX data, not code, as the current user.)
That's a qualitatively different kind of security topic, though. On the one hand, we have a bug in a tool that reads a passive format with complete accuracy. On the other we have the need to sacrifice some amount of accuracy to avoid executing embedded code in a dynamic file format.
This is why I do like to try and parse stuff myself for my own tools. Not that that's without risk, but I don't share my code, so it's untargeted. However, to support a wide variety of formats like this, the existing tools are OK. Honestly, most malicious code in a PDF will not target pdftotext, I think; it would target the things people actually open PDFs with, like browsers and maybe a few readers like Adobe and Foxit Reader. pdftotext seems more like an 'academic target', a nice exercise but not very fruitful in an actual attack. I might be wrong, though.
None of them really execute "code". Pandoc has a pretty good write-up of the security implications of running it, which I think applies just as much to the other ones, with the added caveat of zip bombs.
You are correct that rga doesn't ship with an Excel adapter out of the box. I have an open PR [1] to allow users to process XLS and XLSX files like any other Zip archive.
Typically, the macros in an Office document add features to the software and aren't run to render any content: things like toggling a group of settings or inserting some content. They may change the content, but that happens at a point in time, triggered by the user, not each time the document is opened.
And in any case, most users don't use macros in their documents.
Use Recoll for that; check the recommended dependencies in your package manager. Synaptic is good for this: right-click on the package.
EDIT: For instance, under Trisquel/Ubuntu/Debian and derivatives, select 'recollcmd' and use the right-click menu to mark all the recommended dependencies.
Install RecollGUI for a nice UI.
Now you will have something like Google Search, but libre, on your own desktop.
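The command-line equivalent is roughly this (Debian/Ubuntu package names assumed; apt installs the recommended format handlers by default):

    sudo apt install recollcmd recoll   # CLI tools and GUI, respectively (names assumed)
    apt-cache depends recollcmd         # shows the recommended per-format extractors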
To take it further, install recoll-webui [1] and SearxNG [2], enable the recoll engine in the latter and point it at the former, and you have a web-accessible search engine for local as well as remote content. Make sure to put local content behind a password or other authentication unless you intend for it to be searchable by outside visitors.
Source: I made the recoll engine for Searx/SearxNG and have been using this system for many years now with a full-text index over close to a terabyte worth of data.
It's somewhat similar to 'recoll' in its functionality, only with recoll you need to index everything before searching. It even uses the same approach of relying on third-party software like poppler for extracting the contents.
By the way, Recoll also has a utility named rclgrep, which is an index-less search. It does everything that Recoll can do that can reasonably be done without an index (e.g. no proximity search, no stem expansion, etc.). It will search all file types supported by Recoll, including embedded documents (email attachments, archive members, etc.). It is not built or distributed by default, because I think that building an index is a better approach, but it's in the source tar distribution and can be built with -Drclgrep=true. Disclosure: I am the Recoll developer.
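For anyone who wants to try it, building from the source tarball presumably looks something like this (a sketch; the -D option syntax suggests a meson-based build):

    # From the unpacked Recoll source directory:
    meson setup build -Drclgrep=true
    ninja -C build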
Wow, this is a gem of a comment. I use Recoll heavily, it's a real superpower for an academic, but I had no idea about rclgrep. Thank you for all your work.
What rclgrep does is run the recoll text extraction and do a grep-like operation on the extracted texts. If you want to give it a try, don't hesitate to contact me if you have any trouble building or using it, or think it could be improved. It's more or less a (working) prototype at the moment, but I'm willing to expand it if it looks useful. The "Usage" string is much better in the latest git source than in the tar, and it sorely needs a man page.
I think an index of all documents (including the contained text etc) should be a standardized component / API of every modern OS. Windows has had one since Vista (no idea about the API though), Spotlight has been a part of OS X for two decades, and there are various solutions for Linux & friends; however as far as I can tell there's no cross-platform wrapper that would make any or all of these easy to integrate with e.g. your IDE. That would be cool to have.
Ugrep seems to be a completely new codebase, whereas rga is a layer on top of ripgrep. Based on the benchmarks in the ripgrep GitHub repo, rg is a bit more than 7x faster than ugrep.
For all of the built-in adapters to work, you'll need ffmpeg, pandoc, and poppler-utils. See the Scoop package [1] for a specific example of this.
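On Linux the equivalent is roughly this (Debian/Ubuntu package names assumed; the cargo crate name for rga itself is assumed as well):

    # External tools the built-in adapters shell out to:
    sudo apt install ffmpeg pandoc poppler-utils
    # rga itself, if your distro doesn't package it:
    cargo install ripgrep_all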
> does it create a bunch of caches, that clog up storage and/or memory?
YMMV, but in my opinion ripgrep-all is pretty conservative in its caching. The cache files are all isolated to a single directory (whose location respects OS convention) and their contents are limited to plaintext that required processing to extract.
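If you want to check what it's actually using, the cache is just a directory you can inspect or delete (location assumed here for Linux; rga follows each platform's cache convention):

    du -sh ~/.cache/rga        # assumed default cache location on Linux
    rm -rf ~/.cache/rga        # safe to clear; it is rebuilt on the next search that needs it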