1. You provide a sitemap.
2. All URLs in the sitemap are downloaded.
3. The HTML from each URL is read, extracting the page title, description, and contents.
4. The contents are processed using a search algorithm. This tool supports TF-IDF and BM25, two commonly used ranking algorithms. I use Python packages that implement these, since many people have already implemented these algorithms reliably.
5. A link graph is calculated that tracks all links between all pages.
6. When you run a search, the algorithm you chose (BM25 or TF-IDF) runs first to find related documents. This is a keyword search. The results are then weighted by the number of links pointing to each page. This weighting is useful when a site talks a lot about topics with the same keywords: by using links as a ranking factor, posts that are more connected to others are elevated in search. Google pioneered the idea that links are "votes" on the relevance of content (although this tool doesn't use PageRank like Google does). A rough sketch of this ranking step is below.
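Not the tool's actual code, but a minimal sketch of how step 6 might look: BM25 keyword scores from the rank_bm25 package combined with a simple inbound-link boost. The log-based boost formula and the toy pages are my own assumptions for illustration.

    # Minimal sketch (not the tool's real code): BM25 keyword search with a
    # simple inbound-link weight. The log-based boost is an assumption.
    import math
    from rank_bm25 import BM25Okapi

    pages = {
        "https://example.com/a": "how to build a search engine in python",
        "https://example.com/b": "notes on bm25 and tf-idf ranking",
        "https://example.com/c": "my favourite hiking trails",
    }
    # inbound_links[url] = number of other pages on the site linking to url
    inbound_links = {"https://example.com/a": 5,
                     "https://example.com/b": 1,
                     "https://example.com/c": 0}

    urls = list(pages)
    corpus = [pages[u].split() for u in urls]   # naive whitespace tokenizer
    bm25 = BM25Okapi(corpus)

    def search(query):
        scores = bm25.get_scores(query.split())
        ranked = []
        for url, score in zip(urls, scores):
            boost = 1 + math.log1p(inbound_links[url])  # links as "votes"
            ranked.append((score * boost, url))
        return sorted(ranked, reverse=True)

    print(search("bm25 ranking python"))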
How can we capture unencrypted packets from the network? I thought you had to run tcpdump or something like that to be able to do that. But you won't be able to run tcpdump if you don't have access to the interface (source or destination), no?
I'm speaking in the context of the parent conversation ("unencrypted WiFi packets"). On wireless networks, all devices share the same "wire", so to speak. Normally that traffic is useless when captured due to encryption, but that's not the case on unencrypted (i.e. public) WiFi.
It doesn't matter whether the WiFi is encrypted or not. All that matters is that you share the network with an attacker. You can ARP poison just fine: encrypted or open, WiFi or wired.
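To make the "sharing the network" point concrete, here is a rough sketch (my own illustration, not from the parent posts) of watching ARP replies on a shared network with scapy and flagging when an IP suddenly maps to a different MAC, which is what ARP poisoning looks like from the victim's side. The interface name is a placeholder, and capturing other people's raw WiFi frames additionally requires monitor mode.

    # Rough illustration: flag when an IP's MAC address suddenly changes,
    # the telltale sign of ARP poisoning on a shared network.
    # "eth0" is a placeholder interface; run with sufficient privileges.
    from scapy.all import ARP, sniff

    seen = {}  # ip -> mac we last saw claiming that ip

    def check(pkt):
        if ARP in pkt and pkt[ARP].op == 2:      # op 2 == "is-at" (a reply)
            ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            if ip in seen and seen[ip] != mac:
                print(f"possible ARP spoofing: {ip} moved {seen[ip]} -> {mac}")
            seen[ip] = mac

    sniff(iface="eth0", filter="arp", prn=check, store=False)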
Won't there be more noise when predicting just 20s in advance? The longer the duration, the fewer effects we'll see from temporary events like network blips, no? Sorry, I'm new to software engineering and just trying to learn.
However, with a smaller prediction interval you can dampen your autoscaling more. If you predict 20s into the future, react, and 20s later see how that changed the situation, you can afford to spin only a few instances up or down every 20s. If you have to predict 5m into the future, you may have to take much stronger actions, because any effect of your reaction is delayed by the 5m startup interval.
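A toy sketch of that damping argument (my own made-up formula and numbers, not any real autoscaler): with a short horizon you only close part of the gap each step because fresh feedback arrives soon, while a long horizon forces you to cover the whole predicted gap up front.

    # Toy illustration only. Short horizon -> gentle correction each step;
    # long horizon -> full correction, since feedback is delayed anyway.
    def desired_instances(current, predicted_load, capacity_per_instance,
                          horizon_s):
        needed = predicted_load / capacity_per_instance
        gap = needed - current
        damping = 0.3 if horizon_s <= 30 else 1.0
        return max(1, round(current + damping * gap))

    print(desired_instances(current=10, predicted_load=1500,
                            capacity_per_instance=100, horizon_s=20))   # small step
    print(desired_instances(current=10, predicted_load=1500,
                            capacity_per_instance=100, horizon_s=300))  # full jump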
I went through the post and I have absolutely no clue what this person is talking about. But I want to be in a place where I can understand what the person is saying.
How can I reach that point? I was lost at "quantized", could understand bit packing, and was even more lost when the author started talking about things like Hamming distance.
Please help me out. I want to grow my career in this direction.
Then you need to understand binarization. This is a surprisingly effective trick based on the observation that if you have an embedding vector of, say, 1,000 numbers, then for many models those numbers will be very small floating-point values sitting just above or below zero.
It turns out you can turn those thousand floating-point numbers into a thousand single bits, where each bit simply records whether the value is above or below zero... and the embedding magic mostly still works!
And instead of the usual cosine distance, you can use a much faster Hamming distance function to compare two vectors.
Once you understand embedding vectors and CLIP, that should hopefully make sense.
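Here's a tiny numpy sketch of that trick (my own code, toy random vectors): threshold the floats at zero to get bits, then compare with Hamming distance instead of cosine.

    # Toy demonstration of binarization: threshold each float at zero to get
    # a bit, then compare bit vectors with Hamming distance.
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(scale=0.05, size=1000)   # embedding-like: small values around 0
    b = a + rng.normal(scale=0.05, size=1000)

    cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    bits_a = a > 0                          # 1000 floats -> 1000 bits
    bits_b = b > 0
    hamming = np.count_nonzero(bits_a != bits_b)

    print(f"cosine similarity: {cosine:.3f}, hamming distance: {hamming}/1000")
    # np.packbits(bits_a) would store these 1000 bits in just 125 bytes.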
The part of CLIP[1] that you need to know to understand this is that it embeds text and images into the same space, i.e. the word "dog" is close to images of dogs. Normally this space is a high-dimensional real space: think 512-dimensional, or 512 floating-point numbers. When you want to measure "closeness" between vectors in this space, cosine similarity[2] is a natural choice.
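For intuition, here is cosine similarity on made-up 4-dimensional vectors (real CLIP embeddings are 512-dimensional and come from the model, not by hand):

    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    text_dog  = np.array([0.9, 0.1, 0.0, 0.2])   # pretend embedding of the word "dog"
    image_dog = np.array([0.8, 0.2, 0.1, 0.1])   # pretend embedding of a dog photo
    image_cat = np.array([0.1, 0.9, 0.2, 0.0])   # pretend embedding of a cat photo

    print(cosine(text_dog, image_dog))  # high  -> "close" in the shared space
    print(cosine(text_dog, image_cat))  # lower -> "far"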
Why would you want to quantize values? Well, instead of using a 32-bit float for each dimension, what if you could get away with 1 bit? That would save you 31x the space. Often you'll want to embed millions or billions of pieces of text or images, so the reduction represents a huge speed and cost saving, and if accuracy isn't impacted too much then it can be worth it.
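Back-of-envelope with my own example numbers (a billion 512-dimensional embeddings), just to show the scale:

    # Made-up corpus size, purely to illustrate the storage difference.
    n, dims = 1_000_000_000, 512
    float32_bytes = n * dims * 4        # 4 bytes per float32 dimension
    binary_bytes  = n * dims // 8       # 1 bit per dimension
    print(float32_bytes / 1e12, "TB vs", binary_bytes / 1e9, "GB")  # ~2 TB vs 64 GB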
If you naively binarize (threshold) the floats of an existing model, it severely impacts accuracy. However, if you train a model from scratch to produce binary outputs, it appears to perform better.
There is one twist. Deep learning models rely on gradient descent to train, and a binary output doesn't produce useful gradients. We use cosine similarity on floating-point vectors and Hamming distance on bit vectors, so the question becomes: is there a function that behaves like Hamming distance but is nicely differentiable? If so, we can use that function during training and plain Hamming distance during inference. It seems like that's what they've done.
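I don't know the paper's exact formulation, but one standard way to build such a relaxation (my own sketch) is to squash activations toward +/-1 with tanh and use the identity that, for true +/-1 vectors, Hamming distance equals (d - a.b)/2; on the soft bits that expression is smooth and differentiable.

    # Differentiable stand-in for Hamming distance (illustrative only,
    # almost certainly not the paper's exact method).
    # For true +/-1 vectors: a.b = d - 2*hamming, so hamming = (d - a.b) / 2.
    import torch

    def soft_hamming(a, b, scale=5.0):
        sa, sb = torch.tanh(scale * a), torch.tanh(scale * b)  # "soft bits" in (-1, 1)
        d = a.shape[-1]
        return (d - (sa * sb).sum(dim=-1)) / 2                 # smooth, differentiable

    a = torch.randn(512, requires_grad=True)
    b = torch.randn(512)
    loss = soft_hamming(a, b)
    loss.backward()                      # gradients flow, unlike with hard bits
    # At inference time you'd use hard bits: ((a > 0) != (b > 0)).sum()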
I'd suggest playing around with OpenCLIP[3]. My background is in data science, but all my CLIP knowledge comes from doing a side project over the course of a couple of weekends.
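If it helps, here is roughly what a first OpenCLIP experiment looks like (written from memory of the open_clip README; the model/pretrained tags and "dog.jpg" are placeholders, so double-check against the project's docs):

    import torch
    import open_clip
    from PIL import Image

    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k")
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    image = preprocess(Image.open("dog.jpg")).unsqueeze(0)
    text = tokenizer(["a photo of a dog", "a photo of a cat"])

    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        text_features /= text_features.norm(dim=-1, keepdim=True)
        # Cosine similarity of the image against each caption, as probabilities.
        probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

    print(probs)   # should put higher probability on "a photo of a dog"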