
Nice workaround. I just wish there were a less 'lossy' way to go about it!



Could you explicitly train a set of embeddings that performs that step in the process? For example, when computing the loss, you'd compare the difference against the normalized text rather than the original. Or alternatively, do this as a fine-tuning step. Then you would have embeddings optimized for the characteristics you care about.
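
Something like this rough sketch of the fine-tuning idea, using the sentence-transformers library (the normalize() helper here is a hypothetical stand-in for whatever lossy normalization step is being discussed):

    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, InputExample, losses

    def normalize(text: str) -> str:
        # Hypothetical stand-in for the real normalization step.
        return text.lower()

    model = SentenceTransformer("all-MiniLM-L6-v2")

    raw_texts = ["Running FAST!", "The Cats were sleeping."]  # toy data
    # Pair each raw text with its normalized form, label 1.0 = "same",
    # so the loss pulls the two embeddings together.
    examples = [InputExample(texts=[t, normalize(t)], label=1.0)
                for t in raw_texts]

    loader = DataLoader(examples, shuffle=True, batch_size=16)
    model.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(model))],
              epochs=1)
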


Normal full-text search stuff helps reduce the search space - e.g. lemmatization, stemming, and query simplification all long predate LLMs.
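
For illustration, the classic versions of those steps via NLTK (just a sketch; assumes the 'wordnet' data package has been downloaded):

    from nltk.stem import PorterStemmer, WordNetLemmatizer

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()

    for word in ["running", "ran", "runs"]:
        print(word, "->", stemmer.stem(word), "/",
              lemmatizer.lemmatize(word, pos="v"))
    # The lemmatizer maps all three to "run"; the stemmer catches the
    # regular forms. Either way, variants collapse and the index shrinks.
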



