It seems to me a big part of the problem is that the "query builder" in TFA is little more than a string builder. In the .Net world I've used SqlKata [0] and been very pleased with it. It makes it easy to build and compose queries dynamically.
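For anyone who hasn't seen it, composition looks roughly like this (a minimal sketch from memory - the table/column names and filter variables are made up, so double-check the details against the docs):

using System;
using SqlKata;
using SqlKata.Compilers;

// Hypothetical filter values coming from user input
int? customerId = 42;
bool sortByNewest = true;

// Build the query up conditionally instead of concatenating SQL strings
var query = new Query("Orders").Where("Status", "Open");

if (customerId.HasValue)
    query = query.Where("CustomerId", customerId.Value);

if (sortByNewest)
    query = query.OrderByDesc("CreatedAt");

// Compile to parameterized SQL for a specific dialect
var compiler = new SqlServerCompiler();
var result = compiler.Compile(query);
Console.WriteLine(result.Sql); // e.g. SELECT * FROM [Orders] WHERE [Status] = @p0 ...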
Base 32 is just an encoding, so it doesn't modify the underlying data of the UUID; it just presents the same bytes in a different display format. It's the same as how the number 10 can be represented as decimal 10, binary 0b1010, octal 0o12, hex 0xA, etc.
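A quick way to see that only the display changes (C#'s Convert.ToString does the base conversion):

using System;

int value = 10;
Console.WriteLine(Convert.ToString(value, 2));  // "1010" - binary
Console.WriteLine(Convert.ToString(value, 8));  // "12"   - octal
Console.WriteLine(Convert.ToString(value, 10)); // "10"   - decimal
Console.WriteLine(Convert.ToString(value, 16)); // "a"    - hex
// Same int in memory every time; only the textual representation differs,
// which is exactly what Base 32 does for a UUID's 16 bytes.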
Really? In the modern .Net world (originally .Net Core) it's very common for devs to use Windows machines to write code whose CI pipelines and deployed environments are all Linux. I've seen a handful of issues with things like path separators and file system case sensitivity, but we're talking about 3 or 4 minor problems in the 6-7 years I've been using it.
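The path-separator kind of issue is usually just a hardcoded backslash somewhere; a tiny illustrative sketch:

using System;
using System.IO;

// Hardcoded separators only work on Windows...
var brittle = "logs\\app\\today.txt";

// ...while Path.Combine picks the right separator for the current OS.
var portable = Path.Combine("logs", "app", "today.txt");

Console.WriteLine(portable); // logs\app\today.txt on Windows, logs/app/today.txt on Linux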
(also yes, people keep asking "what about Linux" and think it's bad when you say there is literally nothing extra to consider in 95% of situations, sigh)
I'm actually going to switch to Mac as a pilot for our team at some point this year! I don't expect any issues, I already use Rider and have done plenty of .Net stuff on my personal machine, which is an M3 MBP. Really, IMO the only question marks will be around using Parallels when we occasionally need to work on a legacy .Net Framework app.
It's not about looking at the body (in my area at least, funerals usually don't even display the body), it's about grieving and mourning. Or supporting other people who are going through grief and mourning, even if you don't feel it very strongly yourself. My mom died of cancer about a month after lockdown started in 2020 so we couldn't have any kind of service, but we had a "celebration of life" a year later. I'm glad that we did.
> LLMs should never do math. They shouldn't count letters or sort lists or play chess or checkers.
But these aren't "gotcha questions"; these are just some of the basic interactions that people will want to have with intelligent assistants. Literally just two days ago I was doing some things with the compound interest formula - I asked Claude to solve for a particular variable of the formula, then plug in some numbers to calculate the results (it was able to do it). Could I have used Mathematica or something like that? Yes, of course. But supposedly the whole point of a general-purpose AI is that I can use it for just about anything I need to do. Likewise, there have been multiple occasions where I've needed ChatGPT or Claude to work with tables or lists of data where I needed the results to be sorted.
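For reference, the kind of rearrangement I mean - solving A = P(1 + r/n)^(nt) for t, say (the choice of variable and the numbers here are just illustrative):

using System;

// Compound interest: A = P * (1 + r/n)^(n*t)
// Solved for t:      t = ln(A/P) / (n * ln(1 + r/n))
double P = 1000;  // principal
double r = 0.05;  // annual rate
double n = 12;    // compounding periods per year
double A = 2000;  // target amount

double t = Math.Log(A / P) / (n * Math.Log(1 + r / n));
Console.WriteLine($"{t:F1} years"); // ~13.9 years to double at 5% compounded monthly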
They're gotchas in the sense that people are intentionally asking LLMs to do things that LLMs are terrible at doing. LLMs are language models. They aren't math models. Or chess models. Or sorting or counting models. They aren't even logic models.
So early on, the value was entirely in language. But you're absolutely correct that for these tools to really be useful they need to be better than that, and slowly we're getting there. If a math question is one component of your prompt, the system should delegate that piece to an appropriate math engine as part of its chain-of-thought (CoT) steps. And so forth.
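To be concrete about what "delegate to a math engine" means on the host side, here's a toy sketch - the tool-call idea and the DataTable.Compute trick are just stand-ins for whatever real calculator/CAS integration a product would use:

using System;
using System.Data;

// The model emits something like {"tool":"calculator","expression":"1000 * (1 + 0.05 / 12)"}
// and the host evaluates the expression deterministically instead of
// letting the LLM guess at digits.
static double EvaluateMathTool(string expression)
{
    // DataTable.Compute handles basic arithmetic expressions
    return Convert.ToDouble(new DataTable().Compute(expression, null));
}

Console.WriteLine(EvaluateMathTool("1000 * (1 + 0.05 / 12)")); // 1004.1666...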
If this stuff is getting sold as a revolution in information work, or a watershed moment in technology, or as a cultural step-change, etc, then I think the gotcha is totally fair. There seems to be no limit to the hype or sales pitch. So there need be no bounds for pedantic gotchas either.
I entirely agree with you. Trying to roll out just a raw LLM was always silly, and remains basically a false promise. Simply increasing the number of layers or parameters or transformer complexity will never resolve these core gaps.
But it's rapidly making progress. CoT models coupled with actual domain-specific logic engines (math, chemistry, physics, chess, and so on) will be when the promise is actually met by the reality.
Unless you have some tricks up your sleeve that I'm not thinking of, an immediate consequence of this is that zero-downtime deployments and blue/green deployments become impossible. Both rely on your app being able to run against a schema that isn't an exact match for what the app expects but is still compatible enough for the app to function.
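For context, the usual way to get that compatibility window is an expand/contract migration - roughly like this sketch (using EF Core's migration API purely as a familiar example, nothing to do with the library in TFA; the table/column names are made up):

using Microsoft.EntityFrameworkCore.Migrations;

// "Expand" step: add the new column as nullable, so the currently deployed
// app version (which knows nothing about it) keeps working while the new
// version rolls out. The "contract" step (making it required, dropping the
// old column, etc.) only ships once every instance is on the new code.
public partial class AddCustomerEmail : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "Email",
            table: "Customers",
            nullable: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(name: "Email", table: "Customers");
    }
}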
And that's okay. Most applications don't need zero-downtime deployments, and there are already plenty of APIs that support that use case. I'd rather have more like this one.
That's not correct. He decided that the aircraft was out of control because his primary displays went out at low altitude - the manual says eject if out of control below 6000 ft. But in fact the plane was still flying and responding to controls just fine.
A big factor in this seems to be his overall lack of experience in the F-35 and not flying enough hours to really stay proficient. Highly recommend this analysis by two former naval aviators: https://www.youtube.com/live/g8PBA7k6vP8?si=o2DDBX1XqmM_x1gR
LINQ is the name for the overall system. LINQ queries can be written in two different styles:
// Method syntax
var evenNumbers = numbers.Where(num => num % 2 == 0).OrderBy(num => num);
// Query syntax
var evenNumbers = from num in numbers
                  where num % 2 == 0
                  orderby num
                  select num;
Method syntax and query syntax are both part of LINQ; query syntax is syntactic sugar that the compiler translates into the equivalent method calls. .Net developers tend to overwhelmingly prefer method syntax.
I used to use this quite a bit after it was introduced, but CHM files are a bit unwieldy for documentation, at least for me. Having integrated search is nice, but the windowed interface and the tie to Windows (AFAIK - maybe there are readers for other operating systems) will, I believe, keep this project from being widely used. I always found it decent for more advanced developers, but felt there was something missing for more junior developers who needed documentation.
I have a 6-7 year old Omron brand device that doesn't do this. I can always tell right away when my BP is running high because I can feel the machine squeezing harder than normal to get the reading.
0: https://sqlkata.com/