How are you finding that? I hear they work really well with Linux. I had such a bad time with the XPS15, I keep wondering why there's such a disparity between them.
A partial function is a function that is not defined for every input; a total function handles them all. For example, calling head on an empty list throws an exception, so head is partial. To make it a total function you’d need to return a Maybe instead.
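A minimal sketch of the same partial-versus-total distinction, using C++'s std::optional as a stand-in for Maybe (the function names here are illustrative):

```cpp
#include <optional>
#include <vector>

// Partial: front() on an empty vector is undefined behaviour,
// much like Haskell's head throwing on an empty list.
int unsafe_head(const std::vector<int>& xs) {
    return xs.front(); // UB when xs is empty
}

// Total: every input, including the empty vector, has a well-defined
// result, and the caller must handle the "no value" case explicitly.
std::optional<int> safe_head(const std::vector<int>& xs) {
    if (xs.empty()) return std::nullopt;
    return xs.front();
}
```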
- "Professor Patton primarily teaches in USC’s Full-Time MBA Program and its Executive MBA Programs in the U.S. and China"
- "He has served as a key advisor to the Center for Asian-Pacific Leadership, a faculty member at the US-China Institute and a leader of MBA learning programs in China and Korea."
Numerical zero has nothing to do with the issue discussed here. What you are proposing is to add another “number” to types like int and float that results in the program crashing whenever you try to add it to another number.
> What you are proposing is to add another “number” to types like int and float that results in the program crashing whenever you try to add it to another number.
There are already division by zero and NaN to trip you up in IEEE 754.
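Concretely, assuming IEEE 754 doubles (true on effectively every mainstream platform), both of those already behave like a poisoned in-band value rather than a crash:

```cpp
#include <cmath>
#include <iostream>

int main() {
    double zero = 0.0;
    double inf = 1.0 / zero;   // +infinity, not a crash, under IEEE 754
    double nan = zero / zero;  // 0/0 yields NaN, which quietly propagates
    std::cout << inf << ' ' << nan << '\n';
    std::cout << (nan == nan) << '\n';     // 0: NaN is unequal even to itself
    std::cout << std::isnan(nan) << '\n';  // 1: the reliable way to detect it
}
```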
Functional programming languages have been doing it for ages. Most "newer" statically typed languages also have it by default (Swift, Kotlin, Rust). And old languages had it bolted on (C# 8, Java 8, C++ 17).
I think at this point basically everyone has realized null by default is a terrible idea.
> And old languages had it bolted on (C# 8, Java 8, C++ 17).
C#: actually true, you can switch over to non-nullable reference types
Java 8: meeeh, it provides an Optional but all references are still nullable, including references to Optional. There are also @Nullable and @NotNull annotations but they're also meh, plus some checkers handle them oddly[0]
C++17: you can deref an std::optional and it's completely legal to write, but it's UB if the optional is empty. Despite its name, std::optional is not a type-safety feature. Its goal is not to provide "nullable references" (that's what a pointer is for); it's to provide a stack-allocated smart pointer (rather than having to heap-allocate with unique_ptr, for instance).
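A short sketch of that footgun, alongside the checked accessors the standard does provide (value() and value_or() are standard C++17):

```cpp
#include <iostream>
#include <optional>

int main() {
    std::optional<int> empty; // disengaged: holds no value

    // operator* compiles fine and performs no check; dereferencing an
    // empty optional is UB, exactly like dereferencing a null pointer.
    // int x = *empty;        // undefined behaviour

    // value() is the checked accessor: it throws instead of invoking UB.
    try {
        int y = empty.value();
        std::cout << y << '\n';
    } catch (const std::bad_optional_access& e) {
        std::cout << "empty optional: " << e.what() << '\n';
    }

    // value_or() supplies a fallback without branching at the call site.
    std::cout << empty.value_or(-1) << '\n'; // prints -1
}
```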
> Objects from DataProviders are returned as immutable models. On iOS, where the app uses a CoreData cache, this means the rest of the app no longer needs to access mutable CoreData objects directly, which reduces the need to worry about concurrency issues and avoids the crashes due to accessing data on the wrong thread that are common with CoreData.
Is this the common way of dealing with Core Data? I'm actually doing exactly that at work right now, and was wondering if it was the right decision.