Maybe. But in the OS world "object" does have an established meaning.
I think the broader point is that you can build an "object-oriented" system, in the sense you're expecting, on top of the "object-based" systems he is describing. It would give you much of the base-level semantics of an OO system (object allocation, tagging, encapsulation) without enforcing any particular OO language semantics (inheritance, typing, methods, etc.), while still allowing non-OO (blob-of-memory) semantics when you need them.
Think of each C allocation via malloc/free as an object. The difference is that in his system the pointers to those allocations are not expressible as integer offsets into one big linear memory; they really are handles or unique IDs that map directly into the OS's VM subsystem and ultimately to some action in the MMU.
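To make that concrete, here's a minimal sketch of what a handle-based allocation API might look like. This is purely illustrative (the names `obj_alloc`, `obj_write`, etc. are made up, and a flat table stands in for the OS's VM subsystem): the point is that callers only ever see an opaque ID, never an address they could do arithmetic on.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: allocations are referred to by opaque handles
 * (unique IDs), not raw addresses. This table stands in for the OS's
 * VM subsystem, which would map handle -> backing storage. */
typedef uint32_t handle_t;

#define MAX_OBJECTS 64
static void  *obj_table[MAX_OBJECTS]; /* handle -> base of storage */
static size_t obj_size[MAX_OBJECTS];  /* handle -> allocation size */

/* Allocate an object; returns a handle, or 0 on failure. */
static handle_t obj_alloc(size_t size) {
    for (handle_t h = 1; h < MAX_OBJECTS; h++) {
        if (obj_table[h] == NULL) {
            obj_table[h] = calloc(1, size);
            obj_size[h] = size;
            return h; /* the caller never sees an address */
        }
    }
    return 0; /* 0 = invalid handle */
}

static void obj_free(handle_t h) {
    if (h < MAX_OBJECTS) {
        free(obj_table[h]);
        obj_table[h] = NULL;
        obj_size[h] = 0;
    }
}

/* Every access goes through the handle and is bounds-checked, so a
 * handle plus an out-of-range offset is rejected instead of silently
 * reading or writing some other object's memory. */
static int obj_write(handle_t h, size_t off, const void *src, size_t n) {
    if (h >= MAX_OBJECTS || obj_table[h] == NULL || off + n > obj_size[h])
        return -1;
    memcpy((char *)obj_table[h] + off, src, n);
    return 0;
}
```

In a real system the bounds check and the handle-to-storage mapping would live in the MMU/VM layer rather than a user-space table, but the programming model is the same: handle + offset in, fault on anything out of range.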
In essence, most object-oriented language VMs/runtimes already do this themselves, building abstract handles (object references, etc.) on top of the "linear memory" abstraction the OS provides. If I understand his gist, he's really just talking about cutting out the middleman.
I suspect, though, that there aren't really many efficiency gains to be had anyway. This path has been so heavily optimized (in both hardware and software) over the decades that it likely matters little.
The security argument is maybe more compelling. There's a good argument for never being able to turn an integer into a pointer, or a pointer into an integer and back again. Except this would probably break the majority of C programs out there.
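For a sense of what would break: C programs routinely round-trip pointers through integers, e.g. to smuggle a tag in the low bits of an aligned pointer. A hedged example (the helper names are mine, but the idiom itself is common in allocators, interpreters, and lock-free data structures):

```c
#include <stdint.h>

/* A widespread C idiom that a strict "no integer <-> pointer" system
 * would forbid: malloc results are aligned, so the low bits of the
 * address are zero and can carry a tag. */
static int *tag_pointer(int *p, uintptr_t tag) {
    /* pointer -> integer -> pointer round trip */
    return (int *)((uintptr_t)p | (tag & 3));
}

static uintptr_t pointer_tag(int *p) {
    return (uintptr_t)p & 3;
}

static int *untag_pointer(int *p) {
    /* mask the tag bits back off before dereferencing */
    return (int *)((uintptr_t)p & ~(uintptr_t)3);
}
```

Under a handle-based scheme there is no integer address to mask bits into, so code like this (and XOR linked lists, address hashing, serialized pointers, etc.) has to be rewritten or run in a compatibility sandbox.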
Maybe the middle road is for the OS to present a sandboxed "legacy" or "emulation" environment for programs that do pointer arithmetic, and to provide enhanced compilers that flag these constructs and encourage/offer alternatives.
Given the prevalence of virtualization tech now, there's probably more room for experimentation in this type of thing these days without breaking compat...