Nobody was thinking about "the cloud" back in those days. Back then, your data and your programs all lived and ran on your own computer in your home. Most people didn't go online, and if you did, it was mostly to read and download data to use locally on your own computer. Connections were intermittent and slow. The idea that your own data would be stored online was almost unimaginable; even using network-dependent applications like Usenet or email involved downloading everything first before using it. Online applications were hardly even dreamed of.
Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
Anyway, the point being, if the assistant lived entirely in your own computer, it would be entirely different. Most people are not concerned about what their "computer" knows about them; they're concerned about what companies and their employees do.
> Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
The trope-namer (Star Trek AI) was a ship-wide AI - considering the ship sizes, it is definitely closer to the "cloud" model and not limited to private instances on officers' bunk/bridge terminals/tricorders. Perhaps a hardcore Trekkie could answer this question: is there any canon that defines the AI's scope? Is it restricted to just one ship, or could it possibly be a Federation-wide presence with instances on ships?
The AI for the Enterprise D was run from three computer cores on the ship (two in the saucer, one in the engineering section), made up of Isolinear Chips subjected to a small warp field (in other words, it computes using light traveling faster than the speed of light). Subspace communication bandwidth is too limited (and potentially affected by latency, since signals have to travel through repeaters throughout the galaxy) to provide realtime cloud computing as we experience it.
There are some canon exceptions to this (such as in Nemesis, where the subspace communication interruption affected the star charts), but even then the functionality of the ship was not impacted.
The Star Trek ships are very analogous to our own ocean-bound ships, where satellite communication is possible almost anywhere, but they don't rely on it.
So, yes, the AI is completely confined to the ship.
What about when someone was (permanently) transferred between ships?
Was there ever an indication that their AI-level data was transferred along with their personnel file? For example did the replicator know what food to offer them on day one?
If so, then it seems reasonable to assume that the Enterprise's AI data was backed up at Federation HQ during routine maintenance, and that the "IT department" at the Federation knew exactly what you liked to do on the Holodeck.
Through specific indication from the user. Recall the constant utterance of "tea, Earl Grey, hot"?
Ultimately, I imagine the user's information (documents, etc) was passed directly between ships, or through (as you say) Federation HQ.
> Enterprise's AI data was backed up
Ultimately, I think this is where AI will differ from ML. An AI won't have data that isn't a part of the AI - i.e. you couldn't separate out information specific to Picard from the rest of the AI code. An AI might be able to "scribble" down some notes about interacting with Picard and pass them off to another ship's AI, but the second AI would never treat Picard quite the same way as the first, even with those notes.
This stems from my belief that how ML interprets data is different from how an AI would. If you were to copy all of the data used to build an ML model and apply it again, you'd end up with the same ML model. An AI, on the other hand, if built twice from the same data, would end up creating two separate AIs.
I never got the feeling that there was a lot of mistrust of other Federation people.
For example, the Star Trek universe didn't seem like a universe where you had to shop around for a trustworthy mechanic who wouldn't overcharge or over-diagnose (e.g. headlight fluid).
Maybe the implicit trust of other people was integral to the AI being successful in that universe.
It's definitely a per-ship thing, because there are several episodes where they have to down/upload updated data to/from the Federation (at least in TNG canon).
I also think the comparison isn't perfect, because Federation vessels (in my mind) are similar to today's Navy vessels. All onboard systems are connected to each other, but opsec demands that the ship's systems not be influenced by external actors.
And a big point is that on a Navy-analogous vessel, it's reasonable to assume most activity on the ship is being monitored, for the safety of the ship and crew. The fact that the ship knows where everyone aboard is, or that they can pull up the full Starfleet record for anyone who's ever been in Starfleet, is not surprising; this is the military* and records and accountability are a big deal. But there's nothing to indicate that Federation civilians are monitored to that extent, and I'd argue enough episodes are strong on fundamental individual rights that it's hard to imagine Federation life for civilians being a surveillance state.
> But there's nothing to indicate that Federation civilians are monitored to that extent
The clearest example of extensive off-starship monitoring (within the Federation) that I can think of in TNG is a civilian (though, to be fair, a civilian in a role analogous to a "defense contractor"), Dr. Leah Brahms.
> I'd argue enough episodes are strong on fundamental individual rights, that it's hard to imagine Federation life for civilians being a surveillance state.
Actually, I'd say that it's quite plausible that the Federation is a "benevolent surveillance state", that is, one with pervasive monitoring but a very low incidence of "serious" abuse (that is, the kind that substantially limits practical liberty -- casual intrusions on privacy may be more common).
While the Federation seems keen on "fundamental individual rights", it doesn't seem to exactly mirror, say, some modern views on what those rights are -- and not just in terms of privacy.
I saw you give the Brahms example in another comment before I posted, so I am not surprised to see you bring it up. But I think you answered your own point. She worked heavily on the Galaxy-class starship's warp drive, which would be a relatively classified project; they would be very unhappy if the Romulans or any other hostile parties were intercepting data on it.
And arguably, if she was working at Utopia Planitia Fleet Yards on the Galaxy-class project, she presumably worked on a Starfleet orbital facility (technically, a number of facilities) over a span of years, where certainly enough data would be collected to make a poor replica of her personality, as in the show. I don't see anything, outside of basic biographical data, specifically suggesting that she was being monitored in her civilian life.
La Forge, having interacted with that hologram extensively, and having surely read Starfleet's records... apparently didn't know she was married.
I feel your message is the most important in this thread because it's the crux of the whole concern about privacy and the cloud.
Where technology has failed us most is in the utterly stagnant evolution and maturation of secure private networks. The following is a utopian notion, but had private networks seen as much R&D as the public clouds, they would be significantly less cumbersome than today's clunky VPNs. Imagine all of your devices collaborate directly with one another and with you on your own secure private network—no central cloud servers needed. Your personal assistant is software running on a computer you own rather than a third-party's centralized server.
I still feel this ideal will eventually be realized, but for the time being, no large technology company is willing to take the necessary risks to buck the trend of centralization.
The biggest fiction propped up by centralization and cloud proponents is that it would be impossible to provide the kind of utility seen in Cortana, Siri, Google Assistant, Alexa, et al. without a big public cloud. A modern desktop computer has ample computational capability to convert voice to text, parse various phrases, manage a calendar, and look up restaurants on Yelp. Absolutely nothing the public clouds provide strikes me as something my own computer would struggle to do (to be clear, I would expect a local agent would be able to reach out to third-party sites such as Yelp or Amazon at your command in order to execute your desires, but it would do so directly, not via an intermediary).
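To make that concrete: here's a minimal, hypothetical sketch of the local end of such an agent (the intents, phrases, and handlers are all invented for illustration). Once speech has been turned into text on the machine itself, routing it to actions is well within a desktop's abilities:

```python
import re
from datetime import datetime

# Toy local "assistant" router -- purely illustrative. The intents and
# phrases are made up; the point is that none of this needs a remote
# server once speech has become text on the local machine.
INTENTS = [
    (re.compile(r"what time is it", re.I),
     lambda m: datetime.now().strftime("It's %H:%M")),
    (re.compile(r"add (?P<event>.+) to my calendar", re.I),
     lambda m: f"Added '{m.group('event')}' to your local calendar"),
    (re.compile(r"find (?P<food>.+) restaurants near me", re.I),
     lambda m: f"Querying for {m.group('food')} restaurants directly, no intermediary"),
]

def handle(utterance: str) -> str:
    """Match an utterance against the known intents and run its handler."""
    for pattern, action in INTENTS:
        match = pattern.search(utterance)
        if match:
            return action(match)
    return "Sorry, I didn't understand that."

print(handle("Add dentist on Friday to my calendar"))
print(handle("Find thai restaurants near me"))
```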
A few years back, when Microsoft was at the beginning of its Nadella renaissance, I had hoped it would be the first technology titan to disintermediate the cloud and make approachable and easily-managed personal private networks a thing. Microsoft's legacy of focusing on desktop computers would have made it well-situated to reaffirm your home computer as an important fixture in your multi-device life. They could have co-opted Sun's old bad tagline: "Your network is your computer." But they elected to just follow the now-conventional public cloud model, reducing everyone's quite-powerful home computer to yet another terminal of centralized cloud services. Disappointing, but I think it is ultimately their loss. I suspect a lot of money is on the table for someone to realize a coherent easy-to-use multi-device private network model that respects consumer privacy by executing its principal computation within the network.
>Where technology has failed us most is in the utterly stagnant evolution and maturation of secure private networks.
Not just secure private networks, but secure and programmable personal computing in general. The amount that I can actually do with my workstation PCs, let alone laptops or mobile phones, is now thoroughly restricted compared to problems that require a full-scale datacenter.
I originally enjoyed computing because, so to speak, it was an opportunity to own and craft my own tools, rather than being forced into the role of consuming someone else's pre-prepared product. Now we're being boxed into the consumer role in computing, too.
People at work keep asking me why I reinvent a few wheels here and there on my personal projects. "Why are you wasting your time with WebRTC? Why are you not using phaser.io?"
idk man. Computers are powerful. I like seeing what I can do with them.
And with UEFI-level code on mainboards and baseband software on phones, the era of "owning" a computer is basically over. All you can do is hope and trust that the manufacturer isn't co-opting your experience or data somehow. As someone who grew up hacking on a C64 in grade school, and never stopped, I find this utterly depressing.
Let me preface this by first saying that I absolutely can't wait to have my own personal home automation, AI assistant, etc. on prem without the cloud:
I think that as far as the nascence of these features goes, the cloud model will beat the on-prem features any day of the week for several reasons. Lack of configuration to set up, ease of use from anywhere without network configuration, etc. are table stakes. But the biggest at this point is the sheer amount of training and A/B testing data you can ingest to determine what is useful for your end users.
The velocity of cloud-based products is nothing short of amazing and I doubt that on-prem will compete with the feature set and ease of use of always connected solutions until there are feature-complete, mature cloud versions to then bring in.
As we just learned with Yahoo, though, once the ML models have been trained, they can be disseminated and used without the need for "cloud-scale" data or compute resources.
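As a generic illustration of that point (a sketch only, nothing to do with Yahoo's actual models): once training has been done wherever the data and compute live, the trained artifact itself is usually small enough to distribute and run on an ordinary personal machine. For example, with scikit-learn and a made-up file name:

```python
# Generic "train once, run anywhere" sketch. The heavy training happens
# wherever the data and compute live; the resulting model file is small
# enough to ship to and run on an ordinary personal machine.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib

# "Cloud side": train on whatever data is available and serialize the model.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "model.joblib")

# "Local side": load the shipped artifact and run inference with no network
# access and no cloud-scale compute at all.
local_model = joblib.load("model.joblib")
print(local_model.predict(X[:3]))
```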
And, for better or worse, Dragon's speech-to-text is pretty damned good after a rather minimal amount of training.
I don't think there's anything stopping voice and intent recognition from coming back to our personal machines other than the ability to keep making money from having it go up to the cloud.
The cloud is all about scale and only having to rent resources when you need them. If you have your home server you have to buy and maintain and pay for those resources at all times. When you make a quick cloud request you only "pay" for the resources you consume.
When I was working on Google Search what really astounded me is how we could leverage hundreds of machines in a single request and still have virtually no cost per search. The reason was that each search used a tiny amount of the total resources of those machines and for a very short time. A total search might have (made up numbers) one minute of computation time, but spread across 200 machines it only takes 300ms from start to finish.
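A toy sketch of that scatter-gather idea, with threads standing in for machines and the same made-up numbers: the total computation is still about a minute, but fanned out across 200 workers the wall-clock time is roughly 60s / 200 = 300ms:

```python
import concurrent.futures
import time

# Toy scatter-gather: 200 "shards" each do a small slice of the work in
# parallel, so ~60 seconds of total computation finishes in ~0.3 seconds
# of wall-clock time. Threads stand in for separate machines; the numbers
# are as made up as the ones above.
SHARDS = 200
WORK_PER_SHARD_S = 60 / SHARDS  # 0.3s each, ~60s of work in total

def search_shard(shard_id: int) -> str:
    time.sleep(WORK_PER_SHARD_S)  # pretend to scan this shard's documents
    return f"top hits from shard {shard_id}"

start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=SHARDS) as pool:
    results = list(pool.map(search_shard, range(SHARDS)))
print(f"{len(results)} shards answered in {time.time() - start:.2f}s wall-clock")
```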
That's the benefit the cloud will provide. You don't want to have a 1000-machine data center available at all times to store billions of possible documents and process your requests with low latency. If we went to a private-network model I fear that the turn-around time would be a lot closer to a human assistant's. You'd ask it to do things and then it would get back to you sometime later (seconds? minutes? hours?) when it had finished its research and come up with an answer.
> The idea that your own data would be stored online was almost unimaginable
Except that's what I did for many years, using a computer only as a terminal for an AIX mainframe. My mail was there, I browsed what there was of the web, used Gopher, wrote programs - all stored there.
On top of that, the cloud we have now is commercialized, opaque and constantly under pressure to comply with a government that many distrust, for what I would say very good reasons.
I would like to say that the cloud we have is a privacy concern because we don't know the full scope of data collected, nor what happens to it, nor do we own any of "our" data once it's in the cloud. But not every cloud would have to be that.
There's a perfect world where one wouldn't have to be paranoid about this stuff, but it's not what we have right now.
This right here. When I first heard about android, I imagined an open system I could tinker with like I can on my PC.
Instead I get the current evolution. I want a 3rd party.
The same thing applies to my everyday usage: I'm still on Win 7, and short of leaving for Linux (yes, I should have already), I can't upgrade without becoming a product.
I want what we all imagined and dreamt of, and I'd pay for it.
What I won't do is become a product.
Instead I'm stuck with multiple fake accounts on Gmail, using a pseudonym on everything (including programming contract sites such as Upwork), just to keep some small iota of privacy and to enjoy the benefits of what we all want.
IMHO we need a new major party to emerge that will charge an initial fee (like Windows 7) and let us do what we want with those services (with caveats, of course).
But my main coding (admittedly amateur and earning very little) uses .NET. On top of that, most of the games I play to relax are Windows-only (as far as I know, for most).
I keep meaning to make time, but it just hasn't happened yet. I don't get paid as a programmer (I'm an English teacher), so I need to spend my free time earning money with what I know.
As always, one day when I have money to spare (or time, which is basically the same thing, haha).
Without trying to look like an arse: why? I have been on PCs since DOS; I can hack my way around any PC.
I will fully admit I have not spent enough time on a Mac to figure out the file system, but from what (little) I have seen, you're not in control.
I like my PC because I can see what sub-processes are running, who is taking up how much memory, install things where I want, and, if worst comes to worst, manually change how Windows runs. (I apologise if you can do all that on a Mac; as I said, I don't have the experience certificate - I bounced off it hard.)
So go Hackintosh. At least you don't have to distrust your OS vendor.
To answer your questions, you're in complete control with macOS, you can turn off SIP, turn off Gatekeeper and install whatever kernel extensions you want. Apple doesn't snoop on you like with Win10 telemetry.
The problem with Mac is that one needs to have Apple hardware, you can't even change your RAM sticks to the ones that Apple didn't approve. There is nothing (apart from maybe some fancy Adobe software) that you can't run on Linux just as well or even better.
Maybe people could imagine a world where the adage "if you aren't paying for the product, you are the product" would be widely relevant, sure.
But not a world where that phrase would be irrelevant, simply because today if you're paying for the product, you're still the product.