Everyone involved in this article has an incentive to overstate the system's abilities: the creators, to win more work; the defense establishment, to create a deterrent; the journalists, to get clicks.
Given the baseline for government understanding of AI is poor, my prior on how impressive this thing is in reality (as some sort of AGI pathway breakthrough) is pretty low.
I would bet they have some large database and some good, but mostly conventional IT around it.
Having worked in the intel community (long ago), I suspect the practical purpose of Sentient is to replicate the mind of the intel analyst -- to select and fuse multiple intelligence sources into a coherent story that suggests an underlying human activity that would be of interest to investigate further or classify as actionable and then pass it along to an operational group like law enforcement or the military.
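If that's the right read, a crude way to picture that fusion step is a weighted scoring pass over per-source reports. A minimal Python sketch, purely hypothetical; the source names, weights, and threshold below are all invented for illustration, not anything from the article:

    # Hypothetical multi-INT fusion sketch -- not the actual Sentient design.
    # Source names, weights, and threshold are invented for illustration.
    REPORT_WEIGHTS = {"SIGINT": 0.5, "IMINT": 0.3, "HUMINT": 0.2}

    def fuse(reports):
        """Combine per-source confidences into one score for a hypothesized activity."""
        return sum(REPORT_WEIGHTS[source] * confidence for source, confidence in reports)

    reports = [("SIGINT", 0.9), ("IMINT", 0.7), ("HUMINT", 0.4)]
    if fuse(reports) > 0.6:  # invented review threshold
        print("Flag for analyst review / hand off to an operational group")

The real system is presumably far more elaborate, but "score the fused evidence, then escalate past a threshold" is the basic shape of what that analyst workflow automates.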
Another possibility, of course, is FUD. Promote a project as being more impactful than it really is so that your political critics are distracted by it and thus overlook your other projects that are more important, short-term, and real.
You do have to wonder: why would a secrecy-driven org like NRO give a project a meaningful, daunting name like "Sentient" unless you want outsiders to take an interest in the program? When you want a program to fly under the public radar, you name it something meaningless and innocuous like "BranMuffin" or "Spatula", not something meaningful and fearful like "KillerFlyingRobots".
Another FUD angle: make your state-actor counterparties think you're developing a technology that you're aware is massively time- and resource-consuming, but not meaningfully useful, or better yet, can be gamed into being less-than-useful, such that they burn cycles and resources in a wild goose chase.
(I suspect tech companies do this, in part, with the tech-fad treadmill bandwagon.)
I mean, this is the same NRO that made the infamous NROL-39 mission patch, with a sinister looking octopus reaching across the planet, caption reading "Nothing Is Beyond Our Reach."
Well, considering all the article talks about is ML, but calls it an "artificial brain", I think the government is par for the course with your average ML company marketer.
Nobody is close to creating actual artificial intelligence and we're probably getting close to the top of the hype cycle on the term.
I have to agree. All this "artificial brain" talk, when the reality is that ML is just a really fancy "y = ax + b; find the best values for a and b given this data".
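To make that point concrete, here's the "fancy y = ax + b" in a few lines of Python; a toy sketch with made-up data, using ordinary least squares:

    import numpy as np

    # Toy data: noisy samples of y = 2x + 1 (data made up for illustration)
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)

    # "Find the best values for a and b given this data" = least squares
    a, b = np.polyfit(x, y, deg=1)
    print(f"a ~ {a:.2f}, b ~ {b:.2f}")  # recovers roughly a = 2, b = 1

Modern ML swaps the line for a model with millions of parameters, but the fitting loop is the same idea.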
What's this you say, everything is a hyperbolic system of conservation laws? I'd be intrigued to know how you'd put Maxwell's equations in that form, or the Einstein field equations, for instance.
Looking around at many AI startups, the only differences between that description and the startups are the "good" part and the absence of an army of human contractors doing the labeling, and sometimes even the primary function, by hand.
Well described and true.
It's the way of the world. If they don't do it, what abilities they do have won't get funded. Right now, what other route exists to get funds, employees, and partners, given the competition for all three? Until better routes are discovered, everyone is in the game of mass broadcasting and overstating their abilities.
Polygraphs perform only marginally better than coin flips at determining whether a person is lying, yet they're still used throughout government to make security-clearance decisions. I would be careful assuming that this technology being ineffective will translate into it not becoming a linchpin in their organizations.
It sounds like a new version of SIOP, the Cold War program to produce a plan for nuclear war based on computer simulations incorporating data on assets, capabilities, etc. of both sides. It was the inspiration for WOPR from WarGames, though the real plan required humans to actually run the simulations and make command decisions.
That name is stupid. It's like aspirational marketing. It reminds me of the artificial intelligence from Team America: World Police, called INTELLIGENCE.
SIOP was a plan though - originally it didn't even have any conditionality:
"During the briefings, Marine Corps commandant David Shoup (the service with the most marginal nuclear responsibilities) saw a chart that showed that the initial attack would kill tens of millions of Chinese. At the closing meeting, General Shoup asked General Power what would happen if Beijing was not fighting; was there an option to leave Chinese targets out of the attack plan? Power was reported to have said that he hoped no one would think of that "because it would really screw up the plan"--that is, the plan was supposed to be executed as a whole. Apparently Shoup then observed that "any plan that kills millions of Chinese when it isn't even their war is not a good plan. This is not the American way."
Reminder: Way back when the AT&T 'secret rooms' that enabled the NSA to tap all traffic were revealed, we learned that the NSA relies heavily upon an extremely questionable legal opinion written by their own lawyers which says that communications do not count as 'collected' or 'intercepted' until a HUMAN operator reads the plaintext. That means no amount of automated processing, machine learning, statistical analysis, filtering, profiling, etc run on your communications or its metadata amounts to your communications being 'intercepted' as far as they are concerned. They know this legal opinion is dicey, and they will do absolutely anything, including dropping cases entirely, to avoid having it tested in court.
Everyone in this thread is talking about how it's fake, but this does have an actual implication for the dangers of AGI. If every world government is scrambling to pretend to have it, nobody will be able to tell when someone actually invents it.
That’s entering conspiracy territory. A government is just as likely to hide a massive revolution in AI as it would space flight with “alien technology”, which means extremely unlikely.
The simple fact is they’d need the best minds of the world working on it and as we saw with the Manhattan project the people working on it tended to have the best grasp of the implications and wouldn’t keep it secret for long. Plus it’d turn whatever country had it into an economic powerhouse which is way more valuable than some classified intelligence product.
Why is this a conspiracy theory, in the pejorative sense of the term? (Conspiracy theory doesn't mean false.)
The USAF has been pretty open about how alien conspiracies help them: they make it harder for enemies to distinguish fact from fiction. Sure, the Russians know the aircraft at Area 51 aren't alien, but it makes it harder to get good reports on those aircraft's capabilities. Op Sec is a major part of any agency; they have always tried to obscure information. Many agencies have both over- and under-reported their technological capabilities. This tactic has thousands of years of history and has been essential for all that time. Propaganda is part of this too.
At least I read the parent as talking about it in this manner: as Op Sec. Which of course it is. Anything a government agency officially releases is, in some form or another, Op Sec. That's not conspiracy. That's like saying that when corporations make public announcements they aren't trying to do something beneficial for the company, even when it's damage control.
Sure, we know they don't have AGI. But that's not what matters with Op Sec like this.
That's true today, but what about 30 or however many years from now, when AGI is only a few billion dollars away? Some government will do a Manhattan Project, and when word gets out nobody will be able to tell it apart from the thousands of buzzword-driven parasitic schemes that preceded it.
My understanding is that the world's preeminent ML researchers are refusing to contribute to the defense industry, are extremely well compensated in the private sector, and prefer to publish their work over hiding it as a trade or national secret. Given this, how are we to believe that the NRO has developed the world's most capable ML system, somehow years ahead of a field that is already perhaps the most dynamic field of research in science, utilizing, at best, second rate talent? Mark me a skeptic.
If you define "world's preeminent ML researchers" as people who publish then you are begging the question. Would you be aware of preeminent ML researchers that don't publish?
I think it is a mistake to underestimate the computer science talent inside parts of the US intelligence community; some of it is exceptionally good. Most people in Silicon Valley have never met or worked with these people. I once asked a former professor from a highly regarded CS school why he quit academia to work for one of the agencies. For him it was simple: he could spend his days doing nothing but hardcore non-incremental CS research while avoiding the politics, incrementalism, and other tendencies of academia. Pure quality of life doing meaningful work in a low-stress environment was more valuable to him than money or status. Many people do research solely for the challenge of solving hard problems; publishing is not important to them.
The inclusiveness and diversity of the work environment is also an overlooked attraction. The demographics don't look like your typical SV startup. Government organizations have many issues but providing equal opportunity for minorities is not one of them. Not everyone is comfortable working for tech companies.
There may be nothing to the article but don't write it off solely on the basis of presumed "second rate talent". There are research groups inside US intelligence that have world-class talent on par with the FAANG companies.
Not saying this is true - or even workable - but maybe they've figured out a way to apply so-called "second rate talent" to create "first rate results"?
For instance, have you ever researched how machining and machine tools came to be?
That is, how was it possible to create a machine capable of tolerances within, say, 10-thousandths of an inch (a really, really small amount, regardless of the units) when the machines and tooling prior to that were nowhere near capable of doing that work?
In other words - how were we able to make more accurate machines using less accurate machines?
Maybe the same principles are being utilized by the NRO for this project...
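One concrete version of that bootstrapping idea, offered only as an analogy (the classic machining answer involves things like the three-plate method): independent errors average out, so many readings from a crude instrument can pin down a value better than any single reading. A toy Python sketch with invented numbers:

    import numpy as np

    # Analogy only: independent errors average out, so many crude readings
    # can beat any single one (numbers invented for illustration).
    rng = np.random.default_rng(1)
    true_length = 1.0            # "true" part length, arbitrary units
    crude_sigma = 0.010          # each reading is only good to ~10 mil

    one_reading = rng.normal(true_length, crude_sigma)
    mean_of_400 = rng.normal(true_length, crude_sigma, size=400).mean()

    print(f"single reading error: {abs(one_reading - true_length):.5f}")
    print(f"mean-of-400 error:    {abs(mean_of_400 - true_length):.5f}")  # ~20x smaller on average

Whether anything like this applies to the NRO project is pure speculation, of course.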
I'm a mechanical engineer and I have literally no idea what you're getting at here. We can machine to 10mil because we have precision motors and measurement equipment. For $25 you can get a micrometer off Amazon which is accurate to .0001.
I had assumed from the title it was about AI rather than a proxy brain, but as someone with a damaged meat bag and miserable QOL as a result, I long for a transfer into an artificial body/brain where one can repair/replace parts and be as good as new. I don't imagine it's possible at all, really, let alone in the short time I have left, but one can dream.
I hope I am not being rude asking about this, apologies in advance: because I have two friends with MS I have wondered if consumer VR gear (Oculus Quest, etc.) could improve quality of life as they get reduced mobility. Have you used VR to augment your life experiences? I have tried a few VR experiences that are very good and it seems likely that really high quality VR with haptics could both help people with physical problems and in the future, provide fun experiences if we get downloaded to an artificial body/brain. Do you have any opinions on this?
Not rude at all, and happy to give input; however, my circumstances are different from those with MS/ALS etc., so I'm not sure how relevant it will be. I can still "technically" walk, move, and appear fairly normal outwardly; I am just in severe pain all the time, have ongoing damage to joints and nerves from the surgery that caused this, and my endurance and mental state have gone with it. It's a slowly creeping degeneration after a massive initial decrease, and isn't at all like those neurodegenerative conditions where brain function is failing rapidly. I have some other physical issues as well that compound this.
From a philosophical point of view for ME personally, VR/AR/the internet/gaming etc isn't an adequate substitute for "real life" and I personally don't derive enough from substitutes to feel my quality of life is good, it's more just clinging to SOME interaction as I do here on HN. However I am sure for some people it is enough to make a significant impact and I find it a worthy avenue to pursue.
From a physical point of view, my circumstances make even the above things worse, as I have some visual issues that cause severe headaches/eyestrain with most modern display types for some yet-undetermined reason. It's none of the obvious ones like PWM/blue light etc. So I am in a real corner-case spot with tech, clinging to a super old device or two that aren't long for this world. I also have monovision and severe amblyopia, so VR headsets don't work even if I could stand the display tech. I can stand a few minutes here and there to post a comment, but I cannot spend hours online or on screen anymore. A perfectly ironic hit, since that was about the only earning potential left, and it's already cost me the one real shot I've had in years. So often people are facing multiple disabilities from multiple angles.
I hope it's helpful, although I doubt I said much of substance.
tl;dr It's an automated image classification system ("Tank division identified"), with some ability to identify and predict movement of said objects ("Tank division likely moving east"). Not sure how the Verge author jumped from predictive defense analytics to "it's a brain!"
I'm also highly skeptical of this system's predictive abilities. I recall a similar system (also described as "modeled after the human brain," whatever that means) from my time at a major defense contractor. It tried to predict the movements of the enemy and feelings of the non-combatant population via scrapings of news sites, social media, and other online sources. Never mind that the target battlefield was Afghanistan, where Internet adoption isn't quite 100%.
Agreed. Consider the year of Sentient's inception: 2013. That's one year after Krizhevsky, Sutskever and Hinton first revealed the power of CNNs to classify images -- something the NRO cares about a great deal since their primary product is satellite imagery.
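For readers who haven't followed that history: a CNN is just stacked convolution and pooling layers feeding a classifier. A minimal PyTorch sketch of that kind of image classifier (layer sizes, input shape, and the two-class output are invented for illustration, and have nothing to do with the actual NRO system):

    import torch
    import torch.nn as nn

    # Minimal CNN image classifier sketch -- illustrative only.
    class TinyCNN(nn.Module):
        def __init__(self, num_classes=2):  # e.g. "tank column" vs "background"
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x):  # x: (batch, 3, 64, 64) image tiles
            h = self.features(x)
            return self.classifier(h.flatten(1))

    logits = TinyCNN()(torch.randn(1, 3, 64, 64))  # random stand-in "image"

Train something like that on labeled satellite tiles and you get the "Tank division identified" classifier described above; no sentience required.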
Technically, the government doesn't really build all that much itself; it's the private contractors and firms they hire who do it.
But there's a fun story about how a hundred or so Marines got greenlit to buy 3D printers and start building a wide range of stuff for themselves. After making new crayon flavors, they went on to build extremely cheap gear-delivery drones with something like 700 lbs of capacity, and a whole slew of other stuff, without red tape. It's actually impressive.
Google "marine 3d print" for cool articles and stories.
Look up "marine plywood drone". They had to do it with a firm due to government contracting laws, but from what I understand it's mostly designed/built by Marines (word of mouth). More and more stories like this are popping up. The sad part: big contractors cry foul over Marines making repair parts and their own gear. From what I understand, that's why they need other firms to "take ownership" of projects.
There's a cool 3D-printed barracks project you can Google too.
Keep in mind that America typically does not reveal such things until she has already invented the next generation. I would assume that since this is out, they've got the next version up and running already.
If I were the editor of this publication, I'd argue that, while titles like this:
"IT’S SENTIENT Meet the classified artificial brain being developed by US intelligence programs"
might get short-term attention, they are bad for the long-term credibility of the publication. "SENTIENT" is a clever code name for the project, but I didn't see anything to suggest that the program in question had anything to do with sentience. It seems like a program to synthesize and present data from a wide array of sources, and that's neat, but why pitch it as something it clearly isn't?
Warren Buffett once said you can have a ballet and that's fine. You can have a rock concert and that's fine. But don't have a ballet and market it as a rock concert. If you want to write a story about software that presents data insights, please label it as such and I'll be interested in reading it. Don't label it as a story about artificial brain sentience because then I'll complain in the comments.
I have to agree with you on this one. I too thought they were trying to sell us on a computer being sentient, but "Sentient" is just the code name for the "brain" they are building. Almost clickbaity.
The most successful clickbait is the clickbait you aren't quite comfortable to label as such but it's really close to the line. So, mission accomplished for the publisher I guess.