As an American, my perception is that Europeans work to live while those of us in the States live to work; the highest crime in America is hurting the rich, and the highest crime in Europe is hurting the working class.
The NIST Cybersecurity Framework for small businesses is a great starting point for developing and nurturing a healthy cybersecurity program in a startup.
At my company, the big thing that I think would help is if people would stop treating data engineers as if they are a cheap way to get both software engineering + data analytics in one package. They’re not. Data engineers are brilliant at data things like writing complex pipelines and wicked queries, but they just aren’t as good at writing secure code. Meanwhile, software engineers are the opposite.
Data engineering teams need at least part of their team to be designated software or systems engineers that are experts in stuff like how to create a robust cloud environment. This is not only beneficial for security, but also for cost optimization, performance, operations, etc! It drives me crazy when I see a data engineer or marketing team told they have to fend for themselves when trying to figure out how to build a cloud environment.
In theory, there are documents out there a startup can read, understand, and follow to get reasonable outcomes. Theory is a wonderful place, I'm told. I'd like to visit someday.
In practice, the answer is to hire a consultancy. There are fractional CISO services out there. It's very likely that a startup without significant security experience in-house does not have the expertise to make good use of any guidance they can find. Every scenario is too specific, every personality dynamic too unique, and every new technology question too novel for there to be easy pre-fab answers that a non-security-practitioner can find, understand, and apply in a correct and consistent manner.
The sad thing is... I don't think most people defending Apple in these topics actually own a significant amount of its stock directly. And they're actually undermining the strength of their own portfolios by undermining the strength and resilience of the US economy through the erosion of free-market competition and market stagnation via monopolies.
There is another type of stock: immaterial social capital. As long as Apple products are status-symbols, owners gain social relevance from their ostentatious use. This is why brands promote themselves way beyond what is necessary to just sell widgets: to build identities that people will invest in, binding themselves into enough social stock that they will feel compelled to campaign for a brand just to protect that investment.
I don't disagree, Apple's brand was built before the current shenanigans were even possible. "Think different", am I right? I'm just saying that the previous investment in brand-building can now be leveraged into defending the indefensible.
Kind of ironic that social capital on HN is earned by being pro-regulation and anti-big-tech, isn't it? Your point holds, but there is more than a little irony in the post.
I think it just shows that large tech companies have lost all credibility even among early-adopting, tech-positive nerds.
New technology, unbridled, inevitably reaches a point where its negatives become clear to society at large. The printing press is regulated, cars are regulated, nuclear energy is regulated - because society recognized that we can't just let anyone build reactors in their sheds. Internet tech has probably reached that point.
Yup, again, yet another thread that devolves into Android people doing the 'brainless Apple sheeple' meme. It's fucking offensive, and it happens in literally every thread that mentions Apple. I wish dang would crack down - this is not good discourse, and it's not making HN a better place.
There is the same problem with NVIDIA: all NVIDIA threads eventually devolve into "people too dumb to buy AMD like me" as well. But with Apple there isn't even a slope - it goes right into it from the outset.
We have just utterly normalized this sort of conduct from some of these fanboys to the point where it doesn't even register with most people. We just have taken it as the inherent nature of Android fans to be offensive like this.
Perhaps it is their nature. People with poor social skills self-selecting to the nerdy phone, etc. I have been leaning more and more to this theory after seeing it in thread after thread after thread - people simply cannot restrain themselves even here on HN. But oh gosh you can't say those things back! how uncivil! how dare I soil our sacred discourse of "brainless sheeple buying it for the blue bubbles" etc.
But there's no reason for the rest of us to tolerate it. It's shitty discourse. But it's so utterly normalized that everyone just shrugs and looks past it.
I don't own stock in any of the companies mentioned in the article, although I do own tech stocks.
And I'm not suggesting I have a relationship with any of them, parasocial or otherwise.
I wasn't trying to suggest we treat them like people or put them above us or give them a pass... I simply meant that getting rid of them completely could possibly eliminate the problem being discussed, but would also throw out a lot of value that they created and I didn't think that was the best solution.
With the clusterfuck that has been generative AI (from OpenAI’s corporate drama to Google renaming and reörganizing their products every fifteen minutes) this seems prescient, with the only savvy player so far being Microsoft.
I agree. The only two examples I've seen of embedded AI that "works" are Bard in search results and generative AI in Adobe's products. Everything else feels tacked on.
Edit: And on my iPhone, the offline photo categorization and image OCR
They are oversaturating their products with internet-based LLMs. It is a desperate attempt to milk all possible potential value from their smart investment.
A careful plan for a product would be less hamfisted and include more flexibility to deal with the backlash.
MSFT's main benefit is their chokehold on corporate software. They can command high prices, favorable deals, etc. through the sheer force of Office and Azure. Even Amazon does not have that.
This is true, but most big AI companies are the ones selling shovels, and tons of money is pouring into the field because everyone wants to get in on shovel selling... but I don't know if it's yet determined that there are enough motivated shovel buyers.
Microsoft haven't really taken any deep interest in (say) the Office suite in decades, so now they have no intuition at all for where its value is - thus the "umm, copilot?"-ing everywhere.
Yeah, it's kind of insane how many times Apple can pull this trick, yet we still get people saying "they're late, they look like morons, Apple is finished".
Apple can arrive last to a product market. They can take six years of iterative releases to refine their vision on a product market. They will still dominate that market. Cue the "they can't keep getting away with this" meme, because this happens with EVERY PRODUCT THEY RELEASE and these people still keep thinking this time will be different.
The whole "Siri sucks" thing is also hilarious, because you have to ask: So? So what? Apple, Google, and Amazon invested billions upon billions into these systems (Amazon especially). Then LLMs came around and are eating their lunch ten times faster. Apple, again, looks like a genius (intentionally, or far more likely, not). They didn't over-invest. They're not laying off a thousand people from the Alexa division [1], or removing a ton of Google Assistant features [2], or releasing hardware no one is buying. They built exactly enough of a voice assistant to be competitive throughout the 2010s, and now it's time for the next generation of all these things anyway.
The issue here - although moving second makes it easier - is that Apple doesn't seem to really have a software culture, and this is pure software in multiple respects.
Possibly. On the consumer side, they’re in their own niches. What will get interesting is how Apple lets developers hook into the on-device AI, or one that’s running on its own metal.
> Apple's biggest successes have come from being the first mover in a brand new space.
I would say that only one of the examples you gave was unambiguously the first mover in a brand new space. I will give you "category defining", though.
For example, the iPod had tons of competitors already in the field when it launched.
Airpods were not even close to the first wireless earbuds.
One of the Apple Watch's major competitors (fitbit) launched 8 years prior. The first smartwatch that could sync with a computer came out in the 80s.
The iPad came like a decade after Microsoft's first major tablet push. ATT and Sony/Magicap and Apple all released "smart tablets" in the early 90s.
The iPhone was not the first capacitive touch screen smartphone, and certainly not the first smartphone - over a decade late to that game.
The Macintosh was (sort of) a sequel to Apple's own Lisa, which itself was also not a first mover. The Mac was incredibly innovative and successful, but was preceded by the LISA, PERQ, Alto, various Lisp Machines.
> In fact Apple is terrible at throwing its hat into an already crowded space, and doubly so when it comes to software.
The iPhone basically defined the category of multi-touch screen devices. It essentially laid the foundation for where all mobile phones went. It was a completely new consumer category of devices.
Apple Watch was a success because it used iPhone as a moat. iPad was built upon iPhone's foundation.
Back in 2007 it was not seen as a completely new category or truly original. It was a variation within an existing category. At the time we did not think it was revolutionary, but of course it became the new standard.
Before they became an "iPhone company" they were an "iPod company", and that was also an existing category when it launched.
These are all classes of device where existing options hashed out many of the growing pains before Apple released something more polished or attractive to buyers - the definition of second-mover.
You think the space was not crowded when the iPhone came out? You think they defined the category? Jobs himself put up a number of smartphones in his 2007 presentation. Yes, the iPhone was far, very far better, but it was definitely not a first.
Same thing with the iPod vs. the Diamond Rio MP3 players.
As for the Watch, gosh, I do not even know where to start. The Pebble Kickstarter two years before that? Two generations of the Samsung Galaxy Gear came out well before the Apple Watch.
"No handset polarised opinions during 2007 more than the Apple iPhone. Although it has many good points, the list of bad points is equally impressive. The iPhone lacks 3G, the camera is only two megapixels and lacks autofocus and flash, you cannot send MMS messages, third party applications are not allowed, the battery is not replaceable and it is absurdly expensive."
Look at a picture of what the top 10 smartphones looked like the day before iPhone launched and then again a few years later. That is what category defining means.
They didn't take whatever was out there in the market and copy it/make it incrementally better. They started from scratch and built something drastically different and better than the rest. Same for iPod (yes there were plenty of cheap MP3 players out there, but none of them were comparable), Airpods and all the rest.
Literally EVERY single example you listed was a market that already existed before Apple entered it (except maybe the Mac, but that was so long ago, who cares). MP3 players existed before the iPod. Smartphones existed before the iPhone. Wireless earbuds existed before Airpods. Tablets existed before the iPad. Smartwatches existed before the Apple Watch. VR goggles existed before Vision. Smart rings existed before the Apple Ring (just wait, it's coming).
Their skill isn't in being a first mover. Their skill is being a second, or even last, mover into a space that has untapped potential, and unlocking that potential (for both their benefit and competitors).
But what's it for? I can tell what an electric car is for, can you tell me what "generative AI" is for? A car can transport goods and people, really good non-deterministic typeahead is just.. well, I don't know what I'd use it for, do you?
EDIT: what I really mean is what makes people think this is a commercially viable thing to spend time and money on? Like, say one of these companies hits some magic jackpot and discovers "AGI".. then what? Is that worth money, somehow?
Does a computer understanding me make it better? I find that attempts the computer makes to understand me, "delight" me, etc. just end up pissing me off. It's a tool. All I ever want a tool to do is be completely invisible and become an extension of my body, which enables me to get a task done. Computer software which does anything other than exactly what I tell it to fails at this, because it instantly breaks my connection to the task I'm trying to do and refocuses my attention on the software itself.
I wouldn't dream of trying to use a Siri, it sounds absolutely maddening. All I expect is that when I press a key on the keyboard the character I commanded with my key press shows up on the screen before I can blink, and does so exactly once.
That's great, and I don't begrudge you, but most people want to be able to tell their computer what to do and not need to understand the discrete steps it took to get there.
Taking a completely different type of example: image editing. Let's say you ask your computer to remove a blemish in a photo. A professional could remove it, maybe even better, without AI. They know the tools to use and the keys to press to effect the change. Regular people don't give a crap about that; they want to circle the item (or otherwise identify it) and click "remove." When the computer removes the selected item they're happy, and generative AI is working on THAT type of solution.
It's not here yet, so yes you're right that Siri IS maddening to use.
> Regular people don't give a crap about that, they want to circle the item (or otherwise identify it) and click "remove." When the computer removes the selected item they're happy, and generative AI is working on THAT type of solution.
This feels dangerously close to a lack of empathy for the user. I understand that's not your intention, in fact the opposite. But in order to accept the notion that users actually want an intelligent employee instead of a tool I have to believe that everyone truly wants to be a manager instead of an individual contributor. I don't believe it.
Take a simpler case, hammering in a nail. What I want from my hammer is for it to disappear and become an extension of my arm. I just want to hammer in the nail. I don't want to negotiate with the hammer about how it's going to strike the nail, all I want is to hit the nail. There's no amount of "clever" the hammer can be which will help. Cleverness can only hurt my user experience.
In your example, what recourse does the user have if the AI didn't do the job the way they wanted? Removing something from an image implies (probably? or maybe not?) that the void is "backfilled" somehow. What if they're not happy with the backfill job? Do they have to argue with the tool about it? Will the tool take their feedback well or will it become a fight?
I think, generally, giving users tools that scale like hammers is the way to go. A hammer in the hands of a skilled carpenter, blacksmith, or cobbler with 30yr experience is no different than the same hammer in the hands of a 2yo child learning to drive their first nail. But that hammer's utility will scale with that child's skill for their entire lifetime. There's no "beginner" vs "advanced" distinction. What makes us (as computer hammer builders) believe that we can distinguish between "beginner" or "advanced" computer hammers? Or "regular" vs "special" users?
EDIT: or maybe we're not building hammers, instead we're building dishwashers. Dishwasher users aren't supposed to be skilled beyond loading and unloading the dishwasher, and hitting the start button. Do "regular users" really want an appliance, or do they want a tool?
EDIT: another way to phrase it -- are computers "bicycles for the mind" or are they just a bus?
That's very true. I dislike how Apple etc. don't expose the manual controls for things, so when the smart tools stop working it gets frustrating because there's no manual way to continue.
I love watching my friends and family use Siri. Maybe 20% of the time it does what they want first try. 40% of the time they end up unlocking the phone and tapping the screen.
Sounds infuriating to me. (To be clear, I don't have any always-(maybe)-on mics in my life, I doubt Hey Google or Bard or whatever is much better.)
But none of those things are Generative AI, which is a big part of WHY they're infuriating to use.
I use Siri to add stuff to my grocery list and set timers. That's it. It's useful when I'm in the kitchen to just say what needs to go on this list instead of remembering to write it down later.
The day Siri or Google or whatever can make the corrections I mentioned in my earlier post, it will be vastly improved.
Even if all it did was improve Siri’s capability to understand requests and add the ability to ask clarifying questions with no other functional improvements, it’d make Siri vastly more useful.
This, but it is really exciting because for the first time, you can just tell your computer what to do. Not just a given set of tasks, but e.g. "go to my gym and book a slot with my personal trainer"; "contact Shauna and set up a meeting to talk about X, then book me tickets to get there".
Think about how much monkey-work we all do with our smartphones. We might look back in 10 years and laugh at the idea that we had to press buttons all the time.
I had this 15 years ago when my BlackBerry had a keyboard on it. It had buttons, and when you pressed them, the character you commanded went onto the screen. If they'd just put buttons on the phone instead of trying to draw fake ones on the screen, you wouldn't need a statistical model to make the keyboard work.
I can type significantly faster with GBoard, swipe or not, than I could on any physical phone keyboard ever. Blackberries, the G1, Droid OG. No way I'd ever take those over GBoard.
But iOS users don't really know what they're missing from GBoard, so.
I haven't owned an iPhone since the iPhone 4, I lean really heavily on autocorrect on my Pixel (is that using Gboard?). It's just an infuriating experience to me compared to physical buttons. I probably hit the backspace key at least three times as often and often when I try to type a backspace instead it comes out as an "l", "m", ".", or enter.
Most of the time I just wish I could plug my full sized keyboard into the phone, that would fix it completely most of the time (except, obviously, when I'm not near my desk).
An ideal compromise would be physical buttons on the device for when it's necessary and the ability to easily use my workstation's external keyboard (dock + switch maybe?) the rest of the time.
EDIT: Now that I think of it.. let me plug in a mouse too and give me a real OS (maybe in a container like you get on a Chromebook) and i can just replace my workstation with the docked phone. But then I would buy only half as many computers and wouldn't need all that GPU compute to train a bunch of statistical models so I guess that doesn't work for the computer companies.
I knew halfway through your comment that I was going to end up agreeing with where you were going. We're so close to having a decent Android tablet, with maybe a new Firefox for tablets, with USB-C DP out. 90% of the time I also would rather just be on a better device.
I'm sad there isn't more built around Android's AVF. I thought for sure, by now, we were going to have "Linux on Android" ala Crostini.
You can indeed just plug your full-size keyboard into your phone! You may need a USB-A to USB-C adapter, but your phone will happily support an external keyboard.
> But what's it for? I can tell what an electric car is for, can you tell me what "generative AI" is for? A car can transport goods and people, really good non-deterministic typeahead is just.. well, I don't know what I'd use it for, do you?
It doesn't even seem to matter anymore. The tail is fully wagging the dog. Wall Street doesn't really care what companies are doing with AI, how they are using it, or whether their use of AI is going to actually drive earnings. They just care that they are using it. If a company says "We're doing AI blah blah blah" that's enough: investors are happy and stock price goes up.
I think you're right, it'll be interesting to see whether the next AI winter brings a market crash with it. It would be one thing if it seemed like there was some commercial application beyond "neat nerd toy" but so far there just isn't that I'm aware of? That smells a lot like tulip bulbs.
If AGI is defined as a AI that can replace most humans at most tasks, it would be worth money if it's cheaper than paying humans. So instead of a Marvel film costing 100 million to make, if an AGI can do it for 30 million it's worth tens of millions of dollars. Of course society might eventually collapse from mass unemployment, but corporate owners would live like kings until that finally happens.
There are two possibilities. One is what the PC largely did: nobody really lost their jobs, even though one accountant can now do the work of 40 who just had calculators. We can just do "more" now.
On the other hand there’s what the washing machine, mechanized farm equipment, etc. did. A slow shift in how many people are required to do a job. There were no absolute jobs lost, just a shift in the economy.
This comment is really odd to see on HN. It’s like if a group of computer enthusiasts (in person) had a guy saying “I don’t understand what the big deal is with this so-called internet.”
The Internet was (is) a totally transformative technology which has changed how people work, play, shop, and interact worldwide. Your claim is that generative AI will do this? How? I recall a few months ago the Web 3.0 people saying a similar thing. Is it different this time?
That's a really myopic observation. They're very very different. ChatGPT Pro helps me learn new concepts in new languages much much faster than I did in the past.
Who would that put out of business? Does that replace anyone's job function? It sounds like you're describing something like "really good search for Wikipedia" which to be clear I think is great, but who's gonna be replaced by that in their workplace?
EDIT: actually, I overcommitted a little bit with "really good Wikipedia search". I can rely on Wikipedia search to not invent stuff from whole cloth and try to pass it off as results.
Do you understand the concept of individual productivity? If you have 5 people working for you, and a new technology makes each 25% more productive, you can fire one of them.
The idea here is that it won't stop at 25%. Even if you were to accept this premise, maybe you're just thinking about chat gpt 3.5 or 4. But it really doesn't take a lot more imagination to think about what version 7 or 30 might do.
The same goes for the image/video generation models. Smaller production studios might forgo several artist hires and just generate the stuff they need. Large ones will have an enormous pool of unemployed creatives and won't have to pay them much at all.
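To make the arithmetic concrete (illustrative numbers only, nothing about any real team), here's the 5-people-at-25% example worked out:

    # 5 workers at baseline output vs. 4 workers each made 25% more productive
    baseline = 5 * 1.00   # 5.0 units of output
    with_tool = 4 * 1.25  # 5.0 units -- same output with one fewer person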
Has anyone been made measurably more productive with this stuff? Is even 25% achievable? I'm a software engineer, and I spend less than 25% of my time typing out code. So even if copilot could write every single line of code for me it could not improve my productivity by 25%. In order to make me 25% more productive it would also have to somehow speed up everything else I do at work as well. Has this been demonstrated?
Auto-completing function bodies with stack overflow content is cool! I'm not trying to say this technology isn't doing anything. It's clearly doing something "cool". But that doesn't necessarily actually make anyone more productive. That seems like an extraordinary claim (at least based on my own small experience working with it), so I'd expect to see some extraordinary evidence.
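Back-of-the-envelope, Amdahl's-law style (my framing, with illustrative numbers): a tool that only touches the coding fraction of your time has a hard ceiling on the overall gain, no matter how good it is.

    # If coding is a fraction f of total work time and a tool eliminates
    # ALL of it, time saved is at most f, and throughput improves by at
    # most f / (1 - f). The other (1 - f) of the job is untouched.
    for f in (0.10, 0.15, 0.20):
        print(f"f={f:.2f}: time saved <= {f:.0%}, max speedup = {f/(1-f):.0%}")
    # f=0.10 -> ~11% faster; f=0.15 -> ~18%; f=0.20 -> 25% at the ceiling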
Yes, I wrote a small app in kotlin to scratch an itch. Chat gpt 3.5 gave me code that in the past would have taken me weeks to figure out from the mountain of verbiage in Google docs for the ever changing APIs. Normally googling for this stuff just leads to loads of things that don't work and I have to figure out why. With chat gpt I just pasted whatever error and it gave me the right answer the second or third time. I now have my (private) app that does what I need. Without gpt it would have taken me several weeks (real time, it's a hobby).
I've seen new colleagues use co-pilot at work and it definitely increases the amount of stuff they can now figure out for themselves within a given time.
They actually did, but mainly with Google Play. So far it feels like Microsoft is moving faster in that regard, at least in landing AI partnerships with various automakers.
I would rather they focus obscene amounts of effort to making the keyboard text “correction” not utter trash. Ridiculous how frequently it will completely change the intention of my writing.
I think because if they don't, they'll be in the position of depending on someone else's platform to provide AI features for their products. This is just an analogy, but it would be like if you were an app developer, and instead of being able to control your distribution, you had to use someone's centralized store to sell everything, and then pay whatever they demanded, or be cut off at any time. Very dangerous!
I agree with your general sentiment, but for Apple I do believe it makes sense to focus on AGI, as they could incorporate it into their OSes and products to make them more productive for users.
Apple has a reputation to uphold, of being bleeding edge and having the best, latest, greatest tech. Their "AI" is a not-so-great scripted bot over a decade old at this point.
They are laughably behind the curve. Android should see widespread deployment of Gemini baked into the next generation of phones, and this could have a significant impact on Apple.
> Apple has a reputation to uphold, of being bleeding edge and having the best, latest, greatest tech
Their reputation is of being the best. The most polished. The most accessible.
It’s never been to be on the bleeding edge. Apple’s brand is that of the perfectionists. Even in their hackiest 80s lore, the elements that rise to myth are those about resourcefulness and design.
> Apple has a reputation to uphold, of being bleeding edge and having the best, latest, greatest tech.
Quite the opposite.
The iPod was panned by tech commentators; famously, "No wireless. Less space than a nomad. Lame."
The iPhone saw similar reactions; https://www.fastcompany.com/40436054/10-of-the-most-interest.... "There is nothing revolutionary or disruptive about any of the technologies."; "The real elephant in the room is the fact that I just spent $600 on my iPhone and it can’t do some crucial functions that even $50 handsets can."; "That virtual keyboard will be about as useful for tapping out emails and text messages as a rotary phone."
I can't imagine how apoplectic Gates was over the iPad's success after a decade of trying to make a Windows tablet sell.
They haven't released anything yet, so it's unfair to say how far behind they are. I don't expect them to catch up with OpenAI, but perhaps they could be on par with Google.
Apple's advantage has always been superior hardware and processing. My guess is that they try to do some on-device LLM. It's currently possible to run Mistral 7B on your phone (MLCChat app), which is quite decent for a small model but pretty terrible compared to the largest / best models.
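If anyone wants to poke at this on a laptop first, here's a minimal sketch using MLC LLM's Python engine (the same stack behind MLCChat, as I understand it; the API names are from my memory of the mlc_llm docs and the model id is illustrative, so treat both as assumptions):

    # Minimal sketch: stream a chat completion from a small quantized model
    # via MLC LLM's OpenAI-compatible Python API (assumed interface).
    from mlc_llm import MLCEngine

    model = "HF://mlc-ai/Mistral-7B-Instruct-v0.3-q4f16_1-MLC"  # illustrative id
    engine = MLCEngine(model)

    for chunk in engine.chat.completions.create(
        messages=[{"role": "user", "content": "In one line: why run an LLM on-device?"}],
        model=model,
        stream=True,
    ):
        for choice in chunk.choices:
            print(choice.delta.content or "", end="", flush=True)

    engine.terminate()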
Weird thing to say about the company that will have the definitive LLM once Pro 1.5 is available with its million-token context. I can't even imagine what Ultra 1.5 will bring.
You're kind of contradicting yourself. If a high token count tells you all you need to know about how smart a model is, then Ultra 1.5 would be just as good as Pro 1.5.
The fact of the matter is it remains to be seen how smart either model will be.
Could you point me to evidence that what you just wrote is true?
I just asked Mistral 7B to provide 5 sentences that end with "Apple." It couldn't do it. I then provided 5 examples generated by ChatGPT 4 and asked it to generate 5 more. It still couldn't do it. 10 ChatGPT examples - still couldn't do it.
You seem to be saying the models can generalize over the entire context size, and that I should keep providing examples up to the token limit because this will make the model smarter. Is there evidence of that?
As a former google fanboy (and current genAI enthusiast that keeps being disappointed at all the models that claim to rival gpt4 and then don't even beat gpt3.5), I'll believe it when I see it.
Do we actually know if they're far behind or if they just haven't released something publicly because it's not perfect?
I mean looking at Google and the various daily AI dramas they get, it seems like everyone else has rushed to market and is dealing with the negative fallout of that.
The idea guy in this scenario doesn't even understand the basic idea that you have to provide "consideration" for the assignment of rights/copyright. If he does find a person to build his software as a favor, he's got potential issues like an unjust enrichment claim or worse if he does find a way to monetize it.
If I somehow suffer the misfortune of interacting with someone like this I often stop the conversation and tell them to visit Fiverr. A half-baked idea coming from unseasoned people is not even going to make for a good consulting client. I tell them to hire a graphic artist to mock it up before they start with a programmer.
and that's exactly the justification Apple needs to squash this. I'm not saying you're wrong (you're not), just that I'm pessimistic on this project's future.
But why can you register an iMessage account with no email and no phone number? It doesn’t make any sense to me. Signal avoided that and that protocol is documented.