The complaints here about how the new macOS doesn't have anything exciting or interesting are weird. I thought the best thing about the new macOS (and iOS) is that it's supposed to be Apple focusing on stability rather than new features. Given all the bad things that have happened with macOS in the past year, I thought this would be appreciated.
High Sierra didn't have much that was notable or interesting either, and it proved to be the least stable release in a while by a damn sight.
Lately it seems like Apple has downshifted in competence all-around (at least on the Mac). I'll believe evidence to the contrary when I see it — not sooner.
I know that makes me seem like an HN Apple hater. And it pains me to say that. I've been a Mac fan since the mid-90s. I've tried to give them the benefit of the doubt for a few years, but at this point they've blown through that into deep red territory. It makes me sad :(
I would not call a brand new file system unremarkable. Maybe for average consumers it doesn't matter much, but we, of all people, should appreciate how big an endeavour a new file system is.
I certainly appreciated it when I went to install the beta this afternoon. Took two seconds to add a volume, as opposed to many minutes (and the risk of data loss) partitioning in the old days.
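For anyone who hasn't tried it, the whole operation is one command now (the container and volume names below are just examples; check "diskutil list" for yours):

    # Add a volume to an existing APFS container; space is shared,
    # so there's no repartitioning step and no resizing risk.
    diskutil apfs addVolume disk1 APFS Scratch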
In my case it resulted in total data loss, forcing me to do a fresh install; then within days the file system filled to capacity with Time Machine snapshots that couldn't be deleted due to a cyclic dependency. I had to figure out how to reinstall on HFS+ and don't plan on ever trying again.
It's a "brand new" file system with implementation details that are a disaster. Lots of apps - including Steam just don't work on it if you use case sensitivity. Which is just silly. I had to create a virtual disk and mount it in my install folder so steam could work. It's 2018.. my 3k macbookpro shouldn't have this problem.
Give me my damn physical escape key back! The Touch Bar is the silliest thing ever.
While I have many issues with the new file system as well, it is difficult to justify blaming it for Steam not running. Steam for Mac has never run on case-sensitive file systems since its inception, and Valve has continuously refused to fix this despite numerous user complaints. This is hardly Apple's fault.
I was saying this almost 10 years ago, with various applications blowing up on case-sensitive HFSX, and never really getting very good reasons why. And it wasn't just a few developers; they were all telling me case sensitivity on macOS is a problem. They'd get things working with one build, and a new dev build would bust their app, but only on a case-sensitive file system, not a case-insensitive one. So the fact that it's 2018 and this is still going on with APFS? shrug It doesn't surprise me one bit, even though I don't understand why it's still a problem. Two different file systems, same problem. It sounds like Apple just doesn't care to properly support or test case sensitivity; does their own test suite not catch problems in the real world?
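The failure mode is usually mundane. A hypothetical sketch (the path and mismatched filenames are invented for illustration):

    import Foundation

    // On disk the file is "Icon.PNG", but the code asks for "icon.png".
    // Case-insensitive HFS+/APFS resolves the lookup anyway; the
    // case-sensitive variants don't, so the bug only ever shows up
    // for users on case-sensitive volumes.
    let path = "/Applications/Example.app/Contents/Resources/icon.png"
    print(FileManager.default.fileExists(atPath: path) ? "found" : "missing")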
As much as I value the new file system (interestingly architected by the chap who did the BeFS, if only Apple had bought Be Inc!), it appears to still be lacking all the wonderful features of NTFS.
I'm still on Sierra at the moment, as I delayed upgrading until all the horrible bugs of the new release had been found - what's everyone's opinion, is now a good time to leapfrog High Sierra? Time will tell.
Anecdote is not data, but I had huge issues when I first updated, so I rolled back. About 6 weeks ago I bit the bullet and updated again and it's been fine.
Thank you - anecdotes help me, because I know only one other Mac user and he's not overly technical (his day job is Windows C++, so there's little Apple love around here).
I'll second that it's the least stable. It broke a whole load of stuff I needed for work. I eventually fixed it up (or found kludgy workarounds...) but I definitely resent encountering that immediately after updating.
So. The most important feature of this new release "designed for pros" that gets top placement on the announcement page is ... dark mode. The second is stacks. The third is the ports of iOS apps (including the long overdue Home).
I find it very hard to be excited about yet another "major" release that doesn't even qualify as a minor version bump.
I’ve been wishing for dark mode for all 6 years I’ve had my MacBook Pro, so I’m pretty happy with that alone. I also have next to no qualms with OS X in its current state so I’m pretty easy to please. I will say that the hardware quality on the other hand has taken a nosedive lately.
It looks like dark mode is not a global option like on Linux, but rather needs to be adopted by individual apps. That means when opening old apps at 2AM, my eyes are really gonna hurt...
It will depend on which APIs the app is built with. I don't know which desktop environment you are referring to, but I assume you mean one specific DE, not all Linux-based operating systems. But in that Linux DE, if I run xeyes, will it come up in dark mode?
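On the macOS side it does come down to one API. A minimal sketch, assuming the 10.14 SDK's NSAppearance names:

    import AppKit

    // Apps linked against the 10.14 SDK inherit the system appearance,
    // but they can also pin one explicitly.
    NSApp.appearance = NSAppearance(named: .darkAqua) // force dark mode
    // NSApp.appearance = nil                         // follow the system setting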
It's not designed for pros. Linux is. Apple is about ease of use, and they benefit hugely from having Linux underneath, which is why real pros even consider them. It looks nice, and that's what people like.
It's a Mach microkernel and a BSD userland taken from FreeBSD, which coincided with them hiring the founder of FreeBSD into a release-management role. He's since left.
The fact that you don't know this suggests you might not be aware of macOS fundamentals, the history of OS X or the Mac OS that preceded it, the design decisions that went into all of those, the user groups they targeted at key points (including the adoption of the FreeBSD userland), or their overall design intent.
I therefore struggle to agree with your premise that it's "not designed for pros", or that you are qualified to make that assertion.
Their definition of 'Pro' stretches much wider than developers. Traditionally, their Pro Apps were Aperture (raw editing), Logic (audio workstation) and Final Cut (video editing). Of course, the Adobe suite fits that moniker as well. For most people doing video/audio/photo editing, macOS and Windows are the primary choices (though you can obviously do this on Linux as well).
Note: macOS is not using Linux underneath. It uses the XNU kernel, which is based on Mach and BSD.
For the longest time, MacOS was the OS for professional print, graphic design, sound design and editing, and video editing.
When MacOS moved to a BSD-based system, it won over a large portion of software development. Go to almost any IT conference, you'll see people with MacBooks running MacOS (some run *nixes on MacBooks because of hardware).
In the past 4 to 5 years though Apple has let MacOS more or less stagnate.
But that's exactly how they marketed Snow Leopard (Mac OS X 10.6). They even said it would have "no new features", instead focusing on making it faster and more reliable.
Fair enough, Snow Leopard marketing was before my time noticing Apple's existence. I've just noticed that in general they never admit that they've done anything wrong or made any mistakes, and it seems hard to market a release for stability without somehow admitting to existing mistakes.
They joyously announced zero new features for Snow Leopard over Leopard and it has been known as the most excellent release of OSX, ever (before the iOS-ification of the system and deep iCloud integration).
SJ had plenty of moments where he admitted mistakes, usually alongside the new solution. IMO he was much more engaged with user feedback than Cook/Ive/Federighi.
I was hoping they were going to remove some features that didn't work out - Launchpad for example, which seems to have been a spasm in macOS a couple years ago to make macOS more suitable for a touch screen that never came.
My understanding is that a lot of users use Launchpad (possibly all those who haven't dragged an Applications folder to their dock and don't know they can just type the name of an app to run it via Spotlight).
It's somewhat useful on a laptop, where it's easily invoked by a trackpad gesture — I've seen a few users who consider it to be where all their apps 'are'.
I for one am extremely happy if they work on stability rather than gimmicks (or worse, breaking my workflow with pointless changes for the sake of change).
The problem is, it's difficult to get excited in advance about stability that is being promised now but can only be verified months from now.
But I don't care really. Give me an OS which works and doesn't get in my way. Something without ridiculous amounts of telemetry, please.
Did they announce all the fixes? If not, then I'd assume that's why people aren't focusing on them.
I too would appreciate stability and fixes. I know they are listed in the notes for each release. Still, if that really was the focus, it needs to be communicated.
Note: I didn't watch the announcements, so if they did emphasize that, great!
I think no matter what the focus is, they’re never gonna spend time during highly publicized events highlighting bugs in previous versions that they’ve fixed!
Seems like some people still have this ongoing "war" concept in their minds between the major desktop OSes. They demand breathtaking features that will make one OS stand out when placed side by side with the competition, and all they're getting are gimmicks and changes for the sake of change. I don't find that surprising either; for more than a decade OSes have been feature complete, and you can do your everyday chores no matter whether it's Windows, OSX or any *nix with the DE of your choice - the difference is in the style each of them offers. So it's hard to add something really meaningful that would change the workflow, especially when the majority of people are happier with mobile environments.
I'd disagree. I can think of multiple features that only OSes can improve upon, not apps.
One is creating a common platform for apps to share data - live data - with one another. By live data, I mean editing a spreadsheet in Excel and seeing it change a chart in Photoshop instantaneously. Not after saving the file. Not after hitting a refresh button somewhere.
I know, this has been possible for decades, but there are no apps taking advantage of it, mainly because these things need a common platform with well-defined standards.
This is something that OSes are best positioned to implement, but none do.
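To make the idea concrete, here's a minimal sketch built on plumbing macOS already ships (distributed notifications); the notification name and payload are invented for illustration, and a real platform would need far richer standards than this:

    import Foundation

    let center = DistributedNotificationCenter.default()
    let changed = Notification.Name("com.example.sheet-changed") // hypothetical

    // Consumer (the charting app) re-renders on every change - no save,
    // no refresh button. Keep the token alive to stay subscribed.
    let token = center.addObserver(forName: changed, object: nil, queue: .main) { note in
        print("redraw chart with", note.userInfo ?? [:])
    }

    // Producer (the spreadsheet) announces a live edit as it happens.
    center.postNotificationName(changed, object: nil,
                                userInfo: ["cell": "B2", "value": 42],
                                deliverImmediately: true)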
Maybe because it doesn't actually mean anything. It's like those random numbers they pull out... 40% faster app starts, etc... In practice, you won't even think about it in real usage. They take some part of the startup procedure and load it afterwards in the background so it seems the app is faster. It's smoke and mirrors.
I am guessing you have some specific examples of that happening in mind? Otherwise, you’re just spreading FUD, not to mention insulting the developers working there.
PS: I am pretty sure caching, pre-compiling, etc. are perfectly reasonable optimization techniques that many of us rely on.
WWDC 2018: from underwhelming to boring. I will lose my mind if I hear *oji one more time. Will they ever stop with this nonsense?
Favicons in Safari? It took them 6 years to make the browser remember the zoom setting per site; now we get the luxury of favicons.
Wait for the Platforms State Of The Union video to become available. That's usually where the good stuff is. The keynote is often more consumer oriented to grab some media headlines.
Always interesting to see the divergence between HN and what the average user cares about. For most people emojis (and animojis) have opened a whole new way to communicate with each other. I can't think of another linguistic feature in history which saw such widespread use within a decade and I feel like emojis don't get enough credit for that.
Agreed -- as a Deaf person who uses visual languages to communicate, it's allowed me to actually begin to express myself using a set of pseudo-language visual elements. Some of them roughly map over to ASL expressions; I am looking forward to when they add ASL-specific emoji, or ASL language support, or even hands in the new animoji.
That's... actually pretty extraordinary. In my cynicism I had imagined that the only possible motivation for positioning new emoji as flagship features of their iOS iterations was a blatant attempt to lure pre-teens into a long (and lucrative) journey into Apple's ecosystem. I think perhaps I should take a few steps back and absorb the idea that:
1. people communicate in many different ways and
2. these things aren't valueless just because I don't picture myself using them. And in retrospect, I only struggle to picture myself authoring one; receiving them is completely fine.
I'm really curious about this; thanks for adding this and illuminating so many of us who fail to consider deaf people when thinking about these things.
But I'm not sure I totally understand what Animoji does for you. Isn't your deafness irrelevant in the context of messaging apps? Hearing people, when using Messages, are limited to writing and reading, just like you.
Is it that, being deaf, you are more used to putting extra emphasis on non-verbal communication, and thus the transition to Messages from "real life" conversation feels more limiting than it does for us? How is it any better than sending a short video, or maybe recording a gif of your actual face? Doesn't the loss in fidelity make it frustratingly hard to express the nuances we get from facial expressions?
I tend to think all of those grandiose statements about "opening new ways of communicating" or "creating new connections between people" are total bullshit. Most animoji users communicate equally well with or without them; it adds nothing except fun. And that's fine! Fun is good.
But it's really hard to put oneself in anyone else's shoes, and I'd love to know if anyone has found more value in animojis.
There's no way to say this without being insensitive, but this reminds me of a grade school joke without a punchline - "how do you write in sign-language?"
Can you help me better understand what ASL language support would look like or what that would mean to you?
I'm not deaf, but as per the GP's comment, I can immediately see animojis with hands performing sign gestures.
If you consider how ingrained and emotionally significant the expressions in your native language are, animojis with hands - which will take that emotional significance and put a cute spin on it - are going to be massive.
ASL (or any language's sign language) isn't a simple translation of gestures for words or letters; the different sign languages have their own grammars and a different "vocabulary". Think of every sign language user as being bilingual, with a sign language and a written language.
ASL is not English. It is an entirely separate visual language with different syntax, lexicon, etc. ASL speakers can communicate to each other over mobile devices using text in the same way that, say, 17th century intellectuals communicated using Latin, or late-dynasty Chinese officials communicated using Classical Chinese.
I do not know how to sign, but I do imagine that a member of the signing community would not feel truly at home in their digital life in that they cannot type the language that they "speak" and very probably think in, but must rather resort to a second auxiliary language whenever they interact with text.
Like, imagine if Apple (or Google or anyone else--this is an industry-wide issue) made it technically impossible for you to communicate in anything except French. In this hypothetical world it is not a show-stopping issue, because you are fully proficient in French, having used it in some way nearly every day of your life, and so are all the people you would want to communicate with. But it's not your mother tongue, and so you wouldn't really feel at home or fully included in the digital world, now would you? Texting your family and close friends in French when in fact all your other interactions with them are in spoken English would just be weird.
(Incidentally, I do wonder if Swiss German or Scots speakers feel similarly, and if they don't, to what extent that serves as a counterpoint.)
>(Incidentally, I do wonder if Swiss German or Scots speakers feel similarly, and if they don't, to what extent that serves as a counterpoint.)
I don't know about Swiss German speakers, but Scots speakers tend to be fairly comfortable in code-switching between standard written English and a transliterated form of Scots.
Scottish Twitter is as culturally distinctive as African American Twitter.
Swiss-German sign language is... interesting. It has cantonal dialects (I wish I were joking!). I'd really prefer everyone switched to German sign language for simplicity's sake - in the same way most Swiss-German speakers write High German instead of dialect.
I could imagine signing emoji being pretty successful. My son is deaf, but too young to be using chat software, so it's difficult to say without asking around at his school.
I would imagine that ASL speakers have usage patterns that they would like to express via text messaging without simply “translating” them into formal written English. That’s no different from why English speakers use emoji (a smiley instead of “I am happy”) or informal onomatopoeia (“uggh” instead of “I am annoyed”).
If you're not entirely without hearing, you may be interested in knowing that the iOS 12 beta appears to have a new bit of functionality that lets EarPods act as hearing aids.
I suffer from social anxiety. There were a lot of times in the past when I would receive a text and fail to reply because I got stuck trying to figure out how to express myself "correctly". Emojis and the normalization of emoji-heavy texts help me a lot.
I don't mean this to sound in any way cruel or judgemental, but a very large proportion of the population have very limited literacy skills. Emoji are useful for all users who are writing short, personal messages that might be ambiguous in tone. They are extremely useful for people who would otherwise struggle to express or understand tone and emotion using the written word.
In the last National Assessment of Adult Literacy, 43% of Americans were assessed as having "basic or below basic" literacy. They can extract basic factual information from short, straightforward texts, but little more than that.
Here are a couple of example questions from that test.
Only 33% of Americans could describe what is expressed in the following poem:
"The pedigree of honey
Does not concern the Bee -
A clover, any time, to him
Is Aristocracy"
Either a literal or thematic description of the poem constitutes an acceptable answer.
Another example: after reading a short text about the Se Habla Español expo, only 16% of Americans could describe the expo's purpose.
Acceptable answers include any statement such as the following: "to enable people to better serve and sell to the Hispanic community", "to improve marketing strategies to the Hispanic community" and "to enable people to establish contacts to serve the Hispanic community".
Did you get the right answer? 84% of Americans didn't. Bear that in mind when you're writing documentation or dialog boxes.
> In the last National Assessment of Adult Literacy, 43% of Americans were assessed as having "basic or below basic" literacy. They can extract basic factual information from short, straightforward texts, but little more than that.
That's intentionally misleading and it's thrown around frequently without clarification of what the basic and below basic levels exactly mean, how they compare to the rest of the world, and who is in the figures (a lot of non-English speaking immigrants), usually to try to prove points.
The US basic literacy level is a high bar compared to what 95% of the planet actually tests at. Over half of China is below basic by the US standard. Over half of Eastern Europe is below the US basic line, including Russia.
In the US, ~44% of the below-basic population are non-native English speakers who didn't speak English at all prior to starting school. 39% are Hispanic adults. I.e., this group overwhelmingly consists of currently or originally low-skill, poor immigrants (people who wouldn't even be allowed into most other developed nations, such as Canada).
Demonstrating that effect in action, 43% of Hispanic adults test poorly in literacy, compared to about 10% of white adults. Gee, I wonder if immigration into a new culture plus a language barrier has something to do with these numbers.
Despite a vast immigration flow of low-skill, poor, low-English-literacy persons since 1980, the US literacy rate didn't drop meaningfully. That means literacy rates for the base population increased.
Despite all of that, the US is the 7th most literate nation on earth, in front of: Canada, Germany, the Netherlands, France, New Zealand, Belgium, Israel, South Korea, Italy, Ireland, Russia.
> The US basic literacy level is a high bar compared to what 95% of the planet actually tests at
Americans are well educated relative to the global population. That isn't what we're discussing. OP is explaining why large swaths of the population might prefer communicating with pictures over words. It isn't that they can't understand words. Just that parsing and constructing language to express complex thoughts isn't a common experience for many, for whatever reason. Emojis fill that gap.
My comment was not intended as a critique of the American education system. Immigrants buy phones and computers. They run businesses and use SaaS products. Non-native English speakers are an important demographic that we need to keep in mind when we are designing products and writing documentation.
In a globalised world, a great many people are frequently communicating in a language that they have not fully mastered. South Africa has eleven official languages. India has 22. Globally, non-native English speakers outnumber native speakers by two-to-one. Hindi/Urdu has a roughly equal number of first and second language users.
What's the proper set of answers for the Dickinson passage? Does any interpretation count as correct?
(I know this thread isn't about the methodology of literacy assessment, but now I'm really curious to know how they do it. Does publicly available question-level response data exist somewhere out there? from previous years?)
I would be interested to find out whether things were that bad in the 50s or 70s. It does feel like the intro of the movie Idiocracy is happening. Even among sophisticated people: when you watch interviews or speeches of public figures from the 30s or 50s, their spoken English (and this applies to other Western languages too, French for sure) was so much superior to even your typical written newspaper article today. Trump's speeches, made up of no more than 100 distinct words, are merely a dent in a downward curve.
"Whole new way" is a bit hyperbolic, but an emoji definitely brings layers of meaning that are normally only feasible in in-person conversation into the written realm. It would be pretty hard to express the sentiment the ¯\_(ツ)_/¯ emoji can, and even if you put the effort into writing it, you'd lose the immediacy.
It's also interesting how they work as reactions. If you hit "like" on something, for instance, you don't have to explain why you approve or add your own commentary, you just indicate your approval. And if you write a comment on a thread, there's a certain expectation that it contain an original thought or that it demands a response. So reactions manage to avoid a lot of inane filler.
For a good illustration, watch a thread on Facebook where they say "type AMEN if you agree!" and you get a thousand people tediously writing it out. If there were just a little prayer emoji and a counter, you get closer to their actual intent, since that's literally what "amen" means.
1) The emoji has several possible interpretations/meanings depending on context, such as "I don't know", "I don't care", "indifference" or "shrug". There's this saying, "a picture says a thousand words". It applies here (nobody said the words couldn't be a few words in hundreds of different languages ;)).
2) The emoji generally does not require translation to different languages. It isn't universal, but it's more accessible, and some emojis are certainly universal (such as :), which is a facial expression my 3-month-old understands).
> emoji generally does not require translation to different languages
Really good point. I don't find watching keynote speeches about emoji at all exciting either. But I work on a mixed-language engineering team with a lot of (extremely smart, highly literate) people, and we use emoji all the time.
Whether it's cold-sweat-face or thinking-face really helps me understand the nuance of my colleague's Japanese comments (which, btw, as a native English speaker with only fair Japanese ability, gives me a free opportunity to experience 'basic or lower literacy').
I know that adding emoji characters helps them in the same way, so I use them frequently.
> It would be pretty hard to express the sentiment the ¯\_(ツ)_/¯ emoji can
That's what acronyms were used for. Back in the days of AIM/ICQ I never felt I had issues expressing myself with just text. It was text supplemented with a healthy dose of emoticons and acronyms, which leads me to believe that emoji are redundant.
Emoticons are severely limited in their range of expressivity. It’s hard to do much more than :) :( and :/. Acronyms are limited in both range of expressivity and audience size. Acronyms require prior agreement on what they mean, so there’s a barrier to their basic use and another barrier separating people that don’t know what they stand for. This severely limits your audience size and the number of acronyms you can use.
What’s really interesting about emojis is that they can transcend even hard language barriers. An emoji used by an exclusive Japanese-speaker can be understood by an exclusive English-speaker. The range in expressivity for emojis obviously isn’t as great as a full language, but it’s surprisingly large and can grow without cognitive costs to users (unlike acronyms). Unlike acronyms, the audience size is effectively universal. So I 100% disagree with your claim that emojis are redundant.
There are a lot more emoticons than a few simple faces. Even your example ("¯\_(ツ)_/¯") is an emoticon, albeit one from the Asian side of the pond. I've had long-distance relationships over text chat, so I really disagree that they are limited in their expressiveness.
Part of what is off-putting about emoji is that they make text look like early first-language reading materials, where pictures of objects are embedded next to the word you are meant to learn. It's kind of off-putting to read, at least for those of us who grew up in that experience.
I also do not share your adoration for things that are super instantly accessible. There is much value in learning, and in struggling along the path to learning, that people think they don't want or need. Linguistic training (like learning to read written words) is a necessary skill for assimilating oneself to a new culture or in-group and language is one of the best methods for practicing that.
Also, are emoji really so universally understood? Are the peach or eggplant really universally understood to stand in for genitalia? Like almost everything the simplest form of emoji are accessible but I do not think everyone in the world is on the same page about the finer points on how to use some symbols
> I also do not share your adoration for things that are super instantly accessible. There is much value in learning, and in struggling along the path to learning, that people think they don't want or need. Linguistic training (like learning to read written words) is a necessary skill for assimilating oneself to a new culture or in-group and language is one of the best methods for practicing that.
At its core, language is a tool. Its job is to allow people to communicate ideas with other people. You're arguing that there's value in not making a tool easier to use because it promotes learning, but I disagree with that point. Sure, there's value in overcoming challenges to benefit learning, but we shouldn't create artificial challenges (like not using emojis) just for that benefit to learning by doing things in a harder way. It's like saying we should use hammers to put in screws because using a screwdriver makes things too easy. I agree that there's much value in learning, but I think learning can be done in a much more efficient and productive way than by refraining from using emojis.
Also, you have to think of the costs associated with the inefficiencies of human language. One example: look at all the scientific work being done in English. Anyone that doesn't speak English fluently is automatically at a massive disadvantage in the scientific field. These non-English speaking scientists have to spend years simply learning English to contribute their work. That's an opportunity cost. All those years could have been spent on their actual scientific work, and those individuals and society as a whole have to live with that loss. I'm obviously not arguing that emojis fix this problem, but I'm saying that simplifying our language tool in some way could.
> Also, are emoji really so universally understood? Are the peach or eggplant really universally understood to stand in for genitalia? Like almost everything the simplest form of emoji are accessible but I do not think everyone in the world is on the same page about the finer points on how to use some symbols
That's an interesting point. There are differences in how emojis are interpreted, but this is no different from written or spoken language. Since the beginning of human communication, people have developed slang words and altered the rules of language. Some of these changes have spread and persisted while others died out or remained in use within specific groups of people. While the peach and eggplant emoji may not have the same interpretations across age groups, different cultures can likely still infer their meaning. For example, the see/hear/say-no-evil monkey emojis likely transcend many cultural and language barriers. Knife + scream + shower head emojis likely convey the shower scene from Psycho to anyone who's seen the movie regardless of culture or language.
> Acronyms require prior agreement on what they mean
Emojis do as well. As someone who has autism, I don't recognize or understand facial expressions all the time. I remember being 18 years old and finally understanding what faceroll/roll-eyes meant (thanks to what was then called an emoticon).
AIM/ICQ already had picture-smiley support, converting :) to a smiley. The first smileys on the internet were used at the start of the '80s (they were apparently used in written form as well; see [1]). They were used on IRC and in e-mail.
Another fun fact of that time (80s) is that domain names and TLDs used to be written in CAPITAL LETTERS. And the first spam was from DEC (Digital Equipment Corporation).
Acronyms are very old and widely used, and useful, but that doesn't mean they're better just because they're older or were more used in the past. Remember that in past centuries reading wasn't for everyone; same for Latin. Acronyms are language-specific, which emoticons/emoji are not. The acronym LOL, one of the first chat-specific acronyms, stems from IRC and is believed to come from Dutch (the Dutch word lol means "fun" or [non-sexual] "pleasure"; the Dutch were among the very first, if not the first, countries to connect to the US internet, and the same goes for their IRC presence). If you're a native English speaker you may not give a rat's ass about acronyms being English-centric, but the rest of the world often doesn't even know what the acronyms stand for, or they have local acronyms which you or I wouldn't grasp.
Emoticons and emojis do not suffer from that problem. Case in point, the red 100 emoji is widely popular in the USA (so I heard). People don't use it here in NL. But we understand what an American would mean with it.
I believe picture language (as I call it) plus on-the-fly translation devices (what Google Glass could've been ages ago, but which didn't work out due to public outcry) is going to solve communication in the 21st century. The effect of the Tower of Babel shall be mitigated. Why are my glasses still dumb? All these brands being sold here are ultimately owned by the same big fat multinational. There's a huge opportunity here.
A major part of human communication happens through facial expressions. Emojis/Animojis enable this for digital communication. It's new in the sense that we now have a global set of symbols for these expressions.
It's hard to say how to express that, since I have absolutely no idea what it means :) If I got that message I would be pretty confused. Is the other option literal nail-painting or is it just sarcasm?
The nail-painting one can be used to be a bit ‘sassy’, so it’s like saying “don’t worry, I know other ways to find myself a bed for the night ;-)” but in a more nuanced way that I at least find funnier.
It’s probably not a universal meaning but my point is that emojis can be used to express more than just basic emoticon ‘UNIVERSAL EXPRESSION OF HUMAN HAPPINESS’ style things.
This may or may not be an example of it -- for all I know it's a commonly used one with a well-established metaphoric meaning -- but I think emoji usage is strangely idiosyncratic.
I did too. Perhaps if emoji are such an ambiguous method of communication, they aren't so good after all?
That isn't an isolated instance of emoji confusion for me, either. Apart from the basic facial expression emoji, for me they're a great way of making a sentence more confusing.
Very curious - what do you think makes the image version superior to the text-based smiley?
And as a follow-on, do you think the animated and personalized versions of these emojis make it that much more effective?
For me, the text version communicates the same meaning and context, so it’s fascinating to see examples where the representational medium has a significant impact.
This smiling-face-with-smiling-eyes is one of the few emoji that cause an automatic emotional response inside me. Another is https://emojipedia.org/loudly-crying-face/ (Apple rendition of both, particularly).
Text, and ASCII emoticons, just don't do that.
(And I'm guessing that after I see enough "we can save you money on your car insurance :smiling_face_with_smiling_eyes:" manipulative overuse by marketing "humans", emojis won't do it for me anymore either.)
(Although, do you genuinely think a photo of a cat being cute is emotionally the same and just as effective as a description of a cat being cute?)
There’s a whole range of “smiling face” emoji with slightly different smiles, and slightly different eye expressions, some blushing, some not. In the context of “i have all these options to choose from”, choosing that one particular option conveys more information than what the text-based alternative provides
Sometimes I'd like to think that it prevents misunderstandings. But that's probably very subjective.
I also wouldn't assume, just because they feature emoji heavily in the keynote, that a huge team spent the whole year working on nothing but emoji. There were likely a lot of other teams doing important groundwork and internal improvements that aren't as easy to showcase in a keynote for a very diverse audience and press.
I suspect some people have either never heard of emoji - could be old or young, not sure - or simply didn't think they would use it before these kinds of features came along. Outside of Apple nerd (and broader HN) social circles, the awareness of this is probably much smaller than we realize.
After all, even with social networks, we still haven't convinced everyone that the "at" username construct is a useful way to get people's attention online.
It amuses me to think we have moved from the printed word back to hieroglyphs (emoji). Not in a derogatory way (communication is communication), but in awe of older cultures. I like the thought experiment of imagining a fallen advanced civilization in Egypt. I imagine them stuck... watching their batteries drop to 0%, and turning to stone and emoji as the only way to persist communication.
What if printed words were really just a transitional state of language until their purest form, emoji!
To a certain extent, one could consider Chinese characters to be more like emoji than words constructed via a phonetic alphabet, in which case there never really was a transition period, just some backwater outliers.
That idea was dismissed several thousand years ago, and for good reason. Text is way more powerful and precise than pictures. And I'm not saying having both is a bad thing, but this focus on Emojis is just idiotic.
Text accompanied by images has been a staple of modern visual communication for the past several hundred years. Emojis just bring additional ways to express oneself in addition to plain text.
When I chat with my Thai girlfriend, many miles away, we usually use the LINE app. We don't directly use emojis, but we do use stickers [0] as part of our chat, and I feel stickers serve much the same purpose as emojis: they can more easily be used to convey emotions in a chat that would otherwise be harder to interpret correctly.
> I can't think of another linguistic feature in history
Only slightly related, but this latest migration back to a "sign-like" language (the emojis) reminds me of Giambattista Vico's "Scienza Nuova" (https://en.wikipedia.org/wiki/The_New_Science), where at some point he says that the language spoken by the first humans ("the giants") was a "mute" one, based on "signs", which was correlated with a poetic sense of mind, so to speak.
> Beginning with the first form of authority intuited by the giganti or early humans and transposed in their first "mute" or "sign" language, Vico concludes that “first, or vulgar, wisdom was poetic in nature.” This observation is not an aesthetic one, but rather points to the capacity inherent in all men to imagine meaning via comparison and to reach a communal "conscience" or "prejudice" about their surroundings.
There's of course nothing scientific about Vico's discourse, but his themes somehow stick and resonate more (at least to people like me) compared to the latest linguistic findings.
> For most people emojis (and animojis) have opened a whole new way to communicate with each other
Emojis I agree, they are standardized and their meaning is clear by convention. Animojis? That's just salespeak for snapchat-like filters. They're funny but the novelty wears off the same day.
> For most people emojis (and animojis) have opened a whole new way to communicate with each other
Wow that is an extremely generous characterization. At best they're just prettier versions of :) and :( and I don't see how they allow people to convey ideas they couldn't do just as well via text.
Well, for starters, you don't have to turn your head 90º to resolve them as images, as you do with some (but not all) text emoticons, like your provided examples. They're also much higher-resolution, so it's easier to pull meaning from an unfamiliar one.
> Well, for starters, you don't have to turn your head 90º to resolve them as images
Well, no one actually does, so I'm glad we got out ahead of that problem.
> They're also much higher-resolution so it's easier to pull meaning from an unfamiliar one.
Except you don't really need that many. There are a few common emotions that people use... and then there are winky T-Rex emojis that are completely unnecessary.
I feel like you and others are being purposefully obtuse, to some degree.
Can you seriously not distinguish between the tiny selection and low res quality of text faces, and the wide variety of highly specific and detailed set of reactions now available to us? There's only so much you can do with text before you have to be extremely creative (a level of effort excessive for quick casual conversations) or rely on the other party being familiar with your specific vocabulary of text-faces.
>Can you seriously not distinguish between the tiny selection and low res quality of text faces, and the wide variety of highly specific and detailed set of reactions now available to us? There's only so much you can do with text before you have to be extremely creative (a level of effort excessive for quick casual conversations) or rely on the other party being familiar with your specific vocabulary of text-faces.
Then give us a single example! So many replies _and not a single example of where words or ascii fail to impart what only an emoji can_. You can say "they're obviously better" until you're blue in the face, but it's all hot air until you prove it.
> and not a single example of where words or ascii fail to impart what only an emoji can
Obviously words can (almost certainly) impart what an emoji can - but one small image versus maybe 100 words? That's before you start combining them and the expanded meaning you can get from that.
You might as well say "give me an example of where proper English fails to impart what only slang can" - you're missing the point.
Not the OP, but here's one. I recently got divorced and back into dating. These days, that means a lot of texting in some form, and I've found I have a distinctive style that people who know me well enjoy, but that tends to produce a lot of misunderstandings with people that don't know me that well.
I have the choice of either adjusting my writing style to new people, which I’d rather not, or use either text or picture emoji to convey the tone that makes my writing clearer to people who can’t infer it. I find that image-based emoji are much more specific in the mood they convey, and provide more range — and there is a definite difference in how clearly I come across.
> Let me put it another way: what’s the point of these newfangled moving pictures when we already have books?
Yeah, figured that would be trotted out at some point. That's a fine-sounding argument, but do you really feel emojis are on the same level as the advent of video? I don't believe you do. At some point you have to take a look at the specific thing you're talking about and get down out of the clouds.
I have yet to hear a reasonable argument as to why emojis are better. All I see here is "they're different and can be funny." Ok.
Am I not allowed to disagree with a statement that implies emojis are some groundbreaking form of communication? I never said the concept was not useful; I said that images provide nothing text cannot aside from aesthetics. Why are you people so defensive about this?
Because you have observed that emojis do not help you communicate, and then concluded that emojis cannot possibly help anyone communicate. There are lots of people in this thread who have mentioned concrete examples of emojis "providing something text cannot," and yet you refuse to accept it.
"This helps me communicate" is not a falsifiable claim. You're telling lots of people that they have somehow made a mistake in interpreting their own life experiences. You are not even considering the possibility that something is there, and you just can't see it.
They are an improvement to an existing form of communication. Aesthetics are also a form of communication. Emojis can be used as part of a sentence and there is no way to communicate exactly the same thing without using them.
There has never been a way to put images in a sentence as easy and expressive as emoji (all there used to be was fonts like Wingdings), and it’s standardized. That is quite revolutionary.
The fact that they’re wildly popular should provide some indication that the qualities exist.
Communication is rife with ambiguities, emotion, shortcuts, and mistakes. And between people who share friendship or more personal relationships, those “flaws” are often features, not bugs.
The concept of emoji, I feel, embraces those flaws.
(And personally speaking on the subject of emoji vs common text shorthand, if I never see “lol” again it’ll be too soon.)
> (And personally speaking on the subject of emoji vs common text shorthand, if I never see “lol” again it’ll be too soon.)
Funny, I feel the same way about emoji. I dunno, maybe I'm too autistic to get it, but when people use emoji it makes me feel like I'm talking to a child who hasn't learned to express themselves like an adult yet.
> The fact that they’re wildly popular should provide some indication that the qualities exist.
Really?
Pet rocks were wildly popular. Unhealthy foods are wildly popular. Cocaine is wildly popular (well maybe that's a stretch).
I think there's a correlation problem here. However, I think you're missing the point again; _what can I convey via an emoji that I cannot convey in ascii_? I have yet to see a single example, and that's what started the entire debate.
Pet rocks were popular for 5 minutes. Unhealthy foods are perpetually popular because they have the quality of tasting wonderful.
And neither of you tried to address the core argument I made in the parent, that emoji reflect the inherent messiness of personal communications and for that matter personal relationships.
The same reason it’s important (but inefficient) to tell someone you love them in nonverbal ways is the reason emoji are popular. We all appreciate communications that extend beyond the written word. Emoji is just another option among many for achieving that.
>And neither of you tried to address the core argument I made in the parent, that emoji reflect the inherent messiness of personal communications and for that matter personal relationships
And exactly zero people, including yourself, have been able to provide a single example where text fails to convey what an icon can. And, yes, that was the entire subject of this discussion, if you haven't noticed.
> I have no idea what concept is even meant to be communicated by such an absurd thing
Depends on context. If we were discussing someone, it might signal criticism or a desire to party. The fact that it cannot compress losslessly into words is the whole point.
I'd like to see examples of both of these. Specifically how the T-Rex plays a role because, if you take the T-Rex out, we're back to something I can easily convey in ascii.
> if you take the T-Rex out, we're back to something I can easily convey in ascii
May I ask if you read fiction in more than one language? There are constructions even in those close to English which I find impossible to accurately translate in a way that preserves the delight of the interaction between their phrasing and underlying meaning.
For T-Rex, two examples:
"I drank too much at the Christmas party.
Not as much as Bob. He puked in the restaurant sink before appetizers were served.
[Dancing eye-rolling T-Rex]"
--or--
"Let's go.
Where?
BarBar.
BarBar?
Happy hour pricing until midnight.
[Dancing eyes-rolled-back T-Rex]"
In the former, the emoji communicates derision. In the latter, playfulness. Depending on the style of animation and context, the emoji could further communicate cuteness versus tactile incompetence, letting go versus a loss of control, subject versus object.
The process of decoding an emoji is analogous to a simplified form of interpreting art. Why is that there? Am I supposed to interpret it using the positive or negative connotation? In some cases, less ambiguity is desired. But in others, the ambiguity itself carries information of a sort impossible to parse into words.
If it is contextual and subjective, then we could just use any word or phrase in the same manner to the same effect. Written language itself is just contextual line patterns.
Very likely fewer than the number of native speakers of different languages who understand the vast majority of your emoticons/emojis.
I suppose you don't care because you just speak your native language with other people who are native speakers. But a universal language on top of that has huge benefits in international circles. And my 3-month-old understands the :) smile. One of the very first abilities a newborn learns is recognizing faces, and that's when they cannot even see a meter away!
So let me get this straight: emoji is both highly contextual and simultaneously universal?
I find that difficult to believe. On the other hand, given enough time and global interaction in that medium, it could develop a stable enough meaning across a large enough conceptual space to have a situation no worse than exists between any "standard" language and its various dialects. That'd be interesting, but I'm not holding my breath.
> So let me get this straight: emoji is both highly contextual and simultaneously universal?
Well, not always universal. Some are generally well understood. They're easy to learn (you might also want to look into where to start if you're interested in learning many languages; I understood it's best to start with an Asian language such as Japanese/Korean/Chinese), and on top of that they even make it easier to learn languages (see Memrise and Duolingo, which use SVG art to teach languages; they use the same SVG art across different languages!). Even in school, when children learn their first words (which in Dutch are: boom/roos/vis/vuur; in English: tree/rose/fish/fire), this is done via pictures!
Emoticons and emoji are contextual, yes.
If I say:
That's fun ;) :)
That has a different meaning than:
That is fun :)
or
That is fun ;)
Different context, yet a wink or smile is universal.
And if I'd write:
Dat is leuk :)
You wouldn't understand it because you don't speak Dutch. But you would understand the smiley. Without using any translator. The emoticon & emoji always describes the text around it, like an adjective (though it could also describe other smileys). As such, it is descriptive.
True, sometimes the meaning of an emoticon (and especially an emoji) must be explained. Once it is explained, it can be used in combination with any language. For example, the kappa emoji [1] which originates from Twitch can be used on an English stream, but also on a Spanish or Japanese one. It's generally understood within the gamer community, but if you started using it at your local hockey club they'd first need to have the meaning explained.
> That is fun ;)
>
> Different context, yet a wink or smile is universal.
Really? Because I can't see any real difference between any of those examples. Does the wink mean you're being sarcastic? Or that you're coming on to me? What purpose does the smiley serve? you already said it was fun, one could presume that would leave you in a positive emotional state.
> the kappa emoji [1] which originates from Twitch can be used on an English stream
I hate those stupid things so much, probably because I have no context for understanding their meaning, and since it's already an English stream, you could just use words! And if you're not speaking the same language as the rest of the stream, you can't express anything meaningful enough to be worth saying anyway.
> Really? Because I can't see any real difference between any of those examples. Does the wink mean you're being sarcastic? Or that you're coming on to me? What purpose does the smiley serve? you already said it was fun, one could presume that would leave you in a positive emotional state.
That'd depend on the rest of the text. It could mean I am making a joke ("not serious" / "just kidding"). It could mean I'm sarcastic. It could mean that I'm trying to hit on you. I think that sums it up (though I'm open for different explanations).
Thing is, back in the day, even in native languages between native speakers (but more so with one or more non-native speakers), sarcasm and jokes weren't always easy to detect. The wink smiley specifically filled that niche! If you don't know the story behind it, you might find it interesting to look up.
As for the difference between these: "That is fun :)" denotes no sarcasm, but warmth. Possibly still humor, but it's a genuine statement. "That is fun ;)" was covered above, and "That is fun ;) :)" is a mixed bag which could go either way (possibly clever for "talking your way out of the meaning", e.g. when a flirt isn't well received, or for creating some, albeit simple, mysticism around your flirt). That's without knowing the context. The context still matters and is, ultimately, decisive for the meaning.
I have autism, btw, so although I find this fascinating, it is rather difficult for me to understand. It took me serious effort to learn the meanings of the different emoticons/emoji (as far as one can know them, since there are so many in Unicode these days).
> I hate those stupid things so much, probably because I have no context for understanding their meaning, and since it's already an English stream, you could just use words! And if you're not speaking the same language as the rest of the stream, you can't express anything meaningful enough to be worth saying anyway.
(I don't like it either, but that's because it is overused in these circles, and it reminds me of my age, i.e. that I'm not a youth anymore.)
The ability to understand a language isn't binary. (See e.g. the example of the wink where language is not being understood!)
Another example from my own life: I understand some Spanish, some French, some Portuguese, and some German, but I do not want to learn any more French or Portuguese, and while my German is better than my Spanish, I'm very curious to learn more Spanish. My English is pretty good, as is my Dutch, but I'm only interested in learning more English and Spanish; Dutch not so much. YMMV obviously.
> we could just use any word or phrase in the same manner to the same effect. Written language itself is just contextual line patterns
No, we can't. There is an inherent visual component to emojis. A picture is worth a thousand words, et cetera.
It's not an abstract idea mapping to an arbitrary icon; without prior explanation, many emojis make sense (within a certain cultural context). Kind of like how we can't replace the essence of giving a friend a gift or a lover a flower with words or an arbitrary icon. Apple understands this in a way few technology companies do.
> It's not an abstract idea mapping to an arbitrary icon; without prior explanation, many emojis make sense (within a certain cultural context).
What a coincidence, the exact same thing is true about written words.
>Kind of like how we can't replace the essence of giving a friend a gift or a lover a flower with words or an arbitrary icon. Apple understands this in a way few technology companies do.
I feel like you're one of those people who would have been way into flaming-guitar gifs and midi on your Geocities page in the 90s. I mean, seriously? You're literally saying that sending a gif conveys so much more meaning than sending a text that it's similar to giving a gift or a flower.
> More challenging: what’s the text version of a singing, eye-rolling T-Rex?
Who cares because that's dumb? Can you tell me what deep emotional state is being conveyed by a T-rex rolling its eyes? I think you lost track of the premise we're debating.
> At best they're just prettier versions of :) and :(
Do you really think this?
I don't really 'get' emojis but I think you're woefully underestimating their impact on communication and language. The emoji library on a normal iPhone is enormous.
>you're woefully underestimating their impact on communication and language. The emoji library on a normal iPhone is enormous
What does one have to do with the other? Yes, there are a lot of dumb icons to chose from. How does that directly lead to "[having a] large impact on communication and language"? If that's true, do you think it's a _positive_ impact?
Have you ever encountered difficulty conveying or understanding conversational tone over the internet? No? You are lying or lack self-awareness. Voice and body language are important for disambiguating sentences with more than one possible meaning or implication. Emojis approximate the role of voice tone and body language in digital text-based communication.
The discussion is not "are emojis in any form useful?", it's "do icons provide a new and previously unrealized form of communication?". Literally every person here missed the statement in the first comment.
You didn't read. I said that you can convey any of these equally well in ascii. Yes, it's handy to be able to plug an :) at the end of a sentence which may otherwise sound rude/overly direct. That doesn't mean I need 1000 icons, and _that's what we're talking about_.
The emojis serve a purpose. Text doesn't serve that purpose. I don't know how to describe the niche they fill with text. They're not a stand-in for emoticons.
You don't get it. It's okay. Not everything is for you.
>I don't know how to describe the niche they fill with text. They're not a stand-in for emoticons.
So you can't explain it, but it's I who "doesn't get it". Ok then. I'll excuse you for a bit as it's going to take some time to untwist your brain from that logical contortion.
The meaning is immediately clear to anyone familiar with the reference. It's basically a pictographic language that leans heavily on a shared culture that's largely internet-based.
Ah, I get it, its value is making the people who use it feel special because only "the right kind of people" will get their jokes. Just a new generation of children using slang. Why was that so difficult to use words to describe?
>> "Ah, I get it, it's value is making the people use it feel special because only "the right kind of people" will get their jokes. Just a new generation of children using slang."
You sure do have some text there.
>> "Why was that so difficult to use words to describe?"
Do you similarly disdain things like Cockney Rhyming Slang and Polari because they're effectively "inside jokes" that express things you could equally well express with "plain words"?
I feel like I'm in the Twilight zone (and no, I don't hang out with a lot of 12 year olds, but I do in fact know that kids like to pepper near everything they write with dumb icons.)
The debate is not "are emojis widely used". Yes, of course they are. The question is "opened a whole new way to communicate with each other", which is what I responded to.
I say, no, they haven't. I can convey the same emotions with ascii. I can convey the same emotions with written text. If you want to prove me wrong then fine, but don't re-frame the discussion.
The most common emoji I see among kids is (Face With Tears of Joy) or (OK Hand) (apparently ycombinator doesn't support UTF-8). I don't know of any ascii that can do those. (FYI, I'm 28 and I often chat with my younger cousins who are in their mid-teens at the moment.)
Ok well here's what I actually said. It's right up there if you want to take another look.
> At best they're just prettier versions of :) and :( and I don't see how they allow people to convey ideas they couldn't do just as well via text.
Never did I say "These are dumb get off my lawn!" I said they don't meaningfully impact or improve communication, which is in direct response to the person I replied to who said that they have "opened a whole new way to communicate."
That's a serious claim, I'd like to see a single coherent argument to show it's actually the case. But, no, all I get are mischaracterizations of what I said.
The debate is not "are pretty things nice", it's "have emoji's fundamentally improved communication".
I think the new Facetime features are gamechangers in terms of driving usage. All this stuff was already available in Snapchat etc. but Snap's problem is that the impressive AR stuff they were doing 3 years ago is now built into the OS.
Allowing people to call each other as avatars complete with facial expressions or with flattering filters applied gets rid of one of the last remaining key barriers to mass video calling adoption: people tend to look like hell in low light on front-facing cameras. Teenagers will upgrade to Face ID just to get these features, grandkids will love calling their grandparents as cartoon tigers and grandparents will love responding as cartoon dinosaurs.
Also nice to see the fruits of the Workflow acquisition, this will allow people to do all sorts of customisations including using slang and profanity to trigger commands. "Hey Siri, order my favourite fucking pizza".
It's fun to develop it - computer vision, facial features detection, 3D mesh deformation - I programmed all those parts for another unrelated app and it was total fun. Not that it was useful or something, but I enjoyed it a lot.
I tolerate emoji (though I don't use them myself), but I always get upset at the Unicode stewards' acceptance rate for new emoji compared to their historical acceptance rate for actual characters used in actual written languages.
There is an eye-opening, amazing discussion about emojis going on here. But really, aren't we all just accepting the nuances of a cartoon-based language? I'm not saying this is a bad thing, but that is the reality. We're sending Forrest Gump-like 'shit happens' cartoons to one another.
I deeply suspect that the lack of features is because the releases this year internally centered around responding to the negative buzz associated with iOS 11 problems. I remember when emojis were showcased at WWDC 2016, a fairly low-feature release. Whenever that happens the true feature is unannounced: it's simply stability.
"Whenever that happens the true feature is unannounced: it's simply stability."
Can't remember which version it was now, but I remember when Apple heavily marketed a Mac OS X version dedicated almost exclusively to bug fixes and performance enhancements.
It was very well received. I'm sure the same would happen if they announced they were dedicating an iOS version to bug fixes and performance enhancements.
EDIT: I think the version may have been Snow Leopard. :)
Worth noting that only iOS got the "double down" on performance. Probably because the userbase is larger and this idea had crossed into the mainstream news as a thing people were complaining about.
They could get into this during the developer SOTU, so there's hope...
Exactly. But now, in contrast, they're silent on the issue of stability. I'll take them at their word: if they say they've only done X, Y, and Z, then I'll accept that this is all they've done.
Personal experience: emojis largely enrich my chats with friends and family. That's one of the reasons I prefer WeChat over WhatsApp. In fact, WeChat even created a platform to invite artists to create more emojis for the app.
How does this article support the idea that WeChat raises concerns about monitoring by a surveillance state?
The Chinese authorities in the article didn't "monitor" anything. They retrieved the deleted message from the phone itself. For all we know, it could be as innocuous as running a SQLite DELETE to delete the message and the scheduled vacuum hasn't run yet. I don't see any convincing evidence of either actual monitoring or Tencent conspiring to allow such monitoring.
Any site that is broken by Private Browsing causing storage quota exceptions in localStorage is going to be broken by quota exceptions under "normal" operation.
It's in the spec, maybe developers should make their websites a bit more resilient to the very real errors that can happen? Did they just forget about error handling?
But localStorage.setItem() throwing an error definitely is in the spec, so if someone isn't handling Safari's localStorage behaviour in Private Browsing Mode (that is, throwing exceptions) then they aren't handling that part of the spec.
The convention is that localStorage throws exceptions. It does in every browser. If a site breaks because it doesn't do error handling, then that's the site's problem.
You and many of the commenters miss an important point: Any apparent behaviour difference between private and non-private mode should be considered a bug, because sites can and do abuse it to detect private mode.
I disagree. You might be able to use it to _guess_ but all browsers can throw a quota exceeded exception under normal behaviour. I see this every now and then on mobile devices where the phone is so full the browser refuses to store more localStorage items.
This is exactly what I want to prevent though - I want to prevent dodgy sites from using the quota exceeded exceptions from localStorage to (100% accurately in my experience) guess that I'm browsing in private mode.
I too wish they provided an ephemeral localStorage in private browsing.
But in any case, your application should be interacting with localStorage through an in-memory facade, otherwise it's still going to break with lots of other edge-cases. All operations should be treated as volatile, and probably silently suppressed on failures.
Using a facade also makes testing easier, and if you avoid global side-effects it becomes easier to parallelize test-cases.
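Roughly the shape I have in mind, sketched here in Swift (the thread is about JS localStorage, so treat this as the same pattern transposed, with a hypothetical throwing backend standing in for setItem):

    import Foundation

    // Hypothetical persistent backend; write() may fail, e.g. quota exceeded.
    protocol PersistentBackend {
        func write(_ value: String, forKey key: String) throws
        func read(_ key: String) -> String?
    }

    // The app talks only to this facade. Memory is the source of truth;
    // persistence is best-effort and failures are silently suppressed.
    final class VolatileStore {
        private var cache: [String: String] = [:]
        private let backend: PersistentBackend

        init(backend: PersistentBackend) { self.backend = backend }

        func set(_ value: String, forKey key: String) {
            cache[key] = value
            try? backend.write(value, forKey: key)  // treat the write as volatile
        }

        func get(_ key: String) -> String? {
            return cache[key] ?? backend.read(key)
        }
    }

In tests you just inject an in-memory backend and never touch the real storage.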
Considering Animoji and Memoji are only in Apple's Messaging app/iMessage, I think Bitmoji will be fine as long as Snapchat is fine (which, well, isn't certain).
It's so disappointing, with all the resources and talent at their disposal, that the pace and direction of macOS - to truly advance a desktop system - is so lackluster, "un-courageous", and, as you put it, boring.
I'm just curious, what "courageous" changes would you like to see in macOS?
Last time I saw a courageous desktop OS change, it was Windows merging their Mobile and desktop OS, and that was a hard fail.
It's a workstation. I prefer reliability and consistency. I think Apple knows they need to keep the general public buying macbooks with stupid superficial features, while maintaining consistent, reliable, functionality for the power users out there before they get annoyed and migrate to Linux machines. Hopefully they address the hardware reliability issues (ie. keyboards) in their next hardware release.
Vulkan, really, along with the latest OpenGL. The whole Metal thing was wrong-headed; both iOS and macOS should have supported Vulkan.
But that's nothing that affects me personally. What bugs me about macOS is how sloppy, buggy and limited it has become. Finder sucks big time. SMB doesn't work all too reliably. Wanna domain join - tough luck. (Even Linux distros are advanced in that area - Gnome 3 on Fedora allowed me to setup Enterprise Login and just entering ID / Password for my AD account during initial setup and it all worked flawlessly!). The OS updates are atrociously slow - at least they are infrequent but just goes to show how much attention they're really paying.
That's just from memory - I haven't used it for last year.
My favorite part of all this is comparing the last generation's Retina MacBooks to the new ones: 1) larger batteries, 2) better trackpads, 3) MagSafe vs. the crap USB-C power connector, 4) the new horrific keyboards that have no travel, bad feel, and no space between keys, and 5) on Macs with that stupid Touch Bar, no physical ESC key.
not only is it disappointing and boring but things are starting to go backwards.
When a site requests location permission I only get the option to deny request and make Safari remember that decision for a "day", not "forever" which is what I would like Safari to do, esp. for sites like Google.
Now, I have looked at forums and didn't find anything that helped. I contacted Apple Support (via both call and chat) and on both occasions I was told that I need to reinstall the OS, which was frustrating, but heck, I did just that with the last major update: backed up my data and did a fresh install. I still face the issue. When I called Apple Support again, I was asked to do the same, reinstall macOS, as Safari alone cannot be reinstalled.
You certainly shouldn't get indignant about it. It's one thing to express a dissenting opinion, but throwing a mini temper tantrum on the Internet is a bit much. Not saying that's what you're up to, but I'm not as sure about OP.
A more apt comparison would read: you watch a mediocre preview of a new movie from a company that has made the most successful movies ever and continues to do so. They're doing something right. Instead of complaining, maybe, just maybe, we should consider whether we're the ones misjudging reality.
>from a company that has made the most successful movies ever and continues to do so. They're doing something right. Instead of complaining, maybe, just maybe, we should consider whether we're the ones misjudging reality.
That's a rather silly comparison. I guess the same logic doesn't apply to Electronic Arts/Oracle/Comcast/<insert hated company on HN>. People seem to be giving them money, and yet people also simultaneously hate them. I think we can and should criticize things we don't like, but only if we're honest with our reasons.
This is in addition to last year's announcement that "macOS High Sierra is the last version of macOS to run 32bit apps without compromise"
I wonder if we will soon see a new lineage of Macbooks fitted with Apple-specific arm64 chips.
The scariest thought is if UIKit-on-macOS starts requiring Developer ID entitlements and needs to be installed via the App Store, with FairPlay DRM encryption of binaries and everything.
And OpenCL too. This is terrible. I was thinking about adding GPU support to a numerical simulator I am working on, and I was planning to have nice cross-platform support with OpenCL. Whelp, that's no longer the case. My code is in C++, and I refuse to use proprietary, vendor-locked, Objective-C-only Metal. If people want GPU support, they'll just have to use Linux, which doesn't artificially constrain you with corporate frameworks.
Apple lost interest in OpenCL after learning Khronos wasn't steering it into the direction they wanted it to go.
For example, Metal compute shaders are C++14 and Khronos only adopted C++ in OpenCL after the beating they took from CUDA, which supported it since the beginning.
>My code is in C++, and I refuse to use proprietary, vendor-locked, Objective-C-only Metal. If people want GPU support, they'll just have to use Linux, which doesn't artificially constrain you with corporate frameworks.
Well, some people refuse to use non-platform-native lowest-common-denominator libs, so there's that too...
One possibility for cross-platform GPU is WebGPU. The people working on it seem to be planning a C- or C++-level standalone library; behind the scenes it will be DirectX/Metal/Vulkan.
Apple, Google, Microsoft, Mozilla, and others are all participating.
Does this imply Apple is planning to continue OpenGL support informally/externally, the way X Window support is provided through XQuartz, perhaps via an external Mesa-based library?
And maybe the same informal/external support model for OpenCL?
Performance will be diminished but not extinguished.
FWIW, they've only deprecated it, meaning no new updates. They haven't removed it from the platform. Furthermore, desktop OpenGL seems like it's dead anyway, given that Vulkan has replaced it.
People just getting into a field like to run code on their personal machines. This can be quite relevant when your code gets a 50X speedup from running on GPU.
This is sort of like saying "people only do web serving workloads on Linux, we don't need web servers to run on Apple machines" to me.
Not necessarily. Many media editing apps use OpenCL to speed up processing. I know Capture One uses OpenCL, and I think Adobe's Lightroom and Photoshop use it also. At this point even Pixelmator and less well known alternatives use OpenCL, too.
Sadly, most companies won't have any choice but to port their app to Apple's proprietary APIs. It's really a net loss for consumers because most of these devs have better things to spend their time on than Apple breaking compatibility on a whim.
I mean, that's fine if you're already running decent code or already have extremely good tooling where small examples can easily be sent to other machines. Additionally, debugging code running on other machines is a huge pain in the ass vs. being able to step through it directly, locally.
Almost all of my (and my lab's) time is spent tinkering with small numerical examples before sending it off to one of the lab machines to run overnight, and using the GPU on the MBP through OpenCL is a huge advantage.
This is just quibbling, but 1) you can write Metal code with Swift as well as Objective-C (and that's not even vendor-locked; I'm doing Swift in Linux right now), and 2) you can write C++ in Objective-C.
I know this isn't what you're really complaining about, though.
The most common pattern I've seen with cross-platform stuff is a small Objective-C wrapper over MacOS APIs, which then gets called like C functions from pure C++ code.
Somehow this is directly related to the failure of desktop Linux.
The only successful variants of it actually do constrain devs with either Web or Java corporate frameworks, with a very tiny subset of native code allowed to take part in the whole game.
I would be a lot more okay with this if Apple supported Vulkan, the more portable comparable API, rather than just the macOS/iOS-only Metal.
I also wonder what this means for WebGL and its future. Right now, WebGL works in browsers on macOS, Linux, Windows, iOS, and Android, which is incredible. There is no equivalent.
Sure, Apple has started working on WebGPU, but that’s not yet ready nor is it guaranteed to gain Linux, Windows, Android support.
It's important to note that NXT isn't necessarily a replacement for ANGLE. It's an experimental replacement for WebGL as a whole, with a different API. There still needs to be a way to run WebGL programs on Mac if this deprecation leads to removal a few versions down the line.
And not strictly a replacement either, WebGL would not go away. The successor to WebGL is still so far away that there will probably be some versions after WebGL 2.0.
Mentioning this in a couple different places, but consider Molten (https://moltengl.com/) for this use case; it's a third-party but very high quality reimplementation of Vulkan and OpenGL ES atop Metal. In other words, an easy way to adopt Metal without actually porting your app.
I'm guessing it's not the end of the world for Autodesk to add a Metal backend to Maya. Some smaller teams might very well choose to let go of the Mac, though.
Exactly. Not sure why everyone thinks their favorite Mac apps are using OpenGL anyway. They probably moved to Metal a long time ago — it is much better.
I'm sure everyone's favorite /exclusive/ Mac apps are probably using Metal.
I'm guessing you haven't used apps like Maya, Nuke, or Houdini. They were all written in the mid-'90s on IRIX machines and later ported to Linux, Windows, and OSX. Surprisingly, 3D performance isn't always a big goal of theirs. My guess is the core features don't sell new versions, so even though they have annual releases those things don't get much attention. They'll have drawing issues and transparency sorting problems for years. Same with audio bugs.
Their Mac support was spotty and irregular until the past 5-10 years.
When Metal was introduced for the Mac they had the Modo devs at WWDC and they ported their code to Metal in like a week or two. Not really the apocalypse.
Metal and OpenGL are two completely different APIs, with different shading languages and probably a whole host of other differences.
I've ported my fair share of things from fixed-function to programmable shader pipelines and you'd be effing naive if you think you can do that in a couple weeks on anything more than a toy demo.
I worked with a AAA gamedev recently who had written a Vulkan renderer for their game to demo quality level in 2 weeks. It all depends on existing level of abstraction for the rendering API and shading language (and to some extent assets), and how much performance and efficiency you’re happy leaving on the table.
Getting pixels on the screen and shipping something to end users are two very, very different things; 90/10 rule and all that.
Vulkan also has the benefit of multiple platforms supporting it, so you're not doing all that work for a minority platform (which is what OSX is in the graphics space).
If you already had an architecture with replaceable renderers, especially with a DirectX 12 one already written, adding a Vulkan one will be a matter of just a few weeks. If you hadn't, it will be much tougher.
The question is, how many devs did they have working those two weeks (and what resources did Apple provide to help them)?
I develop an OpenGL-based video engine for a live media playback application, which is very nearly as simple an application of OpenGL as you can expect to find in the real world, and there's no way I could expect to port it to Metal in a week or two singlehandedly. Like others have mentioned, it's a completely different paradigm, not just a matter of changing around some function calls.
That said, I welcome this change with open arms (and secure in the knowledge that legacy code will continue to work for the foreseeable future). OpenGL is a fragmented, brittle, spaghetti-inducing pile of global state. Rewriting in Metal isn't anywhere near as small a project as Apple claims, but I'm perversely looking forward to it -- I'll be very happy to have OpenGL in my rearview mirror.
Good riddance. OpenGL has been an awful API for many years now. The drivers are way too complicated, and applications don't have enough control to deliver smooth performance. All OpenGL does now is let you put things on the screen in a kind of mediocre way.
It would be good riddance IF there were another universal API (Vulkan) and they adopted it as a substitute.
The fact that they want to force game developers to use Metal instead is... ridiculous, especially considering the extremely low macOS market share, particularly outside the US.
Sometimes at work I daydream about an alternate reality where we let go of the idea of a universal API on top of GPUs and just let vendors publish some ISA and hardware-specific libs/drivers/docs. I'm sure people would figure out nice (and less nice) abstraction libraries on their own just as well.
It's so frustrating to read what the GPU is actually capable of (for example in the intel PRMs) and to know that there is absolutely no way to get the driver's compiler to do the right thing in a reliable way.
All game engines that matter already support Metal, plus writing to platform-specific APIs is something that professional game developers have been used to doing since the Atari 2600.
What do you think about them forcing you to use Swift or Objective-C for this? Forcing you to use languages built on ref counting and object-oriented pointer tables that are traversed at runtime? How much of the gains does objc_msgSend eat up? I thought you were against such things?
How ugly would the JAI code need to be to interface with this?
I wish they had a simple Metal C API, but their new API comes with a bunch of Objective-C baggage.
Have you actually used the API? When it comes to submitting geometry and textures (some of the biggest bottlenecks), it's exactly the same as in C. You pass a pointer to your buffer of bytes and they get copied to the GPU for you.
When you're calling any library, that simple C call invariably goes somewhere else inside the library itself. For OpenGL this is because there's always hooks for introspection, or because the GPU driver wants to implement something themselves, etc. For many other libraries it's just because people don't feel like they've written "production quality code" unless it goes through a bunch of hoops and ends up in some method with five underscores in the name.
In Metal the Obj-C abstraction is part of the design and used to eliminate any other abstraction people would want to introduce. The objects you get back from the API are implemented straight in the GPU vendor code, and the debug tools can swap the methods out for extra validation, recording, etc.
Any overhead coming from objc_msgSend is minuscule compared to the gains from things like better serialization of GPU-bound tasks and not having to synchronize repeatedly with the CPU.
If you're worried about refcounting, use ARC (which you have to with Swift anyway). First, the compiler is very smart about optimizing away retain/release/autorelease calls whenever it can. Second, when those calls do have to be made, they're implemented using vtables, and never hit objc_msgSend() in the first place.
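For reference, here's roughly what that pointer-to-GPU-copy path looks like from Swift (a minimal sketch; device setup is simplified and the rest of the render pass is omitted):

    import Metal

    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("Metal is not supported on this machine")
    }

    // A plain array of floats -- the "buffer of bytes" in question.
    let vertices: [Float] = [ 0,  1, 0,
                             -1, -1, 0,
                              1, -1, 0]

    // One call copies the whole thing into a GPU-visible buffer; no per-vertex
    // message sends, just a pointer and a length, exactly as you'd do in C.
    let vertexBuffer = device.makeBuffer(
        bytes: vertices,
        length: vertices.count * MemoryLayout<Float>.stride,
        options: [])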
Pretty ballsy; it didn't work well for Microsoft when they said "use DirectX or die", and I doubt it will work well for Apple. This is especially true for OpenCL (also deprecated): nobody on big Linux server farms with GPUs is going to be using "Metal on Linux" for their code.
It kept a lot of commercial software off the Windows platform and left it on workstations like the ones SGI and DEC had. The movie houses rendering movies were using OpenGL on their render farms, and its lack of availability on Windows kept Windows off those desktops.
The key being that if you've got a technology that works on both server farms for production and workstations for development, you support that so that your OS is a viable candidate for the developer workstation. I don't see a Metal port coming to Linux in any reasonable way any time soon, and I don't see researchers giving up OpenCL or even OpenGL any time soon, so it just means that Apple is going to forego that business.
With the recent GitHub purchase, it gives the oddly dissonant experience of Microsoft being the 'developer friendly' OS company and macOS being the 'developer hostile' OS company. Where, and this is important, support for cross-platform tools determines hostility or support. I would not argue that Apple is not the best development environment for the Apple platform, or Windows for the Windows platform.
OpenGL is a real-time graphics API, not an offline render system used by renderfarms. I have never heard of movies or special effects rendered in OpenGL. The first major renderer was Renderman, the only game in town for years, and it has nothing to do with OpenGL.
You are correct. However, many in-studio tools are written in OpenGL. These tools are used to model objects and layout lighting, scenes, etc. by artists. They are written in OpenGL, usually on Linux. (Source: I work with several people who formerly built these tools for well-known studios like DreamWorks and Sony.)
Not sure what you're saying is changing on the server -- people are going to go from not using OpenCL to not using Metal.
Everything is CUDA. Everything depends on the shitty unstable software designed by a hardware company (Nvidia). This sucks and I hope someone can disrupt it, but Apple has no influence in the field of GPU computing.
I didn't say they did. My claim is that Apple deprecating OpenCL is a straightforward and uninteresting thing; it's a company that has no influence on GPU computing getting out of the business of a technology that also has no influence on GPU computing.
I work in data science too, and who cares about laptops. Desktop computers with GPUs, SSDs, and a lot of RAM are what you need. You can thoroughly bling out the hardware and the entire computer will still cost less than your monthly AWS bill to access a GPU. (This is all getting pretty irrelevant to Apple, though, who doesn't make such computers.)
Most of the field of machine learning is irreproducible right now because you can't not use CUDA, but you can't promise that it will work the same on anyone else's computer, or that it will work six months from now.
Actually, it worked well for MS - back in the day, most of the games in the industry were done in DirectX. And to be honest, DirectX/Direct3D was the only option if you wanted Vulkan-like low-level access to the GPU.
EDIT: As a follower of https://mesamatrix.net/ I don't think it would be unreasonable to say about 3 years to get something that kind of works, five for something semi-reasonable, and 8 for something at modern OpenGL level.
I am not sure. In the past Carmack stated that DirectX at some point became a much better API compared to OpenGL. He also once stated (admittedly when talking about id Software, after he left the company) that he's not really a sentimental person.
Say what you will of Microsoft, they still understood how important backwards compatibility was and didn't do the same to OpenGL back when they were pushing D3D hard.
I really hate to see such a focus on a platform lock-in API when viable alternatives (Vulkan) are available.
I'm curious about this – does it really matter? How important is it that the Operating System has OpenGL? Can't individual apps just static link to (and ship) their own versions of OpenGL?
OpenGL is just an API, the underlying features are provided by the graphic driver. You can't ship with your own OpenGL, that would mean shipping with your own amd/nvidia/intel driver (and the associated kernel module, etc).
A reasonable alternative would be to implement OpenGL on top of Metal for compatibility but this is a lot of work.
What's going to happen to games and apps that use OpenGL as of today, then? Will they stop working in the new macOS version?
edit: oops, just read it: "Apps built using OpenGL and OpenCL will continue to run in macOS 10.14, but these legacy technologies are deprecated in macOS 10.14. Games and graphics-intensive apps that use OpenGL should now adopt Metal. Similarly, apps that use OpenCL for computational tasks should now adopt Metal and Metal Performance Shaders." [0]
macOS performance was already getting so poor on 2010/2011 MacBook Airs, so I think this is the right move. I recently downgraded my old 2GB 2010 MacBook Air back to 10.11 El Capitan and it runs much, much better than it did on High Sierra.
I guess we will be saying our last goodbye to “write once, run on all major platforms” graphics code. Does this mean that the only choices now are 1. Fork your graphics code into two separate implementations or 2. Go up the stack and use a cross-platform “game” engine like Unity? I suppose 3. Stop supporting macOS is another (sad) option.
If you're programming in Rust, you can just use `gfx-rs`'s HAL[1]. This is designed to be a Vulkan-like API that works on top of Vulkan, Metal, D3D12, or OpenGL.
If you aren't in Rust, just use Vulkan. There are high-performance wrappers for it in various stages of development, such as MoltenVK[2] for macOS and iOS, and VulkanOnD3D12[3] for UWP. Non-UWP Windows, Linux, and Android should support Vulkan natively through their GPU drivers.
Generally, yes. Certainly not all. The likes of SFML and SDL2 and libgdx have zero support for Metal, although SDL2 did recently add support for Vulkan.
How far back did Unity/Unreal transparently handle Metal support? I suspect some developers won't have the resources to update games running on older versions of those engines.
Games made by developers who've gone out of business are probably just going to stop working in a few versions of MacOS.
Perhaps they could use some middleware like MoltenGL [0]. That way they might still be able to write against an OpenGL API (which allows for code re-use), while supporting Metal under the hood. It does seem this particular tech might be more suited for mobile platforms, unless OpenGL ES is also used these days on the PC / Mac platforms.
OpenGL is a pain in the ass to use, Metal does what OpenGL does so much better, and with the Vulkan Metal wrapper you can still write cross-platform apps/libs. So nothing of value was lost.
You can’t compare them directly, they’re very different categories of graphics APIs. Metal belongs to the category of “modern low-level graphics API” (which also includes Vulkan and Direct3D 12), while OpenGL is an older higher-level graphics API (which also includes Direct3D 11 and previous).
The low-level graphics APIs like Metal and Vulkan allow for much better performance, but they are much harder to use and require more work from the developer (hence, they’re usually used only by game engine makers). Higher-level graphics APIs like OpenGL are less efficient and have lower peak performance, but are easier to use for individual projects and have the benefit of having existing functional code (no need to rewrite a working project).
Also, OpenGL (and its mobile and web variants, OpenGL ES and WebGL) is very portable (macOS, Linux, Windows, iOS, Android, both native and in browsers), while Metal is macOS/iOS-only.
To be fair - OpenGL was fairly horrible to use directly. It sat in an awkward middle area between "low level enough to be efficient" and "high level enough to be productive".
Maybe I'm biased - every time I looked at OpenGL code I shuddered and ran away to a higher-level framework. (I'm excluding shader code from this - that's concise enough for me not to mind getting that close to the metal.)
So much this! I've been writing OpenGL code on a daily basis for the past 10 years, and I hate it. It works like an API designed in the 1970s. It uses none of the modern features of languages, easily allowing you to pass incorrect data to a function, and giving you almost no clue what, if anything, went wrong. Just look on StackOverflow some time at how many questions are basically, "My previously working OpenGL program is now rendering nothing but a black screen. What went wrong?" And then compare the huge number of different causes. There's no reason they couldn't have improved it along the way, keeping backwards compatibility, and just adding more informative error codes and better use of language features. But they didn't. My feeling is "Don't let the door hit you on the way out."
I don't think that is quite accurate, but you will have to ask a professional game developer for better judgement. Comparatively speaking:
With Vulkan, you need to know exactly what you are doing. There is little to no handholding. You are trying to squeeze out the last 10% of performance in exchange for lots more development time. You need to write a hell of a lot more code to do things you previously thought were simple.
OpenGL is higher level and should be compared to Direct3D 10, not 11. As a matter of fact, I would go as far as comparing it to Direct3D 9. And unless you are an OpenGL zealot, any sane game developer would tell you DirectX 9 already exceeded OpenGL in most ways.
Metal offers most of what Vulkan can do while making it even easier than OpenGL.
Honestly, I don't get why all the backlash. OpenGL is deprecated, and it simply means Apple won't be updating OpenGL anymore. Stop asking every goddamn year. They have been shipping other deprecated libraries and APIs for years! OpenGL is developed in such a way that no one really cares, designed by committee for maximum backward compatibility. And if you want your app on iOS, you will have to use Metal anyway.
Thank you for the explanation. I don't do any graphics programming besides a few toy projects with OpenGL, but my understanding was that one of its benefits was portability (for a varying definition of "portability").
That's why I wasn't sure what Metal is offering instead.
Most of the game devs I follow on Twitter expressed a positive opinion on Metal, particularly over Khronos' Vulkan, and of course everyone (that actually has to develop with it) hates OpenGL.
To my understanding, the consensus is that it's, y'know, fine, but nothing particular to recommend it over Vulkan. The main problem people seem to have with it is that it feels unnecessary, like Apple being incompatible for the sake of being incompatible.
Didn't Apple try to work with Khronos to make Vulkan, but they were taking so long that Apple just gave up, made Metal, and shipped it before Vulkan was finished?
A modern 3D API that acknowledges that the time of pure C APIs is long gone, with C++14 as the shading/compute language, providing math, 3D model, material, font, and texture support.
Whereas in OpenGL you are stuck fishing for libraries, multiple execution paths depending on extensions and bugs, compiling and linking shaders at runtime without library support, C-style APIs, and a plethora of deprecated APIs.
Metal is a lower-level API, somewhat similar to Vulkan or Direct3D 12. OpenGL supports driver extensions that can be used to reduce overhead in a similar manner to Metal, but this is largely a moot point when it comes to OpenGL on macOS, as many such extensions are simply not available.
Mentioning this in a couple different places, but consider Molten (https://moltengl.com/) for this use case; it's a third party but very high quality reimplementation of Vulkan and OpenGL ES atop Metal. In other words, an easy way to adopt Metal without actually porting your app.
Notch decreases function and is aesthetically displeasing.
3.5 mm jack: 1) there is no way a wireless medium will ever have better throughput than a wired medium through a superb DAC - ever, 2) dongles, 3) an extra battery to worry about now with BT.
OpenGL, I will grant you that one, since Metal and Vulkan are both a vast improvement on OpenGL.
I know that some people like to say this, but it's pretty clearly wrong, and it makes you look unobservant when you repeat it. It takes about five seconds and two phones to demonstrate this. The standard top section on every phone, Android and iOS alike, is a dedicated status section with little system icons with wide gaps of unused blank space between them. Every phone, every time. A notched phone just puts the camera directly between those icons instead of putting unused pixels there. It doesn't take a genius to see that un-notched phones have both larger top bezels and more pixels wasted in the status area. The notch doesn't cut into the screen. The screen extends up around the camera and puts the status icons on the horns.
> and is aesthetically displeasing
All of the tens of people I know who have notched phones say they love it and don't notice the notch. So, maybe to you, but it seems like the market is speaking.
It might benefit you to look at a 3rd phone. Because my top notification bar is always full. So no, they are not clearly wrong. Or better yet, maybe it just comes down to individual preference!? I personally never thought I'd see someone trying to justify the loss of screen space as a "win". But here we are. And also, the market is still out on it. Until the majority of phones are doing a notch, the market hasn't spoken. Apple hasn't been the leader in smartphone sales in many years now. They aren't even #2 anymore.
The News, Stocks, Voice Memos and Home apps were brought to Mac using iOS frameworks that have been adapted to macOS. Starting in late 2019, these additional frameworks will make it easier for developers to bring their iOS apps to macOS — providing new opportunities for developers and creating more apps for Mac users to enjoy.
Looks like UIKit is the future, despite protestations to the contrary.
This is a good thing. While very similar, UIKit blows AppKit away in terms of API usability and design. Also, I think it's perfectly fine to write macOS apps using UIKit, would be similar to making iPad apps. It needs a few Desktop/Mouse related adjustments but Apple is probably going to be smart about that.
Definitely not gonna be like what happened with Windows Metro apps
I agree, and frankly it's been obvious since its inception that UIKit is a better AppKit, written with the benefit of experience - more limited at present, but suitable for all devices. They should have started this years ago and had a unified platform with tweaks for different UIs.
Bizarre doublespeak in the presentation though as they denied this was the plan - explicitly said no merger.
It's not doublespeak, it's a very clear refutation to those who fear macOS fading away and being replaced by locked-down, ARM-powered iOS laptops, Apple machines turning into an alternate version of Chromebooks. The message reads to me as "we're bringing the good things from iOS over to macOS, rather than abandoning macOS and replacing it with iOS".
Then they proceeded to detail the port of the iOS APIs to the Mac, and some apps just using those APIs.
Time will tell, but I think a merge is exactly what they are planning long term, and it makes sense to do so (with some differences for user input and larger screens). It's crazy to develop very similar APIs for GUI apps in parallel, and what has become two different OSs with slightly different apps and feature sets, but a huge amount of overlap. The savings for Apple and developers of having a consistent set of new APIs and one set of apps to manage (for example their entire office suite) would be huge.
The sheer pressure of money and users on the iOS side means that will be the dominant one, and UIKit will win.
Re lockdown, that's more a political decision than a technical one; it could go either way. I suspect both platforms will become gradually more and more similar (they are already converging) and the future will be close to the one you fear, with at the very least some ARM-powered, locked-down laptops (which I'd be very happy with).
> It's crazy to develop very similar APIs for GUI apps in parallel, and what has become two different OSs with slightly different apps and feature sets, but a huge amount of overlap. The savings for Apple and developers of having a consistent set of new APIs and one set of apps to manage (for example their entire office suite) would be huge.
Right, and what the announcement seems to show is that UIKit could eventually grow to replace AppKit. Like you yourself say, "UIKit is a better AppKit, written with the benefit of experience".
That still doesn't imply the two OS would necessarily be merged - there's still a ton of things that make sense for macOS that are not needed in iOS, and vice versa - just for one thing, they have vastly different energy budgets. There's also a ton of overlap and shared code already, for pieces that it does make sense to share.
So, I still think the answer is really targeted at people who fear that iOS (with all of its current limitations) will replace macOS, essentially, rather than being a misdirection as you seem to find it?
From a customer perspective, there may be two OSs - that's really a marketing question.
From a developer perspective, I expect them to arrive at one API, with a few options to allow feedback from various sources (tap or click) and to handle multiple screen sizes gracefully, and therefore pretty much one OS, so that you can build one app across devices - this is exactly what they are trying out with their iOS app port. Once you do that, it doesn't make sense to have two APIs.
I can't think of many features that would be exclusive - to take your example of energy budgets, saving energy on both is equally important now that laptops are a mainstay and desktops an afterthought. We may well see ARM laptops for this reason. Watches are the one exception that have a budget so tight at present that normal apps just aren't feasible.
So I think a merge is exactly what is on the cards in the long term, particularly as they are now run by an operations person and the money flows in more and more from phones, not computers.
How significant are the differences between Cocoa and UIKit? I don't have any experience besides some light source browsing, and it seemed the big difference is the NS-to-UI prefix change. Anyone care to summarize for the unenlightened?
The differences are more in implementation than anything else:
- AppKit isn't layer-backed by default, requiring you to handle this. Something as simple as setting the background color for an NSView is far more of a ritual than it is for a UIView (see the sketch after this list).
- NSColor/NSImage under the hood are different than UIColor and UIImage.
- NSTableView and NSCollectionView have big differences (NSCollectionView is more subtle, I should say) that rear their head when you start trying to do anything like you would on iOS. e.g, NSTableView doesn't automatically get an NSScrollView, grouped rows operate like headers but the underlying data structure isn't easy to share, collection view items are view controllers on macOS but not on iOS, etc.
- Delegate methods are entirely different, requiring you to #if check everywhere.
- Trying to use NSTextField like a UILabel is a trip, requiring you to understand an archaic NSCell architecture that you'll wish you didn't have to deal with in 2018.
The list goes on, and most developers that share code between iOS and macOS have their own frankenstein framework floating around. UIKit on macOS would do a lot to alleviate this.
Edited because, speaking of 2018... this should not be this annoying to format.
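Here's the background-color ritual from the first point spelled out (a minimal sketch, pre-10.14 AppKit where layer backing is still opt-in):

    import AppKit

    // UIKit: one line, no ceremony.
    //   view.backgroundColor = .red
    //
    // AppKit: NSView has no backgroundColor property, so you opt in to a
    // backing layer first and then go through Core Animation.
    let view = NSView(frame: .zero)
    view.wantsLayer = true
    view.layer?.backgroundColor = NSColor.red.cgColor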
Regardless of how correct he probably is, the fact of the matter is that there's overwhelmingly more developers on the UIKit side and that API is what people clearly prefer - the only ones voicing otherwise are old Cocoa-heads. It's ridiculous that it's 10 years later and Apple is only now confronting this.
Also, just to follow up on my original comment... from what I saw over on Twitter, 10.14 finally implicitly layer-backs apps by default, so the first point becomes much less of a nuisance than it used to be. Apps have to be linked against 10.14, mind you, but yeah.
So is the half-assed approach of working with the layer background. ;P
That's what a ritual is; it's 2018 and I should not have to go out of my way to set a damn background color on a view. UIView has predictable behavior, AppKit does not.
The underlying OS is extremely similar (they're both Darwin and they share the Core * frameworks). And unlike Wine, they won't be running unmodified binaries.
But perhaps an important one for Apple. Anecdotally at least (which is probably the best we can do here), concerns about Apple hardware are making many pro users, particularly developers, think seriously for the first time in years about moving platforms. Each significant date (WWDC, autumn hardware releases, etc) is a focus of procrastination for those of us a little reluctant to move OS's, but seriously thinking of doing so. My guess is that on each of these lost opportunities another group bleeds away. Perhaps not significant to Apple, but -- who knows? -- mindshare can be about more than just numbers.
I feel the system-wide dark mode is the next step from Night Shift in acknowledging or encouraging people to work more into the late night/evenings. Eye fatigue has been one of the ways my body reminds me to go to sleep; now I can maybe deal with some more emails before I head to bed.
Night Shift (and perhaps Dark Mode as well) works to decrease the amount of light towards the blue end of the spectrum in the evening, to mimic the natural spectrum shift and help people sleep, rather than mimicking bright mid-day light. It's not to reduce eye fatigue and get people to work more: it's to help them relax and sleep better at night. Here's an article that discusses it in a bit more depth.
There is a growing body of evidence that cumulative lifetime exposure to blue wavelength light increases the risk of AMD. (Age-related Macular Degeneration, which can lead to blindness).
Hopefully they don't fuck it up like they did when they changed from high contrast to Smart Invert on iOS.
I used the high contrast accessibility setting as a de facto night mode. In iOS 11 (or maybe 11.x?) they changed it to "Smart Invert", which tries to be smarter about inverting by not inverting the colors of images. Unfortunately it has all kinds of problems. Safari will randomly crash when viewing images. Sometimes it shows the wrong image (e.g. two images in a row and it will show the same image twice instead of two different images). Sometimes half an image will be inverted. Sometimes when I go back to regular mode the images will go into inverted mode. And it makes charts or maps with an HTML legend impossible to read, because the legend gets inverted but the image does not.
Try switching to "Color Filters". Set it to "Color Tint", turn the intensity all the way up, and set the hue to pure red.
Turbo-charged night mode. No "smart inversion" to worry about, no orange stuff that suddenly becomes screaming blue. Just a nice monochromatic red display that you can see in the dark without ruining your night vision or confusing your body's clock.
If that happens, I feel it's one of the good things, speaking from general perspective on how humans interact with devices. I'm a power user, who discovered f.lux around 3 years ago, and I've been hooked ever since. Around a year after, I converted my family to use whatever blue light restricting settings they had on their devices, and can say with confidence that it does help. I wouldn't say it's encouragement per se, more of a guideline that should be there in the first place. Most users don't care about colors being off at odd hours, as they're not power users, whilst power users will have the knowledge to turn this feature off. Raising awareness is not a good term, but I feel it fits in this case.
From a publishing industry standpoint, Apple News desktop is pretty big. Especially if that means Apple News is coming to other countries (including, more specifically, Canada). And especially since they bought Texture. There's likely more to come on that front.
The lack of a desktop client is the main reason I don't use Apple News. If I were Feedly or any other RSS reader I'd be scared right now.
Granted, what Apple News could really benefit from would be an option to log in and read from my sources through a web browser. It would be nice if I could check up on things from, for instance, a public library.
Absolutely. A unified subscription model would be a boon to readers. It's proven complicated to implement due to there being no standard among publishers for handling subscriptions. Outside of the more innovative companies, it seems a bit ad hoc as new technologies rise, mature, settle into mediocrity (which is when they get adopted), and fall (becoming increasingly difficult to maintain until the next one comes along).
I think you're onto it, though. And the more effort publishers put into the content, the better I think they'd do. Unfortunately many decisions on the digital front have so far been driven by marketing or operations along the old anti-Ford model: "If I had asked people what they wanted, they would have said faster horses." They aren't so keen on improving the experience; they just want faster horses.
I'm not sure we even need something that elaborate honestly. I was imagining just a straight up RSS reader.
It doesn't seem like we need to invent anything new here, just make a version of RSS that can natively serve rich multimedia content and be gated behind a subscription.
That would definitely work to an extent—especially for newswire producers and consumers like wire services or newspapers.
Magazines and some newspapers, however, are very zealous about their design—and rightfully so. Apple has tried to find a middle ground with the Apple News format[0], but it's still a little wanting with regard to print design standards. Some art directors/teams are more accepting of new constraints than others as well.
Is there word on Mojave's improvements to performance? I got bitten horribly by moving my 2014 rMBP to High Sierra from El Capitan... GPU issues galore, with a baseline CPU and RAM usage increase.
Would be nice if I didn't have to fear this upgrade quite as much.
Does Safari have a night mode? Most of my work at night is either in Mail (which should be relatively easy to enable night mode on) or a web browser. Obviously, third-party browsers will have to decide whether to support night mode, but I wonder if Safari will have support when Mojave is released.
I imagine it won't be trivial to do this, especially on pages that have a background image. You don't want to invert it, but leaving it as-is would ruin the night mode effect. Plus, you have to maintain sufficient contrast with foreground text.
> Obviously, third-party browsers will have to decide whether to support night mode
This is a big flaw, imo. A universal color scheme like 'dark mode' should take priority over individual app's preferences. A consistent, quality design was one of the things that first attracted me to the Mac, but what with every app doing its own thing, that ideal is a thing of the past.
BTW, the Spotify app called. It's had dark mode for ages ... (/s)
Doubtful. Night Mode will most likely only affect application chrome and menus that use Dynamic System Color [1] named colors, not the actual content within the app (i.e. images, webpages, documents, etc)
Websites will be entirely unaffected, as they have their own custom CSS palettes.
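For apps that do want to follow the system appearance, something like this should work, assuming the NSAppearance additions documented for the 10.14 SDK (darkAqua, bestMatch):

    import AppKit

    extension NSView {
        // Resolves whether this view is effectively rendering in Dark Mode.
        var isDarkMode: Bool {
            if #available(macOS 10.14, *) {
                return effectiveAppearance.bestMatch(from: [.darkAqua, .aqua]) == .darkAqua
            }
            return false  // pre-Mojave systems have no dark appearance
        }
    }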
Interesting. I imagined that with a name like "Night Mode", it would be more universal (like Night Shift is). I'll be interested to see how this evolves and whether third-party developers adopt it.
I'm not sure what I'm expecting to be honest, but I never use any of the in-built macOS apps so I just skim over those announcements. They don't seem like they should be part of what's considered the OS.
I guess I'm expecting system-wide features that only Apple could add because they involve updating the underlying OS. Adding group chat to their video chat app that I never use, for example, isn't that interesting to me and shouldn't require an OS update.
Desktop technology is really stagnating, isn't it? I guess it's just maturing. Even Microsoft had few noteworthy changes in the last two major updates to Windows 10.
Dark mode is great, but OMG, someone should have talked to the press people about this one and made sure the page was in dark mode. (Also, make sure you always say "dark mode" in Batman's voice; it's not only necessary but much, much more satisfying.)
IMO, the most interesting thing happening with Apple right now is the surprising (to me) amount of effort being put into their News and Stock apps (which are being ported to macOS). The only reason I can come up with for them to be pursuing this with so much effort is that they saw what happened as a result of Facebook dominating news and decided to put their best foot forward in the hopes of bringing some semblance of order.
>IMO, the most interesting thing happening with Apple right now is the surprising (to me) amount of effort being put into their News and Stock apps (which are being ported to macOS).
I think you missed the hidden story there (and by 'hidden' I mean, they actually mentioned it during the keynote). They announced that they're working on a toolkit to easily port over iOS apps into MacOS with very few manual changes to the code.
The effort they're putting into porting News, Stocks, etc. into MacOS is actually just them field-testing their UIKit <-> AppKit conversion platform.
I get that these were good apps for them to test their new UIKit thing on, but that's not all I'm talking about. They're devoting time at keynotes to their News app. And they're clearly putting time into courting content partners. That's what's surprising to me: the totality of their effort in this area.
Perhaps that's part of it, but I think $ is a bigger factor.
Apple News has surprisingly large (and fast-growing) usage numbers/readership (1, 2). Add in news of Apple rebooting ads and allowing simpler multi-site news subscriptions, and I think News is going to continue to grow in profit and importance for Apple (3).
It's where I read 90% of my news now. It works with consistent UI+readability, it's way faster than the mobile versions hosted by pubs, and with minimal ad bloat. I highly recommend!
Though they might have potential FB-style think-alike algo problems with the love/hate data and personalization News does.
> Though they might have potential FB-style think-alike algo problems with the love/hate data and personalization News does.
this is my biggest concern with it, and it sounds like they might be making changes to the interface to help:
> Apple News is redesigned to make it easier to discover new channels and topics or jump straight to favorites, and on iPad, a new sidebar makes navigation even simpler.
personal anecdote: since that fiasco I have stopped using Facebook entirely, unsubscribed from all news subreddits, and have created a curated collection of reputable news publications (and subscribed to some!) via Apple News. I suspect I'm not alone, and I am quite happy to hear News is coming to macOS.
I have to say, I would probably have never paid for news had it not been for how convenient Apple has made it.
I pulled that move once in an old product. Not all of our users could upgrade to the newer product - the one we had directed like 97% of our development effort at - because of HW constraints, but that tiny thing kept them happy for quite some time. (We went with four color themes though, not just two.)
I recommend this move if you want to make an abandoned product line seem like it's at least a little bit alive.
Not so sure about this. WWDC 2017 (last year) had all of these announcements:
- New iMac Pro
- New iPad Pro 10.5
- New HomePod
- Updated iMac
- Updated MacBook
- Updated MacBook Pros
Hopefully, a keyboard that actually works, this time!
I have no idea why even after the class action no one talks about how embarrassingly flawed their thin keyboards are.
Mine (EUR 1799 laptop) broke after 2 months, was fixed by Apple, then broke again. People I talked to experienced similar problems, with missing keystrokes, double letters, etc.
You will be able to use UIKit to help port your apps across. But they aren't going to be merging all of the other parts of the OS together. Especially since iOS and OSX differ significantly in parts e.g. security posture.
Aside from Steve Jobs surprising the FaceTime team with his open source announcement (and honestly, any Steve Jobs claims in general), what are you referring to?
This was exciting enough for me to get over how bummed I was about GitHub selling out. All those little Finder and FaceTime updates will add more value to the Apple products I've invested in.
Apple did not mention AMD or Nvidia in their eGPU support announcement. With ML/DL, rendering (Redshift is Nvidia-exclusive), and gaming all on the line, Apple needs to throw in the towel and work with Nvidia on a solid native driver for the Mac.
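For what it's worth, apps can already enumerate whatever eGPU is attached through Metal; a minimal sketch, assuming macOS 10.13+ (where MTLDevice gained the isRemovable flag):

    import Metal

    // List every GPU Metal can see; eGPUs report isRemovable == true.
    for device in MTLCopyAllDevices() {
        let kind = device.isRemovable ? "eGPU" : "built-in"
        print("\(device.name) [\(kind)], low power: \(device.isLowPower)")
    }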
Hmm, I just put 16 GB of RAM, an SSD, and a new battery in my early 2011 MBP, and also bought a new charger. The thing flies, but it seems High Sierra will be my last macOS version.
That's what I said to myself some time ago about Snow Leopard, and more recently El Capitan, but unfortunately one eventually needs to bite the bullet and stay current for security patches and 3rd-party app support. I begrudgingly went to High Sierra, though it provides no added value to me whatsoever (and even breaks a few things).
I'd like to say that I won't be getting another Mac in the future, though I can't honestly be sure of it. I'm already fully on Linux on my company laptop, but I own a few Mac-only apps that prevent me from going full Linux on my personal machine.
I run Linux on my personal laptop and server as well, but my wife needs some Adobe tools, unfortunately. I guess I'll just sit it out until security updates stop coming. Anyway, I think this was also my last Mac, for these reasons:
* I hate that I can't open them up anymore
* That butterfly keyboard
* It got even more expensive (for a 15"), and what we got in return (the Touch Bar) has no added value for me
* Windows can run Adobe tools and now has a subsystem for Linux.
* The Dell XPS line approaches MacBook quality pretty well, can be opened up, and is well supported under Linux should Adobe ever start supporting it (hey, we're talking something like 2023 here; one can hope an Adobe Snap will be available by then, and I think Adobe would rather like the Linux ecosystem)
As I said, I'm still very happy with a 7-year-old MBP because I upgraded it stepwise. Though to be honest, I feel computer land has stagnated a bit now that we've finally found a solution for the HDD bottleneck; and still, there's little to fix yourself if anything gives out.
"macOS Mojave will be available this fall as a free software update for Macs introduced in mid-2012 or later, plus 2010 and 2012 Mac Pro models with recommended Metal-capable graphics cards."
Looks like it's the end of the road for my 2010 Mac mini running High Sierra. So long as Photos and Photo Stream continue to work, I'm not complaining. It was a good run.
There will likely be hacks like DosDude's Sierra patch [1]. 7 years of updates is still quite a lot, and I'm sure your Mac Mini will still be useful past its 10th birthday.
Check ebay. I recently bought a 2013 Mac Pro off ebay for a good price. My 2009 Mac Pro was no longer supported. I figure the 2013s will probably have another 5 years after whatever new model they come out with next year.
Ya, I couldn't get it to work, and I have more money than patience. I also didn't buy one when they first came out, so I was happy with my 4 years of holding off for a good price. Besides, new (to me) hardware always motivates me to build products. I'm working on an Azure app in a Windows 10 VM inside macOS. :) I need to learn how to work on C# apps natively in macOS rather than in Visual Studio, but I haven't quite gotten around to it yet.
Either way, now I have a trashcan / hand warmer on my desk with a Thunderbolt-connected external drive cage holding 2 mirror sets, and it makes me happy.
"macOS Mojave will be available this fall as a free software update for Macs introduced in mid-2012 or later, plus 2010 and 2012 Mac Pro models with recommended Metal-capable graphics cards."
Which means they aren't doing it now, not that they aren't doing it ever. Apple publicly derided phones with large screens for years and then released their own.
They deprecated OpenGL and built Metal 2 for macOS; differentiating touch screens from mouse-input screens, and sorting out file access, are trivial by comparison.
It's not like iOS doesn't have a file system, they just don't allow you to access it.
I'd happily go back to paying for macOS if it meant a focus on stability and performance improvements. I waited in line for the Snow Leopard release, and to date it was probably the best era of the Mac. I haven't even brought my office up to High Sierra due to the number of problems it's caused with all of our software and SAN systems.
Sounds like the return of the Apple of years ago: in-house everything, walled-garden everything, including the graphics stack, custom CPUs, etc.
It almost ruined them (desperately holding on to their drastically inferior CPU). What could go wrong this time? Especially now that they no longer have Steve running the show.
Has anyone actually made a solid case that Apple's current generation of silicon is "drastically inferior"?
Didn't Windows (on desktop) ignore OpenGL in favor of DirectX for years to great success, because it was deeply integrated into the platform?
Yes, Apple's ecosystem is the same walled garden it's always been. Any time it wasn't was an anomaly. The best fanboy response I can give is that they seem MUCH more willing to work with partners this time around.
Ah, they added favicons back to Safari. Maybe 10.15 will give us back the full address bar, because I still don't understand how this could have been changed in the first place:
That never went away? Settings -> Advanced -> "Show full website address".
On the other hand, the way they currently display it has actually been praised for at least making it slightly easier to spot phishing attempts, since the only differences there would lie in the domain names (there are articles about it).
Check out the linked screenshot: there's a LOT of empty space that could easily display (and previously did display) the additional path/query-string information in a URL. The screenshot does look a wee bit silly.
The main difference is that, while you have already been able to change the menu bar and dock to a dark color scheme, the update coming in Mojave allows the window chrome to be dark too (note Finder being dark in the article).
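For developers, opting a whole app into the new look should be a one-liner; a minimal sketch, assuming the Mojave SDK's new darkAqua appearance name (leaving appearance nil follows the user's system setting instead):

    import AppKit

    // Force the new dark appearance app-wide (macOS 10.14+).
    if #available(macOS 10.14, *) {
        NSApp.appearance = NSAppearance(named: .darkAqua)
    }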
I'm continually amazed at how much Apple hypes up the most pointless, useless, basic things. This is a new version of a major operating system, and we get... the desktop organized by filetype. A built-in stock application. How is an App Store update considered part of an OS update?
To be fair, News, Stocks, and Voice Memos are really there as the first public tests of UIKit’s Mac port.
And desktop icons might not be a huge deal, but I’m glad to see Apple acknowledging that they’re useful and continuing to improve them, in contrast with major Linux distros removing files from the desktop entirely.
For the Mac App Store, developers have been complaining about how subpar, useless, and stagnant it’s been for years. If the new version can drive customers to buy things, it’s a potentially big deal.
Personally my favorite features are dark mode, fingerprinting protection in Safari, and improved security of the camera/microphone and sensitive files.
UIKit will be a big deal for the Mac ecosystem when developers outside Apple get it next year.
It'll certainly be nice to have the fingerprinting feature on my MacBook Pro, given how difficult it is for me to type a password error-free on their unreliable keyboards.
Not sure if serious, but fingerprinting as in device fingerprinting for tracking scripts. If you want a fingerprint enabled password manager those exist already.
It's an annual, incremental OS release. We're not talking about multi-year developments or grand visions. There are a bunch of features that look quite nice from a usability standpoint, some improvements to the App Store, and a crack at a unified development platform for iOS and macOS. That seems like a pretty reasonable set of features to roll out, and no doubt they'll be accompanied by the usual array of more minor improvements.
The only thing we should really be hoping for is that they focus more on reducing the number of bugs, performance issues and security vulnerabilities.
OS X/macOS releases (counting from version 10.0) have been spaced like this, in months: 0 (10.0), 6, 11, 14, 18, 18, 16, 15, 12, 15, 12, 11, 12, 12 (10.13).
I think I got them all right. When you wait 1.5-2 years between releases, it's reasonable to expect more extensive changes; on a 12-month cadence, the big changes simply won't happen every single year. (A quick average of those gaps is sketched below.)
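Taking the commenter's numbers at face value, the gaps average out to roughly 13 months; a quick sketch:

    // Months between successive releases, 10.0 -> 10.13, as listed above
    // (the initial 10.0 release has no predecessor, so it's excluded).
    let gaps = [6, 11, 14, 18, 18, 16, 15, 12, 15, 12, 11, 12, 12]
    let average = Double(gaps.reduce(0, +)) / Double(gaps.count)
    print("Average gap: \(average) months")  // ~13.2 months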
Have you seen the part where you can easily port iOS apps to macOS? That was probably in the making for a few years, and it's pretty huge for a developer conference.
Changes to the app store to make discovery easier and feature more developer stories also have a huge impact on Mac/iOS developers and their revenue.
There is so much more they could do to make it a real developer's system. They could modernize the system. Right now there is more desktop innovation in KDE, for God's sake.
I loved developing Mac apps. It was awesome. iOS is even better, so as a shared platform grows I'm excited about the prospect of more native Mac apps becoming available.
What do you feel is the problem? Primarily Xcode? I hear many people have issues with it, though I've not been a heavy enough user to notice anything particularly problematic.
Apple's platforms are amazing, but fragile. APIs constantly change, and even when they don't, they behave differently in practice.
You just can't release an app without constant maintenance. Apple has no problem innovating, even if it means pulling every developer along with them, ready or not.
Why do you need a major OS update every year? For a few years now the releases have been incremental, with small improvements and fixes rather than major changes that force people to change the way they use the system.
They obviously can't put all the under-the-hood changes on the slides as there's probably a ton of stability fixes and hardening going on behind the scenes.
There are good iOS apps for a lot of things; if they can easily be adapted to run on the Mac, as they demonstrated by porting the News app, it can only be good for the ecosystem. The apps won't look like they're running in an iOS simulator where you use a mouse pointer to fake touch actions. They'll be normal native apps that share code with iOS.
I would assume it's more about making it more convenient for developers to build an app for iOS and macOS using similar frameworks and APIs, without depending on React Native or just throwing Electron and the web stack at the problem. From my own minimal experience you have to pull in a fair few dependencies or do quite a lot more work if you want a more modern look for a native Mac app.
Additionally, if I want to bring an iOS app to the Mac, then it'd be great to not have to figure out the Cocoa version of a certain UIKit view and then rewrite all of the code to fit.
I can't imagine this would translate to some next-level version of universal binaries, but who knows what it'll be like in a few years.
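For a flavor of the duplication developers face today, here's a minimal sketch of the conditional-compilation dance (the typealias names are mine, purely illustrative; this is not Apple's porting layer):

    // Alias the platform's view types behind shared names so most
    // code can compile for both OSes.
    #if canImport(UIKit)
    import UIKit
    typealias PlatformView = UIView
    typealias PlatformColor = UIColor
    #else
    import AppKit
    typealias PlatformView = NSView
    typealias PlatformColor = NSColor
    #endif

    func highlight(_ view: PlatformView) {
        // API differences still leak through: UIView exposes
        // backgroundColor directly, NSView only via its backing layer.
        #if canImport(UIKit)
        view.backgroundColor = .yellow
        #else
        view.wantsLayer = true
        view.layer?.backgroundColor = PlatformColor.yellow.cgColor
        #endif
    }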
Getting decent apps into the actual native store is what the UIKit port is about. It’s not iOS apps on the Mac any more than tvOS apps are iOS apps on the TV.
You still need to adapt it to native UI conventions like the menu bar, you just won’t need to rebuild the entire UI from the ground up in AppKit anymore.
Let me put it another way, when was the last time you heard a user ask for an iOS app to be on the Mac? Is this what Mac users want or is it something that Apple wants?
Users, absolutely. Some apps have already done this (Deliveries and Paprika Recipe Manager come to mind). But it's a ton of duplicated effort right now to do that. Paprika's Mac version costs $30 to compensate. There's much less competition here; iOS has a bunch of recipe managers but as far as I know this is the only one with a companion Mac version.
Maybe Facebook will make a Mac client for Messenger (messenger.com works, but some people might prefer a desktop version). Maybe Netflix will have a Mac client (they already have one in the Windows store). We could get Google Maps as an alternative to Apple's native Maps app. Slack could throw away their Electron battery-guzzler and replace the Mac version with one based on the native iOS codebase.
Not every app wants or needs a Mac port, but there are plenty of cases where it could be useful.
I'm right here, as both a user and a developer, and would really like more native apps on the Mac. A unified platform to make it easier for developers to build apps across both platforms seems like an incredibly obvious win.
I love apps that allow me to work on an iPad while I'm on the tube, then pick right up where I left off on the desktop. It’s the best of both worlds.
A good example is Bear Writer[0], a Markdown-based writing and notes app. The Mac app, while fully featured, is based on Electron. Without knowing anything about this new system I’d think these changes would allow the developers to create a native Mac app at least as easily as an Electron web app.
>when was the last time you heard a user ask for an iOS app to be on the Mac
This is something pretty much everyone I know who has had both a Mac and an iPhone wants.
There are great iPhone apps and they usually don't have a Mac version. You're stuck whipping out your phone to use some app even though you're already using your laptop.
If iOS is the dominant app platform, wouldn't making it easy to port to macOS be a good thing? You get to harness all those iOS developers (who code on Macs anyway) by letting their apps run on both platforms. Seems like a win.
I actually agree with you broadly, but the funny part is that Stacks (basically a form of 'smart folders') actually strikes me as a genuinely interesting addition to the very old and tired UI paradigms of desktop software.
When they add big features, people complain they haven't spent enough time on stability and performance. The event is every year so they announce something, it's not like they're going to write a new OS annually. Slowing down the release rate of shiny new things is probably a good thing.
This was essentially their sales pitch: demonstrating a write-once, run-anywhere codebase that reduces the effort required to develop an app that can be built to run on both iOS and macOS hardware.
The other thing that was glossed over was Panic's Transmit and BBEdit coming to the App Store (both currently not available there). Panic wrote on their blog last year about problems they had trying to sell their app on the App Store and pulling it. Looks like Apple is working with developers to help them with things like demos, trial periods and such.
I can't help shuddering every time Craig presents (is forced to present? is paid to present?) the... omg... wait for it... new set of Animoji on stage.
Well, I get excited about this stuff. I really like Telegram because of the stickers. I get stupidly excited every time I find a new sticker pack I like.
I'm a developer using the Mac as a programming and devops system. I do not want teen features to be the best thing about my next upgrade. I want it to be about the computer feeling faster, smoother, better. You know, like an operating system feature?
This. MacBook Pros are getting more and more expensive; surely they’re already outside the teen demographic's purchasing power.
It’s professionals and those that aspire to be professional who spend > £2,000 on a laptop.
I want decent UI (including keyboard shortcuts I can touch type), more features and better performance more than I want fluff like voice commands or anything-moji.
When they said there was another part of the platform they'd been ignoring, I got excited for a moment.
I thought they were bringing out an Apple-official version of Homebrew. Nope: iOS apps.
Emoji and the like are iOS features (for now). Teen-focused features drive sales of iOS devices, increasing Apple's revenue stream and your potential customer base, since those teens will likely buy macOS computers later. Craig owns software for both macOS and iOS, and in fact part of iOS 12 is about improving performance so the OS operates faster, smoother, and better. So they are delivering exactly that on iOS. ¯\_(ツ)_/¯
They are doing absolutely nothing to bring back gaming to Mac. Major developers (like Blizzard) who have been loyal to that user base for decades are giving up and it feels like Apple isn't even trying to figure out a solution.
During the keynote, they showed Fortnite and a Unity demo running on Metal 2, and showed off the power of an external GPU. The new Mac app store will also give them the ability to highlight standout games, like it does on iOS.
This is great; however, there is a slew of games that should technically work on the Mac but won't ever be ported because of 'technical limitations' set by Apple.
Those reasons could be any number of things, like not having native Nvidia drivers, or weak OpenGL support (and OpenGL is now deprecated outright in Mojave), etc. I'm happy to hear about their work on Fortnite, because currently my Mac is overpowered for that game, yet it is absolutely unplayable in its current state.
Yes, I mean other than that they are doing nothing. Vulkan and OpenGL are shunned by Apple, and developers won't even port over games that should run on macOS (e.g. any game using Unreal Engine; PUBG, for example).
Not sure what you mean; I can download Battle.net on macOS and play every game on there, although admittedly I haven't tested them all. SC2 runs pretty flawlessly too.
There are some games not available for MacOS but they seem to be less and less common. Almost everything now is cross platform, in part thanks to Unity making cross-platform games easy.
The launcher can be tricked into downloading Overwatch, but it's not playable. Due to shoddy OpenGL support, the shift lately actually seems to be away from cross-platform games, especially from companies that use their own engines.
Most if not all of the Blizzard games render using OpenGL. Heroes of the Storm used to have some experimental options to use Metal or OpenGL 4.1, but Blizzard got rid of them at some point. They seem not to be satisfied with Apple's drivers.
And this support is ending with new releases, starting with Overwatch. According to Blizzard it has nothing to do with money or market share, but with Apple-specific technology.
They've been the biggest supporter of gaming on macOS, and with them gone, the future is looking grim (for Mac gamers; obviously a first-world problem).
Apple Hype Theory requires the irreducible Apple quantum of hype at any event be spread across whatever announcements are being made at that event. It's Apple's modus operandi, and I don't mind it, because I can compensate.
What I wish, though, is that they'd get more excited about making MacBook Pro hardware improvements that drive people to upgrade to new MacBook Pros, instead of seemingly trying to drive them to move on to iPads.
I really have to agree with this. It used to be a selling point for me that Apple gave free OS upgrades (cost baked into the hardware, I know).
But lately I'm on the fence about whether some of this really counts as a free OS upgrade, or even fully justifies a full point release. Lots of hype for what IMHO used to be a 10.x.1 to 10.x.4 release.
>it's pretty cringy seeing them demo the new animoji stuff, as well as people frantically clapping.
Because you don't see the vision that the developers in that room do. You're thinking about yourself. You're not thinking about Apple expanding its user base and making more customers available for its developers and their companies.
There was a lot of love in the keynote if you're a company looking at the bigger picture. If you're just a dev pounding on a keyboard, I can understand being less than impressed.
So much this. Take the complaints about Memoji, for example: does anyone actually understand the possibilities of that tech, especially for making feature films? Think about the applications in areas such as hospitality or customer service as well: an augmented-reality "person" to show you around your hotel room, say, or to help you check in for a flight using natural facial expressions, all super easy for a company to create without having to do character animation for each spoken sentence.
For “Hacker” news, many people seem very much “get off my lawn” and fail to see the future. They see stickers and teenagers but what they should be seeing is an entirely new and natural way to interact with the world. This is like rounded corners in 1981.
Apple could invent a flying car that reverses global warming and people would complain that the color scheme is just a touch too whimsical to be taken seriously. They’d complain about how Apple doesn’t support their cassette deck they had installed in their old Audi. They’d complain that real drivers shouldn’t need cup holders.
Do any of you people care about actual joy, delight or fun? Does anyone actually have the ability to imagine the future even a little bit? Must we be constrained to a monochrome CRT?
This tech is a small building block of the future, and many of you dismiss it like it's some gimmick. I know you wanted Craig to talk about performance and stability, but if you actually knew anything about WWDC, you'd know that those topics get covered exhaustively throughout the week. The keynote is about big ideas and what is going to be possible. The smaller sessions are where the questions get answered and the spec sheets get explained.
"We know you're all very upset about our new keyboards that break and require a $700 repair if they get a grain of sand in them, so today we're announcing that the new OS is named after a desert.
We're Apple! We hate you, and you're going to keep buying it regardless."
I’m a Mac user who used Linux for years and windows before that. I don’t miss the borders anymore.
Try the keyboard shortcuts for switching windows: Cmd+Tab cycles between apps, and Cmd+` cycles between windows of the current app. Even if you always have a hand on the mouse/trackpad, your other hand can still be your keyboard commander.
If you're set on using clicks to switch windows, look up Mission Control (Control+Up, or a three- or four-finger swipe up, depending on your settings) to zoom out, see all the windows, and click the one you want.
> Apps built using OpenGL and OpenCL will continue to run in macOS 10.14, but these legacy technologies are deprecated in macOS 10.14. Games and graphics-intensive apps that use OpenGL should now adopt Metal. Similarly, apps that use OpenCL for computational tasks should now adopt Metal and Metal Performance Shaders.
I'm just being snarky about it. Sure, you can use MoltenVK to work around this for Vulkan, but Apple are being lock-in jerks here: they never properly kept up OpenGL support to begin with, and now they call it "legacy" while not offering Vulkan support either.
MoltenGL isn't FOSS, so projects like Wine can't use it, for instance.
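For anyone sizing up the migration that deprecation notice asks for, the Metal entry point is at least small; a minimal sketch, assuming the Mojave SDK and deliberately doing no real GPU work:

    import Metal

    // Minimal Metal setup: grab the default GPU and a command queue.
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue(),
          let buffer = queue.makeCommandBuffer() else {
        fatalError("Metal is not supported on this Mac")
    }

    // A real app would attach render or compute encoders here in
    // place of its old OpenGL/OpenCL calls.
    buffer.commit()
    buffer.waitUntilCompleted()
    print("Metal device: \(device.name)")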
https://www.macrumors.com/2018/01/30/apple-focus-on-software...