Beating macOS at gaming sounds like a punchline; that's just not impressive. macOS has only ever been an afterthought for gaming.


Maybe so, but in some circles MacBooks are the most prevalent computers, and these Linux advances are thus hugely relevant.


Yes, you've got to try very hard to be worse than macOS, which is openly sabotaging OpenGL and Vulkan efforts.


Lack of OpenGL and Vulkan support never mattered on game consoles.

Even with Switch's adoption of Vulkan, most games are actually using middleware that makes use of NVN.

If a platform is relevant to professional games developers, they will target it, regardless of which APIs and OS it requires.


Consoles give very strict contracts that you can rely on not to change for the whole lifetime of the console. By contrast, macOS changes stuff every year, apparently based on what a fortune teller says to Steve Jobs's ghost.

That means that early on, yeah it's as bad as Mac, but it gets way better over the lifetime of the console.


All the middleware that matters for professional game studios has supported Metal for at least a year now.


Yeah, no. There's talk of Blizzard, for instance, dropping support for Mac. DICE has a blog post that's messaging concern. It looks like a bunch of engines are going to limp along on MoltenVK, which kind of imposes a weird impedance mismatch and sometimes gives weird perf issues that you probably wouldn't see with a native Metal backend.

And that's before getting into the release of Metal 2. There's a non-zero amount of work to support it, and it's not clear how long Apple is going to support Metal 1.

And all of that is before all sorts of other crazy stuff with Apple changing their app signing requirements, messaging that they're going to require all apps to be signed by Apple in some future macOS release (but won't tell you when that is).


> All the middleware that matters....

Unreal, Unity, CryEngine, ...

I wasn't talking about in-house solutions, but rather engines that many AAA studios buy in order to actually focus on the game itself.

As for the rest of your remark, it comes up in places like HN, but not at all at local game developer meetups, in developer articles on Making Games, Gamasutra, Connection, IGDA, or in many other professional publications.


I mean, my day job is supporting an application across Win/Mac/Linux. Even ignoring the graphics, Apple is easily the hardest to support. I don't really care if you haven't read a magazine article on it.

And to pretend like Frostbite doesn't matter is ridiculous.


It doesn't matter for developers that want to support OS X, as it is mostly focused on game consoles and Windows.

Whereas the ones I listed do support those systems, Apple platforms, Android and GNU/Linux.


So wait, the AAA developers not supporting don't count against your argument? Even in the case of Blizzard who has famously been one of the biggest Mac supporters? Isn't "all AAA support Mac... except all the ones that don't support Mac" a tautology?

Also, just noticed that you lumped in Unity with AAA, lolz. What's next libgdx?


Because their focus is clearly PlayStation, Xbox and PC, not even Nintendo hardware.

There are plenty of other AAA studios using Unreal, Unity, CryEngine.

Maybe you should check again the names of some studios using Unity; ever heard of Nintendo and Microsoft?


> Because their focus is clearly PlayStation, Xbox and PC, not even Nintendo hardware.

> There are plenty of other AAA studios using Unreal, Unity, CryEngine.

Blizzard's focus had been on Mac in addition to Windows. With the switch to Metal, they're probably abandoning it. Frostbite means that EA AAA games probably won't support it either. Ubisoft didn't release Assassin's Creed Odyssey on Mac. And even looking at Unreal Engine 4 games, only Fortnite and a tower defense game have been released for Mac. Looking at CryEngine, no games have ever been released for Mac. So where is all this AAA support for Metal that you're talking about?

To lead you to water: Mac support is a nice-to-have so that their in-house tools work on the platforms their artists are used to. But they don't care enough to finish out the QA, or put in any work to make the game actually shippable on that platform. The switch to Metal means you can't justify it with "well, we can just support OpenGL and get Mac for free" like they used to.

> Maybe you should check again the names of some studios using Unity, ever heard of Nintendo and Microsoft?

AAA is about the games, not the studios. Name a single AAA game on Unity.


Responding because I can't edit:

It was about three years ago that, through pseudo-public channels, Apple started messaging that OpenGL was on its way out. Oh, look what Blizzard game came out (Overwatch), which has pretty flagrantly disregarded the idea of Mac support, even entertaining the idea of possible Switch support.


Yet OpenGL doesn't make them support Linux any better.

So support or not for Metal is not the real reason why they don't want to focus on the Mac.

As for games, Nascar Heat 3, for example.


> Yet OpenGL doesn't make them support Linux any better.

> So support or not for Metal is not the real reason why they don't want to focus on the Mac.

"But they don't care enough to finish out the QA, or put in any work to make the game actually shippable on that platform.". Mac was a fixed platform, and you used to be able to justify the engineering because the work ultimately helped make your Windows port better ("the end user will have a way out if there's a bunch in their DirectX drivers"), and let your artists do all the work on the tools they were used to. Then if you're running your tooling on Mac, you've been supporting it the whole time and there's very little QA overhead for release since it's a relatively fixed platform. That last part doesn't apply to Linux. This whole time I've been saying it's not just OpenGL->Metal, it's a Nexus of several things all coming together to break the camel's back.

> As for games, Nascar Heat 3, for example.

You know that a game that's less than $50 at release isn't a AAA game, right?


I guess none of the games in computer stores at shopping malls in my city are AAA then, zero.


The ones that release on consoles too for less than $50? Yeah, none of those are AAA.


Today I learned that games like Sea of Thieves, Fortnite, Hitman, GTA, and Assassin's Creed aren't AAA because they are too cheap according to your price table.


Sea of Thieves - $59.99 on console

Fortnite - F2P, so different enough business model that you have to take that into account

Hitman - $59.99 on console

GTA - $59.99 on console

Assassin's Creed - $59.99 on console

Nascar Heat 3 - $49.99 on console

Are you still going to pretend that you don't see the difference here?


So a $10 difference according to your table makes a AAA game, while I can get all of them around here for between 30 and 40 euros, whatever.


Yes, the initial MSRP is an extremely high-SNR signal as to whether it's a AAA game or not.


(not in the game industry, but a graphics programmer)

Are there really no games out there that program their own graphics anymore and don’t rely on “middleware” engines? This seems shocking to me. Then again I was shocked the first time I learned that most games don’t hand-code assembly anymore. Things move so fast.


AAA studios always use middleware; if it isn't bought, it is done in-house.

The actual set of 3D APIs is a tiny portion of everything that a game engine requires, alongside scene management, materials handling, the graphical editor, plugins, sound, physics, ...

So one always ends up with a pluggable rendering layer, where adding a new API is relatively simple.
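
To make the "pluggable rendering layer" idea concrete, here's a minimal Python sketch; the class and method names are invented for illustration and don't come from any real engine:

    from abc import ABC, abstractmethod

    class RenderBackend(ABC):
        """Thin layer the rest of the engine talks to; one subclass per 3D API."""

        @abstractmethod
        def create_buffer(self, data: bytes) -> int: ...

        @abstractmethod
        def draw(self, buffer_id: int, shader: str) -> None: ...

    class MetalBackend(RenderBackend):
        def create_buffer(self, data: bytes) -> int:
            print(f"allocating an MTLBuffer-like resource, {len(data)} bytes")
            return 1

        def draw(self, buffer_id: int, shader: str) -> None:
            print(f"Metal-style draw call using shader {shader}")

    class VulkanBackend(RenderBackend):
        def create_buffer(self, data: bytes) -> int:
            print(f"vkCreateBuffer-like call, {len(data)} bytes")
            return 1

        def draw(self, buffer_id: int, shader: str) -> None:
            print(f"vkCmdDraw-like call using shader {shader}")

    def make_backend(platform: str) -> RenderBackend:
        # Chosen once at startup; the scene graph, materials, editor, etc.
        # above this layer never see the underlying API.
        return MetalBackend() if platform == "macos" else VulkanBackend()

Adding a new API then mostly means writing one more subclass, which is why it's a comparatively small part of the overall engine work.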

Now what has been happening is that, with production costs skyrocketing, most studios are increasingly adopting external middleware that they just adapt to their purposes rather than writing everything from scratch.

For example, you can get Unreal and get support for NVidia's raytracing features out of the box, or invest the money to develop the same features from scratch in-house.

The culture in the games industry is that what matters is the story, gameplay, taking advantage of hardware features, and getting the game out there; tech comes and goes.


In which jurisdictions is this deliberate circumvention of access controls legal?


In Germany, this is defined in § 95a UrhG [0], as in "bypassing safeguard measures to gain access to copyrighted material". "Anti-Anti-Adblocks", as in adblock filters bypassing adblock popups, were already declared illegal in the BILD case [1].

[0] https://dejure.org/gesetze/UrhG/95a.html

[1] https://www.wbs-law.de/it-recht/verbreitung-einer-anleitung-...


Whether or not clearing cookies would count as bypassing is an interesting question, because Adblockers themselves are legal. The specific BILD case is about blocking and disabling a tool to detect adblockers, in which case it fully denies access to the article.

Ie, the anti-anti-adblock is actively interfering with the site's function on the client side.

Cookies and referer are handled on the server-side and outside the user's control.

I would question whether the last sentences of § 95a apply to setting a referer and clearing cookies. They are closer to tampering with a computer system (§ 263a).


The first word of the law is "wirksame", which means "effective".

It's very easy to argue that a protection that is trivial to circumvent is not "wirksam".


Yes, that's why I added the second link. Someone posted a tutorial on how to bypass the BILD Anti-Adblock. He argued that this is not an "effective" protection, but a court ruled that an "Anti-Adblock" script is "effective".


It depends on the court. IIRC, that ruling came from the Landgericht Hamburg, which has become a bit of a running gag because of their copyright friendly rulings.


It still depends on whether the protection is considered effective.

In the case of this addon, the paywalls are often just overlays that you can also remove manually with a few clicks.


The effectiveness of the access control is usually a factor.

If there is just a banner hovering over the actual text, and the extension merely removes that banner, then one could question whether there even was an access control in the first place.

As an extreme example, a Finnish court ruled that CSS (the Content Scramble System used by DVDs a long time ago) was ineffective.

https://www.turre.com/finnish-court-rules-css-protection-use...


I agree that a banner or any other overlay is not protection - the content has already been delivered to the recipient.


> Is there a name for this type of fatigue

Stress. (One of the symptoms of it.)


Is it? To me the situation in question seems like the complete opposite of stressful.


Yeah, this sounds like too _little_ stress to me. Imagine a bell curve with stress on the X axis and performance on the Y axis, and there's a sweet spot in the middle where you're getting enough of a kick to motivate you, but not so much that you're drowning.


Yes. And I've personally found that the "sweet spot" in the middle is all too elusive.

In my career I think I've spent the majority of time at one extreme or the other. Feast or famine: overworked & stressed or underworked & depressed. When you can hit it just right in the middle, life is truly great. But rarely have I been able to sustain it for more than a couple months at a time. Let me know if you've found the secret :)


Stop trying to re-invent statistics. Use a box and whisker plot of latency. You quickly get to see the median, the quartiles, and all the outliers, and you get it in a format which is familiar and easy to understand. You can even plot box and whisker plots next to each other for quick meaningful comparisons between different things.


I've recently grown to like violin plots for latency (https://en.wikipedia.org/wiki/Violin_plot). I've also added 99th-percentile tick marks, which, together with the already present median mark, give a relatively full picture of latency that is easily digestible.
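
If anyone wants to try this, here's a rough matplotlib sketch; the latency data is synthetic and the p99 tick is drawn by hand, so treat it as illustrative rather than a recipe:

    import numpy as np
    import matplotlib.pyplot as plt

    # Fake latency samples (ms); log-normal so the tail looks vaguely realistic.
    rng = np.random.default_rng(0)
    latencies = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)

    fig, ax = plt.subplots()
    ax.violinplot(latencies, showmedians=True)

    # Mark the 99th percentile as an extra tick, as described above.
    p99 = np.percentile(latencies, 99)
    ax.hlines(p99, 0.9, 1.1, colors="red", label="p99")

    ax.set_ylabel("latency (ms)")
    ax.legend()
    plt.show()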


Mathematics is not something handed down by the gods. It's possible to encounter not just completely new problems, but also limitations to existing methods for solving a known problem.

In this particular case the challenge is aggregating statistics from a very large fleet & having automated alarms. Visualization tools don't help with any of that. More specifically, the reporting tools out there apparently have a very common & persistent flaw of reporting an average of percentiles across agents, which is a statistically meaningless metric. It makes no difference how you visualize it - the data is bunk.

This article flips it so that agents simply report how many requests they got & how many exceeded the required threshold. This lets them report the percentage of users having a worse experience than the desired SLA. You can also build reliable tools on top of this metric. It's not a universal solution but it's a neat trick to maintain the performance properties of not needing to pull full logs from all agents & still have a meaningful representation of the latency of your users.
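
A minimal Python sketch of that counting approach, assuming each agent just ships two counters per reporting window (the numbers and agent names below are made up):

    # Each agent reports (total_requests, requests_over_threshold) for a window.
    agent_reports = [
        (12_000, 30),    # agent A
        (8_500, 5),      # agent B
        (20_100, 210),   # agent C
    ]

    total = sum(n for n, _ in agent_reports)
    over = sum(k for _, k in agent_reports)

    # Unlike an average of per-agent percentiles, these counters add up cleanly,
    # so the fleet-wide figure is exact for "fraction of requests over the SLA".
    print(f"{100.0 * over / total:.3f}% of requests exceeded the latency threshold")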


There's also Microsoft's R Open (https://mran.microsoft.com/download), which I've found is faster than out-of-the-box R since it supports better multi-threading of commands.


IIRC most of that is because they use Intel's MKL and a better BLAS; if you like Docker, the Rocker containers use the better BLAS, and I think adding MKL isn't too hard either.


That reminds me of this work (2009) by artist Ellie Harrison who made a vending machine which dispensed snacks any time the word recession popped up on the BBC news feed: https://www.ellieharrison.com/vendingmachine/


These may well work on some (cheap?) phones; I've had phones not display properly through polarized sunglasses before.


How do you take this from here to the next step?

Let's say I want to develop a multi-user todo app, and I've followed the examples and now have some Users who log in by entering a password.

Someone points out that I shouldn't be storing passwords in plain text so I want to store them hashed.

Despite there being hints in the documentation that this is possible (migrations shows the password hashed), it's not clear how to carry out this kind of change.

A walkthrough of making a change would help convey the use case much better than just templates of "this incantation produces this output".


Well, in this particular case passwords are always hashed. So the platform deals with that for you. Other changes to the data model need a migration, which is covered here: https://alan-platform.com/pages/tuts/migration.html


It looks like you're using SHA256(username||password) in this example. Even if it's only an example, why use a homebrew password hashing scheme based on an unsuitable hash function and bad ad-hoc salt handling, instead of a strong standard password hash with built in salt handling? And what code/specification is required to use a secure algorithm, like bcrypt with a random salt?

People often copy from such tutorials and will then end up with insecure password storage.
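
For anyone copying from such tutorials, this is roughly what the comment is asking for, sketched with the Python bcrypt package (a generic example, not the Alan platform's actual implementation):

    import bcrypt

    def hash_password(password: str) -> bytes:
        # gensalt() produces a random salt; the salt and cost factor are
        # embedded in the returned hash, so there is no ad-hoc salt handling.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

    def verify_password(password: str, stored_hash: bytes) -> bool:
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

    stored = hash_password("hunter2")
    assert verify_password("hunter2", stored)
    assert not verify_password("wrong", stored)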


As someone who's not a developer but still occasionally looks through these "build an app" tutorials, password/authentication portions always worry me. I wish I could know that best-practices were shown as far as storing/encrypting user data/passwords.


Where is the definition of the password being hashed? The tutorial defines it as text:

/* 'Users': collection { 'Password': text }*/

How does the platform know to hash that? Is it looking for magic property names?


No, there is no magic :) When you set up the users at the top of the model file, you specify which property is the password.


Having something hashed as a password (SHA-256? salted SHA-256?) without telling it to seems like magic to me.

So to clarify, it's this password: declaration which tells the framework to hash the input?

    users
        dynamic :  . 'Users'
        password : . 'Password'

    interfaces

    root {

    }

    numerical-types
Is this defined anywhere within the project, or is this framework magic? (Or "glue" if you don't like the term magic.) password is not mentioned again anywhere in the documentation; I'd like to understand how the framework knows to hash the input.


The line `password : . 'Password'` points at the password property, which tells the framework to hash it. What kind of hashing isn't something you should have to specify or worry about. It's a strong salted hash and we'd like to make it even better at some point, but that's firmly in the realm of the framework implementation.


This parses to "Just trust us to get this right."

I think you may have some issues persuading customers that's a wholly valid approach, especially when you're dealing with security and data integrity, GDPR, and so on.


The goal is to "step in when Excel starts to get in the way".

I doubt they will ever care about any of that. A "side loaded application" will likely be the answer to most of those comments.


There is a lot the platform can do by itself, but I think you grasped it well enough. The side loaded apps are definitely the escape hatch to enable things it doesn't cover (yet).


The difference between the 99th and 99.99th percentiles is about 1.4 standard deviations; a common IQ test ought to be able to be accurate to that. Otherwise it couldn't measure the difference between 100 and 120 IQ (roughly 0 to 1.4z), which it clearly can.
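
A quick way to check that arithmetic, assuming a standard normal distribution and SciPy:

    from scipy.stats import norm

    z_99 = norm.ppf(0.99)      # ~2.33 standard deviations above the mean
    z_9999 = norm.ppf(0.9999)  # ~3.72 standard deviations above the mean
    print(z_9999 - z_99)       # ~1.39, i.e. roughly 1.4 SD apart

    # For comparison, IQ 120 on a scale with SD 15 sits (120 - 100) / 15 ~= 1.33 SD
    # above the mean of 100.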


I don't think your reasoning is valid. A test might be unable to discriminate accurately at the extremes not because it's uniformly too inaccurate but because it doesn't have enough range.

Toy model: the test consists of one question that everyone in the top 1% can do and no one in the bottom 99% can do; one question that everyone in the top 2% can do and no one in the bottom 98% can do; ... one question that everyone in the top 99% can do and no one in the bottom 1% can do. This test discriminates very nicely and accurately throughout its range of applicability, but it will do no good at all at distinguishing a top-0.01% person from a merely top-1% person.

(Just as a tape measure 2m long will let you compare people very accurately by height provided they're no taller than 2m, but will be much less useful for people taller than that.)


Modern tests are delivered by computer and typically are adaptive. This means that as you answer questions correctly you get asked increasingly difficult questions until you get some wrong.

This means that you aren't limited to asking the same questions to everyone so you can have appropriate discrimination through the range.
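
A toy sketch of that adaptive idea in Python (a crude staircase procedure with a made-up logistic item-response curve, not how any real test is implemented):

    import math
    import random

    def run_adaptive_test(prob_correct_at, n_items=20):
        """prob_correct_at(difficulty) -> chance the test-taker answers an item
        of that difficulty correctly; returns the final difficulty as a rough
        ability estimate."""
        difficulty, step = 0.0, 1.0
        for _ in range(n_items):
            correct = random.random() < prob_correct_at(difficulty)
            # Step up after a correct answer, down after a miss, shrinking the
            # step so the estimate settles near the point where p(correct) ~ 0.5.
            difficulty += step if correct else -step
            step *= 0.9
        return difficulty

    # Toy test-taker whose "true ability" is +2 (arbitrary units).
    ability = 2.0
    print(run_adaptive_test(lambda d: 1 / (1 + math.exp(d - ability))))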


> a common IQ test ought to be able to be accurate to that

Accurate to what, even? The very notion of IQ is fuzzy, so naturally any test trying to measure the value would inherit that fuzziness.

The difference between 100 and 120 may be statistically identical to that between the 99th and 99.99th percentiles, but the practical difference is vastly different. At a certain point, the IQ test is "defeated", and any value above a certain threshold is noise.


Sorry, I don't understand: how do you know the percentile difference in terms of standard deviations but not know if a test is accurate enough?


By assuming that it's a normal distribution.


An answer, by analogy:

If you yell into a microphone, the recording will come out distorted and inaccurate. Yet if you speak normally into one, the recording will sound rather true to life. This is because the microphone has been tuned to a certain level of sensitivity, and when that threshold is exceeded, what it records is clipped.

A similar principle holds for many types of human tests in education, psych, etc.


Meta doesn't imply cycling, just that the best strategy may be a mixed strategy which involves randomly picking between different pure strategies.

Instead of cycling, the meta ought to converge to a Nash equilibrium.

With the right mixed strategy, an opponent choosing a pure strategy would be at a disadvantage.

Randomising over a huge choice of pure strategies may be infeasible, of course, and in the real world players have to train for particular strategies, which is why we see meta shifts. (Plus, of course, in the real world the conditions (assumptions) change due to gameplay patches.)
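
A tiny illustration of those points using rock-paper-scissors, the canonical zero-sum example (payoffs are for the row player; nothing here is specific to any particular game's meta):

    import itertools

    # Row player's payoffs; rows/columns are rock, paper, scissors.
    payoff = [
        [ 0, -1,  1],
        [ 1,  0, -1],
        [-1,  1,  0],
    ]
    nash_mix = [1 / 3, 1 / 3, 1 / 3]  # the equilibrium mixed strategy

    def expected_payoff(row_mix, col_mix):
        return sum(row_mix[i] * col_mix[j] * payoff[i][j]
                   for i, j in itertools.product(range(3), repeat=2))

    # Against the equilibrium mix, every pure strategy earns exactly the game
    # value (zero here), so there is nothing left to exploit...
    for k, name in enumerate(["rock", "paper", "scissors"]):
        pure = [1.0 if i == k else 0.0 for i in range(3)]
        print(name, expected_payoff(pure, nash_mix))

    # ...whereas a predictable pure strategy is exploitable: paper vs. always-rock.
    print(expected_payoff([0, 1, 0], [1, 0, 0]))  # 1.0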


I wasn't suggesting that metas naturally cycle, but that the ideal meta (for human play, based on my experience of what people enjoy about pvp games that aren't purely emphasizing skill) is one that lacks an optimal strategy, because the usage of an optimal strategy implies its own downfall, and that this meta-countering operation is acyclic. (A cyclic meta is likely created by accident, and kills the pvp community if left as-is.)

And notably, a random strategy selection being optimal is non-ideal for human consumption. And as you note this doesn’t naturally occur in human pvp, because there are heavy natural biases (information spread, natural leaders in the subject, limited skillsets, time for the community to learn between dev balance shifts, etc). But even if we could have it, I don’t think we’d want it.

I think what competitive pvp wants are somewhat obvious optimal solutions, with natural counter-play. But these near-optimal solutions are tied to the current popular strategy. That is, half the fun is figuring out what the community at large is up to, and tracking it.

Which, finally, implies that the kind of games that grow a significant pvp community are naturally selected because they offer no clear, and static, optimal strategy. If an ML program did find such a strategy (outside of requiring superhuman capabilities, like zerglings dodging siege tanks), it would either kill the community, or get patched out. You could consider the ML algorithm as competing with an adversarial meat learning algorithm, in both strategy and spirit.


However, the meta does in fact exist. The optimal, mixed strategy is only optimal when your opponent uses the same mixed strategy. We don't, so if the AI can predict us then it can do better.

An optimal AI should therefore include theory of mind, and human-prediction in particular, such that it can stay ahead of the meta.

