Hacker News new | past | comments | ask | show | jobs | submit | more BossingAround's comments login

Amazing. This made me want to play BB on my old PS4 that's gathering dust. Then, I remembered it's 30FPS locked, and has pretty bad loading times. I'll wait for the emulator, or a remaster, whatever comes first I guess :))


Apparently someone made it run at 60fps on a PS5 devkit, and it's frustrating because I actually have access to one but there are no instructions anywhere so I can't experience the beauty of Bloodborne in 60fps :-(


Anybody with a hacked/jailbroken PS4 Pro or PS5 can run it at 60 fps. A dev called Illusion makes patches for other games too.

If you've got an old PS4 Pro lying around and haven't updated it in a year, it's more than likely hackable.

It's pretty simple, really.


The PS4 Pro struggles to run this game at 60fps due to a CPU bottleneck. On the other hand, PS5 runs it remarkably stably at 60fps and even 120fps is decent.

Source: Me and my hacked PS4 Pro and PS5.


I've got Bloodborne running on a standard PS4 at 60fps using the Illusion patch (you have to drop to 720p but it still looks great).


Now that you mention it... I do have an old PS4 Pro I definitely haven't used in more than a year - I'll look into it!


Don't worry, I think it's only a matter of time before you'll be able to do this. Either through an official PS5 remaster, an official PC release (less likely) or via some funky unofficial emulation.

Even though it's only 30fps, the game is quite beautiful for being 9 years old and running on a previous-gen console.


It is something to worry about if you care about playing the original. I'm only interested in straight ports with minimal QoL features like 60fps (or unlocked, if viable), higher resolution, and controller rebinding.

I haven't played a single remaster that I preferred to the original game. I'm fine with remasters existing, but only if faithful ports are made available as well.


The Resident Evil 4 remaster is considered to be much better than the original by almost everybody.

Same with the Demon's Souls remaster.


Those are remakes and not remasters. Different beast.

The trouble with remasters is they're usually farmed out to third-party devs who don't always give it the same attention to detail as the original developers. A prime example would be Dark Souls Remastered, which actually makes some of the graphics worse compared to the original PC version.


The line between remaster and remake is a blurry one.

Nier Replicant ver.1.22... isn't officially called either a remake or a remaster, and a convincing case could be made for either. Yet it changes more than Demon's Souls PS5 does.


It is easy to find criticism of Demon's Souls' remaster's art style by entering that term into your favourite search engine. This isn't "considered" true by "almost everybody".

Regardless of any person's opinion on the changes, being substantially different is enough reason that it isn't a suitable substitute for the original.


Lance McDonald (@manfightdragon) is the someone, I don't think the patch is shared publicly though. He plays it sometimes on his Twitch channel. I've seen LobosJr play the 60 fps version too.


> because I actually have access to one

I wouldn't be running a patched pirated game on a devkit.

SCE certainly gets logs of everything you run on it.


Similarly, I’d love to play Zelda in 4k/120hz, but it’s hopelessly locked on an underpowered pocket box.


Not sure whether you know about it or not, so: you have to do a little bit of work, but it _does_ run in 4k on PCs. Do some googling, you'll find all you need.


How helpful of you.


Sometimes, writers don't want to repeat themselves because they were taught that it's "poor writing." And I'd agree, maybe, in (some) prose.

But in tech docs, please, repeat yourself instead of using "this", "that", "those", etc., even when perfectly non-ambiguous.

Prefer "The service is now ready. To check the service's status, ..." over "The service is now ready. To check its status, ..."


> the writers don't want to repeat themselves because they were taught that it's "poor writing."

Yes! As you say, it massively depends on whether you are writing fiction or non-fiction. In any sort of formal document, especially technical reports, etc, the reader should never have to spend time working out what the author means. I used to be a doc reviewer in a previous life, and lost count of the number of times docs used different terms to mean the same thing, especially where multiple authors were involved, or a single author was writing different sections at different times.

General plea: If you value your readers, please, please get someone else to check a doc to look for these sorts of problems. If multiple authors are involved, always get someone on the team (a lead author?) to do this check even before submitting it for formal review.


I found your last paragraph entirely unambiguous at all three levels, which led me to disagree with your overall point. Before that I was with you!


The benefit of "The service is now ready. To check the service's status, ..." is when someone needs to amend this sentence later, e.g. by inserting another point:

"The service is now ready. This means you can query the health endpoint. To check the service's status, ..."

It makes the writing less likely to become ambiguous.

Most documents I see in work will constantly have sentences added/removed as things change.


Many 'rules', including this one, can be broken if you know the tradeoffs and can make the case-by-case choices correctly. But that was a poor example for motivating the behaviour.


We should write with *pointers.


And yet for every one "you", there are thousands of "them" who simply type out a few commands, load the firmware once, and then just play the games.

Maybe two decades ago, there was an article going around about a person who became a commercial pilot thanks to World of Warcraft (WoW improved his English through raids). That doesn't really mean that playing WoW is a good way to become a pilot though.

I suspect that in your case, even without the Gameboy, you'd end up where you are now.


I don’t know. I was always anti-authority and hated rules. I just did what I wanted. My point is there is hope for the rule breakers, and technology obsession combined with a poor school record doesn’t instantly mean someone should be written off.


ChatGPT can help with very simple problems that people have already solved online. That said, typically, you can search for the actual solution online without ChatGPT's possible hallucinations.

A better use of an LLM is to input your reasoning and ask if it is correct, but even then, you can't rely on the output. Probably the best option is to ask on the math side of Stack Exchange and rely on the kindness of a stranger.


Honestly, this seems needlessly painful to me. Of course, you can scan each sentence for a proposition, pause, and try to reason it out, thus spending 4 weeks on like 5 pages of a 10-page chapter. But is that the best use of your time?

The bigger problem is that not everything the author says is within your level of reasoning. Some very simple things can be exceedingly hard to prove, and you, as a learner, don't know which is which. That's why there are problems at the end of the chapter, which are designed for the level you should have attained by the end of the chapter. Without solutions, though, you have no way to check your understanding, and you are forced to try and squeeze every little problem from the text.

Not having solutions is simply not suitable for a self-learner. It is sufficient for a class setting, where you can ask the professor if your solution is correct.

To me, a good compromise is to provide solutions to every odd- (or even-) numbered problem. Thus, self-learners have at least half of the problems within their reach, and teachers can assign the other half.


> spending 4 weeks on like 5 pages of a 10-page chapter. But is that the best use of your time?

Look at it as the best use of paper :)

Many math books are dense. They don't bullshit you around. Spending several hours on a single page is the normal usage.


Yeah, and for better or for worse, it is the best arrangement. I remember ploughing through Alan Baker’s number theory book. You have to sit with a pencil and paper and convince yourself of half the steps, but you sure as hell know the material afterwards. And you do need the skills you gain by doing this.


> And you do need the skills you gain by [engaging].

The slogan I've heard is: "Mathematics is not a spectator sport."


I haven’t heard that one before but it’s on the nose. I have heard Euclid’s much older “There is no royal road to geometry”.


How do you deal with disputes? Suppose someone's code is flagged even though the student in question didn't actually cheat. What then? Do you trust tools over the students' word?

In addition, do things like Stack Overflow and LLM-generated code count as cheating? Because that is horrible in and of itself, though it's a separate concern.


The output of plagiarism tools should only serve as a hint to look at a pair of solutions more closely. All judgement should be derived entirely from similarities between solutions and not some artificial similarity score computed by some program.
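To illustrate the point (a hedged sketch, not how JPlag or any real tool actually works - those compare token streams or program structure), even a naive textual similarity score can be computed with Python's difflib. The number only ranks pairs for a human to read side by side; it is never the judgement itself:

```python
from difflib import SequenceMatcher
from itertools import combinations


def similarity(a: str, b: str) -> float:
    """Crude textual similarity in [0, 1]; real tools are far more robust."""
    return SequenceMatcher(None, a, b).ratio()


def pairs_to_review(submissions: dict[str, str], threshold: float = 0.8):
    """Return student pairs whose solutions look similar enough to inspect manually.

    The score is only a hint for prioritizing review -- the actual judgement
    must come from reading both solutions, not from the number.
    """
    flagged = []
    for (s1, code1), (s2, code2) in combinations(submissions.items(), 2):
        score = similarity(code1, code2)
        if score >= threshold:
            flagged.append((s1, s2, score))
    # Most similar pairs first
    return sorted(flagged, key=lambda t: -t[2])


# Hypothetical submissions, purely for illustration
subs = {
    "alice": "total = 0\nfor x in xs:\n    total += x\n",
    "bob":   "total = 0\nfor x in xs:\n    total += x\n",  # identical copy
    "carol": "print(sum(xs))\n",                           # different approach
}
print(pairs_to_review(subs))
```

Here the alice/bob pair gets flagged for review while carol's independent solution does not; a reviewer still has to look at both files before concluding anything.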


Unfortunately, this is not really what happens in my experience. The output of plagiarism tools is taken as fact (especially at high school levels). Without extraordinary evidence of the tool being incorrect, students have no recourse, even if they could sit and explain the thought process behind every word/line of code/whatever.


Lousy high school.


Indeed, this is exactly what I did.


If you talk about the written code to the student in question it should become clear whether it was copied or not.


Well, in this case I noticed the same code copied while grading a project. I then used JPlag to run an automatic check on all the submissions for all the projects. It found many instances where a couple of students did a copy-paste with the same variable names, comments, etc. It was quite obvious if you looked in detail, and JPlag helped us spot it in multiple files easily.

*edited mobile typos


> almost no individual at Google has the fire or drive in them to play to win

Yes, that's why you go to large megacorps. I wouldn't go to Oracle, IBM, Google, Facebook, MS, etc. etc. if I had a "huge fire or a drive to win". Honestly, I go there because I wanna work 5 hours a day (less if possible) and have a stable career.

If I want to work 9-12h a day, give me an upside. None of these huge megacorps will reward that.

If I'm the lead of a Goog project that becomes a hit, do I get $10M bonus? Of course not. I get a pat on the back and something to put in my packet for the next promotion interview.

So you're absolutely right, but the problem is not in people. It's in the way the system's designed.


> do I get $10M bonus?

Seems like you might want to consider the finance industry.

If you make something important 5% faster, you bet you are getting a few years worth of salary as bonus.


I think you would have to narrow the scope even further, to particular areas of finance, probably quant or trading within a hedge fund. The majority of jobs at a large bank are mostly fixed salary with a limited bonus. There are many jobs that are essential but don't capture a percentage of the value they generate. For example, processing transactions is essential for a bank, but that area typically doesn't pay large bonuses.


Large banks can pay very well for the right software role, but there's so much more nepotism than big tech that it's pretty much impossible to land these unless you have a godfather in the industry AND you are a recognized prodigy.


>There are many jobs that are essential but don't capture a percentage of the value they generate.

Isn't that strange?


Unfortunately not. Nurses, doctors, bin men, car mechanics, plumbers can all be essential at times but rarely pay large bonuses.


It's the scale.

One important blue collar worker can help a limited number of people at a time, usually one.

One line of code can affect what billions of people are doing.


The incentives in finance are remarkably well-tuned to rewarding employee effort. Make the company 300k? You get 30k. Make the company 300 million? You get 30 million.


That seems so easy to game. Do some small change that brings short term gain, bag the profit and let the rebound be the failure of another team.


Doesn't that pretty much describe the 2007-2008 financial crisis?


And every such crisis before it, and the next one too.


The crisis was 07/08, but the buildup was almost a decade in the making; does that count as short term?


The law in the UK is that material risk takers' bonus pay over a certain amount (a couple of hundred thousand) is deferred over 3-5 years and can be clawed back.


If you think it's easy to game 30M in short term revenue, please try to do that in a large firm.

The risk/compliance teams would be interested in your strategy.


I'm a socialist/anarchist, they would never give me that opportunity :)


You could try this overall strategy in the stock market and see how it works out ;)


Maybe the profit share is earned out over time? Otherwise I would agree with you..


How do SWEs get into finance?


I see quite a few SWE jobs here in Singapore in finance, mostly realtime C++ order management. If the advertised salaries are real, they're very well-paid (300-700k USD, plus bonus).


The path of least resistance is to become a well known C++ or systems guru.

This can be done by contributing substantially to the language standard, a compiler, etc etc.

Or you could learn COBOL!


First step, understand what Finance means in the scope of technology work.


I don't have a clue.


This is oversimplifying too much IMO. Obviously the potential reward working at startups is much higher than at megacorps, and you can very safely say that people working at startups have a higher risk appetite. However, plenty of people work 9-12h a day (say, at Meta) hoping to get promoted at rocket speed and play to "win" the higher TC at megacorps, and it happens often enough that very driven people do join megacorps.


A lot of people at a megacorp play to win, they've just realized that the company's value is so much higher than the value they could add to it, that for them, winning means capturing part of the company's value for themselves. In a sense the company becomes the market and their friends become the company.


>>but the problem is not in people. It's in the way the system's designed.

The system was designed by people, more specifically people with the ability to make decisions, a.k.a. management.

There might not have been one person making all the wrong decisions, but rather lots of small wrong ones. That doesn't change the fact that management is always responsible for the state of affairs.


> Yes, that's why you go to large megacorps. I wouldn't go to Oracle, IBM, Google, Facebook, MS, etc. etc. if I had a "huge fire or a drive to win". Honestly, I go there because I wanna work 5 hours a day (less if possible) and have a stable career.

I agree that many want the stable, low-hours career, but how many people at these big companies are getting that? I mostly see them as FOB farms that try to overwork the overwhelming majority of their workers. For all the stories of people working five hours a day and making $400k/yr - I hear many more about working 60-hour weeks.

> If I want to work 9-12h a day, give me an upside. None of these huge megacorps will reward that.

I don’t really see startups rewarding that much either. Maybe it’s more rewarding if you’re a founder. I’m speaking as someone who has been an early engineer at startups and gone public with them. I still don’t see them as that rewarding unless you’re a founder.

Also, incentives aren’t the same. You might make a great thing but unless you’re near the top - you’re probably not going to get properly rewarded regardless of how good your ideas and whatnot are. People at the top will steal credit because that’s what they do. (“Look at how good I am at hiring/managing/inspiring/etc.”)


> For all the stories of people working five hours a day and making $400k/yr - I hear many more working 60 hour weeks.

This is absolutely right. For every senior person in a glamorous role at a FAANG making $400k/yr, there are probably five less senior, less glamorous people making $150k/yr, grinding away trying to justify a promotion to the next level. The people posting to HN that their brother's girlfriend's nephew's roommate makes $400k think that's every FAANG developer.


To be more balanced on this, I don't think that many are making <=$150k/yr if they're in NYC/Seattle/SF and are in engineering. I think many are making $300-400k/yr but have a high workload.

Levels gives a clear direction that if you work at big tech as an engineer, you'll usually make decent to good income. Whether it's worth the WLB/PIPing/misery is what you have to figure out.

There's a reason a ton of the people at FAANG are on H1Bs. It's not a lack of domestic talent - it's a lack of Americans willing to be worked that hard and go through insane hoops to get said jobs. (Justifiably so, btw.) I think a large reason why most of the crowd at FAANG is way more autistic than average is that autists can put up with such insane working conditions/hoops, either because they enjoy it or because they just have something about them that allows them to ignore it. I'm not even going to get into how many people in SV are also on various stimulants.


Insane working conditions? At least at Google I'd say you have better working conditions than the sales staff at Anthropologie. They are not mining coal over there. Laptop class is indeed spoiled.


My hard-charging peers are coming out of the FAANG world with hypertension and/or type II diabetes.


So is the rest of America. Any evidence the rate is higher? Free soda and food may contribute, I'll give you that.


They are two sides of the same coin, aka culture.

System incentives, management, and staff hiring all form a feedback loop which sets culture and performance.


If you're lumping together all the big corporations, you're missing my point. Sure, size is a parameter, but it's not precisely what I meant to communicate, which is more true of Google than of Tesla, for example.


That's the first thing that occurred to me too. It could also not be constant even at the same place, i.e. could it not be speeding up and slowing down as the universe expands?


I love k8s, but bringing back up a single app that crashed is a very different problem from "our k8s is down" - because if you think your k8s won't go down, you're in for a surprise.

You can also view a single k8s cluster as a single host, which will go down at some point (e.g. a botched upgrade, cloud network partition, or something similar). While much less frequent, such an outage is also much more difficult to get out of.

Of course, if you have a multi-cloud setup with automatic (and periodically tested!) app migration across clouds, well then... Perhaps that's the answer nowadays.. :)


> if you think your k8s won't go down, you're in for a surprise

Kubernetes is a remarkably reliable piece of software. I've administered (large X) number of clusters that often had several years of cluster lifetime, each, everything being upgraded through the relatively frequent Kubernetes release lifecycle. We definitely needed some maintenance windows sometimes, but well, no, Kubernetes didn't unexpectedly crash on us. Maybe I just got lucky, who knows. The closest we ever got was the underlying etcd cluster having heartbeat timeouts due to insufficient hardware, and etcd healed itself when the nodes were reprovisioned.

There's definitely a whole lotta stuff in the Kubernetes ecosystem that isn't nearly as reliable, but that has to be differentiated from Kubernetes itself (and the internal etcd dependency).

> You can view a single k8s also as a single host, which will go down at some point (e.g. a botched upgrade, cloud network partition, or something similar)

The managed Kubernetes services solve the whole "botched upgrade" concern. etcd is designed to tolerate cloud network partitions and recover.

Comparing this to sudden hardware loss on a single-VM app is, quite frankly, insane.


If you start using more esoteric features the reliability of k8s goes down. Guess what happens when you enable the in place vertical pod scaling feature gate?

It restarts every single container in the cluster at the same time: https://github.com/kubernetes/kubernetes/issues/122028

We have also found data races in the statefulset controller which only occur when you have thousands of statefulsets.

Overall, if you stay on the beaten path k8s reliability is good.
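For context on how a gate like that gets turned on (a hedged sketch only; flag placement varies by distribution, and managed offerings may not expose it at all):

```shell
# Feature gates are opt-in flags on the control-plane binaries and the kubelet.
# Enabling InPlacePodVerticalScaling (alpha around k8s 1.27) looks roughly like:
kube-apiserver --feature-gates=InPlacePodVerticalScaling=true ...
kubelet --feature-gates=InPlacePodVerticalScaling=true ...
```

Per the linked issue, flipping this particular gate on a live cluster restarted every container at once, exactly the kind of off-the-beaten-path surprise described above.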


Even if your entire control plane disappears, your nodes will keep running, likely for long enough to build an entirely new cluster to flip over to.

I don’t get it either. It’s not hard at all.


Your nodes & containers keep running, but is your networking up when your control plane is down?


Sounds like we need a WSL in the other direction - have a Windows subsystem for Linux, so that your $distro integrates with Windows and you can run apps on Windows while having the window show up on your Linux system :)


There's WINE for basic compatibility, like for Affinity tools in TFA. You can use a Windows VM, and IIRC both VMWare and VirtualBox have integrated UI options to put the start menu/bar integrated into the windows desktop. I've done this before, putting my mac/linux dock on the side, and the windows bar on the bottom. I haven't used Windows outside work projects the past few years though.

As an aside, I'm surprised and a bit disappointed: you can use Affinity tools, which is very cool, but it kind of sucks that you have to jump through that many hoops. It would be somewhat nice if Affinity themselves gave you an install script that could detect the major Linux variants (Fedora, Ubuntu/Debian) and install more directly.

