Hacker News new | past | comments | ask | show | jobs | submit login

Idling on the New York Times's homepage without an adblocker in Firefox, Activity Monitor says the tab is using ~10% of one core of my CPU, and about:performance says it's using ~200 MB of memory.

Now, I have a pretty fast CPU—a 4790K—and it wasn't that long ago that most computers had less than 200 MB of RAM total. And, you could read on those computers just fine.

On the other hand, if I repeat the same test with my Slack tab, I get 30% of CPU and 400 MB of RAM. Slack doesn't have ads, and IM clients are another thing that worked just fine 20 years ago.

I guess my point is, the web is bloated, and I'm not sure why we're harping on The Times.




Slack has an entire app ecosystem layer in it these days, not to mention they've never really trimmed down the electron fat. Slack hasn't been a contender for "well made" in quite a few months.

NYT should be a relatively simple website. Sure, they have some very nice interactive stories and we'll give those a pass, but even just plain text articles load an amazing amount of cruft. It's text, just send the damn text.

Everyone else being bloated isn't an excuse.


The New York Times's homepage isn't just text, though. I see a lot of large, high-quality images, many of which transition to other images so you can see multiple visuals for a story. There's a "live updates" sidebar and a real-time stock ticker. There are video and audio embeds.

> Everyone else being bloated isn't an excuse.

It's not, but I'd rather focus on the worst offenders instead of the half-decent ones. I didn't compare it to Slack because I thought Slack was a good example—I did it for the exact opposite reason.

And, as long as there are sites that are so much heavier and yet seem to be doing fine (I don't like it, but it is what it is), I doubt NY Times is removing 3rd-party ads due to performance issues. It could be a nice side effect, though.


For comparison, I disabled ublock origin and privacy badger and loaded https://fivethirtyeight.com which is in the same realm - heavy text content with several interactive widgets, video and audio embeds.

Render time on my (aging) MacBook Pro is within the acceptable threshold (< 1500 ms), the memory snapshot in Firefox is ~90 MB, and it feels like it loads quickly and responsively. It could be better, but it's infinitely more usable than NYT for me.


I don't think a 1.5-second render time is acceptable for a webpage whose main payload of interest is 2-5 KB of text (the article).
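For comparison, curl's built-in timers show what fetching just the markup costs, with no browser and no JS. A minimal sketch; the local file:// URL and stand-in file are only there so the command is self-contained (against the live site you'd point it at the article URL):

```shell
# Create a stand-in article so the command below has something to fetch.
printf '<html><body><p>a few KB of article text</p></body></html>' > article.html

# -w prints transfer stats after the request; -o /dev/null discards the body.
curl -s -o /dev/null \
  -w 'size: %{size_download} bytes, total: %{time_total}s\n' \
  "file://$PWD/article.html"
```

On any remotely modern machine, the total time for a payload this size is a rounding error next to a 1.5-second render.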


There are no articles on the linked page, and all the graphs on the right are dynamically rendered.


Optimisation opportunities: if you have a dynamically generated image, and you haven't yet received user input, statically generate it.
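A hedged sketch of that idea in plain sh, with no charting library: emit the "dynamic" graphic as a static SVG at publish time. The data points and file name here are made up for illustration.

```shell
#!/bin/sh
# Pre-render a tiny sparkline as a static SVG instead of shipping JS to draw it.
# Hypothetical data series; in real use this would come from the story's data.
points=""
x=0
for y in 30 35 28 40 38 45; do
  points="$points$x,$((50 - y)) "   # flip y so larger values plot higher
  x=$((x + 10))
done

{
  printf '<svg xmlns="http://www.w3.org/2000/svg" width="60" height="50">\n'
  printf '  <polyline fill="none" stroke="black" points="%s"/>\n' "$points"
  printf '</svg>\n'
} > sparkline.svg
```

The reader then gets a one-request image that caches forever, and the page needs no script until the user actually interacts.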


NYT is a website, not an app (IMO), whereas Slack is an app. It's not a good comparison target.


Websites are apps, apps are websites. The area between each is a gradient, not black and white. The best way to measure this is to check a couple of metrics.

Does this resource need to be generated dynamically?

Related: can this resource be cached?

More important: can this resource be resized?

Often overlooked: do I need to process this javascript to provide the equivalent UX?

The modern Web is terrible, and the modern state of app dev is terrible due to a substantial subset of the reasons that the modern Web is terrible.
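To make the caching question above concrete, here's a hypothetical server config fragment (nginx syntax; the /assets/ path and fingerprinting scheme are made up) that answers "yes, this resource can be cached" for static files:

```nginx
# Fingerprinted assets (app.3f9c2.js etc.) never change, so tell every
# browser and CDN to keep them for a year and never revalidate.
location /assets/ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```

A resource that can be served like this has no business being generated dynamically on every request.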


> Slack hasn't been a contender for "well made" in quite a few months.

It was a contender?

> NYT should be a relatively simple website.

There are editors from the NYT lurking here who will strongly disagree.


Once upon a time, Slack was actually really solid and I happily used it. It.. was quite some time ago.

And sure, obviously NYT is more complex than throwing textfiles at users.. but do you really need to be running A/B tests for acquisition and have such a complicated pipeline to buying a subscription? Do you need dozens of analytics suites? Data is useful, but only to an upper bound of what you can meaningfully analyze.


I don't disagree with you AND I've already been chastised once in the last week by a NYT staff member for daring to challenge their assertion that the web should be an overly complicated mess.


Cutting through institutional pressures is not easy, especially when those pressures are well entrenched. I recently had to rid our SEO department of a religious belief that everything in creation must be server-side rendered or it couldn't be indexed and the phrase "holy war" applies.

So, I have some empathy for the NYT staff as individual contributors, but as an org, it's time to evolve.


.


Are you working for booking.com?


"It's text, just send the damn text."

They only send what the user requests.

Using a software program that makes automatic requests that you are not easily in control of, e.g., a popular web browser, might give the impression that they control what is sent.

They do not control what is sent. The user does.^1

The user makes a request and they send a response.

One of the requests a fully-automatic web browser makes to NYT is to static01.nyt.com

Personally, as a user who prefers text-only, this is the only request I need to make. As such I don't really need a heavily marketed, fully-automatic, graphical, ad-blocking web browser to make a single request for some text.^2

    #! /bin/sh

    case $1 in
    world        |w*)  x=world       # shortcut: w
    ;;us         |u*)  x=us          # shortcut: u
    ;;politics   |p*)  x=politics    # shortcut: p
    ;;nyregion   |n*)  x=nyregion    # shortcut: n
    ;;business   |bu*) x=business    # shortcut: bu
    ;;opinion    |o*)  x=opinion     # shortcut: o
    ;;technology |te*) x=technology  # shortcut: te
    ;;science    |sc*) x=science     # shortcut: sc
    ;;health     |h*)  x=health      # shortcut: h
    ;;sports     |sp*) x=sports      # shortcut: sp
    ;;arts       |a*)  x=arts        # shortcut: a
    ;;books      |bo*) x=books       # shortcut: bo
    ;;style      |st*) x=style       # shortcut: st
    ;;food       |f*)  x=food        # shortcut: f
    ;;travel     |tr*) x=travel      # shortcut: tr
    ;;magazine   |m*)  x=magazine    # shortcut: m
    ;;t-magazine |t-*) x=t-magazine  # shortcut: t-
    ;;realestate |r*)  x=realestate  # shortcut: r
    ;;*)
    echo usage: $0 section
    exec sed -n '/x=/!d;s/.*x=//;/sed/!p' $0
    esac

    curl -s https://static01.nyt.com/services/json/sectionfronts/$x/index.jsonp

   Example: make a simple page of titles, article URLs and captions, where the above script is named "nyt".

    nyt tr |  sed '/\"headline\": \"/{s//<p>/;s/\".*/<\/p>/;p};/\"full\": \"/{s//<p>/;s/..$/<\/p>/;p};/\"link\": \"/{s///;s/ *//;s/\".*//;s|.*|<a href=&>&</a>|;p}' > travel.html

    firefox ./travel.html
Source: https://news.ycombinator.com/item?id=22125882

The truth is that they are just sending the damn text. However you are voluntarily choosing to use a software program that is automatically making requests for things other than the text of the article, i.e., "cruft".

1. The Google-sponsored HTTP/[23] protocol is seeking to change this dynamic, so if websites sending stuff to you without you requesting it first bothers you, you might want to think about how online advertisers and the companies that enable them might use these new protocols.

2. However, I might use one for viewing images, watching video, reading PDFs, etc., offline. Web browsers are useful programs for consuming media. It is in the simple task of making HTTP requests that their utility has diminished over time. The user is not really in control.
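The "fully-automatic requests" point is easy to check with standard tools: the markup itself lists every host a browser would contact on your behalf. A sketch, where the two-line sample file stands in for a saved copy of the homepage (tracker.example.com is a made-up host for illustration):

```shell
# Stand-in for a saved homepage; real pages reference dozens of hosts.
printf '<img src="https://static01.nyt.com/a.png">\n<script src="https://tracker.example.com/t.js"></script>\n' > index.html

# Pull out every https URL, keep only the hostname, deduplicate.
grep -o 'https://[^"]*' index.html | awk -F/ '{print $3}' | sort -u
```

Each hostname printed represents requests a fully-automatic browser makes without asking you; a text-only client like the script above makes exactly one.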


I'm just as upset by bloat and tracking as well but the criticism seem a little off for some reason I can't quite put my finger on.

I go to a restaurant and I can't just walk into the kitchen and grab a plate of food. Nor can I walk into the refrigerator, grab some supplies, and then walk over to the stations and start cooking. Instead I have to wait to be seated, order indirectly via a waiter, wait for the chef and staff to prepare my order, etc...

It seems to me visiting a website is similar. The user chooses to visit the site. That includes the 3rd parties and giving up some control. Just like I don't get to pick what sources the restaurant uses for its food, nor do I have any say in its hiring or management practices. Nor do I have any choice in the music they play or the TVs they have on (bar-like restaurants often have TVs). If I don't like their choices, my choice is to be or not be a customer. I don't get to hack around that, walking in the back door and taking the food.

I know the analogy isn't perfect. It's my computer and I have no obligation to let them use it as they please vs as I please. But still, there's some middle ground IMO between the 2 extremes.


>the criticism seem a little off for some reason I can't quite put my finger on.

IMO the reason is quite easy to put the finger on:

It's because it is framing the problem squarely as one of the user, culminating in the phrase that one is "voluntarily choosing".

If you don't want to do research and customize scripts for every friggin' domain/website (and having to do it again when the site structure changes), there is no "voluntary choice".

If you don't want to accept that this "solution" has to forgo a lot of essential characteristics of hypermedia, there is no "voluntary choice".

If you're not technically versed in these things, there never was a "voluntary choice" to begin with.

In general, if you want to use the World Wide Web remotely as it is intended, there is no "voluntary choice".


> I'm just as upset by bloat and tracking as well but the criticism seem a little off for some reason I can't quite put my finger on.

I think you're unclear in your mind about the relationship between you and the website you visit.

To use your restaurant analogy, browsing the web is more like ordering delivery. You send a request for food from the menu and money to cover it, and a while later, a driver with a bag arrives at your doorstep. That bag contains the food you ordered, some packaging, often plastic cutlery, and some advertising. The transaction between you and the restaurant involved exchanging money for food, and the restaurant doesn't get any further say about what you do with that food. You're free to throw the box, the cutlery and the advertising leaflets into the bin, and give half of the food to your cat. They cannot, technically or ethically, make you eat the food out of the box it came from while reading the advertising leaflets.

It's like that with web browsers. You ask for content (via HTTP), and you get a response that includes links to other things you're invited to request. You're free to cut the response up and render it the way you like, and you're free to request or not request the other linked resources. That was how the web was designed to work, and that's how the HTTP protocol is meant to be used. Now plenty of websites will try to insist they're more like dining in than delivery, but that's just them trying to guilt-trip you into making them more money. It's not something they're entitled to.


Chrome, Firefox, Safari, Edge all include the ability to block certain requests.

https://developers.google.com/web/tools/chrome-devtools/netw...

https://developer.mozilla.org/en-US/docs/Tools/Network_Monit...

https://developer.apple.com/documentation/safariservices/cre...

https://docs.microsoft.com/en-us/microsoft-edge/devtools-gui...

The web's first browser, working in line mode, could probably request the text, and only the text, of an article from nytimes.com. It has no capability to automatically follow links to ads and trackers.

https://www.w3.org/INSTALL.html


The long line of sed is out-of-date and thus "broken". For something simpler that works, try this:

   nyt tr |sed 's/ *//;/</!d'|uniq > travel.html
This will produce a simple web page of titles and URLs for each article page.

An interesting point of discussion might be the amount of third party cruft on the template article page versus the more dynamic front page. When Javascript is disabled, on each article page all images display and there are no ads. Downloading any video in the page is as simple as

   curl -O `grep -o https://[^\"]*mp4 article.html`


You're right, I guess, if you consider "the web" to mean "HTML over HTTP". In real terms, though, a modern web site is the HTML plus all of the images and other text that goes with it, and it's designed as a package. The fact that the web browser connects to the web server to download all of the bloat doesn't change the fact that the bloat was specified by the HTML served by the web site. It's just an implementation detail.


Pro comment here which should be way higher up the page. Good comment content, good Unixbeard vibe, great use of sed.


> it wasn't that long ago that most computers had less than 200 MB of RAM total

I don't think you're aware of how much time has passed since 200 MB of RAM was the norm. Even in 2005, lower-end Dell laptops ($639) had 256 MB of RAM as the minimum [0]. The 2002 PowerBook G4's base model had 256 MB [1].

[0]: http://web.archive.org/web/20050309050556/http://www1.us.del...

[1]: https://en.wikipedia.org/wiki/PowerBook_G4#Models


Sure, you can argue it wasn't that long ago, but that totally ignores the rest of the point:

> And, you could read on those computers just fine.

The problem is not that the New York Times website is using too much memory for the average system. The problem is that it is using several orders of magnitude more memory to run ad-related JS in the background than it needs for the user-desired functionality.


Isn't the user-desired functionality that the New York Times make enough money to continue investigating and publishing news? Ads are simply a means to an end.


No. The user doesn't desire journalism to be dependent on ad revenue.

It isn't the user's responsibility to provide a better solution. Not having a solution doesn't make the problem disappear.


A few weeks ago, out of Coronavirus-induced boredom, I decided to run Apple Rhapsody in VMWare, which can be thought of as a very early version of OS X.

At first, I couldn't get it to boot—it kept kernel panicking, and I couldn't figure out why. After a bit of Googling, I found the problem—I'd given the VM too much memory. Rhapsody DP2 will not boot if it has access to more than 192 MB of RAM. I assume Apple figured that no one would ever need that much.

Rhapsody came out in 1998. Not really that long ago.


> 1998 [...] Not really that long ago

The Apple II came out in 1977, 43 years ago.

1998 therefore roughly marks the halfway point between the dawn of mainstream personal computers and the present day. I dare say 1998 was a very long time ago, in PC years.


That's fair, but respectfully, it really wasn't my point.

"There was a time—recent enough that it was within most of our lifetimes—when most computers had less than 200 MB of RAM total. You could read on those computers just fine."


On a side note, it just shows poor programming on Apple's part. I mean, they didn't build it to last; at that point Moore's law was very much in force, and even projecting a few years ahead they would have clearly seen that 192 MB was going to be too little.

Most of the GNU utils still work perfectly fine, and many of them were written in the late 70's and 80's. Apple itself ships, in the latest OS X, POSIX utilities based on late-80's versions for GPL reasons.

Perhaps it is not a fair comparison; utilities are not the same as an OS. Unix or Linux from the same era may not work anymore either.


I don't think it's a fair comparison. :)

In addition to being an entire OS as you mentioned, Rhapsody was a "developer preview". The final version was called "Mac OS X Server 1.0". https://en.wikipedia.org/wiki/Mac_OS_X_Server_1.0


>I decided to run Apple Rhapsody in VMWare, which can be thought of as a very early version of OS X.

Rhapsody was more of a quick and dirty port of NeXTSTEP to Mac hardware than a version of OS X.

https://en.wikipedia.org/wiki/NeXTSTEP

>Rhapsody came out in 1998.

NeXTSTEP is from the late 80's.


> NeXTSTEP is from the late 80's

The version he tested was from 1998. Similar to how Windows NT came out in 1993, but the current version (Windows 10) has a few added capabilities since.


The version he tested was not a version of MacOS.

MacOS addressed up to 96 Gigs of RAM initially, now up to 128 Gigs of RAM.


> I'm not sure why we're harping on The Times.

Because '10% of a core and 200 MB of ram' just to read a few headlines is obscene, and slack being even worse doesn't make the NYT's bloat less obscene.

With my uBO/uMatrix settings, about:performance in firefox reports that the nytimes.com homepage takes 7.9 MB of ram (and immeasurably little CPU time, because I disabled javascript.) https://lite.cnn.com/en/ is better insofar as it takes 2.6 MB of ram.


Thank you for making that point and alerting me to the existence of the far more usable Lite CNN site.


It really isn’t easy to make any fair claims about memory usage unless you’re being extremely careful. The browsers and apps do all kinds of crazy caching with freely available memory, for performance reasons, and they can get by with a lot less than what you’re quoting. Try seeing how memory scales if you open 10 or 100 tabs. It might seem super bloated until you’re about to run out of memory, and then somehow seemingly magically be able to still open two or three times as many tabs.


Having the ability to magically open new tabs in this situation doesn't do a whole lot of good when all the other applications still suffer from overall memory pressure because the app doesn't give back memory to the OS in time when needed.


> it wasn't that long ago that most computers had less than 200 MB of RAM total.

Half-Life 2 lists 512 MB of RAM as required. It came out in 2004.


The retail version required 256MB, it's the Steam version that needed 512.


I was thinking another decade further back than that, or so. :)

Also, Half Life 2 was a state-of-the-art video game for an audience that largely had fancy gaming PCs.


Video games still target pretty mainstream pc specs. They have to, or no one can run the game. Sans Crysis.


Not every game is Crysis for sure, but for instance, a lot of AAA games right now require 8 GB of memory minimum. The $400 Surface Go 2 that came out yesterday has only 4 GB of memory in the base config, and such specs aren't particularly uncommon. Not that you'd be playing AAA games on that machine even if it had more memory, but hence my point.


Neither the Xbone nor the PS4 offers 8 GB of RAM to the game (system reserves), and yet almost every AAA game launches on those platforms.

They likely couldn't run Slack or the NYT tho. Just think about how slow the PS4 store loads (it's html5) or the whole xbone ui.

Games are actually optimized for speed. NYT is optimized on a different key performance indicator.


Looked up similar hardware running games on Youtube and yup - it's pretty awful even for quite old AAA titles. Bet it still runs league of legends at like 150fps though.

https://www.youtube.com/watch?v=bQ4Oh0BAt6I


Half-Life 2 was remastered with updated graphics in 2007 when it was re-released as part of The Orange Box. I pulled the original box from 2004 out of my closet and it states 256MB was the minimum required: https://slerp.xyz/img/misc/hl2_reqs.jpg


A six year old CPU can still be described as "pretty fast?" Moore's Law really must be dead and buried.


8 threads, 8MB cache, 4GHz? Pretty fast, yes. Not the fastest, by far, but are you really arguing that casual web browsing for news viewing (viewing! the stats are not even for page load, but idle) should require top of the line equipment?


No, I'm saying Moore's Law suggest(s|ed) that every 18 months your chip's speed relative to current offerings is about half what it was. And this old chip has had four cycles of that exponential decay.

I just built a 12 core system I would describe as "pretty fast." A high end consumer desktop these days is 16 cores. A couple years from now, that will be considered "pretty fast."


Actually, Moore's law says nothing about speed, just the number of transistors you can stuff into the thing. Plus progress has been leveling off.


The only reason anyone ever cared about transistor count is that it was a rough proxy for speed.


It hasn't been for a long, long time now.


Moore's paper was about VLSI manufacturing. He certainly cared about transistor count. He didn't once touch on how those transistors get to be used.


Why did he care about transistor count?


VLSI manufacturing makes chips, made of transistors. It does not sell end products. The end products are a concept that their clients care about, not them. The same way a tree farmer does not spend his days thinking about toilet paper, even though that's what he is actually helping create. Instead, he thinks about trees, and how to best grow them.

Make no mistake, he knew he worked on the production of CPUs and memories, it was just not his focus. Also, he certainly cared about switching time, but his observation was about transistor count, not switching time.


Why did their clients care about transistor count?


Ever since the arrival of multi-core CPUs chip speed should probably be considered a dual number: max single core throughput and total throughput.

Also, my over 6 year old DAW (Digital Audio Workstation) CPU[0] is still plenty fast. It’s almost everything around it I have since upgraded (NVMe SSD, RAM, audio interface, video card) to keep overall system performance high.

[0] https://ark.intel.com/content/www/us/en/ark/products/77780/i...


We've just hit a point of AMD being competitive again in the last couple of years and things are looking up - Intel has doubled core count on their top end consumer CPUs and more than halved prices in their last couple of generations.

The last decade of stagnation in the consumer CPU market was less the end of Moore's law and more Intel not needing to do any better because they had no real competition.


This is pretty severe revisionist history that presupposes that Intel's investment of billions every year in trying (and failing) to make soft X-ray lithography work well enough to double transistor count was just them not trying hard enough due to lack of competition.


So them more than halving prices and boosting core counts massively in the space of a couple of years just happened to coincide with AMD competing with them again?

I won't argue there wasn't lots of R&D going on behind the scenes, but their current improvements are still on the same architecture they've been using for years; this wasn't something they couldn't have done earlier (especially the prices: surely being able to lower them so much and still make a profit means that consumers were getting screwed earlier?).


Intel's competitive strategy has, historically, always been to retain such an overwhelming technical advantage in terms of transistor count that the other things didn't matter--they could make a huge profit while still providing better value than the competition. The only time that didn't work until now was when they tried to completely switch architectures (with Itanium) and they were able to quickly recover their advantage by returning to an x86-based architecture. Now, of course, this strategy has finally failed them, and all sorts of people are accusing them of having been complacent due to the lack of competition, but really I don't think they were doing anything differently from before (at least with regards to what you're talking about).


Intel has had serious yield issues for their top-end chips for the better part of a decade, so that will also affect pricing. Everyone from ARM and Qualcomm to AMD and nVidia has been able to successfully step to new process nodes with acceptable yields where Intel struggled to hit the same node steps.


I think it's more a function of modern OSes not really taking that much more from the CPU than they used to; IIRC, Windows 10 is faster than Windows 7 on slow hardware because it disables more nice-to-have features (think Aero).

CPUs continue to get more powerful, but the minimum hardware requirements for newer editions of operating systems tends to not follow the same curve that new game releases do. (I can't play Rainbow 6 Siege on my Ryzen 5 3550H + GTX 1050 laptop, but CSGO runs fine on it and my i7-2670QM laptop mobo with a GTX 1060 strapped to it.)


Oh I completely agree that basic desktop computing is still reasonably responsive on older CPUs; I'm just pointing out that the mere idea of a six year old CPU being called "pretty fast" by current standards would have been unthinkable for most of my lifetime.


> IIRC, Windows 10 is faster than Windows 7 on slow hardware.

This has not been my experience with Windows 7 ==> 10.


10 (and IIRC 8?) felt incredibly slow on my gaming PC until I switched to an SSD for the OS disk. I think it hits disk way, way more often than 7, which makes it feel slower. Even on an SSD it doesn't really seem any better than 7 did on spinning rust (programs load faster, of course, but you can't really credit the OS for that—the OS interface itself, and OS utilities, don't seem faster). This was true even with Cortana and all that junk totally disabled.


Strange. I'd expect you to be hitting at least 40-50fps on 1080 High settings in R6 Siege with those specs. Minimum system requirements for R6 Siege is an i3. Maybe because its a laptop you are either hitting thermal limits or you have significantly less VRAM than the desktop counterpart.


> Minimum system requirements for R6 Siege is an i3.

I've never played R6 Siege and have no idea how it performs, but I would just like to say that this is effectively meaningless and I hate how games put it in their system requirements. Writing "moderately fast CPU" would carry more useful information. A Westmere Core i3-530 from 2010 is not going to perform anything like a Coffee Lake Core i3-8100B from 2018.

(I'm not that smart, I looked up the model numbers on Wikipedia: https://en.wikipedia.org/wiki/List_of_Intel_Core_i3_micropro...)


The actual minimum listed for that particular game seems like it does what you're asking for:

> Intel Core i3 560 @ 3.3 GHz or AMD Phenom II X4 945 @ 3.0 GHz


I think it's possible for Moore's law to still be true while single core performance stagnates


For everyday user computing tasks, it’s not that Moore’s law is dead so much as just irrelevant. We’ve hit the point of diminishing returns on CPUs getting faster.

Many users have flipped to prioritize power efficiency and quiet operation over speed. And even despite those optimizations, most of their tasks are still bottlenecked as often by their available memory and the quality of their network connection as by their processor.

We just haven’t figured out what to do with all that power in the hands of a user who isn’t super technical, so we’re just spending it on inefficient cruft instead.


That CPU is pretty close to the Ryzen 2700x I just installed last year.

And yeah, Moore's Law isn't a law recently. :)


Yeah, an 8 year old cpu with a 2 year old graphics card still makes a decent gaming computer.


As another datapoint, this HN page uses 56MB in Chrome for me.


That's actually strangely high for HN, I get 7.6 MB in Firefox.


15 MB when measured through about:performance, 70 MB when measured through about:memory.

I don't know why they are different.


17.35 MB for me in Firefox 76.0.1 on MacOS 10.15.4.

But it has been a couple hours since your comment and the other replies :)


I get 10.4 MB on Firefox and 33 MB on Chrome.


I get 11.1 MB in Firefox.


M? Are you sure?

I get a 58kB HTML document, plus 2kB each CSS & JS, 3 * ~400B images, and a 7kB favicon.

Edit: Oh sorry, RAM. Leaving comment just to compare page size to RAM use for interest.


Wow.

There are now ~388 comments. Source is 529 KB and content 114 KB (simple cut and paste of page, no markup).

Those are some impressive ratios.


11MB on Firefox, Windows.


47.10MB in Safari


All I can say is Slack performs just fine for me on my Mac laptop, I never notice it being laggy.

When I go to NYTimes without an ad blocker, it takes a long time to show up, it sometimes freezes my scrolling while it loads things, and things jump around on the page so when I'm trying to click something I get the wrong target as some ad loads and moves things around; it can be 10 seconds until the page stabilizes after load. I do not experience that with Slack.

How does Slack use so much RAM and CPU and still perform better than a mostly static page of text and graphics? I dunno. But it does. In the end the RAM and CPU usage matter to me theoretically, but all that really matters is the actually experienced interface. The NYT may not be the worst, but it's definitely worse than many, and the comparison to Slack is odd to me, cause my experience with Slack is that is quickly responsive with no lag.

Do you have a different experience? For you, does nytimes load faster and respond with less lag/freezing than Slack, while Slack has a lot of lag/freezing/slow-load problems?


Not the NYT, but some other at least partially renowned news site in my country has that problem. I am one of those crazies who doesn't like mobile sites, so I mainly use desktop versions. I had my phone fully charged and left the page open; an hour or so later the phone had already shut down.


I don't think the 4790K is that powerful anymore. Anything below Intel's 8th generation is pretty low-end these days.

They increased core counts every generation starting with the 8th and made a significant leap in perf.

Your processor would give the same perf as the lowest end ryzen.


> Your processor would give the same perf as the lowest end ryzen.

Not for single core performance though, right? Which is what I'm looking at here.


An AMD Ryzen 7 2700x has faster single-core performance than yours, maybe 5% more; anything faster/more recent is even further ahead. Its multi-core advantage will be much bigger.


2700x user here. I didn't realize how low end it's considered! I game and do all the coding things and never really wait long for games to load or code to compile...


It's not low-end; it is middle of the pack given its age, and for the price it's still a good buy for things like compiling and encoding, let alone any grand strategy/4X style of game that relies heavily on CPU multi-threading.


Everyone has different requirements. Depending on your resolution and required frame rate, you might have to get a more powerful video card that then becomes bottlenecked by your CPU.

But the 2700x was a good buy, in retrospect.


Yeah, I also built this machine last year. 1050TI, 64gb ram.


See I can't imagine running on a 1050ti but I only need 32gb of ram. I can't drive my monitor (5120x1440) with that video card at any reasonable rates. Everyone has different requirements.


Oops I meant 1070ti. Nice monitor!


Can't do it on a 1070 Ti either, sadly. It's a Samsung CRG9 and it is awesome.


I remember when computers were faster than me. Those numbers barely keep up with my standard issue brain, which is 40-year-old tech and can just about read the newspaper and carry on intermittent conversation at the same time.


You weaken your argument by making the ridiculous statement that "it wasn't that long ago that most computers had less than 200 MB of RAM total."

Yes it was. 1997 was a long time ago. Kids born that year can legally drink.


> IM clients are another thing that worked just fine 20 years ago.

Bit of an absurd claim to make, since IM clients 20 years ago had a fraction of the features that Slack has today. To pretend otherwise is disingenuous.


"Progress" has been disappointing in the computing world but none of the modern devs seem to have an issue with these slow giant bloated apps!


Now compare to nj.com


Because this is an article about The Times.



