
If you have a few hundred people in an area literally spending their waking hours worrying about having enough food, they notice what works. Areas without enough of the right nutrients are pretty common, and people are pretty good at figuring out what makes them feel better/healthier.

Some places are iron poor; some people even resort to eating dirt, especially when pregnant, when you need more iron. Some areas are salt poor and animals will go to extreme measures to get to salt. Some foods have poor bioavailability and require crushing, special cooking, soaking, or a narrow range of acidity to become available, which of course becomes the norm for cooking in those areas. Some practices even become religious standards, things like fish on Fridays or avoiding pork (before trichinosis was controlled).


Or buy two Nvidia Digits for $6,000 to get 256GB of VRAM.


Keep in mind the Strix Halo APU has a 256-bit-wide memory bus and the Mac Ultra has a 1024-bit-wide memory bus.

Here's hoping the Nvidia Digits (GB10 chip) has a 512-bit or 1024-bit-wide interface; otherwise the Strix Halo will be the best you can do if you don't get the Mac Ultra.
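
Back of the envelope, bandwidth is roughly bus width times transfer rate. A minimal sketch, assuming LPDDR5X-8000 on the 256-bit bus and LPDDR5-6400 on the Ultra's 1024-bit bus (the actual memory speeds may differ):

    # Rough theoretical memory bandwidth: bus width (bytes) * transfer rate.
    # The transfer rates below are assumptions, not confirmed specs.
    def bandwidth_gb_s(bus_bits: int, mega_transfers_per_s: int) -> float:
        return (bus_bits / 8) * mega_transfers_per_s * 1e6 / 1e9

    print(bandwidth_gb_s(256, 8000))   # Strix Halo-class: ~256 GB/s
    print(bandwidth_gb_s(1024, 6400))  # Mac Ultra-class:  ~819 GB/s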


I mean, it remains to be seen whether it will be compute or bandwidth bound; I'm sure the Mac Ultra will have double or triple the compute as well.

But in either case it's going to do much better than currently available CPUs with easily upgradeable RAM. I would not be surprised to see 128GB configurations for around 3k (going off the ASUS G13's announced pricing of around 2k for the 32GB version, and them saying it will go up to 128GB).

At that point, sure, it might not compete with the Max, but it's at a much more acceptable price point. It won't be a device you get just for the AI, but a mobile workstation that you can also run some local models on for normal money. Will need to wait and see. I know I'm not buying anything from ASUS either way.


I do want a 192GB Mac Ultra; I'm hoping the Nvidia Digits achieves something similar at $3,000. Sadly there are no specifications or benchmarks yet, so tokens/sec is just a guess at this point.
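
For a rough guess you can work backwards from memory bandwidth: batch-1 decoding is usually bandwidth bound, since each generated token has to stream (roughly) all of the weights. A minimal sketch with assumed numbers (a ~40GB quantized model, bandwidth figures as above), ignoring compute, KV cache, and overlap:

    # Crude upper bound on tokens/sec for single-stream decode:
    # every token reads the full model weights from memory once.
    def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    print(tokens_per_sec(819, 40))  # ~20 tok/s at Mac Ultra-class bandwidth
    print(tokens_per_sec(256, 40))  # ~6 tok/s at 256-bit LPDDR5X bandwidth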


Maybe you want to configure 4 nodes to provide redundant network service.

Or experiment with VLANs, firewalls, QoS, and related topics. Sure, transcoding multiple 4K video streams is intensive, but there's plenty to be learned about performance, networking, configuration management, DNS, failover, network storage, rolling upgrades, SSL certs, VPN endpoints, and 100 other network services that run easily on today's SBCs.

There's plenty to be learned with a handful of cheap Linux boxes, and even an RPi 5 or RK3588 is quite capable. Sure, it's not state of the art today, but home lab != hyperscaler, and it just might help get you a job at one.


Er, and how do you make the images? Or a family of tweaked images for different use cases? How do you make sure security standards, access to centralized services, etc. are working?

Doesn't that just move where you need CM a bit?


You mean like over half of the hardware added to the Amazon cloud in the last year? Graviton, now in its 4th generation, seems to be doing quite well. I believe other large cloud providers are working on similar home-grown ARM chips.

Or maybe all the Apple desktops? (iMac, Studio, and Mini)

Or maybe one of the largest HPC clusters on the Top500 list, Fugaku? It was number 1 for several years.

AMD had and killed an ARM project, but rumors claim they are working on a "Sound Wave" APU that combines an ARM chip with a GPU.

Or similarly, Nvidia's GB10, their new AI dev kit the size of a Mac mini with "1 petaflop" of compute that combines 20 ARM cores with a Blackwell GPU.

Seems like ARM is doing just fine outside of mobile.


See my other comment in this thread.


ARM's definitely trying to push into the laptop, tablet, desktop, and server markets. The fastest cluster on the Top500 was ARM-based for several years, and most of the big clouds either have home-grown ARM servers (like Graviton) or will soon.

They are definitely making progress.


Not to mention the GB10, where Nvidia mates a 20-core ARM chip to a Blackwell chip and puts it in a widget the size of a Mac mini.


Step #1: keep your CM files in Git.

My favorite part of Puppet is demonstrated well by this trifecta:

   package { 'openssh-server':
     ensure => installed,
   }
   
   file { '/etc/ssh/sshd_config':
     source  => 'puppet:///modules/sshd/sshd_config',
     owner   => 'root',
     group   => 'root',
     mode    => '0640',
     notify  => Service['sshd'], # sshd restarts whenever you edit this file.
     require => Package['openssh-server'],
   }
   
   service { 'sshd':
     ensure => running,
     enable => true,
   }
That will keep SSH installed, the service running, and the Puppet-managed config file in place forever. If you accidentally change the config file it will be fixed and the service restarted. If you remove the package it will be reinstalled, the config file restored, and the service started.

Not a fan of Ansible; it's more of a "run this playbook" model, which in practice translates to "reinstall, then run this playbook", and that can be painful in environments that don't reinstall very often. Since Ansible playbooks don't know a machine's current state, they never know exactly what commands to run.

Generally, if you spin up containers or similar short-lived servers, I think Ansible is fine. If it's a larger and more complicated environment with longer-lived servers, I'd use Puppet.

Oh, one other thing I like about Puppet: if you say apply X to all nodes, you literally cannot run Puppet (it will fail to compile the manifests) if you try to override it. Which security auditors LOVE.



