
This is something I've wanted to do for a while! I wish Samsung still produced their 55" 8K displays-- 8K @ 55" gives you effectively the same PPI as a 27" 4K display. Maybe someday.

Cool project! This post also appears to be the second result on Google when searching "vercel app" (as of +7 hours from your post), which I also found interesting.


I'm not aware of that search engine, can you elaborate?


I have used SteerMouse for years after giving up on the dumpster fire that is Logitech's G-HUB for macOS for my G600, which has 12 side buttons plus the G-Shift button, which when held acts as a modifier for additional macros.

My issue with SteerMouse is that when creating chord macros, it forces the original macro to fire only on release, rather than activating on press.

I haven't been able to find a suitable replacement. Curious if anyone here on HN has worked around this in any way?


Hardware QA is sometimes hit and miss, but Roccat mice have a shift feature that is implemented in firmware. The same goes for their macros: their mouse emulates a keyboard at the hardware level. That way it Just Werks and you don't need some cringey gamer-themed spyware always running in the background just to make full use of it, like you do for many features with Logitech mice. My go-to atm is the Kone XP, which doesn't have as many side buttons as the Logitech G6xx mice. They do make an 'MMO mouse' with many, many buttons like that called the 'Nyth'.

Unfortunately their configuration software hasn't supported macOS for a long time, but you can configure your device on a Windows VM via USB passthrough, which is what I do. Alternatively, there are also several reverse-engineered tools for configuring Roccat mice, including libratbag (and the older roccat-tools) for Linux and roccat-iokit for macOS, if you'd be interested in either choosing a model according to what's supported there or adding support for the Nyth.


I wasn't aware of Roccat! Sounds like what I'm after, especially without the gamer aesthetics. The Roccat Nyth looks close to what I'm after, but I'm pretty married to my muscle memory with 4x3 buttons (all of which are in use, in both layers).

Because it is possible with G-Hub, I was just curious if anyone knew what the gap is between SteerMouse's support and G-Hub's support for the seamless G-Shift/shift key experience. I don't know enough about hardware to make a guess.

I'm 100% fine with using a separate OS to config the mouse, since my layout is not app specific. Despite the button layout difference, this will definitely be the mouse I try out next.


> I wasn't aware of Roccat! Sounds like what I'm after, especially without the gamer aesthetics. The Roccat Nyth looks close to what I'm after, but I'm pretty married to my muscle memory with 4x3 buttons (all of which are in use, in both layers).

Oh the mouse will be a bit gamer-y. You'll only get to escape from the gamer-y software. :)

In seriousness, I love mine for 'productivity' (a more pleasant desktop experience).

> Because it is possible with G-Hub, I was just curious if anyone knew what the gap is between SteerMouse's support and G-Hub's support for the seamless G-Shift/shift key experience. I don't know enough about hardware to make a guess.

I also have a similar Logitech mouse, the wireless G602. I can't remember all of the details, but one of the things I wanted to do with it (I think bind a layer 2 button press to a key chord) is something I was told I couldn't do 'without G-Hub', and in particular its Lua scripting interface.

I'm mildly curious about the division of duties there, too, which presumably also explains the behavior you observed (SteerMouse has to reimplement something G-Hub normally takes care of, and they did it differently).


Can you explain more of what you mean there?


Haha, of course!

So when you click a button on the G600 (and most other mice with side buttons), the button fires when you press down, just like the Mouse 1 or Mouse 2 button. On the G600, there is a third click button to the right of the right click, which is called G-Shift. When held, all of the side buttons take on secondary assignments. Since you have to hold G-Shift to access this other layer, the macros are often referred to as chords in mouse customization software like SteerMouse, since they require two buttons to fire.

To configure this, you need the G-Hub software, which is in a nightmarish state on macOS, so I use SteerMouse instead. SteerMouse gets me most of the way there, but with one trade-off: if a side button has two assignments (one when pressed by itself, another when pressed with G-Shift), the button does not actually fire when it's pressed, but instead fires when it's released/let go of. I imagine this is just how SteerMouse handles buttons that have more than one assignment.
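My guess at what's going on, as a minimal sketch (the names and logic here are hypothetical, not SteerMouse's actual internals): a software remapper that also has to support chords pressed in either order can't commit to the solo action on press, so it waits for release:

    # Hypothetical sketch, not SteerMouse's actual internals: a remapper
    # handling a button with both a solo and a chorded (shift-layer)
    # assignment, firing on release so it can disambiguate the two.
    SOLO_ACTION = "copy"     # button 4 alone
    CHORD_ACTION = "paste"   # G-Shift + button 4

    shift_held = False

    def fire(action: str) -> None:
        print(f"fired: {action}")

    def on_button_event(button: str, pressed: bool) -> None:
        global shift_held
        if button == "g_shift":
            shift_held = pressed
        elif button == "button_4":
            if pressed:
                # Firmware layers (like the G600's own) can fire here, on
                # press, because the shift state is already resolved in
                # hardware. A remapper that also allows press-then-shift
                # chords can't commit yet, so it defers the decision...
                return
            # ...and only fires on release, which is the behavior above.
            fire(CHORD_ACTION if shift_held else SOLO_ACTION)

Which would explain why the solo macro only lands when you let go.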

I haven't found a suitable replacement that is as robust as SteerMouse. It's one of the first apps I install on my Mac, but this is one killer feature that I've only found in the G-Hub app.


I believe the poster means that the macro operation is that of an astable multivibrator, whereas they would prefer that it be that of a one-shot multivibrator.


Ah, got it. Yeah, I don't think there's a way to do that.


> The argument is basically that for the average US national, mass surveillance and propaganda by US tech is more of an issue.

Correct me if I'm wrong, but doesn't this track 100% with the bill's stated intent: to prevent foreign adversaries from having this kind of influence?

I don't think the US is making an argument that data collection, surveillance, and propaganda are bad and then selectively enforcing these ideals only on China. I think the US is making the argument that a company which belongs to an adversary and operates in a domain that is out of bounds of US regulation/enforcement is a NatSec issue.


I'm seeing a lot of comments like this, but I have no idea what your camp is trying to say.

The bill isn't a criticism of TikTok's operation, it's a criticism of its ownership and how that ownership + influence creates an exploit that poses a threat to NatSec.

This kind of reactionary framing feels like an attempt to put Facebook and TikTok on a level playing field, but the premise of the bill is about tech ownership and influence by companies owned by or based in a foreign adversary.

Facebook is a US company. What am I missing here?


How does this invalidate the argument Warner is making?


> Is China really a foreign adversary?

Yes, they are officially recognized as a foreign adversary. They're the first country listed:

https://www.ecfr.gov/current/title-15/subtitle-A/part-7/subp...


I have had the same results with multiple VPN providers for the better part of a month now.


I daily drove a hackintosh for years until I recently pivoted to Apple silicon. It was a very enjoyable experience for me. The success and reliability of a hackintosh is really dependent on your hardware configuration. I lucked out in that the desktop tower I had built years prior just so happened to coincide almost 1:1 with the hardware requirements for a golden build (6700K, 64 GB RAM, Vega 64, compatible Wi-Fi/Bluetooth PCIe card, compatible M.2 controllers, a Z170 motherboard that is well known in the hackintosh community, etc.).

Being able to have a modular Mac was really something, and I exploited that to tailor my machine to my use case (television/video production). I never had issues with Bluetooth or Wi-Fi, nor did I ever have an issue with Apple's services like iMessage/FaceTime.

What sucked about the process was staying current with system updates. Updates within a macOS release went without a hitch, but my hardware was aged out of newer macOS versions, which made upgrading a bit too much like surgery, and since this hackintosh was my production device, that wasn't something I wanted to roll the dice on.

Having switched to Apple silicon, I do kind of miss that freedom, but I've found that same freedom just by doing things a little less hacky. Instead of a board I can add drives to, I just set up a NAS; instead of using an old PCIe HDMI capture card, I got a more modern USB one; etc.

For a long time, Hackintosh was an opportunity to do things my way, and that experience taught me lessons that improved my day-to-day that I otherwise may not have learned. It was a freeing experience. Today I still do things my way, but these days my way is more focused on convenience for the things that should "just work", so I can put my attention on things that matter rather than on things that shouldn't, such as modifying my EFI before a macOS update to trick macOS into thinking I have the iGPU of a newer chipset, because Apple dropped support for Skylake in a new release.
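For anyone curious, that trick boils down to a few lines of DeviceProperties in OpenCore's config.plist: inject a fake device-id (and matching framebuffer ID) on the iGPU's PCI path. Roughly like this, with the values being the commonly used Skylake-to-Kaby-Lake spoof from the community guides, quoted from memory, so verify before copying:

    <!-- Spoof the Skylake iGPU (device-id 0x1912) as a Kaby Lake HD 630
         (0x5912) so newer macOS keeps loading graphics drivers for it.
         The <data> values are base64-encoded little-endian bytes. -->
    <key>DeviceProperties</key>
    <dict>
        <key>Add</key>
        <dict>
            <key>PciRoot(0x0)/Pci(0x2,0x0)</key>
            <dict>
                <key>AAPL,ig-platform-id</key>
                <data>AAASWQ==</data> <!-- hex 00001259 -->
                <key>device-id</key>
                <data>ElkAAA==</data> <!-- hex 12590000 -->
            </dict>
        </dict>
    </dict>

macOS reads the injected properties instead of the real PCI IDs, so the drivers keep loading, at least until the next release moves the goalposts again.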

Good times, the headaches were worth it in hindsight.


I’ve got a machine pretty similar to what you’re describing in my closet (6700k, mobo reasonably well known in the community, 5700XT GPU) which used to be a hackintosh. Might be worth reviving and trying to find a use for.


In my experience, the thing that makes AI image gen hard to use is nailing specificity. I often find myself resorting to generating all of the elements I want in an image separately and then comping them together in Photoshop. This isn't a bad workflow, but it is tedious (I often equate it to putting coins in a slot machine, hoping it 'hits').

Generating good images is easy, but generating good images with very specific instructions is not. For example, try getting Midjourney to generate a shot of a road from the side (i.e., standing on the shoulder of a road, taking a photo of the shoulder on the other side, with the road crossing the frame from left to right)... you'll find Midjourney only wants to generate images of roads coming at the "camera" from the vanishing point. I even tried feeding it an example image with the correct framing to analyze, to help inform what prompts to use, but this still did not produce the expected output. This is obviously not the only framing + subject combination that these models struggle with.

For people who use image generation as a tool within a larger project's workflow, this hurdle makes the tool swing back and forth from "game changing technology" to "major time sink".

If this example prompt/output is an honest demonstration of SD3's attention to specificity, especially as it pertains to the framing and composition of objects + subjects, then I think it's definitely impressive.

For context, I've used SD (via ComfyUI), Midjourney, and DALL-E. All of these models + UIs have shared this issue to varying degrees.


It's very difficult to get text-to-image generation to do better than this, because you need extremely detailed text training data, but I think a better approach would be to give up on text as the only input.

> I often find myself having to resort to generating all of the elements I want out of an image separately and then comp them together with photoshop. This isn't a bad workflow, but it is tedious

The models should be developed to accelerate this workflow, then.

I.e., you should be able to say layer one is this text prompt plus this camera angle, layer two is some mountains you cheaply modeled in Blender, and layer three is a sketch you drew of today's anime girl.
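You can already get surprisingly close to that with multi-ControlNet conditioning. A minimal sketch using diffusers, where the model IDs, conditioning scales, and input files are all just assumptions for illustration:

    # Sketch of "layers" as ControlNet conditioning channels; file names
    # and conditioning scales are made up for illustration.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    depth = load_image("blender_mountains_depth.png")  # layer two: Blender depth render
    scribble = load_image("character_scribble.png")    # layer three: your sketch

    controlnets = [
        ControlNetModel.from_pretrained(
            "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
        ControlNetModel.from_pretrained(
            "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16),
    ]
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnets,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Layer one: the text prompt plus the camera direction
    image = pipe(
        "anime girl on a mountain road, shot from the side, golden hour",
        image=[depth, scribble],
        controlnet_conditioning_scale=[1.0, 0.7],
    ).images[0]
    image.save("out.png")

It's still a slot machine, but the reels are loaded much more in your favor.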


Totally agree. I am blown away by that image. Midjourney is so bad at anything specific.

On the other hand, SD just hasn't been at the level of image quality I get from Midjourney. I don't think the people who counter this know what they're talking about.

Can't wait to try this.

