
Is the busy.bar's 72x16 LED matrix, with that muted look, sold anywhere on its own, so that one could build something comparable?


This looks great! I've considered building a simple device with an LED matrix that looks similar to this, but could never figure out what gives the LEDs the muted look. All of the devices mentioned here (Tidbyt Gen2, Lamarca, Ulanzi, even the busy.bar) have it. Is the back panel just an LED matrix with a custom acrylic in front of it? How do they ensure the light from one LED doesn't bleed into its neighbors?


I wouldn't swear to it, but from mine it looks like there's a grid of light guides between the LED PCB and the front screen to prevent any cross bleed. The front screen is translucent but not transparent acrylic (think heavily tinted window), which gives it the muted look.

It's pretty good hardware-wise; it would be hard to knock up a DIY equivalent for $50, even just in BOM cost.

Edit: teardown in German https://youtu.be/-Dn3A5V8ZPo @ 04:30


> custom acrylic

While I'm not an expert, my own experimentation suggests this is correct.


The RSS feed is open to the public, and so is the errata page on the Portal (https://access.redhat.com/errata-search/). Subscribing to email notifications requires some sort of account, just like the mailing list did.


That's a common experience for sauna beginners. You may feel as if you're not getting enough air, that the hot air is difficult to breathe in, and your heart rate goes up dramatically. But! That's something you get used to fairly quickly; go 10-20 times, for 10-15 minutes each, and I promise you'll feel much more relaxed once the body gets used to the hot environment.

I moved from Europe to the southeastern US, and (dry) saunas are pretty much unheard of here. I miss them very much! It was very relaxing to go for two hours once a week, especially in the winter, when it was snowy outside and you could plunge into an icy lake right after coming out of the sauna.


I would not recommend the default arguments hack. Any decent linter or IDE will flag that as an error and complain about the default argument being mutable (in fact, mutable default arguments are the target of many beginner-level interview questions). It's much easier to decorate a function with `functools.cache` to achieve the same result.
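
For illustration, a minimal sketch of the `functools.cache` approach (the `fib` function here is just a stand-in example, not from the article):

  import functools

  @functools.cache
  def fib(n):
      # Each distinct n is computed once; repeat calls are cache hits.
      return n if n < 2 else fib(n - 1) + fib(n - 2)

  print(fib(30))  # 832040, computed without re-evaluating subproblems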


Or, if you need a "static" variable for other purposes, the usual alternative is to just use a global variable, but if for some reason you can't (or you don't want to) you can use the function itself!

    def f():
        if not hasattr(f, "counter"): 
            f.counter = 0
    
        f.counter += 1
        return f.counter

    print(f(), f(), f())

    > 1 2 3


I didn’t realize that the function was available in its own scope. This information is going to help me do horrible things with pandas.


This is very important for self-recursion.


Is there something that isn't "self-recursion"?


Mutual recursion. Horrible example, don’t use this:

  even 0 = true
  even n = not (odd n-1)
  odd 0 = false
  odd n = not (even n-1)


That should be

  even 0 = true
  even n = odd n-1

  odd 0 = false
  odd n = even n-1

I fed a C version of this (with unsigned n to keep the nasal daemons at bay) to clang and observed that it somehow manages to see through the mutual recursion, generating code that doesn't recurse or loop.


You are correct, I don't know why I put the nots in there. Either way, it demonstrates mutual recursion.


This is very important for self-recursion.


This is very important for self-recursion.


RecursionError: maximum recursion depth exceeded



This is very important for self-recursion.


In Python you might think: smart, now my counter is a fast local variable. But you look up (slowly) the builtin hasattr and the module global f anyway to get at it. :)

I looked at Python dis output before writing this; you can see how it specializes in 3.11. But there are also 4 occurrences of LOAD_GLOBAL f in the disassembly of this function: all four self-references to f go through module globals, which shows the kind of "slow" indirection Python code struggles with (and which could still be optimized, maybe?).

You could scratch your head and wonder: even inside the function itself, why does the reference to the function go through globals? Because in the case of a decorated or otherwise monkeypatched function, it still has to refer to whatever that name is bound to at call time.
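
If you want to poke at this yourself, a quick sketch (exact bytecode differs between Python versions, but the LOAD_GLOBAL lookups for hasattr and f should be visible):

  import dis

  def f():
      if not hasattr(f, "counter"):
          f.counter = 0
      f.counter += 1
      return f.counter

  dis.dis(f)  # every reference to f shows up as a global lookup, not a local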


More concretely, one of the classic Python bugs is to use `[]` as a default argument and then mutate what "is obviously" a local variable.


I think it's even safer/preferable to use a non-mutable `None` as the default and do:

  def myfunc(x=None):
      x = x if x is not None else []
      ...


In some cases you can also do:

  x = x or []

Your method is best when you might get falsy values, but if that's not an issue the `or` method is handy.


I tend to dislike this method, as it's unclear what `or` returns unless you already know that `or` behaves this way. `x if x is not None else default` is cleaner in my opinion.


I'm learning python, and I hit this milestone about a week ago!


What's it do?


When you set an object as a default, that object is the default for every call to that function/method. This also holds when you create the object inline, like that empty list. So in this case, every call that uses the default argument is using the same list.

    def listify(item, li=[]):
        li.append(item)
        return li

    listify(1) # [1]
    listify(2) # [1, 2]


I would hate to get an interview question where the very premise of it is wrong. Python does have mutable default arguments, but so does Ruby.

    def func(arr=[])
      # Look ma we mutated it.
      arr.append 1
      p arr
    end

Why calling this function a few times outputs [1], [1], ... instead of [1], [1, 1], ... isn't because Ruby somehow made the array immutable and hid it behind copy-on-write or anything like that. It's because Ruby, unlike Python, has default expressions instead of default values. Whenever the default is needed, Ruby re-evaluates the expression in the scope of the function definition and assigns the result to the argument. If your default expression always returned the same object, you would fall into the same trap as in Python.

The sibling comment is wrong too -- it is a local variable, or as much of one as Python can have, since all variables, local or not, are names.


Just as a demo of what you're saying:

If you were to do (the following is from memory, probably has typos):

  def func(arr=[]):
    print(locals())

You'd see `arr` there. The `[]` value lives in `func.__defaults__`:

  def func(arr=[]):
    print(locals())
    print(func.__defaults__)  # will print: ([],)

If you assign to `arr`, nothing changes with the defaults:

  def func(arr=[]):
    print(locals())
    arr = 10
    print(func.__defaults__)  # will still print: ([],)

But since lists are mutable, calling a mutating function on the list referenced by `arr` will cause a mutation of the list stored in the defaults:

  def func(arr=[]):
    print(locals())
    arr.append(10)
    print(func.__defaults__)  # will print: ([10],)

But only when `func` is called without something to assign to `arr`:

  # if pristine and it has not been run before
  def func(arr=[]):
    print(locals())
    arr.append(10)
    print(func.__defaults__) # will print: ([],)
  func([])


Agreed, I found that example very confusing.


Why does that issue only come up with default arguments?

Why not in other places?


Default arguments are evaluated and created when the function definition is executed, not each time the function is called. This means that the default value lives as long as the function object itself, not just a single invocation of the function. This is what throws people off.
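
A small sketch that makes the definition-time evaluation visible (the timestamp default is just my own illustration):

  import time

  def stamp(t=time.time()):  # time.time() runs once, when the def statement executes
      return t

  print(stamp())             # same value...
  time.sleep(2)
  print(stamp())             # ...even seconds later, because the default was captured at definition
  print(stamp(time.time()))  # passing a value explicitly gives the current time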


functools.cache is pretty new; py3.8 is still supported for another year and a bit.


functools.cache is basically `functools.lru_cache(maxsize=None)`. `lru_cache` was added in py3.2, which is widely available.
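
In other words, on older Pythons you can get the same behavior with something like this (a small sketch; `fib` is just a stand-in):

  from functools import lru_cache

  @lru_cache(maxsize=None)  # unbounded cache; functools.cache wraps exactly this on 3.9+
  def fib(n):
      return n if n < 2 else fib(n - 1) + fib(n - 2)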


> "Consequently, we now have to gather the source code from multiple sources, including CentOS Stream, pristine upstream packages, and RHEL SRPMs."

Oh no! How dare they make us do the work?

It feels tiring to hear these arguments that they must be provided with everything bundled neatly with no questions asked and no contributions to the actual upstreams.


Yeah, that's not what any of us are saying.

We already _do_ a lot of work. This is _more_ work.


So who should be doing that work? On whose payroll? Should Red Hat engineers be spending their time de-branding and wrapping things up neatly for rebuilders to use? Note that every minute they spend on that is a minute they're not spending on adding features, fixing bugs, or backporting fixes to the last ten years' worth of releases. You know, the things they're actually obligated to do by their contracts with customers. Why should they continue letting free work for non-contractual partners - who seem increasingly inclined to be competitors - displace or delay that?

This is the rebuilders' burden, and always has been. It should be their engineers doing that work, just as with any other open-source project. If you want to rebuild TensorFlow or React, slap your own branding on it, and maybe sell support or consulting for it or enable others[1] to do so - do you think those teams will go out of their way to repackage stuff for your convenience? That's above and beyond common open-source practice. Expecting Red Hat to continue going above and beyond forever just seems awfully entitled.

[1] "Team members don't do X but sponsors do" deserves its own thread.


Note that they're actually doing more work now (checking for contractual entitlements, playing whack-a-mole with rebuilders, trying to reassure ecosystem partners, etc etc) than they did before.


I'm not sure that's true at all. Having done a bit of packaging myself, I'm well aware that it's hard, tedious, frustrating work. Doing it twice, once for their own users and again for the benefit of those whose only practical effect is to fragment the ecosystem, is a substantial burden.


> playing whack-a-mole with rebuilders

Are they playing whack-a-mole? Or was this one change that people are arguing (and Red Hat's lawyers seem to think) is within their rights under the GPL? It will become whack-a-mole if Red Hat tries to stop supporting VPS instances or stops updating UBI, both of which I'd give about a 1% chance of going away.


More work that has little effect on the actual upstream ecosystem, beyond giving away for free something that 20k people at Red Hat are literally paid to produce. Ubuntu is not a clone of Debian; they extend it, tweak it, and provide code back to the community. What are the specific parts of the work that Rocky does that benefit the open-source community? What improvements has the community gained through your "work"?


They aren't asking to be handed everything - they've simply explained how the process has changed now.

Complaining about a change, and describing the steps that now have to be taken, is far from saying they "must be provided with everything bundled neatly with no questions asked".


So what's stopping Alma/Rocky from just "taking that same stuff other people wrote" and not bothering with RHEL at all?


A note of caution: never scrape the web from your local/residential network. A few months back I wanted to verify a set of around 200k URLs from a large data set that included a set of URL references for each object, and naively wrote a simple Python script that would use ten concurrent threads to ping each URL and record the HTTP status code that came back. I let this run for some time and was happy with the results, only to find out later that a large CDN provider had identified me as a spammy client via their client reputation score and blocked my IP address on all of the websites that they serve.

Changing your IP address with AT&T is a pain (even though the claim is that your IP is dynamic, in practice it hardly ever changes), so I opted to contact the CDN vendor by filling out a form, and luckily my ban was lifted within a day or two. Nevertheless, it was annoying that suddenly a quarter of the websites I normally visit were not accessible to me, since the CDN covers a large swath of the Internet.
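
For context, the script was roughly of this shape (a from-memory sketch, not the original; the URL list and timeout are placeholders). A delay between requests and a lower thread count would probably have kept me under the CDN's reputation threshold:

  import concurrent.futures
  import requests

  urls = ["https://example.com/"]  # stand-in; the real run covered ~200k URLs

  def check(url):
      # HEAD is usually enough to get a status code without downloading the body.
      try:
          return url, requests.head(url, timeout=10, allow_redirects=True).status_code
      except requests.RequestException as exc:
          return url, repr(exc)

  with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
      for url, status in pool.map(check, urls):
          print(url, status)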


I run a search engine crawler from my residential network. I get this too sometimes, but a lot of the time the IP shit-listing is temporary. It also seems to happen more often if you don't use a high enough crawl delay, ignore robots.txt, do deep crawls ignoring HTTP 429 errors and so on. You know, overall being a bad bot.

Overall, it's not as bad as it seems. I doubt anyone would accidentally damage their IP reputation doing otherwise above-board stuff.


I’ve learned a bunch of stuff about batch processing in the last few years that I would have sworn I already knew.

We had a periodic script that had all of these caveats about checking telemetry on the affected systems before running it, and even when it was happy it took gobs of hardware and ran for over 30 minutes.

There were all sorts of traffic-shaping mistakes that made it very bursty, like batching instead of rate limiting, so the settings were determined by trial and error, essentially tuned to the 95th percentile of the worst case (which is to say occasionally you'd get unlucky and knock things over). It also had to gather data from three services to feed a fourth, and it was very spammy about that as well.

I reworked the whole thing with actual rate limiting, some different async blocks to interleave traffic to different services, and some composite rate limiting so we would call service C no faster than Service D could retire requests.

At one point I cut the cluster core count by 70% and the run time down to 8 minutes. Around a 12x speed up. Doing exactly the same amount of work, but doing it smarter.
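
The composite limiting can be sketched roughly like this with asyncio (the limiter class, service names, and rates are all made up for illustration; this is not the actual code):

  import asyncio

  class RateLimiter:
      # Spaces out acquisitions so at most `rate` calls per second go through.
      def __init__(self, rate):
          self.interval = 1.0 / rate
          self.lock = asyncio.Lock()
          self.next_ok = 0.0

      async def acquire(self):
          async with self.lock:
              now = asyncio.get_running_loop().time()
              wait = max(0.0, self.next_ok - now)
              self.next_ok = max(now, self.next_ok) + self.interval
          if wait:
              await asyncio.sleep(wait)

  async def call_service_c(limit_c, limit_d, item):
      # Composite limit: a call to C also takes a slot from D's limiter,
      # so C is never driven faster than D can retire the resulting work.
      await limit_d.acquire()
      await limit_c.acquire()
      ...  # the actual request to service C would go here

  async def main():
      limit_c = RateLimiter(rate=50)  # made-up rates
      limit_d = RateLimiter(rate=20)
      await asyncio.gather(*(call_service_c(limit_c, limit_d, i) for i in range(100)))

  asyncio.run(main())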

CDNs and SaaS companies are in a weird spot where typical spider etiquette falls down. Good spiders limit themselves to N simultaneous requests per domain, trying to balance their burden across the entire internet. But they are capable of M*N total simultaneous requests, and so if you have a narrow domain or get unlucky they can spider twenty of your sites at the same time. Depending on how your cluster works (ie, cache expiry) that may actually cause more stress on the cluster than just blowing up one Host at a time.

People can get quite grumpy about this behind closed doors, and punishing the miscreants definitely gets discussed.


It makes very little difference what IP you scrape from, unless you're coming from a very dodgy subnet.

The major content providers tend to take a whitelist-only approach: you're either a human-like visitor or you're facing their anti-scraping measures.


I think the emphasis is on "never scrape from YOUR local/residential network".


Cloud-based scraping services were most probably not available back in 2012. Now there are services like scraperapi and others that don't require you to install anything on your end. You pay them and use their cloud infra, effectively unlimited proxies, and even headless browsers. Shameless plug: I wrote about this a few years ago in a blog post [1].

[1] https://blog.adnansiddiqi.me/scraping-dynamic-websites-using...


>(even though the claim is that your IP is dynamic, in practice it hardly ever changes)

Every ISP just uses DHCP for router IPs. It's dynamic; you just have to let the lease time expire to renew it.

Or run your own configurable router instead of the ISP's so that you can actually send a DHCP release command, though not all ISPs support that. Otherwise, changing the MAC address will usually work.


When the lease expires, the same IP is prioritized for renewal. Leases are generally for a week or two, but I've noticed dynamic IPs staying for 3 months or more. Swapping modems is really the best way to get a new external IP.


Not sure how it is in Python, but what about using something like arti-client? Would it already be blocked?


Podman still uses QEMU on Mac: https://podman.io/docs/installation#macos


QEMU is the hypervisor in this situation, which doesn't necessarily preclude having Rosetta acceleration of AMD64 binaries within the ARM64 Linux guest itself.

That said, as far as I know, the only official way to use Rosetta inside a Linux guest is via Virtualization.framework, which allows mounting a Rosetta binfmt handler via virtiofs. So Podman is also going to use QEMU (user-mode emulation) inside the VM to handle running amd64 images, not Rosetta.


Mastering Regular Expressions by Jeffrey Friedl! I was fresh out of college, where I had literally memorized certain regular expression patterns for a Unix class exam just to pass. It was only when I started an actual job in development that I realized what a powerful tool regexes are, and this book was recommended to me. It explained everything so clearly and easily that to this day I love regular expressions.

