
My personal favorite resource is "R for Data Science" by Hadley Wickham. It covers lots of nice data manipulation and visualization examples, and provides a good introduction to the tidyverse, which is a particular dialect of R that's well-suited for data analysis. It's available for free at:

https://r4ds.hadley.nz/

For more specialized analytical methods there are lots of textbooks out there that provide a deep dive into packages for a specific field (e.g. survival analysis, machine learning, time series), but for general data manipulation and visualization it's hard to beat R4DS.


One might be interested in “Font for rendering line chart data” that was on the homepage a few weeks back https://news.ycombinator.com/item?id=39173438

> It may decrease privacy philosophically, but it isn't nefarious.

It doesn't decrease privacy. It decreases anonymity which is distinctly different.

> If you want a private messaging platform with zero prerequisite identity, use Briar.

Or Session, which is a fork of Signal that runs its own network, using standard PKI instead of a phone number for identities, along with a decentralised message delivery/onion routing system.


Yes, let's hope! The strategy has sometimes worked out - Google shut down 'Google Refine' 10 years ago; it got turned into 'OpenRefine', which had its last release five days ago. https://github.com/OpenRefine/OpenRefine

It's a hugely useful tool if you're working with messy Excel-scale data, i.e., most biologists or social scientists.


Is recycling really what needs to be implemented, or should we go deeper to the source and crack down on disposable, unfixable, ephemeral gadgets? It seems to be more of a greed-rush, "fix it in the future and make more money" type of issue at the root. Recycling isn't necessarily useless, but it attacks the wrong root cause of e-waste.

> If current quantum computers were scaled up to more qubits

That depends on what you mean by "scaled up". There is a concept called "Quantum Volume", which basically measures the largest qubit circuit you can reliably pull off.

https://en.wikipedia.org/wiki/Quantum_volume

'Simply' (it's never simple ;) ) adding qubits to a machine does not necessarily increase its Quantum Volume. Decreasing the noise typically will.
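To make the width/depth trade-off concrete, one common formal definition (IBM's) ties Quantum Volume to the largest "square" circuit the machine can run reliably:

```latex
% Quantum Volume: n is the number of qubits used, d(n) is the largest
% circuit depth achievable at that width with acceptable fidelity.
% The machine gets credit for the largest square (n x n) circuit it can run.
\log_2 V_Q = \max_{n \le N} \min\bigl(n, d(n)\bigr)
```

Adding qubits raises the ceiling on $n$, but if the noise keeps $d(n)$ small, the $\min$ (and hence the Quantum Volume) doesn't budge.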

However, there is a noise threshold below which you can scale up more or less indefinitely. That is what the whole field of Quantum Error Correction is about.

https://en.wikipedia.org/wiki/Quantum_error_correction

There is a paper

https://arxiv.org/abs/1905.09749

that gives a clear discussion of how to build a quantum computer and the associated thresholds. There is a minimum number of (perfectly working) qubits needed, and the paper analyzes how many noisy physical qubits you'd need, under realistic assumptions, to get error-corrected qubits at the required reliability.


DNS block:

events.gfe.nvidia.com
lightstep.kaizen.nvidia.com


Relevant concept: Zooko's Triangle [1]: it's hard to have a naming system that is decentralised and secure against spoofing without giving you long, random (or hashed) names, like with .onion services.

[1] https://en.wikipedia.org/wiki/Zooko%27s_triangle


The best managers I’ve worked for recognize that the role isn’t “IC + Authority”, but something entirely different.

- They trust their team, but stay engaged/aware enough to know when to ask the right questions

- They run interference for the team and make sure they’re shielded from org BS when possible

- They make sure the team has the tools they need, and when they don’t, they relentlessly pursue a solution

- They establish good relationships with other teams and leaders, and leverage those relationships to help the team (so important when your team has dependencies on other teams)

- They shut down ad hoc/direct requests from upper leadership that would distract from current priorities

- They consistently sell the value of the team and highlight its accomplishments

- They convey enough about the business context to make the work more meaningful

- They always have your back in public, even if there might be critical feedback to deliver later

- They elevate the members of the team and don’t take credit for the team’s work; instead, they take pride in creating the environment that allowed the work to thrive

- People want to work for these managers, and think hard before leaving them

I’ve been lucky enough to experience all of this in a single manager a few times, but it seems rare.

But also keep in mind that managers are human, often thrown into the position, and while they’re eager to be the kind of manager people want to work for, they may not have the experience.

When I was a principal IC/team lead, I found it useful to “manage up” (I kinda hate this term) and communicate as clearly as possible about what the team needs and how the manager can help. Especially with newer managers, this is critical. I’ve seen new managers chewed up and spit out by snarky devs who view them as an adversary instead of a member of the team. In a discussion about good managers, it seems important to mention that there’s a lot a team can do to help a new manager find their stride; one of the truly good managers I worked for emerged from this kind of situation.


Get this book:

https://www.opencircuitsbook.com/

It’s a book featuring macro photographs of cut-aways of electronic components. It’s the first book that helped me really think of electronic components as physical things whose function follows from physical principles, rather than an arcane collection of various bits of black magic strung together.


Just a quick comment on the product, since this is effectively an ad for Altium.

They typically present themselves as the most popular solution, but they also very clearly out-price any hobbyist at $10k for a perpetual licence. Their "hobby" version CircuitStudio lacks critical features and has 0 support and 0 updates and the forums are just crickets. But KiCad is free, open source, and looks similar enough that I had a great time following the Altium tutorials by Rick Hartley with it.

In my opinion, that also invalidates their claim that Altium is the most commonly used PCB design software, because there likely are 100x more hobbyists using KiCad than people able to spend $10k on a hobby.

It seems to me like Altium is going the way of Eagle. They used to court the makers and hobbyists, and then profited greatly when those people started working. But now both of them are mostly in the business of milking companies that have existing data in their proprietary file formats.


If they open it up, possibly. But honestly, building your own tools is _super_ easy with langchain.

- write a simple prompt that describes what the tool does, and

- provide it a python function to execute when the LLM decides that the question it's asked matches the tool description.

That's basically it. https://langchain.readthedocs.io/en/latest/modules/agents/ex...
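A minimal, langchain-free sketch of the same pattern (all names here are made up for illustration; in langchain the model itself picks the tool, whereas here a trivial keyword match stands in for the LLM):

```python
# Sketch of the "description + function" tool pattern. Each tool is a
# natural-language description paired with a Python function; the agent
# picks a tool by matching the question against the descriptions.

def get_time(query: str) -> str:
    return "12:00"  # placeholder implementation

def add_numbers(query: str) -> str:
    nums = [int(w) for w in query.split() if w.isdigit()]
    return str(sum(nums))

TOOLS = [
    {"name": "clock",
     "description": "useful for questions about current time",
     "func": get_time},
    {"name": "calculator",
     "description": "useful for questions about adding numbers",
     "func": add_numbers},
]

def fake_llm_pick(question: str) -> dict:
    # Stand-in for the LLM: choose the tool whose description shares
    # the most words with the question.
    words = set(question.lower().split())
    return max(TOOLS, key=lambda t: len(words & set(t["description"].split())))

def run_agent(question: str) -> str:
    tool = fake_llm_pick(question)
    return tool["func"](question)

print(run_agent("please add the numbers 2 and 40"))  # -> 42
```

The real value langchain adds is the prompt scaffolding that gets the LLM to emit "use tool X with input Y" reliably; the dispatch logic itself really is this small.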


This is hardly an "advanced algorithm"; anyone can do this on a free Colab instance. You need only a few images of yourself to train on, and you can make a LoRA for Stable Diffusion and use it for anything you wish. The pricing is only for the ease of use. People have made LoRAs for anime characters, celebs, etc., and they work pretty well. See Civitai; it has a large collection of models / LoRAs / text embeddings.

These days you don't even need to suss out the negative prompts, you can use a negative text embedding (bad-hands, easynegative) to get good quality images.

Dreambooth is practically ancient now. You don't need to lug around huge converged models trained on a few images and a few tags. You can download a much smaller LoRA, include it in your prompt, and it just works.
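For example, in the AUTOMATIC1111 webui a LoRA is pulled in with a tag directly in the prompt (the LoRA file name and the 0.8 weight below are made up; the negative embeddings are the ones mentioned above):

```
masterpiece, portrait photo of a person, <lora:myFaceLora:0.8>, detailed
Negative prompt: easynegative, bad-hands-5
```

The number after the second colon scales the LoRA's influence, so you can dial a style or likeness up or down without retraining anything.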


I want to stress that for anyone looking to use Neovim as an IDE (lite), all you need is something that speaks the Language Server Protocol (LSP). You don't really need to download all these configuration frameworks.

- Neovim's native LSP (with the neovim-lspconfig plugin) handles completions, go-to-definition, linting, formatting and refactoring. (Here's my init.vim config for the LSP https://gist.github.com/bokwoon95/d9420fce4836f6b518b02bd60a...).

- Instead of trying to get an autocompletion plugin to work, just use vim's native omnicompletion <C-x><C-o>.

- Instead of a plugin manager, use Vim8 native packages (https://vi.stackexchange.com/a/9523). I use a custom shell script for updating plugins via git (https://gist.github.com/bokwoon95/172ecc04039afdbe9425678946...)

- I use Fern.vim for a file explorer.

- I use Telescope.nvim for fuzzy jump-to-file.

- Dynamic statusline is just a few lines of config (https://gist.github.com/bokwoon95/d9420fce4836f6b518b02bd60a...), no statusline plugin needed.

- No debugger support, I'll use a CLI debugger or an IDE if I need one.
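As a sketch, the heart of such a setup is only a few lines of Lua (this is a hedged example, not the gist above: it assumes nvim-lspconfig is installed, and `pyright` is just a stand-in for whichever language server is on your PATH):

```
-- minimal LSP setup, assuming the nvim-lspconfig plugin is installed
require('lspconfig').pyright.setup{}

-- a couple of handy mappings over the built-in LSP client
vim.keymap.set('n', 'gd', vim.lsp.buf.definition)
vim.keymap.set('n', 'K',  vim.lsp.buf.hover)

-- wire native omnicompletion (<C-x><C-o>) to the LSP client
vim.api.nvim_create_autocmd('LspAttach', {
  callback = function(args)
    vim.bo[args.buf].omnifunc = 'v:lua.vim.lsp.omnifunc'
  end,
})
```

Everything else (formatting, rename, references) comes along for free via `vim.lsp.buf.*` once the server attaches.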

I've been using this setup for a very long time, and I barely touch my init.vim anymore. Here's a recent thread from the neovim subreddit where the OP talks about how much effort it takes to properly set up a configuration framework https://www.reddit.com/r/neovim/comments/11p6iiu/i_love_vim_...:

> I want to fix my problems and consolidate my environments BUT setting it up is too painful and I don't another hobby as a job (I already have servers and a 3d printer lol). I've tried multiple times this week to setup either pure neovim, lunarvim, nvChad, astrovim and LazyVim starter and there's always something that I can't find how to change even after searching online and not even taking into account that setting it up takes like a day for each. I don't really want to dedicate a whole month to reading docs, debugging and discovering plugins to fix issues that i'm hitting and I don't wan to blindly learn some commands to then throw them away because that distro didn't work out.


> developers own their code from IDE to production.

That was an important implied Principle of Steve McConnell's famous Software Quality At Top Speed article[0]. In fact, if you read that article, a number of these things resonate.

Not everything good is new. I just think that we forgot them, along the way.

For myself, I generally operate as a one-man shop, so I have the following tips, that work for me:

* Modular Design

I write my stuff in discrete modules, if possible. Sometimes the modules are source code files organized by functional realms; other times, they are complete, soup-to-nuts, published and supported SPM modules. The iOS app I'm writing now has this dependency manifest:

    * KeychainSwift: 20.0.0
    * LGV_Cleantime: 1.3.6
    * LGV_MeetingSDK: 2.3.0
    * LGV_UICleantime: 1.1.4
    * RVS_AutofillTextField: 1.3.1
    * RVS_BasicGCDTimer: 1.5.2
    * RVS_Checkbox: 1.2.2
    * RVS_GeneralObserver: 1.1.1
    * RVS_Generic_Swift_Toolbox: 1.11.0
    * RVS_MaskButton: 1.2.3
    * RVS_Persistent_Prefs: 1.3.2
    * RVS_UIKit_Toolbox: 1.3.2
    * White Dragon SDK: 3.4.2
All of them, with the exception of the first one (KeychainSwift), were written by me, and are released, supported open-source Swift Packages.

I also wrote a couple of backend servers, that the app leverages.

* Layered Design

I write stuff in layers; especially server code. Each layer has a loose coupling with the other layers, and also maintains a fairly strict functional domain. This makes it very easy (and less bug prone) to pivot and fix.

* Document As I Go

I write my documentation into the code. I've learned to keep it mostly to headerdoc-style documentation. I write about my process here[1].

I also practice "Forensic Design Documentation"[2], and "Evolutionary Design Specifications"[3]. These help me to move quite quickly, yet maintain a well-structured, well-tested project.

* Test Harnesses over Unit Tests

I find that test harnesses[4] allow me the greatest flexibility, and fastest path to integration testing, which is where I want to be.

* Reduce "Concrete Galoshes"[5]

This is pretty important, and many of the above practices go a long way towards this goal.

I know that the way I do things won't work for many folks; especially teams of less-experienced developers. My experience, coupled with an OCD nature, make it work for me.

WFM. YMMV.

[0] https://stevemcconnell.com/articles/software-quality-at-top-... (It would be great, if he fixed the images in that article, one day).

[1] https://littlegreenviper.com/miscellany/leaving-a-legacy/

[2] https://littlegreenviper.com/miscellany/forensic-design-docu...

[3] https://littlegreenviper.com/miscellany/evolutionary-design-...

[4] https://littlegreenviper.com/miscellany/testing-harness-vs-u...

[5] https://littlegreenviper.com/miscellany/concrete-galoshes/


decentralization of X does not and cannot exist without decentralized funding of X.

this is why there is simply no way to separate blockchain (as an implementation of decentralized X) from tradeable cryptocurrency derived from units of X.

this is not in any way a defense or support for cryptocurrency, only that it is nonsense to claim decentralization of function can exist independently from decentralization of funding. that cryptocurrency is designed to explicitly realize this principle doesn't excuse its failure to do so.


What if there were a way to create some sort of centralized/decentralized identity store... where anyone wanting access to your information would have to request it, you then supply them a 'key' to gain access...like a pgp key... they can then use your data for business purposes that you allow... you can destroy keys at will so you ALWAYS have the means to shut off a business who's abusing your information, or you could drop off the entire grid by basically revoking all keys...

This would apply to images with your likeness as well (AI-made or whatever)... though I'm not sure how you'd enforce that, unless the protocol also did facial recognition/search online, notifying you of images of you that you could then force to be taken offline through legal means.

Of course there's the argument of who owns the copyright -- the person in the picture or the photographer even if they took photo w/ or without your permission/knowledge...etc...

I'm not sure how this would work in the wild, and it's just a concept idea... It would be nice though in the case of a true dystopian future to be able to pull yourself off the grid easier than it is today.
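As a toy sketch of the revocable-key idea (every name here is hypothetical, and a real system would need actual cryptography and enforcement, not an in-memory dict):

```python
import secrets

class IdentityStore:
    """Toy model: each business gets a per-grant key; revoking the key
    cuts off access, and revoking everything 'drops you off the grid'."""

    def __init__(self, personal_data: dict):
        self._data = personal_data
        self._grants = {}  # key -> business name

    def grant(self, business: str) -> str:
        key = secrets.token_hex(16)  # opaque access token
        self._grants[key] = business
        return key  # hand this to the business

    def access(self, key: str) -> dict:
        if key not in self._grants:
            raise PermissionError("key revoked or never issued")
        return self._data

    def revoke(self, key: str) -> None:
        self._grants.pop(key, None)

    def revoke_all(self) -> None:
        self._grants.clear()
```

The hard part this sketch dodges is the real one: once a business has read your data, nothing technical stops them from keeping a copy, which is why the enforcement would probably have to be legal rather than cryptographic.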

