Ask HN: What interesting problems are you working on?
348 points by rohith2506 on Sept 16, 2022 | 621 comments
This is a follow-up to this thread ( https://news.ycombinator.com/item?id=22174828 ), which got some amazing responses, but considering it's been almost two years, it's time for a new edition.

I'd love to hear about what interesting problems (technical or otherwise) you're working on -- and if you're willing to share more, I'm curious how you ended up working on them. This time, with a twist: please include web technologies too if they're really niche and not a lot of people know about them.




A little while ago, a patent for high-throughput DNA synthesis using silicon chips expired, so I have been learning analog IC design to build an open-source high-throughput DNA synthesis chip. If it works and I sell the chips at cost, it would reduce the cost of oligo pool synthesis by ~100x and possibly gene synthesis by 10x.

For about 8-9 years now my goal has been to build a cell from scratch for less than $1000 and to enable anyone to do the same. While I’m handling the other bits elsewhere, DNA synthesis is a key technology primed for open-source disruption.


For anyone interested, I found the following blog post on OP’s website, which is linked from their HN profile:

_____

How to make DNA synthesis affordable

https://keonigandall.com/posts/affordable_dna_2.html


Having worked at the largest synthetic DNA manufacturer (at least they were; I think they still are)... a lot of his information is just wrong. There is a huge incentive to lower the cost of producing oligos and synthetic genes. This is actually a very interesting time in the space (shoutout AnsaBio).

I encourage him to continue exploring new ways of making DNA, but understanding the market is very different from building off an expired patent.


I’ve spoken to the CEO of Ansa Bio about this topic, and they aren’t focused on lowering prices but on increasing what can be built.

I would like to know what specifically you think I am wrong about, though. Always happy to improve my thinking.


(Digital) chip designer here: this is super cool. What tech node are you targeting? Have you checked out any of the free open-source tools and PDK from Skywater, Google, eFabless and co?


I'm planning on using the Chip Ignite program! They can build the chip I need - the harder bit is figuring out the coatings (sputter-coating post-processing is needed for compatibility with the chemistry).

I am pretty new to chip design but know the DNA space quite well - I would love to do a call, if possible, because I have lots of noob questions about chip design and chip design tools. Happy to share anything about the biological side in exchange!


Is there a page, blog, twitter account, newsletter, Github repo or anything to follow this? I have no knowledge in this space so I'm of no help, but I'm very interested in the prospect of this stuff becoming more accessible!


Github here - https://github.com/koeng101/dnachips

The repo is a bit dead at the moment, since I am trying to get the first machine I need, which is a DNA synthesizer (I asked about that here, for example: https://groups.google.com/g/diybio/c/V3OYVBxaH04 ).

The idea is that with a traditional DNA synthesizer I can have positive controls of the chemistry, and develop a chip that can fit inside the flow cell of the existing synthesizer. In biotech, everything goes wrong a bit more often than in computer science, so the focus lately has been getting my hands on a working synthesizer. This is a tried and true method of getting chip synthesis working.

If that works, I'd like to provide the chip at cost for integrators, as well as develop a functioning full product for integration with some bots I'm building for my official work.

Personal website is here - http://keonigandall.com


Likewise, piggybacking!


I'd just like to voice my concern. Every great step forward comes with the potential for order-of-magnitude steps backward, i.e. destruction. The number of man-hours needed to, e.g., blow up a building is far smaller than the number needed to build it.

Will the output of your product add a suicide timer to a cell?

Will the output of your product prevent the cells from procreating/multiplying?

Will the output of your product prevent pathogen creation?

Will the output of your product require a specific, unnatural energy source that can only be man made?

Professionals take great care in thinking about those problems, and sometimes still fail. (IIRC, a synthesized breed of mosquitoes that was released in Brazil failed to die off and is now part of the ecosystem [0].)

[0] https://www.dw.com/en/genetically-modified-mosquitoes-breed-...


This is like asking people if they've properly guarded against a malicious AI before making hobbyist computers.

> Will the output of your product prevent the cells from procreating/multiplying?

Why would this matter? We have immortal cell lines, like HeLa cells, that have been alive for decades.

> Will the output of your product prevent pathogen creation?

No, but no one is going to "accidentally" create a new pathogen (you'd need this as well as some expensive labware and a lot of expertise), and the people with the incentive can already do so in labs.

> Will the output of your product require a specific, unnatural energy source that can only be man made?

What lol


I'm not an expert, but if it's a chip that just synthesizes DNA from a sequence of base pairs, isn't what you're asking similar to making a computer that can't be used to perform evil? I suspect that computing whether a given sequence is usable in a pathogen is equivalent to the halting problem. And practically, it seems that a lot of computational resources are required to figure out what a protein does, even for common cases.


If we had really smart software engineers, well-paid red teams, and robust government-policy collaboration with industry, I think we could make it at least 95% harder to create something dangerous. We have none of those, though.

It's super hard, but until novel bioweapons are discovered, it is at least a tractable problem.


> Will the output of your product add a suicide timer to a cell?

Absolutely not.

> Will the output of your product prevent the cells from procreating/multiplying?

Absolutely not.

> Will the output of your product prevent pathogen creation?

Strictly defined as the output being a chip, absolutely does not.

> Will the output of your product require a specific, unnatural energy source that can only be man made?

Absolutely not.

> Professionals take great care in thinking about those problems

I am a professional in this field and have been thinking about these problems quite deeply (if you check my website, my first writing about my concern for these problems goes back to 2014). I have developed opinions on this over the years, but roughly they come down to this: many folks have a gross misunderstanding of the field in general, yet quite like to think that they understand what is going on.

For example, I mentioned I wanted to do oligo pool synthesis - how the hell would the output of an oligo pool synth run add a suicide timer? Or prevent replication? Or require a certain kind of energy source? In the context of the stated goal, these objections really don't make any sense. It is roughly equivalent to someone wanting to run a mining company and getting countered by "will the output of your product stop school shootings?". Perhaps better questions are along the lines of - how are oligos matching biohazard sequences prevented from being synthesized? Well, this is a question of both governmental policy (what IS a biohazardous sequence?) and of the integrated device (does it phone home for each synthesized sequence? What about hardware hacking?).


Cart is way before the horse. Also grad students doing this type of work are absolutely not taking great care in thinking about these problems.


I fail to see how considering the impacts of a tool before building a tool is putting the cart before the horse. How is that not a necessary step in building a thing?


I saw someone posted a reactive database just now on HN. I can't believe they didn't think about what child pornographers might do with it. There is literally no discussion of it on their web site. How could it not be a necessary step in building such a thing?

Perhaps part of the disconnect here is not realising how vast the applications for synthesizing DNA oligos are. It's a very basic thing, and anybody can already order them online for a very affordable amount. It's like being worried about someone open-sourcing a way to make printer ink.


I mean, at what point do we stop and ask ourselves if the thing we're making is going to cause bad things to happen? Do we only consider that when the thing is really obviously a weapon, and ignore all the other creations?


> I fail to see how considering [all] the impacts of a tool before building a tool is putting the cart before the horse.

It’s interesting how people revise their arguments, or omit words from them, to make their interlocutor’s response appear more absurd.


I hereby vote for the establishment of a Regulatory and Executive Committee to Understand and Reconsider Special Impacts On Nature.

Err, wait, ummm, it looks like we first need a R.E.C.U.R.S.I.O.N. to establish the R.E.C.U.R.S.I.O.N.


> Will the output of your product prevent the cells from procreating/multiplying?

How would unicellular organisms procreate exactly?!

I am not being pedantic here, but procreation usually entails sexual reproduction, and I don't see how this is possible for these organisms.


I'm pretty sure procreation just refers to the process of reproduction, sexual and asexual inclusive.


My understanding of the pertinent terms in this specific context is as follows:

1) Multiplying: Asexual reproduction only.

2) Reproduction: Sexual and asexual reproduction.

3) Procreation: Sexual reproduction only.


I appreciate the way you understand the terminology. However, I'm not sure everyone else understands it the same way. Although, I do admit the terms can be confusing.

BTW: I don't see how 'multiplying' could refer to just asexual reproduction. People often use it in sentences like 'the deer population near here has multiplied over the last couple of years'.


Well, I see how the word "multiplying" can be used in everyday speech to refer to reproduction in a herd of deer as a collective, but not on an individual level, which adds to my point.

As you can see, getting terminology right is challenging and not always straightforward.


> so I have been learning analog IC chip design to build an open source high-throughput DNA synthesis chip.

Any resources and books you'd recommend?


As a software engineer, Verilog and the like just didn't make as much sense to me as Chisel. Got this book and have been very slowly working through it - https://www.amazon.com/Digital-Design-Chisel-Martin-Schoeber...

Efabless's chip ignite program is also great - check it out! https://efabless.com


This is not at all an area I know anything about, but a podcast I was listening to (Moore's Lobby) recently mentioned these resources on chip design broadly:

https://www.zerotoasiccourse.com

And also:

https://www.udemy.com/user/anagha/


Thank you!


How will you prevent people from using this technology to create DIY pathogens? If the answer is "I won't" or "it'll be open source, I won't be able to", consider doing something less harmful instead.


How far down this road do you go? Someone engaged in bioweapons manufacturing probably uses laptops and chairs too.


The difference is that those already exist. I think concern about the development of new technology is valid.

Suppose nuclear weapons did not exist and someone was concerned about the Manhattan project — would you say that chairs enable people to build regular bombs, so we shouldn't worry about nuclear bombs?


Nuclear bombs have very few uses other than blowing up cities or threatening to do so. DNA synthesis is both of considerable use outside of bioweapons production and not really a critical bottleneck in bioweapons, which is why I'd consider it more like laptops or chairs than like, say ICBM guidance systems or weapons grade enrichment programs.


The technology already exists. The OP stated the patents are over, and that is why (s)he is working on it. Even if it did not exist, security by obscurity is no security... if anybody can do the tech, the bad guys are going to be as fast as, or faster than, the good guys... So I hope the good guys do the tech for everybody...


You are severely underplaying how chaotic people can be, and how valuable a barrier to entry is at deterring that chaos from causing damage.


Synthesizing oligos is not the same thing as creating an infectious clone. This technology would be used to synthesize primers and custom promoter+gene fragments.


Presumably the answer is "do exactly what is done now", which is to say offer the tech as a service (i.e. don't sell the hardware) and screen the uploaded sequence requests for dangerous sequences.
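The screening step described above can be sketched very naively: check each requested oligo, and its reverse complement, against a blocklist of flagged subsequences. This is only a toy illustration with made-up names and a made-up blocklist; real biosecurity screening relies on curated databases of sequences of concern and homology search, not exact substring matching:

```python
# Toy sketch of order screening: flag any requested oligo that contains
# (in either strand orientation) a blocklisted subsequence.
# The blocklist entries below are hypothetical.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def screen_order(oligos, blocklist):
    """Return the subset of oligos that match a blocklisted subsequence."""
    flagged = []
    for oligo in oligos:
        oligo = oligo.upper()
        rc = reverse_complement(oligo)
        if any(bad in oligo or bad in rc for bad in blocklist):
            flagged.append(oligo)
    return flagged

# Hypothetical example: the first order embeds a flagged subsequence.
blocklist = ["ATGCCCGGGTTT"]
orders = ["AAATGCCCGGGTTTAA", "ACGTACGTACGT"]
print(screen_order(orders, blocklist))  # only the first oligo is flagged
```

Even this toy version shows why screening is a policy question as much as a technical one: someone has to maintain the blocklist, and an open-hardware device can be modified to skip the check entirely.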


How about “oligo pool synthesis chips” != “dna synthesis machines”

“Harmful” can also be seen the other way. From my view within the industry, it is harmful that the tech does not exist, and the idea of harmfulness is, largely, propaganda perpetuated by elites to maintain the status quo.


This specific attitude is something that I have capitalized on in many ways in my life to do well: ad tech, location data, trading. There is massive alpha in not being like this. And if this DNA synthesis thing had some value to me I would gladly do it and may the terrorists breed superbugs from it if they will. Even if you convince him, you won't convince me. I will become even more convinced that I should build the building blocks for extinction weapons. Every time you make this argument, I will move closer to doing it.

One day, I will do it just because I can and because you argued I shouldn't. Then your actions will be a proximate cause of the existence of the thing.

Ideally, I would not be so controllable but a deep part of me loves the transgressive nature of fighting a puritan.


All talk, no action


Hahaha! Fair fair :D

Upvotes for the bants


Not knowing anything about this field, what could someone do with such a chip?

Is it something like a 3D printer where you can get a rough but usable finished product, or is it more like a "if you want to make bacterium X produce Y product, you need to do X, Y, Z, W, etc." and the chip is used to do one of the steps.


It is more like the X Y Z W. However, the X Y Z W bits I am working on as well (https://github.com/TimothyStiles/poly , https://github.com/TimothyStiles/allbase , trilo.bio, freegenes.org). Going for fully automated "make bacterium X produce molecule Y", but still a while away (but surprisingly not THAT far off)


That's really cool! I was about to ask how to get started on that field, but I also noticed that you have that covered in https://github.com/TimothyStiles/how-to-synbio


Do you mean completely from scratch?

If so, why would one want to do that versus taking an existing cell and injecting custom DNA?


You can't jump over fitness troughs. For example, full tRNA recoding. Plus, it is neat to make life from chemicals.


I don't understand why, if it really worked, the org holding the patent didn't do anything with it.


Kodak invented digital photography.


They did. This runs GenScript's high-throughput DNA synthesis platform. They actively sell the result as their oligo pool product.


this sounds so cool! i wish you the best luck with it.


Sounds fascinating, what could possibly go wrong?

“Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.” —Dr. Ian Malcolm (Jurassic Park).


That's an excellent question. Another of my faves is "what are the worst consequences and how do we reduce the risk of a negative outcome? Is it possible to entirely eliminate issues responsibly?"

The former tends to stop all progress, while the latter considers that negative outcomes are real, but that we can productively reduce risk.

Kind of like how OpenAI removed racist, sexist, and other troubling images from their AI training pool and then released it in pieces (to a small group of subjects) to remove as many issues as possible. With that said, they definitely still made mistakes.


I like the motto “safety is our number three priority” but I also think that people sharing info about their danger-spraying project should take a moment to say a line or two about how they think about the issues. Instead of just ignoring them.


Kind of an uncharitable quote, considering I have essays and presentations going back 8 years (and more recently) where I question potential harmfulness to the environment. I also previously worked for 3yr at a nonprofit specifically aimed at improving the world/society with biotech, where we considered these questions carefully and deeply.

In any case, blatant sarcasm is also out of the spirit of Hacker News's "Be kind. Don't be snarky. Have curious conversation."


Responding to the first part of your comment, I’m glad to hear that!


“Possibly” is the crux of that question, as it requires knowledge and expertise to answer in any kind of real way.

It’s very easy to come up with answers to that question if you don’t know what is actually possible. Not very useful answers, though.


> “Possibly” is the crux of that question, as it requires knowledge and expertise to answer in any kind of real way.

That is untrue, if by "knowledge and expertise" you mean domain knowledge.

It is possible to understand the implications of technology without mastering it.

The experts in the field have huge incentives to say it is safe.

If we are to make decisions about releasing novel self replicating organisms into the environment, we should not task the people who will profit (scientists) to decide if it is a good idea or not.


What's your point? The technology already exists and is being used.


Your name, your password, and the names, addresses, phone numbers, and emails of all your family members already exist and are being used. So, by your logic, you have no reason not to share them with everyone?

The logic doesn’t hold up.

You can make the same argument for anything. It makes sense to think about the dangers, not pretend that bad logic is an excuse to do anything you want.


No, actually your first sentence's logic doesn't hold up as it isn't a fair comparison at all. You can already order custom oligos and synthesized genes for relatively cheap. The chip idea would just reduce the cost more.


Digging into personal information on people can be done for relatively cheap. I’m not sure it’s a good thing to make it cheaper.


But, in the concrete case in question (gene synthesis technology), what are those potential dangers?


This is silly. Unix already existed, what's the point of linux?


I think you didn't understand what I was referring to. Read it again.


Pretty cool. May I ask, what is your background? And, what resources are you using to learn about analog IC chip design?


~10yr of synthetic bio, did 4yr of mitochondrial engineering + directed evolution at UCI, then ran the FreeGenes Project for 3yr at Stanford.

Chisel for programming it (great little lang/project) https://www.amazon.com/Digital-Design-Chisel-Martin-Schoeber...

Efabless PDKs - https://efabless.com/

Other than that, lots of trial and error. Hardware people are brutal with their acronyms, so it really comes down to a lot of stumbling around and reading whatever I can.


Very cool. Best of luck!


I’m working on making body doubling a more main stream approach to accomplishing everyday tasks.

Body doubling is known within the ADHD community and entails performing a task in the presence of another. More details here: https://www.healthline.com/health/adhd/body-double-adhd

It helps to engage motivation by using another person as the proxy. I wrote a bit about how I think it works here: https://doubleapp.xyz/blog/body-doubling-proxy

The technique goes way beyond just ADHD applications for executive functions and is something we tend to do anyways, e.g., running with a friend, studying in a group, etc.

It solves an issue for myself and I truly want to help others with the approach by making a way to stay accountable through the help of others.


I am so happy to finally have a term for this phenomenon! I find it really hard to explain to friends and co-workers why I prefer to work in pairs. Up until now I've been describing it as something along the lines of a motivational feedback loop, but your description of it being a proxy for your own motivation is quite good, and I'm going to have to steal that going forward! Thank you!


+1 I've never encountered this term but absolutely recognise the effect although I don't have ADHD myself.

Ever since school where I was wholly unable to revise or do homework anywhere but the library (where others would be doing the same), and all the way to today where I tell my partner "I'll do the washing up if you stay in the kitchen"! Doesn't really matter if she talks to me, the task is just hugely more bearable in her physical presence.


I’m glad you found it helpful!


This is great! I have ADHD and find this helps a lot. A YouTuber I follow has a two hour video called "Work with me" of just him working at his desk with music playing. When I first saw it, I laughed. But then one day I was having trouble staying focused, remembered the video and put it on my TV and I found it to be surprisingly effective! Unreasonably so! So good luck with your project! You're on to something.

https://youtu.be/vEqJAReXiN4


Fellow ADHD here. A while back I downloaded a bunch of "Day in the Life" videos off YouTube of people accomplishing the type of productive stuff I need to do every day, then I used iMovie to splice together the best relevant parts, starting with getting out of bed, going through the day, and ending at night with getting back into bed. Now when I hit a rough patch where I can't accomplish anything, I just start the video as soon as I wake up and follow along, going through the motions. This has had a greater impact on me actually getting things done than anything else I've ever tried.


Can you share it, to show what you mean by the edits?


I checked out the video but got really distracted instead. Maybe I am not the target audience of this video, but to stay on topic and expand on the same theme, I came across this video [1] that's probably more effective at extracting productivity gains from you than that one, and I reckon it's less distracting as well:

[1]: https://www.youtube.com/watch?v=3RGEo2Kohb8

(4 Hours of Asian Mum to Help You Focus on Practising/Studying/Working)


Neat! I look forward to seeing more. For a long time I found myself saying absurd things like "I work better in distracting environments" or "it's easier to have a clean house with roommates to clutter it up".

What really was happening was that I respond well to body doubling. (Never diagnosed with any sort of executive functioning disorder, but I suspect that's more because I never went and got diagnosed than me lacking one).

Cool idea.


I've never heard of body doubling before, but you got me to Google it, and that's a very interesting phenomenon. I definitely do better at some tasks when there's someone around with me, even if they're not directly participating.


Thanks! I had similar thoughts as well. For whatever reason, having others around in coffee shops or bars seemed to give me the energy to focus on actually working. A very counterintuitive result for sure!


Wow, that's nifty! Some ADHD friends of mine joke that it'd be awesome to have an ADHDer house-cleaning or chores "swap" service. We suck at cleaning our own places but will spend hours cleaning a friend's house.

I've never had luck actually doing a "let's get together and do X chore" event, though. Well, aside from homework and study sessions in university. It'd be awesome if this app helps with actually doing that for everyday life tasks.


Thanks for sharing, I've signed up for the waitlist. I never knew there was a name for this concept, but I realized that I've been doing this for a long time. I've set up some pair programming sessions just so I could get the motivation to work through a boring or difficult task. I'm a solo founder who works from home, so it would be good to spend some more time with people as well (even if it's online.) I'm not sure if I'm ready to go back to a coworking space though. Maybe after I get my next booster shot.

I've also noticed that weekly accountability meetings don't seem to work well for me. I joined a group where we would meet once a week to share our progress and discuss goals for the next week. I realized that this didn't seem to help with procrastination or focus, and I was just feeling additional pressure and guilt when I wasn't able to achieve my goals. Working with (or next to) someone in real time is a very different experience.


Thanks for signing up! We definitely do the same when it comes to pair programming the boring or difficult parts.

You make a great point about the real-time experience. We think the key component for why body doubling works is because of that real-time aspect. I've tried a variety of journals, to-do lists, calendar reminders, etc. all for naught. But once someone is there, a switch flips and it's almost effortless.


Isn't that a bit like rubber duck debugging ?

I also find pair work to be very special on many levels (intellectually, emotionally, logistically). I wish there was more theory on this.


Exactly! Pair programming is definitely within the realm of tasks we have in mind for Double.


I didn't know about body doubling so in the mean time let me ask you some things:

- do you consider the friendly competition aspect (seeing someone do a task revives inspiration, desire, confidence, and motivation to do it; when the other is tired, you feel happy to fill the role)?

- the emotional bond from sharing: sharing any task (even just carrying a load) often forms a strong bond, through pain and success

- the intellectual benefits: a bit like the first point, but searching for solutions gets much, much cheaper with a peer. You can validate points of view / theories, and feel less emotional pain from doubts if the other doesn't know either.

I had more topics but I forgot


These are all amazing points.

In my opinion, they all fall under the category of a body doubling effect. Your second and third points strike a particular chord for me as well. I wrote a bit about why my cofounder and I are creating Double rather than doing it alone. In essence, we are cofounding because seeing the other person also trying their best and going through the rough parts with me drives me to keep going. (It's a quick read here: https://doubleapp.xyz/blog/dont-sleep-in-the-mud ).


Amazing. Have you ever considered the effect at large? In education or the workplace?

I see so much misery that could be replaced by joy, insight, friendship..


Absolutely - one of the most common forms of body doubling that a lot of people have experienced has been studying with others. An emerging trend has also been, as others in the thread have linked, "study with me" videos.


Interesting. A lot of the study groups I joined dried out. But maybe there are better ways.


I have no discipline of the superego variety— but when I realize I will let someone else down, I become motivated and focused. I’m learning Linear Algebra now by working with a self-described ADHD colleague. We are 150 pages in and actually doing the homework.

I wrote about an approach called “Bold Boast” in my book on coping with my particular mentality (which led me to quit high school, but also led, a few years later, to managing a team at Apple). I feel like everyone else has a powerboat, but I am forced to sail by the wind.

Anyway Bold Boast means you publicly commit to achieving a learning or teaching goal that involves a presentation or visible product.

(See Secrets of a Buccaneer-Scholar, which I believe you can find pirated online here and there.)


> I have no discipline of the superego variety— but when I realize I will let someone else down, I become motivated

I think that's superegoic, you're doing something because you should, and you'll beat yourself up morally if you don't. Rather than egoic, doing it because you want to for your own gratification.


You can look at it that way. I suppose another way to say it is that I can’t seem to get motivated about letting myself down, but I can about other people.


I don't know if I have ADHD or not, and I don't think so, because I'm sometimes able to focus for hours if it's really an end-of-the-world type of situation. But just having someone passively working next to me is 100% useless.

My girlfriend is always working next to me, and that doesn't prevent me from getting distracted every few minutes.

The only thing it helps with is stuff like cleaning the house, where the problem is not focus but motivation, and we'll be doing the same thing together.


One of the main symptoms of ADHD is nearly self destructive procrastination often leading to hyperfocus at the last minute because of that looming deadline you avoided that whole time.


There’s still a lot to be learned about body doubling and why/how it works so it definitely varies from person to person.

I’m in the same boat that house chores need someone there doing the same thing, rather than just their presence.

One of the things we’re playing around with is different user experiences for different tasks (e.g., computer work vs house chores). It’d be great to hear what you’ve tried when it comes to focusing on work and how it went!


that sounds like hyperfocus, which is absolutely part of ADHD.


TL;DR: ADHD isn't what people think it is. It's worth learning about, even if you don't have it. This YouTube channel is very clear and informative: https://youtube.com/c/HowtoADHD

ADHD is a disastrously wrong name for the disorder. There is no deficit in attention, and hyperactivity is only as common as inattentiveness, and neither of the two is a primary symptom.

Executive dysfunction is the primary aspect of ADHD. Poor working (short term) memory is the second.

We get distracted because we can't choose where to put our focus. We miss important details in conversations because we tried to listen while also thinking about something else, and lost both details to a mental stack overflow. We get hyperactive because we are understimulated and can't be intentionally calm. We don't do homework or wash dishes or finish projects because we can't intentionally get started on anything.


thank you for this! It describes me very well. I'd say having someone next to me even makes me more anxious.


What makes this different from Focusmate.com?


Great question!

Focusmate is a great product centered around virtual co-working and work productivity, whereas Double will not be centered around co-working but around everyday tasks (which can include co-working). As a result, we are designing the experience to be tailored toward those everyday experiences, rather than a co-working product used for everyday tasks.

Essentially, we want to expand the scope of tasks you can double up to do together and provide a better UX for those diverse tasks.


Nice. AFAIK Focusmate is used for everyday tasks, but not much. What do you want to do differently? I used Focusmate for a week or so, but it stopped helping me at all once the novelty wore off (same as every ADHD/procrastination tool so far).


Task-specific Double communities and interfaces will be the primary difference between Double and competitors. For instance, not all tasks are best, or preferable, when you are on video (e.g., exercising/running).

Additionally, we are building Double with the ethos that you don't need to always "crush it", but doing a little bit is okay too. We want to help you build the habits you'd like to see a little step at a time (one of our key inspirations is Tiny Habits).


Are these body double volunteers active part of the ADHD community?

I can see how this can set you apart from the competition as you have already an active and enthusiastic members among the community doing their best to help each other since they share the same background and life experience.


Double will be matching people with others that are performing similar tasks. In this way, we will be able to foster communities of people that have similar goals.

Down the line, we do have ideas on incorporating “hosting” a Double to help give a sense of service and further reinforce the community aspect as well.


I got it.

Can you tell me why you opted for this esoteric, non-mainstream term rather than a more common one like "companion"?


My sister works as a veterinarian and is experiencing some serious burnout from her current practice (a subsidiary of a larger corporation).

I guess it's pretty common practice for corporations or private equity to snatch up smaller independent practices and then run them like a sweat shop. Her current CEO's resume is a list of Starbucks, Walmart, and a few other corporate roles that are totally unrelated to the medical industry.

We're tinkering with a platform that can somehow disrupt this cycle and add some transparency to the practice structure for the would-be employee.

The current form is kind of a niche Glassdoor; we've also been thinking of it as "online dating between veterinarians & practices." Still playing with the concept.

Anyone have any thoughts on how to take out these massive PE firms that are plaguing the Vet Med industry?

Here's some more info on the problem: https://www.ftc.gov/news-events/news/press-releases/2022/06/...


My wife and I are suffering personally from this. I'll help for free.

I'm convinced Vetmed will follow human medicine's history. Currently, the future of human medicine is small clinic subscription medicine without insurance. Apps like Roo could help with temporary staffing issues.

Let's talk: https://www.linkedin.com/in/nicholascgilpin


Connected via LinkedIn. Excited to talk!


What's wrong with opening your own practice?

Investors can only invest in scalable, replicable businesses. Veterinary practices are intensely personal, often specialize in different species and breeds, and can develop unique experimental treatments. The cost of labor is ridiculously cheap (people become Vet Tech's for love, not money), and buildings need not be in commercial centers, so capital costs are minimal.

Honestly, veterinarians are really in the best possible profession for maintaining independent practices. If they can't resist investor takeover, who can?


> The cost of labor is ridiculously cheap (people become Vet Tech's for love, not money), and buildings need not be in commercial centers, so capital costs are minimal.

This is absolutely not true and it's a big issue in the field currently (my wife is a Veterinarian). This kind of thinking is also what is causing a lot of mental health issues in the field.


I'll echo that. Also being married to a vet it's insane how little very qualified and experienced techs are being paid.


How is that in disagreement? The cost of labor is indeed cheap.


It's often referred to as compassion fatigue. Just because you have a love for something doesn't mean you should do it for free or cheap. The job also isn't all love and cuddles; there's a lot of having to turn away care or euthanizing because owners can't afford the needed care.


That is a problem that needs solving!

That is what burns the labour out


You are saying two different things here. Are you agreeing with the poster's position that veterinary professionals aren't getting paid what they're worth (aka. cheap labor), or are you refuting it?


I don't believe they are saying two different things. Given the mention of mental health considerations, I would imagine it is not true that people become vet techs only for love and don't actually want to make more money. You can't pay bills with your pride. This is an issue with nurses, among other professions. Yes, they love what they do; yes, they also want to be paid what they're worth to society. It's diminishing to think otherwise.


> The cost of labor is ridiculously cheap (people become Vet Tech's for love, not money)

I guess the question is then whether labour is abundant/available. Even if there are people willing to work for little, are there anywhere near enough?


All my vet friends need this. Veterinary medicine suicide rate is pretty high.


> Anyone have any thoughts on how to take out these massive PE firms that are plaguing the Vet Med industry?

Have you considered forming / joining a union?


Vets have crazy high margins, and thus salaries, so from an outsider perspective perhaps the market is ready for some more competition.


What are you considering to be a crazy high salary? Family doctors (some of the lowest paid doctors) earn double the reported average of a vet in the US. They're the only business I can think of that have people with doctorates providing medical services to other live creatures from a business perspective.

Also curious on what your source is for their high margins since you mentioned you're an outsider?


Those savings don't go to you, at least not the majority of them. They go to the Starbucks CEO who turns the industry into a sweatshop.


Ok, but that is a wider problem. At least vets have better salaries than most other employees in the same business-construct.

Perhaps they should start a union, like the rest of us?


Building a platform that accelerates the construction of protected bike lanes and car-free streets worldwide by connecting citizens, advocacy groups, cities, and urban planners: https://twitter.com/betterstreetsai/status/15705341894974832...

(Backstory: Started a Twitter account in July (twitter.com/@betterstreetsai) posting DALL-E-generated street transformations, it immediately took off, got lots of press, etc. and made me realize how huge the demand for this stuff is.)

Feel free to send me a DM on Twitter if you're interested in helping!


NotJustBikes channel on youtube would love this stuff


He's definitely seen the account, we made one of his hometown ("Fake London," Ontario) and he retweeted it: https://twitter.com/betterstreetsai/status/15541736198734520...


Hm, have you talked with the bike lane uprising folk? I think you'll find that these problems are way, way more political than they seem. Good luck, though.


I've run large bike lane/car-free streets campaigns before [1] and have talked with a bunch of activists from groups all over the country, so I have a good deal of experience here.

You're right that these problems are political—in fact, they're entirely political!—but the reason they're so political is because there's no existing platform for capturing latent + kinetic demand, so electeds (even the ones who want better streets—we've heard from many of them on Twitter) have absolutely no idea how much political support they have—and they need that support to take bold action. We're solving that problem!

[1] https://bikeportland.org/2020/04/24/grassroots-push-emerges-..., https://bikeportland.org/2020/05/20/its-official-pbot-consid..., https://bikeportland.org/2021/02/02/support-builds-for-bike-...


FWIW, I consider Bike Lane Uprising an org/platform that's already capturing the sort of latent/kinetic demand you're talking about: they've been able to take their work to electeds, who in turn take action on creating new bike lanes, fixing road damage, raising accountability for companies that park in bike lanes, etc. The emotion they work from is discrete anger and frustration, which tends to ignite action from a bunch of folks making calls to electeds and such. If you can find a way to bring out that raw feeling on top of what I personally feel as nostalgia for the future in what you're building, it can almost definitely lead to some great stuff. Just don't forget about the politics :)


Yep, absolutely! Our plan is to integrate existing local advocacy orgs into the platform—so we automatically benefit from their political connections + bandwidth, while they benefit from our tools + reach. Perfect win/win to Get Stuff Done.


Volts podcast had the dude who did bike lanes in the East Bay on. It was an interesting interview, directly relevant to this space.


DALL-E also can transform American suburban shopping centers into mixed-use urban attractors.

https://aixd.substack.com


As always working on search.marginalia.nu.

Search itself is a fractal of interesting problems. Haven't had much time to write about it lately, but I've pretty much doubled the size of the index and re-written a lot of the query logic to make it much better, faster, and more accurate. Will do a write-up eventually, since it may be relatively explainable without getting so far into the weeds that the audience dwindles entirely.

I keep having breakthroughs that make it in one way or another better, but as soon as I do I find something new that could be improved.

Kinda bonkers it's been possible to build this alone and run the entire thing on what amounts to a souped up PC :P


How big is your index and how many sites do you cover?


1 million websites, a bit above 60 million documents in the index; the crawl is a couple of hundred million but a lot of it gets filtered out for various reasons.

The crawler itself is aware of 470 million URLs.

I've actually had it up to 50 million before, but that was a lot noisier data with fewer keywords per document. The current 60 million is significantly "bigger" than the old 50 million. Index size is not actually a great metric for how comprehensive a search engine is. A small index with good signal-to-noise ratio is much more useful than a large one where 95% is chaff.

100 million is my current goal. I think that's about what's doable on my current hardware. It also gets increasingly unwieldy to deal with the data. I've already got processes that require several days non-stop computation.


For sure, a large index by itself doesn't mean anything. I was more curious about the size on disk and how you manage it on a single machine.

Also curious now, why you say half a 470m URLs? :)


Size on disk is like 300-400 GB, I think. Fairly manageable. I think it would require significantly more hardware with a multi-node approach. Locality is hella efficient.

I accidentally a word while editing the sentence.


I really appreciate your work.


I love this :)


I got deep into color contrast algorithms. Human vision is a rabbit hole indeed.

You would think this is a well-established field. But then I found claims in online discussions (including at the W3C) that were obviously wrong on a mathematical level. The more I looked, the more the arguments fell apart. So instead of reading online discussions, I concentrated on reading the actual formulas as well as research on the topic. I now believe that the reasons people cite for choosing one algorithm over another are often completely bogus.

I tried to bring this up at the W3C, but the response was anything but friendly. Not sure what I did wrong. I still think that I have something valuable to contribute, but I am not sure how.

If you are interested in more details: http://tobib.spline.de/xi/posts/2022-09-10-contrast-algorith...
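For anyone who wants to play with this, here's a minimal Python sketch of the WCAG 2.x contrast ratio (relative luminance with the spec's sRGB linearization). It also illustrates the equivalence point: any strictly increasing transform of the ratio (like log) preserves the ordering of contrasts, so every threshold-based pass/fail decision can be reproduced at a rescaled threshold.

```python
import math

def srgb_to_linear(c: float) -> float:
    """Linearize one sRGB channel in [0, 1] (WCAG 2.x piecewise formula)."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an 8-bit sRGB color, in [0, 1]."""
    r, g, b = (srgb_to_linear(c / 255) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), 1:1 to 21:1."""
    lo, hi = sorted([relative_luminance(fg), relative_luminance(bg)])
    return (hi + 0.05) / (lo + 0.05)

white, black, grey = (255, 255, 255), (0, 0, 0), (119, 119, 119)
print(contrast_ratio(white, black))  # ~21, the maximum

# A strictly increasing transform never changes which of two pairs
# has more contrast, so thresholds carry over unchanged:
pairs = [(white, black), (white, grey), (grey, black)]
ratios = [contrast_ratio(a, b) for a, b in pairs]
logs = [math.log(r) for r in ratios]
assert sorted(range(3), key=ratios.__getitem__) == sorted(range(3), key=logs.__getitem__)
```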


Your blog post makes lots of sense if all you care about is one threshold: contrast must be above X for readability. Then the equivalence-under-monotonic-function makes total sense: you literally don't care what the function does above or below your threshold. This extends to if you pick N threshold points (text must be above Contrast1, contextual borders between Contrast2 and Contrast3, etc.) by first looking at the function and then "eyeballing" those thresholds. But is that the best we can do? The difference between (Ymin / Ymax) and log(Ymin / Ymax) starts to matter a lot as soon as you start talking about "twice the contrast" or "20% more contrast", doesn't it?


Sure, perceptually uniform contrast in the sense that you can talk about things like "twice the contrast" is useful. If I had to design a modern contrast algorithm, I would try to make it perceptually uniform.

But all that is irrelevant in the context of WCAG (and most other contexts). It literally only cares about thresholds.


Great write-up. Color overall is a rabbit hole of a topic. I have been looking into color matching, delta E, etc., and it has been a surprise to realize that the variance of human vision means the field of colorimetry is as much an art as it is a science.


I've been working on a programming language for UI designers.

To put it simply, I think the trend towards no-code/low-code is misguided. The assumption behind these products is that code is slow, difficult, and expensive. I disagree - I think that "code" is merely a formalized written expression of what you want. It's actually the most efficient, easiest, and cheapest means to solving digital problems.

I believe anyone can write code if it's specific enough to a domain that they understand, so what I'm trying to do is create a hyper-domain-specific language around designing UI components. The goal is to have a platform agnostic base syntax, with "dialects" that can extend the language for any current or future platforms.

The idea is that designers could write in this language, and developers could build tools that will transpile this code into a consumable format for whatever system they're building for.

I have a very rudimentary demo site that's like halfway built: https://matry.design

edit - here is a simple example of what a component might look like in this language:

  component Button
    variants
      color bg: #007BFF
      text label: Click Me
      
    elements
      shape ButtonContainer
        text ButtonLabel

    style ButtonContainer
      fill: $bg
      padding: 10px

    style ButtonLabel
      content: $label
      font-size: 16px


I love the idea of this, but I just learned about xstate.js.

It was a eureka moment for me. Why was UI design such a mess? It’s because the UI vocabulary came from publishing and graphic design, but UIs are not static and describing them in that way will always fall short.

I’m convinced that any UI language needs to incorporate a visual state diagram editor to really make a dent in the space.

UIs are not fixed objects, but responsive objects reacting to user intent.

Does your language incorporate anything like this?


One of the premises behind the language is parametric rendering. So nothing described with Matry is static.

The difference, however, between Matry and something like xstate is that the actual state is intended to be left to developers, because I’m trying to create an interface that allows designers to just focus on what they need to focus on.

So take something like dark mode. A designer might allow for a Boolean parameter that determines whether a component renders in dark or light mode.

But as to whether the browser supports it, or whether the user has that mode set in their system preferences - that’s the developers responsibility.

That doesn’t fully answer your question but it’s a complex topic so hopefully that gives you an idea as to where my thoughts are going.


Designers absolutely need to focus on dynamic issues and state issues. Not all state is exclusively the developer’s domain.

There’s language, locale, time of day (which you mention), accessibility, screen resolution, and those are just things off the top of my head that implicitly exist as state in a designer's head.

Then there’s the whole idea of transitions which are explicitly about state and in designers domains.

You can’t tell me designers don’t care about transitions. It’s developers who usually don’t care about that state.

And that last bit is my point. Let designers handle the state they care about. UIs will get better as a whole with it.


No what I mean, is that they only care about the effect of the state on the UI.

There’s a difference between determining the state, and deciding how that state affects the pixels on the screen. The former is the domain of engineering, and the latter is the domain of design. I’m not saying at all that designers don’t care about state.


I disagree. If the UI person could handle a hover-over by defining the state themselves then they and developers would both be happier.

The developer wouldn’t need to implement yet another boolean for something trivial and the designer wouldn’t need to waste time prodding a dev to finally get around to implementing it.

That state has nothing to do with logic or state about the functionality of the program. It’s state whose entire purpose is to control a bit of the UI.


Kind of. A simple hover, yes. But even then, there are some interactions that are fairly complex, and require in-depth understanding of the event model in order to implement correctly. Like knowing which events bubble and which don’t, for example.

I hear you though, it’s definitely more of a chore for devs to have to define all that stuff. But on the other hand, the UI would otherwise just be done by the time of handoff, which IMO would outweigh those cons.


That’s why I like xstate’s visual state editor. Engineers and designers can both reason about and modify it in tandem.

I think most event models are conflating things much more than they need to be. By having clear point where the data model interacts with the UI model both sides have better clarity. Yes there are complex interactions, but designers also need to understand the model interactions to design the UI correctly.

If you have teams working on a single project and not communicating you’ll have problems no matter what tools you’re using.


Having worked a bit in a conceptually similar product, in my opinion the real problem is how you define the layout of your UI, not the individual components.


There is a sweetspot for nocode where you don’t try to be too general purpose. Concrete examples would be Excel, IFTTT, Airtable, and many of the startups that convert those into apps.

The sour spot is things like Bubble which, while being a fine product, suffers from "ok, now I'm really coding, but with a clunky UI instead of a text editor", a problem inherent in making universal no-code solutions.

With this in mind I think layouts and connected components should be provided to non coding users as templates. E.g. a site where you log in and can show notifications etc. is a single template. You then extend and style that.


Yep. And I’m still on the fence about how to tackle that, but I’ve got some ideas.

My favored approach is to use what I’m calling a “this goes here, that goes there” syntax.

The idea is to give designers a conceptual framework that mimics how they understand layout, and allows them to describe positioning using a syntax that feels as close to natural language as possible.

Much easier said than done, of course. And there are good arguments for allowing the platform dialects to handle those more complex features (in addition to things like scrolling and animation).


My first impression is that this looks a lot like Kivy's "Kv Design Language": https://kivy.org/doc/stable/gettingstarted/rules.html


Ah interesting, wow it does look very similar. I’d heard of Kivy but never looked into it.

The language syntax is in fact inspired by Python; mostly because I tried to remove as many extraneous characters as possible.


I ended my career in June, and I am attempting to entirely reinvent my professional life at 36. It's interesting going through this sort of open ocean search having a little bit of savings and stability, but the stakes feel higher (emotionally) even though, rationally, I am in a better position to take on risk than when I was young and completely broke.

The hardest part of it is actually deciding what do I want from the time I spend working. Many of the folks I know who do interesting work are either woefully underpaid or chronically overworked.

I'm hoping I can thread the needle on that one.


I can so relate :). I left my position as a principal engineer at a FAANG/MAAMA/etc. company in 2020 with a general direction to try to make a difference in the education space. It has been maddening trying to live up to my own expectations.

Good luck!

p.s. In the off chance that you are interested in the education space, you might find my post in this thread of interest: https://news.ycombinator.com/item?id=32870101


Thank you for the comment, and I commend your mission. Do you have a website or more detailed information about your intended approach? My email is in profile if you would prefer to share privately.


I have built a proof of concept that demonstrates the core mechanics of the approach. The challenge is that my background is in backend development, and so the UX doesn't make a good first impression. I am building a more polished demo, but it is slow going given my lack of frontend expertise.


Hey, I'm in similar shoes, left Microsoft to focus broadly on EdTech, my skillset is in realtime graphics (WebGL and friends) and frontend

Could be fun to connect sometime, even just to make a community of devs in this space!


I would be happy to share notes if you want to connect.


Just yesterday I listened to a podcast about the Hacker School in Hamburg. Essentially introducing hacking to school kids to show what it is all about. https://hacker-school.de/kurse/international/


Just jumping on this thread to say that I'm an elementary school teacher by profession and geek by nature. I would love to connect with folks who are starting projects in this space and could use teacher input.


Hey germinalphrase, I'm doing something similar after leaving my job earlier this year.

I took a bit of time off, then started exploring how people use their time on sabbatical to learn new skills, work on personal projects, raise kids, travel, etc.

After talking with a bunch of people about their experiences, the high-stakes feeling you observed is pretty common - I think people tend to underestimate the emotional aspects of taking a break from work.

My current transition plan is to spend a bit more time studying sabbaticals and turn that into something concrete (a book/guide and coaching), then do some part-time consulting with my professional skills (marketing analytics) to cover my living expenses, à la CoastFIRE.

I think there's a gap in part-time professional work - it's really hard to find long-term, part-time positions that take advantage of professional skills and offer benefits.

If you want to bounce ideas off someone who is also trying to thread the needle I'd love to chat more.


I'm looking for some help with marketing if you're looking for something to do. Email in my profile.


Shoot me an email. I’ll throw a few questions at you.


I wish you great luck on your journey! I did this when I was 42 - studied for a degree part-time, wrote some books, ran out of money and did some minimum wage jobs, etc - and it was a scary-yet-exhilarating adventure which, thankfully, ended well when I finally landed a full-time job developing websites.


Hope sustains me, but if it all falls apart I might just have to commit to punching out that solarpunk AR novel I've been taking notes about for ten years.


I hope that goes well for you. Have you considered taking a part time job or maybe in apprenticeship or internship in something you are interested in?

I imagine that if I was doing what you are describing, I'd enjoy keeping busy while still having most of my day to pursue something new. And having my side job (that pays or not, who cares?) help push me forward into learning something I couldn't easily do on my own.


Since I've only been unemployed since June, I've largely been plowing my time into house projects and taking care of my young son.

I would be very open to an apprenticeship/internship if it allowed me a more direct entry into an exciting position (I have the freedom to do so, certainly); however, the internship pipeline seems almost exclusively focused on undergraduate age young people.


Care to share more details? I'm in a similar situation. Thanks.


Of course. I grew up the son of two professional artists. While they are beautiful people, they are not particularly career minded/knowledgeable - so my direction in life has been dependent on my own designs and devotions.

I spent about five years working on feature films, commercials, music videos, etc. as a Local 600 Loader and Second Assistant Cameraman. I loved the work, but the travel and lifestyle wasn't a sustainable fit for a healthy life (for me).

Before film work, I studied comparative literature in a strong undergraduate program and made a shift into teaching high school English. I taught in rural, urban, and suburban school districts in the upper midwest until June of this year. The work was Good, and I was good at it. I didn't burn bridges or burn out; I just had a belly full and want to open a new door.

I know a handful of people that work in tech sales, PM, infrastructure that have been willing to talk about their work and employers (for which I am very grateful). An important part of these conversations has been noting which paths are inaccessible/poor fit/rough culture.

I'm in talks to accept a position at a small local tech consultancy in a role that is a mix between Customer Success/Training/Project Management - but it's more of an entry point and stepping stone than final target.

All that said - if anyone works in product management, experience strategy, or simply would be open for a chat, please send me an email. I would love to schedule a short phone call.


Academic and Scientific publishing. It's the primary source material of human knowledge. It should be completely open and accessible to everyone with no barriers to access the literature or to add to the literature.

The structure it currently takes - academic journals - made perfect sense in the 1600s when that structure was developed, and it continued to work reasonably well for distributing academic results up into the early 1900s. But then it got privatized. Now there are tens of thousands of academic journals, 80% of them charge a fee for access, and most of the remainder charge a fee to publish. Often thousands of dollars.

Given that science works in the aggregate and you can't know if you have the real answer to your question until you've accessed all of the literature on the topic, this structure is now making it impossible for people not in the institution to even figure out what we know on a topic. And hard even for people in the institution.

The ultimate decider of policy in a democracy is the average citizen. The people who decide our government policy don't have access to the primary source material of science. In the US, a lot of really important policy is set at the municipal level (city, county), and in that case the people actually writing and implementing the policy also don't have access.

If we can develop a web platform that does all the journals do (match papers to qualified reviewers, maintain literature integrity by filtering out bad work, and dole out reputation), then I think it's possible we could draw publishing out of the journals and into the open where it belongs.

I've got an idea for a platform that does all that by crowdsourcing it (the journals are already crowdsourcing it - just manually using an editor). It's basically Github+StackExchange for academic publishing. It works by tying the reputation system to academic fields, so papers are tagged with fields and then reputation is gained and lost in those fields.
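To make the field-scoped reputation idea concrete, here's a toy sketch (all names and structure are my own illustration, not the platform's actual code): review outcomes on a paper move an author's reputation only within the fields that paper is tagged with, so expertise in genetics doesn't translate into authority over, say, economics.

```python
from collections import defaultdict

class ReputationLedger:
    """Toy field-scoped reputation: author -> field -> score."""

    def __init__(self):
        self.rep = defaultdict(lambda: defaultdict(int))

    def record_vote(self, author: str, fields: list[str], delta: int):
        """Apply a review outcome to a paper tagged with `fields`."""
        for field in fields:
            self.rep[author][field] += delta

    def reputation(self, author: str, field: str) -> int:
        return self.rep[author][field]

ledger = ReputationLedger()
ledger.record_vote("alice", ["genetics", "bioinformatics"], +10)
ledger.record_vote("alice", ["genetics"], -2)
print(ledger.reputation("alice", "genetics"))   # 8
print(ledger.reputation("alice", "economics"))  # 0: no spillover across fields
```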

I'm building it now; I'm a month or two out from beginning a closed beta. (Aiming for end of October, beginning of November.)

I wrote up a detailed description here: https://blog.peer-review.io/we-might-have-a-way-to-fix-scien...


As you suggest, journals are a critical area for society.

Since we're on HN, we get to take off our technical hat and put on our product hat -- not the fit-to-customer product, but disrupt-the-industry product.

What are the incentive structures that would destroy this or make it work?

Consider the scenarios today...

- What's wrong with the thousands of similar-sounding journals popping up in China, publishing research that is either unvetted or copied from other journals so researchers can satisfy their publish-or-perish needs?

- How do you deal with reviewer networks?

- How do you get the best of the best, people with no time and plenty of opportunities and money, to provide effective feedback on an avalanche of articles?

- How do you get the support of, without being dominated by, the big research companies and universities?

- Look at the history of reputation systems, from quora to stackexchange to games, on a matrix with the difficulty of determining correctness of questions:

-- Who won the game?

-- Does this code work?

-- Which library should I use?

-- Is this paper correct and valuable?

- What about prediction markets, which try to use money/investment as a measure of seriousness: how would that help or hurt? (and isn't that what we're doing by insisting academics publish for tenure?)

In the spirit of MVP, you might consider a pivot to a much smaller problem: how do professional groups establish and document their standard of care? A team wiki? (gets stale, disconnected from operations) Runbooks? (too stepwise to convey the meaning needed to transform the system) Slack capture? Email? Documentation?

What about capturing the benefit in a change of format, from opaque text/pdf to something like a per-domain semantic web, with connections to empirical methods and findings? That could provide incentives for everyone. Like the professional-group scenario, the killer feature would be something that grows incrementally with multiple authors (as code does for developers).


"The ultimate decider of policy in a democracy is the average citizen. The people who decide our government policy don't have access to the primary source material of science."

Often they do - quite a lot of stuff is open access. The biggest issue with getting people to read scientific papers is the demonization of those who actually do it. A lot of papers are terribly misleading so the moment outsiders start engaging with the literature they conclude a lot of science is junk, and when they act on that they're immediately attacked as "science deniers" ... by the sort of people who don't read papers, because they are convinced they don't need to.

Getting more stuff as open access will only accelerate the decline in trust in science, by allowing more people to see what's going on behind the curtain. That is not necessarily an issue, but just so you're aware of that - it isn't going to lead to lots of people suddenly basing policy on scientific papers. It's going to lead to a lot of scientists getting defunded.


Quick suggestion from a researcher: people will only want to submit if there’s an editorial team with high trust and reputation for their specific area.

Automated reviewer matching won’t be good enough to find decent reviewers.

I would suggest partnering with people in very specific disciplines who want to break away and establish open access journals, but want to focus on the editor and review process rather than the logistics (which you provide).


This is on the roadmap. The plan is to create a "Journal" entity on the site. Editors can create a Journal on the site and then create teams of reviewers which they tag with fields. Authors submitting papers can then submit those papers to community review, or one or more journals, or both. Reviews coming from journal's teams will be highlighted. At any point during the review process, once the journal's team is satisfied, they can mark the paper with their badge of approval. Papers can collect badges from multiple journals.

If the authors disagree with the journals team, or just get tired of waiting, they can still publish at any time just like with community review. For authors who choose both, the journal's team and community reviewers can interact with each other just as in normal community review.

I think it's a way to provide a stepping stone for people from the existing system towards full community review and would provide logistics to those organizing open journals with teams of high quality reviewers.

The plan is to implement it during the closed beta period. I would welcome feedback on this concept as well :)
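For concreteness, the journal/badge model described above could be sketched roughly like this (Python; all names are hypothetical, just to make the badge flow concrete):

```python
from dataclasses import dataclass, field

@dataclass
class Journal:
    """An editor-created journal with reviewer teams tagged by field."""
    name: str
    reviewers: dict[str, set[str]] = field(default_factory=dict)  # reviewer -> fields

@dataclass
class Paper:
    title: str
    submitted_to: list[Journal] = field(default_factory=list)
    badges: set[str] = field(default_factory=set)
    published: bool = False

    def submit(self, journal: Journal) -> None:
        # authors may submit to community review, one or more journals, or both
        self.submitted_to.append(journal)

    def approve(self, journal: Journal) -> None:
        # once a journal's team is satisfied, it marks the paper with its badge;
        # papers can collect badges from multiple journals
        if journal in self.submitted_to:
            self.badges.add(journal.name)

    def publish(self) -> None:
        # authors can publish at any point, with or without badges
        self.published = True
```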


Sounds great and good luck!


I would call it something more specific than peer review.


Name suggestions welcome!

I suck at naming - Peer Review was a working title intended to be somewhat tongue in cheek (similar to StackOverflow). But... because I suck at naming, it is, of course, a little on the nose. So far I haven't come up with anything better.


Unless 'peer review' elicits negative feelings in the target audience it seems like a pretty good name. It relates to the core concept, it's respectable and easily memorable. Seems like good branding but I'm no expert, nor am I the target audience.

To toss a few names in the hat, off the top of my head:

- openresearch
- colab
- openjournal
- stud.io

Probably collect a few ideas and run a small survey for the initial audience


My concern is that "peer review" is a wide term, it would be kind of like calling a startup Web Browser or Network Engineering. Too much confusion.


This is a brilliant idea! If you think the problem is bad in the US, you haven't stepped into spaces like Asia. This is a path that can help the world, not just EU/US scenarios.

May I also suggest you consider the network layer centralization. When you mentioned Github+Stackoverflow, I got the point even before I visited your site.

However, even as you think about an alternative on how we publish, consider that technical questions can have significant political consequences. I am of the view that centralized networks are a major contribution to the situation we find ourselves in today. Distributed/Decentralized/Federated options like ActivityPub may help in your journey in what surely is a great idea. Check Lemmy, for example, on a real stackoverflow option - https://join-lemmy.org/ and Gitea already working on a federated "Github".


If you haven’t heard of it, you may be interested in openreview.net

They operate mostly in the computer science academic community, last time I checked.


Yep! I came across them. There are a ton of different projects and platforms out there trying to tackle this problem. But so far every alternative I've come across can only handle some of the various services the journals are providing (or are trying to sidestep them and redesign the whole process from the ground up, which is an even bigger hill to climb).

For example, there are several attempts to overlay review on pre-prints and open repositories (like Zenodo), but they aren't identifying qualified reviewers and matching them to papers in any way. There are the repositories and pre-prints themselves, but they aren't providing review. There are attempts to build open journals, but they're still taking the journal form, which means they still have lots of manual overhead.

Most of them have varying amounts of traction, but few seem to actually be on a course to replace the whole system.

...it remains to be seen whether peer-review.io will fare any better. I think it has a chance because it does provide an alternative system for each of the services the journals provide. Well -- except one, which is the moderator role the editor sometimes plays. That may prove fatal, but only time will tell.


I love this. Given the other connections to software analogs, any chance you are planning to add plumbing for some sort of citation dependency tracking?

Like if a major paper is withdrawn, or a theory disproven, could you get a list of significantly impacted papers?
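That's essentially a reachability query over the citation graph. A minimal sketch (hypothetical shape; a real version would probably also weight how load-bearing each citation is):

```python
from collections import deque

def impacted_papers(cited_by: dict[str, list[str]], withdrawn: str) -> set[str]:
    """Given a map of paper -> papers that cite it, return every paper
    transitively downstream of a withdrawn paper (breadth-first)."""
    impacted: set[str] = set()
    queue = deque([withdrawn])
    while queue:
        paper = queue.popleft()
        for citer in cited_by.get(paper, []):
            if citer not in impacted:
                impacted.add(citer)
                queue.append(citer)
    return impacted
```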


I’ve been interested in building a similar platform for quite some time. I actually bought the scholar.io domain for that purpose, but never got around to it.


It's open source and I would welcome contributions! (https://github.com/danielbingham/peerreview)

(Although, the first contribution would probably need to be getting the local environment working again in a new context... I've been going fast and taking on some tech debt that will need to be paid down soon.)


Important work! Thanks for trying to do something about it.


I heard a few song lines from a new artist I've discovered. They have me thinking about a problem I want solved, although I'm not actively working on it.

Why aren't there more employee-owned companies? And I don't mean solely shareholder programs, I mean actually owned by every employee and they are paid dividends of profit after all planned R&D costs. Similar to the Alaska oil pipeline bonus. Even interns.

The studies I've seen show that companies structured this way (and there seem to be few) have much higher output, quality, and happiness among staff.

After more research I found that this exists: https://esca.us/ - Wawa is a member, pretty cool.

Talking more to others about the idea I've heard interesting stories. Some machine tooling shops have an actual employee-owned setup. All employees are incentivized to make every product output great and keep profits high, because they all share the profits.

Anyway, I'm not actively working on this. But wanting to shape this idea more in the future. It feels like it could help the current state of America. But perhaps I'm too hopeful and naive :-)

---

Hobo Johnson - My Therapist:

"The idea's about equity, it's about wealth

Most think that it's dumb, you should think for yourself

If I buy a pizza place that makes a definite profit

Yeah, let's say yearly, the owners make $100,000 off it

And if I buy this pizza place for, let's say, $300k

And when the workers recoup in three years, I'll sign it over that day

And wouldn't everybody not see

They should buy their pies from me?

You'd rather have a boss

When you can work democratically?"

---

"Incentives are the strongest force in the world. They explain why good people do awful things, why smart people do stupid things, and why ordinary people do amazing things."

- Quote I have pinned from @morganhousel


There are some employee owned cooperatives, including a few in tech I've seen, but I think they're a lot harder to raise capital for (and thus a lot harder to grow) than either public companies or member owned cooperatives.

How do you fundraise outside of the small pool of employee-owners?


The US Small Business Administration actually offers small business loans to groups of employees trying to get a controlling share of their companies. However, the program is pretty hard to use (like every other government program). https://www.sba.gov/brand/assets/sba/sba-lenders/ESOP_Borrow...

There's actually a bill in congress right now that would, among other things, provide technical assistance and outreach for employees who might be able to use the program: https://www.congress.gov/bill/117th-congress/house-bill/4141...


I am involved with a company called Teamshares that helps small businesses convert to employee ownership. A big part of what makes it work is exposing company financials to all employees in a way that helps connect one's actions with financial performance.

We are hiring software engineers, in case anyone is curious. https://www.teamshares.com/careers


Ownership implies both risk and reward. Those who own anything (a private company or stock in a public company) are entitled to their share of the profits but also must take on the risk of the value going down. How do you convince all the employees of any company ('even the interns') to take on the risks accompanying ownership?


They already do - when the value goes down today, they are laid off, or the bosses do more wage theft than usual


In ESOPs part of your pay is in the form of company shares. Your risk and reward increases as you build up company shares and pays out when you retire.


Maybe take a hybrid approach? The leadership members contribute some investment funding and in turn own portions of the company's profit relative to their investment. You also have hourly/contracted workers who do not have a stake in the company, but have well defined job specs.

My fear in that situation would be 'lazy investors' who donate some money but do no work to grow or manage the company.


> 'lazy investors' who donate some money but do no work to grow or manage the company.

so regular investors then xD


I think people will do that calculus on their own and decide their level of risk tolerance. Typically it takes a lot of trust and communication, at the very least a solid decision-making structure, for people to feel comfortable signing on to such an enterprise.


Most business ventures have risk of failure and loss of capital. How does that work in this model?

If the capital comes from the employees, this loss would be very hard for them and probably would not be appropriate for most workers.

If the capital comes from a bank, is there collateral? If so, where did the capital for that collateral come from? If not, does the bank need to lend to you at an extremely high interest rate for it to be worth the investment?

If the capital comes from investors who have an equity stake in this, how is it different than any other VC? Who keeps the profits when the business is up and running (the employees or the investors)? If the employees, why would the investors invest?


Hobo Johnson is amazing. Not sure about his pizza place idea, but his stories about relationships are so raw.

https://www.youtube.com/watch?v=15nberWl0EU


I am writing a book (The Software Mind) and am bouncing around the next stage of company formation. Democratic companies are such an obvious next stage.


How would this be different from a partnership like a law firm or consultancy? Or are you just suggesting that more firms are structured as partnerships?


Igalia is a private, worker-owned, employee-run cooperative model consultancy focused on open source software. They’ve contributed significant JavaScript and CSS features to Chrome, WebKit, and Gecko.

https://en.m.wikipedia.org/wiki/Igalia


Worker Co-Ops aren't a new concept but I think in the USA it's taboo because it gets in the general vicinity of "Communism" so people are wary. I always thought it was disturbing that Americans can simultaneously preach about their love of Freedom, shouting Freedom this Freedom that and yet subject themselves to an essentially Authoritarian/Dictatorial workplace for 8+ hours a day with no qualms. Apparently Democracy only goes as far as the front door of your office and is dropped off there until you clock out. Quite strange.

As the poster above mentioned, fundraising is a major challenge. It seems like debt is your only option if you want to keep the thing 100% employee-owned. Bootstrapped businesses are of course a thing and can scale to unicorn level, but at a much slower pace than a VC-backed startup.


I'm also thinking quite a lot about this.

TL;DR There are many pieces to our culture and capitalist economy that make this difficult, but somewhat surprisingly to me, also many legal hurdles to formalized democratic corporate structures.

I've taken a few steps to talk to lawyers and contact advocacy/support groups for worker-owned co-ops and things, and there are a number of surprising hurdles for this type of formalized corporate structure.

In particular, whether a member is considered an "employee", an "investor", an "owner", or some combination of the three differs in jurisdictions, often at the _county_ level, not just the state and country level (experience primarily in the US).

US law gives a massive number of very specific rights to investors, making it difficult to pull someone's ownership shares if they leave the organization in some jurisdictions. Additionally, the tax implications for individuals can be extremely complicated.

Talking to a lawyer, they said one of their clients had been working for ~10 years to hire someone across state lines and hasn't managed to do it yet because of the legal complexity (some kind of employee-owned construction company in Northern California that wants workers from Oregon I think).

For the time being, my own structure is on hold, but I'm thinking of simply formalizing the structure in the equivalent of an employee handbook and putting legal ownership into some kind of blind trust then paying members at a rate tied to net income without formally specifying that it's a dividend, but I have no idea how many laws that might be inadvertently breaking.

In any case, Mondragon is an interesting case study, as they are (to my knowledge) the largest worker-owned cooperative in existence at the moment, but workers outside of their home country (Spain, specifically the Basque Country) are not able to participate in ownership and profit sharing for legal reasons.


This is fascinating. If this ever becomes a more serious endeavor, reach out - email in my profile.

Another user posted about Teamshares (https://www.teamshares.com/). It appears to operate similarly: investors buy up companies and give them to the employees. I'm assuming some level of fee is given to Teamshares to continue operating.

Oddly enough.. one of the backers of Teamshares is Collab Fund, where my second quote is pulled from on one of their most recent posts. Small world.


Look into coops.


Tessitura and NISC are two great examples of technology-centric co-ops.

Technically a co-op is usually "member-owned" whereas they were talking about employee-owned. A substantial difference in some ways but not in others.


Worker cooperatives/co-op is the most commonly used term for this type of structure, I think.


I am working on climate repair, specifically the removal of methane from the atmosphere. Although the excess methane is only about 1.3 ppm, it's responsible for about a third of the temperature rise.

We're not building a huge machine to try to suck the atmosphere through a straw; we're doing chemistry in the open atmosphere. Plan to have a pilot running within the next year and within five years be removing up to 100 Mt per year.


I have heard that terpenes are a major source for the production of OH (hydroxyl radicals), which are extremely short-lived, but are responsible for transforming most methane molecules to CO2. Did I get that right? (Also, is my vape pen making a difference?)


Yes, hydroxyl radicals are a primary driver of methane oxidation over land. They are so reactive that they almost immediately participate in some reaction as soon as they are created.

The reactions over the ocean are largely Cl, which shouldn’t surprise anyone who has ever visited the seaside. It’s also quite reactive, but the CH4 oxidation is a little more complex.


That sounds interesting. How do you plan to fund that? 100 Mt a year of sequestration sounds like there would be a fair amount of cost and labour involved.


For the past year I've been paying the bills (this is hardly my first startup) but we're just starting to talk to investors to bring some more staff on and start scaling up. We only need a few million for this stage.

There are people who pay for GHG removal and we have plans on how to expand that. We think it can be self-sustaining.


Sounds great! Glad you’re able to bootstrap it. I ask because I’ve looked into this myself.

The issue I see with carbon sequestration is how the unit economics add up: whether the costs of raw materials, labour, facilities, and scaling R&D pencil out. My sense is that the price of carbon credits is still too low for this. Would be interested in hearing more about your ideas.


Would love to support you on this journey! I'm on the team at Fifty Years, we've backed teams like Noya (carbon removal) and Solugen (Carbon negative chemical manufacturing). Would love to chat! Feel free to email me at peregrine at 50y.com


Wow. I really want your group to succeed! Also that was a clear succinct writeup of what you do.


Thanks!

I couldn’t imagine writing that “I am passionate about transforming the atmospheric methane sector through an exciting and innovative GaaS (GHG-elimination as a Service).”


Is the plan to remove it by converting methane to CO2?


Exactly: methane breaks down naturally into two H2O and one CO2. Unfortunately nature can't keep up with the increased emissions (which are mostly impossible to prevent -- the heating earth is increasing the rate of natural methane emissions as well) so we are giving nature a hand.
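For reference, the net oxidation (skipping the radical intermediates) balances out as:

```latex
\mathrm{CH_4 + 2\,O_2 \longrightarrow CO_2 + 2\,H_2O}
```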


BTW we are looking for MEs, EEs, and a physicist


Working on a better solution to sedentary health and sitting-induced back pain at work. No single posture is the answer, so there is probably an optimal amount of regular postural variation during desk work that wouldn't interrupt focus. That is what a chair should do for you to protect your health, so we're making a smart dynamic chair for standing desks.

Had a cycling accident years ago, became much more sitting intolerant afterward, probably made me quit coding as a career.

Have been testing prototypes for a year; people have done hours-long stretches on a 2-3 minute movement interval, feeling no pain or stiffness at the end. A study showed the same strong results.

Website: https://www.movably.com/ Early user review: https://www.youtube.com/watch?v=hUjkbMc_xBw&t=1s


This is a great idea. On a related note I just read this ~2000s article[0] by an anthropologist about natural sleeping/resting positions amongst humans/primates. An interesting take he had was that having the chest firmly on the ground during sleep was ideal because breathing causes constant motion of your spine and prevents stiffness.

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1119282/


Interesting, that fits one of the lines of research on disc health: the discs are nourished indirectly, mostly through the porous vertebrae, and it takes motion to create the pressure variations that facilitate blood flow. This may be why most active chairs don't move the needle enough; the motion is both limited and usually only triggered when a person is prompted to move by discomfort.


I wonder if a chair that allows muscles to relax for hours on end can be a solution.

I went the ‘standing as much as possible’ route and am thankful to have made the move from a health standpoint.


Really interesting. I too had a bicycle accident four years ago and have been really sensitive to bad ergonomics and spending too long in one position since. Just understanding what makes things worse and what better has been a big challenge. Coincidentally, just today I ordered a Håg Capisco chair, which serves the same purpose of allowing a variety of different positions with a standing desk and also makes it practically impossible to slouch. I'm hoping it'll help.


The Capisco was the last chair I had before diving into this and it actually worked pretty well. I mostly alternated legs, with one standing while the other was draped on the chair. The only drawback was the friction of repositioning and I think the trick is to minimize friction so frequent movement is easy.


I had a similar issue. It started 5 years ago: I was training and got a spasm in my back, and it happened a few more times over the next year.

The spasm would take 10-30 seconds to fade, but it left behind a discomfort in my back, and the discomfort wouldn't go, especially when I approached work. If I was gaming or having fun, the pain would go.

It was chronic and stayed with me for 3-4 years, until I couldn't sit for longer than 5 minutes without my back killing me.

I tried posture exercises: sitting straight for long periods of time with good posture, shoulders back and down, 90-degree elbows, 90-degree knees, monitor right in front of my eyes instead of looking down at a laptop.

But the pain only got worse, and I got depressed, then had to quit coding for 1.5 years.

Until someone here (I think) mentioned a book called "Heal Your Back". I thought it was BS but read it anyway, and I found the cure for my back there. Turns out it's in a totally different place.

I was extremely stressed, I was not taking care of my psychology, and my perfectionism was contributing a lot to the stress and therefore more pain.

It's called TMS, "tension myositis syndrome": basically, when stress becomes real pain that you feel in your body. Now I'm seeing more and more therapists for it; search for Pain Reprocessing Therapy on YouTube.

I hope this helps someone :)


A new kind of Key/Value store. In this architecture, keys and values are stored within two separate 'data objects' which are linked together. One holds all the unique values along with their reference counts, the other contains all the keys and links to their mapped values.

The architecture allows any value to be mapped to one or more keys and any key to be mapped to one or more values (unless an attribute on the data object prevents it). These KV stores can be used to attach tags to other objects. They can be used to form columns in relational tables (i.e. a columnar store). It can be used to create indexes into file contents.
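A toy in-memory model of that linkage might look like the following (Python; this only illustrates the two linked objects and the many:many mapping, not the actual on-disk format or locking):

```python
from collections import defaultdict

class TwoObjectKV:
    """One object holds unique values with reference counts; the other
    holds keys linked to those values. Mappings can be many:many."""

    def __init__(self) -> None:
        self.refcounts: dict[str, int] = {}     # unique value -> refcount
        self.key_to_values = defaultdict(set)   # key -> linked values
        self.value_to_keys = defaultdict(set)   # reverse links for queries

    def map(self, key: str, value: str) -> None:
        if value not in self.key_to_values[key]:
            self.refcounts[value] = self.refcounts.get(value, 0) + 1
            self.key_to_values[key].add(value)
            self.value_to_keys[value].add(key)

    def unmap(self, key: str, value: str) -> None:
        if value in self.key_to_values[key]:
            self.key_to_values[key].discard(value)
            self.value_to_keys[value].discard(key)
            self.refcounts[value] -= 1
            if self.refcounts[value] == 0:
                del self.refcounts[value]       # value no longer referenced

    def keys_where(self, pred) -> set:
        # e.g. every key mapped to a value starting with 'M'
        return {k for v, ks in self.value_to_keys.items() if pred(v) for k in ks}
```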

Each object is designed for parallel access by multiple threads. Only a single block needs to be locked to update any value or key so multiple writes can occur at the same time. The data in each object is organized to find values or keys without inspecting every block. It is very fast and allows both OLTP and OLAP operations on the same data set.

So far, I have used them to attach tags to millions of files and do searches. I have created relational tables that I can query much faster than the same table in Postgres (https://www.youtube.com/watch?v=OVICKCkWMZE). I have created 3D relational tables that can be queried the same way 2D tables have traditionally been queried (https://www.youtube.com/watch?v=AXqvWMmoL1M).

The software is currently in an open beta and anyone can download and try it out on their own data set. https://www.Didgets.com


Sounds kinda like tuplespaces/Linda (which I think are very good, largely unpursued ideas).


It might have some things in common. In my case, the objects are persistent and can be read to and written from disk with ease. The relationship between keys and values can be 1:1, 1:many, many:1, or many:many.

If the KV store is a 'state' column in a DB table of US customer addresses for example, the number of unique values is limited to 50 (if you ignore D.C., Puerto Rico, etc.). The table might have 100M customers, each mapped to at least one state. Each state (except less populated ones like Wyoming) might have millions of rows mapped to it. Many customers might have addresses in more than one state.

The system has to do things like: "Find every key mapped to a state that starts with the letter 'M'" or "Find every value that is mapped to this set of keys". These are the kinds of things a normal SQL query has to do.


What's the practical value of (quickly) knowing which keys share the same value?

"pet":"cat" and "program":"cat" have different semantics.


The 'key' in this case is what the tag or value is attached to, not the context of the value. So in your example, all the "pet" values would be stored together within the KV store just like values in a relational table column. Likewise all the "program" values would be stored together within a different KV store.

So some typical queries might be:

- Find all the photos where "Event" = "Wedding"
- Find all the rows where "State" = "California"

The keys in this case would be the IDs of the photos or the row keys in the relational table.


I'm going back to basics and learning to write a compiler in the dumbest way possible. I'm starting with a basic assignment (a = 3) and reverse engineering what it would take to turn that into a working executable. Then I'll add more features, lather, rinse and repeat. I'm hoping to eventually turn it into a series of blog posts so everyone can see just how little I remember from college.
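In that spirit, the very first step can be absurdly small: pattern-match the one statement you support and emit assembly text. A sketch of what "a = 3" might compile to (x86-64, AT&T syntax; everything here is illustrative, not a claim about the eventual blog series):

```python
import re

def compile_assign(source: str) -> str:
    """Compile a single 'name = integer' assignment into x86-64 assembly
    that stores the value and returns it as main's exit status."""
    m = re.fullmatch(r"\s*([A-Za-z_]\w*)\s*=\s*(\d+)\s*", source)
    if not m:
        raise SyntaxError(f"cannot parse: {source!r}")
    name, value = m.group(1), int(m.group(2))
    return "\n".join([
        "    .globl main",
        "main:",
        f"    movl ${value}, {name}(%rip)   # {name} = {value}",
        f"    movl {name}(%rip), %eax       # return {name}",
        "    ret",
        "    .data",
        f"{name}: .long 0",
    ])
```

Feed the output to `gcc -x assembler -` on Linux, run the binary, and the exit status should be the assigned value.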


This is exactly how Compilers are taught at the University of Maryland. The class CMSC430 (https://www.cs.umd.edu/class/fall2021/cmsc430/) actually starts off with a Scheme (limited subset of Racket) and gradually grows the language to include more features. The first class compiles just numbers to x86 code, followed by arithmetic operations for numbers, building up to higher level features like function calls, pattern matching, and so on. See the notes at: https://www.cs.umd.edu/class/fall2021/cmsc430/Notes.html

This style of building compilers is called the Nanopass style (https://legacy.cs.indiana.edu/~dyb/pubs/nano-jfp.pdf), making it much easier to teach.

Source: I was a TA for the earlier iteration of the class.


I am not a computer science graduate and have always been fascinated by programming languages and how they work. I am currently reading a book called Crafting Interpreters (https://craftinginterpreters.com), which I think is a very good introduction to the topic; plus, it's very practical without getting too deep into theory.


I've built a C compiler and have read the dragon book. I wish I had read Crafting Interpreters before reading Compilers by Aho. It's good, and there's no nonsense. The author knows what he's talking about. Plus it's free to read online. 10/10, would recommend.


I like this a lot; reading your comment makes me excited to read your blog posts. In moments like this I wish there was something on Hacker News that would allow tracking the progress of a project like yours. On Twitter you can just follow people; Reddit has the RemindMe thing.


If you're interested, I created a substack I'll be publishing the first few posts to. You should be able to subscribe there.

https://painfullynormal.substack.com/


For an interpreted language like JS, this project is really nice: https://github.com/engine262/engine262. It's more or less two parts: a parser and an evaluator.


Working on (actually just got my v1 up last night) a price monitoring tool for a bunch of local stores. Trying to prove they are price fixing.


Please please please publish the results of what you find!


I'm hopeful that I can turn it into a public facing dashboard that others can see. It covers the entire state I live in, monitoring 32 stores and about 8k products right now. Already seeing some weird stuff in the data, like a product that changes price every 30 minutes, $30->$50->$30 repeat.

It's a niche type of store (dispensary) and essentially a state blessed monopoly since they limited it to 40 store licenses total. But I'm hopeful being able to show people the data will help with some of it... or maybe I get my door kicked in.
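One simple signal worth computing over that dataset: products where many stores move to the identical price at the same time. A naive sketch (lockstep moves can also just reflect shared supplier costs, so this is a lead, not proof):

```python
from collections import defaultdict

def lockstep_changes(events, min_stores=3):
    """events: (store, product, day, new_price) price-change records.
    Return (product, day, price) groups where at least `min_stores`
    distinct stores changed to the same price on the same day."""
    buckets = defaultdict(set)
    for store, product, day, price in events:
        buckets[(product, day, price)].add(store)
    return {k: stores for k, stores in buckets.items() if len(stores) >= min_stores}
```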


The price change you describe sounds like a poorly implemented price test. I don't think Shopify allows dispensaries to operate on their platform, but that is the type of hack you have (had? it's been a few years) to do to run a price test on Shopify.


That's what I assumed as well, but it's still... off somehow.

The same product is carried at the other stores; each has it at exactly $20... then this one bounces between $30 and $50 all day.

I mean, maybe they are getting suckers to buy it at the inflated prices for both, but I'd be shocked if so. They also carry other products from the same line, and they are all priced normally at $20.


Robots that build large-scale solar farms! My friend and I started working on this last year and ended up starting a company. We moved to the bay area and did YC last summer. We've been spending a lot of time on solar construction sites and did our first demo (part of the material handling system) on a site in March. Now we're working on a robotic factory + our team is up to 5 folks.


Awesome, think I saw your launch post and sounds like an awesome project. How has the experience been with permitting and regulatory issues (not technical)? Is there a valid strategy in building at the border of hard to build states and selling electricity back to them (e.g. building a massive facility in Nevada and selling the energy to CA when they inevitably suffer another crisis to the grid)?


Hey, thanks! We haven't had too much trouble with regulatory barriers. There are some other companies operating in the robotic construction space (e.g. Built Robotics) that have served as good examples for us on the regulatory front.

As for your idea, you do hear about hacks like that in this industry all the time. We're operating a little bit downstream of those decisions, working with construction companies after the site has already been chosen.


How hard is it to found a robotic HW startup? What are the roles of your team? (Mech eng, Embedded eng, etc)

Do you have a PhD? Would you advise getting one (in robotics)?


Would this be easier if you were building solar into the ground directly to use the earth as a heat sink? Or is it cheaper to have floating panels?


Is it more profitable to turn low cost labor into robots? (Honest question) How do you determine that?


So the robots that build panels or that take the panel and mount it into the ground?


Do you have a website already or somewhere else with more info?


We don't have a ton of info available publicly yet, our Launch HN thread [0] and jobs board [1] are the best places to learn more for now (or email me!).

[0] https://news.ycombinator.com/item?id=30780455

[1] https://www.ycombinator.com/companies/charge-robotics/jobs


I'm having a blast working on https://printnanny.ai/ - the embedded computer vision system is cool, plus there are many meaty unsolved problems in the 3D printing universe. To name a few...

- Rolled a Linux distribution (PrintNanny OS) designed for mixing/matching controller software across a manufacturing "fleet".

- Created an atomic upgrade/rollback system.

- Built a WebRTC stack for live camera streaming.

- Lots of fun applying queueing theory to distribute work across a fleet.

I'm learning a lot about how the government contract/bid process works, which I've always been insulated from as an engineer (even when working at companies known for gov support, like Red Hat).

My biggest customers tend to serve an ultra-specific niche, which is always fun to learn about. I love getting to talk to people who have spent thousands of hours cracking the laser tag gun market in Canada, or top RV parts supplier in Pacific Northwest, etc. There are entire worlds-within-worlds out there. =)


That's an awesome one.

I recently had wondered why the only one I knew of was spaghetti detective.

Just a heads up. Trying to preorder gives a stripe error (below) from the orders endpoint.

Copy also says 50% off then shows a $199.99 original price and $149.99 discounted price (25% not 50%).

> Request req_JF7zd2A0rO1rXY: One or more provided prices do not have a `tax_behavior` set which is required for automatic tax computation. Please visit https://stripe.com/docs/tax/products-prices-tax-categories-t... for more information.


Oops, thank you so much for letting me know! I'm trying to roll out Stripe's tax product for VAT this week. If you send a quick hello to leigh@printnanny.ai, I can ping you when that's fixed. I'll honor the 50% discount too - thanks for catching that copy snafu.


Pretty interesting. Have you worked with existing manufacturers that do this like BambuLab?


I haven't done any OEM deals, but I think that's an interesting angle! BambuLab added lidar defect detection to their lineup. Prusa recently launched "Prusa Connect" for remote file management.

One insight that I have is that 3D printing ("additive manufacturing") is just one process among dozens used by small manufacturing shops. The hardware vendors launching software clouds/products are missing the bigger picture, which is that manufacturers need ERP and production schedule management far more than a per-vendor "file cloud".


I am building a "sausage machine" to commercialise Australian university technology and create American companies. It is a one-stop shop for Aussie companies to enter the US market.

I have a Tech Center in Maryland (http://www.autechcenter.com) which offers lots of free services to Australian startups, and we clip the ticket on any capital raised to make money.

Current projects are:

* NASA award-winning Wound Healing tech https://rapair.com.au/

* AI for drug discovery https://www.polygenrx.com/

* 3D printing bone implants https://globalsurgicalinnovations.com.au/

* Water filtration - https://www.starwater.com.au/


When you say to commercialise AU university tech, what do you mean? In particular the university part, as opposed to the startups?


Ever dealt with stuff beholden to ITAR? Asking for reasons I mentioned in my own post in this thread.


I’m working on the expert elicitation of priors. We’re making web UIs to better capture human judgement, especially for forecasting. I work at a data-driven company; sometimes I think a bit too data-driven. I want to capture more intuition for our systems in a rigorous way.

It’s incredibly interesting and very niche. There are a handful of academic papers out there. I think it will be a big deal in the next 10 years.


This sounds incredibly interesting to me!

For years I've had this idea rattling around in my head to build a web UI that shows a graph of some historical quantity, e.g., company revenue, disease cases, global mean temperature anomaly, stock prices, etc. The historical data is shown as a curve which terminates at the present, and there's some empty space for the future. The user then uses a freehand drawing tool to extend the historical curve into the future, based on what they feel will happen. Save this data, and let users view some average(s) of all crowdsourced curves, grouped by submission date. Stretch goal: let users track their own forecast accuracy, and let users view forecasts weighted or filtered by the historical forecast accuracy of contributing users.

Is this close to what you're working on? I have no idea how useful or interesting this would be, but I feel like I'd learn a ton just trying to build it. Alas, I just haven't made the time, and lack the skills to hack something together in a reasonable time.


I’d like to do things in the whole space. What you described that’s very close to what I’m working on is that drawing tool. Imagine you already had a system where you asked people how a number would change over time, but you just gave them 12 boxes with a month name next to each. I’m trying to make a reusable system for improving that experience. Your chart drawing would be an example.


What is an expert elicitation of priors? Sounds like something the FBI would do when interrogating an old timey train robber or something.


“Priors” in the “Bayesian priors” sense. Beliefs represented as probabilities.

“Expert” is that you’re asking people who should know what they’re talking about, not random laypeople.

“Elicitation” is that there’s a whole process to it. You don’t just ask “What are the odds this movie will do $100 million in domestic box office in its first month?”. You ask lots of smaller questions and use visuals to build up an answer.

I also like how “expert elicitation” sounds like we should be doing a good job eliciting!


How would you compare what you're doing to a prediction market and/or the so-called Superforecasters?

ex. https://goodjudgment.com/


What angle are you tackling this from? People submitting maximally informative priors and you do some model selection on the effectiveness of the prior?


Right now I’m working on the elicitation process. To see an example, it’s similar to SHELF[1]. They’ve been very helpful by the way.

We already have so many places internally to put in forecasts. Cash flow projections for example. But, all you get is a text box. I’m working on making that easier and more representative before anything else. Making a web app (when all you have is a hammer…) to start. From there, there will be lots more to do but beyond what I’d like to share publicly for now.

[1] https://shelf.sites.sheffield.ac.uk/


You might be interested in the S-process: https://m.youtube.com/watch?v=jWivz6KidkI


That is very interesting, thank you!


I'm working to reinvent personal websites. My goal is to replace social media with personal sites + newsletters, which are more distributed, privacy-friendly, and calm.

https://postcard.page

The product seems simple, but there are so many opportunities for optimization. There's caching, CDNs, custom domain support, dynamically-generated open graph images, image optimizations, email sending, email reputations, analytics, and more.


I've always thought that plans where custom-domain-or-not is a differentiator between free and paid are unethical. In this case, antithetical, too. (If a sufficient number of people abandon Twitter as part of the your-own-personal-homepage bandwagon, but they make the unfortunate mistake of letting their dart land on Postcard and signing up for a free plan, then you really haven't achieved the stated goal. You've recreated the way she was tethered by name to twitter.com/alice, except the difference is that you're the beneficiary, not Twitter—because she is now held captive at alice.postcard.page.)

Previously: <https://news.ycombinator.com/item?id=21921176>


In this case, it's just cost-based pricing. I'm using Render.com to host, and they charge $1.25/mo per custom domain. I can't afford to give away domains for free.

I'm open to some kind of free trial, too.

I don't offer domain name purchasing - so people own their domain, independent of Postcard.


The effect (see "antithetical") is the same, even if you're just passing on the costs that someone else is imposing on you, instead of capturing it all for yourself. That's one oversight in this response/rationalization. There's at least one other oversight (or possibly dishonesty) at play here.

That Postcard is merely passing on the cost of custom domains doesn't actually explain Postcard's pricing wrt this issue. If someone wants to use Postcard without having to use a postcard.page domain, they have to pay $8 a month, not $1.25 a month.


One product in that space that I particularly like is https://mmm.page/ It's just so much fun and adds all the tools to have fun when building a website. I wish that aspect of old internet would be more present in website builders and by extension in personal websites.


Nice. Feels like you are a “Ghost” or “Bearblog” kind of offering. Always good to have more choice here.


It sounds interesting. People would then own their data. The social part would then be done by signing up to a newsletter on a friend's page? Finding friends will be hard like that. An opt-in registry service could improve search and connecting between friends.


Beaker browser wanted to do this, and it was awesome


Please move away from email to something more decentralized and powerful. Email is the source of so many lost productivity hours and is now wholly controlled by large entities like Microsoft and Google.


RSS feature coming soon! JSON API to follow.

Any suggestions of what to offer instead of email? For instance, it's definitely more decentralized than push notifications.

Every post that gets sent through email also has a permalink on the site. So, you can write it once - then share it across any channels (including social media). For instance, I'm generating opengraph images automatically now to make Postcard more friendly with sites like Twitter and LinkedIn: https://twitter.com/philipithomas/status/1570770548988465152


This is just creating FUD about e-mail.


Revenue recognition accounting. (Is that not interesting to you? It's super interesting to me!)

There's this whole world of rules that accountants follow to properly record revenue. These rules are different for physical products, subscriptions, contracts, and there are even rules for what to do when you bundle these things together. And yet, the only two options for companies to crunch all this data are to use an Oracle product or Excel. (Or get the eng team to write SQL queries, to try and produce a correct result for a topic they don't have a background in!)

To do this, we're really leaning into databases like ClickHouse and DuckDB - taking raw data and very quickly summarizing it into journal entries that you can post during the month. Also using Kinesis to let users stream in financial data in real time, so we can then process that data in batch (Kinesis -> S3 -> Lambda -> DB). In doing this, I've also been writing low-level C and Go code to quickly iterate over a stream of transactions and accumulate results.
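As a rough sketch of that accumulate step (the field names here are hypothetical, not their actual schema), rolling a transaction stream up into journal-entry lines can look like:

```python
from collections import defaultdict

def summarize(transactions):
    """Roll a stream of transaction dicts up into journal-entry lines,
    grouped by (account, period). Field names are illustrative only."""
    totals = defaultdict(float)
    for txn in transactions:
        totals[(txn["account"], txn["period"])] += txn["amount"]
    # One summarized line per account/period, ready to post as an entry
    return [
        {"account": acct, "period": per, "amount": round(amt, 2)}
        for (acct, per), amt in sorted(totals.items())
    ]
```

The real pipeline presumably does this over Kinesis/S3 batches, but the single-pass accumulate-then-emit shape is the same idea.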


Hey I’m a CPA (former big 4 accountant) that’s taught myself how to code. Interested in learning more if you want to share: contact+hn@winstoncooke.com


I’ll reach out! Also looking at your GH, you might be interested in this blog post I wrote https://blog.journalize.io/posts/an-elegant-db-schema-for-do...


I’ve been learning Italian for a few years and one thing that’s frustrating is searching for words I don’t know. Italy has a lot of local languages/dialects that influence regional variants of Italian; it’s common to see dialectal words enter the mainstream Italian language. The largest dictionary, Treccani, is very good but often misses words that are either too recent / vulgar / dialectal.

There are various other big dictionaries and websites specific to different dialects, but I don’t want to have to search on each of them by hand.

I’m currently working on a meta-dictionary, which is a fancy name for unique search engine for all the dictionaries I can find out there. It’s not finished but so far it works great; I use it almost daily.

I started with Phoenix (the Elixir framework) and Svelte to learn them, but I ditched Phoenix because it was hell to work with given how implicit things are there (no imports; just assume that variable is defined "somewhere") and went back to a very basic Python Flask app that serves an API for Svelte.
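The core of a meta-dictionary like this is just a fan-out: query every dictionary backend and merge the non-empty answers. A minimal sketch, where the per-dictionary search callables stand in for real scrapers or APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def meta_search(word, sources):
    """sources maps a dictionary name to a callable(word) -> list of
    definitions. Query all dictionaries in parallel, keep the non-empty
    results keyed by source name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, word) for name, fn in sources.items()}
    # Leaving the with-block waits for every lookup to finish
    return {name: f.result() for name, f in futures.items() if f.result()}
```

Keeping results grouped by source matters here, since part of the point is seeing which dialect dictionaries know a word at all.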

Nothing very fancy on the tech side but it’s interesting to deep dive in the languages of Italy and learn a lot of things about both Italy and linguistics at the same time.


I've been burned by this in the past (also for Italian, but also Japanese).

Do you find this issue persists even if you attempt to search the Italian internet for definitions/usage examples in Italian?

Or is this problem specific to finding information in English about Italian terms?

The difference here is between an n problem and an n^2 problem (if the site expands and wants to accumulate definitions from each language/dialect to ~several supported major languages, for example).

Anecdotally, I've found that once I got to the point where I could read Italian well enough to use Italian dictionaries when I don't understand the word, finding the information on the Italian internet has been much easier. Of course, certainly there have been dialect words and slang I haven't found (which I attribute to some of the words being in a dialect mostly spoken by older Italians who don't use the internet).


I do search for definitions/usage examples in Italian. Thinking more about it, I think most of the frustration comes from the Treccani iOS app, which is very helpful because you have all of Treccani offline, but it hasn’t been updated in years and so it’s missing a lot of what you get on their website.

> The difference here is between an n problem and an n^2 problem (if the site expands and wants to accumulate definitions from each language/dialect to ~several supported major languages, for example).

Yeah; here it’s not an issue because I only search for Italian dictionaries, I voluntarily exclude any sort of Italian<->French/English/etc dictionary because I learn a lot more when I stay in one language instead of translating.

> (which I attribute to some of the words being in a dialect mostly spoken by older Italians who don't use the internet)

Yes. Even in the case of Neapolitan, which is still very active, it seems that all the studies of the language were done before the Web, and so you find a lot of good paper dictionaries in Naples, but pretty much nothing online. Most of the content is found on some random blog where someone listed 300 verbs in Neapolitan or an AltaVista page where someone wrote the meaning of a couple hundred words.


wiktionary


On the command line, sdcv[0] with a StarDict dictionary generated from Wiktionary[1] is a great combo.

[0] https://github.com/Dushistov/sdcv

[1] https://github.com/BoboTiG/ebook-reader-dict


Yes, but it’s not sufficient. It has ~50k words in Italian, which is half of what’s in Treccani (~115k) or De Mauro (~120k). Also, definitions are often of much better quality in the Treccani dictionary.


A postgres foreign data wrapper for PACS so you can send and receive medical images with SQL. DICOM, the medical imaging standard, is from the 80's and you need a bit of domain knowledge to integrate e.g. AI with hospitals. This makes it a lot easier. And with Supabase on top you have a very cool backend for building medical imaging products in my humble opinion. We started out solving the problem of anonymizing medical images and now we want to make it easier for our customers to exchange data with hospitals, both for deployment and for extracting large amounts of data for research.

We are also getting ISO 27001 certified, and we are going to open source the entire bundle of policies and the "information security management system" itself on GitHub, so I am also working on that.


You're integrating AI and data management? Sounds promising for your case. For one, it's highly improbable that we're not moving towards some type of tool which would manage a person's data -- be it a spreadsheet, or just an email. And since your partner is using a spreadsheet-like tool to compose a long-form piece, it only sounds more likely.

Probably socialized medicine nations are not progressive enough to attempt this.


Is there a Git repo, mailing list, newsletter, or website that I can follow the progress on this, perhaps even contribute?!


Sure! There is tealengine.com and you can follow github.com/tealmedical to get notified when we put it up. What are you interested in? The foreign data wrapper or the iso?


I am interested in innovative ways Postgres is used. Been part of the Postgres community for a while, so the FDW, and innovations related to using FDW to make more/new data accessible to customers and applications is my curiosity.


High level: finding the next big meme stocks (which morphed into finding any stock that could increase a lot)

I was pretty late to GME, despite having been on Reddit for a good few years. I started researching what had happened, and where the conversations were taking place, just trying to understand where to place myself to be a part of the next one. I ended up checking out the typical r/WallStreetBets and the other big trading subreddits, but I always felt that I'd need to devote 100% of my time to Reddit in order to capitalise on any of the information. So I built a scraper to do the heavy lifting for me. That worked well. Then I started trying to find ways to identify the good stocks from the bad. That worked even better. So I turned it loose onto many different social media platforms where people discuss trading.

It worked well enough that I ended up quitting my job to develop it and trade full time. There are still improvements that I'm working on but I think that'll always be the case.

A great proof of concept of this is BBBY last month. Out of nowhere it went from ~$5 to $30. There was no real business reason for this, just rumour and speculation. It won't be the last.

I don't know if links are allowed but if you want to follow along, it's https://feetr.io (unfortunately not accepting any more beta users currently, but all data is posted to Twitter until we launch). There's also a leaderboard at https://feetr.io/leaderboard if you're only interested in the big numbers.

The tech stack is 100% lisp. I use CCL locally, SBCL on AWS. Also, hunchentoot will power the backend when it launches. I mention that because a lot of people have this fear that hunchentoot is slow or something but in my case it's pretty rapid. Of course, make sure you're writing fast code, but it's not the bottleneck that I think people think it is.


How are you finding communities to track? Are you crawling and following links to shitty subreddits? Creating honeypot accounts to try and get spammed invites to Telegram/Discord groups? Are you following the different *chans? IRC?

I had a friend that did something for crypto. He would try to map pump and dump efforts to crypto projects. Gave them codenames and followed which ones had the most successful pump and dumps and then get in on them and sell early. It was a lot of work getting the spam early and being involved in all the places. I don't think he ever managed to fully automate it.


Good question!

It's a manual process of discovery. As in, I actually go out and hunt down groups talking about stocks and if it looks interesting, I'll add it into the algorithm. I'll then analyse the data over a period of a week or two and determine whether it's worth continuing. If it is, I move it to production and it'll perform as the rest do.

If a community is not publicly visible (discord, telegram, etc) I'll ask an admin if they're okay with what I'm trying to do. If it's a no, it's a no.

There are those groups that do try and force a pump and dump but I'm against using them. That's ultimately not what I'm after as I prefer organic conversation with genuine reasons to buy. Maybe the reason doesn't turn out to be true (BBBY) but that's the case with a lot of stock analysis anyway. Sometimes the reason a stock makes it to me is due to a pump and dump but there are measures taken to make sure that we're not acting on artificial metrics.

There is also a period of premarket validation which tries to measure the stocks that are deemed interesting each day, so people are (hopefully) not getting sent duds. It can happen, yesterday ADTX only managed 0.58% but moments like that help us fine tune the algo and those happen fewer and fewer.


Wait, I thought the whole point was to find pump and dumps early.

I'm not sure what you're trying to do now. The most interesting (for me anyway) would be to figure out what is going to be on Reddit before it appears by analyzing spam/pump and dump groups etc. If you're analyzing Reddit after the fact, isn't that kinda late?


No, not pump and dumps. Just stocks that a large amount of people are about to buy into. A pump and dump is a coordinated effort, whereas I'm looking for something more organic.

It might be late in that you might make less money than you would if you were in those groups but I can't say that I'm unhappy with what the algorithm is currently finding. It's primarily built for day trading, so you're in and out of trades that same day. There are times you're out within 30 minutes. The goal is near daily compounding, which leads to higher gains than holding for X number of days.

It has been recording data since 9th of August 2021 and is averaging 4.5648% across 340 stocks. Since Jan 1st this year, it's averaging 5.1779%, and I would like to think that I can get that number higher.

Note that these are perfect values, using the open price and the highest price of the day, they're unachievable consistently, I use them to understand how much potential for profit we could've had.
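Based on that description, the per-stock benchmark is the best-case intraday trade, which could be computed as (a sketch, assuming only an open price and a daily high per pick):

```python
def potential_gain_pct(open_price, day_high):
    """Upper-bound intraday gain: buy at the open, sell at the day's
    high. Unreachable in practice, but a fair fixed benchmark."""
    return (day_high - open_price) / open_price * 100

def average_potential(picks):
    """Average the ceiling over all (open, high) picks on record."""
    return sum(potential_gain_pct(o, h) for o, h in picks) / len(picks)
```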


> No, not pump and dumps. Just stocks that a large amount of people are about to buy into. A pump and dump is a coordinated effort, whereas I'm looking for something more organic.

I guess what you call "organic", I call the very end of the tail of a "pump". I believe that Reddit sentiment is largely manipulated. When GME was going crazy, there was definitely a coordinated effort to pump AMC, BB and some other garbage that I forget.

I don't believe that suddenly a group of people will all randomly pick the same tickers to start hyping up. I'm not saying everyone involved is trying to pump/dump those stocks, I think it's a small group of people posting bullshit analysis in random places trying to get other people to buy + hype up the stock to create that "organic" interest that you are capturing.

I reckon BBBY was definitely an organized pump and dump for example, but only a small amount of threads/comments were organized. A lot of people buy into the hype and continue to hype it up themselves.

The most interesting part is identifying which stocks will end up "going" viral, by analyzing who/when/how the first mention of these tickers starts up. Maybe a certain writing style works really well, maybe if some ticker gets mentioned at the same time in 3 specific discord channels then that stock goes on to do well. Maybe if a certain reddit account posts it first it goes well, etc.

Using PRAW to get counts of ticker mentions in some subreddits isn't super interesting, at least in my eyes.


> Using PRAW to get counts of ticker mentions in some subreddits isn't super interesting, at least in my eyes.

Just to clarify, this isn't what is happening, and I agree that's not interesting. It's measuring the response to mentions, and that's the score we save (we call them impressions). It's like that saying, "if a tree falls in the forest, does it make a noise?": if a post goes unseen, will people still buy the stock?

We also don't just release the top X number of stocks that we recorded per day, we're looking for patterns in the data and this becomes the interesting stocks of each day, which then go on to be validated.
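A toy version of that impression idea (scoring a ticker by the audience its mentions actually reached, rather than by raw mention count) might look like this; the field names and weights are made up for illustration:

```python
def impression_score(mentions):
    """Score a ticker by reach, not mention count: a post nobody saw
    contributes almost nothing. Weights are arbitrary examples."""
    return sum(m["views"] + 10 * m["upvotes"] for m in mentions)

def top_tickers(mention_log, n=3):
    """mention_log: {ticker: [mention dicts]}. Rank by impression score."""
    ranked = sorted(mention_log,
                    key=lambda t: impression_score(mention_log[t]),
                    reverse=True)
    return ranked[:n]
```

Under this kind of metric, one widely seen post beats a hundred unseen ones, which matches the tree-in-the-forest framing above.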

> that suddenly a group of people will all randomly pick the same tickers to start hyping up

Of course not, like you said: it's memetic. Sure, it can start off coordinated, but when it goes viral it's self-perpetuating. Those are the kinds of patterns I mention in the last paragraph. You can see these upticks, the strength of them, and try to gauge how sustainable they are.

Here's me semi freaking out about BBBY just before it really ran:

https://twitter.com/0xsmcn/status/1558770400095502336


What do you mean when you say 4.5648% profit over 340 stocks?

Is it weekly/monthly/yearly? Are you tracking buy-sell for each stock daily? Are you taking into consideration loss potential? One loss could wipe multiple days gain.


> Note that these are perfect values, using the open price and the highest price of the day, they're unachievable consistently, I use them to understand how much potential for profit we could've had.

The stocks increase by that daily. You will not reach that value consistently, I urge everyone to take a conservative approach when investing, and to take profit whenever they can.

We don't (currently) invest peoples money for them, so we need a benchmark to understand how well the algorithm performed, and that was deemed the fairest.


Why does this feel like a good use of skills and time to you? I ask because to my mind it feels like a value scraper rather than a value maker, so I'm genuinely curious.


It was earning me a lot more money than a salaried position.


[flagged]


I can understand the criticism that it's not adding value to the world. However, the same could be said of the stock market in general. If they shut that down, I'll happily close Feetr.

In the meantime, someone will be making money from the stock market. Why shouldn't it be you? Or me? Or anyone reading this?


The exact same thing could be said by the people running toxic mining operations with clearly safer alternatives, monopolistic healthcare firms who regularly deny basic access, lobbying operations to eliminate laws that keep kids from getting addicted to nicotine, and plenty else. These are all things your investments are surely contributing to.

Ethics is hard because there's no enforcement like there is in the legal system. You have to think through the consequences of your actions yourself to understand the harms you engender throughout the system you inhabit, and then choose for yourself what level of harm you feel is acceptable to bring onto others for your own benefit.


Sure, I guess. However, the market will not die due to my lack of participation.

It would be amazing if we lived in a world where morality would win out but we don't. My concern is "how do I help the most amount of people?", and that's why Feetr is public and cheap.

You can live the life that makes you happiest but how are you helping people deal with the current economic downturn? There's a food bank near me and the line is much longer than I'd care to describe. It's not even cold yet. And it's going to get much worse.

You can either pretend to help by avoiding the big bad, or you can work within the current framework to try to do as much good as you can.


I give my time to folks in my neighborhood who are food-insecure, and help those who can't or won't drive to fix their bikes so they can still get around to their jobs, doctor's appointments, grocery runs, etc. Why do you ask that question when you have nothing to show in kind?

Further, why do you imply that participating in the market contributes to "good"?

Your reliance on a false dichotomy to make your point shows everything about how little you care about the actual problems we face as a society, if you can only come up with such unimaginative and ultimately selfish approaches.

I don't understand why people snap back like you have, when you have literally nothing to stand for.


I'm saying that helping people make money is good.


I'm trying to improve my search results by building my own Search Engine.

Over the past couple of years, the popular search engines haven't given me great results, and I've had to use a series of different engines, search operators, and dorks to prod out the search results I'm looking for, with mixed success. The front ends have also become a bit too feature-rich for my use cases; I prefer a minimal front end with just search results, but most front ends have links to news, videos, drop-downs, cards, cookie banners, SEO spam, and too much JavaScript. Too much time spent loading and rendering the page, and too many requests, in my opinion. Instead, my search engine only takes one HTTP request per search result page and is only a few KB in size.
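The "few KB, one request" constraint mostly comes down to rendering everything server-side into a single static page. A sketch of that render step (result fields are hypothetical, and the upstream API fetch is elided):

```python
import html

def render_results(query, results):
    """Render search hits as one minimal HTML document: no JS, no
    widgets, just an ordered list of links and snippets."""
    items = "".join(
        f'<li><a href="{html.escape(r["url"])}">{html.escape(r["title"])}</a>'
        f'<p>{html.escape(r["snippet"])}</p></li>'
        for r in results
    )
    return (
        f"<!doctype html><title>{html.escape(query)} - search</title>"
        f"<ol>{items}</ol>"
    )
```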

Right now this project is still in the early stages, and mostly proxies bing's api with some minor adjustments, but if anyone is interested in testing it out here is the link to the search engine [0] and a link to the blog with an RSS feed to follow future updates [1].

[0] https://simplesearch.org

[1] https://simplesearch.org/blog.html


Are you trying to objectively measure your search engine vs others and if so, how?

(And yes, realize yours is currently basically just custom version of Bing.)


Not a big project, but just last week finished a visualization of electricity spot prices in Finland. Seems like a potentially difficult winter coming in Europe, using spot priced electricity and optimizing use would make sense. I didn't find a good visualization, so I made my own.

https://otsaloma.io/sahko/

Open source, GitHub link at the bottom of the page. The API used serves a large part of Europe, so you might be able to fork the code and adapt it to another country with fairly minimal work.


Automatic camera calibration refinement/healing. SfM is a well developed area, and there are well understood methods for gathering correspondences, like SIFT, in order to calibrate cameras. However, it usually requires a separate calibration phase prior to doing whatever you want to do with that calibration. In my case, we're using it to triangulate 3D objects from 2D object detections done in each of our calibrated cameras.

Over a long time period, some perturbance befalls those cameras: gravity pulling them down, things knocking them, etc. That causes our accuracy to degrade over time, so we started looking at ways to continually heal that calibration without needing to do another stop-the-world calibration phase.

Turns out you can use the 3D objects themselves (actually the 2D constituent detections that you use to compute a 3D point) as correspondences instead of doing SIFT matching. The hard part is that you have to preserve the original coordinate frame after doing a bundle adjustment.
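For intuition, the triangulation underneath all this can be reduced to intersecting back-projected rays. A dependency-free sketch of midpoint triangulation for two rays, standing in as a toy for the full multi-view pipeline described above:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Closest-point ('midpoint') triangulation of two camera rays,
    each given by an origin o and direction d in world coordinates."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # approaches 0 as the rays become parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = [o + t * x for o, x in zip(o1, d1)]  # closest point on ray 1
    p2 = [o + s * x for o, x in zip(o2, d2)]  # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

A nice side effect of this formulation is that the gap between the two closest points is a cheap health signal: if it creeps up across many detections, the calibration has probably drifted.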

Computer vision is a ton of fun, and I highly recommend you give it a try if you’ve enjoyed data science stuff, but the incomprehensible “try it and see” engineering of working on deep neural nets turns you off.


Is this anything you're ready to share? I'm interested in something similar in the context of Drones.


I can at least share some research that has been useful for what I'm working on: https://arxiv.org/pdf/2104.08568.pdf

Face reidentification is the approach this paper uses to find corresponding views of the same person, but that's not necessary if you have the ability to match the objects in your scene otherwise. For example, doing 3D triangulation to geometrically verify random matches using RANSAC.


I'm working on a programming environment for making interactive programs. It's focused on addressing how to handle state and network through three main parts. 1/ an embedded functional database as the scope and environment for variables 2/ a managed syncing of database state across the network 3/ the interactivity extends to the compiler itself.

I've been following the Tools for Thought crowd for a long while, and recently started looking into why we still have problems building interactive web apps, and where the issues and complexities arise. It's led to examining the Clojure, Haskell, and Rust ecosystems, various papers, data structures, and databases. I've been writing my thoughts exploring the space at https://interjectedfuture.com

If you're looking for niche technologies, we've got a nice assortment at our podcast The Technium: https://www.youtube.com/channel/UCl_rEKDGBw4myn0uOnPxYsg


Can you share some connections for the "Tools for Thought crowd"? Sounds like a Bret Victor connected thing


Writing automated tests for video games. Why is it interesting to me?

I started to teach myself Unity in December 2021. I've personally experienced benefits from writing automated tests and using CI/CD; therefore, I thought it would be fun to learn about writing tests for 3D-based software. It will be different from the web/CLI-based stuff I usually write.

So, it turns out that the video game community (or at least the online circles I frequent) is extremely against the idea of writing automated tests, for various reasons. This translates into there being effectively no pedagogy around teaching how to write tests for 3D-based software. Content is scarce, and the content you do find is produced by people who obviously don't write automated tests. So, for me, I've hit the books to arbitrage & translate techniques and philosophies into this "untapped" domain.

It's interesting because I've figured out tips & tricks that I'd consider low hanging fruit... For example, if you place a "test" camera in the test case, you can actually see what's going on in the test when it's executed. Or how important it is to clean up every created game object in your test after each test; if done right, you can keep your SUT at origin (0,0,0). Or the importance of "test prefabs" who are effectively mocks of other "real" prefabs...

One innovation that I would like to use/build that I truly consider (((revolutionary))) is this: I want the test cases I write to also automatically (or when tagged with a certain C# attribute) generate the same game objects in an "exploratory (manual) test scene." I think this innovation is the "killer app" that will completely and totally sell the value proposition of automated tests to those opposed. If you decide to build this, please also publish it on OpenUPM and reach out to me.

I have plenty of other thoughts and ideas on this space. I love talking about automated testing; it feels like a very futuristic programmer practice. I hope this post demonstrates why this work and space is so interesting.


I worked at an MMO game company and the only testing we had was manual QA for functional testing and bot clients for server stress testing. Not a great environment for building quality software. (Our max server uptime was about 24 hours.)

The client had a secret level with all game objects for testing and debugging. But reading your post makes me see that we didn’t think ahead far enough: the secret level could have been the foundation for automated unit testing game objects and battling characters.

I haven’t watched them yet, but I added these relevant videos to my YouTube backlog just this week:

* Automated Testing of Gameplay Features in 'Sea of Thieves' (GDC 2019): https://youtu.be/X673tOi8pU8

* Taking CI and automated testing seriously (2018): https://youtu.be/YGIvWT-NBHk


Would like to hear more of what you're thinking! I agree it's important, but it's also secondary to building. Facebook has no tests, for example. Would tests make them more profitable?

But I'm not dismissing testing in Unity. I think there are some simple approaches to test automation that could be done. It would be nice if unit testing made more sense, but on reflection I've often found it's the integration of components that breaks or changes, more than the behavior itself.


Old post so IDK if they still do it, but Riot Games does some automated testing for League of Legends: https://technology.riotgames.com/news/automated-testing-leag...


This talk has some great ideas about testing areas that are considered untestable: https://youtu.be/5_IW7npQk9k


A keyboard-based IDE for a keyboard-based GUI for PCs. Common PC GUIs are WIMP-based (Windows, Icons, Menus, Pointers). They require the use of a mouse and are pretty complicated. I find the use of the mouse in a PC GUI "unnatural", so I set out to design a keyboard-only GUI. It turned out that wasn't as difficult as I initially thought. Then I started writing an IDE for designing apps using this new GUI approach. I call it EngageUI, since it is user-activity based. https://github.com/Rohit-Agarwal-Khitchdee/EngageUI/


I feel like if you advertise this to the various "Mechanical Keyboard" communities they'd lose their shit in a positive way. r/mechanicalkeyboard is a main public one


Thanks. I'll try that.


Well, Emacs already did it. I kinda hate that after moving to IDEA (it's just better as an IDE, though I still prefer Emacs as an editor) I had to use the mouse more.


Emacs is a great editor if you don't want to have to use the mouse and once you get familiar with its command set. What I'm working on is creating a keyboard-input only GUI SDK that enables you to build something like Emacs easily.


The most prevalent type of security vulnerability in all of software is XSS. (If you count CVEs, which is admittedly a bit problematic).

I’m collaborating on an attempt to shift the responsibility for XSS from the developers and towards the browser. The current stage focuses on getting the use case "insert HTML but without any scripts" right.

There’s a public specification and prototype implementations in Firefox and Chrome. You can play with it here https://sanitizer-api.dev/

We think that we’ve made the best of many different tradeoffs, but we’re also keen to hear wide feedback.


This sounds potentially useful, but I'm not sure about the practicality.

It's usually pretty easy to not write XSS vulnerabilities, as long as you know they are a thing you need to think about.

Given that people don't bother to avoid writing XSS bugs right now, why do you think they will bother to use your tool to avoid writing XSS bugs in the future?


Given the new DOM API, it’s also relatively easy to forbid the "bad APIs" using something like eslint (at the source level) or Trusted Types (at runtime).

The hope is to also cater to frontend frameworks enough that they will adopt it. There are already some conversations.


This is fantastic work! One thing I've been trying to accomplish on my own site is embedding others' HTML fragments along with CSS and/or fonts. I reckon there would be sanitization concerns for those technologies as well? I understand that might be outside the scope of your project, but I'd love to hear your thoughts on it.


That’s pretty much the exact use case of a sanitizer.


Am I right that this makes the tradeoff of removing the possibility for vulnerabilities in specific web applications, but creates the (admittedly slimmer) chance for Universal-ish XSS in browsers?


It’s a risk. That’s why there are bug bounty programs and open processes for the specification.

Browsers have a track record of being able to ship security bugs for severe issues within a day or two. Compare that to patching every individual website.


What is your opinion of Content Security Policies? Last time I looked it was praised as an XSS killer.


They are hard to configure and get right. If you overdo it, it can cause a lot of issues for real users.


Trying to teach myself linear algebra and deep learning so I can apply them to some interesting ideas I have about making current chess AI much stronger. I don't want to reveal too much, since I don't want some Stockfish dev to steal my idea, but I'm interested in tips and resources on linear algebra. I always hated matrices...

Interspersed with that I have a huge amount of stuff to implement for my project still(search methods and the like), so I'm working on that as well. It's a lot of code to write and I'm just one guy so progress is slow.

What I'm doing at work would bore you to death so I'll leave it out.


Assuming you have looked at MuZero, since it’s able to beat Stockfish and isn’t specific to chess:

https://en.m.wikipedia.org/wiki/MuZero

https://github.com/werner-duvaud/muzero-general

https://m.youtube.com/watch?v=L0A86LmH7Yw


I am aware of MuZero, but I'm not convinced it's able to beat the latest Stockfish. Stockfish continues to dominate TCEC even after the advent of more advanced MCTS searchers.

But MCTS is on my map as something to look into more deeply.


For others, in case it’s unclear:

- MuZero uses MCTS (Monte Carlo Tree Search)

- TCEC (Top Chess Engine Championship) is a computer chess tournament organized and maintained by Chessdom.
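For anyone curious what MCTS actually does at each node: the classic selection rule is UCB1, which balances exploiting strong moves against exploring under-visited ones. A generic sketch (MuZero's actual variant, pUCT, additionally weights by a learned prior):

```python
import math

def ucb1(child_wins, child_visits, parent_visits, c=1.414):
    """UCB1 score: average win rate plus an exploration bonus that grows
    for children visited rarely relative to their parent."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    return child_wins / child_visits + c * math.sqrt(
        math.log(parent_visits) / child_visits
    )

def select(children, parent_visits):
    """Pick the child with the highest UCB1 score."""
    return max(children, key=lambda ch: ucb1(ch["wins"], ch["visits"], parent_visits))
```

The search loop repeats select → expand → simulate/evaluate → backpropagate, so the visit counts themselves steer later selections.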

___

My understanding is MuZero doesn’t compete in TCEC, but I might be wrong. Given that Stockfish is open source and open source versions of MuZero exist, it would be easy enough to test against the current version of Stockfish; honestly, I’d be surprised if it hasn’t already been done, given MuZero already beat it.


MuZero doesn't, but from what I know MuZero was only on par with AlphaZero, and while AlphaZero beat Stockfish initially, Stockfish has since caught up and continues to beat LeelaZero, an OSS implementation of AlphaZero in TCEC.


My understanding is that MuZero beats AlphaZero, but not by a significant amount. Even if Stockfish is better, at this point it feels needless to me, given it’s highly unlikely a human would have any hope of beating any of them over a significant volume of matches. It is possible that new approaches for masters might be found, but for the average player I’m guessing it’s highly unlikely that anything new will be uncovered. Am I missing something?


You're not missing anything. I just wanna see how far we can take this thing, you know? 4000 Elo seems within reach now. What about 5000, 6000, etc.?

Computer chess is essentially a sport unto itself at this point; beating humans is sort of uninteresting, though I'm sure grandmasters will always find uses for ever-stronger engines. They're mainly used to study openings, and since analysing from the start of the game is extremely complex, stronger engines are always helpful.


Fair enough.

As you likely know, Elo has no built-in upper limit, and chess has on the order of 10^120 possible games, so in theory the maximum Elo for chess has a long way to go, assuming there is even one engine that keeps improving over time to beat its prior version. A huge aspect of performance is tied to the availability of compute; as such, my understanding is that if quantum computers ever became mainstream, even a “dumb” algorithm on a true quantum computer would beat a non-quantum algorithm over a significant number of matches, and the “Elo race” would then only be defined by the progress of the non-quantum algorithms against themselves. Basically, at this point, it’s a race that’s already theoretically over; might be wrong though.


Actually Elo does hit a maximum, because a perfect player doesn't beat an imperfect player 100% of the time: sometimes an imperfect player gets lucky and plays a perfect game.

A number of people have made conjectures about the Elo of perfect chess play, using extrapolations from data on how chess programs scale. I'm not sure what the latest analysis is (things may have changed with the strength of neural-network chess), but IIRC they usually estimate something like 1000 Elo above current programs.
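The intuition is easy to check against the standard Elo expected-score formula; a quick sketch, with illustrative numbers only:

```python
import math

def expected_score(r_a, r_b):
    """Expected score (win probability plus half the draw probability)
    of player A against player B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def rating_gap_for_score(score):
    """Invert the formula: the rating gap implied by an expected score."""
    return 400 * math.log10(score / (1 - score))
```

For example, a 400-point gap predicts an expected score of about 0.91, and even a 0.99 expected score corresponds to a gap of only about 800 points. So if imperfect opponents can still draw a perfect player occasionally, the perfect player's expected score is capped below 1, and its rating can't drift arbitrarily far above the field.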


Elo does not have a maximum limit; chess does. I already said this in my comment prior to your response. I also clearly stated “over a significant number of matches”, not just a single match.

Perfect chess play would not set the maximum Elo; that’s not how Elo works. Elo simply ranks players that play. If a perfect chess engine existed and continued to play, and there was at least one other chess engine that improved by beating its prior best version, both its Elo and the perfect player’s Elo would continue to rise. There’s a long, long way to go to fill in the Elo ratings between the current state of the art and the perfect, all-knowing player that’s aware of all 10^120 possible games.


I think you're naive if you think decades of chess engines development are going to be disrupted by you picking up linear algebra.

I also applaud and envy the craziness.


It's not like I'm abandoning the decades of development. I'm starting with a fork of Stockfish 10 (pre-NN; using too recent a version could make it hard to submit to TCEC due to rules about similarity to existing participants), and I've been digging into the code and algorithms for months now. My approach is an extension of existing techniques.

I actually started with a greenfield implementation, but found myself reading more and more Stockfish code anyway as it's an excellent reference for the state of the art. And I realised it was far too much work to reimplement all this stuff, so that's why I switched to a fork.

And I wanna learn linalg and deep learning anyway.


I'm actually in a similar boat, learning these things to apply to chess, but rather than making a stronger AI, I'm trying to learn and catalog strategies that make learning chess more intuitive. Would be interested in talking more if you are! You can message me on lichess.org/@/v8xi if you play there


That's a really interesting and sorely needed niche! I originally started out thinking about this, but the only ideas I could come up with were strength related, so I switched directions.

Today's engines are hard if not impossible for amateur players to use correctly. I sent you a message on lichess!


Agreed! Hm, I don't see a message (and my account settings shouldn't be blocking it) - what's your username?


An amateur space telescope; I started it as a COVID lockdown hobby project. It’s like a much nerdier “build my own car” sort of project at this point. I’m trying to get it ready for when reusable launch vehicles lower the cost of launching to orbit to a level where it won’t bankrupt me. Surprisingly, the most difficult thing is communicating with the ground. Not batteries, not imaging, not structural engineering… communication!

It’s an absolute nightmare of paperwork and massively expensive fees and hardware. Communication systems and related fees may end up being more than 50% of the entire cost; it used to be over 75%, but I gave up and decided to just make the entire thing bigger and more expensive, because of how ludicrous it seemed to build a $10k cubesat and then spend $135k just for a permit to communicate with it!

It’s been a fascinating and rewarding endeavour, but don’t get me started on how much I hate radio regulations and export control (ITAR). I’ve had multiple business ideas along the way from all this learning, but have done nothing about them because I quickly realised that I’m in the rare realm of needing a legal-expertise cofounder, due to how insane these regulations are. I’ve honestly been told by some companies that how much something costs to buy is a trade secret requiring an NDA, and that I’ll need to submit appropriate ITAR paperwork before they can even send me the NDA, because if other people knew how much it cost, that would materially aid efforts to reverse engineer such devices in circumvention of export controls… this is the insane world of aerospace engineering outside of academic projects and big defence subcontractors.

Still working on it though… it’s not like I’m in a rush; launch costs won’t come down for a while, likely 5 years, once a couple more fully reusable launchers are competing with SpaceX’s Starship. Plenty of time to find solutions to these kinds of problems, like how I have to use software-defined radio to build my own GPS unit, since all commercial GPS units have speed and altitude lockouts to prevent use in guided missiles and military jets.


Curious, what functions specifically fall under ITAR?

For example, have you reviewed something like this page:

https://research.mit.edu/integrity-and-compliance/export-con...


The largest issues I’ve had with ITAR are:

1: I’m building this myself as an amateur, not as a university or company. This seems to instantly put everyone on high alert, and I suspect I’m getting “ITAR” as a convenient stonewalling excuse instead of them just saying they don’t want to talk to me.

2: I’m working with an orbital imaging system which could in theory be used as a very ineffective spy sat (the optics aren’t designed to look at the ground, but it is a telescope)

3: I’m not in America, so I have to deal with the “international” part of ITAR. It’s not like I’m in an embargoed country (I’m in Australia), but a lot of people want to minimise their ITAR exposure, so they just don’t want to work with anyone outside the USA. They are effectively “outsourcing” the ITAR problem to the downstream customers that build systems using their components; since they aren’t exporting, they have a much simpler due diligence process, like just confirming the buyer is an American citizen.

Overall it’s been less a complete dead end and more a minefield of bullshit I have to dodge any time I’m forced to deal with American companies… dual-use technology is fine to buy, until you’re using it for one of those special purposes; then it’s a mountain of paperwork.


If the aperture is below 0.50 m, it should be subject to the US Department of Commerce’s Commerce Control List (CCL) of dual-use technologies governed by the Export Administration Regulations (EAR), not ITAR. How big is the aperture? Did something else qualify it as being under the scope of ITAR?

SOURCES:

https://spacepolicyonline.com/news/satellite-export-controls...

https://www.spiedigitallibrary.org/proceedings/Download?urlI...

* (might try reaching out to the authors of the paper above to see if they have any suggestions)

Personally, I would not make too much of how people respond to you independently trying to complete paperwork. My guess is it’s more likely that vendors of non-consumer systems requiring paperwork don’t want to deal with non-professionals and/or give legal advice.

____

Unrelated: do you have a breakdown of the satcom expense by product type/vendor and/or utility fees/rates and expected utilization?


I'm currently working on tools for better latent space exploration in generative models. The latent space of Stable Diffusion for example, is incredibly rich, and the traditional txt2img and img2img pipelines for accessing it are only scratching the surface of what's possible. We're talking about billions of images compressed (lossily) into 4 GB — yet we're still using very basic interfaces for interacting with this neurally compressed data structure. I'm working on creating other ways to explore latent spaces like this. My current project is just to dip my toes in the water, and it's an animated music video generator built with Stable Diffusion. I hope to show it off here on HN in the near future.


Love this! How? Are there any exciting concepts or possibilities for supporting latent space exploration?


I'm no expert, but this paper[0] has piqued my interest recently. From what I can kind of piece together, it discusses treating the latent space as a manifold from differential geometry, and creates a Riemannian metric for measuring distance between points on that manifold in such a way that the "curvature" of the manifold represents the semantic density of the latent space. This makes traversing between two points in a smooth/logical fashion easy by following the geodesics of the manifold. I think. Maybe. I understand maybe about 10% of what I just said to be honest, and still have lots to learn, which is part of why I find it so exciting.

[0] https://openreview.net/pdf?id=SJzRZ-WCZ
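A smaller, very concrete example of geometry-aware traversal that's common in latent-space work (a generic sketch, not tied to any particular model): interpolating between two latent vectors along a great circle (slerp) rather than a straight line, since high-dimensional Gaussian latents concentrate near a spherical shell, and a straight line cuts through low-density regions.

```python
import math

def slerp(t, v0, v1):
    """Spherical interpolation between two latent vectors (plain lists).
    t=0 returns v0, t=1 returns v1, values in between follow the arc."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    omega = math.acos(cos_omega)
    if omega < 1e-6:  # vectors nearly parallel: fall back to linear blend
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Stepping t through 0..1 and decoding each interpolated latent is the simplest way to get smooth morphs; the geodesic approach in the paper generalizes this by letting the metric itself come from the model.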


A couple of years ago I realized that there weren't any good open source software packages for designing DNA so I wrote one.

https://github.com/TimothyStiles/poly

Goal is to have a suite of packages and databases that can be used to design entirely novel proteins, metabolic pathways, and DNA constructs at scale because right now that software ecosystem just doesn't exist.


*username checks out*


Right now I'm thinking about how to silence my squat rack so it doesn't bother my neighbors when I'm using my punching bag.


Things like they have on this page might be helpful:

https://www.raptorsupplies.com/p/mason/pad-anti-vibration


It's the safety bars and rack pins that are the problem. I'm deciding between covering them in liquid rubber or splicing tape.


I'm working on parallel multithreaded systems. I wrote an M:N kernel-thread to lightweight-thread userspace scheduler, which multiplexes multiple lightweight threads onto kernel threads. I also wrote part of my own libuv.

I also wrote a multithreaded actor implementation in Java. I am currently working on solving the expression problem and thinking of how entity component systems (ECS) can be used for parallelism and solving the problem of deciding how to map data to operations efficiently.

I am also thinking of how to patch running systems without downtime.

See my repository multiversion-concurrency-control for my parallel work https://github.com/samsquire/multiversion-concurrency-contro... and the userspace scheduler in preemptible-thread https://github.com/samsquire/preemptible-thread

See my profile for links to my journal/blog where I write of my progress everyday, specifically ideas4.
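To give a flavour of the M:N idea, here is a much-simplified sketch (a 1:N toy, not my actual implementation): lightweight threads modelled as generators that yield at their preemption points, with a round-robin scheduler multiplexing them. A real M:N design runs several such loops across kernel threads.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over lightweight threads: run each until it yields,
    then re-queue it behind the others until all are finished."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # run to the next preemption point
            ready.append(task)
        except StopIteration:
            pass  # lightweight thread finished
    return trace

def worker(name, steps):
    """A lightweight thread: each yield is a voluntary preemption point."""
    for i in range(steps):
        yield f"{name}:{i}"
```

Running `scheduler([worker("a", 2), worker("b", 2)])` interleaves the two workers fairly, which is the essence of what the userspace scheduler does with real stacks and kernel threads.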


> I am also thinking of how to patch running systems without downtime.

This is cool, I spent a few nights thinking about how this might play out. Unison-lang has a novel take on this.


I think if the system behaviour is a data structure, then data structures can be loaded at runtime and again later on. This is a bit similar to an Entity Component System: entities can come and go at runtime and be handled efficiently.

If you combine this with dynamic embedded scripting, you can have runtime systems that are completely reloadable while running. You can also decide when to serve requests with the new design and old design.

I'm trying to bring the patch to the application itself rather than at the network layer.
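A minimal sketch of the "behaviour as a data structure" idea (illustrative only, not my actual design): keep the handlers in a table and swap the table reference atomically, so requests that started before a patch finish on the old design while new requests see the new one.

```python
import threading

class HotSwapSystem:
    """Behaviour lives in a plain dict; swapping the dict reference is a
    single atomic store, so handlers can be patched without downtime."""

    def __init__(self, handlers):
        self._handlers = dict(handlers)
        self._lock = threading.Lock()  # serialises concurrent patchers

    def patch(self, handlers):
        with self._lock:
            # build the new table fully, then swap the reference in one step
            self._handlers = {**self._handlers, **handlers}

    def handle(self, event, payload):
        # take a snapshot: a request dispatched before a patch keeps
        # using the table it started with
        table = self._handlers
        return table[event](payload)
```

Usage: create the system with one set of handlers, call `patch` with replacements at runtime, and subsequent `handle` calls pick up the new behaviour.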


I started working on a WYSIWYG CMS editor for web applications and websites.

Based on my experiences, I think there is something missing in the market for content creators/editors for web apps (with landing pages etc.) and websites. There are many headless CMSs available, but many lack a simple editing experience.

I want to make it easy for the developer to integrate it into a project (using React for example) - but also make it really easy for the content creator to be more independent of developers - while keeping UI constraints etc.

I chose to build a native iOS and macOS app with SwiftUI that basically provides all the editing UI as native components instead of HTML/CSS - basically an overlay on top of the website that lets you control the components and the website grid/layout. It's mainly a proof of concept, and for me a way to learn SwiftUI :)

There are many Website builders that provide great editing UIs (e.g. Webflow, Builder.io etc.), but I feel a native app, especially on iPad OS could provide benefits and it's a nice challenge :)


Hey! Huge shot in the dark, but I am working on an ML/NLP driven CMS for content creators. Have made great progress, just started on the CMS piece. Maybe we could collaborate?

Email is in profile.


Don't these content creators, by nature, collaborate in cloud applications? I know a native app can do that too, but I still feel people use web-based apps more in this area.


I recently looked up shape matching with OpenCV for a game I'm making.

Basically, you draw black and white images by spawning meatballs, then the drawing is rendered to an image and checked against a reference using OpenCV.

I found gesture algorithms online, but they always take individual strokes into account, and my drawing system doesn't use strokes.

I also have requirements to match the *whole* image but not *exactly*.

So what I'm doing is :

1. Resizing and cropping the input image to the size of the reference image.

2. Matching each shape in the source image to the closest shape in the reference image (if any shape is below a certain threshold, the drawing is failed).

3. Blur the whole image a bit, and count the percentage of overlapping pixels (if over a certain value, this makes sure the different elements of the drawing are approximately in the right position).

This took me like 4 days to figure out, but it was really interesting having to prototype it in Python and translate it to C# afterwards. (Though I sadly ended up buying an 80€ Unity OpenCV library since nothing else was working for me)
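Step 3 above is the part that makes "whole but not exact" matching work: blurring forgives small positional errors before counting overlap. The idea, with plain Python lists standing in for the cv2/NumPy images used in the real pipeline:

```python
def box_blur(img, radius=1):
    """Naive box blur on a 2D grid of 0/1 pixels; a stand-in for
    cv2.GaussianBlur in the real pipeline."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def overlap_fraction(drawing, reference, radius=1, threshold=0.1):
    """Fraction of the reference's blurred pixels that the blurred
    drawing also covers; 1.0 means every element is roughly in place."""
    a = box_blur(drawing, radius)
    b = box_blur(reference, radius)
    ref_on = [(y, x) for y in range(len(b)) for x in range(len(b[0]))
              if b[y][x] > threshold]
    hit = sum(1 for (y, x) in ref_on if a[y][x] > threshold)
    return hit / len(ref_on) if ref_on else 1.0
```

A drawing shifted by a pixel or two still scores high, while a missing element drops the fraction sharply, which is exactly the tolerance you want for hand-drawn input.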


Not quite sure what you're looking for but opencv's contour functions might help you here

https://docs.opencv.org/3.4/d8/d1c/tutorial_js_contours_more...

There's also a method of calculating the similarity of two areas called Intersection over Union (IoU), which is easy and quick to calculate.
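The metric (usually written Intersection over Union, IoU) is tiny to implement; for binary masks stored as sets of pixel coordinates:

```python
def iou(mask_a, mask_b):
    """Intersection over Union of two binary masks given as sets of
    (row, col) pixel coordinates: 1.0 = identical, 0.0 = disjoint."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 1.0
```

For example, masks `{(0, 0), (0, 1)}` and `{(0, 1), (0, 2)}` share one of three distinct pixels, so their IoU is 1/3.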

Although it sounds like you've got it figured out, so don't let me overcomplicate something that's already working.. :)

Would be interested in seeing the game!


I'm building https://royal.io, which helps turn music royalties into an asset class people can invest in. When investors (fans, mostly) "buy" a portion of a song on our platform, they become entitled to a percent of the recurring royalty revenue stream. It's one of the few applications of crypto that I think genuinely makes sense right now, and I especially like it because crypto is not the star of the show, it's just the backend implementation detail


I made a voice simulator (https://locserendipity.com/sim/voice.html) by recording individual phonemes of my own voice. My goal was to create the smallest possible phonemes that can be concatenated into a WAV file. Ultimately, I want to be able to export a WAV or MP3 file as base64 text, enabling the largest amount of voice data to be stored on a CD. Does anyone have further advice on how to achieve this?
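The concatenation and base64 parts are covered entirely by the Python stdlib; a sketch, with a hypothetical phoneme table of raw 16-bit mono PCM clips:

```python
import base64
import io
import wave

def concat_phonemes(phoneme_table, sequence, rate=8000):
    """Join raw 16-bit mono PCM clips, one per phoneme, into a single
    WAV file held in memory."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(2)     # 16-bit samples
        w.setframerate(rate)  # low sample rate keeps phoneme clips tiny
        for ph in sequence:
            w.writeframes(phoneme_table[ph])
    return buf.getvalue()

def to_base64(wav_bytes):
    """Encode the WAV as text so it can be embedded anywhere text goes."""
    return base64.b64encode(wav_bytes).decode("ascii")
```

One caveat for the CD goal: base64 inflates data by about 33%, so you fit the most audio by storing raw binary (or MP3) and only converting to base64 when a text representation is actually required.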


Kind of simple/shallow compared to what some other people are posting here, but I'm working on a Python library to compute numerical expressions with arbitrary precision: https://github.com/rubenvannieuwpoort/reals
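Not this library's API, but the underlying trick such libraries often use can be sketched in a few lines: compute with exact rationals until the result is within the requested tolerance. For example, square roots via Newton's method over `Fraction`:

```python
from fractions import Fraction

def sqrt_to_precision(n, digits):
    """Approximate sqrt(n) for positive n to within 10^-digits using
    Newton's method over exact rationals (no floating-point error)."""
    target = Fraction(1, 10 ** digits)
    x = Fraction(n)
    while abs(x * x - n) >= target:
        x = (x + Fraction(n) / x) / 2  # Newton step: x' = (x + n/x) / 2
    return x
```

Because every intermediate value is an exact rational, the only approximation is the one you explicitly asked for; asking for more digits just runs a couple more (quadratically converging) iterations.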


I'm on a (currently very experimental) project where we're trying to answer natural-language queries using data from an arbitrary database (given hypotheses on the schema).

The natural-language plaintext is fed into a language model that annotates it. That output is converted into an intermediate representation based on relational algebra, which we manipulate and finally compile to SQL.

It's probably not scientifically groundbreaking, but there are a lot of interesting technical bits along the way, at least for me.
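A toy version of that final compilation step, with made-up IR node names purely for illustration (the real IR is richer, with joins, aggregation, and so on):

```python
def compile_ir(node):
    """Recursively render a tiny relational-algebra IR to SQL."""
    kind = node["op"]
    if kind == "table":
        return node["name"]
    if kind == "select":  # relational selection maps to SQL WHERE
        return f"SELECT * FROM ({compile_ir(node['input'])}) t WHERE {node['pred']}"
    if kind == "project":  # projection maps to the SELECT column list
        cols = ", ".join(node["cols"])
        return f"SELECT {cols} FROM ({compile_ir(node['input'])}) t"
    raise ValueError(f"unknown op: {kind}")
```

Keeping the IR as plain trees makes the middle stage easy to manipulate (pushing selections down, merging projections) before anything touches SQL.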


So you're making big assumptions that the database's table names map to important concepts in the user's vocabulary, and that a thorough understanding of the data model can be ascertained from the schema alone? To be brutally honest: that's a silly assumption and likely a dead-end.

(One thing I want to emphasize here is how abysmally, unbelievably, depressingly god-awful the vast majority of database schemas are.)

You wouldn't expect any user-friendly feature to be automatically stamped on top of a database without the involvement of a programmer, and this is no different. You need some sort of descriptive layer between the database and the query engine (or beside it, helping out) that programmers are involved in building. AI is not going to magically solve this problem for you. Figure out what you need described to you and turn that into a meta language that application developers can write to help your engine out.

Frankly I'm wary of any direct link between the user and the database. How do you deal with constraints like "users should only be able to see data associated with their organization in tables X, Y, and Z" if the programmer doesn't tell you? How could you possibly prevent exploits and security problems? Another reason to focus on querying an intermediate layer or relying on a programmer-provided description of what's available.

(Or am I assuming way too much, and "hypotheses on the schema" is already this meta-language?)


I haven't explained this very well.

By "hypotheses on the schema" I mean we are assuming the schema is already in a very normalized form. I fully agree most database schemas are awful, this is a deliberate choice to avoid getting us into that kind of trouble.

The "intermediate layer" role is played by the IR I was talking about in my previous comment. The user would never be able to see anything not provided by the IR because it's not even representable.

Our scope is much narrower, really.


Are you working on the Yale Semantic Parsing and Text-to-SQL Challenge? Are you already on the leaderboard? The topic has always fascinated me and got me learning natural language processing to understand how it works.


We are going for a narrower scope than that. We are working with stricter restrictions on the database schema, where you are only allowed to join on synthetic keys and there is an enforced distinction between measures and dimension attributes.

On the other hand we are aiming for reasonable response times on large scale datasets.


Kinda like Wolfram Alpha?


Sort of a Wolfram Alpha that hooks up to your own SQL DB, yeah.


Problem: how to make HTML5 <canvas> elements responsive, interactive and (importantly) more accessible for end users.

Solution: an open-source JavaScript library to make it easier for devs to achieve the above

Progress: ongoing - possibly neverending

Why: in 2013 I was having real trouble breaking into the industry; a recruiter told me I might have more success if I had a project to showcase my skills. I settled on building a canvas library because I enjoy creating pretty graphics stuff. After I (finally) landed a job I just ... carried on with it because: fun! The drive to make the canvas element responsive/accessible/etc came much later when I was looking for an edge over the competition and realised all the other JS canvas libraries didn't seem interested in solving those problems.


1. Learning systems for patients with cognitive decline

2. AI writing tutors for middle schoolers

3. Paper-digital integration to support equitable education in India

4. Systematic review of resonance in human interactions

5. Computational models of beauty

6. Generating new Socratic dialogues by fine-tuning GPT3 with the complete works of Plato

7. Promoting ADD as a valid lifestyle choice


I am working on implementing a more easily presentable programming language for my implementation of a multi-modal, multi-mode dependent type theory.

I have been using a barebones version, with a minimal and slightly UX-hostile (by most people's standards) interface, as a personal programming language. There was a random comment in a thread asking if I had anything readable for others, so I started getting a GitHub-based 'blog' together and began porting the compiler to JavaScript (which I don't know very well) to have some interesting examples/demonstrations of the type theory and compilation process.

I don’t think the language will be released, but maybe some people will find the theoretical or implementation presentations helpful or interesting.


I'm building an origami application where folds are described in a simple English-like structured language. The editor will be realtime, so you can see how the folds are animated.

How did I end up working on this? There were a few ideas I wanted to work on. Most of them are much simpler and have an easier path to monetization, but this one kept coming back to the top of my thought stack, so I gave up fighting it. It ended up much harder than I ever imagined.

Niche tech? I explored F# Fable and ReScript. Both are really still too young, so they added too much typing complexity to overcome; it's mainly just JS and some TypeScript.


Decentralization and bidirectional real-time communications. I don't mean blockchain. I mean no servers with end-to-end encryption, no third parties of any kind.


I noticed that Jitsi supports peer-to-peer for one-on-one video conversations. I liked the idea of that. Probably a very small stepping stone towards your goal.


How do endpoints find each other?


That is a different but very real problem.

Without consideration for mobile, where IPs rapidly change, it’s currently based on IP swapping, and I recently added a convention to auto-update addresses when a node makes a connection to other trusted nodes.

Later I envision an optional opt-in service that resolves node identity to IP address. Something like DNS but for hashes instead of domains.

Currently I am focused on updating a bunch of test automation and then I want to turn my attention to adding a command shell to the application GUI, which would solve for SSH into remote personal devices in your node list.


If you can tolerate transient centralization, WebRTC might be a good choice, and if you're skilled at it and think hard about the trust model, you could very likely figure out how to multihome WebRTC, which would re-decentralize it.


I'm coming up with ways to stay on task and avoid distractions. So, I have this:

1. Make a list of tasks you need to do.

2. Take on one task and complete it.

3. Take On Me was a cool song.

4. Oh, I should go watch the video

...

I'm not doing very well at it.


I've been slowly learning SAT/SMT, but the materials I've found are very academic. Being almost 20 years out of my CS degree, and this being a rather niche field, it's tough to learn, so I want to continue teaching myself and help others by creating a useful "SAT/SMT for newbies".
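In case it helps other newbies: the core search of a SAT solver is surprisingly small. A naive DPLL sketch (no unit propagation or clause learning), with clauses as lists of ints and a negative literal meaning negation:

```python
def dpll(clauses, assignment=None):
    """Naive DPLL: simplify the clauses under the current assignment,
    then branch on the first unassigned variable."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied by the assignment
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: this branch fails
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied: found a model
    var = abs(simplified[0][0])  # branch on the first unassigned variable
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None  # neither branch worked: unsatisfiable from here
```

Real solvers add unit propagation, watched literals, and conflict-driven clause learning on top of exactly this skeleton, which is what makes the academic material worth wading through.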


I’ve been trying to build a way to allow more scripts to be automatically turned into UIs. They can then be built, shared, and run in web browsers. It’s amazing how far browsers have come, especially with the addition of ES modules.

It’s basically a JavaScript DSL to create little web apps with reactive, understandable code. For example:

const age = input("what is your age?")

text("your age is " + age)

Would automatically create a UI around this.

Can see it at https://trytoolkit.com

I’ve been surprised how much I’ve personally been getting value from this little app.


I’m working on a proof of concept for replacing passwords with public/private key authentication. Inspired by how ssh and git repo access are both more secure AND easier to use than password login.

The interesting parts are convenience features like sharing an account across multiple devices, allowing temporary access to a device, etc.

The goal is to implement everything user side - it would act a bit like single sign on but with no need for any trusted third party. I’m starting with a browser plug-in.


A project idea that I've been daydreaming about for a while (but haven't moved past research) is a cleanroom implementation of the online game Space Station 13. SS13 is built on the BYOND game engine, an ancient, closed-source client/server platform with a clunky, proprietary programming language. There have been a few attempts to rewrite SS13 in Unity or custom 3D game engines, but they've all suffered from second-system syndrome and development stalled. Plus they don't leverage SS13's most valuable asset: its existing game content, a huge library of game scripts and assets still being actively developed.

My vision is a new client with an interpreter for BYOND's proprietary language that can execute SS13's existing content unchanged. Compile the client to WebAssembly so people can play and develop the game in a browser without installing any software. Unshackle the game from depending on hosted servers by making the client peer-to-peer using WebRTC. If the new client dethroned the BYOND client, then incompatible improvements could then be made to its fork of the BYOND language.

The name for this project in my head is "Babylon 13". :)


I think this is a cool idea! But I disagree with your direction. Let's say you do rework everything to be WebAssembly and not bound to BYOND. I'm sure the 300 people that play SS13 would love it. But I don't think that goes far enough, for the effort required.

I think rebuilding SS13 in Unity would overall be a great idea, though there is the problem of how complex SS13 is, which would need to be worked through. But I think with the right baby steps and SOLID design, it could be done.

But I do feel like any effort to rebuild would not be a huge success. SS13 is fun, but comparatively niche. Though as a mobile game there's a huge market.


Yeah. The game is far too niche for a rewrite to be worth the effort. This idea is more of a thought experiment in software migration (and an excuse to play with writing a parser in Rust and WebAssembly for an unusual programming language :)


The most practical approach would probably be writing a new SS13 engine in Unity or Godot that interprets SS13’s existing BYOND scripts and assets while leveraging Unity’s or Godot’s tooling and engine features.


I'm currently working on a file indexer for my pet project tonehub[1] (an audio API like navidrome or audiobookshelf). I thought this was going to be an easy task, but after running into a lot of problems it turned out that this is indeed an interesting problem to solve. Just to name a few problems I ran into:

- Performance: How to scan files as fast as possible?

- Tag-Storage: How to store audio tags as generic as possible?

- File-Watchers: How to prevent fully indexing the filesystem over and over again and only react to changes?

- File-Sources: How to manage multiple file scanners as efficiently as possible?

- Cancellation: How to cancel running tasks, when File-Sources are removed?

- Moved files: How to not lose all customized information (rating, playlists, playcount, etc.) when a file gets moved to another place?

- and many more :-)

[1] https://github.com/sandreas/tonehub


You might find the idea of "cancellation trees" useful. Create a hierarchy of loop data structures, each holding a loop's iterating variable and its limit; cancelling any part of the tree then cancels its children.

If you have a worker thread whose hot loop checks a shared structure for its limit, you can send it a virtual interrupt, or preempt it from running further, by setting the loop variable to its limit.

You can see my C, Java and Rust code of this in this repository: https://github.com/samsquire/preemptible-thread

It can be used to create extremely responsive software without slowing down the system with cancellation checks.

https://github.com/samsquire/ideas4#120-cancellation-trees https://github.com/samsquire/ideas4#99-register-loop
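A loose Python sketch of the idea (the repository linked above contains the actual C, Java and Rust implementations; this toy single-threaded version only illustrates the mechanism):

```python
class CancelNode:
    """A node in a cancellation tree: cancelling a node cancels its subtree.

    A worker's hot loop reads `limit` each iteration; cancellation preempts it
    by snapping the limit down to the current position, so no separate
    "am I cancelled?" flag check is needed in the loop body.
    """
    def __init__(self, parent=None, limit=10**9):
        self.pos = 0
        self.limit = limit
        self.children = []
        if parent:
            parent.children.append(self)

    def cancel(self):
        self.limit = self.pos          # loop condition fails on next check
        for child in self.children:
            child.cancel()             # cancellation propagates downward

    def run(self, work):
        while self.pos < self.limit:   # the only "cancellation check" needed
            work(self.pos)
            self.pos += 1

root = CancelNode(limit=1000)
child = CancelNode(parent=root, limit=1000)

def work(i):
    if i == 5:
        root.cancel()  # cancelling the root also cancels `child`

child.run(work)        # stops shortly after iteration 5 instead of running to 1000
```

In the real multi-threaded version the limit would live in a structure shared with the worker thread, but the tree-propagation logic is the same.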


Thank you. This is indeed interesting... Currently, I use the `CancellationTokenSource` / `Task` concept of C# and I'm pretty happy with it, but this is definitely worth reading.


Would you like to talk more about the things you're working on? I am interested in performance, the architecture of software and solving problems.

One problem I have with computer systems is data liveness and synchronization. You want to react to change in many situations but don't want to do things inefficiently such as polling regularly.

You kind of want to react to change given an event, when that event happens. So you don't need to poll and compare.

You also have the problem of identity, how you map data to other data and keep it in synchronization.

If you can capture events at source, then you could do the right behaviour. But it's very hard to capture events at source in modern computing systems as not every API has a callback or event log mechanism.


Sure, why not... so my pet project is basically for managing my audio files. There already is navidrome[1] and audiobookshelf[2]. They work great so far, but some minor details are kind of annoying...

The first milestone will be providing a basic API for my files - the main components of this will be the database (postgres), the API (C# + swashbuckle + JsonApiDotNet + Websockets) and the file indexer (C# HostedService). All parts except the file indexer are pretty much done, but it is a critical component, because it has to be as fast and correct as possible.

There are multiple approaches to indexing files... A best-case scenario would be an "import" / "move" of files into a library or repository. That way you would always be up to date and always perfectly sorted. Unfortunately, an import would also be a big amount of work, because analysing the files and getting metadata from online sources is... let's say, a huge project. And NOT getting metadata would mean that I cannot move the files while another app manages the metadata. So I took another path - scanning an existing and well-tagged library (that I manage with beets[3] for music and m4b-tool[4] / tone[5] for audio books).

My current idea is to have a file indexer that:

- can run on multiple sources

- runs one full index scan after starting the app

- registers a filesystem watcher for every file source and reacts to events

- To ensure no file source is blocking others, each source is processed in fixed-size batches before moving on to the next file source

- If sources are modified (added, changed, deleted), a decision is made about what to do with already running indexers and registered file watchers (added sources just go to the queue; changed and deleted ones cancel running tasks only for this source)

- All files are hashed (content only) to ensure that a change of metadata or tags will not change the hash; if a file is moved, it will be recognized and updated instead of deleted and re-inserted

The database will contain Tag-Values for every possible value. E.g.

  File.Location music/album/AC_DC/Back in Black/01 - Hells Bells.mp3
  FileTag.Type  Artist
  Tag.Value     AC/DC
That way I can add a fulltext index on the Tag.Value field containing a searchable value while maintaining the FileTag.Type for recommendations.

Let's say I search for `AC/DC`; it will provide an auto-complete for all FileTag.Type values that show a match, plus a generic one for searching ALL values:

  Artist: AC/DC
  FullText: AC/DC 
Searching for 2010 will show:

  Released: 2010
  Title: 2010
  FullText: 2010  
because there are matches in both the release date and the title.

There may be a lot to optimize, but I think my current plan goes pretty well. Let me know what you think about this approach :-)

[1]: https://www.navidrome.org/ [2]: https://www.audiobookshelf.org/ [3]: https://beets.io/ [4]: https://github.com/sandreas/m4b-tool/ [5]: https://github.com/sandreas/tone/


Do file watchers registered in the main thread get called and then enqueue a message to a worker thread for processing?

I am guessing you want to keep the code that handles file events on the watcher and on startup the same code used in two places.

Guessing you scan multiple source directories of files recursively.

Does C# have a thread safe queue object? You could create a pool of worker threads and the file watcher can enqueue events

You could have threads that scan file sources (one per source) which enqueue file names to worker threads which do the work. You could have a queue per source thread and worker thread.

The problem with the file watcher code is that I don't know what context that event runs in, so you would have to enqueue events from the main thread context to one of the worker thread queues.


> Do file watchers registered in the main thread get called and then enqueue a message to a worker thread for processing?

Yes, Producer Consumer pattern. Currently a single thread each, but that would be scalable later. For now I try to keep things simple.

> Guessing you scan multiple source directories of files recursively. Does C# have a thread safe queue object? You could create a pool of worker threads and the file watcher can enqueue events

Yes. There are a few. I use BufferBlock<T> [1], which is pretty flexible.

> The problem with the file watcher code is that I don't know what context that event runs in, so you would either have to enqueue events from the main thread context to one of the worker thread queues.

This is the long term plan. Using events is much more flexible than "polling" the next batch of file items (even if it is in realtime). The architecture seems to work out for this, but I think for now I'm pretty close to a working solution. Maybe I'll start going for it, develop a small UI in flutter and see where there might be problems :-) Currently there is too much "theory" - I would like to see this in practice.

What do you think?

[1]: https://learn.microsoft.com/en-us/dotnet/standard/parallel-p...


You seem to be taking a practical approach and I like it.

You're working on something interesting.

Thank you for sharing!


- Moved files: How to not lose all customized information (rating, playlists, playcount, etc.) when a file gets moved to another place?

Many filesystems try to solve the same problem (e.g., customizing the appearance of files in a particular folder [1]). One solution is adding extended file attributes [2]; however, these might not be supported on all operating systems.

[1] https://en.wikipedia.org/wiki/.DS_Store

[2] https://en.wikipedia.org/wiki/Extended_file_attributes

A slower but more portable solution might be content-addressable storage. Basically, create a directory containing just metadata files for each song. Name each file as the SHA256 sum of the associated music file, and put metadata into it in a binary format like flatbuffers [3] or Cap'n Proto [4] or a plaintext format like TOML [5] if you prefer to make the system human-editable at the cost of lower performance. Even after moving a file to another location, the SHA256 sum of the file should not change.

Note that if you have duplicated files, then there might be hash collisions where you'll have to reconcile metadata differences (or you can just merge the metadata together, keeping attributes with the later timestamp). There are various solutions to this as well like building a parallel directory structure which mirrors your music filesystem, but that can get complicated.

[3] https://google.github.io/flatbuffers/

[4] https://capnproto.org/

[5] https://toml.io/en/
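A minimal sketch of the content-addressable metadata idea in Python, using JSON instead of the binary formats suggested above purely for brevity:

```python
import hashlib
import json
import tempfile
from pathlib import Path

def file_hash(path, chunk_size=1 << 20):
    """SHA256 of file contents; stable across renames and moves."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def save_metadata(meta_dir, audio_path, metadata):
    """Store metadata in a file named after the content hash."""
    key = file_hash(audio_path)
    (meta_dir / f"{key}.json").write_text(json.dumps(metadata))
    return key

def load_metadata(meta_dir, audio_path):
    p = meta_dir / f"{file_hash(audio_path)}.json"
    return json.loads(p.read_text()) if p.exists() else None

# demo: metadata survives a "move", since the content hash doesn't change
tmp = Path(tempfile.mkdtemp())
song = tmp / "song.mp3"
song.write_bytes(b"fake audio bytes")
save_metadata(tmp, song, {"rating": 5, "playcount": 12})
moved = tmp / "renamed.mp3"
song.rename(moved)
print(load_metadata(tmp, moved))  # {'rating': 5, 'playcount': 12}
```

Note this hashes the whole file, so editing embedded tags would change the key; hashing only the audio stream (as discussed elsewhere in this thread) avoids that.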

- File-Watchers: How to prevent fully indexing the filesystem over and over again and only react to changes?

When first loading a directory of music into the program, build a merkle tree [6] of the files' hashes and save them to the content-addressable storage directory described above if they do not already exist. Once indexing is complete, serialize the merkle trees for each directory as well, this way the next time the program starts, you can just load these up and check for consistency of the files in the background. Then set up FileSystemWatcher [7] to notify you when the contents of a directory changes, and update the metadata files and merkle trees accordingly.

[6] https://en.wikipedia.org/wiki/Merkle_tree

[7] https://stackoverflow.com/questions/721714/notification-when...
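A toy sketch of the merkle-tree consistency check in Python (illustrative only; a real indexer would persist the tree and hash actual file contents rather than these stand-in byte strings):

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaf_hashes):
    """Fold a list of leaf hashes up to a single root hash.

    If any file's hash changes, the root changes too, so a saved root lets you
    detect 'something changed under this directory' with one comparison.
    """
    if not leaf_hashes:
        return h(b"")
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

files = [b"track1", b"track2", b"track3"]
root_before = merkle_root([h(f) for f in files])
files[1] = b"track2-retagged"
root_after = merkle_root([h(f) for f in files])
print(root_before != root_after)  # True: one changed file changes the root
```

The tree structure (rather than a single hash over everything) also lets you descend only into the subtrees whose intermediate hashes differ, skipping unchanged directories.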


Wow this is awesome stuff, thank you very much.

My current solution / plan is:

- Get byte offsets of the audio stream only (e.g. 384, 834882948)

- Ignore metadata and build the xxhash only over this part

- To be faster, build the hash over 5MB in the center

- If a new file is indexed, build the hash and look it up in the database

- If one match is found, assume this is the file

- If 0 or more than one, assume a new file

The edge case of two or more files with the same hash has never happened. It has worked out pretty well so far, although it's not perfect.
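A rough Python sketch of this scheme, with hashlib.blake2b standing in for xxhash (which is a third-party package) and made-up offsets and file contents:

```python
import hashlib
import os
import tempfile

def audio_content_hash(path, audio_start, audio_end, window=5 * 1024 * 1024):
    """Hash only the audio stream, sampling up to `window` bytes from its center.

    Tag edits rewrite the metadata regions outside [audio_start, audio_end),
    so this hash survives retagging; moved files are then found by hash lookup.
    (blake2b stands in for xxhash here to stay in the standard library.)
    """
    length = audio_end - audio_start
    if length > window:
        audio_start += (length - window) // 2  # center the 5MB window
        length = window
    with open(path, "rb") as f:
        f.seek(audio_start)
        return hashlib.blake2b(f.read(length), digest_size=8).hexdigest()

# demo: an 8-byte fake "tag header" followed by a fake audio stream
fd, p = tempfile.mkstemp()
os.close(fd)
with open(p, "wb") as f:
    f.write(b"OLDTAGS!" + b"audio-bytes" * 100)
h1 = audio_content_hash(p, 8, 8 + 1100)
with open(p, "r+b") as f:
    f.write(b"NEWTAGS!")          # "retag": only the metadata region changes
h2 = audio_content_hash(p, 8, 8 + 1100)
print(h1 == h2)  # True
```

The real offsets of course come from parsing the container format, not hardcoded numbers.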

And the C# file API provides a FileSystemWatcher, so I get notified whenever there is a change. The first run has to be done as a full scan; after starting the full scan, a FileSystemWatcher is registered and pushes to the queue whenever a file has changed.

It's far from perfect, but it works.


I'm working on developing a modern version (e.g., unicode, box-drawing characters, text coloring, etc.) of Super Star Trek.

It's probably not very interesting to other people, but I find it fascinating.


I'm working on taking the best tools from social networks, contacts app, and crms[0] to help you make the most of your relationships[1] all while protecting your privacy.

It's something a lot of folks struggle with yet it is a trait that is eminently learnable[2]!

[0]https://clay.earth [1]https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3498890/ [2]https://journals.sagepub.com/doi/abs/10.1177/014616721663784...


Curious: how do you measure the performance of your system if it’s private?


Collection of python modules to run a one-person hedge fund. See https://github.com/westonplatter?tab=repositories&q=finx&typ...


I am working on video, but specifically trying to make it easier for anyone to “direct” their own content creatively without having to know all the technical skills for editing. Near to mid term is to try to use AI tech to enable this, though there’s actually a lot you can do even without that. My background is in developer tools (I was product director for CI/CD at GitLab) and so I am also trying to bring in the best from that experience around how to make hard stuff easy and difficult stuff at least possible.

I suspect there will be a lot of companies in the near future that make AI models with permissive licenses easier to use, along different niches (like I am doing for video). It is a really promising area and picking a niche helps with papering over the challenges with general models.


I always wanted to take on a challenge in an old, boring industry and build something completely new. My co-founder, a couple of other guys, and I recently looked into the very boring hiring industry and decided to do something about it. That's how the idea of Careera.io (https://Careera.io) was born. We just started this month and are currently working on an MLP (minimum lovable product); we will launch soon and let you guys know then :) Basically, we want to remove and automate the majority of the very mundane process of hiring on both sides of the market (job seekers and hiring managers). We want to finally introduce a bit of magic into this industry! Wish us luck!


Volumetric displays. The perfect mix of personal challenges - insane data rates, complex realtime software, weird mechanical setups and warped graphics.


Like, this kind of thing? https://voxon.co/

I had to Google it, but since I've recently been diving into volumetric video, the words made me curious.

How's it going? Any particular area you're working on?


Yes, like Voxon and Lumi - swept volume (rather than static, hence 'weird mechanical setups'). It's basically the problem of taking any 2D display of reasonable resolution and moving or spinning it through the 3rd dimension at least 50 times a second. For example, you could spin one or more 32x64 LED panels (like these [0]) around a vertical axis at 3000RPM.

Currently working on both spinning and reciprocating ('flapping') approaches; spinning is nicer in many ways but reciprocating gives a better result.

Have a look at articles tagged 'volumetric-display' on Hackaday [1] for examples.

[0] https://www.adafruit.com/product/5036

[1] https://hackaday.com/tag/volumetric-display/


We're trying to automate tutoring. As a college student, I spent a lot of time trying to fill knowledge gaps with textbooks, Google, or other students. Now, we're trying to provide a streamlined and customized AI Tutor service to help everyone learn faster.

It's a lot more technically challenging than we thought. For example, what's the best way to statistically gauge a student's knowledge level? How do we recognize when an n-gram counts as a new concept?

We haven't launched publicly yet, but we have a LinkedIn page if you'd like to see when we do. https://www.linkedin.com/company/conceptionary/


Hi ultra_nick,

>> trying to automate tutoring.

>> what's the best way to statistically gauge a student's knowledge level?

As you have probably already discovered, there is quite a bit of research in this area :). If anyone else is interested, you can search Google Scholar for "deep knowledge tracing", "bayesian knowledge tracing", "performance factor analysis", "knowledge space theory", "intelligent tutoring systems", and "item response theory". Some intelligent tutoring systems include ALEKS, Squirrel Ai, and Riiid.
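For the curious, the classic Bayesian Knowledge Tracing update is small enough to sketch in a few lines of Python (the parameter values below are illustrative, not fitted to any dataset):

```python
def bkt_update(p_know, correct, p_transit=0.1, p_guess=0.2, p_slip=0.1):
    """One Bayesian Knowledge Tracing step.

    p_know: current estimate that the student knows the skill.
    First condition on the observed answer (Bayes' rule), accounting for
    guessing and slipping, then apply the chance of learning (p_transit).
    """
    if correct:
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_transit

p = 0.3  # prior: 30% chance the student already knows the skill
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(f"P(known) = {p:.3f}")
```

Correct answers push the estimate up and incorrect ones pull it down, with guess/slip rates keeping a single lucky or unlucky answer from swinging it too far.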


I'm building a privacy-friendly personal finance simulator (projectionlab.com) that doesn't ask to link your accounts, runs Monte Carlo simulations in your browser, and keeps data client-side unless you choose otherwise. I'm hoping to spin up a self-hosted version soon as well.

The overall design ethos/constraints have posed some interesting challenges along the way, but so far I've had good luck developing with the following: Vue.js, Vuetify, vuex, Chart.js, Threads.js for orchestrating web workers, vue-router, a bunch of smaller libraries, Paddle, Firebase, and some Google Cloud Functions (the latter only come into play for those that upgrade + choose to enable cloud sync).

For now, it's still a side project :)
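For anyone curious what a client-side Monte Carlo retirement simulation boils down to, here is a deliberately tiny Python sketch (not ProjectionLab's actual model; every parameter is made up):

```python
import random

def simulate_path(years, start, annual_spend, mean=0.07, stdev=0.15, rng=random):
    """One simulated retirement path: random market returns minus spending."""
    balance = start
    for _ in range(years):
        balance = balance * (1 + rng.gauss(mean, stdev)) - annual_spend
        if balance <= 0:
            return 0.0  # ran out of money on this path
    return balance

def success_rate(trials=5000, **kwargs):
    """Fraction of simulated paths that never hit zero."""
    return sum(simulate_path(**kwargs) > 0 for _ in range(trials)) / trials

random.seed(42)
rate = success_rate(years=30, start=1_000_000, annual_spend=40_000)
print(f"success rate: {rate:.1%}")  # roughly the classic "4% rule" scenario
```

A real simulator layers taxes, inflation, income events, and correlated asset returns on top, but the core loop of resampling thousands of random paths is the same.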


This sounded interesting and I want to follow up on it in a while to see what progress you've made. However, I know I will forget in the future to actually follow up so I wanted to sign up for email updates or even a newsletter if you had one, but I couldn't find one from your landing page.


Sounds like I need to make the newsletter more discoverable. Currently it only pops up if you go to the blog or if you do the onboarding wizard or sandbox. I'll take a note to add it to the homepage as well.


I am still slowly working on reverse engineering an MMO. I have client source, but no server source (it was lost to time), so I am building a new server.

My current problem is that I have concurrency working, but there is an aspect of movement that I have little information on, and the client isn't informative enough to help. So I am kinda stuck. I plan to do some manual testing of just fiddling with a value and seeing what happens. It's fun, but very demoralizing at the same time to see multiple people logged in, but their characters jumping around and not properly being moved around on other people's screens.

I also don't want to make any changes to client code, because that's our only source of truth to work with.


I'm interested to know which MMO?


Blackmoon Chronicles: Winds of War from like 2000/2001. It never left beta as Vircom closed down their game division and cancelled all projects. It was my first MMO and what got me into MMOs


That’s an interesting problem!

Since you have the client source code, have you been able to fully reverse engineer and document the client/server protocol? Do you have a black box server whose responses you can record and analyze?

Have you been writing your server top-down (building an idealized set of game objects and then wiring them up to the protocol) or bottom-up (layering the minimum game code on top of the protocol needed to make the server respond appropriately)?

How did you obtain the client source code? Will you legally be able to open source it, or at least distribute the binaries you build?

Is https://blackmoonchronicles.com/ your project? Looks like it’s a different project because they’re writing a cleanroom implementation of a client.


That is our project. We also have a FB Page we put up. The clean room implementation is not us but someone else that we linked to, in order to funnel those that prefer that style of work. We can't work in the open yet.

I have been able to fully reverse engineer the protocol, including what data is expected by the client and what data the client sends for various packet types.

No blackbox server, game has been dead for years.

Writing bottom up. Protocol is more important. We actually have some left over server files with npcs, quests, items, mobs, etc. So I can recreate the entire world by loading those in (already reversed the file formats and can parse them into memory as data structures)

We have the client source because one of the former QA devs on the game bought the rights to the client many years ago, and he is working with me on the project. We will be able to open source the project eventually, if we can prove we have a custom server, because BMC was built by the people that built T4C and shared its server designs, and Dialsoft currently owns the rights to that. We need to show we have a server implementation that is net new and unrelated to T4C; then we can legally open source everything, and that is the goal, as we mention on our site.

We already have the capability of walking around maps, teleporting, and dealing with inventory and equipped items. We have all the packets reversed. Most of the work now is in concurrency and world state. Loading mobs, quests, items, and broadcasting information to those around you, etc. Right now a lot of stuff is hardcoded. Once I get this part working, the rest is mostly filling out the world and getting combat to work.


That’s very cool! I was a developer on the MMO “Gods & Heroes: Rome Rising”. Two years after the company cratered in 2008, another company bought the game code and pushed it across the finish line and launched it themselves. So it can be done!


People are talking about existential crises arising from some DNA gene stuff and I’m just here trying to understand this yaml powered dynamic DAG building airflow based ETL and ingestion framework my company built in house and that my team uses.


I am working on getting this project to compile. It's interesting in the sense that nobody in the documented history of the internet has ever run into these specific errors. So I guess I'm going where nobody's ever gone before.


I track parallel currency exchange market rates and provide usage-based API.

It's like oil; many sub-products can be made from this.

For starters, many of them:

- Currency exchange basic calculator.

- Commissions calculator (many exchanges claim to be "no commissions," but you pay a lot in exchange rate flexibility as well)[0]

- Some financial institutions need to track economic indicators... I have that data.

- Integration with CRMs, to issue correct quotes using parallel exchange rates.

- Integration with ERPs, for whatever you use ERPs for.

- etc. etc. etc.

EDIT: Clarity.

--

[0]: https://www.ivanmontilla.com/blog/how-do-zero-commissions-cu...


I'm writing an ed clone in scheme. I hope to have autoindentation since that's about the only thing I miss with ed.
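The core of autoindentation is pleasantly small: pre-fill a new line with the leading whitespace of the line above it. A sketch in Python for illustration (the actual clone being in Scheme):

```python
def autoindent(previous_line: str) -> str:
    """Return the leading whitespace of the previous line, the way an
    editor would pre-fill it when you open a new line below."""
    stripped = previous_line.lstrip(" \t")
    return previous_line[: len(previous_line) - len(stripped)]

print(repr(autoindent("    (define (square x)")))  # '    '
```

An editor for Lisp-family code could go further and indent relative to the innermost unclosed paren, but copying the previous line's indent covers most of the day-to-day benefit.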


Multiview, multi-object tracking - https://github.com/prcvlabs/multiview

I started doing research on cameras around 2012 and have been obsessed ever since. The company behind this code unfortunately failed, but I've kept poking at it and finding more people interested in it, even as my day job is more about cloud infrastructure. Companies like Zippin, which have carved out a niche in the cashierless space, really impress me.


Mine is not an interesting "problem" generally speaking. But, to me, it is a problem because I haven't solved it yet.

I am on the journey of learning computer science and programming on my own from awesome books and resources available out there.

I came across this article by Peter Norvig: https://norvig.com/21-days.html after I read his code of a spell checker: https://norvig.com/spell-correct.html. I was mesmerized by the code itself. It is so clean as if it is speaking to me. No series of spaghetti statements making me cringe just at the sight of it. I want to be able to achieve that skill. That is my project/problem if you can say so. In college I was studying electrical engineering when I was taught programming in C using a very bad book and I hated coding after that. Now, I am gradually beginning to see the beauty in it. It is just prose. Prose can be bad and excellent. I was taught bad prose that's why I dislike it.

I hope to become an elegant and useful programmer one day and build things that are useful to people.

If you have any particular resource in mind that can help me in my journey, don't forget to suggest some in the replies.


That code is written in a very declarative, functional style. Haskell is a language that forces you to write code like that, so might be a worth a look if your goal is to write ‘pretty’ code.

However, I’d also add that becoming an “elegant” and “useful” programmer are often at odds with each other. It’s very easy to spend so much time trying to make your code pretty with the perfect abstractions that you never actually finish anything.

If your goal is to be useful and productive, then learning by writing a lot of code in a lot of different languages, styles and codebases might serve you better than focussing on beautiful source code. Though if you can do both, then please do!


Thanks for the suggestion. Will keep that in mind. I understand that too much perfection can be a roadblock to progress.


I was curious about the composition of music, therefore a friend and I created frequency-domain visualizations of interesting sounds using the Welch power spectral density estimation algorithm and Fast Fourier Transform.

A few examples:

1. Dialtone using dual-tone multi-frequency signaling and 56K dial-up modem connection: https://www.youtube.com/watch?v=FomWraKuDFg&list=PLn67ccdhCs...

2. Deluxe Multitone Car Alarm: https://www.youtube.com/watch?v=A4uKcvZL7HM&list=PLn67ccdhCs...

3. Composition using only sounds from Windows 98 and XP https://www.youtube.com/watch?v=6lT-jr9sS6Y&list=PLn67ccdhCs...

4. Piano Music (Ballade Pour Adeline): https://www.youtube.com/watch?v=RnAfrEk429w&list=PLn67ccdhCs...

5. Electronic Music Demo: https://www.youtube.com/watch?v=MllJLIX1glg&list=PLn67ccdhCs...
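For anyone wanting to reproduce the idea without an FFT library, here is a dependency-free Python sketch that recovers the two tones of DTMF digit "1" (697 Hz + 1209 Hz) with a naive DFT (the videos above use Welch's PSD estimate, which averages many such windows to reduce noise):

```python
import cmath
import math

RATE = 8000   # samples per second (telephone-quality audio)
N = 512       # analysis window length

def dtmf_samples(f_low, f_high, n=N, rate=RATE):
    """A dual-tone signal like a phone keypad press: two sines summed."""
    return [math.sin(2 * math.pi * f_low * t / rate) +
            math.sin(2 * math.pi * f_high * t / rate) for t in range(n)]

def dft_magnitudes(x):
    """Naive O(n^2) DFT magnitude spectrum; an FFT computes the same thing faster."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

mags = dft_magnitudes(dtmf_samples(697, 1209))   # DTMF digit "1"
peaks = sorted(range(len(mags)), key=mags.__getitem__, reverse=True)[:2]
print(sorted(round(k * RATE / N) for k in peaks))  # two peaks near 697 and 1209 Hz
```

With a 512-sample window the bins are ~15.6 Hz apart, so the peaks land near (not exactly on) the DTMF frequencies; longer windows and averaging sharpen them.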


Flipping back and forth between ~2 projects:

- A tool for resolving external executables and sourceables in Shell scripts to absolute paths (and erroring when they aren't found). This is mainly to make it possible to package Shell scripts with Nix in a way that they don't depend on all of their dependencies being present in the environment.

The top level of this is ~easy enough, but identifying executables in the arguments to other commands is significantly harder. So, much of the focus is on triaging executables to flag those most likely to exec, coming up with human/automated processes for triaging the source of those executables to confirm, finding a good way to express complex nonstandard CLI syntax for parsing (in a way that's humane enough to be maintainable and ~contributable), etc.

- A toolchain for single-sourcing documentation in order to generate idiomatic markup for multiple output languages with different whitespace semantics. Because it's really hard to get good idiomatic format conversions out of largely presentational source such as markdown, this includes a flexible language with DIY semantics.

The entire toolchain does something a bit like what you'd try to accomplish with j2cli, some yaml, some jinja templates (and probably some python plugins, if you're trying to do anything complex) with more precise whitespace control. Still very rough/early, but glad I finally bit the bullet and started on it.


Making solar available to apartment dwellers/multi-tenancy housing. We switch a shared solar system to different circuits based on who's currently using power, to optimise on-site solar usage and reduce feeding back into the grid. As well as having some interesting electrical issues, there's an amazing world of regulatory bureaucracy to wade through. https://allumeenergy.com/


Trying to build a computer algebra system. Basically like a small version of Wolfram Alpha, but customized and accessible by my own code so I can use it for whatever I want in the way I like to. It's starting to get cool enough to be usable for a bunch of things, but I'm still trying to figure out what I really want to use it for

It's challenging and interesting work, and I'd recommend building your own as a way to gain deeper understanding of math.
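As a taste of what building such a system involves, here is a toy symbolic differentiator over an expression tree (my own illustration, unrelated to the poster's actual code):

```python
def diff(expr, var):
    """Differentiate a tiny expression language of nested tuples.

    Expressions are: a number, a variable name, ('+', a, b), or ('*', a, b).
    """
    if isinstance(expr, (int, float)):
        return 0
    if isinstance(expr, str):
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':
        return ('+', diff(a, var), diff(b, var))
    if op == '*':  # product rule: (ab)' = a'b + ab'
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError(op)

def evaluate(expr, env):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    x, y = evaluate(a, env), evaluate(b, env)
    return x + y if op == '+' else x * y

# d/dx (x*x + 3x) = 2x + 3, so at x=4 the derivative is 11
f = ('+', ('*', 'x', 'x'), ('*', 3, 'x'))
print(evaluate(diff(f, 'x'), {'x': 4}))  # 11
```

Most of the work in a real CAS is in what this skips: simplifying the bloated result trees (collecting terms, cancelling zeros) and supporting a much larger operator set.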


How to get over my mid-life crisis.


In the past 7 years, since I was 25, I have figured out that there will always be a crisis... at any age.


The answer is clear, Corvette.


Rookie. It has to be a motorcycle


Or a Mercedes coupe if you're in Europe.


Simple: get older.


Me too. Let me know if you figure it out.


I have been thinking of reconstructing history from books and all kinds of material. So if someone wants to know what a newspaper on 3rd Jan 1403 looked like, they can.


That'd be some feat, considering the first printing press wasn't perfected until about 1450! LOL


So basically: assign a date to all news articles along with their context, then create a newspaper template and assign importance accordingly.


Oh that would be fun!


Can be done as an extension to wikipedia itself - who knows


I am working on fusing classical control with reinforcement-learning based control in drones.

The good thing about RL is that it is adaptive and proactive - it tunes control from data as it streams and it learns to act with delayed feedback in mind.

The bad thing is that it can be sample inefficient. It needs lots and lots of data. That is not good in a time-critical case such as when faults occur. I've come to enjoy pushing back on completely black-box control of systems that forgoes decades of advances in control theory. Sprinkling even a little domain knowledge on the solution helps me scale back the complexity of the system.

So, first, I started using transfer learning to re-use existing policies from similar situations. However, "similar" is a very open problem in the domain. I then thought we could exploit insights from classical control of linear systems in these more complex domains, if, for example, we can make some simplifying assumptions (local linearity, the existence of a global optimum, etc.). So far I am getting promising results, but there is always some data wrangling involved. I am hoping to use this in my dissertation later this year. (If anyone's hiring, feel free to reach out.)


I bet the folks at Skydio would want to talk to you.


Thank you for the suggestion! I will look them up.


Might as well drop my project in here as well.

I spent the last year or so exploring and validating solutions to the "voice cliff" problem found in multimodal voice-controlled devices such as smart displays and TVs. That's when you are controlling some UI with your voice and suddenly you can't, because that particular screen hasn't been designed for voice. Think of Echo Show settings or literally all of Nest Hub's UI. As a result, I came up with a "big idea". So now I want to file some preliminary patent applications for some of the bits that might actually be new inventions. The chances of actually getting a patent are slim - most of the ideas I thought were brilliant were already thought up, patented, published in research papers or even productized, sometimes a decade ago or more! There are a lot of forward thinking researchers out there. And this doesn't even take into account patents which are pending. Apple could have a tranche of patents announced tomorrow which were submitted years ago. Or some startup could come out of stealth and you learn that a well funded group of people way smarter than you has been working on similar problems for years.

Anyways, actual new ideas are rare, but I seem to have a few.

Distilling a big idea into smaller pieces that can be explained in detail is a good exercise, regardless of the eventual success of any patents. So I'm churning through 100+ pages of notes, twice as many patent and paper references, and a bit of code and trying to get it into a state to share with the world. I've never been an academic, but I used to work at Nokia Research Center, so I've worked with quite a few and now I know how hard their job was.


I have been developing a teleportation device. I've managed to move 20 micrograms of matter over a distance of ~30mm and now it is simply about scaling up.


Apologies for my incredible disbelief, but you cannot just leave someone hanging by saying this and not expanding further.

Details please.


Do you have a blog post or some link to info about your experiments?


Please share more details. This is incredible and defies belief.


I'm working on the incompleteness theorems, human-centered artificial intelligence, infinite spatial systems, and more https://recursion.is https://notes.recursion.is https://recursion.is/youtube


I'm working on a text/code generation system, similar to OpenAI's GPT/Codex. It's a bit different in that it researches links via crawling and understands any linked images. It's also different in that it has reasonable pricing and a free tier.

https://text-generator.io/blog/text-generator-now-researches...

It also does embeddings for text/images and code. The problem, I suppose, is general AGI, where you have unstructured input and unstructured output and you need to do a lot of research in the process of producing the output, which one day may include training, using other web services/information retrieval, etc.

Looking to expand the system to understand more and more unstructured inputs and be able to output more and more advanced outputs like images, audio, video, datasets etc.

I was using OpenAI a lot and realised it's going to be dang expensive, which is why I started https://text-generator.io so more people can use it


You mentioned pricing as your primary driver, but don’t list any pricing data or price comparisons on your site. Am I missing something?

EDIT: Nevermind. Used a throwaway email to sign up and it’s “100 Free requests per month, then $0.01 USD per request”, which, unless I am missing something, appears to be more expensive than OpenAI’s pricing:

https://openai.com/api/pricing/


I have been using a solo WhatsApp group as my "notes app" for years; quite common among Android users. But when I needed a dedicated notes app to get a little more organised, I couldn't find anything satisfactory. The popular notes apps are way too complicated for my purpose, the lesser-known ones usually don't have good UI/UX, and there's also the issue of privacy.

I needed a privacy respecting notes app which worked like whatsapp. So I built exactly that - https://play.google.com/store/apps/details?id=com.makenoteto...

Took inspiration from Signal and called it Note to Self.


A ton of stuff, arguably too many things. Right at the moment, my main focus is redoing fundamental 2D vector graphics geometry operations (stroking, path intersection) to make them more robust and higher quality. Some of this will also run in the GPU as compute shaders, which adds some interesting constraints and ups the difficulty level.

I'm also building a new, high performance, high quality 2D rendering engine that runs primarily on the GPU.

And I'm also exploring new approaches to building GUI, primarily in Rust. A new focus is to integrate that more tightly with the renderer mentioned above. A benefit I expect is that I'll be able to do the CPU side of the rendering work in multiple threads, where existing architectures make it very difficult to break out of a single thread.

How did I end up working on this? It's a natural evolution of what I've been working on for a long time, in Ghostscript, on Google Fonts, working on the text stack on Android, and now fortunate enough to have a research position back on Google Fonts where I am able to focus on this and have great support from the rest of the team.


Javascript and ORMs have a (fairly well-known) interesting set of problems:

* There aren't many

* Those you can find often don't support TypeScript, so it's all guesswork

* Most don't have much (if any) support for related-data queries

So I've been working on an ORM for PostgreSQL and TypeScript for a few months now in my spare time [0]. It's probably been one of the biggest, most challenging, and most interesting problems I have embarked on solving.

Some things I've observed along the road:

* TypeScript is weird and wonderful when you push it to its limits, particularly around recursive structures. Strange workarounds exist, nicely provided by all of the two Stack Overflow TypeScript gurus that exist.

* PostgreSQL doesn't have some of the features that you would expect from other DB technologies (e.g. certain table alterations, LIMIT on UPDATE and DELETE, etc.). Weird and wonderful workarounds exist, but it always leaves you thinking _why aren't those workarounds just the de facto way of doing those things?_

[0] https://github.com/samhuk/ts-pg-orm


Prisma and TypeORM both are made for TypeScript. The first works wonderfully well


I quite liked Prisma when I used it, I'm glad someone brought it up.

The issues with Prisma are that it had moderately weak support for TypeScript, and it doesn't robustly support related-data querying (incl. recursively) or exotic inter-object relations (e.g. many-to-many with join tables, etc.).

As I said, a lot of the ORMs you find out there tend to be quite simple, missing some QoL features.


I’m interested to hear what those PostgreSQL limitations are more specifically.


Postgres still doesn't support SQL:2011 temporal features, and the temporal_tables extension doesn't support the proper syntax, and is a bit jank to boot.

If you have any kind of complex DW or SCD needs Postgres is pretty much immediately out the window. I think MSSQL probably has the best support for that, unfortunately.


I don't want to get too much in the weeds, but one well-known example is missing support for LIMIT on DELETE and UPDATE statements. The workarounds that exist range from fairly pedestrian (like using the scary ctid column) to totally wild (like creating temp tables or using transactions and all sorts).

The rabbit hole goes deep.
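
For concreteness, the ctid workaround mentioned above can be generated mechanically. This is a minimal sketch; the table name and row count are hypothetical, and in production you'd parameterize rather than interpolate identifiers:

```python
def delete_with_limit_sql(table: str, limit: int) -> str:
    """Build a Postgres DELETE that removes at most `limit` rows.

    Postgres has no `DELETE ... LIMIT`, so we select the physical
    row identifiers (ctid) of the first `limit` rows in a subquery
    and delete only those rows.
    """
    return (
        f"DELETE FROM {table} "
        f"WHERE ctid IN (SELECT ctid FROM {table} LIMIT {limit})"
    )

# e.g. batch-delete 1000 rows at a time from a hypothetical `events` table
print(delete_with_limit_sql("events", 1000))
```

Running this in a loop until zero rows are affected gives you batched deletes without long-held locks, which is usually why people want LIMIT on DELETE in the first place.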


A new platform to play Magic: The Gathering digitally. Been working on it since the pandemic started. It's almost done.

It's a multiplayer sandbox (no rules enforcement) that just allows you to play by providing all the different zones the game involves (play area, hand, lib, gy, ex) and letting you free-drag between them. It has features for every mechanic the game involves - tapping, turning face-down, switching card control to another player, etc - and it looks good and is smooth and easy to use.

There's nothing like it. Existing options have various constraints/drawbacks or else are so old-fashioned and clumsy as to be nigh-unusable.

I don't know how people will react when I announce. I don't know how Wizards of the Coast will react. I'm a bit nervous about it. I know it's the best of its class, because I play magic, and I have no idea what other people will make of it. It fits their fair-use policy (product must be free, but can be ad-supported or take patreon-revenue) but even so I'm nervous.


What a cool idea! Not sure if you’ve done it already, but it’d be cool to embed video calls (maybe with WebRTC?) in your UI so you can have that more visceral sensation of playing against another person.


I have considered adding an audio channel to a match, yes, not video though. I figure people who use it to play with friends are already going to be using discord, and people playing against strangers are very unlikely to want to make video calls. I could be wrong?


I don’t know :)

When I was 10 years old and first getting into MtG, I was often playing against kids in the school playground that I didn’t know. It became a great way to make new friends.

Also as far as I’m aware, Chatroulette and Omegle are still going strong, and those are all about video chat with random strangers. I suspect that a MtG server would net fewer instances of indecent exposure than the aforementioned two also.

Probably not critical, but perhaps a worthwhile product experiment?


this reminds me a bit of Cockatrice - hope your version ends up cooler though!


Yep, Cockatrice is one of those old-fashioned options I'm trying to improve on :)


Working on a prototype for a thermoelectric generator. While the core generation module is an off-the-shelf, solid-state, Seebeck-effect-based component, my focus is on optimizing the thermal gradient for voltage and current control and continuous operation (night and day).

A few experiments planned around

- design of heat absorbers and heat sinks

- night and day operation using radiative cooling/paints & coatings


Mind putting up a back-of-the-envelope calculation of the minimum power that can easily be drawn for a given area/volume without too much optimization?

(I understand that if the design is refined it will significantly improve.)


For a basic calculation on the number of modules, voltage and amps relationship, see: https://thermal.ferrotec.com/technology/thermoelectric-refer...

There is a worked example in 3.3 and Fig. 13-3 in the referenced article has the chart for various temperature differentials.

The TEG modules are 30mm x 30mm and arranged in a grid in a series-parallel configuration (the referenced article also has this concept illustrated). The area question can be addressed after determining the number of modules, the exact series-parallel configuration, and the spacing between the modules.

The volume question is a function of the heat sink geometry.
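To make the series-parallel arithmetic concrete, here is a back-of-the-envelope sketch. The per-module voltage and current below are placeholder assumptions, not datasheet values; substitute figures from the Ferrotec reference for a real design:

```python
# Back-of-the-envelope for a series-parallel TEG array. Series strings
# add voltage; parallel strings add current. The per-module figures
# are placeholders (assumed), not datasheet values.
MODULE_VOLTAGE = 2.0   # volts per module at the chosen delta-T (assumed)
MODULE_CURRENT = 0.5   # amps per module into a matched load (assumed)

def array_output(n_series: int, n_parallel: int):
    """Output of n_parallel strings of n_series modules each."""
    volts = n_series * MODULE_VOLTAGE
    amps = n_parallel * MODULE_CURRENT
    return volts, amps, volts * amps  # (V, A, W)

v, a, w = array_output(n_series=6, n_parallel=4)
print(f"{v:.1f} V, {a:.1f} A, {w:.1f} W from {6 * 4} modules")
```

With those assumed numbers, 24 modules cover roughly 24 x 9 cm² of module area before spacing, which is the kind of area figure the parent was asking about.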


I’ve recently gotten deep into thinking about “precision medicine.” Folks here may remember or know the story of Matt Might; I have a less sad but similarly perplexing story around both my wife (for whom diagnosis was hard) and my best friend (who remains undiagnosed), and started thinking about how to apply “whole systems” thinking and treat the body as a system that’s prone to vulnerability, trying to find ways to discover where a “patch” may be necessary by ingesting lots of data that is otherwise overlooked or seems unrelated.

As a security guy previously, particularly on the offensive side, this has been a fascinating endeavor thus far.

I’m particularly interested in: the application of PRP injection to various ailments in the body (where else are there oceans of stem cells we can activate to help the body heal itself), and the application of ML to surfacing data about ailments for populations that are woefully understudied (trans patients undergoing GAHT, for example).


I'm building an open source security lake platform (https://github.com/matanolabs/matano). Basically, some of the core problems we solve are:

- Traditional SIEM tools are not a good fit for large amounts of data: they're either too expensive or come with a high ops burden.

- Your data is locked into vendor-specific formats.

We solve this by ingesting and storing all your data in Apache Iceberg tables on S3 (allowing for low-cost queries directly on S3 data plus an open table format) and building everything using only serverless technologies (Lambda, SQS, S3, Athena, etc.).

We also let you write realtime Python detections as code and are written in Rust, which makes for some interesting technical challenges.
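
A realtime Python detection is essentially a predicate over a parsed log record. A hypothetical example (this is not Matano's actual API; the record shape and `detect` signature are assumptions for illustration):

```python
# Hypothetical detection-as-code rule: flag SSH logins from outside
# an allowlist. The record shape and function signature are assumed,
# not taken from Matano's documentation.
ALLOWED_PREFIXES = ("10.", "192.168.")

def detect(record: dict) -> bool:
    """Return True when a log record should raise an alert."""
    if record.get("event") != "ssh_login":
        return False
    src = record.get("source_ip", "")
    return not src.startswith(ALLOWED_PREFIXES)

print(detect({"event": "ssh_login", "source_ip": "203.0.113.7"}))  # True
print(detect({"event": "ssh_login", "source_ip": "10.0.0.5"}))     # False
```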


I've been writing a metaprogramming layer on top of the C programming language. It's effectively implementing a lot of C++ templates and constexpr features in a smaller, faster package.

It began as a toy/experiment that turned out to be useful, so I decided to pour the gas on and push it to its natural conclusion.

It's close to hitting a first alpha release at which point I expect it to stabilize somewhat.

More info : https://scallyw4g.github.io/poof/

The website's a WIP, but the tool does compile to WASM so you can give it a try in the browser, which is pretty neat-o.

I'm also actively looking for feedback. If anyone has constructive comments/criticism feel free to reach out :)


Interactive, extensible, low-level programming languages. No one is trying to do this; Forth exists and is one of the only successful attempts, but it's ugly as hell. Sure, you can use Lisp to do everything, but it's horribly clunky when it comes to low-level programming.


I'm working on applying game theory to the design of social media algorithms in order to help curb the spread of misinformation.

https://deliberati.io/give-truth-the-advantage/


That's a really cool project, best of luck!


Agreed, I’m excited to see where this may go.


You seem rather clever!


Trying to implement a container vessel stowage planning algorithm of some kind: https://www.mdpi.com/2305-6290/5/4/67/htm


I am building a public health and wellness infrastructure for black men. Launching Nov 8.


How’re you planning to get users and monetize them? Do you have to deal with HIPAA compliance, if so, how’re you doing this?


I finished HIPAA and HL7/FHIR integration. I am getting users through a population health model. This is phase 2 of my moon shot; phase one is linked below. I am writing a larger blog post about it now.

https://www.prnewswire.com/news-releases/award-winning-progr...


Knowledge management.

To put it in sci-fi-like phrases: "The universal knowledge repository of humankind." Or: "Imagine if we could extract the entirety of the knowledge stored inside a person's brain, then store it in a machine, so everyone else could access that knowledge."

Wikipedia, OpenStreetMap, social media platforms, search engines, and the Internet come close, but there is still room for improvement, especially in the "discovery" part and the "overall structure" within a body of knowledge. Sure, there is quality knowledge content, but it is mostly spread out, undiscoverable, and very unstructured.


Pretty pedestrian given the others, but I'm trying to build a file server setup that makes it easy to roll back from a ransomware (encryption) attack. I'm starting to think that hourly syncs from the home directory with some versioning should do it. The hard part is to make it easy for someone to roll back when I'm not here. People are a bit scared given that it's happened to a couple of other small colleges. We already do tapes, and we are talking about files saved to the servers (most folks aren't allowed to save locally), so it should be within budget. Like I said, pretty pedestrian. [edit: FreeBSD & ZFS]
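
A minimal sketch of the hourly-snapshot idea on ZFS (the dataset name is an assumption); rollback after an attack is then a single `zfs rollback` to the last known-good snapshot:

```python
import datetime

DATASET = "tank/home"  # assumed dataset name

def snapshot_name(now: datetime.datetime) -> str:
    """e.g. tank/home@hourly-2022-09-16T14"""
    return f"{DATASET}@hourly-{now:%Y-%m-%dT%H}"

def snapshot_cmd(now: datetime.datetime) -> list:
    """Command to run hourly from cron/periodic(8);
    -r recurses into child datasets."""
    return ["zfs", "snapshot", "-r", snapshot_name(now)]

print(" ".join(snapshot_cmd(datetime.datetime(2022, 9, 16, 14))))
# After an attack, roll the dataset back to the last clean snapshot:
#   zfs rollback -r tank/home@hourly-2022-09-16T13
```

Since ZFS snapshots are copy-on-write, keeping a few days of hourlies is cheap, and a non-expert can be taught the one rollback command for the "I'm not here" case.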


Maybe not exactly what you were asking for, but getting to 20 or 30 users for my latest project. It's not meant to be a start up or anything, it was more just "I wish this thing existed and think it would be fun"

So this time around I'm slowly asking people I know to try it out. Honestly it takes a little courage and has given me a newfound respect for people who do sales (though it's a little different, as the whole thing was my idea). There does seem to be a big difference between an explicit ask to sign up vs. hey check this out. Oftentimes people will say that's so great but not sign up (which I try not to take personally).


Might be worth providing a way for people to know what the project is and how to use it.


I know, right?


Well, welcome to my neuroses: because you asked, now it's not as "thirsty" to do that. There was a fantasy sports game in the late 90s and early 2000s called Wall Street Sports, where you'd buy "shares" of professional athletes and the prices would go up or down. My project is my start on bringing some form of it back. [https://sportfolio.money/]


Lots of ventures have tried making "investing" in a given real-world persona a thing; for example:

https://en.m.wikipedia.org/wiki/Football_Index

https://en.m.wikipedia.org/wiki/BitClout

___

At least for me, your site loads for a split second, then goes completely white; might want to do some cross browser testing, especially on mobile.


On Unicode TR39 https://unicode.org/reports/tr39/ (secure identifiers). https://github.com/rurban/libu8ident/

Also on perfect hashes: https://github.com/rurban/nbperf, also gperf and cmph improvements.


Web-based data visualization for robotics and self-driving. Robotics is such an interesting industry, and we're only scratching the surface of what new tools are needed.

Try it live here (hit "view sample data"): https://studio.foxglove.dev/

And it's open source! https://github.com/foxglove/studio

Shameless plug - we're hiring: https://foxglove.dev/careers


Professionally, I'm working on the frontend of a radio frequency propagation simulator for 3D "digital twins" of cities/regions. This uses ImGui under the hood, but is wrapped by Python. It's a bit of a pain, but I came up with a class that allows the UI to be composed in a manner more akin to React Components.

For fun, I just implemented a finite-state machine for a videogame I'm working on in Rust. It's used to swap out sprite animations depending on the state of its parent entity. I'm glad to be solving problems I'd never come across in my normal line of work.


To reduce the adverse effects of fake evidence (eventually called deepfake), off and on since 2004, and especially since ten years ago, I've been thinking up, NOT DETECTION methods, but prospective, pro-active methods to instrument a data stream, to increase the odds that video, audio, or other time-series data (including e.g. scientific experimental data) eventually can be proven to have been created from real life (and not faked) at the time & place claimed.

This is not just "proof of existence" by inserting hashes of content into a Merkle tree, like Haber & Stornetta implemented in 1990 and like we talked about on the Cypherpunks email list in the early 1990s, although of course that would be part of a complete system.
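
For readers unfamiliar with it, the Haber–Stornetta-style building block mentioned above, hashing content into a Merkle tree whose root can then be timestamped, fits in a few lines (this is only the well-known "proof of existence" piece, not the additional methods described here):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold leaf hashes pairwise up to a single root.

    Publishing or timestamping the root later proves each leaf
    (e.g. a video frame) existed at that time, without revealing
    the content itself until you choose to."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"frame-0001", b"frame-0002", b"frame-0003"])
print(root.hex())
```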

Nor is it relying on just the metadata that e.g. TruePic uses. I have been developing additional methods to asymptotically approach provability that the content depicted actually happened in real life. Think undetectable watermarks by multiple observers (courts want to see multiple witnesses) but it goes beyond that.

Relevant to this are efforts by CAI, C2PA, and companies like TruePic. I am hoping my methods are different enough, with little enough overlap, that we can realize them without licensing their IP. My goal is a more decentralized system that protects the privacy of witnesses.

My contact info is in my profile. I'd love to discuss this with anyone. I'd even be relieved if someone would explain that my ideas are not useful, or already have been done, or even would have unintended negative consequences for society. (It's hard to stay current in the relevant literature on signal processing, steganography, cryptography, and zero-knowledge proof, plus politics and game theory.)

I would even consider simply donating my trade secrets (such as they are) to some person or organization who impresses me as sincere and likely to move them forward, to avoid having to sit on them hands-off (again) during my tenure at (yet another) company that is about to offer me a full-time job today to work on something unrelated. While some of my ideas "toward authenticatability" may have been ahead of their time when I first wrote them up ten years ago, they probably aren't anymore, so the window is (at best) closing now.


To me, being able to trace the province of content is much more dangerous than deepfakes, since such systems have the potential to become widespread, unavoidable, and constrain the free exchange of information.


I agree! I've put some thought into how to keep the power decentralized -- and ideally to make it more so than the status quo. I think the techniques are compatible with that. I don't like the emphasis of Truepic et all on identifying which individual or which device recorded the evidence. I mean, yes, many in the West would like to know that a video clip is just as a specific, named, respected photojournalist filmed it, but imagine what would happen under a despot who doesn't respect press freedom.


People generally speaking are too lazy to manage an independent system, and governments have too much incentive to abuse systems like this if given the chance. If anything, if I were spending time on the topic, it would be to drive public interest in systems that happen to also make sourcing the province of media harder, not easier.

Fakes are not the issue; they're an excuse for people to do something they would have done anyway, nothing more. No amount of reasoning or facts will ever change a biased person's beliefs. Fake news is just a symptom of the real issue; it's not the problem.


I feel like your statement reduces to "people are not affected by evidence."

There is a category of people for whom that is true, but is there not also a category of people whose beliefs and actions are affected by evidence?

That may be the case now, or maybe it only was in the past? What will we do when ALL true evidence is indistinguishable from fake evidence?

(By the way, I think you mean "provenance," and as I've edited my post and bio blurb to explain, I too am against methods that disclose the identity of the witness. Rather, my methods would help multiple witnesses to an atrocity [such as police firing live ammo at nonviolent protestors] to corroborate one another's accounts before identifying themselves. They could possibly even effectively discredit an official, faked account without identifying themselves.)

I mean, at least in the US, trial lawyers bill by the hour, which, as I see it, is already a conflict of interest on a meta level. This system is pretty inefficient. What if we could prevent most "he said, she said" situations (and, probably, billable hours) by making recording of all one's experiences the norm, while also safeguarding the privacy of those recordings until such time as a dispute (such as a court case) arises? (This latter, especially in the context of doctor-patient relationships, was the original root, circa 2004, of my line of thought. The idea then was "universal surveillance by private individuals of their own lives, encrypted by default, with proof of existence, optional decryption, and sharing.")


Understand and disagree; nothing I say will likely change your mind. If it matters, I've had multiple similar exchanges in the past with people who happened to get lucky and have their systems used by tens of millions of people, and my objections still hold true.

Being recorded 24/7 is not healthy and in my opinion a net negative for any meaningful future; yes, it has the potential for good, but way more potential for abuse. Please stop spending time on the idea.


I understand your objections and in fact I mostly agree. So I should qualify some of my prior statements. They should be understood in the context (my context when I began this line of thought) of anarcho-capitalist values as expressed in David D. Friedman's book /The Machinery of Freedom/. (By the way, to anyone who read this 1973 book, Uber and Lyft were inevitable; the book was prescient enough that it may still contain gems worth mining.) My goal was to make professional law enforcement less important.

I am not quite still an anarcho-capitalist, as after 2004 I began to see a lot of truth in Marx and other anti-capitalist writers. So I'm in a healthy tension that enables me to embrace paradoxes and even hypocrisy. (Anyone not a hypocrite, I suspect either has less than an admirable set of values or is in prison or dead or swiftly heading toward one of those states.)

Bottom line, my hunch and hope is that some of us will see that if our goal is safety, we need not always give up freedom for it. There are other ways to achieve some of the goals we've entrusted to the State.

Yes, I am aware that with alarmingly increasing frequency the State can compel us to divulge our encrypted evidence and the keys to it. Perhaps there would be more pushback against this if such evidence were to play a greater role. Right now, people tend to say "I have nothing to hide." If everyone had a lot to hide, already recorded and encrypted by default, then I think most of them would push back against 4th amendment violations (in the US; and equivalent protections in other jurisdictions).

I am aware that the stupid and the evil will mess things up. I don't know what to do about that. I suspect they'll always be with us. The latter we might reduce in number through greater accountability, through more evidence. The former, I have no idea; our society doesn't really quite have natural selection.

I should walk back "24/7" or whatever I said. I was a bit hyperbolic. I think certain types of situations known for being contentious should be recorded routinely by both parties, by default, maybe. That should be normalized, maybe. For instance, in any situation wherein at least one party is required to carry liability insurance, the insurers could compel recording -- maybe.

Also, and I am VERY FAR FROM CERTAIN about this, but POSSIBLY even in some instances of sexual encounters, at least where there is not a fully executed (signed, maybe notarized) written agreement, i.e. enthusiastic consent -- possibly, if the tech were sound enough (which should be our goal) these too should be recorded, as a matter of course -- MAYBE.


I too have been contemplating architectural solutions to this problem for about a year. I've probably posted about it on HN at some point. I'll email you - or try to ;)


Great! Please do email me.


I’m working on interoperability between blockchains. The number and variety of blockchains are exploding and approaches so far have been sharply lacking - failing to tackle key problems in decentralizing, generalizing, data translation and the economics of providing a robust, secure interchain operability.

Moving from largely centralized and specific message passing and token bridging into general interoperability that is decentralized and modular for any new blockchain or multi-chain protocol to extend and build on top of is a huge challenge.


I'm working on a protein modeler and interaction tool, wherein the protein is defined by primary sequence and dihedral angles. My goal is to learn more about protein folding. The most visible approaches, exploring all of conformation space and using ML models to estimate the final product, aren't satisfying. I suspect we can learn the rules of how proteins take their shape, which would simplify the problem. My ultimate goal is protein design.


I'm learning more about the challenges and rewards of taking extended breaks away from work - sabbaticals.

This involves talking with people about their experiences and writing about what I've learned from them and observed from my own sabbatical.

If anyone is curious, I have some posts up at https://www.albertwavering.com/tags/sabbatical/ and would love to chat with anyone who is thinking about or currently taking a break.


I am working on trainable morphogenic functions for reaction-diffusion processes. I.e., using PyTorch, the parameters of a neural net representing the reaction part are optimized so that the diffusion process converges to a complex, predefined structure. This is only a pet project, but I am fascinated by it because I imagine that in the future this could be used to design synthetic biological structures. All that will be needed is a compiler that translates the function to DNA...


Deploying wireless networks within legacy building automation systems. Most of these systems communicate through twisted wire pairs routed through the walls - tearing the walls down and upgrading this technology is very expensive and inconvenient. However, wireless Thread-based IPv6 networks can be installed fairly easily and take advantage of the internet technology that's already there, and also interoperate with the existing kit through e.g. a gateway device.


Our company uses Asana for tracking projects and their dependencies on a high level. However, for tracking programming projects in more detail we are using Linear. It is a pain to keep due dates, assignees, project status etc in sync if you need to do that manually. So I decided to develop a little bridge that syncs both platforms automatically. Once finished and fully tested I am going to open source it. Maybe someone out there has the same problem :)


It’s early days, but I’m looking into more precise and domain specific object recognition in photos. Apple or Google will generally tell me if a photo contains a chair; I would like to know what kind of chair (is it a Herman Miller Aeron or a Herman Miller Embody). There are a few domains I am interested to apply this to, but at this stage I’m still learning the limitations of the tech (and any leads / insights would be appreciated!)


Before you embark further on this, see what the competition has first, if for nothing else, then for inspiration and to see what works well and what doesn't. You mentioned Apple and Google with their photo apps, but have you checked Google Lens?

This one is very realtime and goes into quite a lot of detail. I don't know whether it recognizes different models of Herman Miller chairs yet, but I can open the app, point my camera at a flower or a plant, and it will (attempt to) tell me which exact species it is.


Yep, surveying the landscape now! Lens is incredible, and I know that particularly for shopping, they, Facebook, and Amazon are working on merchandise recognition. Honestly, that's part of what makes me interested in the space: the fact that it's possible, but right now limited to the biggest players.


Working on distance immersion. Example: click here (twice): https://free-visit.net/fr/demo01

P.S.: I knew today's standard technology was 360° photos, but I was convinced this was not an immersive enough technology.

Immersion, for me, should feel more like a video game FPS.

So I gave it a try: a no-coding editor; just photos as input, plus 20 minutes of clicks, and you have your immersion.

Solo project


I'm building a read-only API for bank transactions. Instead of reverse-engineering the bank's internal UI, it relies on transaction alert emails and parses them to reconstruct the transactions.
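
Parsing such alert emails typically comes down to a regex per bank. A hypothetical sketch (the alert format shown here is invented; every bank's template differs):

```python
import re

# Hypothetical alert format; real bank templates differ, so each
# bank gets its own pattern.
ALERT_RE = re.compile(
    r"Your account ending in (?P<last4>\d{4}) was debited "
    r"\$(?P<amount>[\d,]+\.\d{2}) at (?P<merchant>.+?) "
    r"on (?P<date>\d{4}-\d{2}-\d{2})"
)

def parse_alert(body):
    """Turn one alert email body into a transaction dict, or None."""
    m = ALERT_RE.search(body)
    if not m:
        return None
    tx = m.groupdict()
    tx["amount"] = float(tx["amount"].replace(",", ""))
    return tx

print(parse_alert(
    "Your account ending in 1234 was debited $1,250.00 "
    "at ACME CORP on 2022-09-16"
))
```

The brittle part is that banks silently change their templates, so logging and alerting on unparseable emails matters as much as the parsing itself.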


I built a historical feed builder[0] so I can listen to previous episodes in one feed.

Most podcast feeds only include the last ~100 episodes. Podcasts such as Planet Money and Radiolab have more than 1000 episodes in total.

Manually combining the feeds is too time-consuming, and I enjoy building this tool.

[0] https://backpod.podcastdrill.com/
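
Merging feeds programmatically is mostly XML surgery. A naive stdlib-only sketch (no de-duplication or date sorting, which a real tool would need):

```python
import xml.etree.ElementTree as ET

def merge_rss(feeds):
    """Append the <item> elements of several RSS documents into
    the channel of the first one, returning the merged XML."""
    base = ET.fromstring(feeds[0])
    channel = base.find("channel")
    for extra in feeds[1:]:
        for item in ET.fromstring(extra).findall("channel/item"):
            channel.append(item)
    return ET.tostring(base, encoding="unicode")

# Two toy feeds standing in for the historical archives of one show
feed_a = ("<rss><channel><title>t</title>"
          "<item><title>ep1</title></item></channel></rss>")
feed_b = ("<rss><channel><title>t</title>"
          "<item><title>ep2</title></item></channel></rss>")
merged = merge_rss([feed_a, feed_b])
print(merged.count("<item>"))  # 2
```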


I was burned out at my old job and got a new job that pays better with better work/life balance, so now that I have some spare time, I decided to start something. Then I met Stoicism (Marcus Aurelius & co.), which made me re-think my drivers for starting a business.

So I took a few steps back, getting deeper and deeper into Stoicism, and am brainstorming new Stoic-based business models in the e-commerce space, or art, or both!


Working on building a much-simplified personal budgeting application. It's been done to death, but usually not very well. What I'm making will put the simplest and most straightforward aspects of budgeting front and center, and won't try to boil the ocean with features. Mint is so convoluted now with all their credit and loan features that it's no longer useful for its actual purpose of budgeting.


A simple spreadsheet has been the perfect solution for me the past few years.


That was the route I had been using, but I found it difficult to manage all of the relations between the different accounts via formulas.


I hear you. Are there parts of existing budgeting applications that you feel are done well? What are they?


Personally, at least among the apps I have used, no, I don't think any parts of existing apps are done well. There are a lot of existing applications which /do/ a lot: Mint, YNAB, EveryDollar, etc. But the core budgeting/transaction aspect of them is always just the same thing, and instead of fleshing that particular aspect out, they branch out into investment management, credit building, and loan/credit-card recommendations. All important things, granted, but the end result is a weaker product which does a lot okay but nothing great. The one exception is Actual Budget's (https://actualbudget.com/) reporting. I'm a really big fan of their reporting feature, as well as their FOSS approach.


I'm contributing to a data streaming processor called Benthos: https://www.benthos.dev/ It's written in Go and I really love the project since it's so simple and, unlike most Apache projects, it's just one single static binary which does the heavy lifting and it's stateless.


Distributed Proof of Personhood (proof of humanity)


Having a baby in a few months


Congrats, good luck!!

___

In case you missed this, “A method to promote sleep in crying infants using the transport response” was recently on HN:

https://news.ycombinator.com/item?id=32845229


Breaking Eroom's law. Over the last 50 years, the costs and timelines of drug development (clinical trials) have kept increasing — a phenomenon first published in Nature and described as Moore's law in reverse. As of today it costs around $800M and takes 12 years to bring a cancer therapeutic to market.

I spent 8 years as a cancer scientist in the drug discovery space, till I realised it doesn't make much sense to keep searching for more potential targets if they will just end up on a shelf for lack of clinical trial budgets.

I am trying to solve this issue by platforming the infrastructure: bringing Web 2.0 to clinical drug development.


i'm working on a v2 of a site i created to allow people to find independent radio shows they'll enjoy (www.ephem.fm).

the current site texts you when shows matching your preferences are about to start. the next iteration will have visualizations of the music being played by each station in close to real time and allow you to select between them, save them, etc. the visualizations will be updated each minute and colors will be selected by the HSL color system, where hue = tempo (from blue for slower to red for faster), saturation (or intensity of color) = loudness, and lightness = pitch (technically, spectral centroid or 'center of mass' of frequencies of music).
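a rough sketch of that mapping (the feature ranges below are invented for illustration, not the site's actual values):

```python
def features_to_hsl(tempo_bpm, loudness, centroid_hz):
    """Map audio features to an HSL colour:
    hue = tempo (blue=slow .. red=fast), saturation = loudness,
    lightness = spectral centroid (the 'center of mass' of frequencies)."""
    def clamp01(x, lo, hi):
        # Normalize x into [0, 1] over an assumed feature range.
        return min(max((x - lo) / (hi - lo), 0.0), 1.0)
    t = clamp01(tempo_bpm, 60, 180)      # assumed tempo range, in BPM
    hue = 240 * (1 - t)                  # 240 = blue (slow) .. 0 = red (fast)
    sat = 100 * clamp01(loudness, 0.0, 1.0)
    light = 100 * clamp01(centroid_hz, 200, 4000)
    return (round(hue), round(sat), round(light))
```

a fast, loud, bright track maps toward a saturated red; a slow, quiet, dark one toward a muted blue.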

i also want the ui to allow people to select from stickers i've taken pictures of around nyc and arrange them similar to concert venue bathrooms or middle school lockers haha.

altogether, want to create a fun way to explore independent radio!


Hi everyone,

We are building a cybersecurity marketplace for professionals looking for remote assignments. Currently it is in beta version and we are looking for test users to check functionality. We will provide gift credits for our testers when we are live so that they can be featured sellers on the main page. Just let me know if you are interested :)


I always felt that I get investment information too late and waste too much time reading articles that have no context ("why did X happen yesterday", only to realize that it happened a while ago and is not relevant anymore).

Also, most investment sites are deeply textual while investment information is more effective visually.


I'm actually working on a product that identifies catalysts via AI and presents them (currently through text, possibly through API and imagery later). Drop me a line if you'd like to chat more about this space.


Building a successor to https://metaset.io. My first stab at this problem (a native data visualization app) was OK, but left a lot to be desired in efficient user workflow.

I am having another go at it, this time with Swift Charts and SwiftUI on the Mac.


My colleagues and I are writing a system for insurance companies to prepare their reinsurance submission pack data, which they would otherwise spend enormous amounts of time and money doing in Excel manually.

We have some pretty big clients using this in anger, and they’re wild about it.

It’s not glamorous work, but it’s important!


I do gamedev (Unreal).

So much educational content is locked behind videos or hard-to-search mediums, like an example project you have to download and set up.

There are attempts at community wikis, but I'm trying to start an opinionated blog/wiki-type site, since I don't agree with a lot of what's out there and wish something better existed.


I'm currently building a tool for generating a website from just text content as the input. It tries to figure out which layouts would match the content.

Why?

Because creating these websites quickly is always a content problem. We should start from the content and create a design according to the content, not the other way around.


I'm trying to make artificial human ovaries ("ovaroids") to allow producing eggs in vitro.


I am playing with 100 TB of data. Some of it is free-form text (questions, comments, discussions), some of it is activity on that text. It is a lot of fun to see what questions people have and why some people think they know what they are talking about.


A small webapp to allow for collaborative estimation of quantities and their distributions. It's the vehicle I use to learn serverless development, but I also intend it to replace the Excel sheet I use to provide PMs with time estimates.


An open p2p protocol/engine (builds on libp2p) for decentralized storage that incentivizes data durability. It’s not Filecoin or any other existing platform, and there’s not a public testnet yet, but the project is well underway!


Programmable APIs.

An HTTP request is a function call with named arguments. What if the server had some endpoints that provide basic FP operations (map, fold, curry, …), and you let your users create their own endpoints by currying together other endpoints?
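A toy sketch of the idea, with an in-process dict standing in for the server's endpoint table (all the endpoint names here are hypothetical):

```python
from functools import reduce

# Toy endpoint registry: an endpoint is a function of named arguments.
ENDPOINTS = {
    "add": lambda a, b: a + b,
    # 'map' applies another endpoint to a list of argument dicts.
    "map": lambda fn, xs: [ENDPOINTS[fn](**x) for x in xs],
    # 'fold' threads an accumulator through a binary endpoint.
    "fold": lambda fn, init, xs: reduce(
        lambda acc, x: ENDPOINTS[fn](a=acc, b=x), xs, init),
}

def curry(name, new_name, **fixed):
    """Register a user-defined endpoint: an existing endpoint with some
    of its named arguments pre-filled."""
    base = ENDPOINTS[name]
    ENDPOINTS[new_name] = lambda **kw: base(**{**fixed, **kw})

# A user creates /add5 by currying /add with a=5.
curry("add", "add5", a=5)
```

In the HTTP version, `curry` would be an endpoint itself, so users could compose new routes entirely through API calls.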


This sounds very connected to GraphQL


Working on an offline/real-time library for agnostic backends. Most libraries force you to go with a specific vendor (like Firebase/AWS) or use a completely different backend/db (like CouchDB/PouchDB).


Voice recognition tool that toggles audio triggers to be used by Parkinson's patient for improved mobility.

Synthetic aperture radar scraper and aggregator across international public satellite providers.

And reading statistics in my free time.


Not anything interesting because I failed in my interesting endeavour!

Was trying to replicate methods from some papers and couldn’t get it to work. Seemed promising but alas. Now I’m just recovering from all that jazz.


Currently I am working on a minimalist and hackable shell with a pretty environment. The code is kind of a mess right now and I am working on a bunch of other stuff, but hopefully I'll finish it soon.


According to the National Assessment of Educational Progress, only 35% of U.S. fourth graders in 2019 could read proficiently [0]. The pandemic has only made this worse [1] [2]. Once students start to fall behind in reading, they fall further and further behind their peers each year. What is, perhaps, most criminal is that there is a mountain of research [3][4][5][6][7] about what works, and yet these approaches aren’t employed nor even well-known.

I am a former principal engineer at a FAANG/MAMAA/etc. [8] company turned independent educational researcher, and I believe I have developed a novel edtech approach to address this learning gap. I am collaborating with a few data scientists and educators, but am searching for UX designers, curriculum designers, writers, and marketers who can help make this a reality.

In brief, one of the core challenges in edtech is how to build an engaging app that actually results in learning (many apps resort to psychological tricks [9] and gamification hacks [10] that are long-term detrimental.) Although I have a high-level direction for how to achieve this, my skill set is mostly in backend development and public speaking.

If you are interested, please reach out to (my HN username) @ (Microsoft's not cold email domain acquired in 1997).

[0] https://www.nationsreportcard.gov/

[1] https://www.nationsreportcard.gov/highlights/ltt/2022/

[2] https://caldercenter.org/sites/default/files/CALDER%20Workin...

[3] https://en.wikipedia.org/wiki/Follow_Through_(project)

[4] https://www.nifdi.org/what-is-di/project-follow-through

[5] https://www.researchgate.net/profile/Russell-Gersten/publica...

[6] https://files.eric.ed.gov/fulltext/ED472569.pdf

[7] http://arthurreadingworkshop.com/wp-content/uploads/2018/05/...

[8] MANGA, GAMMA, NAAAM, M2A3, Big Tech, Big 4, Big 5, or really whatever you would like to call them.

[9] Skinner boxes, daily rewards, resource decay, loss aversion, etc.

[10] Leaderboards, points, badges, etc.


Evolving neural networks with the use of fitness predictors (co-evolution). The project has stalled a bit due to lack of time (family etc.), but overall I want to get back to hacking on it.


I've been steadily working on an automated trading system in Nim, currently for Binance only.

This is a very difficult problem: trading successfully based on rules. But it's also very interesting.


Sandboxing for high-performance rev proxies. Not WebAssembly or v8.


I am working on generating synthetic ECGs that can be conditioned on a patient while making it impossible to identify the patient given the synthetic ECG and the data.


I’m working on combining economics and information theory. Econ talks a lot about “informed markets”, but doesn’t measure that in Shannon’s bits.
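As a toy illustration of what measuring informedness in bits could look like (this is just standard Shannon entropy over a belief distribution, not the commenter's actual model):

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy of a probability distribution, in bits.
    A fully informed market concentrates belief on one outcome
    (0 bits of remaining uncertainty); maximal ignorance over n
    outcomes gives log2(n) bits."""
    return -sum(p * log2(p) for p in probs if p > 0)
```

For example, a market split 50/50 on two outcomes carries 1 bit of uncertainty; one certain of a single outcome carries 0.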


That's interesting! I don't suppose you blog or have some other way that people can follow your progress on this at all? Best of luck with it.


I’m working on equalizing commitment between employer and candidate within the hiring process to reduce time-to-fill, cost-per-hire and cost-of-vacancy metrics.


This is really interesting. What do you mean by 'equalizing commitment'?


Building behavior based software supply chain security (e.g. CI/CD security using eBPF, identifying abnormal developer / bot behavior)


A simple rank-two mixed-member proportional voting system. The math is surprisingly involved. Thanks due to Tideman and Gregory.


One of my hobbies is tax optimisation and finding and exploiting tax loopholes.

I’m a contractor, I work through my company. It’s way more tax efficient than being perm.

But, it could be even more efficient.

So, I did some research. Turns out, where I am, a business can have multiple business codes: activities the company is registered as performing. Now, I was and still am a software development company. But I also added artistic production as a business code.

Why artistic productions? Well, because it’s the most liberal business code for expenses.

As a software company, I can expense travel for some things, the most useful for me being conferences. But that means I need to buy the conference ticket even if I’m not interested and can only expense accommodations for the duration of the conference. That’s good but a bit restrictive.

Now, as an artistic production company, I'm not limited by that. As long as I can prove I traveled in order to produce, well, just about anything that can be construed as art, I can expense travel and accommodations for as long as I need. You can't rush art, you know how it is.

So, obviously, the first thing I did was whip up a blog, posted some random pictures every day and called it good. I'm not using company money to travel the world, you see; I'm using company money to produce art. Haven't paid a euro of my own money on accommodation or travel for close to a year now. And the best part: I get to deduct this from my profit.

I love the tax system. I really do.

But, doing this, I got curious. I thought, hey, why not make it better. So I thought let's see what all the rage is with DALL-E and Stable Diffusion and all that. So, what I'm doing now is changing my approach. My art will no longer be just pictures taken quickly with my iPhone. No no. I take the picture and send it to an API I built. The API runs some computer vision, labelling what it sees, some classification, some stuff with the pic metadata, tries to put it all together and turns it into a description of the picture.

The description of the picture I took is used as a prompt for Stable Diffusion, and that result ends up being the art. I'd like to think of my art as a modern commentary on the age-old "is art imitating life or is life imitating art?" At least that's what I told my accountant, and he agreed it would stand up in the case of an audit!

So, yeah, tldr, I’m working on image to text and text to image to expense my nomad lifestyle on the company.
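The caption-assembly step of a pipeline like that might look something like this. (A sketch only: the label lists, metadata keys, and prompt wording are all hypothetical, and the real vision-labelling and Stable Diffusion calls are omitted.)

```python
def build_prompt(labels, metadata):
    """Turn vision labels and photo metadata into a text prompt
    for an image-generation model."""
    subject = ", ".join(labels)
    place = metadata.get("location", "an unknown place")
    time = metadata.get("time_of_day", "daytime")
    return f"{subject}, photographed in {place} at {time}"
```

The resulting string would then be fed to the text-to-image model as its prompt.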


It makes me sad that all these spun cycles could have been put to use doing something productive if only the tax code were simplified and the IRS automatically pre-filled our taxes, as is done in other, saner countries.


Oh hey look its the "America backwards" trope again. How tiring.

Sure. The IRS could change things. But you're also able to do a lot more in America that they can't track. Other countries (e.g. Sweden, etc) are far "simpler", and homogeneous. For example, the depth at which one can trade various instruments, produce companies, etc is far far greater in America than almost anywhere else. This necessitates a somewhat (perhaps not as much as today) complicated tax code.

I would say this is very productive. The tax code isn't actually the problem. It's corporate interests like Intuit who, through regulatory capture, make it impossible to truly solve the problem. Honestly, it cannot get easier than a 1040EZ, which is what mostly everyone uses. In fact the 1040EZ is so easy you basically fill in the things to confirm the number the IRS already has is correct. OP, like myself, needs more complicated solutions. I have a fairly vast portfolio of different investment types and OP has a business. In both cases, investing time into making the IRS's life difficult pays a return on par with bonds.


Not tiring at all, when it's not a trope. This forum contains a surprising number of posts of sometimes ridiculously blind US-praising that is rooted in simple ignorance of how things run elsewhere and in the US.

Concerning taxes though, I'll have to (quasi) side with you. 1040EZ is as easy as it gets. One could of course argue that in that case, in which the IRS has the numbers already, why do you have to be forced to do your taxes at all (think Germany).


> Not tiring at all, when it's not a trope.

The greatest fallacy of the pseudo-intelligent is comparing different first world countries to each other without considering demographics. It's a fallacy you have committed, along with everyone else who says "America is backwards lol". That is why it is a trope. It has nothing to do with American exceptionalism and everything to do with a relatively poor understanding of how we arrived here.

America is a punching bag for the rest of the first world because it has problems literally no other first world country has to face. Problems too numerous to list here. Without considering the various reasons America is a Special Case (TM) in many ways, you're missing the greater point. Sure, we could have a German tax system for the simplest filers. We got to the 1040EZ because we believe in theory governments should stay out of our business. Fundamentally this is a driver of the majority of the policies in America, and when viewed from the lens of other western countries it seems backwards because every country listed in comparison has a stronger, more involved, and (in my opinion) more dangerous government. Perhaps not dangerous now, but given enough power and enough reason it could become dangerous faster than America's current government system. In fact, the unparalleled level of power corporations in America have over things like tax law parallels the level of dangerous power governments have over their citizens' taxes elsewhere. It's an iteration on the same old process of control. Missing how they're the same, it's simple to arrive at the conclusion America is the only "backwards" one. Usually this argument devolves into tax utility, which I won't get into here because that's a philosophical argument beyond the scope of the mocking of America that ALWAYS comes with this nonsense.


You should try to read and understand posts before you try to pull out your own "pseudo-intelligence". Nowhere have I written "America is backwards".

You're using 'how we arrived here' as a cheap excuse to justify a status quo that is worse than it is in other places. That's the trope of American exceptionalism right there: to somehow find consolation in 3rd world conditions through repeated 'but we are god's own country, screw that even China has a higher life expectancy'. Really, the trope here is how the proud patriots of the richest, most powerful country in the world simultaneously feel superior to everyone, yet feel butthurt and threatened by essentially anything else on this planet that doesn't exactly act/think/look like they do.

> Sure we could have a German tax system for the simplest filers. We got to the 1040EZ because we believe in theory governments should stay out of our business.

That makes no sense whatsoever. As you write yourself, 1040EZ simply lets you confirm what the government already knows. The difference to the German system is that you still have to jump through hoops, roll over, and catch the ball when the government tells you to. I guess you also see the 'Obey the speed limit' signs in Texas as manifestation of supreme liberty, contrasted to the oppressive German 'no speed limit'.

Most other Western countries have much more powerful (= effective) checks and balances, as should have become exceedingly clear from the failure of the American system to keep even an openly criminal President and his attempted coup in check. That game is still not over. Most of Europe learnt that lesson by studying what went wrong in Germany 90 years ago.

But, I mean, you do you.


Clearly you think that the government should stay out of your business. I think the opposite. I am also an American citizen. There are a large number of Americans who believe that America is backwards in many ways. The fact that you live in America means that you have to deal with this reality to some extent. You're of course free to leave high-handed comments anonymously on an internet forum, but you should recognize that that's all you're doing. Being more strident will not increase the validity of your position, nor will it reduce the number of people who disagree with you. Probably the opposite, if anything.


I’m sorry you’re tired. Are you sure you aren’t tired from wrangling with your taxes?


"A person after my own heart," I originally thought. I, too, enjoy reading HMRC technical guidance, although I haven't actually taken it to such extremes, instead focusing on genuine deductions few people tell you about (like annual medicals).

Your post was entertaining, but there might be a fly in the ointment! Are you actually carrying on a trade? Your consulting work is trade, but there are rules around businesses having multiple lines of business and what is and what is not considered a trade (see BIM20090 and possibly BIM85740). If your art is not commercially available and making more money than it "costs" in expenses, HMRC would probably determine that this line of business is a hobby with no expenses deductible.

If, however, you are selling this art, it's a fantastic wheeze, and also a quite legitimate one as you would, indeed, be a professional artist. You could, too, perhaps find ways to use said art in your trade such that it would commercially justify its creation, even at a high cost.


Haha, I must say, HMRC has got to be the most well-documented jurisdiction I've dealt with. I think you are right about HMRC. But I incorporated in one of the tax-friendly countries in Eastern Europe. The way I understood the rule, I only need to try to make an income. And I do. I have a PayPal button. But there's no rule on the cost/income ratio. I'm just spending money to try to make money, which is what everybody does. I just happen to do it with less success.


Ha, okay, if you're not dealing with HMRC, then it's a different kettle of fish! :-) That's an interesting approach I'd not thought about before, having expenses paid from an out-of-jurisdiction company.. good luck!


> Why artistic productions? Well, because it’s the most liberal business code for expenses.

> As a software company, I can expense travel for some things, the most useful for me being conferences. But that means I need to buy the conference ticket even if I’m not interested and can only expense accommodations for the duration of the conference. That’s good but a bit restrictive.

> Now, as a artistic production company, I’m not limited by that. As long as I can prove I traveled in order to produce, well, just about anything that can be construed as being art, I can expense travel and I can expense accommodations for as long as I need. You can’t rush art, you know how it is.

Clever. But given the IRS just bought more guns and ammo than many small countries I'm sure this makes your dog nervous.


It doesn’t. I’m in the EU.

There seems to be a false idea that only the USA has tax loop holes. I assure you. It’s not the case. And tax heavens are also way easier to use as a EU citizen because there’s no global income declaration required.


>tax heavens are also way easier to use as a EU citizen

I suppose it's also correspondingly easier to avoid tax Hell . . .


Did you ever explore Estonian e-Residency to open a biz and run it there? Curious your thoughts/experience about that.


It's all fun and games, but can't the auditors just challenge your "artistic production" by asking for the actual invoices proving any sort of income from it? I mean lots of costs and no related income is questionable AF. Or do you actually sell any of this "art"?


How much do you end up saving this way, through all that effort, rather than just paying the taxes?


Quite a bit actually ( for me at least ) seeing as there’s progressive taxation. I’d reckon at least 30k per year. Probably more.

I take out only 11k or so per year from the company as personal income. That's the tax-free bracket. The rest I use as expenses. And I pay myself minimum wage as well. Also tax free, but it qualifies me for national insurance :D

I love the tax system!


So you make enough and just don't really want to pay taxes, so it's a fun side hobby? Heh, that's such a foreign mentality to me, but to each their own.

Do you think the tax money serves any societal good at all? Is it worth paying any tax or would you rather avoid all of it if you could?


I will answer your questions but first I want to show you something. It’s the link at the bottom.

In the UK there's something called IR35. Without getting into too much detail, the client decides whether you fall inside or outside it. Depending on which, you pay different amounts of tax.

Not that long ago I was offered £1000 per day inside. That seemed like a really good rate to me. Or so I thought. In a month of 21 working days, that's £21k per month; how much can taxes be, right? Let's say I'd keep £17k. I mean, c'mon, £4k per month in taxes is already ludicrous. It couldn't possibly be more. Right?

Please go to the link and plug in £1000 per day. Look at what it says you get cash in hand at the end of the month and tell me, does that seem ok? Is that tax worth paying?

https://www.contractorcalculator.co.uk/insideir35contractorc...


If I read the site correctly, it's the last bit that is of particular concern. Why on earth does one need to make £220,000 per year independently in order to clear the same amount as someone making £180,000 as an employee?


Does that take into account the costs of employee benefits (like providing healthcare, worker's comp, retirement income, various insurances, accounting, payroll, etc.)? Not sure how it all works in the UK.

In the US, at least, companies provide a lot of benefits that contractors don't get and have to buy out-of-pocket... but not sure if that's what the difference comes from in this case.


I am not sure how the UK tax code works, or what I'm looking at here (the "inside" vs "outside" part is confusing). At a glance, it looks like it wants to tax you 46% of revenue (52% of profit) at £220,000/year if you're a contractor, which (it says) is the same as earning £180,100 as a regular employee.

Those numbers don't seem particularly significant either way to me, coming from the US. Our tax brackets cap out at about 40% right now, but they used to be in the high 90%s decades prior -- and that's just the federal, and then some states have a bunch of other tax burdens on top of that. But personally, I'm economically pretty left-leaning and would prefer expanded social services rather than private wealth accumulation, so I doubt I'm representative of what the average person would consider reasonable vs excessive when it comes to taxes.

That's not really the question though. I get that a lot of people (probably most?) wouldn't pay more taxes than they have to, and frankly I can hardly fault them for that. Everything from our genes to our industrialized capitalist societies encourages in-group prioritization and selfish behaviors. That's just how this world that we've made works, and even the most fervent idealists dare not dream of making the entire world pool and share income. We just ain't built that way, and that's just opening the floodgates to insane corruption.

The more interesting question, IMO, is whether ANY of it is worth paying for collaboratively. If I could design my own tax system (and have a billion minions happily working in it...) the upper bracket would be insanely high, like 99% or some such, but there would also be a high degree of choice, per taxpayer, for some portion of their funds. Like there might be a mandatory % going to roads, defense, schools, health -- basic infrastructure, broadband -- but then each taxpayer would get to choose where the rest of it goes (say, space, basic research, the arts, land trusts, energy development, whatever). Two millionaires would maybe each keep 50% of their income and spend 25% on basic infra, but be able to choose (within constraints) how the remaining 25% is spent. One might decide to split his 25% among various pet causes, the other might give all of hers to increase defense R&D. Basically a democratic Robin Hood, where an armed bandit takes your money and gives it to charity, but asks you, "Which charity?". Heh.

That's just me.

My question to you, as someone who goes out of their way to avoid paying taxes, is... how would you do it? Is some of it still worth it, after all the pork-barrel spending and corruption and inefficiency and bureaucracy? Should it all be private? Like take what you're doing, how would you ideally scale it up to country-level with a few million taxpayers?


So, the reason I sent you that link is that if you look at the numbers, that £21k per month pre-tax turns into £8k cash in hand after tax.

That is nothing short of insane to me. That is almost 3 quarters of your money taken by the government. Maybe you can live with that. I can’t. It’s insane to me. I simply refuse to participate in that.

As for taxes, I get your point. Sounds good in theory. Until you realise people like me are not that uncommon. That 25% I still have a say on is going to go to my charity. A social change sort of charity. A charity aimed at offering educational alternatives to promising young leaders. The sort that will grow up to question the justice of your tax regime and advocate for its dismantling. And with a bit of luck and a push from other rich people, give it a few decades and it will go away.

But to answer you:

1. 1% flat tax, applied to income for people and profit for businesses.

€1000 goes into your business -> you’re left with €990 -> you take them all out as dividends, you’re left with €980.1

This will provide an annual bulk income.

2. Flat transaction fee. Every time money changes hands, the government skims 0.1€ from it.

This will provide a steady income stream.

You buy a laptop, you pay €1000 for it, the vendor only gets €999.9 - This is not something operators worry about, this is done automatically by the banks.
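For what it's worth, the arithmetic of both levies is easy to sketch (function names are mine; rates and amounts are from the examples above):

```python
def flat_tax(amount, rate=0.01):
    """1% flat tax applied each time income or profit is realised."""
    return amount * (1 - rate)

def transaction_fee(amount, fee=0.10):
    """Flat €0.10 skimmed by the government per transaction."""
    return amount - fee

# €1000 of business profit, then paid out as dividends:
profit = flat_tax(1000)       # ~990
dividend = flat_tax(profit)   # ~980.1

# €1000 laptop purchase: the vendor receives ~€999.90.
vendor_gets = transaction_fee(1000)
```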

But this won’t be enough to <insert something here>. Well, tough luck. Better start prioritising. Start by cutting foreign aid. Start by cutting aid to ngos shipping migrants from africa, start by cutting welfare to illegals, get rid of the useless government bureaucracy like “period dignity officers”, plenty of stuff to cut.

Make do with less, that’s what I would tell the government.


It's not quite 3/4, is it? More like 1/2? And that's if you're earning 1000/day (sorry, can't type that currency symbol)... which is like 10x the UK median wage. At lower revenues you're paying much less. But still, your point stands -- it's taking money away from you that you want to keep and spend your own way. Whether that exact percentage is 46% or 66% or whatever doesn't seem quite relevant if you'd rather it be as close to 1% as possible :)

Are there any examples of a system like you're describing working out in reality? Doesn't have to be a country, but maybe a local government, a private community, a membership club, etc.?

Sounds like a libertarian fantasy out of Atlas Shrugged or Bioshock... but hey, we all have our dreams, and my utopia would be your dystopia, lol.


If you’re ever audited, the tax man may find objections to lots of expenses billed for activity X when all your income is from activity Y.

They’ve seen all these schemes before.


Don't you ever feel bad for not contributing to society? Living like that seems morally reprehensible to me.


He probably still pays more than an average person in absolute terms, and almost certainly more than he uses in public services. I bet society is better off with him being around than otherwise.


Not even a moment.


this is absolutely bonkers, but one of the more unique things i've read on here. i'm sure there are a ton of tax loopholes that exist, but how do you find these? just reading the IRS website?


The IRS equivalents of the jurisdictions I fall under, talking to friends, talking to people who are into this, forums. I'm lucky to have a very good accountant who often gives me suggestions and who is happy to validate or invalidate my crazier ideas. The same way you get into any hobby, I guess.


How does reality work?

How could it be changed in a substantial, positive, intentional way, ideally in a surprisingly short time frame?


I'm genuinely interested in this. Which aspect of reality? And when you say "positive", from whose perspective or within which frame are you defining it?


OP’s HN comments likely have examples:

https://news.ycombinator.com/threads?id=mistermann


> Which aspect of reality?

All aspects. Reality itself.

> And when you say “positive”, from whose perspective or within which frame are you defining it?

All of them. Reality itself.


3D solar cells, for lack of a better term.


What would the application be?


Recreating already existing legacy apps / plumbing apps to talk to other apps. /s


For my last few positions, I've been focused on working with restaurants and trying to build technology to help them run their business more effectively.

The dirty secret about the restaurant industry is that most independent restaurants are very close to bankruptcy, and the people running them often do not know that until it's too late.

There was a thread a few days ago about the true cost of making hardware devices often being less than 30% of the retail price. From a business perspective, the same should apply to every item on your menu as well. Except more often than not, the person running the restaurant often has no idea how much a meal costs to serve.

The real issue is that people start restaurants not because they want to run a business, but because they like food. In a way it is a good thing that they don't approach it with a capitalist mindset, as I don't think we'd have the same variety of food if everyone did, but more often than not they are running blind and just hoping for the best. They put in long hours and a lot of their own capital, and if they are lucky they can survive a few years by being able to pay their staff, suppliers and rent, and finally being able to afford to pay themselves enough to personally survive on.


Implement some existing C++ with GDNative.

Generate a planar graph, preferably without a Delaunay triangulation.
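One Delaunay-free approach (not necessarily what they have in mind) is to start from a grid graph, which is always planar, and randomly drop edges; any subgraph of a planar graph is still planar. A minimal Python sketch:

```python
import random

def grid_planar_graph(w, h, keep=0.7, seed=42):
    """Build a planar graph from a w x h grid, keeping each edge with
    probability `keep`. Subgraphs of planar graphs remain planar."""
    rng = random.Random(seed)
    nodes = [(x, y) for y in range(h) for x in range(w)]
    edges = []
    for x, y in nodes:
        for dx, dy in ((1, 0), (0, 1)):  # right and down neighbours only
            nx, ny = x + dx, y + dy
            if nx < w and ny < h and rng.random() < keep:
                edges.append(((x, y), (nx, ny)))
    return nodes, edges

nodes, edges = grid_planar_graph(4, 3)
```

The trade-off versus Delaunay is that the layout is rigid; you can jitter the node coordinates afterwards without affecting planarity, since only the edge set matters.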


Investigating how disorder enhances transport in a new solid-state material.


Working on a Kafka clone in Go. Currently adding consumer groups.
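Not this project's code — just an illustrative Python sketch of the round-robin partition assignment a Kafka-style group coordinator performs on each rebalance (the member and partition names are made up):

```python
def assign_round_robin(partitions, members):
    """Distribute topic partitions across consumer-group members the
    way a Kafka-style group coordinator does on each rebalance."""
    ms = sorted(members)
    assignment = {m: [] for m in ms}
    for i, partition in enumerate(sorted(partitions)):
        assignment[ms[i % len(ms)]].append(partition)
    return assignment

# e.g. five partitions of an "events" topic shared by two consumers
assignment = assign_round_robin([f"events-{p}" for p in range(5)], ["c1", "c2"])
```

Sorting both lists first is what makes the assignment deterministic, so every member can compute the same answer independently after a rebalance.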


Would be interested in contributing if it's open source.


Still early stages, but building on top of this: https://github.com/travisjeffery/jocko

Since the author has moved on to other projects, I decided it would be an interesting challenge.


Can someone work on improving the brain's NGF?


How to spend less time on Hacker News :-D


I'm not as smart as the other folks in this thread, but I've been lucky enough to work on a few fun, if small, web projects...

One was an indoor web map built on an entirely open-source stack (QGIS and OpenLayers): https://map.fieldmuseum.org/ It was an interesting challenge because the open-source web mapping libraries (and many/most of the commercial ones too) were designed for outdoor use and didn't really account for indoor idiosyncrasies. Take rooms and doors, for example... while you can import vector shapes (SVGs, GeoJSONs) into these mapping libraries, you can't easily indicate "this shape is solid except for these two doors" -- whether for visuals or for an eventual turn-by-turn routing/navigation layer. Another challenge was the concept of "floors": overlapping groups of geometry (walls, balconies, whatever) and points of interest that are mutually exclusive. We ended up having to trace floors against each other in QGIS (map-editing software) and then adding state at the JavaScript level to store each floor as a separate object to render or hide. Then there were a bunch of UX/UI tweaks we had to do, like how to draw arrows for Covid one-way flows, how zooming should work (we categorize POIs according to level of importance, then show/hide them at different zooms, and also dynamically scale fonts to ensure readability), which geometries to make clickable (a sidebar opens with pics and more details), and how to hide easter eggs.
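The floor-exclusivity idea they describe can be modelled very simply: keep each floor's features as a separate collection and render only the active one. A hypothetical Python sketch of that state machine (their real app keeps this in JavaScript against OpenLayers layers):

```python
class FloorMap:
    """Holds per-floor geometry; only one floor is visible at a time."""

    def __init__(self, floors):
        self.floors = floors              # e.g. {"ground": [features...], ...}
        self.active = next(iter(floors))  # first floor is shown by default

    def show_floor(self, name):
        # Switching floors implicitly hides every other floor's features.
        self.active = name

    def visible_features(self):
        return self.floors[self.active]

m = FloorMap({"ground": ["hall", "shop"], "upper": ["balcony"]})
m.show_floor("upper")
```

The point of keeping floors as separate objects rather than one merged layer is exactly the mutual exclusivity: showing a floor is one state change, not a per-feature filter.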

The whole thing was written as a vanilla JS single-page app tied to a headless CMS (Contentful at first, eventually DatoCMS) so that editors could easily change copy, graphics, etc. But editing geometries (the shapes and positions of things) still required QGIS knowledge or at least the ability to manually edit geoJSON files.

We launched this as a MVP with the intent of rapidly following up with additional features (blue-dot positioning, turn-by-turn routing, audio directions, a better codebase, etc.) At the end of the project we ended up open-sourcing it (after much begging and pleading with The Powers That Be), but then soon thereafter abandoned it altogether :( Just as well, really, because the code was really terrible -- I can say that with confidence because I wrote it, lol. But it's still an interesting problem space. As far as I know, there isn't a ready-built solution for this sort of stuff... especially not an open-source/source-available one. Some commercial solutions have limited indoor support, but as of our last evaluation (early 2021), none of them were especially powerful, elegant, or user-friendly. Hope that keeps evolving!

-----------

Fast-forward to my current job, and we're now working on a similarly graphical frontend app. I now work for a solar company, and we're designing a map-like tool to help installers plot out the PV modules (the technical term for a single "solar panel") on their customers' roofs, which then enables historical monitoring of various metrics like power output, temperature, etc.

Aside from the graphical challenges (rendering React states to Canvas shapes), there were also some interesting frontend engineering challenges, like how to plot tens of thousands of points in a chart while allowing real-time scrubbing in a timeline, but where no two datapoints were ever at the same time, because the devices we were working with all communicated in a one-at-a-time queue.
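One common way (not necessarily theirs) to keep a timeline scrubber responsive with tens of thousands of points is min/max bucket downsampling, which shrinks the dataset while preserving spikes. A hedged Python sketch:

```python
def minmax_downsample(points, buckets):
    """Reduce a list of (t, v) points to at most 2 per bucket, keeping
    each bucket's min and max so spikes survive the downsampling."""
    if len(points) <= 2 * buckets:
        return list(points)
    out = []
    size = len(points) / buckets
    for b in range(buckets):
        chunk = points[int(b * size):int((b + 1) * size)]
        lo = min(chunk, key=lambda p: p[1])
        hi = max(chunk, key=lambda p: p[1])
        out.extend(sorted({lo, hi}))  # keep time order; drop dup if lo == hi
    return out

# 10,000 synthetic (time, value) samples squeezed into ~100 plotted points
data = [(t, (t * 37) % 101) for t in range(10_000)]
small = minmax_downsample(data, 50)
```

Since no two of their datapoints share a timestamp, bucketing by index like this is equivalent to bucketing by time once the points are sorted.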

Anyway, I won't get too much into the details here... nothing that exciting, just a lot of nuance. What we're building is like a super-simplified version of https://www.opensolar.com/3ddesign (which is awesome! check it out).

---------

But overall, I just love that I get to work on these sorts of fun apps instead of bog-standard blog pages or ecommerce sites. Even a few years ago, things like this would've been complex desktop apps with difficult lifecycles to manage... now they can be web apps that are updated multiple times a week and evergreen everywhere, to all our customers. Pretty cool stuff!

I feel very lucky to be in my line of work -- web dev for nonprofits and small/medium businesses in interesting spaces, who have challenges unique to their industries, not just "how do we optimize this database" and the like (though there is that too). It was easy enough to pick up these technologies and frameworks in just a few months or years. I'm very grateful that this is a career option for people now... I'm a high school dropout who eventually went to a natural resources school, but couldn't find a job in that industry so picked this up instead. It's all entirely self-teachable (with tutorials and Stack, of course).


I am working on giving exploratory software testing a solid scientific foundation in cognitive science, sociology, and general systems theory. This has been my project for 25 years.

It is my way of providing ammunition for those who uphold humanism against the technocrats who think they can automate testing.


perpetual motion machine implemented with gravity and magnets


That's interesting. I had some thoughts about ferrofluids and magnets as a way to build perpetual motion.


Embodied AI / multi-modal AI. Specifically I've been working on a hardware device - nicknamed "Bosworth" - that will be packed with sensors of various sorts:

- a GPS receiver

- a 9-DOF magnetometer/accelerometer/gyroscope board

- temperature sensors

- microphones

- cameras

- etc

and a processor, which can sense the physical world, and - possibly - facilitate learning in a manner somewhat akin to the way an infantile human learns. All of this is rooted in my belief that while embodiment may not be strictly necessary for AGI, it is probably very useful. And the reason I believe that is because I believe an awful lot of our early development is experiential and rooted in experiencing the physical world. That is, we develop our naive / intuitive understanding of physics ("there is up/down and if I drop something it falls") and our naive metaphysics and epistemology of the world ("there are objects and objects persist even when I can't see them briefly", etc) through what we see and hear and feel. So I want to try to simulate that aspect somewhat, but without necessarily going "full robot". That is, my current hardware platform can sense, but it isn't ambulatory and it doesn't currently have any ability to manipulate or affect anything in the physical world, except by generating noise (speech) and possibly by blinking LED's.

Anyway, the GPS stuff is all setup and now I'm working on code for the accelerometer board. I'm using one of those Adafruit BNO055 boards for that. Currently working on getting the sensor readings off of it using I²C.

At present the model has individual code modules for the various sensor types, and new sensor events go to two places:

- sent via MQTT to persistent storage on a server. This is to facilitate post-hoc analysis, offline training of ML models, and possibly replay scenarios.

- turned into a percept and sent to a "percept queue" using ActiveMQ. This all happens on the processor (a Raspberry Pi) embedded in the hardware platform.

Then code modules listen on the percept queue for incoming percepts and then react. The initial idea is to use a very SOAR like "cognitive loop" as the primary "thing" that listens and responds to the percepts. But that will probably change over time.
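An in-process sketch of that pattern, with Python's `queue.Queue` standing in for ActiveMQ and a plain dict as the percept (the real system runs this over MQTT/ActiveMQ on the Pi; the sensor names and values here are made up):

```python
import queue

# Shared percept queue; ActiveMQ plays this role in the real system.
percepts = queue.Queue()

def publish(sensor, value, ts):
    """A per-sensor module wraps its raw reading as a percept
    and puts it on the shared queue."""
    percepts.put({"sensor": sensor, "value": value, "ts": ts})

def cognitive_step():
    """One turn of a SOAR-style cognitive loop: pull the next
    percept and react to it."""
    p = percepts.get_nowait()
    return f"reacting to {p['sensor']}={p['value']}"

publish("gps", (21.3, -157.8), 0.0)
publish("imu", (0.0, 0.0, 9.8), 0.1)
```

The nice property of the queue-in-the-middle design is that the sensor modules and the cognitive loop stay decoupled: either side can be swapped out (or replayed from the MQTT archive) without the other noticing.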

Very likely it will eventually turn into something inspired by a combination of things:

- the old blackboard architecture approach

- Minsky's "society of mind" stuff

- the BDI (Beliefs-Desires-Intentions) model

- etc.

Not to mention, of course, elements of very contemporary approaches to Machine Learning - Deep Neural Networks, Reinforcement Learning, etc.

When I get to the vision and audio stuff it gets more interesting, because there will need to be more initial processing of the audio and video streams, and more front-end intelligence to decide what counts as a "percept". I mean, you wouldn't send every video frame or every audio sample. So now you get into figuring out what kind of delta counts as a "change" that's worthy of generating a percept and possibly attracting the attention of the system. And now you get into the multi-modal part, because think about how human interactions work. If you want someone's attention you may approach them and say "excuse me", OR you might approach, say "excuse me" AND tap them on the shoulder, etc. So attention might involve physical touch, sound, vision, etc. all combined. So lots of interesting stuff to explore around how to deal with all of that.
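A toy version of that gating decision, using mean absolute difference between consecutive frames against a threshold (the real cutoff and features would need tuning, and a production system would use something smarter than raw pixel deltas):

```python
def is_percept_worthy(prev_frame, frame, threshold=10.0):
    """Emit a percept only if the frame changed 'enough' since the last
    one, measured as mean absolute per-element difference."""
    delta = sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)
    return delta >= threshold

# Toy 64-sample "frames": one identical, one with half its samples shifted
quiet = [100] * 64
noisy = [100 + (20 if i % 2 else 0) for i in range(64)]
```

The same gate generalizes across modalities: each sensor module picks its own distance function and threshold, and only deltas that clear the bar become percepts competing for the system's attention.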

An interesting aside to this is that I get to give Bosworth senses that humans don't have. For example, with the GPS board, Bosworth can "know" its location on the surface of the Earth to a fairly reasonable level of precision, at all times, even if blindfolded and transported elsewhere. Likewise it will know the precise time at all times. And Bosworth will also "know" which direction magnetic north is at all times, and its velocity, etc. Now, will having those abilities be useful, or will this just confuse the issue? Who knows? But exploring that is part of what makes this fun.


I'm working on a people-centric search engine. We're looking for more people to join us -- particularly React developers. josh@sirchit.com

https://sirchit.com/


The jobs page doesn't show anything.



