
Seems that we already have value classes in the EA builds :-)


Don't be confused by the JDK 23 EA label: it doesn't mean this is coming to regular JDK 23 as a preview; rather, that's the upstream the non-Valhalla stuff is based on.


Yes, in the Valhalla builds of course :-) That said I hope we'll see the value classes at least in Java 25.


I once implemented the backend of a calendar and resource control for a low-code platform.

The control is highly customizable, with a lot of views to choose from (daily, monthly, yearly...), but also resource views (you can book resources with custom groupings: by plugin, by resource ID, whatever...). You define "plugins" on the data sources: what the from and to columns are, the title column, and what the resource is (it may come from a foreign key / 1:1 relationship, or 1:N if it's from a "child" data source, or from the same data source/table).

Furthermore, I implemented different appointment series to choose from (monthly, weekly (with which weekdays), daily...), including which column values should be copied. There are also appointment conflicts (or conflicts only if the appointments book the same resource), and you can configure buffers before and after appointments in which no other appointment may be placed.
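Just to illustrate the buffer-aware conflict check, here is a minimal Java sketch with made-up names (not the actual platform code):

    import java.time.Duration;
    import java.time.Instant;

    // Minimal sketch (not the actual platform code): two appointments conflict
    // when their intervals, each extended by configurable buffers before and
    // after, overlap -- optionally only if they book the same resource.
    record Appointment(Instant from, Instant to, String resourceId) {

        static boolean conflicts(Appointment a, Appointment b,
                                 Duration bufferBefore, Duration bufferAfter,
                                 boolean sameResourceOnly) {
            if (sameResourceOnly && !a.resourceId().equals(b.resourceId())) {
                return false;
            }
            Instant aFrom = a.from().minus(bufferBefore);
            Instant aTo = a.to().plus(bufferAfter);
            Instant bFrom = b.from().minus(bufferBefore);
            Instant bTo = b.to().plus(bufferAfter);
            return aFrom.isBefore(bTo) && bFrom.isBefore(aTo);
        }
    }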

That was a lot of fun, and sometimes also a challenge regarding time zones and summer/winter time in Europe and so on :-)


So, I think in my local bubble no one is, for instance, as excited about DB systems as I am. In essence, I thought I could even spend some money to get some expert opinions, or rather insights into what I'm struggling with (currently, for instance, the bad throughput of my immutable OSS DBS). That said, I don't think anyone so far wanted money, and some even offered help, but so far I think they didn't have time and thus didn't answer any "pings". As I can't spend too much time (and of course not too much money) on profiling and debugging right now, it's kind of a dilemma, because it would IMHO be very interesting to know what's slowing down N read-only transactions in my system :-) A couple of years ago I also asked for help with a frontend without much luck. I guess it has to have some value of course, so maybe at least spending some money (even if it's a non-profit spare-time project of 11 or more years) should be OK :-)

Guess I just felt a bit frustrated...


That's understandable. The thing about asking others for help is that if it's something that will require more than a small amount of time or effort, then it has to be either a friend who is willing to sacrifice for you, or someone who is really into this stuff. The latter is more rare than the former.

You certainly can hire a contractor to help you out, but that's not going to be cheap. If you can afford the time, I think the best approach is to study up and achieve the level of expertise that you need for the task. You'll gain on two counts this way: you'll solve the issue at hand, and you'll have a new skill in your collection that you can leverage in other ways and on other projects.


Throughput. The code can be "suspended" on a blocking call (I/O, where a platform thread would usually be wasted, as the CPU has nothing to do during that time), so the platform thread can do other work in the meantime.
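A minimal illustration with Java 21 virtual threads (my own sketch, not from the discussion): each task blocks (here simulated with sleep), the virtual thread is unmounted from its carrier platform thread while it waits, and that platform thread services other tasks in the meantime:

    import java.time.Duration;
    import java.util.concurrent.Executors;
    import java.util.stream.IntStream;

    public class VirtualThreadDemo {
        public static void main(String[] args) {
            // Each task blocks; the virtual thread is unmounted from its carrier
            // (platform) thread while blocked, so a small pool of platform threads
            // can service many thousands of concurrent tasks.
            try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
                IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofSeconds(1)); // blocking call
                        return i;
                    }));
            } // close() waits for all submitted tasks to finish
        }
    }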




We're using a similar trie structure as the main document (node) index in SirixDB[1]. Lately I got some inspiration from ART and HAMT for different page sizes, basically for the rightmost inner pages: as the node IDs are generated by a simple sequence generator, all inner pages (we call them IndirectPage) except the rightmost ones are fully occupied, and the tree height adapts dynamically to the size of the stored data. Currently each IndirectPage always stores 1024 references to child pages, but I'll experiment with smaller sizes, as the inner pages are simply copied for each new revision, whereas the leaf pages storing the actual data are versioned themselves with a novel sliding snapshot algorithm.

From the unique 64-bit nodeId each data record is assigned, you can simply compute the page and reference to traverse at each level of the trie through some bit shifting.
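Roughly like this (a sketch with assumed constants: 1024 references per inner page, i.e. 10 bits per level, and a fixed height passed in; the real code adapts the height dynamically):

    // Minimal sketch (assumptions, not the actual SirixDB code): 1024 (2^10)
    // references per IndirectPage, i.e. 10 bits per level, and a fixed height.
    final class TrieAddressing {
        static final int BITS_PER_LEVEL = 10;              // log2(1024)
        static final int MASK = (1 << BITS_PER_LEVEL) - 1; // 0x3FF

        // Child-reference index to follow at each level, derived from the nodeId.
        static int[] referenceIndexes(long nodeId, int height) {
            int[] indexes = new int[height];
            for (int level = 0; level < height; level++) {
                int shift = (height - level - 1) * BITS_PER_LEVEL;
                indexes[level] = (int) ((nodeId >>> shift) & MASK);
            }
            return indexes;
        }
    }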

[1] https://github.com/sirixdb/sirix


The technique is also described in the awesome book "Crafting Interpreters" by Robert Nystrom.


Which is probably not a coincidence, since the article was written by the same Robert Nystrom.


I saw the title and it reminded me that I wanted to work through that book. Was thoroughly unsurprised when I clicked through and saw his face.


Super interesting read about COW, log-structures and a mix thereof (CObW -- copy on bounded writes).


I think it depends, but I wonder if anything can be done about the problem of checked exceptions in lambdas, for instance in streams. I think the enhanced switch with failure handling is only part of the solution, but I'm also a proponent of having only unchecked exceptions.
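For example (my own minimal sketch with a hypothetical readAll helper): Files.readString throws the checked IOException, but Function.apply declares no checked exceptions, so the lambda can't be passed to map() as-is and has to wrap the exception:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    final class ReadFiles {
        // The usual workaround: wrap the checked exception into an unchecked one
        // inside the lambda, losing it from the method signature.
        static List<String> readAll(List<Path> paths) {
            return paths.stream()
                .map(p -> {
                    try {
                        return Files.readString(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                })
                .toList();
        }
    }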


I think the Java team is getting there. Everything in the syntax space they have worked on has moved towards a functional representation of data. Once you have a uniform way to describe non-uniform types (sum types, Rust enums, etc.), adding exception support to the JDK seems trivial.

I've actually written small streaming libraries which also support exceptions, but the problem is that this only works well if you need to support a single exception type.
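Something along these lines (a sketch with hypothetical names, not the actual library): the checked exception is carried as a single generic parameter, which is exactly why it only works well for one exception type, since each additional thrown type would need its own type parameter and Java can't abstract over "any list of checked exceptions":

    // Sketch: one checked exception type threaded through as a generic parameter.
    @FunctionalInterface
    interface ThrowingFunction<T, R, E extends Exception> {
        R apply(T t) throws E;
    }

    final class ThrowingStreams {
        // The checked exception E stays visible in the caller's signature.
        static <T, R, E extends Exception> java.util.List<R> mapAll(
                java.util.List<T> input, ThrowingFunction<T, R, E> fn) throws E {
            var out = new java.util.ArrayList<R>(input.size());
            for (T t : input) {
                out.add(fn.apply(t));
            }
            return out;
        }
    }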

As far as I'm concerned, I like checked exceptions because they improve discoverability and documentation and make exceptional control flow more visible when reading code (including in code reviews).

