
It is simpler if you use the notation [x]_a^b (i.e. with a subscript a and a superscript b) to mean x, clipped to the range a to b, and skip writing +/- infinity if you don't intend clipping on one side.

Then you get a bunch of obvious identities like [x]^b = min(x, b) = [b]^x (x capped by b is the same as the smaller of x and b which is the same as b capped by x), [x]_a^b = [b]_a^x, and [x]_a^b = [[x]_a]^b. Putting these together you get [x]_a^b = [[x]_a]^b = min(max(x, a), b). But honestly it's just easier to stick to the notation most of the time.
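For anyone who prefers code to notation, here's a minimal Python sketch of the same idea (the clip helper and the example values are mine, purely for illustration):

    def clip(x, a=float("-inf"), b=float("inf")):
        """[x]_a^b: x clipped to the range [a, b]."""
        return min(max(x, a), b)

    assert clip(5, b=3) == min(5, 3) == 3                  # [x]^b = min(x, b)
    assert clip(1, a=2) == max(1, 2) == 2                  # [x]_a = max(x, a)
    assert clip(7, 2, 4) == clip(clip(7, a=2), b=4) == 4   # [x]_a^b = [[x]_a]^b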

A better write-up, for everyone who doesn't like reading new math notations inline: https://imgur.com/gallery/593QEow (Imgur link with white background) https://quicklatex.com/cache3/71/ql_46c49ac709b3789482d0736d... (Original link - renders badly in Chrome due to PNG transparency)


Those are all valid C.


100% agree. People now use “innovate” and “invent” interchangeably, typically reaching for the fancier-sounding one because they want to impress people with long words. They are not interchangeable though. Invention is the initial spark that produces the first version; innovation is the polishing of the next n versions. The iPhone 1 is an invention, and every iPhone after that is an innovation.

Now, the iPhone 1 didn’t do very much, and often there is far more value in the innovation than there was in the original invention. But you don’t get the innovation without first inventing something that didn’t previously exist.

Sadly, using words incorrectly seeps into thoughts, and affects reasoning. Because these words have been conflated, organizations are typically no longer able to reason about invention and innovation correctly, and are uninterested in inventing as a result. I would argue we see this in the lack of new underlying technological inventions after the 90s. It is like we have eaten our own seed corn. Very sad.


Loman author here. Thank you very much for the mention. Amazed that I never heard of Athena or pixie graphs. Our intention with Loman was to create a library scoped for a single process - we looked at the possibility of creating a system responsible for executing much larger graphs on a real-time ongoing basis, but it felt like a larger project than we'd be able to execute well. It sounds like Athena was that, and it worked well, subject to being a culture shock for people coming into it?


A similar library from another asset manager: https://github.com/man-group/mdf. MDF seems to work at the level of timeseries rather than scalar values, though.


I'm a big fan of Graphviz. My old team created and open-sourced a library called Loman [1], which uses DAGs to represent calculations. Each node represents a part of the calculation and contains a value, similar to a cell in a spreadsheet, and Loman tracks what is stale as you update inputs. Loman includes built-in support for creating diagrams using Graphviz. In our quant research we have found that invaluable when revisiting old code, as it allows you to quickly see the structure and meaning of graphs with hundreds of nodes, containing thousands of lines of code.

We've found it quite useful for quant research, and in production it works nicely because you can serialize the entire computation graph, which gives an easy way to diagnose what failed and why in hundreds of interdependent computations. It's also useful for real-time displays, where you can bind market and UI inputs to nodes and calculated nodes back to the UI - some things you want to recalculate frequently, whereas some are slow and need to happen infrequently in the background.

[1] Github: https://github.com/janushendersonassetallocation/loman

[2] Docs: https://loman.readthedocs.io/en/latest/

[3] Examples: https://github.com/janushendersonassetallocation/loman/tree/...
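A minimal sketch of what this looks like in practice, along the lines of the quickstart in the docs [2]. The node names and numbers here are made up, and draw() is written from memory, so check the docs for the exact API:

    from loman import Computation

    comp = Computation()
    comp.add_node('price')                                     # input node
    comp.add_node('qty')                                       # input node
    comp.add_node('notional', lambda price, qty: price * qty)  # computed node
    comp.add_node('fee', lambda notional: 0.001 * notional)

    comp.insert('price', 101.5)
    comp.insert('qty', 200)
    comp.compute_all()
    print(comp.value('notional'), comp.value('fee'))           # 20300.0 20.3

    comp.draw()   # renders the DAG via Graphviz, e.g. inline in a Jupyter notebook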


My team has a similar library called Loman, which we open-sourced. Instead of nodes representing tasks, they represent data, and the library keeps track of which nodes are up-to-date or stale as you provide new inputs or change how nodes are computed. Each node is either an input node with a provided value, or a computed node with a function to calculate its value. Think of it as a grown-up Excel calculation tree. We've found it quite useful for quant research, and in production it works nicely because you can serialize the entire computation graph, which gives an easy way to diagnose what failed and why in hundreds of interdependent computations. It's also useful for real-time displays, where you can bind market and UI inputs to nodes and calculated nodes back to the UI - some things you want to recalculate frequently, whereas some are slow and need to happen infrequently in the background.

[1] Github: https://github.com/janushendersonassetallocation/loman

[2] Docs: https://loman.readthedocs.io/en/latest/

[3] Examples: https://github.com/janushendersonassetallocation/loman/tree/...
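To make the staleness tracking and serialization concrete, here is a rough sketch. The state-inspection and write_dill calls are written from memory, so treat the exact names as assumptions and check the docs:

    from loman import Computation, States

    comp = Computation()
    comp.add_node('spot')
    comp.add_node('vol')
    comp.add_node('px', lambda spot, vol: spot * (1 + vol))   # stand-in for a real pricer

    comp.insert('spot', 100.0)
    comp.insert('vol', 0.2)
    comp.compute_all()

    comp.insert('spot', 101.0)                  # new market data arrives
    print(comp.state('px') == States.UPTODATE)  # False: the downstream node is now stale
    comp.compute_all()                          # recomputes only what is out of date

    comp.write_dill('pricing_graph.dill')       # snapshot the whole graph for later diagnosis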


The ease of the calculation tree in Excel, versus having to keep track of which cells in a notebook you have updated, was a large part of why we built and open-sourced Loman [1]. It's a computation graph that keeps track of state as you update data or computation functions for nodes. It also ends up being useful for real-time interfaces, where you can just drop what you need at the top of a computation graph and recalculate what needs updating, and for batch processes, where you can serialize the entire graph for easy debugging of failures (there are always failures eventually). We also put together some examples relevant to finance [2].

[1] https://loman.readthedocs.io/en/latest/user/quickstart.html

[2] https://github.com/janushendersonassetallocation/loman/tree/...


I had the same trouble with order dependence as notebooks got to a certain size, so my team and I created and open-sourced a library, Loman [1][2], to help with that. It allows you to interactively create a graph, where nodes represent inputs or functions, and it keeps track of state as you change or add inputs and intermediate functions and request recalculations. Our experience has been broadly positive with this way of working. As graphs get larger, it's easy to lift them into code files in libraries, while continuing to modify or extend them in notebooks. The graph structure and visualization make it easy to return to Loman graphs with up to low hundreds of nodes, which would make for a fearsome notebook otherwise. It also makes it easy to bolt Qt or Bokeh UIs onto them for interactive dashboards - just bind UI widgets and events to the inputs, and widgets to the outputs. Graphs can be serialized, which is useful for tracking exceptions in intermediate calculations when we put them in Airflow to run periodically, as you can see all the inputs to the failing calculation, and its upstreams.

[1] Github: https://github.com/janushendersonassetallocation/loman [2] Quickstart/Docs: https://loman.readthedocs.io/en/latest/user/quickstart.html
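As a concrete example of the notebook workflow: redefining a node replaces its function and marks everything downstream stale, so there is no hidden dependence on cell execution order. A rough sketch, assuming a repeated add_node call overwrites the earlier definition:

    from loman import Computation

    comp = Computation()
    comp.add_node('raw')
    comp.add_node('clean', lambda raw: [x for x in raw if x is not None])
    comp.add_node('total', lambda clean: sum(clean))
    comp.insert('raw', [1, 2, None, 3])
    comp.compute_all()
    print(comp.value('total'))   # 6

    # In a later cell: tweak the cleaning step; only 'clean' and 'total' need recomputing.
    comp.add_node('clean', lambda raw: [x for x in raw if x is not None and x > 1])
    comp.compute_all()
    print(comp.value('total'))   # 5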


My team has been working on a Python library called Loman that represents computations as graphs. We've open-sourced it [1][2]. One of our aims is to make it as natural as possible to use graph-based programming, within an already-familiar programming language. I'd be interested to know what you think.

[1] https://github.com/janusassetallocation/loman [2] http://loman.readthedocs.io/en/latest/user/intro.html


Can you demo your library with a more complex example, e.g. the Dining Philosophers Problem? Here[1] is the solution using TBB, and here[2] is a more recent version, using a multi-output function node to optimize the flow.

[1] - https://software.intel.com/en-us/blogs/2011/01/10/using-the-...

[2] - https://software.intel.com/en-us/blogs/2011/09/13/using-inte...


Thanks for the links. I took a look, and I think that the intention is quite different between the libraries. Our library would not directly apply to the Dining Philosophers Problem. Both libraries use graphs to represent dependencies between tasks, but they do so for different reasons, and to cover different uses. The Intel library does it with the intention of scheduling a given workload. Our library uses a directed acyclic graph to track state as either the data or function for given nodes of the graph are exogenously updated, either interactively during research, or from new incoming data in a real-time system. We cover where we think our library is useful in more depth in the introduction section of our documentation[1].

[1] http://loman.readthedocs.io/en/latest/user/intro.html


Hi Jon,

Interesting idea. Seems like it'd be a good novelty gift for my family to get me for example.

I guess you are already thinking this way, but it seems fairly natural to offer a birthday card, and maybe a range of other geek products around this.

On the product itself, it might be good to do alphabetical sudokus also - for 16x16 sudokus this could lead to some interesting message possibilities perhaps? Also, are there any other puzzles that lend themselves to this sort of customization - wordsearch perhaps?

I guess those are the two ways I'd consider expanding on an appealing starting idea.

Good luck, and let us know when you launch.

Best, Ed.


Thanks Ed. Yes, I'd started looking at e.g. Cafe Press for a way to produce physical goods like cards, mugs, mouse mats etc.

I hadn't thought of letters or the 16x16 versions. Also I guess if I do word-search and other puzzles then there's potentially enough to print a custom book(let) of puzzles.

Thanks again for the feedback.


I second this idea. Put the puzzle on something tangible, like a shirt, a cup, a birthday card, etc. Then it would make a more meaningful birthday gift. Manufacturing, of course, is another question...


I think that one approach that may yield domain-specific improvements would be to add certain numerical routines into the x86 instruction set.

When I was working in finance as a quant, I was shocked by the amount of time code spent executing the exponential function - it is used heavily in discount curves and similar constructs, which are the building blocks of much of financial mathematics. An efficient silicon implementation would have yielded a great improvement in speed.
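To give a sense of scale: a discount curve is essentially a table of factors D(t) = exp(-r(t) * t), evaluated at every cashflow date, so even a modest portfolio calls exp thousands of times per revaluation. A toy illustration (the rates and tenors here are made up):

    import numpy as np

    # Toy discount curve: continuously compounded zero rates at yearly tenors (made up).
    tenors = np.arange(1, 31)                          # years
    zero_rates = 0.02 + 0.001 * np.sqrt(tenors)
    discount_factors = np.exp(-zero_rates * tenors)    # D(t) = exp(-r(t) * t)

    # PV of a 100-per-year annuity: one exp call per cashflow, repeated across
    # thousands of instruments and scenarios in a real risk run.
    pv = np.sum(100 * discount_factors)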


CRC32 instructions: http://www.strchr.com/crc32_popcnt

String processing instructions in SSE4.2: http://www.strchr.com/strcmp_and_strlen_using_sse_4.2

AES encryption instructions: http://en.wikipedia.org/wiki/AES_instruction_set

So, if you didn't know about those, give yourself a point, because you nailed it. (No sarcasm.) There's a definite trend there.


It can definitely help in certain domains, but adding special-case instructions in silicon can sometimes complicate a chip design enough that it slows it down overall. The trend for a while was in the other direction, towards not implementing in silicon things that were even already in the x86 instruction set, like the transcendental arithmetic functions, and doing them in microcode instead (the "RISCification" of x86 processors). It's possible that trend is now reversing, though.

