Graph theory, graph convolutional networks, knowledge graphs (albertazout.substack.com)
211 points by simonebrunozzi on Feb 7, 2021 | 15 comments



This post is a good example of the initial realization that folks experience with graphs as data structures. I can see the author exclaiming, "Graphs are an intuitive way to express knowledge!" Graphs as data structures can be more nuanced than is usually discussed.

Hypernode graphs seem to be very applicable to knowledge graphs, though they have different names depending on the discipline -- nested state machines, statecharts [1], UML hierarchically nested states [2].

I recently participated in a lecture series on Applied Category Theory [3] that touched on the underlying math of composition. The course focused on optimization of design choices in complex decision spaces.

[1] https://statecharts.github.io/

[2] https://en.wikipedia.org/wiki/UML_state_machine#Hierarchical...

[3] https://applied-compositional-thinking.engineering/lectures/


It's always funny to see people mentioning hypergraphs in relation to knowledge graphs; this is exactly what we do at Grakn Labs (disclaimer: I work there) https://grakn.ai

For others: we're also starting to look into ML on knowledge graphs; check out our initial work at https://github.com/graknlabs/kglib :D


I too have been fascinated with graphs and all the related systems/frameworks that make use of them in one way or another, but the problem with (hyper/multi-)graphs is that they can represent almost anything, in multiple differing ways, and people usually gloss over the second part. In reality, that's what makes or breaks what you can do, express, and infer with them.

Said another way, saying to yourself "I'm using graphs" only pins down a single part of the syntax, while the semantics of what is being talked about are almost orthogonal.

In CT parlance, the underlying category can be almost anything, and changing a few rules about how you want to reason about things can drastically change the category you're working with.

(PS: my comment is probably a bit of an incoherent mind soup)


The nested FSMs were likely motivated as modeling tools for GUI event graphs. Unlike the relatively obscure wiki example [2], a GUI toolkit example would drive the 'hierarchical' part home: an event is delivered to the lowest level in the component graph that can handle it.


Is the obscure example that you are referring to the pocket calculator state machine at https://en.wikipedia.org/wiki/UML_state_machine#Hierarchical... ?

Between pocket calculators and GUI events (reactive programming), I think obscurity is probably a matter of personal experience. In any case, I think the connection to GUI events is worthwhile. Thanks!

I believe there are a few important caveats to the connection:

1. The "most popular" GUI frameworks have semantics that go (way) beyond "an event is delivered to the lowest level in the component graph that can handle it": https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Bu...

https://stackoverflow.com/questions/4616694/what-is-event-bu...

2. You can still have this hierarchical handling of events without modeling the event handling itself as a state machine. And if you do want to model it as a state machine, and if multiple levels of the hierarchy may handle the event, I suspect you must rely on some decomposition where each layer of hierarchy can only influence some components of the state, while leaving the others unchanged.

It seems like UML state machines address this!

From https://en.wikipedia.org/wiki/UML_state_machine#Orthogonal_r...

> In most real-life situations, orthogonal regions would be only approximately orthogonal (i.e. not truly independent). Therefore, UML statecharts provide a number of ways for orthogonal regions to communicate and synchronize their behaviors. Among these rich sets of (sometimes complex) mechanisms, perhaps the most important feature is that orthogonal regions can coordinate their behaviors by sending event instances to each other.
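
To make point 2 a bit more concrete, here's a minimal sketch of hierarchical event handling (mine, not taken from UML or any particular GUI toolkit): a child state gets first crack at an event and defers to its parent when it has no handler.

    # Minimal sketch of hierarchical ("bubbling") event handling.
    # State/handle/parent are illustrative names, not a real toolkit's API.
    class State:
        def __init__(self, name, parent=None, handlers=None):
            self.name = name
            self.parent = parent
            self.handlers = handlers or {}   # event name -> callback

        def handle(self, event):
            """Handle the event here if possible, otherwise defer to the parent state."""
            if event in self.handlers:
                return self.handlers[event](self)
            if self.parent is not None:
                return self.parent.handle(event)
            return None   # fell off the top of the hierarchy, unhandled

    root = State("app", handlers={"quit": lambda s: f"{s.name} shuts down"})
    dialog = State("dialog", parent=root, handlers={"escape": lambda s: f"{s.name} closes"})
    button = State("ok_button", parent=dialog)

    print(button.handle("escape"))   # handled by 'dialog'
    print(button.handle("quit"))     # bubbles all the way up to 'app'

Note this only covers the hierarchical dispatch; the per-layer state decomposition I'm speculating about in point 2 is a separate concern.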


Thanks, but note that no claim was made of needing FSMs to handle events [in component graphs]! /g I merely stated the [conjectured] motivation for UML's design team to introduce this level of semantics at the notational level.


I think it's a good article, and reading some of the past posts in this blog I see the author is a fan of Judea Pearl's causal do-calculus, as am I, so that was also a good read. The article missed an interesting new area of research in graph theory that takes advantage of the duality between graphs and matrices, called the GraphBLAS:

https://graphblas.github.io/

And of course the plug for my Python wrapper:

https://graphegon.github.io/pygraphblas/pygraphblas/index.ht...
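
For anyone who hasn't seen the duality before, the core trick is that a (sparse) adjacency matrix times a vector over the right semiring is one step of graph traversal. A dependency-free sketch in plain Python (not the actual GraphBLAS or pygraphblas API, which works on sparse matrices and arbitrary semirings):

    # BFS as repeated vector-matrix products over the boolean (OR, AND) semiring.
    A = [   # adjacency matrix of a small DAG: A[i][j] == 1 means edge i -> j
        [0, 1, 1, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1],
        [0, 0, 0, 0],
    ]
    n = len(A)

    def bool_vxm(v, A):
        """Vector times matrix with (OR, AND): nodes one hop from the frontier."""
        return [any(v[i] and A[i][j] for i in range(n)) for j in range(n)]

    frontier = [True, False, False, False]    # start the traversal at node 0
    visited = frontier[:]
    while any(frontier):
        frontier = bool_vxm(frontier, A)
        frontier = [f and not s for f, s in zip(frontier, visited)]  # mask out visited nodes
        visited = [v or f for v, f in zip(visited, frontier)]

    print(visited)   # [True, True, True, True]: every node is reachable from node 0

The GraphBLAS does exactly this, but with compressed sparse matrices and a library of semirings, so the same pattern scales to very large graphs.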


Has anyone run Graph NNs on normal NNs to figure out stuff like hyperparameter tuning or analysing which subnetworks are significant in pulling out features? So you can NN while you NN


If you're interested in graphs as an ostensibly intuitive way of representing knowledge, I'd like to point you to a paper by W.A. Woods from 1975(!): "WHAT'S IN A LINK: foundations for semantic networks" [1] in which the author critically examines the theoretical underpinnings of semantic network representations.

[1] https://www.csee.umbc.edu/courses/graduate/671/fall20/resour...


As always in ML: even with the awesome capabilities that graph neural networks offer, you should stop to question whether you actually need all that power.

https://www.youtube.com/watch?v=9KlDUazbcMc

(only 2:14 long; damn I love that video)


Recently came across this beautiful paper [1] about embedding graphs in hyperbolic space. Graph embedding algorithms for euclidean space are flawed because you essentially need to add an additional dimension for every node. Of course there are techniques you can use to reduce the number of dimensions by graph clustering etc, but fundamentally the dimensions scale with the size of your graph.

Hyperbolic space, on the other hand, enables you to embed hierarchical information in a single dimension due to the curvature itself (yes this is hand-wavey mainly because I still barely understand it, read the paper). This is very conducive to embedding graphs, and especially structured graphs (or trees).

[1] https://arxiv.org/abs/2005.00545
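
To make the curvature point a little less hand-wavy (this is my own toy illustration, not from the paper): distances on the Poincaré disk blow up near the boundary, so a tree can fan out toward the boundary and keep its leaves far apart without needing extra dimensions.

    import math

    def poincare_distance(u, v):
        """Distance between two points strictly inside the unit (Poincare) ball."""
        diff2 = sum((a - b) ** 2 for a, b in zip(u, v))
        nu2 = sum(a * a for a in u)
        nv2 = sum(b * b for b in v)
        return math.acosh(1 + 2 * diff2 / ((1 - nu2) * (1 - nv2)))

    # Two pairs of points with the same Euclidean gap (0.09), one pair near the
    # origin and one pair near the boundary of the disk:
    print(poincare_distance((0.0, 0.0), (0.09, 0.0)))    # ~0.18
    print(poincare_distance((0.90, 0.0), (0.99, 0.0)))   # ~2.3, much farther apart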


> Graph embedding algorithms for euclidean space are flawed because you essentially need to add an additional dimension for every node.

Euclidean space can't even approximately embed a graph as simple as C4, no matter how many dimensions you add. Nevertheless, I've been surprised at how good the results have been when blindly embedding stuff in Euclidean space (e.g., word2vec), given the high error floor for any possible such embedding.
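
For the curious, C4 is the 4-cycle, and the floor is easy to see numerically: the obvious unit-square embedding gets the four edges exactly right but squashes both diagonals from graph distance 2 down to sqrt(2), and (by the parallelogram-law / short-diagonals argument) no choice of points in any number of Euclidean dimensions beats that sqrt(2) distortion. A quick check (my own illustration):

    import math

    # C4: nodes 0-1-2-3-0. Graph distances: adjacent pairs = 1, opposite pairs = 2.
    points = [(0, 0), (1, 0), (1, 1), (0, 1)]   # nodes 0..3 placed around the unit square
    graph_dist = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 3): 1, (0, 2): 2, (1, 3): 2}

    ratios = [math.dist(points[i], points[j]) / d for (i, j), d in graph_dist.items()]

    # Edges come out exact (ratio 1.0) but both diagonals come out at sqrt(2)/2,
    # so the distortion (max ratio over min ratio) is sqrt(2) ~= 1.41.
    print(max(ratios) / min(ratios))   # 1.414...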


There was a startup called Freebase that crowdsourced a knowledge graph. They got bought by Google and shut down. Some of the remnants are at Wikidata and the Google Knowledge Graph.

https://www.wikidata.org/wiki/Q317521


Graphs to organize annotation labels are necessary to avoid concept drift! Even though BLAS on graphs and fancy JVM property graphs are cool and interesting, my small brain is fully satisfied with LTREE labels in Postgres :)
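
In case anyone wants to see roughly what I mean, here's a sketch with psycopg2 against Postgres with the ltree extension (the connection string, table, and label names are made up):

    # Rough sketch of hierarchical annotation labels with Postgres ltree.
    import psycopg2

    conn = psycopg2.connect("dbname=annotations")
    cur = conn.cursor()

    cur.execute("CREATE EXTENSION IF NOT EXISTS ltree")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS labels (
            id   serial PRIMARY KEY,
            path ltree NOT NULL          -- e.g. 'entity.person.politician'
        )
    """)
    cur.execute("INSERT INTO labels (path) VALUES (%s), (%s), (%s)",
                ("entity.person", "entity.person.politician", "entity.organization"))

    # All labels at or below 'entity.person' (<@ is ltree's "is a descendant of"):
    cur.execute("SELECT path FROM labels WHERE path <@ %s", ("entity.person",))
    print(cur.fetchall())   # [('entity.person',), ('entity.person.politician',)]

    conn.commit()
    cur.close()
    conn.close()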


The idea is old, and it's called a mind map: https://en.wikipedia.org/wiki/Mind_map



