Where machine learning meets rule-based systems (foretellix.com)
140 points by yoav_hollander on July 7, 2017 | 37 comments



This is a really important problem to solve, for more than just self driving cars. We have no good way of controlling AIs right now. Particularly reinforcement learners. All we can do is make a fitness function and hope they maximize it. Often they will find a way to exploit it in a way the programmer didn't expect or desire.

And even making a fitness function is a pretty difficult task. You can't exactly let a self driving car get into thousands of accidents until it learns not to do that. Making a good simulation is extremely difficult and very expensive. Training it to mimic humans means it will always be worse than humans.

As AIs get smarter this will become more of a problem. They will become more clever at exploiting the reward functions and even fooling human supervisors. One could imagine a self driving car causing accidents indirectly in some way. But it doesn't care because the car itself didn't collide with anything.

In the extreme case, AIs much smarter than humans would be extremely difficult to trust. They could figure out how to manipulate humans to get them to press their reward buttons or whatever equivalent. The famous thought experiment is an AI in a paperclip factory that desires to maximize paperclip production. So it invents nanotech, tears apart the Earth, and eventually turns the entire solar system into a massive paperclip factory.

Perhaps that's a long way off. But it's not comforting to know that AI ability is increasing faster than our ability to control AI.


>> Training it to mimic humans means it will always be worse than humans.

The success of AlphaGo says otherwise. The distinction is that you're not mimicking one human but some large set of humans and you're using your fitness function to guide you toward a maximum 'meta human'.

We don't need self driving cars to be perfect drivers. If they're better than any human driver then that's a good enough reason to replace human drivers.


AlphaGo was not primarily trained to mimic humans, but to win games. This included playing many games against itself and semi-random tree searches for better strategies. If it was only mimicking humans it probably would have lost to the world's best.


AlphaGo was only initialized by mimicking humans. It then played millions of games against itself. And learned what moves were more likely to lead to wins. And then it was combined with a tree search algorithm that let it explore many more moves into the future than humans could consider.

Training a bot just to predict what move a human would make would always lose to that human. It will just absorb all the human's mistakes and weaknesses. And on top of those it will add its own, since no AI algorithm is anywhere near 100% perfect.


"Training it to mimic humans means it will always be worse than humans." This is not necessarily true, there is work in learning from demonstration where robots have exceed the "experts" reward. I'll post some papers later if you're interested.


Please do. I'm not the OP but I'm interested.


Sorry for the delay. Here's some early work from Pieter Abbeel where a helicopter does aerobatic maneuvers better than the demonstrations it received: http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=EAE...


The title says it all - machine learning and rule-based systems are complementary tools, not mutually exclusive.

I have prior experience applying both to fraud detection, and for fraud detection, there are lots of regulations that are actionable as hard and fast rules running in an expert system.

There are also patterns of fraud that are not obvious, and data mining techniques along with machine learning are incredibly useful for detecting them.

More importantly, if you use a rule-based system first you can often use the output (for example, a rule-based score for a given input case) as an input into machine learning. So, for example, the rule-based system is used to help classify concrete cases, whereas the machine learning might help classify more "gray area" cases based on the hard and fast rules (depending on how you configure your algorithms).
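
To make that concrete, here is a minimal sketch (made-up feature names and thresholds, scikit-learn standing in for whatever learner you actually use) of feeding the rule engine's score into the ML model as just one more input feature:

    from sklearn.ensemble import GradientBoostingClassifier

    ALLOWED_COUNTRIES = {"US", "CA", "GB"}   # hypothetical allow-list

    def rule_score(txn):
        """Hard-and-fast rules, e.g. encoding regulatory thresholds."""
        score = 0
        if txn["amount"] > 10_000:        # large-transaction reporting threshold
            score += 2
        if txn["country"] not in ALLOWED_COUNTRIES:
            score += 5
        if txn["velocity_1h"] > 20:       # too many transactions in the last hour
            score += 3
        return score

    def to_features(txn):
        # Raw features plus the rule-based score as one extra input column.
        return [txn["amount"], txn["velocity_1h"], rule_score(txn)]

    # Toy data standing in for a labeled transaction history.
    transactions = [
        {"amount": 50,     "country": "US", "velocity_1h": 1,  "is_fraud": 0},
        {"amount": 12_000, "country": "RU", "velocity_1h": 30, "is_fraud": 1},
        {"amount": 200,    "country": "GB", "velocity_1h": 2,  "is_fraud": 0},
        {"amount": 9_500,  "country": "US", "velocity_1h": 25, "is_fraud": 1},
    ]
    X = [to_features(t) for t in transactions]
    y = [t["is_fraud"] for t in transactions]

    clf = GradientBoostingClassifier().fit(X, y)   # the learner handles the gray areas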


This is indeed a very interesting topic. I am working in the area, and we get some rule-based models to perform as well as neural networks; overall there are good steps forward: http://www.srl.inf.ethz.ch/papers/charmodel-iclr2017.pdf

These are models that can be inspected by an expert (or even non-expert) and then analyzed. But I guess that we must go past the neural network hype. At the moment it is very common that if I mention machine learning to people, they correct me and say "deep learning" :)


I honestly read this article as taking more of a philosophical stance than a technical one. The fact that we as humans will require proof and verification of human control over a system is an inescapable one. Basically, on some level we need guarantees and proof that the machine will never come up with a rule that says "kill all humans".

This means that we'll need human-supplied rules, and any mature ML platform will need to be able to incorporate these rules. IMO it's a decent problem-based hypothesis as to what the future evolution of ML platforms will look like, but as with any hypothesis, it will need to be tested, proven wrong in some way, and adjusted accordingly.


The word "need" in this comment seems a bit exaggerated. Yes, we'd want to have some proof and verification, but that is not an inescapable requirement.

I'm fairly sure that we as a society in general are willing to launch systems where we have only a reasonable expectation and trust that it will most likely work properly instead of total proof and verification that it definitely will do so. Just take a look at pretty much any life-critical system in use today; some verification and testing is required, but formal proof of correctness is a very high bar that's never required for practical systems. It's highly desirable, but it will only be an absolute requirement if it's reasonably easy to achieve.

I'm fairly sure that if it turns out that we can't provide guarantees and proof about what the machine might do, then we'll just do our best even if the result is not provable and verifiable and launch such systems anyway.

And I think that even if we have a choice between two systems, where one is verified and proven never to do anything bad but is otherwise inferior in what it can do, and the second system has no such proof but seems safe in general testing and simply performs better than the first one... then it's quite likely that we'll choose the second one anyway.


That is my opinion as well. And indeed the post talks mainly about dynamic verification and achieving some "good enough" verification quality (as determined by coverage and other metrics).

And it is in that context that "soft" techniques like ML can help a lot, and thus the question of how to connect them to "hard" rules (which are also part of dynamic verification) becomes interesting.


Is there a hard boundary between "the philosophical" and "the technical"? Or rather, I don't think "the philosophical" is necessarily that useful a tool for understanding how to resolve the thorny question of "human control" of AI.

Human beings are very good at communicating our "intentions" through natural language. Such communication, however, involves "good faith". So communication with an AI would seem to involve more than just following logical directives; the AI would need a model of a person which would allow it to deduce and follow the wants/desires/intentions behind a given group of humans' communication with it.

Of course, if we go down deeper, we can notice that these wants/desires/intentions themselves "don't really exist", in the sense that a single person generally has impulses that go in different directions, and it is the way that they live in a stable society that keeps those contradictions from appearing (which brings to mind many "sausage and sausage factory" analogies: lots of people want the results of a process but would object if they knew everything that goes into it, while vaguely knowing/hoping there is a bit of regulation keeping things not too far out of bounds).

Traditionally, philosophy is where conceptions of language, society and logic meet and supposedly get sorted out. But unfortunately our traditional philosophy seems a woefully inadequate tool for the problems of language interactions coming out of social processes, and we most likely should look at some more contemporary theories of language and society for this (standard philosophy especially arose by strengthening the false intuition that an intention could exist "free-standing" rather than emerging from that interactive fabric of human society; the idea that the philosophical could not also be "technical" illustrates the same problem). My favorite tool for understanding this stuff instead is evolutionary game theory.


OP here:

I indeed meant it in the philosophical sense you describe. But I am very interested in the possible technical solutions. I tried to describe (in the chapter "Connecting ML and rules") the approaches I know of, none of which are very exciting.

I'd love to hear if anybody knows of good approaches.


As part of my graduate work with George Konidaris we've been exploring the creation of symbols and operators with ML. The goal is symbolic planning for continuous systems; however, I see similarities between our approach and the goals of rule-based systems.

There's a journal paper that's under development, but here's a conference paper that addresses some of George's early work: http://cs.brown.edu/people/gdk/pubs/sym-prob.pdf


Thanks a lot - will look it up.


Maybe your test cases are your rules. As long as you're recording things over time, you have data to feed back and learn from. Also, at each level of abstraction you could store less data to potentially learn from: instead of storing every pixel, just store edges and other low-level features from the first layer.


"Don't kill humans" as a rule won't work: just don't recognize that "meat-bag" over there as a human, and the goal is accomplished with the rule circumvented. Or maybe it fails at a lower level: which colors are skin colors? How do you define valid colors? How do you define human properties?

At the end of the day, I think there is a reason for the old saying "the exception that proves the rule".


This reminds me of the Brill tagger, which learns rules to do part-of-speech tagging -- the rules are learned, but execution is entirely deterministic application of those rules.
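
For the flavor of it, here's a toy version (made-up rules, not a real learned rule set): the "model" is just an ordered list of transformations, and tagging is their deterministic application.

    # Transformation-based tagging in miniature: rules of the form
    # "change tag A to B when the previous tag is C". In the real Brill
    # tagger these rules are learned from a corpus; here they are invented.
    BASELINE = {"the": "DET", "can": "NOUN", "fish": "NOUN", "swim": "VERB"}

    RULES = [
        # (from_tag, to_tag, condition on the previous tag)
        ("NOUN", "VERB", lambda prev: prev == "NOUN"),   # "fish" after a noun acts as a verb
        ("NOUN", "MODAL", lambda prev: prev == "PRON"),  # "can" after a pronoun is a modal
    ]

    def tag(words):
        tags = [BASELINE.get(w, "NOUN") for w in words]   # naive initial tagging
        for frm, to, cond in RULES:                       # deterministic rule application
            for i in range(1, len(tags)):
                if tags[i] == frm and cond(tags[i - 1]):
                    tags[i] = to
        return list(zip(words, tags))

    print(tag(["the", "can", "fish"]))   # [('the', 'DET'), ('can', 'NOUN'), ('fish', 'VERB')]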


They are gonna merge I think. NNs will be used to extract rules.


On the TIRES project (http://daly.axiom-developer.org/cmu/book.pdf) we used Scone (http://www.cs.cmu.edu/~sef/scone/), a knowledge representation system as the core structure. Rules based on OPS5 (http://repository.cmu.edu/cgi/viewcontent.cgi?article=3430&c...) were used to do planning.

Scone concepts were linked to machine learning for two tasks, recognition (e.g. giving grounding to the concept 'wrench' as a recognizable object) and actions (e.g. giving grounding to concepts like tighten as a series of actions).

So ML was an "interface" for recognition and "compiled knowledge" where the system knew how to perform certain actions without any consultation with the rest of the knowledge (similar to how you can recognize and type words without thinking).


Some machine learning is rule-based, e.g. random forests (random decision forests).


In this context rule-based usually means rules handed down by a person/people. Tree-based algorithms create their 'rules' from the data alone.
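
You can even print those data-derived 'rules', e.g. with scikit-learn on a toy dataset (just a sketch):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

    # Prints an if/then rule set that the tree induced from the data alone.
    print(export_text(tree, feature_names=list(iris.feature_names)))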


Note though that the moment you go beyond single decision trees, with say random forests, boosted trees or bagged decision trees, the models become less and less interpretable. They "look" like rules, but with multiple disjoint sets of them it's hard to intuit how they interact.


I do think that random forests (and Inductive Logic Programming) seem easier to connect to rule-based "human-written" logic.

It does seem though that Neural Networks are the main ML story, at least for now, by a wide margin.


Probably relevant, there has been work on converting feed-forward networks to decision trees e.g. [1].

EDIT: why is this relevant to the parent comment? You can learn a neural net, convert it to a tree, and then integrate rules from domain experts.

[1] [PDF] https://www.aaai.org/Papers/FLAIRS/2004/Flairs04-089.pdf
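
A rough sketch of that pipeline, using plain distillation as a stand-in (not necessarily the method in [1]): train a net, fit a tree to the net's predictions, and hand the resulting readable rule set to a domain expert to inspect or extend.

    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)

    # 1. Learn a neural net on the labeled data.
    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=0).fit(X, y)

    # 2. "Convert" it to a tree by fitting the tree to the net's predictions.
    tree = DecisionTreeClassifier(max_depth=4).fit(X, net.predict(X))

    # 3. The tree is now a readable rule set an expert could veto or extend.
    print(export_text(tree, feature_names=["x0", "x1"]))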


I enjoyed your article very much and it touches on many of my own interests, but if we got to the point where "it's not neural networks" is now a legitimate reason against trying some technique, then something, somewhere has gone really wrong.


I agree with that. But:

1. Most ML techniques are bad at connecting to rules (random trees and inductive logic programming are a small subset).

2. Most of the ML techniques that one encounters in practice while verifying intelligent autonomous systems are currently neural-network-based: Sensor fusion in the AV itself, coverage maximization attempts I am currently aware of in the verification environment, and so on.

I suspect that most ML techniques, by their nature, will not play nice with rules by default. But this is just a hunch.


Well, eventually any machine learning system needs to integrate with some other piece of software that is not, itself, a machine learning model.

For instance, in AV, is the practice to train ANNs end-to-end, so that they learn to drive a car from scratch, without any interaction with other components at any time? My intuition is that rather than that, the ANN is trained to recognise objects in images and then some hand-crafted logic decides what to do with specific types of objects etc. I think some of the examples in your article say that this is the done thing in some companies.

If this sort of integration is possible, is there any reason why integrating rule-based reasoning with machine-learned models is not?


Right - as far as I know most ANNs _are_ embedded in some pipeline which contains also "regular" SW, and thus by definition there _is_ some way to connect them to a rule-based system.

The only issue is that there is no easy, _natural_ way to do it. For instance, consider the various attempts at adding safety rules to an RL ANN (depicted in fig. 2 in the paper). Say that (in the context of an ANN controlling an Autonomous Vehicle) your ANN decided to do something on the freeway, but the safety rules say "no". There is no easy way to gracefully integrate the rule and ANN: One way is for the rule to disable the ANN's output at this point, take full control and decide what the AV _should_ do. But this leads to duplication and complexity.
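
In code, the crude "rule takes over" version looks roughly like this (hypothetical names, just a sketch; shield synthesis tries to be less heavy-handed than the fallback used here):

    def safe_act(policy_net, state, safety_rules, fallback_controller):
        """Run the learned policy, but let hard rules veto its output."""
        action = policy_net(state)                 # what the ANN wants to do

        for rule in safety_rules:                  # e.g. "never cross a solid line"
            if rule.violated_by(state, action):
                # The rule can only say "no"; it cannot say what a *good*
                # action is, so driving logic gets duplicated in the fallback.
                return fallback_controller(state)  # e.g. brake and hold the lane

        return action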

So the four solutions I describe take various ways to avoid this problem. They all "work" in a sense, but none does real "integration" of the ANN and the rules (the shield synthesis solution perhaps comes closest). And it looks like you have to invent this kind of solution anew for every new instance of connecting-ANN-to-rules.

And this was just "inserting rules during execution". Then there is the issue of "verifying via rules", and "explaining the rules". It is tough, and I am wondering if there could be some conceptual breakthrough which would make it somewhat easier.


Thanks for the reply.

Your article caught my attention because I was thinking about the problem of integrating probabilistic machine learning models with deterministic rule bases (specifically, first-order logic ones). The rules themselves would be learned from data, with ILP. I'm starting a PhD on ILP in October and this is one of the subjects I'm considering (although the choice is not only mine and I'm not sure if there's enough "meat" in that problem for a full PhD).

My intuition is that in the end, the only way to get, like you say, a "natural" integration between rules and a typical black-box, statistical machine learning model is to train the model (ANN, or what have you) to interact directly with a rule-base- perhaps to perform rule selection, or even to generate new rules (bloody hard), or modify existing ones (still hard). In other words, the rule base would control the AV, but the ANN would control the rule-base.
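
To illustrate what I mean, a toy sketch (made-up rules, logistic regression as the selector): the learned model only picks which rule fires, and the rules do the acting.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hand-written (or ILP-learned) rules: each maps a state to an action.
    RULES = {
        "keep_lane":   lambda s: {"steer": 0.0,  "brake": 0.0},
        "slow_down":   lambda s: {"steer": 0.0,  "brake": 0.5},
        "change_left": lambda s: {"steer": -0.3, "brake": 0.0},
    }
    RULE_NAMES = list(RULES)

    # Toy training data: state features -> index of the rule that should fire.
    X = np.array([[0.9, 0.0], [0.2, 0.0], [0.4, 1.0]])   # e.g. [gap_ahead, obstacle_right]
    y = np.array([0, 1, 2])

    selector = LogisticRegression(max_iter=1000).fit(X, y)   # the statistical part

    def act(state_features):
        rule = RULE_NAMES[selector.predict([state_features])[0]]   # model picks the rule
        return RULES[rule](state_features)                         # rule base acts

    print(act([0.85, 0.0]))   # most likely {'steer': 0.0, 'brake': 0.0}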

I think there's gotta be some prior work on this but I haven't even looked yet. I'm kind of working on it, but not from the point of view of AVs and I'm using logistic regression rather than ANNs (because it's much simpler to use quickly and it outputs probabilities). And I'm only "kind of" working on it. And I don't think it'll come to anything.

But, hey, thanks for the inspiration :)


Could some of the instances where ML is brought in instead be handled by fuzzy logic?


Conditional Random Fields?


For a moment I thought this was drawing a parallel with 1980s "knowledge-based systems", which tried to make hay by piling up rules gleaned from experts.

Those systems sort-of, kind-of worked, and got plenty of press, but vanished from the market without much of an obituary, or post-mortem exam, AFAIK.

My guess is that they were impossible to maintain.

I'm waiting to hear that the problem is now licked for the current crop.


>> Those systems sort-of, kind-of worked,

Rather more than that. For a famous example, check out the wikipedia article on MYCIN (a classic of the era):

MYCIN was never actually used in practice but research indicated that it proposed an acceptable therapy in about 69% of cases, which was better than the performance of infectious disease experts who were judged using the same criteria.

https://en.wikipedia.org/wiki/Mycin

There were many practical reasons why expert systems fell out of favor (although, like others say, they are still used widely in real-world applications), but a large part of it was political rather than anything to do with their effectiveness.

Some historical background:

https://www.researchgate.net/publication/3454567_Avoiding_An...

In the end, the problem with expert systems was probably that they didn't scale as well as one would hope - primarily because it's damn hard to develop and maintain huge hand-crafted databases of knowledge elicited from experts who have very little incentive to collaborate in the creation of a system that will put them out of work.


There are many rule-based expert systems in large scale use today. I know of applications in medicine, supply chain and logistics, and military.


Quite popular for airplane autopilots too (both manned and unmanned).



