Microsoft Concept Graph (microsoft.com)
481 points by uyoakaoma on Nov 2, 2016 | 100 comments



This is a thing I think about often which I always conclude is not currently possible in the way I wish it were.

While we can make a concept graph, what I often wonder is whether it's really possible to make a computer think of a thing, to truly have an idea of it in its "head" the way a person does.

When you think of an apple, you don't just connect to a text description of it and a picture and a bunch of links to "fruit" and "seed" and "food." You sort of see it and feel it and taste it and know its value. It's rendered in senses, not in text.

I am not confident that it will be possible for a computer to understand something that way for a very long time. I think until we understand how that information is encoded in our own minds, getting a machine to truly understand it the same way will be elusive.

When I was recently considering this, the fundamental difference I came down to was this: a living thing wants things, needs things. So long as a computer does not have any desires, I just don't see how it could ever understand the world the way we do. What would anything matter to you if you didn't eat, drink, sleep, feel, get bored, get curious?

I think those aspects of a living thing drive our understanding of everything else. Without that, it's all just text.

But of course I understand perfectly well that I am speaking of a much longer-timeline project, and that a Probase-like component is still a big part of it and can independently move things forward quite a bit.


What you're looking for is called the symbol grounding problem:

| But as an approach to general intelligence, classical symbolic AI has been disappointing. A major obstacle here is the symbol grounding problem [18, 19]. The symbolic elements of a representation in classical AI – the constants, functions, and predicates – are typically hand-crafted, rather than grounded in data from the real world. Philosophically speaking, this means their semantics are parasitic on meanings in the heads of their designers rather than deriving from a direct connection with the world. Pragmatically, hand-crafted representations cannot capture the rich statistics of real-world perceptual data, cannot support ongoing adaptation to an unknown environment, and are an obvious barrier to full autonomy. By contrast, none of these problems afflict machine learning. Deep neural networks in particular have proven to be remarkably effective for supervised learning from large datasets using backpropagation. [..] The hybrid neural-symbolic reinforcement learning architecture we propose relies on a deep learning solution to the symbol grounding problem.

Source: Marta Garnelo et al: Towards Deep Symbolic Reinforcement Learning https://arxiv.org/pdf/1609.05518.pdf


Well, that and the computer hasn't had years of experience with apples (and Apples) as well... years of understanding how they taste, what they get paired with, that they're more often part of meals with children, that they're connected with biblical stories, that they can be thrown, how they fit in cultural contexts (such as the Jewish New Year), etc.

It's not just about perceptual data of an apple but rather having LIVED apples and absorbed their millions of data points. I'm skeptical of how far AI can go on statistics from text alone, NN or otherwise.


>> Pragmatically, hand-crafted representations cannot capture the rich statistics of real-world perceptual data, cannot support ongoing adaptation to an unknown environment, and are an obvious barrier to full autonomy.

Pragmatically, machine learning systems can't do any of those things either. In principle they can, but in practice they need so much data, and training takes up so many resources (not least the ones needed for supervision, i.e. annotations), that creating a truly autonomous system is infeasible. Which is why we don't have such systems yet, even though we've had machine learning for a good few decades now.

>> Deep neural networks in particular have proven to be remarkably effective for supervised learning from large datasets using backpropagation.

Oh yes, absolutely- in laboratory conditions and in well-circumscribed tasks (image recognition from photographs, say). In the noisy, dirty, hostile real world, not so much.

We still have a long way to go before we get to the holy grail. We're not even at the beast of AAAaaargh yet. And remember what pointy teef that one's got.

(apologies for speaking in allegories- I mean that we haven't yet tackled the hardest problems, because we've yet to encounter them. We're stuck with "low-hanging fruit" as, I believe, Andrew Ng has said).

____________

Edit: But, um- that's a really nice paper. Thanks.


Thank you for linking me to this. I had never heard of it. That is exactly it.


Besides "symbolic grounding" also look up "word vectors". It is an attempt to ground words in the statistical probability of their surrounding words in very large bodies of text.


I also recommend 'Ventus' by Karl Schroeder. It's a fun scifi read, covers some of these concepts and can be downloaded for free: http://www.kschroeder.com/my-books/ventus/my-books/ventus/fr...


FWIW, I don't think computers will need to think of a thing any more than they need to "learn" that 1 + 1 = 2. When you break down how logic circuits work to the point where you are describing a full adder in gates you can see that 1 and 1 "has" to be 2. And when you start tracing out the n-dimensional graph space of concepts you will see that "understanding" that Oct 13, 1881 is a "date in time" is simply because that is the only concept in the graph near the nodal points.
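
As an aside, the "has to be 2" part is literally visible in the gate equations; a quick one-bit sketch (my own illustration, not anything from the article):

  # a one-bit full adder expressed as boolean logic
  def full_adder(a, b, carry_in):
      s = a ^ b ^ carry_in                        # sum bit
      carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
      return s, carry_out

  # 1 + 1: sum bit 0, carry bit 1 -> binary 10, i.e. 2, by construction
  print(full_adder(1, 1, 0))  # (0, 1)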

It is exceedingly challenging to conceptualize n-dimensional topologies given our three-dimensional (or four-dimensional) upbringing, but when you consider that the definition of a dimension is a direction orthogonal to all other dimensions, you can think of the orthogonal things that might live on another dimension and topologically connect.

For example, 13 is a number, it's a prime, it can be a day of the month, it can be a street address, etc. You can think of '13' as a dimension which is orthogonal to all of those other dimensions (numbers, primes, dates, addresses, etc.) such that it spears through them. Now when you see "Oct" it also spears through a bunch of alternate dimensions, but the only dimension that both Oct and 13 exist in is the 'date' dimension and maybe the ASCII dimension (13 can be an octal number). But add the 1881 and the three of them now land pretty much solely in the "dates" plane of existence.
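
A toy sketch of that intersection idea (the category sets here are entirely made up, nothing from Probase): each token maps to the concept "dimensions" it can live in, and the reading that survives is whatever every token supports.

  # hypothetical concept memberships for each token
  concepts = {
      "Oct":  {"month", "octal_prefix", "abbreviation"},
      "13":   {"number", "prime", "day_of_month", "street_address", "octal_literal"},
      "1881": {"number", "year", "street_address"},
  }

  # mapping from low-level concepts to the higher-level reading they support
  supports = {
      "date": {"month", "day_of_month", "year"},
      "address": {"street_address"},
      "octal": {"octal_prefix", "octal_literal"},
  }

  def plausible_readings(tokens):
      readings = []
      for reading, needed in supports.items():
          # a reading is plausible if every token can play some role in it
          if all(concepts[t] & needed for t in tokens):
              readings.append(reading)
      return readings

  print(plausible_readings(["Oct", "13", "1881"]))  # only 'date' survives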

The trick is searching the n-dimensional solution space in finite time. Certainly something a quantum computer might achieve more easily than a von Neumann machine, but given that the dimensional space is nominally parallelizable (at the expense of memory, of course) I expect you can get fast enough with enough compute threads.

Another challenge is constructing the concept graph to begin with, but there is lots of great research going into combining ontologies with natural language processing in order to build concept graphs. If I were getting a PhD today I'd probably be working on that particular problem.


What do you think of learning these things by watching films?

We have hundreds of thousands of hours of programming right from the Kindergarten level.


That feels to me like stacking challenges, because image interpretation is a challenge layered on top of concept generation, which is a challenge in itself.

That said, there is the problem of believing your initial concepts. And, like people, if you start with a bunch of bogus concepts it's going to be hard to break free of that and establish concepts more liberally. I think about it as the question of not only establishing the concepts but establishing the validity of the concepts that have been established. In a very sparse concept space your "best match" can be really far off from what someone with a more filled-out concept space would consider valid.


We don't really know what it is for a human to "think" of something. Introspection of what we think it is does not lead us to understand what it is to think of something. I think our needs often limit our understanding of what things are. For instance, our need to satisfy hunger means we are mostly considering an apple in terms of its utility for doing that. Science lets us view apples in a much more detailed way: their chemical makeup, their biology, etc. We often don't have all of that in mind when we think of an apple, however. But a computer could potentially have a much more pervasive view of an apple. It may conceptualize things in ways we can't. I think that might be more interesting: that it will understand the world in a way we don't (which may include, as a subset, how humans understand things).

However, we are struggling at the first steps of doing this and still unsure of whether it's even computable.


> We don't really know what it is for a human to "think" of something. Introspection of what we think it is does not lead us to understand what it is to think of something.

This doesn't seem right. Introspection might not give us all the answers, but it's a critical (and probably the single most important) aspect of understanding how we think. Entire branches of philosophy deal specifically with this and have done so for thousands of years.

I personally found Descartes' thoughts particularly interesting in this regard. Also, here's a pretty good overview on introspection in contemporary philosophy: http://plato.stanford.edu/entries/introspection/


I think (heh) introspection allows us to characterise the nature of thinking, but we started making a lot more progress on how our brains work when we went down the neuroscience path. Philosophy hasn't answered much in the way of how we think, but it asks a lot of questions about the nature of thinking: what we can know, what constitutes a mind, how a mind relates to a body, how we can recognise that something has a mind. But it has done very little to answer what it is that allows us to think.

Descartes dealt with rational thought and what we can know absolutely. He wanted a logical progression so we can prove everything from fundamental truths.

The philosophy of mind is the most closely related field here, and I guess one of the famous problems is the Chinese Room (https://en.wikipedia.org/wiki/Chinese_room), which we can't really resolve yet.


To put it another way, a human learns by experience. When you think of "an apple", you think of various sensory experiences in which an apple has played a role; eating the apple, throwing the apple, etc. At some level, these experiences are finite; an apple corresponds to a physical object, and there are only a few thousand ways that humans and objects interact.

A better consideration is whether computers should be limited to understanding things the way humans understand them. Sensors can characterize apples in uncommon ways; X-rays, microwaves, nanoscale structures, etc. Similarly machinery can interact with apples in ways that humans cannot, such as vaporizing them, disassembling them, or launching them to Mars. Perhaps some combinations of action verbs and nouns are impractical or impossible; that's a physical, experimental property, rather than a property tied to human experience. At the end of the day a computer only needs to know about humans in order to interact with them; its representation of the world is distinct.


I think text is too high of a concept for this kind of thing, all your examples of senses are really just input memory. Text to a computer is simply input and can mean different things in different contexts. Is the binary string coming out of some temperature sensor really any different than a digital audio stream, or string of text? It's simply a case of mapping those inputs to a concept of feeling, hearing or reading.

I think the real challenge to this kind of approach is always going to be raw processing power and the size of data sets. Our brains may not be incredibly efficient, but they have so much more to work with than even our largest data centers, and the amount of data that comes to us every living moment is basically infinite compared to the curated sets we feed our current learning machines.

So imitating the way people learn like this is probably the key to getting something to properly "think", I just wonder when the resources available to our computers will catch up to the resources available to our brains.


If you remove the ability for a human to feel emotions, they have a very hard time making decisions. They can still reason about their options, but they just can't decide. I don't have the citation at hand but there are actual case studies.

Although we think that we make decisions rationally, the reality is that we make decisions emotionally. Our rationality is not the master of our emotions--it serves them.

So if you want a computer to think like a person, you need to give a computer emotions. To my knowledge there is very little academic work in this direction. To use my favorite example, no one is trying to build a self-driving car that just doesn't feel like driving that day.

And to return to the point above, we think that we think a certain way. But when we think about our thoughts, we're using the same mind that we're analyzing. It's certainly possible that we are fooling ourselves. Maybe even we don't think about things the way you describe--but we can't tell the difference, because we can't get out of our own minds, or into someone else's mind.


I think the reason that there is little academic work in the direction of making emotional machines is because we don't have a clear avenue of attack for that problem. We understand so little about the brain in general, and emotions seem buried near the bottom of that mystery.

Psychoactive drugs and hormones are so good at altering emotional state that it doesn't seem implausible that emotions might be as simply "implemented" as logical reasoning, or that emotions and human logic are in fact different shades of the exact same biological system. The hope would then be that emotions will emerge automatically once we've developed a system of sufficiently complex thinking.

Even more extreme, some people hold the belief that consciousness itself is a sort of post facto illusion—that we don't truly "think" at all, and everything we perceive is a backwards looking rationalization that arises as an accident of the complex chemistry of the brain. Timed brain scans seem to superficially support this philosophy. If this is the case, then building mammal-like machine intelligence may not be so mysterious in the long run, though this raises some pretty mind-bending ethical and philosophical issues.

That all said, I fundamentally agree with your point. It certainly seems like there is very little work, if any, that's advancing our understanding of how to do anything other than optimize certain tasks. Those tasks are progressively becoming more and more complex, but they're still extremely narrow in scope. From where I sit, it seems like we'll have to solve a whole lot of "pointless" (unprofitable) problems before we come anywhere close to finding general AI. Not the least of these problems is our fundamental lack of understanding of what our own "thinking" even really is.


> The hope would then be that emotions will emerge automatically once we've developed a system of sufficiently complex thinking.

If we look at nature, we see the opposite: almost all animals seem to experience some sort of emotional reaction to stimulus, even if they don't seem capable of complex rational thinking.

> We understand so little about the brain in general, and emotions seem buried near the bottom of that mystery.

I agree: emotions seem more fundamental to thinking than rational symbolic reasoning.


> If we look at nature, we see the opposite: almost all animals seem to experience some sort of emotional reaction to stimulus, even if they don't seem capable of complex rational thinking.

Well, my (personal) bar for "sufficiently complex thinking" is pretty low. I would say any animal we can perceive emotion in has far more complex thinking than that theoretical lower bar. I would take the perspective that emotions are probably present in some animals that are so non-human we don't assume they have consciousness.


The other amazing thing is it takes infants months if not years to grasp certain fundamental realities about the world. They also take in a massive amount of constant data that gets parsed through sensory inputs and lower level instincts before it even registers with emotions.

I would not be surprised if we find the secret is in building up from base instincts and flooding it with sensory data while we "parent" the AI.


Could emotion be modeled as a deviation (amplified or dampened) from an ideal rational response, given the information? Like a short circuit that allows sensation to override the rational processing?


Yesterday there was a short item on Dutch radio about a query for Siri: "When will world war 3 happen?" Somehow Siri would give an exact date in the future, but nobody knows why that date was chosen.

The concepts in the question are clear for current systems. 'When' is a clear concept about a time question (concept of past and future might be mixed up). 'World war 3' can also be a concept that current systems 'understand'.

Let's say there is a news article that says: "If Trump wins the election, world war 3 will happen". And another article says: "If Trump wins the poll on 2016-11-05 he might win the election". Siri might combine these into: "World war 3 will happen on 2016-11-05".

But Siri doesn't know the context in which the question was asked. And I think the only way to get this right is:

  * ask about the context
  * track everything a user does to estimate the context

I think the movie Her[1] does this. The OS is constantly asking him questions so 'she' can learn about his context. And of course the first question the OS asks is brilliant: "How is your relationship with your mother?"

[1] http://www.imdb.com/title/tt1798709/


What you're describing sounds a lot like a statistically trained system.

"until we understand how that information is encoded in our own minds, getting a machine to truly understand it the same way will be elusive."

Here's a (fairly convincing imo) discussion as applied to language:

http://norvig.com/chomsky.html

Further, I think human emotions are pretty transparent -- e.g. why might people lust after high calorie foods?

The timeline is probably far shorter than you are describing here.


> what I often wonder is whether it's really possible to make a computer think of a thing

Trivial constructive proof that the answer is "yes": as far as we know, it is physically possible to measure and then simulate a human brain to an accuracy well below the thermal noise floor at normal brain temperature.

That is, you can always literally just run a human brain on a computer, and unless we're entirely wrong about all of physics, it will do everything a physical human brain would.

> So long as a computer does not have any desires

"Desire" is actually pretty well understood in the frameworks of decision theory and utility theory. You can always make a program "want" something in terms of that thing having a positive value in the program's utility function.

> What would anything matter to you if you didn't eat, drink, sleep, feel, get bored, get curious?

What would anything matter to you if you didn't shit, get pneumonia, and die? All the things you mentioned are just random things that humans happen to do; I'm not sure what it has to do with the concept of having preferences.

> Without that, it's all just text.

The representation doesn't really matter. Having desires is a property of the internal behavior of an agent, not how those behaviors are implemented.


I think about this as well. Is it possible to have a mind, the way we conceive it, without a body? I don't think so.

The very notion of oneself being apart from the world is, IMO, sensorial at first. Knowing the limits of your body is essential to defining oneself. A free-floating consciousness seems unfathomable.

We may need to infuse sensory inputs first before we can have a true AI.


There are some CNNs that will output heatmaps of where the classifier for a label is triggered most strongly in an image. Does that count as being "rendered in senses"?

Also, if you train the NN on purely textual data, there are no senses like you describe to associate it with, since its only senses are symbolic.


What's preventing there being an AI with hardcoded desire to regulate values like hunger level, boredom level, etc.? I could imagine a problem solving AI dedicated to continuously solving those problems.


But how?

hunger = 100;

while (hunger > 0) { seekFood(); }

// Is this what hunger is to a machine, at its basest level? An int and a while loop? Is that really what it means to understand hunger? This and a text description?


I mean hunger is really just a gut reaction created to tell us we need to eat or we'll die eventually.

edit: s/created/developed over time/


while(sugar = 0) seekFood()

And thus the grey goo was created


Hm no, you used = instead of ==, so this will never seek food. ;)


You probably mean that it will always seek food since the assignment evaluates to 'true' when it's successful (which is usually the case).


In what language? In all that I know, assignment evaluates to the value that was assigned (that is, if it evaluates to anything at all). Also in most languages that look like C, 0 evaluates to false. Therefore it will never seek food.


Usually the case as in usually never the case.


Many serious AGI efforts are aware of the need to ground learning in sensory data from (real or virtual) embodiment. DeepMind is the most famous and popular one.


Well, you can have the best algorithm to optimize, but you still need to define various utility functions.

The field of embodied cognition attempts to approach that. https://en.wikipedia.org/wiki/Embodied_cognition

If you make a drone that feels pleasure when refueling and killing people, guess what the drone will do.


The Terms of Use appear to be very restrictive.

Not only does it have the "non-commercial" restriction, limiting its use to throwaway projects that are not expected to succeed, but derivative works are disallowed.

> Unless otherwise specified, the Services are for your personal and non-commercial use. You may not modify, copy, distribute, transmit, display, perform, reproduce, publish, license, create derivative works from, transfer, or sell any information, software, products or services obtained from the Services.

As far as I can tell, you are free to admire the Concept Graph from a distance, but not to build anything on it.


It reminds me of Freebase [1], acquired by Google and later deprecated; you can find the data in [2] and the new Google Knowledge Graph Search API [3]. It may not be enough for making computers think, but it can help to augment a search engine. In Freebase you could perform queries like "give me the VCs who have great exits in telecommunication companies". It is very useful to apply this kind of query to news because it adds context. DBpedia [4] is another interesting project on this subject.

[1] https://en.wikipedia.org/wiki/Freebase

[2] https://developers.google.com/freebase/

[3] https://developers.google.com/knowledge-graph/

[4] http://wiki.dbpedia.org/


Freebase was mentioned in the article (along with Cyc) and was noted to have 2000 concepts compared to MCG's 5.4 million (and Cyc's 120K).


I suspect the MS Concept Graph's concepts can be represented as compound triples in Freebase, e.g. in the MS graph they may have Jacques Chirac as an instance of the "President of France" concept, whereas in Freebase such knowledge would be represented as:

Jacques Chirac - occupation - President

Jacques Chirac - country - France

I find the latter to be much more efficient.
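
A quick illustration of why the decomposed triples compose nicely (toy data, not actual Freebase records): you can answer a compound question like "President of France" by intersecting the matches for each triple, instead of needing a pre-built "President of France" concept.

  triples = [
      ("Jacques Chirac", "occupation", "President"),
      ("Jacques Chirac", "country", "France"),
      ("Angela Merkel", "occupation", "Chancellor"),
      ("Angela Merkel", "country", "Germany"),
  ]

  def subjects(predicate, obj):
      # all subjects for which the given (predicate, object) pair holds
      return {s for s, p, o in triples if p == predicate and o == obj}

  # "President of France" = occupation President AND country France
  print(subjects("occupation", "President") & subjects("country", "France"))
  # {'Jacques Chirac'}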


I miss the public availability of Freebase, and I used it for examples in two books I wrote, so those examples are broken. DBPedia continues to be useful. I was just looking at an application of DBPedia for work yesterday.


I'm in no position to discuss the actual service, but was really surprised by the citation requests at the bottom. If you use the service, please cite these 6 papers. If you use the data, please cite these other 2.

Is this a new norm that's come about from publish/perish since I was at uni? I've always assumed that you cite what you actually refer to, and even if you just cite as a reference to describe a working project, surely one suffices. Six though?


In the data science world, it's good form to acknowledge where you got your data from. Allows for research reproducibility. More generally it is good form to acknowledge if you've used somebody else's method.


I'm not publishing any papers but I imagine that if I were just using the service as opposed to reading a specific paper I would be at a loss for what to cite without being told.


It's really odd to see a direct request for all of them, usually it'd just be the most recent.


Maybe you were at uni 60 years ago, but yes sometimes you cite more than one source.


Well, when I was at university it was considered bad form to cite sources you hadn't actually consulted yourself.


This has been the norm for quite a while.


Oh, I just realized that 'tagging' in this way is kind of an implementation of Minsky's k-lines. https://en.m.wikipedia.org/wiki/K-line_(artificial_intellige...

Cool!


It doesn't know about "vibrator", but it knows all about "astable multivibrator". Sounds a lot like me as a kid!

This data could be used to automatically generate trivia questions and to power other kinds of word games...


Training data is the mirror of society? https://concept.research.microsoft.com/Home/Demo?instance=wo...

vs.

https://concept.research.microsoft.com/Home/Demo?instance=ma...

In this context, the disclaimer makes much more sense.


"Man" is a somewhat ambiguous word, and the algorithm is clearly interpreting it to mean "the human species" first and foremost.



Pretty neat implementation!

Is there any way to monetize a similar independent project like this? I understand it can help ML tasks with disambiguation but that's even farther out of my expertise. I ask because I did very similar work for my CS PhD dissertation in 2013. Basically covering their 2nd aim, but with fewer scoring methods and a viz component.

It would be cool to dust off my old code and try it on this data set either way...


Say, you were an early contributor to Cayley! (https://github.com/cayleygraph/cayley)

Things were slow there for a while, but we have our own namespace now, we've done about a release per quarter for a bit, and have a small but thriving community on our discussion board: https://discourse.cayley.io

Currently up for discussion is reification :)


This is really cool! I wonder if there's any intersection between this and MIT's Concept Net (http://conceptnet5.media.mit.edu/) somewhere down the road.


Interesting that I was just about to link to the new version of ConceptNet (http://conceptnet.io).

Certainly a lot of the same language used to describe it. Different areas of focus. There's room for both in the world but dang the names are going to be confusing.


Anyone know if we can build something off this? The text "Disclaimer: The data, service, and algorithm provided by this website are based on the automatically computing and training of public available data. They are only for academic use. The user of such data, service, and algorithm shall be responsible for contents created by the algorithm by complying with compliance with applicable laws and regulations." makes me hesitate.


I think this could be used to do some really neat procedural generation of concepts in a game world (ala the kind of experience that Dwarf Fortress offers)


> Microsoft

> largest OS vendor

> "We may not be able to find any reasonable object other than Microsoft."

This seems a bit contrived, considering that Android has the larger install base.


Much like the real world and real people, there's no guarantee that the most popular concepts will be technically correct.


It seems to me one of the problems with machine learning in the NLP domain is that language concepts are mutable, but to varying degrees. In the dog/cat example used in the original post, the degree of mutability is very low, given that the concepts of a dog and a cat are rooted in the physical world.

However, consider more abstract human concepts or language that is new and changing often. Ironically, much of the language used to describe AI falls into this category (and thus subject to confusion among humans).

Any sort of machine learning algorithm would need to include some sort of 'adaptability' parameter that could tell the machine when to discard the current concept of the word and try forming a new one. This would need to be based on checks in both immediate context of the phrase, and related phrases.

Disclaimer: My knowledge of machine learning is limited to passive reading, so this may already be a part of any nlp algorithm, or I'm just completely off base. So please consider my comments are coming from the perspective of an outsider!


What is the difference from the word embedding approach? Isn't a concept just a group of words with high semantic similarity to each other?


It seems to be quite opinionated

https://concept.research.microsoft.com/Home/Demo?instance=hi...

it would be interesting to know more about how the graph is formed, and how it avoids "gaming" the engine

the Probase link is giving me a 400 error


My guess is it only parses certain word forms. "Blatant state-shtuppers" is in this blog post: http://www.transterrestrial.com/?p=63723

> "Let’s put blatant State-shtuppers such as Hillary, Bernie, and Obama at about 7 or an 8."

This matches Hearst Pattern #1 from https://www.microsoft.com/en-us/research/wp-content/uploads/...:

> NP such as {NP,}*{(or, and)} NP

Hillary usually appears by herself, rather than in a list. Apparently Probase doesn't pick up the plentiful "X is a Y" associations, e.g. the "Hillary is a liar" from http://thefederalist.com/2015/08/27/poll-voters-overwhelming... or "Hillary is a candidate" from http://www.huffingtonpost.com/jeffrey-sachs/hillary-is-the-c...

Or maybe it does, and they're ranked down. They do have a truth-detection phase, but it's mostly syntactic, and the top categories all have negative examples ("Hillary is not a candidate", "Hillary is not a democrat", etc.).
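
For the curious, pattern #1 is easy to approximate with a regex; a rough sketch (with a deliberately crude single-word stand-in for "NP", so nothing like Probase's actual extractor):

  import re

  sentence = ("Let's put blatant State-shtuppers such as Hillary, Bernie, "
              "and Obama at about 7 or an 8.")

  # crude stand-in for a noun phrase: a single (possibly hyphenated) word
  np = r"[\w-]+"

  # Hearst pattern #1: NP such as {NP,}* {(or|and)} NP
  pattern = re.compile(rf"({np}) such as ((?:{np}, )*)(?:or|and) ({np})")

  m = pattern.search(sentence)
  if m:
      concept = m.group(1)
      instances = [x.strip() for x in m.group(2).split(",") if x.strip()] + [m.group(3)]
      print(concept, "->", instances)
      # State-shtuppers -> ['Hillary', 'Bernie', 'Obama']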


Wow, those are rather interesting "concepts". It's surprising most of the top results are all totally subjective (and yes highly opinionated): 'unrepentant liar', 'gas bag', 'ruthless and totally corrupt politician'...

Clearly those associated concepts didn't come from the nytimes or wikipedia, so how can they ensure accuracy when scraping these unauthoritative sources?


Where is the data? Probase is sanitized; there is no MS https://concept.research.microsoft.com/Home/Demo?instance=pr...


"sex" >> Sorry that current Microsoft Concept Graph doesn't contain this instance.


Please note that Microsoft have censored some words like BLACK or F*CK so they do not appear in their search results: "Sorry that current Microsoft Concept Graph doesn't contain this instance."

But words like "WHITE" are ok, identified as "neutral color, traditional color, classic color non obtrusive color".

This is the Concept Graph demo where you can verify if the word is censored or not: https://concept.research.microsoft.com/Home/Demo


I hope this censoring was applied to a limited demo dataset and not to the whole dataset. Otherwise I can't really trust such "research".

In the end, it's our digital world that reflects our minds. And we should have the courage to look into the mirror.


Oh joy, the Semantic Web all over again...


The semantic web is still in play, in the form of linked data, schema.org, etc. I was looking at SKOS just yesterday to help solve a particular problem. Google's and Facebook's knowledge graphs are born out of knowledge engineering, the semantic web, etc.

I, for one, am willing to declare victory for semantic web technology.


I was under the impression that deep learning could already extract clusters of symbols and group them into concepts, with no other input than just large bodies of text, but I could be wrong.


Some interesting highlights from a quick scan of the data:

  item hot 44
  complex carbohydrate entirely grain product whole wheat bread 4620
  free rich company datum size 33222
  issue stress pain depression sickness 11110
  testing device glucometer diabetes blood sugar test strips insulin pump 7138
  big deal real estate investment opportunity 4135
  small portion couple small cookie 2438
  microsoft hardware failure bad hard drive 2281
  affordable and multifunctional furniture piece sofa 1750
  environmental factor diet 1588
  so called designer sandwich cranberry 1460
  practical add on towel rack 1459
  practical accessory towel rack 1498
  shop el corte ingles department store chain 1499
  combustible material clothe 1405



There are only screenshots of the Probase Browser. Does anyone have a link to a working instance?



Thanks, I saw that, but it's not the same graph-based UI as the screenshots.


How do nlp frameworks like this deal with the fact that "literally" means "not literally" in some contexts?


There are dictionaries of word senses, such as WordNet.

To identify word senses, first extract the words from text, collect many examples of their surrounding contexts, and apply clustering to them. If there is more than one sense, the senses show up as clusters. In practice, the words are replaced with word embeddings (numerical representations of their meanings).
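
A rough sketch of that pipeline (my own toy example, with TF-IDF bag-of-words contexts standing in for proper embeddings): collect contexts of an ambiguous word, vectorize them, and cluster; the clusters approximate senses.

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.cluster import KMeans

  # contexts of the ambiguous word "bank", with the word itself removed
  contexts = [
      "deposited money at the local branch downtown",
      "the loan officer approved the mortgage account",
      "interest rates on savings accounts keep falling",
      "we fished from the muddy river edge at dawn",
      "the river overflowed its grassy edge after rain",
      "herons nest along the steep river slope",
  ]

  X = TfidfVectorizer().fit_transform(contexts)
  labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

  # each cluster of contexts approximates one sense of the word
  for label, ctx in zip(labels, contexts):
      print(label, ctx)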


Somewhat related: I develop accounting software, and I want my users to be able to say, "Show me all the unpaid invoices for Paul Jones" or "Record a Mastercard payment to ABC Supplies for £32.20".

Are there any libraries or platforms that would help me implement this kind of natural language UI?


Microsoft's LUIS service will easily do that for you. You train it to recognize a user intent (show_invoices, record_payment) and to classify additional entities (payment_status, creditor_name, or payment_type). It works remarkably well.

You use the portal to register your application and enter some example sentences and specify your expected interpretation. Then you can start recognizing input strings using the API. You can then improve the recognizer by manually correcting input from actual use.

https://www.luis.ai/


Have a look at 'Named Entity Recognition' (NER), particularly a 7-class model to recognize: location, person, organization, money, percent, date and time.

The Stanford NLP Group has a NER project: http://www-nlp.stanford.edu/software/CRF-NER.shtml, and there are many others.
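
If you go the Stanford NER route, NLTK ships a wrapper for it; roughly something like this (the jar and model filenames are placeholders pointing at a local Stanford NER download, and the tags you actually get back depend on the model):

  from nltk.tag import StanfordNERTagger
  from nltk.tokenize import word_tokenize

  # placeholder paths to a local Stanford NER installation;
  # the 7-class model covers LOCATION, PERSON, ORGANIZATION, MONEY, PERCENT, DATE, TIME
  tagger = StanfordNERTagger(
      "english.muc.7class.distsim.crf.ser.gz",
      "stanford-ner.jar",
  )

  sentence = "Record a Mastercard payment to ABC Supplies for 32.20 pounds on Friday"
  print(tagger.tag(word_tokenize(sentence)))
  # a list of (token, tag) pairs, e.g. ('Friday', 'DATE') if the model recognizes it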


You might look into api.ai.



I think they did not build the graph by hand - they must have automated the process of creating it (interesting how they did that); the article mentions four team members, and if you did this by hand you would need more hands.

... well, the article links to this paper, where they seem to be automating the process:

"Probase: A Probabilistic Taxonomy for Text Understanding" https://www.microsoft.com/en-us/research/wp-content/uploads/...

Thanks for the link, now I have something to read. MS Research has some really bright people working for them; wow.


Reminds me of Cyc.


Different though. OpenCyc has not seen an update in a long while, but the data and the inferencing system are still good.


This is amazing! Kudos to Microsoft! This will really help people in NLP applications and small scale search engines for disambiguation.

Does anyone have a script for calculating the similarity scores, or know which papers have the formulas?


I was unable to parse "animals other than dogs such as cats" because it isn't a sentence, it has no predicate. Shouldn't language sort of be their thing here?


I'd like to be able to casually browse this taxonomy to see what's related to what, in the same way that I enjoy browsing Wikipedia with no particular goal in mind.


Unfortunately it doesn't sound like the data could be used for this, "Disclaimer: The data, service, and algorithm provided by this website are based on the automatically computing and training of public available data. They are only for academic use. The user of such data, service, and algorithm shall be responsible for contents created by the algorithm by complying with compliance with applicable laws and regulations."


I see a disturbing trend of tech blogs using images instead of HTML tables and graphs.


This looks a _lot_ like word2vec applied at large scale to me. [1] I have a feeling Google already does this.

[1] https://en.wikipedia.org/wiki/Word2vec


Why does Microsoft not like making mobile optimized web pages?


because they don't have a mobile platform


Can this data be put to commercial use?


What do these mean:

- MI

- NPMI

- PMI^K

- BLC


Short text understanding? God, I just want a decent thesaurus.


Funny you should mention that - my friend just submitted https://news.ycombinator.com/item?id=12852302 for feedback on his stab at a thesaurus!



