Hacker News — desideratum's comments

Hi Stefano. I'm an ML Engineer/Researcher looking for a new role, and very interested in learning more about Epistemic. Am I correctly visualizing something similar to https://www.connectedpapers.com/ as part of your product? I can see incredible potential using ML/NLP to build knowledge graphs from a diverse set of sources of biomedical knowledge, from derisking future research to exposing connections and ideas people haven't even considered! Thanks, Salman


Hi Salman, happy to chat, shoot me an email!


Nick Bostrom's "Superintelligence" is a sober perspective on this issue and a very worthwhile read.


Yup, that's a good recommendation. I've read it, along with some of the AI Safety work that a small portion of the AI community is doing. At the moment there seems to be no reason to believe that we can solve this.


Hi David, wonder if you'd be open to remote within the UK? I'd be interested in the ML Engineer role primarily, but would also happily be considered for other roles.


We're open to remote depending on the role. Naturally some roles can't be remote due to the nature of our work. Please reach out to me at david@optimal.ag.


Some truly impressive results. I'll raise my usual point here, as I do whenever a fancy new (generative) model comes out, and I'm sure some of the other commenters have alluded to this. The examples shown are likely from a set of well-defined (read: lots of data, high bias) input classes for the model. What would be really interesting is how the model generalizes to /object concepts/ that have yet to be seen, and which have abstract relationships to the examples it has seen. Another commenter here mentioned "red square on green square" working, but "large cube on small cube" not working. Humans are able to infer and understand such abstract concepts from very few examples, and this is something AI isn't as close to as it might seem.


It seems unlikely the model has seen "baby daikon radishes in tutus walking dogs," or cubes made out of porcupine textures, or any other number of examples the post gives.


It might not have seen that specific combination, but finding an anthropomorphized radish sure is easier than I thought: type "大根アニメ" ("daikon anime") into your search engine and you'll find plenty of results.


Image search for "大根 擬人化" ("daikon personification") does return results similar to the AI-generated pictures, e.g. the 3rd from the top[0] in my environment, but they're sparse. Text search for "大根アニメ" actually gives me results about an old hobbyist anime production group[1], some TV anime[2] with the word in the title... hmm

Then I found these[3][4] in Videos tab. Apparently there’s a 10-20 year old manga/merch/anime franchise of walking and talking daikon radish characters.

So the daikon part was already represented in the dataset. The AI picked up the prior art and combined it with the dog part, which is still tremendous, but maybe not "figured out the walking daikon part on its own" tremendous.

(btw, does anyone know how best to refer to the anime art style in Japanese? It's a bit of a mystery to me)

0: https://images.app.goo.gl/LPwveUJPWHr6oK8Y8

1: https://ja.wikipedia.org/wiki/DAICON_FILM

2: https://ja.wikipedia.org/wiki/%E7%B7%B4%E9%A6%AC%E5%A4%A7%E6...

3: https://youtube.com/watch?v=J1vvut5DvSY

4: https://youtu.be/1Gzu2lJuVDQ?t=42


> anyone knows how best to refer to anime art style in Japanese?

The term mangachikku (漫画チック, マンガチック, "manga-tic") is sometimes used to refer to the art style typical of manga and anime; it can also refer to exaggerated, caricatured depictions in general. Perhaps anime fū irasuto (アニメ風イラスト, anime-style illustration), while a less colorful expression, would be closer to what you're looking for.


At least for certain types of art, sites such as pixiv and danbooru are useful for training ML models: all the images on them are tagged and classified already.
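As a toy illustration of why tags matter for dataset building (the record shape and tag names here are made up, not any site's actual export format), selecting a training subset from a tagged collection is just a filter over (image, tags) pairs:

```python
def select_training_subset(records, required_tags):
    """Keep only images carrying every required tag.

    records: iterable of (image_path, tags) pairs, where tags is any
    collection of user-supplied tag strings.
    Returns the image paths whose tag set covers required_tags.
    """
    required = set(required_tags)
    return [path for path, tags in records if required <= set(tags)]

# Hypothetical example records:
recs = [
    ("a.png", {"daikon", "anime"}),
    ("b.png", {"dog"}),
    ("c.png", {"daikon"}),
]
```

So a model trainer never has to label anything by hand; the site's users already did it.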


If you type different plants and animals into GIS, you don't even get the right species half the time. If GPT-3 has solved this problem, that would be substantially more impressive than drawing the images.


What is GIS? I only know Geographical Information System.


probably Google Image Search


Yeah, with these kinds of generative examples, they should always include the closest matches from the training set, to see how much the model just "copied".


It's very hard to define closest...
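One rough way to operationalize "closest", sketched here purely as an illustration (not anything OpenAI has described): embed both the generated image and the training images with any pretrained encoder, then rank training images by cosine similarity.

```python
import numpy as np

def nearest_training_images(query_vec, train_vecs, k=3):
    """Rank training-set embeddings by cosine similarity to a query.

    query_vec: (d,) embedding of the generated image
    train_vecs: (n, d) embeddings of the training images
    Returns the indices of the k most similar training images.
    """
    q = query_vec / np.linalg.norm(query_vec)
    t = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    sims = t @ q                   # cosine similarity per training image
    return np.argsort(-sims)[:k]   # indices of the k closest matches
```

Of course this just moves the problem into the choice of encoder: "closest" in pixel space, semantic space, and style space can all give different answers.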


This is a spot-on point. My prediction is that it wouldn't be able to. Given its difficulty generating correct counts of glasses, it seems it still struggles with systematic generalization and compositionality. As a point of reference, cherry-picking aside, it could model the obscure but apparently well-represented "baby daikon radish in a tutu walking a dog", but couldn't model red on green on blue cubes. Maybe more sequential perception, action, or video data, or a System 2-like paradigm, would help, but it remains to be seen.


Yes, I don't really see impressive language (i.e. GPT-3) results here. It seems to morph the images of the nouns in the prompt in an aesthetically pleasing and almost artifact-free way (very cool!).

But it does not seem to 'understand' anything, as some other commenters have said. Try '4 glasses on a table' and you will rarely see 4 glasses, even though that is a very well-defined input. I would be more impressed about the language model if it had a working prompt like: "A teapot that does not look like the image prompt."

I think some of these examples trigger some kind of bias, where we think: "Oh wow, that armchair does look like an avocado!" But morphing an armchair and an avocado will almost always look like both, because they have similar shapes. And it does not 'understand' what you called 'object concepts', otherwise it would not produce armchairs you clearly cannot sit in due to the avocado stone (or stem, in the flower-related 'armchairs').


> I would be slightly more impressed about the language model if it had a working prompt like: "A teapot that does not look like the image prompt."

Slightly? Jesus, you guys are hard to please.


Right, that was unnecessary and I edited it out.

What I meant is that 'not' is in principle an easy keyword to implement 'conservatively'. But yes, having this in a language model has proven to be very hard.

Edit: Can I ask, what do you find impressive about the language model?


Perhaps the rest of the world is less blasé, rightly or wrongly. I'm reminded of this: https://www.youtube.com/watch?v=oTcAWN5R5-I when I read some comments. I mean... we are telling the computer "draw me a picture of XXX" and it's actually doing it. To me that's utterly incredible.


> "draw me a picture of XXX" and it's actually doing it. To me that's utterly incredible.

Sure, would be, but this is not happening here.

And yes, rest assured, the rest of the world is probably less 'blasé' than I am :) Very evident by the hype around GPT3.


I'm in the OpenAI beta for GPT-3, and I don't see how to play with DALL-E. Did you actually try "4 glasses on a table"? If so, how? Is there a separate beta? Do you work for OpenAI?


In the demonstrations, click on the underlined keywords and you can select alternates from a dropdown menu.


Sounds like the perfect basis for a new captcha system. Generate a random phrase, show the user a set of generated images, and ask them to select all images matching that description.
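A minimal sketch of that flow (everything here is hypothetical: `generate_images` stands in for some text-to-image backend, and the phrase list is invented):

```python
import random

PHRASES = [
    "an armchair shaped like an avocado",
    "a baby daikon radish in a tutu walking a dog",
]

def make_captcha(generate_images, n_real=3, n_decoy=6):
    """Build a challenge: n_real images generated from the target phrase,
    shuffled together with n_decoy images from a different phrase.

    generate_images(phrase, n) is a hypothetical text-to-image backend.
    Returns (phrase, image_list, set_of_correct_indices).
    """
    phrase = random.choice(PHRASES)
    decoy = random.choice([p for p in PHRASES if p != phrase])
    images = [(img, True) for img in generate_images(phrase, n_real)]
    images += [(img, False) for img in generate_images(decoy, n_decoy)]
    random.shuffle(images)
    answer = {i for i, (_, is_real) in enumerate(images) if is_real}
    return phrase, [img for img, _ in images], answer

def check_answer(selected, answer):
    """The user passes iff they selected exactly the matching images."""
    return set(selected) == answer
```

The nice property is that every challenge is novel, so there's no fixed image bank to scrape; the open question is whether a classifier model could solve it as easily as a human.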


Thanks for this. I had the same thought about this being a lesson they'll need to learn. The other two engineers put up very little resistance and seem to be perfectly happy with their TC.


Appreciate the correction. On the first point: I've written and led the submission of a large funding grant to support the development of the product I'm responsible for.

It seems like having a concrete counter-offer would strengthen my position and make leaving a much more comfortable decision.


> It seems like having a concrete counter-offer would strengthen my position and make leaving a much more comfortable decision.

A counter-offer from another company is a prerequisite for leaving. Even a low-paying job is better than no job.

That said, don't get caught in the trap of waiting around for the perfect counter-offer. If you want to negotiate a higher salary, it's not helpful to delay months and months to find the perfect other job. If you're not getting counter-offers soon, you can skip straight to negotiating. Don't let yourself get stuck in limbo forever.


I've been going through the same striking realisation: responsibilities/work put in != ownership.


Thanks for the response. I definitely know I can easily make double with my experience. I'm in the UK (non-London), and while I enjoy the project and the opportunity to lead here, I'm not willing to do it for my current TC. My biggest question is how I can negotiate a salary I'd be happier with before outright leaving.


If you can easily make double and you'd be happy with double, ask for double. (BTW, it's crucial to specify location and currency when discussing compensation online. £22K in a non-London UK location is very different from $22K USD in a major US city, for example.)

Best case, you get what you ask for.

Worst case, they say no and you're right back where you started.

Your negotiating position will be 10X more effective if you're actually willing to walk away from this company. If they think you're going to stay regardless of how much they pay you, they might try to call your bluff and continue underpaying you.


Appreciate the advice. I'm not so invested here that I'm not willing to walk away, and if they have considered that I might do so, it isn't reflected in their efforts to compensate me.


Can you refer to any specific theories that don't require expansion? (for reading)


sure!

Electric Universe (Thornhill, Birkeland, Scott, et al)

Plasma Cosmology (Alfven, Lerner, et al)

Infinite Universe Theory (Borchardt)

Recycling Universe Cosmology (Mitchell)

Subquantum Kinetics (La Violette)

Modern Mechanics (Bryant)

Push Gravity: (various, book: Pushing Gravity)

The Static Universe (Ratcliffe)

Steady State Universe (Hoyle, older)

That's a good start at least. ;-)

All of the above have one or more books written about them.


An infinite Universe makes the most sense, let's adopt that


> An infinite Universe makes the most sense, let's adopt that

Why would human intuition about what makes sense have any impact on physical reality?


Reality does not care what makes sense to you. Let's find out what's true and adopt that.


yes, personally I believe that pure logic alone can bring one to believe in an infinite, fractal, eternal universe.

In other words, there is no smallest or largest.

There was never any beginning nor will there ever be an end. (If there was a beginning, what caused it?)

There is no outer boundary, limit, or end to the universe. If there were, beyond it would have to be "nothing". How can "nothing" exist, and what would be the border between "something" and "nothing"?

There is no "time", only matter in motion.

Matter is always divisible into something smaller, and composable into something larger.

Matter in motion at scale n is perceived as a "force" at scales > n.

anyway, fun to think about!


Anyway, Borchardt's book/theory is pretty good.

A bit different from what I've always thought of as my personal infinite-universe theory, but good food for thought, and he makes the math work. Definitely worth the read!


Glasgow alumnus here. The Haskell course was incredibly well taught, though the free learning material for Haskell (e.g. learnyouahaskell) is of significantly higher quality than that for most other programming languages.

