Dryken's comments | Hacker News

Anyway, none of the companies that pretend to be doing AI are actually doing AI. AI nowadays is pure branding bullshit.


I heard someone say that A.I. is just what we call technology that doesn't work yet. Once it works, we give it a specific name, like "natural speech recognition".


However, if a robot from scifi were to walk out of the lab, like Data or Ava from Ex Machina, or we had access to HAL or Samantha from Her, we wouldn't just give it a specific technical name. We would consider those to be genuine AIs, in that they exhibit human-level cognitive abilities in the generalized sense.

It's true that in Her, Samantha was just an OS at the start, kind of like how the holographic doctor was just a hologram at the beginning of Voyager, but as both stories progress, it becomes clear they are more than that. By the end of Her, Samantha and the other OSes have clearly surpassed human intelligence.

Those are fictional examples, but they illustrate what we would consider to be genuine artificial intelligence and not just NLP or ML. The reason people always downplay current AI is because it's always limited and narrow, and not on the level of human general intelligence, like fictional AIs are.


I like that definition too (I know it from Seth Godin). It’s honest, in the sense that we just don’t know yet how to do that stuff, instead of labeling every single line of code as AI.


I think a reason for this is that in the early days of computing and AI research, strong AI / artificial general intelligence (AI possessing equivalent cognitive abilities to humans) was considered both to be within reach, and the most obvious solution to many problem domains. We now realise that things such as computer vision and natural language translation can be approximated with solutions falling far short of strong AI.


Personally I define AI as software that you "train" rather than "program". In the sense that neural nets and other ML tools function as black boxes rather than explicit logic.

By that definition, AI is a real thing—it's built on top of programming that uses compilers and languages and ones and zeros—but it's different and it's valuable.

To say it's all bullshit, I feel, is to cut yourself off from new skills. Kind of like "compilers are all bullshit—it's opcodes at the bottom anyway."
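A toy sketch of that train-versus-program distinction, with everything here (the data, the learning rate) invented for illustration: instead of writing `x or y` as explicit logic, a one-neuron perceptron learns the OR function from labelled examples.

```python
# "Train, don't program": learn the OR truth table instead of coding it.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR examples
w = [0.0, 0.0]  # weights, start at zero
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # Fire (output 1) if the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Classic perceptron update: nudge weights toward each mistake.
for _ in range(20):  # a few passes over the data are enough here
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # → [0, 1, 1, 1]
```

The learned weights are the "black box": nothing in the source code states the OR rule, yet the trained model reproduces it.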


AI carries a set of connotations in popular imagination that a) don't comport with the actual capabilities of what we term 'AI' in the computer science world, and b) are being exploited by marketing teams at IBM and plenty of other companies to sell technologies that aren't particularly new or interesting. The kernel of truth in 'AI is bullshit' is really that the discourse around AI is bullshit, which I think is a pretty fair assessment, and this is coming from someone whose work gets labeled as AI on a regular basis.


>I define AI as software that you "train" rather than "program".

I like this definition. It covers things that are AI but not ML, like DSS / rules engines. I've built two fairly sophisticated DSS before but haven't messed with ML much. It seems interesting, but I haven't had the time.

https://en.wikipedia.org/wiki/Decision_support_system
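For anyone who hasn't seen one, the core of a rules-engine-style DSS fits in a few lines of forward chaining: rules fire when their conditions are satisfied, adding conclusions to the fact base until nothing new fires. The rules below are invented for the sketch.

```python
# Minimal forward-chaining rules engine (made-up advisory rules).
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),   # condition set -> conclusion
    ({"flu_suspected"}, "recommend_rest"),
]

# Keep firing rules until a full pass adds no new facts.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # → ['cough', 'fever', 'flu_suspected', 'recommend_rest']
```

Note the chaining: the second rule only fires because the first one added `flu_suspected`, which is what makes even simple rule sets feel "intelligent" without any ML.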

ELIZA was the first AI program I came into contact with, on the Commodore. It was built in the '60s.

https://en.wikipedia.org/wiki/ELIZA

AI is a very broad subject and ML is just a particular (promising) technique to perform AI.


Not sure what you mean by that. Is there some industry standard around the term "Artificial Intelligence"? I agree that it's become a bit of a buzzword, but I'm not sure that it's being misused.


When I hear AI I usually imagine deep learning, but many companies using the term don't specify.


But that's more "machine learning" which always seemed less "sexy" than AI -- basically just regression but better, not magical like AI.


I will say that when my company uses AI, they almost always just mean LSTM-based content generators - usually chatbots or "advisory"-style outputs - but the key idea is that it's generative and not just an evaluator.

I think that's probably the most helpful definition because your ML output has to go into some larger intelligence system (human or otherwise) to produce some decision / activity. So your choices are:

* Human

* Expert system with rules of interpretation that include ML output as input

* AI system which relies solely on inference and reinforcement / goal-seeking to produce output
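The generative-versus-evaluator distinction can be shown in miniature with a bigram model (a deliberately crude stand-in for the LSTM generators mentioned above, with a made-up corpus): the same learned counts can either score text or emit new text.

```python
import random
from collections import defaultdict

# "Train" on a tiny corpus by counting which word follows which.
corpus = "the cat sat on the mat the cat ran".split()
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def score(words):
    # Evaluator: fraction of adjacent pairs seen during training.
    pairs = list(zip(words, words[1:]))
    return sum(b in bigrams[a] for a, b in pairs) / len(pairs)

def generate(start, n, rng):
    # Generator: sample a continuation one word at a time.
    out = [start]
    for _ in range(n):
        nxt = bigrams.get(out[-1])
        if not nxt:
            break  # dead end: no known continuation
        out.append(rng.choice(nxt))
    return out

print(score("the cat sat".split()))       # → 1.0 (every pair was seen)
print(generate("the", 4, random.Random(0)))  # sampled, varies with the seed
```

The evaluator only ever hands a number to some larger decision-maker; the generator produces the content itself, which is the distinction being drawn above.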


If you used a "WebGL-first" engine it would probably work. Unity is "compiling" its project for WebGL, so that's one big process that can easily break.


Yes, we don't understand it either. We're also exporting with WebAssembly support instead of asm.js. The thing loads fine, then gets stuck: http://countryfortress.com/LamaExampleWebAssembly/.

By "WebGL-first" do you mean three.js or D3? We use Unity for iOS and Android, and we were looking to use a single tool for everything (if possible, of course). Is there any way to turn a simple Unity project into something that won't bring the browser to its knees?


Any information on:

- Where the data is saved

- Who has ownership of the data

- The possibility/procedure to get your data "erased" and not just archived/anonymized

- An API and other means of interconnection with Vero


They own everything you post. Check the TOS:

"You acknowledge and agree that if you provide data regarding your end users or email campaigns to Vero in connection with your use of the Services (“Customer Data”), you hereby do and shall grant Vero a non-exclusive, worldwide, royalty-free, transferable right to use, modify, reproduce, and display such Customer Data (including all related intellectual property rights) to (i) provide the Services and (ii) improve the Services’ ability to deliver web and application analytics services to you. You warrant, represent and agree that you have the right to grant Vero the rights set forth above."

https://www.getvero.com/terms-of-service/

Also, their co-founder previously ran Saudi Oger:

https://www.reuters.com/article/us-saudi-labour-foreign/aban...


Hope you both get paid and don't get into trouble because of this.


Oh no, trouble! That's the worst thing that could happen to someone! Getting in trouble!

People should stand up for themselves more, you deserve to be paid for your work. The client is the only person who should be in trouble. Don't be afraid of trouble.


You can't just sabotage your product because you didn't get paid. That just creates two wrongs that do not simply cancel each other out from a legal point of view.

There's a legal system to settle payment disputes. Using it is the civilised, and safe, version of "standing up for yourself".

Many service providers will put up nondescript error pages when the account isn't being paid. That's somewhat safer, but might not apply here: service providers are refusing to continue service. They're not changing a product that was already delivered.


> Don't be afraid of trouble.

Legally speaking, you probably should be. Contracts work when done properly.


You could legit go to jail under the CFAA for a stunt like this.


I was just about to purchase $10,001 worth of services from them and have now decided not to because of the computer trespass by the web developer.

According to https://www.cga.ct.gov/2012/rpt/2012-R-0254.htm that makes it a Class B Felony.

Plus the prima facie tort for the lost business.

Are you sure it's such a good idea to go around looking for trouble like this?

There are plenty of legal recovery avenues without going looking for trouble and pretending it's just standing up for yourself - a defense which will go nowhere in a felony hearing.


I guess you assume US laws are applicable everywhere in the world?


I guess you assume that because it's an African country they don't have computer crime laws?

http://kenyalaw.org/kl/fileadmin/pdfdownloads/bills/2017/Com...


You posted a link to Connecticut laws/regulations in your original post. I don't think that is applicable to Kenya.


FWIW the business is based in Kenya.


Any reason to choose Observable instead of Jupyter? A feature comparison matrix would be nice.



I can't find much information at all about these notebooks.

From what I can tell, they are JavaScript only, which could make for an easier deployment story than a full Jupyter notebook server. That said, it's unclear from a quick tour of the site whether deploying these notebooks themselves is something one would or could do.


I find it odd that Bostock doesn't even mention Jupyter notebooks in the introduction. It is by far the most common and popular interactive notebook and supports many languages. There is also a next version, JupyterLab[1], which looks fantastic!

I do like the design of the notebook and the ability to pin cells. I got tired of Jupyter's horrendous default interface and wrote a new interface skin called Spin Zero[2]. I am trying to convince the Jupyter community to pay attention to their design.

[1] https://github.com/jupyterlab/jupyterlab

[2] https://github.com/neilpanchal/spinzero-jupyter-theme


Not gonna happen anymore, unless you want to mine new cryptocurrencies hoping their value goes up.


Not a direct response, but for the multiple-login problem, have you considered using some kind of LDAP, like Active Directory?


I have not, but thanks for the suggestion. I don't have much experience here, but I was under the impression that Active Directory was for Windows networks.

A detail I didn't include in my original post is that the primary devices used are Android tablets & phones. I don't know much about LDAP in the mobile context (or in general, to be honest).


Although Active Directory is installed on Windows, many libraries allow you to use it as a source of truth for authentication. But of course your apps need to be compatible with it (and if you don't want to install AD, any other LDAP server would work too, if the apps are compatible).

Good luck with your search :)


They are saying that their auth system is down.


Sadly they use Gremlin, which is so often said to have poor performance.


AFAIK Gremlin is just a query language - it shouldn't have much to do with performance.


Gremlin is indeed the query language, but it requires a Gremlin engine. This generally means passing strings to the DB (which gives you advantages like predicate pushdown, essentially DB-side filtering), but there is associated overhead. Compare something like Cypher, which is now serialised and very fast with the Bolt protocol.


That was my point, but my rhetoric was not as good as yours :)


I believe Gremlin is just the query language. There is an original backend that implemented it, which might be what you are thinking of as having performance issues. But the query language intrinsically doesn't have issues, I don't think.


Currently I work on a project with Neo4j and Cypher, and I miss some of Gremlin's tricks for optimizing graph traversals (for example, stopping a sub-traversal when a given limit of matches has been reached).
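For readers who haven't seen the trick in question: in Gremlin you can scope a limit to each sub-traversal with local(), so expansion stops per source vertex once enough matches are found. A sketch against a hypothetical social graph (the labels and property names are invented):

```
// For each person, stop expanding the 'knows' sub-traversal after 3 matches.
// local() scopes the limit to each source vertex, not to the whole query.
g.V().hasLabel('person').
  local(out('knows').limit(3)).
  values('name')
```

Without local(), the limit(3) would apply globally and cut the whole result set to three rows instead of three per person.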


While this isn't natively supported yet, there are some tricks to achieve something like this, either with using APOC Procedures for subqueries, or, if your expansion-stop case is based on labels, APOC's path expansion procedures. https://neo4j.com/developer/kb/limiting-match-results-per-ro...


My point is not to criticize Cypher (at all). Its learning curve is perfect, and it covers most of the requirements really well with a compact, readable syntax. Plus it improves with each version of Neo4j, which is cool.

My point is that Gremlin has been super efficient for us for expressing (in its functional way) tricky traversals. So I do not see any reason to discard it as an "inefficient" technology.


It prevented me from verifying my identity with my (non-Canadian) bank for a good while :-(

