Hacker News | clauderoux's comments

The exceptions are usually due to words that were borrowed from other languages and hence do not follow French rules. Many of the words that were mentioned here are borrowed from the Occitan language.

Sam Altman looks more and more like Lex Luthor in the last Superman franchise... He is not releasing any kind of alien creature on the world, of course... Wait...


Nah, that's Musk 100%. Sam is still in the small leagues.


I don't buy it, unless there is something that I missed. But for this concept to work you need some initial energy to break down the water molecules, then the output of your power cell should be large enough to power your car and break down more of these water molecules. I know there is a catalyst in this contraption that should lower the energy requirement to break down water, but I fail to see how this system is sustainable. The energy equation seems unbalanced to me.


There is a whole economy that depends on people working 5 days a week, such as hospitality. Furthermore, as someone who is now 60, I can tell you how disastrous it is for your health when you also have to work during weekends. As you can never really rest, tiredness stacks up to the point of burnout. You badly need those days to give your mind a bit of fresh air. I went through absolute nightmares many times when a manager had decided on unattainable deadlines, which forced us to skip weekends altogether. I would then spend weeks before being able to work properly again. Those fat geese sitting on a fat mattress of stock options, threatening us into making them richer, are beyond indecent.


As far as I can remember, I've always been carsick when I'm not driving. When I first tried my Q3 headset, I couldn't stand it for more than 10 minutes in a row. The trick I discovered works for me, but I wouldn't bet on it for everyone else: I remove my shoes and play barefoot, with both feet firmly on the ground. I couldn't play Population One until I applied this trick. Now, after many months of use, I must say I feel pretty comfortable. My main usage now is watching movies or videos on a cinema-size screen.


The technology has been around for about a year now. For people who have been working in the field for as long as I have (over 30 years), the output is absolutely incredible. Really incredible... However, for the vast armies of academics and others who went down a different route, the result is bitter: we failed to deliver anything close to LLMs. I put myself on that list. The problem is that many of these technologies were developed by the GAFAM, which few people have confidence in, for obvious reasons; however, these were the only entities with enough computing power to do so. Nothing surprising here. Now we have many people who think that AI is a hoax and that the bubble is going to burst soon. There is something pretty religious here; it reminds me of people who get convinced by gurus that the world will end on a specific date, for reasons... Even in its current state, AI is already incredibly useful, and I don't see anything to convince me that AI has hit a glass ceiling. Believe me, 5 years ago this glass ceiling was much, much lower than it is today.


To what extent do you see the output as incredible?

On the surface the output _seems_ incredible, but once you start pushing for more or requiring production-level consistency, it takes a tremendous effort to put in place, or it is simply not possible.

I also have a few decades in the field, especially around the automation of knowledge processes, so I'm genuinely interested in other viewpoints.


I'm always surprised by this kind of article, or by comments from people who don't know anything about how LLMs work or what they can do. The problem is that, as with most tools, there is a learning curve. Prompting is not always straightforward, and after using these models for a while, you start discerning what should be prompted and what won't work. The best example I have is a documentation that I wrote in Word and wanted to convert to Markdown for a GitHub site (see https://github.com/naver/tamgu/tree/master/documentations). I split my document into 50 chapters of raw text (360 pages) and asked ChatGPT to add Markdown tags to each chapter. Not only did it work very well, but I also asked the same system to automatically translate each of these chapters into French, Spanish, Greek and Korean, keeping the Markdown intact. It took me a day to produce 360 pages translated into these languages as GitHub-ready documents. The electricity consumption was certainly high for this task, but compare it to doing the same task by hand over maybe a few weeks of continuous work.


I read the article and found it quite lacking. Why on Earth would you force your LLM to translate sentence by sentence? It defeats the whole point of LLMs, which is to use large contexts to drive the generation. I used DeepL a lot in the past and had a recurring problem when translating computer-related texts from French into English. In French, a "chaîne" in the context of computer science is usually translated as "string"; however, since DeepL (or Google Translate) would not take previous sentences into account, the system would lose the computing context and translate "chaîne" as "chain", which of course was usually wrong.

But the funniest part was when I wanted to translate "jeûner" into Greek. "jeûner" in French means "to fast", in the sense of not eating. However, Google translated "jeûner" into "gregoria" in Greek, which means "fast" in the sense of speed... It went through English, translating "jeûner" into "fast", then "fast" into "gregoria"...


I'm one of the authors on the paper. Actually, sentence-by-sentence translation is important in a machine translation system because in many cases users will only provide single sentences. We also test document-level translation in Section 5, and find large improvements (but it isn't the focus of our paper).


@OskarS I agree with you... In fact, I had been avoiding gotos as a matter of principle for years, and now, reading the different discussions, I think I was misled. This case is very, very common: you have 3 nested loops and you want to get out. The traditional solution is to add intermediate variables to propagate the end of the loops, with a cascade of "if (toto) break;" statements, which is far from exquisitely elegant... Using a function for that really adds more complexity and more stuff on the stack for nothing.


Exactly. The only reason not to use goto here is because of a dogmatic opposition to ever using goto. In fact, this is a rare case where goto makes the flow of the program more clear rather than less clear, and alternate solutions are just worse.


Also, as a matter of technology, using an "if" to get out of loops interacts poorly with modern processors and their branch-prediction logic.

My solution for a long time was to add the test within the "for" itself, but it is often confusing for users.


Gary Marcus tries to exit his personal hole of irrelevance. I have been working in the field of computational linguistics for 30 years. Back in 2000, I worked with a team of linguists to implement a fairly refined syntactic parser (XIP) based on shallow parsing, a symbolic approach. We won a SemEval competition with this system as late as 2016, on sentiment analysis. But I never expected LLMs to reach this level of competency in my lifetime. Critics are prone to describe the errors these models make, but they seem to forget that 1. an LLM is not a search engine and 2. it is pretty knowledgeable in a huge range of domains, which no human can equal. I use these models every day to generate code or to explain concepts to me. It never ceases to amaze me.

