A few, but none with enough momentum to win over the mainstream. See Tox, Vector/Matrix, Jitsi with OTR... Otherwise there are web-based solutions that work well. Is it talky.io?
EDIT: Not forgetting Signal, of course. But I think you mean video comms?
I'm cautiously impressed by Matrix's momentum, to be honest (although I'm biased given I work on it) - we've got over 300K users on the matrix.org server now and around 1000 other servers visible from matrix.org. Meanwhile Riot.im (the app previously known as Vector.im) is hopefully pretty mainstream friendly...
I'm looking forward to Matrix too. It's still very immature right now though, as (as I understand it) we're still waiting for E2E encryption on mobile and group video calling(?).
It's hard to see Matrix as a standard right now too, with only one server and one client even remotely complete. Would love to see a multiplatform native client!
Thanks for the work you do, I realllllly hope matrix attains some success.
Sometimes I wonder if all these people who criticize, or who think that a Latin alphabet can be adapted seamlessly to all languages, have tried to study any logographic language past a beginner level.
I have studied Japanese, and still think that a logographic writing system was a mistake. Consider the time and effort it takes for native speakers to become literate.
I also think that the Latin alphabet could be easily used for Japanese, which does not contain any sounds that do not have an obvious equivalent in English, and even if it did, we could always repurpose a character or sequence of characters for that sound (do we really need a 'c'?).
Having said that, the Japanese phonetic system writes voiced sounds as a modification of their unvoiced counterparts. Why can't we all do that?
The biggest risk of using Latin is that simply sharing an alphabet could cause spelling conventions of other languages to bleed in.
Native speakers seem to do fine. Learning a language while growing up, with hiragana as a helper, while all your media is written in Japanese, makes everything easier. When they finish school they know enough Japanese to get by. It's obviously different for non-native people.
Also, it's not like you stop learning even after school. For example English has according to the Oxford dictionary 171,476 words in current use excluding inflections, and several technical and regional vocabularies. Do all English university students know these words?
Logographic systems have some major disadvantages:
• It's possible to know how to say a word, but have no clue how to write it. This phenomenon is called character amnesia, and it affects most native speakers.[1] Phonetic languages allow you to write out a misspelled word, which readers can understand (or autocorrect can fix).
• Likewise, it's possible to know what a symbol means, but have no idea how to pronounce it. This is extra-fun in Japanese, where most kanji have multiple pronunciations.
• Looking up words is harder, as there are no "letters" to sort by. Sorting can be done by stroke count, by radical (four corners or SKIP), or by phonetic spelling (in pinyin or hiragana). Modern technology has made this easier, and some phone apps (like Pleco) can even OCR hanzi. Still, it's far less convenient than phonetic languages.
The only aspect in which logographic systems win is information density. You can fit more words on a single page. This is obvious if you've ever seen Chinese or Japanese copies of works that were originally written in English. The Harry Potter books are crazy thin. Also, Chinese and Japanese tweets can express a paragraph of information.
> It's possible to know how to say a word, but have no clue how to write it.
> Likewise, it's possible to know what a symbol means, but have no idea how to pronounce it.
As a second language learner of English I can attest that this is not just a problem of languages written in logographic systems:-)
>The only aspect in which logographic systems win is information density.
I vaguely remember a paper that claimed that information density is pretty much constant across languages and writing systems, but I can't find it right now. There is another thread on HN [1]
where people compared the size of the "Universal Declaration of Human Rights" in different languages. I think this misses the point because it doesn't account for intra-character information density.
It'd be much more interesting to render the text into a bitmap and then compare compressed bitmap sizes.
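For what it's worth, that experiment is easy to rough out in a few lines. The sketch below is mine, not from the thread; it assumes Pillow is installed and that font_path points at a font covering both Latin and CJK glyphs (e.g. a Noto Sans / Noto Sans CJK pair). It renders the text to a bitmap and reports the compressed size of the raw pixels.

    import zlib
    from PIL import Image, ImageDraw, ImageFont

    def rendered_compressed_size(text, font_path, font_size=24, width=800):
        # Naive line-wrapping by character count; a real experiment would wrap by pixel width.
        chars_per_line = max(1, width // font_size)
        lines = [text[i:i + chars_per_line] for i in range(0, len(text), chars_per_line)]
        img = Image.new("L", (width, font_size * (len(lines) + 1)), color=255)
        draw = ImageDraw.Draw(img)
        font = ImageFont.truetype(font_path, font_size)
        for row, line in enumerate(lines):
            draw.text((0, row * font_size), line, font=font, fill=0)
        # Compress the raw pixels; less compressed output per text of equal meaning = denser writing.
        return len(zlib.compress(img.tobytes()))

    # e.g. rendered_compressed_size(udhr_english, "NotoSans-Regular.ttf")
    #  vs. rendered_compressed_size(udhr_japanese, "NotoSansCJK-Regular.ttc")

Comparing the two outputs for, say, the English and Japanese UDHR texts would get closer to "ink per unit of meaning" than raw byte counts do.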
People like to joke about English spelling, but see farther down-thread for examples of how bad things are in logographic systems. Even native-speaking PhDs can forget how to write words like "sneeze" or "toad". It's a failure mode that simply doesn't exist in phonetic languages (even ones as imperfect as English).
Sorry if it wasn't clear, but by "information density" I meant area on a page or screen, not digital bytes. In the thread you linked to, people correctly point out that digital information density depends on the encoding and compression scheme, which matter far more than the language.
The paper you're probably thinking of is A Cross-Language Perspective on Speech Information Rate[1][2], which (as the title indicates) studied spoken language, not written. Annoyingly, the study was widely misrepresented in the media. It found that languages with lower information density tended to have higher syllabic rates. That is: Spanish contained less information per syllable than English or Mandarin, but Spanish speakers spoke faster to make up for that. Most media summaries of the paper omitted an important finding: the compensations didn't balance out. Different languages had different information rates. In the study, English had the highest. The runner-up (French) was 10% slower. And Japanese was 30% slower at conveying information.
>Phonetic languages allow you to write out a misspelled word, which readers can understand (or autocorrect can fix).
You can certainly write things out in kana. When I was more serious about studying Japanese, I knew less than 1000 kanji, but had a vocabulary several times that size, and would at times write out the word I meant in hiragana. And if we're counting autocorrect, your IME is going to take that hiragana and let you find the character.
>• Looking up words is harder, as there are no "letters" to sort by. Sorting can be done by stroke count, by radical (four corners or SKIP), or by phonetic spelling (in pinyin or hiragana). Modern technology has made this easier, and some phone apps (like Pleco) can even OCR hanzi. Still, it's far less convenient than phonetic languages.
Eh, I disagree here. It's harder if you're used to looking things up by spelling, but once you're fast at looking things up by radical, it's not that difficult. My misguided attempts at slogging through 1Q84 while reading at (at best) a middle-school level got me pretty fast at looking up kanji. No appreciable difference vs. looking things up in a regular dictionary.
You cannot write things out in Kana in Chinese. As such, GP's point against logographic writing systems stands, notwithstanding mixed writing systems such as Japanese.
Even without autocorrect, you can write a word in English such that most people would understand. Of course, in a logographic system you'd just write a homophone (which is what people actually do, write a simpler word pronounced the same).
As for looking things up, it is in principle easier, though: you only need to learn the order of about 26 things, not about 200, and can then run an iterative binary search over them, without having to switch to stroke count. It is possible, of course.
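To make the binary-search point concrete, here's a toy sketch (the word list and function name are made up):

    import bisect

    # Tiny made-up word list standing in for a paper dictionary.
    word_list = sorted(["apple", "banana", "cherry", "grape", "melon", "peach"])

    def look_up(word):
        # Binary search: only the ordering of the 26 letters is needed.
        i = bisect.bisect_left(word_list, word)
        return i < len(word_list) and word_list[i] == word

    print(look_up("cherry"))  # True
    print(look_up("durian"))  # False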
Some upper and lower case letters have no clear resemblance, see Aa Rr Gg Nn, so one has to learn 52 symbols. Add another 52 symbols for cursive script, if you have to. Then, in the case of English, learn how to pronounce or spell words, because in some cases there are no rules (why "ocean" and not "oshean"? Because of derivation from Greek, but still...).
Anyway, any alphabet is better than Chinese characters.
>• It's possible to know how to say a word, but have no clue how to write it. This phenomenon is called character amnesia, and it affects most native speakers.[1] Phonetic languages allow you to write out a misspelled word, which readers can understand (or autocorrect can fix).
>
>• Likewise, it's possible to know what a symbol means, but have no idea how to pronounce it. This is extra-fun in Japanese, where most kanji have multiple pronunciations.
I don't think English is much better in these cases. In fact, the writing can be so divorced from speech that spelling bees are a thing.
I've had Chinese colleagues who, when asked to write a word they'd just used in a sentence, were simply unable to. At first I thought they were playing a joke on me. But nope, they'd just forgotten the appropriate hanzi, and they couldn't even hazard a guess. It's a totally different failure mode than imperfectly-phonetic languages like English.
> I was once at a luncheon with three Ph.D. students in the Chinese Department at Peking University, all native Chinese (one from Hong Kong). I happened to have a cold that day, and was trying to write a brief note to a friend canceling an appointment that day. I found that I couldn't remember how to write the character 嚔, as in da penti 打喷嚔 "to sneeze". I asked my three friends how to write the character, and to my surprise, all three of them simply shrugged in sheepish embarrassment. Not one of them could correctly produce the character. Now, Peking University is usually considered the "Harvard of China". Can you imagine three Ph.D. students in English at Harvard forgetting how to write the English word "sneeze"?? Yet this state of affairs is by no means uncommon in China. English is simply orders of magnitude easier to write and remember. No matter how low-frequency the word is, or how unorthodox the spelling, the English speaker can always come up with something, simply because there has to be some correspondence between sound and spelling.
To be fair, you can also "come up with something" in Chinese. Since there aren't all that many sounds, you can write in generic characters for the sound of the word that you can't remember.
Yep. The analogy I use is, it's a bit like if someone walked up and asked you to draw the logo of this or that company. Even if you've seen the logo a million times, you might not be able to summon up a mental picture of it, or you might remember the rough shape but have no idea how many lines go where.
Same here - and strangely enough, it's rarely a problem. Faking characters by using the correct radical and a random homophone base character works okay in a pinch.
But because I never write characters by hand, I have a really hard time reading handwritten notes, and that is a problem.
> For example English has according to the Oxford dictionary 171,476 words in current use excluding inflections, and several technical and regional vocabularies.
Here is a website that quizzes you on a random sample of words from an English dictionary, mixed with randomly generated non-words. It then estimates the percentage of English words you know.
I'm curious: did you only answer yes to the words whose meanings you knew, or to anything that you knew was indeed a word? There were some that were pretty obviously words, but I wasn't certain of the exact meaning (although I could guess), so I answered no. I ended up with 77% (as a native speaker). Apparently the average for native speakers is 67%, so 77-89 as a non-native speaker sounds really good.
I just did it, and I answered yes to words I knew, or knew were actual words even if I didn't know the exact meaning. Like "argon": I know it is something related to chemistry, but I don't actually know what it is. Some were compound words which I am not sure would be in a dictionary, but are still valid words.
I got 73% and I didn't say 'yes' to any fake words.
73% is apparently "This is a high level for a native speaker."
Sure, I didn't mean to suggest it can't be done in short spurts. But reading a novel that way would be hellish.
The larger point being, Japanese isn't locked into using a logographic system - it already has two phonetic syllabaries that people could start using exclusively if there was some advantage to doing so.
That sounds like an absolutely miserable experience. I'd rather be forced to look up every 3rd or 4th kanji than try to deal with all hiragana writing.
> I also do not think that the Latin alphabet could be easily used for Japanese, [...]
You stuck an extra “do not” in your sentence
* * *
As far as alphabets go, the Phoenician/Greek/Etruscan/Latin alphabet is pretty ad hoc and mediocre. But hey, it’s what we know. At this point, I think we’re stuck with it.
Similar story for modern Hindu/Arabic/European numeral glyphs. Learning arithmetic would be noticeably simpler if the glyphs expressed some of the symmetries of the number system. Alas.
As far as the alphabet itself goes, I do not think that Latin is that bad. All symbols have a canonical sound associated with them. The problem is that our usage of the alphabet is horribly inconsistent. This is partially due to the fact that English has sounds that cannot be expressed using the "pure" alphabet. Arguably Japanese has this same problem in their system, with the ゃ、ょ、ゅ modifiers. But at least they distinguish those from や、よ、ゆ by size, and are disciplined about their usage, so we can consider the set of compounds to be their own characters and not have a mess.
Of course you still have the ず/づ issue, and the pronunciation of は and を as わ and お in their most common usage. But, even in modern Japanese, these oddities are not universal.
Out of curiosity, are you aware of any numeral system that beats Arabic? By pre-Arabic European standards, Arabic numerals are a masterpiece of symmetry.
It can also be nice to use a “balanced base”, with digits for negative numbers, e.g. in a base ten context you’d have digits for –4 to 5 (or if you’re willing to have multiple expressions for the same number, –5 to 5).
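As a rough illustration, here's a toy conversion to balanced decimal with digits -4..5 (one of the conventions mentioned above; the function name is mine):

    def to_balanced_decimal(n):
        """Digits of n, least-significant first, each in -4..5."""
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 10              # 0..9
            if r > 5:
                r -= 10             # fold 6..9 into -4..-1; the carry propagates via the division below
            digits.append(r)
            n = (n - r) // 10
        return digits

    print(to_balanced_decimal(7))   # [-3, 1]   : 1*10 + (-3) = 7
    print(to_balanced_decimal(95))  # [5, -1, 1]: 1*100 + (-1)*10 + 5 = 95

One nice property of the balanced form is that negating a number just means negating each digit, and rounding to the nearest power of ten is simple truncation.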
> As far as alphabets go, the Phoenician/Greek/Etruscan/Latin alphabet is pretty ad hoc and mediocre. But hey, it’s what we know. At this point, I think we’re stuck with it.
You mix the whole development line of that Latin alphabet into one dismissive argument. I see lots of difference between the Phoenician and the Latin alphabet and FWIW, the Latin alphabet is quite versatile as its wide application shows.
I wonder: what do you consider mediocre about them?
> Similar story for modern Hindu/Arabic/European numeral glyphs. Learning arithmetic would be noticeably simpler if the glyphs expressed some of the symmetries of the number system. Alas.
I don't think learning arithmetic would be much simpler with other numerals. Even the Romans could do it and they had one of the worst possible numerical systems.
I find our numerals quite fine. My daughter was recognizing numbers before she turned 2. There is some mnemonic to the first four (1 line, 2 corners on the left, 3 corners on the left, 4 corners overall) and most are quite distinct from our Latin letters. 6 and 9 are annoyingly symmetrical of each other, though.
Writing a less dismissive / more serious argument about the Latin alphabet would take a few hundred pages. You’re right though, I’m not a speaker of (or expert in) ancient Phoenician, perhaps their alphabet was a bit better structured for that language (it looks pretty ad hoc though). I can primarily speak to the Latin alphabet’s irregularity and mediocrity for representing modern English/Spanish/etc., though it doesn’t seem to have been much better for Greek or Latin. Obviously it works well enough to be the practical anchor for written culture, and I can certainly imagine worse systems (little Egyptian-style pictographs for letters for example). But it’s hardly elegant or systematic. The ordering of the letters is also pretty much arbitrary, and has nothing to do with the separation between consonants and vowels, or the relationship between particular sounds.
For an example of a better designed alphabet, check out Korean Hangul.
* * *
The numerals 1, 2, 3 come from just writing strokes, like tally marks, which over time became connected in handwriting. The other numbers were mostly fairly arbitrary symbols, which morphed slowly over time with occasional replacements and swaps. Otherwise, the symbols have absolutely nothing to do with the numbers they represent or with the base ten number system. Overall, I’d say numbers 0 and 1 are pretty effective. The rest are a huge waste of potential.
Same story for the words/names used to represent the numbers. They are made of arbitrary sounds in arbitrary numbers of syllables, reveal nothing about the theoretical properties of the numbers, some of them are hard to say or easy to mistake, etc. Especially for numbers beyond ten, the names are irregular and confusing. This has a real practical impact. Counting is notably easier for Chinese speaking children than for English speakers.
> I don't think learning arithmetic would be much simpler with other numerals. Even the Romans could do it and they had one of the worst possible numerical systems.
In general, Romans did their arithmetic using little pebbles ("calculus") on a counting board ("abacus"), and used written symbols only for recording the output of their calculations. This made some types of computation very difficult (because using pebbles to record every step gets cumbersome), which helps explain why science took off in Europe over the past 500 years, after we started developing better notational conventions and using Hindu–Arabic numerals and, later, decimal fractions, logarithms, etc.
My son is about 2 weeks old, so I can’t tell you yet how well he learns arithmetic using a different set of numerals. Ask me again in about 10 years.
By that measure we should forget about historical languages and learn something constructed, like Esperanto.
Languages are not solely a means of communication but a part of a people's cultural identity. I think the greater dependence on contextual cues and ambiguity in Chinese/Japanese lends itself much better to linguistic art forms like poetry and literature.
I think the debate is more Logographic vs. Alphabet, rather than Logographic vs. the Latin Alphabet.
There are pros and cons. A big con with alphabets is that words lose their meaning over time. I find reading Old English (1500 years old) to be less comprehensible than "modern" Latin, despite being a native English speaker and only knowing a little Latin.
I find reading even Early Modern English (400 years old) an effort initially before I get reacquainted with it (Shakespeare).
In 300 years' time I hate to think what English speakers will think of our texts.
That said, if I had to choose another language to learn, it would be one with an Alphabet, which seems far easier to me to learn, and type, than memorizing 1000s of symbols.
What is the advantage of using a different symbol for each word, that offsets the huge disadvantages of having to learn and remember a different symbol for each word?
Especially considering that the spoken language already distinguishes between all possible words through pronunciation (and context in the case of homophones.)
It's hard to explain. In English, spelling, pronunciation, and meaning are all more or less interrelated, right? In Japanese, writing (kanji) correlates to pronunciation and to meaning, but pronunciation and meaning are mostly unrelated to each other. Kanji is what disambiguates them.
So, obviously learning 1000 kanji isn't easy. But doing that is what makes it possible to learn 100,000+ words whose pronunciations and meanings would be otherwise largely unrelated.
It's quite similar to the role that Latin/Greek roots play in English. When you see a word that includes "-graph-" you know it probably involves writing, and similarly when a student of Japanese sees a word with "間 (kan)" they know it involves an interval or space. Throw away the kanji, and your student now just sees "kan" - which means the word will probably involve an interval -- or a barrier, or emotion, or appearance, or a tube, or a building, a warship, a crown, an ending, China, a publication, a government ministry, or.. you get the idea.
A lot of people think that, and personally, as someone fluent in Japanese (as a second, well, more like fourth, language), I also sort of feel the same way. However, if you look at it without the learned biases, there is a great example of a country with a fairly similar language in terms of grammar and sounds that used to use Chinese characters, switched to a phonetic alphabet, and is not noticeably worse off for it: Korea.
There are way too many homophones, and you don't always have the luxury of context. Learning a symbol for each root (not word!) is not that bad; English spelling is almost as bad, actually.
Spoken language is quite limited compared to written Japanese.
Do audiobooks exist in Japanese? Assuming yes, do their users have significant problems understanding the text when it is read aloud? Are there well-known conventions or shortcuts or explanations that audiobook readers insert into their speech to signal the correct meaning of a word?
Do Japanese audiobooks provide evidence for or against the idea that doing away with kanji in writing would not harm understanding significantly?
Fiction audiobooks do exist (although not nearly as common as in English-speaking countries), but audiobooks can't possibly work for non-fiction and especially technical texts, unless you are going to use English words for literally every single term. I mean, Japanese has only about 100 morae, and far too many words are just 2-3 morae long.
It doesn't work due to the insane number of homophones. When you are speaking with someone you have context and can discern the meaning of what is said. But random words or texts can change their meaning depending on which character is used. And a tone system doesn't help either, as can be seen with Chinese pinyin.
Why do you have less context in writing than in speech?
I'd be willing to bet heavily that the vast majority of those "homophones" are primarily writing-only, domain specific or archaic "shorthands", which are referred to in speech with slightly more verbose alternatives. Switching to a non-character based system would admittedly in that case mean some domain specific writing would be slightly less compact, but that seems a reasonable tradeoff given the unwieldiness of the current writing system.
You'd lose your bet. In that "shuu" link as an example, most (10-12 or so) are common enough that you might hear them in a typical newscast, with that pronunciation.
What makes things manageable is the combinatorics. E.g. there are dozens of kanji read "shuu", and many dozens more read "kan", but most of them are only read that way when part of a 2-character compound, and only a small subset of the possible "shuukan"s are words, and only a subset of those words are common in spoken conversation.
Even then, it is a very homophone-heavy language. I can think of four "shuukan"s off the top of my head that you might hear from a newsreader; it would only be after those that you'd get into domain-specific words. This is pretty typical.
It's not that you have less context, as much as you /need/ less context. Instead of a few extra words to describe something, you get a different character.
In the above example all three are read as おば (pronounced: oba). When spoken you still need to differentiate, but it'd either be obvious from context or you'd just explain it manually.
Because you are going to select different phrases and words while speaking than you are when writing. Even with context clues from the conversation, it can at times be confusing, so you have to explain what you meant. Usually it's verbal, sometimes it's 空書 (sky writing). To avoid having to do this frequently, people will often adopt a subset of the language that is less prone to confusing homophones for their vocal communication.
It creates a situation where you have people who have wildly different voices in their writing than they do in their everyday speaking, which is an interesting phenomenon. (To me, at least)
Korean has just as many homophones and used to be written with characters like Japanese. Now they are doing just fine with their phonetic writing system.
Korean didn't work at all under the imported Chinese system. Japanese also had problems, but they solved them by inventing hiragana, a syllabary. Both languages chose different systems and both work fine.
Also, Korean avoids many homophones thanks to its 10 vowels. Japanese has 5.
In addition, Korean spelling is heavily morphophonemic, which is a fancy way of saying that words are written based on their "base form" even when the actual sound is different due to interaction with grammatical suffixes.
A bit like English "packed" being written with "-ed" even if it sounds identical to "pact". Helps disambiguation.
(Actually, come to think of it, it's rather analogous to the Japanese way of maintaining the same Kanji while the suffix changes.)
Koreans did have an old writing system[1] made of Chinese characters, where some were used for meaning and others were used to denote Korean suffixes with a similar sound (kinda like how Hiragana started out, I guess). But it eventually died out.
There's a tongue twister in Chinese with all characters pronounced "shi".[1][2] Hmm, I don't know Japanese grammar, but if those characters are Chinese ones, you can almost write something like that with these characters. 秋收 and 修習 are legitimate words for starters.
All characters are pronounced "shi" in Mandarin, with 4 different tones, leaving 4 distinct pronunciations.
In other "dialects", such as Cantonese or Teochew, the characters are pronounced as 7 or so distinct syllables, with 6 different tones, leaving more than 20 distinct pronunciations.
Mandarin has very few available syllables compared to other languages (not only, say, English, but also older Chinese "dialects").
Homophones aren't a problem. Just use a silent radical in front of kana sequences. Swapping 2000 kanji for about 200 radicals is a good enough "90% solution". Some possible examples from the first page of results from your link:
I suppose everyone here is already on a list anyway. Everyone who leaves a criticism for the NSA. Everyone who up votes this conversation and other similar ones. Everyone who contributes to OSS Privacy. Everyone who misspells Thor.
How useful can this really be when there's so much noise?
The rating methodology is absurd. Just looked at the Science and Technology ranking:
- The rating includes Nobel prizes, which span more than a hundred years.
- Ukraine, Bulgaria and Bosnia are above the US. I don't have anything against those countries at all, but it seems strange.
- Belarus is at 36 and Egypt at 45, while JAPAN is at 46! It seems the number of international students skews the data way too much (and doesn't seem that relevant for a science and tech rating).
In order for it to be discrimination, it has to be 'unjust' - There is nothing unjust about having larger people pay more because it costs a business more to fly them. If anything, it is unjust that smaller people have to subsidize the cost of others.
You're allowed to discriminate against even "protected classes" of people if it actually materially affects their performance. If a woman is actually pregnant and can't lift heavy things, and that's part of the job requirement, then you're allowed to not hire her. If a heavy passenger is actually costing you more money for fuel, you'd be allowed to charge that person more for fuel. This is moot though, since height and weight are not protected categories and you can already discriminate against them all you want!
You could make a fair argument along that line if you measured the cost. Just having the baby means a few weeks off at most, since there's nothing gender-specific about childcare. The average person has about two babies, which is a couple of months out of decades of work, so you could reasonably argue that women should make about a quarter of a percent less than men.
My point is, don't bring up a flawed argument that's used to support unfair discrimination to attack another argument. That's nothing more than a strawman.
I don't see how this analogy fits, or what you mean about society, but there's a slight bit of difference between charging someone for direct costs, and jumping to conclusions based on fear, stereotypes, or gender.
There's also a difference between charging someone for actual expenses, and charging someone for presumed future expenses. And there's a difference between someone being charged for what they use, and someone being paid, or not, for purely indirect costs not related to their performance while on the job.
Now, if a 120 lb woman was charged twice as much as a 120 lb man to fly, your analogy might be more applicable.
Paid maternity leave is an actual expense, if it happens. But you were talking about salary. And it is unfair and discriminatory when equally capable women are paid less than men, regardless of the reason it happens, and there are many other reasons than the probability of a woman having a child.
While those facts are true, they don't improve the quality of your analogy; the situation you're bringing up does not stack up the same way as (theoretically) charging someone per pound to fly on a plane. There might be reasons that charging by weight is discriminatory, but you're not convincing me.
> There might be reasons that charging by weight is discriminatory
Charging by X is always discriminatory (on the basis of X), the questions are whether it is morally or legally acceptable discrimination, not whether it is discrimination at all.
That's the whole point of discrimination laws. I don't know if you ever checked, but people aren't kidding when they say non-Christians, non-whites, ... are more criminal and less smart and ..., with the exception of one or two Asian ethnicities. In theory saying this by itself is not discriminatory, only acting upon it is, but I doubt most people will agree. And you're going to find actual statistics supporting it, yet any conclusions obviously are discriminatory.
What's worse is this. Suppose you have two groups, each with a normally distributed variable (like weight, height, money, likelihood of criminal intent, ...), and let's say a 1% difference in the peak. (The peak is what everyone will always report; the standard deviations are also always different, but nobody ever reports them. So when people say Asians are 1% shorter than Caucasians (the real figure is about 6%), this is what they mean.) To make things simple, let's say
group A ~ N(100, 10)
group B ~ N(101, 10)
(N(100, 10) is like the "standard" normal distribution for things without reasonable units)
What do you see in practice? Suppose you meet someone (randomly) from group A and someone (again, uniformly at random) from group B. What are the chances the member from group B is heavier/taller/richer/more criminal/... than the member of group A?
P(X > Y | X ~ N(101, 10), Y ~ N(100, 10)) ≈ 59%
So in our example, if you meet a Caucasian and an Asian, and there's a 1% height difference between the groups, the chance is about 60% that the Caucasian is taller than the Asian. If you take the real figure, a 6% difference, the chance becomes 91%.
This is the problem that causes racism. Tiny differences in a normally distributed variable make a large difference in actual encounters.
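If you want to sanity-check those figures, here's a quick sketch (mine, not from the comment). Reading N(100, 10) in the usual N(mean, variance) sense is what reproduces the 59% and 91% numbers:

    import math
    import random

    def p_x_greater_y(mean_x, mean_y, var_x, var_y):
        # For independent normals, X - Y ~ N(mean_x - mean_y, var_x + var_y),
        # so P(X > Y) = Phi((mean_x - mean_y) / sqrt(var_x + var_y)).
        z = (mean_x - mean_y) / math.sqrt(var_x + var_y)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    print(p_x_greater_y(101, 100, 10, 10))  # ~0.59, the "1% difference" case
    print(p_x_greater_y(106, 100, 10, 10))  # ~0.91, the "6% difference" case

    # Monte Carlo sanity check of the first case
    sigma = math.sqrt(10)
    trials = 100_000
    wins = sum(random.gauss(101, sigma) > random.gauss(100, sigma) for _ in range(trials))
    print(wins / trials)  # close to 0.59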
Is there anything wrong with discrimination against tall people? It has nothing to do with their age, race, gender or sexual preference, so I think height and weight may actually be fair game.
I hate to be that guy but should we be charging people more (effectively punishing them) for something outside of their control? Sure it may be "fair" in terms of economics, but it's a slippery ethical and moral slope if I've ever seen one.
Yes, they are. I got my drivers license and first car when I was 22. Length of driving history was not a question when I got insurance, and I got charged the same as 22-year-old males who had been driving since they were 16.
True, but I do think experience has a lot to do with the equation there. Pretty much all insurance is based on the chance of payout, but I don't know if that should apply to the prices of things like physical goods or, in this case, air travel.
It's dangerously close to an argument such as "People of color have to pay more or can't use this service at all" or "Blonde hair blue eyes" you know?
What's wrong with with discrimination based on eye color? Shouldn't anyone with blue eyes be killed at birth? That's not on the list you mentioned, so it's fair game, right?
Discrimination based on eye color is completely fair! Blue eyed people have more trouble in bright sunlight, so you can use that as a tiny mark against them in relevant jobs.
I don't know how you confused discrimination with murder, but you should probably double check a dictionary.
I think killing babies is a big jump from making heavier passengers pay more. The airlines already consider weight in their cost when you pay for baggage.
I really don't understand the electorate in Spain. Just look at how the two main parties took 75% of the vote in 2011 (down from 83%). And it doesn't look like it's going to change.
Compare that to Greece (Pasok went from 40% to 8%; Syriza, nonexistent 8 years ago, almost won) or Italy (with the rise of Grillo's Movimento 5 Stelle).
Does Spain need a new party with a powerful personality at the helm, or has it, after all, become a de facto two-party democracy?
Things are definitely different over here: the laws are very unfavorable to new organizations, but we are on the way to achieving the same results by the end of 2015. We started much later and we have avoided populism in every way; we don't want someone like Beppe Grillo leading us, because we are organized as a network without leaders. Even so, they are a great example that well-organized citizens can achieve great things outside the traditional parties.
Also, since 2008 the main parties in Spain have already lost over 10 million votes (out of 35 million total), so we know we are in the right place at the right moment.
TV is still the main source of information for people, and it's completely dominated by the main parties. However, on social networks we are already winning: in just one year we already have more followers than the main parties, and people talk about us.
We will have to wait until December 2015 to show this change in an election.
The problem is that by the end of 2015 the situation in Spain could improve (I really don't see that happening, and if it does, it will be because the improvement in Europe is dragging Spain along).
Also in the elections of 2011 the main parties lost almost 4 million votes (from 21.5 to 17.8 million).
http://resultados.elpais.com/elecciones/generales.html
But it's true that the projections seem to show that trend; still, both parties get 60%+ of the vote.
https://en.wikipedia.org/wiki/Opinion_polling_for_the_next_S...
> TV is still the main source of information for people, and it's completely dominated by the main parties. However, on social networks we are already winning: in just one year we already have more followers than the main parties, and people talk about us.
Social networks are used mainly by young people, and Spain is an old society.
>(...) because we are organized in network without leaders.
I have yet to see a country without leaders, or a large, well-organized group of anything without leaders. It's really naive.
Nonetheless, Spain needs people who start changing things (like your group), and everything could change by December 2015, but I'm not optimistic.