
To a computer science teacher, this feels like trying to walk up a down escalator.

First-year CS students (middle school or high school) with experience using Apple products don't have a concept of a filesystem. The shift to auto-saving documents also corrodes the intuition that files get stored on disk in some non-magical way. In the same way that it's now easier to teach networking and graph theory because youth experience identity and relationships this way, it's now a lot harder to teach lower-level abstractions. In my experience, this has changed over the last five years.




I have a very similar anecdote. I had a long conversation with a friend who is a high school science teacher. She told me that computer literacy has plummeted in the last ten years.

She has her students go into the field and collect data in the form of photos, spreadsheets, and typed reports. She wants students to bundle these files in zip files, and email them to her.

Students have no idea how to do this. Some of them struggle with emails. She has had to make her tutorials on this longer and longer. And she has also been forced to start accepting assignments as a pile of attachments, because so few students understand zip files.
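
For scale, the whole task she's asking for is a few lines of Python's standard library. A rough sketch (the folder name, addresses, and SMTP server are all made-up placeholders):

    import smtplib
    import zipfile
    from email.message import EmailMessage
    from pathlib import Path

    # Bundle every file in the field-data folder into one zip archive.
    with zipfile.ZipFile("assignment.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path("field_data").iterdir():
            if path.is_file():
                zf.write(path, arcname=path.name)

    # Attach the archive to an email and send it.
    msg = EmailMessage()
    msg["Subject"] = "Field assignment"
    msg["From"] = "student@example.com"
    msg["To"] = "teacher@example.com"
    msg.add_attachment(Path("assignment.zip").read_bytes(),
                       maintype="application", subtype="zip",
                       filename="assignment.zip")
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("student@example.com", "app-password")  # placeholder credentials
        server.send_message(msg)

Nobody expects students to script it, of course; the point is how little is actually going on underneath the workflow they can't perform.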


I'm curious if this is actually computer literacy plummeting or if it's a wider population of students being expected to understand computers.

When I was in high school the vast majority of my peers understood how to bundle up a bunch of photos, documents, etc. into a zip file and e-mail them. This is because my peers were nerds, and we were excited about this new computer technology and eager to learn everything we could about it. There were no computer classes in high school. When we had high school science class, we were expected to draw pictures of our experiments (y'know, with a pencil, on paper) and fill out the lab report longhand. There was a large segment of the student body who had no idea what a file was, or how to turn a picture into one, let alone how to bundle it into a zip file.

It was a bit different in college, where we were expected to understand Excel and how to run a regression or graph some data. But then, that drew from a very different population than high school.

It's sorta like how SAT scores are dropping not because kids are getting stupider, but because a larger proportion of people who would never have been considered college-bound a generation ago are now expected to take the test.


I think vastly more people are reading and writing due to modern computer literacy, and I can't stand the way they are changing the meaning of words because they don't read books. It appears vocabulary is increasing, but people get their vocabulary from inadequate online dictionary entries and/or peers that don't read books either. So to me, because I grew up reading books, it's incredibly aggravating because it seems like a new and inferior language that grew out of ignorance.


I love the renaissance of language, the minting of new words and concepts, the remixing of our heritage words with new Unicode characters and the mashing of new syntax.

I see it as progress, which destroys meanings as much as it liberates new ones. A word becomes a poetic palimpsest of needlessly complex layers.

I am an engineer-type, I like concrete words with focused meanings, I failed English at school while excelling at the hard sciences. The loss of meaning disturbs me, but I value the gains, and I accept that new language requires freedom from old dictators.


There's definitely a mix of good and bad as language changes.

Losing the word "meme" to the colloquial definition is a loss and a massive disappointment to me, because I think the gene analogy is insightful. When I talk to people about memes-as-in-memetics they're bound to get confused or even disbelieve me that it had any other meaning.

I've been accused of man-splaining in situations where I am legitimately being helpful, am an authority on the subject, and have good intentions, so I kinda don't like that word's contribution, personally.

Literally should mean literally, and only literally, because it's useless if it doesn't.

On the other hand, there are all sorts of weird and redundant words in English that probably shouldn't exist. This probably isn't true, but the word "praxis" seems like it exists because someone important misspoke when trying to say "practice" and then insisted it was a word in itself.

My impression is that the effect the parent talks about is real, though. There is a lot of regression happening in the language even as people get more literate, and the echo chamber of the internet reinforces trends into more lasting changes.

My biggest gripe of all with anyone effecting change on language lies with the marketing-focused actors who think they're doing their job well when they:

1) Appropriate colloquialisms and push them into the mainstream where they have more staying power than they otherwise would.

2) Refuse to expose people to words they're not familiar with because it might not appeal to them. Ever consider some people might want to learn new words? That starts with exposure. Lowest common denominator advertising is a well-funded stupefying force.

3) Create new words and change meanings to suit their narrative/sales pitch. A recent example is an article CNN ran about a new alternative to vapes and cigarettes. What they described is actually just a different kind of vaporizer. It emits nicotine vapor. They're calling it something different to avoid the stigma that shallow media coverage of vaping created, and making the word more ambiguous and less useful as a result.


> Losing the word "meme" to the colloquial definition is a loss and a massive disappointment to me

I love "The Selfish Gene", but it make me happy that memes and virality are modern metaphors for chunks of thought transmitted by wet symbolic processors. I believe that children now bellyfeel the meanings which is fantastic, even if they don't know the exact sense.

> Literally should mean literally, and only literally

I think I used to agree, but now I just like to amuse myself (and sometimes others) by misusing it. I do the same with many other words (sophisticated, nice, proud, etc). I like the layers of meaning, and I love multiple connections, although I still don't like most poetry.

> 3) Create new words and change meanings to suit their narrative/sales pitch

This is a true evil: the defiling of words for commercial gain riles me.


The Selfish Gene is a great book. I read it when I was about 15 and it really resonated with me.

Except the last chapter about f$%#ing memes. Hated it. It was like taking the core idea of the book and providing a cumbersome over-extended metaphor about ideas. So the colloquial meme is just as bad because it constantly reminds me of the original shit-house idea.

Climbing Mount Improbable was good, but I can't rate Dawkins' later work. The Extended Phenotype I found impenetrable, and I've tried reading it a few times. The God stuff ... omg, just please stop shouting about your Daddy issues.


I think that memes are actually driving a lot of human activity.

When you read about how selfish genes work, and then relate that to what is happening with certain "meme" concepts in the world, it makes sense to me.

The Extended Phenotype is really hard to grok, but I still love that book. I'm not sure I've ever been able to apply the idea.

I completely ignored any of his God bothering stuff - to me it seemed he was pushing his own religion (note: religions are a type of meme too).


Praxis has a specific meaning and is a classical Greek concept.


Thanks, I'll look into this and find better examples. :)


I blame modern business marketing practices for the erosion of meaning in language. Words are so overloaded and defined in ways to confuse consumers and skirt legalities that many terms are devoid of any useful meaning.


Latin was invented for a reason. Common language became vulgar.


Er, the Romans spoke Latin, as well as read it and wrote it. It was their vulgar language (from "vulgus", Latin for "common people"). The first Latin Bible was called The Vulgate because it was written in then common tongue of Latin instead of Greek or Hebrew. Latin was the language of commerce and conquest for the Roman Empire, as English was for the British Empire. Lastly, every language is invented, no?


Vulgar Latin vs Classical Latin, I suppose.

>common people

That's the definition of vulgar that I meant, not the modern vulgar definition of vulgar :)

Just that scientists and philosophers wanted an unambiguous form of communication that wasn't subject to the whims of popular culture.


What, precisely, makes it inferior?


Speaking of precision: I think a big problem with the modern use of English is that it has reduced precision.

What reduces precision?

1. Increased reliance on idioms to communicate. Idioms use analogies, and usually reduce precision. Our increasingly shared cultural context via the Internet means that more and more idioms are entering common use and (IMO) reducing the clarity of communication.

2. Reduced general vocabulary. We have more jargon than ever before, but regular writing is simpler than ever. Generally speaking, a reduced vocabulary means more concepts are "mapped" onto the same words (like "great," a highly over-mapped word.)

3. Irony and sarcasm are abused as argumentation tools. Mocking a point is commonly accepted as arguing against it. I believe this stems from acceptance of irony as a normal way of communicating (as opposed to using it mainly for humor). This also results in many words having two opposite meanings, a "true" one and an ironic one. (ofc this happened in the past too, see history of the word "egregious.")

4. Ease of publishing. People put less effort into writing today because the barriers are lower. Yes, there are still authors who put a lot of thought into their work, but the Internet is awash in poorly-chosen words and ill-thought-out language that frankly didn't exist in the past. Poor language has poor meaning, and this actually damages the trust people have with the written word.

Of course, these are just my personal thoughts... I don't have strong problems with modern language. I think your question about "inferior" is a good one, because superior vs. inferior doesn't make sense without metrics for comparison. If the metric is personal appreciation, then superior vs. inferior is just subjective. If the metric is density of information transfer, I suspect we've improved.


Many of your complaints are based on an incorrect assumption: That what people do is writing.

Of course it is writing in the sense of using the alphabet. But has everyone started to write books and essays? No. In fact, people write less than they used to because letters have been mostly replaced by other methods of communication.

What you think is horrible written English is actually normal spoken English, just output through letters rather than sounds. And spoken language doesn't look anything like well-formed grammatical writing that doesn't give your English teacher a heart attack.

> 1/2

I'm getting strong Sapir–Whorf vibes here. Language adapts to what people say, not vice versa.

> Increased reliance on idioms to communicate. Idioms use analogies

No, they don't. Idioms have a meaning that is divorced from their constituent parts. They have a similar role to words.

> Reduced general vocabulary. We have more jargon than ever before, but regular writing is simpler than ever. Generally speaking, a reduced vocabulary means more concepts are "mapped" onto the same words (like "great," a highly over-mapped word.)

That is simply not how language works.

> 3

I, too, am concerned about the recent invention of satire and polemics. I hereby Modestly Propose to eat everyone that uses these diabolical stylistic devices and feed the leftovers to the Irish.


I frequently write to my grandma, who is fairly well-educated, and her writing is technically good. It is, however, bad writing. It's a pattern I've noticed quite a lot with older people. They write perfectly-formed garbage. I think the amount of time people spend writing today is greater than at any time in history, and I think it shows.

That's my intuition. But I'd also note, your points (1 and 2, for example) contradict each other. You complain that writing is simple (2), but unclear (1); idiomatic, but jargon-laced.

My thinking is that modern writing is a sort of colourful digression from the Elements of Style. Irony, simplicity, and idiom seem like good ingredients for playful, powerful writing.


Thanks, I appreciate your perspective on that.

I don't believe that my points contradict each other. So I'm going to address that part of your post, to avoid any confusion. But I think the rest of your argument is great, I just want to explain myself a bit better.

For starters, simpler writing can indeed be more unclear (because it's less precise, as discussed). So "simpler, but unclear" isn't contradictory in that sense. In this case the lack of clarity comes from a mismatch in the shared understanding between the writer and the reader. The more "simple" a sentence, the easier it is to misunderstand (to a point, and with exceptions).

And with respect to idioms, I would say that "uses idioms" does not convey the same idea as "idiomatic." By "idioms" I refer to reasoning by a shared analogy. A random example is the idiom, "Like water off a duck's back." It conveys meaning by analogy. This is usually less precise than using a direct description ("He didn't seem to care at all.") Sort of a dumb example, but I hope it makes sense.

Finally, about jargon -- by definition jargon is not part of shared vocabulary outside of a certain field of expertise. I believe a lot of our most precise words, which previously were common usage, are now becoming jargon. This is fine for experts but it leaves non-experts with a paucity of words to express their ideas in novel domains.

(Once again, this is just my personal impressions, not based on studies or anything. So, don't take it too seriously!)


I see where you're coming from, but I think one way in which we diverge is in our ideas about what is valuable in language. I quite like inaccurate, loose formulations because they're playful - and I think writing is a form of play, as much as it's a tool to communicate with. In its place, evocative vagueness can be better than sharp clarity.

My feeling as to what distinguishes good writing from bad is more that good writers say what they want. Bad writers say stuff they didn't mean to, say stuff that other people want (cliches), or say nothing at all (obscurity). So in my eyes, there's no distinguishing textual characteristic of good writing. For instance, while I generally agree with you on jargon, David Foster Wallace uses jargon in a way I really like, and think is pretty central to what makes his writing good.


Genre is important here. One's choice of language when playfully communicating with friends will be different than the language used to communicate in a legal brief or a scientific paper. We adapt our language to our audience and the occasion to communicate our ideas. Being thoughtful and considerate to our audience is important on all occasions.


Are you sure it's not just generational misunderstanding?

A lot of people describe dialogue from early movies in a similar way, just because it's difficult to understand. Especially the early noir stuff. They're speaking perfect English, but it might as well be a foreign language to most English speakers today.


Pretty sure. I read a lot, a lot of which is old. I enjoy a wide variety of writing styles.


Yes, but reading old books doesn't usually capture the old vernacular well. Even when written (professionally) with that intent (Of Mice and Men, Flowers for Algernon), movies and audio were generally more accurate (even if exaggerated), in that they're not subject to interpretation.

The writing of an un-edited grandparent might be a more accurate account of this.

Or not.


You might be right. I find a lot of old vernacular pretty objectionable on stylistic grounds - if I was going to describe it, I'd call it 'semi-colon heavy'. Lots of big, chunky sentences with curlicues. It's a hard style to like. Then again, Moby Dick is written in that style, and it's amazing. I think it's like beef wellington or lobster thermidor. It's amazing if it's done really well, but a mediocre one is horrible.


Claims like this have been made from older generations to the younger and about every language since recording began. They were once made in Latin about a degenerate bastardization that eventually became French. Naturally, the French have in turn become rather protective.

Speakers are entitled to "abuse" their language if they want to. It is theirs.


Of course. It's only natural for things to change, and it's only natural for people to talk about the changes.

I believe our language is almost like a living thing that we are stewards of. One of the ways we can help steward modern English is to discuss the good and the bad parts, and try to move the language toward the good, even just in our personal speech.

So don't take my comment as a criticism of the younger generation. It's a criticism of myself and my generation. I'm just talking about the language that's floating around all of us, right now, that we are all building together. And I want to make that language as "good" as it can be.


Not the OP, but I'll take a crack: a divergence from the accepted usage of words makes comprehension difficult and can cause miscommunications, especially where precision is required, which happens often in a business environment. Even a misunderstanding of proper usage can severely change the intent behind the messaging, if most communication happens over text (Slack/email, etc.).

That being said... if I'm reading this correctly, it's specific to communication in non-business settings, where I don't really care much about what's "proper".

This is a losing battle, btw. Every society seems to struggle with the problem of keeping purity in language, and all of them seem to have failed in that effort. Europeans failed to keep Latin relevant. Indians failed to keep Sanskrit relevant. The common language that is well understood by a majority will always win. And trying to keep it pure by force just makes it die in relevance.

Which is why the fact that the Oxford English Dictionary keeps adding new words is one of the best indicators of the resilience of the English language.


I don't think impurity is what I'm complaining about. What I preferred was when most of the people defining written English by their usage were more of an elite. It's not about who those people were or how exactly they used language, just that they read a broad variety of published material and not just contemporary communications.


> Europeans failed to keep Latin relevant. Indians failed to keep Sanskrit relevant.

They kept the languages for thousands of years. Ancient Egyptians kept hieroglyphics going even longer. We are able to maintain languages without any problems, if we just choose to.


How much time have we all wasted reading threads that consisted mostly of folks arguing past one another, because each participant misunderstood what the other was trying to say?

I used Usenet during the 1990s, and communication was much easier, because grammar usage was much more uniform.(1)(2)

(1) Changes in the rules of grammar aren't the primary problem, IMHO. The problem is folks who don't follow any rules at all; rather, they simply string together word sequences that sound familiar.

(2) I sympathize with young folks today who don't have a good grasp of grammar. Most of my education came from reading newspapers and magazines, which still employed expert editors when I was growing up.


The Usenet population was more uniform. People argue past each other because they don't agree on the premises, not because language has become less clear.


When people look up and misunderstand the meaning of a word, they base their interpretation on what they already know. So they are likely not to absorb a new concept but only get a new way of saying an old one. That means that the redefined word is likely to be an inferior tool on average.

It's also the case that just losing the utility of something hurts, regardless of the merits of the replacement, so it's subjectively inferior to someone like me, getting older and crankier, you know.


> I'm curious if this is actually computer literacy plummeting or if it's a wider population of students being expected to understand computers.

I don't think that's the reason. I studied with non-nerds, and they had to deal with computers because they wanted music, or to read an article on the internet, or to print something with Word on Windows and install the printer drivers.

What has changed now is that things are more "streamlined" and all the complexity is abstracted away. Software companies have figured out that people struggled with files, so they removed them. This can make the average person less computer-literate, since they don't have to learn much to be able to handle the computer.


Even fairly computer-savvy younger kids don't really use e-mail. I had 3 HS students in a D&D group a few years back (now they are finishing college), and they were fairly tech savvy (2 of them went on to major in CS). I set up a listserv for our group to coordinate scheduling &c. The response was "I don't really check my e-mail, can we use FB Messenger?"


I somewhat understand this gripe against email, but the issue I have with everyone who hates email is that you then diverge into about a dozen other communication mediums no one can settle on, making the overhead of checking all points of contact and remembering where the relevant information lives a chore.

Because of that, I prefer email. I'm more than happy to pick up new technology and use it, but I have no desire to do so if it's going to take more time to do something I already have little desire to do.

My current location literally has 8 communication mediums and it's miserable to deal with. Then I send emails and people complain... it's like people enjoy wasting their time.


I think part of it is that mobile OSes have trained people to treat their computers like appliances. The way you do something is by getting an app that is tailor-made to do that thing. There is no such thing as data that can then be acted upon by different apps and services (i.e. files). What you have instead is Instagram for taking pictures and sharing them with filters, Pinterest for sharing random places on the internet, Notes for keeping documents, etc.

I think one striking manifestation of the loss of the concept of files is that if someone wants to share a Note from their iOS device, the most common way for them to do it is to take a screenshot of it and share that image.

Since conceptually a note is not a thing of its own, but rather a part of the Notes app, people rightly believe you cannot share it on its own, and so need to go with the catch-all screenshot fallback.


I am in my early twenties and I would expect none of my peers during high school would have difficulty zipping up some files.

Perhaps I am just in that brief period of time where more people than just the nerds knew the basics of the filesystem.


A few years older, same. But yes, I think that time when nearly everyone was a PC user lasted only a few years. (And I suspect exactly when that was depends on the country.)


There are several layers to this problem. Being able to zip files like a pro is great, but maybe those people shouldn't be doing it when they could use bzip2, gzip, or xz instead, which compress better.


99% of the time, who cares about the performance of the program? If I need to create a compressed archive of a bunch of files for some one-off purpose, I'll use whatever is most obvious and easiest on whatever system I'm using.


Eh, as a college professor, I can say something has changed, not just the population. My experience parallels the GP's. I noticed it first with filename conventions; it rapidly changed from there.

This is in the same population of students. I noticed basically that IO tasks were eating up an increasingly big fraction of effort and time early on in courses, reflecting problems that didn't exist previously.

This seems to be for the reasons stated, that file IO stuff is being abstracted away. What used to be typical necessary computer fluency is now "low level" for many.


> it's a wider population of students being expected to understand computers

There it is.

It isn't just education, obviously. All software has to be subjugated to an ease-of-use that buries details.


I might suggest that your high school teacher friend is instead falling out of computer literacy, and that high school students are moving away from her form of a computer-based workflow.

I very rarely send emails at all outside of work. Maybe one or two support emails. I also fairly rarely create zip files either. Of course, I know how to do these things, but I don't really do it anymore. I haven't needed to, for the same reason that I don't own a printer.

I haven't needed to "compress" anything since Google Drive and Dropbox meant that all I needed to do to get someone a file is type their email into a box.

Perhaps instead of email, your teacher friend could find out how her students send things to each other and meet them halfway? Set up a Google Drive or Dropbox or whatever Microsoft or Apple are doing, instead of insisting on whatever's most comfortable for you.

In short, I don't think this is an example of students falling behind, but teachers getting left behind.

It's a little rude to say it that way, but I really don't know how else to say it.


> I might suggest that your high school teacher friend is instead falling out of computer literacy, and that high school students are moving away from her form of a computer-based workflow.

I suppose, in the same way that people using crayons are moving away from the printed word.

> Perhaps instead of email, your teacher friend could find out how her students send things to each other and meet them halfway? Set up a Google Drive or Dropbox or whatever Microsoft or Apple are doing, instead of insisting on whatever's most comfortable for you.

Isn't the whole point of school to learn new skills? If students have illegible handwriting and poor grammar, should the teacher really be meeting the students half way?

I actually told her that she should continue doing what she was doing, because it was a real boon to her students. They might not ever learn this stuff.

Her students are objectively less able to share information with each other. I have certainly seen kids who have to go as far as taking screenshots of what an app contains and sending those to someone, rather than being able to export the text.

I could understand if what the students were doing was in some way comparable in flexibility, but it isn't. They basically only know how to click the share button in an app and text links for someone else to install the app, or send screenshots of the app scrolled to the relevant info.

But the biggest benefit is that having assignments in discrete organized bundles makes the most sense for everyone.


The problem I see is that tech is doing the market-savvy thing of moving from features to solutions. We had enabling technologies that people were expected to learn to use in combination, like tools in a toolbox. Files were generic containers that could hold different things and clustered into projects in "folders" and manipulated by specialty tools.

But the new way is exemplified by iOS. You buy pre-packaged "solutions" that keep your data...somewhere...safe, presumably, and away from you so you don't screw it up. If it's something you should be doing, there will be "an app for that" that you can buy, so don't worry your pretty head. If we gave you "tools", you'd just hurt yourself. Why not stop worrying and buy something nice. We have songs, we have shows...you can send a funny animoji to all of your friends. Good times. You look awesome. You're welcome.

And for market success, they seem to be right, although they're creating a situation where, if you don't have to learn to use a toolbox of specialized tools in customized combinations, you'll never learn how and really WILL need to rely entirely on pre-built "complete solutions".


> Her students are objectively less able to share information with each other. I have certainly seen kids who have to go as far as taking screenshots of what an app contains and sending those to someone, rather than being able to export the text.

True story: while performing support for development tools, I asked a developer for a source file. They opened it in their IDE, took a screenshot, pasted the screenshot into a Word document, hit page-down in the IDE, took a screenshot...

We got a several-dozen-page Word document filled with screenshots of the text file.


Honestly, that's a fireable offense. Who knows what untold damage that kind of ignorant mentality has caused to the company codebase.


I think it's important to stress this part:

> and meet them half way?

In other words, OP is not suggesting that you force students to use another method, but work with them to understand what makes sense to them. In your specific anecdote, if the teacher showed the students how to share hyperlinks, or how to get text dumps and share them instead of screenshots, maybe they would respect her for that.


> I have certainly seen kids who have to go as far as taking screenshots of what an app contains and sending those to someone, rather than being able to export the text.

I used to export the text of my travel itineraries, and go through several steps to get it on my phone. Now I just take a photo of the text on my screen. It's just one step :-)


Yeah, but what if you are grading papers? Do you want to mark up bitmaps? People need to learn that there is more than one way to do something.


> if you are grading papers do you want to mark up bitmaps?

Why not? Get a tablet with a stylus and draw your commentary directly on the image.

That is the way teachers have always marked up student work, just now without a paper copy or a physical red pen.

Trying to mark up documents using MS Word or Google Docs or whatever is a horrible experience in comparison.


iPad OS’s new “full page screenshot” feature for webpages kicks out a long PDF (no page breaks, it’s one long scrollable page like the actual webpage was) and sends it straight to the markup view. You can draw on it just like marking up an image, but the text is still selectable and searchable.

It’s a nice example of software recognizing how people want to work and building to support that.

And one more for the people with Pencils - you can drag from the bottom corner upward to take a screenshot in any app with the same markup tools. Works on either side, so the lefties aren’t left out here.


Years ago, I tried using Ghostscript to set the attributes of a webpage PDF so that it was like 8k pixels tall and had no page breaks, but Preview.app would just render it as a single page really zoomed out, and I would have to zoom all the way in. Does it function better now?
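
For reference, it was roughly this kind of invocation (a sketch from memory, wrapped in Python; the page sizes are illustrative, and it only resizes the page box rather than merging multiple pages into one):

    import subprocess

    # Re-render input.pdf onto a single very tall page box.
    subprocess.run([
        "gs", "-o", "output.pdf",
        "-sDEVICE=pdfwrite",
        "-dDEVICEWIDTHPOINTS=612",    # US-letter width, in points
        "-dDEVICEHEIGHTPOINTS=8000",  # very tall page, so no page breaks
        "-dFIXEDMEDIA",               # force the new media size
        "-dPDFFitPage",               # scale page content to fit it
        "input.pdf",
    ], check=True)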


When I open in a Preview window it behaves well and fits to the width.

Switching Preview to full screen trips it up still. For some reason that view wants to fit the whole document, and it zooms very far out to do it.

And on the iPad side there appears to be a limit on how long a document the "Full Page" screenshot will make. An HN thread with 110 comments came through entirely, but another one with 160 comments was truncated. Didn't dig any deeper than that, but I wonder what's going on there.


I don't see why you could not do exactly the same thing with MS Word, or even a properly exported PDF.

And, the added benefit is that it is searchable and can be re-rendered in different formats.

But even if sending obtuse oversized bitmaps to be handled on a tablet did happen to be the best solution, that is really not the point. Her class is not about optimizing its own workflow. Her class is about educating students. Not just about science, but about what goes into collecting data, collaborating, and submitting it in an accessible format.

I think that a high school graduate should be able to understand how to take some arbitrary files, bundle them together in a zip file, and email them to someone. I really don't think this is too much to ask of some aspiring pre-STEM students. And if she bent over backwards to use snapchat and accept assignments in the form of a bunch of screenshots, I think she would be doing them a disservice.


Making hand annotation with a digital pencil more seamlessly integrated with typed and otherwise native digital docs actually seems like a great direction to be moving in. Things are moving there to a degree but it’s still hard to shift back and forth between modes.


This would be fantastically more work than typing. Text is an effective and efficient medium.

If the student can't figure out a slightly different way to accomplish a goal, then they are incompetent with computers and only proficient insofar as they have memorized one workflow.


What? Have you ever graded written papers? Have you ever seen a paper graded by a good teacher?

We’re not talking about pages of written commentary here. The most “effective and efficient” approach involves circling words or phrases, underlining sentences, writing arrows from one part to another, scribbling some wavy lines in the margin, writing a few words here or there, ...

There is a very limited amount of time available to work through each student’s work. The point is not to entirely rewrite the paper for the student, or explain every problem with the paper in detail. The point is to highlight what the student did well, highlight the parts that make no sense, and give the student a few pieces of quick feedback so they can revise their paper or do a better job next time.

A colored pen on a black-on-white printed copy is much more “effective and efficient” (and “fantastically less work”) than electronic tools prominently involving a keyboard.


"Effective and efficient"? Or just lazy but fast?


Achieving the goal with less effort, so that you can spend more effort on other aspects of the job or on other endeavors, is efficient.

Wanting each of your teachers to learn how to use a constantly changing array of 20 different social image sharing services in order to receive work in a format that is more work to process is lazy.


The former. Even some professional editors prefer a pen/stylus for some review stages.


You can do this with a PDF with annotations and a touchscreen with a pen.


Email is actually pretty crappy for files. She should set up an HTTPS drop and get a way better experience.


I use pcloud.com for a similar problem.

They have a custom HTTP JSON RPC protocol that uses POST and GET verbs; their web site calls that one. They also have more performant TCP binary endpoints for use by native clients, with the same payload packed into a custom binary format.

For users, it's a web app with download and upload links.


How would that look for assignment submissions? Something like WebDAV?


Just a page with a form you can upload files through. You can set up something like that, e.g. with ownCloud/Nextcloud. And dedicated education tools also often have it.
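
At its most bare-bones, it's something like this sketch (assuming Flask; the route and names are made up, and there's no auth or size limiting):

    # Minimal file drop: one page with an upload form.
    from pathlib import Path

    from flask import Flask, request
    from werkzeug.utils import secure_filename

    app = Flask(__name__)
    UPLOAD_DIR = Path("submissions")
    UPLOAD_DIR.mkdir(exist_ok=True)

    FORM = """<form method="post" enctype="multipart/form-data">
                <input type="file" name="assignment">
                <input type="submit" value="Upload">
              </form>"""

    @app.route("/", methods=["GET", "POST"])
    def drop():
        if request.method == "POST":
            f = request.files["assignment"]
            f.save(str(UPLOAD_DIR / secure_filename(f.filename)))
            return "Received, thanks!"
        return FORM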

For normal document-sized files, email is fine though too IMHO, for university stuff I never really cared which way was used.


Nextcloud is quite a dependency for a simple file upload.


Sure, if you just install it for that. But it's a reasonably common service (e.g. around here, many universities have it for their employees and students) with that feature.


I've worked for various companies in the engineering and manufacturing industries, and zipping and sending files over email is as far from dead as you can imagine, and has a very bright future ahead of it. Frankly, today's high school students have wildly deficient computer skills. We're talking about teenagers whose usage of a computer usually consists of browsing Facebook and listening to music on Spotify, not actually doing productive work; informing workflows based on input from such users is like asking a painter how a fireman's tools should work.


Honest question: what do you do when the files you want to send are larger than the max attachment size for the email provider? If you send it through another medium, why not do that by default instead of email?


Because it is significantly more cumbersome. With email you get the file literally attached to the message talking about it, and you store that away and can always find the file and the context for it in the same place.


SMB shares or the like are pretty much universal in such a situation. I generally tend toward making something available on a file share and emailing a link, but because of permissions and access controls, email is generally easier, especially for transient files that really don't need to be permanently saved. Never once have I used Dropbox or Google Drive in such a situation, even with very small companies.


What hiccups do you run into in terms of permissions with sending an "anyone with link can view" link over email? Asking because I'm working on a competing technology and want to get the UX right.


What I'm doing is not "anyone with link can view", it's uploading the file in question to an SMB (or equivalent) share and sending the location of that. If the user in question doesn't have permissions for my department's share, they can't access it.


Sorry I was on mobile and wasn't very clear. I meant in terms of services like Google Drive or Dropbox, what about the sharing functionality doesn't work for you?


From a legal standpoint, there is way more to cover with those services. All sorts of hoops (NDA, compliance, etc.) would have to be jumped through. At my current employer, those services not being on our property under our complete control makes our lawyers nervous; counterfeits of our product are an existential threat to us, and keeping tight control over the software and electrical implementation details is paramount. Any idea involving a third-party service hosting sensitive engineering data goes straight into the shitcan.

If you managed to address the legal and security concerns, you'd have to fight against decades of momentum and user entrenchment. Drive and Dropbox are rare; any customers or vendors that you're working with will probably have the headache of figuring out how to interface with these services instead of just applying their ~25 years of experience administering massively powerful Windows services. Power users have assimilated those 25 years of workflow into their very being, and all the added complexity of SMB/Windows shares and other related services are extremely powerful tools that help them ensure that the right people have access to what they need and no access to what they don't. In-house tools are written to integrate with the existing infrastructure, and nobody has enough knowledge or balls to attempt replacing them. Internal data sharing services are generally based on software which has been continuously improving for decades, and has best-in-class feature sets and support.

The points above are themes that I've encountered pretty much universally in every engineering firm that I've worked for or interfaced with.


Makes sense, thanks for the detailed response!


Not using a particular workflow ≠ unable to understand a particular workflow.

The "zip up your files and email me" is only hard when the concepts of files and email are alien, because everything lives in its own silos. Which is the point.


Anyone suggesting using proprietary cloud products for something that's mandatory for participating in society hasn't thought things through:

1. The privacy policy of Google & Dropbox is a catastrophe.

2. They can ban you at any time for all sorts of silly reasons and then you can't even turn your homework in any more.


My kids have been to 4 schools so far (we move around a fair bit). In each one they are expected to have a Google account which is linked to their school in some sort of G Suite arrangement.

I don't know if it's a coincidence or a department policy, but Google gets them early and if you think that's bad, teachers aren't to blame - maybe schools are or possibly the education dept.

Honestly I was a little surprised that there was no discussion about it. I talk to my kids about the way the web and privacy are going, and I don't expect schools to do everything, but they should be a little more aware of the issues imo.


My kids use Office 365, but the iPad requirement is my big beef.


> the iPad requirement is my big beef.

I feel your pain. At an obligatory 500 bucks a pop I would have been much happier to buy them laptops.


I rent them, but unless you have other Apple products you can only set up adult accounts, so my kindergartner has to be "13".


We somehow dodged the iPads for kindergarteners bullet but that would have made my blood boil. I guess the FingerPaint app is less messy than actual fingers and actual paint.

In my less charitable moments I think teachers are getting lazier, but when I go to those parent conferences and see the teaching staff looking like rabbits in the spotlight as parents of 3rd graders grill them on the latest academic theories I realise that really they're caught in the middle of a whole set of weird expectations.


1. Nobody besides ideologues actually cares about that. See their huge install base as a demonstration of this fact.

2. The professor tells you to create a new account and submit it via that. Sure, it probably violates the ToS, but nobody cares.


> Perhaps instead of email, your teacher friend could find out how her students send things to each other and meet them halfway?

They probably just don't. People are sharing images, text, and links, mostly on non-accessible, ephemeral, and unreliable platforms. They are not used to sharing generic data, and not used to reliable platforms at all (as even email is unreliable nowadays)... except at work, and that's where people use email.


The problem is that all the new ways of doing things involve being a slave of some ecosystem that doesn't play well with other ecosystems. If Instagram, Snapchat, Dropbox, Slack, and Google Drive all interoperated properly, this wouldn't be a big deal.


And if you wanted them to interoperate properly... you'd essentially be reinventing a file system.

The problem is all these services try to control as much as they can. So interoperability becomes whatever contracts on the use of APIs they can sign with each other. You as a user have nothing to say.


I'm guessing there's a decent chance that the students are already using an email provider that ties them to an ecosystem (which may already have a service geared towards sharing files!)


> Perhaps instead of email, your teacher friend could find out how her students send things to each other and meet them halfway?

"Please turn in your assignments via Snapchat by 7AM tomorrow"


A friend recommended an accountant to me here in Japan. He wanted to do all correspondence via the Line App including tax forms etc.

Another contract made a Facebook Messenger group and shared all versions of files by pasting them into the chat.

Wasn't particularly happy about those, but fortunately both were fairly short-term things, and to be honest I couldn't really think of anything better that I believed I could convince them to use. The Line accountant was definitely not going to use anything other than Line.

The FB Messenger file sharing, well, what else? Email? Dropbox costs $$$ per user. GitHub requires more training. I suppose maybe Slack would have been better than FB Messenger, but that still requires getting the other party to install apps and get used to a different workflow. So yeah, I just put up with their chosen method of communication for the duration.


Line accountant!! Same in Korea, but with KakaoTalk. People use it for everything. People who are worried about the dominance of WhatsApp or Messenger should see what 90-percent-plus dominance by one platform looks like. I did all the organising for a rental contract here through KakaoTalk. It also has a bank and an in-app payments system. You can use it to buy someone a coffee at Starbucks. They just receive a barcode to be scanned at the counter. You can buy gifts for people and have them shipped without knowing the address, because the recipient just inputs it after receiving the notification.

And despite this vast monopolistic ecosystem DaumKakao put ads in the app this year. Greedy.


> You can buy gifts for people and have them shipped without knowing the address, because the recipient just inputs it after receiving the notification.

Dang, I wish I could do that.


I don't need to compress anything either, but zips/rars are good for getting around other types of limitations, for example splitting a collection of files into chunks to send one by one, or increasing copy speed. My archive of downloaded comics takes well over an hour to copy onto a drive as loose files and folders, but as a single archive file it takes only a few minutes, and I can unzip it on whatever device I'm putting it on.


> I haven't needed to,

Erm, you need an email address to register for any kind of service out there, so "know how to use email" is pretty much a requirement for everyone. And this is not going away, unless you want a future where everyone has to be reached through incompatible walled gardens.


A little of both, maybe. In defense of your position, though, I agree that zip files are an odd choice. Why not bundle everything into a single document, a Word file or a PDF?


How do you bundle a spreadsheet into a PDF? Even embedded into Word it is a lot less usable, and you run the risk of linking to the document instead of embedding it.


Depends on what you're planning to do with the spreadsheet. If it's display only, it's easy to bundle into a PDF or Word doc. If you need to interact with it, sure, standalone is better. But this first case is something I deal with all the time. If I'm looking at financial statements for a board meeting, I really don't need a live spreadsheet, and it's massively more convenient to have it bundled in with the agenda, minutes, reports, etc.


On the flip side, I do public transit advocacy, and having spreadsheets of the monthly data reported to the board would be so much better than OCR/text extraction and manual editing.


> I have a very similar anecdote. I had a long conversation with a friend who is a high school science teacher. She told me that computer literacy has plummeted in the last ten years.

I suspect it is the same progression as any other new technology that undergoes mainstreaming. Take automobiles for example. In the early days if you owned a car you either made yourself something of an expert (and if you were an early buyer you were probably kind of an enthusiast already) or you hired one. Today outside of enthusiast circles they are just an appliance: you get in, turn it on, and go do whatever it is you need to do.


I agree that technology will be more streamlined as it becomes more widely adopted.

But people still understand that they need to put gas in their car to make it go, and that if they run out of gas, their car cannot go. We don't have to know what gas is, but it is still a quantity of something.

But our concept of files and data is becoming so abstracted that we do not even know where our data is anymore, how much we need, or how long it takes to transfer it. But until we have much more bandwidth and storage, there will be problems.

Imagine a world where we had optimized our cars' interfaces so much that we removed gas gauges and just had reminders telling people they had to go to a gas station. As a result, people would frequently run out of gas or be shocked by how much gas costs. But gas gauges would be considered a power-user feature, and everyone would be convinced that users could just never understand them.


People used to need to understand that the gas went through a carburetor, where it was mixed with air that came through an air filter, both of which could get clogged. And then it would be drawn into a cylinder through a valve that was opened by a camshaft, where it would be ignited by a sparkplug. Sparkplugs could get dirty, so you better know how to change them yourself. And it'd be very bad if the sparkplug was live while you tried to change it, so you also needed to know where your car's battery was and how to disconnect it.

Nowadays, carburetors are gone, replaced by fuel injection, which controls the fuel mix so tightly that sparkplugs rarely get dirty. And the whole assemblage is so tightly tuned that if you need service, you take it to a dealer and let a professional deal with it.

The world you're asking us to imagine already exists, you've just forgotten enough of the old world that you only remember the gas gauges.


I mean, all of that still applies to small engines.

Even in cars, you still need to clean/change your air filter and change your oil and oil filter, even if you pay someone else to do it. There's still coolant, which needs additives (e.g. antifreeze), and windshield wiper fluid that needs to be checked and filled.

I don't think it's asking people too much to have a basic concept, maybe 1 or 2 why's deep, of how the devices they own and use function.


I don't want that world back at all, and actually I never grew up with any of that.

But, there is a very simple and direct relationship between fuel in, movement out. And most new cars can also estimate how far they can go on how much fuel.

I think that this is actually the perfect relationship, and any more optimization at this point would be harmful.


I'm looking forward to self-driving electrics. You hop in the car, it takes you to your destination, and then you hop out. At off-peak hours they drive themselves to a car farm far away from the city, and recharge. The car service handles all maintenance.

There's this phenomenon where anything invented before you turn 10 is just part of the natural order of things, anything between 10 and 30 is a great new opportunity that you might be able to make a career in, and anything invented after you turn 30 is a threat to the natural order of things that must be resisted at all costs. I've put a lot of work into maintaining the ability to see change as opportunity past the age of 30. I suspect that most of this site is just hitting that demographic where new things become offenses against the natural order of things, though, and that's why we see such resistance to things like self-driving cars and cryptocurrency.


> I'm looking forward to self-driving electrics. You hop in the car, it takes you to your destination, and then you hop out. At off-peak hours they drive themselves to a car farm far away from the city, and recharge. The car service handles all maintenance.

I really think that you're being overly optimistic about this. Kind of like people were about the cloud 10 years ago. There is going to be all sorts of new bullshit that you can't even imagine. I bet there will be surge blackout periods, where only rich people can travel. This will only trend worse over time, to the point where poor people are less mobile than they were when they could own cars. And you might have to subscribe to different car fleets, and will get screwed a bunch of different ways that way too.

> There's this phenomenon where anything invented before you turn 10 is just part of the natural order of things, anything between 10 and 30 is a great new opportunity that you might be able to make a career in, and anything invented after you turn 30 is a threat to the natural order of things that must be resisted at all costs. I've put a lot of work into maintaining the ability to see change as opportunity past the age of 30. I suspect that most of this site is just hitting that demographic where new things become offenses against the natural order of things, though, and that's why we see such resistance to things like self-driving cars and cryptocurrency.

I try not to be a luddite. But I think you are discounting how people with more wisdom can see how things have degraded over time. Obviously those in power have a vested interest in you believing that things are only getting better. Phone manufacturers don't want to remind you of the days of headphone jacks and user swappable batteries.


>There's this phenomenon where anything invented before you turn 10 is just part of the natural order of things, anything between 10 and 30 is a great new opportunity that you might be able to make a career in, and anything invented after you turn 30 is a threat to the natural order of things that must be resisted at all costs.

As a 26 year old working in the tech industry, I see lots of newer tech around that I'd say is threatening, and I have for several years now. Though I suppose I'm probably in the minority. :)

The trend of technology today seems to be towards more centralized ownership of everything we interact with, when the platform providers can get away with it.

20 years ago you would buy a book and it was yours. You obviously still can now, but if you buy an ebook from Amazon, they can take it back from you. (Or if you poorly chose to buy an ebook from Microsoft... well, they all stopped working recently.)

IBM, though never a bastion of openness, had a very detailed repair manual[0] for the original IBM PC. Sure, modern machines don't have nearly as many user-serviceable parts, but in many cases today you'll find legal and technological barriers to repair in place of even the most basic of repair manuals.

It's a trend that's hard to fight, and opting out means sacrificing a lot of convenience, but I try all the same.

[0] http://classiccomputers.info/down/IBM/IBM_PC_5150/IBM_5150_H...


I like to build my own PC and also help friends. Most PC gamers also build their own PCs. Most parts come with instructions and almost all pieces fit nicely; it's kinda like building Lego. I switched to Linux a couple of years back. But last month I helped a friend build a PC, and when installing Windows 10 it just said "missing drivers" with no clue. Then my friend went to a computer shop and they installed Windows without any issue.


One would think the resistance to crypto has more to do with its basic value proposition. Watching 4 years of every crypto coin functioning as a honeypot trap for the naive, then exploding, leaves a mark. Bitconnect making billions from scamming people rings truer in my mind than some vague ageism.


Also a good chunk of the crypto coins are based on a positive feedback loop between greed and electricity waste, which is kind of a scheme you'd expect from a supervillain.


You're making some great points here. I'm curious to know how you're handling this part:

> I've put a lot of work into maintaining the ability to see change as opportunity past the age of 30

I do agree with you mostly, and most younger coworkers don't seem to know or care about Hacker News. I remember when I was in college and found this website; it was the greatest discovery ever and there was so much interesting content. But (purely anecdotally) the viewership seems to be limited to that demographic.


> But (purely anecdotally) the viewership seems to be limited to that demographic.

From the opposite perspective of a 20 something, my more technical friends with a genuine interest (the type to build their own PC/NAS clusters, Arch Linux/Gentoo users) are the ones I know that do browse HN regularly. We're still around but I also can see most people would rather browse Reddit.

I'd like to think the conversations here have more substance than on other sites and that is still a major draw.


> I'm curious to know how you're handling this part:

> > I've put a lot of work into maintaining the ability to see change as opportunity past the age of 30

There are a few big skills, all of which are really about mindset and worldview more than anything else:

One is to recognize and embrace impermanence, and to do so as a way of inoculating yourself against sunk-cost fallacies. So for example, I put in a lot of work to learn Python, Django, and web development when I got out of college, and then to learn C++, scalability, and optimization while I was at Google. When I left Google, I had the idea to do an API-compatible reimplementation of Django where all the framework bits are written in tightly-optimized C++. But as I started evaluating that idea, I looked around and realized a.) Django was no longer the preferred way to build webapps b.) The web, arguably, was no longer the preferred technology to build apps at all and c.) users of Django either didn't care about performance or they'd gotten to be big sites that can afford massive AWS bills. Sucks to be me. Better to recognize that early before sinking a lot of work into that project. I've still got those skills (though both Django and C++ are moving targets), and they came in handy when testing and rejecting the following couple startup ideas.

A second skill is to view learning as rewarding for its own sake, and something that you do lifelong rather than just when you're young so you can get a job. I'd internalized this pretty well as a kid.

A third is to pay attention to people around you, and when they're doing something seemingly stupid, ask yourself why they're doing it rather than immediately judging. And a fourth is to pay careful attention to things that disconfirm your previous hypotheses.

As an example of both of these - when I first heard about Bitcoin in 2013, I read the whitepaper, mentally filed it under "Distributed database; might spawn 2-3 interesting companies but won't go anywhere else", and then forgot about it for a few years. When the bubble hit in 2017, I was like "Pyramid scheme. Actually double pyramid scheme, which is kinda clever. Wait for it to burst."

But then from that assessment comes a hypothesis: when I looked in detail at the ICOs being funded, I should have expected to see 100% scams. I only saw roughly 35% scams, plus another 50% that were well-meaning teams in well over their heads. With over 6000 ICOs being done, that's hundreds of projects that might actually have a chance of being something real. So while the vast majority of crypto projects are scams, there's still something very interesting going on, and perhaps its younger boosters are onto something.


> And it'd be very bad if the sparkplug was live while you tried to change it, so you also needed to know where your car's battery was and how to disconnect it.

I haven't worked on cars old enough for this to be an issue, but, as far as I know, all you need to do is disconnect the ignition coil before removing the spark plug.


I can't imagine how you could get a shock off an engine that isn't running...

60s tech was a rotor and coil. Modern tech needs a running ECU.

Also, AFAIK the spark is mostly just painful and not particularly dangerous (a memory of a spark plug lead with cracked insulation). I have also had a shock from a charged CRT tube (youch!) and the occasional 240V mains shock (careless me).


Yeah, that'll work, it's just that the ignition coil is usually harder to reach than the battery. (I think actually the preferred way to do it, safety-wise, is to disconnect the ignition coil.)


Also, I learned how to change a tire very early. From what I read, a large number of people can't do that these days.

But spark plugs and air filters are still a thing.


My dad taught me parts of an engine, along with how to service them, when I was in elementary school (mid-80s). I was on a big "learn how cars work" kick at the time, so he figured he'd indulge me and maybe teach a few practical skills at the same time.

I've owned my car for 10 years now, and the total number of times I have had to change my oil, air filter, spark plug, or any other part of the engine is zero. My A1 service indicator comes on and I take it to the dealer, where they relieve me of a large amount of money and give me a car back that drives okay and has no maintenance lights on. I did have to change a tire once and jumpstart a car once, but that's about the extent of car maintenance I've needed.


It's also just...literally not worth your time to do anything else. Servicing your own car rapidly gets into needing a ton of tools, takes a bunch of time, and makes you no money while taking away your free time.

If you genuinely enjoy the process, and it is cathartic and serves the function that free time and leisure need to serve in your life, then fantastic. But otherwise it's a false economy.


Maybe. It depends on your luck in getting a professional who is actually knowledgeable and competent, and as a secondary aspect, knowing enough to do it yourself generally enables you to accurately judge the job someone else does. For the common oil change and lube, mechanics have a tendency to skip the lube part because there is no easy way to verify it has actually been done; you'd have to crawl under the vehicle and check the zerks for fresh grease, at which point there's no reason not to just do it yourself and save the money.


It can be difficult, bordering on impossible, to find a trustworthy independent mechanic. Even an honest one won't necessarily be as thorough or pay the attention to detail that the owner might. And even though they are faster, they are also pressured to spend less time than might be optimal.

I have a 30 year old car and there are one or two specialists in my city that people recommend. One of them told me I needed an engine rebuild due to an oil leak, when it was actually a specific seal that was relatively minor to replace, and the other correctly diagnosed the problem, and apparently fixed it for a reasonable price, but insisted the oil I was using was too thin, and when I deferred to their expertise, they put an even thinner oil in (I found out later) and lied about it...maybe because they thought I would come back with more leaks...so the only local mechanic I currently trust is a dealer that will work on classic cars and they charge a ridiculous amount.

So, comparing cost and expertise and tools is kind of missing the point. I would always take a new or newish car to a dealer. I have no idea how to find a decent independent mechanic, and I've tried quite a few over the years. If it takes me several times as long to do something, it doesn't really matter if that's my only practical way to get it done right.


> It's also just...literally not worth your time to do anything else. Servicing your own car rapidly gets into needing a ton of tools

It really depends on what service you plan to do. If you're going to do things like changing bulbs, air and cabin filters, wipers, and the battery, you can get along with just a conventional socket set, if that.

Oil changes will require an oil filter wrench/cap, a torque wrench, and a funnel. And those tools can also be used for transmission fluid changes.

Once you get to spark plugs and brakes, you'll need more tools, but those services aren't frequent enough to really warrant buying the tools as opposed to renting them.

> takes a bunch of time

I typically change the oil on my vehicles myself, and an oil and filter change takes me about 45 minutes (including the time to drive to the auto parts store to recycle the old oil). The last time I took my car somewhere for an oil change, I had to drive to five different places and finally settled on one that made me wait about three hours before they could get to it.

There are certain services or repairs that will take time (brakes, suspension, exhaust) where I would just as well take it to a mechanic to do it for me, but there are plenty of others that don't take much time at all and cost significantly less in terms of saving on labor and your time for setting up an appointment, taking the car there, getting alternative transportation or waiting, getting the car back, etc.


It depends on the car model, but I find that it's quicker to do some maintenance yourself rather than scheduling an appointment, waiting or using their shuttle service, and getting the car back.

The air filter and cabin filter replacement come to mind (and, at least in my Honda Odyssey, can both be replaced in about 10 minutes give or take).


With the tools they include in the car and how hard it is to get to the spare tire (in some models), I would be hard pressed to get the wheel off (especially if some shop decided it would be a good idea to really overtorque the lug nuts with an impact wrench).

In fact, the last shop that worked on my car did exactly that and I had to buy a breaker bar and stand on it in order to loosen the lug bolts on my car before I retorqued them to the correct value.


In all fairness, a non-trivial number of vehicles don't even have a spare tire these days, much less a full-size one. And the stock jack even SUVs come with is often next to useless.

Having to swap out tires is pretty unusual in normal driving these days and it’s not like you carry spares of everything else that could break and strand you in a car. So I sort of understand it even if I don’t really like it.


I have had a tire totally self-destruct within the last decade, so I feel like I still need a spare. I don't believe that once a tire is damaged at speed the cans of sealant are likely to work, since it will probably tear apart.

I also have AAA, and don't intend to change a tire if I don't have to, but AAA will not provide you with a suitable tire and wheel. And I don't see why someone would want to arrange for their life to depend on their cell phone working if they drive anywhere out of town.

In fact, right now, I have run-flat tires, a spare, and AAA.


Maybe we'll get there. They'll put induction coils in the streets and electric cars will charge and get powered as they go. Unless you're going on a long trip there'd be no need to ever worry about power.

Or, simpler, wireless charging in most parking spaces, so most people would never even have to think about power.

Or robot chargers. Put a QR code on your charge port, maybe in infrared so it's invisible, and some simple device could find and start charging your car any time you park it.

Anyway, as an old computer-literate person, the idea of not understanding files scares me, but if people are getting by without them, then it's probably just me being out of date.


> Maybe we'll get there. They'll put induction coils in the streets and electric cars will charge and get powered as they go. Unless you're going on a long trip there'd be no need to ever worry about power.

I think that is a bit optimistic, though. It will be far more obtuse than that. You'll have to subscribe to one of several charging services, which sort of work with each other most of the time. And you won't be able to inspect for yourself how much charge your car has left without jailbreaking it.


Teslas do this: they show battery charge in terms of miles or kilometers, which is obviously not the correct unit. It's more abstract and often wrong, but closer to what the person wants to know.

Oddly enough the iPhone doesn't do this, it will happily tell you how many GB you have left when what the user really wants to know is "How many more songs or photos can I store?".


I actually think that what Teslas do is good. It is the most accurate representation of driving capacity. And a lot of non electric cars are doing that now too.

The first few years of Bluetooth headphones would just give you an alarm when they were at 5% charge, and that was it. And in general, there is very little feedback given about what your storage is doing and where it is located. Just the simple question of "where are your photos?" can be hard for most people to answer now. "Uhh, the 'eye cloud', but also on my phone. I mean, some of them are and some aren't."

And that is why people are constantly losing their data now. I have seen plenty of people hold onto their old phones because they have data that they don't know how to get off, and it is just easier to hold onto the entire device.


> when what the user really wants to know is "How many more songs or photos can I store?"

That depends on how large they are, and that can vary widely, which is why abstracting away files (or more specifically, file size) is such a bad idea.


How many km remain depends on the terrain, traffic, driving technique, weather, and a host of other factors. Nobody is expecting 100% accuracy, just a rough estimate. They are already mentally doing the conversion anyway.


The range of a car can't change by several orders of magnitude in normal use, unlike sizes of files.


> we do not even know where our data is anymore, how much we need, or how long it takes to transfer it. But until we have much more bandwidth and storage, there will be problems.

I'd argue the problems will only start when there's enough bandwidth. That's when companies will be able to store everything remotely, fully transparently, and that's when people will really get screwed. Because if Google locks down your access for a TOS "violation", then even having a nerdy friend will not bring back fully remote data.


This has already happened at least once in the automotive realm, as auto manufacturers replaced the dashboard gauges for important measures like oil pressure with "idiot lights".

The idiot light has achieved its apotheosis in the era of computerized cars, where the car's onboard computer signals a problem and illuminates the "CHECK ENGINE" light. What that means, of course, is that you need to take the car in to the garage and have a Certified, Licensed Professional Mechanic attach a doohickey to the engine computer to tell him what the problem is, so he can tell you.

My father is an old-time car geek. This shit drives him up the wall.


> and illuminates the "CHECK ENGINE" light. Of course, what that means is that you need to take the car in to the garage and have a Certified, Licensed Professional Mechanic attach a doohickey to the engine computer to tell him what the problem is

You can buy an OBD-II scanner for around $30 and get that information yourself without having to take it to a mechanic. That should work for any car built since 1996.
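
If you'd rather script it, the third-party python-OBD library can pull the same trouble codes over one of those cheap adapters. A minimal sketch, assuming the library is installed and an ELM327-style adapter is plugged in:

    # Read stored diagnostic trouble codes via python-OBD
    # (third-party library; `pip install obd`). Assumes an
    # ELM327-style adapter on an auto-detectable port.
    import obd

    connection = obd.OBD()          # auto-detects the adapter's port
    response = connection.query(obd.commands.GET_DTC)
    print(response.value)           # list of (code, description) pairs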


Bluetooth ones are less than $10 online.


>> But people still understand that they need to put gas in their car to make their car go, and if they run out of gas, their car can not go. We don't have to know what gas is, but it is still a quantity of something.

>> But our concept of files and data is becoming so abstracted that we do not even know where our data is anymore, how much we need, or how long it takes to transfer it. But until we have much more bandwidth and storage, there will be problems.

I think you're mixing up analogies at different levels of abstraction... yes people know they need to fuel their vehicles, and they know they need to plug their computers into power or charge the battery, and connect to the Internet. People probably don't know much about fuel injection, and plenty of them probably don't know what a spark plug is. It's possible that they also don't really need to know what the underlying storage abstraction for data is.


For a look at just how hands-on early motoring was, check out this article about Kipling and cars (he was an enthusiastic "early adopter"): http://www.kiplingsociety.co.uk/rg_steamtactics_kipearly.htm Reading this really does sound like the early days of computing (or so I imagine from reading about them).


I don't need a license for my washing machine, and I understand how its functions work even less than I understand my user-facing car controls. People spend thousands of hours driving with the keen knowledge that it is possible to kill other drivers on the road. They master its abilities even if they make bad decisions behind the wheel.


This is very interesting. I've heard similar reasons as to why computer literacy is apparently low in Japan; the keitai ecosystem was so good that e-mail in Nippon is synonymous with messaging.


I teach non-CS students (LibreOffice, etc.). I can confirm most of them have no idea what a file, folder, file system, or zip is.


The same can be said for newer cars and people not knowing how they work well enough to fix them up. Or GPS and knowing how to navigate. This may be another example of users just caring about using something to make their lives simpler and more fun.


These are both interesting insights, and I am, somewhat more selfishly, surprised and amused that my career isn't going away because of increasing literacy but because the user experience on a few devices made it so.


> She has her students go into the field and collect data in the form of photos, spreadsheets, and typed reports. She wants students to bundle these files in zip files, and email them to her.

As others said, that’s because they don’t need to do that.

In my experience, they will know how to share their Google Docs (which is where their spreadsheets and typed reports in all likelihood get written) with you.

They probably also will understand that editing said documents after informing you of their existence may not mean you’ll see said changes.


> In my experience, they will know how to share their Google Docs (which is where their spreadsheets and typed reports in all likelihood get written) with you.

From what she has told me, and from what I have seen, some of them don't even get that far. Their idea of sharing information is to send screenshots in MMS messages or over snapchat.

And, I think that if you graduate high school, you should have these basic skills. Just in the last year, I have emailed zip files to several different people, like my attorney, or when I applied with the local town clerk for a parking permit. There is never going to be an app for everything.

And call me old-fashioned, but I think a high school graduate should be able to write a check, put it in a physical envelope with a stamp, and mail it. And if someone lets you expedite something by emailing forms to them directly, you should be able to do that too. God forbid a medical office asks you to fax them something. (Which I still have to do several times a year.)

When I was in high school, everyone knew how to use AOL to send word documents and pictures and even mp3s to each other via email. Everyone. These are skills that can be taught.


> God forbid a medical office asks you to fax them something. (Which I still have to do several times a year)

Doing that was much easier in the days when computers came with fax/modems and most places had landlines. Now you have to search for some online service that will inject adverts into your fax, or go to a copy place (if you can find one) and pay a lot of money to send a fax. I'd almost rather mail a certified letter with return receipt instead :)


>go to a copy place (if you can find one)

Office supply stores will often send faxes.


From a security perspective, zips can be dangerous. Some email systems strip them out automatically.


From a truly paranoid security perspective, all files can be dangerous. At one point, a computer could be infected by a virus from a JPEG image (thanks, Microsoft!).


Is it a problem that students don't know abstraction layers lower than they regularly use, or that we're collectively climbing up the abstraction ladder as the years go by? I'm a full generation older than these students, and most of my colleagues can't read or write assembly language. I've worked with programmers who have never even written a compiler, database, or central processor.

I'd say we're seeing that filesystems weren't a great user experience, at least the way they've been implemented in the past, and I'd place the blame on the other side. The design of computer systems today isn't the only possible way to design them. They exist to serve users. If our old systems aren't well-liked by users, and we spend all our efforts hiding their inherent design so people can actually use them, maybe our foundation is wrong.

Even the most technical people in the world have been trying to replace our filesystems with object databases for a few decades now, and plenty of people use only "services" (database in the cloud) or "apps" (database on phone). Isn't it possible filesystems have outlived their usefulness? For a lot of people, they really only exist to boot the OS.

Computer science students need to learn about filesystems and C and Unix, sure, like cooks and carpenters need to learn US customary units.


There are good and bad abstractions, and the whole issue here is that filesystems are a good abstraction, both intellectually and ethically, and the "new" replacements are mutually incompatible lies that reduce your ability to accomplish your goals with your device while simultaneously trying to take ownership of the data and the device away from you.


Well, I've been using and programming computers for decades now, but I have to admit I've never "written a compiler, database, or central processor". This is quite a high bar for computer competency, I think.

The comments about file systems are interesting, but I doubt that programming is going to get much easier just because you store your source code in database tables instead of files.


My hairdresser, who knew I was into computers, once asked me “when I delete a picture on my mobile phone, where does it go?”.

I first thought he was asking if there was some kind of Recycle Bin on Android, but then he said: "when we die, our body is buried or burned, that is, it becomes bones or ashes; but what happens with the picture on the mobile phone, what does it become?"


There's a kind of pillow covered in small, reversible sequins, which makes a pretty good analog for the situation inside the computer. If you pet one of these pillows, your hand's motion flips the sequins to show one color or the other, creating patterns. Petting all over the pillow in a single direction restores the pillow to a single color, erasing the pattern. But the pattern didn't really "go anywhere" in the sense that a physical object would -- it was just an arrangement of sequins.


I like this analogy better than comparing it to a bunch of light switches. Gotta keep this in my back pocket.


The real answer is actually the same in both cases: the bits (atoms) get used for something else. Although bones can linger for quite a while, not sure if there's a file parallel for that.


Sure there's a parallel. The bits are still there, in the same place and the same order; there's just no longer an index to find and use them, so they're not "alive".

And they remain in the same place until the file space they occupy is reclaimed by the OS and rewritten with new content.
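
On a POSIX system you can even watch this happen: deleting a file only removes the directory entry, and a descriptor that was already open can still read the bytes. A small Python sketch (filename is arbitrary):

    # POSIX demo: unlink removes the name, not the still-open data.
    import os

    f = open("ghost.txt", "w+")
    f.write("still here")
    f.flush()
    os.unlink("ghost.txt")   # the directory entry is gone
    f.seek(0)
    print(f.read())          # but the bytes are still readable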


The funniest thing happens with SSDs. Even when the same block is reclaimed and overwritten by the OS, the bits in NAND flash are very rarely overwritten, due to the wear leveling implemented in the SSD controller chip. When the actual bits are destroyed is unpredictable; that's probably why some vendors put hardware encryption there.


There is an issue with SD cards where they can revert to an earlier state.


Perhaps assure him it's not really gone; Google, Zuckerberg, the NSA, Russia, et al. still have a copy?


Nothing to add, but I really love his question. It's so pure and innocent haha


Waste heat, maybe? (cf. Landauer's principle)


What I see as most problematic is the fact that people want to become CS majors before they become computer enthusiasts.

Long before I studied computer science concepts, I was enamored with computers. I was a power user then. When I was in elementary school I couldn't have Internet access any time I wanted (it was literally a call on the family telephone), so I spent lots of time playing around with the operating system and the installed applications. Many nights were spent exploring the nooks and crannies of Windows and Office. I learned about cmd.exe and wrote batch scripts before I had any idea what computer science was about.

Later when I moved from Windows to Mac in the early 00's, I did the same. The same kind of curiosity led me to naturally explore the various system utilities, from Mac-specific like hdiutil or diskutil to general Unix-y.

I don't know whether this is fighting a losing battle, but I doubt anyone who's not curious enough to learn about the innards of their computer can really become a successful CS major and hacker.


A lot of the people I saw in college who were CS majors didn't really seem to care about CS as a topic and only saw it as a lucrative career path. These same people would constantly struggle to understand the most basic concepts and not seem to care about understanding it beyond being able to finish their assignments.

It's sad because I feel like I and a handful of other people in the program were the only ones actually enjoying it. It probably also has a lot to do with older people telling so many kids, as they were growing up, that they're "so good with computers" because they showed them how to set up their email accounts, giving them a false sense of skill that made them think they should do it for a living.


I work in an office environment with relatively young people, and it is absolutely shocking how little people know about computers or some underlying technologies, despite using them all day, every day. Very few, if any, go beyond the basics of how to open up a website or create a Word document. The willingness to learn something new is close to zero and always downplayed with "I'm not good with computers". I've got a feeling that to most office workers a pivot table in Excel is the highest possible level of technical knowledge.


I had to explain to someone in my office how URLs work; he only knew how to get to sites by typing what he wanted to visit into Google search. Our intranet is obviously not crawled by Google, so he had no idea how to access it.

I got the same "I'm not good with computers" response.

Modern web browsers are the cause of this; they don't distinguish between an address and a search query. Frustrating.


Browser vendors get a kickback for each search.


Isn't this the way it has always been? Then they expect help from the rare few who do understand how computers work, while at the same time somehow looking down at them for not being "normal".


I agree, nothing much has changed, and I even think it's going the route where the ones that "know" are rewarded handsomely, while the "don't knows" are left on the side with peanuts.


I consider myself very knowledgeable about computers and computing.

Yet I could not fix my car, my microwave or my refrigerator if my life depended on it.

Does that make me less? Is it wrong that I use my car two hours a day and the only thing I know is to press a bunch of buttons?

Not for me. I enjoy CompSci technology. I wouldn't like to spend my time fixing my car.


I don't expect an average user to go about fixing their computer. It's more like they know how to drive a car, but always in 1st gear and in a straight line.


URLs are also getting to be a problem. Browsers have started to trim URLs, and it's resulting in web developers, people who are very savvy with computers, who don't really understand how URLs work, or that there is always a trailing slash on the root of a website and what it means.
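
A quick illustration with Python's urllib: the root URL with nothing after the host parses to an empty path, but an HTTP client still has to send at least "/" as the request target, which is why that trailing slash is always there in practice:

    # '' vs '/': the root path the browser hides from you.
    from urllib.parse import urlsplit

    print(repr(urlsplit("https://example.com").path))   # ''
    print(repr(urlsplit("https://example.com/").path))  # '/'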

Google is in the process of making it worse by hiding URLs in the search results. (It's also a security risk. Google's site-name detection often makes terrible mistakes and points users to malware.)


> it's now a lot harder to teach lower-level abstractions

I now have to teach teenagers the concept of "computer".

The idea that an app is a "recipe", a set of instructions, and that something needs to execute that recipe, a "computer" (aka your phone), is now a foreign concept.


Sort of as a response, I'm currently developing/piloting a 9th grade CS curriculum designed around constructionism ("making with code"). The first unit spends a lot of time introducing students to Terminal. By the second unit, assignments are git repos.


Hm, does "constructionism" mean a new thing in educational theory now? You don't seem to be using it like this:

https://en.wikipedia.org/wiki/Constructionism_(learning_theo...

?


No, Papert's constructionism is exactly what I'm pointing at. My phrase, "making with code," was meant to suggest that like maker education, this approach to introductory CS emphasizes personal relationships with powerful ideas, through personally-meaningful projects done in a community of practice. By leading with Terminal, I hope to emphasize students' laptops as tools for making things, not just providing access to content.

A lot of current approaches prioritize scalability over everything else, leading to canned and sandboxed curricula.

I'm curious, though, whether this is in tension with your understanding of constructionism.


They should teach them Terminal on OS X. I would expect most CS folks to need it for building code or using Git. Maybe I am too disconnected, and life is all IDEs now.

I showed my son, at a young age, terminal commands to browse the filesystem. It really seems to be helping him now that he's coding.


How old was your son when you started teaching him the command line, coding etc?


Started using command line at around 11 on his Mac. Really started to understand coding at 13, writing in Java and Python.


Frankly, we've been fantasizing about moving away from the file-system-as-tree abstraction since forever, wanting to embrace a metadata-rich database approach.

Can't happen soon enough, imho.


But we're not moving towards a metadata rich database, we're moving towards balkanized proprietary walled gardens in the cloud.


That's true. The problem is that even when you pay for the service, the incentive is to suck your data away, data-mine the hell out of it, and hold it for ransom.

It's like cable TV: originally one paid a premium to avoid advertisements. Today it's just the norm, and you get the advertisements anyway.


Both of these are true. We're moving towards walled gardens in the cloud, but within these clouds, your "files" are becoming rows in a highly sophisticated database, perhaps in addition to blobs in a highly sophisticated object storage system.


But it doesn't matter, because you, the user, aren't exposed to the benefits or even the abstraction of that database. You don't get to make arbitrary queries for metadata. You get access to whatever half-featured interface the vendor bothered to implement, whereas the database is optimized for their operational and data-mining purposes.


Agree! Can't happen soon enough.

My project Aquameta is one such attempt: trying to rethink the programming stack, and by extension all user data, as relational data instead of files, with the filesystem still there under the hood, but basically as a bootstrapping mechanism to get the database running.

Here's an essay about the reasoning and approach:

http://blog.aquameta.com/intro-chpater2-filesystem/

The TLDR is that the filesystem lacks a sophisticated information model, and doesn't provide mechanisms for defining structures, and we really need to embrace this basic requirement at a systemic level.

Boot to PostgreSQL! :)


> a metadata rich database approach

Can you ELI5 what this means? Very curious to understand ideas on alternatives to the traditional file-system abstraction.


Not the guy you were asking but...

I do a lot of self-study, and I use Calibre (ebook library software) to organise my reading materials. I started using it because my Linux file browser has problems displaying covers for EPUB files (and a couple of other reasons). I like that my reading material has lots of ways for me to organise it without creating duplicates (tags mainly), and lots of ways to search and browse my books. Then one evening I found myself wishing I could have all my other relevant 'files' in the same system: audio, video, source code, repos, blogs, podcasts, notes, etc. Basically a metadata-rich database; except not really a database in my case, as I need to pull in 'stuff' from all over the place, and not only on one device.

Files are nice, and I hate anything that takes away my ability to organise stuff on a physical level (though, truth be told, Calibre stubbornly refuses to let you set up your own folder structure, so I cheat by using tags). But at the same time, the problem with the skeuomorphic paradigm of files and folders is that there are so many more options open in the digital world than in the real world when dealing with items.

Imagine being able to sit at your office desk and open the drawer to pull out a piece of paper. When you look at the paper it shows you some text. Rotate the paper to the left and the text disappears and a video starts playing, turn it to the right and you hear a podcast, place it on your desk and 4 more pieces of paper appear next to it - or it calls someone relevant to the info.

All possible with digital info, but not with a real piece of paper - yet.

That is the only thing I see as a real negative about files and folders - it may be limiting what we can really do with our digital world because we're stuck in a file-system metaphor created hundreds, if not thousands, of years ago.


Typically when people talk about that, they mean they want to move from an approach where files have a location to an approach where files are queried based on metadata.

So rather than going to /home/kaslai/images/ to (hopefully) see all my images, I'd instead query my filesystem, perhaps like /owner:kaslai/type:image.

The specific syntax would need some serious thought, especially if we want to make it fairly ergonomic to slot in to existing systems.

There are a large number of approaches that one could take. I'm personally a fan of a hybrid, where you could navigate to a location and then make queries on anything within that location.

IMO one of the coolest features that such a filesystem could have is dealing with music. There's no perfect way to map CD rips to a hierarchical filesystem, but that wouldn't be a problem if I could just open a "directory" like /home/kaslai/music/?artist/?album/ which would automatically make virtual directories based on the artist and album, or I could just use /home/kaslai/music?artist=Smash\ Mouth to get a list of all 182 copies of All Star that I have in my library.
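
A toy sketch of that hybrid idea in Python (in-memory metadata only; a real implementation would read tags with something like mutagen):

    # Group items into virtual directories by metadata keys.
    from collections import defaultdict

    tracks = [
        {"artist": "Smash Mouth", "album": "Astro Lounge", "title": "All Star"},
        {"artist": "Smash Mouth", "album": "Astro Lounge", "title": "Then the Morning Comes"},
        {"artist": "They Might Be Giants", "album": "Flood", "title": "Birdhouse in Your Soul"},
    ]

    def virtual_dirs(items, *keys):
        tree = defaultdict(list)
        for item in items:
            tree["/".join(item[k] for k in keys)].append(item["title"])
        return dict(tree)

    print(virtual_dirs(tracks, "artist", "album"))
    # {'Smash Mouth/Astro Lounge': [...], 'They Might Be Giants/Flood': [...]}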


Have a look here: https://arstechnica.com/information-technology/2018/07/the-b...

Basically the idea is that the OS identifies and indexes all data on disk providing primitives to access content by query rather than only by a hierarchical path.

This applies to user content but also to object code and libraries (providing versioning, compatibility, integrity, whatever criteria).


Almost all file systems have metadata. It's up to the apps what metadata to store and display. Music files have artist, album, etc.; photos have location, camera settings, etc. The tree structure is just a feature. You can, for example, make one file appear in many folders. User rights are often linked to the tree structure, which is very useful.
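
The "one file in many folders" bit is a one-liner on POSIX; both names end up pointing at the same inode (filenames here are just examples):

    # Hard link demo: a second directory entry, no data copied.
    import os

    with open("report.txt", "w") as f:
        f.write("same bytes, two names\n")

    os.link("report.txt", "archive-report.txt")
    print(os.stat("report.txt").st_ino ==
          os.stat("archive-report.txt").st_ino)  # True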


We're definitely moving away from file-system-as-tree, but our destination seems to be more like grandma saving every file on the desktop than anything resembling rich metadata.


Wasn't that the way Google Docs was originally implemented? But everyone preferred having it organized more like a file system, so Google Drive got a folder structure.


The same could be said about terminals; they used to be used by everybody, now they're only used by power-users. I don't think that's inherently bad. I didn't use a terminal until college, but it wasn't a barrier by any means.


I have similar anecdotes. I encounter a lot of Gen Xers and Boomers expecting a linear progression of computer literacy, inversely correlated with age.

"You're so lucky you grew up using this technology, so you're used to it whereas I have to struggle - yes, do ignore that I had more than your whole lifespan to get up to speed on this ubiquitous reality"

But I encounter people under 24 who can navigate a touch screen pretty well yet lack any fundamental knowledge, let alone the ability to type proficiently on a keyboard.

Fortunately, most activities no longer require this kind of thing. Auto-saving and better UI solve a lot.

I can see how wanting to teach these things will have challenges.


I have a theory that in ten years or so the tide may turn. That’s enough time for this generation to have lost, or have had to deal with unpleasant migrations of, their valuable data due to online services shutting down... repeatedly.

I don't know. On the other hand, most older people like myself have had to deal with data loss from failed, obsolete, or lost storage devices. There's still no "one size fits all" very-long-term solution for consumers.


If the market works its magic, we probably won't go back, but will end up with services that keep getting more reliable with time and with each outage. If Google/AWS have an outage where folks lose business-critical data, there will then be a market for extremely-high-reliability data services. As you point out yourself, not dealing with the hassle of doing backups yourself has been a huge advantage. Ease of use and convenience will trump quality for the majority of use cases, and the market will keep shifting in that direction.

If I try to view this from a non-technical person's viewpoint, this shift has been transformative, really. Most folks don't care if their data has been transcoded a million times as long as it's still useful and easily accessible. This has protected a ton of businesses and users from being at the mercy of their IT team.


One of the dangerous factors of cloud-first thinking is that it tends to abstract away people's concerns for redundancy. "Oh, CloudBrand handles distributed backups for me in 32 different regions, I checked the box for that." And when CloudBrand has some new scale of failure, it just takes down all your copies at once.

Every service is "extremely high reliability" until it isn't. Sometimes, the causes of failure are even beyond technical ones. Look at the situation with Adobe Creative Cloud users in Venezuela-- the discs and wires are still fine, but customers are losing real value because of legal mandates. Your high availability platform can withstand a network cable cut, but can it withstand a court order?

If you're still thinking local first, it encourages awareness of multiple baskets. iCloud is down? You can still pull the photos off your camera's flash card, or your workstation's SSD, or the external spinning-rust hard disc you used for cold backups, or the third-party dedicated backups service you use...


> If I try to view this from a non-technical persons viewpoint, this shift has been transformative, really. Most folks don't care if their data has been transcoded a million times as long as its still useful and easily accessible.

Until they try to print their photos for the family album and discover that pictures taken by their high-end camera or an expensive iPhone were degraded by successive transcodings to the point they look like garbage in print, and at that point there's no way to fix it.

> This has protected a ton of businesses and users from being at the mercy of their IT team.

And put them at the mercy of third-party vendors. It's a trade-off, but having a local IT team has its benefits.


It will probably expand in both directions. There is work going into new creation tools. They function best client-side.


The escalator never went down in the first place. Files are a weird, unclean semi-abstraction (growable virtual sparse block devices addressed as seekable byte-streams with heavy metadata and OS-level memory caching?) that we only feel a degree of primacy about because of how common they've been.

Consider: unikernels (like, say, any old cartridge game ROM) don't have any need for files. They have precisely three abstractions for dealing with data:

• a .data section in ROM (maybe needing bank-switching to get in place);

• some kind of byte-addressable NVRAM (like battery-backed "save RAM", or CMOS memory) either bus-mapped, or through MMIO.

• tape or (floppy) disk, sometimes at an extremely low level (send commands to the drive motor, write guard nybbles, etc.), sometimes through a DOS where you can just request to seek to a given track then read or write a given sector on that track. Either way, more like a block device than a filesystem.

---

For today's kids, I'd suggest: don't start with files. Teach key-value storage first. Interact with a library like LMDB without explaining where it's putting the data. They'll understand this just fine.
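
The first lesson could literally be this small. A sketch using the third-party Python lmdb binding (keys and values are placeholders; where "classdata" ends up on disk is deliberately not discussed):

    # Key-value storage, no files mentioned (pip install lmdb).
    import lmdb

    env = lmdb.open("classdata")            # an opaque place data goes
    with env.begin(write=True) as txn:
        txn.put(b"favorite_color", b"teal")
    with env.begin() as txn:
        print(txn.get(b"favorite_color"))   # b'teal'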

Then, teach object storage in terms of key-value storage. Object storage—especially once you add object versioning—is much closer to the modern metaphor that user-facing apps expose. You compose a complete new version of an object in an in-memory scratch buffer, and then it atomically replaces the previous object. You can't corrupt an object by half-saving it. Etc. Again, don't bother explaining how this works yet; just give them a scripting runtime hooked up to a Minio instance.
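
The compose-then-atomically-swap semantics can even be demonstrated without Minio, emulated on ordinary files with a write-then-rename sketch (the helper name and filenames are hypothetical):

    # Object-store-style atomic update: readers see the old version
    # or the new one, never a half-saved object.
    import os, tempfile

    def put_object(path, data: bytes):
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)   # the atomic swap

    put_object("profile.json", b'{"name": "demo"}')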

After they get that, you can ask them what they'd do if they needed to create an object that wouldn't fit in memory. Then you can explain block devices, as a "place where large scratch buffers can live"—but don't force them to figure out how to allocate those buffers from the block device! That's gonna pull in a whole bunch of prerequisite teaching. Instead, pull out another API: Linux's LVM. Logical volume management takes block devices in, and spits block devices out. The logical volumes are the scratch buffers. Explain mmap(2), and how these buffers end up a lot slower than memory buffers. Explain how these buffers survive a disk crash.
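
A sketch of that "durable scratch buffer" step in Python, with a plain file standing in for the logical volume (the filename is hypothetical):

    # mmap(2): ordinary memory writes, backed by durable storage.
    import mmap, os

    fd = os.open("scratch.buf", os.O_RDWR | os.O_CREAT, 0o644)
    os.ftruncate(fd, 4096)            # reserve one page
    buf = mmap.mmap(fd, 4096)
    buf[0:5] = b"hello"               # write like it's memory...
    buf.flush()                       # ...then push dirty pages to disk
    buf.close(); os.close(fd)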

After you get to that point, then you can explain that all the other ways computers durably store data are built on top of these logical-volume durable scratch buffers. You can explain how LMDB works in terms of disk pages; and then you can explain how a content-addressable storage works in terms of combining an LMDB-like KV store with hashing and splitting.
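
The hashing part of content-addressable storage can start as an in-memory toy before anyone mentions pages at all:

    # Content-addressable storage in miniature: the key IS the hash.
    import hashlib

    store = {}

    def put(blob: bytes) -> str:
        key = hashlib.sha256(blob).hexdigest()
        store[key] = blob
        return key

    addr = put(b"hello, world")
    assert store[addr] == b"hello, world"   # retrieval by content hash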

And, after that—if you like—you can explain that sometimes, when we need something that's like object storage but where everything in the storage bucket is actually a tiny scratch buffer, we use filesystems. You can explain what an "extent" is by talking about how something like LMDB, that has a "freelist" of pages from its underlying logical-volume, can reserve a contiguous set of those pages and then let something else access them. Then you can explain a filesystem as a key-value store that has buckets called "inodes" with page-range keys, and extent-address values. (And an associated versioned object store of directory-objects, where each object is a serialized list of records (dirents), each containing a reference to an inode and giving it a name and other stuff.)

Filesystems are hard!


> Filesystems are hard!

Huh, no. Just show how the FAT file system works. It's still used on USB drives.

Now that you have a simple model for the low level, you can gradually add complexity such as standard file operations, caching, wear-leveling, etc.
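
The essential geometry really is just a few fixed-offset fields in the boot sector. A minimal Python sketch, assuming a raw image called fat.img (offsets per the standard BIOS Parameter Block):

    # Pull the basic FAT geometry out of the first 512 bytes.
    import struct

    with open("fat.img", "rb") as f:
        boot = f.read(512)

    bytes_per_sector,   = struct.unpack_from("<H", boot, 11)
    sectors_per_cluster = boot[13]
    reserved_sectors,   = struct.unpack_from("<H", boot, 14)
    num_fats            = boot[16]

    print(bytes_per_sector, sectors_per_cluster, reserved_sectors, num_fats)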

You say Minio instance, I say hex editor.


I didn't say the implementation of a filesystem was hard. You can certainly communicate, as a complete standalone fact, the brute engineering perspective on what makes a particular filesystem work. Probably even in a single lesson.

But the point isn't to teach students about a filesystem; the point is to teach students what files are when they don't understand why you'd even want the abstraction that "files" represent; when "files" aren't an abstraction they're familiar with compared to other abstractions. You have to justify "files" in the context of things they do know—to explain what purpose files would serve if they were invented today; what reason you'd introduce a filesystem into a data architecture that had all these other things (KV storage, object storage, logical volumes) but not files.

Students would ask: Why are files streams? Why are the streams arbitrarily writable, when this means files can become corrupted from a partial write? Why are files extendable, but only at the end, not in the middle or the beginning? Why do files have names? (And, once you explain inodes+dirents, they'll ask why there's a directory tree if hard links exist; and why it's the dirents that have the names, but the files themselves that have the other metadata—xattrs; alternate data streams, etc.) What is the difference between a file stored in a directory, a file stored in a filesystem stored in a mounted loopback image stored in another file, and a file stored in a subvolume or filesystem container? Why do directories need special system calls to read and write them? Why is a virtual filesystem used in many OSes to represent kernel objects like named pipes and device nodes? Etc.

(This isn't even a hypothetical; engineers at AWS and GCP probably had to answer this very question when asked to justify building EFS and Cloud Filestore. Why do we need a filesystem as a service, when we already have these other services that provide these other abstractions?)

This is not altogether unlike teaching a student what a programming language is and why you'd even want one of those, when they're immersed in an environment where software is created without them. Would you just sit down and show the kids C, because it's small and easy-ish to understand?

A filesystem is a data-storage abstraction, like C is a machine abstraction. But neither are primitive abstractions. They build on numerous other, theoretically purer abstractions. It's much easier to explain the motivation for the creation of a language like C, if you already understand the motivation of the creation of a simpler language that rests upon fewer other abstractions—like ASM, or like Scheme. Likewise, it's much easier to understand the motivation behind the creation of the abstraction that is filesystems, if you already understand the motivation behind the creation of simpler data-storage abstractions, like logical volumes or memory-page-backed search trees.


The answer to "why do you want files as an abstraction" is that files are the units of ownership of data. If you don't control the files that represent your data, you don't own it. You might think you do, but you don't, because someone else ultimately decides the fate of those files.


My point was that there don't need to be files anywhere to represent data at all. Computers can work entirely without files. For example, your whole hard drive could consist of an RDBMS, where you'd not "download" files, but rather "download" streams of tables, which would import directly as tables into the RDBMS.

"Files" are a very specific abstraction; thinking they're the only way to transfer chunks of data around is a symptom of lack of imagination. There are very similar abstractions to files, such as object-store objects. The only practical difference between files and objects is that updates to objects are transactional+atomic, such that nobody ever sees an "in progress" object. But an object store (backed by a block device) is a simpler system than a filesystem (backed by a block device.)

You can control the objects that represent your data. You could also control, via an RBAC-like system, the K-V or tuple-store records that represent your data. Or the Merkle-tree commits. Or the blockchain transactions. Or the graph edges. Or the live actor processes holding in-memory state. You can transfer all of these things around between computers, both by replication (copying) and by migration (moving.) What do files have to do with any of this? An accident of history is what.


They are not the only way to transfer chunks of data.

They are the most successful and versatile mass way to transfer chunks of data and define ownership in the history of computing.

I'm sure an RDBMS or a graph DB can do those things as well. But no one has succeeded in doing it even close to as effectively as files have. And many have tried. In fact, probably the greatest computer software failure of all time, Windows Longhorn, was largely a failure in trying to replace a file-based system with a graph-DB-based system.

People very much can imagine alternatives. There are no shortage of imaginable alternatives. There is a huge shortage of successful in use alternatives that are as versatile or effective as files.


You're focusing on "files" as they compare to things very different from them. But imagine for a moment what an OS with an object store in place of a filesystem, would be like. Pretty much exactly the same, except that the scratch buffers backing temp files and databases wouldn't hang off the object-store "tree", but rather would either be anonymous (from mmap(2)), or would be represented by a device node (i.e. a logical volume) rather than being objects themselves. All the freestanding read-only asset bundles, executables, documents, etc. would stay the same, since these were always objects being emulated under a filesystem to begin with.

And downloads would also be objects. Because, when you think about it, at least over HTTP, downloads and uploads already are of objects—the source doesn't get allocated a scratch buffer on the destination that it can then freely seek(2) around and write(2) into; instead, the source just streams a representation to the dest, that gets buffered until it's complete, and then a new object is constructed on the dest from that full local stream-buffered copy. (WebDAV introduces some file semantics into HTTP's representational object semantics, but it doesn't actually go all the way to enabling you to mount a DBMS over WebDAV.) Other protocols are similar (e.g. FTP; SMTP.) Even BitTorrent is working with objects, once you realize that it's the pieces of your files that are the objects. Rsync is the only weird protocol, that would really need to be reimplemented in terms of syscalls to allocate explicit durable scratch buffers. (That and SMB/NFS/AFP/etc., but those are protocols with the explicit goal of exposing a share on a remote host as something with filesystem semantics, so you'd kind of expect them to need filesystem support on the local machine.)

Now, want to know something interesting? We already have this. Any inherently copy-on-write filesystem, like APFS or btrfs, is actually an object store masquerading as a filesystem. You get filesystem semantics, but they're layered on on top of object-storage semantics, and it's more efficient when you strip them away and use the object storage semantics directly (like when using btrfs send/receive, or when telling APFS to directly clone a container.) And these filesystems also have exactly what I mentioned above: special syscalls (or in this case, file attributes) to allocate scratch buffers that bypass Copy-on-Write, for things like databases.

There's no reason that a modern ground-up OS (e.g. Google's Fuchsia) would need to use a filesystem rather than an object store. A constructive proof's already there that it can be done, just obscured a bit behind a need for legacy compatibility; a need that wouldn't be there in a ground-up OS design.

(Or, you can take as a constructive proof any "cloud native" unikernel design that just uses IaaS object/KV/tuple/document-storage service requests as its "syscalls", and has no local persistent storage whatsoever, never even bothering to understand block devices attached to it by its hypervisor.)


> "Files" are a very specific abstraction; thinking they're the only way to transfer chunks of data around is a symptom of lack of imagination.

I didn't say they were the only way to transfer chunks of data around. I said they were the units of ownership of data. If your data is somewhere in a huge RDBMS mixed together with lots of other people's data, you don't own it, because you don't control its fate; whoever owns and manages the RDBMS does. The same goes for all the other object control and storage systems you mention: individual people who have personal data don't own any of those things.


This is true. Way back when, I was teaching a commercial course on C to a class which included a COBOL programmer. At the end of one session, he came up to me and asked, "But what is this 'memory' stuff, and why would I ever want to use it?"


That's a cool idea! What resonates most with me is starting somewhere in the middle of the ladder of abstraction, low enough that a lot of the fundamental CS concepts are visible, but high enough that beginners can make something real, now. I like that networking would be built in from the start.

I'd also want to teach beginners in a way that lets them keep building and tinkering outside the environment I provide for them. This is where I'd love to see schools providing and encouraging the use of hosting space and resources like Minio, so that students can just count on them being available.


I also teach computer science (part-time adjunct, I have a regular industry job too), so the quality of computer literacy in the incoming student body is on my mind pretty regularly. This has been especially troublesome when teaching operating systems, because the OS is the deepest layer in the pile of abstractions students have to peel back over the course of their CS education. The pile of abstractions is only getting higher; the day is not far off when we have to peel back the abstraction of "persistent data generally" in terms of files before we can get on to peeling back the abstraction of files themselves. Or who knows, maybe we'll all abandon the desktop analogy of computing altogether, with its hierarchies of files and links, to move on to some new purely-graph-theoretical database notion of persistent data. Somebody will still have to talk to the disk controller, however, and I can only imagine how distant that reality will be (and already is) from most students' day-to-day experience with computing.

At the same time, I can see a great equalizing force in all this. Computers have reduced the technical acumen demanded from their users to the point that owning a computer, using a computer, and even being a "computer enthusiast" doesn't put you very much ahead of anybody else, on average, when it comes to starting out towards becoming a computer scientist. I think this, in combination with the fact that everybody wants to be a software engineer these days, will eventually put us somewhere around the 1970s in relative terms with regard to technical literacy in the cohort of incoming computer science students. This might sound nightmarish to the current establishment, but it has at least one positive side-effect: computer science is getting more accessible because teachers can no longer assume that pupils come from a background of "quasi-technical" computer literacy (again, this is because conventional computer literacy has become decreasingly technical in nature).

I've heard that one of the general causes behind the lack of gender and economic diversity in tech workers is that, at some point around the mid-1980s, CS instructors started asking of their students tasks like "Open your editor and type..." and someone in the classroom would raise their hand and ask "Ah, um... what's an 'editor'? And by the way, I don't own a computer either," and the instructor's reaction would be to privately advise that student to seek a different major, at best, or open derision at worst.

So I think we're getting away from that, which is at least a way to look at the bright side. It does make the teaching job a bit more challenging.

On a more personal gripe, a minor irritation of late is the number of students who want to do their OS homework in the Windows Subsystem for Linux, instead of even setting up a basic VM.


> On a more personal gripe, a minor irritation of late is the number of students who want to do their OS homework in the Windows Subsystem for Linux, instead of even setting up a basic VM.

That's easy to solve, and can give the students a history lesson.

Give them some tasks that work with ~/con :)

Or you can have them work with files that are all named the same but differ in case. Windows-based machines get "confused".
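
For example, have them run this on Linux and then on Windows and compare the directory listings (filenames are arbitrary):

    # Two names differing only in case: two files on a case-sensitive
    # filesystem, one silently clobbered file on Windows defaults.
    import os

    for name in ("Report.txt", "report.txt"):
        with open(name, "w") as f:
            f.write(f"I am {name}\n")

    print(sorted(os.listdir(".")))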



