Since 1 June 2012, I've been taking notes in Unicode text files, which contain (occasional or adjacent) lines starting with 'nb ' and then a list of tags. I wrote a simple tool ("nb") in Inferno's shell (thanks to Robert J. Ennis for the port to Plan 9's rc) to (1) search for given keywords in per-directory index files pointed to by the global index, (2) index all of the nb lines in files in the current directory, and (3) if necessary, append to a global index file a reference to the index file in the current directory.
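The shape of the tool is roughly this, sketched here in plain POSIX shell rather than the Inferno sh original (the names .nbindex and $HOME/lib/nbglobal are just placeholders, not the real files):

#!/bin/sh
# Sketch only: index 'nb ' lines in the current directory, register the
# per-directory index in a global list, then search by keyword conjunction.
index=.nbindex                 # assumed per-directory index name
global=$HOME/lib/nbglobal      # assumed global list of index files

# (2) record every 'nb ' line as file:line:nb tag ...
grep -n '^nb ' ./* 2>/dev/null > "$index"

# (3) register this directory's index in the global list, once
grep -qx "$PWD/$index" "$global" 2>/dev/null || echo "$PWD/$index" >> "$global"

# (1) print index entries that contain every keyword given as an argument
while read -r idx; do
    out=$(cat "$idx" 2>/dev/null)
    for kw in "$@"; do
        out=$(printf '%s\n' "$out" | grep -F -- "$kw")
    done
    [ -n "$out" ] && printf '%s\n' "$out"
done < "$global"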
I've found that I'm comfortable with the eventual consistency this offers, in exchange for fast lookups when I want something (as opposed to indexing first, and/or indexing globally, and so waiting for indexing to get a result). This distributed-file approach also allows me to add tags to a variety of files: local files, or networked file-system files, or sshfs-mounted files, or Dropboxed files, or files under version control, or files with varying text formats; and find tags across all of them and across all the time I've been indexing.
It runs in linear time with respect to the number of tags I've entered, plus the time to read and process the global index, so obviously there are many ways I could improve the time performance (as an easy example, I could permute the index to list all the tags in alphabetical order, and next to each tag list the files that contain that tag).
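For instance, inverting that grep-style index into an alphabetical tag-to-file listing is roughly a one-pipeline job (assuming the file:line:nb tag ... layout sketched above):

# Emit "tag file" pairs, one per line, in alphabetical tag order.
awk -F: '{ n = split($3, t, " "); for (i = 2; i <= n; i++) print t[i], $1 }' .nbindex | sort -u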
I also wrote other tools, since the layout is so simple: for example, "nbdoc", to catenate the actual contents of the references returned by the primary tool (nb); and "so" (second-order), to return all the tags which appear in any nb line with the given tag(s).
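A rough single-tag version of the second-order lookup, over the same assumed index layout (not the actual "so" code), might be:

# Print every tag that shares an nb line with the tag given as $1.
xargs cat < "$HOME/lib/nbglobal" |    # every indexed nb line, all directories
    cut -d: -f3- |                    # keep only the 'nb tag ...' part
    grep -w -- "$1" |                 # lines mentioning the query tag
    tr ' ' '\n' |                     # one token per line
    grep -vx -e nb -e "$1" |          # drop the marker and the query tag
    sort -u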
I've also found that it's not easy for me to remember what tags I might have used in the past, or how I was thinking about something, so I try to use the conjunction of several tags to narrow down search results, rather than try to remember one specific tag (this seems to correspond to the observation that it can be difficult to remember exactly where in a hierarchy you put something).
The modular approach, of per-directory indexes referenced in a global file, also makes it easy for me to combine work-specific, public, and private notes in the same global index file at work, but keep only the public and private notes at home.
I made my own Secret Hitler deck, and have been playing it for a few months with both adult and teen groups for whom I usually run Werewolf games. SH has been really well received, keeps everyone more involved with the game (both because of fewer deaths and because of the basic mechanics), and is already well on its way to supplanting Werewolf as the social game of choice.
I used Emacs from 1993 to 2004, then switched to Acme, which I've used for the 11 years since. I don't miss trying to memorize all the key combinations from Emacs. I like that Acme presents a clean, simple, and direct Unicode interface to what I work with: mostly editing shell scripts and running shell commands, as a build engineer. It takes a while to get used to mouse-button chording, but I don't even think about it now. I constantly use guide files, in many directories, to store and modify commonly used commands to highlight and run, so I make many fewer typos now, and don't forget which commands to run or how I run them. I can also switch contexts a lot faster, both because commands are laid out in the directories where I use them, and because the Dump and Load commands store and retrieve sets of files in the tiled editor subwindows.

When I had to work on Windows, I enjoyed having a pared-down unixy userland I could write scripts in, and use also in my Linux Inferno instance (mostly communicated from one instance to the other through a GitHub repo for backup and version control). The biggest drawback of Inferno for me is that so few other people run it that I have to compile it myself on any new platform I run it on (there are not really rpms/debs/etc. available to just install it). Your experience with Plan 9 Acme might be better; I just prefer also working with the Inferno OS improvements, such as bind, /env, sh, etc.
I told Facebook to delete my account as soon as they implemented this feature, years ago, and I found that (at least at the time) I could not block it on my Android phone. So at least I haven't, since then, been party to giving Facebook information about other people (information they most likely have anyway), since the Facebook app is not installed on my phone.
Composing a larger program by combining functions in different languages in a new framework seems to take things in the direction of large and complicated programs. Would it not be simpler to write smaller, independent, individually named and reusable programs in several languages, according to each language's strengths, and pipe the output from one of these programs to the next? As an added bonus, not all the programs need to understand the entire problem domain.
This line of thinking ignores simple domain-specific languages like SQL, which 9 times out of 10 are held in strings in the host language (and thus undergo no static checking, because the contents of strings are basically ignored by the compiler). There are other examples too: HTML files contain nested JavaScript, CSS, etc. PHP files host HTML. C++ effectively hosts C and assembly.
Haskell hosts a few dozen languages called "extensions" - specified in a {-# LANGUAGE #-} pragma at the top of the file. This one is of particular interest because the language appears to have some kind of extensible syntax which allows these extensions to occur - except when you look under the surface, they're all combined into the same grammar and they all interact with each other, such that other extension developers basically need an entire understanding of all of them to know where conflicts may lie.
Using a framework like LanguageBoxes instead to host these kinds of extensions would allow individual developers to put their own, independent extensions into the language, without having to hack on the compiler and rebuild it.
Also, writing several independent programs and using some form of IPC to communicate between them is ideal in theory, but in practice it is more often unsuitable, because Unix processes are heavyweight. They take time to initialize and use far more memory than necessary for what amounts to running code for a short time and then discarding it. Perhaps if we had a more lightweight model of processes, à la Erlang, this kind of reusability would be practical and not just a good philosophy to follow.
Was this a permanent or temporary effect on Algernon the mouse? If we continue the amino-acid mutation along the line of the difference between the mouse and human FOXP2, can we expect great things from Charlie?
I use Inferno to develop software, as a virtual OS on top of Linux or Windows. Its community doesn't see any value in writing software just to make it easy for newbies to use. It is still actively maintained, with new changes from Plan 9 development. There are even some new software tools developed in it (eg, I wrote a build tool). Maybe this qualifies?
The redo-inspired build tool I wrote abstracts the tasks of composing a build system, by replacing the idea of writing a build description file with command-line primitives which customize production rules from a library. So cleanly compiling a C file into an executable looks something like this:
Find and delete the standard list of files which credo generates, and any derived objects which are targets of *.do scripts:
> cre/rm std
Customize a library template shell script to become the file hello.do, which defines what to do to make hello from hello.c:
> cre/libdo (c cc c '') hello
Run the current build graph to create hello:
> cre/do hello
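For comparison, a hand-written redo-style hello.do would contain roughly the following; credo's library-generated script is in Inferno sh and differs in detail, so treat this as illustrative only:

# hello.do (plain-redo flavour, for illustration)
redo-ifchange hello.c    # declare the dependency on the source file
cc -o "$3" hello.c       # build into the temporary target redo passes as $3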
Obviously this particular translation is already baked into make, so it isn't anything new, but the approach of pulling templated transitions from a library by name scales well to very custom transitions created by one person or team and consumed at build-construction time by another.
I think this approach reduces the complexity of the build system by separating the definition of the file translations from the construction of a custom build system. These primitives abstract constructing the dependency graph and production rules, so I think it's also simpler to use. Driving the build-system construction from the shell also enables all the variability in that build system that you want without generating build-description files, which I think is new, and also simpler to use than current build-tool approaches. Whether all-DSL (eg make), document-driven (eg ant), or embedded DSL (eg scons), build tools usually force you to write or generate complicated build description files which do not scale well.
Credo is also inspired by redo, but runs in Inferno, which is even more infrequently used than Go (and developed by some of the same people). I used Inferno because I work in it daily, and wanted to take advantage of some features of the OS that Linux and bash don't have. Just today I ran into a potential user who was turned off by the Inferno requirement, so I'll probably have to port it to Linux/bash, and lose some of those features (eg /env), to validate its usability in a context other than my own.
EDIT: Replaced the old way, which called a script to find and delete standard derived objects, with the newer command.
Inspired somewhat by redo, I wrote "credo" as a set of small command-line build tools, so build description files are in the shell language rather than a standalone DSL.
(I don't like how makefiles have so many features that reimplement what you can do in the shell. I also don't care for big languages with build-tool DSLs, though you could say credo is a build-tool DSL for the shell, like git is a version-control DSL for the shell, with only language directives and no constructs.)
I wrote it in the Inferno shell to take advantage of some nice OS features and its cleaner shell language. One of these days I should port it to bash, so other people might use it.
In high school I was blacklisted from an admin position for demonstrating that you could write a Digital Command Language program that simulated the login environment, stored login attempts, and then, after three tries, exited to the real login environment to let the user in. In college I was nearly expelled for just mentioning to the IT guys that they didn't have a password on some database, and that I could get in with just telnet. These attitudes haven't changed much since 1990 at least.
> These attitudes haven't changed much since 1990 at least.
Why would they?
A blatant oversight is a sign of incompetence, and by making such incompetence public you're threatening their job security. Why would anyone react positively?
You're better off making the disclosure anonymously.
> You're better off making the disclosure anonymously.
When the info comes from an anonymous source, they can't take their frustration out on the messenger (instead of thanking the messenger, as they should). I don't get why these hackers often give up their anonymity.
> I don't get why these hackers often give up their anonymity.
My guess is that it's because they're hackers, and they don't expect that the other side consists mostly of boring incompetents with zero sense of humour or professional pride. Geez, if I were ever responsible for ITSEC in a school, I'd take such a hacker out for a beer and dare him to try and break some more stuff. The state of mind which leads people to prosecute hackers is a very sad one.
Funnily enough, I did the exact same thing when I was in high school, only we were running Novell on NT4; I did it in BASIC and started it from autorun.bat, which loaded before the network login screen.
It would let you try one time, tell you you'd entered the wrong password (saving it to a file), and exit, at which point Windows would load the Novell login screen, which looked exactly the same.
Hah! Exact same thing, I used... Borland Basic, IIRC, to build the executable that I called from autorun.
I collected many passwords - I never used them or intended to, I just wanted to see if I could do it.
I made the classic mistake though - I told someone about it. A few days later word got around. I was suspended for a week and was banned from computers for the rest of my time there.
Edit: Now that I think about it (I haven't in years): What kind of response is that? Someone shows some creative thinking, and does so in a way that is obviously[1] quite naive and without ill intent. While I understand that you want to discourage the specific behavior, perhaps steering the culprit toward using those talents with more foresight would have been a better answer.
[1] Looking back, I was something of an asshat in the personal skills department so it's entirely possible that they simply didn't believe my lack of nefarious intent.
Actually, I doubt it had anything to do with your personality (and if it did, shame on the authority figures; the same is true if they were reacting to being made to look foolish). No, instead I imagine this was a pure security play: you had a bunch of passwords, and you hadn't done anything with them yet. But they would have had to believe that (and chances are they had no way to independently verify it), and in addition believe that you would never do anything with them in the future. The second claim is rather tougher to believe than the first.
So in the simplest possible manner you became a "known threat", and they dealt with you in the simplest possible manner, digital ostracism.
Now we can all tell the alternative story, about the wise teacher who sees something special about us in the misdeed, and who takes the time and the risk to cultivate that positive seed rather than throw the baby out with the bathwater, so to speak. Our very own Mr. Miyagi to save us from a misspent youth, who understands our behavior as an expression of exploration ignoring limits, of outsmarting the system, rather than your basic mean-spirited destruction for no reason. (Although tagging and hacking do share many qualities, and both are driven, I think, by a young man's desire to prove himself, and yes, even aggrandize himself as someone special - bold, clever, crafty, someone who can't be "kept down by the man". Rebellious, but also desperately needing to prove himself.)
(Of course in this story Mr. Miyagi would have hacked into your personal systems, encrypted the passwords you'd stored, and then left a personal message notifying you that if you wish to understand what he did and how he did it, he'll meet you after school in room 10 for a primer on real hacking.)
I too did something similar in high school. We were running Novell on either Windows 98 or 95 (I can't recall the specific version now). I started the Visual Basic program using autorun.bat like you, except instead of presenting a fake login dialog, my program would listen for the OK button click event in the real dialog (using the Windows API), grab the contents of the username and password textboxes, and then POST them to a web server a fellow classmate had set up. This had the advantage that it was completely invisible to the end user. The program was also hidden from Task Manager (also using the Windows API).
We did end up getting the admin password and getting access to the server. I had written another program (also in VB) that would run hidden in the background and randomly open and close the CD-ROM drive. I uploaded this program to the server and attempted to get it to push to all of the computers in the school, but I don't believe I was successful as I didn't really know anything about Novell and never saw it working on any machines.
One of my fellow classmates also found the school's SOCKS proxy, so we were able to run AIM and ICQ on the school machines. Our teacher pretty much let us do whatever we wanted in that class. It was my third year taking a programming class with her, and she allowed the advanced students to work on their own projects. In that class I also wrote a group/IM chat client in VB with a Perl server. As GrinningFool said, responding to teens who are obviously interested in computers with bans or expulsion or worse is just stupid. If I hadn't had the freedoms that my teacher gave us in those classes, I wouldn't have learned anywhere near as much as I did.
Witches were burned at the stake; basic human reactions have not evolved since the Dark Ages. So the basic premise of the somewhat trivialistic movie "Hackers" (the one with Jonny Lee Miller) holds true: the rest of the world is sheep (and it will get worse as we migrate from general-purpose computing to specialized devices). Most of the world is made up of unwashed masses who consider computing something that resembles magic; you only scare and confuse them by talking about all the smart things they do not really understand or grasp.
I'll conjure up the respected Arthur C. Clarke and his third law: any sufficiently advanced technology is indistinguishable from magic. Put scared people and magic together and you've got bonfires going. This is why hackers rot in jail for longer than murderous psychopaths.
But then again, for most of my life I was more of a black hat than a white one.
I was threatened by my middle school with a ban on computers at high school... because I had copied game demos to other users' accounts. [They gave me their usernames and passwords.]
I also figured out how to access the middle school's library database without a login. [It wasn't secured, nor did it require a password.]
Also, I nearly got in trouble with the IT administrators at my high school because I found out how to send Novell messages.
https://github.com/catenate/notabene