It would also have been quite impractical before the development of affordable solid state drives - GCs tend to involve quite a lot of random access, after all...
That said, I'm far from convinced allowing cycles in a directory structure would actually be useful for anything.
Yeah, I was wondering if it would allow applications to create "detached" filesystems. I could see that being (sorta) useful: a filesystem which is automatically cleaned up when the application exits.
Of course you can already do that easily enough with /tmp, but /tmp has problems of its own: if it is shared between all users then it is a well-known source of security problems, and if your OS has a private /tmp then that has its own drawbacks (i.e. it isn't shared between users of the same application). The other problem with /tmp is that it isn't "garbage collected" very quickly -- on my Fedora server, unreferenced /tmp files stay around for up to 10 days.
With GCFS it looks like you could get rid of /tmp altogether. Applications could just create a directory anywhere (e.g. some random name under $HOME), "detach" it by removing "..", and keep it open for as long as they need it, after which it gets GC'd quickly and automatically.
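For the sake of argument, here's roughly what that flow could look like from an application's point of view. Everything except step 3 is ordinary POSIX; the unlinkat() of ".." is the hypothetical GCFS part -- on any real filesystem today that call simply fails, so treat this as a sketch of what a GC'd filesystem might permit:

```c
/* Sketch only: assumes a hypothetical GC'd filesystem that lets a process
 * remove a directory's ".." entry to "detach" it from the tree.  On any
 * real Linux filesystem the unlinkat() below fails (EINVAL/ENOTEMPTY). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* 1. Create a scratch directory somewhere ordinary, e.g. under $HOME. */
    const char *home = getenv("HOME");
    if (!home) return 1;
    char path[4096];
    snprintf(path, sizeof path, "%s/.scratch-XXXXXX", home);
    if (mkdtemp(path) == NULL) { perror("mkdtemp"); return 1; }

    /* 2. Hold an fd to it; this is what would keep the subtree "live"
     *    from the GC's point of view. */
    int dirfd = open(path, O_DIRECTORY | O_RDONLY);
    if (dirfd < 0) { perror("open"); return 1; }

    /* 3. Hypothetically "detach" it by removing its ".." link, so it is
     *    no longer reachable from the tree -- only via dirfd. */
    if (unlinkat(dirfd, "..", AT_REMOVEDIR) < 0)
        perror("unlinkat(\"..\") -- refused by real filesystems");

    /* 4. Use it like a private /tmp via the *at() calls. */
    int fd = openat(dirfd, "scratch.dat", O_CREAT | O_RDWR, 0600);
    if (fd >= 0) {
        write(fd, "temporary data\n", 15);
        close(fd);
    }

    /* 5. Once the last fd is closed (or the process dies), nothing
     *    references the directory any more and the GC can reclaim it. */
    close(dirfd);
    return 0;
}
```

The nice property is that "cleanup" becomes nothing more than "close the fd" (or crash), which is exactly the moment you'd want the GC to kick in.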
Technically, apps using temp files are supposed to unlink them right after opening them, so that they get cleaned up as soon as they are closed. The same trick is sometimes used for shared things like UNIX sockets, too.
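For anyone who hasn't seen the pattern, it's just this (plain POSIX, nothing hypothetical):

```c
/* The classic "anonymous" temp file: create it, then unlink it immediately.
 * The open fd keeps the data alive; the kernel reclaims it when the last
 * fd is closed, so nothing is ever left lying around in /tmp. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/example-XXXXXX";
    int fd = mkstemp(path);          /* create a unique file, opened O_RDWR */
    if (fd < 0) { perror("mkstemp"); return 1; }

    unlink(path);                    /* remove the name right away */

    /* The file still works through fd, it just has no directory entry. */
    write(fd, "scratch data\n", 13);
    lseek(fd, 0, SEEK_SET);

    char buf[32];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);                       /* storage reclaimed here, automatically */
    return 0;
}
```

On reasonably recent Linux you can avoid even the brief window where the name exists by using O_TMPFILE (e.g. open("/tmp", O_TMPFILE | O_RDWR, 0600)), which creates an unnamed file in that directory directly.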
Yes, this was my first thought. Graphs are more general than trees, but sometimes the extra flexibility isn't worth the added complexity. The main byproducts would be (a) making it easier to crash your system with an infinite directory loop and (b) making the notion of 'up a directory' more wishy-washy.
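To make (a) concrete: most directory-walking code out there looks something like the sketch below, and it only terminates because "." and ".." are the only cycles it ever has to think about. Hand it a genuinely cyclic directory graph and it recurses until it runs out of stack or path length:

```c
/* A naive recursive directory walk.  On a tree this terminates; on a
 * directory graph with a cycle it keeps descending until it blows the
 * stack or exhausts the path buffer -- byproduct (a) above. */
#define _DEFAULT_SOURCE          /* for d_type / DT_DIR on glibc */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static void walk(const char *path)
{
    DIR *d = opendir(path);
    if (!d)
        return;

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
            continue;                     /* the only cycles handled today */

        char child[4096];
        snprintf(child, sizeof child, "%s/%s", path, e->d_name);
        printf("%s\n", child);

        if (e->d_type == DT_DIR)          /* no visited-set: assumes a tree */
            walk(child);
    }
    closedir(d);
}

int main(int argc, char **argv)
{
    walk(argc > 1 ? argv[1] : ".");
    return 0;
}
```

The fix is to track visited (st_dev, st_ino) pairs, which is exactly the bookkeeping a graph-shaped namespace would push onto every tool that currently gets to assume the hierarchy is a tree.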