Fix 260 character file name length limitation: Declined (uservoice.com)
126 points by cryptos on Oct 7, 2013 | 102 comments



I was made aware of this artificial limitation through a pretty vivid edge case while I was a student some years ago. TortoiseSVN had somehow borked and began to recursively create .svn directories inside of each other, as fast as it could, until I managed to kill the process.

Many can guess what happened next: I went to delete the erroneous top-level .svn folder and was given an error, the gist of which was: "Cannot remove. Path too long." There's more to the story, but it turns out the real answer was to boot into a Linux LiveCD and delete from within there.

I found it strange that one set of Win32 calls allowed TortoiseSVN to create paths of a certain depth, but the other set, used by Microsoft in their tools, would not allow me to delete them. These days I understand the reasons behind it, and so instead find it merely annoying.


This is only related to the path length: if you recursively delete from the top level it doesn't work, but if you switch your CWD to a lower-level directory you can delete the files.

e.g.

    rm -rf C:/REALLY/LONG/PATH/

doesn't work, but

    cd REALLY
    cd LONG
    rm -rf PATH
    cd ..
    rm -rf LONG
    cd ..
    rm -rf REALLY

works (Windows equivalents, of course).

In past projects I've ended up rewriting most of the System.IO code that dealt with paths to fix this issue. Basically all you have to do is prefix your path with \\?\ and voila, it works. (Except in .NET, which detects you're doing this and stops it.)
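
For illustration, a minimal sketch of the same prefix trick against the raw Win32 wide API (the path is a placeholder, and RemoveDirectoryW only removes an empty directory, so you would work from the innermost level outward):

    #include <windows.h>
    #include <iostream>
    
    int main() {
        // The \\?\ prefix bypasses normal Win32 path parsing, which also
        // lifts the MAX_PATH (260 character) length check.
        const wchar_t* deepDir = L"\\\\?\\C:\\REALLY\\LONG\\PATH"; // placeholder
    
        if (!RemoveDirectoryW(deepDir)) {
            std::wcerr << L"RemoveDirectoryW failed, error " << GetLastError() << L"\n";
            return 1;
        }
        return 0;
    }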


I just googled \\?\ and got no results because Google is just that way. Can you explain more about that, please?


The following should explain a bit better than I could: http://msdn.microsoft.com/en-us/library/aa365247%28VS.85%29....


It's hard to Google, but that syntax is called UNC: http://en.wikipedia.org/wiki/Path_(computing)#Uniform_Naming...


As other posters have said, it's UNC; however, if you're interested in other weirdness around files in Windows, check out NTFS streams.


A UNC always references a network path. You can use \\?\ to reference a network path, but it can also be used to reference a local path. \\?\ is not always used to represent a UNC.

\\?\ is an extended-length path.


\\?\ is an unparsed path - the path is passed directly to the filesystem, bypassing checks and other Win32 API restrictions - a side effect of this is that maximum length checks are bypassed, but this is NOT the only effect.

See GP's link for more details.
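
For concreteness, both of these prefixed forms are accepted by the wide Win32 APIs (placeholder paths, shown purely as an assumed illustration):

    \\?\C:\some\very\long\local\path\file.txt
    \\?\UNC\server\share\some\very\long\remote\path\file.txt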



This is going to look like Linux because I don't use Windows, but you can automate this by writing two scripts and putting both in your path.

    #!/bin/bash
    # descend into nested .svn directories; the loop ends when cd fails
    while cd .svn
    do
      :
    done

The other looks like this

    #!/bin/bash
    while [ 1 ]
    do
      cd ..
      rm -Rf *
    done

Now just run the first until it can't cd any deeper, and then run the second to clean your hard drive very thoroughly.

(note I know darn well what script #2 does ... this would probably make a good debugging question for a sysadmin interview... in case you don't get it, for god's sake don't run script #2 unchanged)


Actually I saw script #2 in an interview question a little while back (not that the interviewer knew what it did; he didn't seem to have a programming background). I don't exactly recall what the question was, but it was something to the effect of "what are your thoughts on the safety of the following script" or the like.

The good thing about *nix platforms is that you have nuclear options at your fingertips. The bad thing is that you can't be impulsive or an idiot, because you have nuclear options at your fingertips ;)


Ditto here; it must have been a management interview fad a decade or two ago. The sysadmin version of FizzBuzz. My particular version was a one-page script that wiped an existing software installation (and a bit more, LOL) and dropped a new version on top of it. The kind of thing that has pretty much been eliminated by using git/etc as a distribution tech and/or Puppet.

So VLM... nice to meet you, here's the first version of our software upgrade script; I was wondering what you'd change. I started laughing, pointed out a few things, and he said I passed.

It had a lot more wrong with it. Weird quoting mistakes (single vs double, also something along the lines of grep "-R blah blah"). And there were basic conceptual issues, like copying the binaries to /usr/local/bin before replacing the old distribution with the new one, such that you'd end up with the old ones. And the path contained a version number, and the script tried to change user PATH env variables ... to the old version's path.

The proper interview-style fix for my creative solution would be something like rm -Rf .svn instead of rm -Rf *, or bothering to check that the output of pwd is longer than X characters using grep -c and some comparisons, or playing games with chroot. It's a great interview question because you get a feel for the candidate's style: are they most comfortable with chroot, or doing text manipulations of pwd, or are they a minimalist who likes to make the smallest possible change, or into rewriting the whole blasted thing, or...


> in case you don't get it, for god's sake don't run script #2 unchanged

Downvoting a guy who does have a warning seems fairly harsh... However, for safety's sake, I would recommend prefacing the code with the warning instead of having it at the end.


You should post both scripts unaltered to stack overflow for the LULZ.


When I was still at MS, a lot of these kinds of bugs didn't get fixed -- the code checking against the old (smaller) path size would often not be owned by anyone anymore, and nobody wanted to touch it (because that would make them the new owner).


THIS is what is so crazy-making about this problem. You can keep all the paths you create intentionally under 260 characters, no problem, but when a piece of code screws up and goes over, it can make your life hell. Over a stupid file.

If they're not going to fix the problem, and I respect why they're not going to, at least make it easier to work around. It just seems like on Windows you often have to deal with silly, incredibly annoying problems like this that other platforms don't have.


If you ever want to fix an issue like this in the future, it's very easy to just create an empty folder on c:\ named something like "empty" and use robocopy to mirror the empty folder into the folder that is too deep.

    robocopy c:\empty c:\so_many_subdirectories /mir

will do the trick.


NTFS has actual support for hard file links, but the only usage I've ever seen was for jokes similar to this one.
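
For reference, a minimal sketch of creating one through the Win32 API (both file names are hypothetical and must live on the same NTFS volume):

    #include <windows.h>
    #include <iostream>
    
    int main() {
        // Create a second directory entry pointing at the same file data.
        if (!CreateHardLinkW(L"C:\\data\\copy.txt",     // new link name (placeholder)
                             L"C:\\data\\original.txt", // existing file (placeholder)
                             nullptr)) {
            std::wcerr << L"CreateHardLinkW failed, error " << GetLastError() << L"\n";
            return 1;
        }
        return 0;
    }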


I use junctions/hard links to alias part of my filesystem into Dropbox. There is a real use case for it.


I use them for snapshots.


I use them as OS based file versioning for my backups.


Same here, but with Windows' own backup tool.


I had a similarly annoying experience, but with Dropbox.

On my Ubuntu PC, I created an encrypted folder in my Dropbox using EncFS. Turns out EncFS produces very long folder names and file names if you use the default, most secure setting with per-file IVs. Still, everything worked fine until I booted my Windows PC. As soon as Dropbox started trying to sync with the Windows PC, it ran into the 260-char path length limit and quietly began to truncate their names. Result: corrupted EncFS folder.


The same issue happened to me some years ago. The solution to get rid of .svn directories was to use a simple robocopy script I found online (ntfs-3g support for writing was rudimentary back then).


Unfortunately this kind of thing is the reason why Windows is in a death spiral that it probably won't get out of.

Microsoft has had many chances to introduce clean new APIs: for instance, .NET could have been built on top of a simpler platform, or Metro could have been an opportunity to take out some cruft.

In the end Metro breaks everything (if you think the median programmer could write non-trivial apps with asynchronous I/O, think again). Amazingly, Metro almost feels like a tablet operating system when it is running on top-of-the-line hardware, but the ultimate attack on latency is to remove things from the stack, not to add them.

Personally I think Windows today (even Win 8) beats the pants off Linux and MacOS as a GUI operating system, but I seriously have to ask questions such as "will Intel be making new x86-compatible processors 10 years from now?" when I see Microsoft and Intel consistently painting themselves into a corner.


Yup. Each version of the .NET framework makes developers go through a cycle of excitement and disappointment when they find out that (1) there's all these neat new awesome libraries, but (2) the new libs take weeks to properly understand and (3) are just as baroque and poorly-thought-out as their predecessors, but in completely new ways and (4) the old ones will never be fixed and are effectively deprecated (but nobody formally admits it).

So many opportunities to settle down the MS development ecosystem into something sensible and consistent... so many pathetic missteps. Linq2SQL getting deprecated almost as soon as it was launched, Click Once installers just being an unusable mess (and now they've killed the old-but-usable installer projects), the endless shuffle of GUI frameworks, etc.

So many talented people firing out projects that are almost good.


It shouldn't matter. NT was designed to be platform agnostic; it was written on the Intel i960, a long-forgotten RISC processor from Intel, by guys who had cut their teeth on VAX and Alpha, and was released on x86, Alpha, MIPS and PPC. There was even a SPARC port which never saw general release. I was running SQL Server on NT on Alpha in '96 or thereabouts, and bloody quick it was too! Porting NT 7 (or whatever Windows 8 really is) to ARM or POWER or whatever is not going to be a big deal.


The trouble isn't that "Windows can't be ported to ARM, etc." but more that Intel and Microsoft have, together, been making consistently bad decisions for a long time.

For instance, I'd say the tablet experience is more defined by having an SSD than having a touch screen. The hybrid hard drive that has been pushed in the WinTel ecosystem (including ReadyBoost) is a joke.

Customers don't feel minimum latency, mean latency, or median latency. They feel maximum latency. An SSD reduces maximum latency, but half-baked caching schemes don't. Yes, they improve measurable things like boot time, but a quick boot is little solace when your web browser regularly locks up for 20 seconds. (Even if the machine could reboot faster than the time the browser is locked up)

If there is one fundamental tenet of the anti-customer corporate ideology, it is "throughput computing": the idea that nine women can produce a baby in one month.

Customers don't perceive throughput, they perceive latency. When you're stuck on the 405 going 5 mph you aren't going to be philosophical and multiply that 5 mph by the density of vehicles, you're going to feel like it's slow for me...

Had the Windows 8 requirements included "no HDD", people would be saying "Wow, Windows 8 is fast!", instead a touch screen adds $100 to the cost, as does the Windows license, so users get shortchanged on the fundamental performance of the machine when they are trying to hit a given price point.


> Had the Windows 8 requirements included "no HDD", people would be saying "Wow, Windows 8 is fast!"

That's really clever. I was very underwhelmed by Windows 8, but I was moving from an SSD-only Windows 7 system, which is still relatively rare. If W8 black-desktoped on a system with less than 1000 IOPS on the system drive, the way it does if you have an unsigned driver, there'd be a whole ecosystem of modern hardware and software that could assume fast storage I/O, with no backwards-compatibility loss at all.


I had a netbook that I was planning to do wearable computing experiments with and the first thing I noticed was that the HDD would shut down if I had it in a backpack and moved faster than a slow walk.

The combination of moving from Win 7 starter to Win 8 (meaning remote desktop and all the goodies get unlocked) and going to an SSD has made it a really awesome machine. Win 8 really does get a lot out of an SSD, but if you hamstring your machine with an HDD or a "Hybrid" Hard Drive it doesn't matter how fast your CPU is if I/O makes the machine go out to lunch.


I have a netbook that has outlasted two macbooks. 1GB RAM, 1.6 GHz Atom processor, but it has an SSD... and I think the latter has proved to be the most important feature.


> Porting NT 7 (or whatever Windows 8 really is) to ARM or POWER or whatever is not going to be a big deal.

I'm uncertain they kept cross-platform systems running around (whereas I'm reasonably certain Apple has a full OSX running on ARM internally, as they had OSX/x86 long before they left POWER behind). Although NT was built to support a number of architectures, it's been almost 15 years since NT last ran on non-x86 systems (AlphaNT, the Alpha port of Windows 2000; MIPS and PPC had already been dropped by this time). That's a very long time, and more than enough for x86 dependencies to creep in.

On the other hand, WinRT is supposedly a full NT kernel on ARM, so...


Windows through Server 2008 ran on Itanium, so they've been fine with non-x86 at least up until that point.


The Itanium build continued through Server 2008 R2.

Thus, every version of Windows NT has been released on at least one non-x86 platform.

- NT 3.1-3.5: x86, Alpha, MIPS

- NT 4.0: x86, Alpha, MIPS, PowerPC

- 2000 through 2008 R2 (7.0): x86, Itanium

- 2012 (8.0): x86, ARM


2012 is actually NT 6.2 (as is Windows 8).


> 2012 is actually NT 6.2 (as is Windows 8).

Notice that I listed Windows Server 2008 R2 as 7.0 rather than as 6.1. And that I listed Windows 2000 as 2000, rather than as 5.0.

In other words, I was giving the equivalent workstation OS -- not the internal NT version number.


I think WinRT is evidence enough that Microsoft can build NT for other architectures if need be. I'm sure it wasn't as simple as typing "make" to get an ARM version of Windows, but it certainly didn't take them very long to do it either.


I would point out that NT is already running on ARM; it was demoed at one point prior to the launch of Windows 8.

http://blogs.msdn.com/b/b8/archive/2012/02/09/building-windo...


> I'm uncertain they kept cross-platform systems running around

WinRT is a full NT kernel on ARM so your statement is incorrect. Also, Windows runs on IA64 (Itanium).

With Visual Studio you can currently compile code for IA32, AMD64, IA64 and ARM.

NT is very well done in regard to cross-platform compatibility: all platform-specific code is centralized in one location, and if you need platform-specific instructions, you must use the Hardware Abstraction Layer (HAL).


Technically, Cutler started NT on PRISM¹, an ancestor of Alpha, when it was still called Mica.

I've never heard of NT for the i960 before. Are you sure you don't mean the i860? Very different machines.

¹Not that one, this² one.

²http://en.wikipedia.org/wiki/DEC_Prism


NT originally targeted the Intel i860 (just an emulator, at the time). Intel's codename for the i860 was "N10". Microsoft's "NT" name was an abbreviation of "N Ten".

https://en.wikipedia.org/wiki/Windows_NT#Naming


That I can believe. Somehow the i860 fits well with Windows in my mind. Though nominally RISCy, it made the mistake later repeated by Itanium: high theoretical performance impossible to achieve in real programs (not to mention interrupt latency long enough for a vacation on Alpha Centauri).

The i960, on the other hand, had IMHO a great instruction set (even after losing the capabilities¹), with clean orthogonal 3-address instructions combined with programmer-friendly addressing modes.

¹http://en.wikipedia.org/wiki/Capability-based_addressing


You are correct.


Windows 8 already runs on ARM.


Better than that: The original NT design is from Dave Cutler, who designed a series of good-to-great OSes for DEC, including RSX-11M, VMS, and Mica, which never saw the light of day but was meant to be VMS-meets-Unix for the brand-new RISC chip DEC was making before it transmuted into Alpha (the chip, not the OS).

He's on the Xbox team now. He helped design the OS for Xbone, which, at this point, can't possibly tarnish his reputation that badly.

http://en.wikipedia.org/wiki/Dave_Cutler

http://www.krsaborio.net/research/1990s/98/12_b.htm


> In the end Metro breaks everything

Does it actually break anything? In the sense that programs written 10 years ago don't work.


The Windows backwards compatibility model includes a lot more than just old binaries continuing to run. To a substantial extent you can take code written for 16-bit windows and recompile it in 64-bit mode, intermixing it with code targeted at the latest version of Windows.


"will Intel be making new x86-compatible processors 10 years from now?"

I currently have in my hand an ARM tablet running Windows 8.



Yep, our Maven project routinely fails to build on Windows as there are some paths which are just too long.


The "forever" in the title does not exist in the linked response, and I would also note that the question seems to be in regards to Visual Studio and has a very VS-oriented response.

Some Windows APIs support 32k paths but since not all do, very few applications can claim to support more than 260.


> "gets in the way of having a deeply-nested project hierarchy"

Do people actually want deeply-nested project hierarchies?

For example, from a real-world, full operating system, the deepest path I can find[0] is in a build directory and 184 characters:

> "./ports/lang/python26/work/Python-2.6.1/portbld.static/build/temp.XXXXXX XXXXX-vHEAD-amd64-2.6/XXXXXX-home/XXXXX.git/src/ports/lang/python26/work/Python-2.6.1/Modules/_multiprocessing"

(Names changed to protect the "innocent.")

[0]:

    $ mkfifo ../names
    $ find . > ../names &
    $ python
    >>> with open("../names") as f:
    ...   l = 0
    ...   nm = None
    ...   for n in f.readlines():
    ...     if len(n) > l: l, nm = len(n), n
    ...   print l, nm


I've often seen normal people come up with deeply nested directory structures with long descriptive names and they get bitten hard by the 260 character limit. For example names like this are very common in my experience:

    C:\Documents and Settings\Joe Random User\My Documents\Some Cloud Software\Company\Projects\2013\156 Big Customer Inc\Locations\Someville Foostreet\Problem reports\2013-10-07 product not working\images\video of product not doing what it's supposed to do.avi
Also:

    find . | python -c 'import sys; print max(sys.stdin.readlines(), key=len)'


The longest I can find on my real-world, full operating system is also a build directory. However it's 349 characters:

    /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_lang_v8/v8/work/v8-3.21.3/out/x64.release/.deps/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_lang_v8/v8/work/v8-3.21.3/out/x64.release/obj.target/v8_base.x64/src/extensions/externalize-string-extension.o.d


That path looks like it's growing exponentially.


Not necessarily want, but I've come across it just by installing a node project into my home folder on XP, something along the lines of:

C:\Documents and Settings\____._______.________\project\node_modules\package\src\tests\somelib\helpers\test123\file.html

What made this troublesome for me is that my networked profile refused to sync while the file was there (and it couldn't be removed without renaming each layer to single-character folders). Additionally, the profile sync failure copied back files I had deleted, confusing the heck out of me :)


    $ find / -mindepth 10 | wc -L 
    273
I seem to have some slightly longer file paths on my home computer. Do I need that? Probably not. I could move those files somewhere else. But should I really have to worry about such things?


Reached 248 here when run against my home dir. Interestingly, the winner is the path to a transitive npm dependency five(!) levels down the dependency tree. The command I used:

    find -mindepth 10 | awk '{ print length(), $0 | "sort -n" }'  | tail


Right now, sure. But suppose some piece of software goes rogue and starts creating dirs and files. Further suppose that there's enough of them to fill the disk. OK, now your disk is full and nothing on the machine works, so you need to delete those deeply-nested files. Except, whoops!, the paths are too long, so you can't delete them with the command line. OK, now you're writing a script or burning some godawful boot CD to try to get to a safe mode or Linux kernel. All in order to delete some silly unnecessary files.

Problems like this aren't problems when everything is working normally. But that's beside the point. When the shit hits the fan and the system is only getting in your way and adding to the problems instead of helping solve them, it's really really really not fun.


I know, when the shit really hits the fan, Visual Studio is the tool I would use to try and clean up those messy directories.

</sarcasm>. The complaint is about Visual Studio's decision in particular, not the weakness of (some) (old) Win32 APIs.


Or use Cygwin, limited only by the OS (32K path IIRC).


Certain SBT plugins and build settings can generate really deep nested paths under the "target" folder of a project. I think the point of this is to enforce file immutability during the build process - whenever a file is modified by a build stage, a new file is created instead.

I could be completely misunderstanding this though since SBT is fairly complex.


I don't think people do, but that doesn't mean it doesn't happen by accident. Until we can get the file system as a database, and all of this can be abstracted, it can be a real issue. That said, the target here is a developer; a bit of careful planning can avoid said complications.


Errr, it says Visual Studio will limit the path length for now. Windows already supports longer path names, if you use the right APIs.

My understanding is that it's not worth going over all of their code to use the newer APIs since they also rely on third parties that they can't force to update. Those would still break when dealing with longer paths.

Still sucks (the path length limit is indeed problematic, especially when dealing with Java code), but let's not exaggerate things, shall we?


I have the same feeling as you... I don't understand where other people's hostility is coming from. This sounds like it would be a minor change. Will Buik, who posted the decline note, said that it would affect many products and features. Besides making those changes, they also have to be rigorously tested. And then the change may not even be consistent all across the board because of third-party tools that depend on the 260 character limit assumption... In the end, Microsoft has limited resources just like every other company. They have to allocate it in a way that creates the most value for customers, I'm assuming. Bug fixing and addressing customer complaints is creating value, but it has to be weighed against other priorities, depending on the severity.


People are hostile because they're tired of Microsoft holding things back. These decisions, while reasonable, ultimately don't add up to good decisions. They're not obligated to do anything else, but I'm not obligated to be nice about it.


Third parties in their own company...


MAX_PATH is a #define, so by increasing it you immediately cause buffer overflow vulnerabilities in every program which ever does a wchar_t foo[MAX_PATH] = {0}; since the old limit will be baked into the binaries.

Even if VS restricts itself to 32k-aware APIs, you'll hit this eventually when shelling out or calling any third-party code. Never going to happen.


Not only is it never going to happen, they would be mad to try.

Even if they did make the change and somehow managed to re-test and fix all their software, they would still have created the potential to break every piece of Windows software in existence.

Making the change has the potential to cause untold chaos, the likes of which have never been seen in the software industry.

It would make Y2K look like a paper cut.


OK, I may just be overtired, but how would changing a define cause this? It's a static number: you make it bigger and ordinary programs just can't see the extra length in buffers. They're not going to be assuming they can execute memory they didn't allocate, and Windows won't allocate them memory from a buffer string that's already been handed out.

Even if they're just looking for a null terminator, they'll crash when they overrun the bounds if written properly.


Consider something like

    wchar_t buff[MAX_PATH] = {0};
    int (*callback)(void);
    
    if (NULL == ::PathCombine(buff, /* ... */)) {
      return; // Error, don't continue
    }
      callback();

If this is compiled with a small MAX_PATH and run on a system with a large MAX_PATH, then the system can write a value that's too long, overflow `buff` and overwrite `callback`, which would then execute the buffer contents and probably crash. Building an exploit out of this is left as an exercise for the reader (e.g. supply PathCombine with some shellcode).

I picked a random API, but there are several win32 functions which accept a character buffer without an explicit size, and mention MAX_PATH in the MSDN documentation.


You could patch those functions if you were Microsoft and overload them so they had a form which worked explicitly with long paths, though I suppose that's getting beside the point since Windows already has the necessary APIs - the problem is MS for some reason insists on not transitioning to them.


There is nothing to say the developer did not create their own version of that API function:

    if (NULL == ::MySpecialPathCombine(buff, /* ... */)) {
      return; // Error, don't continue
    }

There is no way Microsoft can patch that, but the problem is still there.

Also back to the PathCombine example. When the third party executable was compiled, the Windows SDK would have defined PathCombine to take a string the size of 260 (or whatever value MAX_PATH was at the time it was compiled).

If Microsoft did patch the call and then returned more than 260 characters, it would just crash the calling third-party executable, as it would only be expecting at most 260 characters.

In effect, Microsoft would have redefined the static PathCombine API signature at runtime, making it some sort of dynamic signature.


Am I the only one who had a response of "OK" to this? While I think this limitation is so sad it's funny, I haven't had problems with it in years.


Check out some of the replies on the page. It seems that under certain not-rare circumstances, Visual Studio itself will generate paths longer than 260 characters.


What's important here is not the bug or how it can be fixed or worked around. What's important is the fact that they'd rather create new features than fix bugs. Now that might be undue (this is the very definition of an edge case), but it's just one more nail, proving that they'd rather "move forward" than look back. Not a good image to have. Even if it's just an image.


oh no, how will my Enterprise Java Enterprise Application work on our Windows Enterprise Production Enterprise Datacenter Edition?


This is a global issue with Windows, not with Visual Studio. If you use these "new" APIs to write above MAX_PATH [1], the file system will obey you, but tons of other tools will not. One of these minor tools is Windows Explorer![2].

So if you updated Visual Studio to support it, a ton of other tooling and UX would fail. This needs to be a corporate-wide, product-wide, and third-party initiative, the scale of which is likely not worth the result.

[1] http://msdn.microsoft.com/en-us/library/windows/desktop/aa36...

[2] I don't heavily use Windows anymore, but when I tested pre-Windows 8 machines with paths longer than MAX_PATH, Explorer did not work.


Looking for the longest path on my machine, I was slightly amused to learn it was this one (length of 345 characters):

/Applications/Xcode.app/Contents/Developer/Documentation/DocSets/com.apple.ADC_Reference_Library.DeveloperTools.4_6.docset/Contents/Resources/Documents/recipes/instruments_help-stack-trace-help/Displaying_Your_Source_File_That_Contains_a_the_Symbol_in_a_Stack_Trace/13_Displaying_Your_Source_File_That_Contains_a_the_Symbol_in_a_Stack_Trace.html


Dear Microsoft, you made thousands of us suffer the pain of dealing with your API to access long paths.

Please now have at least the decency to do the same without complaining.


I always thought this limitation came from NTFS, or at least from a previous version of it.

Maybe this will be resolved in the gradual move to ReFS, which I am under the impression is the long-term future of the Windows world.


> I always thought this limitation came from NTFS, or at least from a previous version of it.

NTFS allows paths up to 32767 UTF-16 code units (after expansion), with each path component (between `\` separators) up to 255 code units, and this is available through \\?\ extended-length paths.

MAX_PATH is a Windows API limitation, and thus a limitation implicitly inherited by much Windows-based software that uses MAX_PATH for storage limits and buffer sizes.

Changing filesystem will not fix WinAPI issues.
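
As a quick way to see the split, a minimal sketch (assuming an NTFS C: volume) that asks the filesystem for its own per-component limit and prints it next to the Win32-level MAX_PATH constant:

    #include <windows.h>
    #include <iostream>
    
    int main() {
        DWORD maxComponentLen = 0;
        // Ask the volume (not the Win32 path parser) for its per-component
        // limit; NTFS reports 255, independent of MAX_PATH (260).
        if (GetVolumeInformationW(L"C:\\", nullptr, 0, nullptr,
                                  &maxComponentLen, nullptr, nullptr, 0)) {
            std::wcout << L"Max path component length: " << maxComponentLen << L"\n";
        }
        std::wcout << L"MAX_PATH: " << MAX_PATH << L"\n";
        return 0;
    }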


I wonder whether there was a right answer at the time MAX_PATH was conceived. It certainly is nice to have a hard upper bound for the buffer sizes path API functions will fill, and 32k would be a bit large to use by default.


NTFS is actually a pretty glorious filesystem. It even includes support for soft and hard links... but for some reason Windows doesn't use them and instead prefers its weird shortcut file abstraction, which never works outside of Explorer.


You haven't looked at the default filesystem layout of Windows Vista and later, have you?

Hardlinks are extensively used in WinSxS to link the DLLs and executables to places where they used to live. Symlinks are used for many of the legacy directory names that some applications have hardcoded, e.g. C:\Documents and Settings links to C:\Users.


But the user has no access to file symlinks without escalating to admin every time they want to make one.


> But the user has no access to file symlinks without escalating to admin every time they want to make one.

By default.

This is a security restriction, like UAC. And so you can turn it off, just like you can turn off UAC.


And requiring that makes it useless for most purposes.


The 'shortcut' thing came from Win95, which ran on the VFAT filesystem.


It comes from the operating system's pathname parsing code. That is why you can do things with a deeper current working directory.

There is an alternate parser, but the name has to start with \\?\ and then it takes up to about 32k characters of text.


I think that's the file name limitation. I think NTFS allows 32767 for the full path...


Yes, NTFS would support much longer paths than 260 characters.


Perhaps someone has posted it, but I don't see it here:

https://en.wikipedia.org/wiki/Subst

Got a really stupid-long base path? Alias it.


This .NET library from the .NET BCL team helps a little: http://bcl.codeplex.com/wikipage?title=Long%20Path&referring...


Fixing it in Visual Studio would still leave it unfixed in the Windows API and for almost all other software; e.g. you can't open files with paths longer than MAX_PATH from Windows Explorer. Visual Studio support would not help much.


If Windows had good enough support for symbolic links, this wouldn't even be a problem in practice. There is always some way to solve the problem other than declining it.


Another fun one: the Line Number variable in Visual Studio's debugger is 16-bit. In huge generated files, you can't step through code beyond line 65535.


This is not actually true. I just tried it.


Oh my god. I don't use Windows anymore, but I remember this limit of 260 characters from DOS 6.0. I just assumed they had fixed it long ago.


And this, children, is why we take proper abstraction very seriously.


There is an old joke which is now actually true: the best Win32 implementation is Wine on Linux :-)


Why downvote? Oh I see; fixed version: Wine on Mac.



