Visual Studio and Team Foundation Server will have Git support (msdn.com)
395 points by logictrip on Jan 30, 2013 | 213 comments



Rock on, Microsoft — this is great news for Windows developers. As much as the open source community (of which I am a part) loves to rag on Microsoft, they seem to have recognized the threat of platforms moving off of Windows (Steam, iPads, Android, ...) and are taking reasonable steps to encourage development for Windows (make the developer experience better).

This — a reasonable response to a potential threat — is a huge step for Microsoft. Kudos, VS team.


As a (now former) long-time Windows developer, and even though people like to bash on it, I still think VS (>2003) is the best god damn IDE I've ever used.

And with .NET it leaves everyone in the dust, IMHO.


You've obviously never suffered through a large project. Crashes, memory gobbling, random debugger failures, glacial speed, lag, project configuration corruption, UI glitches, refusing to load projects, half-completed refactoring operations, search/replace just stopping, having to reset the environment at least once a day, grey screen of death.

And that's just the IDE. The CLR is the only framework and language combo I've used which will quite happily just stop working one day with no human intervention.

It's a bag of shit for me and I resent using it.

For ref, I've been building software on Windows since 1994, from Win32 to WPF to ASP.NET to MVC to WCF.

The only positive thing I can say is the money is good, but it's danger money.


Yes, agreed (having worked on middling-to-big projects with maybe 40 VS projects per solution, 45+ minute recompile times (not from scratch) on the latest-gen CPUs, etc., and several hundred million LoC). But believe me, if you think that's bad, you don't want to see Eclipse/NetBeans/Xcode/etc. with a project a thousandth of that size.


Slightly curious, what kind of projects were you working on that are "several hundred million LoC"... wouldn't 45 minutes be reasonable for that many lines of code?

I ask because I've never even come close to touching a project with that many SLOC, and I was also under the impression that most modern operating systems barely fit into the 100-million+ LoC category, correct? This is your chance to redefine my perspective on "big project", haha.


A lot of the code may be automatically generated.

Some C# developers use T4 to generate data access code, for instance. This can amount to several hundred thousand lines of code for a moderately sized database. If there are several large databases being accessed, as is often the case in reality, then I could easily see there being millions of lines of automatically generated code in a single project.
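
To make that concrete, here is a minimal T4 sketch of the kind of repetitive data-access codegen being described (the table names and namespace are invented for illustration; a real template would read the schema from the database):

  <#@ template language="C#" #>
  <#@ output extension=".cs" #>
  <#
      // Hypothetical table list; a real generator would query the database schema instead.
      var tables = new[] { "Customer", "Order", "OrderLine" };
  #>
  namespace MyApp.DataAccess   // illustrative namespace
  {
  <# foreach (var table in tables) { #>
      // Generated stub for the <#= table #> table.
      public partial class <#= table #>Repository
      {
          // CRUD methods would be emitted here, one block per column/operation.
      }
  <# } #>
  }

Multiply that pattern across a few hundred tables and several databases and the generated line count climbs quickly.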


I think when you hit several hundred million lines of code in a solution, a governor kicks in to give you more time to think about just what the hell you're doing.


If a single project is several hundred million lines of code, VS probably isn't the problem.


I know nothing about nothing, but having that many projects under the same solution is the biggest clue that you are doing something very wrong.

Even if you have a reason to have 40 projects in what looks like it should be one solution, you can still create various solution files with just the subset of projects you need. No one works on 40 projects at the same time.


We have a solution with more than 80 C# projects, and visual studio 2010/2012 handles it fine on modest hardware. I usually hit shift+F6 to build just the current project when I'm iterating on some change. This builds very fast because our individual libraries are small. It doesn't really feel "wrong".


Unless you have a very good reason to deploy and distribute 80 different DLLs, it should feel wrong.

Many people think that in order to have a well-layered and decoupled application you need to break up every single piece into a separate project, and that makes no sense. That's what folders and namespaces are for.


For C#, maybe (not enough experience, but I would be surprised if what you say is a good practice).

For C / C++: You do that with static libraries, not shared libraries, and that's the only sane way to work: Have all library projects part of your main workspace, so you can easily debug and fix stuff in them, yet manage them independently.


For production/distribution you can always use a tool like ILMerge or SmartAssembly to merge DLLs/EXEs together into something that makes sense for that particular distribution. I've had plenty of success with both tools.
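
As a rough sketch of what that looks like in practice (the assembly names here are hypothetical), merging a couple of libraries into the main executable with ILMerge is a one-liner:

  ilmerge.exe /out:MyApp.Merged.exe MyApp.exe CoreLib.dll DataLib.dll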

That said, I do get what you're saying. Good practices can be taken to the extreme. Some approaches, such as Prism, take the idea of breaking the UI up into modules that can be registered with a shell - where, on reflection, that type of flexibility is rarely going to be needed.


It's sadly far too common for .NET developers to carve up their solutions into far more projects than are necessary. This can seriously slow down compilation times.


Actually if you are changing a dependency right at the bottom of the chain, it makes things much faster as each compilation unit (assembly) is less likely to be coupled to the changed code.

Large assemblies are the enemy of compilation time, as you can't partially compile them (like you can with individual Java classes).


Yes but that's not the case if you're changing a dependency in the middle of the chain, as is more commonly the case.

Additionally, as far as I can tell, the main thing that slows compilation down isn't so much the actual compilation step as loading in and copying all the project references. Visual Studio does this separately, from scratch, for every project that you rebuild, since each project compilation runs in a separate csc.exe process.

Quote from Jeremy Miller: "I took 56 projects one time and consolidated them down to 10-12 and cut the compile time from 4 minutes to 20 seconds with the same LOC" (http://codebetter.com/jeremymiller/2008/09/30/separate-assem...)


It depends on the size of the project. Our dependencies compile into 130MB of DLLs, excluding all external references.

That's expensive to build.


In that case it's probably sensible to break it up not only into separate projects but separate solutions too.

What bugs me is when relatively small solutions are broken up into large numbers of projects. Very often it's done for no reason whatsoever other than aesthetics. They're often divided up "against the grain" too, putting every layer of your application (presentation layer, business layer, repository, domain model, services, interfaces etc) into a separate project, with the result that a single task requires you to make changes to several different projects.

The general rule that should be followed here is the Common Closure Principle: classes that change together should be packaged together.




Half of those projects could be class libraries. You can't put those in different solutions because you miss the whole point of having everything tied together.


We don't have hundreds of millions of LoC, but we have a couple of million. We solve the problem you are describing with svn:externals, pulling in the DLLs/PDBs from other projects/solutions as needed on svn up.
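
For anyone unfamiliar with that setup, an externals definition is just a versioned property on a directory; something along these lines (URL and paths are made up) is roughly what that looks like:

  svn propset svn:externals "https://svn.example.com/foo/trunk/bin libs/foo" .
  svn commit -m "Pull foo binaries in via svn:externals"
  svn up    # now fetches libs/foo alongside the working copy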

I agree it's not as comfy as having everything in one big solution, but being able to count your compile time in seconds as opposed to minutes makes it worthwhile.


We used to use several different solutions with subsets of the projects, now though I use the solution load manager extension, http://visualstudiogallery.msdn.microsoft.com/66350dbe-ed01-... which brings down load times significantly. It basically lets you specify how projects should be loaded, so most can be loaded on demand. We have ~150 projects in total, ~10 million LOC, and VS handles it almost as well as a small solution.


I'm pretty sure I've read on an msdn blog that visual studio was 100 million loc total. I would be very curious to know what kind of projects you're working on.


My 100 MLOC figure is incorrect. The blog post actually states: "With upwards of 50 million lines of code, broad changes to Visual Studio can take some time to fully incorporate."

Here's the link: http://blogs.msdn.com/b/visualstudio/archive/2012/06/20/the-...


For what it's worth, I'm currently working on a number of C/C++ projects that amount to ~15MLOC. Visual Studio handles that load just fine.


Eclipse is fast, but it needs to be configured well and run on a decent system. I guess the same is true for VS. After all, neither would be so popular if it didn't scale.


This is one of those anecdotes that just pollutes discussions, allowing people to cheer on their biases. My anecdote is that Visual Studio deals with very, very large projects with gusto (albeit far below the absurd "hundreds of millions of lines of code" scenario described by someone else, which if in one solution borders on insane). The CLR has worked ridiculously well for years on end with nary a hiccup in sight. And so on.

Visual Studio isn't perfect, nor is any IDE (Eclipse...xcode...Geez, turn on the coffee machine because we'll be spilling complaints all night long).


Sounds like you should get a new computer. 1994's a long time to go without at least a fresh install of Windows - have you tried defragmenting your hard drive?


I've got a nice Dell Precision with dual 4-core 2.8GHz Xeons, 32GB of ECC RAM, 500GB 15k SAS RAID, a Samsung 840 Pro scratch drive and a Quadro card. What am I doing to deserve this pain?

My 5 year old Lenovo t61 is fine for ALL other tasks in all other languages.



I guess that taste in IDEs is a very personal thing. I am using Visual Studio 10 at the moment to do C++ coding and it is by far, and without question, the worst IDE I've ever used.

I find it slow, buggy and quite unfriendly. My guess is that people pay money for plugins to get it fit for purpose.

IDEs I prefer: Eclipse, Netbeans, IntelliJ, QtCreator


VS is the only large-scale IDE I know of that refuses to add a search bar for options. Eclipse and IntelliJ IDEA both have it. Trying to find what you want in the myriad of options VS has is tedious. Changing syntax colorings without something like ReSharper is also taxing.



It was also possible in 2010 via the quick access feature in the power tools extension.

http://visualstudiogallery.msdn.microsoft.com/d0d33361-18e2-...


Thanks, I would have never guessed it was that sort of search in 2012. I would have figured the more intuitive option was to put it within the actual options area. Then again, I guess I should have known better with the way search is in the rest of the Win 8 UI (though I don't use Win 8 enough to instantly think of that, lol).


I change one line of C# in a unit test, and ask VS to run the unit test.

It then spends 30 seconds building.

This kills TDD and many other ways of exploratory development.

This is not the case in any Java or Python IDE.


Dream on. The Java ecosystem has left you guys behind by 10k miles.


In what way? What things are you comparing?

All I know is that the next version of Java will have features inspired by the last version of C#: http://mail.openjdk.java.net/pipermail/lambda-dev/2011-Septe...


C# tools and libraries have been inspired by Java since its inception.

I'll give C# a +1 for its lambda/functional features, but the rest does not add up to another +1.

MSBuild -> Ant

NuGET -> Ivy

DoesNotExist -> Maven (don't forget to count maven plugins).

NHibernate, log4net, etc -> inspired by their Java counterparts.

How about a sane deployment/packaging system? WAR, JAR, EAR pushed to an app server (Tomcat, GlassFish, etc.). Not so easy with IIS.

I argue that the Java open source library ecosystem is far richer than .NET's (no Hadoop, no HBase, no Cassandra, no GWT).


But if you talk language, Java has never held a candle to C#.

The original C# language and virtual machine was inspired by Java.

Then, C# got:

* Better generics

* lambda expressions

* the yield keyword and compiler magic for iterators

* explicit interface implementation (rarely needed, but very well thought-out for when you need it)

* LINQ!!!

* Type inference

* Dynamic keyword

(I'm forgetting some things, it's late).

Anders has guided the C# language brilliantly ... Java has been outpaced at every step of the way. So many somewhat radical features have been added to C#, and from my standpoint, every single one of them was very well-done.


Language alone does not mean tons of productivity gain. Building software requires more than just syntactic sugar.

C# has all these features, yet I have never seen killer tools that changed the .NET landscape like Rails did for Ruby. While I would not call those language features smoke and mirrors, I argue that they are merely nice to have and not groundbreaking nor thought-provoking.

At the end of the day, the Java ecosystem still moves forward way faster than .NET. Spring, Cloudera, DataStax, Alfresco, Liferay, Ehcache, JBoss, Tomcat, embedded servers like Jetty, and other great, free, mature, serious, open source tools are available at our disposal, vs. talking to a sales rep to buy licenses, which is a common activity in the .NET world.


I could pick those off one by one, but time does not permit. You are largely correct that there seems to be more experimentation in the Java open source ecosystem, and that some of the key parts of the same in .NET are ports from Java (e.g. NUnit, log4net, NHibernate). Though in all those cases there are alternatives; it's just that the particular tool mentioned is most popular, possibly due to familiarity.

With Maven, I'm not sure that it's a good thing https://www.google.co.uk/search?q=maven+hate and there are lots of ways besides MSBuild to do builds and installs. The Rake, Jake, Psake family of tools are all viable.

With things like Cassandra, I don't care what language the server is written in as long as I can connect to it. This is IMHO the way forward, and not just for .NET. Though if you're looking for a NoSQL db written in .NET, there is RavenDB.


Sir, Maven is an awesome thing. Would you want me to do a Google query for each and every .NET tool plus the word "sucks"? Since when does that matter?

There will always be a minority that just happens to dislike everything. There are also people who just happened to use Maven the wrong way and ended up fighting with it.

This suggests that you have never used or experienced Maven. Combine Eclipse with m2eclipse and you will get an awesome development experience. Imagine not having to download a third-party library manually by visiting its website. Imagine autocomplete for the freshly acquired third-party lib in your IDE, which also shows you the Javadoc. Imagine navigating to a third-party class or method implementation without setting up your IDE or messing with path/folder setup, automagically. You cannot do any of these in .NET nor VS.NET.
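
To make the workflow concrete: declaring a dependency in the pom.xml is roughly all it takes for Maven/m2eclipse to fetch the jar (and, with the download-sources/Javadoc options enabled, its sources and docs). The artifact below is just an example:

  <dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.1</version>
  </dependency>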

I know RavenDB, but can you compare it with Cassandra or HBase? Not by many, many miles. The latter two are battle-tested by the highest-traffic websites, while the former has yet to reach that level.

I also argue that the Java EE 6 stack provides a way better, simpler, and more modular approach to building back-end systems. There is no equivalent to EJB 3 in .NET (I will be damned if you do another Google query for "EJB sucks"; the old one was bad, but not the new one. Also, experiencing the tools before making your judgement would not hurt). All in all, the .NET framework has, for the most part, always been behind Java (except in the category of presentation/UI).

I prefer not to continue the discussion when the obvious is right there in front of us: C# has cool language features, but honestly nothing has been groundbreaking in the .NET world. The last one was probably ASP.NET MVC and the changes turning core ASP.NET into some sort of API instead of the old ASP.NET WebForms stack, mimicking the JEE web profile approach.

Rake and the rest can be considered sub-features of Maven. Not. Even. Close.


> Maven is an awesome thing. There will always be a minority that just happens to dislike everything

I don't know Maven from a bar of soap, but would you disagree with this recent HN post then? "Why Everyone Eventually Hates or Leaves Maven" http://news.ycombinator.com/item?id=5105164

Now I've had a look at the basics of what Maven does ... http://en.wikipedia.org/wiki/Apache_Maven " It can also be used to build and manage projects written in C#..."

Honestly, if it was that far beyond everything else, people in the .Net community would be talking about it a lot. And they're not.

> Imagine not having to download a third-party library manually by visiting its website. Imagine autocomplete for the freshly acquired third-party lib in your IDE, which also shows you the Javadoc. Imagine navigating to a third-party class or method implementation without setting up your IDE or messing with path/folder setup, automagically. You cannot do any of these in .NET nor VS.NET.

Factually incorrect. The equivalent happens in VS via NuGet right now.
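
For comparison, a rough sketch of the NuGet side (the package name is just an example): from the Package Manager Console in VS you run something like

  PM> Install-Package Newtonsoft.Json

and the package is downloaded, referenced, and picked up by IntelliSense without visiting any website.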


The fact is that many people in that HN thread said they had no problem with Maven. Again, sir, what you were doing was just linking gossip as opposed to using it, trying it out, experiencing it, and understanding its uses/features.

I can link to many .NET blog posts about how people have left it because it is too limiting, but that is not the point. Personal preference does not equal real-world evidence that .NET is not that limiting.

Last but not least, your NuGet can't:

Run unit tests without any setup.

Run integration tests without any setup.

Run code analysis as part of build.

Run code style as part of build.

Package your project and make it ready as a dependency for your other projects easily, without having to import the whole source folder as another project under a solution.

Deploy to your app server.

Generate Javadoc or .NET docs.

I recall there were challenges using Maven for .NET projects. Would you want me to query people's praise of Maven? Or would you want me to query how the .NET community wishes for, or is looking for, Maven-equivalent tools in .NET, and how NuGet is just a small subset of what Maven can do? Or would you like to take a look at the current landscape of build and dependency tools and how almost all of them mimic what Maven can do?

The equivalent of Maven would be MSBuild, MSDeploy, NuGet, and various other tools, which you have to set up manually and which require a lot of effort. In the Rails world they needed gems, Rake, and Bundler to match Maven's capabilities.

I think I've explained too much. There is absolutely no point in continuing the discussion if all you do is perform Google searches for Java bashing, because the same can be done with .NET, and that would be a waste of time.

Otherwise, let me know when there is a huge revolution in the .NET world that shakes up software development, because so far you guys are just following in Java's footsteps in almost every area except the C# language syntax.


> your NuGet can't ... Run unit tests

One tool that does everything has never been my taste - a package manager that runs unit tests wouldn't be a good idea at all.

You're not saying anything of interest or useful to me anyway.


Yet you're using:

C# - A language that can do (almost) everything: mobile, web, desktop, etc. (Java still beats you guys on embedded devices).

VS.NET - An IDE that can build, run your tests, do UML modelling and much more...

Final thoughts on Maven: what I care about is a tool that performs the build for me, and in 2013 validations are part of the build: validate that your code compiles (compiler), validate that your code can be packaged according to the agreed standard (DLL, JAR, whatever), and some level of behaviour validation (unit tests, integration tests).

If you disagree, then perhaps we have had philosophical differences when it comes to good software engineering practices from the beginning.

Eventually you either build something from scratch to mimic Maven in the .NET ecosystem, or use various tools (MSBuild, NAnt, NuGet) that perform the same workflow that Maven gives you. Either way you've got nothing like Maven in the .NET ecosystem, which is a huge loss for me, since why would I learn various tools or build some pieces of the puzzle on my own when I have _the_ tool that can do what we all have to do on a day-to-day basis anyway...

You're not saying anything that comes remotely close to showing how the .NET ecosystem is richer than Java's. Perhaps because it isn't.

PS: Maven is composed of plugins; the fact that some of the plugins can run unit tests while others can do static code analysis is just... awesome.

I'm done.


Regarding deployment/packaging, there is MSDeploy for that. You can deploy files, IIS settings, db scripts, and registry settings from a zip package.


I'm curious, what Java IDE is 10K miles ahead of Visual Studio 2012/C# in your experience? Have you recently used VS with C#?


Haven't used 2012, but I think that IntelliJ IDEA is much better than VS 2010, anyway. Not to knock on 2010, but IDEA is pretty amazing.


There's a VS plugin from the company that makes IDEA, called Resharper. Resharper + VS >> IDEA :D


Sorry but I don't share your view. I am using IDEA everyday (Python). It's great, but it's a HUGE memory and CPU hog (although IDEA 12 is much better). I always lament VS...


What's stopping you from using VS with PTVS? http://pytools.codeplex.com/

I am a heavy-duty user of both PyCharm/PhpStorm and VS myself.


Because I don't like doing Python on Windows


Have you ever used Eclipse with the various Maven plugins? (And any of the source control plugins: SVN, Git, Mercurial, etc.)

Thing is, Eclipse/IntelliJ plugin ecosystems are far richer and waaay more cost effective (and powerful) than VS.NET plugin ecosystems.

I know I keep pointing to Maven, but Maven alone is a huge reason why .NET is left in the dust.


Java just doesn't scale; if you want fast code you won't use Java. C++ is one of the best, and for quick development C#; you can always invoke C++ libs from C# anyway, and both are supported in Visual Studio. If you're into web development, even there Java is slow... it does have some fun libs, but most people use them because they're lazy programmers; a good programmer won't rely that much on external libs.


Twitter uses Java. It seems to survive major events fine these days. I'd say it scales reasonably. You also have to take into account huge enterprise deployments.

As for 'fast' it really depends what you mean. Nobody's going to dispute that running a compiled application written in C is going to beat the pants off anything running on top of a VM, but is that speed factor important all of the time? Of course it's not. Most of the time a short wait is perfectly tolerable in exchange for the assistance in writing correct code that languages like Java can provide.


Are you saying those c# devs who use C++ libs are okay but Java devs who use Java libs are lazy simply because they use Java?

So a good programmer reinvents the wheel whenever he can?


I attempted to write a simple web project in Java to learn it. I haven't used Java before, I normally use Python for web stuff and C# for other misc stuff.

It was hell. I installed JetBrains IDEA quickly enough, but it went downhill from there. My antivirus (Kaspersky) fucked with Java 7's networking, meaning it couldn't connect to anything, so it took an hour and a lot of googling to fix that. Next step: make a Struts 2 project. Wait, Maven doesn't like the archetype IDEA gives it, so it explodes and doesn't install it. Ok, do that by hand. Next, install a webserver; which one to choose? Install one, doesn't work with IDEA - have to install an earlier version.

Ok. Finally got a blank project up. Read the docs for struts2, brain hurts, uninstall everything, fire up VS 2012 and write the whole thing in C# and run it on mono. Easy.


That sums up a non-professional, non-real-life project. Sorry to say this, but your particular example does not count in a "productivity gain" discussion.


I don't see what this has to do with Java vs C#, it's a Struts 2 vs ASP MVC (presumably) debate. Struts sucks so there is no surprise in the outcome.


The JVM ecosystem may be ahead of .NET, but on the language front Java doesn't hold a candle to C#.


So I recalled correctly that you already argued that Java's the best thing since sliced bread recently [1].

Apart from the low jab in this particular line, why is Java related to VS, VS features and the .NET ecosystem?

Language wars are boring.

1: http://news.ycombinator.com/item?id=5098006


But the response is years late, as usual.


Years later than what?


Late, not later.


Years late...from what epoch?


I'm a TortoiseHg (mercurial GUI front-end) developer and an (occasional) mercurial contributor.

I think this is really great news, both for git, Microsoft and OSS in general. It is definitely a great move for Microsoft.

I hope they also add support for Mercurial in the future. Git is a great tool but I think Mercurial is equally powerful yet easier to use and understand (IMHO). It is not as widely used as git, particularly in OSS circles, but there are many OSS projects (e.g. Python) and many companies (e.g. Mozilla and Facebook) that use Mercurial very successfully. Choosing Git as the first DVCS they support makes a lot of sense, but Mercurial would be a nice second choice.

In particular, being able to use mercurial with TFS would be awesome in an enterprise context. Plus I'm sure all in the TortoiseHg project would welcome the competition if Visual Studio were to get builtin support for Mercurial as well as git :-)


Considering its history of abandoning projects, products and APIs, Microsoft will probably neglect Mercurial.


When I read "Q: Does this mean Team Foundation Version Control (TFVC) is dead? A: Not for a second."

The thing that comes to mind is a politician saying "I'm not thinking about resigning". His resignation has just become nearly inevitable.

I have more belief in the statement about Microsoft working with open source, or at least MS Developer Division working with open source.

On the whole this looks cool, though VS integration will have to be very good to tempt me away from git bash and TortoiseGit.


While there are a lot of compelling reasons to use a distributed version control tool, there are also compelling reasons to use a centralized system. Having a lot of giant files (game resources are the canonical example) suits a checkout/edit/checkin system much better than a system that scans the disk, like an edit/merge/commit system or a DVCS. These sorts of repositories exist within Microsoft (and DevDiv) itself. For that reason alone, TFVC won't be going anywhere anytime soon.


A couple of other reasons to use centralized version control: 1: You are a Fortune 100 company and need to have access control on your codebase (so you can give contractors access to only a few files for example) 2: The code you are writing is subject to regulatory control.


Interesting you should mention that. We have our code on a corporate SVN server but I used git (via git svn) to create a mirror for some offshore developers outside the export control bubble. Git allowed me to exclude a set of files very easily. Not an argument pro or con either system, just saying.


This is why we use svn still.

Also, it's easier for developers not to fuck it up royally.


Sorry, help me understand here. How does something whose only method of control is tied to the IDE you use keep any control over the source code?

You can still use git-tfs to use TFS like you would use SVN. You can also just copy the file to another folder and suddenly everything is good. Or you use Time Machine or any other backup mechanism.

Am I missing something magic that TFS does that I don't understand?


I can't speak for TFS, so maybe they do something daft, but it's usual that when you get latest from the server you simply won't get the files you aren't allowed to. You won't even see them. It's access control at a finer grain than the repository level, applied on the server side.

(With git always giving you the whole repository, there's not much you could do with git, but many systems don't do that.)


I've never used TFS but my understanding is that TFS server allows you to configure it to only give a specific person access to a specific set of files.

For example: One could tell the TFS server "Deny Contractor-X access to all files, except for files A, B, and C". I assume that this would only allow Contractor-X to access files A, B, and C. Even if they were using git-tfs.


> 2: The code you are writing is subject to regulatory control.

Can you expand on what, precisely, you mean by that?


If I remember correctly, certain regulatory environments require that an audit log be kept of who saw which file and when.


> If I remember correctly, certain regulatory environments require that an audit log be kept of who saw which file and when.

Okay, let's say that's your environment. How do you prevent Mole Manny (who is a legit developer in your organization) from pulling down all the files in his project (as he's entitled to do), and then copying it over to his $super_sneeky_storage_system?

We've talked a lot about this where I am, and we've concluded that to ensure a super-tight environment we'd have to do a slew of really heinous lockdowns (epoxying USB ports, for instance) which would likely not really do much against moles but would slow down and anger legit users.


Really? The only places I've run into regulatory requirements for audit logs of seeing data are when the data itself is sensitive government data or is legally protected personal data; outside of national security, those aren't the kind of things that generally apply to code (and in the national security space, I'd imagine that you'd need more comprehensive monitoring of your systems, desktop or server, such that using centralized VCS for the purpose would be redundant [alone, centralized VCS for that purpose has holes big enough to drive a truck through, since it can't monitor who sees information once it is checked out, only who checked it out]).


If you write medical device or flight-control software, you have to have traceability all the way from requirements to the delivered binaries. With a central server & build system, that's much easier to do & also explain to federal inspectors.


I've worked on things like poker machine software.

Pieces like the random number algorithm for that stuff is tightly controlled.


I can speak from my own experience: there are times where, depending on funding, portions of a project must be developed by U.S. citizens... but the bigger project can have parts developed overseas, depending on funding and the project.


Some code like encryption algorithms cannot leave the United States by law (if developed by a US company/Citizen).

Another example might be Credit Card Processing software, you don't want a lot of people knowing how you generate your encryption keys.


If your encryption key generation security depends on people not knowing how you generate keys, then it's totally busted.


Ah, but the question isn't Is this a good idea? but Is this legally required?

Not always the same.


It may not depend on it, but it will improve it by some small margin most of the time.


Centralized version control doesn't have any advantage over distributed version control for either of those scenarios. Particularly, it doesn't give you any more control over where the code goes after someone with authorized read access to the repository makes their own copy of it.

For that, you need comprehensive monitoring on every system from which the repository can be accessed that tracks what is done, and once you have that, it doesn't really matter what you do for VCS for that kind of monitoring.


One thing that I've been impressed with is that changes you make from the command line are reflected instantly in the GUI. I like to change branches from PowerShell with posh-git. I'm using the GUI and command line interchangeably.

(Disclaimer: I work for MSFT but not in the git/vs group)


Glad you like it! (I work for MSFT and I tested this feature).


This is one feature that Eclipse doesn't have, since you have to refresh every time there are changes outside the IDE.

Every time someone says VS blows Eclipse away, I wonder what VS has that Eclipse doesn't. When I dabbled in C# a few years ago, I found the textual support for refactoring and such much lacking in VS, IMO. I haven't tried ReSharper though.

My brother applauds VS for its WYSIWYG editing, but you rarely do that when you program in Java.


ReSharper adds so much value to VS that it's hard for me to work without it. I've been using ReSharper for about a year, and it's simply amazing. My productivity has gone through the roof (I've been using VS since 2003). Alt+Enter gives you the magic.


Currently working in VS and Eclipse at my current gig. One thing that I definitely miss when working in Eclipse is the ability to move the instruction pointer around at my own discretion. Though, admittedly, I'm unsure if this is a limitation of the languages more so than the IDE. /digression


> This is one feature that Eclipse doesn't have, since you have to refresh every time there are changes outside the IDE.

That used to annoy me a lot. But isn't there a setting that makes Eclipse scan for changes on the file system now? I don't remember what it's called.


God I wish Emacs would do that with magit, though maybe there is some elisp out there that would aid me. Too lazy to look.


Put

   '(global-auto-revert-mode t)
in the "custom-set-variables" section of your .emacs, and it will automatically reload any file open in Emacs that changes on disk (except files with unsaved changes). While it's a little slow doing the actual reloads under Cygwin, it's blindingly fast on Linux. Works a charm with egg; haven't tried magit.


File system notifications are proprietary to each OS, so you may not find anything great.


I'd never imagine a git UI that totally replaces the CLI (/git bash), but that doesn't mean you can't benefit from IDE integration alongside it.

MS's enterprise customers already on TFVC would be mighty ticked if they outright killed or deprecated it; wouldn't be terribly surprised if development slowed down though.


Considering Visual SourceSafe's extended support runs through 2017, I think it's safe to say TFVC will be around and supported for quite a while yet.


This seems alright, but it definitely feels incomplete. I tried it out with one of my github projects, and the setup wasn't impressive at all.

First, it didn't automatically detect that there was a 'GitHub' remote. My first guess was that I needed to call it 'origin', but that didn't fix it. Instead, I needed to go into the command line and specify the master branch's upstream branch like so: "git branch --set-upstream master GitHub/master".

Second, as soon as I tried to fetch I got "An error was raised by libgit2. Category = Net (Error). This transport isn't implemented. Sorry". Turns out I have to use the 'http' link instead of the 'ssh' link as the remote destination.

Both of these errors could have been avoided automatically, or at least given better help. Branch has no upstream? Assuming that it's the only remote branch with the same name is a pretty good heuristic, especially when you do no work without user action. Don't support SSH? Try the obvious HTTP alternative, or tell/ask the user to try it.


It's important to note that this is a "community technology preview", so while we've put a lot of work into this, you're right, it's nowhere near complete. The underlying technology here is libgit2, which currently doesn't support ssh, although it's something that is being worked on. (We'll improve the error message, of course, for the future. We appreciate the feedback.)


I didn't really intend to sound like I was making damning criticism. After setup the process seems fine, and solving those issues only took ten minutes (but ideally they wouldn't occur in the first place).


It wasn't taken as such - and we appreciate the criticism, as we've got a fair ways yet to go. I was just wanting to set expectations appropriately since there's been a lot of excitement around here today and even we have forgotten that this is oh so very rough around the edges.


Libgit2 contributor here. The SSH transport is actually in progress, so this specific issue will go away fairly soon. I'll agree that the error message could be better, but my gut feeling is that most people will clone from within VS and use the HTTPS transport anyway.

This is very much a pre-version-1.0 UI, but I like the direction they're taking it. Really in touch with what their users want and need.


This company here (small... 40-50 git users) is using GitExtensions and Gitorious - and exclusively SSH for git.

So while I don't claim to be representative, for us that means that this announcement is good news, but not usable so far.


Did you actually clone the branch or did you manually set up the remote and branch?

I ask because a clone sets the upstream automatically, and inferring the remote from the branch's upstream is the correct behaviour. You can also use the -u flag when you push to set this.
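
For instance, using the remote and branch names from the grandparent's setup, a single push with -u would have created the tracking relationship:

  git push -u GitHub master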


In this case I created the project on my machine then pushed it to github at a later time, which is why the upstream wasn't set.


This is (a) absolutely shocking, and (b) utterly fantastic!

If MS had tried to make a DVCs to compete with git, they would have always been third-fiddle (to git and Mercurial, and possibly others). But they could still have made money selling it to all-Microsoft shops.

Instead, they acknowledged the situation and incorporated git support!

This is so right, so beneficial to their customers, and yet so completely opposite to what I expected them to do!

I must give credit where credit is due ... fantastic decision, Microsoft!


MS seems to be much more sensible in the Developer Tools division, i.e. .NET, VS, etc. There are several quite vocal open source supporters there and they seem to have enough influence that this part of the company does some quite good things. The Windows division ... not so much.


I am surprised that they are moving towards git and not Mercurial. Isn't Microsoft a sponsor of Mercurial? Perhaps this is because hg already has very good tools for Windows. I suppose I am just a little confused as to why git gets more attention than hg.


The article seems pretty clear on that.

"When we made the decision that we were going to take the DVCS plunge, we looked at many options. Should we build something? Buy something? Adopt OSS? We looked at Git, Mercurial and others. It didn’t take long to realize that Git was quickly taking over the DVCS space and, in fact, is virtually synonymous with DVCS."


It's worrying that Git is becoming the OSS "monopoly" in the DVCS space.


For me, this is primarily a reflection of the quality of service and community on GitHub rather than any quality intrinsic to git. I prefer Mercurial to git, but find myself using git significantly more as virtually every dependency in the apps I build lives on GitHub.

I do think however that there are lots of teams, particularly in enterprise, that are quietly and happily using mercurial.


I think you first have to ask why Microsoft is supporting git when Team Foundation is, itself, Microsoft's primary source control system.

The answer is most likely because people use git. Lots of people use git - particularly the crowd Microsoft is actively struggling to appeal to which is the entrepreneurial/startup crowd.

I've never used hg so I can't attest as to which is "better" but I really don't think that matters to most people. The only question is if it is "good enough" and git is.


The OP is simply saying that it seems odd that Microsoft would pour $20,000+ into Mercurial and then add git support to VS.

http://mercurial.selenic.com/sponsors/

I'd bet this is simply a case of the right hand not knowing what the left hand is doing. I had no idea Microsoft were sponsoring Mercurial--even that seems odd, spending money on an open source competitor to a product they also make.

I think your analysis is spot on.


It's not like $20k is a lot to Microsoft.


It's more than the $0 it's given git.


Actually, MS has contributed significantly more than $20k worth of developer time to git / libgit2 over the past months, all open source GPL code


This only makes the $20K to Mercurial seem weirder, don't you think? It underscores my point that it's hard for large organizations to keep their official positions on things straight.


From a bigger-picture strategic sense, the Developer Division at Microsoft is concerned about making Microsoft a good platform for developers. Fundamentally, if developers want to use Mercurial on Windows or Git on Windows or TFS on Windows, we're happy. And - increasingly - we'll donate money or even developer time to help make this a good experience.



I've been an hg fan for a while and like it better (hg jibes with my brain better and git occasionally throws weird problems at me), but quality-wise I can't say there is much of a difference in my experience.

That said, I have never met a single other person who uses hg. Not at work or hackathons. Most have never even seen an hg repository and some haven't even heard of it. Git definitely has "won" this "war".


The problem is that large swathes of the Git community have been treating the whole DVCS scene almost in Hunger Games terms -- there can be only one winner, and all the others must die.

For what it's worth, I've heard quite a lot of anecdotal evidence that Git is pretty contentious among many teams that adopt it. Git adoption is often driven by an aggressive few, against the wishes of their colleagues who can be quite unhappy about it. Case in point: Git has more "hates" on amplicate.com than Subversion and TFS put together -- and "hates" outnumber "loves" by something in the region of four to one. (http://amplicate.com/hate/git)

(For reference, Git and TFS have roughly similar market share in the enterprise at the moment, and Subversion is about twice as widely used as either of them. Source: itjobswatch.co.uk)


I use hg for personal projects, but I agree that git has become almost a standard, and, as others mentioned, almost synonymous with distributed version control systems. My feeling is the differences between git and hg are smaller than the cost of switching from one to the other and reconditioning yourself to a slightly different work mode. (I guess I should have said "reconditioning myself".)


We use hg for all our stuff at work, and all the openjdk stuff uses it too.


I believe Mozilla uses hg for their source control.


Microsoft initially made Codeplex work with Mercurial. I don't know why Mercurial was chosen over Git at that time, but perhaps it was because of Git's reputation for working poorly under Windows. I suspect that the sponsorship dates to those days.


Well, more than anything this underlines the incredible success of Git.

Initially I didn't think that it would take off: an SCM created by kernel developers for kernel development needs (not that this has to limit its uses in other areas), with little regard for hand-holding.

Joke is on me, it's a runaway success and even Microsoft acknowledges this.


Microsoft technical fellow Brian Harry, who made this announcement this morning, will be answering questions Friday, 10a (Pacific) on a Reddit AMA.

This link will resolve to the AMA once it kicks off: http://aka.ms/ama_bharry


Until the Day Now Known as Before Git TFS (aka Yesterday), this was my workflow for checking in code (I work on a Mac):

* get far enough along in my code that I want to check in, launch VMWare Fusion, start Windows, login to Windows and run security updates, connect card reader, login to the VPN, oops bad password, login to the VPN again, more security updates, launch Visual Studio, connect to TFS, launch project solution file, check out my project, find the local directory on Windows where the files are stored, copy files from Mac to Windows, check in the project. Cry a bit.

And THEN: a couple hours after Git integration was announced, we moved a project I was working on over to Git on Team Foundation Service.

My new workflow:

* Make a change in the code, commit, pull, push. From INSIDE Emacs on my Mac. If you didn't know any better, you'd think I was just pushing code to GitHub.

In short, I love you.


Microsoft has a cross-platform command line TFS client (also an Eclipse plugin):

http://msdn.microsoft.com/en-us/library/gg413282.aspx

  > You can perform version control operations by using the Team Foundation 
  > Server plug-in for Eclipse. You can also use the Cross-platform Command-Line 
  > Client for Team Foundation Server to perform those tasks.
I'm not sure how Git support gets your code back into your corporate TFS server?

Edit again: I did not see that you are using their service. It will be interesting to see how well they implemented all the ACL-type stuff; also, just in general, I wonder about the transport security since SSH is not supported at this time. I'd recommend against using this Visual Studio Git support to push over the internet for now!


There's no ACL for most git clients; they use standard HTTP credentials. Transport security is handled by SSL; we obviously don't expect you to pass this stuff around in plaintext!


Ironic. I doubt Visual Studio could even compile the Git source code, because of its 20-years-obsolete C compiler, which is two standards behind (C99, C11). I doubt the Git developers feel obliged to hamstring themselves to support stupid compilers.


Git compiles from source in Portage on Gentoo Prefix on Interix using MSVC. I don't know about building from within Visual Studio, though I don't see any reason it wouldn't work.


On a lighter note, TFS not supporting Git was always our excuse to use private repos on GitHub/Bitbucket within our Microsoft-heavy organization, cool features and all.

I fear those days are over :( .


This is a great step. Now, will they stop alienating developers by omitting crucial tools like PIX and ATL/MFC headers from the free versions of their development tools? Or are they still under the impression that their platform is powerful enough that developers should pay them hundreds of dollars for the privilege of writing apps for it?


How are ATL and MFC crucial in 2013?


If you wish to make a Visual Studio style application, there are few quicker options than MFC. You get a lot of functionality just with the wizard generated app.


This is a tad off-topic, but what do you do if you use that wizard, work on the app for a bit, and then realize you forgot to tick one of the options in the wizard?

(I've not used VC++ since version 5 or 6)


There's no way to change your wizard selections after project generation. You have to dig into the project settings dialogs and change the options manually. Or you could generate a new project with the right settings and merge your code into the new project. I'd say it's worth learning how to manually change the project settings, though, if you're going to be working with Visual Studio for any length of time.


The amount of code the wizard generates is actually remarkably small - most of the interesting stuff is happening in the base classes. You could probably create a new wizard project and merge in what you need, depending upon the options needed of course.


For me it's the easiest and fastest way to throw together a C++ app with a simple GUI that still works on most Windows PCs.


For compiling legacy code. Found a library that does exactly what you need, but it uses ATL somewhere in its guts? Sorry, won't compile with Visual Studio Express.


VS Express isn't exactly aimed at supporting legacy code for you.

The Express products are free, light(er)weight and don't support everything the other editions do. You should have known this going into this.

There are however other library downloads from Microsoft that may make both MFC and ATL libraries available to the Express editions.


I know that Express isn't "aimed" at supporting legacy code. What I'm saying is, that's a bad product decision by Microsoft at a crucial time when they can't afford to alienate any developer who's still interested in developing for their platforms.

I'm not saying that Microsoft should actively support new development in MFC/ATL. All I'm saying is that they shouldn't delete ATL/MFC from the Express SKU.

It's true that it's still possible to get ATL/MFC in the free Platform SDKs for (much) older versions of Windows, but finding and installing it and integrating it with Visual Studio is a royal pain, and Microsoft won't tell you how to do it (or even that it's possible). And there's no way to get PIX for Direct3D 11.1 without paying $400+.


Either the pain is worth $400 to avoid, or it's not... seems simple enough to me! (Same for 'PIX for Direct3D 11.1')


Any company that sells software on Windows can surely afford the professional license.


I'd argue that isn't really the use case for their free tools. They want you to make ASP.NET web sites, Windows Phone apps and maybe a few simple desktop apps.


I'm sure that's their argument too. "Express isn't 'for' that." However, Microsoft is not really in a position now to dictate what developers "should" be using Windows/Visual Studio for. If they want Windows to remain relevant as a platform then putting roadblocks in front of developers is the wrong thing to do, even if the roadblocks are well-intentioned "steering" towards a development path Microsoft prefers.


But most people using the express versions probably aren't building these types of applications.. Also, the download size is important here. You can download the developer libraries GP refers to separately IIRC... Windows SDK etc.


The free Windows SDK doesn't include ATL/MFC either, for several versions now. You have to comb through the Microsoft Download Center archives to find an old download that happens to still have it. (For anyone who's actually looking, you can get it from, of all things, the Windows Driver Kit, version 7.1.0 and below only. http://www.microsoft.com/en-us/download/details.aspx?id=1180... )

Download size is not an issue because Visual Studio Express is primarily installed through a <1MB installer stub these days. You can choose the options you want and it only downloads what you select.


> However, Microsoft is not really in a position now to dictate what developers "should" be using Windows/Visual Studio for.

Microsoft's developer tools have been a fairly lucrative part of their business. They are entirely within their rights, and have a robust, well-justified business case, in segmenting the way they do. I would love to have everything for free as well, but the real world doesn't always work that way.

Intel sells their compiler suite for $1000 or so, it's worth noting. All so you can buy their chips.


What they should do next: A developer-friendly command prompt/terminal


It's called powershell.


No, not really.

You cannot natively SSH from PowerShell. You need PuTTY. And Pageant. And... ugh....

A developer friendly terminal is definitely necessary.


Ok, so what we're really saying is, I want a vt100 emulator.


What I'm really saying is I want a terminal, in which I can run whatever shell I want.

See also: Unix, Linux, OSX, etc, etc, etc.


Oh, that's easy. Start a cmd.exe terminal. Run bash (or whatever). Now you're running the shell of your choice in the cmd terminal.


Do you really find cmd.exe to be as good as OSX Terminal, or iTerm2, or the default Terminal on Ubuntu, or ??

I've used PS on Windows 7, not on Windows 8. So I'm sure it received an upgrade. I did like being able to essentially pipe objects from one script to another. But none of these apps are as flexible and powerful as the OSX and Ubuntu examples I mentioned earlier.

I'm really not one for internet debates, if you feel this protective of an under-featured and clunky terminal app then ok, good for you. You can have the last word. But the GP that you replied to originally was correct and your random half-answers are really missing the point.


Maybe conemu is what you want? Not made by MS though. I spend far less time copying and pasting into terminals in Windows, so the relative oddities involved aren't that bothersome.

I think cmd/powershell is different, and lots of people write it off without even considering it. Parent wanted something "developer friendly". That could mean anything to anybody. I can only give half-answers to a half-specified problem. :)

Sorry, if I wasn't very helpful. Far too often, it turns out "real development tools" means "exactly like on linux", and it's a waste of time to explain how to achieve similar features.


Most developers using VS don't have any need to use SSH.


Install Git Bash and/or MSYS. It's really not that bad. I fired my mouth off about PowerShell before learning more about it. PowerShell in Windows 8 does come with powerful remote connection abilities, and if you grok what PS is, it's better in many ways than its Unix (bash, zsh) brethren.
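
For instance (the hostname is invented), once remoting is enabled on the target machine, a quick sketch of an interactive remote session looks like:

  # On the target machine, run once as admin:
  Enable-PSRemoting -Force
  # From your workstation:
  Enter-PSSession -ComputerName buildserver01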


I've been working on a Git migration inside a fortune 50 primarily windows shop for about 6 months.

This line caught my eye:

"Some of its benefits fit well with the trends we see in software development: loosely coupled systems, distributed teams, lots of component reuse, incorporation of OSS, etc."

It's not a new trend. Nevertheless, I'm smitten that MS has acknowledged and embraced the model.

For us, the biggest issues with git on windows are:

1) SSH inconsistencies (cygwin/putty/msysgit/securecrt/etc) - I support users that use any combination of these.

2) Git implementations (cygwin/msysgit/git extensions/etc) and UIs all handle git and SSH differently. Users will typically have multiple copies of git installed and used, depending on context.

3) Support for HTTPS mode is inconsistent and in some cases non-existent. None of them can cache credentials.

To summarize; multiple git environments + multiple SSH environments + limited https support = pain.


I've been using Git from command line with my visual studio projects already. Integration with TFS is a welcome change. Although I wonder how much effort Microsoft has to go through to change TFS's underlying SVN architecture to support Git. I will be the first one to try this and report.


TFS doesn't have an underlying SVN architecture - Team Foundation Version Control (the centralized version control tooling that was the only version control service available through TFS 2012) is unrelated to SVN. There are no changes here to support git - git repositories are first-class citizens, hosted separately from the existing centralized version control services.


But developing open source software on Windows is still a pain: you still need Make, GCC, Autotools, etc., and even if you can install Cygwin it's messy as hell.

And PowerShell sucks; we don't need another shell: we have been using Bash for decades.


Not a powershell fan here, rarely used it.

But 'Bash for decades'? Is that true? A quick search for bash 1.0 shows me tarballs from '95, which isn't yet decadeS old.. Ignoring the question of whether bash is the optimum shell and that some systems migrated to replacements (think dash, all the zsh lovers).

So that last line of yours seems a little over the top.


If you do any sort of Windows administration, PowerShell is a godsend.


Apparently libgit2, which they're using for this, has the following license: libgit2 is under GPLv2 with a linking exemption. This means you can link to the library from any program, commercial, open source or otherwise. However, you cannot modify libgit2 and distribute it without supplying the source.

That sounds nice to me, but I'm not sure how that differs from the standard LGPLv2.


The LGPL requires that any distribution be able to replace the LGPL component with a modified version. If you use it as a shared library (er, DLL in this case, I guess) you get that for free. But if, for example, you want to link an LGPL library statically, you need to provide a static library containing the rest of your program in a linkable form.

The point is to allow the user the ability to exercise their right to modify the LGPL library and use it. The license you describe would presumably allow Microsoft to ship a binary with a fixed and unchangeable libgit2 implementation.


For what it's worth, our analysis is that the ability to replace the libgit2 DLL is a requirement. (Not being a lawyer, however, I don't really remember the rationale here.) So, of course, we ship the source we used to build the DLL and you're welcome to replace it.


That sounds an awful lot like Sun's CDDL.


This is great news.

The sooner TFS goes away, the better the world will be.

(No, seriously. There is one thing that VCS should never do: lose your changes. Ever. TFS does this in some specific circumstances. Yet, despite this, and the other problems with it, people continue to champion it and use it because of the tight VS integration. The sooner this stops happening, the better)


Well done, MS! I hope this will be better than Xcode's Git integration. The next step would be to add OpenGL ES support for Windows Phone.


This is great news!

At work we still use TFS 2008, and there doesn't seem to be any way to migrate the source with history except upgrading the whole TFS installation to a newer version. So when TFS 2012 is up and running on another machine, we are going to move a snapshot.

Contrast this with the freedom and "decentralized-ness" you get with git.

MS must seriously use their own products more.


MS uses a custom internal-only fork of Perforce. They've used it for quite some time now. It was considered a "competitive advantage" so they used that while Visual Source Safe (lol) and TFS were used by the outside world.


I know, and it shows. This is exactly what the VS/TFS/Azure team must stop doing.

Edit: I can't reply to your comment for some reason. If they do use their own tools, how come it's so easy to get insane merging problems on so many of the xml files in use by a VS project? Especially the .dbml files, but also the project files. :-(


As I have said (here) before, MS is a massive company, so anyone making a statement like "MS does X" is almost assuredly wrong.

Devdiv, to my knowledge, as a whole (VS, CLR, etc.) uses TFS. The fork of Perforce the original commenter is talking about is probably Source Depot. When I started at Microsoft we (VS) were still using Source Depot. As I understand it, the primary reason was that we had been using it forever (okay, well back to the SLIME days) and there was a massive amount of history in there. Porting it all immediately to another source control provider was a large task. I 'fondly' remember the transition during development of VS 2010. Though to be fair, the TFS team was great about finding/fixing issues exposed by suddenly onboarding the entire VS team and our possibly 'interesting' source control requirements. I believe Windows still uses Source Depot, likely for similar reasons (the amount of history they have in SD makes the devdiv history seem like a tiny blip).

As for merging problems, I don't know. We routinely merge huge branches with hundreds of thousands of files, including many project files, other XML config files, etc., and I don't recall many 'insane merging problems' (just the garden-variety merging problems that occur with that many files). Then again, I am not intimately involved in the merges (other than occasional fire drills on files I may have modified). I suppose it also depends on what the changes were on both sides of the merge.


I wonder when exactly MS took the old SLM servers down. It was mentioned on Raymond Chen's blog that it is difficult for even employees to access source control history of Windows prior to 2000.


Both the Visual Studio and TFS teams use, obviously, TFS for version control. Parts of the TFS team are using git and some of the team is using git-tf, but mostly it's straight up TFVC. But the VS and TFS teams do not use source depot (the aforementioned internal-only perforce-like tool.)

Edit: as for merging XML files -- TFVC uses a standard automerge algorithm, very similar to the one git uses. I'm not sure why you're having merge problems with your XML files, but I'm also not sure it's TFS's fault. But it would be interesting if you filed a connect bug for us to take a look at!


.dbml files are a pain to merge because on any change, the modified element is moved to the bottom of the file. To avoid other people clobbering my changes, I manually move it back to where it was before checking in, so the diff is only one or two lines. No one else on the team does this, though, so they still run into problems.

I tried to find where the 'Startups for the Rest of Us' AuditShark guy (Mike?) talks about writing a tool so that his outsourced devs wouldn't have to deal with this, but it would take me too long to find. He doesn't seem like the type to believe in open sourcing anything, but I wish he would consider it - the gist would presumably be something like the sketch below.
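
For anyone else bitten by this, a crude normalization script run before check-in goes a long way toward keeping everyone's files in the same order. This is just a sketch under some guesses: the file name is hypothetical, and I'm assuming the elements worth stabilizing are direct children of the root and carry a Name attribute - verify against a real .dbml before trusting it.

    # normalize-dbml.ps1 (hypothetical): re-sort the direct child elements of a
    # .dbml file by their Name attribute so that unrelated edits stop reordering
    # the file and creating spurious merge conflicts.
    param([string]$Path = "MyDataContext.dbml")

    [xml]$doc = Get-Content $Path
    $root = $doc.DocumentElement

    # Snapshot only the element children (skip comments/whitespace), sort them,
    # then re-append in order; AppendChild moves an existing node to the end.
    $elements = @($root.ChildNodes | Where-Object { $_.NodeType -eq "Element" }) |
        Sort-Object { $_.GetAttribute("Name") }
    foreach ($el in $elements) { [void]$root.AppendChild($el) }

    $doc.Save((Resolve-Path $Path).Path)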


The developer division has used TFS for over four years now. You can read about the scale and topology at http://blogs.msdn.com/b/buckh/archive/2012/06/08/developer-d.... Ryan was being nice when talking about the transition. We put the division through hell for a while (fall of 2008 until spring 2009), but the result was an amazingly scalable centralized version control system. Now with our full, native support for git in VS, on the server (just on the service http://tfs.visualstudio.com for right now), and joining the community to help build libgit2 (check out the committers page), TFS also supports the best DVCS.


Rest assured, there are tens of thousands of people at Microsoft using TFS -- including nearly everyone in the Server and Tools division which produces Visual Studio and TFS.


Linus must be grinning :)


Git doesn't transplant from the UNIX ecosystem very well, which is why the only options for Git on Windows so far have been part of a UNIX interop solution like msys, cygwin, Interix, etc.

It will be really nice to see a proper Git client for Windows especially if it means Windows is also getting a proper SSH.


I've been using git on Windows for years, and while the need for msys stuff is a little clunky, I've never had any problems using it. In fact, the built-in gui tools (git-gui and gitk) work WAY better on Windows than they do on OSX.

libgit2 will eventually make it possible to make a nice GUI client without shelling out to git itself. GitHub for Mac and GitHub for Windows both use it now, and it's way better than parsing shell output.
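
To make the "no shelling out" point concrete, this is roughly what it looks like from .NET via the LibGit2Sharp binding (PowerShell here just to keep it short). A sketch only - the DLL and repo paths are made up, and I'm going from memory on the API surface.

    # Walk recent commits through libgit2 (via LibGit2Sharp) instead of parsing
    # `git log` output. Paths are hypothetical.
    Add-Type -Path "C:\Tools\LibGit2Sharp\LibGit2Sharp.dll"

    $repo = New-Object LibGit2Sharp.Repository("C:\src\myrepo")
    try {
        $repo.Commits | Select-Object -First 5 | ForEach-Object {
            "{0}  {1}" -f $_.Sha.Substring(0, 7), $_.MessageShort
        }
    }
    finally {
        $repo.Dispose()
    }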


I've been using git on Windows as well, and it works, sure, but it's obviously not a native Windows tool. It doesn't integrate with any Windows facilities (credential management, the indexing service, scripting objects, etc.); it brings all of its own stuff from the unix world (ssh, grep, bash, etc.), and depending on how you set it up, all of those things run in a weird emulation sandbox (where's your .ssh/config? your .bashrc, your .gitconfig?). If you use egit in Eclipse alongside msysgit, you've probably had to copy your ssh keys and host settings to two places. When I use it from the command line I'm switching between Windows- and unix-style paths (msys and cygwin both have a path wrapper utility, but I'm using git on SUA for best performance).

I think a proper Windows git would entail a proper Windows ssh, where your keys and host settings are managed in one place in Control Panel or something (or at least %USERPROFILE%\.ssh\config), and seamless integration with Explorer and the filesystem, so that other tools could leverage it transparently, similar to how ssh is used transparently by so many unix tools.
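
In the meantime, the closest thing I know of to "one place" is forcing the ports to agree on a home directory, so msysgit, cygwin and friends at least read the same .gitconfig and .ssh. A sketch only - whether a given tool honors HOME is an assumption to check per tool (Eclipse/egit, for one, goes through Java's user.home).

    # Point HOME at the Windows profile so the unix-flavored tools share one
    # .gitconfig and one .ssh directory instead of each inventing its own home.
    [Environment]::SetEnvironmentVariable("HOME", $env:USERPROFILE, "User")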


Git on Windows works fine for me, no cygwin in sight. It wouldn't have occurred to me to call msys 'part of a Unix interop solution', though I suppose it does give me a working version of Perl, which is a feature since it was the occasion for me to start using ack instead of grep.


Definitely UNIX interop, at least at the tooling level:

> MSYS is a collection of GNU utilities such as bash, make, gawk and grep to allow building of applications and programs which depend on traditionally UNIX tools to be present.

[0]: http://www.mingw.org/wiki/MSYS


You can use git from PowerShell and it works almost as well as using bash.

The only downside is that you have to convert scripts you find online from bash to PowerShell.


Converting scripts does sound like a serious down side. Are there any automation tools that could help?


It's not as bad as it sounds, and you don't have to do it for all scripts.

1) Most scripts will work; you just have to tweak how loops are written and the way you declare functions (see the sketch below).

2) If the script uses basic commands like ls, mkdir, etc., there are PowerShell aliases built in already that point to the equivalent PS commands.
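
For a concrete (and made-up) example of the kind of tweak involved:

    # A typical bash snippet from the web:
    #   for branch in $(git branch -r); do
    #       git branch --track "${branch#origin/}" "$branch"
    #   done
    # becomes, in PowerShell, something like:
    git branch -r | ForEach-Object {
        $remote = $_.Trim()
        if ($remote -notmatch "->") {    # skip the 'origin/HEAD -> ...' line
            git branch --track ($remote -replace "^origin/", "") $remote
        }
    }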


Great to see, for certain, not just for VS and Git, but for distributed version control systems in general.


TypeScript, git support...what's next, an open-source Windows OS to compete with Linux?!


Now they just need to combine this and their Git-TF project and you can use both git and tf on the same project for mixed teams.

Another nice feature would be to convert existing projects from one to the other but that would be a huge undertaking.


Anyone know if there is a way to convert a TF Service repository from TFVC to git? I'd like to convert a new project (not yet started) that is already full of backlog items, test cases, etc.


Try Git-TFS

http://github.com/git-tfs/git-tfs

Disclosure: I am a contributor to Git-Tfs
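
Basic usage is something like the following - the collection URL and project path are placeholders, and note that git-tfs migrates source history only, so backlog items and test cases would still need another route.

    # Clone the full TFVC history of one project into a new git repository.
    # Quote the $/... path so PowerShell doesn't try to expand it.
    git tfs clone https://youraccount.visualstudio.com/DefaultCollection '$/YourProject'

    # Or, if full history is too slow, take just the latest changeset:
    git tfs quick-clone https://youraccount.visualstudio.com/DefaultCollection '$/YourProject'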


But... let's face it, the PC is dead, and servers might be next. It's time Microsoft started to look at Android and Yahoo OS, to take at least something from their pie.


What the heck is Yahoo OS?!


Cool - when will Windows support the 'fork' call? If they do that and support '/' as a path separator (which has always been there), Windows might get a resurgence.


Because my grandmother cares whether it's / or \ ?


Windows has a POSIX subsystem. It has fork. Not cygwin style. There is kernel support.


"SUA is deprecated starting with [Windows 8] and will be completely removed from the next release." http://brianreiter.org/2011/09/15/sua-deprecated-in-windows-...


// Why are we incorporating Git? Because Microsoft can't write anything equivalent to git - only a shitty VSS!


I bet they will screw up and their git will be incompatible with the rest of the world.


Not if I can help it. They're using the project I work on as their git layer: http://libgit2.github.com/


They have a long list of what's wrong with git. They think git is too hard for their average customer. They have a list of features, some of which will be easier to implement by storing some metadata in the git repo.

There are certainly ways to build a compatible and good product. It is possible and not incredibly hard. But it is Microsoft we are talking about. What is their track record in playing nicely with the community and supporting open standards? IE? OOXML?

They have deadlines, the technical burden of millions of LOC, backwards compatibility with whatever crap RCS they are currently supporting, etc.

When all these come into play, guess what will be sacrificed or postponed till next release?

I get it that it can be done right, I believe some of their engineers sincerely want and try hard to get it right. They have already made quite a few design decisions and big steps in the right direction.

But there are "real world" constraints. Management wants to report gazillion new features this quarter, Marketing wants unique value propositions, Engineering wants to hold back the release until git maintainers accept all the patches. Guess who will lose this tug of war?

Straight from the horse's mouth:

> Git can be, um, esoteric. We've been working to codify the standard "best practices" for Git in the community to make Git approachable and easy to use by everyone while not sacrificing the power. ... give you the best all-up ALM solution ... work item association, change tracking, build automation, My work, Code review ... We are doing work on auditing, access control, high availability, online backup, etc. All the things that an enterprise is going to be particularly concerned about. ...


It's straight git via libgit2. Try it out!


Awesome. Any chance you could update the C compiler to something post-C89?


This is great news! I love you guys for this...


Haha, aaaand they're waaay behind. Again.



