I use Windows as a desktop but have it host a VirtualBox Linux VM for my real work because I don't find either Windows or OSX acceptable for serious Linux development. OSX is actually more insidious because it sort of looks like Linux and makes you think it can just about be Linux, but it's just not.
Under VirtualBox my development environment can match my production environment almost completely. There's no awkward "but here's how you do it in Windows/OSX" that mostly works but often doesn't in some subtle way. I haven't tried Windows 10 yet, but this setup has worked very well under Windows 7 and 8.1, so I expect it will be fine under 10 since it's just an evolution. I've used this setup on both OSX and Windows, and it goes more smoothly in Windows.
One of the keys to it is that you use SMB to mount your VM's disk as a local drive on the host, so you get the benefit of any Windows editor.
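For anyone wiring this up, here's a minimal sketch of the Samba side (the share name, path, user, and host-only IP are placeholders, not anything canonical):

# /etc/samba/smb.conf inside the guest
[projects]
    path = /home/dev/projects
    read only = no
    valid users = dev

# then, on the Windows host, map it as a drive:
net use Z: \\192.168.56.101\projects /user:dev

After that, any Windows editor can open files on Z: while the toolchain inside the VM sees the same files natively.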
Using VMWare Workstation fullscreen in Windows 10 is great, since Microsoft finally added virtual desktops. Just put the cursor on the toolbar at the top of the screen (to send keyboard input to the host) and press Ctrl+Win+Left/Right to quickly switch between VMs or host desktop.
I suspect it helps to remove shitty desktop environments like Unity or GNOME 3 and go to something more basic like Xfce. I just set up an example Ubuntu VM today. Unity is annoyingly laggy. My primary VM with Xfce is fine.
Is VMWare easier than VirtualBox at sharing files between guest and host? I can never get Shared Folders to work on VirtualBox, and end up configuring Samba.
I've found that Samba is better than the shared folders feature because it understands Linux file permissions better. Also, Samba is a more transferable skill and will work the same way if I switch to another virtualization product.
I toyed around with a similar setup recently (but I'm not actually doing any Linux development) to try out Nuclide, Facebook's big addon for Atom.
Basically you run a server in your Linux VM that Atom/Nuclide (running in Windows) connects to, and it lets you edit files in the VM as if your project folder were local. For those who like a GUI, I think it's worth looking at to save yourself the hassle of the VirtualBox Guest Additions, etc. that never seem to work reliably. And frankly, I've never been particularly happy with the performance of a virtualized Linux GUI, though most of that blame goes to Ubuntu and its fading animations that are apparently impossible to get rid of in some places.
Anyway, neat tool, worth checking out. You set up a VM to match whatever you'll be deploying to and then work on files in the VM from an editor in Windows. When it works smoothly it's really cool.
I do something very similar when I'm on a workstation, but I've found that running a full VM is less than ideal on my laptop because of increased battery drain.
I've always found compiling on Windows a bit of a PITA.
On Linux I just install build-essential and everything works fine.
On Windows I had to install a specific version of VS and fiddle around with the path to get things running. But maybe there is something like the build-essential package in Chocolatey?
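Something in this direction, maybe (assuming the community packages are still published under these names; I haven't checked recently):

choco install mingw    # GCC toolchain via MinGW-w64
choco install make     # GNU make
choco install cmake

That would get you reasonably close to build-essential for C/C++ work, though anything that needs the MSVC toolchain still has to come from a Visual Studio installer.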
The terminal command history also feels strange. But I like cls more than clear, haha :D
I use MSYS and MinGW for Windows compilation. It's so much easier than Visual Studio: no messing about with stupid configuration panes, and it's easily scriptable. I also get to use the same makefiles as the Unix port of my program.
The only difficulty is that MSYS doesn't support parallel make, so builds are fairly slow, but my program's small and I do most of my development on Linux, so I don't care.
If it's C++: I did learn CMake, and now I use the same script for Ubuntu and for Win7/VS2008 and Win8.1/VS2012, and so far I just don't care about compilation issues anymore. Sadly, the VS site only offers VS2015 now.
But yes, the first time, you have to check the VS version and add all the proper env vars for the CMake scripts to find the libraries.
But anyway, it is much better than when I used mingw and made makefiles by hand.
How about code written in C11? Or Fortran 95? Throw in a couple of numerical libraries you want to link to, maybe CUDA or some MPI or an optimized BLAS/LAPACK, and it's an all night party.
I lurk on a few sci.comp mailing lists, and the number and nature of problems that the Windows people have with compiling are crazy compared to Linux, where the OS actually has a package manager and stuff Just Works.
As for cmake, I don't think it's any better than (gnu)make. I've seen big projects with complex build systems (e.g. PETSc from Argonne) switch to cmake and then switch back again to make quite quickly. A frequent "problem" with make, I think, is that people learn just enough about makefiles to compile HelloWorld.cpp and then use that knowledge for everything.
I haven't used C11 or Fortran 95, only C++11, and I had to add a CMake script for it; you can find it as CheckCXXCompilerFlag somewhere on the web.
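For reference, the snippet in question looks roughly like this (the -std flag is the GCC/Clang spelling; MSVC doesn't take it and just enables its C++11 subset by default):

include(CheckCXXCompilerFlag)
check_cxx_compiler_flag("-std=c++11" COMPILER_SUPPORTS_CXX11)
if(COMPILER_SUPPORTS_CXX11)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
else()
    message(STATUS "${CMAKE_CXX_COMPILER} has no -std=c++11 flag; relying on compiler defaults.")
endif()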
I have to use Visual Studio instead of mingw because I'm using some winSDK libs, and VS has no (gnu)make, and what VS offers for the command line is not cross-platform, making it a poor investment of my time to learn about it.
After climbing just a section of the steep cmake learning curve, I have deleted the solution and project files from my repository, and now I use cmake to compile and run the project in the command line with both VS2008 in Win7 (the PC) and VS2012 in Win8.1 (the laptop), without any path dependency. Previously the solution files depended on both my username and the path, and just moving the folder was cumbersome and required a lot of fiddling with those files.
Another thing I really like is the concept of an out of source build, and that's very easy to set up and use with cmake.
I can now write:
git clone somewhere:my_project.git
cd my_project
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX:PATH=../install ..
cmake --build . --target install && cmake --build . --target run
in both Ubuntu and Windows and see my program running, which means I will not go back to make, or nmake or gmake anytime soon.
Out of curiosity, do you use vagrant for everything?
I used Windows a little while ago, and I couldn't do any work without firing up a VM. I missed my environment, but if I did everything in a virtual machine, I could accomplish what I needed.
Eventually I gave up. I was actually doing my development in linux and windows was just acting as a VM host. It's almost comical that I ever said I did development in Windows.
I use babun for everything. It's Cygwin on steroids with oh-my-zsh. It has a package manager called "pact" that works way better than Chocolatey (so far).
It's more about muscle memory and not having to retrain your brain.
After using a Mac for such a long time, it's nice not to have to recondition yourself to use backslashes or commands like `dir` instead of `ls`.
They're trying, they're moving in the right direction, but it's still a ways away. Some nice things recently:
- The console host application on Windows 10 finally doesn't suck. Resizeable, copy-paste support, word-wrap-aware text selection, etc.
- It seems like they're trying to make their dev tools use a lot less "black box magic", the kind that's wonderful when it works right and impossible to debug when it goes wrong. At an ASP.NET workshop recently, Scott Hanselman went to great pains to emphasize that everything he was doing in the VS GUI could also be done from the command line.
- NuGet keeps improving. It's still not a replacement for apt or even Homebrew, but they're listening to feedback and making the changes people request. I feel like in another year or so it could be a real contender.
As an Android developer, I don't think I could seriously use Windows.
Anecdotally, out of my 10 colleagues working on our app, 8 use OS X, one uses Windows, and one uses Debian.
- Drivers are a pain in the ass for Android devices on Windows. You always need to install one when you first connect a device.
- The Linux-like commands are awesome on OS X and not something I can give up easily.
- Some very important tools are OS X only: ColorSnapper2, Sketch, Zeplin (though it does have a web app).
While a great fan of PowerShell in theory, in practice it seems to be extremely cumbersome. Sort of the opposite of bash and other Unix shells, which suck in theory and are very useful/convenient/powerful in practice.
Cumbersome or you just aren't familiar with it yet?
The whole design of PS is meant to make it so you can "guess" the names of cmdlets you've never used before. Everything is Verb-Noun, Get-Service, New-Service, Restart-Service, Stop-Service, etc.
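For example, if you know the noun, you can just ask what verbs exist for it (output abridged):

Get-Command -Noun Service
# Get-Service, New-Service, Restart-Service, Resume-Service,
# Set-Service, Start-Service, Stop-Service, Suspend-Service

Get-Command -Verb Stop    # or go the other way: everything you can stop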
Those two lines do different things. They're not analogous.
The equivalent to the UNIX line above in Powershell is:
ls "C:\app\*.log" -R | sls "Error" | sort
You cannot just tack on a bunch of extra requirements for Powershell (grouping, sorting by certain things and in a certain order) and then not include them in the UNIX example, that's disingenuous/misleading.
The only big difference between PS and UNIX in an actual analogous example is that the PS version of grep gets files fed in one by one and processes them, whereas UNIX's grep processes files itself.
PS - The above Powershell code may not work in 2.0 (2009). You'll need 3.0 (2012) or higher.
Hence "I can think of". That was copypasta from an internal wiki. I don't appreciate being accused of "disingenousness" from an off the cuff example.
By the way, your example is precisely 12 characters longer than mine and still multiple times shorter than the equivalent PS, so if your goal was to somehow disprove PowerShell's annoying verboseness rather than snark at me, you failed.
> I don't appreciate being accused of "disingenuousness" over an off-the-cuff example.
Then check your examples before you post them to make a specific point. It is a point of fact that calling those two things "analogous" is incorrect, and your offense doesn't change that fact.
You also specifically said "I can think of" implying that you created the examples rather than that you got them from an "internal wiki." The fact that you said you thought of the UNIX example gives me solid ground to describe the created example as disingenuous.
I cannot help it if you're being misleading about how you got your examples.
> By the way, your example is precisely 12 characters longer than mine and still multiple times shorter than the equivalent PS, so if your goal was to somehow disprove PowerShell's annoying verboseness rather than snark at me, you failed.
My point was your examples were poorly constructed and misleading. I proved my point, I believe. The difference between my accurate example of PS and your inaccurate example is night and day.
I'll leave it up to the reader to decide if PS's syntax is more to their liking. I just think they should start off with accurate information so they can form an accurate determination themselves.
In other words, I won't get drawn into comparing my PS example with your UNIX example. I just want both examples to be truly analogous to one another.
Fetches http://Slashdot.org (`iwr ...`), parses it as an HTML page and extracts all the anchor tags (`.Links`), and selects the href of each anchor tag (`select href`). Since this is the end of the command, the hrefs are returned and printed as an array of strings (so one href on each line).
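The one-liner itself was something along these lines (from memory, so the exact form may differ):

(iwr http://slashdot.org).Links | select href

where iwr is the built-in alias for Invoke-WebRequest.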
Well, with just grep and curl, it'd be something like:
curl http://slashdot.org/ | grep -o "href=[\"'][^\"']*" | sed -e "s/^href=[\"']//"
But presumably this is being discounted due to the lack of HTML parsing, so not the same as the Powershell example. Then one somewhat ugly method would be using the html xml utilities provided by the W3C and available on most package managers:
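(Flags from memory, so check them against your html-xml-utils version.)

curl -s http://slashdot.org/ | hxnormalize -x | hxpipe | grep '^Ahref' | cut -d' ' -f3-

hxpipe flattens the parsed document to one token per line, with attributes emitted as lines like "Ahref CDATA http://...", so grepping for ^Ahref and cutting off the first two fields leaves just the URLs.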
Yes, a naive grep will pick up variables named href in JS, the text content of a div that contains "href", an href attribute on a non-anchor element, etc. so a utility that specifically parses HTML is necessary, but not sufficient.
I'm not sure how robust your hxpipe example is against those.
It's a shell script. If you're architecting more than that in your shell script, I would suggest that you shouldn't be doing this in a terminal in the first place. And if you're just trying to click links for a quick shell script, why not just do it via wget in the first place, rather than have this intermediate list of strings?
But since you asked: hxpipe assumes that an href on a non-anchor tag is an error and should still be represented as an Ahref... which isn't too bad an assumption to make, tbf. The other situations are dealt with (text content, JavaScript).
I'd agree it's a bit more to type out, but it does seem to make more sense from a readability perspective. And for those of us who didn't grow up with the UNIX shell to understand the reasons why things were kept so short, it's a bit easier to digest. (I do appreciate why bash is so short and succinct. =)
But really, taking the above into consideration, the might of PowerShell comes, in my opinion, not from the terms used but from how it works on things. With grep, for example, you're parsing a file; if, say, you wanted to filter that further, you'd be using awk to pick out parts of the text.
In PowerShell, everything's an object. You don't need to pick the file apart to isolate the date; you just filter on the date object. It's got a data type.
That makes it really powerful, in my opinion.
(Example largely pulled from "Windows PowerShell in Action". I really like this book, as it goes into detail to describe /why/ things are the way they are in PowerShell, because the author wrote it. =)
https://www.manning.com/books/windows-powershell-in-action-s...
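To make the object-pipeline point concrete, here's a toy illustration of my own (not from the book): finding log files touched in the last week without any text munging, because the dates come through as real DateTime values.

Get-ChildItem C:\app -Recurse -Filter *.log |
    Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-7) } |
    Sort-Object LastWriteTime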
> but it does seem to make more sense from a readability perspective
Not really, as everything is given an unreadable alias anyway, and many are given multiple aliases: Copy-Item -> copy,cp,cpi. So there are just more names to learn.
Lots of powershell cmdlets have aliases too. They are less discoverable than the verb-noun names, but for cmdlets I use often they are great. Get-ChildItem is aliased to gci, ls, and dir.
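And if you forget which is which, the mapping itself is discoverable:

Get-Alias -Definition Get-ChildItem    # -> dir, gci, ls
Get-Alias ls                           # -> Get-ChildItem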
Try links and environment variables to substitute long commands and flags, respectively. Fine, it’s not there by default, but as a power user you have the ability to adapt it to your needs. (That would also be a cool fix to release on GitHub for others like you to use as well. Yay hackability!)
Right. When I found out that this was supposed to be the alternative, I almost lost it.
For the thing to become usable, you have to do personal configuration, meaning your system will be unusable to anyone else and theirs will be unusable to you.
On Unix you can do pretty much whatever you want as long as "whatever I want" can be represented by a string of text. In PS you get to work with objects. PS is more like a Python shell than bash.
If you want, your PS cmdlets can return strings as well, so it's not like every part of your pipeline has to be in PS either.
Yeah, PowerShell is limited by its backward compatibility with DOS. The few times I had to do anything on Windows beyond a one-off, I ended up installing an sshd (there are a few options) and using PuTTY to localhost.
As a developer, my first question would be: can I dual-boot Linux on it? If so, it would be an extremely attractive developer machine: run Linux and switch to Windows when you need it. The hardware is the show stealer here.
It seems like running in a VM is a much more attractive option than dual booting these days. The performance hit is negligible (especially with 16GB RAM), you can use both OSes at the same time, and you can use all the native Windows touch/pen drivers and have them carry over to the VM.
Good point, but not ideal for everyone. Especially with IOMMU and an SSD, performance isn't so bad when it works. I haven't liked the overhead of this in the past, though; plus there's no full-disk encryption (although I guess you could do it in the guest), there's Windows security in general, there are bad 3D drivers for the guest (compositing, please), and anything that wants low-level hardware access is out. So if you don't actually want to _use_ Windows, it makes a pretty crummy hardware abstraction layer.
Sorry, I didn't really mean to say that Windows security is inherently worse, so I should probably rephrase. What I mean is I am more comfortable dealing with client security issues on Linux systems than with Windows. Of course you're right that I'm not running grsec patches.
I feel like I know how to reduce the attack surface a bit more easily on linux client systems. Most OEM Windows installations are pretty bad, so I would want to install Windows myself, sans crapware, and with unneeded built in services, apps and hooks and so on removed. If the bootloader was locked, I'm not sure whether I could reinstall the Windows OS of my choosing. Maybe these products have less crap on them though, since the OS image comes directly from Microsoft.
I didn't mean to refer to situations other than personal clients used by me, and I don't really have an opinion about this in general, except maybe: It depends... :)
> Windows security in general is better than Linux security in general.
Maybe. Have they fixed UAC not being a security boundary [1] if you are on a default administrator account? It's hard to take them seriously when most software for most users still effectively runs as 'root'.
We really need to differentiate between the two types of "security" when we're talking about them. Are you worried about your personal information being stolen? That's security. Are you worrying about being spied on? That's privacy. People generally aren't worried that the NSA will steal their personal information, run up their credit card bill, etc. They're worried that the NSA will see something that could be used against them in court, or used to target government actions against them in any way. Not to say that an NSA backdoor couldn't compromise both security and privacy, but this is a simplified view.
BitLocker is secure in that it keeps out the attackers. Whether it keeps out the NSA is a different story (one that is much harder to determine).
As someone from the EU, if it were only the NSA I would be mostly okay, but I can't trust the NSA/USG to be competent enough to safeguard the private key controlling a possible BitLocker backdoor from the Chinese and other governments running massive industrial espionage campaigns.
So far I have mixed feelings about tablets. I like the "tablet" part but not the "it is cryptographically secured from doing anything that might upset a government, corporation, or business model" part.
On the Surface Pro 3 you could. You just had to disable Secure Boot for setup, boot from a USB stick, and after installation you could re-enable it (assuming your Linux distribution supports Secure Boot signing).
Obviously nobody outside of Microsoft will know if that remains true with the Surface Pro 4.
The Windows 10 hardware certification permits the OEM to make Secure Boot impossible to disable. But the way the major distros work, they get a pre-bootloader signed with the Microsoft key through Microsoft's signing service, and that pre-bootloader contains a key that permits chaining to the real bootloader and kernel, which are signed with the distro's key. So it's not a big problem.
On non-ARM systems, the platform MUST implement the ability for a physically present user to select between two Secure Boot modes in firmware setup: "Custom" and "Standard". Custom Mode allows for more flexibility as specified in the following:
...
B. If the user ends up deleting the PK then, upon exiting the Custom Mode firmware setup, the system is operating in Setup Mode with SecureBoot turned off.
...
Enable/Disable Secure Boot. On non-ARM systems, it is required to implement the ability to disable Secure Boot via firmware setup. A physically present user must be allowed to disable Secure Boot via firmware setup without possession of PKpriv.
I can't find the doc for W10. Has the language changed? Can you link it?
For most PCs, you can disable Secure Boot through the PC’s firmware (BIOS) menus. For logo-certified Windows RT 8.1 and Windows RT PCs, Secure Boot is required to be configured so that it cannot be disabled.
which seems to imply that it is no longer a hard requirement for x86 unlike before.
I'm not finding an actual Windows 10 hardware certification requirement document. Yet Windows 10 is out and shipping pre-installed on hardware, so how can there not be a document somewhere?
"The precise final specs are not available yet, so all this is somewhat subject to change, but right now, Microsoft says that the switch to allow Secure Boot to be turned off is now optional. Hardware can be Designed for Windows 10 and can offer no way to opt out of the Secure Boot lock down."
http://arstechnica.com/information-technology/2015/03/window...
I have a Surface Pro 1, and installed Ubuntu and Linux Mint pretty easily... but I wouldn't use any Linux on a Surface, honestly; the desktop environments are far from having decent HiDPI support, and the touch experience is still far from feeling usable.
Ubuntu 14 doesn't support a lot of the latest hardware at the moment. I had an Intel i7 NUC, and installing Ubuntu 14 on it was a nightmare... You fix one thing to move the installer along, and another thing breaks. It's trial and error. Ubuntu 15 was straightforward: a one-click install.
I use both an MBP and a Win10 laptop. I really do like Win10, and honestly I probably like it better than OS X.
I have a PowerShell console open on Win10 just as often as I have the terminal open on OS X. It's just a different dialect of command line than what some folks are used to.
Unless you are reading this in lynx, you do sometimes use graphical terminals. For some people, reading websites and consuming other media that doesn't require much textual input is more comfortable in 'tablet mode'. The dual-use nature of Surface computers allows people to carry around one device instead of two, with no syncing required.
I was intrigued by the first Surface tablet until I played with one in person and realized even with the fancy Metro skin, it's still crappy-old Windows.
Indeed, Windows for development is not always as streamlined as a Unix-based system like OS X. I use a Surface Pro 3 with Windows 8.1 and wrote a Python script that opens a Cygwin terminal at the top of my screen to provide Guake-like functionality for Windows. Most development work I do, however, is done within an Ubuntu virtual machine; then again, I've mostly been doing web development over the past couple of years.
Someday someone should write a book published by one of the tech book publishers about how to set up non-Visual-Studio dev environments on Windows (RoR, Django, PHP/MySQL, etc). It seems like things have gotten to the point where developing on Windows == MSFT dev tools, and everything else is Mac/*nix.
It honestly feels a lot like all the kids in the neighborhood just got new mountain bikes and you're still using a scooter. You can usually keep up, but as soon as they go off the road, you're on your own.
VMs are 100% the way to go, but I am consistently frustrated by incompatibility, especially with the Node.js ecosystem. Windows users are a huge minority, so tools are rarely built with a non-*nix environment in mind.
It certainly feels that way, perhaps because I'm so used to apt on Linux and Windows just doesn't have any equivalent tools that I'm aware of. There's still a long way to go on the software side to bridge the gap between the Microsoft and open source ecosystems.
I've been using Windows for web development for a long time. Almost every tool comes with Windows support. There are some exceptions, but there are always alternatives.
If all you want is a dropdown terminal emulator with Cygwin/git-bash or the Windows command prompt, why not just use https://conemu.github.io/ ? You can configure it to behave just like Guake/Yakuake. No need for Python scripts.
I occasionally have to work with Windows servers. Most of the rest, for me, is somewhat painful, but Powershell is probably going to make you wish somebody would port it to unix-y OSes. Just a shame that all the convenient line-editing commands don't work.
I feel the same. The biggest annoyance is not having a better command-line application; at least provide tabs or split windows. With enough work I can get around not having Unix, but having four PowerShell windows open is annoying.
Re: Homebrew. I feel like most of the time I'm developing on OSX, I'm just trying to fill in the gaps. I wish I could get the same battery life out of my Air on real Unix.
Try Cygwin! It’s a *nix environment for Windows. I plan to install it once I get my hands on this beauty. Package management still works for CLIs, I suppose.
I'm running Windows 10 with multiple Linux server distros in the background. Hyper-V shuts them down and brings them up with Windows. WinSCP is my file manager and PuTTY is my terminal.
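(The start-and-stop-with-Windows behaviour is just a per-VM setting; from an elevated PowerShell it looks something like this, with "debian-01" standing in for whatever your VM is called:)

Set-VM -Name debian-01 -AutomaticStartAction Start -AutomaticStopAction ShutDown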
I prefer this over OS X on Apple hardware because I can get the exact Unix experience that I want without having to fight with Apple about who is in control of the machine. Perhaps more importantly, I get the desktop experience that I love, which is Windows.
If you haven't tried it out I'd suggest checking out SuperPutty (https://github.com/jimradford/superputty) - it allows for multiple Putty windows in a tabbed interface.
I use the exact same configuration: Arch and Debian CLI in the background and PuTTY on Windows 10 for my C development, with btsync for syncing files between the two.
I hate tabbed interfaces, especially on the command line.
One good reason is window management: with the Windows 10 window manager I get almost perfect tiling-window-manager functionality, but with tabs?
Thanks for this! I'm using PuTTY, and without tabs it drives me nuts. When I read the GP comment, my first thought was "yeah, same, but tabless terminals suck!" Then I saw your comment.
I really enjoy Bitvise over PuTTY. If you haven't checked it out, give it a gander. BUT, in the upcoming Windows 10 update, SSH support is coming to Windows' shell. So technically you won't need either. It will be more like Terminal on Mac. I will still be using Bitvise, just because of how it manages key files, etc.
Thanks! I have tried Bitvise a few times over the years, but it seems to be missing some features of WinSCP such as "Keep remote directory up to date...", which I use a lot. The terminal immediately seems nicer to use than PuTTY, but I have to find some way of using the Bitvise terminal with WinSCP, or I need to find that one feature of WinSCP in Bitvise's file explorer.
I love my command line and Linux-like commands and tools:
- Homebrew
- Bash scripts
- Docker (Windows 10 currently not supported)
- Vagrant
I just feel the MS tooling isn't heading in the direction I am. I still have a Windows 7 desktop, and it's just not the same.