Several years ago I wanted to fix something in Firefox, and it was far easier to just patch a few bytes in the binary (I think it might've been a string constant) than to go through the whole build process with all the resources that takes, including trying to find where in the original source to actually make the change. And even then you'd end up with a totally different binary than the one you wanted to fix, with other (bug-causing) changes you didn't want due to the different environment you built it in.
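For anyone curious, that kind of patch really is tiny. A minimal sketch in TypeScript on Node, where the filename, offset, and replacement string are all invented for illustration:

    // patch.ts - overwrite a few bytes at a known offset.
    // Offset and bytes are hypothetical; in practice you'd find them
    // with a hex editor or disassembler.
    import { readFileSync, writeFileSync } from "node:fs";

    const buf = readFileSync("firefox.bin");
    buf.write("patched\0", 0x4f20, "ascii"); // clobber the string constant
    writeFileSync("firefox-patched.bin", buf);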
Of course, if you try to fix/revert something that was purposely broken or removed for some Mozilla reason, they're not going to accept your changes either.
One of the ideals of open-source was to make it easy to modify and share software. For a lot of the larger projects, I feel like that has missed the mark. I write (mostly small and always portable) software, yet the "build bloat" is a huge turn-off. On the other hand, I've had good experience with the smaller open-source projects, those that need a minimal amount of tooling and resources to build. Some examples I can think of include libpng, zlib, and even OpenSSL.
Honestly, this shows why projects of major complexity need professional full-time developers.
I think it also shows a very minor flaw in the open source / pull request process: Expecting casual and amateur contributors to work at the same level as a professional who works with a given codebase on a daily basis is unreasonable.
In my day-to-day job, sometimes I can fix a bug in an hour, but then I need the rest of the day to update tests, avoid regressions, and think through my fix to make sure it's long-term maintainable / readable / performant. Even more important, if I'm working with code that I don't usually touch, I have to make sure I follow the expected conventions that everyone else expects; and I have to be much more careful about assumptions that I haven't learned yet.
So, a 1-hour fix easily balloons into 5-15 hours!
Perhaps the easiest way to encourage new contributors, who are assumed to be casual, is to encourage incomplete and partial fixes that the main team completes?
> Honestly, this shows why projects of major complexity need professional full-time developers.
I think the opposite: it shows why projects that care about the practical side of open source and free software should always keep their "hackability" by regular developers in mind.
I remember a case where RMS didn't want to make some change in a project (I think it was Emacs) because it would make the project harder to understand for people who didn't work on it.
And it makes sense, if you believe that what makes FLOSS good is that anyone could change and/or improve it, then it follows that you should try to maximize the pool of that "anyone".
Unfortunately, I believe that Mercurial requires a large amount of the history to reconstruct the current version of the tree. Last time I tried to pull down Firefox I was unable to, because I was working on a system with 1GB of RAM and the Firefox repository requires more than 2GB to pull down properly. I believe this is a fundamental defect in Mercurial, but I might be mistaken.
Which then means that, as a side effect, you need to reject features!
Features bring side effects and complexity. If your goal of FLOSS includes allowing anyone to add a feature, that puts your goals in conflict.
And, to get back to the topic; I somehow doubt it's possible to support all major modern HTML, CSS, and Javascript requirements and keep the code immediately accessible to newcomers and casual coders. The only way to keep the goals out of conflict is to have a simpler version of the web.
You could easily make an argument that the same philosophy will benefit rapidly growing projects for full time devs. The less "arcane" knowledge needed to work effectively in the code base, the faster new engineers can onboard, the faster old code can have small changes made without breaking something else unexpectedly, and the list goes on.
Wouldn’t it be inversely proportional to the complexity allowed in the project?
I imagine, for instance, a situation where the project starts with a clear REST approach where GET requests are purely idempotent, with no side effects whatsoever.
Then people start adding internal side effects (e.g. tracking, behavior scoring, suggestion building, and so on).
It’s harder to explain to a new dev that, from one angle, these requests are read-only and change nothing, while from another angle they change a lot of things (see the sketch below).
But it would be counterproductive to give up on features just because they make things less simple and bring a learning curve.
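To make that drift concrete, here's a minimal sketch using Node's built-in http module; the route, header, and tracking map are invented for illustration:

    import { createServer } from "node:http";

    const views = new Map<string, number>(); // hidden server state

    createServer((req, res) => {
      if (req.method === "GET" && req.url?.startsWith("/items")) {
        // Looks like a pure read from the API surface...
        const user = (req.headers["x-user"] as string) ?? "anon";
        // ...but quietly mutates state for behavior scoring.
        views.set(user, (views.get(user) ?? 0) + 1);
        res.end(JSON.stringify({ items: [] }));
      } else {
        res.writeHead(404).end();
      }
    }).listen(8080);

A new dev reading the route table sees "read-only"; only the handler body reveals the writes.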
Sure, FWIW I wasn't trying to make an argument for keeping full-time devs away. Trying to keep the codebase hackable for non-full-time devs will help full-time devs too, and even attract non-full-time but otherwise dedicated (in their free time) devs. It is a good thing for everyone involved (assuming, of course, that the project does want others to be involved; there are projects that only release code but where development happens behind closed doors).
Of course, projects generally don't start with a goal of making it hard for people to contribute; it happens organically as the project grows. But I think once a project has realized it is in such a situation, it should make the necessary changes and improvements to get out of it.
> Perhaps the easiest way to encourage new contributors, who are assumed to be casual, is to encourage incomplete and partial fixes that the main team completes?
A better way is to reduce the complexity and, especially, tight coupling in your code base. Then it doesn't cost your own developers 15 hours to do something that should take one hour either.
If you look at the gnu utilities as a "project" then it's quite a lot of code, and in a sense it is a single project. To make ls(1) useful you need a shell, to make a shell useful you need things like readline(3), etc.
But if I have a contribution to make to readline, it shouldn't really affect ls, or vice versa. And if I want to swap out bash for csh or make my own toy shell that uses gnu readline, or use bash with the BSD version of ls, all of that is reasonably possible.
Browsers do that a little. Maybe the TLS library is a separate component, or the javascript engine. But how hard would it be to use the Mozilla javascript engine to create a toy browser to play with or to add javascript support to wget? How many external projects are actually using it?
Isn't the point of a pull request to submit code that functionally fixes a bug or adds a small feature, and then work on that patch until it's of sufficient quality to merge in? Things like style formatting and running tests could be totally automated away here.
I would have to disagree, because there's a logical fallacy in your reasoning here. I agree with your point that in addition to fixing a bug, there's also the need to test, document, take account of maintainability, performance and coding standards and to do a lot more work that isn't quite so visible. And some of that burden is pushed onto the reviewers, testers and integrators of that work.
However, none of this is a justification for making the contribution process baroque and opaque.
I have noticed over the last 20 years or so that as a general rule the bigger the project, and the more closed the development process, the more baroque and opaque the contribution process is. Larger projects with more management layers seem to accrete unnecessary complexity and process, and generally behave in a more insular manner than smaller projects. I don't think things have to be this way. But I do think that keeping the processes simple, accessible and visible is important and that projects need to spend some effort upon keeping their projects open to external contributions. It's all too easy to adopt processes which work internally but deter and confound external contributions, and I think that should be something which project managers should be aware of and try to avoid doing out of short-term convenience.
This is IMO one of the under-celebrated benefits of Linux distros. You should be able to (e.g.) sudo apt-get build-dep firefox; apt-get source firefox; cd firefox[TAB]; debuild and have yourself a working package. Whatever dependencies it needs are all installed in a standardized way.
Not 100% sure I follow you, but isn't that what Gentoo provides? I can easily tell Portage to fetch the sources. If I want to make a change, I create a patch with my changes, and put it in the patches directory where the Firefox ebuilds live. I don't even think you have to edit the ebuild to add a patch - just name the patch properly and drop it in the right place.
Yes, basically any Linux distro or equivalent product (like Homebrew) provides this. I'm drawing a distinction between having a distro and putting the pieces together yourself, not between any two distros. (I'm mostly familiar with Debian but I think it is just about as easy everywhere.)
I'm not sure that's an applicable term here. I see "dependency bloat" as an application having a lot of unnecessary dependencies, like when I start a basic project skeleton in Hot Javascript Framework of the Week and get 1,000 dependencies loaded in (which, in npm land, is a conservative number; I felt bad about saying 10,000 but from what I've seen it isn't necessarily far off the mark).
But when you have a project that really does touch every XWindows library there is, and really does need to know about every major image format, and really does speak every major network protocol, and runs a full JIT interpreter and compiler, and integrates with every OS security mechanism there is, and uses every build protection library it can find, and on and on... well, better those dependencies than re-implementation of all that from scratch.
Fortunately, there aren't very many things nowadays other than a browser that have that characteristic. (Office suites, maybe. Qt and other similar things considered as a whole, but not necessarily what any given user of Qt will need.)
I was indeed thinking of npm when I wrote that comment but we are starting to see similar problems in some large rust projects. It's simply too easy to add a new dependency.
Anyway, some poster child FOSS projects in C++ take hours to compile due to dependency bloat.
Sometimes you depend on stuff you don't even use, or your dependency depends on it, or the dependency only buys you a small advantage. E.g., in Rust the failure crate saves you maybe ten lines of code, but it has tons of dependencies. Or the lalrpop library crate depends on things only the binary crate needs, because both are named the same way. You end up downloading and building things you don't even need. This stuff is fixable; the problem is that people don't care much about it.
I think browsers are somewhat unique even among "large projects" for the sheer breadth of their codebases. Everything from networking to graphics to MIDI is somewhere in there, and it all has to sit behind a somewhat common interface. It just has to be complex, no matter how well the project is organized.
On a sidenote, firefox has fantastic build times (like 20-45 minutes cold), and the codebase is fairly well organized, so if you want to try contributing to a browser, that's probably the one to start with. Chromium has a much nicer source IMO, but cold build times can be as high as seven or eight hours.
Browser MIDI is kinda half-baked- you can use it to do most MIDI things, but it’s not visible to the rest of your MIDI network. So, for example, you can make a web page that functions as a MIDI synth, but your recording studio app won’t be able to select it as output.
Also, they expose MIDI messages as arrays of bytes rather than parsing them, which is a weird choice.
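For illustration, this is roughly what that looks like: the Web MIDI API hands you raw bytes per message and leaves decoding the status byte to you (a sketch assuming a module context for the top-level await):

    // Log note-on events from all MIDI inputs.
    const access = await navigator.requestMIDIAccess();
    for (const input of access.inputs.values()) {
      input.onmidimessage = (e) => {
        // Raw bytes, e.g. [0x90, 60, 100] = note-on, middle C, velocity 100.
        const [status, note, velocity] = e.data;
        if ((status & 0xf0) === 0x90 && velocity > 0) {
          console.log(`note on: ${note}, velocity ${velocity}`);
        }
      };
    }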
I recently discovered Chrome had MIDI as well while looking for a music sight-reading app. Unfortunately, Chrome for iOS does not support it even though Android does.
I always have a ton of problems getting C binaries building. Java and JS are generally super easy. Python and Ruby somewhere in the middle. Kinda a proxy for cross platform compatibility I guess.
Some projects now just build in their own docker container. I hope this becomes more common, it's always a single click build no matter how many crazy dependencies you need installed.
I don't think that whether a program is small or large is the issue. The problem here is that most build systems suck.
I work on large-ish scale Rust software (~250kLOC), we ship to all major OSes and many minor ones: android, macos, ios, windows, linux, freebsd, openbsd, ... we build binaries for x86 (32 and 64 bit), arm6, arm7, arm8, arm64, ppc, ppc64le, sparc64, s390x, wasm, ... and many platforms (e.g. glibc and musl on linux).
We did pretty much all of this with a wrapper over Rust's cargo build system called "cross": to build for a target, we just had to write `cross build --target=...` instead of `cargo build`. If you want to run your tests for a target, you can actually write `cross test --target=...` and your tests will run.
For a long time, that was it. Pretty much zero setup, cross compile ready-to-publish binaries to any target, run the tests of your program on that target, all of it by just writing cross instead of cargo. It just worked.
The way cross works is quite simple. It comes with a bunch of maintained Docker images containing cross-compilation toolchains for C, C++, ..., the right versions of kernel headers, glibc/musl, some utilities, etc. These also come with qemu-user or qemu-system, depending on the target. So cross uses the Rust compiler, the C toolchains, and the appropriate libraries to cross-compile on your HOST, which is quite fast, and when running the tests it often just runs them under qemu-user, which is quite fast as well. For Windows targets it uses wine; for Android targets it installs the SDK, NDK, and emulators; for iOS you can use the iOS simulator that comes with Xcode; etc.
Cross is, by far, the best damn build tool I've ever used. Zero configuration, and it not only worked, it did a lot. Want to target sparc64? No problem: cross test --target=sparc64-unknown-linux-gnu compiles your project, runs your tests, and you are good to go.
The reason we stopped using it wasn't that our project got big. It was that, at some point, the build tool wasn't enough anymore.
We added more dependencies and needed to modify the Docker container slightly. cross lets you reuse their containers to build your own, adding whatever you need, so we did that for a while, and we still do.
The macOS support wasn't really there, so on macOS we were running things natively rather than through cross. There is another Rust build tool, like cross but for macOS, that supports iOS, so you could run `dinghy test --device=...` and it would cross-compile to the device, copy the tests to the device, and run them on an aarch64 device. That meant having to ssh into the right machine, install the tool in your account, and do all of that just to see if a test fails.
At some point we wanted to start supporting other targets, like FreeBSD and OpenBSD. Both operating systems are, in my experience, very hard to cross-compile to, providing no cross-compilation toolchains for Linux at least. So with cross we were compiling for FreeBSD under qemu-system, which was super slow. When we started supporting OpenBSD, we ended up spawning native FreeBSD and OpenBSD VMs and compiling directly on those hosts.
So how are we doing all of this right now? Well, we initially forked cross, which is written in Rust, and hacked some specific modifications in, but at this point we just wrote our own crappy duct-tape-ish Python script that calls cross when it can and does something else when it cannot.
This is where the barrier of entry for new developers appears. We aren't using cargo, cross, or anything else that they are familiar with. We are using an undocumented, super hacked, unfamiliar solution, that calls other build systems. Most of the time, `./b.py build` or `./b.py test --target=xyz` is all they need to know, but if they need to modify our build system, the curve is steep, because they now need to learn about all the horrible problems it solves and the super hacky ways in which it solves them.
So why do Firefox, Chrome, and other big projects have their own build systems? Being big has nothing to do with it. These projects run on all versions of all OSes, on all hardware those OSes support, with all possible user-system configurations; they run the tests that must be run for each of those configurations; and they generate and publish documentation, packages, and artifacts in all sorts of formats to all sorts of domains, all while preventing hundreds of developers from breaking everything with their changes.
Imagine if we were to solve this problem by writing Makefiles, CMake, SCons, or any other crappy DSL of any build system in existence. I'd rather suffer for all eternity in hell.
Yup. And the fact that you don’t need a build system is one of the things that makes JavaScript so great.
Of course, lots of people add build systems to their JS projects, but you don’t NEED one.
I spent the last five years writing a web development toolkit that lets me keep doing JS work without any build step. It’s been a lot of work but it’s so dreamy.
I hardly have to worry about a black box ever. My sites start more or less instantly. If anything is wrong anywhere in the system I just put a breakpoint and trace execution. There are no “artifacts” to puzzle over. It’s the best.
Check out browser-bridge on NPM if anyone is interested.
No need to; HTTP responses are compressed already. Also, it makes debugging production issues a nightmare.
Minification helps a tiny bit with client-side memory too I guess, and parse time. But I save so much memory by not putting random pointless crap into the browser environment from webpack, Babel, etc. That's where I want to make my savings, not shaving a couple chars off a variable name.
Interesting! Not useful for me because I target 100% browser compat. I try to code in ES5 exclusively. I don't like leaving poor folks with old computers in the dustbin.
The reason JS works everywhere is that Mozilla went through the effort to ensure their JS engine compiles and runs everywhere. JS operates at a much higher level of abstraction.
Or, ask someone from an undeveloped country why they go fetch water from the local well when you could just pour some from the tap.
I disagree with "One of the ideals of open-source was to make it easy to modify [...] software". I think the ideal is/should be "make it possible to modify software".
There's a lot of cognitive dissonance around "why are you creating your own when X exists" and then saying, "Well I don't have to do a good job on X because it's free. If you don't like it make your own."
We can't actually handle an infinite number of solutions in a space. If I enter that space and do not leave, I'm carving out territory whether I want to or not. There's an opportunity cost to me producing a library, the way there's an opportunity cost of throwing a party (don't think so? Throw a party the same time as your friend's birthday party and see how that goes over). And like a party, you owe it to your guests to put in a modicum of effort. They are perfectly entitled to badmouth you if you show up and do a lousy enough job.
Showing up is step one. It's the most important step, but it's not the only important step.
If I were going to hire full timers to maintain a piece of FOSS software, I think I might start with a designer. You need conceptual coherence and a designer is one of several ways to get that. Next I might hire a toolsmith. Someone to create a coherent system for contributing. If there's any money left over after that I'd start courting some of my most prolific contributors (although they may be prolific because of the project they work on, and taking them off that project may kill the oracle.)
It was always "possible to modify software". PC magazines from the 80s and early 90s even published patches of the form "change bytes at offset X from Y to Z", almost always discovered by someone with a debugger and not the original author. There are quite a few unofficial Windows patches around too, and that was never open-source.
Perhaps it's time we rid ourselves of the misconception that source code is necessary to understand and modify software. It may make it easier, but it may also make it harder (by, among other things, helping to propagate the misconception.)
Imagine if RMS didn't base the GPL on the availability of source, but simply enforced the legality of understanding and distributing software. I suspect the "free software" movement could've become something far more interesting and powerful, and reverse-engineering techniques would also become far more advanced than they are today.
Some shameless begging: if anyone picked up https://github.com/tridactyl/keyboard-api, it would make Vimperator-style WebExtensions much better as we'd be able to respond to user input everywhere in Firefox. We haven't been able to find the time to work on it. I'd be delighted to help anyone who was interested get started.
Is this article a bit outdated? I have noticed some projects not only switched their repositories but also programming languages, e.g.:
> If you know Python, you can contribute to our web services, including Firefox Sync, or Firefox Accounts
Fairly recently (about 6-7 months ago), the syncserver got rewritten in Rust [0], while the Python codebase is a bit stalled [1]. I guess Mozilla is going to switch to syncserver-rs soon, because they started an implementation for Google Cloud Spanner, and this looks like a green light for future deployment.
> If you know Java, you can contribute to Firefox on Android, Firefox Focus for Android and Firefox for Fire TV.
There is a new Firefox Preview on Android, which might disrupt the mobile browser market share soon (if it gets add-on support). Some Mozillians also worked on projects like Lockwise [2] or Firefox Notes [3], which are written in Kotlin, not Java.
Unfortunately it's the nature of a wiki to get out of date. There's plenty of opportunity to contribute to services used by Firefox users or developers whether you write python, rust, JavaScript, golang, or something else. Take a look around https://github.com/mozilla-services/ and https://github.com/mozilla/.
I got interested in contributing to Firefox in the hopes that it would lead to a job.
My thought was since obviously I could do the job, then they would hire me without having to go through the usual interviewing steps.
Every time I have applied, I never got past the recruiter. Never got a call to talk about my patches, just a generic rejection response.
Then I realized that I was spending too much effort on a single company, so I stopped contributing and applying. Seems leetcode grinding would be a higher reward for the effort.
Depending on how significant your patches are (no hello-world-123/typo/style fixes), putting "Contributed to the Firefox browser" on your CV/cover letter, where I can then find your email in the source code or AUTHORS file, is an instant on-site from me. You don't need to show me that you can program, write tests, or use version control. This saves me lots of time in finding candidates who are experienced, rather than throwing out a leetcode/hackerrank puzzle which can be easily cheated on or plagiarized.
I think I might have said this before on another HN thread, but of course as an interviewer, I sometimes double-check the CVs and candidates to look for 'hidden qualities' such as open-source contributions. If one recruiter overlooks a candidate, I really ask them why and if the candidate DOES have significant and relevant patches, I actually bypass their decision and I bring them in onsite.
I find these technical programming tests useless for finding qualified candidates unless you are preparing for a programming olympiad or some other competitive programming competition. But open-source contributions allow me to find the patches you made in code review tickets and see them in the open.
It would be better if more companies used open-source contributions as acceptable evidence of relevant experience these days.
(disclaimer: I'm a noogler, and I failed a Google on-site once).
I think that citing this example every time "hiring at Google/Big tech" is discussed on HN is useless.
We know that there are a lot of false negatives and a lot of chance in the hiring process (so many things can fail in a process involving so many steps and people).
There are also legitimate rejections, like, hm, bold statements ("90% of your engineers use my software"), finger-pointing, or not being able to ask for assistance when you don't know something.
I discussed interviews with many people, and I'm convinced that solving the problem with the theoretically-optimal solution is not the only way to get a positive review. Being open about things I didn't know helped me: for instance, "Hm, I'm not sure about how to do this exact thing, I'll use the function do_this_thing() and implement it later if I still have time, if that's OK for you" buys you time and allows you to keep the discussion focused on the main problem.
I don't think his statement, while bold, is at all unfounded. If he truly was rejected for not being able to invert a binary tree on a whiteboard, but has a proven track record of writing excellent software, I would agree with him that there are some serious issues with the Google/Leetcode/2nd-year-CS-trivia style of interviews.
>I would agree with him that there are some serious issues with the Google/Leetcode/2nd-year-CS-trivia style of interviews.
I think both you and your parent are in violent agreement.
It's not as if many people who were in FAANG haven't admitted there are serious issues with the interview process. Even advocates of the process are open about it.
No one has found an interview process that lacks serious issues. Hiring people solely on the basis of major contributions to software has its own serious issues.
The placeholder-function strategy is something that should be stressed more; in my experience, it's always what the interviewers want you to do if you're stuck on some detail of the algorithm.
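A hedged sketch of what that looks like in practice (the names and the toy problem are invented):

    // Stub the detail you're unsure about and keep the main algorithm moving:
    // "I'll implement isOpeningBracket properly later if there's time."
    function isOpeningBracket(ch: string): boolean {
      throw new Error("TODO: fill in after the main walk-through");
    }

    // The part the interviewer actually cares about stays front and center.
    function maxBracketDepth(s: string): number {
      let depth = 0;
      let max = 0;
      for (const ch of s) {
        if (isOpeningBracket(ch)) max = Math.max(max, ++depth);
        else if (ch === ")" || ch === "]" || ch === "}") depth--;
      }
      return max;
    }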
Well, he was invited for an on-site interview at Google rather than being a direct hire. But after this rejection, Apple directly hired him to work with the Xcode team.
In both cases he bypassed the recruiter and leetcode stages which are common steps before the on-site interview at most companies.
I would agree with the OP that, on average, open source contributions seem to have very low impact on the chances of getting hired somewhere, and on the roles you're considered for.
I also submitted patches to various projects, including some quite impactful ones that the associated companies used and that e.g. increased performance significantly. I also wrote a library that one of the companies (in the extended FAANG circle) needed but hadn't managed to write on their own in considerably more time.
I have never been asked about any of those things in a typical interview loop. I think this is due to a mixture of interviewers not knowing the technologies themselves (there are actually very few developers who know their foundational technologies in detail) and interview loops being mostly standardized on CS basics (extended FizzBuzz).
If the main goal is to land a good job, I wouldn't recommend trying to get recognition through open source projects. Doing Leetcode exercises, having a good network, and doing some job-hopping from time to time might help a lot more.
However, I still think contributing to OSS projects can be awesome, and deeply satisfying in itself.
Are you currently hiring? I’ve made many contributions to the VSCode and ApolloGraphQL projects over the course of several internships. Currently looking for a full time position.
My hunch is that you have a blind spot that’s getting you flagged at the recruiter stage. I’ve interviewed a ton of devs, I’d be happy to run through a sample interview and give you feedback if you’re up for it.
I’ve never had a job interview before. What kind of things should be focused on when preparing for one? (I’ve still got a few years yet until I need to start thinking about jobs though.) I personally feel my confidence is a big weakness, but I guess that’s different to technical ability. Should I start with learning different algorithms, data structures, etc? Thanks
You need all of that (structures, algorithms, confidence, etc), especially if you're going for a FAANG. But the biggest thing I see new grads lacking is the ability to communicate and collaborate effectively in the workplace and on teams. At most smaller startups that's maybe even more important than raw ability.
Find any way you can to gain practical experience. Internships, consulting, open-source. Really anything. The bonus is that you'll gain confidence through this path as well. You'll know you're capable of doing the full job (not just the work).
If you've made a number of contributions to a project such as Firefox, there should be existing employees or senior project members who are familiar with your work (having reviewed your patches, etc.) Asking those people if they'd consider recommending you for an open position might then be a way forward.
I made some contributions (not very many, to be honest) to big open source projects. I was interviewed about 10 times in recent months, and not a single interviewer was aware of my open source contributions, despite there being a link to my GitHub in my resume. Also, when I mention my open source contributions, interviewers usually aren't interested.
It's a remarkable amount of work to discover the significant and relevant contributions someone made to OSS just from their GitHub profile. Lots of people have forked millions of well-known projects. No interviewer has time to go through all of them to find the ones that were forked for good reason, i.e. to make amazing PRs, and not just to fool around a bit.
Also, the contributions graphic (with the colored boxes) is often off-putting, because many people tend to also use GH for random weekend hacks, which are shown as "contributions" just as much as "oh I made Firefox 2x faster on BSD" kind of things are. To GitHub, a commit is just a commit, and "I ran create-react-app" and "Fixed impossible 10 year old race condition in Hadoop" are both shown as a green box.
Spell out your relevant OSS contributions in your CV and/or your email. Don't just link to your GitHub profile, GitHub profiles are shit.
To be honest, if you only link to your GH and then conclude that interviewers aren't interested in your open source work, then this gives me the idea that you're not that interested in your interviewers either. Help them out a bit!
I interviewed at three companies over the course of a few weeks once, not one asked me about anything on my GitHub profile nor about my blog (200+ technical articles). I was a bit miffed. At the last one I asked to see the CV they’d been given; the recruiter had removed those links from my CV. (This was in Austria.) So had I worked on open-source or blogged solely to get a job, it would have been a complete waste of time.
Speaking for myself, I’ll put in time on contributing to a codebase when that codebase is something I use and like. I’ll include links to significant contributions when someone wants to see my OSS experience. If it helps in their decision process, great. If not, I still got what I wanted, which is to better understand something I use and, to a lesser degree, improve it.
When it comes down to it, nothing ever entitles you to face time. You put stuff out and hope for the best, but it's always a risk. Nothing is guaranteed.
This is how I got hired but that was many years ago. Getting hired wasn't even my intention, I just wanted to contribute. Mozilla has become bigger and things are more formal now. It's certainly still a big plus if you're part of the community and can let your work speak for itself, even more so if you captured someone's attention who'd like to get you hired, but you most likely won't be able to skip the interview process.
>My thought was since obviously I could do the job, then they would hire me without having to go through the usual interviewing steps.
I would certainly hope not. It's one thing to fast-track from an internship or a coding camp, but any company willing to do that for an open-source contributor they've never interacted with in a professional environment would have serious problems with its hiring standards.
If it's already obvious that someone can code, then parts like leetcode look like either dimwitted corporate inflexibility, or a transparent hazing power play.
It seems like the majority of people in supposedly innovative dotcoms have oddly lumbering-big-corporate-drone ideas about this right now.
Someone thinks they shouldn't have to do a distracting battery of whiteboard leetcode hazing, when obviously they can code, and they're happy to have real technical discussions-- and we're turned off by their prima donna arrogance. Their failure to comply with every unreasonable or abusive corporate demand is a showstopper. (That sounds like the kind of person who might object if they find out our entire business model is secretly spying on people and controlling them!)
If the applicant further suggested that they would also like some basis for evaluating their prospective team members (just normally; not whiteboard-hazing them), we'd totally flip out, and for years afterward we'd be mocking the arrogance of that one uppity jerk who didn't appreciate how our revered VC fore-funders commanded us, over 10 years ago, how software development culture works.
Because a bootcamp requires face-to-face interaction. Contributions are no indicator of conduct - they just show you can submit patches. How you behave in a professional environment has to be judged from actually speaking to you.
Whoa, you are saying that demonstrably skilled practitioners without face time need an interview (fine), but people who got face time with somebody else, with no demonstrated professional skills, deserve to skip the interview?
On that note, why should someone want to contribute to a project controlled by a single company if they don't need those contributions for their business?
Many open source projects are far removed from the ideal of a disjoint group of hackers making something nice in their own time. Contributors to Firefox, Chromium, Linux, etc. are usually paid to do work that enables things the company they work for wants [1]. If your contributions are significant but don't further those goals, you may be unable to get them merged. In my opinion, don't devalue your own work when you could spend it on a project of your own, or on a smaller project that's genuinely a community effort.
Are you saying that having previous Open-Source contributions to Mozilla in the portfolio would mean you're not going to have a chance with a job there?
edit: I'd love to be wrong here. Feel free to correct me instead of just downvoting.
GP was saying that you can avoid talking to the useless HR drones and get in touch with someone more technical at the company, and have them recommend you for a position, based on your contributions.
That's really weird. They reached out to me to do R&D with them, and the work I did at that point in my life couldn't be further from what they do. I don't think there's very much rhyme or reason to getting people to contact you for a software engineering position.
I've had Google, Amazon, Stanford University and a host of other prestigious places reach out to me and go in good directions. On the other hand, I have never heard so much as a peep from Microsoft or Intel despite dozens of applications. I had friends that got into Microsoft right out of college and their programming ability wasn't above average nor were their social skills that much better. I have friends that I worked with that were at Intel and they complained that they could have done their job straight out of high school; conversely, most of my work up until recently required graduate level math.
It indirectly helped my career: it gave me confidence that I could be an occasional contributor to OSS, and I learned valuable things with each contribution.
One great resource for those interested in contributing to Mozilla, is mikeconley's live coding sessions[0], which are cataloged here[1]. These can be used to help understand internals.
Chrome is a dangerously slick browser; I found it takes a determined effort to switch and eventually get comfortable with Firefox as your primary browser. But we should support Firefox, and give effort and user feedback where we can, to get it to the same competitive level.
Can you elaborate on what you found most uncomfortable or difficult to adapt to?
I switched to Firefox ~1.5 years ago and it does everything I need, actually never picked chrome up again. The android version is very good as well, but I just switched to iOS and it's not a very smooth experience there.
I'm a long time Firefox user but unfortunately Firefox has bad issues with performance/battery usage on MacOS. Multiple long standing issues are open. (such as https://bugzilla.mozilla.org/show_bug.cgi?id=1404042 - open for 2 years)
I love Firefox but I had no choice and had to ditch it on MacOS.
It seems Mozilla does not have the resources to fix these problems.
Firefox has performance and general software-bloat issues on all platforms. They sacrificed their original "tailor the browser exactly to your needs" approach on the altar of keeping up with Chrome on features.
To be fair, Firefox actually has a lot of good stuff and you can get its UI to match the Chrome experience quite easily. The Developer Tools are also quite good.
Biggest issue for me is not being able to import passwords from Chrome. I have to switch back to Chrome if I need to access some web services quickly, so building up saved passwords in a browser makes it more sticky. There's probably a reason for not being able to do this though.
It also starts to get a bit sluggish when you have a couple (6-7) heavy JS/DOM tabs open.
File download UI could be better. Skip the confirmation dialog, show file and progress at the bottom of the browser window.
Firefox developer tools are awesome and they get better with every release! Lots of cool stuff recently added for debugging CSS related issues.
JS sluggishness might be because of a recent Firefox Pioneer study. I've experienced it for the past week, and in the end I uninstalled Firefox Pioneer completely because of it.
imho, file downloads are actually great. When you have active or recent downloads, you get a nice little icon to the right of the URL bar. For active downloads it shows the total progress of all downloads. Once clicked, it opens a small menu with your downloads. When you don't have any recent/active downloads it just gets out of your way. Much better than that ugly bar at the bottom of the window.
You can browse to https://passwords.google.com from Firefox if you want to look up one of your Chrome saved passwords (provided they're synced with your Google account of course). Not quite as convenient as a bulk import I admit but it's how I slowly migrated my passwords across as and when I needed them.
Consider using a third party password manager - helps in not being locked into any particular browser. I find LastPass to be quite good for my purposes
I switched to Firefox at home about two months ago due to Google's plans to restrict ad blockers. It's mostly okay, but there are some things that bother me. For example, some pages never get auto-completed: I can visit those pages as often as I want, and I still have to type in the full URL each and every time. This kind of sucks, because auto-completion is often far faster than looking for the respective bookmark among a hundred others. It also frequently crashes, not just the tab but the whole browser, during WebVR development and debugging.
I use Chrome (logged in with Gmail) for "work" and Firefox in parallel for all the rest (banking, docs, general browsing).
For a program that has one textbox (two at max, if the search box is enabled), I find it difficult to understand what one has to take determined effort to get comfortable with.
I'd really like to work for Mozilla on Mozilla products, but somehow the interview process is broken.
I've managed to go through 3 interviews (2 technical, one with the team coordinator) but was then rejected because they found someone else with more years of relevant (?) experience on the topic for the position.
Now they write this. So my question is: why don't you open more positions, letting people be hired (and paid) to work on products, instead of looking for charitable work from volunteers?
> I'd really like to work for Mozilla on Mozilla products, but somehow the interview process is broken.
> I've managed to go through 3 interviews (2 technical, one with the team coordinator) but was then rejected because they found someone else with more years of relevant (?) experience on the topic for the position.
I'm sorry I don't really understand. They interviewed you and found other people who (they thought) were more qualified. In what way is the interview process broken?
> Now they write this. So my question is: why don't you open more positions, letting people be hired (and paid) to work on products, instead of looking for charitable work from volunteers?
I can only assume it's because they think there are better places to put their resources.
> Mozilla uses Phabricator for code review. Either use Mozilla's Phabricator instance (preferred) or attach a patch as an attachment to Bugzilla.
Oh neat, they're using Phabricator, the self-hostable open-source github/gitlab-like repo host/code review system spun out of Facebook. I use this in the office and everybody loves it.
Have you searched the bug tracker for proposals to implement this? Have you submitted these requests? Did you review any guidelines or outlines of features that Firefox should support?
Leaving a comment on HN to (what seems to me) complain is easy; actually going to the issue tracker and mailing lists would be a lot more productive.
I suspect you'll find that it has been proposed but rejected. I suspect it's because support can be easily added via addons, keeping the core more lightweight and extensible.
Cliqz made their own browser based on Firefox, and a Firefox extension. Pocket started as a Firefox extension and even when it got integrated into Firefox it was technically still an extension. If anything these cases seem to indicate that the core is lightweight and extensible. What's your point?
I remember reading the issue on adding torrent:// years ago; the gist of it was "make an extension for that, and if it becomes popular we might consider it".
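For reference, the page/extension side of that is small. A hedged sketch with registerProtocolHandler (the handler URL is hypothetical; note that Firefox only allows a safelist of schemes, magnet: among them, while arbitrary ones like torrent:// would need a web+ prefix):

    // Route magnet: links to a web handler; %s is replaced with the URI.
    // Older Firefox versions also required a third "title" argument.
    navigator.registerProtocolHandler(
      "magnet",
      "https://example.com/open?uri=%s"
    );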
> If you know JavaScript or HTML/CSS, you can contribute to the front-end of Firefox
Huh? I'm interested in that, but I'd think most of the GUI's behavior is in C++ and hopefully threaded for responsiveness? And only the top ‘fluff’ layer is styleable, possibly being made with XUL.
Alas, links in that post don't seem to lead to architectural overviews of the components.
That's not true. The majority of the front end is in HTML+JS+CSS these days. Most of it is in the /browser and /toolkit directories.
The rest is the Gecko engine, which is written in a mix of Rust, C++, and JS. We're moving more to Rust on the back end but staying with JS on the front end.
You can develop almost any feature of Firefox without ever leaving JS.
Wasn't the story of Stuart Parmenter that he was flown out to California to help with the browser, based on the open-source contributions he'd made from his parents' house in some midwestern town?