Contributors volunteer skilled labor that can help a project be even better. The worst thing a maintainer can do is dissuade them from helping.
A good rule of thumb is to assume positive intent from contributors. People who are requesting changes are trying to solve problems; rarely are they doing it for personal aggrandizement or to fulfill some philosophical mission.
One thing that really upsets me when I propose changes -- especially when they're accompanied by code -- is getting a thumbs-down from (or, even worse, an issue closed by) a maintainer, without any constructive feedback as to how to resolve their concerns. I can work with someone who can inform me about their concerns or the weaknesses of my proposal, and who says, "yes, but...," but I can't work with someone who just says "no."
In my own projects, I have a rule of thumb: I never close an issue without the consent of the submitter. I try to ensure that I've either convinced them there's a better way to do what they're trying to achieve, resolved their problem as best I can, or been upfront that I simply don't have the resources to give them a proper resolution.
> rarely are they doing it for personal aggrandizement or to fulfill some philosophical mission.
Sometimes. Spending hours tweaking a CoC or changing pronouns in documentation is not solving any problems and results in unnecessary back-and-forth, flame wars, HN posts, etc.
In an ideal yet non-existent world, that is true. In reality there are also contributors volunteering not-so-skilled labor, resulting in quite a bit of overhead for the maintainer. That doesn't make the other things you said any less good advice, but you almost make it sound like it's all flowers and sunshine.
One thing I wish GitHub made easier is responding to a pull request with a patch rather than encouraging reviewers to type up a bunch of suggestions that the original submitter will have to turn into a patch themselves.
If you have a better idea of how to accomplish part of a suggested change, you can communicate it more clearly by making a patch and leaving a comment explaining why. GitHub should encourage this and give the submitter an interface to easily incorporate that patch into their PR.
This is especially useful in situations where the only changes are in documentation or writing rather than code. Dealing with a dozen responses of "s/topy/typo/" should be as simple as clicking a few buttons to accept all the corrections. It would be less work for both sides.
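In the meantime, the workflow can be approximated with plain git: the reviewer commits the fix locally and exports it with `format-patch`, and the submitter applies it with `git am`, which preserves the reviewer's authorship. A throwaway-repo sketch (all names, paths, and commit messages here are made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo && cd repo
git config user.email reviewer@example.com
git config user.name Reviewer

# Submitter's original commit, containing a typo.
printf 'fix the topy\n' > README.md
git add README.md && git commit -qm "docs: add note"

# Reviewer fixes the typo and exports the commit as a mailbox-format patch.
printf 'fix the typo\n' > README.md
git commit -qam "docs: fix typo"
git format-patch -1 -o "$dir/patches" >/dev/null

# Submitter (simulated here by rewinding) applies the patch with one command.
git reset -q --hard HEAD~1
git am -q "$dir"/patches/0001-*.patch
grep typo README.md
```

The `git am` step is the part that a "click to accept this patch" button in the web UI would automate away.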
One of the killer features of Gerrit for code review is web-ui editing that can be turned on for the author and other contributors. See a typo? Feel free to click "edit" & fix it, saving everyone time. The edits increment the patch-set number, so no information is lost.
The Gerrit maintainers are also working on the idea of comments that include a proposed fix, letting the author just click "apply" to patch the change in the editing interface.
Yes, that is very nice, but it doesn't help if I am not a maintainer of the upstream project.
Since I haven't used it, I'm not sure, but it seems like it would also help avoid the case of "Can you squash this before I accept it?" leading to pointless delays and extra work.
Sorry, I thought you were speaking as a maintainer. You mean reviewing someone else's PR on a project you're not a maintainer of? Yeah that's "send a PR to their branch" right now.
> Does it help avoid the case of "Can you squash this before I accept it?" leading to pointless delays and extra work?
Absolutely, and I've done that before, but the web interface doesn't really help with this. You also need to alert the original thread about it manually so others can follow the thread.
If you have five different suggestions, should that be five separate PRs to their feature branch? Probably? Or they will just need to modify or rebase out the commits they don't want.
Anyway, I just like all solutions where "doing the work" is encouraged over "talking about doing the work." Often, the doing takes less time for both sides.
It would be great to have "PR patches" as something handled gracefully by the web interface. Even if they're just branches under the hood, a "click to accept this change" feature on open PRs would be a nice way to encourage simple fixes.
The worst maintainers are the ones who refuse to say no when they mean no or want to say no. I've seen maintainers waste weeks or months of a contributor's time on work they never intended to accept because they were trying to avoid confrontation up front.
I'd say that's a distinct skill that requires a different approach. Big projects have this problem less, because they have so many conflicting stakeholders and they get so much more practice rejecting random patches. But for little projects with one or few maintainers it's easy to look at every random contribution and be flattered that someone would bother to take the time to try and give back, which at the same time predisposes you to feeling like rejecting them would be some sort of betrayal. It's important to be magnanimous here when possible, but at the same time when they inevitably ask "why wasn't this accepted?" sometimes there simply isn't a more satisfying answer than "this doesn't fit with our unspoken philosophy of what the project should be".
Working with the DCSS maintainers is great, though. I recently contributed some code to add FreeBSD support to DCSS, and the experience was excellent!
The Compliment Sandwich is known to be a poor technique for providing criticism. It doesn't help the person you're criticising (people see straight through it), and it negates the compliment. It also braces people for criticism the next time you pay them a compliment.
Worse still, it distracts from the message you are trying to convey.
There is a way of being critical without making a person feel bad, by being factual and not making it personal, and by remembering that you are trying to help that person in some way.
One of my favorite tricks as maintainer for a moderately popular project is asking every user to give themselves credit in the release notes, e.g.,
- Fixed a terrible bug that causes lots of crashes. By John Smith.
Contributors feel great seeing themselves get credit. New users see all the different contributors listed, which gives them confidence that the project is well maintained. And finally, contributors are much less likely to forget to update the release notes!
> But before you fly off the chain, prove your undeniable superiority, and prove that they are wrong, let me suggest instead that you do something crazy, that you do the opposite: that you prove they are right.
I'm a bit uneasy about this. If you spend your time agreeing with a suggestion that your initial reaction is to reject, you probably won't have the patience to go back and question it after the fact. You may also "brainwash" yourself into thinking it's a good idea.
I agree that you shouldn't knee-jerk into rejecting an idea, but I believe thinking critically is important. Take the opposite side, but do it civilly. Come up with counter-examples, but phrase them constructively: "have you considered this problem, do you think it's an issue, can you think of a solution?"
An important aspect of thinking critically is the willingness to entertain opposing ideas as true. If you can reject an opposing position even while granting it the benefit of the doubt, then that says good things about the strength of your original position. But if you can't reject such an opposing position, then that helps to inform how you should respond to people with that position since now you know that, rather than being illogical, it's merely that their axioms are different.
I think you have to differentiate between thinking and communicating. Thinking comes first, unhindered by concerns for social niceties and hurt feelings.
Then, you communicate by expressing what you've concluded in a way that does account for the fact that you're conveying a message to a human being, with all the baggage that entails.
I agree, at least in general: I think of critical thinking and communication as overlapping but very distinct skills.
I believe that if an idea hasn't been articulated yet (put into words, whether in writing or speech or thought), it hasn't been fully formed.
There's a saying that you don't truly understand something until you have to explain it to somebody else, and that's because it's not until you have to explain it to somebody else that you are forced to articulate the entire thing by serializing it (ugh) into words. Putting things into words nails down meaning in a way that can make logical problems or false assumptions much more obvious.
So, IMO communication is a two-pronged challenge. One challenge is articulating something that may have previously existed in a partially-formed state in the back of our heads. The other challenge is adapting that articulation to be as lossless and efficient as possible when directed at a specific audience. Communication is only successful if the signal is both successfully transmitted and successfully received. When we fail, it's easy to blame the audience, but IMO more often than not it's our own failure to read the room, and we just don't like admitting that to ourselves. It's human nature to get defensive and use excuses like, "they were focused on nitpicking my words and not listening to my ideas!" That excuse in particular has become a huge red flag for me.
All those things are basically like a software job, except for the part that you aren't paid.
The skills to handle it are the same.
In commercial development, you ideally have some layers between you and the customers. That depends on how large of an outfit you are and so on.
How it works at Reasonably Big Co. is that all those requests from the end users don't go to you directly but to some customer support people (who are actually technical: they develop solutions and fix issues by actual coding; do not think "Microsoft customer support" here). You, in turn, support these people: but your manager should be coordinating that. If you are asked anything that would require significant development, you redirect to your manager; other than that, you help as much as you can without derailing your official work schedule.
Secondly, if customers have some major request, such as a major feature, support may defer them to "product management" to have that work put on the road map for the next release or whatever.
Product management doesn't bug you directly to develop anything. They negotiate with your managers to hammer out what is in scope, and from there a schedule is devised and so on. You end up with some tasks out of that schedule.
Similar principles can be applied to your OSS project: you (and possibly the handful of other contributors) just have to wear all these hats yourselves.
Here is the thing: those aforementioned support people independently produce solutions to problems. They check them into their own repos and they release that stuff to customers. Then later, customers want all that stuff carried forward in the software. So there is a tension: there are all these wild and woolly patches whipped up by support, of varying quality, and they (or equivalent solutions in their place) have to be somehow integrated. The support people tend to say yes to customers to make them happy, which isn't always best for the product.
That is quite similar to when, in OSS, you receive complete patches from highly technical users who can code.
This is great stuff, and it applies to more than just OSS. Working with vendors? Other parts of a large organization? A lot of the time the dynamics (or at least the lessons here) can be applied.
Kudos to the author, I'm sure that was no short amount of work to put together.
Most open-source code is not managed very well (why should it be?)
I always fire off an issue with an offer to patch. If no response, then no patch submitted even if I have a patch.
I don't submit the patch because code moves on. A few commits later by someone else and the patch can no longer be cleanly applied. So I don't do the work on the patch if it will not be accepted.
Also I don't do nit picky code clean up. For example, if someone wants 2 space indentation and I did 4 ...
they can accept the patch and use their IDE with their formatting rules to fix such things.
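The "code moves on" problem above is easy to check mechanically: `git apply --check` reports whether a saved patch still applies cleanly, without touching the tree. A throwaway-repo sketch (file names and contents are made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name Dev

printf 'line one\n' > app.txt
git add app.txt && git commit -qm "initial"

# Save a patch against the current tree...
printf 'line one\nline two\n' > app.txt
git diff > feature.patch
git checkout -q -- app.txt

# ...it applies cleanly now:
git apply --check feature.patch && echo "applies cleanly"

# ...but after someone else rewrites the context line, it no longer does:
printf 'line 1 rewritten\n' > app.txt
git commit -qam "someone else's change"
git apply --check feature.patch 2>/dev/null || echo "patch no longer applies"
```

This is exactly the rot being described: a few commits by someone else and the saved patch is dead weight.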
What I mean is, there are lots of people who are not as smart as they think they are, and they won't STFU and RTFM, etc...
Send not to know at whom the Torvalds curses, he curses at thee.
Frankly, I think the author's original position was correct: Emphasize credit for authors of RFCs and you'll get fools authoring RFCs for the credit. Sheesh!
Maybe I could put it another way. September 1993 ended, just not the way some people would have wanted it to.
The net has become a medium for everyone, not just a small enough group that this group could become acclimatized to a dominant discourse style (whatever the virtues might be of the Linus Torvalds "I will call you a fucking moron when you fail" school of discussion for a rarefied group). And given that, it is now necessary to have the standards of discourse for the net be the standards of discussion for society as a whole.
And sure, maybe being nice can produce a series of problems of its own. But it's now necessary to solve those problems while being nice.
> Torvalds "I will call you a fucking moron when you fail" school of discussion
Can we please drop this trope? Torvalds spent a lot of his time patiently explaining things to people. Yeah, he gets verbal at times, but it's never straight out of the gate.
That's true. However, maybe the people you do want to interact with--or the people you should interact with in order to advance your project, whether or not you particularly "want" to interact with them--will also go away and leave you alone.
With one side project, I've had people almost leave the project because I was being accommodating to someone who clearly had no idea what they were doing. They were getting frustrated with the constant low-quality discussion, and one ended up privately messaging me to say that either $IDIOT left or they were going to go. Another probably just dropped off without saying anything. Just showed up less frequently in chat, until one day they were gone.
Being nicer doesn't always help attract people that you want to interact with.
That's quite anecdotal. It might be true and work for you. I'd then ask if it's the general rule that maintainers being helpful to people who have more to learn is causing lots of problems. I doubt it given the huge ecosystems with beneficial results that came from software catering to such people.
Now, it might be quite different for those focusing on extra-high quality, reliability, or security. These seem to take a certain minimum amount of skill or determination. People being too nice might bring in people that are purely a drag. You and I both seem to be in that boat a bit. So far, the best path has been filtering out the chaff plus inviting skilled people or mentoring those with aptitude plus desire to learn. I'm sure I could learn more on people side to increase my effectiveness quite a bit. Like you, though, I need a certain baseline to be effective at the levels of success I want with the kinds of people that follow my work. Even if I was nice, the others would tell them off just to prevent damage from being done. Maybe nicely and with tips but the effect is still the same.
The problem when it affects communities with higher focus on narrow tech or high quality is worth further research by people interested in this stuff.
First off, at least in my mind there's a difference between people asking good questions in order to learn, and people constantly making stupid proposals without taking time to learn enough context to understand why their proposals are bad.
And, yes, of course it's anecdotal, and of course it's not universal. I don't believe my preferences for interaction should be universal. The software world is big enough that I can go join places that match my tastes, and avoid ones that don't.
A lot of times when I start getting frustrated about low quality discussion, I try to reimagine the noisy participant as a kind of "human fuzzer". They have randomly wedged themselves into a corner with some of the unspoken rules of the community. Then it's (usually) sufficient to explicitly and (politely) articulate the expectations for participation and collaboration.
"Being nice is something stupid people do to hedge their bets." ~Rick, fictional smart a-hole.
> Maybe I could put it another way. September 1993 ended, just not the way some people would have wanted it to.
You could put it that way, if you wanted to be a patronizing fool. Try to understand that you are evidently one of the people I'm complaining about. You're not disagreeing with me; you're calling me a fool in a passive aggressive format.
> The net has become a medium for everyone...
I think you mean Facebook. Most humans haven't Clue One when it comes to computers, let alone networked computers. Even wordpress users are digital peasants.
> ...just a small enough group that this group could become acclimatized to a dominant discourse style (whatever the virtues might be of the Linus Torvalds "I will call you a fucking moron when you fail" school of discussion for a rarefied group). And given that, it is now necessary to have the standards of discourse for the net be the standards of discussion for society as a whole.
But I'm not talking about the "normals" and their precious discussion, I'm talking about the art, craft, and, yes, science of computer programming, the thing I've dedicated my life to, and that suffers from a huge influx of fools who think that they know what they're doing when they just don't. It pisses me off.
Getting software right is important. I estimate that about 9 out of 10 people getting paid to write software today shouldn't be. You do realize that today, in 2017, most of the needful software has already been written, don't you? Think about it.
So, my premises are: Most new software is unnecessary. Most people writing software are not qualified.
If we're talking about software development (not "the standards of discussion for society as a whole"), whether FOSS or closed, "being nice" is waaaaay down the list of priorities for what needs to happen globally in the software industry (IMHO).
We should establish strict standards and ensure that pro coders meet them. (like y'know engineers do)
Software is machinery. There's not a lot of scope for touchy-feely in software: you can usually make arguments with math and numbers, and these do not care about people's feelings.
Certainly I'm not arguing in favor of unbridled assholism, but if someone doesn't have the emotional maturity to deal with, e.g. Linus Torvalds being cranky and calling names (over email!) then that's probably a fine reason for that person to go do something else and quit wasting his time. (I've read some of his ranty responses and generally the folks he's popping off at are being thick and stubborn about it. I've got no sympathy at all for that sort of thing.)
"Getting software right is important. I estimate that about 9 out of 10 people getting paid to write software today shouldn't be. You do realize that today, in 2017, most of the needful software has already been written don't you? Think about it."
I'm curious if you've read the Richard Gabriel essays. Certainly the two of us have seen something like engineering of software. Anyone looking has seen high-quality software. Yet, most software isn't written with that goal in mind. It's about taking markets, politics, squeezing more money out of existing customers, scratching an itch, etc. It shouldn't surprise you that almost no high-quality software comes out of such abysmal demand for it. After a while, it shouldn't even bother you since it's pointless to worry about what most will do to quality for reasons aside from quality.
The best we can do is convince people who might give a shit, companies that might differentiate on better things, governments that might regulate to a baseline of methods that work, and so on. Plus advocate voting with our wallets for "The Correct Thing." Best advice I can give you after way too much time doing the opposite. ;)
I haven't read them but I'm familiar with the area of discussion.
My argument here rests on the idea that we are late in the game of software development. Most of the software we need has been written already. What we are doing now is a thundering herd attack on the global mind-space of algorithms+data. (How many chat apps? how many serialization formats, etc.)
I am kind of off in the corner. For example, I don't see Rust lang as cool and innovative, to me it looks like a tarpit.
There is a better way.™ ;-)
Over in the "The Power of Prolog" thread https://news.ycombinator.com/item?id=14045987 there are posted three solutions to the Zebra Puzzle. There's one in Ruby, all bloody-minded, and one in Python, even longer and more bloody-minded. Then there's the one in Prolog I wrote after reading the docs for half the day instead of working. (D'oh! Hi boss.) The Prolog version is less than a page of code, and half of that is direct translations of the puzzle hints to CLP constraints (the other half just sets up the variables and such.)
The other two aren't bad because they weren't written in Prolog. They're bad because they both would be better off implementing the resolution algorithm (i.e. mini-kanren) and then using that to solve the puzzle. It still would have been shorter and easier and faster.
Now, why didn't they know that? That's the essence of my concern.
The Prolog solution is much shorter, returns the solution instantly, and I'll wager it took me much less time to type in once I had figured out how to translate the puzzle into CLP(FD) style. Am I the 10x programmer? No. I think what I (and others) do is learnable. I've maintained for years that anyone who can solve a Sudoku puzzle has the intelligence to learn to program. In fact, I've just realized that anyone who solves Sudoku puzzles already knows the resolution algorithm; that's what their mind is doing to figure out the puzzle.
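To make that claim concrete, here is a hedged sketch (not from the thread) of what a Sudoku solver looks like when you write out by hand the backtracking search a Prolog engine gives you for free; `candidates` and `solve` are illustrative names, not any library's API:

```python
# A toy Sudoku solver: depth-first search with constraint checks,
# i.e. the try-a-value, check, backtrack loop a Prolog engine performs.
def candidates(grid, r, c):
    # Digits not already used in the row, column, or 3x3 box of (r, c).
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    # Mutates grid in place; 0 marks an empty cell.
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in candidates(grid, r, c):
                    grid[r][c] = d          # try a digit...
                    if solve(grid):
                        return True
                    grid[r][c] = 0          # ...and backtrack on failure
                return False                # empty cell with no candidates
    return True                             # no empty cells left: solved
```

Solving an all-zeros grid yields some valid completed grid; a real puzzle just starts with more cells pinned down.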
So most people can be taught Prolog. The machines are vast and fast enough now. Why is everybody so keen on cracking out code, to make a buck or scratch whatever itch, but not on doing it better using tools and techniques that are hella old!? Is humanity really so perverse? ;-)
(P.S. I'm still working on replying to your kind and excellent email.)
"I haven't read them but I'm familiar with the area of discussion."
"What we are doing now is a thundering herd attack on the global mind-space of algorithms+data."
Nice way to put it. Ok, you really need to read those references, since most programming is supposed to suck if it's done by humans. It used to bother me when I was younger, but I accept it as inevitable. Much like evolution itself producing tons of waste on its way to overall dominance of most of the search space. Once you get the basic principles driving it, you must work within that to squeeze as much good engineering as possible into those constraints. It's the only way to have high impact or shift segments of the herd in better directions. Hell, it's a little like CLP for people & development processes. ;)
I'm including the original essay, a great commentary that ties it into historical proof, and one by ex-high-assurance guy Steve Lipner at Microsoft:
" The Prolog version is less than a page of code and half of that is direct translations of the puzzle hints to CLP constraints (the other half just sets up the variables and such.)"
It was beautifully concise compared to the others. Of course, they were trying to re-invent parts, like the execution strategy, that are hidden in Prolog. Your point seems to be that nobody taught them this or they didn't learn enough. That comes from a combo of the education system training people for the workforce and the popularity of imperative languages in FOSS. The environment is the real problem. Changing that leads us right back to the above essays. Two good examples are OCaml and Clojure. One gives them escapes back to imperative on the way to gradually learning functional; it's done better in uptake than most FP. The other made some changes to LISP while dropping it on top of one of the evolutionarily strongest ecosystems, and got uptake no other LISP had up to that point. A subset of its users also started learning other LISPs.
"They're bad because they both would be better off implementing the resolution algorithm (i.e. mini-kanren) and then using that to solve the puzzle. It still would have been shorter and easier and faster."
" The machines are vast and fast enough now. Why is everybody..."
It would've been. Now we get to the point where you may be caught in the same problem you're accusing them of. I can't remember Prolog or most programming now due to my head injury. Yet, I remember quite a bit about the market & what people did with it back when what you suggest was tried on a large scale by smart people. That was the AI boom, when they coded in LISP, Prolog, Poplog, OPS5, etc. I read all that, built expert systems with some of it, and tried to stretch it in new ways. I confirmed for myself that it was very difficult to apply it to all kinds of problems where other models allow concise expression of the problem or dramatically higher efficiency. We collectively needed that development pace or efficiency to be competitive. The Japanese even threw piles of money and brains at Prolog-specific hardware in their Fifth Generation project to bootstrap the goal you're talking about. All of that failed miserably. The AI Winter finished most of those companies off, with a few pivoting and surviving.
So, in case you're wondering what we learned, it was that you don't want to write all your code in Prolog. Not even, in some cases, if you're doing logical search. The default at the bottom of the stack should not be Prolog. You want to use the most powerful language you can that supports DSL's & FFI's. You then embed things like Prolog in it, to use where it's ideally suited to the problem. Anything that's easier to handle a different way is done differently. LISP and REBOL were the main proponents of this approach, with Allegro CL still bragging about their Prolog implementation. "sklogic" added Standard ML to his LISP for coding safer, easier-to-analyze modules on top of DSL's for parsing & Prolog. Haskell has recently joined the fray, with a number of DSL's letting one mix the benefits of Haskell and low-level languages like C; the Galois Ivory language & Bluespec come to mind. If a tool such as SWI-Prolog is used, the typical case should be prototyping & verifying Prolog source that's then embedded in a better tool like those. There are times, like the Zebra puzzle, NLP, etc., where constraints allow it to be used standalone. There's also the possibility of doing things in reverse, extending Prolog with foreign primitives instead. More space to explore in R&D.
Point is, that was the hard-learned lesson of decades of failures to do everything in Prolog. It just didn't work. It was ideal for some things but slow, hard to write, and hard to maintain for others. The same was true for many models. Hence, a unified tool can express and integrate each of those models to let the builder use the best tool for each job. Alternatively, data formats, calling conventions, or protocols can be standardized to integrate separate programs using separate models. High-assurance recently did something similar for verification in the DeepSpec program that led to the CertiKOS OS. A bunch of DSL's are used. Prior efforts tried to build & verify it all in one tool like FOL or HOL, but the work was miserable.
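As a rough illustration of the "embed Prolog in a host language" idea: the kernel of any mini-kanren-style DSL is unification, which fits in a few lines of the host language. This is a toy sketch, with `Var`, `walk`, and `unify` as invented names rather than any real library's API:

```python
class Var:
    """A logic variable, distinguished from ordinary Python values."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

def walk(term, subst):
    # Follow variable bindings until we reach a value or an unbound variable.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    # Return an extended substitution making a and b equal, or None on clash.
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):          # unify element-wise, threading subst
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

On top of this you layer goals and a search strategy, and you have a small logic engine callable from ordinary host-language code, which is the embedding approach described above.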
" I've maintained for years that anyone who can solve a sudoku puzzle has the intelligence to learn to program."
I've never thought about it. Especially in light of Prolog. Very interesting. Now you've got me wanting to drop Prolog on some Sudoku fan sites to see what happens. It would have to have syntactic sugar, libraries for common things, and a great tutorial so the start is painless. I'll hold off for now but keep it in my mental periphery if I see someone messing with Sudoku.
"P.S. I'm still working on replying to your kind and excellent email."
Cool. :) Also reminds me I still need to take black and yellow highlighters to that book. Probably take it to work with me to mess with on lunch break.
> It used to bother me when I was younger but I accept it as inevitable. Much like evolution itself producing tons of waste on its way to overall dominance of most of the search space...
Reflecting on that calms me down a little. Evolution has no purpose, so it cannot be inefficient. I think my problem may well be in unrealistic expectations. ;-)
I'm including the original essay, a great commentary that ties it into historical proof, and one by ex-high-assurance guy Steve Lipner at Microsoft:
http://www.dreamsongs.com/RiseOfWorseIsBetter.html
http://yosefk.com/blog/what-worse-is-better-vs-the-right-thi...
https://blogs.microsoft.com/microsoftsecure/2007/08/23/the-e...
I'll read them ASAP. I'm changing jobs at the moment so I'll either have little to no time or too much.
...That comes from combo of education system training people for workforce and popularity of imperative languages for FOSS. The environment is real problem...
Part of it is education, part of it is environment, and part of it is just the state of the industry; what counts as "professional" education and behavior is kinda grotesque compared to most other groups of people who call themselves "engineers". I'm working with a guy who has never heard of Alan Kay. What's worse is that he doesn't care. He's not ashamed of his ignorance. Yet he has zero qualms about pulling down six figures as a hotshot developer.
When I finally learned LISP I got mad. I didn't even learn it: I read the TOC of pg's book. That was all it took. My experience and brain cells were so primed that I "got" LISP just from that table of contents. And for about twenty minutes I was really pissed off at all of my fellow computer geeks en masse. How much time and energy, how much sweat, blood, and tears shed? We could have just been using LISP the whole time!
I really really think it's time we collectively turn our attention from chasing our brain-tails, and focus on the real issues: How to map human intention to automation in the most efficient manner? If we can just get out of our own way I think this physical "human condition" is mostly licked. All our problems are psychological now.
But, uh, I rant...
"They're bad because they both would be better off implementing the resolution algorithm (i.e. mini-kanren) and then using that to solve the puzzle. It still would have been shorter and easier and faster."
" The machines are vast and fast enough now. Why is everybody..."
It would've been. Now we get to the point where you find that you may be caught in the same problem you're accusing them of. I can't remember Prolog or most programming now due to my head injury. Yet, I remember quite a bit about the market & what people did with it back when what you suggested was tried on a large scale by smart people. That was the AI boom where they coded in LISP, Prolog, Poplog, OP5, etc. I read all that, built expert systems with some of it, and tried to stretch it in new ways. I confirmed myself that it was very difficult to apply it to all kinds of problems where other models allow concise expression of problem or dramatically, higher efficiency. We collectively needed that development pace or efficiency to be competitive. The Japs even threw piles of money and brains at Prolog-specific hardware in their Fifth Generation project to bootstrap the goal you're talking about. All of that failed miserably. The AI Winter finished most of those companies off with a few pivoting and surviving.
So, in case you're wondering what we learned: you don't want to write all your code in Prolog, even, in some cases, code doing logical search. The default at the bottom of the stack should not be Prolog. You want to use the most powerful language you can that supports DSL's & FFI's. You then embed things like Prolog in it, to use where they're ideally suited to the problem. Anything that's easier to handle a different way is done differently. LISP and REBOL were the main proponents of this approach, with Allegro CL still bragging about its Prolog implementation. "sklogic" added Standard ML to his LISP for coding safer, easier-to-analyze modules on top of DSL's for parsing & Prolog. Haskell has recently joined the fray, with a number of DSL's letting one mix the benefits of Haskell and low-level languages like C; Galois's Ivory language & Bluespec come to mind. If a tool such as SWI-Prolog is used, the typical case should be prototyping & verifying Prolog source that's then embedded in a better tool like those. There are times, like the Zebra puzzle, NLP, etc., where the constraints allow it to be used standalone. There's also the possibility of doing things in reverse: extending Prolog with foreign primitives instead. More space to explore in R&D.
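The "embed the logic model in a more powerful host language" idea above can be sketched in a few lines. This is my own toy illustration, not code from any of the systems named here: a Prolog-style relation becomes plain data, and Prolog's conjunction-with-backtracking becomes nested iteration in the host language, with everything else handled the ordinary way.

```python
# Facts, as a Prolog programmer would write parent/2, stored as host-language data.
parent = [("tom", "bob"), ("bob", "ann"), ("bob", "pat")]

def grandparents():
    # Prolog: grandparent(G, C) :- parent(G, P), parent(P, C).
    # Nested generators play the role of conjunction; iteration over all
    # fact pairs plays the role of backtracking search.
    return [(g, c) for (g, p1) in parent for (p2, c) in parent if p1 == p2]

print(grandparents())  # [('tom', 'ann'), ('tom', 'pat')]
```

The point of the embedding is that the relational query sits inside an ordinary function, so anything that is easier to express imperatively (I/O, string munging, performance-critical loops) stays in the host language right next to it.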
Point is, that was the hard-learned lesson of decades of failures to do everything in Prolog. It just didn't work. It was ideal for some things but slow, hard to write, and hard to maintain for others. The same was true for many models. Hence, a unified tool that can express and integrate each of those models lets the builder use the best tool for each job. Alternatively, data formats, calling conventions, or protocols are standardized to integrate separate programs using separate models. The high-assurance field recently did something similar for verification in the DeepSpec program that led to the CertiKOS OS: a bunch of DSL's are used. Prior efforts tried to build & verify it all in one tool like FOL or HOL, but the work was miserable.
Yeah, I get it, I do. I'm old enough to know about things like "Fifth Generation" computers and the AI Winter, and I agree with the pragmatic issues. I still have what I guess amounts to faith that there is a better way. Personally, I think I'm onto something with a system based on George Spencer-Brown's "Laws of Form" and Manfred von Thun's "Joy" notation, implementing something similar to Hamilton's HOS but without the deficiencies. I have no idea if I'm a crackpot or not here, but I think I see a glimmer.
At the very least, we today have the benefit of hindsight, if we'll avail ourselves, eh?
" I've maintained for years that anyone who can solve a sudoku puzzle has the intelligence to learn to program."
I've never thought about it, especially in light of Prolog. Very interesting. Now you've got me wanting to drop Prolog on some Sudoku fan sites to see what happens. It would have to have syntactic sugar, libraries for common things, and a great tutorial so the start is painless. I'll hold off for now but keep it in my mental periphery in case I see someone messing with sudoku.
To me, the essential piece was learning the Logical Unification algorithm by walking through mrocklin's port of Kanren to Python. Once I understood how resolution worked, the next time I was figuring some puzzle (in programming as it happens) out, I was startled and pleased to be able to recognize the resolution/unification process as my mind performed it to solve the puzzle.
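For readers who want the same "aha" without digging through a full Kanren port, here is a minimal sketch of syntactic unification in that style. This is my own stripped-down illustration, assuming a hypothetical `Var` class for logic variables, not mrocklin's actual code:

```python
class Var:
    """A logic variable. Identity-based, so each Var() is distinct."""
    _count = 0
    def __init__(self):
        Var._count += 1
        self.name = f"_{Var._count}"
    def __repr__(self):
        return self.name

def walk(term, subst):
    # Chase variable bindings in the substitution until we reach
    # a non-variable or an unbound variable.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return a substitution making a and b equal, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst                     # already equal under subst
    if isinstance(a, Var):
        return {**subst, a: b}           # bind the variable
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):           # unify compound terms element-wise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                          # clash: distinct constants/shapes

x, y = Var(), Var()
print(unify(("likes", x, "lisp"), ("likes", "alice", y), {}))
```

Running the last two lines binds `x` to `"alice"` and `y` to `"lisp"`, which is the mechanical core of the "resolution/unification process" described above; full resolution adds a search over clauses on top of this.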
I'm not sure that folks could go directly from Sudoku to Prolog, although I would wager that anyone good at Sudoku would be able to learn Prolog under some conducive conditions. In my experience the limiting factor is interest. I've told one friend of mine several times now that "the only difference between you and a 'real programmer' at this point is number of lines of code written!" but he has other priorities or something...
----
edit: the quoting came out weird, but I'm going to leave it. (And now I know how to do that quoting.)
I agree that perverse incentives are possible, but I think that stance is a bit pessimistic. Being the author of an RFC grants you no social capital unless that RFC is good enough to get accepted (and if it's good enough to get accepted, then it's a good thing someone took the time to write it!). Additionally, the amount of social capital one receives from being known as the author of an RFC is very minor (especially since Rust RFCs only represent the beginning of the conversation, not the end, and the features any given RFC describes can change drastically between RFC and eventual stabilization, and god only knows how many people are responsible for the end result by that point).
(There's only one accepted Rust RFC whose author I can name from memory, and that's only because it's an RFC that I didn't like. :P The knife cuts both ways!)
I think it's basic manners that when someone writes a long piece about a subject they're struggling with, you shouldn't just respond that it's "stuff people should have learned by the time they turned 12 years of age".
It's basic manners, but we all struggle with basic manners sometimes, especially on the Internet.
The basic manners that you learned early on for interacting with people face-to-face don't scale to interacting with many many people in an issue tracker every day, anonymously, asynchronously, over the internet. It's a weird circumstance that we didn't evolve to handle, so I think it makes sense to break it down and carefully look at how to handle these situations efficiently and positively.
It's also important to remember that social norms are not consistent across the entire world. Some cultures are more deferential and others are more confrontational. On the Internet you are never sure exactly who the audience is, and in fact the audience is probably mixed. What seems like stern but helpful advice to one user comes across as browbeating to another. Very polite (deferential) people may not convey the seriousness of a situation to someone more used to direct speaking. Even inside of cultures there can be quite a lot of variety depending on factors like social standing, gender, and cultural identity.
This is why it is useful to talk about manners and customs even though it seems like child's play.