calibwam's comments

There should be a karaoke-style lyrics bar when playing the instrumental. Also, "high" seems to be "low" for me?


According to Wikipedia [1], make appeared in 1976, so it should be younger than C.

[1] http://en.wikipedia.org/wiki/Make_(software)


As this is an issue of copyright violation, I don't have a problem with the takedown. A DMCA complaint addresses exactly this.

A few years ago I got an email from a TA in a subject I had taken the year before, asking me to take down solutions I was hosting on GitHub. However, there was no provided code in the assignment; it was basically implementations of well-known algorithms (A* among others). It was not a formal takedown request, but they said I should understand that they had to use the same assignments each year, and that my public solutions could make that hard. My answer was that if they were worried about plagiarism, it was better that my solutions were public, as submissions could be checked against them.


It's not a copyright violation though, or at least not inherently. The statement of a homework problem is essentially an API, and if those are copyrightable then we're in big trouble as an industry.


Well, we can't see the code that was taken down here, but if the assignment had a lot of setup code and the task was to fill in some functions, there can be a copyright violation. If these repositories only included student-made code, then I agree that there is a problem.


I'm a student in CS@Illinois and I took this class last semester. For assignment one, they gave us a library and the following skeleton, from which we had to make a working shell:

    /** @file shell.c */
    #include <stdio.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include "log.h"

    log_t Log;

    /**
     * Starting point for shell.
     */
    int main(int argc, char **argv) {
        /**
         * Analyze command line arguments
         */
        while (1)
        {
            /**
             * Print a command prompt
             */

            /**
             * Read the commands
             */

            /**
             * Print the PID of the process executing the command
             */

            /**
             * Decide which actions to take based on the command (exit, run program, etc.)
             */
        }
        return 0;
    }

The provided code is rather sparse (by design), but the University holds copyright on it.


You can use four leading spaces at the start of each line for a code block.

What you've posted doesn't look substantial enough to be copyrightable to me; the idea of writing a main containing a while loop is certainly not original to the university, and the comments would presumably not be present in this repo since they'd have been replaced by working code. I sure hope the University of Illinois doesn't own the copyright on any shell I write from now on, because that would be ridiculous.


Inkscape has been a project for 11 years; why the hesitance to take the version number above 1.0?


Time is pretty irrelevant for versioning, unless you have paying customers and you want to trick some of them into paying full price for a minor upgrade (e.g. Windows 98, FIFA 2015, etc.).

1.x tends to mean the original goals of the project have been fulfilled. As long as some goals remain unfulfilled, it's still 0.x. 2.x, 3.x, etc. are breaking changes, often a rewrite.


One reason we're not at 1.0 yet is that the original 1.0 goal was to have complete SVG 1.1 support, which will likely never happen. We changed the version numbering scheme to better reflect where we think we are.

The reason we're still not there right now is that there are some essential things, like canvas coordinates and fixed "flow text" support (so it works in browsers), that we need to rectify before we're comfortable with 1.0.


As a long-term user I'd agree to some extent; the 1.0 version feel has definitely been and gone. I'd say this is about a version 4, to my mind.

The stated aims of the project are at http://wiki.inkscape.org/wiki/index.php/InkscapeInvariants, and AFAICT they've not achieved complete SVG spec compliance yet. Though I thought I recalled them aiming at the reduced SVG "basic" subset (if that's what it's called) and reaching it.

I've been a user since it forked from SodiPodi (and indeed was a SodiPodi user too).


I miss SodiPodi, with its multiple-document, single-toolbar design. IIRC this was the sole contention of the fork: Inkscape was born explicitly to give each document its own suite of controls.


No, it was born because of the structure of SodiPodi. Lauris was effectively the gatekeeper and others did not have commit access. He wasn't accepting patches he didn't want, so a handful of our founders decided to fork and take a much more open approach. As it stands, if you have two patches accepted by the project (Inkscape) and want it, you can have commit access.


Please do argue JavaScript. How else could you do dynamic and logical operations in the browser at present?


I think that, like Flash and Java, JavaScript should not be removed from the browser. Some websites do have a need for Flash (for streaming, for example).

I think the problem is rather that Flash, Java and JavaScript are all enabled by default, allowing any site and all the associated advertisement websites to execute code without the user consenting or being aware. This is a major security and privacy problem.

The model should rather be per-website opt-in of JavaScript where required (and most websites don't need it). And even in that scenario, a same-origin policy should be enforced, i.e. only JavaScript hosted on the domain being visited would be enabled, not third-party JavaScript.
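
For what it's worth, browsers can already enforce the same-origin half of this via Content-Security-Policy, though today it's the site rather than the user that opts in. A minimal sketch, delivered as a meta tag:

    <!-- Allow scripts only from the page's own origin; third-party and
         inline scripts are blocked by the browser -->
    <meta http-equiv="Content-Security-Policy" content="script-src 'self'">

What's still missing is a browser-side, per-site switch that applies such a policy regardless of what the site asks for.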

To me the current model is like Windows XP's AutoRun. It is simply designed to be a perpetual source of security and privacy breaches until someone finally takes the decision to kill it.


Neither Flash nor Java is "enabled by default". I think that's a poor analogy. JavaScript engines are built directly into the browser.


Flash runs by default and without consent from the user in most browsers. Not sure what you mean by "not enabled by default", then.


When you install a browser, JavaScript is enabled. You have to actively install and add Flash and Java to modern browsers.


The problem with disabling JavaScript by default and making it opt-in is that most users expect things to just work.

If you want to disable JavaScript yourself, fine, but don't degrade the experience of the web for everyone.


But there is an over-reliance on JavaScript to make up for the shortcomings of HTML and CSS. Most websites should not need JavaScript. If you look at your own browsing history, how many of those websites genuinely need to run client-side logic?

If a user ends up on a webmail or online gaming website, I don't think he will be surprised to be asked whether he would like to allow this website (not third parties) to execute JavaScript (and to keep that setting). If that same user goes to any blog, forum, news or e-commerce website, there is really no good reason to execute client-side code.

Auto-completion in a text box (or server-side validation of input without a full POST) should have been embedded in HTML a long time ago. And it took 20 years before they finally agreed to add a date-picker input, not exactly a new problem. Because of the glacial pace of HTML's evolution, we became used to JavaScript tricks to make up for what HTML doesn't do. And Flash is mostly used for the same reason.
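
For what it's worth, HTML5 did eventually grow declarative versions of both of these. A minimal sketch:

    <!-- Native date picker, no JavaScript required -->
    <input type="date" name="arrival">

    <!-- Built-in auto-completion against a static list -->
    <input list="browsers" name="browser">
    <datalist id="browsers">
      <option value="Firefox">
      <option value="Chrome">
      <option value="Safari">
    </datalist>

Note that <datalist> only covers static suggestion lists; server-backed suggestions or validation still require script, which is exactly the gap being described here.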

That doesn't mean that Flash and JavaScript don't have any use.


I'd rather have our current lighter, generic HTML spec that can be enhanced by running code in JS than a heavy HTML spec that tries to do everything in pure HTML/CSS just so we can avoid running JS by default.

The more complicated HTML/CSS gets, the less efficient it will be and the harder it will be to keep all the browsers compatible with each other.

In a way the current scenario where a lighter, shallower HTML and CSS are enhanced by custom code in the form of JS is better for the ecosystem because it allows creative freedom in development without overloading the core engine with cruft.

If you choose to do so you can create a bloated abomination of an HTML page powered by tens of thousands of lines of JS, but since the core engine itself is very light it is also possible to make a clean and simple page that is extremely light and which uses no JS at all.

On the other hand baking lots of functionality into the core engine would force everyone to experience the bloat and cruft, even if you were trying to make a clean and simple website.


It's a bit of a subjective debate. But having to run all that interpreted code in the background is probably not making the engine leaner or faster. I am sure it would be much more efficient to declare your intent in HTML and have the native renderer handle it. And it would certainly be a lot less verbose and less subject to bugs.

Say you add to the input tag a validation attribute with a URL: the browser posts the content of the input box in the background for validation and gives feedback to the user in a standard but customizable way.

For auto-completion, the same thing: just add one attribute with a URL and standard feedback behaviour.

Also an attribute that applies a change to a CSS property when an element is clicked, to enable CSS pop-ups.

There are probably a dozen more attributes like that, and they would pretty much make JavaScript redundant on 99% of websites without making the HTML syntax much heavier (overall your page would be lighter, because you don't have to write the plumbing every time). And you save having to interpret JavaScript, so it has to be more efficient in terms of performance too.
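
To make the idea concrete, here is a rough sketch of what such markup might look like. To be clear, none of these attributes exist in any HTML standard; the names are made up to illustrate the proposal:

    <!-- Hypothetical: the browser POSTs the field's value to the URL in the
         background and shows the result in a standard, styleable way -->
    <input name="username" validate-url="/api/validate-username">

    <!-- Hypothetical: the browser fetches suggestions from the URL as the user types -->
    <input name="city" suggest-url="/api/cities">

    <!-- Hypothetical: clicking toggles a CSS class on a target element,
         enabling pop-ups without script -->
    <button toggle-class="#help-popup.visible">Help</button>

Whether the names are right matters less than the shape: declare the endpoint, and let the browser do the plumbing.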

One counter-example is something like Bootstrap, where JavaScript is used to adapt the formatting to the form factor. But that is a good example of a horrible use of JavaScript: having to use JavaScript for presentation. It is just a case of HTML not being fit for purpose (since its purpose has evolved).


I don't like predicting the future.

> So give up on parallelism already. It's not going to happen. End users are fine with roughly on the order of four cores, and you can't fit any more anyway without using too much energy to be practical in that space.

End users were fine with a single-core Pentium 4 on their workstations. We progressed. How would even Linus know that we won't find a way to make parallelism work en masse?


First of all, I think that many people are still fine with single-core Pentium 4 workstations and that what we have today is not that much better. For example, the "death of the PC" was greatly exaggerated; people simply aren't upgrading so often because their 4-6 year old workstations are still good enough for most purposes.

Of course, many of us do need the extra power. But what Linus is saying, and I agree with him, is that for mobile devices (phones, tablets, laptops) Moore's law doesn't work so well, as batteries aren't keeping up with it. A mobile device that doesn't last for 2 hours of screen-on usage is a completely useless mobile device (and here I'm including laptops as well).


> End users were fine with a single-core Pentium 4 on their workstations.

Not really. 2-4 cores have been available on workstations for decades, so no one is arguing that 1 core is all you need. Even Linus is saying that having 4 cores is probably a good thing in many cases. The argument is not 1 vs 4, but more 4 vs 64, especially if you assume a fixed power budget.


I'm not saying they would be fine now, but once (10 years ago) that was what you had. And now we have 4-8 cores in our laptops and 2-4 in our phones. Why shouldn't we have 64 cores in the future, if we solve the programming problems and can make them energy efficient? Just because 4 cores are fine now doesn't mean we shouldn't try to increase that.


> but once (10 years ago) that was what you had.

No, it wasn't. Multi-processor Intel-based workstations have been available since the very early 90s. People have realized for a very long time that having 2-4 cores is useful.

I'm still not convinced that, given X watts to spend, I'm not better off with 4 CPUs using X/4 watts each rather than 64 cores using X/64 watts each. But I'm willing to be proven wrong.


Really? Do you mean people chained together multiple processors, or that Intel produced something? I can't find anything on it and would be interested in reading about those old systems.


Intel was relatively late to the game; their multiprocessor support only started getting decent around the Pentium Pro. The Unix workstation vendors (Sun etc.) had dual-CPU workstations a while earlier, but SMP was mostly used in servers.

https://en.wikipedia.org/wiki/SPARCstation_10

The feather in the cap for the first multi-core CPU on a single die goes to IBM and the Power4 in 2001, preceding Intel's attempt by ~4 years. (Trivia: IBM also sold a Power4 MCM with 4 Power4 chips in a single package.)

(Yes, some people managed to stitch together earlier x86 processors with custom hardware, but it wasn't pretty, cheap or fast.)


Sequent had proprietary multi-processor 386 systems. The Intel MultiProcessor Specification dates back to 1993. Most Pentium II chipsets supported dual processors, which drove pricing down enough for enthusiasts to build them for home use.


There were a handful of companies making 2-8 socket motherboards for 486 processors. I know Compaq and NCR were early in offering workstations based on those motherboard designs.


Absolutes are dumb. In practice a CPU is throttled on memory access, but it is not hard to imagine scenarios where computation is the bottleneck. For instance, if most of the program runs in tight loops where cache misses are few and the whole loop fits in L1, almost all the time would be spent in computation.
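
A minimal C sketch of the distinction (the constants and loop bodies are arbitrary, chosen only to make the contrast visible under a profiler):

    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)  /* 16M doubles = 128 MB, far larger than any cache */

    int main(void) {
        /* Compute-bound: the state lives in registers and the loop fits in
           the instruction cache, so the bottleneck is the ALU, not memory. */
        double x = 1.0;
        for (long i = 0; i < N; i++)
            x = x * 1.0000001 + 0.5;

        /* Memory-bound: every iteration pulls in new cache lines, so the CPU
           spends most of its time waiting on RAM bandwidth. */
        double *a = calloc(N, sizeof *a);
        if (!a) return 1;
        double sum = 0.0;
        for (long i = 0; i < N; i++)
            sum += a[i];

        printf("%f %f\n", x, sum); /* keep results live so the loops aren't optimized out */
        free(a);
        return 0;
    }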


What's your source on Uber and AirBnB actually endangering lives and causing damage? I get that you might not like their business plan, which is a totally legitimate opinion, but a claim like that does need a source.


> Blah blah blah source....

Around here to run a taxi you need several things. One is a commercial vehicle driver's license, which requires training above and beyond the normal license. You also need commercial insurance, to protect yourself, your passengers, and the insurance company (else they would have to raise premiums for everyone to accommodate risky, illicit behaviour). Taxis are required to have cameras, to protect the drivers and passengers. From what I've seen of UberX's 'requirements', they violate in nearly every way, and I don't think I need a 'source' to state why a lack of training, insurance, and safety measures could harm people.

As for AirBnB, imagine you own a condo and someone across from you is running an illicit hotel. These kinds of things affect safety (random people who don't live there constantly coming and going) and property value, and hurt legitimate businesses that actually employ people (say, the real B&B or hotel nearby that follows all the regulations and thus has to pay higher costs). And of course, if you own a condo, are renting it out, and your tenants are illegally subletting, the damage is more direct.


To be honest, I could find you sources (for lack of interest, I didn't) which show that Uber could be safer than the average taxi company. The reasoning:

- After riding with Uber you get a map of the trip sent to you; this could help prove that a driver tried to rape a customer in a dark alley. (This has happened; the driver got fired and charged.) With a regular taxi it's your word against theirs.

- With Uber you are 100% sure the driver is working for Uber, since you ordered it through the app and the app shows you the license plate. With regular taxis you have to trust that the taxi license is real if you hail a cab on the street.

- With Uber you rate the driver after the ride, which makes sure that bad drivers get detected very quickly. With regular taxis you have to file a complaint, which most people don't because it's too much of a bother.

- With Uber you don't need to have anything representing money on you to get a ride. With regular taxis you need either cash or a credit card on you to pay for the ride, which is a safety issue before, during and after the ride.

I agree with you that there are some issues with a company like Uber just doing whatever it wants. However, there are some pretty big issues with the current taxi industry, and Uber is doing a pretty good job of highlighting them and providing alternatives.


> After riding with Uber you get a map of the trip sent to you; this could help prove that a driver tried to rape a customer in a dark alley. (This has happened; the driver got fired and charged.) With a regular taxi it's your word against theirs.

Stop hailing strange taxis. Call a dispatcher. Boom, accountability.

> With Uber you are 100% sure the driver is working for Uber, since you ordered it through the app and the app shows you the license plate. With regular taxis you have to trust that the taxi license is real if you hail a cab on the street.

That's unfair, as the comparison is simply outside the scope of what Uber offers. One could also argue that Uber drivers cannot pick up a person hailing them; but no one should make that argument, because Uber does not try to emulate that particular function of the taxi service.

> With Uber you don't need to have anything representing money on you to get a ride. With regular taxis you need either cash or a credit card on you to pay for the ride, which is a safety issue before, during and after the ride.

How is a credit card a risk? Give it to the perpetrator, ensure your safety, claim the losses. If you're talking about material worth, I hope you don't carry any devices with you.

> I agree with you that there are some issues with a company like Uber just doing whatever it wants. However, there are some pretty big issues with the current taxi industry, and Uber is doing a pretty good job of highlighting them and providing alternatives.

A third party hopping and skipping over established transportation safety regulation is not doing a good job of providing alternatives; it's simply trying to make money before government regulation radically changes the market or the IPO occurs (which will in turn cause a sudden relaxation of the 'boundary-pushing' on Uber's part, easing the government's legal worries and leaving customers with a worse product once the brand is established and customers have come to rely on earlier iterations).


How does a photo of a license plate prove driver identity?

Doesn't that require a photo of the driver?


You've argued that they're breaking the law, not that they cause significant damage. It's a fair point in and of itself but it's not the one you made two posts earlier and it's not the one you need to defend. Laws are notorious for inefficiently spending large amounts of resources to chase after marginal returns (which often go negative without anyone realizing).

For instance: cameras might not have nearly as good a value proposition in a situation where the identity of the passenger is recorded up front. Extra training might not be necessary if customers have a functioning mechanism to share their intuitive judgements about dangerous drivers.


Uber and Lyft are not taxis. They are at most livery cabs.

They do not and should not operate under the same rules as taxis. They are much more like the airport shuttle services that have been operating without taxi licenses or medallions for decades. They simply have a novel way to summon the vehicle that does not involve placing a voice call to the company's scheduler and dispatcher.

There are more types of vehicle-for-hire services than just taxicabs, and they don't need to follow the same regulations.


Livery cabs around here do need to follow most of the same regulations. In fact, they apply to any situation where anyone is driving a vehicle for commercial reasons (say, a courier, a moving service, any sort of taxi/cab, etc.).


Do I actually need a permit to drive a friend, or can he just decide to get into my car like an adult?


It's not a commercial activity. Come on, you should know the difference.

Just as having a barbecue at home with friends invited is a different scenario from selling barbecue and beer out of my backyard...


A logical person would not ask that question, assuming from the start that, unlike an Uber driver, your friend knows you.


Glazier's fallacy blah blah blah more statism blah blah blah remove choice blah blah.


Surely there is a reason these taxi regulations are in place. Or not. In any case, is it fair?

My opinion of Uber was something like "great for competition". But of course they can only compete because they're doing so not quite legally, and with great advantages compared to normal taxi drivers.

A normal taxi company might've gotten this large by releasing a good app with some funding, but I doubt it because of the huge advantage Uber has compared to everyone else.


I agree with what I think your point is. But I think you're getting downvoted because you did a lousy job of expressing it, and you did it in a rather disrespectful manner. We try to aim higher on HN.


The last commit was made 5 months ago, and there are no updates on the project anywhere. Seems abandoned.


Not quite abandoned; more a question of time. The upstream code is sufficiently undocumented and untested that in some cases my approach has been to #ifdef stuff out until I can at least get some units compiling.

A test suite for this code would have made this whole effort far easier :(


The FreeBSD wiki page [0] has an updates section, with this entry from a month and a half ago:

> 20140910: Outback Dingo has made available an older port of pfSense's launchd_xml.

[0] https://wiki.freebsd.org/launchd


The launchd_xml code is mildly toxic for openlaunchd: it's derived from the Apple Public Source License codebase instead of the newer Apache 2.0-licensed codebase that openlaunchd derives from.

IMO the Apache-licensed codebase is going to be required to get FreeBSD to adopt openlaunchd at some point.


And a link to the repository: https://github.com/outbackdingo/launchd_xml


Note that the pfSense / outbackdingo launchd did something rather naughty -- it put the XML parser back in PID 1, which Apple intentionally kept out by design.

tl;dr: even if you could convince the FreeBSD devs that launchd is a great idea, they wouldn't accept this as-is.


Doesn't launchd on OS X also parse plist XML files? How are those handled?


There's another binary that does the processing and the data is sent to the launchd daemon over a Mach-specific IPC channel that FreeBSD doesn't have.


OS X has a binary plist format, too. Maybe it uses that to bootstrap and does XML parsing in sub-processes.


The last benefit is actually starting to become a problem, since so many people have gotten Teslas. There's a lot of traffic in the bus lane, as electric cars, buses and taxis fight for the same space.


I suppose it will eventually be taken away. It's a nice temporary incentive but it obviously cannot scale.


Just like California issuing very few carpool stickers for hybrid cars. It's like a lottery that bumps the value of your car up by 5 grand.

