Hacker News new | past | comments | ask | show | jobs | submit | tacos's comments login

And so it ends. The mere act of partnering with Yahoo indicates desperation and signals poor decision making.

I'm not sure I understand HN's fascination with this site but this is not a good sign for those who claim to use it in these never-ending threads.


How? It's all proxied, and the 'Yahoo server' is DuckDuckGo-owned.


Yup, not buying it. Too cutesy, too impersonal, too obvious. This is a team effort. And not a very good one.


Maybe he used that style on purpose to mislead any adversaries?


> It's just impossible to "bid" on traffic via Facebook and win out over companies that have far bigger budgets

Not my experience at all. Also, have you seen their hyper-local ad-targeting options? As in, draw circles around neighborhoods, then choose "parents" who are interested in "children's clothing"?

You can easily reach thousands of people with a $5/day budget. They'll even provide the stock photography for free.


Perhaps not the most neutral source for information. I found their summaries of paywalled articles to be a bit... misleading. They'll sell you Swarovski crystals and Himalayan salt lamps though.


They do provide abstracts, which is nice. From those, you can see how few subjects are used in these studies.


Reposting, as a semi-useful thread below didn't meet humor standards, and people who aren't logged into HN should see it, too.

If you need an FIR filter, click here and push the button. Generates the code too.

http://t-filter.engineerjs.com/

Also, if you don't know what you're talking about, kindly refrain from wandering into it in the middle of an article that might otherwise be useful. "An Intro To Beamforming" is a hell of a lot stronger if it doesn't have several flaming errors about basic DSP in the middle of it. Those sorts of errors may cause experts discovering you for the first time to avoid your project, not devote time to fixing it.


I see a lot of rudeness on Hacker News, but your comments in this thread went past careless rudeness. You were a genuine asshole. It doesn't matter how mistaken the other person was, or how far short of your technical glory they fall, you can't treat others like that here.

Had this been a first offense, I would have posted something like: "If you know more than other people, the thing to do is provide correct information. Then we all learn." And I'd have pointed out that going on about "several flaming errors about basic[s]", without specifying what those errors actually are, is just a putdown that adds no real information.

But you've done this many times before, we've given you many warnings already, and you've ignored them. That indicates that you don't want to use this site as intended, so I've banned your account.


The "several flaming errors" were previously mentioned here and as comments on his website (not by me). To detail them again could also be construed as piling on. You are reading this thread outside the timeline in which it occurred and taking the most negative possible connotation of my actions.

Even in this thread people can't quite decide where my remarks lie on the spectrum. (I assure you this is not some game I'm playing to test the waters.) Likewise in a previous thread your banhammer was overruled by the community.

I personally think you're overreacting (calling me a "genuine asshole" ... really, dang?) but I don't wish to cause you any additional stress by being an unintentional canary in this coal mine. I will however deduct a few points for waiting for the thread to die, zeroing out an upvoted comment containing useful information, then banning my account at 1am on a holiday weekend. I know you abhor off-topic meta discussion but, as a fellow moderator who handles forums far larger and friendlier than this one, that's pretty weak. Good luck, you have my word that I won't reappear under a different name.


I didn't call you an asshole; I said you had been one in this thread, which was a carefully circumscribed statement. I doubt very much that you are an asshole or even had a negative intention. But what you did is worse than most misbehavior on HN, because it cuts into what should be the heart of this place: substantive, respectful discussion about others' work. Worse, you'd done it repeatedly already, and been asked repeatedly to stop. That's why I chose a negative interpretation in this case.

As for your procedural complaints, holiday weekends (what are those?) had nothing to do with it, 1am enters the picture because it wasn't until 1am that I had time to deal with yesterday's user flags, and there was no 'waiting for the thread to die'. It takes a long time to write comments like the one explaining why we were banning you. Had I wanted to shove this under the rug, I'd not have bothered.

If you'd like to change your ways and treat others respectfully in comments here from now on, I'm more than happy to unban you. Other users have gone from posting assholish comments on a regular basis (I may even have been one myself) to becoming good citizens of this community. If you want to do that too, you're welcome here. But this requires sincerely accepting the model of how HN is supposed to work and doing your best to abide by it. When we give people repeated warnings and they ignore them, it's hard not to conclude that they have no intention of doing so.


There's a difference between being a prick to fellow community members versus being tough on submissions. I rank myself in the top 5% regarding expertise on a few topics that come up here regularly, and most (not many, most) of the submissions are simply tragic. To enter a thread late with a hundred young people politely chatting about something that's completely wrong or completely plagiarized says quite a bit about your ability to maintain decorum. But I don't think it's quite as flattering regarding your ability to maintain quality.

Googling a machine learning or signal processing technique from 40 years ago, seeing a link to HN on the first page of Google results that gets it wrong, and finding that the only comment that actually provides counterpoint or correction is missing because you pushed a magic button? I can't believe that's the impact either of us is looking for with regard to contributing to world knowledge.

Likewise I increasingly find myself reading the top comment then scrolling down and squinting at the dimmed out bottom comment. Half the time it's someone being an idiot. The rest of the time it's useful counterpoint or something correct but imperfectly phrased. Groupthink and tone-policing-by-committee may be a bigger problem than someone who dares to use the adjective "flaming" to qualify the nature of errors in a submission.

I too hate dealing with the stack of complaints at the end of a long day, and I too hate seeing the same names causing trouble. It certainly is hard to remain impartial given even a small number of people who are obsessed with the "report misbehavior" button or, worse, those who harbor grudges and abuse the feature. In general those people need to toughen up and the people who are causing the grief need to turn it down half a notch. But HN, in technology and in personnel, lacks the nuance necessary to correct and communicate evolving community standards.

There's nothing more boring (and more damaging) than a public debate around a moderator's standards versus a popular user who bumps against the edge on occasion. Out of respect for your work (and my time) I'm stepping out as that's precisely where this is headed. Good luck with the site, I'm all too aware of the effort you put into it.


Sad that the thread got flagged into oblivion.

> There is a fair amount of academic work describing methods to perform filtering on a sample to provide a fractional delay. One common way is to apply an FIR filter. However, to keep things simple, the method I chose was the Thiran approximation — the literature suggests that it performs the task reasonably well, and has the advantage of not having to spend a whole lot of CPU cycles first transforming to the frequency domain (which an FIR filter requires).

I'm not a real DSP expert by any stretch of the imagination, but applying an FIR filter does not require transforming to the frequency domain. To the contrary: a basic FIR filter just means that the ith output sample is a[0]x[i] + a[1]x[i-1] + a[2]x[i-2] + ... + a[N-1]x[i-N+1] where x is the input, a is the filter coefficients, and N is the (finite) length of the filter. (If the input is all zeros except that x[0]=1, then the input is an impulse and a is the output, i.e. the impulse response. Hence the name: Finite Impulse Response.)

Some very long FIR filters are more efficient to apply by Fourier transforming the input, but that's almost certainly not the case here. It's worth noting that FIR filters vectorize very nicely.

(My FIR description isn't quite 100% accurate. I described only the discrete-time causal case. If you drop the causality requirement (which is fine but can be awkward in real-time processing) then you add negative indices to a. If you switch to continuous time, you end up with a convolution instead of a sum of products.)
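To make the direct, time-domain form described above concrete, here's a minimal Python sketch of the discrete-time causal case (the function name and sample coefficients are illustrative, not from the article; a real implementation would vectorize this with NumPy):

```python
def fir_filter(a, x):
    """Direct-form FIR filter: a plain sum of products, no FFT required.

    a: filter coefficients (the impulse response), length N
    x: input samples
    Returns y where y[i] = a[0]x[i] + a[1]x[i-1] + ... + a[N-1]x[i-N+1],
    treating samples before x[0] as zero (the causal case).
    """
    N = len(a)
    y = []
    for i in range(len(x)):
        acc = 0.0
        for k in range(N):
            if i - k >= 0:  # skip terms that would index before the input starts
                acc += a[k] * x[i - k]
        y.append(acc)
    return y

# Feeding in an impulse recovers the coefficients -- hence "finite impulse response":
impulse = [1.0, 0.0, 0.0, 0.0]
print(fir_filter([0.5, 0.3, 0.2], impulse))  # -> [0.5, 0.3, 0.2, 0.0]
```

The FFT-based approach mentioned below only pays off for long filters, where overlap-add/overlap-save beats this O(N) work per sample.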


> Sad that the thread got flagged into oblivion.

I sometimes feel the PC threshold is a bit high in some threads. In this case however even I felt the snark level was painful.

As a productive engineer I often put out internal tools or POCs that would be possible to snark at in this way.

I do however expect that people improve them or explain the better way instead of posting dismissive comments on what this guy has done on top of his open source contributions.

That said: I'm happy to see it reposted, although I would prefer if he removed the defence of his previous post, as it adds confusion to the new post as well.


Thanks for the explanation. I've corrected the post to not assert that the FFT is necessary.


> The next Gawker will be decentralized and it may follow the Wikileaks model or even publish on the dark web.

There are plenty of ways to publish someone's sexual preference or sex video, and you don't need to invoke Wikileaks.

Likewise I doubt the marketing maven for the next Adam Sandler movie will be paying in Bitcoin for a site takeover of a .onion domain, no matter how many celebrity nudes and confidential Sony documents it leaks.


I see no mention of sales and use taxes. Welcome to Hell.

https://www.boe.ca.gov/info/taxoverview.htm#sales


There are a number of "Sales tax as a Service" businesses out there, like Avalara, which honestly ANY small business that can afford it should use.

I worked for a company that tried to build an in-house sales tax rule calculator; it's damn near impossible unless you want to devote a team of people just to sales tax, and certainly not worth the risk.


The estimate states it did less than half a billion dollars of damage to human lives, the environment, and the economy.

http://www.cnbc.com/2015/10/29/vw-excess-emissions-linked-to...

Allowing some New World Judge the power to destroy all international subsidiaries of a global corporation with 610,067 employees at the stroke of a pen might not improve the situation.


Citation?

Wikipedia places the economic cost at $39+ billion, by one measure.

"A peer-reviewed study published in Environmental Pollution estimated that the fraudulent emissions are associated with 45 thousand disability-adjusted life years (DALYs) and a value of life lost of at least 39 billion US dollars."

https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal#D...


Yes, that’s the additional cost from the NOx. But the cars, in contrast, produced far less CO2 than advertised – offsetting many of the costs.


Citation? Why wouldn't the experts who studied this have accounted for that? I tend to want to trust the science, if it comes from reputable sources.

Anyway, CO2 is not comparable to NOx in terms of danger to human health; CO2 is a real problem, of course, but as I understand it, this was an environmental and human health trade-off that is not at all well-balanced. It produced out-sized harm in exchange for small reductions on the CO2 side. Even if the amount of NOx produced only increased by the same amount that CO2 decreased (which is not the case, if I understand it correctly), it would still be a bad trade. I'm no expert, but the experts I've read on the subject haven't been saying, "oh, it's really no big deal because CO2 went down."


The increase in NOx led to a decrease in CO2 of about 1000 times that (in volume).

Numbers and studies have been posted by another user in this thread, it’s definitely a huge difference.

That’s kinda why VW did it, it saves fuel and is cleaner.


Citation?


The world police allow themselves to do this to countries, so why not corporations? And at what deaths-per-employee ratio does killing the corporation become OK? Presumably there is a value? I don't know it.


And 100x quicker than setting the clearly documented environment variable that disables the feature.

> You can opt-out of the telemetry feature by setting an environment variable DOTNET_CLI_TELEMETRY_OPTOUT (e.g. export on OS X/Linux, set on Windows) to true (e.g. “true”, 1). Doing this will stop the collection process from running.
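For reference, the opt-out from the quoted docs is a one-liner; a POSIX-shell sketch (the variable name comes from the docs quoted above, everything else is illustrative):

```shell
# Disable .NET CLI telemetry for the current shell session
export DOTNET_CLI_TELEMETRY_OPTOUT=true

# To make it permanent, append the export to your shell profile, e.g.:
#   echo 'export DOTNET_CLI_TELEMETRY_OPTOUT=true' >> ~/.profile
```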


It is still opt-out, and thus considered harmful (at least by me).


If it were opt-in, they'd collect significantly less data, and the data they do collect would likely be heavily skewed. Microsoft could be misled by the poor quality of the data collected, and an opt-in system could actually be worse than not collecting any data in the first place.

The way I see it, at the end of the day, the decision for Microsoft is really between not collecting data and an opt-out system. If Microsoft chooses not to collect data, then all developers have to live with tools that improve slowly and have issues (possibly security-related ones that could be maliciously abused) that are not fixed as quickly as they could be.

If Microsoft chooses an opt-out system, they can collect the data they need to make sure their tools are working optimally and as intended. Some developers may not be comfortable sharing how they use Microsoft's tools even with no personally identifiable information collected. These people can opt-out while minimally compromising the quality of the data collected. Additionally, the tools are open source, so any developer that's skeptical of how and what data is being collected by the tools can verify Microsoft's claims.

Those are the two options I see. To me, the cost/benefit of the second option greatly outweighs that of the first for all involved. By not collecting data, security issues that could actually compromise your privacy could go unfixed for longer. By collecting data through an opt-out and open source system, Microsoft can fix issues ASAP, and developers can verify that data is collected in a way that preserves their own privacy.

It seems like a lot of people are knee-jerking to the idea of collecting data through an opt-out system and not actually weighing the cost/benefit of the realistic options. Can you explain how not collecting data has a lower practical cost/benefit ratio than an opt-out and open source system?


Bullshit. Other companies manage to build quality products without opt-out tracking of their users just fine.


That doesn't refute my point. No data collection still leaves a greater probability of issues being left unresolved for a longer period of time. Also, the code is open source. You can see exactly what data is being collected.

To address your point, it's not possible to be aware of the benefits you're missing out on without data collection.


For me the issue comes down to whether or not Microsoft does anything useful with this data (probably not, if 20 years of NVIDIA blue-screen driver failure logs, Windows 8, and OneDrive are any example of how 'big data' impacts Microsoft product quality) versus how many comments I have to read where joeblow52 is personally offended that Microsoft dares to learn what his compile time, plus 999,999 other compile times, divided by a million, equals.


How, exactly, are you thinking that Microsoft is going to fix nVidia's buggy drivers? They can collect all the data they want, but at the end of the day, it's nVidia's driver.


I worked there. Ways we solved these sorts of problems include: hardening the other side of the API/HAL when appropriate/possible, simplifying the driver model so that mere mortals could write drivers, writing our own drivers and overwriting known buggy ones for companies that couldn't get their shit together (usually network vendors), adding workarounds to the OS not to use certain features of certain cards, flying external engineers to lavish parties and our driver development labs and compatibility labs and providing one on one engineering development assistance from senior kernel developers, providing free testing of drivers for known problems before release, rolling fixed drivers into Windows updates, providing marketing funds as reward for fixing problems, and not using NVIDIA in the Xbox 360 after using them in the original Xbox as punishment because they were personally responsible for over 80% of blue screens in Windows for the preceding five years.

Sadly the motivation was often to ignore the data or watch it get spun by some jackass with the exact wrong agenda. It's just software, there's always a way to fix things if you really want to.


Nice work, thanks for the insight!


I just installed these tools, and on first invocation they tell you that telemetry is enabled and how to disable it. I think that buys a little bit of goodwill. I am also opposed to telemetry by default, but I understand it, and I appreciate the opt-out message being presented clearly and up front.


I'm one of the self-appointed resident whiners but I think I'll cut them some slack because they've used the magic word "preview".

This case is different from silently adding telemetry in a minor upgrade to a tool in production.


"The telemetry is only in the tools and does not affect your app."


From the release notes:

"We used industry benchmarks for web platforms on Linux as part of the release, including the TechEmpower Benchmarks. We’ve been sharing our findings as demonstrated in our own labs, starting several months ago. We’re hoping to see official numbers from TechEmpower soon after our release.

Our lab runs show that ASP.NET Core is faster than some of our industry peers. We see throughput that is 8x better than Node.js and almost 3x better than Go, on the same hardware."

