Anyone developing Salesforce apps will not raise a stink, as it would hurt their relationship with that company. So anyone writing kudos about Salesforce over this extra $1M prize should keep quiet, as it's disingenuous. It's been mentioned many times over that many of the teams that competed could tell via analytics that their apps were never looked at. That alone should invalidate the entire contest and force new judging.
So how does this constitute a full review and audit? The reality is that SF royally fucked up on many fronts and lost developer cred in the community. As if they cared in the first place.
If they attempt this next year, they need to appoint an outside third party to oversee the contest and get rid of their own employees from the process.
This statement should concern Salesforce more than any hackathon results:
'Anyone developing Salesforce apps will not raise a stink as it will hurt their relationship with that company'
I've worked with Salesforce for 4+ years now (as customer and partner), and sadly I agree with you. This should be the message that someone at Salesforce reads. Want to be invited to their pilots? Want to have access to their PMs? You had better not say anything bad. I even had my AE pull me aside when I was out there for an EBC to tell me not to bring up their recent service outage during the meeting. If you don't have anything nice to say about Salesforce, they don't want to hear your voice.
Not my experience. My AE and CFL team get my frequent and unfiltered views regarding issues that affect my company, even during EBC sessions. We have been very vocal about recent NA13 outages and performance. I have not been coerced to only say nice things about Salesforce.
Didn't someone debunk all this in the last thread? That it turned out there was only one person (coincidentally the most vocal one) who claimed that analytics showed their stuff had not been looked at? Do you have new info, or are you just parroting?
Well, it's accepted at this point that no one had their apps run. Which I understand, given the judging mechanism (you submit it and they look at it, vs. they walk around and you show it to them). They said as much in their press release. The final 5 were chosen based on descriptions and videos alone.
Now, very few people can confirm that they had no views of their videos because most people watched it themselves several times and/or shared the link. One person claims they had zero views on the video, which isn't a very good sample size. We had 18 views of our video by the time judging ended and we had 16 before it started, but whether those two were by judges or by people we shared with, I have no idea. Either way, it wasn't viewed multiple times by many different judges as we've been told.
Yes, but in a way they are likely damaging the company they own a portion of. They may end up losing the talent that won the award. That company doesn't benefit much as a result, and Salesforce even less.
Possibly, but that does nothing to assuage the concern that entries were only seriously considered if you had an inside connection at Salesforce, which is what it looks like when the winners have close ties to the contest holders and many entrants complain that their analytics show their app was never run.
Ya...just like the cops who are tasked with investigating themselves and always come up with a reason why it was okay to fatally shoot a 15 year old with a toy gun...ya...
This doesn't square with several teams' assertions that, according to their analytics, their videos had never been viewed by the judges. I wish they (Salesforce) had addressed this issue in their response.
Can you point me in a direction where I can find contestants that are claiming this? I work at Salesforce and I would like to investigate this further.
I already commented on an earlier thread (https://news.ycombinator.com/item?id=6784782), and so far I can find only one person that is claiming that their video was not shown (@colabi). While every complaint deserves to be investigated, I think colabi's has been heard multiple times already and I'd like to see if there is some other cause for concern.
They do address it: they lie by asserting that every app was viewed twice, when it's clear that was not the case, unless there are dozens of developers lying about something random, which I'm pretty sure isn't the case.
@zaguis, can you point me to anyone else besides @colabi that is claiming that their video has not been viewed? I genuinely want to investigate this, but after searching for a week I can't find anything concrete.
Since you are confidently asserting that you believe Salesforce employees are lying, it would help everyone if you could identify just a couple of the dozens of developers that you reference.
If you do so, I will investigate their specific situation directly.
What comprised a submission? Is it possible that an app was legitimately reviewed and dismissed before the reviewer made it to the video?
Put another way, I look at a lot of resumes that have personal links/CVs on them. 90% of the time I've decided no before I get to them, making their inclusion irrelevant. Could this not have happened?
> I quickly checked Testflight and our app data to see if it had been run.
So, the assumption is that judges' computers don't block phone-home and other tracking/analytics services? That'd be a very poor and dangerous assumption to make.
Our app created a record in Salesforce the first time it was run on any device. No new records appeared. They confirm in their press release that the final 5 were chosen on video/description alone.
"We instructed both our first and second round judges to evaluate the submissions using the apps’ description, screen shots and the demo video using the same four criteria"
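For anyone curious how a team could tell whether judges ever launched their app, here's a minimal sketch of the first-run tracking described above. All names (`report_first_run`, the marker path, the payload shape) are illustrative assumptions, not Upshot's actual code; a real app would POST the payload to a Salesforce REST endpoint instead of printing it.

```python
import json
import os
import tempfile

def report_first_run(device_id, marker_path, post=print):
    """Send a first-run payload exactly once per device.

    A local marker file records that the event was already reported,
    so repeat launches create no new record. That's why "no new
    records" during judging implies the app was never run anywhere.
    """
    if os.path.exists(marker_path):
        return False  # already reported on this device
    payload = json.dumps({"event": "first_run", "device": device_id})
    post(payload)  # real app: POST this to a Salesforce object
    with open(marker_path, "w") as f:
        f.write(device_id)
    return True

# Usage: the first launch reports; every later launch is a no-op.
marker = os.path.join(tempfile.mkdtemp(), "first_run")
print(report_first_run("device-123", marker))  # True
print(report_first_run("device-123", marker))  # False
```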
I remember reading (some time ago, and I can't find the article now) that YouTube view counters are designed to be probabilistic and "good enough" to give a general idea of the number of views. I don't believe YouTube is providing any particular guarantee on the accuracy of their view counters.
Lock-step global counters, even when you play games like sharding and summing to reduce contention, can still introduce latency and SPOF. But then again, I would have expected any probabilistic counter to only kick in after a certain popularity threshold.
I would hesitate to make any claim that something definitely was not viewed based on anything less than the full HTTP access logs of a self-hosted video.
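To illustrate the sharding-and-summing the comment above mentions: a large-scale counter is often split across shards so concurrent writers don't contend on one cell, and reads just sum the shards, which is why a read taken mid-stream can momentarily undercount. This is a toy model of the idea, not YouTube's actual implementation.

```python
import random
import threading

class ShardedCounter:
    """A counter split across N shards to reduce write contention.

    Each increment touches one randomly chosen shard; value() sums
    all shards without a global lock, so a concurrent read may see a
    slightly stale total. "Good enough", not an exact snapshot.
    """

    def __init__(self, num_shards=8):
        self._shards = [0] * num_shards
        self._locks = [threading.Lock() for _ in range(num_shards)]

    def increment(self):
        i = random.randrange(len(self._shards))
        with self._locks[i]:
            self._shards[i] += 1

    def value(self):
        # No global lock: an approximate total under concurrency.
        return sum(self._shards)

counter = ShardedCounter()
for _ in range(1000):
    counter.increment()
print(counter.value())  # 1000, since all writes finished first
```

Once all writers have quiesced the total is exact; the approximation only shows up while reads and writes overlap.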
The problem with the exemption granted to Upshot ("they built the mobile app during the Hackathon, so it's fair") is that it sets a bad precedent for all other hackathons. Why shouldn't teams do what Upshot did and develop the cool tech beforehand and polish it during the hackathon, especially when winning offers such prizes/publicity?
The point is that Upshot wasn't granted an 'exemption' in any sense of the word.
Salesforce had already decided that they were going to win, and now they are being forced to justify that decision in the context of the rules.
Going forward, there is every reason to assume that future hackathons/competitions run by salesforce will be likewise pointless to participate in, unless you have been given the nod by Salesforce.
They haven't just set a bad precedent, they have totally trashed their credibility.
I am taking bets that the next time they run a hackathon there will be (anonymous) stories of Salesforce leaning on their 'partners' to ensure they get a sufficient number of entries.
> Why shouldn't teams do what Upshot did and develop the cool tech beforehand and polish it during the Hackathon
They should: If you know what you're building, and the prize is enough motivation, of course you should do everything you possibly can in advance, and any rules that tried to prohibit that would essentially be crippling only the naively honest.
Even if there were keystroke and screen loggers and you had to "show your work", a prepared team would have gone through the process multiple times, completely refining their work to the point of rote re-implementation of version 10 of a proven out implementation.
This may sound cynical, but it is truthful. Which is exactly why most such competitions simply announce the goal and an end date in the future, and people grind as they see fit.
Their rules were just too complex yet weak and ambiguous. If they want to level the field, a simple fix would be: make all previously written code available to others at least one week prior to the Hackathon; everything else is kosher.
> a prepared team would have gone through the process multiple times, completely refining their work to the point of rote re-implementation of version 10 of a proven out implementation.
What is this referring to? Has this ever actually occurred?
If you tell people there is a million dollar prize at stake (and to add to the financial stakes, charge hundreds per contestant), and details of the problem they will be solving, 100% of your contestants will try to solve as much of the problem as they can in advance (while, most likely, declaring it a crime that anyone else did so).
Anyone who claims otherwise is the sort of person you should protect your wallet around.
Because the "prompt" (basically, make a cool app) was revealed a long time before the hackathon, which IMHO was a big mistake. It's much easier to prevent such cheating if you make the early work irrelevant.
It's not human nature, because I'm basically asking for examples of a software project or application that has been rewritten to the point of rote reimplementation.
> While the Upshot mobile app used some pre-existing code, this did not violate the rules. Use of pre-existing code was allowable as long as the code didn’t comprise the majority of the app and didn’t violate any third party’s rights.
IMHO, the whole idea of a hackathon is to see what teams can come up with on the spot. Any pre-existing code you bring is an unfair advantage. But if the rules say pre-coding is OK, then it is OK ;) disclaimer: I am not familiar with the specifics in this situation, just commenting on hackathons in general.
Hey, I remember doing this in school! It was called show and tell.
I don't understand how someone can use pre-written code written specifically for this hackathon. It's one thing to write some generic libraries, but they had the product made beforehand; they even demoed the app on the 8th (according to the meetup post [1])!
This doesn't even touch on the allegations that some contestants' apps weren't run and videos weren't watched.
> The whole idea of a hackathon is to see what teams can come up with on the spot.
Full stop. That's it. Hackathons with prizes are their own beast and have their own rules. But hackathons themselves are purely about seeing what you can do. I hate the idea of turning hackathons into something that destroys that fundamental piece, and money does that.
Technically what you're saying is you can't use Rails, you'd have to use Ruby, because Rails is pre-existing code built on top of Ruby.
That also throws out jQuery and any of its libraries.
The beauty of hackathons is they force you to think about the deliverable and attempt to mitigate reinventing the wheel by using as many plugins as possible to deliver your concept.
Where do you draw the line? Let's say I'm a core developer on Rails, am I not allowed to use Rails then? What happens if I contributed 1 line of code to Rails, does that negate it then?
And obviated by the judges having no easy way to learn and isolate which "part" of the app they're testing, and how such compartmentalization goes strongly against human psychology.
I don't pretend it'll be "perfect", but it should let decisions fall (noisily) around what we think of as "fair" (since we're all running that assessment through the same hardware) which is really what we're looking for.
<rules-lawyering-participant>So I get my buddy/silent-partner to develop my cool and unique API beforehand and sneak it unannounced onto a relatively anonymous public Github account before the comp starts, but close enough to the start date that no-one else will have even heard of it yet.
I think there's a slight distinction between using Rails and entering into a hackathon with the intention to port a recently developed app into a 'mobile' version?
And you didn't write the Ruby(|node|php|python|golang) interpreter either, or gcc/clang that compiled the interpreter, or the kernel that gcc relies on. (or the cpu microcode - it's turtles all the way down).
Personally, I think the Upshot approach was a sleazy hack of the competition rules. On the other hand, with a $1 million prize, you'd have to be an idiot not to expect rules-lawyering to be at least as important in winning as a good idea…
This was the only 'outcome' available to Salesforce. If they took away Upshot's prize money, they would have upset a lot of people who felt that prizes shouldn't be taken away after the fact, that Upshot won by playing within the rules, etc. If they kept the ruling as it was, they would still upset a lot of people who felt cheated by Upshot using existing code. Realistically they couldn't rejudge the entire competition, and even if they did there would be people complaining about that. They ran the hackathon poorly, they communicated poorly by posting certain things on the forums, they judged it poorly, and now they just want out from under their own mess with the least damage possible.
Salesforce had one out, throw money at the problem and hope it fades away.
I wonder what Upshot's code submission looked like. If they submitted just the mobile app, then all Salesforce got out of it was a WebView + mic integration, but if Salesforce actually got the NLP code (doubt it)... paying $1M would have been worth all of this nonsense.
The right thing to do would be to ask Upshot to submit ALL their code and add it to Salesforce.com. I think most of the other participants would consider that fair justice.
I was contemplating some snide response that, regardless of how many apps are considered 'winners', Salesforce comes off as the loser due to bad publicity in the development community. Then I remembered they'd just been given source code and rights to a ton of pretty nifty apps for a bargain price.
> While the Upshot mobile app used some pre-existing code, this did not violate the rules. Use of pre-existing code was allowable as long as the code didn’t comprise the majority of the app and didn’t violate any third party’s rights. Our internal review determined that Upshot’s mobile app was created during the hackathon and met these criteria.
So what does "a majority" really mean? Can I come in with 49% of the code and still be eligible? 49% isn't technically a majority. What about 49.99%? How do you even determine the concept of "a majority"?
Wow, much more impressive response than their first one. Especially good move addressing the private gallery complaint, that's a small detail that shows they paid attention to some of the complaints at least.
What's the purpose of your reply, exactly? To me, it seems perfectly reasonable to criticize and evaluate a potentially unfair ruling in a large, notable competition.
Now that Salesforce has given their response, it should be evaluated just like any other company statement, to see if it is accurate and meets the community standards of what's "fair". Yes, fairness is partially subjective, which is why people want to discuss it here.
You must have forgotten the grand prize was for $1 million. Not exactly chump change. I'm sure if you felt slighted after entering, you'd be right there complaining about the rules as well. For $1 million I know I would raise my concerns.