
Pretty nice! I have two questions:

1. Compared to something like Gitpod (which also lets you run on your own instances), where do you think Hocus shines?

2. Given you're leveraging Firecracker for isolation, and Firecracker doesn't support GPUs, I assume that adding GPU-enabled machines isn't on your near-term roadmap?


Thanks!

1. Gitpod doesn't officially support self-hosting anymore - https://news.ycombinator.com/item?id=33907897. When they did, it was extremely hard to set up; in fact, we tried it at my previous company several times and failed. Hocus is designed with self-hosting in mind, and we want to make deployment and management straightforward. From an end user's perspective, there are two main advantages:

- Hocus dev envs are VMs instead of containers, so you can run anything you want inside. For example Kubernetes or nested KVM.

- Workspaces start in <5 seconds, while Gitpod can take a minute or longer.

2. Actually, we're exploring a move to Cloud Hypervisor or QEMU, so we may support GPUs sooner rather than later. If you have a specific use case in mind, feel free to contact me - contact info is in my profile. I'd be happy to hear why you need them.


Thanks - that's super helpful.

1. Good to know about Gitpod - I haven't looked at it in a while, so it seems my information was out of date. The rest of what you said sounds good too.

2. This is mostly for ML development, where GPUs are sadly often required even for dev work.


Okera | San Francisco (SF) and Seattle (REMOTE considered for the right candidate) | Full-time, VISA

Okera opens up data for greater innovation by scaling access and governance across heterogeneous, distributed data environments. The Okera Active Data Access Platform manages data access across a multi-cloud, multi-datastore, and multi-tool world, reducing friction between agility and governance. With greater accessibility, protection, and visibility, you have the confidence to move forward and innovate.

Your data can do more. It can be used by analysts and data scientists to drive innovation. It can help you discover untapped markets, unseen opportunities, and unproductive workflows. It can change the way your business and the world works.

The Okera platform tackles the hardest issues behind data access and governance across hybrid and multi-cloud environments—giving you the ability to explore your data’s potential like never before. Our vision is to enable self-service analytics with responsible data access so that everyone can benefit from the potential of data in the enterprise.

Open positions include:

* Staff Backend Software Engineer - Data Platforms (San Francisco or Seattle)

* Staff Backend Software Engineer - Data Platforms

* Staff Frontend Software Engineer (San Francisco or Seattle)

* Senior or Staff Frontend Software Engineer

* Staff DevOps Engineer

* Director of Product

* Technical Writer

Okera Careers: https://www.okera.com/careers/

Backend Tech Stack: Java, Go, C++, Kubernetes, and the big data ecosystem (Spark, Presto, Hive, Impala, etc.)

Frontend Tech Stack: React, Python, Redux, and Cypress.

Questions? Contact me (email in profile) or Chris via email at cfinch@okera.com, or apply online! You can read more about us at https://www.okera.com as well.


We're a big Atlassian shop, and we switched from Crucible to Bitbucket Server (then called Stash). I don't get the sense that Crucible is the future as far as Atlassian is concerned - I believe Bitbucket Server is where they're going to focus.

FWIW, with the latest releases they've added most of the Crucible features that had been missing, and I'm very happy now with the PR/code review flow in Bitbucket Server.


I agree; Bitbucket Server is a smash hit in big banks that are now moving from SVN to Git to stay relevant. It simply makes more sense to use Atlassian's Bitbucket Server offering, given that most of those organizations already use Confluence to host their wikis.


I've been trying to figure this out from the docs, but how does it support Windows? For now (until Windows Server 2016 comes out), you don't really have container support there.


Concourse just talks to Garden; it's up to the Garden backend whether or not it actually does containerization. So on Windows it just does the world's worst containerization (cd'ing into a directory), though it at least guarantees that the processes all die.
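
To make that concrete, the Windows "isolation" amounts to roughly the following. This is an illustrative Python sketch only, not Concourse or Garden code:

    # Illustration: the crudest possible "containerization" - run the task in
    # its own working directory and make a best effort to kill the whole
    # process tree afterwards.
    import subprocess
    import sys

    def run_task(workdir, argv):
        # The only "isolation": the task gets its own working directory.
        proc = subprocess.Popen(argv, cwd=workdir)
        try:
            return proc.wait()
        finally:
            # Best-effort guarantee that nothing outlives the task.
            if sys.platform == "win32":
                subprocess.call(["taskkill", "/T", "/F", "/PID", str(proc.pid)],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
            elif proc.poll() is None:
                proc.kill()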

There's a proper Garden Windows backend in the works which we'll switch to at some point once we better understand it: https://github.com/cloudfoundry/garden-windows


Honest, non-troll question: when you say high-performance, what's the measurement? I'm genuinely curious what the right benchmark is for a DB like this, especially one that's doing a variety of spatial queries.

I'm specifically interested in how it performs with a lot of writes constantly happening.


You bring up a good point. That's a subjective claim on my part. While I do consider good performance a feature, it's certainly something that should have benchmarks to back it up.
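
As a rough idea of the kind of harness I have in mind - just a sketch; the client object and set_point() call below are placeholders, not the real command set:

    # Sketch of a write-throughput benchmark. "client" and set_point() are
    # placeholders for the actual client API.
    import random
    import time

    def bench_writes(client, n=100000):
        start = time.perf_counter()
        for i in range(n):
            lat = random.uniform(-90.0, 90.0)
            lon = random.uniform(-180.0, 180.0)
            client.set_point("fleet", "truck:%d" % i, lat, lon)  # hypothetical write
        elapsed = time.perf_counter() - start
        print("%d writes in %.2fs -> %.0f writes/sec" % (n, elapsed, n / elapsed))

A second variant would run spatial queries from other threads while the writes are happening, which is closer to the mixed workload you're describing.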

I'm going to generate some benchmarks and post them in the coming days. Thanks a ton for the suggestion.


The CascadiaJS team did an awesome job of organizing the conference!


Thanks, Itay. You did a killer job of speaking at it. For anyone who's into "big data" and Node.js, check out Itay's talk here:

http://www.youtube.com/watch?v=r0TVWW8316E&list=PLLiioAb...


We built this as part of Node Knockout - we wanted to be able to share debugger sessions with other people, so you could collaborate on debugging.

You can read more here: http://nodeknockout.com/teams/pandabits


If anybody is interested in talking about Splunk or its APIs, or needs any help on the Splunk side, please feel free to get in touch - I'd love to help :)

Disclaimer: I work for Splunk, on the Dev Platform team - we're trying to make it easier for developers to use Splunk.


We were early adopters of the API, and we still use it to demo integration of Splunk with 3rd party dashboards.

The beauty of the API is that it allows you to display relatively arbitrary data in very compelling ways.

For example, we have access to quite a bit of data at Splunk, from Twitter, server logs, etc. A lot of our customers ask us how they can use the data that is inside Splunk and present it in a 3rd-party dashboard, so we built a demo with Leftronic.
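
The shape of that demo, roughly: run a search through Splunk's API and push each result row to the dashboard. Here's a minimal sketch along those lines using the Splunk Python SDK - the dashboard URL and payload format below are placeholders for whatever push API the dashboard exposes, not Leftronic's actual endpoint:

    # Sketch: query Splunk, then push each result row to a dashboard.
    # DASHBOARD_URL and the JSON payload shape are placeholders.
    import requests
    import splunklib.client as client
    import splunklib.results as results

    DASHBOARD_URL = "https://dashboard.example.com/push"  # placeholder

    service = client.connect(host="localhost", port=8089,
                             username="admin", password="changeme")

    # Count events by source over the last hour.
    job = service.jobs.create(
        "search index=_internal earliest=-1h | stats count by source",
        exec_mode="blocking")

    for row in results.ResultsReader(job.results()):
        if isinstance(row, dict):  # skip diagnostic messages
            requests.post(DASHBOARD_URL, json={
                "stream": "splunk_event_counts",
                "label": row["source"],
                "value": int(row["count"]),
            })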


Thanks for the kind words, Itay.

For reference, here's the link that talks about the Splunk + Leftronic integration: http://dev.splunk.com/view/SP-CAAADSR


I've been using Clipboard for a long time now, and I know several of the people who built the site. It's an amazing team, and they did a great job with it. It would be easy to be discouraged by the hotness of Pinterest, but they're coming at it from a different, very valuable direction.

Best of luck to the Clipboard team!


I like what I see. Unfortunately for me, I have so much data and so many notes in Evernote, organized so well, that I'm locked into the product. Swapping services right now would be a real nightmare for me.


What's the difference between this and Pinterest?


Upon actually using it, it's pretty much Snip.it but even harder to read. I don't think Snip.it or Pinterest is particularly well designed around its primary purpose - the content - but posts that can span 900px of illegible text are pretty bad, especially when you scroll a little further and see the same thing posted again by someone else.

For whatever reason, I just wrote another post about features and ways to distinguish this product from the competition in their other thread (http://news.ycombinator.com/item?id=4049752).


I like Snip.it's design and content fairly well, and it works quite simply. The only thing wrong with it right now is a relative lack of breadth in the topics represented on the front page, due to the proclivities of early adopters. That will improve in time.


In my opinion: you're clipping content plus structure, not just images or text, so you get to preserve the original layout, links, etc., which is a huge benefit.

You can also have things private and/or shared with a select group of people, which is what I usually do.

For example, a common use case I've found is clipping something from Gilt or another signup-required site to show someone a deal they might be interested in, without making them sign up just to look at it. If they like it, they end up signing up anyway.


Thanks for that.

So the clips aren't totally live, then - otherwise you couldn't share content that's walled off like that.


I'm not quite sure what you mean. Your clips can be private and shared with a few people (or none), or they can be completely public. In the example I gave, I wouldn't really mind if they were public, but I don't see a reason to make them public - it is just to share with a few friends, really.


I might have misunderstood the product. I assumed (though this would be tough) that when you took a snippet of a page and the page later updated, the snippet would reflect the change.

You make it sound like the snippets are static caches of content.


Yes, that's my understanding as well. That's also what I want. I'm taking a snapshot in time of a page, so that I can then go back to that snapshot and look at it.

Sometimes those snapshots are not valuable after a while, so I delete them :)

