
If you look at the biomechanics, it does seem like a keyboard + mouse + >=20" screen is the optimal setup for doing actual work. A keyboard is simply the most efficient way to get information into a computer (the exception is that some graphics-editing work might work better with a multitouch screen; it will be interesting to see if someone builds a touch-first Photoshop killer). That said, there might be a convergence where mobile devices learn to run desktop software, and can be docked to a mouse/keyboard/monitor. But we are still a long way from that point, and there is no great incentive to build office suites for mobile devices that are efficient for power/work users.

Mobile is great for 1) consuming content 2) interacting with your extended environment when you are not grounded to a computer (summoning an Uber, paying with an app, etc.) The money in content consumption will go to either the content creators or the digital sharecroppers (Facebook).

So the question is, are there large untapped areas where a phone could be used to interact with one's environment? What kind of day-to-day things could be enhanced with internet-connected software?


Your assumption, which I think is wrong, is that WPM (or maybe APM) is the bottleneck for most work. I suspect reading/comprehension, problem solving, planning, usability, access to the right tool, discovery of tools, responsiveness of tools, teamwork, and so on are much more likely to be the bottleneck.

As a programmer, I suspect I could probably type the entirety of a day's work into the computer in half an hour.


I can't even read a phone screen without reading glasses on. I absolutely loathe reading more than a paragraph of text on one, and I really couldn't imagine getting any significant amount of work done on a mobile platform.

If we count tablets as mobile too (they're wireless after all, and plenty of them come with SIM slots), then the consumption part gets a bit better: in landscape mode you can read PDFs on them. But the part of work that requires significant input would not, for me, be an option.


I suspect that this is highly related to the task being done, and the context.

Some days, I'd totally agree with you; I'm not really sure of the next step(s), and have to give myself lots of time to think about things. This tends to apply when I'm entering unknown territory, and my tasks are relatively fuzzy and uncertain.

On the other hand, some tasks are extremely straightforward (repetitive / memory-based), and more or less completely WPM- and flow-bound. Even working with relatively efficient editors (using shortcuts, macros, vi bindings, etc.), it's hard to type out (or otherwise input) much more than 2000 lines of code in a day. These types of tasks certainly require efficient input, and could be greatly enhanced by even better human-to-machine interfaces.

(I'm also a programmer, and these do come from my own experiences)


I think you're right on some counts, but there is a part that I think most people overlook when they dismiss rapid input as a useful feature.

A lot of modern languages/platforms can be driven from a REPL console. Same with command-line tools.

Being able to rapidly experiment with bits of code to identify the correct solution can be incredibly valuable.
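
For instance, a toy Python session, just to illustrate the kind of quick iteration meant here:

    >>> s = "2014-09-18"
    >>> s.split("-")
    ['2014', '09', '18']
    >>> [int(part) for part in s.split("-")]
    [2014, 9, 18]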


Yes, but then it takes you the rest of the day to edit and test those 30 minutes of typing before you are ready to ship. Presumably, most of that testing and editing would involve a fair amount of keyboard interaction.


> Your assumption, which I think is wrong, is that WPM (or maybe APM) is the bottleneck for most work.

I don't care if it's a bottleneck. I want an efficient means of typing, so that I can keep my focus on other stuff. Maybe I could do a lot of my typing using a terrible interface like my phone, but it would be very aggravating.


> That said, there might be a convergence where mobile devices learn to run desktop software, and can be docked to a mouse/keyboard/monitor.

I never understand this prediction. That's a bit like saying I don't need a car because I could just dock my bicycle into some sort of enclosure with four wheels. Tablets, smartphones, and laptops/desktops were all built for different purposes and cannot be full replacements for each other, just like a bicycle can't fully replace a car without some serious sacrifices.


Even if I had full desktop-scale performance in a cell phone with great docking capabilities, I would probably rather have a separate desktop computer for working. Just being able to compartmentalize "social stuff" on my phone and "work stuff" on a desktop tends to vastly improve my performance.


Isn't that easily solved with logins?


It's a psychological thing, not a technical problem.


To expand on that, it's also a security thing. Having my phone hold a network of personal contacts and my computer hold the more work-related data means a separation of attack surfaces.

At this point, if you have anything worth securing, it's probably a good bet that your device will get compromised in the next 5 years. Compartmentalizing devices helps with that significantly, since it means only partial compromises.


That strikes me as a lousy comparison. I could easily see a notebook with a detachable touchscreen and the proper OS (OSes?) being a useful machine (easy to see, since a reasonably well-selling device actually exists).

What if an iPad Air could simply attach to a MB Air chassis and only serve as the display when attached?


It's not a hardware limit that keeps mobile from wholly eating desktop, but a software one. Mobile OSes are intentionally crippled and locked down at the OS layer. You don't own or control your device, and only approved software can run.

Android is a bit better than iOS in this respect, but not much.

None of the mobile vendors have any incentive to change this, since it would mean forfeiting the App Store tax, and for Apple it would cannibalize the Mac market. The only way I see an uncrippled mobile device entering the market at high enough quality to compete is if someone with none of these conflicting interests bucks the trend. Android is pretty forkable, so an Android fork that solved the security problems in a non-feudal way and that supported the sort of docking you describe would be disruptive.

Dell? Compaq? HP? A "washed up" PC vendor with stagnant market share would have nothing to lose and might have the resources to pull it off.


Apple loves to cannibalize itself. The iPod, which used to be 50% of the company: practically gone, totally cannibalized by the iPhone. The iPad has already eaten plenty of the Mac's business, outselling it between 2:1 and 3:1. The idea that if only the iPad were less locked down it would sell more and cannibalize the Mac, and thus Apple doesn't allow it, is absurd.

The App Store "tax"? Sure, Apple doesn't mind the cash. But they are first, second, and third a hardware company: that's where the real money is. The reason they have no intention of allowing sideloaded apps on iOS has to do with user experience, eliminating support headaches, and security (the order may differ, but these are the reasons).

The fact is your dream device would appeal to the same people who buy desktop Linux machines now. They exist, but they are a tiny part of the market. Nobody can stay in business catering to just those customers.


Most people don't see the lack of control over their mobile devices as a problem. Instead, they see it as a good thing, because their mobile devices are a lot more worry-free than their computers.


It's not just a political issue -- I agree that most people don't care about that stuff. It also grossly limits what you can do.

In practice this means that PCs and their unlocked OSes will continue to hold onto their market niche until or unless mobile bridges that gap.


What about Ubuntu and Firefox? Those are uncrippled, I hope.


Not the same thing.

In 5 or 10 years dockable tablets are going to be every-goddamn-where, especially in business. It just makes sense, and it's too all-around practical not to happen. And for most computer uses, even "intensive" ones, it's perfect: you get portability, plus productivity in the docked configuration, plus huge economic benefits. Tablets are mostly just screens, batteries, and a handful of chips, all of which are super amenable to economies of scale in manufacture. Tablets are going to be cheaper than dirt eventually, and because a tablet can be a self-contained computer, it'll tend to be the default computing choice. The biggest thing missing today is good software.


I don't believe you are truly understanding the potential. Nor is that even close to a proper analogy. You are presuming that all of the desktop software will be running on the mobile device, which will need all this power and can't possibly handle it.

I would instead focus on the work done with virtual machines. Suppose I had a subscription service giving me access to a virtual machine that could run any application I wanted, streamed to my mobile device and displayed anywhere I wanted. My mobile device could connect me to any amount of computing power I need (within reason, and with a large enough budget).

Why on earth would I buy a whole separate machine to do this? The personal computing device that I carry around with me everywhere could allow me to perform any function possible; I could have a full desktop computer anywhere I wanted, as long as I have an internet connection and a screen.

Gaming could take place anywhere as well. You wouldn't need a gaming rig; the heavy processing would be handled elsewhere whilst your device handles decoding and displaying the stream.

Internet speeds will have to increase dramatically, but are we really so short-sighted as to state that personal computers will never be replaced by mobile devices? It may not happen tomorrow, but it will come.


It baffles me. Look at the cost of components for any smartphone. The cost of a full-fledged ARM SoC is, what... $20? The cost to turn any such docking station into an actual computer is basically trivial compared to the total cost. You can sync storage over the network without any need for a dock. Why on earth would anyone get a phone dock rather than a separate machine?


"So the question is, are there large untapped areas where a phone could be used to interact with ones environment? What kind of day-to-day things could be enhanced with internet connected software?"

This is why I find machine learning and optimization such fascinating areas to watch. At a certain point, we may reach the practical limits of what human gestures, commands, and requests can tap into or do. The machine (or rather, the distributed ecosystem of machines) becomes more and more important in automating X, suggesting Y, and predicting Z.

I enjoy the prospects of VR and AR, especially in an omniconnected world. Those always seem like interesting use cases for a smart(er) phone. But I'm a lot more excited about the non-UI advances that the "internet of things" can bring us. When we free ourselves from the limitations of human comprehension, human attention span, and human neurological heuristics, we can do so much more. To me, the "large untapped areas" are all the things we won't have to tap to access (terrible pun intended). Before we can get there, of course, we'll have to connect all the devices.

At the risk of sounding hokey, naive, or unapologetically futurist, I look forward to the day when kids will say, "Wow. When you were my age, you actually had to touch things to make them work?"


Gestures are much less efficient than using a mouse or touchscreen; they require more muscles and movement and are less precise. Voice commands beyond the very simple stuff are an AI-complete problem. True AI is much further away than we think, and when it comes it will be so deeply weird and mind-blowing that asking it to buy us plane tickets via voice command will be the last thing we are worried about.


How about eye tracking?


Right, and on your last point, that's almost exactly what the NFC industry has been trying to do for years. Most successful mobile ticketing deployments seem to use the barcode-on-a-screen approach, thanks to NFC not really getting very far. It will be interesting to see if Apple manages to get any traction here.

Obviously fresh blood is a good thing; however, this problem space has been thoroughly explored.

The real unexplored area is that few people have noticed just how insanely powerful the GPUs in these devices are, but again the problem is in working out what they might be useful for, especially given the trend is for mobile "apps" to really be trivial front ends for web services.


> especially given the trend is for mobile "apps" to really be trivial front ends for web services.

Yep. After doing mobile for a while you start to notice that most apps are just listviews hitting REST endpoints. Not exactly earth-shattering technology.
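
As a sketch of that pattern (in Python rather than a mobile SDK, and with a made-up endpoint, purely for illustration), the whole "app" often boils down to:

    import json
    from urllib.request import urlopen

    def fetch_items(url="https://api.example.com/v1/posts"):
        # GET the REST endpoint and parse its JSON payload
        with urlopen(url) as resp:
            return json.load(resp)

    def render_listview(items):
        # one row per record, which is exactly what a mobile listview does
        for item in items:
            print(item["title"], "-", item.get("summary", ""))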


Agreed. Except I think proper styluses/digitizers, like what I gather one gets with the Surface Pro and what I enjoy on my Samsung Note 3, hold much better potential than multi-touch (there's a reason why artists and designers have been using digitizers for a long time).

Keyboard for text entry and coding, digitizers for design/art/photo work, and possibly (multi)touch for richer, Smalltalk-like UIs. I've always thought three-button mice weren't such a great idea (ergonomically) -- but a lot of the same things that work well with them (Smalltalk, ACME) should work fine with multi-touch -- as long as we evolve the GUIs a bit to take proper advantage.


Clip Studio Paint has a really good touch UI. It doesn't replace Photoshop for photographers, but for illustrators it certainly acts as a replacement.

I have had an iPad since the first version, and I had various touchscreen "tablet" PCs before that. The problem now is not the hardware, it's the software -- and the software has gotten so, so, so much better. I can do real work from a tablet and a phone. The best practices devs discovered first are now proliferating more widely. That takes time, but we are seeing the results now.


My favorite term for it is "Eye of Sauron" management. Most of the time the eye is not on you ... but when it is, look out!


The point of a microservice-based architecture is to allow a large team (15+ people) to work together without stepping on each other's toes. The idea is that you break the application into smaller services, and each service can be iterated on and deployed by a small group of 1 to 4 people. I cannot see any reason why it would be a good idea to adopt a microservice architecture for personal projects or for a startup with fewer than 10 people.

I did work on an application suite that included a blogging platform that aimed for a service-based architecture. We had services for:

* taking screenshots of the rendered blog post, for use in preview thumbnails
* user identity and settings (a shared service for the entire suite)
* asynchronous task handling
* sending emails
* adding blog post analytics to the dashboard
* all things commenting (including a JavaScript embed code)
* the main application for editing and rendering the blog content
* customizing blocks of content based on visitor identity
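
To give a feel for how small each piece can be, here is a minimal sketch of what one such service's HTTP surface might look like, using Flask; the route and fields are hypothetical, not from the actual suite:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/send", methods=["POST"])
    def send_email():
        payload = request.get_json()
        # A real implementation would enqueue the message with the
        # asynchronous task handling service rather than send inline.
        print("queueing mail to", payload["to"])
        return jsonify({"status": "queued"}), 202

    if __name__ == "__main__":
        app.run(port=5001)  # each service runs and deploys on its own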


This is great. You should offer a combined zip file with all the MP3s as a $5 download. I would pay for it.


So 2,200 signups turned into 9 active users. If those were paying customers, then that would be roughly a 0.4% lead-to-customer ratio (9 / 2,200). There are companies that have built large businesses on a ratio like that. I'm not saying the OP should continue the business; his other reasons for giving it up may be valid. But a conversion ratio of around half a percent is really not that bad for a brand new product. Very few products are instant hits.


Exactly - after being featured on HN, my SaaS got around 1k email signups, and off the top of my head only 15 to 20 translated into paid customers. There is a lot of work to be done to improve that, and it does not mean the product isn't good.


I would love for someone to start a company that could bring fiber to cities via the Kickstarter model. Getting the money upfront from customers could go a long way toward mitigating the risk. Ideally the company would be structured as a hybrid for-profit/customer cooperative (so maybe 20% of the shares would be 10X preferred stock owned by the for-profit arm, and the other 80% of shares would be owned by the customers).


The problem with this model would be actually convincing a large majority of residents. The New Zealand Government is funding fibre to most cities and towns in New Zealand. Currently they're at 15% of their target population covered, and only 3% of those 15% are connected with UFB (ultra fast broadband) service.[1]

I should note, these fibre services are a free installation for residential use and many businesses. Monthly charges are the same as for ADSL2+ internet here.

1. http://northpowerfibre.co.nz/index.php/news/entry/slowly-int...


Well, although not exactly the same business model, this community in a rural area north of Toronto seems to have convinced a distant ISP to do a FTTH project. Homeowners pay up front and get their deposit back as they continue their subscription.

http://www.vianet.ca/trailofthewoods/


So, I haven't done any market research, but the model I have in mind is that the network is dark fiber terminating in a carrier-neutral facility, to decouple the infrastructure from the service - I think this is important for financing reasons. Running fiber to the home is a pretty expensive operation, and most people would need some kind of financing. In a non-decoupled scenario, that financing would come from locking in a long-term provider subsidy contract, but the timescales needed (maybe 5-15 years) are way too long to commit to a single service provider. They might be awesome in the beginning, but then deteriorate or go bankrupt or whatever, and you're stuck paying for Comcast-like service for a decade.

Dark fiber, however, is a perfectly neutral medium, and it's trivial to measure its quality, so you can set up meaningful SLAs (basically, like water or electricity: it's either there or not, and if it's not, it's the provider's problem. OK, not quite, but it's a heck of a lot more clean-cut than deciding whether your internet connection is too slow, why, and who's responsible).

So you essentially take out a mortgage on your fraction of the dark fiber coop (my consultancy could conceivably facilitate this financing - maybe there are some local businesses or wealthy citizens that you'd like to have underwrite the loans, but they'd still need the infrastructure to facilitate the loans and secure the collateral). You own outright the bit of infrastructure that only serves your house (if you're in a dense area, that's not much; if you have a two-mile driveway, it's more), plus 1/nth of the shared infrastructure, including the carrier-neutral termination room. When someone buys in later, they pay for any direct cost of connecting them, and the payment for their 1/nth share is distributed evenly among the existing members. If this is a community effort, you can save a lot of money by digging ditches yourself (the consultancy will provide instructions on how to properly secure the cables in the ditch) and by placing the termination room in a town hall, community center, church, or local business annex - ideally close to existing backbone cable runs. Obviously, bulletproof leases and contracts for access to this room would need to be in place; that's part of what I imagine the consultancy would help with.
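
As a toy sketch of that buy-in arithmetic (illustrative numbers only, and assuming for simplicity that members hold equal stakes):

    def admit_member(shares, new_member):
        # the newcomer receives 1/nth of the pool; each existing
        # member cedes an equal slice to make room
        n = len(shares) + 1
        new_share = 1.0 / n
        cut = new_share / len(shares)
        for member in shares:
            shares[member] -= cut
        shares[new_member] = new_share
        return shares

    shares = {"alice": 0.5, "bob": 0.5}   # two founding members
    print(admit_member(shares, "carol"))  # all three end up near 1/3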

Once you have the infrastructure in place, you need to get a provider to set up shop in the termination room. It would make a lot of sense for the consultancy to also be an ISP for this purpose. Maybe the cost of setting up could be incorporated into the initial capital of the coop; maybe the ISP will front it on the back of signing 1-2 year contracts for service (which is separate from, and on top of, the cost of the fiber); maybe it can just provide the service on the expectation of being able to do business - that would likely depend on how remote the community is. Installing optical transceivers at each end of the fiber is the responsibility of the customer and their desired ISP.


Math, English, Python, Chinese, Ruby - these are all just languages. Programming languages and Math are simply languages designed to be good at describing phenomena that can be defined with precision. Programming languages are optimized for describing precise sequences and procedures, while the language of Math is optimized for describing precise proofs. You could actually write pretty much anything that is written in math or Python in English, but the English language has so much ambiguity that you would be more prone to making mistakes. English would also be much more verbose and would not be parseable by a computer.
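
As a toy illustration: the English instruction "add up the numbers from one to ten" leaves room for argument (inclusive? exclusive? integers only?), while the Python version pins the meaning down completely:

    total = sum(range(1, 11))  # the integers 1 through 10, inclusive
    print(total)               # 55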


Precise English is "Legalese" - lots of definitions, qualifiers, and words having precise technical meanings.


The numerous compilers (courts) for this language frequently produce different and conflicting results, a situation which cannot be fixed but merely worked around by adopting the result of a master compiler as the authoritative version. The compilation process is also quite slow and very expensive.

So if legalese can be considered a precise, formal language, it must be considered one with an extraordinary amount of undefined behavior.


It is more precise than everyday English, but still much less precise than math or Python. At root, the phenomena that legalese describes are often imprecise. For instance, there is no perfectly precise way to describe the boundaries of a non-compete agreement, or what exactly constitutes "negligence." (This is why we cannot actually replace law with code.)


"He wants the rest of the people to first hear: "We're changing for the better, and while these layoffs are a bad thing, it's necessary pain we have to go through to be better in the long run."

Right, but the question remains: why did he not just say exactly what you just said? Why use business-school managementese? Why not use language that is human and expresses empathy?


One approach is to start documenting how much of your time is spent either a) manually testing changes because you have no automated tests, or b) debugging, fixing bugs, and patching messed-up data due to bugs that could easily have been caught by tests. I find that investing in basic acceptance tests pays for itself very quickly. Good tests allow you to iterate on your early product much faster, because you can actually make changes without breaking things for your early users. If you can document the wasted time, maybe you can convince him of the benefit.
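
As a minimal sketch of what "basic" can mean here, using pytest against a toy signup function (in a real codebase the function under test would be your actual product code):

    import pytest

    accounts = {}

    def signup(email, password):
        # toy stand-in for real application code
        if "@" not in email:
            raise ValueError("invalid email")
        accounts[email] = password

    def test_signup_persists_account():
        signup("a@example.com", "hunter2")
        assert "a@example.com" in accounts

    def test_signup_rejects_bad_email():
        with pytest.raises(ValueError):
            signup("not-an-email", "x")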


"My intuition would be that functional startups are visibly functional; they feel awesome."

It is a lot more complicated than that. The term "sausage factory" usually applies - http://www.urbandictionary.com/define.php?term=sausage%20fac.... Even very successful startups can feel very dysfunctional in the heat of the moment. First, when you grow fast and do things that have never been done before, lots of things break. Second, management may be absolutely brilliant about some matters but have giant gaping holes in other areas. A CTO may have figured out a breakthrough in machine learning but have no idea how to run an engineering organization. A CEO may be able to sell sand to a sheikh but have no idea how to do proper accounting. As a company matures, it figures out a way to augment the leadership and compensate for the weaknesses of the CEO or founder. But there are always growing pains early on, before a startup has identified and fixed said holes.

You really should be evaluating a startup by the high notes it is hitting, not by the amount of dysfunction. So it should feel awesome at least some of the time, but the startup may very well feel dysfunctional most of the time. That is OK; the dysfunction can be fixed later. But if there is no market or technical breakthrough, then the startup is probably not going to do well.

OP, the questions to consider are: do you have a product that people love (i.e., they use it all the time and recommend it to others)? If yes, is this sample of people representative of a bigger market opportunity? Does your CTO seem reasonable enough that, if you demonstrate the value of tests and a build process, he'll allow you to allot time to fix things?

