WebVM: Server-less x86 virtual machines in the browser (leaningtech.com)
352 points by AshleysBrain on Feb 1, 2022 | 105 comments



The JIT compilation to WebAssembly they are doing with WebVM is pretty cool!

I didn't see any benchmarks on the linked-to page. I tried their sample Fibonacci program, but ran it up to 100000 and only timed the actual execution (using Python's time module) so startup time wasn't included; WebVM only took 6.7 times as long as native for me. That's very impressive.

There's a similar open source project called https://copy.sh/v86/. Using their Arch Linux image with the exact same Fibonacci benchmark, it takes 44 times as long as native.


As a smoke test, I tried running `time python3 -c 'print(max(range(2*10**7)))'`

It's about ~10x faster on webvm.io compared to copy.sh/v86 and only ~20x slower than native, impressive stuff


20x slower than native suggests this solution has its own ozone layer hole.


Thinking more about this.

I'd love to try V8 there, so we can benchmark WebVM's V8 against the native JS in the browser... all using the same engine (it's a bit meta, isn't it?)


Well, nodejs uses V8 and it's installed


Exciting... trying it as I type!

Edit: I'm trying to run the following benchmarks [1]:

    function mySlowFunction(baseNumber) {
      console.time('mySlowFunction');
      let result = 0;
      for (var i = Math.pow(baseNumber, 7); i >= 0; i--) {
        result += Math.atan(i) * Math.tan(i);
      }
      console.timeEnd('mySlowFunction');
    }

    mySlowFunction(8); // higher number => more iterations => slower

Results: 99ms in my Chromium browser (V8 JIT enabled); it breaks in WebVM (after typing `node` and pressing enter) with `TODO: FAULT af5147bf / CODE da d9 83`

Results (not using console.time): 99ms in my Chromium browser (V8 JIT enabled), 680ms in WebVM (1050ms on the first run).

Conclusion: WebVM Node is only 7 times slower than native V8 on my local machine (macOS M1 Max)

[1]: https://gist.github.com/sqren/5083d73f184acae0c5b7


If you want Node in your browser, StackBlitz has a super fast implementation. I think it's Node.js compiled to Wasm but with V8 replaced by the browser's native JS engine:

https://blog.stackblitz.com/posts/introducing-webcontainers/


That's an unimplemented instruction. If I had to guess, it's one of the trigonometric funcs you are calling. Feel free to report a bug if you'd like.


The issue is triggered by `console.time`

Try:

  function mySlowFunction(baseNumber) {
    const startTime = Date.now();
    let result = 0;
    for (var i = Math.pow(baseNumber, 7); i >= 0; i--) {
      result += Math.atan(i) * Math.tan(i);
    }
    console.log(result, 'time:', Date.now() - startTime);
  }

  mySlowFunction(8);
I get ~100ms in Chromium and ~900ms in the WebVM (if I close the devtools)


I timed a very simple loop in C: "for (volatile int i=0; i<N; i++);" (a handful of arithmetic, compare, and branch instructions) with N=1e9, and it ran at 70% of native speed, which looks really good. I'd love to see LINPACK now :)


Thanks for the benchmarks! I was curious about timing.

Python is also a bit tricky because it does things with pointers that I believe are hard to optimize (or maybe not, who knows!). Have you tried other languages/programs?


Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"? The current Pyodide CPython Jupyter kernel takes ~25s to start, and can load Python packages precompiled to WASM or unmodified Python packages with micropip: https://pyodide.org/en/latest/usage/loading-packages.html#lo...

Does WebVM solve for workload transparency, CPU overutilization by one tab, or end-to-end code signing maybe with W3C ld-proofs and whichever future-proof signature algorithm with a URL?


The VM cannot have a full TCP/IP stack, so any data research tasks are likely to need special code paths and support for downloads. No SQL databases, etc.


From "Hosting SQLite Databases on GitHub Pages" https://news.ycombinator.com/item?id=28021766 https://westurner.github.io/hnlog/#comment-28021766 :

DuckDB can query [and page] Parquet from GitHub, sql.js-httpvfs, sqltorrent, File System Access API (Chrome only so far; IDK about resource quotas and multi-GB datasets), serverless search with WASM workers

https://github.com/phiresky/sql.js-httpvfs :

> sql.js is a light wrapper around SQLite compiled with EMScripten for use in the browser (client-side).

> This [sql.js-httpvfs] repo is a fork of and wrapper around sql.js to provide a read-only HTTP-Range-request based virtual file system for SQLite. It allows hosting an SQLite database on a static file hoster and querying that database from the browser without fully downloading it.

> The virtual file system is an emscripten filesystem with some "smart" logic to accelerate fetching with virtual read heads that speed up when sequential data is fetched. It could also be useful to other applications, the code is in lazyFile.ts. It might also be useful to implement this lazy fetching as an SQLite VFS [*] since then SQLite could be compiled with e.g. WASI SDK without relying on all the emscripten OS emulation.
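For a sense of how that gets wired up, here's a minimal sketch of querying a statically hosted SQLite file with sql.js-httpvfs (the asset URLs and database path are placeholders, and option names may differ between versions; the repo's README is authoritative):

  import { createDbWorker } from "sql.js-httpvfs";

  // Placeholder asset paths; a real build resolves these through its bundler.
  const workerUrl = "/static/sqlite.worker.js";
  const wasmUrl = "/static/sql-wasm.wasm";

  const worker = await createDbWorker(
    [{
      from: "inline",
      config: {
        serverMode: "full",            // one plain .sqlite3 file on the static host
        url: "/data/example.sqlite3",  // placeholder database path
        requestChunkSize: 4096,        // fetch pages via small HTTP Range requests
      },
    }],
    workerUrl,
    wasmUrl
  );

  // Only the pages needed to answer the query are actually downloaded.
  const rows = await worker.db.query("SELECT count(*) AS n FROM some_table");
  console.log(rows);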


Also, I'm not sure whether jupyterlab/jupyterlab-google-drive works in JupyterLite yet. Is it possible yet to save notebooks and other files from JupyterLite (running in WASM in the browser) to one or more cloud storage providers?

https://github.com/jupyterlab/jupyterlab-google-drive/issues...

https://github.com/jupyterlite/jupyterlite/issues/464


The VM could have its own TCP/IP stack, possibly with a SLIRP layer for translation of connections to the outside. Internet connectivity can be done by limiting it to AJAX, or forwarding the packets to a proxy (something like http://artemyankov.com/tcp-client-for-browsers/), or including a Tor client that connects to a Tor bridge, etc.


Is all of that necessary to LD_PRELOAD sockets and tunnel them over WebSockets, WebRTC, etc?

So e.g. curl doesn't work without (File System Access API,) local storage && translation of e.g. at least normal curl syscalls to just HTTP/3?


StackBlitz' WebContainers have an in-browser TCP/IP stack, I think from MirageOS.


How does it get raw IP packets out from inside the browser?


Does it need to reframe packet structs and then fix fragmentation issues, or can it set the initial MSS (because MTU discovery likely won't work quite right) so that it doesn't have to ~shrink and re-fragment tunneled {TCP,} packets? https://en.wikipedia.org/wiki/Maximum_segment_size#MSS_and_M...


I think it's only externally exposed as a local-only HTTP server, presumably via a service worker, so you can open the website you are developing.

The IP network is entirely virtual.


Wow, this is amazing!

Now we could create a /dev/dom virtual device, and write dynamic web pages in pure bash. I love this.


Or a DOMFS in /dom that's organized in the same hierarchy as the browser DOM. For example, to write a whole page:

    echo "...." > /dom
Update the <title> tag:

    echo "TITLE" > /dom/html/head/title
Change the charset:

    echo "EBCDIC" > /dom/html/head/meta[1].charset // second <meta> tag
    echo "EBCDIC" > /dom/html/head/1.charset       // second child of <head>
Even go full XPath and replace a tag's inner HTML:

    echo "<div>abc</div>" > /dom/[@id='myID']
This is a horrible idea...


Oracle Acquisitions team would like to discuss a business transaction.


This comment is worse and more horrible than the parent.


XPath? What's wrong with the find command?


The way I envisioned it was that attributes are also files themselves, and the contents contain the values. So:

    // given:
    <div id="myID">

    // `id` attribute is located in
    /dom/html/body/…/div[n].id
So unless you look at the contents of the files, you wouldn’t be able to find a certain ID. Because of that, DOMFS (when XPath is enabled) would expose that same "file" at `/dom/[@id='myID']` as well.

I guess you could do something like this?

    grep "myID" /dev/**/*.id
But why would you even use this “DOMFS”?


This is awesome. Really. Props to all the Leaning Tech team (creators of Cheerp, an alternative to Emscripten)!

I believe it will be possible to achieve a similar state in the future just using native Wasm/WASI (so no transpilation from x86 -> Wasm will be needed), but we are far from it given how slowly the WASI standards move.

The shell is impressive: https://webvm.io/ (only downloads ~5Mb of resources for a full Debian distro)


Thanks, appreciated.

By the way, it's spelled "Cheerp", with a lowercase p :-)


Corrected!


License check without commentary.

> Copyright (c) Leaning Technologies Limited. All rights reserved.

https://github.com/leaningtech/webvm/commit/6efab7e60bf6f173...


The repo doesn't contain the actual distro itself; it appears to be loading CheerpX's virtualisation engine and feeding it a disk image here https://github.com/leaningtech/webvm/blob/6efab7e60bf6f173a2...

Assuming they wrote their own xterm interface (no idea if they did, I only got as far as that), it seems everything open-source is fetched by the client at runtime. This feels to me more like a bootloader than an OS. Not sure where that lands license-wise, i.e. whether merely linking to the image requires appropriate licensing and attribution, but either way the work seems pretty straightforward to replicate, assuming you have or can supply an xterminal-esque interface and can compile your OS image appropriately.

I don't think they're doing anything wrong licensing-wise but I guess it depends on how the law defines including software as a library, whether that needs to happen at compile time or run time, or whatever. Seems like a grey area?


Hello HN, author of the post here, happy to answer questions.



Perf. Our JIT is extremely advanced. Of course different workloads will behave differently, but you are welcome to try multiple payloads and see for yourselves.


What constitutes advanced?



And jor1k (a JavaScript OpenRISC processor emulator): well, they emulate processors (x86, OpenRISC, ...) in JavaScript, while WebVM executes (transpiled?) code in WebAssembly


v86 also recompiles machine code to WebAssembly. The main difference is that v86 is a hobby project (of which I'm the original author, by the way), with far fewer contributors and no (known) commercial users, and is much less sophisticated than this project. On the other hand, v86 is open source, so you could make it sophisticated if you wanted to :-)

I'm not sure what exactly CheerpX is used for; v86 is mostly used to demo operating systems (mostly hobby and vintage). We recently got SerenityOS to run: http://copy.sh/v86/?profile=serenity


So, this is a reimplementation of the Linux ABI and no Linux kernel source is involved, right?


That's correct.


Have you tried compiling Linux as User Mode Linux with emscripten? I imagine something like this https://github.com/nabla-containers/nabla-linux would run on wasm, too?


Well, User Mode Linux would still require an underlying Linux ABI, for example mmap to implement paging. With sufficient work it might be possible to actually run UM Linux _on top_ of CheerpX / WebVM.

Implementing the Linux ABI ourselves gives us the opportunity for tighter integration with the Web platform anyway.


Hey dude, I've been screwing around implementing plan9 semantics in an OS-like system for the browser (https://github.com/intigos/possimpible). I'm interested in using an x86 emulator inside a web worker that I'm using for processes so I can run x86 code. How hard is something like this? Can you give me some pointers on how to start working on this? Thanks!


It seems not to work with my eager blocking settings. It works with a fresh Firefox profile, though, so it's not clear what exactly the issue is. I know for sure that the ext2 image is never actually downloaded (0-byte response), and when I try to check anything in DevTools, cxcore.wasm triggers a pause on a debugger statement, which spikes the CPU.

Any chance there could be a version with all the assets hosted in one place (say, GitHub Pages)?


Is there support for loopback networking (for IPC)? Is there a way to translate HTTP(S) requests to `fetch` requests? How difficult would it be to port a Go app that uses https://github.com/pion/webrtc to use the browser's native WebRTC?


HTTP requests could be intercepted, but due to CORS they would most likely not succeed. I have not studied the WebRTC protocol in detail but it might be possible.


What performance bottlenecks are there that can still be improved?

Right now I see a ~6x slowdown on small-code benchmarks like sieve, but it goes up to 50x or more for large code like GCC. For QEMU it's roughly 1.6x and 2-3x respectively, so it seems like your JIT is slower than QEMU's.


Is this tech a reasonable way for cross-platform emulation, e.g. a x86 VM running on a browser on ARM hardware like the new Apple chips?

In essence, for this approach, would the x86-on-x86 performance hit be similar to or very different from the x86-on-ARM performance hit?


The actual host hardware does not matter, neither in terms of support nor performance, theoretically. We currently optimize the codegen for Chrome/V8 on x86.


How about the network stack? Can the VM talk with other VMs from other browsers?


Not in the current implementation, but absolutely possible with WebRTC. We did something equivalent some time ago: https://medium.com/p/29fbbc62c5ca


I have a Redis driver working on the web using websockify bridges (and there are a few other options)

https://observablehq.com/@tomlarkworthy/redis
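The underlying trick is simply that websockify relays raw TCP bytes over a WebSocket, so the page can speak RESP to Redis directly. A rough sketch (the bridge URL/port is made up, and binary-vs-base64 subprotocol handling varies by websockify version):

  // Assumes a websockify instance bridging ws://localhost:8081 to a local Redis on 6379.
  const ws = new WebSocket("ws://localhost:8081");
  ws.binaryType = "arraybuffer";

  ws.onopen = () => {
    // RESP encoding of the PING command: an array containing one bulk string.
    ws.send(new TextEncoder().encode("*1\r\n$4\r\nPING\r\n"));
  };

  ws.onmessage = (e) => {
    // Expect "+PONG\r\n" back from Redis.
    console.log(new TextDecoder().decode(e.data));
  };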


Will you open source your code one day, and if not, why not?


Impressive stuff! Worth trying to run the OSv unikernel (one of my babies) in it.


Can you run containers in webvm?


> HTTP servers (microservices): By combining Service Workers with virtualized TCP sockets, we will allow HTTP servers inside the WebVM to be reachable from the browser itself. A JavaScript API will be exposed allowing to connect an iframe to the internal servers. This is mostly supported already, although not exposed on the current demo page. A similar solution could be used to support REST microservices as well since Service Workers also handle fetch/XHR requests.

I wonder if this can be used to create a semi-decentralized website where visitors are automatically served a VM to run, turning them into edge servers to offload requests from other visitors. The more active visitors, the more edge servers you have. Infinite scaling on the cheap! The visitors may not like you abusing their browser, though, but there might be use cases where this is acceptable, such as popular community-run websites that are too expensive to run due to the huge amount of traffic.


> I wonder if this can be used to create a semi-decentralized website where visitors are automatically served a VM to run, turning them into edge servers to offload requests from other visitors. The more active visitors, the more edge servers you have. Infinite scaling on the cheap! The visitors may not like you abusing their browser, though, but there might be use cases where this is acceptable, such as popular community-run websites that are too expensive to run due to the huge amount of traffic.

You can buy ads. Ads cost about as much as ads pay, so you can buy and sell the same unit, and then run some compute for free.

I ported k(5) to html+js (ecmascript) during Iverson College (I think this was 2014?) and used webtorrent to connect to secondaries to run a scale test. The cpus are cheap, and they are slow, but it was a lot of (distributed) fun. I pitched the idea to KX (and a few others) to sell compute for fractions-of-pennies-per-hour but I think it was still a little early.

Do you think now's the time?


Hold on, so you buy an ad slot, serve your code in that ad slot, AND serve another ad inside that slot to resell? Is that actually allowed by the ad provider?

The idea of running a VM inside an ad to harvest compute from unsuspecting visitors... I think this might accelerate widespread use of adblockers even more if you successfully deploy this in the wild, because people will notice ads are getting heavier.


> Hold on, so you buy an ad slot, serve your code in that ad slot, AND serve another ad inside that slot to resell?

Exactly.

> Is that actually allowed by the ad provider?

On some ad networks it's prohibited by ToS and (in some cases) a review process, but it is extremely difficult to prevent in practice, especially if you have any understanding of how this works. I estimate perhaps as much as 50-90% of Google's adsense revenue comes from this, so they aren't (directly) incentivized to stop it.

> The idea of running a VM inside an ad to harvest compute from unsuspecting visitors... I think this might accelerate widespread use of adblockers even more if you successfully deploy this in the wild, because people will notice ads are getting heavier.

Perhaps, but people also have a lot of idle cores, so if you don't block networking and you monitor system performance carefully to ensure you don't affect things, for the most part people simply won't notice.

Quite a few people have been caught out doing wasteful things like trying to generate "coin", which definitely doesn't help, but there are also some interesting applications that have been run on volunteer CPU time (folding, SETI, etc.), so it seems plausible that with some charitable examples and some care to avoid impact, this might be a doer?


I don't see how these could be exposed to another person's browser as an HTTP site; my reading of this is that they are exposed to the user's own environment running inside WebVM.

To my knowledge the only way that one browser can talk to another is through peer-to-peer WebRTC, but that requires a handshake.


Websites rendered via WebSocket instead of HTTP are already possible these days using various frameworks, right (e.g. Phoenix.LiveView)? It might be possible to add WebRTC support in addition to WebSocket for transport. The initial connection is served from a centralized server, just enough to render the initial page and initialize the WebRTC handshake. After that, subsequent page navigation is handled over WebRTC to available peers. The VM would also get downloaded over this channel, turning the visitor into a node to expand the network.


Yeah, I was thinking about this, but at that point there's not much reason to prefer a VM over just writing a server that's compiled to Wasm, if you're going to have to write something that renders via JavaScript instead of serving HTTP?


Having a VM means you get to use a huge amount of pre-existing software instead of writing it yourself, and even expose it over WebRTC using something like webrtc-socket-proxy. Ideally you'd just need to write some glue code to tie it all together.


Come to think of it, this kind of looks like an isomorphic SPA if you squint enough.


Sibling comments have mentioned web torrents; you could also look at IPFS for something related to what you're describing (without the abuse): https://ipfs.io/


If you just want to serve static files between browsers, you can do that with WebTorrent. I also searched “webrtc rpc” and found another piece of the puzzle.

I suspect that running arbitrary computations on peer browsers is uncommon today because verifying the output of an RPC executed on hostile machines is nontrivial unless you’re serving static files with a known hash, mining crypto, or solving an NP problem. I guess you’d also need a quota system to prevent users from taking over your botnet’s CPU time by spamming peer browsers with expensive RPCs.
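The static-file case really is the easy one: hash whatever a peer sent you and compare it against a hash you already trust. A quick sketch (the expected digest would come from the trusted origin server):

  // Verify bytes received from an untrusted peer against a hash obtained from a trusted source.
  async function verifyChunk(bytes, expectedSha256Hex) {
    const digest = await crypto.subtle.digest("SHA-256", bytes);
    const hex = [...new Uint8Array(digest)]
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    return hex === expectedSha256Hex; // reject the chunk (and maybe the peer) on mismatch
  }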


`su` sends the VM into a spin on a single core; is it going to get somewhere?

Is there anywhere that documents the covered/uncovered syscall surface?


Yeah, happened to me as well..


I always wonder if someone will eventually put Docker containers up in the browser. It would make tons of experimentation work easy.


is this some kind of joke to make computing as slow as possible?


If you thought web pages were bloated before, just wait until they download an entire Ubuntu image on every page load.


Some people don't care that it's slow. Availability and uniformity are much more valuable, for example in a school environment, especially one where they teach IT one hour per week and the teacher is not really a programmer themselves.


Yes but...

The more likely case is you get corporate and military clients who adopt this for security (ya know, load a known 0-state image) to check email (which is loading some old version of outlook from the image), and it ends up taking the entire workday before you can briefly use your system.

Basically giving a take on the recent https://www.airforcemag.com/fix-my-computer-cry-echos-on-soc...


I don't care about them. If they want to wait for an e-mail all day, it's their problem. I won't hate this technology just because some users of it are dumb.


I can see why running the Docker daemon in the browser can be seen as excessive.

But running a Docker container does not require dockerd. One can flatten the filesystem and have systemd run the container directly.


It'd only be a joke if it ran a VAX emulator in that Docker container and then ran VMS on that emulator and then ran the RSX-11 emulator on VMS and then ran Adventure on that RSX-11.


This would actually be super useful on a Chromebook just as an alternative SSH client, as I'm not super happy with existing SSH client solutions on Chromebooks, especially that Chrome SSH extension thingy, which sucks pretty hard IMO.

Note I don’t like to put my Chromebooks in developer mode or do the crouton stuff or whatever the latest is on that front..

EDIT: nm, it has no TCP stack or outbound connectivity


If you already simulate Linux on Linux, why not transfer TCP over HTTPS over TCP using a tunnel server?

https://stackoverflow.com/questions/14080845/tunnel-any-kind...




This looks awesome.

Would it be possible to compile GNU/Linux to WASM as a target platform? What's missing for that?


Truly an impressive feat, and a lot of work no doubt. But why? Recently it seems to be some kind of fad to demonstrate that everything can be done inside a web browser. Again: why? Scope creep of web browsers is already beyond repair.


For education, I have plenty of use cases for large-scale teaching and learning without backend servers and without installing anything complex on the client side.


we are well on the way towards the death of yavascript, let's hope the exclusion zone doesn't come true.


The 5 year war seems mighty close though. One or two parallel universes away maybe.


Gotta love the term serverless...on par with fullstack developer.

It's either a server somewhere handling tasks in a queue or the client is the server. I hate that we have to care so much about "words", but I've seen far too many people walk away with the impression that the server is unimportant or requires little maintenance in this model. I'd argue it becomes even more important because of how persistence, consensus, availability, etc. work. /rant

That being said, always excited to see work coming out of WebAssembly. The network is the computer.


Wouldn’t this mean you can host a website in the browser if you run dynamic DNS to connect back to a webserver you have running in the vm on localhost?


Nice! How does this compare to Web Containers (https://blog.stackblitz.com/posts/introducing-webcontainers/, proprietary to Stackblitz)?


From what I read on their website, WebContainers cannot run Linux binaries, compile C code, or run Python scripts / bash, etc., for example?


What needs to happen to enable the browser to act as a functioning web server?


It's not really a server but you can do some interesting stuff with intercepting fetches in Service Workers, e.g. https://servefolder.dev - a little side project of mine.
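For a flavour of the technique (this is not servefolder.dev's actual code, just a minimal sketch): a service worker can intercept same-origin fetches and answer them from wherever it likes, e.g. an in-memory map standing in for a folder or a VM's filesystem:

  // sw.js: answer requests without any real server involved.
  const files = new Map([
    ["/hello.html", "<h1>served from the service worker</h1>"],
  ]);

  self.addEventListener("fetch", (event) => {
    const path = new URL(event.request.url).pathname;
    if (files.has(path)) {
      event.respondWith(
        new Response(files.get(path), {
          headers: { "Content-Type": "text/html" },
        })
      );
    }
    // Otherwise fall through to the network as usual.
  });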


It depends on the definition of "functioning".

If you'd like to access a server from another tab / iframe of the same browser, that's almost possible already, just some UX work would be required.

If you'd like users of the same page (or a separate specialized page) to connect, that could be possible with WebRTC.

If you'd like arbitrary hosts to connect, that would require a server-side proxy; there is no client-side-only solution that I can see.


I have it working here https://webcode.run



Why?


Why?


Played with it for a minute or two. No network permissions. No sudo. There is a su, but it hangs after entering a password.

So probably good for unprivileged non-networking applications.


Not working on iPhone or iMac M1… not impressed by that..


How do you move files into or out of this VM?


This is cool if you are running Windows and want to test some Linux command immediately...


Would love to see Cloudflare support this on Workers


After the introductory message I initially saw lots of

  id: /lib/i386-linux-gnu/libpthread.so.0: unsupported version 0 of Verdef record
  id: error while loading shared libraries: /lib/i386-linux-gnu/libpthread.so.0: unsupported version 0 of Verneed record
  bash: [: : integer expression expected
which would repeat no matter what command I try to run:

  user@:~$ ls -l
  ls: /lib/i386-linux-gnu/libpthread.so.0: unsupported version 0 of Verdef record
  ls: error while loading shared libraries: /lib/i386-linux-gnu/libpthread.so.0: unsupported version 0 of Verneed record
Very curious.

It turned out that my hitting F5 (because things appeared to have frozen; I assumed resource downloads had hung, as I have flaky internet) corrupted the local ext2 filesystem in a way the runtime didn't detect and catch upon reload.

That's actually kind of interesting, because HTTPS nowadays almost always means authenticated encryption, i.e. a guarantee that application data has absolutely not been modified. Thus, correct data was downloaded, but it somehow got corrupted on its way into IndexedDB due to my hitting F5.

I'm very curious as to why that's happening, but I could only blindly speculate.

Considering the bigger picture of the many block requests shown in the devtools, it would seem the ext2 block layer just transmits remote requests for small spans of "exactly what's needed right now", with no buffering or preemptive readahead. Given this runtime's nature as a likely-embedded component running specific bespoke applications (I get the impression this entire product exists solely or largely to run a retargeted version of the Linux Flash player), it is likely possible to optimize this quite effectively.

In fact, because of the A (then) B (then) ... (then) N serially blocking nature of the problem space, it could actually be quite straightforward to implement a fairly effective solution: if the block layer were modified to understand "here are the bytes you asked for... please also cache this unrelated list of chunks as well", the web server could then be modified to aggregate s=...&e=... requests, find correlations between ranges, and then preemptively send data estimated to be relevant for correlations over a certain threshold.

If you further modified the client to tag cache entries as either "directly requested" or "preemptively sent by the server (when requesting s=...&e=...)", then whenever application code requests an arbitrary block from disk and that block turns out to have been preemptively cached, with the preemption indeed saving a roundtrip, the runtime could send a pingback to the server to strengthen the relationship between the original request and the follow-up. (Given that this logic would be running on the server, and this approach provides a reward/feedback signal, I'm almost tempted to wonder if a small neural network could be interesting to play with here, but it might be overkill.)
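A rough sketch of what the client half of that could look like (every name and the response shape here is hypothetical, purely to illustrate the tagging + pingback idea):

  // Hypothetical block cache: blocks fetched directly vs. pushed preemptively are tagged,
  // and a prefetch that actually saves a roundtrip gets reported back to the server.
  const cache = new Map(); // blockId -> { bytes, prefetched }

  async function readBlock(blockId) {
    const hit = cache.get(blockId);
    if (hit) {
      if (hit.prefetched) {
        // Tell the server its guess paid off, strengthening that correlation.
        fetch(`/pingback?block=${blockId}`, { method: "POST" }).catch(() => {});
        hit.prefetched = false; // only report the first hit
      }
      return hit.bytes;
    }
    // The server replies with the requested range plus any blocks it predicts we'll want next.
    const res = await fetch(`/blocks?s=${blockId}&e=${blockId + 1}`);
    const { requested, predicted } = await res.json(); // hypothetical response shape
    for (const b of predicted) cache.set(b.id, { bytes: b.bytes, prefetched: true });
    cache.set(blockId, { bytes: requested.bytes, prefetched: false });
    return requested.bytes;
  }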

In any case, I'm 119ms away from the disk CDN... which might help explain my motivation to do the above. My ADSL2+ is hit and miss; today it's doing 6Mbps, give or take (especially for small high-latency transfers which take time to ramp up). The devtools is saying a full reload (clear IndexedDB, hard reload) takes 33.75s.

During that time it's entirely hung.

Some sort of early-boot progress bar (an extension of the logic that puts the introductory text on screen?) would probably be a good idea. Maybe extend the runtime with a "userspace reached" opcode (or some sort of equivalent magic noop) called at the very bottom of /etc/bashrc, which hides the progress bar.

Lastly, a bit of an awkward problem that is Kind Of Interesting™ but doesn't have any straightforward next steps: while attempting to identify how long a cold start takes here I got into a bit of a fight trying to delete the IndexedDB with the site open. Naturally that didn't go down so well, with the app recovering and auto-recreating the DB. Somewhere in there between closing the tab and trying to reload everything, I realized the browser process had started using 100% CPU, and my laptop was at 94°C. Interestingly, everything was still entirely responsive (indeed the only reason I'm submitting this was because the form autosaver still worked; this was slightly too many butterflies at once to reason about and I completely forgot to save my post text), so I quit Chrome via the menu and... it didn't fully exit (I had to hit ^C twice (I run Chrome from a terminal) to make it fully quit). All the threads shut down but the root browser process was still yelling at my CPU. gdb decided it was stuck in futex_wait_cancelable(). Yay. Apparently the IndexedDB has a few handle leaks in it or something :/.


>>> The web platform is well on its way to becoming the dominant platform for application distribution.

Errrr…. yes, “on its way”.


This is why I love Capitalism and Competition. This is amazing and fantastic.


The downvoters are anti-capitalists, it seems. If you don't like me, don't downvote me, just leave it as it is. I will never downvote someone who loves socialism on HN.



