
hyperjson? (Python bindings for Rust’s serde-rs).

It’s not really a cheat answer - what if you could iterate quickly and launch 5 startup ideas in the time it takes to launch 1 written in Zig, and then, when one of your 5 turns out to be the next YouTube / Instagram / Facebook / [insert other huge-traffic site launched on a “slow” backend here], you have the freedom to glue some optimised native code into the hot path?

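To make the “glue in native code on the hot path” idea concrete, here’s a minimal Python sketch (my own illustration, assuming hyperjson mirrors the stdlib json module’s loads/dumps interface): keep the pure-Python path, and swap in the Rust-backed parser only when it’s available.

    import json

    try:
        import hyperjson as fast_json  # Rust (serde-json) under the hood
    except ImportError:
        fast_json = json  # fall back to the pure-Python stdlib module

    def handle_request(raw_body: bytes) -> bytes:
        # The hot path: parse, do a tiny bit of work, serialize.
        payload = fast_json.loads(raw_body.decode("utf-8"))
        payload["processed"] = True
        return fast_json.dumps(payload).encode("utf-8")

    if __name__ == "__main__":
        print(handle_request(b'{"user": "alice"}'))

The rest of the application stays plain Python; only the serialization hot spot changes.
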
I haven’t gotten to macros yet in my Rust playtime so far, but Rust does seem like a really nice complement to Python. Go/Swift strike me as a faster Python - nothing wrong with that - while Rust strikes me as a different kind of tool, and that appeals. Zig / Nim are too bleeding edge for me. When I saw the cool kids start to decry Rust, that was my cue to finally go order the O’Reilly book and dig in (I’ll admit to reading up about comptime to see what the fuss was about - cool idea, but I’ll stick with trying Rust for now).


Lisp Machines are something that you think is really cool when you first learn about them, then you come to the realization that pining for them is a waste of time.

I've had a flash of inspiration recently and have been thinking about Lisp Machines a lot in the past three weeks.

But first, a digression. There's an important lesson to be learned about why Symbolics failed. I think Richard Gabriel came to the completely wrong conclusion with "Worse is Better" (http://www.dreamsongs.com/WorseIsBetter.html). There are two reasons why:

1. Out of all the LispM-era Lisp hackers, only RMS understood the value of what's now known as Free Software. (If you haven't read it yet, read Steven Levy's Hackers - it describes the MIT/LMI/Symbolics split and how RMS came to start FSF and GNU).

2. Portability is really important.

The key lesson to draw from Unix isn't that "Worse is Better," it's that survivable software is Free and portable. Free because getting software onto someone's hard drive is 80% of success, and portable because you don't know where people will want to use your software (there are some really weird places).

Symbolics was neither. If Genera had been Free Software, it would by definition still be around today. If Genera had been portable, it's likely Symbolics would never have gone out of business (the Alpha virtual machine would have been done sooner, with fewer resources, and for more systems).

Being released as Free Software today wouldn't help. Genera's predecessor, MIT CADR, was made available under an MIT-style license in 2004 (http://www.heeltoe.com/retro/mit/mit_cadr_lmss.html). There's a VM emulator which runs the code. The whole system is pretty useless.

Now on to the inspiration part:

It's possible to make a very high-performance, portable Lisp operating system on modern hardware. This has been a possibility ever since the Pentium came out. The main bottleneck to conventional Lisp runtime performance is the way operating systems manage memory allocation and virtual memory.

A type-safe runtime that has control over memory layout and virtual memory, and is aware of DMA, can provide extremely high throughput for allocation and GC (this has been shown by Azul's Linux patches for their JVM), true zero-copy I/O, almost optimal levels of fragmentation, and excellent locality properties. If you go single address space (and there's no reason not to) and move paging into software (object faulting and specialized array access), you've also eliminated TLB misses.

Throw in the fact that it now becomes trivial to do exokernel-type stuff like caching pre-formatted IP packets, and it should be possible to build network servers with throughput many times that of anything kernel/user-space split OSes like Linux or FreeBSD are capable of for dynamic content (i.e. not just issuing DMA requests from one device to another).

The only problem is device drivers. Lisp doesn't make writing device drivers any more fun, or reduce the number of devices you have to support.

What to do?

The reason I've been thinking about this is that I came across this: http://www.cliki.net/Zeta-C

I've heard of Zeta-C multiple times before, but for some reason this time I made the connection - "why not use Zeta-C to compile an OS kernel?"

I explored the idea further, and it seems to me that it wouldn't be an unreasonable amount of work to take the NetBSD device subsystem and have it running on top of a Lisp runtime with the necessary emulation of those parts of the NetBSD kernel that the drivers depend on. If you don't know, NetBSD's device drivers are modular - they're written on top of bus abstraction layers, which are written on top of other abstraction layers (for example, memory-mapped vs port I/O is abstracted). So the actual system twiddling bits can be neatly encapsulated (which isn't necessarily true for Linux drivers, for example).

I'm aware of Movitz (http://common-lisp.net/project/movitz/) and LoperOS (http://www.loper-os.org/). Movitz makes the mistake of trying not to be portable, but there's useful things there. I haven't spoken to Slava about this yet so I don't know what's going on with LoperOS. I am also aware of TUNES, and think it was an interesting waste of time.

The main thing is to get Zeta-C to work on Common Lisp. Then it's a matter of building a new portable, bootstrappable runtime (I think the Portable Standard Lisp approach of having a SYSLISP layered on top of VOPs is the right way to go for this), and either building a compiler targeting that runtime, or adapting the IR-generating parts of one of SBCL, CMUCL or Clozure. Further bootstrapping can be done with SWANK and X11 once a basic networking stack is in place. I think such a system would be quite fun to hack on.

If you've gotten this far, let me know what you think about this idea. I also have some preliminary thoughts about how this could be worked into the base of a new high-performance/scalability transactional database startup; if you want to hear about that, email me: vsedach@gmail.com


This piqued my interest:

> handle arbitrary files (this includes large ones, think >100M SQL-dumps)

My old hex editor Hex Fiend was a serious attempt to handle arbitrary-sized files correctly. It's hard! In particular, operations which are usually instantaneous (e.g. Find) now may take a long time: they need progress reporting and cancellation, and ideally should not be modal.

A text editor makes that even harder, because now simple operations like "go to beginning of line" may take a long time if you have to find the beginning of the line. There are probably some conditions that you could impose (e.g. handles large files but not long lines); it would be interesting to see what those are.

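A rough sketch of what that means in practice (illustrative Python, not Hex Fiend's actual code, which is native): scan the file in fixed-size chunks so the search can report progress and be cancelled between chunks.

    import threading

    CHUNK = 1 << 20  # 1 MiB per read

    def find_in_file(path, needle: bytes, cancel: threading.Event, on_progress):
        overlap = len(needle) - 1
        offset, tail = 0, b""
        with open(path, "rb") as f:
            while not cancel.is_set():
                chunk = f.read(CHUNK)
                if not chunk:
                    return -1                      # reached EOF, no match
                window = tail + chunk
                hit = window.find(needle)
                if hit != -1:
                    return offset - len(tail) + hit
                tail = window[-overlap:] if overlap else b""
                offset += len(chunk)
                on_progress(offset)                # let the UI update a progress bar
        return -1                                  # cancelled

The point is less the searching itself than the shape of the loop: every whole-file operation needs a place to check for cancellation and a way to say how far along it is.
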
> Loading a file from disk is as simple as mmap

This is a sketchy decision. For one thing, it means you cannot work with files larger than maybe 3 GB, or even three 1 GB files at once, in a 32-bit process. I'd also be uncomfortable relying on mmap over NFS.

Hex Fiend handled this by not mapping files, but reading them (via pread) on demand.

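For illustration, here is a tiny Python sketch of that pread-style approach (hypothetical, not Hex Fiend's actual implementation): instead of mapping the whole file into the address space, read only the window the view currently needs, using positional reads.

    import os

    class FileWindow:
        def __init__(self, path):
            self.fd = os.open(path, os.O_RDONLY)
            self.size = os.fstat(self.fd).st_size

        def read(self, offset: int, length: int) -> bytes:
            # Clamp to the file, then read exactly that slice on demand.
            length = max(0, min(length, self.size - offset))
            return os.pread(self.fd, length, offset)

        def close(self):
            os.close(self.fd)

    # e.g. fetch the 4 KiB the view is currently scrolled to:
    # w = FileWindow("dump.sql"); chunk = w.read(5 * 1024**3, 4096)

Address-space limits stop mattering because only the requested slice is ever resident, and a flaky network filesystem fails with an I/O error instead of a SIGBUS mid-access.
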
> Since the buffers are append only and the spans/pieces are never destroyed undo/redo functionality is implemented by swapping the required spans/pieces back in.

Saving is what makes this tricky. Say I open a 10 GB file, delete it all, and save it. Can I now undo that delete? (Hex Fiend initially could not, and users were unhappy). If so, where does that 10 GB data live?

For that matter, how DO files get saved? Say I append 1 byte to the end of a file: is the entire file rewritten? Say I delete 1 byte from the front of the file: does it require twice the disk space to save it?

How about copy and paste? Say I open a 1 GB file, copy it, and paste it into another. Does that 1 GB of data get copied into memory, or is it just referenced in the original file? If it's referenced, what happens if I now edit that original file - does the data get copied at that point?

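To make the span/piece discussion concrete, here's a toy piece-table sketch (Python, purely illustrative, not the author's implementation): the original file and an append-only "add" buffer are never mutated, an edit only rewrites the list of spans, and undo just swaps an older span list back in.

    class PieceTable:
        def __init__(self, original: bytes):
            self.buffers = {"orig": original, "add": bytearray()}
            # Each piece is (buffer_name, start, length).
            self.pieces = [("orig", 0, len(original))]
            self.undo_stack = []

        def text(self) -> bytes:
            return b"".join(bytes(self.buffers[b][s:s + n]) for b, s, n in self.pieces)

        def insert(self, pos: int, data: bytes):
            self.undo_stack.append(list(self.pieces))   # snapshot spans, not data
            start = len(self.buffers["add"])
            self.buffers["add"] += data                 # append-only buffer
            new, offset, done = [], 0, False
            for buf, s, n in self.pieces:
                if not done and offset <= pos <= offset + n:
                    left = pos - offset
                    if left:
                        new.append((buf, s, left))
                    new.append(("add", start, len(data)))
                    if n - left:
                        new.append((buf, s + left, n - left))
                    done = True
                else:
                    new.append((buf, s, n))
                offset += n
            if not done:                                # empty table or append past end
                new.append(("add", start, len(data)))
            self.pieces = new

        def undo(self):
            if self.undo_stack:
                self.pieces = self.undo_stack.pop()

Notice that undo only restores span lists; every span still points into the original file or the add buffer. That's exactly why saving is the hard part: once you overwrite the file on disk, the data the old "orig" spans refer to is gone unless you kept a copy somewhere.
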
Anyways this is really tricky (but fun) stuff, and I hope the author succeeds since I do want a fast text editor that can operate on arbitrarily sized files.


I'm the developer of it, feel free to ask me questions about the stack.

In brief: The client side and the server side are both written in C++, and they share a lot of the game code. The client side is compiled to JavaScript using emscripten, and uses the canvas 2D API directly. There's also a load balancer coded in C++ (the lb.diep.io endpoint). Servers connect to it to report their status, and clients query it for a list of servers. It also handles the auto scaling (it works across several providers, though at the moment only Linode servers are enabled).

One interesting bit is that the client is able to swap its code pretty easily, so releasing updates is very smooth (outdated servers start kicking clients as they die, clients update themselves without reloading the page, and connect to an updated server).


1. A web IDE is widely used at Google and has instant syncing with the file system so you can use your local editor and immediately see the changes in the IDE. This is probably helped by the fact that your "local" code is actually a distributed filesystem mounted via FUSE.

2. Features of the web IDE that people love: it works offline via caching and syncs back up the same way you'd expect an offline Google Doc you were editing to sync. It has plugins (e.g. vim bindings) that cover the most common feature requests from developers. You can build your code very quickly (yay distributed building). No startup time; it loads close to instantly, like Google Docs does. Split-pane editing, debuggers, linters, etc. are all built in. Code search is also built in, along with review tool integration and version control integration.

3. Outside of Google, people don't want to pay a third party to allow editing / storage / access of their code; it mostly comes down to "if I get used to a tool I need to be guaranteed it will be around forever" (open source solves this), "I want full control of where my code lives" (on-premise solves this), and "performance + features" (not sure if this has been solved anywhere, e.g. being able to modify your code locally and remotely and sync instantly).


Let me start by stating I am the release manager for Codenvy and Che. When trying to decide if a cloud IDE is right for you, like so many things in life, "it depends". A local IDE probably works "fine" for most single developers or small teams. Developers that have been using a certain local IDE may find that it is better to stick with it than to invest time in learning how to use a cloud IDE. Also, certain kinds of development can only be done with specific local IDE software. Codenvy is trying to fit as many developer needs as we can, but we know we will not always be right for everyone.

However, Codenvy and Che do have advantages over local IDEs. The biggest, in my opinion, is having a consistent programming environment that can be distributed quickly. We leverage Docker to create workspaces that run machine(s), which are Docker containers. Source code, compilers, debuggers and executables are all "contained" in the same runtime environment. This means consistency in compiling, executing and debugging. When a developer gets something to work successfully, others will be able to do the same consistently. Also, transitioning from development to production is in most cases more consistent and faster if your production environment uses or can use Docker containers. Running the IDE on a dedicated server can also increase compile performance and reduce local machine hardware requirements.

One special advantage that Che and Codenvy have over traditional IDEs is in embedded systems. A developer could include a built-in IDE in their embedded system. When the embedded system is connected to a network, the developer could use Codenvy or Che to directly reprogram the device using the device's IP address and a web browser. The downside to having the IDE on the embedded system, though, is processing power, but it could work in some cases. Alternatively, a cross-compile development environment could be set up with Che and Codenvy that uploads the binary/assembly to the embedded device after compiling. This is actually what we are doing with Samsung's Artik development board and the Artik IDE.

The possibility of leaving a developer "high and dry" if the product fails is why open source is a good idea for an IDE. Open sourcing Che, which Codenvy is built on, ensures that the product has the ability to live on without us if for some reason Codenvy fails. Not only is Che open sourced, it is also part of the Eclipse Foundation.

This is a great topic and I'm glad to see all the interest. It's good to see what developers feel about and want from cloud IDEs.


I've considered a password management system many times but I can't think of something that's not over-engineered.

So, for the last 20 years or so, I've just had a single GPG-encrypted file that contains the list of my passwords for various sites and services, SSH keys, and whatnot. I usually read and write that file in Emacs, or pipe it out with gpg -o - via a shell alias for quick read-only access.

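For the curious, the quick read-only access is nothing fancy; a rough Python equivalent of that shell alias might look like this (the ~/passwords.gpg filename is made up, and gpg prompts for the passphrase through its usual agent/pinentry):

    import subprocess
    import sys
    from pathlib import Path

    VAULT = Path.home() / "passwords.gpg"   # hypothetical location

    def grep_vault(query: str) -> list:
        # Decrypt to stdout only; the plaintext never touches the disk.
        out = subprocess.run(
            ["gpg", "--quiet", "--decrypt", str(VAULT)],
            check=True, capture_output=True, text=True,
        ).stdout
        return [line for line in out.splitlines() if query.lower() in line.lower()]

    if __name__ == "__main__":
        print("\n".join(grep_vault(sys.argv[1])))
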
The file is easy to back up and easy to distribute even carelessly: I even had it on my public www server at some point when I needed access to the passwords over the network.

I can't think of a simpler scheme than that.

Of course, the GPG keys themselves can lock and unlock my life completely. I have them in a separate backup file that is also encrypted using GPG but with a symmetric cipher. Thus, I don't depend on any extra files to decrypt my GPG keys.

As the passphrase for that symmetrically encrypted file is basically the master password to my life, and because I've never needed it yet, I store the password in a suitable physical location. But I can still distribute the backup file itself: even obtaining the passphrase to the symmetric cipher doesn't really expose my secrets yet. It would only give access to my GPG keys, which in turn need my regularly-used private-key passphrase to be useful at all.

