Ask HN: When has switching the language/framework made an important difference?
140 points by banashark on July 18, 2017 | 146 comments
It's always fun to look at different (micro)benchmarks comparing languages/frameworks/systems to each other.

I'm curious about real world examples where a change (preferably measurement/profiling driven) has led to a significant positive outcome in performance, code quality/maintainability, etc.

Did changing from Python to Go make it so that you could avoid horizontal scaling for the size of your app, reducing operational complexity?

Did switching from Dart to Rails speed up development because of the wide range of libraries available, speeding up time to market?

While most bottlenecks exist outside of languages/frameworks, I find it interesting when the language/framework actually made a difference.

An example I'll use is switching an internal library from C# to F#: The module as designed mutated 3 large classes through a pipeline of operations to determine what further work was needed. I incrementally rewrote this module in type-driven F#, with 63 types to model the data transformations and ensure that the desired outcome was compiler-verified. In the process 3 bugs were fixed and 12 additional bugs were discovered which, while edge cases, had a couple of old tickets with "unable to reproduce" as the last comment in the ticketing system. This could have been done in C#, and because I did it in F# it is most likely slightly more difficult for the other team members to jump into. It probably also uses more memory to represent the types than the C# version would. In this case, however, the trade-offs were worth it and I've been told the module has barely needed to be touched since.




A few years ago, I rewrote an `R` script into Fortran, for a better-than-order-of-magnitude speed increase.

The script optimized the placement of samplers in a building, in order to maximize the probability of detecting airborne pollutants, or to minimize the expected time required to detect. The rewrite cut the runtime down from 2-3 days to sub-hour.

Some of the speedup was intrinsic to the interpreted/compiled divide. However most of the speedup came from the greater control Fortran gave over how data got mapped in memory. This made it easier for the code to be explicit about memory re-use, which was a big help when we were iterating over millions of networks.

Re-using memory was helpful in two ways, I think. First, it avoided wanton creation and destruction of objects. Second, and more importantly, it allowed bootstrapping the work already invested in evaluating network `N` when it came time to evaluate a nearly-identical network `N+1`. Of course, I could have made the same algorithms work in R, but languages like C or Fortran, which put you more in the driver's seat, make it a little easier to think through the machine-level consequences of coding decisions.

That experience actually taught me something interesting about user expectations. When the Fortran version was done, my users were so accustomed to waiting a few days to get their results, that they didn't run their old problems faster. Instead, they greatly expanded the size of the problems they were willing to tackle (the size of the building, the number of uncertain parameters, and the number of samplers to place).


This is the first time that I have ever encountered someone who used Fortran for something other than earth sciences related modeling.

Thank you, now I have a second example the next time someone claims "nobody uses Fortran anymore".


I’ll say it again. If the natural language of your problem domain is linear algebra and/or matrices (tensors), Fortran is the perfect tool.


Why is it better than C(++) for this?


Historically, I believe the Fortran language had stronger assumptions that variables aren't aliases for the same memory. This means that the compiler can be more aggressive (than e.g. C) in terms of the generated code, as it doesn't have to behave correctly if e.g. memory for unrelated arrays overlaps. See https://en.wikipedia.org/wiki/Pointer_aliasing

I am not sure if this performance advantage is still the case, as the Wikipedia page on pointer aliasing notes that C99 added a "restrict" keyword so that the C programmer can tell the C compiler where similarly strong assumptions hold.
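For illustration (not from the thread), a minimal C99 sketch of the kind of loop where `restrict` buys back the no-overlap assumption Fortran makes by default; the function name and signature are hypothetical:

    #include <stddef.h>

    /* Without restrict, the compiler must assume out, a and b might overlap,
       so it conservatively reloads memory on each iteration. With restrict it
       may keep values in registers and vectorize, much as a Fortran compiler
       can do for array arguments by default. */
    void axpy(size_t n, double alpha,
              const double *restrict a,
              const double *restrict b,
              double *restrict out)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = alpha * a[i] + b[i];
    }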


Because, for example, Fortran has multi-dimensional arrays as first-class entities. C does not. Yes, you can get the same speed in C - yes, you can do the same things in C - but in Fortran you can do them with simpler code that is easier to reason about.
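A hedged illustration of the difference (hypothetical code, not the poster's): in C a matrix is usually a flat buffer plus hand-written index arithmetic, whereas Fortran gives you 2-D indexing and whole-array operations directly.

    #include <stddef.h>

    /* Row-major 2-D indexing done by hand in C; the caller must thread the
       leading dimension (ncols) through every routine that touches the data. */
    static double get(const double *a, size_t ncols, size_t i, size_t j)
    {
        return a[i * ncols + j];   /* the Fortran equivalent is just a(i, j) */
    }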


Modern Fortran is high-level enough to pass for MATLAB, has full vectorized operations support, and is in many cases faster than C. What's not to like?


Because of the libraries.

You say you can get matrix libraries for C++. And you're right, you can. But with Fortran, you can get (for example) functions that will solve Hermitian matrices, which run faster than solving regular matrices. Or special routines for sparse matrices, which run faster and use less memory. Or...

And you could write every one of those in C++. But the Fortran ones have 30 years of debugging and optimization on them. They're solid. You can't get that by writing your equivalent in C++.


Because Fortran was purpose-built for manipulating array-based data while C++ is a general-purpose programming language. That means Fortran is optimized for this problem domain starting at the _grammar of the language_ while C++ can provide at best access to a locally optimized set of libraries. Those libraries might be quite good, but the comparative experience is still like trying to drive a Phillips-head screw using a flat-head screwdriver.


Because scientists already know fortran and they're too lazy to learn something else even if there would be significantly improved results.


No, because computer scientists are too lazy to create modern domain specific languages that are powerful enough for practitioners to put to effective use. :-)


Like what?


Julia, C++ with libraries, Python with Cython or Numba, etc.

You could also get much faster, if most of your work is matrix multiplication, by making use of libraries like ViennaCL and a modern C++ compiler.


Numpy, Scipy and a lot of other numerical libraries use Fortran for the underlying operations along with C, because Fortran is just faster for such operations and will continue to be. Numba is still very much alpha. Julia is still immature for production (a good language from what I hear, but still very much in development).

Don't know if C++ can ever be as fast as Fortran, because all Fortran compilers are optimized for the architecture on which they run. (The best Fortran compiler for Intel CPUs is by Intel.) As for ViennaCL, I don't know much about GPU programming and its performance. Never done it. (My only exp is with Cython and Scipy stuff.)


Intel also has C/C++ compilers which probably share optimizers and backend code generators with their Fortran compiler and shouldn't be worse. Ditto for 3rd party commercial vendors like PGI.

And, BTW, I have seen a case where PGI vastly outperformed GCC on the same code. So compilers matter.


Both Julia and Python use Fortran code under the hood to speed up matrix multiplication. Julia comes with OpenBLAS which is partially written in Fortran, and the following page of SciPy documentation indicates that a Fortran compiler is required for some NumPy modules:

https://docs.scipy.org/doc/numpy-1.10.1/user/install.html

"To build any extension modules for Python, you’ll need a C compiler. Various NumPy modules use FORTRAN 77 libraries, so you’ll also need a FORTRAN 77 compiler installed."


I would also say that no one in the scientific world should be writing ASM for their programs. Does that mean I'm also telling them not to use any compiled language? No.


ASM != Machine code.

That aside, you haven't explained why you think the languages you listed are better suited for scientists than Fortran. What are your issues with modern Fortran?


Modern library availability, a larger pool of programmers, better compiler technologies, more features that apply to modern hardware, cleaner code, and better documentation are all present in my languages. They are not present in FORTRAN.


All these languages, except Julia, are much _worse_ syntactically than modern Fortran. Especially C++, which is a clusterfuck beyond the widest possible imagination. And all are worse performance-wise.

Julia is regarded as the modern replacement for Fortran, but it isn't quite there yet.

The major problem with Fortran is that there is (sadly) nearly no documentation on modern idioms, and occasionally you have to dive into legacy code, which can be... scary, to say the least. Otherwise, it was upgraded really well, without losing performance.


> All these languages, except Julia, are much _worse_ syntactically than modern Fortran. Especially C++, which is a clusterfuck beyond the widest possible imagination.

There is some C++ code that is very bad out there but most C++ code maintained by normal people is passable. When I use it I use it as C with references.

> And all are worse performance-wise.

You are incorrect just by the facts of the situation [0][1][2]. This lie has stuck around so lazy people can pat themselves on the back for being lazy. "Yay we did it! We found the best language ever! We don't need to learn anything new or update anything!"

The truth is a modern C/C++ compiler like GCC or clang/LLVM can generate far more efficient code than Intel's FORTRAN compiler. Not only that, but you have access to more performance and scalability libraries when you're not using the programming language equivalent of Latin. You're never going to get out of the MPI ditch that scientists have dug for themselves in a museum piece like FORTRAN. There will be no good abstractions, no well planned libraries, no in-language support for newer hardware.

FORTRAN is going to be stuck in the 70s and 90s.

Languages like C++, Python's libraries, Julia, and other languages (like D/Go/etc) are going to evolve as industry needs them to. As we start being able to use less and less of our computers' abilities, compilers, optimizers, and libraries are going to allow us to easily pick up the slack. You can already see this in things like ViennaCL. GPUs, FPGAs and other tools are coming. MPI and FORTRAN can't be the only tool in your toolbox, and if you're lying to yourself and burying your head in the sand in an attempt to pretend they can be, then good luck to you. You're wasting your budget, the taxpayer's money, and everyone's time on the supercomputer's queue because you don't want to try something new.

But it's fine. Who cares really. FORTRAN is the only language a scientist will ever need. I mean it is the fastest (even if it isn't) and it has the best compilers (even if they aren't the best and we waste thousands and thousands of dollars of taxpayer money to get them) and we know it and that's all that matters. Who cares about broader impact, portability, and future proofing.

[0] - http://benchmarksgame.alioth.debian.org/u64q/fortran.html

[1] - https://julialang.org/benchmarks/

[2] - http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...


Is gfortran that much slower than Intel's compiler that they didn't bother to benchmark it?


Sorry about the "me too" but I'm another example of someone who programs non-science in FORTRAN. I work on a crew management system for a major airline.

Most of this and related applications engage in data shuffling rather than number crunching, but FORTRAN used to be one way of creating code that ran at nearly assembly language speed and size in (historically) memory constrained environments.

That said, the application (and I) are nearing retirement.


I haven't been doing any numerical computing for the last 5 or so years, but previously I was using Python's numpy + scipy pretty heavily for data-science / applied-maths stuff, and under the hood quite a lot of the numerical optimisation algorithms in scipy are wrappers around old Fortran libraries. So at that stage it still gets used, but perhaps most people can pretend it doesn't exist.

Similarly, the backend of the original "random forest" machine learning algorithm for R was written in Fortran, and then wrapped as an R library, but as an R user you could largely pretend that Fortran didn't exist. R was a pitifully slow language so this was a good move.


Fortran is an excellent language. I enjoy the fact that people think it's too old and/or difficult to learn, because neither of those are true. It's a great fit especially when the problem domain is heavily numerical in nature.


My one experience trying to read fortran code is this climate model: http://climatemodels.uchicago.edu/isam/isam.doc.html

If most fortran code looks like that then I think the language deserves a reputation for being arcane and cumbersome.


That is an example of 'legacy' FORTRAN (when it was still written in all-caps). I agree, reading through that is not a great time.

edit: That being said, there are some really well-written projects that have shown me what good modern Fortran looks like [1-3].

[1] https://github.com/nwukie/ChiDG

[2] https://github.com/flexi-framework/hopr

[3] https://github.com/jacobwilliams/Fortran-Astrodynamics-Toolk...


My big problem with Fortran is finding a modern tutorial written for an audience that doesn't already know some previous version of Fortran.


That's a fair criticism.

One of the disadvantages of Fortran is that it caters to a very non-sexy segment of the programming population, by that I mean that UX/UI is not of great concern - a lot of learning materials I come across reflect that. Unless you find a textbook, you'll probably end up looking through undergraduate/graduate engineering lectures, or some presentation from a national lab (see below).

First question: do you have access to a compiler? On all Linux distros I'm aware of, you can get the GNU Fortran compiler fairly easily (some variation of gcc-fortran, gfortran, etc.). On Windows you can either boot up a VM, or use something like the MinGW toolchain (https://sourceforge.net/projects/mingw-w64/), which is a port of the gcc compilers to Windows. I have no experience with Mac and/or BSD.

Some learning materials I just came across on Google, which don't seem to cater to people with prior Fortran experience:

http://www.hpc.lsu.edu/training/weekly-materials/2014-Spring...

https://www.tacc.utexas.edu/documents/13601/162125/fortran_c...

http://www.dsf.unica.it/~fiore/f03t.pdf

Some more advanced topics:

http://people.ds.cam.ac.uk/nmm1/fortran/paper_10.pdf

http://people.ds.cam.ac.uk/nmm1/fortran/paper_11.pdf

http://people.ds.cam.ac.uk/nmm1/fortran/paper_12.pdf

http://people.ds.cam.ac.uk/nmm1/fortran/paper_13.pdf

http://people.ds.cam.ac.uk/nmm1/fortran/paper_14.pdf

http://people.ds.cam.ac.uk/nmm1/fortran/paper_15.pdf

http://people.ds.cam.ac.uk/nmm1/fortran/paper_16.pdf

http://people.ds.cam.ac.uk/nmm1/fortran/paper_17.pdf


I clicked on your first link out of curiosity and see that it's for F90/95. My first reaction was to wonder whether it would cover the object-oriented paradigm and IEEE floats added in later standards, but then I realized that this is a manifestation of the problem: lecture slides from 2014 that fail to cover even F03.

>First question, do you have access to a compiler?

Sure do. In fact I managed to fumble my way through writing a program to parameterize away piston-engined aircraft performance into an iterative solver a few years ago, but the lack of good/comprehensible language reference materials made trying to decipher the arcane incantations needed to do robust file I/O or divide the program into multiple source files, etc. into a very frustrating experience.

It was the last time I touched Fortran.

I'm surprised nobody has sat down and said "All right, if we want people to use Fortran outside of legacy codebases, we really need to polish the language introduction and put out a modern getting started guide to writing idiomatic Fortran in $current_year."

Because if they don't want new people to learn and use Fortran, why are they bothering to update the standard?


Sorry to hear it was such a struggle for you, I didn't mean to be condescending in my previous comment. I guess from your comment Fortran isn't as easy to learn as I thought - I'll remember that in the future.

The fact that 'the standard' has a different meaning in Fortran compared to other languages is indeed frustrating (see 2008 standard conforming status of various compilers here [0] as of Apr 2016); however, that being said backwards compatibility is taken very seriously, so learning the basics of 90/95 isn't a waste compared to the later versions of the standard.

I'm not sure why there isn't a better focus on a thorough introduction. I learned Fortran as I have learned all languages - with great frustration. Fortran hasn't been any different for me in that regard.

BTW, here's a great resource I've found helpful in the past that does a great job of comparing Fortran to Python (e.g great if you have a numpy background).

[1] http://www.fortran90.org/index.html


No sweat, it wasn't condescending.

I think for me, the problem is that my background is not in comp sci or software engineering. I am an aerospace engineer and am mostly self-taught save for what I retained from my first year C++ course back in the early aughts or gleaned from my sibling who is an ex Googler.

So I end up not grokking a lot of tutorials targeted at professional programmers, maybe?

I've written C, C++, C#, PHP, Python, lisp, Tcl, and Fortran and it was Fortran that presented the biggest hurdle in terms of finding quality info.

I sometimes wonder why nobody wrote a "Practical Common Lisp" for Fortran. Some artifact of the different characteristic Fortran user vs the Lisp user maybe?


Well I'll provide a somewhat different story where the lack of change has caused quite a loss in productivity.

At my current employer's, a big part of the codebase is in Perl and the boss is a fan of the language, so we keep using it. The problem is, Perl is pretty much dead, and most of the packages out there on CPAN feel like they were built 10 years ago. Not to mention, the language itself lacks what I would call essential features like exception handling, classes, etc., which have to be tacked on by using "shims" from CPAN like Moose or Try::Tiny.

At some point I had a particular issue in one of our apps where it would be making tons of DB queries and we needed to cache them. In Python land there are plenty of packages that give me transparent caching at the ORM level. In Perl land? Oh yeah, this post on a mailing list from 2007 about someone having the same problem, and a bit of untested code that may or may not work.

I gave up on that particular issue and it'll probably never get fixed, but let's just say that had we been using an "alive" language, things would've gone much smoother.


The problem of old languages like Perl and Ada is not a lack of libraries, but a lack of good developers who know how to write idiomatic code. Performing caching at the application level is so trivial in Perl that I do not see how caching at the ORM level could give any benefit.


> I do not see how a caching at ORM level could give any benefit.

I don't know anything about the Python libraries as such but if they implement pluggable modules like memcache, redis, SysV SHM, etc., for the caching, then you can chop and change as required without having to make any application level changes[1]. Which is also handy for testing because you can supply your own mock cache module to do ... whatever.

[1] A good example would be going from "direct DB access" to "global memcache" to "local memcache backed by global memcache" - all without application changes.
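For what it's worth, the pluggable-backend idea the footnote describes is language-agnostic; here is a small hypothetical sketch in C (names invented, not the Perl/Python libraries discussed) of an interface the application codes against, so the concrete cache can be swapped without touching call sites:

    #include <stddef.h>

    /* Hypothetical cache interface: direct-DB, global memcache, or a local
       cache backed by a global one can all implement these two calls. */
    typedef struct Cache Cache;
    struct Cache {
        void *state;
        int (*get)(Cache *c, const char *key, char *buf, size_t buflen); /* 0 = hit */
        int (*put)(Cache *c, const char *key, const char *value);
    };

    int lookup_user(Cache *cache, const char *user_id, char *out, size_t outlen)
    {
        if (cache->get(cache, user_id, out, outlen) == 0)
            return 0;                /* cache hit, no DB round-trip */
        /* ... otherwise fall back to the database, then cache->put(...) ... */
        return -1;
    }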


> A good example would be going from "direct DB access" to "global memcache" to "local memcache backed by global memcache" - all without application changes.

I strongly disagree: I'd rather debug something wrong with the lone application level change which added the cache support at the application level, rather than have to enter the long and deep rabbit hole of the path the code takes at the point I ask for a resultset and I get something from the DB-backed ORM, or from a global memcache, or blah.

But then again, I happily develop on mostly-Perl codebase(s) ;)


Well in my particular case it's a read-only app (there are no writes at all from the app itself so no need to worry about cache invalidation) so a caching ORM that can be a drop-in replacement would've been a huge win for less than one hour of dev time, and I could've done it in Python without any issues.

With this obsolete language the only way is to change pretty much all the app to do app-layer caching, and that would've cost 20x that, so we're not going to do it.


> lack of good developers that know how to write idiomatic code

… and they're getting fewer and fewer by the year, at least for Perl :|


And lack of employers that want to pay for such developers.

Hence everything is being rewritten to Java(Script) and PHP.


Well I don't blame employers who don't want to invest in a dead language.

In Python land I am still happily getting offers with good salaries.


Developer ergonomics is underrated. Having an easy to understand system that has no magic, and in 90% of the work has no dependency chains that you have to hold in your head can really help people avoid bugs by being able to focus on the intent and execution rather than the framework quirks.


I agree. Do you have a platform/framework that does this for you?


I have it for some applications. For smaller (<50kloc) services I use node in a functional/immutable style with minimal context objects (too convenient to go full immutable). When you learn the ctx object and the utils module you know how to make almost anything. For larger I'm still looking, but my rule of thumb is to avoid most OOP paradigms and most especially inheritance at nearly all times.

Think more in terms of the code state than the conceptual/business state, because when the code state is readable and understandable and low-overhead, it leaves you all the brain space to think about the problemset. No matter what, work on establishing patterns, share them with other people working on the same codebase, and make them better over time - build only enough tests so that you can cleanly refactor whole files, while not bogging everything down in the details. If you are ever afraid of changing a file (except at top-level tooling), even in production, you're doing something wrong, because the potential footprint of your changes should be as low as possible.


Functional programming languages are better at it, simply because there are constraints in place preventing you from having mutable state everywhere.


Moving a large C++ codebase to C# has helped us a lot. The legacy code had memory leaks that caused random crashes. It was also slow to build, and wouldn't build in 64-bit. The C# version seamlessly let us run the code in either 32- or 64-bit. C# is also a much more readable language, and has a lot of great tools for refactoring and profiling. And C#'s performance is almost as good as C++ in many areas, so all in all it's been a great move.


I'm guessing that moving from C++ to C++ (i.e., a rewrite in the same language) would also have helped a lot.


Actually that particular C++ was very well written. It had been written by a single very disciplined person over 3 years, following a lot of very strict conventions.

So, no it was not really messy in there. But it was big and memory leaks are hard to find.

The main problem is that there was some low level funky stuff happening that we didn't have a good grip on. The guy who wrote it thought it would be great to optimize the code in some ways, but clearly overdid it.


We did/are doing exactly the same thing with great results in the same areas!

> The guy who wrote it thought it would be great to optimize the code in some ways, but clearly overdid it.

I feel that C++ is a language that entices decent developers to try and 'optimise'. We have the same issue in some of our lower level stuff including pointer arithmetic and other 'optimisations' that might have saved a couple of kb/mb of memory 20 years ago when the code was written but today only makes the code difficult to read and to modify.


> The guy who wrote it thought it would be great to optimize the code in some ways, but clearly overdid it.

But then I suppose a rewrite would undo those optimizations and solve the problems.


Well, that is as long as we don't introduce other leaks in the rewrite.

Our team is mostly made of mathematicians and business-focused engineers, so for us C++ was somewhat inappropriate.

My point is trading a little bit of performance for a lot of clarity and safety was a good move.


There are a lot of safety features in C# that are not present in C++, so a rewrite in C++ would probably have helped; however, the same class of errors that was a problem in the old codebase would still be present.


> I'm guessing that moving from C++ to C++ (i.e., a rewrite in the same language) would also have helped a lot.

I'd guess that a same-language rewrite would have helped a lot for any non-trivial project mentioned in this Ask HN.


Several years ago, our team refactored a large Java control system into a Ruby-based one. The Ruby version actually performed faster, scaled better, and was much easier to understand and maintain. The ergonomics of Ruby enabled clearer thinking about the problem, leading to the better results.

More recently, we've replaced Python, Ruby, and Java based systems with golang based ones. Not having to lug around a VM and associated other parts (jars, gems, ...) is a huge win. Performance is better across the board, and we've reduced the amount of hardware needed. There's also much better understanding of the code across the whole team.


Out of interest, to what degree do you think the improvement in comprehensibility is from the new ecosystem vs performing a rewrite with learnings from the original write?


In the first case, we inherited the codebase. However, it was clear that too much attention was paid to low level stuff such as threading, etc. Ruby allowed us to more easily focus on the problem with more of a micro services approach. The Java monolith we replaced was emblematic of its time. Frameworks piled on libraries piled on other things. A lot of complexity to achieve the intended function.

The move to golang has been really interesting. The language really does seem to scale well with teams. Its great to be able to pick up even code that you would expect to be complicated, such as the golang tls library, and be able to understand it. The much easier to use concurrency and fantastic standard library mean there's a lot less looking around for which framework to use or which approach to apply.


I want to explicitly call out an important point here. They inherited the original codebase and were able to understand the domain problems and what the original solution got wrong AND what it got right. Pretty much any language or framework they chose to rewrite the application in should have seen similar benefits (even if rewritten in Java). I've seen several projects where a rewrite was given to an entirely new team, and those projects tend to be as complex or more so than the original (and tend to take a long time to ship).


Often, it's politically possible to achieve a complete rewrite by switching to another language, while staying in the same language would cause pressure to reuse parts of the old system or refactor the old system. Which is a bit sad.


That's a good point - a good top-level comment could be:

"We switched from x to y. It was important because it justified rewriting the thing, which improved performance. Would have loved to rewrite it in x, but support just wasn't there."


I'm slowly converting all my old homegrown Ruby / Perl / Python projects to Go and enjoying it. I never wrote complicated code anyway but Go really pushes you into simple readable code that's much easier to reason about.


It all depends on the team, but personally, the larger the team gets and the more complex your application becomes (monolith/microservice/etc. independent), the more invaluable types are when it comes to shipping an actual product. You gain speed increases due to paths being evaluated ahead of time (compile time), but these are all secondary to developer peace-of-mind and time saved when work actually needs to get done - this includes ticket revisits due to "oh I forgot about that case", or "user did x and now this object is a string".

Yes, you should write unit tests to cover this in interpreted languages with weak or no types, but this depends on the developer a) doing it, and b) not missing any cases - PRs/code reviews are not a catch-all. Especially in the case of factoring common logic out or some other form of refactoring, a strongly typed language is my best friend.


Things like bug count and time to market seem difficult to separate from having more experience as a developer or knowing your problem space better. We use almost all Java, and we have several projects that we did major re-designs of in the same language, which increased maintainability, found bugs, etc. But if I chose a different language, I would have probably attributed it to the new language or framework.

Unless you have a problem that fits a specific technology really well, my experience is that your time to market will be minimized by using the tools you know best.


At work, we greatly benefited from a transition from Spring with Java to Play with Scala.

This was mostly due to the inherent complexity of Spring and the fact that Spring developers are also Spring experts. When the main developer behind the application left, we struggled to add new features or even fix bugs because the team lacked the Spring expertise.

The rest of the business mostly dealt with Scala, so it was almost a no-brainer to go with Play.

The outcome has been very surprising. The application has better performance overall, is better suited for streaming and we have much more expertise in-house to add features and fix bugs.

The re-write was not without pain though. Spring is a well-supported and very rich framework. It probably does a bunch of things that the casual web developer will likely forget.


Oh man I would love to work with Scala


I would not recommend Scala anymore unless you're a small team with smart devs eager to learn it.

My project is not so small, and the average developer is not interested in spending the time to really learn the language and functional style.

So in the end, we have code that not everyone reads and writes well, an ecosystem that is years behind Java, and tons of incompatibilities between Scala versions as it's still not stable. It's a mess, and most just end up writing Scala imperatively a la Java from 2005.

Java 8 is actually pretty good and has a lot of the power of Java without the complex features few use correctly. I'm hoping Kotlin continues improving, too.


This was not my experience. Of course you need a team that has the drive to learn the language, whatever it is.

If you go out on your own and decide to build your project with Scala and a big focus on using a functional style but no one else is following, you are going to have a bad time.

I'd like to know what your issues with Scala and versions are, because we don't have these. IMHO, Scala is years ahead of Java and it's possibly so different that comparing them does not even make sense.

The transition from Java to Scala is awkward because you can do most of the things you do with Java in Scala. With that said, I think it's counter productive to do it. Scala is very different and approaching it with an imperative Java (even an OO) style is the worst thing to do.

One thing that I realized is that some people can go off and write cryptic code in Scala, very easily. This is true of other languages as well but in Scala, you can do a lot of things that will make you regret it the next time you try to read the code.


The last problem we had was with various Finatra / Finagle libraries written in 2.10.

We wanted to upgrade some other - not all - projects to newer versions of 2.11.

It wouldn't work because our Thrift clients were incompatible with stubs compiled with 2.10. We didn't have the time and manpower to rewrite all the dependent services, and so we're stuck on 2.10.

This is something that'd never happen with even old-school SOAP or similar web services. It kind of left a bad taste in my mouth.


Can you speak more specifically regarding the ecosystem being behind? I'm a recent addition to the Scala development world and love it. That being said I'm eager to listen to any opposing viewpoints.


Due to the desire to support Windows as a first-class citizen on a project, I've lately moved some support scripts from Bash to Python. These include dependency-scrapers (that assemble makefile rules), and archiving scripts (that back up selected files from selected directories).

This has been a big win in terms of the readability of the code, which has in turn made me more aggressive about adding features.


I have some similar work cut out for a batch file to Python port for a bunch of automation work. It's totally worth it like you say with readability and widening the feature pipeline.


I'm a solopreneur and switching to AngularJS 1.x really made a drastic difference to my work. Before, we used all sorts of hacks and jQuery plugins to do the same and it was really hard to maintain our code.

After switching to AngularJS the development time became 20% of what it was, and it became super easy to maintain. Not to mention programming became super fun again. I've tried React and Angular 2 and Vue but those just didn't click for me. We're a small team of two, so we don't follow the industry best practices, just whatever works for us and gets the site launched as quickly as possible, so I guess my comment is highly opinionated.


A friend of mine is a Java contractor; he and a bunch of Java people got hired by a French company for a JavaScript project.

After a few months slogging through an unfamiliar language, the company decided to switch to java. Productivity went up, surprise!


I am an amateur developer, mostly using Python. From time to time I had to code a front-end in JS and it was a huge pain in the bottom (again, I am a real amateur).

This until I discovered Vue.js which changed my life. This and lodash made me actually like JS and front-end programming.

So this is an example where a framework did not just help to improve code, but actually made me choose something other than the language I had been using so far (coming from a background in C, then Perl).


Vue is great.

Currently using it in a ColdFusion project to retrofit a modern frontend to oldish code. Works like a charm, without a full rewrite.


I had a similar experience using React Redux. The model just 'clicked' in my head, and suddenly I could make UIs.


Would you be able to recommend any Vue tutorials/resources/projects you found useful while learning?


I agree with the comment sibling recommending the docs -- they are good. maybe try building something with nuxt[1] first.

nuxt is a framework/static-site generator built on top of vue (inspired by next.js for react). its conventions are easy to pick up. I find nuxt eliminates all the boilerplate and middleware I would normally have to write for a vue app with SSR (especially since vue-cli really only has a good client-side template). it's a joy to work with.

1. https://nuxtjs.org/


Made my personal website with nuxt. The tools are excellent -- it was so easy to integrate PostCSS, Pug, and TypeScript into the site.


This looks absolutely fantastic. Thanks. I will have a closer look as soon as my family goes for vacation :)


All you need are the (excellent) docs! You can read the whole thing in like an hour or so.

Once you've read the docs, just start building.


Absolutely agreed. I went from zero (and very poor JS skills) to starting refactoring my app in one afternoon.


The docs on the site are very good. I would recommend taking an existing project of yours and rewrite it. This is what I did with my home dashboard.

I wrote a few frontends in my job (nothing off the shelf was suitable and I had simple needs). I then rewrote everything in Vue.

This is also the smell of having a hammer and everything looking like a nail, but I have basic needs and do not have the time to learn 20 languages. Python and JS are a good combo for me.


At work, our first success in this vein was migrating a Python Twisted service to Go. Due to the critical nature of this service, we strove for feature parity. We ended up with a mostly Go-like code base, but it was very similar to the original code base in basic structure. Most of the improvements we got were due to the language choice, not due to a better design.

Our main concern is concurrency, and for that Go excels. Additionally, post re-write, that code base is easier to follow, easier to extend and maintain, easier to deploy, easier to control lower level aspects, and much, much more performant. It was years ago at this point, but I believe our improvement was something like 120x.

Since then, at work, we've moved to Go as our main language for services. This has provided even more wins as we continue to move legacy Perl/AnyEvent code over. With new rewrites (and new projects), we are taking the opportunity to also redesign systems, enabling our software to scale even further. We have lots more to do and we are hiring :)


Related post from 2013.

tl;dr: Don’t build your app on top of a pile of crap in-house framework.

https://www.cloudbees.com/blog/about-paypals-node-vs-java-%E...


My team switched an entire application from C# (~350k loc) to F# (~30k loc). Smaller team, smaller code base, fewer bugs, complete implementation of requirements, clearer code. The whole of the F# code base was less than the number of blank lines in the C# code base and a lot more fun to read and write.

https://github.com/Microsoft/visualfsharp/issues/2766#issuec...


Wow. I have been using C# for a long time and have dabbled in F# a little bit. I know F# can help you reduce some boilerplate that C# requires, but I have never seen a code base reduced by 90 percent like in your case. Can you give us a little bit more detail? E.g. 350k doesn't include blank lines, right? What did the application do? Is it a Windows service or a Web App or a command line program?


The app evaluated the expected income from Ancillary Service contracts in the UK energy market and subsequently reconciled against actual income:

https://en.wikipedia.org/wiki/Ancillary_services_(electric_p...

It comprised a Windows service, a web app and a calculation engine.

As well as remorselessly removing boilerplate code using FP techniques the F# solution also allowed us to explore our way, via the REPL, to solving the problems we were faced with and come up with better ways of doing things. The clarity, concision and low ceremony of F# code allowed us to rapidly evaluate and change approaches if necessary.

I find getting into the flow of development much easier in a language that supports this exploratory approach to programming - leads to less rigid thinking.


We included Lua in our videogame, and that helped us drastically reduce prototyping time (screen design, FXs, transitions). It isn't exactly a switch because we still use C/C++. However, porting the main logic to Lua allowed us to use the live-coding feature (we were using ZeroBrane), so you can get rid of compilation times, executions and manual steps to reproduce whatever you are working on. We couldn't be happier with this decision. Totally worth it!


The SPARK people at AdaCore found a bug in the reference implementation of Skein just by rewriting it in SPARK:

http://www.adacore.com/knowledge/technical-papers/sparkskein...


Interesting. And the tl;dr for the bug is:

byte_count := (n+7) / 8;

When n is near max size, it overflows to a small number, then (small number) / 8 = 0

the fix is to restrict n to [0,MAX-7)
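A tiny C demonstration of that wrap-around, with an invented value of n (assuming 32-bit unsigned arithmetic, as described above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t n = UINT32_MAX - 2;          /* a count near the top of the range */
        uint32_t byte_count = (n + 7) / 8;    /* n + 7 wraps around to 4, so this is 0 */
        printf("%u\n", (unsigned)byte_count); /* prints 0 instead of 536870912 */
        return 0;
    }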


One can do it without restricting the range as well, if the full range of inputs is needed.

In C:

    (n / 8) + !!(n % 8);


This is correct, and is sometimes necessary, but is a last-ditch save, since as written, it involves a branch.

Much better if you can let the n+7 expand to a wider integer type. Or better yet, if you can ensure that it won't overflow in the first place -- which is what they did by restricting the type.
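A branch-free sketch of that widening approach (hypothetical function, assuming a 32-bit count as input): do the addition in a 64-bit unsigned type so n + 7 can never wrap.

    #include <stdint.h>

    /* The 64-bit intermediate cannot overflow for any 32-bit n, so this is
       the intended ceiling division, with no branch and no restricted range. */
    uint32_t bytes_needed(uint32_t n_bits)
    {
        return (uint32_t)(((uint64_t)n_bits + 7) / 8);
    }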


My x86-assembly has become very rusty, but couldn't it work like this, not using a branch:

(with eax being the unsigned 32-bit input word)

  xor ebx, ebx
  test al, 7
  setnz bl
  shr eax, 3
  add eax, ebx
(edit: replaced a movzx with an xor)


My assembly is probably worse than yours: and that is why I weaseled with "as written". I expect a good compiler -- such as the ones we use every day -- will get this right.

But clever compilers still give me the willies, with their less-than-literal interpretations of the meaning of what I wrote.

I prefer code that is good (whether that means fast, or something else appropriate to the circumstance) when taken literally, and then trust the compiler to smooth over my naivety about what efficiency really means.


There's no branch required on architectures I'm familiar with. What branch do you refer to?


I worked on a project a few years ago that was a mixture of Erlang and Ruby. Most of us were new to Erlang, coming from a Ruby background. We wanted to use Erlang to its strengths for high availability and to make the platform distributed, and Ruby (via BERT/Ernie) to its strengths for business logic.

Unfortunately Ernie didn't work as well as advertised, had many edge cases, and in the end it was a big bottleneck (it would have been faster to ditch Erlang and use plain Ruby).

In the end we wrote everything in Erlang. It wasn't that hard in the end, and the reason why we went down the Ruby route in the first place was because the platform was over-engineered to be as generic as possible (which wasn't needed), so we didn't actually lose anything.


Do you have any experience with Elixir? I'm hearing that Ruby devs like it for similar syntax that runs on the Erlang runtime....(But I don't really know much about it, but I am curious.)


I learnt Erlang before Elixir became popular. Other than a few sanity changes (like ordering of arguments for some built in functions), I prefer the Erlang syntax. The syntax is Ruby like, but it's still different enough.


10 years ago the RIA (rich Internet application) platforms were the trendy thing of the day. Microsoft had Silverlight, Adobe had Flex and so on. I made a bet on OpenLaszlo, an open source contender. I made a wireframe and the snappy interface impressed a client, who gave me a big project.

After 1 month working with OpenLaszlo and PHP it became clear I would not be able to finish in time. I started searching for alternatives and was unable to finish the Rails tutorial - but after 30 minutes I had a basic CRUD app done using Django.

I ditched the RIA thing, rewrote everything in Python/Django, was able to finish that project in time and this stack is paying the rent since then.

It was really worth it.


Anything long lived to elixir/erlang. Not for less bugs or better speed. But for the ability to bulkhead the errors and the debugging capabilities.

Dynamic tracing all the things has reduced the time to solve bugs by an order of magnitude for us.


Switching to React Native allowed our team of 6 to become a team of 2 and still go twice as quickly. It's not just the cross-platform capabilities - React is also just really great.


I have to wonder if the other 4 members of your team were as happy about the switch...


Ha! There is plenty to work on and they have skills for other apps and the platform. They were not laid off or anything. This tech change was about leverage, to spend our time wisely.


I was involved in the conversion of over 500k lines of Visual Basic 6 to C# motivated by Microsoft's EOL'ing of VB6. Most of it was a desktop application although part of it was a server component. We used an automated code conversion tool which did a surprisingly good job of handling the mindless parts. Some pieces such as the database interaction code had to be hand-ported. I was pretty surprised that we were able to pull the whole thing off while maintaining functional parity. In the end, there wasn't much of a change in performance, but C# proved to be a much more productive development platform than VB6 in terms of tooling, ability to refactor, use automated test tools, etc., so the team's velocity increased significantly over the next few years.


Interesting. Is the conversion tool internally developed? How did you handle semantics like -1 is true?


We used a code conversion tool from Artinsoft (now Mobilize.Net) and brought one of their consultants onsite for a week. The cost for that was in the low 5 figures, but our CTO (rightfully) recognized that as a bargain given the amount of developer time it would save us. The consultant helped us analyze the code conversion results and got us some custom builds of their tool to handle certain edge cases in our code. We had to make a pass over the converted code by hand for certain things such as changing from VB6 to C# coding conventions, etc., but as I recall, the overall effort was 3-4 months for a team of about 10 developers. That's not bad considering it was a critical part of our flagship product in ~2010 and MS had dropped support for the VB6 IDE.


> I'm curious about real world examples where a change has lead to a significant positive outcome in performance, code quality/maintainability, etc.

I wanted to build a database in a dynamic language. While others have succeeded in doing so by layering their DB on top of an RDBMS (like EdgeDB or Datomic), I went lower level and built a Datomic-like DB with GNU Guile Scheme using wiredtiger (now MongoDB's storage engine). The reason for that is that Guile doesn't have a Global Interpreter Lock (GIL); using the same design in Python would simply not be possible. I did not benchmark, but I don't think it's possible for a single-threaded DB to be faster than a multithreaded DB. In this case changing language made the project possible.


> I'm curious about real world examples where a change has lead to a significant positive outcome in performance, code quality/maintainability, etc.

Another example: same language, new framework. In a Python web app, we needed to have websockets. But at that time Django had no real websocket support. There is, however, a future-proof framework that does: aiohttp! Also, one might argue that you can use old Django with websockets via another process, but that leads to a more complicated architecture. We want to keep the app a monolith for as long as possible/sane.


Is the Django-websockets story ok now?


Not really. There is the "channels" project which brings it closer to proper websocket capabilities, but it still isn't great to work with. I tried using it for a production project and gave up as I encountered too many issues. Ended up rewriting the app in Phoenix as we needed really solid websocket support. Phoenix makes websockets seriously nice to work with and we've found the performance to be quite impressive.


That's nice to hear. I'm trying https://github.com/olahol/melody in a smallish project right now, and it seems very promising.


Yes, switching something heavy on CPU (mostly real mathy computations) from Python 2 to Go - we could keep working with the current number of CPU cores/boxes and not have to, like, double our expenses.


Did you try PyPy?


I am not familiar with F#. I think it's a functional language - did you take advantage of immutability? What is the main difference between the F# implementation and the C# one (outside of fewer bugs)?


F# is a Functional/OOP language.

In this case the original code had a class like:

    class DeliveryItem {
        public bool isTagged { get; set; }
        public Deliverer previousDeliverer { get; set; }
        public Deliverer nextDeliverer { get; set; }
        public string itemName { get; set; }
        public DateTime lastTransferTime { get; set; }
    }
The code would take a single DeliveryItem and each time something used it, it would update the properties on it.

While not ideal, this worked fine while the pipeline was an ordered sequence. The pipeline had grown into a complex graph, with many different possible states, and new properties were incrementally added to shoehorn the new states and transfers between states into the single object.

The class grew large and it became difficult to determine from logs what path of the pipeline an item had taken. It could have been scanned, weight analyzed, compaction determined, etc., etc. And these didn't always happen in the same order because of the business process.

Instead I created types like `NewDeliveryItem`, `PreProcessedDeliveryItem`, `ThirdPartyVerifiedDeliveryItem`, etc.

It allowed for explicit modeling of the pipeline, since functions could take a `PreProcessedDeliveryItem` instead of a `DeliveryItem` and would know that they wouldn't need to send it for processing.

The example has been translated a bit and isn't the best explanation, but gives a small amount of detail.

You can check out https://fsharpforfunandprofit.com/series/designing-with-type... and https://fsharpforfunandprofit.com/ddd/ to get more information on how this can decomplect seemingly simple applications.
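Not the original F# (none is shown in the thread), but a rough C sketch of the same idea, with names adapted from the comment above: give each pipeline stage its own type, so a function can only accept an item that has actually passed through the previous stage and out-of-order calls fail to compile.

    /* Structurally identical structs are still distinct types in C, so the
       compiler rejects passing a NewDeliveryItem where a
       PreProcessedDeliveryItem is expected - the same "make the pipeline
       stages explicit" trick described above. */
    typedef struct { const char *item_name; } NewDeliveryItem;
    typedef struct { const char *item_name; } PreProcessedDeliveryItem;
    typedef struct { const char *item_name; } ThirdPartyVerifiedDeliveryItem;

    PreProcessedDeliveryItem pre_process(NewDeliveryItem item)
    {
        /* ... scanning, weighing, compaction, etc. ... */
        return (PreProcessedDeliveryItem){ item.item_name };
    }

    ThirdPartyVerifiedDeliveryItem verify(PreProcessedDeliveryItem item)
    {
        /* verify() can never be handed a raw NewDeliveryItem by mistake */
        return (ThirdPartyVerifiedDeliveryItem){ item.item_name };
    }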


Switching from CodeIgniter to Laravel saved me a ton of time, and ultimately made start to enjoy programming again.

I'm actually in the process of writing a book about the key differences between the two frameworks, enabling people to shorten the time it takes to get the hang of Laravel. It'll document the differences and similarities between the two frameworks so that you can use your CodeIgniter knowledge to learn Laravel.

I.e. the difference between routing methods in CodeIgniter vs. Laravel, and how migrations work in one vs the other.

Almost everything to do with migrations still has to be done manually, for the most part. I.e. having to create your own controller (https://stackoverflow.com/questions/9154065/how-do-i-run-cod...) in order to run a migration in CodeIgniter vs the more slick 'php artisan migrate' command in Laravel.

If you want to know more about it, you can add your email here: Book info (it's a google form) https://goo.gl/forms/gCAT33rl1h6JbQsw2


Literally every time I've switched from bash to anything else.

Usually the prerequisite situation is "this script processes strings and is getting too big and unwieldy for bash".


Changing languages is the only way I've found so far to step out of serious blind spots in the previous language. It's a shame that languages are rarely fixed, only given more and more features, without learning anything in the process. Devs are so change-averse that it's not even funny.

I have done a lot of business/enterprise development (a very hostile space to innovation and working solo or with very small teams), and have done small-to-largeish (from my POV) rewrites in several languages.

From:

- Fox 2.6 to Visual FoxPro. A breaking change in a lot of ways, a total win in the process. Not just because the app was native Windows now.

- From Fox to Delphi. Here I discovered the beauty of Pascal and improved the app and the deployment scenario. Static types are a net win overall. My other love is Python; I probably code faster in it, but I have FAR LESS trouble with strong type systems.

(However, it took me some years to notice how bad all languages are, aside from the dBase family, at talking to databases, but other wins distracted me from that...)

- Visual Fox to .NET (1.0, 1.1 with both Visual Basic and C#) was a total net loss. A massive increase in code size, yet the (desktop) apps were way slower than with Visual FoxPro, even more so than with Delphi (but my boss would not let me use Delphi).

The web side was also terrible in performance and complexity. Sadly, back in the day I was unaware of how to do the web properly and drank all the MS Kool-Aid on this.

This sank the project and almost the company. It was only saved by returning to full FoxPro.

- To Python. I moved several things to Python, mainly .NET stuff. Oh boy, how big the win was. The net reduction in code size and the clarity of the code!

Also, (web) apps were way faster. It took .NET some years to learn the way here, so...

- To RDBMS (heck, even SQLite): still big wins whenever someone else tries to use a NoSQL/desktop database (in my space, NOBODY is Facebook; without exception, stepping away from an RDBMS is one of the biggest mistakes).

- To F#: I returned to .NET last year (because MS made a lot of the right moves to fix old mistakes!!!) and again got a big reduction in code size, removing problematic logic obscured by years of OO-only code. Still not happy about the much lower quality tooling, but enduring it, even in Xamarin Mobile, because I see the benefit.

I wish I could use Swift for Android, so F#/.NET is my only sane option left...

----

Mainly, moving from one language to another that is not similar helps in seeing the problems with the old one. You learn new or better ways to solve stuff, and get access to different toolsets and mindsets. This pays back when returning to the old one, too, when these ideas are migrated.


As others have said earlier, if the new tools serve the problem domain better there will be significant gain.

My first game server was hand-crafted with PHP/MySQL. It did work and was able to serve players; however, moving to Erlang allowed two orders of magnitude more players onto the same box, while code maintainability increased as well.


Years ago I rewrote some liquid flow routing analysis in C# that was precisely one jillion times faster than the original code. This code was written in....a SQL stored procedure.

tl;dr Don't use SQL for recursive conditional logic.

Probably not a useful example huh?


We're starting to develop greenfield APIs in Scala (with Play) rather than PHP (with Laravel) and we've noticed new developers without experience in either language have a surprisingly similar time-to-productivity. Here are some major factors:

PHP's dynamic typing combined with Laravel's magical approach makes discoverability hard. A developer can't trace through a request by starting from a controller method and navigating through codepaths with the support of their IDE. Our application code uses typehints almost exclusively, which helps. But whenever the code you're debugging drops into the framework (or PHP), you'll need to break out your browser and spend a great deal of time reading documentation to understand how to use the function. For example, certain functions in Laravel accept no arguments in the function signature, but the function body calls PHP methods to dynamically parse function arguments.

We spend a fair amount of time documenting all the framework and language-level magic constructs. If we've dropped the ball on documentation (which happens often) a new developer is at the mercy of coworkers to explain where the framework (or language) magic happens.

On the plus side, Laravel's batteries-included approach significantly speeds our time to MVP.

Scala's category theory approach to functional programming is not easy for new developers to understand at first glance. While most of our code (framework or otherwise) is now easily navigable with an IDE, developers now need to spend time understanding concepts such as for comprehensions, monads and ADTs. However, most functional concepts are understandable without the help of coworkers, which means a new dev can rely on Google to help understand a concept, rather than relying on a coworker.

Once knowledge of syntax has been attained, Scala's strong type system makes development far easier. We can communicate semantics through types and monads (such as Either, Future, Option and domain-specific ADTs), and incorrect code is immediately flagged by the IDE. A new developer making a change to a database schema may now change a database column name, recompile, and be presented with a list of every bit of code they've broken.

Using types to represent the semantics of our domain has been incredibly powerful, and makes potential bugs much easier to spot when reading the code. For example, rather than checking a user's subscription status inside a method, we can require a "SubscribedUser" type in our method signature. With this type in place, a new developer can no longer accidentally call that method with an "UnsubscribedUser".

Perhaps most importantly, the long term benefits of Scala's strong type system are incredibly valuable. We're a software agency, so our large projects experience development in phases. It may be 6-12 months before our team circles back to a large project for major development. In that time, we've forgotten all the quirks and gotchas of that particular framework and language, and Scala's strong static type system significantly decreases regressions during the new development effort.

In summary, new developers have a similar learning curve for each language/framework. And in the end, Scala's long term maintainability is more valuable than Laravel's speed to MVP.


I'm bookmarking this as a case study. You should comment here more often esp when these topics come up. :)


Thank you for the encouragement


Thanks for posting that. Where would a PHP programmer go to get a foothold in Scala web development?


I would recommend starting with SBT templates, and I second the recommendation of Scala for the Impatient.

We began working with Scala very slowly, only using it for small internal projects until we were more familiar with the language and stack.

We also avoided heading too deep into category theory territory in the beginning. Libraries like Cats and Scalaz are not allowed (currently).


There's a Coursera specialization on Scala[1]; however, it seems to focus more on data analysis than web development.

[1]https://www.coursera.org/specializations/scala


I didn't think the Scala Coursera course was very good for general developers - a bit too academic, focusing on functional/recursive patterns that aren't a good way to get a feel for the language in everyday web usage (vs PHP, Java, etc.).

I'd recommend the book 'Scala for the Impatient', tutorials and projects from Twitter, and Maven or SBT template projects.


The focus here seems to be on positive differences, but I have seen the reverse a lot: switching native to web because "everyone is doing it", switching PHP to "something that doesn't suck", migrating off C# because "M$", and many others that really hurt (and killed) projects. I did see projects improve as well, but that was almost always because it was code from an "old" team where the framework and programming language were not optimal (to say it lightly) for the new team.


This is more than a little self-serving, but when I switched us over to intercooler.js it made a huge difference in our app.

When I pulled the trigger on it, I was terrified that I was screwing us over by not using Angular (which was the cool tech at the time) or some other more JavaScript-oriented solution. Thankfully it has worked out well, and my co-founders don't hate me any more than they already did beforehand. (And maybe even a bit less.)


This isn't surprising to me at all. Having the creator of a framework working with a team using that framework is a huge benefit, almost regardless of the framework's quality (within reason). And that's not a dig at intercooler's quality (I've never used intercooler, and am no fan of Angular), but I do think it's a huge confound.

Meaning, if I had access to creator of framework A on my team, and was choosing between framework A and B, and objectively liked B better than A (but not by too much), I would still choose A.


If you like framework B more than A by too much, it seems unlikely to me that you'd have the creator of framework A on your team.


Really? Was it really unclear to you that I was presenting a hypothetical to illustrate a point and not practical advice?


The biggest change for me was when I switched to statically typed languages. It doesn't matter if it's Go or TypeScript or whatever. As long as it has types, it dramatically improves maintainability and ease of scaling.


I've never experienced this personally, although I hear it said a lot. I generally work on server-side applications with a database behind them, and 90% of the work involved is taking data from the client and putting it in the database, or the other way around. The database handles actual type enforcement. The requests come in as strings and they are returned to the client in a string-based format.

Introducing types in the server-side language has never done anything for me other than making sure that the types are in sync with those of the database, while creating a lot of tedious overhead converting data, which the application itself may never need to process, to a particular type.

The Elixir gradual typing approach has always seemed like the ideal balance to me.


That's been my experience as well. Dynamic typing in a language doesn't bother me at all, but the thought of a database without dynamic types doesn't seem right.


Agree with this. I've had a large JavaScript codebase with so many annoying edge cases, mostly caused by variables being null/undefined when you didn't expect it and string/array values being mixed up. You could track them down eventually, but there was always a feeling of unease that there were more bugs lurking. If you start using TypeScript with its non-null checking and type checking, you can catch so many of these kinds of bugs before even running the code, and have high confidence you got them all.


I'll echo this sentiment, although I found Go much more limiting type-wise than TypeScript, Java, or other mainstream typed languages.

But in general, moving from PHP, Python and JS (where I started out years ago) to languages like Java, Flow-typed JS, and even Scala at work has made me realize how much large dynamically-typed projects lose in maintainability vis-à-vis statically-typed projects. Yes, excellent test coverage can help mitigate this, but static types go so much farther when used effectively.


What about measurement/profiling?


I think measurement/profiling in a case like this is hard. How do you measure maintainability without a large database of bugs, iteration performance, etc.?

I wish it were an easier thing to do, but I think as far as maintainability and codebase size scalability go, it's hard to quantify properly without metrics that are usually only tracked in enterprises.

I'm trying to think about how to make this more clear in the original post.


This is a less dramatic change than swapping out the framework since we are still using Rails, but a couple of years ago my employer switched from using Rails controllers to using https://github.com/gocardless/coach and it is noticeably easier to write good tests.


When our Rails team delivered our projects and Christmas came in 2010, we peeked our heads out and discovered Node.js, CoffeeScript, and WebSockets. We created a real-time web framework from the combination of those technologies, and demoed it at the Hacker News London meetup in June 2011. It was known as SocketStream.


R to python/pandas - deployability


Moving from Ruby to both Elixir and JavaScript (node) strongly improved developer productivity, performance, and time to release.

Ruby/Rails isn’t bad as such, but it’s slow and promotes a very convoluted & interdependent monolith by default.

Very happy with the change.


I wanted to compute Perlin noise to make planet textures fast enough that the user would not have to experience a loading screen when flying into a solar system. I could not make that happen in C#, which the bulk of the game was in. C++ allowed me to compute it around 11 times quicker via SIMD intrinsics.


Did you try generating your textures in a pixel shader or compute shader?


Wow, that's awesome! Which FP constructs did you use?


J2EE



