
TL;DR: Distributed systems are hard, and Hadoop clusters are not really designed for a dynamic environment.

There are probably as many services running inside a Hadoop cluster as in a microservice mesh, except:

* The services can be huge: the HDFS NameNode can take hundreds of GB of RAM and hours to start up in a multi-PB cluster. Updating configuration requires a restart, so it is a huge pain if you care about zero downtime (see the first sketch after this list).

* The services are often bound to a host and cannot easily be migrated to other hosts.

* The communication interfaces between services are not well defined: the Hive Metastore, for example, didn't have formal protocol documentation for a long time, yet everything depends on it to store metadata (I'm not even sure if it has one now). As a result, services are often tightly coupled: you need everything to be compiled against the same library versions, or things might not work correctly. Furthermore, due to dynamic code loading, issues might not surface until days later - in the middle of the night, making life miserable for everyone (see the second sketch after this list).
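
On the first point, here's a minimal sketch using Hadoop's Configuration API (the property name is real; the class itself is just for illustration). Settings like this are read once when the process starts, so changing hdfs-site.xml generally means restarting the NameNode, which then re-loads the whole namespace into heap:

    import org.apache.hadoop.conf.Configuration;

    // Illustration only: config is resolved at process startup,
    // so a change to hdfs-site.xml needs a restart to take effect.
    public class ReadConfAtStartup {
        public static void main(String[] args) {
            Configuration conf = new Configuration();  // loads core-default.xml, core-site.xml
            conf.addResource("hdfs-site.xml");
            // A real HDFS property (RPC handler thread count), default 10:
            int handlers = conf.getInt("dfs.namenode.handler.count", 10);
            System.out.println("NameNode RPC handlers: " + handlers);
        }
    }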

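And a hypothetical sketch of the last point (class and method names are made up): because the class is resolved at runtime rather than compile time, a missing or mismatched library doesn't fail at startup - it fails whenever this code path finally runs, possibly days later.

    // Hypothetical example: nothing below is checked at compile time
    // or at service startup.
    public class LazyFailure {
        public static void main(String[] args) throws Exception {
            System.out.println("Service started fine, all health checks green...");

            // Imagine this branch is only hit by a rare job type, days later:
            if (args.length > 0 && "rare-job".equals(args[0])) {
                // Resolved at runtime; a missing or mismatched jar throws
                // ClassNotFoundException / NoSuchMethodException right here,
                // not at deploy time.
                Class<?> codec = Class.forName("com.example.SomeCompressionCodec");
                codec.getMethod("compress", byte[].class, int.class);
            }
        }
    }
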


