Flink's SQL implementation follows ANSI SQL, which I think is very important.
There is a way of interpreting streams and tables to make ANSI SQL meaningful in the presence of streams, which we follow [1].
The big advantage, besides not having to learn another syntax and retaining compatibility with SQL tools and dashboards, is that the same SQL statement seamlessly handles both the batch (bounded/static input) and streaming (unbounded/continuous) use cases.
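To make this concrete, here is a minimal sketch of what "the same statement" means in practice (table and column names are hypothetical; the connector options are omitted):

```sql
-- Hypothetical table; whether it is bounded (e.g. a file) or unbounded
-- (e.g. a Kafka topic) is decided purely by the connector in WITH (...).
CREATE TABLE clicks (
  user_id STRING,
  url     STRING,
  ts      TIMESTAMP(3)
) WITH (
  ...  -- connector options omitted
);

-- The query itself is plain ANSI SQL and does not change:
SELECT user_id, COUNT(url) AS cnt
FROM clicks
GROUP BY user_id;
```

Over a bounded input this produces a final result, like any batch SQL engine; over an unbounded input the same query maintains a continuously updated result, which is exactly the dynamic-tables interpretation described in [1].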
[1] http://flink.apache.org/news/2017/04/04/dynamic-tables.html