
There is a difference, of course. While we are still learning about their implementation, I think the statements below are true:

1. We don't just support streams. You can throw SQL at a Kafka topic as easily as SELECT * FROM `topic` [WHERE ]

2. We support selecting or filtering on record metadata: offset/timestamp/partition. (I haven't seen anything similar in Confluent KSQL.)
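A minimal sketch of what such a metadata query might look like, with the syntax assumed from the examples elsewhere in this comment (the `_offset` field name comes from point 4; the topic name is made up):

```sql
-- hypothetical: project record metadata alongside a payload field,
-- then filter on it, assuming the _offset/_timestamp naming from point 4
SELECT _offset, _timestamp, payloadField
FROM `some-topic`
WHERE _offset > 1000
```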

3. We integrate with Schema Registry for Avro. We hope to support the Hortonworks schema registry soon, as well as protobuf.

4. We allow injecting fields into the Kafka key. For example: SELECT _offset as `_key.offset`, field2 * field3 - abs(field4.field5) as total FROM `magic-topic`

5. Quickly looking at the Confluent KSQL "abs" function, I see it accepts Double only. Presumably everything is converted to Double before it hits the method and then converted back. (Too short a time to understand the whole implementation.)

6. Filters: related to point 2, we allow filtering on message metadata. For example: SELECT * FROM topicA WHERE (a.d.e + b) / c = 100 and _offset > 2000 and partition in (2,6,9)

Also, I'm not sure if they have customers using it yet. We do.



