I came here to say the same thing. The article has things the wrong way round: rather than moving the computation on individual webhook event streams into each webhook's SaaS provider, I want to consolidate the processing of all my disparate webhooks into my company's own unified event log.
Shifting the computation to the SaaS provider is wrong for all the reasons you mention, and others too - for example, if I want to sink my webhooks into a database, I'm not going to want to share those database credentials with the SaaS provider.
At Snowplow we support ingesting various webhooks (yes, standardization is a total pain), and then you can write your own "webtasks" against them in whatever tech you like (Lambda/Spark/Hadoop/SQL):
https://github.com/snowplow/snowplow/wiki/Setting-up-a-Webho...
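
To make the "own unified log" point concrete, here's a minimal sketch of what that processing can look like once the webhooks land in one place: draining a log of webhook events into a database you control, so credentials never leave your infrastructure. The event shape, field names, and table are made up for illustration; this is not Snowplow's actual enriched event format or API.

    # Sketch: sink a unified log of webhook events (newline-delimited JSON,
    # a simplified stand-in for a real enriched event format) into our own DB.
    import json
    import sqlite3

    def sink_events(log_path: str, db_path: str) -> None:
        conn = sqlite3.connect(db_path)  # our own DB; swap in Postgres/Redshift
        conn.execute(
            "CREATE TABLE IF NOT EXISTS webhook_events "
            "(source TEXT, event_type TEXT, payload TEXT)"
        )
        with open(log_path) as log:
            for line in log:
                event = json.loads(line)
                conn.execute(
                    "INSERT INTO webhook_events VALUES (?, ?, ?)",
                    (event.get("source"),      # e.g. "mailgun", "stripe"
                     event.get("event_type"),  # e.g. "message_delivered"
                     json.dumps(event.get("payload", {}))),
                )
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        sink_events("unified_log.ndjson", "events.db")

The same loop could just as easily run as a Lambda triggered per batch or as a Spark/SQL job over the log; the point is that the compute and the credentials stay on your side, not the webhook provider's.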