
Matano is completely serverless and stores all data as ZSTD-compressed Parquet files in dirt-cheap object storage, allowing you to bring your own analytics stack for queries on large amounts of data for things like investigations and threat hunts. Since we store data in a columnar format and plug in query engines like Snowflake that are optimized for analytical processing, queries on specific columns run much faster than they would on a search-engine database like Elasticsearch, which requires ongoing maintenance to scale.
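
To make the "bring your own analytics stack" point concrete, here is a rough sketch of querying the lake directly with DuckDB -- just one possible engine, not something Matano ships; the bucket path and column names below are hypothetical:

    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs")  # adds s3:// support
    con.execute("LOAD httpfs")
    # S3 credential setup elided; bucket path and columns are made up.
    top_ips = con.execute("""
        SELECT source.ip, count(*) AS events
        FROM read_parquet('s3://my-matano-lake/aws_cloudtrail/*.parquet')
        WHERE event.action = 'ConsoleLogin'
        GROUP BY 1
        ORDER BY 2 DESC
        LIMIT 20
    """).fetchall()

Because Parquet is columnar, a query like this only reads the columns it touches rather than every stored byte.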

I think it's important to understand that search engines and OLAP/data warehouse query engines have fundamental architectural differences that offer pros/cons for different use cases.

For enterprise security analytics on things like network or endpoint logs, which can hit 10-100TB+/day, using anything other than a data lake is simply not a cost-effective option. Apache Iceberg was created as a big-data table format for exactly this type of use case at companies like Netflix and Apple.


You are not wrong, but I do think real-time and OLAP have been converging for a while.

Stateless Elasticsearch and OpenSearch are actually moving to a similar model to what Matano proposes. Both projects have announced stateless versions of their respective forks. Data at rest will live in S3, and there are no more clusters, just auto-scaling ingest and query nodes that coordinate via S3 and let you scale your writes and reads independently. The internal Elasticsearch and OpenSearch data formats are of course heavily optimized and compact as well; recent versions have added more compression options and sparse column data support, for example.

But they are also optimized for good read performance. There's a tradeoff: if you write once and read rarely, you'd use heavier compression; if you expect to query vast amounts of data regularly, you need a lighter format, because decompression costs CPU overhead.

For search and aggregations, you either have an index or you basically need to scan through the entirety of your data. Athena does that, and it's not cheap. Lambda functions still have to run somewhere and receive the data; they don't run local to the buckets. Ultimately you pay for compute, bandwidth, and memory. Storing data is cheap, but using it is not. That's the whole premise of how AWS makes money.
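
A quick back-of-the-envelope sketch of that premise (the prices are rough list-price assumptions from memory, roughly S3 standard at $0.023/GB-month and Athena at $5/TB scanned; check current pricing):

    # Rough cost sketch; prices are assumptions, not quotes.
    S3_PER_GB_MONTH = 0.023   # S3 standard, USD
    ATHENA_PER_TB = 5.0       # Athena, per TB scanned, USD

    data_tb = 100
    storage = data_tb * 1024 * S3_PER_GB_MONTH  # ~$2,355/month to hold it
    full_scan = data_tb * ATHENA_PER_TB         # ~$500 per full-scan query

    # A handful of unindexed full scans per day dwarfs the storage bill.
    print(f"store: ${storage:,.0f}/mo, one full scan: ${full_scan:,.0f}")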

Splunk and Elasticsearch are explicitly aimed at real-time use cases (dashboards, alerts, etc.), which is also what Matano seems to be targeting. But Elasticsearch can also deal with cold storage: index lifecycle management lets you move data from hot to warm to cold storage, where cold means a snapshot in S3 that can be restored on demand for querying. It also has rollovers and a few other mechanisms to save a bit on storage. So it's not that black and white.
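
For reference, a minimal sketch of such an ILM policy via the Python client; the policy name, thresholds, and snapshot repository are made up, and searchable snapshots require the appropriate license tier:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")
    es.ilm.put_lifecycle(
        name="logs-policy",  # hypothetical policy name
        policy={
            "phases": {
                # hot: roll over indices as they grow or age
                "hot": {"actions": {"rollover": {
                    "max_primary_shard_size": "50gb", "max_age": "7d"}}},
                # cold: back the index with a snapshot in an S3 repository
                "cold": {"min_age": "30d", "actions": {
                    "searchable_snapshot": {"snapshot_repository": "s3-repo"}}},
                # delete: drop data after a year
                "delete": {"min_age": "365d", "actions": {"delete": {}}},
            }
        },
    )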

Computing close to where the data lives is a good way to scale, and indexing and caching can cut down on query overhead. That's hard with Lambdas, Athena, and related technology. But those are better suited to one-off queries where you don't care that they might take a few seconds/minutes/hours to run. Different use case.


Yep, SIEM is a superset of log management: in addition to ingesting logs, a product needs to do things like alerting, correlation, and detection to be considered a SIEM.

It is a common use case to send application logs along with security logs to something like Matano or Splunk for analysis as well, so feel free to use Matano to analyze your non-security logs!

Do keep in mind this will be a better fit if you have structured logs (you can also use a VRL transformation to parse them at ingest), as the query language will be SQL.


The code is written in high-performance, multi-threaded Rust and uses the Arrow compute framework [1]. We also batch events and target about 32MB of event data per Lambda invocation. As a result it can process tens of thousands of events per second per thread, depending on the number of transformations.

That said, we are working on performance estimates and a benchmark on some real world data for Matano to help users like you better understand the cost factors. Stay tuned.

[1] https://github.com/jorgecarleitao/arrow2


We launched before Amazon Security Lake :)

Amazon Security Lake's main value prop is that it is a single place where AWS and partner security logs can be stored and sent to downstream vendors. As such, Amazon only writes OCSF-normalized logs to the parquet-based data lake for its own data in a fully managed way (VPC Flow Logs, CloudTrail, etc.) and leaves the rest to the customers.

For partner sources, the integration approach has been to tell customers to set up infrastructure themselves to accomplish OCSF normalization, parquet conversion, etc. For example, here is Okta's guide using Firehose and Lambda: https://www.okta.com/blog/2022/11/an-automated-approach-to-c...

The Amazon Security Lake offering is built on top of Lake Formation, which is itself an abstraction over services such as Glue, Athena, and S3. Security Lake uses the legacy Hive-style table approach rather than Athena's Iceberg support. There is a per-data-volume cost for the service itself, in addition to the costs incurred by the other services backing your data lake. The primary use case seems to be storing first-party AWS logs from all your accounts in a data lake and routing them to analytics partners (SIEMs) without much effort. It does not seem very useful for an organization looking to build its own security data lake with more advanced features, as you will still have to do all the work yourself.

Matano has a broader goal: to help orgs in every step of transforming, normalizing, enriching, and storing all of their security logs in a structured data lake, and to give users a platform to build detections-as-code using Python and SQL for correlation on top of it (SIEM augmentation/alternative). All processing and data lake management (conversion to Parquet, data compaction, table management) is fully automated by Matano, and users do not need to write any custom code to onboard data sources.

Matano can ingest data from cloud, endpoint, SaaS, and practically any custom source using the built-in log transformation pipeline (think serverless Logstash). We are built around the Elastic Common Schema and use Apache Iceberg (ACID support, recommended for Athena V2+). Matano's data lake is also vendor neutral and can be queried by any Iceberg-compatible engine (Snowflake, Spark, etc.) without having to copy any data around.
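
As a rough illustration of the detections-as-code idea, here is a hypothetical Python detection; the field names are ECS-style stand-ins, and the exact function signature Matano expects may differ from this sketch:

    # Hypothetical detection: flag successful console logins without MFA.
    # Record shape and hook are assumptions, not Matano's documented API.
    def detect(record: dict) -> bool:
        event = record.get("event", {})
        user = record.get("user", {})
        return (
            event.get("action") == "ConsoleLogin"
            and event.get("outcome") == "success"
            and not user.get("mfa", False)
        )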


Some big differences:

- Matano has realtime Python + SQL detections-as-code with advanced correlation support. Chronicle uses the inflexible YARA-L detection rule language, IIRC

- Matano supports Sigma detections by automatically transpiling them to the Python detection format

- Matano has an OSS, vendor-agnostic security data lake that can work with multiple clouds and lets you bring your own query engine (Snowflake, Spark, Athena, BigQuery Omni). Chronicle is a proprietary SIEM that uses BigQuery under the hood and cannot be used with other tooling.

There are no limits on data retention or ingestion with Matano; it's your S3 bucket, and the compute scales horizontally.


Thanks for the response. Chronicle uses YARA-L and BigQuery uses SQL on steroids; both are difficult to get started with. I would want someone who has never even looked at Python code to be able to query the data. Having a different query language than detection language is also a big problem (e.g. Graylog). I will keep an open mind; I prefer Python, but it is not ideal for getting a wider audience (general IT staff) to use it. Junior staff prefer Chronicle over Splunk because they can put in an IP or domain and just get results. Now ask them to learn Python and you have a revolt.

I looked at your sample detection on the home page. That is fine for me, but I can't get others to use it. I promise you, doing a little market research on this outside of the tech bubble will save you a lot of money and resources.


Long term, I believe Python (along with good ol' SQL for correlation) is the best language to model the kinds of attacker behaviours companies are dealing with in the cloud, and a lot of the difficulties with it are not inherent but are about tooling. For example, in our cloud offering we plan on building abstractions that let you search for an IP or domain and get results with the click of a button, as well as the ability to automatically import Sigma rules and test Python logic directly with the instant feedback loop of a "low-code" workflow.

Currently we focus on more modern companies with smaller teams that have engineers that can write Python detections and actually prefer it over a custom DSL that needs to be learned and has restrictions.

Keep in mind there are more people in general who know Python than are trained in any vendor-specific DSL, so perhaps long term the role of a security analyst will evolve to overlap with that of an engineer. We are already seeing more and more roles require basic proficiency in Python as attacks on the cloud become increasingly complex :)


> get results with the click of a button, as well as the ability to automatically import Sigma rules and test Python logic directly with the instant feedback loop of a "low-code" workflow.

Ok, so importing Sigma rules is the easy part; it takes on average 2-3 hours of tuning an imported Sigma rule to get it to where it is usable in a large environment where you have all sorts of false positives. The language in question should not be making a fuss about indentation or importing the right module. You never (to my knowledge) need loops, classes, etc. Python is great, just not purpose-built for this use case. Most companies, even Fortune 50 companies, can't get many people on their security team who know or are willing to learn Python well. You need someone to write/maintain it, someone to review it, and the people responding to detections would want to read and understand it. I am not saying Python is difficult, just that you have to take the time to learn it. Detection engineering is all about matching strings and numbers and analyzing or making decisions on them. You have to encode/decode things in Python and deal with all kinds of exceptions; it is very involved compared to alternatives like EQL, SPL, YARA-L, etc. But then again, maybe your customers who want to run their own SIEM data lake in the cloud might also have armies of Python coders. Generally speaking, though, it is rare (but it happens) to find people interested in learning Python but also doing boring blue-team work. I would love Python so long as I don't have to deal with newer Python versions requiring refactoring rules.

> Currently we focus on more modern companies with smaller teams that have engineers that can write Python detections and actually prefer it over a custom DSL that needs to be learned and has restrictions.

Fair enough. Honestly, if your focus is Silicon Valley, Python is great. You will just get a reputation about what your product demands of users if you ever want to branch out. The only time I have ever done a coding interview was with a startup, a typical YC-funded type of company. I am just warning you that the world is different outside the bubble. I would want to recommend your product, and I will probably mention it to others, but it looks like you know what you want.

> Keep in mind there are more people in general who know Python than are trained in any vendor-specific DSL, so perhaps long term the role of a security analyst will evolve to overlap with that of an engineer. We are already seeing more and more roles require basic proficiency in Python as attacks on the cloud become increasingly complex :)

Attacks on the cloud are not that complex, but Python does not make the job easier, just more complicated. And I spend at least 10-15% of my time writing Python, so I am not hating on it.

The gold standard is Splunk. Nothing, I mean absolutely no technology, exists that even comes close to Splunk. Not by miles. Not any DSL or programming language. Do you know why CS Falcon is the #1 EDR, similarly all alone at the top? Splunk!

Even people who leave Splunk to start competitors like Graylog and Cribl can't get close.

A detection engineer is a data analyst (not a scientist or researcher) who understands threat actors' TTPs and the enterprise they are defending well. I wish I weren't typing on mobile so I could give you an example of what I mean. None of the Sigma rules out there come close to the complexity of some of the rules I have seen or written. Primarily, I need to piece together conditions and analysis functions rapidly to generate some content, and ideally be able to visualise it. It doesn't matter how good you are with a language; can you work with it easily and rapidly enough to analyze the data and make sense of it? Maybe you can get Python to do this, I haven't tried. But you are not going to compete with Splunk or Kusto like that. The workflow is more akin to shell scripting than coding, where you can easily pipe and redirect IO.

E.g.: "Find GCP service account authentications where the account was subsequently used to perform operations it rarely performs, from IP addresses located in countries from which there has not been a login for that project in the last 60 days."

I am just giving you an example of what a detection engineer might want to do, especially if they've been spoiled by something like Kusto or Splunk SPL. That's the future, not simple matches.

The roles of a security analyst and engineer already overlap; modern security teams have a lot of cross-functionality, where everyone is partly involved and embedded with other teams that share the same security objective (detecting threat actors, in this case).

Just to show you the state of things: I can't get a team solely dedicated to security automation to write one simple Python script to solve a humongous problem that we have. We spent over a year arguing and battling over the solution. The hang-up was that they would be responsible for maintaining it, so instead they wanted an outside vendor to do it.

At a completely different company, I wrote a small Python script to make my life analyzing certain incidents easier, and that started a territorial battle.

What you consider a simple Python script requires consultants and many meetings and reviews and approvals at big companies, and they're constantly worried that the guy who wrote the script can't be replaced easily. I am telling you all this so you understand how the people making purchasing decisions think: they are very much on the buy side of things rather than build, once a company gets to a certain size and its main business is not technology services.

So I hope you also think about offering managed detection/maintenance services (kind of like where SOC Prime is going) in the long term.

Finally, I think your strategy is to have an exit/IPO in a few years, and if you don't look much past that, I have no doubt you will succeed. And I am very happy to hear about another player in this field. I have even pitched building an in-house data lake solution similar to this, except with tiered storage where you drop/enrich/summarize data at each tier (you need lots of data immediately, but less detail and more analysis of that data at each tier).

I wish you the best of luck!


Thank you for typing up a long, detailed response. I think a lot of the points and concerns you bring up are valid, and we mostly agree.

In Matano, however, we see Python as a viable component of security operations for narrowly tracking atomic signals, while the language for writing detections and hunting threats will be SQL, which works perfectly well for use cases like the detection example you provided, albeit verbosely. We have also thought of building a transpiler that would let analysts use the succinct syntax of SPL and compile it to SQL under the hood. This could be a great way to get adoption in companies where using Python would be difficult.
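
For illustration, here is a rough sketch of how your GCP example might look as a SQL correlation, submitted via Athena with boto3 as just one possible engine; the table, columns, and thresholds are hypothetical stand-ins, not Matano's actual schema:

    import boto3

    # Sketch of: service account performs an operation it rarely performs,
    # from a country with no login for that project in the last 60 days.
    QUERY = """
    WITH seen_countries AS (
      SELECT DISTINCT project_id, source_country
      FROM gcp_audit_logs
      WHERE event_action = 'login'
        AND ts >= current_date - INTERVAL '60' DAY
    ),
    op_counts AS (
      SELECT user_name, event_action, count(*) AS cnt
      FROM gcp_audit_logs
      WHERE ts >= current_date - INTERVAL '60' DAY
      GROUP BY user_name, event_action
    )
    SELECT a.*
    FROM gcp_audit_logs a
    JOIN op_counts o
      ON o.user_name = a.user_name AND o.event_action = a.event_action
    LEFT JOIN seen_countries s
      ON s.project_id = a.project_id AND s.source_country = a.source_country
    WHERE a.ts >= current_date - INTERVAL '1' DAY
      AND o.cnt < 5             -- "rarely performs": assumed threshold
      AND s.project_id IS NULL  -- country not seen for this project recently
    """

    boto3.client("athena").start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "matano"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )

Verbose compared to SPL, as you say, but expressible.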

If you are interested, I would love to find some time to chat and share thoughts. Can you email me at shaeq at matano dot dev?


Thanks for the well-thought-out response. I hope Matano succeeds. I can't email you since my HN presence isn't public/social, but I might be involved in evaluating your product some day soon and would chat and share thoughts with your folks then.


Do you have contact info to consult about this stuff in a few months? I'm building something adjacent, and analyst usability is top of mind.

I started my career doing detections (Snort / ClamAV) but have been out of the loop doing development for a while. A fresh perspective would be helpful.


Sorry, can't consult, but if I see a post from you asking about this, I will be sure to respond/discuss it in detail.


Thank you! We definitely believe in open source and don't need AGPL. Sending you love as you deal with that Splunk instance.

P.S. feel free to open some issues for any log sources you'd like to see supported in Matano


Thank you! Yes, with AppTrail we wanted to solve the pain points around SaaS audit logs, but since it was a product that needed to be sold to and integrated into B2B startups, rather than the enterprises that actually felt the pain and needed audit logs in their SIEM, we couldn't find a big enough market.

We realized that the big problem was that most SIEMs out there today do a poor job of pulling in and handling data from the multitude of SaaS and cloud log sources that orgs have today, and decided to build Matano as a cloud-native SIEM alternative :)


We are working on a solution for GCP and Azure :) GCP recently announced Iceberg support with BigLake, plus federation across multi-cloud lakes, so it would be a perfect fit for this use case.

If you are interested in using Matano for GCP, feel free to reach out and join our Discord community! We are FOSS so would love to collaborate on a solution.


Definitely looking forward to GCP.


I completely agree with you about the need for a fully integrated solution with great visualizations that doesn't require hosting additional tools that aren't purpose-built! Unfortunately, very few SIEMs get this right today.

Here's how we are thinking about it. We believe a successful security program first needs high-quality data, which is why we want to help every organization build a structured security data lake to power their analysis using our open source project. The Matano security lake can sit alongside their SIEM and be incrementally adopted for data sources that wouldn't be feasible to analyze otherwise.

Our larger goal as a company, though, is to build a complete platform that allows a security data lake to fully replace the traditional SIEM -- including a UI and collaborative features that give you that great feedback loop for fast iteration in detection engineering and threat hunting, as you mentioned. Stay tuned; I think you will be excited by what we are building!


For sure. Pull a dbt and get everybody hooked on your tool, then slap a SaaS platform ecosystem to the farthest right and watch the revenue flow.


Splunk is HEAVILY pushing their SaaS offering at the moment. They are the most obnoxious vendor we currently deal with.

We are fine on prem, pay big $$ license fees, but not enough. They want that sweet SaaS revenue.

I would be wary of pushing this, being a non-SaaS platform could be an advantage here.


I’m assuming the difference is: “big $$ license fees” for on-prem is $X a year, while “sweet saas revenue” is $A a year, $B per user, $C for compute, $D for storage, and $E for requests.

As a large company, what are the things you are more than happy to pay for with on-prem?

The reason I’m asking: this feels like the largest issue with cloud saas, which is one of the more popular implementations of open-core for B2B. Not saying Splunk is open-core, but it’s related to above/dbt cloud discussion.

Enterprise customers have the highest propensity to pay, but don’t need or want their cloud offering.

Mid-tier customers actually prefer a managed service by their cloud provider, aws/gcp/azure, because it strikes a balance between easy AND it works within their vpc/iam/devops. But this cuts off open-core companies main revenue, so they start making ELv2 licenses (elastic, airbyte, etc) which makes things harder on mid-tier.

Small customers are the ones who love saas the most, but have the least ability to pay, have the least need for powerful tools, and will probably grow out of being a small customer…

I’m curious if there are any companies which are: source code available, commercial license, allow you to fork/modify the source code, only offer on-prem (no cloud saas offering), want the mega-clouds to offer a managed service. BUT the commercial license requires any companies over 250 employees or $X revenue (docker desktop style) to pay a yearly license fee.


Indeed, no more SaaS. I've had enough of this cloud nonsense already.


Say more? Are you tired of it personally or is it troublesome at work?


Many enterprises using Splunk are already being forced to purchase products like Cribl to route some of their data to a data lake, because writing it all to Splunk is just way too expensive at that scale of 1-100TB+/day (7 figures in $).

But a data lake shouldn't just be a dump of data, right? Matano OSS helps organizations build high-value data lakes in S3 and reduce their dependency on a SIEM by centralizing high-throughput data in object storage to power investigations. To give you an example, one company is using Matano to collect, normalize, and store VPC Flow Logs from hundreds of AWS accounts, which was too expensive with a traditional SIEM.

Matano is also completely serverless and automates the maintenance of all resources/tables using IaC, so it's perfect for smaller security teams in the cloud dealing with large amounts of data who want to use a modern data stack to analyze it.


nice thanks, makes a lot of sense

