What Drop-In API Observability Looks Like, Pre-Launch and Post-Launch (akitasoftware.com)
49 points by cpeterso on July 1, 2022 | 8 comments



"Almost immediately after installation, Akita provided all the endpoints’ requirements, as well as some examples of expected values, which allowed us to better understand the service that the contractors had built. Once we could understand the data flow (including not only the request body, but also headers and authorization), improving the system became a lot easier. After we had the mapping,"

Interesting... could be cool to see a value-add service on top of Akita or another observability vendor that just inspects request/response payloads and generates OpenAPI specs based on what was observed. I can't count how many times I've dropped into a team and tried to piece together their API contracts, only to realize... they don't have specs! Having to then turn around and reconstruct them backwards from code spelunking and maybe some design docs is... frustrating.
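
For a rough sense of what that could look like, here's a minimal sketch (purely illustrative, not any vendor's actual implementation) that folds observed request/response pairs into a skeletal OpenAPI document. A real tool would also merge repeated observations, detect path parameters, and capture headers and auth:

    import json

    # Hypothetical observed traffic: (method, path, request body, response body).
    observed = [
        ("GET", "/users", None, {"users": [{"id": 1, "name": "a"}]}),
        ("POST", "/users", {"name": "b"}, {"id": 2, "name": "b"}),
    ]

    def infer_schema(value):
        """Map an observed JSON value to a rough OpenAPI schema."""
        if isinstance(value, dict):
            return {"type": "object",
                    "properties": {k: infer_schema(v) for k, v in value.items()}}
        if isinstance(value, list):
            return {"type": "array", "items": infer_schema(value[0]) if value else {}}
        if isinstance(value, bool):  # check bool before number: bool is an int subclass
            return {"type": "boolean"}
        if isinstance(value, (int, float)):
            return {"type": "number"}
        return {"type": "string"}

    paths = {}
    for method, path, req, res in observed:
        op = {"responses": {"200": {"content":
                {"application/json": {"schema": infer_schema(res)}}}}}
        if req is not None:
            op["requestBody"] = {"content":
                {"application/json": {"schema": infer_schema(req)}}}
        paths.setdefault(path, {})[method.lower()] = op

    spec = {"openapi": "3.0.0",
            "info": {"title": "Reconstructed API", "version": "0.0.1"},
            "paths": paths}
    print(json.dumps(spec, indent=2))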


Hi, I’m the creator of a tool called AppMap that records traces of your code (test cases or live interaction). Once you’ve made AppMaps, there are different tools you can use on top of them, such as visualization and analysis, and extensions for VSCode and JetBrains.

One of the tools is OpenAPI generation. Here’s a blog post that shows how it works: https://appland.com/blog/2021/12/22/how-to-auto-generate-ope...

AppMap works with Rails, Django, Java, and JS/Express. Check it out and let us know what you think!


Hi, Jean here from Akita. Yes, it's possible to generate an OpenAPI spec, either by running Akita in production or from a HAR file. Here's a blog post someone wrote about using Akita to generate an OpenAPI spec: https://apisyouwonthate.com/blog/creating-openapi-from-http-...
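
For anyone curious about the HAR path: a HAR file is just JSON, so extracting the observations a generator works from is straightforward. A minimal sketch (hypothetical file name; fields per the HAR 1.2 spec), feeding the same kind of inference sketched above:

    import json

    # Pull (method, url, status) observations out of a recorded HAR file.
    with open("traffic.har") as f:
        entries = json.load(f)["log"]["entries"]

    for e in entries:
        req, res = e["request"], e["response"]
        body = res.get("content", {}).get("text", "")
        print(req["method"], req["url"], f"-> {res['status']} ({len(body)} bytes)")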


I see a lot of value in being able to generate the schema from real-time traffic, vs. static analysis or test cases. In my mind, the traffic to the API is the real source of truth for the API.


This is a cool product, but I don't understand this statement:

"Importantly, Akita did not impact processing loss or extra costs inside AWS, a main concern at our company stage."

The author specifically talks about AWS Fargate and links to the Akita docs, which say that in an AWS Fargate setup the Akita agent should run as a sidecar in each container you deploy. How can that not bring a significant amount of extra compute cost?


Not from the company, but we do something similar with OpenTelemetry. It’s true, because you pay for the total allocation of CPU/memory on Fargate, so you can add a sidecar container into that total allocation, with a small deduction from the amount left available to the app itself. E.g.:

Before: 512MB for the task, 512MB available for the application

After: 512MB for the task, 412MB available for the application, 100MB available for the sidecar
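
Concretely, in the ECS task definition that split might look something like this (container names and sizes are illustrative, not Akita's published config):

    # Illustrative ECS task definition fragment. Task-level memory is what
    # Fargate bills for; the sidecar's reservation just carves a slice out
    # of that fixed total rather than adding to the bill.
    task_definition = {
        "memory": "512",  # total Fargate allocation (MB) -- this drives cost
        "containerDefinitions": [
            {"name": "app", "memoryReservation": 412},
            {"name": "observability-sidecar", "memoryReservation": 100},
        ],
    }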


Yes, but even in your example, that's nearly 20% of resources going to the sidecar. Not to mention that correctly sizing multiple containers in a single task gets complicated.


Hello, this is Jean from Akita. You're right that running Akita as a sidecar does incur extra compute. Whether or not this incurs extra cost on Fargate depends on whether it's necessary to switch to a larger instance; in zMatch's case, it wasn't. In general, it depends on how much traffic you're sending through Akita.

Something to point out is that Akita passively watches traffic and doesn't sit in the path of traffic, so there's no impact on latency, unless the task is at 100% CPU utilization.
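
For intuition, a passive observer is conceptually closer to a packet sniffer than a proxy: it copies packets off the wire, so nothing sits between client and server to add latency. A toy sketch with scapy (illustrative only, not Akita's agent; needs root and libpcap to run):

    from scapy.all import TCP, sniff

    # Passively copy TCP packets off the wire; the traffic itself is never
    # delayed, because this process is not in the request path.
    def observe(pkt):
        if pkt.haslayer(TCP):
            payload = bytes(pkt[TCP].payload)
            if payload:
                print(f"observed {len(payload)} payload bytes to port {pkt[TCP].dport}")

    sniff(filter="tcp port 80", prn=observe, store=False)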



