I believe this is the best combination of cheap/powerful for early-stage startups. My very non-technical cofounder is able to use Metabase's simple GUI to create graphs/insights (even joining and aggregating across tables!), and for anything complex I can step in and give a helper SQL query. We have around 10M records that we aggregate for daily insights.
Metabase can also be run as a pseudo-desktop/web application to save additional cash (we don't do this though).
Sorry, not a question, just to say again that your UI rocks. After I came across this post, I stopped everything I was doing to install Metabase for my company (and then used it for a project :) ). I have spotted some issues with the installation instructions; where do I post them? As an issue on GitHub?
Worlds apart. Superset provides powerful visualisation capabilities, but it is very difficult to use/set up/maintain (although they just launched as a company [0], so perhaps that will change).
Metabase is a very easy-to-use data exploration, data visualization, and reporting tool designed to give mostly non-technical users self-service insights into their data.
I've POC'd both, and would recommend Metabase or Redash unless you really know what you need out of Superset.
Pretty similar feature-wise and in intended use. I did a direct comparison and went for Metabase because it allows you to schedule reports delivered to email/Slack (it only allows you to schedule individual queries, not entire dashboards).
I didn't demo Redash, so I can't comment on how they compare in actual use.
For our team, using an ELT architecture (as opposed to ETL) [1] for managing our data warehouse has greatly reduced the complexity of our data processes. Instead of creating ETLs for every table we want to load into the data warehouse, we create the minimum necessary setup to copy the table into our data warehouse. Then, we write transforms, which are simply SQL statements, to generate wide-column tables that our non-technical users can use to explore data without worrying about joins or having to learn esoteric naming conventions.
Custom EL Scripts -> Redshift -> Transform Statements -> Redshift -> Metabase supports the data needs of all our departments with no dedicated data team members.
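For illustration, a transform in that kind of setup can be a single SQL statement that builds a wide table for self-serve use; this is just a sketch with made-up table and column names, not our actual schema:

    -- Hypothetical transform: denormalize orders into one wide table.
    CREATE TABLE analytics.orders_wide AS
    SELECT
        o.order_id,
        o.created_at::date        AS order_date,
        c.email                   AS customer_email,
        c.signup_channel,
        p.product_name,
        o.quantity * p.unit_price AS order_value
    FROM raw.orders o
    JOIN raw.customers c ON c.customer_id = o.customer_id
    JOIN raw.products p  ON p.product_id = o.product_id;

Non-technical users then point Metabase at analytics.orders_wide and never have to think about the joins.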
+1 for Metabase. I’ve deployed it at 2 companies. The latest version allows you to compose “questions” you create in the system, which is really powerful for organizing layers of data reporting.
Also, their API conforms 100% to Zapier’s polling feed structure, and I managed to build an integration that syncs our customer base to Pipedrive in a few hours. A bit more time and I had it automatically moving deals along in the pipeline as users progress with using the product. That alone was game-changing for our customer success team.
Metabase has been serving our startup well. Not as good as Tableau, but the minimalist approach is really good and SQL is better than Tableau's LOD expressions.
It is licensed under the AGPL. White-label embedding requires a commercial license: https://www.metabase.com/enterprise/pricing.html
Pricing for the variant with row-level security is not as cheap as you might expect, so if you plan to embed reports in your web app it might be better to consider other alternatives.
Agreed. I helped set up a company on this last year and they're very happy. There's a 'Deploy to Heroku' push-button install that was incredibly easy to set up. Great job, Metabase folks: https://www.metabase.com/start/heroku
We've also been using Metabase lately. It doesn't have some of the querying power of a Tableau/Data Studio, but in terms of being able to just click through and "discover" trends in your data, it's really nice. Super quick and easy to get started even for non-technical folks, and best of all it's OSS! Really impressive project.
Looks interesting, but I would like to know how it is different from Apache Superset. What is the concept of a "Question" here? Is it just official terminology for a "SELECT" query? Any video demo?
Have you ever tried to install/setup Superset? The chances of you completing this task fall somewhere between "Next to impossible" and "Not going to happen".
If you have a data team:
Stitch / Segment -> BigQuery -> Dataform -> BigQuery -> Looker
I work with many companies helping them set up their data stack, and from what I've seen this is pretty much the optimal setup. These tools all require very little maintenance, are relatively cheap (compared to the manpower required to set these things up from scratch internally), and scale well as companies grow and complexity increases.
If you don't have a data team:
Segment -> Amplitude/Mixpanel
If you don't have a dedicated data team, you can do a lot with tools like Amplitude and Mixpanel. Get your engineering team set up with Segment and tracking all of the important interactions within your product, and set Segment to pass those events to Amp/Mix. Self serve analytics in these tools is then pretty easy and will serve you well until you start hiring a data team.
Full disclosure: I work for Dataform (used to be a customer, loved it so much I decided to join them)
Good point. I guess I mean, if you put a gun to my head and forced me to pick one, this is what I'd pick (and I think it works for a pretty meaningful percentage of companies that are looking for an answer to the original question)
As a data scientist (startup / Yahoo) turned product manager (4 startups), I've used a variety of stacks in the past - from plain SQL, to Mode, to Mixpanel, Looker, Interana, and Hive.
$20/user/month seems incredibly expensive for an editor that we would have traditionally paid a one-time cost for. I assume this has saving and sharing of queries built in, but that is a lot of money for some storage. Are there other major benefits I am not seeing?
(It is not my intention to attack you or make you defend this product, I am just curious if I am missing something - there are a lot of similar products with similarly questionable value propositions)
Airflow -> S3 -> DBT with Spark/EMR or Redshift/Spectrum -> Redshift data marts -> Looker
At least, that’s the way we like our pipelines to work. In practice we have a couple of extractions that land directly in Redshift (we extract Zendesk data, for instance, with Stitch Data). We use Snowplow for click stream analytics. And we’ll likely move from Redshift to Snowflake (or mayybbbeee Bigtable) in Q2 of 2020.
We used to do all of our transforms via DBT within Redshift but have been offloading the heavier-duty pieces (like Snowplow event processing) to Spark jobs because they were too taxing on Redshift.
We’ve gone through 3 “generations” of Looker reporting — gen 1 was just wrapping LookML around our schema and forcing Looker to do the joins and generate SQL for everything we wanted to know. Second generation involved a bunch of common table expressions within Looker itself that worked, but were developed without much thought as to data mart design. Gen 3 is where we are now with super deliberate scoping and implementation of warehouses in DBT. Before any of that we rolled our own tool [0].
Happy to answer any questions. We’ve gone from ~30 to ~1000 employees in the 3 years since we started using Looker and have learned a thing or two along the way.
Hi, do you manage your transformations in both Redshift and EMR using dbt? Do you have a single dbt project/repo covering both, or separate ones? How do you coordinate your dbt runs against Spark with your dbt runs against the data warehouse?
We’ve been battling Redshift scaling problems so it’s fluctuated a lot over the last 6 months (both up and down). $15k per month to an order of magnitude (not counting Looker licensing).
Just curious, could you highlight some of the problems you've experienced with each "generation" of Looker reporting? Were there any specific issues that encouraged the re-implementation?
1st gen:
1. Complex joins and poor performance.
2. Not pre-computing important attributes made them hard to slice and dice on (because they weren’t true dimensions).
2nd gen:
1. Really unmaintainable. Like... really unmaintainable. A rat’s nest of ad-hoc common table expressions that referred to each other in undocumented ways, used inconsistent CASE statements to transform enums to human-readable values, inconsistent timestamp formats (and time zones), etc, etc, etc. Fields in different explores that were named the same but didn’t quite match, and no one could explain why.
2. All of it was SQL in LookML and the persistence strategy (e.g. when were the CTEs running, were they writing to disk or ephemeral, etc) was not manageable.
If we had been smarter, we would have introduced DBT after gen 1 and not gotten into the gen 2 mess.
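As a rough idea of what that buys you: a dbt model can centralize the enum-to-label mapping and timestamp convention once, instead of scattering CASE statements across LookML CTEs. The model, columns, and time zones below are hypothetical, just to show the shape:

    -- models/staging/stg_orders.sql (hypothetical dbt model)
    select
        order_id,
        -- one agreed timestamp convention, assuming app times are stored in Pacific
        convert_timezone('America/Los_Angeles', 'UTC', created_at) as created_at_utc,
        -- one agreed enum mapping instead of ad-hoc CASE statements downstream
        case status
            when 1 then 'pending'
            when 2 then 'shipped'
            when 3 then 'cancelled'
            else 'unknown'
        end as status_label
    from {{ source('app', 'orders') }}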
There’s a ton of other nuance here, but that’s the high level.
> If we had been smarter, we would have introduced DBT after gen 1 and not gotten into the gen 2 mess.
Is this a roundabout way of saying you wouldn't have exposed looker to the underlying staged data?
We have two layers within our data warehouse: The ingestion/staging layer, and the organized/cleansed/segmented layer. We only give Looker projects access to the latter.
We still have a lot of chaos within our Looker projects (views with select star, dimensions for every column because of select star, hard-coded schemas, etc.). Slowly working it out though.
I don't want to derail the thread, but I recently started a company (https://syndetic.co) that's working on this problem. We've been focusing on the external data dictionary use case (how does a data-as-a-service company explain a dataset to their prospective customers) but we've been encountering a number of companies that are evaluating data catalogs and other internal tools for their data science teams.
I would really appreciate getting your perspective - I'm steve (at) syndetic.co
The legacy players are a mess, the new cloud-native offerings are overly engineering-focused and immature, and the few startups in this space are recreating the legacy solutions in a SaaS format.
Go forth and claim the significant checks that I and others plan to write for these mediocre offerings.
We are revamping our management of Snowplow event structures presently to make them more semantic and easier to use.
For our core data marts, we rely on naming conventions for the LookML fields and good tooltips, hold monthly training sessions for new employees and weekly office hours with analysts, and do occasional “train the trainer” sessions to try and make sure there is at least one Looker power user in each business team.
Luigi, AWS S3, DBT, Snowflake and Re:dash (currently analyzing Metabase or Looker to allow queries without SQL)
Luigi runs our scrapers and other workflow management tasks (e.g. DB backups).
All raw data lives in S3. We make an effort to be able to recreate the whole data warehouse from the raw data, so if any cleaning/normalization process fails, we have this safety net. I'm curious to hear if others use a similar pattern, or if there are better options.
DBT handles both loading the data from S3 into Snowflake (by creating the Snowflake stages) and transforming the data within Snowflake. This isn't how DBT is usually used, but it felt wasteful to add Stitch or another tool to load the data into Snowflake when Snowflake supports it out of the box. I also created a `setup_snowflake` macro operation that creates our users, warehouses, databases, etc., in Snowflake (a kind of "poor man's Terraform").
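Roughly, the Snowflake side of that load boils down to SQL like the following. The stage, table, integration names, and the S3 path are placeholders, not our actual setup:

    -- Create an external stage over the raw S3 data, then load it.
    -- Assumes raw.events has a single VARIANT column for the JSON payload.
    CREATE STAGE IF NOT EXISTS raw.events_stage
        URL = 's3://example-bucket/events/'
        STORAGE_INTEGRATION = s3_int
        FILE_FORMAT = (TYPE = JSON);

    COPY INTO raw.events
        FROM @raw.events_stage
        PATTERN = '.*[.]json';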
I don't think Snowflake requires introduction. It's an amazing tool. We used Postgres before, but Snowflake is much much better, even though our DB is pretty small (~200 GB).
Finally, we use Re:dash as a BI, but I'm checking other options that allow usage without SQL (currently Metabase and Looker).
> All raw data lives in S3... I'm curious to hear if others use a similar pattern, or if there are better options.
You still may need to maintain information about what is in S3. If you're already using Snowflake, you can probably achieve this via external tables. Otherwise you could run your own Hive metastore or use AWS external tables.
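If it helps, the Snowflake external table route looks roughly like this, assuming a stage over the bucket already exists (names are illustrative):

    -- Expose Parquet files in S3 as a queryable external table.
    CREATE EXTERNAL TABLE raw.events_archive
        LOCATION = @raw.s3_archive/events/
        FILE_FORMAT = (TYPE = PARQUET)
        AUTO_REFRESH = TRUE;

    -- With no column list, each row comes back as a VARIANT named VALUE.
    SELECT value:user_id::string AS user_id,
           value:event_type::string AS event_type
    FROM raw.events_archive
    LIMIT 10;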
We're also trying to decide whether it's cheaper/easier to store things in S3 or just keep everything internal in Snowflake. In some cases the compression on our data is so good that it's better to keep it stored in a table than in internal or external staging. Obviously this is bad if we ever have to move away from Snowflake, but we haven't committed to either approach and lots of data is still backed up in S3. Our total data warehouse is about 200TB at the moment and we're projecting significant growth over the next couple of years.
Looker on top of Redshift. Events streamed in from Segment and ELT transforms managed by Airflow. Looker gives you nice visualizations and charting with RBAC and some lightweight ETL functionality. The real advantage of Looker is their modeling layer, which sits on top of your physical data and is done via a DSL called LookML. Source control is managed via a wrapper around git. The end result is that analysts can move lightning fast building their own models in SQL or do analysis via drag-and-drop explorations. Looker's customer support is the best I've experienced and hasn't changed since Google acquired Looker. We're likely moving off Redshift to Snowflake in the next 6 months because it is slow to pick up changes and we want to decouple storage and compute for scaling reasons. Airflow is an incredible platform but needs customization and templated workflows to make it great. Data build tool (DBT) is how we plan on managing ETL code via Airflow in the near future. We're also adding Spark, but more for data science.
If you're interested in an alternative to dbt that's a little more analyst friendly, check out Dataform. A lot of teams are using Dataform between the warehouse and looker, to handle transformation logic using pure SQL, but in a more user friendly format than Airflow. Get in touch if you'd like to chat! dan at dataform.co
We're a small, bootstrapped company with 2 people. Some data is millions of rows, others are billions.
The goal for us is KISS: keeping everything as simple as possible, both in terms of infrastructure, ease of use, and cost.
Primarily we're using Metabase in front of multiple MariaDB instances. Metabase is absolutely fantastic!
We also have a couple of additional small projects:
- A small Node.js app that receives events live via fluentbit, saves them off in MariaDB and sends text message notifications via Twilio when certain thresholds are exceeded
- A small "ETL" app that polls the Stripe and QuickBooks API to mirror data in a MariaDB database so we can easily access it from Metabase
Sorry to be that one avoids-the-question comment, but:
Without design sessions to figure out your data store design (look up Kimball, Inmon), and then monitoring/testing to make sure everything is running smoothly, any data stack will be hell.
Badly designed data brings fatigue and rancor and unnecessary work and is a huge hard-to-quantify money suck.
IMO the right solution to this is not spending a bunch of time upfront trying to design your warehouse. You just want to adopt a tool that makes it easy to continuously refactor into more optimal formats as your warehouse scales. Happy to discuss more if you're interested! dan at dataform.co
We use Snowplow directly into Snowflake and report on this using Mode. We are a data-native SaaS firm; we set things up like this about 5 years ago and it has served us well. Streamlined and high performance. For all other sources, we use Stitch -> Snowflake, with one semi-custom Singer.io tap also running on Stitch. All this was simple to set up and means we don't have to worry about getting the data into one place. Of course, we then have lots of work to do in Snowflake and Mode to get the data the business needs. We share the reports from Mode to various teams via Slack. Hope this helps.
At Amplify, Matillion/Stitch/Fivetran/Custom -> S3 + Snowflake -> Matillion for transforms -> Looker + Tableau.
It's important to say out loud that a lot of analysis also happens within product-local reporting systems, or in "offline" Google sheets.
We are currently working on building out the same stack, terminating in a Powered By Looker instance, for customer-facing aggregate reporting.
The engineering and data science teams do great with Looker, but Tableau still covers use cases for non-engineer business people (think: the finance, customer operations, training, etc) who want to start from source data plus a few sidebar Google sheets, tinker around, and then develop a rough report without waiting for iterative cycles with the data engineering team. We're thinking hard about how to get those use cases into the warehouse + Looker faster.
Data infrastructure is scattered and siloed: Excel sheets and Google Sheets stored in various places (personal G Drives, company G Drives, some network share somewhere, an MS SharePoint site, sometimes in development git repos or various wikis).
Reporting infrastructure is manual massaging and lots of PowerPoint.
My company isn't that large, but bigger than you'd think for such a "system".
- Amazon Redshift (data sourced from many different backend DBs; e.g. PostgreSQL, MySQL, other Redshift instances etc.)
- BigQuery (Some teams store the data for their projects here. For reporting, they're selectively transferred to Redshift.)
Reports:
- Tableau (extracts of pre-aggregated data from Redshift)
- Looker (connects live to and executes its own queries on Redshift)
Anything that is based on pre-aggregated (rather small, e.g. n_rows < 5mil) data is visualized on Tableau. If users want to work on a self-service tool they use Looker which can connect to (cleaned) raw data optimized for this use case.
ETLs for raw data run daily on Redshift. Reports are also scheduled on Redshift (with Airflow) but the refresh interval is report-specific.
Postgres read-replica for cheap realtime stuff,
Redshift for the larger sets,
Airflow to manage transfers,
and Metabase to visualize and share.
We also collect data from a bunch of APIs, but those each run via their own job and store in Postgres.
We also try to define wide short tables in Redshift that enable users to use Metabase to ask questions.
I was very happy with Metabase. Given that we can't afford Looker right now (but we would all love to), it is pretty solid.
Everything goes through S3 because Snowflake storage is on it.
dbt is amazing; we began using it a month ago and it has already transformed the way our data team works. It really is a value multiplier for everyone. Data engineers are happier because they don't need to write and maintain data transformations, analysts are happier because they can maintain their own SQL pipelines, and the whole company is happier because we now have a great documentation tool to explore our data.
We are also big fans of Snowflake; it makes operating a data warehouse a breeze.
Then, we use a mix of Redash & Tableau for reporting.
Redash for static reporting (open to the whole company) & Tableau to create more complex data tools we expose to some internal teams; Marketing, Risk, Finance ...
I am surprised no one mentioned https://redash.io/ till now (a lot cheaper than looker/mode/Periscope with all the basic functionality that you might need).
We use Segment for event tracking, PostgreSQL for transactional data, and a number of spreadsheets and third-party integrations with Stitch Data. Since our data is relatively small, we use PG as a data warehouse and heavily use DBT for ETL.
The people who are familiar with SQL just use DataGrip, for the UI we use our tool https://rakam.io.
Shameless plug: It's basically similar to Looker but it plays well with the product data and integration with Segment as well.
Using Kafka-connect, we're able to serve up near real-time (2-5 mins) insights on device generated events.
We probably need to use some kind of ETL tool to replace custom SQL and Snowflake tasks. Unfortunately, we haven't been able to find a tool that handles this in a non-batch (even if it's micro-batching) form. Snowflake change streams and tasks allow us to ETL in a streaming-like fashion.
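In case it's useful to anyone, the streams + tasks pattern is roughly this; the object names, columns, and schedule below are just illustrative, not our production setup:

    -- Capture changes on the raw table.
    CREATE OR REPLACE STREAM raw.events_stream ON TABLE raw.events;

    -- A task that wakes up every minute but only runs when the stream has new rows.
    CREATE OR REPLACE TASK transform_events
        WAREHOUSE = transform_wh
        SCHEDULE = '1 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('raw.events_stream')
    AS
        INSERT INTO analytics.events_clean (event_id, user_id, event_type, event_ts)
        SELECT event_id, user_id, event_type, event_ts
        FROM raw.events_stream
        WHERE metadata$action = 'INSERT';

    -- Tasks are created suspended; resume to start the micro-batches.
    ALTER TASK transform_events RESUME;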
We're ingesting everything from raw/transformed/aggregated events, micro-service DBs (as fast as they sprout up), NetSuite/Salesforce, Mixpanel, MySQL, MongoDB... Billions of rows of data across multiple data sources, accessible to internal and external customers in a matter of seconds. It's been an incredible challenge, especially with only a team of 2-5 people.
Python, Redshift, Tableau. But if you are starting from scratch, I'd suggest focusing on the ETL piece with Python and sending PDF reports generated with LaTeX. Too many people get distracted by the fancy reporting stuff and don't do the 2 things that matter: good ETL with a good SQL DB, and analytics that are tied to results the company cares about.
It's interesting seeing the various stacks being used...Here at One Medical:
[ onemedical.com, mixpanel, Google Sheets, Salesforce, etc ] -> S3 (Amazon EMR) -> [ Tableau, Aurora MySQL ]
It's a nice & clean stack for data engineering.
Airflow is used for orchestration and is heavily customized with plugins. Zeppelin notebooks are used by most analysts.
We'll probably be replacing Aurora MySQL w/an MPP database like Snowflake or Redshift. MySQL is a crutch, but easy to deploy and manage to a point.
Several python frameworks also do various ETL & data science functions, but everything generally revolves around S3 & Airflow.
Amazon EMR is a great swiss army knife for ETL. Moving between Pig, Spark & Hive is trivial. Tableau is a bit of a challenge. Tableau seems to give users too much rope to hang themselves with.
In my previous company, we did TreasureData->Tableau.
TreasureData is a platform like Segment and lets you bring your customer event data as well as data from SaaS tools (like Salesforce, Zendesk) into a data warehouse hosted by TreasureData. It worked great but had the downside that all the data was in TreasureData and we were kind of locked into it. Segment kind of solves that problem because it has connectors to Redshift/Snowflake etc so you can keep ownership of your data warehouse but the warehouse sync delay (in our version) was a problem.
Also, BI was just one of the use cases. We wanted to send the data to 3rd party tools (like Facebook ads) based on some logic (some were simple rules but we had complex ML driven lead scoring models too). TreasureData was more flexible on being able to run those rules and activate the results but ideally we wanted to run them on top of our own warehouse in AWS.
It generates a very beautiful CRUD admin dashboard automatically via reflection.
Allows building drag-and-drop data viz dashboards, saving commonly-used custom queries as "scopes", and even building your own HTML views if you need to get really fancy (think tracking live deliveries on a map, etc).
Also has Stripe and Intercom integrations.
I really can't hype this enough. Have been using this on nearly every app I've built the past three years.
The core team has also answered my emails within 2-3 days the few times I've mailed them over the years I've used it, even as a never-paying customer.
Wow, that looks incredible. I don't know why, but I've never really thought to look for admin site solutions out there, instead opting for janky in-house built ones. I'll definitely be playing around with this.
Stitch -> Redshift (with DBT for data modeling) -> Looker
For a smaller company, it makes a lot of sense for us to use off-the-shelf tools like these rather than rolling our own pipelines. Every once in a while we run into issues where Stitch isn't quite as flexible as we'd like (can't set dist/sortkeys etc), but that's the tradeoff you make for its simplicity.
DBT is amazing and I can't recommend it highly enough!
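One related note: while Stitch won't set dist/sort keys on the tables it loads, you can set them yourself on the dbt models you build downstream via the model config. A hypothetical model, just to show the shape:

    -- models/marts/fct_events.sql (hypothetical)
    {{ config(
        materialized = 'table',
        dist = 'user_id',
        sort = ['created_at']
    ) }}

    select user_id, event_type, created_at
    from {{ source('stitch', 'events') }}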
Looker works for analytics, but we're starting to do more data-sciency work, and it doesn't have built-in Jupyter notebooks or anything like that. Does anyone have a solution they use and like for hosting and sharing notebooks alongside a stack like this?
A lot of these answers seem to focus on app analytics (e.g. collecting clicks, page views, etc. from Segment). How are people collecting / integrating financial data (e.g. sales, subscriptions, expenses)?
I've done 2 data pipelines, one a lot like what most people here are talking about.
The other I had to build for a startup with millions of monthly uniques but only seed funding (can't do a $30k+ a month data engineering bill).
Went with custom event emission -> Kinesis -> (Druid & S3)
and used Imply (https://imply.io/). Easy real-time data analytics, auto-enrichment with Druid lookups from an RDBMS, and a simple UI for slice/dice investigation of metrics. All in all it cost less than the cheapest Looker license.
We made our own reporting. Seems crazy to pay multiple 3rd parties to look at our own data. We're using Apache Druid on the back end. (Giant Fortune 50 non-tech company.)
We initially considered Stitch and other -as-a-service ETL but ~500 lines of Python later we had our own thing.
I also experimented with FDW: https://unwindprotect.com/postgres-fdw
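For anyone curious, the FDW setup is only a few statements; the server, credential, and schema names below are placeholders:

    -- Query the app database from the reporting Postgres via postgres_fdw.
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER app_db
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'app-db.internal', dbname 'app', port '5432');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER app_db
        OPTIONS (user 'readonly', password 'secret');

    -- Pull the remote tables in as foreign tables under a local schema.
    CREATE SCHEMA IF NOT EXISTS app_remote;
    IMPORT FOREIGN SCHEMA public
        FROM SERVER app_db
        INTO app_remote;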
On a meta note, if you're interested in viewing & sharing stacks, that's the primary feature of the startup I'm working on: Vetd (app.vetd.com). The communities we host (often VC portfolio companies) share their stacks and leverage for discounts.
We don't currently have any way to define relationships between products, but I like that idea...
It looks something like this [0] when you view the aggregated data of the companies in your community, and you can also view each company's individual stack.
App, Web, IoT device send realtime events to a fluentd system with many events going to Segment. Web front end also loads Segment for various ad trackers. IoT device also uploads telemetry and other logs in both realtime and asynchronously and those end up in some Postgres databases.
Segment syncs our event data periodically to our data warehouse (Redshift).
We have a readonly replica of the eCommerce DB for live stats (Postgres).
And there is a time series db for system/IoT telemetry (InfluxDB).
Most of our BI queries are done in Mode. Some are done in Grafana (data in our InfluxDB and some live data). Spot check against Google Analytics or FB ad tracker...
MariaDB > Metabase
Stitch/Segment > BigQuery > Metabase
Stitch/Segment > BigQuery > Google Data Studio (Curmudgeonly stakeholders refuse to log in to BI tools... but require reporting anyways...)
A bit different from the crowd here. After much searching, our BI stack has been fully converted to data streaming using: StreamSets DataCollector >> Kafka >> AWS S3 >> AWS Redshift >> Dremio >> Jupyter Notebooks. Great to have Jupyter take on data prep and data analysis tasks while external tables in Redshift are very fast with minimal ETL. Dremio has been great as a virtual data mart with great performance on Parquet data in S3 and desktop BI tool integration.
We've recently changed from : Segment -> BigQuery -> Looker to Segment -> BigQuery -> Dataform -> BigQuery -> Looker.
The addition of Dataform into our stack has completely changed the game, allowing us to maintain one source of objective truth and maximise looker (minimising the joining etc. in LookML, instead pointing it straight at tables in BigQuery).
This in turn means our downstream data users are happier to use a fast and reliable looker instance!
Custom pipes mostly coordinated by Luigi (heavily customized) -> Redshift -> DBT -> Looker.
Some spark mixed in at the ingestion and transformation phases.
Like someone else said in this thread, we're currently battling Redshift scaling problems and are looking to offload more of the raw data to S3 using Spark to create read views.
No data catalog right now but the Looker setup is extremely well-maintained. Hoping to deploy Lyft's Amundsen to fix this more generally.
The main reason is that only a subset of all our datasets are actually exposed in Looker, and Amundsen would have applicability to other data sources - certain production semi-OLAP DBs, the future hive-on-S3 deployment... that we have no plans to expose in Looker at all.
That said I haven't looked closely at how far we could go with Looker, I'll check it out.
I see. We actually develop a BI tool similar to Looker but also want to implement a data catalog feature embedded into it. I'm familiar with Amundsen, but I believe this kind of tool needs tight integration with the BI tool, and Amundsen still has a way to go in terms of integrations (searching among saved reports, dashboards, collaboration with teams, etc.). Otherwise, using spreadsheets is still easier and more maintainable.
I would love to hear the features that you would need in a data catalog system.
Dave from Chartio here, wanting to share our new book describing the 4 stages of setting up your ideal data stack here - https://dataschool.com/data-governance/.
It covers BI a bit, but mostly the stack that BI sits on top of. It's an open book so we're always looking for suggestions and experiences such as those shared here.
We built a tool to help non-technical folk transform their data into useful formats for BI (https://www.data-monkey.com). The tool currently supports JSON, CSV, Excel and text files, and comes with features to merge/filter/transform data. It's free to try and use if you're interested - we'd love to hear your feedback!
We built our own tool at Datastreams (https://datastreams.io) to collect data. We currently collect several thousands of events per second, mainly web data.
Events are mainly streamed to one of the following: Cloud buckets(S3, etc), HDFS, SQL-db or Cassandra.
Most clients use one of the following visualization tools: PowerBI, Qlik or Tableau.
Is there any way that we can provide BI to our customers using an OSS tool?
Our databases store all our users' data. I'm thinking of using something like pgsync to sync all the Postgres databases to a new Postgres instance and then having Redash or Metabase set up to connect to it.
Alternatively, using locked filters on Metabase and embedded questions.
All our users' data is in Postgres and they very much want BI insights. Not sure what the easiest way to do this is...
Ideally: Podio , excel sheets, other program backend-> Pentaho -> MySQL/MongoDB on Google Cloud, Google Analytics-> R scripting(DBI connections, BigQuery and Hadoop managed through R) -> shiny-proxy, APIs, Hadoop for Big Data computation-> Rmarkdown Reports, shiny-dashboards, a little Tableau and a little Power BI.
MySQL will be moved to Postgres for better performance soon.
Custom Collector(analytics) -> Clickhouse -> Custom ETL Scripts -> Clickhouse -> Re:dash
We tried Metabase, which is awesome, but Redash is also great and easy to set up as well. If your team knows SQL, then Redash is better.
We also looked at Druid, and after some benchmarking we settled on ClickHouse; realtime queries, even without ETL, run within seconds in ClickHouse.
We are a small data consultancy, so we use other/more diverse things for customers, but our internal stack is fairly simple:
Hubspot/Jira/G Suite → (Python) → PostgreSQL → (SQL) → Tableau
Since we are Tableau partners, we have a bunch of internal licences either way. We host Tableau Server, ETL, and PostgreSQL ourselves, all on Centos cloud servers.
We have a variety of data sources, from Mixpanel to PostgreSQL to black-box systems that can only dump out CSV files. We use Airflow to manage pulling from those, doing some transforms, and writing everything into BigQuery. Looker then handles the reporting and browsing.
This setup has worked pretty well for us so far. I've learned of a few tools from this thread that might help us to better manage data sets and views - specifically DBT and Dataform.
@sixo @numlocked I'd be happy to share more about Snowflake's architecture. As others mentioned in this thread, Snowflake completely separates storage from compute, eliminating Redshift scaling issues.
Are any of you offering embedded analytics in your products? Mostly applies to mid-market/enterprise SaaS platforms but interested in learning more about how you offer analytics/BI to your customers in your products.
Hi! You may like to check out what our team has built at www.holistics.io. The pricing metrics of our embedded dashboards cater for you to scale to unlimited viewers, and you don't need to recreate a duplicate copy of your user accounts into our system for it to work. Check out the video below to find out more!
We use the standard Microsoft stuff. SQL Server, SSIS, SSRS, PowerBI. There are some pain points, but for the most part, it works, and it's pretty inexpensive if you're already a Microsoft shop.
How does one end up doing BI / data engineering? I'm currently a frontend / full-stack JS dev, tired of that world, and want to switch to something more stable.
I'd highly recommend the Kimball books, e.g. "The Data Warehouse Toolkit".
Unless you're working with a lot of data, the technical parts of a data warehouse are relatively easy to grasp (the dbt team wrote some great posts on how they structure their projects [1]). After that, what really makes a difference is how you structure your data warehouse and how you model your data in a way that allows you to query it efficiently and ask many questions. That's where the Kimball books shine.
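For a concrete taste of the Kimball style, here's a minimal star-schema sketch; the table and column names are invented for illustration:

    -- A dimension table plus a fact table keyed into it.
    CREATE TABLE dim_customer (
        customer_key   INT PRIMARY KEY,
        email          VARCHAR(255),
        signup_channel VARCHAR(50)
    );

    CREATE TABLE fct_sales (
        sale_id      BIGINT PRIMARY KEY,
        customer_key INT REFERENCES dim_customer (customer_key),
        sale_date    DATE,
        amount       NUMERIC(12, 2)
    );

    -- Business questions become simple joins and aggregations over dimensions:
    SELECT c.signup_channel, SUM(s.amount) AS revenue
    FROM fct_sales s
    JOIN dim_customer c USING (customer_key)
    GROUP BY c.signup_channel;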
Technical skills with databases are a must. It's basically a database engineer + being able to generate accurate and insightful reports/metrics/visualizations.
More value comes from database engineer skills as well as being able to understand the structure of a business. Little actually comes from the BI platform you choose.
If you're already doing engineering work you may be able to just go get a job focusing on data, if you're upfront about wanting to learn. I've found most people have learned the skills on the job. Besides the books mentioned, I commonly recommend these resources:
DataCamp.com has some great courses
Tableau has some great online education - many people learned BI from them
Many free resources for getting started with data at different levels (note: this is one of my sites)
https://dataschool.com
Yeah, enriching the events as they pass through - so we have a Kafka Connect JDBC source polling the source of truth DB every X seconds for model changes (we looked into Debezium for streaming model changes, but the additional complexity didn't bring us much benefit based on how often our model changes), and then we pull that into a GlobalKTable and join the KStream as appropriate.
We also use Kafka streaming with a persisted event window store to deduplicate across a window of 2 hours.
Mind if I ask you what you used to capture source changes if Debezium didn't work out for you? Is there a specific Kafka Connector for JDBC that you're using?
If we had a need for near-instantaneous model updates, then I'd definitely go Debezium. We didn't use it because we didn't need it; a generic KC JDBC source with a poll period of 5 seconds met our needs.
Business Intelligence. Making reports, usually over a large set of data. A normal query would be too slow, so the results are pre-calculated. Maybe you will calculate the average and total sales per city, then dump the results into a table. From there you can quickly generate higher-level reports like sales per state.
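In SQL terms, the pre-calculation is just something like this (toy table and column names):

    -- Pre-compute the expensive aggregate once.
    CREATE TABLE sales_by_city AS
    SELECT state, city,
           AVG(amount) AS avg_sale,
           SUM(amount) AS total_sales
    FROM sales
    GROUP BY state, city;

    -- Coarser reports then roll up from the small table instead of the raw data.
    SELECT state, SUM(total_sales) AS total_sales
    FROM sales_by_city
    GROUP BY state;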
I do. I work in data consulting. I prefer Tableau on top of Snowflake for 99% of jobs, but for massively scaling insert performance Elasticsearch has a niche. Kibana seems like a mishmash of different products duct-taped together, but for the most part plays nice.