While MySQL is very popular, it is rare to see it in analytical/ML use cases, which is why we haven't added it yet. There's nothing from a technical POV that prevents us from adding it; it just hasn't been a priority. I'm happy to pull it up if that would help your use cases.
If you search for “dd gzip” or “dd lz4” you can find several ways to do this. In general, interpose a gzip compression command between the sending dd and netcat, and a corresponding decompression command between the receiving netcat and dd.
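For example, a minimal sketch of such a pipeline (device names, the host address, and the port are placeholders; netcat option syntax varies between implementations, and bs and the compression level are worth tuning for your hardware):

    # receiving side: listen, decompress, write to the target disk
    nc -l -p 9000 | gunzip -c | dd of=/dev/sdb bs=64K

    # sending side: read the source disk, compress, stream to the receiver
    dd if=/dev/sda bs=64K | gzip -c -1 | nc 192.0.2.10 9000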
>WireGuard does not focus on obfuscation. Obfuscation, rather, should happen at a layer above WireGuard, with WireGuard focused on providing solid crypto with a simple implementation. It is quite possible to plug in various forms of obfuscation, however.
>TCP Mode
>WireGuard explicitly does not support tunneling over TCP, due to the classically terrible network performance of tunneling TCP-over-TCP. Rather, transforming WireGuard's UDP packets into TCP is the job of an upper layer of obfuscation (see previous point), and can be accomplished by projects like udptunnel and udp2raw.
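As a rough illustration of that layering with udp2raw (the flags are recalled from its README, so treat them as assumptions and double-check against the project docs; addresses, ports, and the key are placeholders), you run the server side next to the WireGuard listener, the client side next to the WireGuard peer, and then point the peer's Endpoint at the local udp2raw port:

    # server: accept fake-TCP on 4096, forward to WireGuard's UDP 51820
    udp2raw -s -l 0.0.0.0:4096 -r 127.0.0.1:51820 -k "passwd" --raw-mode faketcp -a

    # client: expose local UDP 3333, which WireGuard uses as its Endpoint
    udp2raw -c -l 127.0.0.1:3333 -r 198.51.100.7:4096 -k "passwd" --raw-mode faketcp -a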
* Heuristics to detect the WireGuard protocol:
* - The first byte must be one of the four valid message types.
* - The total packet length depends on the message type, and is fixed for
* three of them. The Data type has a minimum length however.
* - The next three bytes are reserved and zero in the official protocol.
* Cloudflare's implementation however uses this field for load balancing
* purposes, so this condition is not checked here for most messages.
* It is checked for data messages to avoid false positives.
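A standalone sketch of that heuristic in C (not the dissector's actual code; the sizes 148, 92, and 64 bytes and the 32-byte minimum for data messages come from the WireGuard protocol spec rather than from the comment above):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Returns true if the buffer plausibly starts a WireGuard message. */
    static bool looks_like_wireguard(const uint8_t *buf, size_t len)
    {
        if (len < 4)
            return false;

        switch (buf[0]) {
        case 1: /* Handshake Initiation: fixed size */
            return len == 148;
        case 2: /* Handshake Response: fixed size */
            return len == 92;
        case 3: /* Cookie Reply: fixed size */
            return len == 64;
        case 4: /* Transport Data: variable length, never shorter than 32 bytes.
                 * Only here do we require the reserved bytes to be zero, since
                 * Cloudflare reuses that field in the other message types. */
            return len >= 32 && buf[1] == 0 && buf[2] == 0 && buf[3] == 0;
        default:
            return false;
        }
    }

    int main(void)
    {
        /* Minimal (empty-payload) data message: type 4, zero reserved bytes,
         * receiver index, counter, and a 16-byte authentication tag. */
        uint8_t pkt[32] = { 4, 0, 0, 0 };
        printf("%s\n", looks_like_wireguard(pkt, sizeof pkt) ? "looks like WireGuard" : "no match");
        return 0;
    }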
Thanks for the comment! As we work with customers we will add more source and target databases. A couple of things: our scope as of now is data movement in and out of Postgres, and as we add more data stores as sources/targets for Postgres, providing a high-quality experience will take priority over expanding coverage.
Congrats! I'm curious where PeerDB positions itself relative to ReplicaDB? Also wrt sources, I'm curious whether CSV will ever be on the roadmap? Although it's antiquated, it's ubiquitous, so I was surprised it's not supported at launch. That said, I've encountered ridiculous challenges with data movement from CSV to Postgres and I'm curious if that alone was the blocker?
Thank you. Our goal is to focus on Postgres and provide a fast (via native optimizations; see the post above for a few examples), simple, and feature-rich data-movement experience in and out of Postgres. Adding more connectors based on customer feedback will be part of this journey!
In regard to CSV as a connector, Postgres's COPY command should do it, right? Am I missing something? Is it CSV files in cold storage (like S3), or periodic streaming of CSV files into Postgres?
That's right! If it's easy, then it should be easy for your team to add; but if it's not easy, then it'd be even more useful for your team to add! Win-win.
You bring up a great point. Periodically streaming CSV files to Postgres from storage through a single SQL command (CREATE MIRROR) is indeed very helpful for customers. We will add this to our product prioritization. With the infra we have, this shouldn't be too hard to support!
Not related to Illa; it's entirely different software. I believe it will be the next big thing because of how rapidly they add features compared with other tools.
At my job I was searching for a simple yet powerful tool to migrate data between different databases.
Everything I found was either expensive, total overkill, hard to configure, or too invasive (CDC tools and the like).
Then I stumbled upon this open-source tool, ReplicaDB, on GitHub.
I had never imagined that someone would create such an awesome tool that can do these things.
It's written in Java and uses JDBC drivers, so it can connect to every popular database and migrate data, for example from a PostgreSQL table to MSSQL or MongoDB.
It can query tables and views, and best of all, you can write free-form SQL SELECT queries against the source and move the result to a destination table; the destination does not have to match the source table.
The developer is awesome and keeps the project updated; he also helps a lot with troubleshooting and will give you a solution if you get stuck somewhere.
The documentation is simple, yet excellent.
I think this tool is a little hidden gem that runs on any OS with Java installed, and it helps a lot of people who write scripts for migrating and syncing data between heterogeneous environments.
I just want people to know that this software exists; personally, I believe the developer truly deserves some support for his incredible work.