MemSQL is now free to use for databases with up to 128GB of RAM usage (memsql.com)
256 points by kermatt on Nov 6, 2018 | 115 comments



Another instance of the Wikipedia page for a product [1] being more useful than the main site to describe it:

* MemSQL is a distributed, in-memory, SQL database management system.

* It is a relational database management system (RDBMS).

* It compiles Structured Query Language (SQL) queries into machine code, via a process termed code generation.

* On April 23, 2013, MemSQL launched its first generally available version of the database to the public.

* MemSQL is wire-compatible with MySQL.

* MemSQL can store database tables either as rowstores or columnstores (The OLAP vs OLTP part I guess).

* A MemSQL database is a distributed database implemented with aggregators and leaf nodes.

* MemSQL durability is slightly different for its in-memory rowstore and an on-disk columnstore.

* A MemSQL cluster can be configured in "High Availability" mode.

* MemSQL gives users the ability to install Apache Spark as part of the MemSQL cluster, and use Spark as an ETL tool.

The main value proposition seems to be the distributed nature, which probably makes it easier to set up out of the box than, say, trying to set up a cluster of MySQL or PostgreSQL databases, which are not "natively distributed". It's also probably most useful when the data is "big enough" relative to the resources available on any single server, or when reliability is very important.

1: https://en.wikipedia.org/wiki/MemSQL


I've been using MemSQL in development since March 2017. I've seen it evolve into one of the fastest databases for columnar storage and analytical workloads. Having in-memory rowstore as well removed the need to have Aerospike in my infrastructure and simplified the whole stack.

This announcement that it's now free up to 128GB pretty much saved me from having to do a Kickstarter to raise funds for my little SaaS project.

If anyone from MemSQL is reading: really, thanks for doing this. I think it's a very shrewd move. You should definitely see an upswing of potential customers adding it to their stack and, when they grow, becoming enterprise customers.


It says that the 128GB version comes with High Availability features. Does that mean that if I have 2 nodes, each one is limited to 64GB? Or can each one have 128?


I am the Director of Product Management for MemSQL.

The 128 GB limit applies to the whole cluster. So if you have two nodes in the cluster they would each have to be 64GB or less. If you have four nodes they would all have to be 32 GB or less. To have a highly available system we recommend 4 nodes (a master aggregator, a child aggregator and two leaf nodes). You can read more about the cluster architecture here: https://docs.memsql.com/concepts/v6.7/distributed-architectu...
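A quick sketch of that arithmetic (function and variable names are mine, not a MemSQL API): the cap is on total cluster RAM, so the per-node ceiling is just the cluster limit divided by the node count.

```python
# Free-tier limit arithmetic as described above: 128 GB applies to the
# whole cluster, so each node's share shrinks as the cluster grows.
CLUSTER_LIMIT_GB = 128

def max_gb_per_node(num_nodes: int, cluster_limit_gb: int = CLUSTER_LIMIT_GB) -> int:
    # Per-node ceiling if the limit is split evenly across nodes.
    return cluster_limit_gb // num_nodes

print(max_gb_per_node(2))  # 64
print(max_gb_per_node(4))  # 32
```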


How does MemSQL reach consensus between nodes?


A typical setup for HA would be 2 aggregator nodes and 2 leaf nodes. You can allocate the memory however you want. For example, you could give 32GB to all 4. Or give 16GB to each aggregator and 48GB to each leaf. (I'm a MemSQL Product Manager by the way.)


Do you have recommendations on provisioning memory in aggregators vs leaves? That is, would one of those configurations make more sense than the other for typical workloads?


In general more memory should go to the leaves. All data is stored there and MemSQL will push processing to the leaves whenever possible.


I tried to come up with a smart and polite comment but I can't. The target audience for memsql isn't developers or engineers; it's management that has no idea about IT. I don't like closed-source solutions. I don't want to book a demo. I want to be able to read the source. I want to install it, use it, benchmark it, and be sure that the results are 100% accurate. Sadly, I see this post as a marketing ploy and I can't find any nice words for this product.


You can install it, use it, benchmark it, and check everything yourself.

It's not open-source, but there is plenty of closed-source proprietary software, and plenty of buyers who care about solving their problems and paying money to get that done (and ensure the vendor stays alive).

If you don't want to use a closed-source product then that's your prerogative, but I don't see how you're making a dev/engineering decision by ignoring a product because of that.


Because the source is important: it tells the buyer they have a future if they no longer like your services. Closed source means they are locked into whatever fresh hell might come upon your company, which in turn unleashes fresh hell upon their decision to buy in. In 2018+ it's just a smart decision to make.


The vast majority of companies can barely build their own products, let alone study the source code of complex 3rd-party software. A distributed SQL database is on the extreme end of knowledge required to even understand it, so I'm not sure how open-source is going to help you.

As a counter-example, rethinkdb is open-source but the company failed and nobody cares about using it anymore. What would you do with that? Start building new database features yourself? Or just get your data out and move to a different system?


You will be able to at least maintain it, fix bugs, security issues. Maybe even start working on new features, promote your fork, revive some of the community, find people with relevant expertise, etc.

Databases are so lock-in-prone and critical that it's only natural for closed source database startups to be considered too risky to touch.


Possible doesn't mean realistic. As stated, 99.99% of companies are not going to come close to understanding, forking, and running their own build of a database.

It's better to practice proper vendor management and weigh all the risks and realities instead. If you're not more capitalized and viable than your vendor, then you have more important things to worry about than your vendor disappearing overnight.


/Widely used/ open source projects will find continued support; the user base of a project is a crucial part of the decision. Rethinkdb had only one serious site, which found it easier to port to a thin layer on top of Postgres when rethink collapsed.


That's not a technical decision though. That's a business decision.

A real technical reason might be "the ability to fix the software ourselves" or "easier debugging of the software". Avoiding vendor lock-in isn't a technical decision.


Good luck with the source code of a database you didn't write...


I recently was able to unfuck my way out of a 300GB data loss resulting from a failed DB upgrade. By looking into the commit history of PostgreSQL, finding the commit with the PG_CATALOG_VERSION I needed, and compiling from that revision, I was able to re-run the upgrade with the parameters I needed. I'm not sure what I would have done if that had been MS SQL Server or something else.


You would send in a support ticket to the company and they would solve it for you.


> they would solve it for you

That hasn't been my experience, at least not on any suitable time scale.

I strongly suspect that the vast majority of those of us who have worked somewhere "not more capitalized and viable" than the vendor share that experience.

Even when a vendor's support engineer is fully capable of solving the problem, the sense of urgency can't reasonably be expected to match that of a much smaller customer facing potentially catastrophic data loss (or other existential-threat-level consequences).


Vendors have support plans and SLAs so if you need 24/7 support then make sure that is indeed what you're paying for.

I do not see how having spare engineering talent capable of reading, editing and running a custom database build is the more realistic or faster option for any business in case of issues.


> Vendors have support plans and SLAs so if you need 24/7 support then make sure that is indeed what you're paying for.

Those are totally useless during an existential crisis without associated indemnity (which any vendor would be crazy to provide) against loss due to failure to perform.

> I do not see how having spare engineering talent capable of reading, editing and running a custom database build is the more realistic or faster option for any business in case of issues.

I don't see how it isn't, considering that "custom database build" could be so simple as to be trivial. In the GP's case, it was merely using a specific version.

Even the characterization of the required engineering talent as "spare" seems incongruous, as, in small companies, the talent required to handle unexpected problems with technologies fundamental to running the business is essential, not superfluous.


The proof is in the pudding. Plenty/most serious users of open source databases become customers of the relevant companies. The commercial license is usually a small fraction of the potential cost of an outage.


While plenty of closed source software systems are still widely used, I think the world is dramatically moving towards open source. The world has changed, and I think the expectation of being able to leverage open source software is here to stay, despite the fact that a vast majority may never actually read, fork, or modify the source code.


Disclaimer: I am a product manager at MemSQL, so I may be biased :)

There are a variety of ways to try out MemSQL yourself, such as installing on Linux, Windows, Mac, AWS, etc. Maybe I am biased since I was an engineer before I became a PM, but we currently optimize our product exactly for technical people such as IT, devops, and of course, engineers. For a list of installation guides, check this link out: https://docs.memsql.com/guides/latest/install-memsql/

Take a look at our docs (docs.memsql.com) and you will see that we all actually are just a bunch of engineers and people with a technical background. Are there certain technical topics you feel are unclear here? I'm also happy to chat privately.

If you still feel this product isn't right for you, that is fine -- MemSQL's focus on query speed may not be for everyone. However, with this release of a free product for people to try out, we definitely optimized exactly for people that want to try the product out :). I'm actually surprised you mention our product isn't for engineers/technical people, because from our field of view, we actually sometimes see MemSQL as too technical, hence why we focused on usability in this release, ha!

Hope that answers some doubts you may have -- thanks for the comment.


Some feedback - I can't seem to find any way to install on Windows on the following part of your website - https://docs.memsql.com/guides/latest/install-memsql/


Good catch, I don't see it either.. weird


Guys, I believe you are engineers doing an amazing job. But the whole "free up to 128GB" promotion is just.. wtf. And the website revolves around attracting IT managers, not engineers who make educated decisions about their project(s). I make decisions based on calculations, not based on shiny websites. I'm not undermining your product; it appears to be amazing. I'd love it if it were free and open source. Heck, we'd probably spend a ton of money on paid support, scaling planning, consulting, deployment and what not. I just hate your business model, that's all. Let me have your program running without constraints! And because of it, I can't see a reason to use it. I choose to explore other avenues that went down the open source route. I'm quite okay with not running the fastest option. No hard feelings, I sincerely wish you make a huge dent in this area and a ton of money!


Free up to 128GB is practically free for up to 95% of applications. In both dev and prod.

I really don’t see how you could have a problem with that.


The problem is that "free up to 128GB" does not cover 95% of the problems you would run on an in-memory database.

I have never seen a production system that small. I've seen thousands.


If you stored data on disk compressed in MemSQL's columnstore, you would use that 128 GB of RAM for query execution; on-disk data storage would not be limited. If that's not a productive system, then I must have imagined the whole data warehouse and data mart market.


I've worked with MemSQL and it's both really easy to use and setup and very fast. FWIW, I'm not a manager.


What is the benefit of using MemSQL over some other free in-memory databases like Apache Ignite? I see that they have better documentation and support (edit: plus a competition on Codeforces whose winners rarely receive their T-shirts). What about other things?


MemSQL CEO here. There are a few:

- MemSQL is transactional and writes transactions on disk

- MemSQL has an excellent implementation of SQL with mature query optimization and query execution, and it gets better every release. This is from 6.5: https://www.memsql.com/blog/6.5-performance/

- MemSQL has in-memory and on-disk data storage so you can use MemSQL to store petabytes

- MemSQL has columnstores and vectorized query processing: https://news.ycombinator.com/item?id=16617098

- MemSQL supports geospatial, fulltext search, and json

- MemSQL allows you to stream data from kafka in one command: https://docs.memsql.com/sql-reference/v6.5/create-pipeline/
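For a rough sense of that last point, a pipeline definition looks something like the following sketch (broker address, topic, and table names are placeholders; see the linked CREATE PIPELINE docs for the authoritative syntax):

```sql
-- Placeholder names throughout; consult the docs linked above.
CREATE PIPELINE clicks_pipeline AS
LOAD DATA KAFKA 'kafka-broker:9092/clicks-topic'
INTO TABLE clicks;

START PIPELINE clicks_pipeline;
```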


Given the availability of open source solutions why would I in 2018 build a critical part of my application on a closed platform?

Genuinely not intended as snark, I'm just curious why memsql is so compelling that I would consider it.


To build on the list of features above, here's a blog post from Pandora where they go into the details of what they use MemSQL for and some of the alternatives they looked into for their use case: https://engineering.pandora.com/using-memsql-at-pandora-79a8...


Isn't that what the previous answer was trying to say? Paraphrasing "MemSQL has these features which we think Apache Ignite and others do not."

All things being equal, I agree that open source solutions are the best. Things are just not always equal.


> Isn't that what the previous answer was trying to say?

Truth be told, the list of features is not very compelling. I mean, JSON support is not a reason to pick a commercial DBMS over a FLOSS one.


There is also Tarantool which is another in-memory/SQL DB and is Open Source.

https://github.com/tarantool/tarantool


I would also like to see a response to this.


Those are some nice features. Unfortunately, your DBMS doesn't do some basic things like return consistent results for a simple SQL query with a group by and having clause. I admit this might be a configuration issue on my company's end, but if so, that is a terrible configuration option and should be hidden away, opt in only, with a huge wall of warnings so people don't actually enable it except in extreme circumstances.


I've seen you mention these inconsistent results twice in this thread, but I have worked at MemSQL for 5 years and never heard of such an issue. Have you reached out to see if maybe your query / data is not what you expect? I've seen inconsistent results only once, and it was because the default date formats across RDBMSs were different (and was not anticipated).


How would the query/data not be what I expect if I'm writing the query myself and looking directly at the SQL table definition to create it?

Beyond those considerations, why would the exact same query (executed several times in rapid succession from the console) produce vastly different results? Also, I should clarify: rewriting the query from "select ... from xyz group by ... having ..." to "select ... from (select * from xyz where ...) group by ..." made the inconsistency go away, without changing the filtering clause. That does not inspire confidence.


Can you post the full schema and query? Are you sure you are not projecting columns that are not part of the group by expression?


I really appreciate the offer. I got the go-ahead to share this information, where should I direct it?


Here, or https://www.memsql.com/forum/, or memsql-public.slack.com.


To close the loop on this one: we looked at the query, and strictly speaking we should be rejecting it b/c the HAVING clause references a column that's NOT in the GROUP BY and NOT an aggregate expression. The query shape is:

  select count(*), a from T group by a having b > 0
In this case b is not allowed to be part of HAVING by the ANSI standard.

We let it run b/c some customers migrate from MySQL and MySQL allows this query. You can set MemSQL to be strict about it by setting this variable:

  set session sql_mode = only_full_group_by;
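For illustration, here's the kind of standards-friendly rewrite (sketched with sqlite3 purely for portability; the T/a/b names follow the example above). Note this is not semantically identical to the lax form: filtering on b moves into WHERE, which happens before grouping, and that is usually what was actually intended.

```python
# Demonstrates filtering on b in WHERE (before grouping) rather than
# referencing an ungrouped, unaggregated column in HAVING.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table T (a int, b int)")
conn.executemany("insert into T values (?, ?)",
                 [(1, 1), (1, -1), (2, 5), (3, 0)])

# Per-group counts over only the rows where b > 0.
rows = conn.execute(
    "select a, count(*) from T where b > 0 group by a order by a"
).fetchall()
print(rows)  # [(1, 1), (2, 1)]
```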


Thanks for taking a look at it. Your position is perfectly reasonable, but given the fact that (at least in my case) the results I got back were subtly wrong, and there's a good chance someone wouldn't notice, it might be a good idea to default this off if it isn't already, with a really stern warning in the config.


If you’re hitting the disk, aren’t you losing some of the advantages of using an in-memory database in the first place? Or would it still be more performant than a traditional RDBMS due to optimized in memory data structures?


There is a compound effect from building memory-optimized features and being a distributed database that can put a lot more cores to work and cache a lot more data in a cluster:

- In-memory row stores. Super fast for updates and point lookups

- Memory-optimized hash joins that minimize cache misses. Great for analytical/reporting use cases

- Vectorization for columnstore query processing. Super fast aggregations that work best when data is cached in memory


Apache Ignite is an in-memory data grid that supports persistence and overflow-to-disk. It primarily started as a cache and now has a full key/value store with SQL-92 on top, but isn't a full relational database. Instead it has other features like distributed data structures, messaging, and is more about connecting your applications together. Easier deployment model with all nodes being identical.

MemSQL is a distributed full-featured relational database that has in-memory rowstore and on-disk columnstore tables with rich support for SQL, fulltext search and JSON. It's a fast RDBMS and does really well with analytical queries.

Do you need a fast cache, key/value, messaging system? Or a RDBMS with fast OLTP + OLAP capabilities?


Presto on top of Apache Ignite is basically an OLAP database (It just takes some effort to write the connector). It doesn't support transactional workloads though.


This sounds like one of those things that seems fun on a whiteboard but will be hell to implement and actually make perform well at all.


"Basically like" is very different from "engineered for"

Column-oriented storage itself is many times faster for analytical queries, even if on disk, and combined with the other optimizations of MemSQL will get you far better performance. Along with all the data being able to constantly undergo transactional updates.


I would like to extend this question, what's the benefit of MemSQL over a columnar store db with a massive cache?


Seems with this they've also deprecated the developer version which was unlimited capacity but lacked enterprise features. Also, no mention of pricing except to "contact sales".


I asked back in 2017 and it was $25k.

This post 7 months ago [0] mentions the same.

It's probably still at this price, but if you are really interested. Yeah, contact sales.

[0]: https://news.ycombinator.com/item?id=16617827


I don't understand the high frequency use case they describe. High frequency trading is something very different from "12,000 transactions a minute". What is exactly the use case? Pre-deal checks? Post-deal checks? Book replay? Or is it just a simulation? It's not very clear.


HFT can easily be 12000 transactions a minute with a requirement for each of these transactions to be very fast.

It's not (necessarily) about high throughput, but about low latency.


No real HFT system is doing a *database transaction* in the critical trading path. HFT systems are not built like web applications. They are typically built as a tight event loop, reading market data packets directly from the network card, doing a tiny bit of computation, and then writing to a userspace TCP stack for order entry.

I guess you could be using MemSQL for post-order or trade analysis, but then it would probably be overkill since a lot of that can be done considerably slower.


Additionally 128 GB is just about 20 minutes worth of data.


That's just the free edition RAM limit...

But yes, actual HFT is not an accurate use-case, for any database product.


HFT firms have plenty of cash to buy the unlimited version.


HFT ain’t what it used to be.


I work for a dark pool ATS that is hit by HFT firms, and we routinely see flows greater than 12k transactions per minute. I've been benchmarking a variety of compilers, DB libs, drivers and platforms. So far, the best performance I've gotten, single threaded, is writing a single order to a memory-optimized table in MSSQL in about 500 microsecs (that was from a .NET Core app running directly on the same server as MS SQL; I've been able to get comparable performance from a C++ app running on Linux with kernel-bypass network IO). Mind, I've not tried to optimize the DB at all, this is purely comparing DB APIs. The worst I've seen, all other things being equal, is about 800 micros.


MSSQL? That's fast. Are you able to give any more details about the database server / net core code?


Can't share the code or schema, but can give a rough approximation of the setup.

We're experimenting with MSSQL's memory-optimized tables and natively compiled stored procedures. In my timings today, I was getting one call to our stored proc in the 300-400us range; that was inserting one record each into 2 tables.

The test setup for all of my scenarios is to do all of the same DB ops while alternating which libs I'm using. The best performance so far I've been able to get from Linux talking to MSSQL has been using OTL on top of unixODBC with MS Driver 17. Mind, these are physical servers sitting a few feet from a shared router.


https://www.youtube.com/watch?v=_vloWsdPCDs

CMU had a guest lecture by the MemSQL founder about the architecture.

Not sure how much of it has changed in the past 2 years, though.


One thing I would like to see is a commitment to keep it free for at least X years. OK, MemSQL has "decided" to make it free up to 128GB of RAM usage now. But they can "decide" 2 years later that they want to charge for any MemSQL usage. Then what options do the "small" businesses who have adopted MemSQL have?

When it comes to commercial products, it is generally good to check whether you can afford their paid offering, and only then make the tool a core part of your infrastructure. For databases, it's better to stick with fully open source, popular options from a long-term perspective.


Why can't I use the free, open source MySQL HEAP/MEMORY storage engine? It also provides clustering/replication.


You can. :)


This might be a stupid question, but what is the advantage of this solution over an in-memory sqlite database?


Distributed. Scalable. OLTP + OLAP queries. High availability. Very fast performance for reads and writes.

They are entirely different systems. Sqlite is meant for self-contained applications that need some relational data persistence with a single file for storage, not for accessing as a central database with many clients storing TBs and scaling across nodes.


SQLite is a replacement for fopen

(Quote from their docs)


There are actually some patches in the works (currently used by LXD, from memory) that allow sqlite to be distributed. I'm quite hazy on the details, but apparently this is part of what enables LXD's clustering.


> Distributed

is MemSQL shared nothing or shared everything, or can be mix of both?


Shared nothing. It has leaf nodes that store data and do local processing, and aggregator nodes that run queries spread to the leaf nodes and return the results.
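As a toy sketch of that scatter-gather flow (plain Python with made-up names, not MemSQL internals): each "leaf" produces a partial aggregate over its own shard, and the "aggregator" merges the partials into the final answer.

```python
# Toy model of the aggregator/leaf split described above (illustrative
# only): leaves hold shards and compute partial counts; the aggregator
# merges the partials into the final result.
leaf_shards = [
    {"us": 3, "eu": 1},      # partial counts computed on leaf 1
    {"us": 2, "apac": 4},    # partial counts computed on leaf 2
]

def merge_partials(shards):
    # The aggregator's job: combine one partial result per leaf.
    totals = {}
    for partial in shards:
        for key, count in partial.items():
            totals[key] = totals.get(key, 0) + count
    return totals

print(merge_partials(leaf_shards))  # {'us': 5, 'eu': 1, 'apac': 4}
```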


From what they claim, MemSQL supposedly scales well across a cluster of servers (and of course the usual, fail over, features, and other fun stuff).


I’m no db expert, but doesn’t SQLite have very non-granular locking that makes it hard to scale?


What happens if I set a hard limit on the MemSQL process to use 128GB RAM and it decides it needs more? Will it die demanding more memory, or will it behave like a traditional RDBMS and manage buffers to work within limits?


The best thing to do in this situation would be to set the maximum memory on the MemSQL server via the `maximum_memory` system variable. The server will then internally manage its memory usage against this upper bound and fail only specific operations which can't be performed without additional memory.
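If I'm reading the docs right, setting that looks something like the sketch below (as I understand it the value is in megabytes, but verify units and scope against the MemSQL docs before relying on this):

```sql
-- Hedged sketch: check the docs for the exact units/scope of
-- maximum_memory before using this.
SET GLOBAL maximum_memory = 131072;  -- roughly 128 GB, in MB
```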


What are the specific operations? Will another batch of INSERTs / UPDATEs work? Does it matter if I use the analytical part of MemSQL ?


I mean that an operation that needs more memory than is available will fail, but the server itself will remain operational. A simple example would be loading more data into an in-memory table (rowstore) than there is available memory. If you're using the columnstore then your storage won't be limited to memory.


That sucks, it's similar to Hekaton though. I was hoping there was some kind of a fallback mode when the DB has to juggle memory buffers at a performance cost.


The bit I can't see on the FAQ is about how it is as fast as RAM but protects against data loss using disk.

I think most engines guarantee Durability by assuming that once on disk, it won't go anywhere but if it's in RAM, it is susceptible to power outage? If it gets written to disk, it's not as fast as RAM?


[Director of Product Management for MemSQL]

MemSQL has two storage modes, rowstore and columnstore. The rowstore is "in-memory" and the columnstore is "on-disk", but those are oversimplifications.

Rowstore data is stored in memory, but we keep a snapshot of the data on disk, along with the transaction log (a record of all changes since the snapshot was taken). So queries can be satisfied fully from memory (because that is where the current data lives), but writes go to memory and to the transaction log on disk. If the machine reboots, the snapshot is loaded from disk back into memory and the transaction log is replayed. When that is complete you are back to where you were when the machine rebooted, with no loss of committed data.

Columnstore data is always stored on disk, although we use a rowstore in front of the columnstore that is hidden from the user but acts as a buffer of sorts, so that writes to the columnstore can be pretty fast. More details on how the columnstore works can be found here: https://docs.memsql.com/concepts/v6.7/columnstore/#how-the-m...
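That recovery story can be sketched in a few lines of toy code (illustrative only, nothing like MemSQL's actual implementation): durability comes from appending each write to a log, and recovery replays the log on top of the last snapshot.

```python
# Toy model of snapshot + transaction-log recovery (not MemSQL code).
snapshot = {"x": 0}        # last snapshot persisted to disk
log = []                   # transaction log: changes since the snapshot
store = dict(snapshot)     # the in-memory rowstore

def write(key, value):
    log.append((key, value))   # append to the on-disk log first
    store[key] = value         # then apply to the in-memory store

write("x", 1)
write("y", 2)

# Simulated reboot: reload the snapshot, then replay the log.
recovered = dict(snapshot)
for key, value in log:
    recovered[key] = value

assert recovered == store      # no committed data lost
```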


The concept in databases that you’re looking for is called group commit. Transactions running concurrently batch writes together before flushing to disk. This means the minimum latency for committing a transaction is the speed of fsync, but the throughput can be as high as the disk bandwidth.

There are other reasons why an in memory database can run faster than a disk based database, such as not having a buffer pool manager, but I don’t think that’s what you were worried about.
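A minimal sketch of the group-commit idea (toy code with my own names): transactions queue their log records, and a single flush, standing in for one fsync, durably covers the whole batch.

```python
# Toy group-commit sketch: many transactions share a single flush, so
# commit latency is about one fsync but throughput scales with batching.
pending = []

def commit(record):
    pending.append(record)   # each transaction queues its log record

def flush():
    # In a real log, one os.fsync() here would durably cover every
    # record queued since the last flush.
    batch = list(pending)
    pending.clear()
    return batch

commit("txn-1")
commit("txn-2")
commit("txn-3")
print(flush())  # ['txn-1', 'txn-2', 'txn-3']
```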


It might use a Write-Ahead log like postgres to guarantee data integrity between memory and disk.


That’s right, and it writes it on two separate nodes if you enable high availability.


Does the free version support horizontal sharding and async/sync disk writes for data mirroring from RAM?


From the announcement:

> You can do almost anything with MemSQL, using the free tier, that you can do if you have an Enterprise license, including capabilities and production use. The differences are that you can only configure the free tier of MemSQL to use up to 128GB of RAM usage, and support is only community support; for paid MemSQL support, you need an Enterprise license.


How is MemSQL for OLTP nowadays? Has anyone had any success consistently/scalably using it for true HTAP-ish use cases?


It's always been good at OLTP and great at OLAP. We used it for years as a single HTAP solution during the 5.x versions for heavy adtech applications.


Thanks! Just curious, do you use the same tables for both OLTP and OLAP, or do you have some kind of pseudo-ETL process to transform between OLTP and OLAP schemas?


The schemas are the same. Tables behave the same way logically; it's just the performance and physical semantics that are different.

We would have recent data in rowstore and move older data into columnstore. You can easily join/union between both for queries. Also some constantly changing data (like budget counters) would always remain in rowstore with many lookups and updates per-second.
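For example, a query spanning both might look something like this sketch (table and column names are entirely hypothetical):

```sql
-- Hypothetical names: recent data in an in-memory rowstore table,
-- history in an on-disk columnstore table, combined at query time.
SELECT event_day, COUNT(*) AS events
FROM (
    SELECT event_day FROM events_recent     -- rowstore (hot data)
    UNION ALL
    SELECT event_day FROM events_archive    -- columnstore (history)
) AS all_events
GROUP BY event_day;
```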


Is MemSQL a replacement for a full RDBMS in your stack? Or just for analytics -- siphoning off the data from your master?


MemSQL is an "HTAP" (hybrid transactional-analytical processing) database. They have an in-memory row store table engine for the transactional queries and a disk-backed column store table engine for the analytical queries.


It can be, but if you're using the rowstore tables (best for single row lookups and constant modifications) then you'll have to keep all that data in RAM which can be limiting.


Yes, we have customers that standardize their database workloads on MemSQL and move off of Oracle.


Personally, I would not use memsql as a first line RDBMS. It lacks too many useful features, and it doesn't return consistent results for all queries (I've found inconsistencies when using group by with having, for instance).


Been using MemSQL for an enterprise-level financial services client since mid-2017. We have this in production and are running it in a multi-TB cluster. We've not seen ANY of these issues here and are heavy, crazy query users. Their support has been nothing but on-the-spot and very helpful.


I totally admit that the inconsistency might be a result of misconfiguration by maintainers at my company. That being said, being able to shoot yourself in the foot so subtly and badly via configuration seems like a pretty strong anti-feature.


Since you aren't sharing any actual details and are switching between blaming the product and blaming your devs and config (where no product can magically keep you from breaking settings), your comments come across as rather disingenuous.

Why not share a clear example of exactly what happened, or post on their forum with details, so we can all judge for ourselves?


If you have an example of the query you thought was returning inconsistent results, feel free to make a post over at https://www.memsql.com/forum/ with a repro.


Is there a way on this release to create auto-scale rules like on RDS?


Sort of a shameless plug, but if you are looking into such a solution you could also have a look at https://redisql.com/

It is a Redis module that embeds SQLite, bringing a lot of advantages to the table. It's extremely fast, and with it you can even upgrade your Redis instance to be your only database.

There is not a huge company behind it, but I really do my best to support it. I don't believe I have disappointed any of our users so far.

Also the tech documentation: http://redbeardlab.tech/rediSQL/references/


Interesting project but this is very different from MemSQL. Perhaps you should do a separate "show hn" post.


This is awesome! Exactly what I was looking for. Basically a SQLite server-node. Replace my homebrew Go thing.


I am very glad you find it interesting!

For any issues don't hesitate to contact me directly or through GitHub.

Also, if you need it for some open source / "make the world a better place" kind of project, I am more than happy to provide the PRO version free of any charges or obligations.

Of course this applies to anybody reading!


Cool project! I will check this out.

Note: you have a typo on your home page: "never loose a bit" 'loose' => 'lose'


Spotted another typo: simplicity in "The power of SQL with the simplicty of Redis" is missing an i


Thanks guys!


I was just looking into in-memory SQL DBs so this is great. Does it have ODBC support?


No unfortunately it does not support ODBC.

It works on top of Redis, so it inherits its interface. I guess it would be possible to add an ODBC layer, but honestly I'd have to look into it...


Sweet!



