I didn't quite understand why this was made. We create our local test environments using docker-compose, and so I read:
> Creating reliable and fully-initialized service dependencies using raw Docker commands or using Docker Compose requires good knowledge of Docker internals and how to best run specific technologies in a container
This sounds like a <your programming language> abstraction over docker-compose, which lets you define your docker environment without learning the syntax of docker-compose itself. But then
> port conflicts, containers not being fully initialized or ready for interactions when the tests start, etc.
means you'd still need a good understanding of docker networking, dependencies, and healthchecks to know if your test environment is ready to be used.
Am I missing something? Does this basically just change what's starting your docker test containers?
It shows how you can embed the declaration of a test database directly in a unit test:
> pgContainer, err := postgres.RunContainer(ctx,
>   testcontainers.WithImage("postgres:15.3-alpine"),
>   postgres.WithInitScripts(filepath.Join("..", "testdata", "init-db.sql")),
>   postgres.WithDatabase("test-db"),
>   postgres.WithUsername("postgres"),
>   postgres.WithPassword("postgres"),
>   testcontainers.WithWaitStrategy(
>     wait.ForLog("database system is ready to accept connections").
>       WithOccurrence(2).WithStartupTimeout(5*time.Second)))
This does look quite neat for setting up test-specific database instances instead of spawning one outside of the test context with docker(compose). It should also make it possible to run tests that require their own instance in parallel.
And, to quote non-code text, you have to do it manually; there is no formatting operator and the code-indent method won’t work (unreadable at many browser widths). I tend to do it like so:
> *Paragraph one.*
> *Paragraph two. Etc.*
Which produces the desired effect:
> Paragraph one.
> Paragraph two.
(To use a * in a paragraph that’s italic-wrapped, backslash it.)
This seems great but is actually quite slow. This will create a new container, with a new postgres server, and a new database in that server, for each test. You'll then need to run migrations in that database. This ends up being a huge pain in the ass.
A better approach is to create a single postgres server one-time before running all of your tests. Then, create a template database on that server, and run your migrations on that template. Now, for each unit test, you can connect to the same server and create a new database from that template. This is not a pain in the ass and it is very fast: you run your migrations one time, and pay a ~20ms cost for each test to get its own database.
I've implemented this for golang here — considering also implementing this for Django and for Typescript if there is enough interest. https://github.com/peterldowns/pgtestdb
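A minimal sketch of the template-database trick in Go (the template name app_template, the DSNs, and the driver are assumptions about your setup; this is roughly the pattern pgtestdb automates):

    import (
        "database/sql"
        "fmt"
        "strings"
        "testing"

        _ "github.com/lib/pq" // assumed postgres driver
    )

    // NewTestDB clones a fresh database from a pre-migrated template.
    // Migrations ran once, against app_template; this clone is the ~20ms step.
    func NewTestDB(t *testing.T, adminDSN string) *sql.DB {
        t.Helper()
        admin, err := sql.Open("postgres", adminDSN)
        if err != nil {
            t.Fatal(err)
        }

        name := "test_" + strings.ToLower(strings.ReplaceAll(t.Name(), "/", "_"))
        // CREATE DATABASE ... TEMPLATE copies the already-migrated schema.
        if _, err := admin.Exec(fmt.Sprintf(`CREATE DATABASE %q TEMPLATE app_template`, name)); err != nil {
            t.Fatal(err)
        }

        db, err := sql.Open("postgres", "postgres://postgres:postgres@localhost:5432/"+name+"?sslmode=disable")
        if err != nil {
            t.Fatal(err)
        }
        t.Cleanup(func() {
            db.Close() // connections must be closed before the database can be dropped
            admin.Exec(fmt.Sprintf(`DROP DATABASE %q`, name))
            admin.Close()
        })
        return db
    }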
As a user of testcontainers I can tell you they are very powerful yet simple.
Indeed, all they do is provide an abstraction for your language, but this is so useful for unit/integration tests.
At my work we have many microservices in both Java and Python, all of which use testcontainers to set up the local env or integration tests. The integration with LocalStack, and the ability to set it up programmatically without fighting with compose files, is something I find very useful.
Testcontainers is great. It's got seamless JUnit integration and really Just Works. I've never once had to even think about any of the docker aspects of it. There's really not much to it.
It’s not coming across in your comment, but Testcontainers can work with unit tests to start a container, run the unit tests, and shut down. For example, to verify database operations against the actual database, the unit test can start an instance of Postgres, run the tests, and then shut it down. If running tests in parallel, each test can start its own container and shut it down at the end.
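In the Go library, that lifecycle looks roughly like this (a sketch built from the snippet quoted upthread; ConnectionString and Terminate are testcontainers-go postgres-module calls):

    import (
        "context"
        "testing"
        "time"

        "github.com/testcontainers/testcontainers-go"
        "github.com/testcontainers/testcontainers-go/modules/postgres"
        "github.com/testcontainers/testcontainers-go/wait"
    )

    func TestUserRepository(t *testing.T) {
        ctx := context.Background()

        // Start a throwaway Postgres instance for this test alone.
        pgContainer, err := postgres.RunContainer(ctx,
            testcontainers.WithImage("postgres:15.3-alpine"),
            testcontainers.WithWaitStrategy(
                wait.ForLog("database system is ready to accept connections").
                    WithOccurrence(2).WithStartupTimeout(5*time.Second)),
        )
        if err != nil {
            t.Fatal(err)
        }
        // Shut the container down when the test finishes.
        t.Cleanup(func() { _ = pgContainer.Terminate(ctx) })

        dsn, err := pgContainer.ConnectionString(ctx, "sslmode=disable")
        if err != nil {
            t.Fatal(err)
        }
        _ = dsn // ... open dsn with database/sql and assert against the real database.
    }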
Wouldn't that just massively, _massively_ slow down your tests, if each test was spinning up its own Postgres container?
I ask because I really like this and would love to use it, but I'm concerned that that would add just an insane amount of overhead to the point where the convenience isn't worth the immense amount of extra time it would take.
A better approach is to spin up one container and a _template_ database before the tests. Apply migrations to that database. Then, each test creates its own database from the template, runs, and drops the database.
Tests can be run in parallel, and they are fast because the database is prepared just once, tests simply make a copy.
We're doing this at my company, and I'm happy with how it works.
Testcontainers are for testing individual components, apart from the application.
I built a new service registry recently; its unit tests spin up a ZooKeeper instance for the duration of the test, and then kill it.
Also very nice with databases. Spin up a clean db, run migrations, then test db code with zero worries about accidentally leaving stuff in a table that poisons other tests.
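With testcontainers-go, that kind of throwaway dependency can be written with the generic container API (a sketch; the image tag and port are assumptions about the setup described):

    import (
        "context"
        "testing"

        "github.com/testcontainers/testcontainers-go"
        "github.com/testcontainers/testcontainers-go/wait"
    )

    func TestServiceRegistry(t *testing.T) {
        ctx := context.Background()

        // Spin up ZooKeeper for the duration of this test.
        zk, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
            ContainerRequest: testcontainers.ContainerRequest{
                Image:        "zookeeper:3.9", // assumed image/tag
                ExposedPorts: []string{"2181/tcp"},
                WaitingFor:   wait.ForListeningPort("2181/tcp"),
            },
            Started: true,
        })
        if err != nil {
            t.Fatal(err)
        }
        // Kill it at the end of the test.
        t.Cleanup(func() { _ = zk.Terminate(ctx) })

        endpoint, err := zk.Endpoint(ctx, "") // host:mapped-port
        if err != nil {
            t.Fatal(err)
        }
        _ = endpoint // ... point the registry under test at endpoint.
    }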
> Also very nice with databases. Spin up a clean db, run migrations, then test db code with zero worries about accidentally leaving stuff in a table that poisons other tests.
Are you spinning up a new instance between every test case? Because that sounds painfully slow.
I would just define a function which DELETEs all the data and call it between every test.
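Against Postgres, that reset function can be a thin wrapper around TRUNCATE (a sketch; the table names are placeholders, and TRUNCATE ... CASCADE is usually faster than per-table DELETEs):

    import (
        "database/sql"
        "testing"
    )

    // resetTables wipes all rows between test cases. Replace the table list
    // with your schema's tables; RESTART IDENTITY also resets sequences.
    func resetTables(t *testing.T, db *sql.DB) {
        t.Helper()
        if _, err := db.Exec(`TRUNCATE TABLE users, orders, order_items RESTART IDENTITY CASCADE`); err != nil {
            t.Fatal(err)
        }
    }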
It supports both patterns (and variations in between), so you get to pick between isolation at the test level or, if you want less overhead, rolling back the transaction or other cleanup strategies.
Can only speak for the Golang version of the lib, but spinning up new instances was surprisingly quick.
I usually do one per suite with a reset method run before each test.
It's a decent compromise between performance and isolation, since weird interactions can only originate from the same suite, rather than anywhere in any test. Also permits parallel execution of db test suites.
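In Go, one suite-level container plus a per-test reset looks roughly like this (a sketch; resetTables is the hypothetical TRUNCATE helper from the comment above):

    import (
        "context"
        "database/sql"
        "log"
        "os"
        "testing"
        "time"

        _ "github.com/lib/pq" // assumed postgres driver
        "github.com/testcontainers/testcontainers-go"
        "github.com/testcontainers/testcontainers-go/modules/postgres"
        "github.com/testcontainers/testcontainers-go/wait"
    )

    var testDB *sql.DB // shared by every test in this suite

    func TestMain(m *testing.M) {
        ctx := context.Background()

        // One container for the whole suite, started before any test runs.
        pg, err := postgres.RunContainer(ctx,
            testcontainers.WithImage("postgres:15.3-alpine"),
            testcontainers.WithWaitStrategy(
                wait.ForLog("database system is ready to accept connections").
                    WithOccurrence(2).WithStartupTimeout(5*time.Second)),
        )
        if err != nil {
            log.Fatal(err)
        }

        dsn, err := pg.ConnectionString(ctx, "sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        if testDB, err = sql.Open("postgres", dsn); err != nil {
            log.Fatal(err)
        }

        code := m.Run()
        _ = pg.Terminate(ctx) // shut the suite's container down at the end
        os.Exit(code)
    }

    func TestSomething(t *testing.T) {
        resetTables(t, testDB) // reset state before each test in the suite
        // ... run assertions against testDB.
    }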
This looks to be just language-specific bindings over the docker-compose syntax. You're right that docker compose handles all of the situations they describe.
The major issue I had with docker compose in my CI environment is flaky tests when a port is already used by another job I don't control. With testcontainers, I haven't seen any false positives, as I can use whatever port is available rather than hardcoding one and hoping it won't conflict with what other people are doing.
Unless I'm mistaken, this is only a problem if you're forwarding ports from the Docker containers to the host machine, which isn't necessary if the test itself is running from inside a Docker container on the same bridge network as your dependencies. (Which compose will set up for you by default.)