In addition to k3b's answer, which suggests:

If you want to use "predefined database-content" you should have a test-database-setup script so the database can be easily set up and loaded on a developer-database engine.

It's not that simple. You cannot simply rely on an existing DB that could come and go or change at any time, for unknown reasons and at the hands of different actors.1

No, you should deploy your own DB. Whether once per test or once per the whole suite is up to you. This is important if you want your tests to be deterministic and to behave as closely as possible to what you expect in production. It's important for CI too: the builds might run in dedicated environments that might not have access to the DB. Think of running builds in the cloud. In my opinion, these tests should behave like unit tests in this regard: they should run any time, in any environment.
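As a rough sketch of what "deploy your own DB" can look like in practice (assuming a Java project with JUnit 5 and Testcontainers; the class name UserRepositoryIT and the schema.sql seed script are made up for illustration), a disposable PostgreSQL container is started for the test class and thrown away afterwards:

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import static org.junit.jupiter.api.Assertions.assertTrue;

    @Testcontainers
    class UserRepositoryIT {

        // One disposable PostgreSQL instance per test class, not a shared server.
        // Pinning the image tag keeps vendor and version identical on every machine.
        @Container
        static PostgreSQLContainer<?> postgres =
                new PostgreSQLContainer<>("postgres:16-alpine")
                        .withInitScript("schema.sql"); // hypothetical seed script on the test classpath

        @Test
        void schemaIsLoadedAndQueryable() throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT count(*) FROM users")) {
                assertTrue(rs.next()); // the query runs against a real engine, not a mock
            }
        }
    }

Because the container is created by the test run itself, the same build works on a developer laptop or on a cloud CI agent, with no dependency on a pre-existing database.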

You must also consider matching the DB engine's vendor and version, for more reliable results.2

However, there's the problem of timing. Deploying DBs slows down test execution, and time is a limited and valuable resource you don't want to waste. As a consequence, you have to balance integration tests against other kinds of tests.
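One way to keep that cost down (a sketch, assuming the same Testcontainers setup as above) is to pay the start-up price once for the whole suite rather than once per test class, by sharing a single container through a common base class:

    import org.testcontainers.containers.PostgreSQLContainer;

    // The container is started once per JVM, in a static initializer,
    // and is reused by every test class that extends this base class.
    abstract class SharedPostgresTest {

        static final PostgreSQLContainer<?> POSTGRES =
                new PostgreSQLContainer<>("postgres:16-alpine");

        static {
            POSTGRES.start();
        }
    }

Test classes then extend SharedPostgresTest and read the connection details from POSTGRES.getJdbcUrl(), getUsername() and getPassword(), trading a little isolation between tests for a much shorter total run time.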

So, when would I run an on-the-fly DB instead of mocks or stubs?

  • When CI and CD timings are not constraining or critical.
  • When I need accuracy and precision in configurations and set-ups, overall system behaviour, approximate performance, etc.
  • When there are several teams (or devs) working on the same application, feature, task, etc.
  • When running tests in distributed environments.
  • When I want to keep test-code complexity at bay.3
  • When the benefits far outweigh the costs.

How to shift from one approach to the other depends on the context, the team and the resources at hand. I would not start even small-scale redesigns without letting others know, because the change of paradigm is important enough that everyone should know about it and embrace the idea as soon as possible.


1: Say there are other teams running tests on the same DB, or tests running against a different version of the same application.

2: You might think that deploying a fake or lightweight DB could do the job but, in my experience, they never behave exactly like the product they stand in for. In some cases I got unexpected and unpleasant behaviour in production that I could not detect during tests.

3: Introducing mocks increases the complexity of the test code. It's also a possible source of bugs, because we usually never test the test code. We could accidentally give the mock the wrong behaviour, which is why mocking 3rd-party components is not advisable.
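As a contrived illustration of footnote 3 (the UserRepository interface and the case-sensitivity detail are invented for the example), a mock cheerfully encodes whatever behaviour we hand it, whether or not the real engine agrees:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.Optional;

    class MockAssumptionExample {

        interface UserRepository {
            Optional<String> findByEmail(String email);
        }

        void wrongAssumptionGoesUnnoticed() {
            UserRepository repo = mock(UserRepository.class);

            // Here we teach the mock that an upper-case address finds the user,
            // i.e. we assume the database comparison is case-insensitive.
            // If the real column uses a case-sensitive collation, the real query
            // returns nothing, yet every test built on this mock keeps passing.
            when(repo.findByEmail("ALICE@EXAMPLE.COM"))
                    .thenReturn(Optional.of("alice"));
        }
    }

The test is now asserting our assumption about the database rather than the database's actual behaviour, which is exactly the kind of bug a disposable real DB would have caught.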
