@yngvem and I have discussed the test suite a bit, and we have thought of some ways that might improve it. We don't need to address all these points in one go, but we could try to be mindful about them for new additions and maybe incrementally split the tests we have into smaller ones? What do you guys think?
Test structure
Currently, TensorLy mainly has monolithic tests that check many things at once. However, best practice is to split large tests into smaller tests that (ideally) only check one thing at a time. This makes it easier to see what fails and what works.
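To illustrate the idea, here is a minimal sketch (the `normalise_columns` function is hypothetical, just a stand-in for a unit under test): instead of one big test asserting shape, norms, and reconstruction together, each property gets its own test so a failure pinpoints exactly which property broke.

```python
import numpy as np

# Hypothetical function under test: normalise the columns of a matrix
# and return the normalised matrix together with the original norms.
def normalise_columns(matrix):
    norms = np.linalg.norm(matrix, axis=0)
    return matrix / norms, norms

# Small, focused tests -- one property per test function.
def test_normalised_columns_have_unit_norm():
    matrix = np.random.default_rng(0).standard_normal((4, 3))
    normalised, _ = normalise_columns(matrix)
    assert np.allclose(np.linalg.norm(normalised, axis=0), 1.0)

def test_norms_reconstruct_original_matrix():
    matrix = np.random.default_rng(0).standard_normal((4, 3))
    normalised, norms = normalise_columns(matrix)
    assert np.allclose(normalised * norms, matrix)
```

If the second test fails while the first passes, we immediately know the norms are wrong rather than the normalisation itself.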
Better use of fixtures
If we split the monolithic tests into smaller tests, we should probably use fixtures more actively to avoid having to rewrite the same boilerplate code for each of the smaller test functions. So, for example, we could have fixtures for the different decompositions.
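A fixture for the shared setup could look roughly like this (the helper name, shapes, and rank are made up for the sketch; a real version would build actual TensorLy decompositions):

```python
import numpy as np
import pytest

def make_random_cp_factors(seed=42, shape=(5, 4, 3), rank=2):
    """Hypothetical helper: random CP-style factor matrices."""
    rng = np.random.default_rng(seed)
    return [rng.standard_normal((dim, rank)) for dim in shape]

@pytest.fixture
def random_cp_factors():
    # Shared setup: every small test receives fresh factor matrices
    # without repeating the construction boilerplate.
    return make_random_cp_factors()

def test_factor_shapes(random_cp_factors):
    assert [f.shape for f in random_cp_factors] == [(5, 2), (4, 2), (3, 2)]
```

Each small test then just declares `random_cp_factors` as an argument, and pytest handles the setup.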
Clearer separation of unit tests and integration tests
There is currently little difference between unit tests (which test a single unit of code) and integration tests (which test how several code units work together). It can be challenging to create a hard line separating the different types of tests, but it is a good strategy to be mindful of the difference and have both.
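One lightweight way to keep the distinction visible, assuming we adopt a custom marker convention (the `integration` marker and the `unfold` example below are illustrative, not existing TensorLy conventions), is pytest markers, so integration tests can be selected or skipped with `pytest -m integration` or `pytest -m "not integration"`:

```python
import numpy as np
import pytest

def unfold(tensor, mode):
    """Unit under test: mode-n unfolding of a tensor."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def test_unfold_shape():
    # Unit test: exercises a single function in isolation.
    assert unfold(np.zeros((2, 3, 4)), 1).shape == (3, 8)

@pytest.mark.integration
def test_unfold_roundtrip():
    # Integration-style test: checks that unfolding composes correctly
    # with the inverse (folding) step.
    tensor = np.arange(24.0).reshape(2, 3, 4)
    refolded = np.moveaxis(unfold(tensor, 1).reshape(3, 2, 4), 0, 1)
    assert np.array_equal(refolded, tensor)
```

Custom markers would need to be registered in the pytest configuration to avoid warnings.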
Better handling of random numbers
Currently, we use fixed random seeds. We could use pytest-randomly to get a new random seed every time we run the test suite, with command-line options to specify a seed manually or to re-run with the seed from the last run.
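Assuming pytest-randomly is installed, the workflow would look roughly like this (command-line fragment, not tied to any TensorLy configuration):

```shell
pip install pytest-randomly

# Each run gets a fresh seed, reported in the test session header.
pytest

# Reproduce a failure by pinning the seed manually...
pytest --randomly-seed=1234

# ...or repeat whatever seed the previous run used.
pytest --randomly-seed=last
```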