Software testing checks how software behaves under different conditions. These conditions may vary depending on who the audience is. Testing shows whether the software works correctly, partially fails, or fails completely. Each test may examine how one part, or many parts, of the software work at a point in its development.
Proper performance may be judged against specific (written) requirements or against standards such as usability. Bad performance, or poor quality, can leave the audience unhappy, which may mean more work on the software and higher costs.
A review of the test results may show that some parts of the software work well while others need to be done again. Bad performance or software bugs may need to be fixed. After more work on the software, testing may be done again.
For larger software systems, teams may track how complete the set of tests is, what the test results are, and how quickly any problems are fixed. All this information helps in deciding how ready the software is and when it can be released to the final audience.
Software testing may be done on separate parts of the software, on a group of these parts, or on the entire software. It may also be done by letting a small number of intended users try the software under controlled settings (alpha testing), and then by a larger group of people under less controlled settings (beta testing).
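Testing a separate part of the software on its own is often called a unit test. A minimal sketch of the idea, in Python, might look like this; the function `add` and the checks are made up for illustration and are not from any real project:

```python
def add(a, b):
    """Return the sum of two numbers (an example "part" of a program)."""
    return a + b

def test_add():
    # Each assert checks the part under a different condition.
    assert add(2, 3) == 5      # normal input
    assert add(-1, 1) == 0     # mixed signs
    assert add(0, 0) == 0      # edge case: zeros
    print("all tests passed")

test_add()
```

If any assert fails, the test stops with an error, telling the developers that this part of the software needs more work before it is combined with other parts or released.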