CLASP Password Manager
Test Plan
COP4331 - Processes for Object Oriented Software Development - Fall 2014

Modification History:

Version Date Who Comment
v0.0 09/08/2014 Cindy Harn Created Template / Empty Document
v1.0 09/16/2014 Thomas Bergens Initial upload from Google Doc to template
v1.1 09/18/2014 Cindy Harn Updated and uploaded the remainder of the sections
v1.2 09/18/2014 Thomas Bergens Made some minor changes and added an additional Test Case
v2.0 10/21/2014 Cindy Harn Upgraded the design / layout of the page.

Team Name: Group 8

Team Website: http://www.cs.ucf.edu/courses/cop4331/fall2014/cop4331-8/

Team Members:


Contents of this Document

1. Introduction

2. Description of Test Environment

3. Overall Stopping Criteria

4. Description of Individual Test Cases


SECTION 1: Introduction

Overall Objective for Software Test Activity:

The overall objective of the Software Test Activity for the Password Manager is to ensure that all core functionality is present. In addition, the test process will be used to identify bugs in the interface or logic of the software. This allows the team to identify the most important issues and prioritize them: some must be addressed immediately, while others can be fixed later in the development process. Issues may also arise that the group decides are not worth fixing, or minor issues that can potentially be addressed before delivering the software. Automated unit testing is expected to be used wherever the functionality allows it, to support quick iteration of the software design.

Reference Documents


SECTION 2: Description of Test Environment

The Test Environment will consist, at a minimum, of all hardware and software available to the team. In the context of this project, that includes Windows and various Linux distributions as operating systems, in both 32-bit and 64-bit editions. For hardware, there is an assortment of modern and legacy laptops. On Windows, the team will test against all of the newer releases: Windows 8.1, 8, 7, and Vista. For Linux, the team plans to test against popular distributions such as Ubuntu, Fedora, Arch Linux, and others, using the most recent LTS release of each applicable distribution. The Linux kernel is expected to be reasonably up to date and still supported; this includes the most recent kernel versions as well as all currently supported LTS releases.

As the application currently targets Java-supported platforms, the group plans to test application functionality against all major Java Platform, Standard Edition releases on the platforms described above. The EOL Support Roadmap provided by Oracle will guide the team in this testing: http://www.oracle.com/technetwork/java/eol-135779.html
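One small check that could support this cross-release testing is reporting which Java major version a given environment is running, so results can be tagged per release. The sketch below is a hypothetical helper, not part of the application; it parses the `java.version` system property, handling both the legacy "1.x" form (Java 8 and earlier) and the newer "x.y.z" form.

```java
public class SupportedJavaCheck {

    /**
     * Extracts the major Java version from a java.version string.
     * Handles the legacy "1.x.y_zz" form (Java 8 and earlier) and the
     * modern "x.y.z" form, stripping qualifiers such as "-ea".
     */
    static int majorVersion(String version) {
        String[] parts = version.split("\\.");
        if (parts[0].equals("1")) {
            return Integer.parseInt(parts[1]);
        }
        return Integer.parseInt(parts[0].split("-")[0]);
    }

    public static void main(String[] args) {
        // Report the version of the JRE running this check.
        String running = System.getProperty("java.version");
        System.out.println("Running on Java " + majorVersion(running));
    }
}
```

A test run on each target platform could log this value alongside the results, making it easy to correlate failures with a specific Java release.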

The database and web service for the application will be developed and tested on a CentOS 6.5 Minimal installation within a Xen hypervisor, with the additional packages necessary for development, maintenance, backups, and deployment. The hardware at our disposal is a quad-core Xeon-series CPU with 2 GB of RAM; disk space is adjustable. For the database, the latest MySQL package available to CentOS through the official repositories or EPEL will be used for testing and deployment.


SECTION 3: Stopping Criteria

There will be three types of testing: Unit Tests, User Tests, and Developer Testing. Unit Tests will be run every time a major feature is merged into the master branch of our revision control. If a test fails, the issue should be addressed right away, as these tests verify the basic functionality of the application. The goal of the Unit Tests is to maintain a working prototype that covers the basic functionality with few issues.
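A unit test of the kind described above might be sketched as follows. `CredentialStore` here is a hypothetical stand-in for the application's real credential list, and the checks are written with plain assertions rather than a specific test framework; the actual tests would exercise the real classes at each merge to master.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of a basic-functionality unit check run at each merge to master. */
public class CredentialStoreTest {

    /** Hypothetical stand-in for the application's credential list. */
    static class CredentialStore {
        private final Map<String, String> entries = new HashMap<String, String>();

        void add(String site, String password) { entries.put(site, password); }
        String get(String site) { return entries.get(site); }
        boolean remove(String site) { return entries.remove(site) != null; }
    }

    public static void main(String[] args) {
        CredentialStore store = new CredentialStore();

        // A stored credential should be retrievable.
        store.add("example.com", "s3cret");
        if (!"s3cret".equals(store.get("example.com")))
            throw new AssertionError("add/get failed");

        // Removing it should succeed and leave no entry behind.
        if (!store.remove("example.com"))
            throw new AssertionError("remove failed");
        if (store.get("example.com") != null)
            throw new AssertionError("entry not removed");

        System.out.println("all checks passed");
    }
}
```

Failures here would be treated as blocking, per the criterion above, since they indicate the prototype's basic functionality is broken.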

User Testing will occur at each major milestone, as determined by the developers. Bugs or issues found during these testing periods should be minimal and should not impact the basic functionality of the application. Any issues found should be conveyed to the developers, who will enter them into the issue tracker for investigation at the proper time.

Developer Testing will occur around the same time as the Unit Tests, when a new feature is merged into the master branch. This type of testing may even occur during development of the new feature, before any commits are made to the local repository or the master branch.

In the case of a major, or even fatal, error, the team is expected to investigate the issue in full detail to determine its cause and any possible workarounds. Code exhibiting these issues should not be merged into the master branch. Because development uses revision control with feature branches, such issues should not directly affect the master branch, which should always be free of fatal or major errors. If assistance is needed in investigating a fatal error, the feature branch should be pushed to the remote repository for others to clone and debug.

The software can be considered "good enough to deliver" once all major unit tests are consistently passing on all testing platforms; ten consecutive passing runs of each major unit test on each platform can be considered sufficient. In addition, the user and developer testing must be free of major issues as well. The software cannot be considered "good enough to deliver" if it does not meet these minimum requirements.

The software package will be deemed "good enough to deliver" if there are no known errors related to core functionality; cosmetic errors or bugs are acceptable. If there are performance issues, such as response delays in account creation, retrieval, or the login process, further analysis should determine whether the delay can be fixed in a reasonable amount of time or must be accepted for submission. All unit tests covering the core functionality and feature set of the software should pass before delivering the final product.


SECTION 4: Description of Individual Test Cases

Test Objective: This test will verify functionality of account creation.

Test Objective: This test will verify functionality of Recovery Mode.

Test Objective: This test will verify functionality of Master Password Reset Mode.

Test Objective: This test will verify a password submission to the existing list.

Test Objective: This test will verify that a sufficiently strong Master Password is required when creating an Account or recovering/changing the Master Password.

Test Objective: This test will verify a credentials submission to the existing list.

Test Objective: This test will verify a credentials removal from the existing list.

Test Objective: This test will verify a successful edit for a credentials entry within the credentials list.
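The Master Password strength objective above could be exercised against a rule like the one sketched below. The class name and the specific policy (at least 8 characters, with an upper-case letter, a lower-case letter, and a digit) are illustrative assumptions; the actual strength requirements are up to the developers.

```java
/** Hypothetical Master Password strength rule; the real policy may differ. */
public class MasterPasswordPolicy {

    /**
     * Returns true if the candidate is at least 8 characters long and
     * contains an upper-case letter, a lower-case letter, and a digit.
     */
    static boolean isStrong(String candidate) {
        if (candidate == null || candidate.length() < 8) return false;
        boolean upper = false, lower = false, digit = false;
        for (char c : candidate.toCharArray()) {
            if (Character.isUpperCase(c)) upper = true;
            else if (Character.isLowerCase(c)) lower = true;
            else if (Character.isDigit(c)) digit = true;
        }
        return upper && lower && digit;
    }

    public static void main(String[] args) {
        System.out.println(isStrong("Tr0ubador!")); // meets all criteria
        System.out.println(isStrong("password"));   // no upper case or digit
    }
}
```

The corresponding test case would submit passwords on each side of the rule's boundaries (too short, missing a character class, and fully compliant) during account creation and Master Password recovery/reset.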


This page last modified on October 21, 2014.

Please do not reproduce this page.