Acceptance Testing

Testing Scope

Testing Goals and Objectives

The objectives of the testing are:

  • The Court Acceptance Test (CAT) defines how CCPOR is tested against defined test cases to ensure that it meets the functional requirements; and
  • All critical components are tested and court/sheriff sign-off is received.

To achieve these objectives, the following goals are established:

  • The court, LEAs, and sheriff’s office will test the functionality of the CCPOR application; and
  • Test results will be logged, and any issues will be resolved before court go-live.

Assumptions

The court acceptance test assumes that all other tests have completed satisfactorily. This test will cover the following:

  1. The functional requirements as defined in this document
  2. Usability of the system as accessed in the Judicial Council staging environment

Exclusions

The court acceptance test will not cover the following, because these areas are covered by other tests:

  • The nonfunctional requirements
  • Integrity of the source code
  • Regression testing
  • System performance testing

Court Acceptance Test (CAT)

Responsibilities

The following table defines the structure and the primary roles and responsibilities during CAT. Resources can play dual roles if preferred (see below).

Resource Planning

Name: Judicial Council of California Staff

Role: CCPOR Team
Responsibilities:
  • Communicate with courts/sheriffs to agree on the format and scope of CAT
  • Provide the CAT plan to courts/sheriffs
  • Coordinate with courts/sheriffs on test dates and user account setup
  • Support court/sheriff testers during test execution
  • Ensure issues identified during CAT are logged for fixing
  • Ensure issues identified within the scope of the project phase are resolved and retested
  • Track test progress and help facilitate testing within the agreed timeframes
  • Schedule weekly meetings with courts/sheriffs during CAT
  • Obtain sign-off from courts/sheriffs on CAT approval

Role: Application Development Team
Responsibilities:
  • Provide fixes to priority 1 issues/defects immediately
  • Provide fixes to priority 2, 3, and 4 issues/defects
  • Release new versions as needed per reported issues/defects
  • Monitor logs and performance

Name: Court, Law Enforcement Agency (LEA), and Sheriff’s Office

Role: IT Deployment Manager, Supervisor, Restraining & Protective Order Operations
Responsibilities:
  • Agree on the format and scope of CAT
  • Agree on the acceptance criteria prior to commencing CAT
  • Schedule testing and coordinate test resources, including sheriffs
  • Assist with the creation of a court/sheriff detailed test plan
  • Support testers during test execution
  • Track test progress and help facilitate testing within the agreed timeframes
  • Attend weekly meetings during CAT as scheduled by the Judicial Council
  • Provide approval to close the CAT process

Role: Subject Matter Expert (SME) Testers (users of the R&PO process)
Responsibilities:
  • Provide training to court testers prior to CAT
  • Create a detailed test plan
  • Identify test data needed for scenarios
  • Execute court acceptance test scenarios/cases to ensure the application performs at an acceptable level
  • Ensure that issues identified during CAT are logged
  • Record and prioritize defects/issues
  • Retest as needed
  • Attend weekly meetings during CAT as scheduled by the Judicial Council

Testing Strategy

The CAT plan is a high-level guide and is not intended as a replacement for any specific court acceptance testing procedures that the court/sheriff might have. It is recommended that a detailed test plan be used to record the results of court/sheriff CAT testing.

Consideration: Test Approach

Details:

CCPOR Application Training:
  • The CCPOR Training Plan will be distributed before court acceptance testing. The plan covers training details: scope, suite, and courses.
  • Judicial Council trainers will provide training to court and sheriff staff on the CCPOR application software.
  • Training will consist of a walkthrough of the primary test scenarios with hands-on practice.

Court detailed test plan:
  • Test scenarios/test cases to be identified in the plan.
  • Test results (pass/fail) to be recorded with a step-through process.
  • A method to be used to track CAT progress for reporting (see the sketch after this table).

Test Execution:
  • Test data: Court users will identify test data that meets the criteria needed for the test scenarios (e.g., entry clerks, court and sheriff supervisors, orders).
  • Log-ins: Court users will use their authorized logins and be assigned their actual roles to ensure that the necessary features are available and unauthorized features are not.
  • Court/sheriff users will have access to the Judicial Council staging environment in which CAT will be performed.

Consideration: Testing Location

Details:
  • From a court location/desk
  • From a sheriff location/desk

Consideration: Defect/Issue Tracking

Details:
  • Court users: track which of their assigned scenarios have been tested; track the status of their assigned scenarios (pass/fail); and report and prioritize defects/issues (see the Defect Tracking section as a recommendation).
  • Judicial Council Application Development Team: reviews reported issues/defects and may follow up with the tester.
  • Court/sheriff users: report issues/defects, show stoppers in particular, immediately, and attend weekly CAT meetings as scheduled.

Test Support:
  • The court IT Deployment Manager and the Judicial Council Deployment Manager will oversee and triage testing activities as appropriate.
  • Weekly meetings will be scheduled with the courts and sheriff’s office for testing status updates.
  • Judicial Council Application Developers: available to fix show stoppers encountered during testing; monitor performance and system logs.

Consideration: Assumptions & Constraints

Details:
  • Court/sheriff users are set up for access to the CCPOR application
  • Court/sheriff connectivity to the CCTC Stage environment is in place
  • Integration testing (DOJ/CLETS/CARPOS) has passed

Consideration: Test Environment

Details:
  • Stage environment at CCTC
  • Court users’ IDs/passwords will be tested prior to CAT to ensure successful user connectivity

Consideration: Test Period

Details:
  • Testing window: to be determined in the Court Detailed Test Plan
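
As an illustration of the progress-tracking method called for above, the following minimal Python sketch records pass/fail results per test scenario and summarizes CAT progress for the weekly status meetings. The scenario IDs mirror the Test Scenarios table later in this section; the field names and sample entries are illustrative assumptions, and any spreadsheet or tracking tool the court already uses serves the same purpose.

    from dataclasses import dataclass

    @dataclass
    class ScenarioResult:
        test_id: str              # e.g. "3.4" from the Test Scenarios table
        name: str                 # e.g. "Add an Order"
        status: str = "Not Run"   # "Not Run", "Pass", or "Fail"
        notes: str = ""           # step-through observations, defect IDs

    def summarize(results):
        """Count scenarios by status for weekly CAT status reporting."""
        counts = {"Not Run": 0, "Pass": 0, "Fail": 0}
        for result in results:
            counts[result.status] = counts.get(result.status, 0) + 1
        return counts

    # Hypothetical tracker entries:
    tracker = [
        ScenarioResult("1.1", "URL Accessibility", "Pass"),
        ScenarioResult("3.4", "Add an Order", "Fail", "See defect #12"),
        ScenarioResult("3.6", "Modify an Order"),
    ]
    print(summarize(tracker))  # {'Not Run': 1, 'Pass': 1, 'Fail': 1}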

Entrance and Exit Criteria

Entrance and exit criteria define the quality conditions and deliverable prerequisites required to begin or end CAT. When the following criteria are met, the CCPOR system will be considered ready for court go-live. A simple checklist sketch follows the criteria table.

Test Level: CAT

Entrance Criteria:
  1. Test cases assigned
  2. Test data (orders) available
  3. Integration test (DOJ/CLETS/CARPOS) passed
  4. Stable environment
  5. Court/sheriff access to the staging environment
  6. Court/sheriff log-ins with access and permissions assigned

Exit Criteria:
  1. All tests executed
  2. All issues documented
  3. Functional requirements met
  4. All priority 1, 2, 3, and 4 test defects/issues addressed

Exit Deliverables:
  1. Tests, data, and results
  2. Updated tests

Suspension Criteria:
  1. Change request
  2. Design flaw
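
To make the exit criteria concrete, the sketch below walks the four criteria as a simple go/no-go checklist before CAT sign-off is requested. The hard-coded statuses are hypothetical; in practice they would be read off the court detailed test plan and the defect log.

    # Exit-criteria checklist; the True/False statuses are hypothetical.
    exit_criteria = {
        "All tests executed": True,
        "All issues documented": True,
        "Functional requirements met": True,
        "All priority 1-4 defects/issues addressed": False,
    }

    for criterion, met in exit_criteria.items():
        print(f"[{'x' if met else ' '}] {criterion}")
    print("Ready for CAT sign-off:", all(exit_criteria.values()))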

Test Scenarios

The test scenarios are based on the use cases identified during the requirements-gathering phase of the project. Test cases are built from the test scenarios. The following table identifies the high-level test scenarios to be tested during CAT.

Note: It is important that members of the court, LEAs, and the sheriff’s office have their own detailed test plan to test from and an agreed process to ensure that all application requirements are met before go-live.

Test #   Type
1.0      System Accessibility
1.1      URL Accessibility
1.2      Login & Password
2.0      Scanner Usability
2.1      Scan an Order
2.2      Name an Order
2.3      Save an Order
2.4      View an Order
2.5      Print an Order
3.0      CCPOR Application Functionality
3.1      Add Quick Attach
3.2      Search Quick Attach
3.3      Search an Order
3.4      Add an Order
3.5      View Draft Order
3.6      Modify an Order
3.7      Service an Order
3.8      Cancel an Order
3.9      View CARPOS Messages (as applicable)
4.0      CCPOR Integration Functionality
4.1      View CLETS Acknowledgement Messages

Defect Tracking

Defect tracking is the process of finding defects in the CCPOR application through testing and recording feedback to the Judicial Council of California. Defects are logged for corrective action and are fixed in the order of their assigned priority; a short sketch after the priority levels table illustrates this ordering.

Severity and Priority Levels for Defects/Issues

Severity measures the impact of an issue on the application; priority measures the business urgency of the issue. The severity levels indicate the impact of a defect and are defined as follows:

Severity Levels

Level 1: Critical
Critical defects (show stoppers) stop all testing, with no work-around, causing a suspension (e.g., a system crash or an unavailable page). These defects must be fixed immediately, before testing can resume. Testers must escalate critical defects to the reporting manager immediately.

Level 2: Major
Major defects block the testing of major functionality, with no work-around. These defects have the highest priority to be fixed after the completion of current testing and retested in the next round.

Level 3: Minor
Minor defects do not stop the testing effort or block major tests. They indicate normal defects where the solution design is not followed, requirements are not met, or expected results are not observed. They must still be fixed before the application code can be released to the next environment.

Level 4: Cosmetic
Cosmetic defects are minor functionality defects with a work-around, or purely cosmetic issues. They do not need to be fixed before the application code can be released or signed off in production; the fix can be deferred to maintenance or another phase of the project. However, fixing them now would result in a higher-quality release.

The priority levels indicate the urgency of fixing a defect and are defined as follows:

Priority Levels

Level 1: Urgent
Urgent priority defects are the most important, regardless of severity level, and must be fixed immediately, generally in the next round of testing and before go-live.

Level 2: High
High priority defects are the next most important and must be fixed before go-live.

Level 3: Medium
Medium priority defects are less important and do not need to be fixed before go-live.

Level 4: Low
Low priority defects are the least important and do not need to be fixed before go-live.
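
Because defects are fixed in the order of their assigned priority, the following minimal Python sketch models the two scales above and sorts a hypothetical defect log by priority, with severity as a tie-breaker. The Defect fields and sample entries are illustrative assumptions, not the project's actual tracking schema.

    from dataclasses import dataclass
    from enum import IntEnum

    class Severity(IntEnum):
        CRITICAL = 1   # show stopper, no work-around, suspends testing
        MAJOR = 2      # blocks major functionality, no work-around
        MINOR = 3      # deviation from design/requirements, testing continues
        COSMETIC = 4   # work-around exists or purely cosmetic

    class Priority(IntEnum):
        URGENT = 1     # fix immediately, before go-live
        HIGH = 2       # fix before go-live
        MEDIUM = 3     # fix not required before go-live
        LOW = 4        # least important

    @dataclass
    class Defect:
        defect_id: int
        summary: str
        severity: Severity
        priority: Priority

    # Hypothetical defect log; fixes are worked in priority order.
    log = [
        Defect(12, "Add an Order fails to save", Severity.MAJOR, Priority.URGENT),
        Defect(13, "Typo on search screen", Severity.COSMETIC, Priority.LOW),
        Defect(14, "Print layout clipped", Severity.MINOR, Priority.HIGH),
    ]
    for defect in sorted(log, key=lambda d: (d.priority, d.severity)):
        print(defect.defect_id, defect.summary)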

Functions To Be Tested

Full functionality of the CCPOR application includes the following components (see the lifecycle sketch after the descriptions):

Add Quick Attach
The Add Quick Attach use case allows a user to add a scanned image into the CCPOR database. The image can later be searched and converted into an order.

Search Quick Attach
The Search Quick Attach use case is used to search within CCPOR for an image entered through the Add Quick Attach function so that it can be converted into an order.

Add Order
The Add Order use case is used to add a new restraining and protective order (R&PO) to CCPOR and, optionally, submit it to DOJ CARPOS.

Draft Order
The Draft Order use case is used to save a partially entered R&PO in CCPOR in DRAFT status. CCPOR users may retrieve orders in DRAFT status, enter the remaining order information, and submit the order to CCPOR to place it in ACTIVE status and, optionally, submit it to DOJ CARPOS.

Modify Order
The Modify Order use case is used to modify R&PO data stored in CCPOR. If the order in CCPOR is in ACTIVE status, the modifications are also sent, optionally, to DOJ CARPOS to update the CARPOS file.

Service Order
The Service Order use case is used to add a proof of service (POS) for an existing R&PO in CCPOR. If the order exists in DOJ CARPOS, then CCPOR will, optionally, add the POS in CARPOS.

Cancel Order
The Cancel Order use case is used to cancel an R&PO in CCPOR. Orders can be cancelled for various reasons, such as the order being terminated by the court, the order being entered in error, or the restrained person being deceased. CCPOR, optionally, sends a Cancel Order message to DOJ CARPOS.

Search Orders
The Search Orders use case is used to search for R&POs in the CCPOR system.

CARPOS Messages
The CARPOS Messages view allows the user to see all messages/responses received from DOJ and take appropriate action based on the responses received.
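
The descriptions above imply a simple order lifecycle: an R&PO is saved in DRAFT status, becomes ACTIVE when submitted, can be modified or serviced while active, and can be cancelled, with an optional message to DOJ CARPOS at each step. The sketch below is a hypothetical state model assembled from those descriptions, not CCPOR's actual implementation; the transition table, the CANCELLED status name, and the function are assumptions for illustration.

    # Hypothetical R&PO lifecycle; DRAFT and ACTIVE come from the use-case
    # descriptions above, while CANCELLED and the transition table are
    # illustrative assumptions.
    TRANSITIONS = {
        ("DRAFT", "submit"): "ACTIVE",      # Add Order / complete a Draft Order
        ("ACTIVE", "modify"): "ACTIVE",     # Modify Order
        ("ACTIVE", "serve"): "ACTIVE",      # Service Order adds a proof of service
        ("ACTIVE", "cancel"): "CANCELLED",  # Cancel Order
    }

    def apply(status, action, submit_to_carpos=False):
        """Return the next status; optionally note the DOJ CARPOS submission."""
        next_status = TRANSITIONS.get((status, action))
        if next_status is None:
            raise ValueError(f"{action!r} not allowed in status {status!r}")
        if submit_to_carpos:
            print(f"(optional) {action} message sent to DOJ CARPOS")
        return next_status

    status = "DRAFT"
    status = apply(status, "submit", submit_to_carpos=True)  # -> ACTIVE
    status = apply(status, "cancel", submit_to_carpos=True)  # -> CANCELLED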