Alfresco is an open source, open-standards content repository built by one of the most experienced content management teams in the industry, including the co-founder of Documentum. The Alfresco product has a lean, modular component architecture that allows new functionality to be added without any system disruption and is significantly faster than proprietary commercial systems.
The Alfresco 3.0 product will include the existing JSF client along with a new client. It is the new client and its capabilities that the bulk of the 3.0 testing will concentrate upon.
Information can be found about Alfresco 3.0 in the following locations:
Roadmap - You can also read more about our 2008 Roadmap and 3.0 plan
Test Plan Objectives
This Test Plan for the Alfresco CMS supports the following objectives:
Define the activities required to prepare for and conduct System, Beta and User Acceptance testing.
Communicate to all responsible parties the System Test strategy.
Define deliverables and responsible parties.
Communicate to all responsible parties the various dependencies and risks.
Alfresco 3.0 provides a new web client targeted at performing specific tasks. The web client is engineered to run on Microsoft IE 6/7, Mozilla Firefox and Safari 3. The product also provides several APIs to access the functionality programmatically.
The web client provides an interface for adding, viewing and retrieving files from the system. There is also a CIFS interface to the repository that acts like a standard shared directory on the network and can have all standard actions associated with a shared network directory applied to it, e.g. offline synchronisation. The product also includes an FTP server for direct ftp access, and WebDAV.
Each user will need a user ID and password to log in to the system. The management of these details can be configured to use LDAP, NTLM, etc.
The test strategy consists of a series of different tests that will fully exercise the Alfresco 3.0 product. The primary purpose of these tests is to uncover the system's limitations and measure its full capabilities. A list of the various planned tests and a brief explanation follows below.
The System tests will focus on the behavior of the system. User scenarios will be executed against the system as well as screen mapping and error message testing. Overall, the system tests will test the integrated system and verify that it meets the requirements defined in the Wiki (see above for links to relevant requirements).
The API tests will focus on testing the REST API for interacting with the repository.
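As a sketch of the kind of check an API test might perform, the snippet below parses a login-ticket response; the `<ticket>` response shape is an assumption for illustration, not a confirmed Alfresco 3.0 contract.

```python
# Sketch of a REST API smoke-test helper. The <ticket> response format
# is an assumption for illustration only.
import xml.etree.ElementTree as ET

def extract_ticket(response_body: str) -> str:
    """Pull the authentication ticket out of a login response."""
    root = ET.fromstring(response_body)
    if root.tag != "ticket" or not root.text:
        raise ValueError("unexpected login response")
    return root.text

# A canned response such as the server might return:
sample = "<ticket>TICKET_0123456789abcdef</ticket>"
ticket = extract_ticket(sample)
assert ticket.startswith("TICKET_")
```

A real API test would issue the HTTP request, then apply checks of this shape to the response body.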
Security tests will determine how secure the system is. The tests will verify that unauthorized user access to confidential data is prevented. This includes XSS and injection techniques.
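One automatable slice of the XSS testing can be sketched as follows: feed classic attack strings through the output encoding the web tier is expected to apply, and assert that no raw markup survives. `html.escape` here is a stand-in for the real output encoder, not the actual Alfresco code path.

```python
# Minimal XSS escaping check. html.escape stands in for whatever
# output encoding the web client applies; the payloads are standard
# test strings, not Alfresco-specific.
import html

XSS_PAYLOADS = [
    "<script>alert(1)</script>",
    '"><img src=x onerror=alert(1)>',
    "javascript:alert(1)",
]

def is_neutralised(rendered: str) -> bool:
    """True if no raw tag delimiters remain after encoding."""
    return "<" not in rendered and ">" not in rendered

for payload in XSS_PAYLOADS:
    assert is_neutralised(html.escape(payload))
```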
A suite of automated tests will be developed to initially test the basic functionality of the system and allow regression testing to be performed on areas of the system that previously had blocker/critical defects.
Stress and Performance Test
We will subject the system to high input conditions and high data volumes. The system will ideally be stress tested using incrementally increasing volumes of both input and data to produce scaling graphs.
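The incremental measurement described above can be sketched as a loop that times an operation at increasing volumes and collects the (volume, seconds) points for a scaling graph. `run_op` is a hypothetical stand-in for a real repository operation such as a bulk upload.

```python
# Sketch of incremental stress measurement. run_op is a hypothetical
# stand-in for a real repository operation (e.g. bulk content upload).
import time

def scaling_points(run_op, volumes):
    """Time run_op at each volume; return (volume, seconds) pairs."""
    points = []
    for n in volumes:
        start = time.perf_counter()
        run_op(n)
        points.append((n, time.perf_counter() - start))
    return points

# Dummy CPU-bound operation for illustration:
points = scaling_points(lambda n: sum(range(n)), [10_000, 100_000, 1_000_000])
assert [n for n, _ in points] == [10_000, 100_000, 1_000_000]
```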
Multiuser / Concurrency Test
We will test the system with multiple users trying to interact with the same item concurrently to ensure there are no cases of data corruption, data loss or system failure. These tests will be conducted using an automated tool such as JMeter or Selenium.
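The shape of such a check can be sketched in miniature: many threads update the same item, and afterwards we assert that no updates were lost. The in-memory "item" with a lock is a stand-in for a repository node under concurrent edit, not the real implementation.

```python
# Sketch of a lost-update concurrency check. The Item class stands in
# for a repository node; the lock models repository-side serialisation.
import threading

class Item:
    def __init__(self):
        self.version = 0
        self._lock = threading.Lock()

    def edit(self):
        with self._lock:          # repository-side serialisation
            self.version += 1

item = Item()
threads = [threading.Thread(target=lambda: [item.edit() for _ in range(1000)])
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert item.version == 10 * 1000   # no lost updates
```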
We will subject the system to steady input conditions and steady data volumes over a long period to check for any potential issues with system uptime under a more realistic usage scenario. The system will ideally be tested using steady volumes of both input and data to produce system health graphs over the test period.
Recovery tests will force the system to fail in various ways and verify that recovery is properly performed. It is vitally important that all data is recovered after a system failure and that no corruption of the data occurs. This is especially important in clustered deployments.
Upgrade testing will ensure that customers can upgrade from previous versions of Alfresco to Alfresco 3.0 and be able to use the JSF client without issues. Any potential upgrade route that includes use of the 'new' client is still TBD.
The default deployment of Alfresco 3.0 for testing will be clustered.
Cluster layout / configuration TBD
The system will be validated against accessibility criteria, at a minimum matching Section 508 guidelines. Further criteria, such as the W3C Web Accessibility Initiative's WAI-ARIA and WCAG 2.0, may also be validated against.
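One Section 508 check that lends itself to automation can be sketched as below: every `<img>` in rendered markup must carry an `alt` attribute. The stdlib `html.parser` stands in for a full accessibility audit tool.

```python
# Sketch of one automatable accessibility check (text alternatives for
# images). html.parser stands in for a dedicated audit tool.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltChecker()
checker.feed('<p><img src="a.png" alt="logo"><img src="b.png"></p>')
assert checker.missing_alt == 1   # one image lacks a text alternative
```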
Tests will be conducted to check the accuracy of the user documentation. These tests will ensure that no features are missing, and the contents can be easily understood.
We will beta test the new system by providing 'Preview Releases' to the community and reporting any defects that are found. This will subject the system to tests that could not be performed in our test environment.
User Acceptance Test
Once the system is ready for implementation, we will perform User Acceptance Testing. The purpose of these tests is to confirm that the system is developed according to the specified requirements and is ready for operational use.
The test procedure is:
(For JSF Client)
Test Alfresco on each stack
Default configuration for DM is a 2 machine cluster
Default configuration for WCM is a standalone server with remote FSR machines
External authentication mechanism used by default.
Issues found on DM cluster are revalidated on a standalone DM install to determine if they are cluster specific or not
Issues found on 'exotic' stacks (i.e. one not used by the development team) are revalidated on the development stack of Win/Tomcat/MySQL to determine if they are generic issues across all stacks or specific to the 'exotic' stack.
When issues are found they are logged in Jira
When issues are fixed by an engineer they are assigned to a QA member for retest.
(For New 3.0 Client)
Test procedure TBD
See Support for list of stacks
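The revalidation rule in the procedure above can be sketched as data plus one function: issues found on an 'exotic' stack are re-run first on the reference stack. The stack entries below are illustrative; the authoritative list is on the Support page.

```python
# Sketch of the stack matrix the JSF-client procedure iterates over.
# Entries are illustrative; Win/Tomcat/MySQL is the development
# ("reference") stack against which 'exotic'-stack issues are revalidated.
STACKS = [
    {"os": "Windows", "app_server": "Tomcat", "db": "MySQL",  "reference": True},
    {"os": "Linux",   "app_server": "JBoss",  "db": "Oracle", "reference": False},
]

def revalidation_target(stack):
    """Where an issue found on this stack gets re-run first."""
    if stack["reference"]:
        return None                      # already on the reference stack
    return next(s for s in STACKS if s["reference"])

assert revalidation_target(STACKS[1]) is STACKS[0]
```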
The project team will perform reviews for each Phase as required. A meeting notice, with related documents, will be emailed to each participant.
Bug Review meetings
Regular meetings will be held when appropriate to discuss reported defects. The development department will provide status/updates on all defects reported and the test department will provide additional defect information if needed. The support department will flag any issues raised as Support issues. A member of all teams will participate.
Once testing begins, changes to the system are discouraged. If functional changes are required, these proposed changes will be discussed with the Change Control Board (CCB). The CCB will determine the impact of the change and if/when it should be implemented. The CCB consists of the QA Manager, the Development Manager, the Support Manager and Product Manager(s), with the CEO as final arbiter.
When defects are found, the testers will complete a defect report in Jira. The defect tracking system is accessible to all members of Alfresco as well as any external party who has registered with the system. When a defect has been fixed or more information is needed, the developer will change the status of the defect to indicate the current state. Once a defect is verified as FIXED by the testers, the testers will close the defect report.
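The lifecycle described above can be sketched as a small state machine. The state names approximate a typical Jira workflow; the exact scheme configured for the project may differ.

```python
# Sketch of the defect lifecycle as a state machine. State names
# approximate a typical Jira workflow and are illustrative only.
TRANSITIONS = {
    "OPEN":      {"FIXED", "NEED_INFO"},
    "NEED_INFO": {"OPEN"},
    "FIXED":     {"CLOSED", "REOPENED"},   # tester verifies or reopens
    "REOPENED":  {"FIXED"},
    "CLOSED":    set(),
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "OPEN"
state = move(state, "FIXED")     # engineer fixes
state = move(state, "CLOSED")    # tester verifies and closes
assert state == "CLOSED"
```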
The QA Manager and Project Manager / Development Manager will determine when system test will start and end. The QA Manager will be responsible for coordinating schedules, equipment, & tools for the testers as well as writing/updating the Test Plan, Weekly Test Status reports during formal system testing phases and Final Test Summary report. The testers will be responsible for writing the test cases, creating the automated tests and executing the tests. With the help of the QA Manager, the other people involved in the project will be able to input into various aspects of testing such as beta and user acceptance testing.
The test team consists of:
A Development Manager
Engineers when needed.
Development Manager - Responsible for Project schedules and the overall success of the project. Participate on CCB.
QA Manager - Ensures the overall success of the test cycles. Will coordinate any BRB meetings and will communicate the testing status to the project team. Participate on CCB.
Testers - Responsible for performing the actual system testing.
Usability Expert - Helps coordinate the Beta and User Acceptance testing efforts. Optional participation on CCB.
Engineers - Will write comprehensive unit tests, and will assist in performing testing when necessary.
Develop Test cases
Test Case Review
QA Manager, Dev. Lead, Testers
Develop Automated test suites
Requirements Validation Matrix
Execute manual and automated tests for product release
Complete Defect Reports
Everyone testing the product
Document and communicate test status/coverage
Execute Beta tests
Document and communicate Beta test status/coverage
Execute User Acceptance tests
Document and communicate Acceptance test status/coverage
Final Test Summary Report
End Of Project
Suspension / Exit Criteria
If any defects are found which seriously impact the test progress, the QA manager may choose to Suspend testing. Criteria that will justify test suspension are:
Hardware/software is not available at the times indicated in the project schedule.
Product under test contains one or more critical defects, which seriously prevents or limits testing progress.
Assigned test resources are not available when needed by the test team.
If testing is suspended, resumption will only occur when the problem(s) that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the fix must be verified by the test department before testing is resumed.
The test team requires experienced testers to develop, perform and validate tests. These testers must also be versed in automated testing.
The source code must be unit tested and provided within the scheduled time outlined in the Project Schedule.
The test servers and test client machines, as well as the LAN environment need to be available during normal working hours. Any downtime will affect the test schedule.
Test Data & Database
Test data should also be made available to the testers for use during testing.
The schedule for each phase is very aggressive and could affect testing. A slip in the schedule in one of the other phases could result in a subsequent slip in the test phase. Close management is crucial to meeting the forecasted completion date.
We will be running our test applications on multiple servers so that if one server develops faults we can still perform testing against the other servers. If, however, there is a requirement to perform testing against a platform or software that we do not currently have, the set-up time to implement this needs to be considered.
Management support is required so when the project falls behind, the test schedule does not get squeezed to make up for the delay. Management can reduce the risk of delays by supporting the test team throughout the testing phase and assigning people to this project with the required skills set.
Due to the aggressive schedule, it is very important to have experienced testers on this project. Unexpected turnover can impact the schedule. If attrition does happen, all efforts must be made to replace the experienced individual.
The test plan and test schedule are based on the current Requirements Document (or wiki plan). Any changes to the requirements could affect the test schedule and will need to be approved by the CCB.
The Mercury QuickTestPro Automated test tool will be used to help test the system. We have the licensed product onsite and installed.
Please click here to view the information on the Automated Test Scripts (note: current legacy scripts can be found there).
The following documentation will be available at the end of the test phase: