May 2008

"A manager may be more interested in the overall quality rather than in a specific quality characteristic, and for this reason will need to assign weights, reflecting business requirements, to the individual characteristics."

ISO 9126

For a company, the choice to adopt software as a component of its information system, whether that software is open source or commercial, rests on an analysis of its needs and constraints and on how well the software addresses them.

However, when a company plans to study the adequacy of open source software (OSS), it needs a qualification and selection method adapted to the characteristics of this type of software, and it must examine precisely the constraints and risks specific to OSS. Since the open source field has a very broad scope, it is also necessary to use a qualification method that differentiates between numerous candidates against technical, functional and strategic requirements.

This document describes the QSOS (Qualification and Selection of Open Source Software) method, conceived by the technology services company Atos Origin SA to qualify, select and compare OSS in an objective, traceable and reasoned way. The method can be integrated into a more general technology-watch process, which is not presented here. It defines a process for producing identity cards and evaluation sheets for OSS.

Why a Methodology?

When evaluating software, the following questions naturally arise:

  • which software best meets the actual or planned technical requirements?
  • which software best meets the actual or planned functional requirements?

In addition, every company should answer these questions before making any decision:

  • what is the durability of the software, what are the risks of forks, and how can they be anticipated and managed?
  • what level of stability can be expected and how will malfunctions be managed?
  • what level of support is expected and available for the software?
  • is it possible to influence further development of the software with the addition of new or specific functionalities?

To answer these questions and set up an efficient risk management process, it is imperative to have a method allowing:

  • qualification of software that takes into account the specific characteristics of open source
  • comparison of software against formalized requirements and weighted criteria, in order to make a final choice

Why a Free Methodology?

We believe that the method, as well as the results it generates, must be made available to all under the terms of a free license. A free license helps promote the open source movement because it provides:

  • the ability for anyone to re-use existing qualification and evaluation work
  • the quality and objectivity of the generated documents, which are improved through transparency and peer review

For these reasons, we decided to make the QSOS method, and the documents generated during its application (functional grids, identity cards and evaluation sheets), available under the terms of the GNU Free Documentation License.

General Process

The general process of QSOS is made up of four interdependent steps:

  1. Definition: creation of frames of reference used in the following steps.
  2. Evaluation: scoring of the software on three axes of criteria: i) functional coverage; ii) risks for the user; and iii) risks for the service provider. The evaluation is made independently of any particular user or customer context.
  3. Qualification: weighting of the criteria on the three axes to model the context: the user's requirements and/or the strategy set by the service provider.
  4. Selection: application of the filter defined in step three to the data produced in the first two steps, in order to query, compare, and select products.

Figure 1 provides a visualization of the four-step QSOS process. Each of these steps is detailed further in this document.

Figure 1: The Four Steps of QSOS

The general process introduced here can be applied with different granularities. The desired level of detail can be chosen for the process, which can also advance through iterative loops that refine each of the four steps.

Tools developed by Atos Origin to apply the QSOS method in a coherent way, notably the O3S tool, are available to the community to coordinate the creation, modification and use of QSOS evaluations.

Step 1: Definition

The objective of this step is to define the various frames of reference used by the three remaining steps of the general process. These frames of reference are:

Software families: hierarchical classification of software domains and description of the functional grids associated with each domain. This frame of reference evolves the most, since new functionalities introduced by evolving software must regularly be added to it.

Types of licenses: this frame of reference lists and classifies the major licenses used for OSS. The criteria chosen to describe such a license are: i) ownership (can the derived code become proprietary or must it remain free?); ii) virality (is another module linked to the source code affected by the same license?); and iii) inheritance (does the derived code inherit from the license or is it possible to apply additional restrictions?). Note that a piece of software or code can be published under the terms of several licenses, including closed source licenses.

Types of communities: classification of the community organizations that exist around OSS and are in charge of its life-cycle. The types of communities identified to date are: i) isolated developer, where the software is developed and managed by one person; ii) group of developers, where several people collaborate in an informal, non-industrialized way; iii) organization of developers, where a group of developers manages the software life-cycle in a formalized way, generally based on role assignment and meritocracy; iv) legal entity, which manages the community, generally holds the copyrights, and manages sponsorship and related subsidies; and v) commercial entity, which employs the project's main developers and is remunerated by the sale of services or of commercial versions of the software.
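
As a purely illustrative sketch (QSOS does not prescribe any particular data format, and all class and field names below are hypothetical), these three frames of reference could be modelled in Python roughly as follows:

# Minimal sketch of the Step 1 frames of reference (hypothetical names,
# not part of the QSOS specification or the O3S tool).
from dataclasses import dataclass, field
from enum import Enum


class CommunityType(Enum):
    ISOLATED_DEVELOPER = "software developed and managed by one person"
    DEVELOPER_GROUP = "informal collaboration of several developers"
    DEVELOPER_ORGANIZATION = "formalized organization based on roles and meritocracy"
    LEGAL_ENTITY = "legal entity managing the community and holding copyrights"
    COMMERCIAL_ENTITY = "company employing the project's main developers"


@dataclass
class LicenseType:
    name: str
    ownership: bool    # can derived code become proprietary?
    virality: bool     # are modules linked to the code affected by the same license?
    inheritance: bool  # does derived code inherit the license as-is?


@dataclass
class SoftwareFamily:
    name: str
    functional_grid: dict[str, str] = field(default_factory=dict)  # functionality -> description
    subfamilies: list["SoftwareFamily"] = field(default_factory=list)

Under this encoding, the GPL, for instance, would be recorded with ownership set to False and virality and inheritance set to True, since derived code cannot become proprietary.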

The O3S tool is designed to manage these frames of reference easily and to measure the impact that modifications to them have on data already collected during the other QSOS steps.

Step 2: Evaluation

The objective of this step is to carry out the evaluation of the software. It consists of collecting information from the open source community, in order to:

  • build the identity (ID) card of the software
  • build the evaluation sheet of the software, by scoring criteria split on three major axes: i) functional coverage; ii) risks from the user's perspective; and iii) risks from the service provider's perspective

Data constituting the identity card is raw and factual and is not directly scored. However, it is used as a basis for the scoring process described below. The main parts of an identity card are:

General information: this includes the: i) name of the software; ii) reference, date of creation, and date of release of the ID card; iii) author; iv) type of software; v) brief description of the software; vi) licenses to which the software is subjected; vii) project's webpage and demonstration site; viii) compatible operating systems; and ix) fork's origin, if the software is a fork.

Existing services: this component includes: i) documentation; ii) number of contractual support offers; iii) number of training offers; and iv) number of consultancy offers.

Functional and technical aspects: include the: i) technologies of implementation; ii) technical prerequisites; iii) detailed functionalities; and iv) roadmap.

Synthesis: includes the general trend and any comments.
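
Since the identity card is raw, factual data, it maps naturally onto a simple record. The following sketch is only an illustration; its field names are not those of the QSOS specification or the O3S tool:

# Illustrative sketch of an identity card record (field names are assumptions).
from dataclasses import dataclass, field


@dataclass
class IdentityCard:
    # General information
    name: str
    card_reference: str
    author: str
    software_type: str
    description: str
    licenses: list[str] = field(default_factory=list)
    project_url: str = ""
    operating_systems: list[str] = field(default_factory=list)
    fork_origin: str = ""          # empty if the software is not a fork
    # Existing services
    support_offers: int = 0
    training_offers: int = 0
    consultancy_offers: int = 0
    # Functional and technical aspects
    technologies: list[str] = field(default_factory=list)
    technical_prerequisites: list[str] = field(default_factory=list)
    roadmap: str = ""
    # Synthesis
    general_trend: str = ""
    comments: str = ""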

Every software release is described in an evaluation sheet. This document includes more detailed information than the identity card as it focuses on identifying, describing and analyzing in detail each evolution brought by the new release.

Criteria are scored from 0 to 2. These scores are used in step four to compare and select software according to weightings that represent the user's requirements specified in step three. The following sections describe the criteria used for each axis of evaluation. Note that the same or similar criteria can appear on different axes.

The functional grid is determined by the software's family and derives from the frame of reference established in step one. Consult the QSOS website for details of the functional grids by software family. For each element of the grid, the scoring rule is as follows:

Functionality          Score
Not Covered            0
Partially Covered      1
Completely Covered     2

In certain cases it is necessary to use several functional grids for the same piece of software; for instance, when it belongs to more than one software family. In this case, the functional criteria are distributed across separate axes so that the functional coverage for each family can be evaluated distinctly.
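
Applied in code, the scoring rule above is a simple three-level scale. The grid excerpt below is invented purely for illustration:

# Hypothetical example of scoring a functional grid on the 0/1/2 scale.
from enum import IntEnum


class Coverage(IntEnum):
    NOT_COVERED = 0
    PARTIALLY_COVERED = 1
    COMPLETELY_COVERED = 2


# Fictional excerpt of a functional grid for an imaginary "mail server" family.
functional_scores = {
    "SMTP relaying": Coverage.COMPLETELY_COVERED,
    "Webmail interface": Coverage.PARTIALLY_COVERED,
    "Calendar sharing": Coverage.NOT_COVERED,
}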

The "risks from the user's perspective" axis of evaluation includes criteria to estimate risks incurred by the user when adopting OSS. Scoring of criteria is done independently of any particular user's context as the context is considered later in step three. Criteria are split into five categories:

  • intrinsic durability
  • industrialized solution
  • integration
  • technical adaptability
  • strategy

Tables detailing each of these categories and their subcategories, including the scoring rule to be used for each criterion, are available on the QSOS website.

The "risks from the service provider's perspective" axis of evaluation regroups criteria to estimate risks incurred by a contractor offering services around OSS such as expertise, integration, development, and support. It is notably on this basis that the level of commitment can be determined.

It is possible to iterate the QSOS process. At the evaluation step, this makes it possible to score criteria in three passes of increasing granularity:

  • first the five main categories
  • then the subcategories of each category
  • finally every remaining criterion

The general process is thus not blocked if some criteria have not yet been scored. Once all criteria have been scored, the scores of the first two levels are calculated as the weighted average of the scores of the level directly below.

The O3S tool allows the entry of raw data and the evaluation of software on the three major axes, as well as generation of the identity cards of evaluated software.

The granularity of evaluation is managed as follows: as long as not all the criteria composing a subcategory have been scored, its score is not calculated but entered manually by the user; as soon as they are all scored, its score is calculated automatically.
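
The roll-up and granularity rules just described can be sketched as follows; the criterion tree, weights and example scores are illustrative, not taken from an actual QSOS evaluation:

# Sketch of the score roll-up: a category's score is the weighted average of
# its children, computed automatically only once every child has been scored;
# otherwise a manually entered score is used. (Names and weights are assumed.)
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Criterion:
    name: str
    weight: float = 1.0            # relative weight within the parent category
    score: Optional[float] = None  # leaf score (0, 1 or 2), or manually entered score
    children: list["Criterion"] = field(default_factory=list)

    def resolved_score(self) -> Optional[float]:
        if not self.children:
            return self.score
        child_scores = [child.resolved_score() for child in self.children]
        if any(s is None for s in child_scores):
            # Not every child is scored yet: keep the manually entered score.
            return self.score
        total_weight = sum(child.weight for child in self.children)
        weighted = sum(child.weight * s for child, s in zip(self.children, child_scores))
        return weighted / total_weight


# Example: an "intrinsic durability" category scored in two passes.
durability = Criterion("intrinsic durability", score=1.5, children=[
    Criterion("maturity", score=2),
    Criterion("adoption", score=1),
    Criterion("development leadership", score=2),
    Criterion("activity"),         # not yet scored at this pass
])

print(durability.resolved_score())  # 1.5 (manual) until "activity" is scored

Once the "activity" criterion receives a score, the category's score switches automatically to the weighted average of its four children, as described above.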

Step 3: Qualification

The objective of this step is to define filters that translate the needs and constraints related to the selection of OSS. This is achieved by qualifying the user's context, which will be used later in step four.

A first level of filtering can be defined on data from the software's ID card. For instance, one could consider only software from a given family, or only software that is compatible with a given operating system. In general, although it is not mandatory, this filter does not include any weighting; it is mostly used to eliminate software that is inadequate in the user's specific context.

Each functionality is assigned a requirement level chosen from the following: i) required functionality; ii) optional functionality; and iii) not required functionality. These requirement levels are linked to weighting values in step four, according to the selected mode of selection.

The relevance of each criterion on the "user's risks" axis is set according to the user's context to one of three levels: i) irrelevant, and therefore excluded from the filter; ii) relevant; and iii) critical. This relevance is converted into a numerical weighting value in the following step, according to the chosen mode of selection.

The "filter on service provider's risks" is used by a service provider to evaluate software and services to be integrated into its offering and to determine the associated levels of commitment. The O3S tool allows the definition of these different filters.

Step 4: Selection

The objective of this step is to identify software fulfilling the user's requirements or, more generally, to compare software from the same family. Two selection modes are possible:

Strict selection: based on direct elimination as soon as software does not fulfill the requirements formulated in step three. Reasons for immediate elimination include: i) incompatibility with the filter on the ID card; ii) failing to provide a functionality required by the filter on the functional grid; and iii) failing to meet the relevance thresholds defined on the "user's risks" axis, as the score of a relevant criterion must be at least 1 and the score of a critical criterion at least 2. This method is very selective and may, depending on the user's requirements, return no eligible software. Software that passes the filter is given a total score, calculated by weighting.

Loose selection: this method is less strict; rather than eliminating non-eligible software, it ranks the candidates while measuring their gaps with respect to the applied filters.

The weighting value for both selection methods is based on the requirement level defined for each functionality of the functional grid, as follows:
 

Level of Requirement          Weight
Required Functionality        +3
Optional Functionality        +1
Not Required Functionality     0

The weighting value on the "user's risks" axis is based on the relevance of each criterion, as follows:
 

Relevance               Weight
Irrelevant Criterion    0
Relevant Criterion      +1 or -1
Critical Criterion      +3 or -3

The sign of the weight reflects a positive or negative impact with respect to the user's requirements.
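
As a sketch of how the strict elimination rules and the weighting tables above could be applied: the article does not state the exact threshold at which a required functionality counts as "not provided", so the code below assumes a coverage score of 0 means not provided; it also uses only positive weights and treats the total score as the weighted sum of scores, one plausible reading of "calculated by weighting". All example data is invented.

# Sketch of Step 4 strict selection, under the assumptions stated above.
REQUIREMENT_WEIGHTS = {"required": 3, "optional": 1, "not required": 0}
RELEVANCE_WEIGHTS = {"irrelevant": 0, "relevant": 1, "critical": 3}


def strict_select(functional_scores, requirements, risk_scores, relevance):
    """Return (eligible, weighted_score) for one piece of software."""
    # Eliminate on a missing required functionality.
    for functionality, level in requirements.items():
        if level == "required" and functional_scores.get(functionality, 0) == 0:
            return False, 0.0
    # Eliminate on risk criteria below the relevance thresholds.
    for criterion, rel in relevance.items():
        score = risk_scores.get(criterion, 0)
        if rel == "relevant" and score < 1:
            return False, 0.0
        if rel == "critical" and score < 2:
            return False, 0.0
    # Software kept: compute its total weighted score.
    total = sum(REQUIREMENT_WEIGHTS[level] * functional_scores.get(f, 0)
                for f, level in requirements.items())
    total += sum(RELEVANCE_WEIGHTS[rel] * risk_scores.get(c, 0)
                 for c, rel in relevance.items())
    return True, float(total)


eligible, score = strict_select(
    functional_scores={"ACID transactions": 2, "Spatial extensions": 1},
    requirements={"ACID transactions": "required", "Spatial extensions": "optional"},
    risk_scores={"maturity": 2, "adoption": 1},
    relevance={"maturity": "critical", "adoption": "relevant"},
)
print(eligible, score)  # True 14.0  (3*2 + 1*1 on functionality, 3*2 + 1*1 on risks)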

Software from the same family, sharing a common functional grid, can also be compared using the weighted scores determined earlier. Figure 2 is provided as an example only; its weightings on the various axes are not representative of every possible use of relational database management systems (RDBMS).

Figure 2: Comparison of RDBMS on QSOS Axes

Besides implementing the strict and loose selection modes, the O3S tool also enables the consultation of data related to a specific piece of software (ID card and evaluation criteria) and the comparison of software in the same family (in full, filtered, or differentially).

Conclusion

The vast amount of available OSS requires a methodology for evaluating potential candidates against business requirements. The QSOS methodology allows for an iterative needs analysis for gauging the technical, functional, and strategic capabilities of OSS products. The QSOS website centralizes documents and information on the methodology and on the creation, modification, and certification of functional grids, ID cards, and evaluation sheets.

This article is based on QSOS version 1.6, which is copyright Atos Origin under the terms of the GNU FDL and included in this issue with permission from the copyright owner. The original document and its LaTeX source are available from the QSOS website.
