Evaluating and Improving the Process of Software Design with KPIs

Software design is one part of the larger domain of software development; other parts include specification, architecture, testing, and deployment.

Designing software invariably includes planning the implementation and resolving the architectural questions of the eventual product. It also means evaluating alternatives against pre-decided criteria, selecting the most efficient option and discarding the rest. This gives the task a central role in ensuring a good-quality outcome.

How complex the software design process is can be seen in the fact that most software designers would agree with the statement "design is itself a part of software design". This puzzling, self-referential statement arises from the sheer number of aspects the concept contains and the blurred boundaries between them, which can baffle anyone.

The numerous elements of software design are also highly interrelated, forming a network in which each entity is connected to virtually every other component of the mesh.

Another challenge comes from the reality that even a small, seemingly negligible flaw in a piece of software can lead to severely disruptive results.

Consequently, testing and debugging the software is yet another hurdle developers must overcome. Experience shows that these follow-up tasks can be more expensive than building the software itself, so verification consumes a major share of the effort and time devoted to the whole process.

In short, designing software is already a complicated exercise, so no compromise on the quality or the process of the job can be afforded. Moreover, requirements keep changing, and those changes must be mirrored in the software accurately and in a timely manner, because going back to the fundamentals for a full re-tooling is often impossible and, when it is possible, usually too expensive.

However, it is equally true that software does get built and deployed, and often with considerable success.

In such a scenario, the way out is to apply mechanisms that minimize errors at every step. One of the simplest methods is to keep a constant check on how the job is proceeding, how far the designers have succeeded in attaining their aims, and how the output fares against pre-set quality parameters.

This can be done with the quantitative overview of the process that a Balanced Scorecard (BSC) provides. The methodology employs metrics that are assigned target values against which the actual position is compared. Besides strengthening the chances of success, it also facilitates the timely detection of problems. Specific, relevant parameters can be framed in areas such as Quality; Exception Handling and Operations; Logical Size and Complexity; and Design Efforts.
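To make the idea concrete, the sketch below (in Python, using hypothetical metric names and target values) shows how scorecard entries might pair an actual measurement with its target and flag the ones that need attention. It is only an illustration of the approach, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """One scorecard entry: a measured value compared against a target."""
    perspective: str   # e.g. "Quality", "Logical Size and Complexity"
    name: str
    target: float
    actual: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        # A metric is on track when the actual value meets its target.
        return (self.actual >= self.target) if self.higher_is_better \
               else (self.actual <= self.target)

def report(metrics: list) -> None:
    # Print each metric with its target and flag the ones that miss it.
    for m in metrics:
        status = "OK" if m.on_track() else "ATTENTION"
        print(f"[{m.perspective}] {m.name}: target={m.target}, "
              f"actual={m.actual} -> {status}")

# Illustrative values only; real targets come from the project's own baseline.
report([
    Metric("Quality", "compatibility index", target=0.95, actual=0.91),
    Metric("Exception Handling and Operations", "run-time exceptions",
           target=5, actual=12, higher_is_better=False),
    Metric("Logical Size and Complexity", "cyclomatic complexity (avg)",
           target=10, actual=8, higher_is_better=False),
    Metric("Design Efforts", "review hours", target=40, actual=35),
])
```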

The Quality perspective can be evaluated via metrics such as the compatibility index, the security and fault-bearing ratio, the degree to which reliability claims are met, and other quality-enhancing features. The Exception Handling and Operations perspective can be tracked with the exception-failure fraction, the number of run-time exceptions, the number of manually created exceptions, and the number of operations. Further, the Logical Size and Complexity perspective can be assessed with the SLOC measure, average development time, cyclomatic complexity, and the blank-line contribution. Lastly, design efforts can be calculated from metrics such as the time spent on "study", "create", "review", "testing" and "correction". By putting these and similar parameters on a balanced scorecard, one can keep the software design process on track and greatly improve its chances of success.
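As an illustration of the Logical Size and Complexity metrics, the short Python sketch below computes SLOC, the blank-line contribution, and a crude cyclomatic-complexity estimate for a fragment of source code. The counting rules (which AST nodes count as decision points, how blank lines are defined) are assumptions chosen for the example; a real scorecard would fix them in its own measurement definitions.

```python
import ast

def size_and_complexity(source: str) -> dict:
    """Rough size and complexity figures for a piece of Python source."""
    lines = source.splitlines()
    blank = sum(1 for line in lines if not line.strip())
    sloc = len(lines) - blank

    # Crude cyclomatic-complexity estimate: 1 + number of decision points.
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While,
                                      ast.ExceptHandler, ast.BoolOp))
                    for node in ast.walk(tree))
    return {
        "sloc": sloc,
        "blank_line_contribution": blank / max(len(lines), 1),
        "cyclomatic_complexity_estimate": 1 + decisions,
    }

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        x -= 1
    return "non-negative"
"""
print(size_and_complexity(sample))
```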