 Software Errors: Prevention and Detection

Posted by raj_mmm9 on Fri 11 Apr - 15:25

Most programmers are rather cavalier about controlling the quality of the software they write. They bang out some code, run it through some fairly obvious ad hoc tests, and if it seems okay, they’re done. While this approach may work all right for small, personal programs, it doesn’t cut the mustard for professional software development. Modern software engineering practices include considerable effort directed toward software quality assurance and testing. The idea, of course, is to produce completed software systems that have a high probability of satisfying the customer’s needs.

There are two ways to deliver software free of errors. The first is to keep errors from being introduced at all. The second is to seek out the bugs lurking in your code and destroy them. Obviously, the first method is superior. A big part of software quality comes from doing a good job of defining the requirements for the system you’re building and designing a software solution that will satisfy those requirements. Testing concentrates on detecting those errors that creep in despite your best efforts to keep them out.

In this article we’ll take a look at why the issue of software quality should be on the tip of your brain whenever you’re programming, and discuss some tried-and-true methods for building high-quality software systems. Then we’ll explore the strategies and tactics of software testing.

Why Worry About Software Quality?

The computer hobbyist doesn’t think much about software quality. We write some little programs, experiment with graphics tricks, delve into the operating system, and try to learn how the beast works. On occasion we write something useful, but mostly just for our own benefit. The “quality” of one-user, short-lived programs like these doesn’t really matter much, since they’re not for public consumption.

The professional software engineer developing commercial products or systems for use by his employer has a more serious problem. Besides the initial effort of writing the program, he has to worry about software maintenance. “Maintenance” is everything that happens to a program after you thought it was done. In the real world, software maintenance is a major issue. Industry estimates indicate that maintenance can consume up to 80 percent of a software organization’s time and energy. As a great example of the importance of software maintenance, consider how many versions of the Macintosh operating system have existed prior to version 7.0. Each new system version was built upon the previous one, probably by changing some existing code modules, throwing out obsolete modules, and splicing in new ones.

The chore of maintenance is greatly facilitated if the software being changed is well-structured, well-documented, and well-behaved. Too often, however, code is written sloppily and badly structured. Over time, it becomes such a tangle of patches, fixes, and kludges that eventually you’re better off rewriting the whole module rather than trying to patch it once again. High-quality software is designed to survive a lifetime of changes.

What is Software Quality?

Roger Pressman, a noted software engineering author and consultant, defines software quality like this:

Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.

A shorter definition is that a high-quality software system is one that’s delivered to the users on time, costs no more than was projected, and, most importantly, works properly. “Working properly” implies that the software must be as nearly bug-free as possible.

While these are workable definitions, they are not all-inclusive. For example, if you build a software system that conforms precisely to a set of really lousy specifications, do you have a high-quality product? Probably not. Part of our job as software developers is to help ensure that the system specs themselves are of high quality (i.e., that the specs properly address the user’s needs), as well as building an application that conforms to this spec.

A couple of other important points are implied in this definition. One is that you HAVE specifications for the programs you’re writing. Too often, we work from a fuzzy notion of what we’re trying to do. This fuzzy image becomes refined over time, but if you’ve been writing code during that time, you’ll probably find that much of it has to be changed or thrown out. Wouldn’t you rather think the problem through in detail up front, and only code it once? I’ve discovered through years of practice that this approach results in a much better product than if I just banged out the code on the fly.

Another implication is that “software” includes more than executable code. The “deliverables” from a software development project also include the written specifications, system designs, test plans, source code documentation, and user manuals. Specifications and designs might include narrative descriptions of the program requirements and structure, graphical models of the system (such as data flow diagrams), and process specs for the modules in your system.

Software quality impacts these other system deliverables just as it affects the source code. The quality of documentation is particularly important. Have you ever tried to change someone else’s code without being able to understand his mindset at the time he wrote it? Detailed documentation about the parts of the software system, the logic behind them, and how they fit together is extremely important. But erroneous documentation is worse than nothing at all, since it can lead you down a blind alley. Any time the documentation and source code don’t agree, which do you believe?
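
As a small, made-up illustration of that trap (the function, the fee policy, and the numbers here are hypothetical, not from any real system), consider a C fragment in which the comment and the code disagree about a boundary condition. A maintainer who believes the comment will “fix” the wrong thing:

    /* "The monthly fee is waived for balances of $500.00 and up." */
    double monthly_fee(double balance)
    {
        if (balance > 500.00)   /* code waives it only ABOVE $500.00 */
            return 0.00;
        return 5.00;
    }

A customer with exactly $500.00 gets charged, the comment says he shouldn’t be, and until someone digs up the original spec there’s no way to know which one is wrong.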

There’s a compelling economic incentive for building quality into software. The true cost of a software development project is the base cost (what you spend to build the system initially) PLUS the rework cost (what you spend to fix the errors in the system). The rework cost is rarely figured into either the time or money budgets, with the consequence that many projects cost much more to complete than expected and soak up still more money as work is done to make the system truly conform to the specifications. In too many cases, the project is delivered too late to be useful, or not at all.
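
To put illustrative numbers on it: if the base cost of a system is $100,000 and the rework needed after “completion” consumes another $40,000, the true cost of the project is $140,000, a 40 percent overrun against a budget that counted only the base cost. None of that rework was in the schedule either, which is exactly how a system ends up delivered too late to be useful.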

Software Quality Assurance

Software quality assurance, or SQA, is the subfield of software engineering devoted to seeing that the deliverables from a development project meet acceptable standards of completeness and quality. The overall goal of SQA is to lower the cost of fixing problems by detecting errors early in the development cycle. And if your SQA efforts prevent some errors from sneaking into your code in the first place, so much the better. SQA is a watchdog function looking over the other activities involved in software development.
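
A frequently cited rule of thumb in the software engineering literature (the exact figures vary from study to study, but the shape of the curve does not) is that an error becomes roughly ten times more expensive to fix at each later stage it survives to: a requirements error that costs a dollar to fix during analysis may cost ten dollars to fix during design, a hundred during coding and testing, and far more once the system is in the field.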

Here are some important SQA thoughts. First, you can’t test quality into a product; you have to build it in. Testing can only reveal the presence of defects in the product. Second, software quality assurance is not a task that’s performed at one particular stage of the development life cycle, and most emphatically not at the very end. Rather, SQA permeates the entire development process, as we’ll see shortly. Third, SQA is best performed by people not directly involved in the development effort. The responsibility of the SQA effort is to the customer, to make sure that the best possible product is delivered, rather than to the software developers or their management. SQA won’t succeed if it just tells the managers what they want to hear.

Testing is certainly a big part of SQA, but by no means the only part. Testing, of course, is the process of executing a computer program with the specific intention of finding errors in it. It’s nearly impossible to prove that a program is correct, so instead we do our best to make it fail. Unfortunately, most of us perform testing quite casually, without a real plan and without keeping any records of how the tests went.

Proper software testing requires a plan, or test script. It includes documentation, sample input datasets, and records of test results. Instead of being informal and ad hoc, good software testing is a systematic, reproducible effort with well-defined expectations. We’ll talk more about good testing strategies later on.
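
As a minimal sketch of what that means in practice (the function under test and its cases are invented for illustration), here’s a C driver that walks a fixed table of inputs and expected outputs and records every result, so the same test run can be repeated and audited later:

    #include <stdio.h>

    /* The function under test (a hypothetical example). */
    static int celsius_to_fahrenheit(int c)
    {
        return c * 9 / 5 + 32;
    }

    /* The "test script": a fixed table of inputs and expected
       outputs, so every run exercises exactly the same cases. */
    static const struct { int input; int expected; } tests[] = {
        {    0,  32 },
        {  100, 212 },
        {  -40, -40 },
    };

    int main(void)
    {
        int i, failures = 0;
        int n = sizeof tests / sizeof tests[0];

        for (i = 0; i < n; i++) {
            int got = celsius_to_fahrenheit(tests[i].input);
            int pass = (got == tests[i].expected);

            if (!pass)
                failures++;
            /* Log every outcome, pass or fail, for the record. */
            printf("test %d: input=%4d expected=%4d got=%4d  %s\n",
                   i, tests[i].input, tests[i].expected, got,
                   pass ? "PASS" : "FAIL");
        }
        printf("%d of %d tests failed\n", failures, n);
        return failures ? 1 : 0;
    }

The point isn’t the framework (there isn’t one); it’s that the inputs, the expected outputs, and the results all live somewhere outside the programmer’s head.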

Now let’s look at some goals of SQA for the various stages of structured software development. No matter what software development life cycle model you follow, you’ll always have to contend with requirements analysis, system specification, system design, code implementation, testing, and maintenance, so these SQA goals are almost universally applicable. For one-man projects, much of the formality of these stated SQA goals is not needed. Instead, try to discipline yourself enough to meet the most important aspects of the goals, while still having fun writing the programs.

Requirements Analysis

• Ensure that the system requested by the customer is feasible (many large projects have a separate feasibility study phase even before gathering formal requirements).

• Ensure that the requirements specified by the customer will in fact satisfy his real needs, by recognizing requirements that are mutually incompatible, inconsistent, ambiguous, or unnecessary. Sometimes needs can be addressed in better ways than those the user is requesting.

• Give the customer a good idea of what kind of software system will actually be built to address his stated requirements. Simple prototypes often are useful for this.

• Avoid misunderstandings between developers and customers, which can lead to untold grief and hassles farther down the road.

Software Specifications

• Ensure that the specifications are consistent with the system requirements, by setting up a requirements traceability document. This document lists the various requirements, and then tracks how they are addressed in the written specs, the system design (which data flow process addresses the requirement), and the code (which function or subroutine satisfies the requirement). A sketch of such a table appears after this list.

• Ensure that specifications have been supplied for system flexibility, maintainability, and performance where appropriate.

• Ensure that a testing strategy has been established.

• Ensure that a realistic development schedule has been established, including scheduled reviews.

• Ensure that a formal change procedure has been devised for the system. Uncontrolled changes, resulting in many sequential (or even concurrent) versions of a system, can really contribute to quality degradation.
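
Here’s a sketch of what a one-row-per-requirement traceability table might look like (the requirement and module names are invented for illustration):

    Req ID  Requirement                Spec section  Design element   Code unit
    ------  -------------------------  ------------  ---------------  ----------------
    R-01    Import records from tape   3.2           DFD process 2.1  ImportRecords()
    R-02    Flag duplicate entries     3.4           DFD process 2.3  FindDuplicates()
    R-03    Print exception report     4.1           DFD process 5.0  PrintExceptions()

A requirement with a blank cell in any column is a requirement that has quietly fallen out of the system.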

Design

• Ensure that standards have been established for depicting designs (such as data flow diagram models for process-oriented systems, or entity-relationship models for data-oriented systems), and that the standards are being followed.

• Ensure that changes made to the designs are properly controlled and documented.

• Ensure that coding doesn’t begin until the system design components have been approved according to agreed-upon criteria. Of course, we all do some coding before we really “should”; so long as you think of it as “prototyping”, that’s fine. Just don’t get carried away prematurely.

• Ensure that design reviews proceed as scheduled.

Coding

• Ensure that the code follows established standards of style, structure, and documentation. Even though languages like C let you be super-compact and hence super-obscure in your coding, don’t forget that a human being (maybe even you) may have to work with that code again some day. Clarity of code is usually preferred over conciseness. (A short before-and-after example follows this list.)

• Ensure that the code is being properly tested and integrated, and that revisions made in coded modules are properly identified.

• See that code writing is following the stated schedule. It probably won’t be; the customer is entitled to know this and to know the impact on delivery time.

• Ensure that code reviews are being held as scheduled.
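
On the clarity-versus-conciseness point above, compare two versions of the same (hypothetical) string-copy routine; both are legal C and do the same job, but only one will be obvious to whoever maintains it next:

    /* Super-compact: legal C, but hostile to the next reader. */
    void copy_terse(char *d, const char *s)
    {
        while ((*d++ = *s++))
            ;
    }

    /* Same behavior, written for the human who has to change it. */
    void copy_clear(char *dest, const char *src)
    {
        int i = 0;

        while (src[i] != '\0') {   /* copy up to the terminator... */
            dest[i] = src[i];
            i++;
        }
        dest[i] = '\0';            /* ...then terminate the copy */
    }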

Testing

• Ensure that test plans have been created and that they are being followed. This includes a library of test data, driver programs to run through the test data, and documentation of the results of each formal test that has been performed.

• Ensure that the test plans that are created do in fact address all of the system specifications.

• Ensure that, after testing and reworking, the software does indeed conform to the specifications.

Maintenance

• Ensure consistency of code and documentation. This is quite difficult; we tend to not update documentation when we change the programs. However, such an oversight can create nightmares the next time a change has to be made. Can you trust the docs, or not?

• Ensure that the established change control process is being observed, including procedures for integrating changes into the production version of the software (configuration control).

• Ensure that changes made in the code follow the coding standard, are reviewed, and do not cause the overall code structure to deteriorate.

By now you’re saying, “Yeah, right. No way, man. Can’t be done.” To be honest, most professional software engineers DON’T follow nearly this stringent a quality plan. Unfortunately, the result of having no SQA plan at all often is software of low, perhaps unacceptable, quality. If you can incorporate even a few of these SQA goals into your own development activity, you should see some real improvements. I certainly have.