
SCOUG-Programming Mailing List Archives

Return to [ 05 | April | 2003 ]


Date: Sat, 5 Apr 2003 07:31:59 PST8
From: "Lynn H. Maxson" <lmaxson@pacbell.net >
Reply-To: scoug-programming@scoug.com
To: "scoug-programming@scoug.com" <scoug-programming@scoug.com>
Subject: SCOUG-Programming: Message Six

Content-Type: text/plain

Gregory W. Smith writes:
"...My Python code in the first post was to make the point that
automatic generation of test data may be fine in theory but
not practical. ..."

All too frequently we achieve the opposite of our intent. I
did not regard your Python solution as a Swiss Army knife of
peg solitaire solutions, but rather as a set of custom filet
knives based on the fish involved. That a set of knives can
come as a package, i.e. one set, says that packaging does not
unduly restrict content.

That lets us turn to the issue of automatic generation of
test data. You have manual, partially automatic, and fully
automatic generation as options. The least practical option
in practice is manual generation, as hundreds of millions of
such incomplete attempts attest. If it were practical, you
would not need beta testing or beta testers, and no incorrect
logic would ever get released.

Remember that the only mistakes you can make in
programming are mistakes of syntax, semantics, and logic. We
can detect the first two in software rather quickly and
accurately. That leaves only mistakes in logic. The means for
their discovery lies in preparing sets of test data and
applying them during execution.

Now how do you prepare this test data: manually,
automatically, or somewhere in between? You have the effort
to prepare the test data, the effort to script the tests, the
effort to execute them, and the effort to verify the results.
Any error discovered during verification means a round of
correction and a reiteration of the tests. It's little wonder
that project guidelines suggest giving testing twice the
amount of time spent on everything up to that point.
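
To make that cycle concrete, here is a rough Python sketch of
the four efforts and the correct-and-reiterate loop. The
prepare, execute, and verify names are hypothetical
placeholders, not anything from an actual project:

def run_test_cycle(prepare, execute, verify, max_iterations=10):
    """Repeat the test cycle until verification finds no errors."""
    for iteration in range(1, max_iterations + 1):
        cases = prepare()                              # prepare the test data
        results = [execute(case) for case in cases]    # script and execute the tests
        failures = [r for r in results if not verify(r)]   # verify the results
        if not failures:
            return iteration           # a clean run, no correction needed
        # any failure means correction and a reiteration of the tests
        print(f"iteration {iteration}: {len(failures)} failures, correcting")
    raise RuntimeError("still failing after the allotted iterations")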

You either run the tests linearly, i.e. as a sequence of one
test at a time, or in parallel. Proper parallel execution
requires independent sets of test data, one set per test, and
independent, isolated test environments. Therein lies the
appeal of beta tests and beta testers, particularly since you
pay for neither, which lowers development costs.
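
A minimal sketch of that distinction, using nothing beyond the
Python standard library: each test receives its own independent
data set, and the parallel version gives each test its own
process as a stand-in for an isolated test environment. The
check function is a hypothetical stand-in for the code under
test:

from concurrent.futures import ProcessPoolExecutor

def check(value):
    # hypothetical stand-in for the code under test
    return value >= 0

def run_one_test(data_set):
    # each test works against its own independent set of test data
    return all(check(value) for value in data_set)

def run_linearly(data_sets):
    # a sequence of one test at a time
    return [run_one_test(ds) for ds in data_sets]

def run_in_parallel(data_sets):
    # one isolated process per test, no state shared between them
    with ProcessPoolExecutor() as pool:
        return list(pool.map(run_one_test, data_sets))

if __name__ == "__main__":
    sets = [[1, 2, 3], [0, 4], [5, -1]]
    print(run_linearly(sets))      # [True, True, False]
    print(run_in_parallel(sets))   # same results, run concurrently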

However, even with all this, and even in the absence of cost
or time constraints, you have incomplete testing: logic errors
still appear in released versions. Now along comes logic
programming and the use of predicate logic. Predicate logic
declares not only the data variables involved but also the
range of values of those variables. You can specify these
ranges as contiguous or non-contiguous, with points outside
the acceptable (true) results, on the boundary, and within.
Given a set of data variables and their values, the predicate
logic will iterate through all possible combinations in its
testing process.
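
A rough Python sketch of that idea, not actual logic
programming but an illustration of declaring each variable's
acceptable range and iterating through every combination of
values outside it, on its boundary, and within it. The variable
names and ranges here are made up for the example:

from itertools import product

# hypothetical declarations: each data variable and its acceptable (true) range
ranges = {
    "age": (0, 120),
    "weight": (1, 500),
}

def candidate_values(low, high):
    # points outside the acceptable range, on its boundaries, and within it
    return [low - 1, low, (low + high) // 2, high, high + 1]

variables = list(ranges)
candidates = [candidate_values(lo, hi) for lo, hi in ranges.values()]

# iterate through all possible combinations of the declared values
for combo in product(*candidates):
    case = dict(zip(variables, combo))
    expected = all(lo <= case[v] <= hi for v, (lo, hi) in ranges.items())
    # each 'case' would be fed to the code under test and the result
    # compared against 'expected'
    print(case, "expected:", expected)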

Now that still leaves a largely manual verification process,
no mean task with large volumes of results. By default it
produces only "true" results, which makes "false positives"
difficult enough to detect. As an option you could have it
produce "false" results instead, which makes "false negatives"
just as difficult to detect.

The peg solitaire program constitutes an exhaustive true/false
proof of the algorithm involved in that it generates all
possible paths. If you know a solution exists and the
algorithm doesn't find it, then you have a logic error in the
algorithm. If you don't know a solution exists, i.e. if you
cannot postulate a "true" solution, you have no idea if it is
logically correct or not.
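
As a rough illustration of such an exhaustive true/false proof,
here is a small Python depth-first search over a hypothetical
15-hole triangular board (not your program, just a sketch with
the same character). It tries every possible jump sequence and
returns the first solution it finds, or None after exhausting
every path; if a solution is known to exist and this returns
None, the move logic contains an error:

ROWS = 5

def index(r, c):
    return r * (r + 1) // 2 + c

def valid(r, c):
    return 0 <= r < ROWS and 0 <= c <= r

# the six straight-line directions on a triangular board
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (1, 1), (-1, 0), (-1, -1)]

# precompute every legal jump as (from, over, to) hole indices
JUMPS = []
for r in range(ROWS):
    for c in range(r + 1):
        for dr, dc in DIRECTIONS:
            r1, c1 = r + dr, c + dc          # hole jumped over
            r2, c2 = r + 2 * dr, c + 2 * dc  # landing hole
            if valid(r1, c1) and valid(r2, c2):
                JUMPS.append((index(r, c), index(r1, c1), index(r2, c2)))

def solve(board, path):
    # depth-first search over every possible jump sequence
    if sum(board) == 1:
        return path                      # one peg left: a solution
    for frm, over, to in JUMPS:
        if board[frm] and board[over] and not board[to]:
            board[frm] = board[over] = 0
            board[to] = 1
            found = solve(board, path + [(frm, over, to)])
            board[frm] = board[over] = 1     # undo the move
            board[to] = 0
            if found:
                return found
    return None

if __name__ == "__main__":
    start = [1] * 15
    start[0] = 0                         # empty the top hole
    print(solve(start, []) or "no solution exists")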

You do know, however, that this algorithmic production, this
automatic generation of test data, occurs millions of times
faster and cheaper than its manual preparation. It does not
matter that it might increase the verification time. It still
significantly reduces the overall time while yielding
error-free code.

More importantly, it eliminates the need for parallel
testing, beta testing, and beta testers. It further supports
true independence of the individual developer to go his own
way, to pursue his own goals, whether or not shared with
anyone else.

=====================================================

To unsubscribe from this list, send an email message
to "steward@scoug.com". In the body of the message,
put the command "unsubscribe scoug-programming".

For problems, contact the list owner at
"rollin@scoug.com".

=====================================================





The Southern California OS/2 User Group
P.O. Box 26904
Santa Ana, CA 92799-6904, USA

Copyright 2001 the Southern California OS/2 User Group. ALL RIGHTS RESERVED.

SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group. OS/2, Workplace Shell, and IBM are registered trademarks of International Business Machines Corporation. All other trademarks remain the property of their respective owners.