
SCOUG-Programming Mailing List Archives



Date: Sun, 16 Feb 2003 00:08:21 PST
From: "Lynn H. Maxson" <lmaxson@pacbell.net>
Reply-To: scoug-programming@scoug.com
To: "scoug-programming@scoug.com" <scoug-programming@scoug.com>
Subject: SCOUG-Programming: In retrospect


Content-Type: text/plain

Content-Transfer-Encoding: 7bit

Sheridan,

"I have no problem understanding that the test data can be
created by software. What I don't understand is how the
software knows that the result it gets is correct. ..."

It doesn't. It doesn't care. It only reports on its logic as
written. At some point we will enter the academic debate on
the issue of correctness, or more to the point a correctness
proof. I will save you some effort by saying that such
correctness is unprovable.

Consider the test results a form of peer review: someone
looking over your logic and reporting the complete results to
you. It probably does this better than any person ever has,
since a person physically doesn't have the time to offer you
as complete a review.

If you ever read some of the early glowing reports about open
source, one thing touted was the availability of thousands of
beta testers. I always enjoy it when a liability becomes a
benefit...or when someone doesn't fully comprehend the
implications of his statements.

Open source relies on individual contributors who must
somehow have confidence in their source contributions. When
you base that confidence on the random testing of others,
hoping somehow to achieve with a shotgun what you cannot
achieve with surgical precision, you're in trouble. Part of that
trouble lies in someone else, just like you, making a change to
source that has not been completely verified. Thus when an
error occurs, did it occur in your version or in a more recent one?

I've had open source advocates brag about the daily releases
of Linux and Mozilla as if each release marked a step forward.
If you have a thousand people beta testing a release and you
have a new release daily, something ought to tell you that the
delay in getting test results back far exceeds the release
interval, and that you have a serious problem in error
detection, correction, and synchronization. Genuine changes
will not occur at that rate; thus you are spinning your wheels
on error correction due to the inefficiency of your error
detection technique, i.e. the multitude of beta testers and the
unpredictable delay in reporting test results.

"Yikes!! Hope you had a 'heart beat' to let you know things
were cooking and not stopped. ..."

Sometime I must introduce you to Ackermann's function, which is
the apex of recursion. I've let it run for days and days until it
finally blew the stack. You will run into people who will tell
you that you can use normal iteration instead of recursion.
You could do that in the peg example, as there are only a
finite number of moves down any path.
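
For the curious, a minimal Python sketch of Ackermann's function (the
driver values below are only illustrative) shows how quickly the
recursion depth outruns any fixed stack:

import sys

def ackermann(m, n):
    # Classic two-argument Ackermann function; the recursion depth
    # grows explosively even for very small arguments.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))        # small arguments finish instantly
try:
    print(ackermann(4, 1))    # deep enough to exhaust the call stack
except RecursionError:
    print("blew the stack at recursion limit", sys.getrecursionlimit())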

The truth is I have faith in recursion, but absolutely none in
the stack. You see, Burroughs introduced the stack mechanism
in its B5000 series in the early '60s. IBM never adopted it in its
mainframe programming for module-to-module communication,
i.e. invoking APIs and passing their parameters. IBM used a
dynamically allocated linked list whose only size limit was
the physical capacity of a disk drive, the swap dataset. I
guess it just pleases me to blow a stack now and then.
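
By way of contrast, the same computation can keep its pending work in a
heap-allocated list rather than on the call stack, in the spirit of the
dynamically allocated linked list described above; the sketch below is
an illustration only, not IBM's mechanism:

def ackermann_iterative(m, n):
    # Pending "calls" live in an ordinary Python list, which grows on the
    # heap, so depth is bounded by available memory rather than a fixed stack.
    pending = [m]
    while pending:
        m = pending.pop()
        if m == 0:
            n += 1
        elif n == 0:
            pending.append(m - 1)
            n = 1
        else:
            pending.append(m - 1)
            pending.append(m)
            n -= 1
    return n

print(ackermann_iterative(2, 3))    # 9, the same result as the recursive form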

"Since the recursion was now unwinding did the other
solutions then come at a rapid pace?"

I've included an attachment which shows that the solutions
occurred in two instances of some 32,000+ each. Thus when
they came, they came at a rapid pace, considering the much
longer stretch of time in which they did not come at all. You
will note that I opened and closed the file for each successful
pattern, so I could monitor the results as they occurred.
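
A minimal sketch of that open-write-close habit in Python (the names
here are hypothetical, not taken from the attached program):

def record_solution(pattern, path="sol_mtrx.txt"):
    # Append one successful pattern and close the file immediately, so the
    # output can be inspected while the search is still running.
    with open(path, "a") as f:
        f.write(" ".join(str(move) for move in pattern) + "\n")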

"Ah. User prescribed rules lets the 'tester' calculate what the
correct answer should be."

No. The tester simply sets a range, a sequence of contiguous
and non-contiguous values, some lying outside the boundaries
of a desired result, others on, and others inside. The tester
knows (or should) whether an answer is correct or not. Thus
he has expectations. He simply compares the results against
his expectations.
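
A minimal Python sketch of that comparison, with an arbitrary boundary
chosen purely for illustration:

def accepts(value, lower=10, upper=20):
    # The logic under test: does a value lie within [lower, upper]?
    return lower <= value <= upper

# The tester's values: some outside the boundaries, some on them, some
# inside, each paired with the expected answer.
expectations = {9: False, 10: True, 15: True, 20: True, 21: False}

for value, expected in expectations.items():
    assert accepts(value) == expected, (value, expected)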

Actually it gets somewhat deeper than that. Suppose I have
a field for an inventory count, say "inv_cnt". Suppose I have
a rule that says it can't be negative and that it can't exceed
some upper limit, say 99,999. A normal fullword binary integer
of 32 bits allows values into the billions. Let's suppose that I
state all this as "dcl inv_cnt fixed bin (32) unsigned
range(0...99999);".

This means I'm turning it over to the software to inject the
necessary code to verify that any change to the value of
inv_cnt stays within this range. I've given it the rule in the
"range" clause. Now I don't have to remember it. Moreover, I
can't get hurt if at some moment I have forgotten it. I've got
a software assistant that never gets tired and never forgets a
million, zillion rules.

Now suppose I select, by marking it in some manner, the "dcl"
statement, indicating to the tool that I want to test it. The
tool will respond by asking me for a value. If I respond with
"-15", it will respond "false". In short, it will not allow me to
enter a value outside the specified range.
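
A minimal Python sketch of what such an injected check amounts to (the
class and names are illustrative, not PL/I and not the tool itself):

class Inventory:
    def __init__(self):
        self._inv_cnt = 0

    @property
    def inv_cnt(self):
        return self._inv_cnt

    @inv_cnt.setter
    def inv_cnt(self, value):
        # The equivalent of the range(0...99999) clause: every assignment
        # to inv_cnt is checked against the declared rule.
        if not 0 <= value <= 99999:
            raise ValueError("inv_cnt out of range: %r" % value)
        self._inv_cnt = value

item = Inventory()
item.inv_cnt = 500     # accepted
item.inv_cnt = -15     # rejected, the "false" response described above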

I could, for example, have two fields that are interdependent
based on their values. Say that if one field has a value of 3
or 7, the other can only have a value of 9. I can state this as
a rule in a data declaration:
dcl field1 char(1) range(select(field2) when(3|7) then 9);
dcl field2 char(2) range(select(field1) when(9) then (3,7));

Now I can take any expression involving field1 or field2, where
either is the target of an assignment statement, select it, and
have the tool ask me for the range of values I want tested.
When I execute the test, it will return only the true instances
of the sets of values it generates.

Now not only do I not have to write all the test source to
implement the rules, I have also minimized the data values or
ranges I have to submit and can guarantee that they exhaust
the possibilities.
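
A minimal Python sketch of that exhaustive generation for the two-field
rule above (the rule function and candidate ranges are illustrative):

def rule(field1, field2):
    # If field2 is 3 or 7, field1 must be 9; if field1 is 9, field2 must be 3 or 7.
    if field2 in (3, 7):
        return field1 == 9
    if field1 == 9:
        return field2 in (3, 7)
    return True

# Generate every combination of candidate values and keep only the
# "true instances", i.e. the pairs the declared rule accepts.
true_instances = [(f1, f2)
                  for f1 in range(10)
                  for f2 in range(10)
                  if rule(f1, f2)]
print(true_instances)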

"For me that would be interesting."

The VisualAge COBOL and PL/I compilers come with an analysis
tool that produces a graphical representation. I will seek out
some non-trivial example to illustrate the iterative process of
testing from the inside out to completely test the code. There
is an open source one for C that I may have time to try on
HPCalc.


Content-Type: application/octet-stream

File attachment: sol_mtrx.txt





The Southern California OS/2 User Group
P.O. Box 26904
Santa Ana, CA 92799-6904, USA

Copyright 2001 the Southern California OS/2 User Group. ALL RIGHTS RESERVED.

SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group. OS/2, Workplace Shell, and IBM are registered trademarks of International Business Machines Corporation. All other trademarks remain the property of their respective owners.