SCOUG-Programming Mailing List Archives
Greg writes:  
"Very simply, I do not believe that "all writing errors occur  
at data entry, i.e. during the editing process".  As I see it  
a lot of errors occur in the thinking about the problem before  
it ever gets to data entry (Axiom 4).  ..."  
 
Yes, but until they get to data entry they're not in writing and   
thus not in any form acceptable for entry into the system.  When
they get corrected, i.e. rewritten, that will occur in data entry   
as well.  
 
You point out many aspects of the problem set which lose   
something in the translation to the solution set.  That problem
has been around since analog computers, e.g. the slide rule,
began losing out to digital.  Your issues
certainly provide a challenge.  However, any solution, i.e.   
translation, results in source code which works identically   
regardless of the environment it simulates.  You have the   
same statements, the same control structures, the same   
aggregates, the same operators, and the same operands.  
 
That says you have the same means of isolated testing of the   
software and the same means of generating test data.    
Granted that this isolated testing does not cover all
the needs of a control system, i.e. one involving feedback, it   
still says that you have a data-dependent IPO model.  You   
may need a second one attached in closed-loop fashion to
the first.  Somewhere either in one or the other you need   
other inputs of perturbations, again represented by data   
values, to account for environmental dynamics.  
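To make the shape of this concrete, here is a minimal sketch of a data-dependent IPO stage with a second one attached in closed-loop fashion, plus a perturbation input for environmental dynamics.  The controller/plant split, the proportional gain, and the setpoint are my own illustrative assumptions, not anything from the discussion itself:

```python
# Two IPO (Input-Process-Output) stages coupled in a closed loop,
# with an external perturbation input representing the environment.
# All names and constants here are illustrative assumptions.

def controller(setpoint, measurement):
    # First IPO stage: a simple proportional controller.
    return 0.5 * (setpoint - measurement)

def plant(state, control, perturbation):
    # Second IPO stage: responds to the controller plus a disturbance.
    return state + control + perturbation

def run(setpoint, steps, perturbations):
    state, history = 0.0, []
    for k in range(steps):
        u = controller(setpoint, state)            # stage 1
        state = plant(state, u, perturbations[k])  # stage 2, fed back
        history.append(state)
    return history

# With no perturbations the loop settles on the setpoint.
history = run(setpoint=10.0, steps=50, perturbations=[0.0] * 50)
```

Feeding nonzero values into `perturbations` is the "other inputs" mentioned above: the environment's dynamics enter the model as nothing more than data values.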
 
My point remains that from the highest level IPO on down   
through lower levels to eventually the raw material, the   
source statements, you can use the software means intrinsic   
to logic programming to exhaustively test all paths.  You can
do this with far less effort by submitting rules about the
ranges of data values to software generation than you can
manually.
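As one small illustration of testing driven by range rules rather than hand-picked cases, the sketch below enumerates every combination a set of range rules allows.  The function under test and the specific rules are my own trivial example, assumed purely for demonstration:

```python
# Exhaustive testing generated from rules about ranges of data values.
# The unit under test and the rules are illustrative assumptions.
from itertools import product

def classify(a, b):
    # Unit under test: a deliberately trivial example function.
    return "both positive" if a > 0 and b > 0 else "not both"

# The "regression test data" is just this set of range rules:
rules = {"a": range(-2, 3), "b": range(-2, 3)}

def exhaustive_test():
    # Enumerate every combination the rules allow -- no sampling.
    cases, failures = 0, []
    for a, b in product(*rules.values()):
        cases += 1
        expected = "both positive" if a > 0 and b > 0 else "not both"
        if classify(a, b) != expected:
            failures.append((a, b))
    return cases, failures

cases, failures = exhaustive_test()
```

Changing a rule, e.g. widening a range, regenerates the whole test set; nothing is maintained by hand.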
 
In this instance your regression test data becomes this set of   
rules.  Any changes, additions, or deletions to those rules   
occur in the same language under the same rules as the   
language of the object to which they apply.  That lowers the   
cost, the time, and the effort of maintaining regression test   
data.  
 
I have no difficulty with your use of the term "faith" where I   
most likely would use "assumption".  Either, as you point out in
your examples of Euclidean and non-Euclidean geometry, is
subject to change, producing different logical systems from   
the application of the same logical process of proving   
theorems.  
 
I don't quite understand why applying Deming's approach to   
quality control regarding detecting errors as close to entry as   
possible and correcting them at their source should come off   
as a "semantic" error.  I would think specifically in a system   
involving feedback it would be even more appropriate.  
 
We do not differ on the need for peer review, which I will   
agree to call a beta tester.  We differ on the number needed.    
I have greater confidence in one, the software, which will   
exhaustively test all possibilities, than 1, 10, 100, 1000, or   
10,000 people who will not.

There's a reason why in my initial Warpicity proposal I
referred to the software as "The Developer's Assistant".  It's a
form of peer review that doesn't involve delays, random   
testing, or scheduling issues.  It's a form of peer review that   
offers you many different and logically equivalent visual   
forms.  I look at the software tool as I would a peer in terms   
of reflecting the logic of my offering.  
 
"I will take a SWAG that the second error may arise because   
of fundamental differences between typical   
engineering/scientific problem sets and typical   
system/business-application problem sets. ..."  
 
I'm not sure of the actual error count or which is number one   
and which number two.  I have to exercise some caution when   
I agree that differences do exist between the closed
engineering/scientific systems, in which feedback is intrinsic
to the design, and the more open systems you mention.
 
However, they do have the common problem of responding to   
the dynamics of their environment.  The environment   
represents the problem set; the software, the solution set.  We
cannot control nor ultimately predict the dynamics of the   
problem set.  We only know that we need to reflect those   
dynamics in the solution set at an internal rate at least equal
to their external rate of change.
Failure to do so means we cannot keep up.  We fall behind.    
We develop a backlog.  
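The rate argument reduces to simple arithmetic.  Here is a back-of-envelope model of my own construction; the rates are arbitrary illustrative numbers:

```python
# Backlog accumulation when environmental change outpaces absorption.
# Arrival and absorption rates are illustrative assumptions.

def backlog_after(periods, arrival_rate, absorption_rate):
    backlog = 0
    for _ in range(periods):
        backlog = max(0, backlog + arrival_rate - absorption_rate)
    return backlog

# Absorb faster than changes arrive: no backlog ever forms.
keeping_up = backlog_after(100, arrival_rate=5, absorption_rate=6)

# Absorb even slightly slower: the backlog grows without bound.
falling_behind = backlog_after(100, arrival_rate=6, absorption_rate=5)
```

The asymmetry is the point: below the environment's rate of change, no quantity of effort ever clears the gap; it only slows its growth.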
 
Therein lies the situation with the system/business software   
applications: they cannot keep up with the dynamics   
regardless of the people resources applied.  The cost of those   
people resources frequently reaches a threshold at which it is   
no longer economically justified to maintain pace.  Such was
the case with IBM and OS/2: it gave up after years of
sustained losses.
 
Open source may not have a cost associated with its   
volunteer labor, but it still faces the fact that even with an
unlimited number of volunteers it cannot maintain pace with the
dynamics of the environment.  In fact we can demonstrate   
that above a certain number its ability to maintain pace   
actually declines.  
 
So there is a productivity curve from 1 to some N in which
each incremental person adds a decreasing positive
increment of productivity, and above N an increasingly
negative one.  In terms of costs, i.e. the number of people, the closer N
gets to 1 the better.  All that remains is to ensure that
whatever that number is, it suffices to respond to the dynamics
of change.  That in turn depends upon increasing productivity.    
While we have many possible factors, each of which we can   
improve up to its limits, underlying them all we have the   
physical act of writing (and rewriting).    
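One common way to model such a curve, in the spirit of Brooks's observation about communication overhead, is below.  The per-person output and the communication cost per pair are arbitrary constants I have assumed for illustration:

```python
# Team productivity peaks at some N, then declines: each person adds
# individual output, but pairwise communication overhead grows as
# n*(n-1)/2.  Constants are illustrative assumptions.

def team_productivity(n, per_person=10.0, comm_cost=0.6):
    return n * per_person - comm_cost * n * (n - 1) / 2

curve = [team_productivity(n) for n in range(1, 30)]
best_n = max(range(1, 30), key=team_productivity)
```

With these particular constants the curve peaks at a modest team size and turns negative in its increments well before thirty people; different constants move N but not the shape.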
 
Our writing productivity, what we get out over what we put   
in, depends in large part on our tool set.  That determines   
what different forms of writing, i.e. languages, we have to   
use.  It determines how much we have to write.  
 
I have simply proposed that we use one specification language   
capable of specifying itself and one software tool specified in   
that language, using one user interface.  I further assert that   
we can get there from here incrementally, such that at each
step of the way we can judge whether it contributes to
increased productivity.
 
Now I am enamored of the two-stage logic engine of logic
programming used in fourth-generation languages like Prolog,
Trilogy, and SQL and used throughout AI, including neural nets.
It takes a while to ingest them into one's nervous system
to reach the same comfort level.  So the point of incremental
development here lies in ingesting them in more easily   
digestible units.  
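For readers who haven't ingested it yet, one digestible reading of the two-stage idea is generate-and-test over finite domains: stage one enumerates every candidate the rules allow, stage two exhaustively tests each one.  This is my own finite-domain sketch, not how Prolog's resolution or an SQL optimizer actually works internally:

```python
# A two-stage generate-and-test engine over finite domains.
# Stage 1 enumerates candidates; stage 2 tests them exhaustively.
# The query and domains below are illustrative assumptions.
from itertools import product

def solve(domains, goals):
    names = list(domains)
    # Stage 1: enumerate all candidate bindings the rules permit.
    candidates = (dict(zip(names, values))
                  for values in product(*domains.values()))
    # Stage 2: exhaustive true/false test of every candidate.
    return [c for c in candidates if all(goal(c) for goal in goals)]

# "Query": find x + y == 5 with x < y, over small finite domains.
solutions = solve(
    {"x": range(6), "y": range(6)},
    [lambda b: b["x"] + b["y"] == 5, lambda b: b["x"] < b["y"]],
)
```

The declarative flavor is the point: you state the rules and the goals; the engine, not the programmer, supplies the control flow.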
 
=====================================================  
 
To unsubscribe from this list, send an email message  
to "steward@scoug.com". In the body of the message,  
put the command "unsubscribe scoug-programming".  
 
For problems, contact the list owner at  
"rollin@scoug.com".  
 
=====================================================  
 
  
The Southern California OS/2 User Group
 P.O. Box 26904
 Santa Ana, CA  92799-6904, USA
Copyright 2001 the Southern California OS/2 User Group.  ALL RIGHTS 
RESERVED. 
 
SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group.
OS/2, Workplace Shell, and IBM are registered trademarks of International 
Business Machines Corporation.
All other trademarks remain the property of their respective owners.
 