SCOUG-Programming Mailing List Archives
Nathan,  
 
I have no doubt about the difficulty of PM programming.  That
makes it a strange place to start, perhaps.  We have a goal of
producing an editor/compiler/interpreter, a single-program IDE.
I believe we can achieve the portion of PM programming we need
over the next six months.  What anyone needs beyond that
depends on just how far we have progressed in our project.
 
I have a DOS program, EasyFlow, which I have used for
producing dataflow diagrams and structure charts.  When
Popkin had an OS/2 CASE tool I purchased it.  My last upgrade
some years back was to a Windows version, as they had dropped
their OS/2 support.  Personally I use a plastic process template
to do most of my "scratch" work.  I only transcribe it to the
software version for presentation purposes.  I do have
Visio, which I have to use with clients.
 
CASE tools offer drawing assistance, and their productivity
gains relate to that alone.  When customers would put out $20,000
for a CASE tool, it came as a shock to learn that it offered no
help in analysis and design to a user unfamiliar with one of the
supported analysis and design methodologies.
 
I have never lost my partiality toward structured design a la
Larry Constantine.  I was fortunate enough to attend a class
he taught at a customer site, Hughes Aircraft.  I have a
long-time friend, John Stager, who for many years taught with
the Yourdon Group, which concentrated on structured
analysis/design until the disease known as O-O crippled
programming and infected their staff.
 
You have a proof process in structured analysis using
dataflow charts.  First off, you have a minimal set of symbols,
five: input, output, process, datastore, and the directed line
segments providing the connections.  The proof process lay in
showing that no data item appeared in an output which did not
have a clear and continuous path to one or more inputs.
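
As a rough sketch of what that proof amounts to (the edge list
and the names below are my own invention, purely for
illustration), it is a reachability check over the directed graph:

  # A dataflow diagram reduced to directed edges among the symbols.
  edges = [
      ("input:order", "process:1"),
      ("process:1", "datastore:orders"),
      ("datastore:orders", "process:2"),
      ("process:2", "output:invoice"),
  ]

  def reaches_input(node, edges, seen=None):
      # True if some directed path leads back from node to an input.
      seen = set() if seen is None else seen
      if node.startswith("input:"):
          return True
      if node in seen:
          return False
      seen.add(node)
      sources = [s for (s, t) in edges if t == node]
      return any(reaches_input(s, edges, seen) for s in sources)

  outputs = {t for (_, t) in edges if t.startswith("output:")}
  for out in outputs:
      print(out, "provable" if reaches_input(out, edges) else "NOT provable")

Every output item either traces back to an input or the diagram
fails the proof.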
 
While some fretted over what names to assign to the
processes, in fact in dataflows the processes have no importance
other than that of conduit, a supplement to the connecting
line segments.  You didn't have to give them a name; you
could just give them a number.  The focus of analysis lay
strictly on the discrete but continuous flow of data, hence
dataflow charts.
 
In going from analysis to design, having assured the "purity"
of the data, Constantine introduced the heuristic of the
"central transform": that point in the path of the data at
which the input first took on the form of the output.  From
that point you made a "request" for the input, processed it,
and "sent" the result as output.
 
The resulting structure charts required only two symbols: a
rectangle for the process, and directed, two-way connecting
line segments forming a hierarchy of functions with one main
central transform.  Simpler than flowcharting, and certainly far
simpler than UML.
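
In code that shape is trivial to see.  A minimal sketch, with
function names of my own choosing:

  def get_input():            # afferent branch: gather and refine the input
      return [3, 1, 2]

  def central_transform(xs):  # where input first takes the form of the output
      return sorted(xs)

  def put_output(ys):         # efferent branch: format and deliver the result
      print(ys)

  def main():                 # the top of the structure chart hierarchy
      put_output(central_transform(get_input()))

  main()

The main module requests the input, processes it, and sends the
result: exactly the request/process/send pattern above.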
 
In order to avoid a backlog you need to incorporate changes
into software at a rate that, on average, at least matches their
rate of occurrence.  The complaint about the "waterfall"
development process, the five stages of specification, analysis,
design, construction, and testing, lay in the delays inherent in
the sequential process.  What gets lost in this is that those
delays occur only for third-generation procedural languages like
C and PL/I and for assembly language.  In third-generation
languages each of these stages occurs separately and manually.
The delay is due not to their separate processing, but to the
manual nature of the process.
 
It is not true for fourth-generation languages built on the
two-stage proof engine: the completeness proof and the
exhaustive true/false proof.  In fourth-generation languages
only the first stage, specification, requires manual effort.
The remainder are accomplished in software.
 
If you take a third-generation language like C or PL/I, relax
the rules, make certain additions, and incorporate the
two-stage proof engine, you can get closer to fourth-generation
productivity gains.  If in addition to the assignment
statement you add the assertion statement, then you can
upgrade it from third to fourth generation entirely.
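
To make the distinction concrete, here is a toy sketch (entirely
my own, not the proof engine itself): an assignment tells the
machine how to produce a value, while an assertion states what
must hold and leaves the finding to an exhaustive true/false test.

  # Third generation: an assignment computes the value directly.
  x = 36 // 4

  # Fourth generation flavor: assert the condition and enumerate
  # candidates, keeping each value the assertion proves true.
  candidates = [x for x in range(100) if x * 4 == 36]
  print(candidates)   # [9] -- an exhaustive true/false proof in miniature

The range bound and the equation are arbitrary; the point is only
the shift from prescribing steps to stating conditions.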
 
Now we will see this (or not) as we progress in this project.  
 
Why an interpreter/compiler?  It depends upon whether you
want to optimize programmer throughput (interpreter) or
program throughput (compiler).  Having only the choice of one
or the other, as in PL/I, C, Python, Perl, APL, and LISP,
means one or the other loses overall.  Doesn't it make sense to
optimize development time, i.e. the programmer's, and when ready
for production to optimize transaction time?  Why would you
need two separate programs to do either when you can do
both with one?  Remember that except for code generation
they are functionally equivalent.
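
A minimal sketch of that equivalence (the classes and names are
mine, purely illustrative): one tree from a shared front end,
with the back end either evaluating it now or emitting code for
later.

  class Num:
      def __init__(self, value):
          self.value = value
      def interpret(self):            # interpreter back end: run it now
          return self.value
      def emit(self):                 # compiler back end: generate code
          return str(self.value)

  class Add:
      def __init__(self, left, right):
          self.left, self.right = left, right
      def interpret(self):
          return self.left.interpret() + self.right.interpret()
      def emit(self):
          return f"({self.left.emit()} + {self.right.emit()})"

  expr = Add(Num(3), Add(Num(4), Num(5)))  # one front end, one tree
  print(expr.interpret())                  # 12 -- programmer throughput
  print(expr.emit())                       # (3 + (4 + 5)) -- program throughput

Everything up to that final stage is identical; only the choice
of interpret() or emit() differs.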
 
 