Hark!  What light on yonder window breaks? 
We can take an available shortcut to our "true", all-inclusive
IDE.  In the first place we introduced the five stages to
support writing source code in the first three generations of
imperative programming languages: (1) machine, (2) symbolic
assembler (plus macros), and (3) HLLs.  Each of these requires
that the programmer specify not only "what to do" but "how
to do it", i.e. incorporate the global logical organization, the
"logic in programming".
To do that we first engaged in gathering requirements.
At some point, when we felt we had accumulated enough, we
had to organize them globally into some semblance of a
system.  With the introduction of Structured Analysis and
Design this "analysis" stage occurred through the use of
"dataflow" diagrams.
These dataflow diagrams consisted of five symbols.  Two of
these represented external "boundary points" of our system:
points at which data entered (inputs) and at which
data exited (outputs).  A third symbol represented the
"processes" which connected the inputs to the outputs.  The  
fourth symbol represented "data stores", in effect internal  
boundary points into which we could write  
(output/update/delete) data and from which we could read  
(input) data.  Data stores later became the "persistent data"  
of object-oriented technology. 
These four symbols we interconnected with the fifth: lines.   
Thus we could have an input symbol connected via a line to a  
process in turn connected by a line to another process in turn  
connected by a line to an output.  We had no limits on the
number of inputs into or outputs from a process or the number
of interconnections between processes.
With these five symbols--an upright, left-facing semi-circle
(input), an upright, right-facing semi-circle (output), a circle
(process), two horizontal, parallel line segments (data store),
and a line (connector)--we could diagram the flow of data
through any application system (including operating
systems), from initial entry, through numerous
transformations (processes), to its exit points.  At the end
of analysis, then, we had a complete (or sufficiently so) picture
of the overall logical data flow.
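As a rough sketch, the five symbols amount to a small graph: four kinds of nodes plus the connector lines that join them.  The class and the example flow below are illustrative assumptions, not anything from the original methodology texts.

```python
# A minimal sketch of a dataflow diagram as a graph.  The four
# node kinds correspond to the four shapes; the connector lines
# are stored as (source, target) pairs.
INPUT, OUTPUT, PROCESS, STORE = "input", "output", "process", "store"

class DataflowDiagram:
    def __init__(self):
        self.nodes = {}   # name -> symbol kind
        self.lines = []   # (source, target) connectors

    def add(self, name, kind):
        self.nodes[name] = kind

    def connect(self, source, target):
        # No limit on fan-in or fan-out, matching the text.
        self.lines.append((source, target))

    def flows_from(self, name):
        return [t for s, t in self.lines if s == name]

# An illustrative (hypothetical) order-handling flow:
dfd = DataflowDiagram()
dfd.add("orders", INPUT)           # left-facing semi-circle
dfd.add("validate", PROCESS)       # circle
dfd.add("order-store", STORE)      # two parallel line segments
dfd.add("confirmations", OUTPUT)   # right-facing semi-circle
dfd.connect("orders", "validate")
dfd.connect("validate", "order-store")
dfd.connect("validate", "confirmations")
```

Here the single process both updates an internal boundary point (the data store) and feeds an external one (the output), exactly the kind of fan-out the diagrams allowed.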
We then exited the analysis stage in which we created  
dataflow diagrams using the methods of structured analysis  
and entered the design stage.  As luck would have it we also  
had the methods of structured design.  In the transition from  
analysis to design we dropped the number of symbols required
from five to two: rectangles representing functions, and lines
connecting them hierarchically.  The result was a
set of structure charts, one for each separately compilable  
unit.  We should also mention that we had a heuristic for the  
transformation of dataflows into structure charts. 
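Because a structure chart uses only rectangles and hierarchical lines, one minimal encoding is a nested mapping.  The function names below are illustrative, not from any particular methodology text.

```python
# A structure chart for one separately compilable unit,
# sketched as a nested dict: each key is a rectangle
# (a function), each nesting level a set of connecting lines
# descending to subordinate functions.
structure_chart = {
    "main": {
        "get_input": {},
        "transform": {
            "validate": {},
            "compute": {},
        },
        "put_output": {},
    }
}

def functions(chart):
    """Flatten the hierarchy into the set of rectangles it contains."""
    names = set()
    for name, children in chart.items():
        names.add(name)
        names |= functions(children)
    return names
```

The input-transform-output shape at the top level reflects the classic heuristic of deriving a structure chart from a dataflow diagram's afferent, central, and efferent branches.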
So we left the design stage and entered the construction  
stage, the stage in which we actually wrote the source code  
to support the design.  At this point we need to focus again on  
the "big" picture, the SDP itself, in which these are only  
"manual" stages.  Therein lay the problem: the "manual" part. 
If in a later stage, say construction, you discovered an error  
or omission of design, you would logically fall back to design,  
make the necessary correction, and then return to  
construction.  The same thing would occur in design if you  
found an error in analysis or in analysis if you found an error  
in specification.  If an error found in construction affected
design, which in turn affected analysis, which in turn affected
specification, we would (or should) go all the way back,
correct the error at its source, and then reinitiate the process.
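The fallback rule described above can be sketched in a few lines: given the stage where the error surfaced and the earliest stage it traces back to, every stage from the source forward must be redone.  The stage list follows the five stages named in the text; the function itself is an illustrative assumption.

```python
# The five SDP stages, in order, as named in the text.
STAGES = ["specification", "analysis", "design",
          "construction", "testing"]

def stages_to_redo(found_in, traces_back_to):
    """Return the stages to rework, in order: correct the error
    at its source, then reinitiate every stage from there up to
    and including the stage where the error was found."""
    start = STAGES.index(traces_back_to)
    end = STAGES.index(found_in)
    return STAGES[start:end + 1]
```

An error discovered in construction whose root cause lies in analysis forces analysis, design, and construction all to be redone, and every one of those passes is a manual one.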
Unless we have unlimited funds, people, and time we cannot
reasonably achieve this "synchronization" of change
management across all stages.  Even with unlimited funds and
people we cannot achieve it within an acceptable, if not
reasonable, time.  As a result we deliberately desynchronize
the stages.  We even go so far as to "freeze" the introduction
of changes, allowing known errors to persist, in
order to produce a version of the software system within a
set time schedule.
As this situation had persisted from the beginning of  
programming, during which we had progressed through three
generations of programming languages, we shifted the blame  
from people and tools to the process, the SDP, itself.  Because  
all our efforts had failed to make it work, we came to the  
conclusion that it didn't work because it couldn't work, i.e.  
that its failure was intrinsic. 
If we continued to improve tools like programming languages,
compilers, linkers, debuggers, CASE tools, etc., and people still
could not keep pace with change management, then maybe
we were applying them within the wrong process.  We faulted
the SDP.  In doing so we ignored the fact so obviously pointed  
out during the abortive effort of AD/Cycle that we had never  
in practice implemented the SDP as designed, as a seamless  
set of activities.  We had never produced nor offered the  
people a set of tools actually implementing the SDP in full. 
Hark!  What light on yonder window breaks? 
We made the point that each of the five stages of the SDP  
had manual implementations.  We had evolved through three  
generations of programming languages with the manual  
nature of the SDP unchanged.  In fact with the introduction of  
object-oriented technology, its shift from the simplicity of  
structured analysis and design to the additional complexity of  
UML, we in fact made a manual process even more so. 
When we evolved to the fourth generation of programming  
languages the most significant change occurred in the nature  
of the SDP.  Specification remained a manual process of  
translating requirements into specifications, but after that  
point the responsibility for analysis, design, and construction  
shifted from people to software.  The software executes  
millions of times faster and cheaper than people.  That means
the only delay in implementing changes, i.e. new requirements,
lies in the time required to translate them into
specifications.  Thus none of the synchronization delays
plaguing a manual system occurs in a software system: their
implementation occurs immediately, i.e. at software speeds,
throughout, upon entry of the new specifications.
We have to ask, then, why we persist with a
minimalist, third-generation language like C instead of moving
on to a full-featured fourth-generation language of our
choosing.  If we have five stages of an SDP, and we can
reduce their manual implementation from five to two
(specification and testing) or one (specification only), then
what keeps us from moving on?
************************************************** 
Therein we come again to our transitional paragraph, which  
we will repeat at the start of our next message. 
17 March 2003
The Southern California OS/2 User Group
P.O. Box 26904
Santa Ana, CA  92799-6904, USA
Copyright 2001 the Southern California OS/2 User Group.  ALL RIGHTS 
RESERVED. 
SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group.
OS/2, Workplace Shell, and IBM are registered trademarks of International 
Business Machines Corporation.
All other trademarks remain the property of their respective owners.