SCOUG-Programming Mailing List Archives
Date: Thu, 28 Aug 2003 11:56:32 PDT
From: "Lynn H. Maxson" <lmaxson@pacbell.net>
Reply-To: scoug-programming@scoug.com
To: "SCOUG Programming SIG" <scoug-programming@scoug.com>,
    "Warpstock" <Warpstock-public@yahoogroups.com>,
    "osFree@yahoogroups.com" <osFree@yahoogroups.com>
Subject: SCOUG-Programming: Re: [osFree] A Lot of Activity and
    a Lot of Discussion. :)
 
Ben Ravago writes:  
"Certainly, that would help.  Having had brushes with many of   
the languages you've been mentioning, I'm curious to see what   
this amalgam you have in mind will look like.  How would one   
describe an OS/2 thread switch in that language, for example?   
..."  
 
I've decided to broaden the audience for my response here
because I really get tired of repeating myself, if for no
other reason than that I've heard it all before.

Every programming language is a specification language.  Not
every specification language is a programming language.  The
Z specification language, for example, is not a programming
language, though I probably have more reference textbooks on
it than on any other, with Forth running a close second.
 
No matter how you hack it, beginning with the gathering of
user requirements as input, you have a process of five
stages--specification, analysis, design, construction, and
testing--for their translation into an executable software
version.  Imperative (first-, second-, and third-generation)
languages "insist" that you perform these manually as a
sequence of translations from one stage to the next.  They
"insist" because you can only say "what" you want done by
describing "how" to do it.
 
That's their "nature" and how they move through their   
"natural habitat": one "manual" stage at a time.  If you use   
them, then you too must go lockstep with them.  After you   
have done it a few hundred million times, it becomes "natural"   
to you, a habit, a paradigm, the way you see things.  
 
Now come fourth-generation "declarative" languages based on
logic programming, and something needs to change, even, God
forbid, a 50-year-old paradigm that has transported you
through three generations to this point.  You can't simply
desert it now.  You don't.  You insist on making the fourth
generation use earlier-generation tools, the same mindset
regardless.
 
Yet the fourth generation takes specifications as input,
automatically performing analysis, design, and construction.
So why do fourth-generation users not simply write
specifications and nothing else?  Do they simply have bad
habits they cannot drop?  Or does their tool set, based on
earlier-generation methodology, "insist" on it?
 
It's the tool set.  It's the tool set.  It's the tool set.
What I tell you three times is true.  The tool set not only
ill serves its intended earlier generations, but it's an
absolute disaster when it comes to the fourth generation.
 
The tool set ill serves deliberately.  The vendors had a peek   
at an alternate universe with IBM's (failed) AD/Cycle project   
and said, "Thanks, but no thank you."  Of course, they said it   
in a way which implied IBM was at fault.  That should have   
been enough of a clue to lay to rest any residual myth about   
IBM's great "marketing advantage".  

Unfortunately open source, whose only tool vendor is itself,
has seen fit to have its tool set ill serve it.  You may see some   
reason to tie in the gcc family to what goes on here, but   
frankly I don't want to get dragged down into that dungeon.    
For my money the gcc family as currently implemented is the   
problem and not the solution.  
 
I brought up COBOL to illustrate a point on "reuse" made   
possible by COBOL paragraphs.  Granted COBOL is verbose (as   
an understatement).  Thus the trees may block out a view of   
the forest.  You may have so many paragraphs so widely   
separated in a source listing that reuse may seem more a   
handicap than a blessing.  
 
The COBOL "perform" verb executes a named paragraph (all   
paragraphs have names).  That paragraph may consist of a   
single COBOL statement on up to a larger assembly.  The   
COBOL "perform thru" executes a named sequence of two or   
more paragraphs with the first and last names acting as   
delimiters.  
 
So COBOL simply provides what PL/I and C do not: repetitive
reuse of non-procedures, i.e. anything from a single statement   
on up.  In theory PL/I and C can achieve the same with an   
"include", but only at the cost of actually repeating the   
statements.  Also note that data, not statements, are the   
principal use of "includes" in most programming languages.  
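
To make the "perform" mechanics concrete, here is a minimal
sketch of those semantics.  It's Python rather than COBOL,
purely for brevity, and the names (paragraph, perform,
perform_thru) are my own illustrations, not any vendor's API:

# A toy model of COBOL's PERFORM: named "paragraphs" (here
# plain functions) invoked by name, so each paragraph's source
# exists once no matter how often it is reused.
paragraphs = {}

def paragraph(fn):
    """Register a function as a named paragraph, in source order."""
    paragraphs[fn.__name__] = fn
    return fn

def perform(name):
    """PERFORM: execute one named paragraph."""
    paragraphs[name]()

def perform_thru(first, last):
    """PERFORM ... THRU ...: execute the paragraphs from 'first'
    through 'last' inclusive, in their order of declaration."""
    names = list(paragraphs)
    for name in names[names.index(first):names.index(last) + 1]:
        paragraphs[name]()

@paragraph
def read_record():
    print("reading record")      # a single-statement paragraph

@paragraph
def validate_record():
    print("validating record")

@paragraph
def write_record():
    print("writing record")

perform("read_record")                       # reuse one paragraph
perform_thru("read_record", "write_record")  # reuse a delimited sequence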
 
So COBOL provides reuse at a granularity not easily possible in   
other programming languages.  It implements that reuse   
through an internal "label goto and return" by the compiler   
whose external use by a programmer nowadays qualifies him   
for job termination.  

COBOL at least recognizes that the smallest unit of reuse is a
single statement (even if it occupies an entire paragraph).    
Granularity of reuse to the statement level means reuse from   
that level on up.  
 
Now in logic programming rules appear as a sequence of one   
or more statements.  In logic programming rules are reusable.    
Thus in logic programming reuse occurs down to the   
statement, i.e. specification, level.  My only point in   
referencing COBOL was to show it was not a new concept,   
something dreamed up for logic programming or by me.  
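
As a toy illustration, consider the following sketch.  It is
hypothetical Python, not any actual logic-programming system:
a "rule" here is just a named list whose members are either
single statements or the names of other rules, so reuse
happens by reference down to the single statement and no
rule's text is ever repeated.

# Hypothetical sketch only: rules as named lists of statements
# or other rule names.  A real logic-programming system
# resolves rules far more richly; the point here is the
# granularity of reuse.
rules = {
    "valid_age":    ["age >= 0", "age < 150"],   # two one-statement units
    "valid_name":   ["len(name) > 0"],
    "valid_person": ["valid_age", "valid_name"], # reuse by reference only
}

def holds(rule_or_stmt, env):
    """A named rule holds when all its members hold; anything
    else is a single statement, evaluated directly (toy use of
    eval on trusted strings only)."""
    if rule_or_stmt in rules:
        return all(holds(part, env) for part in rules[rule_or_stmt])
    return eval(rule_or_stmt, {}, dict(env))

print(holds("valid_person", {"age": 42, "name": "Ada"}))   # True
print(holds("valid_person", {"age": -1, "name": "Ada"}))   # False

Note that "valid_age" exists once yet is referenced from
"valid_person"; only the reference repeats, never the rule.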
 
You see it trips off our lips to talk of "gathering" user
requirements.  That implies a "holding" action until some
number of them meeting some unspoken criteria is reached.
That implies a delay.  If that delay exceeds the average
interval between new or changed requirements, then you can't
possibly keep up going manually through four stages to get
through construction: if requirements arrive on average once
a week while a gathered batch takes months to push through,
the backlog can only grow.  More often than not you have to
initiate a "freeze" on one gathering to get it through the
process, which means starting another for those requirements
arriving in the meantime.
 
You see we need to shift from a "gathering" mentality to a
"capturing" one.  We don't collect requirements in a holding
area.  Instead we pass each one off as one or more
specifications before, on average, the next one arrives.  In
that manner we never have a "backlog" nor the need to
initiate a "freeze".
 
We bring them in one specification at a time in the order in   
which they appear on input.  We don't have to determine   
ahead of time whether we have captured enough for   
meaningful results.  We let the software as part of its   
completeness proof do that for us.  Thus we have only to   
write specifications as they occur, allowing the software to   
engage continuously in analysis, design, and construction   
according to the dynamics of the input.  
 
That's what declarative buys you over imperative.  You only   
write specifications (here your specification language is a   
programming language), leaving it up to the software to write   
everything else.  You know it can do it, because that's what   
logic programming has always done.  It's not some weird   
invention of my own.  
 
"Sorry if I seem a little contentious here but my last   
assignment involved the implementation of a rule-engine   
based system.  Conceptually, a very promising technology but   
not for the faint of heart or hard of head. ..."  
 
We live in a world in which apparently we neither hear nor   
listen to ourselves.  In programming we have two things: data   
(referents) and expressions regarding their use (references).    
In documentation we have two things: data (referents) and   
expressions regarding their use (references).  So when Donald
Knuth proposes something like "literate programming" we
shouldn't be surprised that it deals with data (referents)
and with associating two different kinds of expression that
share the same purpose.

If you want to go "high class", joining those who feel better
saying "information processing" instead of "data processing",
you can upgrade "data" to "object".  In doing so you must
somehow admit that all data-oriented activities are also   
object-oriented.  That means we were doing object-oriented   
technology long before we formalized and restricted its   
application to a particular methodology.  All programming from   
its very inception to now and onward into the future is   
object-oriented.  It's not that we have a choice.  It's simply a   
matter of picking a particular choice.  
 
In the instance of what we now refer to as object-oriented   
technology we made a poor choice on selecting Smalltalk as   
our paradigm.   You have to be careful when you borrow   
freely from academic and research environments.  You ought   
to have extra caution when these people seize on the   
opportunity to benefit from a "for-profit" adventure.  They   
can't lose.  You can.  
 
No better example of this exists than UML, the "Unified"
Modeling Language.  You needed one not so much unified as
universal.  You see a universal modeling language is unified,
but a unified one is not necessarily universal.  You should
have been suspicious about the increased labor content when
your analysis and design stages went from two sources
(dataflows and structure charts) to fourteen.

Not only does it take longer, cost more, and run slower,
regardless of extreme or agile programming methods, but the
backlog grows faster.  In other words it had just the
opposite effect on the problem it was supposed to solve.
 
So I can't refute my own logic by denying object-oriented.  I   
can, however, intelligently decide not to engage seriously in   
"that" object-oriented technology.  
 
The issues here, regardless of what form of object-oriented
programming you practice, are reuse, its granularity, and
whether or not repetition occurs.  No one argues against the
principle of reuse.  Things start to differ after that point.
 
Logic programming and COBOL allow granularity from the   
single statement level on up.  They differ on invocation.  In   
COBOL it occurs only through a "perform" statement.  Thus   
the name of the paragraph (which is the programmer's   
responsibility) gets repeated, but not the paragraph, i.e. the   
source.  Logic programming is somewhat "richer" in options   
with respect to rule processing.  Nevertheless the rule itself is   
never repeated, only its references.  
 
You can understand why this happens in COBOL.  You use reuse
to avoid repetition, as each repeated instance would take up
valuable space.  At least that's what it would have done at
the time COBOL was invented.  So you had two benefits.  One,
space saving.  Two, only a single copy of source to maintain.
 
Only a single copy in a single program.  Otherwise a repeated   
copy in each.  It simplified maintenance within a single   
program, but not among multiple programs affected by a   
paragraph change.  Except for IBM's implementation of   
object-oriented COBOL I am not aware of any widespread use   
of the "copy" statement in a processing section to perform the   
same function as an "include" in C.  Even in C (or PL/I) the   
"include" statement is seldom applied to processing   
statements, certainly not to a single statement regardless of   
the possibility.  
 
Why is this important?  What's the big deal?  
 
Currently editors, compilers, and a whole host of supporting
utilities are based on a file system.  Single statement files
are not impossible, only impractical.  To use them means
having to explicitly and manually name each file.  Besides
that, having a six million statement program means
maintaining six million files.  Even if you break them down
logically somehow into multiple directories, it still becomes
unwieldy.
 
Thus the file system itself militates against the use of single
statement files.  Even in COBOL single statement paragraphs   
occur physically within a body of other paragraphs, all of   
which are contained in a single file.  Little wonder then that   
for practical purposes we use one file per procedure.  Not only   
do we not have granularity at the statement level, but we do   
not even have it at the control structure level.  In fact we do   
not have it at any statement group level below the procedure   
level.  
 
All due to basing source creation and maintenance on use of a
file system.  The alternative is to use a database.  Everyone
gets hot for that because it's good to have on your resume.
So how do you use a database?  Chances are exactly like a
file system, except that you have a row name instead of a
file name.
 
The secret here lies in taking advantage of a database and   
the relational language supporting access to it.  You do that   
by storing each statement separately, allowing the software   
to generate the name automatically based on content.  No
matter how many times a programmer writes the same statement
in however many different programs, the actual statement
itself appears only once in the database.
 
In turn statement assemblies occur only as a named list of   
names of statements or assemblies.  You have the "pure"   
manufacturing environment that object-oriented talks about in   
terms of reuse but never achieves.  
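
Here is a sketch of that scheme.  The table layouts and names
are assumptions of mine for illustration, not any actual
tool: each distinct statement is stored once under a name
derived from its content, and an assembly is nothing but a
named list of names.

# Illustration only: content-named statements in a database,
# with assemblies as ordered lists of member names.
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE statement (name TEXT PRIMARY KEY, source TEXT);
    CREATE TABLE assembly  (name TEXT, position INTEGER, member TEXT,
                            PRIMARY KEY (name, position));
""")

def store_statement(source):
    """Name the statement from its content; duplicates collapse
    into a single stored row."""
    name = hashlib.sha256(source.encode()).hexdigest()[:12]
    db.execute("INSERT OR IGNORE INTO statement VALUES (?, ?)",
               (name, source))
    return name

def store_assembly(name, members):
    """An assembly is only an ordered, named list of names; no
    statement source is ever repeated."""
    db.executemany("INSERT INTO assembly VALUES (?, ?, ?)",
                   [(name, i, m) for i, m in enumerate(members)])

# The same statement "written" twice is stored exactly once.
a = store_statement("total = total + amount")
b = store_statement("total = total + amount")
store_assembly("post_payment",
               [a, store_statement("count = count + 1")])
print(a == b)                                               # True
print(db.execute("SELECT COUNT(*) FROM statement")
        .fetchone()[0])                                     # 2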
 
All this occurs through a data repository/directory
automatically accessed and maintained by the software for all
documentation, all source code, and all data.  All in one place,   
one database with one access point, the directory.  The   
software provides the necessary user interface.  
 
In the end it's not just one paradigm change that's necessary
but a set of them: the overhaul of a complete system of
paradigms.
 
 
  