SCOUG-Programming Mailing List Archives
With so many choice entry points available, perhaps none
offers a better path than the beginning.  In the beginning, and
continuing up to now, enterprise IT departments have had no
means of developing and maintaining software at a pace
consistent with demand.  As a result they faced a growing
backlog of user requests, in many instances so great that the
users started up their own IT activities.  These in turn
supported local, not global, interfaces, and so further
increased the overall workload within the enterprise.
 
Now no one really had an argument with software
development costs, regarding them as something of a
necessary evil.  However, they did get a mite upset with
maintenance costs, which quickly began to exceed development
costs.  Having only a fixed amount of resources, they
soon discovered that they had to invest 90% of it in maintenance
and only 10% in new development.  In short they had no
means of meeting aggregate demand: the backlog grew, the
costs grew, and the effort required grew as well.
 
Now let's understand that in an enterprise you will eventually
have developed enough applications that it is only normal to
devote more resources over time to maintenance instead of
new development.  Eventually 90% might even be
reasonable.  In fact it should creep closer and closer to 100%.
 
Unfortunately the focus on this ratio made it seem that we
needed to somehow redress the balance.  The competition for
resources then became one of doing either development or
maintenance.  It's very much like the competition between
open and closed source: it ignores their common problem,
the cost in time and effort.  If you continue to
increase programming productivity, at some point you will have
sufficient human resources to satisfy both regardless of any
ratio.
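
To put illustrative numbers on that (mine, purely for the
arithmetic): a 100-person shop at the 90/10 split fields the
equivalent of 10 people on new development.  Triple its
productivity and the same shop delivers the equivalent of 270
on maintenance and 30 on development, more of both than
before, without touching the ratio at all.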
 
How then do you increase programming, and thus programmer,
productivity?  Well, programmers think, write, meet, and wait.
Any increase in productivity will come from reducing any one
of these...or all of them.  That sets us up for what follows.
 
We also have a penchant for silver bullets.  We expect those
silver bullets to come from those far removed from the concerns
of daily programming labor: from research.  In this instance a
research center, specifically PARC (Palo Alto Research
Center), had given us innovations like the pointing device in
conjunction with a graphical user interface.  Being ga-ga here,
we could only hope that they had some other wonderful
scheme to offer us.
 
Unfortunately they did.  It was called SmallTalk, the genesis of
all object-oriented programming and methodologies.  What got
overlooked in all this was to whom the "small" in SmallTalk was
directed.  Basically it was non-programmers, initially children,
the real "small" folks.  Like LOGO, a similarly initiated program
for children, it had an intended audience far separated from
the professional programmer.
 
That should have been enough to cause the enterprise IT   
staff to look elsewhere, but, hey, when there's money to be   
made, objective reason gets quickly subverted and diverted.  
 
At its basis lies the issue of who gets to process data.  Well,   
code processes data.  Your only choice lies in determining   
which code gets to actually process the data and which has   
to call upon it to get the processing done.  
 
So we introduce a generic term for data, which we call an
"object".  We use the generic term "object" as the data can
be either a data element or some assembly of such elements
(an aggregate: a structure, array, or list).  The code
authorized to perform "operations" upon an object we denote
as its "methods".  Code elsewhere wishing to have the
processing done communicates its desire in "messages", a
means of inter-module communication.
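
A minimal Java sketch of all three terms (Java only because it
figures later in the story; the names are my own, purely
illustrative):

  public class Account {
      private int balance;                 // the "object's" data

      public void deposit(int amount) {    // a "method": an authorized operation
          balance += amount;
      }

      public int balance() {
          return balance;
      }

      public static void main(String[] args) {
          Account a = new Account();
          a.deposit(100);                  // the "message": a request to the method
          System.out.println(a.balance());
      }
  }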
 
Now understand that we have introduced no new data types   
or forms (elements or aggregates).  We've introduced no new   
means of isolating the permitted operations on data to specific   
modules.  We've introduced no new means of inter-module   
communication (messaging) for one module to invoke another   
to perform a given operation on data.  All these have always   
been available.  
 
What we did eliminate was the matter of choice...or
programmer freedom, as if it had been too frequently abused
and needed correcting.  It is for programming methodologies
what PASCAL was for programming languages: a means of
constraining programmer freedom...for their own good.
 
Now one potential "gotcha" remained.  As different objects
used the same methods, we needed some organizational means
to prevent unnecessary method duplication.  Never mind that
the entire history of programming had already provided a means
for different objects to use the same methods: it's called a
function reference, or subroutine.
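
For the record, that prior art in the same Java dress
(hypothetical names again): one plain subroutine, many
different callers' data, no hierarchy required.

  public class Shared {
      // A plain subroutine: any caller may pass its own data.
      static int sum(int[] data) {
          int total = 0;
          for (int d : data) total += d;
          return total;
      }

      public static void main(String[] args) {
          int[] receipts = {1, 2, 3};
          int[] payments = {10, 20};
          System.out.println(sum(receipts));   // same method...
          System.out.println(sum(payments));   // ...different data
      }
  }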
 
Instead we developed the concept of an "object class"
decomposed from its highest point (the generic "object")
through a succession of lower-level objects: an object class
hierarchy.  Shared methods then need only be stored with the
highest-level object in a path containing all objects needing
that method.  Lower-level objects then gain access to
higher-level methods through a process of "inheritance".
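
As a Java sketch (names hypothetical), the shared method is
stored once with the highest-level object in the path, and the
lower-level objects inherit it:

  // The shared method lives once, at the highest object in the path.
  class Document {
      String title;
      String describe() { return "Document: " + title; }   // the shared method
  }

  // Lower-level objects gain access to it through inheritance.
  class Invoice extends Document { }
  class Report  extends Document { }

  class Demo {
      public static void main(String[] args) {
          Invoice inv = new Invoice();
          inv.title = "Q3 billing";
          System.out.println(inv.describe());   // inherited, not duplicated
      }
  }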
 
That works fine if you have something of a neat hierarchy
where every object has at most one parent.  It starts to get a
little messy when an object can have two, three, or more;
thus the major battle over single versus multiple parents.  The
implementers won out over the programmers in that JAVA allows
only single-parent inheritance among classes (C++, for its part,
permits multiple parents, at a well-known cost in complexity).
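
Concretely, a Java sketch (hypothetical names) of the
single-parent rule and its sanctioned workaround, the interface:

  class Parent1 { }
  class Parent2 { }
  interface Trait { }

  // class Child extends Parent1, Parent2 { }       // rejected: a class gets one parent
  class Child extends Parent1 implements Trait { }  // interfaces are the workaround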
 
Now the class hierarchy of objects and their methods gets
packaged in the form of class libraries.  Oddly enough, some
methods sent messages to others in order to achieve their
ends.  Perhaps not quite as oddly...or unexpectedly...different
class libraries had different object hierarchies, different
methods, and different inter-method connections.  This
resulted in incompatible libraries, which meant that the user
could not mix and match among them.
 
It will probably come as no surprise to anyone that Microsoft   
opted for something different from IBM's SOM or more recently   
Sun's JAVA.  It will also come as no surprise that considerable   
effort has gone into standardization.  
 
Now before you get to programming, and after you have the
specifications, in between you have analysis and design.  Here
multiplicity reared its ugly head in the form of three major OO
design methodologies.  Their competition threatened to derail
OO's penetration of the enterprise, so the competitors got
together, merging their methodologies into one.  That's why
the "U" in UML stands for "Unified" instead of "Universal".
After all, there was no sense in going overboard.

Now at last count UML had something in the neighborhood of
14 or 16 different forms.  That's not a big number, except that
it replaced a two-decade-old system of structured design that
had only two: dataflows and structure charts.
 
The net result of all this (and no other methodology has had
so many resources poured into it) is that it takes longer and
costs more to develop and maintain software.  The backlog
has continued to grow, performance depends upon improved
hardware technology, software productivity has declined,
maintenance costs continue to increase, and we're caught
between two deficient products in JAVA and C#.  We
probably shouldn't add insult to injury by mentioning that the
two- to six-month learning curve of the previous methodology
has been stretched, by serious assertion, to two years.

So if I don't seem too impressed by OO, attribute it to my
advocacy of the "principle of least effort".  Actually I should
modify that to the principle of most effort by implementers to
produce the least effort necessary by users.  I was going to
throw in the KISS principle, but there's no sense trying to
deter any optimist digging through a pile of manure, believing
that there must be a horse in there someplace.  The horse
didn't pile it that high.  People did.  You won't find any people
in there either.
 
So, Greg, find me a class library which defines a new data
type, like a variable-length string with lower as well as upper
bounds, by redefining an existing one.  I think you will find
yourself writing such code, and maintaining it, rather than
expressing it as a rule in a declaration statement and leaving
it up to the implementation to enforce.
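
A hypothetical Java sketch of what that comes to.  The lower
and upper bounds below live entirely in code someone writes
and maintains; no declaration enforces them for you:

  // The class-library route: the bounds rule is hand-written code.
  public final class BoundedString {
      private final String value;

      public BoundedString(String value, int min, int max) {
          if (value.length() < min || value.length() > max) {
              throw new IllegalArgumentException(
                  "length " + value.length() + " outside " + min + ".." + max);
          }
          this.value = value;
      }

      public String toString() { return value; }

      public static void main(String[] args) {
          System.out.println(new BoundedString("hello", 2, 40));   // accepted
          System.out.println(new BoundedString("x", 2, 40));       // rejected: throws
      }
  }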
 
 