SCOUG-Programming Mailing List Archives
An interesting book review appeared in the Sunday LA Times Business
section about an (ongoing) open source project noted for both its
funding and its failure. Apparently the project wanted to offer an
open source competitor to Microsoft's Outlook, which I assume is
similar in purpose to Sundial's Relish, which I use.

I suppose that open source projects flounder or fail at a rate at
least equal to, if not greater than, that of closed source
(proprietary) projects. Both flounder or fail at rates high enough
that we in the programming profession should feel grateful we don't
bear the additional expense of malpractice insurance in order to
continue working.
So funding apparently cannot guarantee success. Even the better
administration and project management of closed source cannot
guarantee success. It would seem that even programming skill level
cannot guarantee success. All of these run up against another
contributor, one that actively works against success: ego.
1970 marks a critical year. In that year IBM's Federal Systems
Division made public the Chief Programmer Team it used on the
New York Times project. In that year IBM announced its improved
programming technologies, which included the chief programmer team
approach, structured programming, structured (peer) review, and the
librarian who maintained changes to the source and did the compiles.
Also promoted at that time was the idea of "egoless" programming.
Among all of these only structured programming survived the
"politics" of IT. The Chief Programmer Team failed because it became
politically unacceptable to come face-to-face with the fact that
most "senior" programmers (in fact nearly all) could not qualify as
a chief programmer. If you didn't have the Chief Programmer Team,
you didn't need the librarian, its operational arm. Structured
reviews fell out of favor, primarily due to a failure to understand
the concept. And we have countless examples of the disregard for
egoless programming. One has to wonder how structured programming
survived the attacks made upon it.
So we had, and have, programmer egos unrelenting in their refusal
to submit to the greater good: the actual success of an endeavor.
It apparently infects even open source, which has a greater
dependence upon cooperation than closed source does. Or, in the
instance of this book review, it shows up as "my Python beats your
Perl."
We live in a world of change, a dynamic world. Even the systems we
create, human as opposed to natural systems, cannot avoid change.
In the best of all possible worlds we could adapt to each change as
fast as it occurs. In the next best, we could adapt to the current
change before the next one occurs. In the "real" software world
they stack up on us, creating a "backlog".
I don't have a means of accurately predicting the next change (the
best of all possible worlds), what it will be or when it will occur,
lacking the psychic ability necessary. But I did think that I could
find a means of doing the next best thing: adapting to the last
change before the next one arrives. That at least eliminates
backlog, or reduces it to a transitory condition.
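To put some rough arithmetic on that (my own back-of-the-envelope
framing, nothing formal):

    Let c = the average rate at which changes occur
    Let a = the average rate at which we adapt to them

    The backlog grows at (c - a) per unit of time.

    If a >= c on average, any backlog remains transitory:
    it drains at least as fast as it accumulates.
    If a < c on average, the backlog grows without bound.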
What does it mean to have a rate of adaptation at least equal to
the rate of change...on average? On average, no backlog. That
condition has basically and essentially never occurred since the
first program: not with the first generation actual (machine)
programming language, not with the second generation symbolic
assembler plus macros, and not with the third generation HLLs...all
imperative languages. To be fair, it has not occurred with the
dominant fourth generation programmer language, Prolog, either.
However, it has occurred with the dominant fourth generation
non-programmer (user) language, SQL.
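To make the imperative/declarative distinction concrete, here is a
small illustration (the parts table and its columns are
hypothetical, invented purely for the example):

    -- Third generation (imperative): the programmer spells out the
    -- "how" -- open the file, loop over the records, test each one,
    -- accumulate a total, close the file.
    --
    -- Fourth generation (declarative): state only the "what" and
    -- leave the "how" to the implementation:

    SELECT SUM(qty * price)
      FROM parts
     WHERE qty > 0;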
So with an obvious backlog failure across three generations of
imperative programming languages, and at least one instance of
success (SQL) in a fourth generation, declarative language, why
does an entire (or nearly entire) population of programmers either
write in or develop third generation programming languages (Perl,
Python, Java, etc.)? Why does all this obvious talent seem
incapable of thinking outside its third generation box?
Well, for one thing, if on average no backlog existed, we would
need far fewer of them. Job security. Protecting the ego. What
does it mean to lose your status as one of the proud? How many
fewer? 50%? 80%? 95%? 99.99966% (six sigma)? And outsourcing, no
matter how much it has brought down costs and increased "local" IT
unemployment, has had no effect on reducing the backlog. So if your
ego doesn't get you, the lower cost will. The problem remains.
So, PL/I advocate that I am, I have to concede its imperative
failure to solve the backlog. Here I have an example of an
imperative programming language more functionally complete, more
operator complete (with the possible exception of APL), and more
data type complete...in 1970...than any other third or fourth
generation HLL since. In some 37 years it has had no HLL match,
either singly or in combination.
The natural question then arises: what makes the backlog difference
between Prolog and SQL? Well, Prolog is post-1970 and based on C
data types, most specifically the infamous "int". SQL is post-1970
with a PL/I-like syntax and PL/I-like data types. So what does it
take to upgrade PL/I from third generation imperative to fourth
generation declarative? An assertion statement, a list aggregate
(an SQL-style data type allowing zero (false) or one or more (true)
instances), and the ability to specify rules for data variables.
Obviously PL/I, in terms of completeness, remains a whole lot
closer to that than any other candidate.
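For what I mean by a list aggregate, note that every SQL query
already returns one (the employees table here is hypothetical, for
illustration only):

    SELECT name
      FROM employees
     WHERE dept = 'QA';

    -- The result set is the aggregate: zero rows returned means
    -- "false" (no one in QA); n rows are the n true instances.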
In a previous message I touched upon the aspect of a self-repairing
program. I also touched upon how the "unnecessary" designation by
K&R in "The C Programming Language" caused the non-inclusion of
PL/I's system and user error conditions, along with a host of other
PL/I features, e.g. variable precision decimal and binary
arithmetic. Perhaps we can attribute this more to ego salving than
to incompetence.
We are in the process of building a single tool in a single
language with a single library (data repository/directory), using
only a single source, for the single purpose of ensuring the future
of OS/2 (and eCS). You need to grasp that its success depends on
this ability to manage, reduce, or eliminate backlog...on the
ability to institute changes to application systems not only as
fast as they occur elsewhere in newer versions, but even faster
than they currently occur elsewhere.
We have a systemic failure in the backlog. No amount of people
resources with current third generation tools is solving it. So
what chance does a lesser amount with the same tools have of
keeping up? How do you go up against a company with a 50-plus
billion dollar cash reserve, using the same tools, and compete in
realistic time?
Fortunately for us, that same company has an even greater backlog
problem. Moreover, it is so mired in the use of its tools that it
has no means of extracting itself without losing more financially
than it has in its reserve. Even if it did, even if it adopted the
approach we currently investigate, and even if that approach proved
successful, the company would cease to exist.
So too would IBM Global Services and Software, CA, Borland, Novell,
and any software company based on "off the shelf" packaged
(build-to-stock) software. If I don't have a backlog, if I can
implement changes on average faster than they occur, then that
changes the "business model" significantly with respect to "cost
of sales".
I knew in offering the Warpicity Proposal in 1998 that it had to
include a fourth generation, declarative language that, like SQL
and unlike Prolog, offered a means to eliminate backlog. Moreover,
since SQL requires the separate generation of table data, even if
one uses SQL to do so, I had to eliminate that time with respect to
test data generation. Fortunately I had a logic programming
alternative to the "clausal" logic of Prolog and SQL in the
"predicate" logic of Trilogy and the Z specification language:
rule-based, automatic generation of test data.
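To give the flavor of rule-based generation in SQL terms (a sketch
only; it assumes a dialect such as PostgreSQL that accepts VALUES
lists in FROM, and the rules shown are invented for illustration):

    -- Enumerate candidate values, then let the rules (predicates)
    -- select the valid combinations. The engine, not the
    -- programmer, does the generating.

    SELECT a.n AS qty, b.n AS reorder_point
      FROM (VALUES (0), (1), (10), (100)) AS a(n)
     CROSS JOIN (VALUES (0), (5), (50)) AS b(n)
     WHERE a.n >= 0        -- rule: quantity is non-negative
       AND b.n <= a.n;     -- rule: reorder point cannot exceed qty

    -- Every row of the result is a test case satisfying all the
    -- rules; an empty result would mean the rules are unsatisfiable.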
I defend this approach based on an egoless assessment, an objective
analysis if you will, of how best to address the backlog problem
which plagues our programming industry. The OS/2 community found
itself faced with the problem of catching up. You don't have to
catch up unless you fall behind.
So if the existing set of tools (with the exception of non-embedded
SQL queries) could not eliminate the backlog regardless of the
resources thrown at it (or perhaps contributing to it), why would I
want to inflict that on a diminishing, or diminished, OS/2
community?
Have you downloaded your ZIM 7.0 source yet?