Here we assume that he who wrote the source understands
what he has written. He needs some feedback that the logic
he wrote has the intended effect. He then needs test data.
Once more he faces a writing effort. As always when
interested in productivity, he seeks the minimum effort to
achieve the maximum result.
Once more we find ourselves caught in a dilemma marking the
productive differences between third (imperative) and fourth
(declarative) generation languages and associated mentality.
Do I have to explicitly write each set of test data (imperative),
or can I specify the range of values for each variable
(declarative), some expected to be "true", others "false",
leaving it up to software to exhaustively generate the
enumerated sets of test data?
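To make the declarative alternative concrete, here is a minimal sketch in Python, assuming a hypothetical specification format (the variable names and ranges are illustrative, not from any existing tool): you declare the allowed values per variable once, and software enumerates every combination as test data.

```python
# Hypothetical sketch: a declarative spec of each variable's
# allowed values, from which every test case is generated,
# instead of writing each set of test data by hand.
from itertools import product

# Declarative specification: variable name -> range of values.
spec = {
    "flag": [True, False],
    "count": range(0, 3),        # 0, 1, 2
    "mode": ["read", "write"],
}

def generate_cases(spec):
    """Yield one dict per combination of the declared ranges."""
    names = list(spec)
    for values in product(*(spec[n] for n in names)):
        yield dict(zip(names, values))

cases = list(generate_cases(spec))
# 2 flags * 3 counts * 2 modes = 12 enumerated test cases
```

The imperative equivalent would be twelve hand-written data sets; the declarative spec is three lines, and the exhaustiveness comes for free.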
We are also caught in the dilemma of interpretive versus
compiled mode. The one (interpretive) regards each
statement, data or process, as a "complete" instance capable
of being exhaustively tested. The other (compiled) will
refuse to execute anything unless given a "complete"
procedure. Having to deliberately create "artificial"
procedures to trick a compiler into letting me test separate
source code segments consumes our time and thus reduces our
productivity.
It makes you question why we introduce incremental compilers
at all. Why make a compiler into a pseudo-interpreter when you
can have a "real" interpreter capable of either interpretive or
compiled execution?
So the question is not one of understanding a process, on
which we agree, but what is the most productive means of
verifying our understanding.
"You are welcome to your opinion. Linux has come much
further than OS/2 in a comparable amount of time. I am only
considering the time that OS/2 was actually under active
development."
Last time I checked, Linus Torvalds offered the first Linux
kernel in 1991, some ten years or so ago. To date it hardly
matches OS/2 in server and client performance or functions. It
still lacks anything the equal of PM or WPS. All this from a
participating community not of tens or hundreds but of thousands
and tens of thousands. Randell Flint was quite correct in
saying that open source development lacks the efficiency of
closed source organization and management.
"This is true, although Deming never claimed that all defects
could be elimiated (sic) by a single tool. He is all about the
process."
You see, once more we agree. We agree on improving the
process to reduce the defects that it introduces. Another thing
we agree on in improving the process is detecting and
correcting the defects that we introduce (syntax, semantic,
logic) as early (and as quickly) as possible.
If I boast of being able to take a number of separate functions
and string them together to achieve yet another function,
does not the string itself represent one tool? The question of
productivity becomes one of whether I engage in the separate
process of writing filters to integrate the functions into a
string (a stream in the UNIX world), or engage in integrating,
i.e. rewriting, the functions into a single process. In which
instance do I truly have a single tool instead of a composite or
hodgepodge?
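As a sketch of the first alternative (the function names here are hypothetical, chosen only for illustration), the string of separately written "filters" can itself be packaged as one callable tool, UNIX-pipeline style, without rewriting any of them into a single process:

```python
from functools import reduce

# Three hypothetical, separately written "filter" functions.
def strip_blanks(lines):
    return [ln for ln in lines if ln.strip()]

def upper(lines):
    return [ln.upper() for ln in lines]

def numbered(lines):
    return [f"{i}: {ln}" for i, ln in enumerate(lines, 1)]

def pipeline(*stages):
    """Compose stages left to right: the string of functions
    becomes a single callable tool."""
    return lambda data: reduce(lambda acc, f: f(acc), stages, data)

tool = pipeline(strip_blanks, upper, numbered)
result = tool(["alpha", "", "beta"])
# result == ["1: ALPHA", "2: BETA"]
```

Whether that composite counts as "a single tool" or a hodgepodge is exactly the question at issue; the sketch only shows that the integration itself need not mean rewriting.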
I advocate a single tool, a single language, and a single
interface with the tool and the interface written in the
language. In fact the language is written in the language, a
universal specification/programming language. The issue is
not that it doesn't exist, but why it doesn't. The answer, as
I've pointed out often before, is one of a conflict of economics:
the economics of the tool user versus that of the tool maker.
The sad lesson of IBM's expensive venture with AD/Cycle was
not that it couldn't work, but the economic and competitive
consequences of living with something that did. The only profit
from success accrued to the user, in his productivity. If the
only value (profit) lies in the tool's use, not in its
manufacture, then only the tool user can profit by funding its
manufacture. Of course, in doing this the tool becomes the
property of the user. The user becomes his own vendor.
"What makes you believe this was not done before the build
went out the door. For example, have you never looked at
the built-in regression testing tools included with Mozilla?"
Once more we agree. Then we only have to ask how an
undetected defect can get out the door without admitting the
incompleteness of the regression testing. On top of that, if
we have a "known" defect, how is it that our correcting ability
cannot keep pace with our detecting? Why are we not as
productive in correcting as we are in detecting? Does that
not lead you to suspect the efficacy of the toolset, whether
one tool or many? After all, productivity depends upon
the toolset.
"I'll be glad to beta-test your system once you have a
real-world prototype up and running."
It's interesting that, when you base a concept on known
technology and on how it in fact works, you can still be told
you have not furnished enough in the way of a prototype. A
prototype is a demonstration of feasibility. I showed examples
of functional capability, offering these from "real world" tools.
Now you may question whether, though they work separately,
they will work together. If you do, then you have to question
why the emx toolbox has so many separate tools that you can
string together or intermix in a process. Why is one process of
integration defective and the other, based on the same
principles, not?
I have to ask why you do not believe the evidence of what you
in fact do. It's a form of self-denial that confuses me.
17 February 2003
The Southern California OS/2 User Group