Copyright 1998-2024, Southern California OS/2 User Group. ALL RIGHTS RESERVED.

SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group. OS/2, Workplace Shell, and IBM are registered trademarks of International Business Machines Corporation. All other trademarks remain the property of their respective owners.


SCOUG-Programming Mailing List Archives

Return to [ 17 | February | 2003 ]

>> Next Message >>


Date: Mon, 17 Feb 2003 08:11:36 PST
From: "Lynn H. Maxson" <lmaxson@pacbell.net>
Reply-To: scoug-programming@scoug.com
To: "scoug-programming@scoug.com" <scoug-programming@scoug.com>
Subject: SCOUG-Programming: In retrospect

Content-Type: text/plain

Steven,

"Here we will have to agree to disagree. Better tools will
reduce defects, but defects will always occur."

I disagree that we disagree. Here we agree.

You've introduced an additional source of errors to this discussion
in the form of defective tools. While I spent the early years
of my IBM career in detecting and correcting such defects
down to the individual electronic component of a logic circuit
or a faulty relay point, for this discussion I would prefer
concentrating on software errors created by people.

Now I thought we agreed at the SCOUG meeting that a "smart"
editor could immediately detect syntax and semantic errors
due to misspelling. The correction could occur immediately
following the detection, i.e. at the same step in the process,
that of data entry.
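That entry-time detection can be sketched in Python (the message itself names no language; `check_entry` is an illustrative name standing in for the smart editor's checker, with the built-in `compile()` playing the part of its syntax analyzer):

```python
# A sketch of entry-time syntax checking, assuming Python as the
# entered language; check_entry is an illustrative name, not a
# real editor API. compile() stands in for the smart editor's
# syntax checker.

def check_entry(statement: str):
    """Return None if the statement parses, else a description
    of the error, at the moment of data entry."""
    try:
        compile(statement, "<entry>", "exec")  # syntax check only
        return None
    except SyntaxError as err:
        return "line %s: %s" % (err.lineno, err.msg)

# A well-formed statement passes; a truncated one is caught
# immediately, at the same step as data entry.
ok = check_entry("total = price * quantity")
bad = check_entry("total = price *")
```

Semantic checks, such as flagging a misspelled name against the names already declared, would hook in at the same point.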

That leaves only logic errors to account for. The question
becomes what known method comes closest to detecting logic
errors after data entry. The earliest you can test either a
source statement or segment (a source statement group) is
also during data entry immediately after completing syntax
and semantic checking. That's what an interpretive system
like APL supports.
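The APL-style immediacy can be sketched in Python, whose `exec` against a persistent namespace plays the role of the interpreter's workspace (`enter` and the variable names are illustrative):

```python
# A sketch of APL-style immediate execution: each statement, once
# it passes the syntax check, is executed at once against a
# persistent workspace, so it can be tested the moment it is
# entered. enter() is an illustrative name.
workspace = {}

def enter(statement: str):
    code = compile(statement, "<entry>", "exec")  # syntax check
    exec(code, workspace)                         # immediate test

enter("rate = 5")
enter("interest = 1000 * rate // 100")
# workspace["interest"] now holds 50, verified without any
# separate compile-link-execute cycle.
```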

If I wait until a compile, a link, and then an execute before I
detect an error, considering that I must first spend time
creating test data, which every example you have offered thus
far says I can't do completely, my only means of correcting
that error lies in returning to data entry mode. Isn't it a
reasonable question to ask, "Why should I ever leave data entry
mode, where I can achieve the same test results and to which I
must return in order to correct detected errors?" What you lose
in going to the compiler and the remainder of the process is
time. Choosing compiled over interpretive mode automatically
means a loss in productivity.

Now look at what the two have in common: syntax analysis,
semantic analysis, and code generation. The only difference
lies in the type of code generated, interpretive or compiled.
So why do I need two different methods when all I need is the
ability to choose the type of output? Does anyone reading
this seriously believe that I cannot indicate to a tool either
ahead of time or at that time the choice of output I desire?
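The shared front end with a selectable back end might be sketched as follows; this is a Python toy, not any existing tool, and `process` with its `mode` flag is an assumption for illustration:

```python
# A sketch of one front end with a choice of back ends. Syntax
# analysis, semantic analysis, and code generation are shared;
# only what happens to the generated code differs.

def process(source: str, mode: str = "interpret"):
    code = compile(source, "<src>", "exec")  # shared analysis and
                                             # code generation
    if mode == "interpret":
        ns = {}
        exec(code, ns)   # execute the generated code immediately
        return ns
    return code          # hand back the generated code for later

result = process("x = 2 + 3")            # interpretive: x is 5
saved = process("x = 2 + 3", "compile")  # compiled: a code object
```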

I do not care that no current single tool does this. I only need
one tool that does one and another the other and combine
them. It's a matter of packaging something old rather than
inventing something new.

To return to the point: while I seek to reduce defects, I can't
prevent them. What I can do is establish an environment for
their earliest detection and correction. You cannot do that
with a separate edit-compile sequence. If I can only correct
errors in data entry (edit) mode, then why should I leave it if
I don't have to?

"The test data is only as good as the understanding of the
process to be tested."

Once more we agree. It's not all that hard.

Here we assume that he who wrote the source understands
what he has written. He needs some feedback that the logic
he wrote has the intended effect. He then needs test data.
Once more he faces a writing effort. As always when
interested in productivity he seeks the minimal effort to
achieve the maximum results.

Once more we find ourselves caught in a dilemma marking the
productive differences between third (imperative) and fourth
(declarative) generation languages and associated mentality.
Do I have to explicitly write each set of test data (imperative)
or can I specify the range of values for each variable
(declarative), some expected to be "true", others "false",
leaving it up to software to exhaustively generate the
enumerated sets of test data?
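The declarative alternative can be sketched in Python: declare a range of values per variable and let software exhaustively enumerate the combinations (the variable names are illustrative):

```python
# A sketch of declarative test-data generation: specify a range
# per variable; the software enumerates every combination, so no
# individual test set is written by hand.
from itertools import product

ranges = {
    "width":  range(1, 4),   # 1, 2, 3
    "height": range(1, 3),   # 1, 2
}

test_sets = [dict(zip(ranges, values))
             for values in product(*ranges.values())]
# 3 x 2 = 6 exhaustive test sets, generated, not written.
```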

We are also caught in the dilemma of interpretive versus
compiled mode. The one (interpretive) regards each
statement, data or process, as a "complete" instance capable
of being exhaustively tested. The other (compiled) will
refuse to execute anything unless given a "complete"
procedure. Having to deliberately create "artificial"
procedures to trick a compiler into letting me test separate
source code segments consumes our time, thus reduces our
productivity.

It makes you question why we introduce incremental compilers at all.
Why make a compiler into a pseudo-interpreter when you can
have a "real" interpreter capable of either interpretive or
compiled execution?

So the question is not one of understanding a process, on
which we agree, but what is the most productive means of
verifying our understanding.

"You are welcome to your opinion. Linux has come much
further than OS/2 in a comparable amount of time. I am only
considering the time that OS/2 was actually under active
development."

Last time I checked, Linus Torvalds offered the first Linux
kernel in 1991, some ten years or so ago. To date it hardly
matches OS/2 in server and client performance or functions. It
still lacks anything the equal of PM or WPS. All this from a
participating community not of tens or hundreds but thousands
and tens of thousands. Randell Flint was quite correct in
saying that open source development lacks the efficiency of
closed source organization and management.

"This is true, although Deming never claimed that all defects
could be elimiated (sic) by a single tool. He is all about the
process."

You see, once more we agree. We agree on improving the
process to reduce defects that it introduces. Another thing
we agree on in improving the process is detecting and
correcting defects that we introduce (syntax, semantic, logic)
as early (and as quickly) as possible.

If I boast of being able to take a number of separate functions
and string them together to achieve yet another function, does
not the string itself represent one tool? The question of
productivity becomes one of whether I engage in the separate
process of writing filters to integrate the functions into a
string (a stream in the UNIX world) or engage in integrating,
i.e. rewriting, the functions into a single process. In which
instance do I truly have a single tool instead of a composite
or hodgepodge?
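The point that a string of functions is itself one tool can be sketched in Python rather than a UNIX shell; the stage functions here are illustrative stand-ins for separate filters:

```python
# A sketch of "the string is one tool": separate filter functions
# strung together by a generic combinator, yielding a single
# composite tool without rewriting the stages.
from functools import reduce

def strip_blanks(lines):
    return [ln for ln in lines if ln.strip()]

def upper(lines):
    return [ln.upper() for ln in lines]

def pipeline(*stages):
    """String separate functions together; the resulting string
    is itself one tool."""
    return lambda data: reduce(lambda d, f: f(d), stages, data)

tool = pipeline(strip_blanks, upper)
cleaned = tool(["one", "", "two"])   # -> ["ONE", "TWO"]
```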

I advocate a single tool, a single language, and a single
interface with the tool and the interface written in the
language. In fact the language is written in the language, a
universal specification/programming language. The issue is
not that it doesn't exist, but why doesn't it? The answer as
I've pointed out often before is one of a conflict of economics:
the economics of the tool user versus that of the tool maker.

The sad lesson of IBM's expensive venture with AD/Cycle was not
that it couldn't work, but that IBM would have had to live with
the economic and competitive consequences of having something
that did. The only profit from its success accrued to the user,
in his productivity. If the only value (profit) lay in the
tool's use, not in its manufacture, then only the tool user can
profit from funding its manufacture. Of course, in doing this
the tool becomes the property of the user. The user becomes his
own vendor.

"What makes you believe this was not done before the build
went out the door. For example, have you never looked at
the built-in regression testing tools included with Mozilla?"

Once more we agree. Then we only have to ask how an undetected
defect can get out the door without admitting the
incompleteness of the regression testing. On top of that, if we
have a "known" defect, how is it that our correcting ability
cannot keep pace with our detecting? Why are we not as
productive in correcting as we are in detecting? Does that not
lead you to suspect the efficacy of the toolset, whether one
tool or many? After all, productivity depends upon the toolset.

"I'll be glad to beta-test your system once you have a
real-world prototype up and running."

It's interesting that when you base a concept on known
technology, and on how that technology in fact works, you are
told you have not furnished enough in the way of a prototype. A
prototype is a demonstration of feasibility. You show examples
of functional capability. I offered these from "real world"
tools.

Now you may question whether tools that work separately will
work together. If you do, then you have to question why the emx
toolbox has so many separate tools that you can string together
or intermix in a process. Why is one process of integration
defective and the other, based on the same principles, not?

I have to ask why you do not believe the evidence of what you
in fact do. It's a form of self-denial that confuses me.

=====================================================

To unsubscribe from this list, send an email message
to "steward@scoug.com". In the body of the message,
put the command "unsubscribe scoug-programming".

For problems, contact the list owner at
"rollin@scoug.com".

=====================================================





The Southern California OS/2 User Group
P.O. Box 26904
Santa Ana, CA 92799-6904, USA
