SCOUG-Programming Mailing List Archives
"Not quite. All I agreed is that a smart editor could catch
some small subset of the semantic errors."
Tell you what, Steven, let's go all the way to have the
"smart" editor catch all the semantic errors, i.e. do a complete
semantic check. Thus we can detect "type" errors as well as
"typing" (spelling) errors. My focus on productivity and
quality control says that you enable error detection as close
in time as possible to their entry.
In truth I seek a completely "smart" editor that does it all, absorbing all of the functionality of the compiler. In addition to a compile mode for code generation, I want it to offer an interpretive mode as an option.
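
To make concrete what I mean by catching a "type" error at entry time, here is a minimal sketch in C++ (the symbol table and every name in it are mine, purely for illustration): the editor keeps a table of the names declared so far in the buffer and flags a mismatched assignment the moment the statement is keyed in, rather than waiting for a compile.

    // Illustrative only: a toy semantic check an editor could run
    // each time a statement is completed.
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        // Types the editor has already seen declared in the buffer.
        std::map<std::string, std::string> declared = {
            {"count", "int"}, {"label", "string"}
        };

        // A just-typed assignment, already split into the target name
        // and the type the editor infers for the right-hand side.
        std::string target = "count", rhsType = "string";

        auto it = declared.find(target);
        if (it == declared.end())
            std::cout << "typing error: '" << target << "' not declared\n";
        else if (it->second != rhsType)
            std::cout << "type error: '" << target << "' is " << it->second
                      << ", expression is " << rhsType << "\n";
        else
            std::cout << "statement is semantically clean\n";
        return 0;
    }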
"Depends on the project requirements. Often unit testing is
required at the function level. The cutoff points between
unit, integration and system testing are somewhat arbitrary
and usually exist to establish measurable milestones."
We have already resolved one "semantic" difference. We might as well go for two. Can we agree that a program represents a high-level function as an ordered aggregate of possibly multiple lower-level functions? In short, that in a program we have a hierarchy of functions?
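
To put that hierarchy in concrete terms, a small sketch (all of the function names are invented): the top-level function is nothing more than an ordered aggregate of calls to lower-level functions, which in turn call still lower ones.

    // A program as a hierarchy of functions: the high level is just
    // an ordered aggregate of the levels below it.
    #include <iostream>

    double readMeasurement()     { return 42.0; }                          // lowest level
    double calibrate(double raw) { return raw * 1.05; }                    // lowest level
    double acquireSample()       { return calibrate(readMeasurement()); }  // middle level
    void   report(double value)  { std::cout << value << "\n"; }           // lowest level

    int main() {                 // highest level: the program itself
        report(acquireSample());
        return 0;
    }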
So testing at a function level, where that function may in turn invoke a lower-level function and so on, provides something of a challenge. The challenge gets more interesting when you invoke some functions in-line and others out-of-line, i.e. via a procedure call. However interesting the challenge, from a productivity perspective interpretive mode wins hands down over compile-only mode.
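
By in-line versus out-of-line I mean nothing more exotic than the following sketch (reusing the invented names above): the same lower-level computation either expanded in place or reached through a procedure call, the point being that a test at the higher function level necessarily drags the lower levels along with it either way.

    // A function-level test that unavoidably exercises the levels
    // below it, whether they end up in-line or as procedure calls.
    #include <cassert>

    inline double calibrate(double raw) { return raw * 1.05; }  // candidate for in-line expansion
    double readMeasurement() { return 42.0; }                   // reached by procedure call
    double acquireSample()   { return calibrate(readMeasurement()); }

    int main() {
        // Testing acquireSample() also tests calibrate() and
        // readMeasurement(), however the compiler chooses to bind them.
        assert(acquireSample() > 0.0);
        return 0;
    }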
"I've been involved in projects like this too. Very few people
have the skill set needed to develop and implement projects
like this alone. The dual implementation methods are another
case where multiple eyes looking at the results is better. The
dual implementations are actually a classic method of ensuring
a reliable system. ..."
As Matlab is interpretive and C++ compiled, I assumed that the
need for both was performance. If I were shooting for performance, perhaps my last choice would be either C or C++. That's another issue entirely.
I obviously don't grasp entirely the process that Ben describes, except that I get the feeling it encounters an input it should accept, but doesn't. This requires some algorithmic adjustment. I further assume that his regression test data simply assures that any new algorithmic change doesn't undo some previous one.
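
My understanding of that regression side, sketched below with invented data and an invented stand-in algorithm: keep every input whose correct output has already been settled, and rerun the whole set after each algorithmic change to see that nothing previously right has come undone.

    // Minimal regression harness: rerun every previously settled case
    // after an algorithmic change and report any that now disagree.
    #include <cmath>
    #include <iostream>
    #include <vector>

    double algorithmUnderTest(double x) { return x * x; }  // stand-in for the real algorithm

    int main() {
        struct Case { double input, expected; };
        std::vector<Case> settled = {{2.0, 4.0}, {3.0, 9.0}, {-1.5, 2.25}};

        int failures = 0;
        for (const auto& c : settled)
            if (std::fabs(algorithmUnderTest(c.input) - c.expected) > 1e-9) {
                std::cout << "regression at input " << c.input << "\n";
                ++failures;
            }
        std::cout << failures << " regression(s)\n";
        return failures;
    }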
He doesn't have a problem generating test data, which is the problem that the use of predicate logic addresses; he just has a problem with the processing algorithms. I assume that these algorithms get expressed as formal mathematical logic equations. This is the lingua franca that allows this form of communication between the two groups. Rewriting them in a logically equivalent form in C++ source is a clerical process. You have to question why we are wasting human resources doing this.
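
As an example of why I call the rewrite clerical (the rule below is invented, not Ben's): a predicate stated in formal logic carries over to C++ almost symbol for symbol, which is exactly the kind of mechanical work I would rather hand to a tool than to a person.

    // A rule stated in predicate logic:
    //   accept(x) := inRange(x) AND (isCalibrated(x) OR isReference(x))
    // and its line-for-line C++ equivalent. The translation is mechanical.
    #include <iostream>

    bool inRange(double x)      { return x >= 0.0 && x <= 100.0; }  // invented predicates
    bool isCalibrated(double x) { return x != 0.0; }
    bool isReference(double x)  { return x == 0.0; }

    bool accept(double x) { return inRange(x) && (isCalibrated(x) || isReference(x)); }

    int main() {
        std::cout << std::boolalpha << accept(42.0) << "\n";  // prints "true"
        return 0;
    }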
Maybe we can get Ben to go into more detail in a future
meeting to see how what we pursue here may impact what
they do there. Certainly having the option of either compiled or interpretive output ought to bring down their costs and response time.