SCOUG-Programming Mailing List Archives
Return to [ 07 | February | 2003 ]
I'm not an academic, so I don't fall under any "publish or
perish" edict. I publish in order to give non-programmers
something of the world of programming, and programmers
possibly a different perspective on otherwise mundane
matters. Moreover, the primary purpose of this mailing list
lies in promoting discussion: at the very least, some
feedback, pro or con, on the views presented.
I don't regard programming as difficult, at least not in the
same class as problem solving, success in which depends
largely upon problem analysis. We have two interrelated
guides in this: the KISS principle and Occam's Razor. Both
favor simplicity over the complicated (the number of things
to consider) and the complex (the connections among those
things). The trick, here as elsewhere, is finding those
fundamental assumptions, the axioms, upon which you can
build an unlimited theoretical framework. It works in plane
geometry. It works in programming.
In C, for example, you have more than one type of basic
programming element: the statement, the left and right
braces ('{', '}'), and possibly some others. In PL/I you
have only one, the program statement, and only one rule
for terminating a statement: you end it with a semicolon
(';').
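For instance, the same test looks like this in each language
(a made-up fragment, not from any real program):

```
/* C: two kinds of elements -- statements ended by ';'
   and paired braces */
if (x > 0) {
    y = 1;
}

/* PL/I: every element is a statement ended by ';' */
IF X > 0 THEN DO;
    Y = 1;
END;
```

In C the braces follow their own pairing rule; in PL/I even the
group delimiters (DO; and END;) obey the one statement rule.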
This doesn't make one better than the other. Each offers a
solution set for translation from a problem set. If one is
actually better than the other, it lies in how much more
closely one maps onto the problem set than the other. That's
a form of global comparison well beyond syntax. Here we are
only showing that one of them, PL/I, has the simpler syntax.
That means fewer syntax rules to learn.
Analysis of machine instruction sets allows us to classify each
instruction into one of three classes based on its "exit" rules.
We have instructions whose execution does not alter the flow
to the Next Sequential Instruction (NSI). For convenience we
will call these the "n" class. We have instructions whose
execution always alters the flow. We will call these the "u"
class. Finally we have instructions which may or may not
alter the flow. We will call these the "c" class.
Now this has very little to do with learning how to write
a program. It does, however, have something to do with
learning how to read one, which has a one-dimensional
(linear) written form but a multi-dimensional (non-linear)
reading form. We have historically represented this
graphically in a multi-dimensional manner, with software
translating source into flowcharts, flowgraphs, and other
forms of structural analysis.
We can apply these "n, c, u" classes to statements as well as
instructions. We could take hpcalc.c and methodically
substitute the class letter ('n', 'c', or 'u') for each of its
statements and each '{' and '}' instance. Having done such
an abstraction, we could then translate the program from its
one-dimensional written form into a more representative
two-dimensional form with connecting lines, giving us an
overview of the program structure.
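Done by hand on a made-up function (not one from hpcalc.c),
the substitution might look like this:

```
int f(int a) {           {      block open
    if (a < 0)           c      may or may not alter flow
        return -a;       u      always alters flow
    a = a + 1;           n      falls through to the NSI
    return a;            u      always alters flow
}                        }      block close
```

The written form collapses to the abstract string "{ c u n u }",
and the c node is where the one-dimensional text forks into
two reading paths.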
Frankly, it would be a whole lot nicer if we had some software
to do this for us. We do have such a candidate in vcg
(Visualization of Compiler Graphs), which you can download
from ftp://ftp.cs.uni-sb.de/pub/graphics/vcg/vcg.tgz. Or just
back up to the /vcg page for more information.
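A minimal sketch of vcg's input, its Graph Description
Language, for the small if/return fragment abstracted earlier;
the node names and labels here are my own invention:

```
graph: {
    title: "flow sketch"
    node: { title: "c1"  label: "if (a < 0)  [c]" }
    node: { title: "u1"  label: "return -a   [u]" }
    node: { title: "n1"  label: "a = a + 1   [n]" }
    edge: { sourcename: "c1"  targetname: "u1" }
    edge: { sourcename: "c1"  targetname: "n1" }
}
```

Feed a file like this to vcg and it lays out the nodes and
connecting lines for you, which is exactly the two-dimensional
overview we described.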
At some point we will feel comfortable enough to apply the
same analysis to an open source, smart editor, i.e. one that
does syntax analysis with colorization. Nothing really
prevents us from feeding the output of that syntax analysis
to a vcg function, which would not only give us a static
graphical view of our source but also dynamically redraw it
as we modify the source (or at least when we indicate we
want it redrawn after some set of modifications).
Then, if we have the editor also do a semantic analysis of the
same syntactical output, we have a means of detecting two of
the three possible classes of programming error: syntax and
spelling. And if we have eliminated the "stupid" compiler
restriction of accepting only one external procedure as input,
we can input all five of HPCalc's source modules, .c and .h.
Our semantic analysis will then globally include all modules,
and our vcg-type output will globally depict their overall
structure. All this within a single editing unit of work.
The point is that we only need one source in one language
into one tool to produce all the desired output. Compare that
to the current one language per tool per output. Which would
you prefer if you were programming?
Again we emphasize using old things in new ways. In the
vernacular it's called "process improvement". The pieces we
need already exist. We just need to assemble them properly.
The Southern California OS/2 User Group
P.O. Box 26904
Santa Ana, CA 92799-6904, USA
Copyright 2001 the Southern California OS/2 User Group. ALL RIGHTS
RESERVED.
SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group.
OS/2, Workplace Shell, and IBM are registered trademarks of International
Business Machines Corporation.
All other trademarks remain the property of their respective owners.