SCOUG-Programming Mailing List Archives
Wow, Greg, Steven, and Peter, had I known what could pique
your interest, I would have brought it up sooner.  You have no
idea just how much I appreciate the feedback.
 
For the record, the PL/I "value block" to which Peter refers
was what Hursley Labs termed a "dope vector", because it
contained the "dope" on the variable.  Because PL/I supports
strong typing more intelligently than its more rigid advocates
would, it allows mixed data expressions in which the programmer
can either accept a default (implicit) conversion for a variable
instance or specify an explicit one (as the more rigid advocates
would insist on).
 
The use of dope vectors in a function reference (either in a
separate "call" statement or within the right-hand expression
of an assignment statement) allowed a "late binding" of the
expression evaluation: it shifted from compile time to run time
(execution).  It was a blessing for the programmer, but a
performance nightmare.
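
To make the mechanism concrete, here is a rough C sketch of the
idea; the struct layout and field names are my own invention,
not the actual Hursley format.  The descriptor carries the
"dope" on a variable, and a run-time routine consults it to
convert an operand before a mixed expression is evaluated,
which is exactly where the late binding, and the cost, comes in.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical dope vector -- illustrative layout, not Hursley's.   */
enum type_code { T_FIXED, T_FLOAT, T_CHAR };

struct dope_vector {
    enum type_code type;    /* what kind of data this is             */
    int            length;  /* its storage length in bytes           */
    void          *addr;    /* where the data actually lives         */
};

/* Late binding: the conversion is chosen at run time by inspecting  */
/* the dope vector instead of being fixed at compile time.           */
static double as_float(const struct dope_vector *dv)
{
    switch (dv->type) {
    case T_FIXED: return (double)(*(const long *)dv->addr);
    case T_FLOAT: return *(const double *)dv->addr;
    case T_CHAR:  return strtod((const char *)dv->addr, NULL);
    }
    return 0.0;
}

int main(void)
{
    long i = 42;
    char s[] = "3.5";
    struct dope_vector a = { T_FIXED, sizeof i, &i };
    struct dope_vector b = { T_CHAR,  sizeof s, s  };

    /* A "mixed expression": each operand converted via its dope.    */
    printf("%g\n", as_float(&a) + as_float(&b));    /* prints 45.5   */
    return 0;
}

Every one of those switch decisions happens at execution time,
which is why it was a programmer blessing and a performance
nightmare at once.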
 
As a result, until it was corrected when the F-level compiler
became the optimizing compiler, IBM Hursley wrote a series of
subroutines known as the "Sears routines" to offer performance
more in line with that of COBOL.  In effect, prior to the
F-level optimizing compiler, PL/I executed arithmetic
expressions at interpretive, not compiled, speeds.
 
As we agree on the existence of dope vectors, a different
scheme in PASCAL, and yet another aspect of object-oriented
programming (the programmer-defined class), I can't be accused
of making these things up.  Just add to that list how logic
programming, AI and Prolog in particular, processes rules in a
similar manner.

So why not accept the role of a dope vector into which you
can encode all the rules governing the characteristics and
behavior of a data variable and its inter-dependencies with
other data variables?
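
As a rough guess at what that might look like (purely my own
sketch; the names and layout are hypothetical), the same
descriptor would simply grow fields for the rules and the
dependencies:

/* Hypothetical extension of the dope vector sketched earlier;       */
/* the field names are mine.                                         */
enum rdv_type { RDV_FIXED, RDV_FLOAT, RDV_CHAR };

struct ruled_dope_vector {
    enum rdv_type  type;      /* the classic "dope" ...               */
    int            length;
    void          *addr;

    /* ... plus the rules governing the variable:                     */
    int  (*validate)(const void *proposed);   /* is a new value legal? */
    struct ruled_dope_vector **depends;       /* variables affected    */
    int                        n_depends;     /* when this one changes */
};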
 
**********  
 
I can't speak to the macro capabilities of IBM's assemblers
since the introduction of the H-level assembler and its
successors up to the current period.  When I first learned of
the H-level assembler at an IBM internal programming language
conference, I had the general feeling that it functioned
without bounds.  However, I am familiar with Autocoder and
S/360 assembly language macros.  I'm also aware that a macro
definition had the same form as any other instruction.  Its
invocation always produced inline code.  That didn't prevent
the inline code from containing a subroutine call, but the
programmer had no control over this.
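
The C preprocessor gives a rough analogy to that behavior (an
analogy only; it is not the S/360 macro facility): every
invocation drops code inline at the point of use, and that
inline code may itself contain a subroutine call chosen by the
macro's author rather than by its user.

#include <stdio.h>
#include <string.h>

/* The expansion is always inline; whether it contains a call        */
/* (memset here) is the macro writer's decision, not the invoker's.  */
#define CLEAR_BUFFER(buf)  memset((buf), 0, sizeof(buf))

int main(void)
{
    char line[80];

    CLEAR_BUFFER(line);             /* expands inline at this point   */
    printf("%d\n", (int)line[0]);   /* prints 0                       */
    return 0;
}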
 
You have two efficiency aspects for comparison: coding
efficiency, i.e. the instruction space required or the number
of instructions written, and execution efficiency.  The general
consensus is that the "best" assembly language programmer will
win over the HLL programmer every time.
 
The real question is "why?".  For any given piece of functional
code, once it has been optimized for speed and space, why will
that optimization not transfer to the code generation phase of
an HLL implementation?  Obviously it can.  Do it often enough
and eventually you will have worn the assembly language
programmer down into submission.
 
The essence here lies in having no extra code (space
efficiency) in the object code: a single source code template
serves two possible invocations, inline and subroutine, and the
programmer determines which on a per-use-instance basis.  In
the end product no extra code exists, only that which is
necessary and sufficient for the context of the use instance.
 
So I suggested the use of the "inline" term.  I used an example
of a subroutine call, one without "inline" present and one with
it, to illustrate a meta-programming aspect.  It gets a bit
more difficult with a complex right-hand-side expression such
as "a = sqrt(b) - arctan(c);".  Here you may want to execute
either or both inline or as subroutines.  Would having "inline"
immediately following the function reference suffice:
"a = sqrt(b) inline - arctan(c);"?
 
***********  
 
You don't have to convince me that PL/I stands head and
shoulders above all other programming languages combined.
It should, as it was intended to replace them all.  Allowing
multiple entry points into a procedure serves a number of
different purposes, almost too numerous to delve into in detail
here.

PL/I's strong typing with automatic (default) conversion
allows a programmer to invoke a procedure with a parameter
mismatch in terms of data types, simply because PL/I will do
the necessary conversion in the object code prior to invocation
and the reverse conversion afterward.  If you don't want the
defaults to occur, you can declare multiple entry points among
which the compiler will find a match.  Or you may simply want
to use the same procedure to provide a number of different
functions.
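
C has neither multiple entry points nor PL/I's automatic
matching, so the closest analogy I can offer is a sketch with
separately named entries that each accept a different parameter
type, convert it, and fall into a common body (the "gosh" name
is made up for illustration):

#include <stdio.h>
#include <stdlib.h>

/* Common body, working on a floating-point value.                   */
static double gosh_body(double x) { return x * 2.0; }

/* "Entry points": each accepts a different parameter type, converts */
/* it, and falls into the common body.  In PL/I these would be ENTRY */
/* statements inside one procedure.                                  */
static double gosh_fixed(long x)        { return gosh_body((double)x); }
static double gosh_char (const char *x) { return gosh_body(strtod(x, NULL)); }

int main(void)
{
    printf("%g %g\n", gosh_fixed(21), gosh_char("21"));   /* 42 42    */
    return 0;
}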
 
One often overlooked feature of PL/I is the ability of the
invoking procedure to prevent undesired changes to the values
of passed parameters.  The programmer can do this by
surrounding the parameter(s) with parentheses: "a = gosh(b,
(c), d);".  In this instance PL/I will dynamically create a
duplicate of "c", passing its address in the invocation.
 
And the list goes on.  
 
**************  
 
I guess I am too used to PL/I's emphasis on placing the
programmer's priorities over those of the implementer.  It's
the job of the implementer to do what the programmer wants, not
to impose restrictions.  It certainly makes implementing PL/I
compilers a lot harder than the others, but it does make the
programmer's life easier.
 
No, Peter, I don't want the HLL to do any more than it is told
to do by the programmer, except in the most efficient way.  If
the programmer has indicated a function reference, it should
occur as a subroutine...unless the programmer indicates
otherwise.  The whole point here is to give the programmer
control over the implementation.  That was an underlying
principle when PL/I was designed.
 
Further, as there was no "weak" typing in programming
languages prior to C, except for assembler, PL/I supported
strong typing, only more intelligently, by providing automatic
conversions in a mixed arithmetic expression.  Actually, as it
supports both character-to-arithmetic and arithmetic-to-character
conversions among bit strings, character strings, fixed-point
decimal and binary, and floating-point decimal and binary
variables, we can just make that mixed expressions, period.
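
For comparison, C gives a taste of the arithmetic side of this:
integer and floating types mix with implicit conversions, but
unlike PL/I it will not convert a character string to a number
without an explicit library call.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    short       i = 7;       /* fixed binary                          */
    double      f = 2.5;     /* floating point                        */
    const char *s = "10";    /* character string                      */

    /* Arithmetic types mix with implicit (default) conversions.     */
    double mixed = i + f;                            /* 9.5           */

    /* The character-to-arithmetic step PL/I would perform           */
    /* automatically has to be spelled out in C.                     */
    double with_string = mixed + strtod(s, NULL);    /* 19.5          */

    printf("%g %g\n", mixed, with_string);
    return 0;
}
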
********  
 
I don't want to get started on a harangue on object-oriented
programming.  Just note that the "range" attribute can
contain a list of procedure names (methods in OO) as well as
other data names.  As such it provides for "inheritance"
without the need for class structures (or libraries), as well
as "multiple" inheritance of a kind that JAVA does not support
and that C++ supports only through its class machinery.
 
Other than that, let's leave object-oriented programming to
another thread, except to answer "no" to Greg's comment
about "the whole point of object-oriented programming".
 
The goal remains to produce the most efficient code in terms   
of speed and space with the least "programmer" effort.  I say   
"programmer" instead of "programming" to indicate that I   
expect the software to do more of the programming under the   
direction of the programmer.  
 
 
 
  