SCOUG-Programming Mailing List Archives
 
Tom Novelli writes:  
"... Since I'm used to C, I could always declare "int" as a  
shorthand for "fixed bin(32)".  I think it goes a little   
overboard, though... I don't see why types like "complex" and   
"file" should have anything to do with the compiler; in C++   
they're just composite types ("classes") built up from the   
basic floats, ints and pointers... I've used libraries with vector   
and matrix types, with seamless arithmetic operators  
coded in inline assembly.  Now what's the difference between   
PL/I "float" and "real"?"  
 
I want to answer this in some detail.  First, however, I need to   
preface my remarks.  I may appear at times somewhat harsh,
if not disrespectful, toward some "academic" contributions to
IT.  I don't want anyone to think that I apply this to academics
in general, because I am most appreciative of their personal
contributions to my IT knowledge over the years.  In truth I
have attended many a workshop in which I, along with others,
sat in awe of theoretical solutions to extremely difficult
practical problems in the computing industry.
 
However, a "cultural" difference exists between academic and   
non-academic worlds.  That difference exists on many levels,   
almost all of which resolve to issues of cost and ROI: money.    
For example, I participated as an IBMer in a joint venture with   
UCLA in the early 60's at the WDPC (Western Data Processing   
Center) in programming an early 8-bit character TCU   
(Transmission Control Unit).  We engaged in a six-month   
programming effort to produce a system supporting multiple   
different terminal types to an interactive computer-based   
learning system as well as multiple different batch terminals   
as RJE (Remote Job Entry) to a 7040-7094 direct-coupled,
scientific (Fortran-based) system.  The day before it was to   
go operational, the academics voted to scrap the entire   
system to do it all over again.  We didn't.  The system worked   
fine.  You might, from your experience with this one, come up
with a better design for its successor, but you don't throw out
the baby, i.e. the investment, with the bath water.
 
Later, after I had left the scene to do damage elsewhere,   
another similar IBM/UCLA joint venture took place to   
essentially use a revamped version to support the then   
proposed telecommunications system for the brand new   
University of California at Irvine (UCI).  Eventually that system   
was rejected by the UCI physics department, which controlled
the funding, and replaced by systems from an IBM competitor.
That resulted in a significant monetary loss (in the millions) as
well as a non-monetary one in terms of reputation.
 
The system itself was divided functionally with IBM   
responsible for the internal processing and UCLA staff for the   
terminal interface.  IBM completed its part.  The UCLA staff,   
however, diddled with design, redesign, and esoterics with
little, in fact no, concern for the deadlines involved.  After all,
they bore no risk if the project failed.  And it did.
 
You could say that academics have a different, almost
isolated, view of real world problems and their solutions from
that of non-academics.
 
Prior to the introduction of the IBM S/360 series of machines,
IBM architected its "mainframes" as either scientific (binary
and floating point and stream i-o) or commercial (fixed point   
decimal and record i-o).  It had two distinctly different and   
basically non-overlapping architectures.  You had the   
scientific: 701, 704, 7040, 7090.  You had the commercial: 702,   
705, 7070, 1410, 1401, 1440.  More to the point, the commercial
machines used registers for address indexing only: no   
arithmetic, no bit string, no non-indexing register operations.  
 
The S/360 changed all that as it merged the two architectures   
into one.  Knowing that this would have a major effect on its
customer set, IBM, two years prior to the announcement, went
to its largest scientific customer organization, SHARE, to
collaborate on upgrading the then-current Fortran IV to a
Fortran VI, the first step toward what eventually became PL/I.
 
With all the "real" input from Fortran customers with "real"   
problems the largest commercial customer organization, GUIDE,   
decided that it wanted to resolve problems associated with   
the many different commercial-only programming languages   
from the many different vendors.  They too became engaged   
in the evolution of this "new" programming language.  With   
the entry of the commercial merged with the scientific they   
quickly realized decided against a compatible upgrade to a   
Fortran VI, which basically Fortran, Fortran II, and Fortran IV   
had been.  They then designated it as the "New Programming   
Language" or NPL.  That put them in a trademark conflict with   
the Nuclear Physics Lab, resolved by simply calling it   
Programming Language/I or PL/I.  
 
Basically this language, PL/I, was to incorporate all the   
functionality of all languages used to write control system,   
scientific, and commercial applications.  It was to do this in as
machine-independent a manner as possible to allow the
customer to run the same application unchanged across any   
vendor's machine with full portability (source code and data).    
That meant, for example, that you could define a 32-bit   
arithmetic data variable in source which would run on existing   
16-, 20-, 24-, 32-, 48-, and 56-bit register machines.  Those   
represented real world needs in those extremely large   
accounts: aerospace, retail, distribution, insurance, financial,   
utilities, etc.
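
To make that concrete, here is a minimal PL/I sketch of my own
(the names and values are hypothetical, not from the original
post): the precision is stated in the source, and the compiler
maps it onto whatever register width the target machine has.

   portdemo: procedure options(main);
      /* hypothetical declarations: precision stated in source */
      dcl Balance fixed binary(31);    /* at least 31 binary digits plus sign */
      dcl Rate    fixed decimal(7,2);  /* 7 decimal digits, 2 after the point */
      Balance = 100000;
      Rate = 12345.67;
      put skip list('Balance =', Balance, 'Rate =', Rate);
   end portdemo;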
 
At that time, thanks largely to the ACM, the most prominent
organization of IT professionals, its major journal, the
"Communications of the ACM", supported the sharing of
algorithms using the Algol language.  It's important to note
that Algol was a specification language only and not a
programming language until Burroughs introduced its version
as BALGOL.  As a common, international specification language
for writing algorithms (ALGOrithmic Language), it assumed that
users would translate the algorithms into scientific
programming languages of their choice.
 
The point to make here is that a specification language which   
is not a programming language can take certain liberties.    
Among those liberties is the ability to express numerical   
variables in their theoretical form.  These fell into two types
specifically: integer and real.
 
Therein, you see, lies the problem of transferring an academic   
freedom into a non-academic, real-world environment.    
Academics could write C compilers as classroom exercises to   
run on classroom machines.  That they wouldn't run identically   
on something else used by someone else, someplace else was   
not a  concern of theirs.  They were free to use "int" and   
"real" because they meant whatever they chose to have them   
mean.  
 
At the same time they were instructing in C, they could buy
machines sans OS.  Rather than write their own OS, they could   
for the price of a reel of tape get from Bell Labs (AT&T) the   
source for UNIX.  All these different universities with all these   
different vendor machines with all these different register bit   
sizes started producing all these computer scientists who   
went into the real world, infecting it with their ultimate faith   
in UNIX and C.  
 
That worked for a while because these same computer   
scientists did not pay close attention to what K&R meant by
"portability" (source only) as opposed to what their same   
employers, those magnificent, extremely large accounts, had   
come to believe it meant and implemented in COBOL and PL/I:   
source and data.  
 
So when these same customers found that they didn't have   
the same level of portability providing them with a measure of   
vendor independence, they got more than just a little mad.    
The UNIX community, realizing the hole it had dug for
itself relative to larger real world needs, joined in the
Open Software Foundation (OSF).  As part of that came the C
standards effort, such that "int", for example, meant the same
throughout in terms of implementation (real world use).
 
Neither PL/I nor COBOL suffers from this, as both explicitly
define numeric variables in terms of size and precision:
decimal, binary, and float.  Fortran still supports stream i-o,
which is different from that of C.  COBOL supports three forms
of record i-o: sequential, direct, and indexed.  PL/I, of course,
supports three forms of stream i-o (data-, list-, and
edit-directed) as well as the three forms of record i-o, with
three different forms of direct: regional(1), regional(2), and
regional(3).  Beyond that, PL/I, unlike COBOL, Fortran, C, and
everyone else, supports all known data types natively.
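
As a small, hypothetical illustration of the three stream i-o
forms just mentioned (the variable name and format items are
my own, not taken from any manual):

   iodemo: procedure options(main);
      dcl X fixed decimal(7,2) init(1234.56);
      put skip data(X);                      /* data-directed: prints name and value */
      put skip list('X is', X);              /* list-directed: items separated by blanks */
      put skip edit('X is ', X)(A, F(9,2));  /* edit-directed: formatted output */
   end iodemo;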
 
That results, for example, in PL/I declaring binary and decimal
floating-point variables as "dcl S binary float(16);" and "dcl T
decimal float(5);".  To quote from the language reference
manual for decimal floating point: "If the declared precision is   
less than or equal to (6), short floating-point form is used.  If   
the declared precision is greater than (6) and less than or   
equal to (16), long floating-point form is used.  If the declared   
precision is greater than (16), extended floating-point form is   
used."  
 
Note that it allows any value for precision within those ranges
as well as allowing the programmer to designate either binary
or decimal.  Something similar is true for fixed-point binary
and decimal, noting that either supports an integer or a real
(fractional) portion.
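
A minimal sketch of what that looks like in source (the names
and precisions here are mine, chosen only to land in the short,
long, and extended forms the manual describes):

   floatdemo: procedure options(main);
      dcl S binary  float(16);    /* binary floating point                     */
      dcl T decimal float(5);     /* 5 digits  -> short floating-point form    */
      dcl U decimal float(15);    /* 15 digits -> long floating-point form     */
      dcl V decimal float(20);    /* 20 digits -> extended floating-point form */
      dcl F fixed decimal(9,3);   /* fixed-point decimal, 3 fractional digits  */
      dcl G fixed binary(31,8);   /* fixed-point binary, 8 fractional bits     */
      T = 3.14159;
      put skip list('T =', T);
   end floatdemo;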
 
We've come a long way up to this point to make the point   
that the PL/I "float" is "real" in ways that C "float (real)" is   
not.  You could argue, I guess, that PL/I implemented
standards before the fact, while C did so only as an afterthought.
That's the difference when beforehand real users discover   
real problems that they don't real-ly want.  To academics   
such confusion adds to the enjoyment of the journey.

C++ tries to make up for the deficiencies in C, but has yet to
reach the same level of data types and operators as PL/I.  In   
fact it has not achieved the same level that existed in PL/I   
prior to 1970.  Never have so many worked so hard to achieve   
so little.  
 
 
  