
Copyright 1998-2024, Southern California OS/2 User Group. ALL RIGHTS RESERVED.

SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group. OS/2, Workplace Shell, and IBM are registered trademarks of International Business Machines Corporation. All other trademarks remain the property of their respective owners.


SCOUG-Programming Mailing List Archives



Date: Wed, 6 Aug 2003 21:39:22 PDT
From: "Lynn H. Maxson" <lmaxson@pacbell.net>
Reply-To: scoug-programming@scoug.com
To: "scoug-programming@scoug.com" <scoug-programming@scoug.com>
Subject: SCOUG-Programming: Re: Warpstock 2003 Presentation

Content-Type: text/plain

"The Subject title of this thread is the target. What I've been
doing is helping you codify your thoughts by giving them a
good poke in vulnerable places. I want your Warpstock
presentation to be as solid as you can make it. ..."

Peter,

I wouldn't hold my breath. I made the offer. The attendees
will vote on which presentations will occur. I may not make
the cut. I have no illusions on this point.

I'm sort of "iffy" with respect to Bob Blair's emphasis on the
front end, automating source maintenance, as opposed to the
back end, automating code generation. You have to offer a
complete package from the front end to the back end.
Otherwise you won't get the full productivity impact.

Just for clarity, the front end here includes an editor based on
use of a data repository/directory. That means programming
doesn't involve either the use or maintenance of source files.
It also means that the only "stored" source is statements.
That means "naming" every source statement, an automated
feature of the data repository/directory.

This means that "all" statement assemblies exist as "named"
lists of "names". Oddly enough this leads to two forms of
pattern matching or searches, one on the stored source text
and the other on the lists. I'm not going to get into using AI or
neural nets to search out patterns of "reuse". I will only say
that the maintenance of the entire source as well as the
association of source code with source text (documentation)
just got a whole lot "faster, better, cheaper". All three,
Peter. No tradeoffs.

"I keep getting a discomforting perception -- that you are
intermingling code generation efficiency with HLL design."

Not at all. The HLL design uses PL/I syntax and data types,
APL operators, LISP list aggregates and operators, and logic
programming based on use of predicate logic. Code
generation occurs using the same source. I don't care if there
are thousands, millions, or billions of available use instances
for object code optimization. I don't care, because the
software doesn't. It does one or a zillion without concern
about the time, only the result. As it does it a zillion times
faster than I can, I figure it's a wash.

"As I recall, the original Level F compiler used 47 passes."

Time for a little history lesson. IBM designated OS/360
compilers by letters based on their design level: D for 16K, E
for 32K, F for 64K, G for 128K, and H for 256K. The design
level included the OS, in this instance PCP, a single main task
supporting multiple subtasks. The full function OS in smallest
form took up 12K. For the F-level PL/I compiler this meant a
54K partition.

For the record, only assembly language and PL/I met their
design goals. The F-level COBOL compiler, for example,
required a minimum 128K system. Normal compiler design up
to that time said you loaded the entire compiler into memory
and passed the program against it. PL/I however stored
(when possible) the entire source in memory and essentially
passed the compiler against it.

The compiler phases were ordered sequentially, though if a
phase was not needed it was skipped. Each phase completed
its processing by determining its successor. The compiles took
longer, but the boys at Hursley Labs met their design points.
When IBM introduced the S/360 Model 20, the low end of the
S/360 line, it also introduced the D-level compiler. It
supported only business applications: no floating point.

Unlike COBOL, which never met its design levels, and
assembler, which had to increase them as it increased
functionality, PL/I never had to change its philosophy, though
it did take advantage of available extra memory.

It didn't have 47 phases, only something in the range of the
alphabet (A - Z). To get to 47 it would have had too large a
source file, which it segmented. It essentially swapped these
segments for each necessary phase.

There remains so much built into the IBM PL/I compilers in
terms of functions and features outside the language itself
that all others pale in comparison. I know I had to
deliberately sit down to learn the IBM source level debugger,
because I found the normal debugging capabilities more than
sufficient. What can you say about a language in which you
can insert a single "put data;" statement and get a listing by
name of every value of every variable, the state of the
system? We won't bother to go into the macro language
facility of the language. At least not at this time.

"Designers and implementers should stop showering us with
new descriptive terms and just say what they're doing. If it's
a description table, say "description table". ..."

I agree. Call it a "descriptor". You have three means of
passing parameters in invoking an API: (1) pass the value, (2)
pass the address containing the value, and (3) pass the
address of the descriptor which contains the address of the
value. C uses 1 and 2. PL/I, until it had to interface with C,
used 2 and 3. Now the PL/I "entry" statement allows 1, 2, and
3 through the use of the "byvalue" or "byaddr" attribute.
Once more "int" is our culprit.

At any rate, to incorporate the rules expressible in the "range"
option of the declare statement, which could literally number
into the millions and include processing code as well, the
descriptor gets somewhat more complicated. The function,
however, remains the same: it's where you go to get the
"dope" on the data.

=====================================================

To unsubscribe from this list, send an email message
to "steward@scoug.com". In the body of the message,
put the command "unsubscribe scoug-programming".

For problems, contact the list owner at
"rollin@scoug.com".

=====================================================





The Southern California OS/2 User Group
P.O. Box 26904
Santa Ana, CA 92799-6904, USA
