SCOUG-Programming Mailing List Archives



Date: Tue, 28 Mar 2006 08:05:03 PST
From: "Lynn H. Maxson" <lmaxson@pacbell.net>
Reply-To: scoug-programming@scoug.com
To: scoug-programming@scoug.com
Subject: SCOUG-Programming: Data Repository/Directory

Content-Type: text/plain

Greg,

I think you have it mostly correct, at least as far as my
description went. If a name is a synonym (alias), then the
directory indicates it as such instead of as raw material (R) or
assembly (A). As you can't use "A" again, use "S" for
"synonym". The search on the alias table will produce the
source name, which in turn will carry either the "A" or "R"
designation.
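
In rough SQL, that designation and the alias lookup could be
sketched along these lines (the table and column names here
are only illustrative, not your actual ones):

   -- Illustrative sketch only: one directory row per name, carrying
   -- the 'R', 'A', or 'S' designation and, for synonyms, the source.
   CREATE TABLE Directory (
       Name       VARCHAR(32) NOT NULL PRIMARY KEY,
       EntryType  CHAR(1)     NOT NULL,  -- 'R' raw, 'A' assembly, 'S' synonym
       SourceName VARCHAR(32)            -- filled in only when EntryType = 'S'
   );

   -- Resolving an alias to its source, which carries the real 'A' or 'R':
   SELECT s.Name, s.EntryType
     FROM Directory a
     JOIN Directory s ON s.Name = a.SourceName
    WHERE a.Name = 'SOME_ALIAS'          -- hypothetical alias name
      AND a.EntryType = 'S';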

You do have the two tables. The unordered one you call
"container". The ordered one you call "contained". In the
first one we have a row with four columns, two for the
"container" name and two for the "member" name. You call
these "PrimaryName" and "AlternateName" respectively.

In the "contained" table we have six columns, two for the
"container" name, two for the "start" name, and two for the
"followed by" name. You call these "Name1", "Name2", and
"ContainedName" respectively. You also need a "not null" for
"ContainedName" and "ContainedSeq". Also the index should
list all six in that sequence.
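
Written out as rough SQL, the two tables might look something
like the sketch below. Only the quoted column names
(PrimaryName, AlternateName, Name1, Name2, ContainedName,
ContainedSeq) come from your description; the second column of
each name pair and all of the data types are my guesses.

   -- The unordered "container" table: one row per container/member pair.
   CREATE TABLE Container (
       PrimaryName    VARCHAR(32) NOT NULL,  -- container name
       PrimaryType    CHAR(1)     NOT NULL,  -- guessed second column of the pair
       AlternateName  VARCHAR(32) NOT NULL,  -- member name
       AlternateType  CHAR(1)     NOT NULL   -- guessed
   );

   -- The ordered "contained" table: container, "start", and "followed by".
   CREATE TABLE Contained (
       Name1          VARCHAR(32) NOT NULL,  -- container name
       Name1Type      CHAR(1)     NOT NULL,  -- guessed
       Name2          VARCHAR(32) NOT NULL,  -- "start" name
       Name2Type      CHAR(1)     NOT NULL,  -- guessed
       ContainedName  VARCHAR(32) NOT NULL,  -- "followed by" name
       ContainedSeq   INTEGER     NOT NULL   -- position within the container
   );

   -- The index listing all six columns in that sequence.
   CREATE INDEX ContainedIx
       ON Contained (Name1, Name1Type, Name2, Name2Type,
                     ContainedName, ContainedSeq);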

You could say I feel conflicted about whether to use one or
two tables for storing statements and sentences. I have no
reason to store them separately, so I use only a single table,
as the "naming" occurs strictly through software. Technically,
though, you could precede a statement with a label, which
could create some confusion. I've considered treating any such
occurrence as an assembly, thus separating the label from the
statement body.

I don't know what familiarity you have with CASE tools in
terms of their visual representations. I personally found it
interesting that the authors of HIPO (Hierarchical Input Plus
Output), which IBM employed masochistically on itself and
which customers dutifully followed in terms of documentation,
didn't actually understand which forms of analysis their four
forms represented (one classification, two structured, and one
operation). If you add to that the dataflow diagrams
(operation) and structure charts (structured) of Constantine's
Structured Design, plus similar contributions from others
including today's UML, you need only two lists, one unordered
(classification) and one ordered (structure and operation), to
deal with any of them.

It follows then that the "structure" of the Data
Repository/Directory (DR/D) has three hierarchical levels for
the six tables (combining statements and sentences into one):

   [Directory]
      [Container]  [Alias]  [Source]  [Object (Data element)]
         [Sequence]

Now I don't know how familiar you are with the different
database schemas (network, hierarchical, and relational).
You will, however, notice that a strictly hierarchical structure
appears visually. Thus my preference for using something
hierarchical like DL/I instead of something relational like SQL.
Now DL/I (or IMS) has four "access/storage" methods: HSAM,
HDAM, HISAM, and HIDAM. I think in the hundreds of COPICS
accounts I covered I firmly destroyed IBM's preference for
HDAM with my own for HIDAM. If I were to implement DL/I on
the PC, I would only do so for HIDAM. It would probably
perform an order of magnitude (or two) better than the
relational model.

In general I think you have the tables correct with the
possible exception of the "(data) element" table. Here's
where PL/I stands head, shoulders, body, arms, hands, legs,
and feet above all the rest. That will have to wait for a later
time.

You can see from this that the bills of material need no more
than an indented, hierarchical listing of names, nothing as
complicated as what you require in your process systems.
Those names and their ordering will suffice to produce all the
CASE visual outputs that the developer requests.
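
For instance, given the "contained" table sketched earlier, a
dialect that supports recursive common table expressions could
produce that indented listing along these lines (the root name
'TOP_ASSEMBLY' and the formatting details are hypothetical):

   -- Walk the containment hierarchy depth-first, indenting by level.
   WITH RECURSIVE bom (MemberName, Depth, SortPath) AS (
       SELECT ContainedName, 1,
              LPAD(CAST(ContainedSeq AS VARCHAR(4)), 4, '0')
         FROM Contained
        WHERE Name1 = 'TOP_ASSEMBLY'               -- hypothetical root
       UNION ALL
       SELECT c.ContainedName, b.Depth + 1,
              b.SortPath || '.' ||
              LPAD(CAST(c.ContainedSeq AS VARCHAR(4)), 4, '0')
         FROM Contained c
         JOIN bom b ON c.Name1 = b.MemberName      -- descend one level
   )
   SELECT REPEAT(' ', 2 * Depth) || MemberName AS IndentedName
     FROM bom
    ORDER BY SortPath;   -- keeps siblings in ContainedSeq order, depth-first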

We have here a case of logical equivalence with an IPO (Input
Process Output) chart in which the I and the O match 1:1.
Thus, as we do in flowcharting programs, if we reverse the
sequence and make the O the I and the I the O, it still works.
This ability to use visual input as a guide to generating source
code then implies that we can use source code as input to
generate visual output.

Now we have separate source code input for each visual
output, which we create and maintain manually even with CASE
tool assistance. The preceding simply says that we can go to a
single source input for "all" visual output. Thus we have only
one source to maintain regardless of how many different
outputs we desire.

Now I do not regard flowcharting programs as silver bullets. I
do not regard their use of source code as a silver bullet.
Obviously I do not regard using the same source code input for
all visual output as a silver bullet. Nevertheless I expect an
accumulated productivity gain in the production of all or any
desired subset of possible visual outputs. I see this as logical
and not magical: if it works for flowcharts, why should it not
work for the rest?

Thus, if the software can produce all outputs from a single
source, why should anyone have to write separate sources for
different outputs? And if they don't have to, then why, in the
overall scheme of things, in terms of the total production
required for software development and maintenance, do you
not see this as a "significant" productivity gain?

Once we have an open source hierarchical database manager
with HIDAM, I will drop relational in the DR/D in a nanosecond
or less. You may not have a feel for DBDs (DataBase
Descriptors) yet, but that will come in time.

For historical purposes, note that the first prototype of a
relational database manager, System R, done by IBM at the
San Jose Research Center (and a host of others at
surrounding locations in Silicon Valley), came through the
"courtesy" of APL. In fact all of SQL's set theoretical basis
occurs in Iverson's "A Programming Language" ten years prior
to the "discovery" by Codd (the father of relational
databases). That was in the book, several years before an
actual implementation occurred on an IBM "tiny biny". I
remember the sense of "deja vu" I had on hearing Codd and
Date describe their "innovation" at an IBM internal conference.
Been there, done that.

=====================================================

To unsubscribe from this list, send an email message
to "steward@scoug.com". In the body of the message,
put the command "unsubscribe scoug-programming".

For problems, contact the list owner at
"postmaster@scoug.com".

=====================================================





The Southern California OS/2 User Group
P.O. Box 26904
Santa Ana, CA 92799-6904, USA

Copyright 2001 the Southern California OS/2 User Group. ALL RIGHTS RESERVED.

SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group. OS/2, Workplace Shell, and IBM are registered trademarks of International Business Machines Corporation. All other trademarks remain the property of their respective owners.