
SCOUG-HELP Mailing List Archives



Date: Fri, 6 Jun 2003 21:16:02 PDT
From: Peter Skye <pskye@peterskye.com>
Reply-To: scoug-help@scoug.com
To: scoug-help@scoug.com
Subject: SCOUG-Help: HDD cache size choice with OS/2

Content-Type: text/plain

=====================================================
If you are responding to someone asking for help who
may not be a member of this list, be sure to use the
REPLY TO ALL feature of your email program.
=====================================================

> Cache is where the drive electronics put data
> when it's writing to the disk, before it is sent
> out to the heads to write to a disk location.

That's one use of cache.

Another use is to anticipate sectors which will be read "real soon"
(such as the linear read of a large file) so they are ready when the
requesting program asks for them (thus no waiting for the mechanics).

A third is to hold often-used sectors in the cache (such as the OS2.INI
file) so that repeated reads/writes are serviced from the cached copy
(which is written to the mechanical platter when time permits, sometimes
using the stairstep algorithm).
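
Here's a little toy model of those last two uses -- read-ahead plus
keeping recently-touched sectors around. The constants are invented and
real drive firmware is certainly smarter, but it shows why read-ahead
turns most of a sequential scan into cache hits:

/* Toy model (a sketch, not how any particular drive's firmware works)
 * of a small sector cache with read-ahead. It counts how many times
 * the mechanics would get involved during a sequential scan, with and
 * without read-ahead. */
#include <stdio.h>

#define CACHE_SECTORS 256          /* pretend the drive caches 256 sectors   */

static long cache[CACHE_SECTORS];  /* sector numbers currently in the cache  */
static int  next_slot = 0;         /* round-robin replacement, keeps it tiny */

static int in_cache(long sector)
{
    for (int i = 0; i < CACHE_SECTORS; i++)
        if (cache[i] == sector)
            return 1;
    return 0;
}

static void insert(long sector)
{
    cache[next_slot] = sector;
    next_slot = (next_slot + 1) % CACHE_SECTORS;
}

/* Read `total` sectors in order; on a miss the "drive" also fetches the
 * next `readahead` sectors into the cache. */
static long platter_reads(long total, int readahead)
{
    long misses = 0;

    for (int i = 0; i < CACHE_SECTORS; i++)
        cache[i] = -1;
    next_slot = 0;

    for (long s = 0; s < total; s++) {
        if (!in_cache(s)) {
            misses++;                      /* the heads have to move here */
            for (int k = 0; k <= readahead && s + k < total; k++)
                insert(s + k);
        }
    }
    return misses;
}

int main(void)
{
    long total = 100000;               /* a "large file" of 100,000 sectors */

    printf("no read-ahead : %ld platter reads\n", platter_reads(total, 0));
    printf("read-ahead 64 : %ld platter reads\n", platter_reads(total, 64));
    return 0;
}

With no read-ahead every sector is a miss; with 64 sectors of read-ahead
roughly one access in 65 touches the platter.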

Cache speeds things up. It can also cause more data to be lost if there
is a power failure since a lot of sectors which were "written" might not
yet have been written to the mechanical platter (same as the old DOS
"lazy write" problem where people would turn off a machine before SCACHE
had written its cache to disk).
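
If a particular write really matters, the usual defense is to flush
explicitly after making it. A minimal sketch, using POSIX-style calls
(on OS/2 the analogous kernel call should be DosResetBuffer -- check the
toolkit docs); note that nothing here can force data out of the drive's
own hardware write cache:

/* Append a record and push it through the runtime's buffer and the
 * operating system's cache before returning. */
#include <stdio.h>
#include <unistd.h>                    /* fsync() */

int save_record(const char *path, const char *record)
{
    FILE *f = fopen(path, "a");
    if (f == NULL)
        return -1;

    fputs(record, f);

    if (fflush(f) != 0) {              /* stdio buffer -> operating system */
        fclose(f);
        return -1;
    }
    if (fsync(fileno(f)) != 0) {       /* OS cache -> disk drive           */
        fclose(f);
        return -1;
    }
    return fclose(f);
}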

> Unfragmented disk probably doesn't need much.
> Fragmentation might want more.

I don't know why, but I've never had a fragmentation problem. Even in my
DOS days with all-FAT partitions I never had more than a dozen or so
files with more than one extent. I can't even remember which defragger I
used; I ran it that infrequently. (Was one included in SpinRite?) FAT
defragging was simple -- you created a scratch file, set its file pointer
to the free space on the disk less the total size of the files you were
going to defrag (which allocated that much space to the scratch file),
and then copied the files. Voila! The copies ended up at the end of the
disk. Repeat until all the files are moved, then reverse the process, and
the fragmented files should now be "whole". Some special tricks were
necessary when the disk was almost full.
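
For what it's worth, here's roughly what that copy trick looks like in
code -- a from-memory sketch, not a real defragger. The pad size and
file names are placeholders, and nothing guarantees that a given
filesystem still allocates space this way:

/* The manual FAT "defrag by copy" trick. The pad file soaks up all the
 * free space except what the file being moved needs, so the fresh copy
 * is forced into the last remaining (hopefully contiguous) region. */
#include <stdio.h>

/* free space on the volume minus the total size of the files to move */
#define PAD_BYTES (50L * 1024L * 1024L)

static int make_pad_file(const char *path, long bytes)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;
    /* Seek to the final byte and write it; FAT has no sparse files, so
     * this allocates every cluster up to that offset. */
    if (fseek(f, bytes - 1, SEEK_SET) != 0 || fputc(0, f) == EOF) {
        fclose(f);
        return -1;
    }
    return fclose(f);
}

static int copy_file(const char *src, const char *dst)
{
    char buf[8192];
    size_t n;
    FILE *in = fopen(src, "rb");
    FILE *out = in ? fopen(dst, "wb") : NULL;

    if (in == NULL || out == NULL) {
        if (in) fclose(in);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    return fclose(out);
}

int main(void)
{
    make_pad_file("PADFILE.TMP", PAD_BYTES); /* 1. tie up the early free space    */
    copy_file("BIGFILE.DAT", "BIGFILE.NEW"); /* 2. copy lands in what's left      */
    remove("BIGFILE.DAT");                   /* 3. drop the fragmented original   */
    rename("BIGFILE.NEW", "BIGFILE.DAT");    /* 4. the contiguous copy takes over */
    remove("PADFILE.TMP");                   /* 5. give the free space back       */
    return 0;
}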

> It would be interesting to see test data showing cache
> use when sending files with different OS, different
> amounts of frag. What else could affect it?

"What else could affect it?" On a heavily-used server with a lot of
different files being constantly accessed, you'll see that a large cache
makes a big difference.

Suppose you have a web server with, say, 5 MB of constantly-accessed
files (the home page, the logo graphic, etc.). Suppose your disk drive
has a 2 MB hardware cache and your file system driver has a 2 MB cache
(which, by the way, doesn't know what's in the hardware cache). Then
there will be a _lot_ of disk reading as people access your web site.
If you increase the hardware drive cache or your file system driver
cache, then all the pages and graphics can be read from cache and
there's no slowdown due to hardware seeks and reads. (Some people load
their smallish web sites onto a RAM drive at bootup and serve everything
from there.)
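
As a back-of-envelope model (my numbers, and it assumes every hot file
is equally popular), the point is just that once the cache is at least
as big as the hot set, steady-state requests stop touching the platter:

/* Rough hit/miss estimate for a hot set of constantly-requested bytes
 * versus the cache available to hold it (uniform access assumed). */
#include <stdio.h>

int main(void)
{
    double hot_set_mb = 5.0;                 /* home page, logo, etc.        */
    double cache_mb[] = { 2.0, 4.0, 8.0 };   /* drive only, drive+FS, larger */

    for (int i = 0; i < 3; i++) {
        double cached = cache_mb[i] < hot_set_mb ? cache_mb[i] : hot_set_mb;
        double miss = 1.0 - cached / hot_set_mb;
        printf("%4.1f MB cache: ~%2.0f%% of hot-set reads still hit the disk\n",
               cache_mb[i], miss * 100.0);
    }
    return 0;
}

(And 2 MB + 2 MB is really the best case, since the two caches don't
coordinate and will partly duplicate each other.)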

Suppose you're one of those Gnutella collectors with a couple thousand
songs online for others to, ahem, listen to. If your average MP3 is 4
MB and you have 2,000 of them, you have 8 GB of files. _However_, most
people couldn't care less about your old Bob Dylan songs and instead are
grabbing your new copy of Rap My Hind End. Well now, once that 4 MB
song is in cache you won't have 18 different downloaders running a
constant butterfly seek on your hard drive (each one will be downloading
from a different point in the file at any given time).

So, to answer the "What else could affect it?" question: heavy usage.

- Peter

=====================================================

To unsubscribe from this list, send an email message
to "steward@scoug.com". In the body of the message,
put the command "unsubscribe scoug-help".

For problems, contact the list owner at
"rollin@scoug.com".

=====================================================





The Southern California OS/2 User Group
P.O. Box 26904
Santa Ana, CA 92799-6904, USA

Copyright 2001 the Southern California OS/2 User Group. ALL RIGHTS RESERVED.

SCOUG, Warp Expo West, and Warpfest are trademarks of the Southern California OS/2 User Group. OS/2, Workplace Shell, and IBM are registered trademarks of International Business Machines Corporation. All other trademarks remain the property of their respective owners.