From [email protected] Fri Nov  6 10:24:44 1992
Xref: rpi news.announce.newgroups:1912 news.groups:40838 comp.arch:22165 comp.periphs:3455 comp.databases:14361 comp.std.misc:424 comp.unix.large:501 comp.unix.admin:5688
Newsgroups: news.announce.newgroups,news.groups,comp.arch,comp.periphs,comp.databases,comp.std.misc,comp.unix.large,comp.unix.admin
Path: rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!haven.umd.edu!uunet!bounce-back
From: [email protected] (Lester Buck)
Subject: CFV:  comp.arch.storage
Followup-To: poster
Sender: [email protected] (David C Lawrence)
Organization: Photon Graphics
Date: Fri, 13 Mar 1992 16:41:27 GMT
Approved: [email protected]

After more thought, comp.arch.storage seems to make more sense than
comp.storage.  Instructions for voting are given at the end.  The voting
address is NOT the From: address of this posting, so don't try to vote
by replying to this article!


CALL FOR VOTES TO CREATE NEWSGROUP
----------------------------------

NAME: 
          comp.arch.storage

STATUS:
          unmoderated


DESCRIPTION:

	storage system issues, both software and hardware


CHARTER:

To facilitate and encourage communication among people interested in
computer storage systems.  The scope of the discussions would include
issues relevant to all types of computer storage systems, both hardware
and software.  The general emphasis here is on open storage systems as
opposed to platform specific products or proprietary hardware from a
particular vendor.  Such vendor specific discussions might belong in
comp.sys.xxx or comp.periphs.  Many of these questions are at the
research, architectural, and design levels today, but as more general
storage system products enter the market, discussions may expand into
"how to use" type questions.


RATIONALE:

As processors become faster and faster, a major bottleneck in computing
becomes access to storage services:  the hardware - disk, tape,
optical, solid-state disk, robots, etc., and the software - uniform and
convenient access to storage hardware.  A far too true comment is that
"A supercomputer is a machine that converts a compute-bound problem
into an I/O-bound problem."  As supercomputer performance reaches
desktops, we all experience the problems of:

o	hot processor chips strapped onto anemic I/O architectures
o	incompatible storage systems that require expensive systems
	    integration gurus to integrate and maintain
o	databases that are intimately bound into the quirks of an
	    operating system for performance
o	applications that are unable to obtain guarantees on when their
	    data and/or metadata is on stable storage
o	cheap tape libraries and robots that are under-utilized because
	    software for migration and caching to disk is not readily
	    available
o	nightmares in writing portable applications that attempt to
	    access tape volumes

This group will be a forum for discussions on storage topics including
the following:

1)	commercial products - OSF Distributed File System (DFS) based on
	    Andrew, Epoch Infinite Storage Manager and Renaissance,
	    Auspex NS5000 NFS server, Legato PrestoServer, AT&T Veritas,
	    OSF Logical Volume Manager, DISCOS UniTree, etc.
2)	storage strategies from major vendors - IBM System Managed Storage,
	    HP Distributed Information Storage Architecture and StoragePlus,
	    DEC Digital Storage Architecture (DSA), Distributed
	    Heterogeneous Storage Management (DHSM), Hierarchical Storage
	    Controllers, and Mass Storage Control Protocol (MSCP)
3)	IEEE 1244 Storage Systems Standards Working Group
4)	ANSI X3B11.1 and Rock Ridge WORM file system standards groups
5)	emerging standard high-speed (100 MB/sec and up) interconnects to
	    storage systems: HIPPI, Fiber Channel Standard, etc.
6)	POSIX supercomputing and batch committees' work on storage
	    volumes and tape mounts
7)	magnetic tape semantics ("Unix tape support is an oxymoron.")
8)	physical volume management - volume naming, mount semantics,
	    enterprise-wide tracking of cartridges, etc.
9)	models for tape robots and optical jukeboxes - SCSI-2, etc.
10)	designs for direct network-attached storage (storage as black box)
11)	backup and archiving strategies
12)	raw storage services (i.e., raw byte strings) vs. management of
	    structured data types (e.g. directories, database records,...)
13)	storage services for efficient database support
14)	storage server interfaces, e.g., OSF/1 Logical Volume Manager
15)	object server and browser technology, e.g. Berkeley's Sequoia 2000
16)	separation of control and data paths for high performance by
	    removing the control processor from the data path; this
	    eliminates the requirements for expensive I/O-capable
	    (i.e., mainframe) control processors
17)	operating system-independent file system design
18)	SCSI-3 proposal for a flat file system built into the disk drive
19)	client applications which bypass/ignore file systems:
	    virtual memory, databases, mail, hypertext, etc.
20)	layered access to storage services - How low level do we want
	    device control?  How to support sophisticated, high-performance
	    applications that need to bypass the file abstraction?
21)	migration and caching of storage objects in a distributed
	    hierarchy of media types
22)	management of replicated storage objects (differences/similarities
	    to migration?)
23)	optimization of placement of storage objects vs. location
	    transparency and independence
24)	granularity of replication - file system, file, segment, record, ...
25)	storage systems management - What information does an administrator
	    need to manage a large, distributed storage system?
26)	security issues - Who do you trust when your storage is
	    directly networked?
27)	RAID array architectures, including RADD (Redundant Arrays
	    of Distributed Disks) and Berkeley RAID-II HIPPI systems
28)	architectures and problems for tape arrays - striped tape systems
29)	stable storage algorithm of Lampson and Sturgis for critical metadata
30)	How can cheap MIPS and RAM help storage? -  HP DataMesh, write-only
	    disk caches, non-volatile caches, etc.
31)	support for multi-media or integrated digital continuous media
	    (audio, video, other realtime data streams)

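Item 29 above refers to the Lampson-Sturgis stable storage technique. As a rough illustration (not part of the charter, and much simplified from the published algorithm), the core idea is to keep two checksummed replicas of each block and write them in strict sequence, so a crash in the middle of a write leaves at least one intact copy:

```python
# Minimal sketch of the Lampson-Sturgis stable-storage idea: two
# replicas per logical block, written one after the other.  A crash
# between the two writes leaves the untouched replica consistent;
# a read returns the first replica whose checksum verifies.

import zlib

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

class StableBlock:
    """Two replicas of one logical block, each stored with a CRC."""
    def __init__(self):
        self.copies = [None, None]  # each entry: (data, crc) or None

    def write(self, data: bytes):
        # Write replica 0 completely, then replica 1.  A crash in
        # between corrupts at most one replica.
        for i in (0, 1):
            self.copies[i] = (data, checksum(data))

    def read(self) -> bytes:
        # Return the first replica whose checksum still verifies.
        for entry in self.copies:
            if entry is not None:
                data, crc = entry
                if checksum(data) == crc:
                    return data
        raise IOError("both replicas corrupt")
```

On recovery, a replica whose checksum fails would simply be repaired from the surviving copy; the real algorithm also handles decay and ordering of disk writes, which this sketch omits.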
This group will serve as a forum for the discussion of issues which do
not easily fit into the more tightly focused discussions in various
existing newsgroups.  The issues are much broader than Unix
(comp.unix.*, comp.os.*), as they transcend operating systems in
general.  Distributed computer systems of the future will offer
standard network storage services; what operating system(s) they use
(if any) will be irrelevant to their clients.  The peripheral groups
(comp.periphs, comp.periphs.scsi) are too hardware oriented for these
topics.  Several of these topics involve active standards groups but
several storage system issues are research topics in distributed
systems.  In general, the standards newsgroups (comp.std.xxx) are too
narrowly focused for these discussions.

VOTES:

ONLY VOTES RECEIVED BY APRIL 10, 23:59 CDT WILL BE COUNTED

TO VOTE YES: send mail to [email protected] with the words "YES" and
"comp.arch.storage" in the subject line (preferred) or message body
(acceptable)

TO VOTE NO: send mail to [email protected] with the words "NO" and
"comp.arch.storage" in the subject line (preferred) or message body
(acceptable)

Only votes mailed to the above addresses will be counted.  In
particular, votes mailed to me directly or through replying to this
posting will not be counted.  Ambiguous votes or votes with
qualifications ("I would vote yes for comp.arch.storage provided
that...") will not be counted.  In the case of multiple votes from a
given person, only the last will be counted.

This Call For Votes, along with acknowledgements of votes received will 
be posted several times throughout the voting period.  
-- 
A. Lester Buck   [email protected]   ...!uhnix1!siswat!buck

From [email protected] Sun Apr 19 21:58:48 1992
Xref: uunet news.announce.newgroups:2214 news.groups:48782 comp.arch:29551 comp.periphs:4732 comp.databases:16742 comp.std.misc:503 comp.unix.large:530 comp.unix.admin:6160
Newsgroups: news.announce.newgroups,news.groups,comp.arch,comp.periphs,comp.databases,comp.std.misc,comp.unix.large,comp.unix.admin
Path: uunet!bounce-back
From: [email protected] (Lester Buck)
Subject: RESULT:  comp.arch.storage passes 357: 11
Message-ID: <[email protected]>
Followup-To: news.groups
Sender: [email protected] (David C Lawrence)
Organization: Photon Graphics
Date: Mon, 13 Apr 1992 18:12:58 GMT
Approved: [email protected]
Lines: 509

VOTING RESULTS:

The proposed newsgroup comp.arch.storage received 357 YES votes, and 11
NO votes during the voting period (13 Mar 1992 to 23:59 10 April 1992).
As the excess of YES votes over NO votes was more than 100, and at least
2/3 of the votes were YES, this newsgroup should be created.  A list of
yes and no votes is appended to the charter.
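For readers unfamiliar with the group-creation rule applied here, the two conditions quoted above (a margin of at least 100 YES over NO, and YES votes making up at least 2/3 of all votes cast) amount to a simple arithmetic check. The sketch below is illustrative only, not part of the official posting:

```python
def newsgroup_passes(yes: int, no: int) -> bool:
    """Group-creation rule as stated in the result above:
    YES must exceed NO by at least 100, and YES must be at
    least 2/3 of all votes cast."""
    total = yes + no
    # yes/total >= 2/3 rewritten without division: 3*yes >= 2*total
    return (yes - no) >= 100 and 3 * yes >= 2 * total
```

With the tallies reported here, 357 - 11 = 346 >= 100 and 3 * 357 = 1071 >= 2 * 368 = 736, so both conditions hold.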


NAME: 
          comp.arch.storage

STATUS:
          unmoderated


DESCRIPTION:

	  storage system issues, both software and hardware


CHARTER:

To facilitate and encourage communication among people interested in
computer storage systems.  The scope of the discussions would include
issues relevant to all types of computer storage systems, both hardware
and software.  The general emphasis here is on open storage systems as
opposed to platform specific products or proprietary hardware from a
particular vendor.  Such vendor specific discussions might belong in
comp.sys.xxx or comp.periphs.  Many of these questions are at the
research, architectural, and design levels today, but as more general
storage system products enter the market, discussions may expand into
"how to use" type questions.


RATIONALE:

As processors become faster and faster, a major bottleneck in computing
becomes access to storage services:  the hardware - disk, tape,
optical, solid-state disk, robots, etc., and the software - uniform and
convenient access to storage hardware.  A far too true comment is that
"A supercomputer is a machine that converts a compute-bound problem
into an I/O-bound problem."  As supercomputer performance reaches
desktops, we all experience the problems of:

o	hot processor chips strapped onto anemic I/O architectures
o	incompatible storage systems that require expensive systems
	    integration gurus to integrate and maintain
o	databases that are intimately bound into the quirks of an
	    operating system for performance
o	applications that are unable to obtain guarantees on when their
	    data and/or metadata is on stable storage
o	cheap tape libraries and robots that are under-utilized because
	    software for migration and caching to disk is not readily
	    available
o	nightmares in writing portable applications that attempt to
	    access tape volumes

This group is a forum for discussions on storage topics including
the following:

1)	commercial products - OSF Distributed File System (DFS) based on
	    Andrew, Epoch Infinite Storage Manager and Renaissance,
	    Auspex NS5000 NFS server, Legato PrestoServer, AT&T Veritas,
	    OSF Logical Volume Manager, DISCOS UniTree, etc.
2)	storage strategies from major vendors - IBM System Managed Storage,
	    HP Distributed Information Storage Architecture and StoragePlus,
	    DEC Digital Storage Architecture (DSA), Distributed
	    Heterogeneous Storage Management (DHSM), Hierarchical Storage
	    Controllers, and Mass Storage Control Protocol (MSCP)
3)	IEEE 1244 Storage Systems Standards Working Group
4)	ANSI X3B11.1 and Rock Ridge WORM file system standards groups
5)	emerging standard high-speed (100 MB/sec and up) interconnects to
	    storage systems: HIPPI, Fiber Channel Standard, etc.
6)	POSIX supercomputing and batch committees' work on storage
	    volumes and tape mounts
7)	magnetic tape semantics ("Unix tape support is an oxymoron.")
8)	physical volume management - volume naming, mount semantics,
	    enterprise-wide tracking of cartridges, etc.
9)	models for tape robots and optical jukeboxes - SCSI-2, etc.
10)	designs for direct network-attached storage (storage as black box)
11)	backup and archiving strategies
12)	raw storage services (i.e., raw byte strings) vs. management of
	    structured data types (e.g. directories, database records,...)
13)	storage services for efficient database support
14)	storage server interfaces, e.g., OSF/1 Logical Volume Manager
15)	object server and browser technology, e.g. Berkeley's Sequoia 2000
16)	separation of control and data paths for high performance by
	    removing the control processor from the data path; this
	    eliminates the requirements for expensive I/O-capable
	    (i.e., mainframe) control processors
17)	operating system-independent file system design
18)	SCSI-3 proposal for a flat file system built into the disk drive
19)	client applications which bypass/ignore file systems:
	    virtual memory, databases, mail, hypertext, etc.
20)	layered access to storage services - How low level do we want
	    device control?  How to support sophisticated, high-performance
	    applications that need to bypass the file abstraction?
21)	migration and caching of storage objects in a distributed
	    hierarchy of media types
22)	management of replicated storage objects (differences/similarities
	    to migration?)
23)	optimization of placement of storage objects vs. location
	    transparency and independence
24)	granularity of replication - file system, file, segment, record, ...
25)	storage systems management - What information does an administrator
	    need to manage a large, distributed storage system?
26)	security issues - Who do you trust when your storage is
	    directly networked?
27)	RAID array architectures, including RADD (Redundant Arrays
	    of Distributed Disks) and Berkeley RAID-II HIPPI systems
28)	architectures and problems for tape arrays - striped tape systems
29)	stable storage algorithm of Lampson and Sturgis for critical metadata
30)	How can cheap MIPS and RAM help storage? -  HP DataMesh, write-only
	    disk caches, non-volatile caches, etc.
31)	support for multi-media or integrated digital continuous media
	    (audio, video, other realtime data streams)

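As a side note on item 27, the parity scheme underlying RAID arrays can be shown in a few lines: the parity block is the byte-wise XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the survivors with the parity. The block sizes and layout below are a toy example, not any particular product's format:

```python
# Toy sketch of RAID parity (item 27): parity = XOR of the data
# blocks; a lost block is recovered by XOR-ing the surviving data
# blocks with the parity block.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def make_parity(data_blocks):
    return xor_blocks(data_blocks)

def rebuild(surviving_blocks, parity):
    # XOR cancels every surviving block, leaving the lost one.
    return xor_blocks(surviving_blocks + [parity])
```

Striped tape systems (item 28) apply the same arithmetic to tape volumes instead of disk sectors; the engineering problems differ, the parity math does not.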
This group will serve as a forum for the discussion of issues which do
not easily fit into the more tightly focused discussions in various
existing newsgroups.  The issues are much broader than Unix
(comp.unix.*, comp.os.*), as they transcend operating systems in
general.  Distributed computer systems of the future will offer
standard network storage services; what operating system(s) they use
(if any) will be irrelevant to their clients.  The peripheral groups
(comp.periphs, comp.periphs.scsi) are too hardware oriented for these
topics.  Several of these topics involve active standards groups but
several storage system issues are research topics in distributed
systems.  In general, the standards newsgroups (comp.std.xxx) are too
narrowly focused for these discussions.


The following people voted YES.
===========================================

"A. L. Narasimha Reddy" <[email protected]>
"Aaron "Fish" Lav" <[email protected]>
"Aaron Sawyer" <[email protected]>
"Fred E. Larsen" <[email protected]>
"Jeffery M. Keller" <[email protected]>
"Julian Satran" <[email protected]>
"Kevin Wohlever"  <[email protected]>
"LIMS::MRGATE::\"A1::NAIMAN,"@LIMS01.LERC.NASA.GOV
"Leo Uzcategui" <[email protected]>
"Lex mala, lex nulla  16-Mar-1992 1503" <[email protected]>
"Rawn Shah" <[email protected]>
"Ross Garber" <[email protected]>
"Russ Tuck" <[email protected]>
"Sam Coleman" <[email protected]>
"charles j. antonelli" <[email protected]>
(David Silberberg) <[email protected]>
(a. m. rushton) <[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
[email protected]
[email protected]
[email protected]
ANDY HOSPODOR <[email protected]>
[email protected]
Adam Glass <[email protected]>
Aki Fleshler <[email protected]>
Al Dwyer <[email protected]>
Alan Rollow - Alan's Home for Wayward Tumbleweeds. <[email protected]>
Bengt Larsson <[email protected]>
Bernard Gunther <[email protected]>
Brian Carlton <[email protected]>
Bryan M. Andersen <[email protected]>
[email protected]
[email protected]
[email protected]
CU Sailing <[email protected]>
Catherine Barnaby <[email protected]>
Charles Curran <[email protected]>
Christiana I. Ezeife <[email protected]>
Christopher Johnson <[email protected]>
Claus Brod <[email protected]>
Conor O'Neill <[email protected]>
Crispin Cowan <[email protected]>
[email protected] (Curt Ridgeway - SCSI Advanced Development)
Dana H. Myers <[email protected]>
Daniel G Mintz <[email protected]>
Daniel Huber <[email protected]>
Daniel McCue <[email protected]>
Dave Ford <[email protected]>
Dave Harper <[email protected]>
David Jensen <[email protected]>
David Newton <[email protected]>
Do what thou wilt shall be the whole of the law <[email protected]>
Dominique Grabas <[email protected]>
[email protected] (Don Deal)
[email protected] (E Le Page)
Edward J. Snow <[email protected]>
Esteban Schnir <[email protected]>
Gary Faulkner <[email protected]>
[email protected] (Gary Mueller)
Gerald Fredin <[email protected]>
Gholamali Hedayat (JRG ra) <[email protected]>
Greg Byrd <[email protected]>
Greg Pongracz <[email protected]>
Greg West <[email protected]>
HADDON BRUCE K <[email protected]>
Hans van Staveren <[email protected]>
Harald Nordgard-Hansen <[email protected]>
Harro Kremer <[email protected]>
Hugh LaMaster -- RCS <[email protected]>
[email protected]
[email protected]
[email protected]
James da Silva <[email protected]>
Jeff Berkowitz <[email protected]>
Jeff Wasilko <[email protected]>
Jerry Callen <[email protected]>
Jim Fox <[email protected]>
Joan Eslinger <[email protected]>
John G Dobnick <[email protected]>
[email protected] (John Hevelin)
Jon Solworth <[email protected]>
[email protected] (Joseph Wishner)
[email protected]
Karl Kleine <[email protected]>
Kevin Kelleher <[email protected]>
Klaus Steinberger <[email protected]>
Larry Pelletier <Larry.Pelletier@WichitaKS.NCR.COM>
Larry Stabile <lstabile@chpc.org>
M.Giyyar@frec.bull.fr (Madhusudan)
Marc Vaisset <marc@vega.laas.fr>
Marcus Jager <marcus@cs.uwa.edu.au>
Mark Russell <mtr@ukc.ac.uk>
Martyn.Johnson@cl.cam.ac.uk
Mathias Bage  <mathias@alex.stacken.kth.se>
Michael Bethune <mikeb@yarra.pyramid.com.au>
Michael Brouwer <rcbamb@urc.tue.nl>
Miguel Albrecht  +49 89 32006-346 <malbrech@eso.org>
Oliver J. Tschichold <olivert@glance.ch>
P C Hariharan <P13Z2781@JHUVM.HCF.JHU.EDU>
Paul Fellows <paulf@inmos.com>
Per Ekman  <pfe@Madrid.DoCS.UU.SE>
Pete Gregory <pete@wvus.org>
Peter Hakanson <peter@cyklop.volvo.se>
Peter R. Luh <prl@woody.uucp>
Po Shan Cheah <pc30@cunixb.cc.columbia.edu>
R.Buresund@frec.bull.fr (Roland Buresund)
RANADE@NSSDCA.GSFC.NASA.GOV (Dr Sanjay Ranade)
RICHARD@vulcan.mentec.ie
Randall A. Gacek <rgacek@wam.umd.edu>
Raymond E. Suorsa <grendel@fen.arc.nasa.gov>
Renu.Raman@eng.sun.com (Renu Raman)
Rob McMahon <cudcv@csv.warwick.ac.uk>
Robert Bell <Robert.Bell@mel.dit.csiro.au>
Robert J Carter <rjc@oghma.ocunix.on.ca>
Rodney Shojinaga <shjnaga@ncsc.org>
Scott Draves <spot@WOOZLE.GRAPHICS.CS.CMU.EDU>
Scott Huddleston <scott@ferrari.labs.tek.com>
Sergiu S. Simmel <simmel@oberon.com>
Shabbir Hassanali <vqhec@sun.pcl.ac.uk>
Shel Finkelstein <shel@tandem.com>
Silvia Nittel <nittel@ifi.unizh.ch>
Stan Hanks <stan@casc.math.uh.edu>
Stuart Boutell <stuart@root.co.uk>
Susan Thomson <set@thumper.bellcore.com>
TM Ravi <ravi@iag.hp.com>
Takeshi Ogasawara <takeshi@vnet.ibm.com>
Thodoros Topaloglou <thodoros@ai.toronto.edu>
Tim Oldham <tjo@its.bt.co.uk>
Tom Proett <proett@nas.nasa.gov>
Tony Wilson <wilson@stc.nl>
UUSTEVE@MARS.LERC.NASA.GOV
Vincent.Cate@FURMINT.NECTAR.CS.CMU.EDU
Zoltan Somogyi <zs@cs.mu.oz.au>
abbott@starwars.clearlake.ibm.com
adam@das.harvard.edu (Adam Shostack)
adpgate!dtb@apple.com (Tom Beach)
adpgate!martin@apple.com (Martin Golding)
al@superior.cs.unh.edu (Anthony Lapadula)
alc@allspice.Berkeley.EDU (Ann L. Chervenak)
allison@hal.com (Dennis Allison)
amdcad!osc!jgk@uunet.UU.NET (Joe Keane)
anand@research.att.com
anant@watson.ibm.com (Anant Jhingran)
andi@mips.complang.tuwien.ac.at (Andi Krall)
andrewr@highland.oz.au (Andrew Rothwell)
aro@aberystwyth.ac.uk
awerman@panix.com (Aaron Werman)
bajwa@soccer.cs.psu.edu (Raminder S Bajwa)
bitbrain!kschmahl@ebay.sun.com (Ken Schmahl)
bog@mm4.sst.co.kr (Jo BogLae)
boppana@ringer.cs.utsa.edu (Rajendra V. Boppana)
bpirenne@eso.org
bross@nas.nasa.gov (Wilson S. Ross)
buck@siswat.hou.tx.us (Lester Buck)
bytheway%asylum@cs.utah.edu (Sidney Bytheway)
bzr10@juts.ccc.amdahl.com (Bruce Richardson)
cassell@ocfmail.ocf.llnl.gov (Loellyn Cassell)
cassels@ncsc.org
ccicpg!jacksun!jack@uunet.UU.NET (Jack Benkual)
ccicpg!leon!jack@uunet.UU.NET (Jack Benkual )
ccs@Iomega.Com (Charles Storer)
ceg@pnet51.orb.mn.org (Chris Galas)
chris@alderan.sdata.de (Christoph Splittgerber)
cl@lgc.com
clarsen@ux6.lbl.gov (Case Larsen)
craig@aixwiz.austin.ibm.com (Craig Anderson)
crowfix!felix@uunet.UU.NET (Felix Finch)
d86jsk@efd.lth.se (Jonas Skeppstedt)
dahlin@allspice.Berkeley.EDU (Michael Donald Dahlin)
darkcube!vhs@orfeo.uucp (Volker Herminghaus-Shirai)
dasun!stsun2!stai@sunkist.west.sun.com (Jeff Stai (Stai) x7644)
daveg@prowler.clearpoint.com (Dave Goldblatt)
david carlton <carlton@husc.harvard.edu>
david d `zoo' zuhn <zoo@aps1.spa.umn.edu>
david@tallgrass.com (David Hoopes)
davidsen@quint.crd.ge.com (william E Davidsen)
dcw@myrias.ab.ca (Dan Wilson)
default root account <root@neptune.Jpl.Nasa.Gov>
deroest@daffy.cac.washington.edu
des%ouray@stortek.com (Dave Skinner 2-361)
dhb@felonious.ssd.ray.com
dick@ccnext.ucsf.edu (Dick Karpinski)
dinah@rockytop.tivoli.com (Dinah McNutt)
djones@pyrhard2.eng.pyramid.com (Dan Jones)
dm_devaney@pnlg.pnl.gov
doc@tcg.com (Dan Cummings)
dorst@netcom.com (Steven J Dorst)
doug@shadow.eng.pyramid.com (Doug Wong)
douglee!douglee@uunet.UU.NET (Doug Lee)
dsa@uts.amdahl.com (Dennis Andrews)
dsc@stapes.ent.hmc.psu.edu (david s. channin)
dseal@armltd.co.uk (David Seal)
edwardsg@iscnvx.lmsc.lockheed.com (Gregory W. Edwards)
efeustel@ida.org (Edward Feustel)
eggert@twinsun.com (Paul Eggert)
eklee@sparc.Berkeley.EDU (Edward K. Lee)
elm@allspice.Berkeley.EDU (ethan miller)
emcguire@ccad.uiowa.edu (Ed McGuire)
emv@msen.com (Edward Vielmetti)
ernest@pundit.cithep.caltech.edu (Ernest Prabhakar)
eugene@nas.nasa.gov (Eugene N. Miya)
futrell@corwin.CCS.Northeastern.EDU (robert futrelle)
gam@lll-crg.llnl.gov (George A. Michael)
george@eos.hac.com (George Zerdian)
gertjan@west.nl (Gertjan van Oosten)
gkc@freddie.udev.cdc.com
gls@hare.udev.cdc.com (gl sprandel x4707)
gregk@hadar.fai.com (Greg Kemnitz)
gsb@hare.udev.cdc.com (Geoff Barrett x2756)
guy@sequoia.cray.com (Guy Chesnot)
haimo@vxuwyz.cern.ch (Haimo G. Zobernig)
hammami@irit.fr
hamrick@dryheat.convex.com (Ed Hamrick)
hannuk@cs.tamu.edu (Hannu H Kari)
hardym@Sdsc.Edu (No matter where you go, there you are!)
hartz@beta.lanl.gov (W. R. Hartshorn)
hender@nas.nasa.gov (Robert L. Henderson)
hideaki@techops.cray.com (Hideaki Moriyama)
holmes@Csa2.LBL.Gov (Harvard Holmes)
hongmen@super.clipper.ingr.com (Hong Men Su)
iain@powerslide.asd.sgi.com (Iain McClatchie)
infmx!halley!raid5!milton@uunet.UU.NET (Milton Scritsmier)
infmx!halley!raid5!sam@uunet.UU.NET (Sam Pendleton)
jan@gaspra.neuroinformatik.ruhr-uni-bochum.de (Jan Vorbrueggen)
jdr@mlb.semi.harris.com (Jim Ray)
jesup@cbmvax.cbm.commodore.com (Randell Jesup)
jhh@allspice.Berkeley.EDU (John H. Hartman)
jhood@banana.ithaca.ny.us (John Hood)
jiml@stovall.slh.wisc.edu (James E. Leinweber)
jkay@cs.UCSD.EDU (Jon Kay)
jlw@improb.cls.com (Jeff Wannamaker)
jms@tardis.Tymnet.COM (Joe Smith)
jms@wits-end.informix.com (John Stephens)
jnh@iris41.biosym.com (Jon Hurley)
joel@CS.Berkeley.EDU (Joel A. Fine)
john@iastate.edu
johnw@bnd2.bnd.oz.au (John Warburton)
joseph@asl.dl.nec.com	(V. John Joseph)
jpk@ingres.com (Jon Krueger)
jpp@slxinc.specialix.com (John Pettitt)
jrc@concurrent.co.uk
jsb@odi.com
kaneko@tenet.Berkeley.EDU (Seiji Kaneko)
kaufmann <kaufmann@inf.ethz.ch>
kenchew@cs.utexas.edu (Khien Mien Kennedy Chew)
kfitzpa@hubcap.clemson.edu (Kevin Fitzpatrick)
kneuper@hermes.chpc.utexas.edu (Stephen Kneuper)
krisna@cs.wisc.edu (Krishna Kunchithapadam)
kwe@frosty.clearlake.ibm.com (Kurt Everson)
l1ngo@copper.Denver.Colorado.EDU (Swift)
larry@turing.abbott.com (Larry Pajakowski)
lawthers@Solbourne.COM (Peter Lawthers)
ldk@udev.cdc.com (ld kelley x-6857)
linimon@nominil.lonestar.org (Mark Linimon)
lm@slovax.eng.sun.com (Larry McVoy)
lutz@giverny.Berkeley.EDU (Ken Lutz)
mao@postgres.Berkeley.EDU (Mike Olson)
marier@blkcmb.zso.dec.com
markb@Solbourne.COM (Mark W. Bradley)
markhill@cs.wisc.edu (Mark D. Hill)
matwood%peruvian@cs.utah.edu (Mark Atwood)
mbjr@futserv.austin.ibm.com (Mauricio Breteinitz Jr.)
meissner@osf.org
mendel@lagunita.stanford.edu
michael@hal.com (Michael Coxe)
mike@inftec.be
mike@pooh.etl.army.mil (Mike McDonnell)
mjacob@kpc.com (Matt Jacob)
mks!sean@watserv1.uwaterloo.ca (Sean Goggin)
mlthorn@pbhyg.pacbell.com (Margaret L. Thornberry)
mmm@icon.com (Mark Muhlestein)
mourad@bach.crhc.uiuc.edu (Antoine Mourad)
mox@vpnet.chi.il.us (William Moxley)
mport!mport!admin!ofshost!jack@uunet.uu.net
muir@csi.com (David Muir Sharnoff)
mwalker@houdini.eece.unm.edu (Mark Walker)
mwc@doc.Lanl.GOV (bill collins)
nadkarni@Solbourne.COM (Sanjay Nadkarni)
nahum@freal.cs.umass.edu
newt@ultra.com
npi!conroy@uunet.UU.NET (Pat Conroy)
numb@root.co.uk (Matthew Newman)
nv90-mho@nada.kth.se
nyeager@ncsa.uiuc.edu (Nancy Yeager)
objy!server!ereed@Sun.COM (Ed Reed)
optigfx!optis31!serge@uunet.UU.NET (Serge Issakov)
optigfx!optisun14!philn@uunet.UU.NET (Phil Nguyen)
pattrsn@CS.Berkeley.EDU (Dave Patterson)
paul@osc.edu
pbg@cs.brown.edu (Peter Galvin)
peno@kps.se (Pekka Nousiainen /DP)
peter@Mozart.Informatik.RWTH-Aachen.DE (Peter Heimann)
pfile@allspice.Berkeley.EDU (Rob Pfile)
phil@gate.cmd.com (Phil Chan)
pmchen@allspice.Berkeley.EDU (Peter M. Chen)
prechelt@ira.uka.de
pww@cherry.cray.com (Paul Wells)
ravi@kiran.udev.cdc.com (Ravi Tavakley)
rbw00@juts.ccc.amdahl.com (Richard Wilmot)
rcain@netcom.com (Robert Cain)
rcc30@juts.ccc.amdahl.com (Richard Croucher)
renglish@cello.hpl.hp.com
richardc@mentor.cc.purdue.edu (Richard Commander)
rick@CRICK.SSCTR.bcm.tmc.edu (Richard H. Miller)
rick@ofa123.fidonet.org (Rick Ellis)
rob@merlin.georgetown.edu (Rob Guttmann)
roger@SST.LL.MIT.EDU (Roger L. Hale)
rthomson@dsd.es.com (Rich Thomson)
ruef@ocfmail.ocf.llnl.gov (Richard Ruef)
rwa@cs.athabascau.ca (Ross Alexander)
scott@labtam.labtam.oz.au (Scott Colwell)
scw@ollie.SEAS.UCLA.EDU (Stephen C. Woods)
seager@phoenix.ocf.llnl.gov (Mark Seager)
seidel@puma.sri.com
shan@techops.cray.com (Sharan Kalwani)
silver@samaria.ced.tuc.gr ( Spiros Arguropoulos )
sjuphil!wlyle@uu.psi.com (Wayne Lyle)
smj@beta.lanl.gov (Stephen M. Johnson)
snielsen@computer-science.strathclyde.ac.uk
snitor!rmc@uunet.UU.NET (Russell Crook)
solworth@parsys.eecs.uic.edu
srogers@tad.eds.com (Steve Rogers)
srp@zip.eecs.umich.edu
sshang@hector.usc.edu (Shi-Sheng Shang)
steve@presoft.com (Steve Kohlenberger)
steveb@cbmvax.cbm.commodore.com (Steve Beats)
sybase!tim@Sun.COM (Tim Wood)
tgl@ssd.kodak.com (Tom Lathrop (588-0677))
tim@boxhill.com (Timothy Jones)
tlayher@eccdb1.pms.ford.com (Thomas A. Layher)
tln@nsdsse.lbl.gov (Tom Noggle)
twl@cs.brown.edu (Ted "Theodore" W. Leung)
twl@screamer.cs.brown.edu (Theodore W. Leung)
u35041@u2.ncsa.uiuc.edu
vicki@kelso.rsmas.miami.edu (vicki halliwell)
vlsiphx!enforcer!donr@asuvax.eas.asu.edu (Don Robinson)
vvkmj@sven.lerc.nasa.gov (Keith Jackson)
wakew@jingluo.cs.vt.edu (William Wake)
waterfal@pyrsea.sea.pyramid.com (Douglas Waterfall)
wayne@mr_magoo.sbi.com (Wayne Schmidt)
wayne@pyrhard2.eng.pyramid.com (Wayne Anderson)
wayneh@ux6.lbl.gov (Wayne Hurlbert)
winfrms@dutiws.tudelft.nl (Rob Mersel)
wong@rkna50.riken.go.jp (Wong Weng Fai)
wscott@ecn.purdue.edu (Wayne H Scott)
wyant@centerline.com
xxbja@atlas.lerc.nasa.gov (BettyJo Armstead)
xxcrys@atlas.lerc.nasa.gov (Crystal Ratliff)
xxnls@rockwell.lerc.nasa.gov (Norbert Seidel)
xxremak@lercuts.lerc.nasa.gov (David A. Remaklus)
xxronr@convx1.lerc.nasa.gov (Ron Rivett)
yycal@shikra.lerc.nasa.gov (Calvin Ramos)
zabback <zabback@inf.ethz.ch>
zwilling@cs.wisc.edu (Mike Zwilling)

The following people voted NO:
===========================================

<jab@egr.duke.edu>
John Haugh <jfh@devnet.la.locus.com>
Norman Yarvin <yarvin-norman@CS.YALE.EDU>
Timothy VanFosson <timv@ccad.uiowa.edu>
andi@comet.public.sub.org (Andreas Roemer)
austin@franklin.com (Austin G. Hastings)
blm%6sigma@uunet.UU.NET
cristy@magick.es.dupont.com (Cristy)
cward@mordor.dseg.ti.com (Christopher Ward)
mehta@ucunix.san.uc.EDU (mehta)
owens@cookiemonster.cc.buffalo.edu (Bill Owens)
-- 
A. Lester Buck   buck@siswat.hou.tx.us   ...!uhnix1!siswat!buck

From buck@siswat.hou.tx.us Tue Sep 12 10:42:44 1995
Xref: rpi news.announce.newgroups:1732 news.groups:37200 comp.arch:20361 comp.infosystems:484 comp.object:5340 comp.os.misc:1585 comp.periphs:3238 comp.periphs.scsi:4884 comp.std.misc:408 comp.sw.components:652 comp.theory.info-retrieval:5
Newsgroups: news.announce.newgroups,news.groups,comp.arch,comp.infosystems,comp.object,comp.os.misc,comp.periphs,comp.periphs.scsi,comp.std.misc,comp.sw.components,comp.theory.info-retrieval
Path: rpi!bounce-back
From: buck@siswat.hou.tx.us (Lester Buck)
Subject: RFD:  comp.storage
Followup-To: news.groups
Sender: tale@cs.rpi.edu
Nntp-Posting-Host: cs.rpi.edu
Organization: Photon Graphics
Date: 13 Jan 92 16:14:47 GMT
Approved: tale@rpi.edu
Lines: 102
Status: O
X-Status: 

comp.storage - Request for New Group Discussion

As processors become faster and faster, a major bottleneck in computing
becomes access to storage services:  the hardware - disk, tape, optical,
solid-state disk, robots, etc., and the software - uniform and
convenient access to storage hardware.  A far too true comment is that
"A supercomputer is a machine that converts a compute-bound problem into
an I/O-bound problem."  As supercomputer performance reaches desktops, we
all experience the problems of:

o	hot processor chips strapped onto anemic I/O architectures
o	incompatible storage systems that require expensive systems
	    integration gurus to integrate and maintain
o	databases that are intimately bound into the quirks of an
	    operating system for performance
o	applications that are unable to obtain guarantees on when their
	    data and/or metadata is on stable storage
o	cheap tape libraries and robots that are under-utilized because
	    software for migration and caching to disk is not readily
	    available
o	nightmares in writing portable applications that attempt to
	    access tape volumes

This newsgroup would be a forum for discussions on storage topics including
the following:

1)	commercial products - OSF Distributed File System (DFS) based on
	    Andrew, Epoch Infinite Storage Manager and Renaissance,
	    Auspex NS5000 NFS server, Legato PrestoServer, AT&T Veritas,
	    OSF Logical Volume Manager, DISCOS UniTree, etc.
2)	storage strategies from major vendors - IBM System Managed Storage,
	    HP Distributed Information Storage Architecture and StoragePlus,
	    DEC Digital Storage Architecture (DSA), Distributed
	    Heterogeneous Storage Management (DHSM), Hierarchical Storage
	    Controllers, and Mass Storage Control Protocol (MSCP)
3)	IEEE 1244 Storage Systems Standards working group
4)	ANSI X3B11.1 and Rock Ridge WORM filesystem standards groups
5)	emerging standard high-speed (100 MB/sec and up) interconnects to
	    storage systems: HIPPI, Fiber Channel Standard, etc.
6)	POSIX supercomputing and batch committees' work on storage
	    volumes and tape mounts
7)	magnetic tape semantics ("Unix tape support is an oxymoron.")
8)	physical volume management - volume naming, mount semantics,
	    enterprise-wide tracking of cartridges, etc.
9)	models for tape robots and optical jukeboxes - SCSI-2, etc.
10)	designs for direct network-attached storage (storage as black box)
11)	backup and archiving strategies
12)	raw storage services (i.e., raw byte strings) vs. management of
	    structured data types (e.g. directories, database records,...)
13)	storage services for efficient database support
14)	storage server interfaces, e.g., OSF/1 Logical Volume Manager
15)	object server and browser technology, e.g. Berkeley's Sequoia 2000
16)	separation of control and data paths for high performance by
	    removing the control processor from the data path; this
	    eliminates the requirements for expensive I/O-capable
	    (i.e., mainframe) control processors
17)	operating system-independent filesystem design
18)	SCSI-3 proposal for a flat filesystem built into the disk drive
19)	client applications which bypass/ignore filesystems:
	    virtual memory, databases, mail, hypertext, etc.
20)	layered access to storage services - How low level do we want
	    device control?  How to support sophisticated, high-performance
	    applications that need to bypass the file abstraction?
21)	migration and caching of storage objects in a distributed
	    hierarchy of media types
22)	management of replicated storage objects (differences/similarities
	    to migration?)
23)	optimization of placement of storage objects vs. location
	    transparency and independence
24)	granularity of replication - filesystem, file, segment, record, ...
25)	storage systems management - What information does an administrator
	    need to manage a large, distributed storage system?
26)	security issues - Who do you trust when your storage is
	    directly networked?
27)	RAID array architectures, including RADD (Redundant Arrays
	    of Distributed Disks) and Berkeley RAID-II HIPPI systems
28)	architectures and problems for tape arrays - striped tape systems
29)	stable storage algorithm of Lampson and Sturgis for critical metadata
30)	How can cheap MIPS and RAM help storage? -  HP DataMesh, write-only
	    disk caches, non-volatile caches, etc.
31)	support for multi-media or integrated digital continuous media
	    (audio, video, other realtime data streams)
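Item 29 above deserves a one-screen illustration. The Lampson-Sturgis stable storage discipline keeps each logical block as two physical copies, written strictly one after the other and each guarded by a checksum, so a crash can corrupt at most the copy being written. A minimal in-memory sketch, with the CRC-32 checksum and class layout as illustrative choices rather than part of the original algorithm:

```python
# Sketch of Lampson-Sturgis stable storage: two checksummed copies
# per logical block, updated sequentially ("careful write"), with a
# recovery pass that propagates a valid copy over a damaged one.
import zlib

class StableBlock:
    def __init__(self):
        self.copies = [None, None]   # simulated physical blocks

    @staticmethod
    def _frame(data):
        return (data, zlib.crc32(data))   # payload plus checksum

    @staticmethod
    def _valid(copy):
        return copy is not None and zlib.crc32(copy[0]) == copy[1]

    def write(self, data):
        # Careful write: copy 0 first, then copy 1 -- never both at once,
        # so at least one copy is always internally consistent.
        self.copies[0] = self._frame(data)
        self.copies[1] = self._frame(data)

    def read(self):
        # Prefer copy 0; fall back to copy 1 if copy 0 is damaged.
        for copy in self.copies:
            if self._valid(copy):
                return copy[0]
        raise IOError("both copies corrupt: stable storage lost")

    def recover(self):
        # Crash recovery: overwrite any damaged copy with a valid one.
        good = [c for c in self.copies if self._valid(c)]
        if good:
            self.copies = [good[0], good[0]]
```

A torn write to one copy (simulated by clobbering it) leaves the block readable, and recover() restores full redundancy, which is why the technique is attractive for the critical metadata the item mentions.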

The current Usenet hierarchy has no central place for these discussions.
The issues are much broader than Unix (comp.unix.*, comp.os.*), as they
transcend operating systems in general.  The peripheral groups
(comp.periphs, comp.periphs.scsi) are much too hardware oriented for
these topics.  Distributed computer systems of the future will offer
standard network storage services; what operating system(s) they use (if
any) will be irrelevant to their clients.  The architecture and
massively parallel computing groups (comp.arch, comp.parallel) are also
inappropriate.  [Of course, Usenet should have a comp.distributed
newsgroup, but that is for another time.]  Several of these topics
involve active standards groups, but some of the standards aspects are
research topics in distributed systems.  Real products are evolving at a
furious rate, and commercial activity may outpace some standards efforts
underway.

I envision this group as being unmoderated.

-- 
A. Lester Buck   buck@siswat.hou.tx.us   ...!uhnix1!siswat!buck

From buck@siswat.hou.tx.us Tue Sep 12 10:47:14 1995
Xref: rpi news.announce.newgroups:1912 news.groups:40838 comp.arch:22165 comp.periphs:3455 comp.databases:14361 comp.std.misc:424 comp.unix.large:501 comp.unix.admin:5688
Newsgroups: news.announce.newgroups,news.groups,comp.arch,comp.periphs,comp.databases,comp.std.misc,comp.unix.large,comp.unix.admin
Path: rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!haven.umd.edu!uunet!bounce-back
From: buck@siswat.hou.tx.us (Lester Buck)
Subject: CFV:  comp.arch.storage
Followup-To: poster
Sender: tale@uunet.uu.net (David C Lawrence)
Organization: Photon Graphics
Date: Fri, 13 Mar 1992 16:41:27 GMT
Approved: tale@uunet.uu.net

After more thought, comp.arch.storage seems to make more sense than
comp.storage.  Instructions for voting are given at the end.  The voting
address is NOT the From: address of this posting, so don't try to vote
by replying to this article!


CALL FOR VOTES TO CREATE NEWSGROUP
----------------------------------

NAME: 
          comp.arch.storage

STATUS:
          unmoderated


DESCRIPTION:

	storage system issues, both software and hardware


CHARTER:

To facilitate and encourage communication among people interested in
computer storage systems.  The scope of the discussions would include
issues relevant to all types of computer storage systems, both hardware
and software.  The general emphasis here is on open storage systems as
opposed to platform specific products or proprietary hardware from a
particular vendor.  Such vendor specific discussions might belong in
comp.sys.xxx or comp.periphs.  Many of these questions are at the
research, architectural, and design levels today, but as more general
storage system products enter the market, discussions may expand into
"how to use" type questions.


RATIONALE:

As processors become faster and faster, a major bottleneck in computing
becomes access to storage services:  the hardware - disk, tape,
optical, solid-state disk, robots, etc., and the software - uniform and
convenient access to storage hardware.  An all-too-true quip is that
"A supercomputer is a machine that converts a compute-bound problem
into an I/O-bound problem."  As supercomputer performance reaches
desktops, we all experience the problems of:

o	hot processor chips strapped onto anemic I/O architectures
o	incompatible storage systems that require expensive systems
	    integration gurus to integrate and maintain
o	databases that are intimately bound into the quirks of an
	    operating system for performance
o	applications that are unable to obtain guarantees on when their
	    data and/or metadata is on stable storage
o	cheap tape libraries and robots that are under-utilized because
	    software for migration and caching to disk is not readily
	    available
o	nightmares in writing portable applications that attempt to
	    access tape volumes

This group will be a forum for discussions on storage topics including
the following:

1)	commercial products - OSF Distributed File System (DFS) based on
	    Andrew, Epoch Infinite Storage Manager and Renaissance,
	    Auspex NS5000 NFS server, Legato PrestoServer, AT&T Veritas,
	    OSF Logical Volume Manager, DISCOS UniTree, etc.
2)	storage strategies from major vendors - IBM System Managed Storage,
	    HP Distributed Information Storage Architecture and StoragePlus,
	    DEC Digital Storage Architecture (DSA), Distributed
	    Heterogeneous Storage Management (DHSM), Hierarchical Storage
	    Controllers, and Mass Storage Control Protocol (MSCP)
3)	IEEE 1244 Storage Systems Standards Working Group
4)	ANSI X3B11.1 and Rock Ridge WORM file system standards groups
5)	emerging standard high-speed (100 MB/sec and up) interconnects to
	    storage systems: HIPPI, Fibre Channel Standard, etc.
6)	POSIX supercomputing and batch committees' work on storage
	    volumes and tape mounts
7)	magnetic tape semantics ("Unix tape support is an oxymoron.")
8)	physical volume management - volume naming, mount semantics,
	    enterprise-wide tracking of cartridges, etc.
9)	models for tape robots and optical jukeboxes - SCSI-2, etc.
10)	designs for direct network-attached storage (storage as black box)
11)	backup and archiving strategies
12)	raw storage services (i.e., raw byte strings) vs. management of
	    structured data types (e.g. directories, database records,...)
13)	storage services for efficient database support
14)	storage server interfaces, e.g., OSF/1 Logical Volume Manager
15)	object server and browser technology, e.g. Berkeley's Sequoia 2000
16)	separation of control and data paths for high performance by
	    removing the control processor from the data path; this
	    eliminates the requirements for expensive I/O-capable
	    (i.e., mainframe) control processors
17)	operating system-independent file system design
18)	SCSI-3 proposal for a flat file system built into the disk drive
19)	client applications which bypass/ignore file systems:
	    virtual memory, databases, mail, hypertext, etc.
20)	layered access to storage services - How low level do we want
	    device control?  How to support sophisticated, high-performance
	    applications that need to bypass the file abstraction?
21)	migration and caching of storage objects in a distributed
	    hierarchy of media types
22)	management of replicated storage objects (differences/similarities
	    to migration?)
23)	optimization of placement of storage objects vs. location
	    transparency and independence
24)	granularity of replication - file system, file, segment, record, ...
25)	storage systems management - What information does an administrator
	    need to manage a large, distributed storage system?
26)	security issues - Who do you trust when your storage is
	    directly networked?
27)	RAID array architectures, including RADD (Redundant Arrays
	    of Distributed Disks) and Berkeley RAID-II HIPPI systems
28)	architectures and problems for tape arrays - striped tape systems
29)	stable storage algorithm of Lampson and Sturgis for critical metadata
30)	How can cheap MIPS and RAM help storage? -  HP DataMesh, write-only
	    disk caches, non-volatile caches, etc.
31)	support for multi-media or integrated digital continuous media
	    (audio, video, other realtime data streams)
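The RAID architectures of item 27 rest on one small mechanism worth showing: in a single-parity array (the RAID level 4/5 idea), the parity block is the bytewise XOR of the data blocks, so any one lost block can be rebuilt by XOR-ing the survivors with the parity. A sketch, with block contents and counts chosen purely for illustration:

```python
# Sketch of single-parity RAID arithmetic: parity is the bytewise
# XOR of the data blocks; one missing block is reconstructed by
# XOR-ing the surviving blocks with the parity block.
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, parity):
    """Reconstruct the single missing data block."""
    return xor_blocks(surviving_blocks + [parity])
```

Because XOR is its own inverse, rebuild() is the same operation as parity generation, which is what makes a degraded array serviceable while a replacement disk is populated.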

This group will serve as a forum for the discussion of issues which do
not easily fit into the more tightly focused discussions in various
existing newsgroups.  The issues are much broader than Unix
(comp.unix.*, comp.os.*), as they transcend operating systems in
general.  Distributed computer systems of the future will offer
standard network storage services; what operating system(s) they use
(if any) will be irrelevant to their clients.  The peripheral groups
(comp.periphs, comp.periphs.scsi) are too hardware oriented for these
topics.  Several of these topics involve active standards groups, but
many storage system issues remain research topics in distributed
systems.  In general, the standards newsgroups (comp.std.xxx) are too
narrowly focused for these discussions.

VOTES:

ONLY VOTES RECEIVED BY APRIL 10, 23:59 CDT WILL BE COUNTED

TO VOTE YES: send mail to yes@siswat.hou.tx.us with the words "YES" and
"comp.arch.storage" in the subject line (preferred) or message body
(acceptable)

TO VOTE NO: send mail to no@siswat.hou.tx.us with the words "NO" and
"comp.arch.storage" in the subject line (preferred) or message body
(acceptable)

Only votes mailed to the above addresses will be counted.  In
particular, votes mailed to me directly or through replying to this
posting will not be counted.  Ambiguous votes or votes with
qualifications ("I would vote yes for comp.arch.storage provided
that...") will not be counted.  In the case of multiple votes from a
given person, only the last will be counted.

This Call For Votes, along with acknowledgements of votes received will 
be posted several times throughout the voting period.  
-- 
A. Lester Buck   buck@siswat.hou.tx.us   ...!uhnix1!siswat!buck

 