From [email protected] Fri Nov  6 10:24:44 1992
Xref: rpi news.announce.newgroups:1912 news.groups:40838 comp.arch:22165 comp.periphs:3455 comp.databases:14361 comp.std.misc:424 comp.unix.large:501 comp.unix.admin:5688
Newsgroups: news.announce.newgroups,news.groups,comp.arch,comp.periphs,comp.databases,comp.std.misc,comp.unix.large,comp.unix.admin
Path: rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!haven.umd.edu!uunet!bounce-back
From: [email protected] (Lester Buck)
Subject: CFV:  comp.arch.storage
Followup-To: poster
Sender: [email protected] (David C Lawrence)
Organization: Photon Graphics
Date: Fri, 13 Mar 1992 16:41:27 GMT
Approved: [email protected]

After more thought, comp.arch.storage seems to make more sense than
comp.storage.  Instructions for voting are given at the end.  The voting
address is NOT the From: address of this posting, so don't try to vote
by replying to this article!


CALL FOR VOTES TO CREATE NEWSGROUP
----------------------------------

NAME: 
          comp.arch.storage

STATUS:
          unmoderated


DESCRIPTION:

	storage system issues, both software and hardware


CHARTER:

To facilitate and encourage communication among people interested in
computer storage systems.  The scope of the discussions would include
issues relevant to all types of computer storage systems, both hardware
and software.  The general emphasis here is on open storage systems as
opposed to platform-specific products or proprietary hardware from a
particular vendor.  Such vendor-specific discussions might belong in
comp.sys.xxx or comp.periphs.  Many of these questions are at the
research, architectural, and design levels today, but as more general
storage system products enter the market, discussions may expand into
"how to use" type questions.


RATIONALE:

As processors become faster and faster, a major bottleneck in computing
becomes access to storage services:  the hardware - disk, tape,
optical, solid-state disk, robots, etc., and the software - uniform and
convenient access to storage hardware.  An all-too-true observation is that
"A supercomputer is a machine that converts a compute-bound problem
into an I/O-bound problem."  As supercomputer performance reaches
desktops, we all experience the problems of:

o	hot processor chips strapped onto anemic I/O architectures
o	incompatible storage systems that require expensive systems
	    integration gurus to integrate and maintain
o	databases that are intimately bound into the quirks of an
	    operating system for performance
o	applications that are unable to obtain guarantees on when their
	    data and/or metadata is on stable storage
o	cheap tape libraries and robots that are under-utilized because
	    software for migration and caching to disk is not readily
	    available
o	nightmares in writing portable applications that attempt to
	    access tape volumes

This group will be a forum for discussions on storage topics including
the following:

1)	commercial products - OSF Distributed File System (DFS) based on
	    Andrew, Epoch Infinite Storage Manager and Renaissance,
	    Auspex NS5000 NFS server, Legato PrestoServer, AT&T Veritas,
	    OSF Logical Volume Manager, DISCOS UniTree, etc.
2)	storage strategies from major vendors - IBM System Managed Storage,
	    HP Distributed Information Storage Architecture and StoragePlus,
	    DEC Digital Storage Architecture (DSA), Distributed
	    Heterogeneous Storage Management (DHSM), Hierarchical Storage
	    Controllers, and Mass Storage Control Protocol (MSCP)
3)	IEEE 1244 Storage Systems Standards Working Group
4)	ANSI X3B11.1 and Rock Ridge WORM file system standards groups
5)	emerging standard high-speed (100 MB/sec and up) interconnects to
	    storage systems: HIPPI, Fibre Channel Standard, etc.
6)	POSIX supercomputing and batch committees' work on storage
	    volumes and tape mounts
7)	magnetic tape semantics ("Unix tape support is an oxymoron.")
8)	physical volume management - volume naming, mount semantics,
	    enterprise-wide tracking of cartridges, etc.
9)	models for tape robots and optical jukeboxes - SCSI-2, etc.
10)	designs for direct network-attached storage (storage as black box)
11)	backup and archiving strategies
12)	raw storage services (i.e., raw byte strings) vs. management of
	    structured data types (e.g. directories, database records,...)
13)	storage services for efficient database support
14)	storage server interfaces, e.g., OSF/1 Logical Volume Manager
15)	object server and browser technology, e.g. Berkeley's Sequoia 2000
16)	separation of control and data paths for high performance by
	    removing the control processor from the data path; this
	    eliminates the requirements for expensive I/O-capable
	    (i.e., mainframe) control processors
17)	operating system-independent file system design
18)	SCSI-3 proposal for a flat file system built into the disk drive
19)	client applications which bypass/ignore file systems:
	    virtual memory, databases, mail, hypertext, etc.
20)	layered access to storage services - How low level do we want
	    device control?  How to support sophisticated, high-performance
	    applications that need to bypass the file abstraction?
21)	migration and caching of storage objects in a distributed
	    hierarchy of media types
22)	management of replicated storage objects (differences/similarities
	    to migration?)
23)	optimization of placement of storage objects vs. location
	    transparency and independence
24)	granularity of replication - file system, file, segment, record, ...
25)	storage systems management - What information does an administrator
	    need to manage a large, distributed storage system?
26)	security issues - Who do you trust when your storage is
	    directly networked?
27)	RAID array architectures, including RADD (Redundant Arrays
	    of Distributed Disks) and Berkeley RAID-II HIPPI systems
28)	architectures and problems for tape arrays - striped tape systems
29)	stable storage algorithm of Lampson and Sturgis for critical metadata
	    (a short illustrative sketch appears after this list)
30)	How can cheap MIPS and RAM help storage? -  HP DataMesh, write-only
	    disk caches, non-volatile caches, etc.
31)	support for multi-media or integrated digital continuous media
	    (audio, video, other realtime data streams)
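
For those unfamiliar with item 29: the Lampson and Sturgis notion of
stable storage keeps two copies of each critical block on independent
devices, writes them one at a time so that a crash can damage at most
one copy, and repairs a damaged copy from its twin during recovery.
The Python fragment below is only a rough sketch of that idea; the
file-backed "disks", the checksum, and all of the names in it are
assumptions made up for illustration, not the published algorithm or
any product mentioned above.

import hashlib, os

DISKS = ("diskA.img", "diskB.img")    # stand-ins for two independent devices

def _frame(data):
    # prefix the payload with a checksum so a torn write is detectable
    return hashlib.md5(data).digest() + data

def _read_copy(path):
    # return the payload if this copy exists and its checksum matches
    if not os.path.exists(path):
        return None
    raw = open(path, "rb").read()
    digest, data = raw[:16], raw[16:]
    return data if hashlib.md5(data).digest() == digest else None

def stable_put(data):
    # careful, sequential writes: force copy 1 to media before touching copy 2
    for path in DISKS:
        with open(path, "wb") as f:
            f.write(_frame(data))
            f.flush()
            os.fsync(f.fileno())

def stable_get():
    # read whichever copy is intact; both fail only after a double failure
    for path in DISKS:
        data = _read_copy(path)
        if data is not None:
            return data
    raise IOError("both copies are bad")

def recover():
    # after a crash, rewrite both copies from the surviving good one
    stable_put(stable_get())

A real implementation would place the two copies on separate spindles
and tie the recovery pass into the logging scheme of the Lampson and
Sturgis paper; the sketch only shows the basic careful-write discipline.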

This group will serve as a forum for the discussion of issues which do
not easily fit into the more tightly focused discussions in various
existing newsgroups.  The issues are much broader than Unix
(comp.unix.*, comp.os.*), as they transcend operating systems in
general.  Distributed computer systems of the future will offer
standard network storage services; what operating system(s) they use
(if any) will be irrelevant to their clients.  The peripheral groups
(comp.periphs, comp.periphs.scsi) are too hardware oriented for these
topics.  Several of these topics involve active standards groups, but
many storage system issues remain research topics in distributed
systems.  In general, the standards newsgroups (comp.std.xxx) are too
narrowly focused for these discussions.

VOTES:

ONLY VOTES RECEIVED BY APRIL 10, 23:59 CDT WILL BE COUNTED

TO VOTE YES: send mail to [email protected] with the words "YES" and
"comp.arch.storage" in the subject line (preferred) or message body
(acceptable)

TO VOTE NO: send mail to [email protected] with the words "NO" and
"comp.arch.storage" in the subject line (preferred) or message body
(acceptable)

Only votes mailed to the above address will be counted.  In
particular, votes mailed to me directly or through replying to this
posting will not be counted.  Ambiguous votes or votes with
qualifications ("I would vote yes for comp.arch.storage provided
that...") will not be counted.  In the case of multiple votes from a
given person, only the last will be counted.
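
To make the mechanical part of these rules concrete, here is a small
Python sketch of one way such a tally could be kept.  The
(sender, subject, body) message format and every name in it are made up
for illustration and say nothing about the actual vote-taking software;
in particular, spotting a qualified vote ("yes, provided that...") still
takes a human reader, so the sketch only handles the unambiguous cases
and the last-vote-wins rule.

import re

def tally(messages):
    # messages: iterable of (sender, subject, body), in arrival order
    latest = {}                                  # sender -> "YES" or "NO"
    for sender, subject, body in messages:
        words = set(re.findall(r"[A-Z.]+", (subject + " " + body).upper()))
        if "COMP.ARCH.STORAGE" not in words:
            continue                             # group not named: discard
        yes, no = "YES" in words, "NO" in words
        if yes == no:
            continue                             # neither or both: ambiguous, discard
        latest[sender] = "YES" if yes else "NO"  # a later vote replaces an earlier one
    total_yes = sum(1 for v in latest.values() if v == "YES")
    return total_yes, len(latest) - total_yes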

This Call For Votes, along with acknowledgements of votes received, will
be posted several times throughout the voting period.  
-- 
A. Lester Buck   [email protected]   ...!uhnix1!siswat!buck

From [email protected] Sun Apr 19 21:58:48 1992
Xref: uunet news.announce.newgroups:2214 news.groups:48782 comp.arch:29551 comp.periphs:4732 comp.databases:16742 comp.std.misc:503 comp.unix.large:530 comp.unix.admin:6160
Newsgroups: news.announce.newgroups,news.groups,comp.arch,comp.periphs,comp.databases,comp.std.misc,comp.unix.large,comp.unix.admin
Path: uunet!bounce-back
From: [email protected] (Lester Buck)
Subject: RESULT:  comp.arch.storage passes 357: 11
Message-ID: <[email protected]>
Followup-To: news.groups
Sender: [email protected] (David C Lawrence)
Organization: Photon Graphics
Date: Mon, 13 Apr 1992 18:12:58 GMT
Approved: [email protected]
Lines: 509

VOTING RESULTS:

The proposed newsgroup comp.arch.storage received 357 YES votes and 11
NO votes during the voting period (13 Mar 1992 to 23:59, 10 April 1992).
As the excess of YES votes over NO votes was more than 100, and at least
2/3 of the votes were YES, this newsgroup should be created.  A list of
YES and NO votes is appended to the charter.
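
For anyone who wants to check the arithmetic, the two conditions
applied above reduce to a pair of comparisons.  The short Python
sketch below reproduces them for this tally; the function name is
illustrative only and is not part of any official vote-taking tool.

def newsgroup_passes(yes, no):
    # the two tests applied above: more than 100 more YES than NO votes,
    # and YES making up at least two thirds of all valid votes cast
    margin_ok = (yes - no) > 100
    ratio_ok = 3 * yes >= 2 * (yes + no)
    return margin_ok and ratio_ok

# tally reported in this posting: 357 YES, 11 NO
print(newsgroup_passes(357, 11))    # True: 346 > 100 and 357/368 is about 97%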


NAME: 
          comp.arch.storage

STATUS:
          unmoderated


DESCRIPTION:

	  storage system issues, both software and hardware


CHARTER:

To facilitate and encourage communication among people interested in
computer storage systems.  The scope of the discussions would include
issues relevant to all types of computer storage systems, both hardware
and software.  The general emphasis here is on open storage systems as
opposed to platform-specific products or proprietary hardware from a
particular vendor.  Such vendor-specific discussions might belong in
comp.sys.xxx or comp.periphs.  Many of these questions are at the
research, architectural, and design levels today, but as more general
storage system products enter the market, discussions may expand into
"how to use" type questions.


RATIONALE:

As processors become faster and faster, a major bottleneck in computing
becomes access to storage services:  the hardware - disk, tape,
optical, solid-state disk, robots, etc., and the software - uniform and
convenient access to storage hardware.  An all-too-true observation is that
"A supercomputer is a machine that converts a compute-bound problem
into an I/O-bound problem."  As supercomputer performance reaches
desktops, we all experience the problems of:

o	hot processor chips strapped onto anemic I/O architectures
o	incompatible storage systems that require expensive systems
	    integration gurus to integrate and maintain
o	databases that are intimately bound into the quirks of an
	    operating system for performance
o	applications that are unable to obtain guarantees on when their
	    data and/or metadata is on stable storage
o	cheap tape libraries and robots that are under-utilized because
	    software for migration and caching to disk is not readily
	    available
o	nightmares in writing portable applications that attempt to
	    access tape volumes

This group is a forum for discussions on storage topics including
the following:

1)	commercial products - OSF Distributed File System (DFS) based on
	    Andrew, Epoch Infinite Storage Manager and Renaissance,
	    Auspex NS5000 NFS server, Legato PrestoServer, AT&T Veritas,
	    OSF Logical Volume Manager, DISCOS UniTree, etc.
2)	storage strategies from major vendors - IBM System Managed Storage,
	    HP Distributed Information Storage Architecture and StoragePlus,
	    DEC Digital Storage Architecture (DSA), Distributed
	    Heterogeneous Storage Management (DHSM), Hierarchical Storage
	    Controllers, and Mass Storage Control Protocol (MSCP)
3)	IEEE 1244 Storage Systems Standards Working Group
4)	ANSI X3B11.1 and Rock Ridge WORM file system standards groups
5)	emerging standard high-speed (100 MB/sec and up) interconnects to
	    storage systems: HIPPI, Fibre Channel Standard, etc.
6)	POSIX supercomputing and batch committees' work on storage
	    volumes and tape mounts
7)	magnetic tape semantics ("Unix tape support is an oxymoron.")
8)	physical volume management - volume naming, mount semantics,
	    enterprise-wide tracking of cartridges, etc.
9)	models for tape robots and optical jukeboxes - SCSI-2, etc.
10)	designs for direct network-attached storage (storage as black box)
11)	backup and archiving strategies
12)	raw storage services (i.e., raw byte strings) vs. management of
	    structured data types (e.g. directories, database records,...)
13)	storage services for efficient database support
14)	storage server interfaces, e.g., OSF/1 Logical Volume Manager
15)	object server and browser technology, e.g. Berkeley's Sequoia 2000
16)	separation of control and data paths for high performance by
	    removing the control processor from the data path; this
	    eliminates the requirements for expensive I/O-capable
	    (i.e., mainframe) control processors
17)	operating system-independent file system design
18)	SCSI-3 proposal for a flat file system built into the disk drive
19)	client applications which bypass/ignore file systems:
	    virtual memory, databases, mail, hypertext, etc.
20)	layered access to storage services - How low level do we want
	    device control?  How to support sophisticated, high-performance
	    applications that need to bypass the file abstraction?
21)	migration and caching of storage objects in a distributed
	    hierarchy of media types
22)	management of replicated storage objects (differences/similarities
	    to migration?)
23)	optimization of placement of storage objects vs. location
	    transparency and independence
24)	granularity of replication - file system, file, segment, record, ...
25)	storage systems management - What information does an administrator
	    need to manage a large, distributed storage system?
26)	security issues - Who do you trust when your storage is
	    directly networked?
27)	RAID array architectures, including RADD (Redundant Arrays
	    of Distributed Disks) and Berkeley RAID-II HIPPI systems
28)	architectures and problems for tape arrays - striped tape systems
29)	stable storage algorithm of Lampson and Sturgis for critical metadata
30)	How can cheap MIPS and RAM help storage? -  HP DataMesh, write-only
	    disk caches, non-volatile caches, etc.
31)	support for multi-media or integrated digital continuous media
	    (audio, video, other realtime data streams)

This group will serve as a forum for the discussion of issues which do
not easily fit into the more tightly focused discussions in various
existing newsgroups.  The issues are much broader than Unix
(comp.unix.*, comp.os.*), as they transcend operating systems in
general.  Distributed computer systems of the future will offer
standard network storage services; what operating system(s) they use
(if any) will be irrelevant to their clients.  The peripheral groups
(comp.periphs, comp.periphs.scsi) are too hardware oriented for these
topics.  Several of these topics involve active standards groups, but
many storage system issues remain research topics in distributed
systems.  In general, the standards newsgroups (comp.std.xxx) are too
narrowly focused for these discussions.


The following people voted YES.
===========================================

"A. L. Narasimha Reddy" 
"Aaron "Fish" Lav" 
"Aaron Sawyer" 
"Fred E. Larsen" 
"Jeffery M. Keller" 
"Julian Satran" 
"Kevin Wohlever"  
"LIMS::MRGATE::\"A1::NAIMAN,"@LIMS01.LERC.NASA.GOV
"Leo Uzcategui" 
"Lex mala, lex nulla  16-Mar-1992 1503" 
"Rawn Shah" 
"Ross Garber" 
"Russ Tuck" 
"Sam Coleman" 
"charles j. antonelli" 
(David Silberberg) 
(a. m. rushton) 
[email protected]
[email protected]
[email protected]
ANDY HOSPODOR 
[email protected]
Adam Glass 
Aki Fleshler 
Al Dwyer 
Alan Rollow - Alan's Home for Wayward Tumbleweeds. 
Bengt Larsson 
Bernard Gunther 
Brian Carlton 
Bryan M. Andersen 
[email protected]
[email protected]
[email protected]
CU Sailing 
Catherine Barnaby 
Charles Curran 
Christiana I. Ezeife 
Christopher Johnson 
Claus Brod 
Conor O'Neill 
Crispin Cowan 
[email protected] (Curt Ridgeway - SCSI Advanced Development)
Dana H. Myers 
Daniel G Mintz <[email protected]>
Daniel Huber 
Daniel McCue 
Dave Ford 
Dave Harper 
David Jensen 
David Newton 
Do what thou wilt shall be the whole of the law 
Dominique Grabas 
[email protected] (Don Deal)
[email protected] (E Le Page)
Edward J. Snow 
Esteban Schnir 
Gary Faulkner 
[email protected] (Gary Mueller)
Gerald Fredin 
Gholamali Hedayat (JRG ra) 
Greg Byrd 
Greg Pongracz 
Greg West 
HADDON BRUCE K 
Hans van Staveren 
Harald Nordgard-Hansen 
Harro Kremer 
Hugh LaMaster -- RCS 
[email protected]
[email protected]
[email protected]
James da Silva 
Jeff Berkowitz 
Jeff Wasilko 
Jerry Callen 
Jim Fox 
Joan Eslinger 
John G Dobnick 
[email protected] (John Hevelin)
Jon Solworth 
[email protected] (Joseph Wishner)
[email protected]
Karl Kleine 
Kevin Kelleher 
Klaus Steinberger 
Larry Pelletier 
Larry Stabile 
[email protected] (Madhusudan)
Marc Vaisset 
Marcus Jager 
Mark Russell 
[email protected]
Mathias Bage  
Michael Bethune 
Michael Brouwer 
Miguel Albrecht  +49 89 32006-346 
Oliver J. Tschichold 
P C Hariharan 
Paul Fellows 
Per Ekman  
Pete Gregory 
Peter Hakanson 
Peter R. Luh 
Po Shan Cheah 
[email protected] (Roland Buresund)
[email protected] (Dr Sanjay Ranade)
[email protected]
Randall A. Gacek 
Raymond E. Suorsa 
[email protected] (Renu Raman)
Rob McMahon 
Robert Bell 
Robert J Carter 
Rodney Shojinaga 
Scott Draves 
Scott Huddleston 
Sergiu S. Simmel 
Shabbir Hassanali 
Shel Finkelstein 
Silvia Nittel 
Stan Hanks 
Stuart Boutell 
Susan Thomson 
TM Ravi 
Takeshi Ogasawara 
Thodoros Topaloglou 
Tim Oldham 
Tom Proett 
Tony Wilson 
[email protected]
[email protected]
Zoltan Somogyi 
[email protected]
[email protected] (Adam Shostack)
[email protected] (Tom Beach)
[email protected] (Martin Golding)
[email protected] (Anthony Lapadula)
[email protected] (Ann L. Chervenak)
[email protected] (Dennis Allison)
[email protected] (Joe Keane)
[email protected]
[email protected] (Anant Jhingran)
[email protected] (Andi Krall)
[email protected] (Andrew Rothwell)
[email protected]
[email protected] (Aaron Werman)
[email protected] (Raminder S Bajwa)
[email protected] (Ken Schmahl)
[email protected] (Jo BogLae)
[email protected] (Rajendra V. Boppana)
[email protected]
[email protected] (Wilson S. Ross)
[email protected] (Lester Buck)
bytheway%[email protected] (Sidney Bytheway)
[email protected] (Bruce Richardson)
[email protected] (Loellyn Cassell)
[email protected]
[email protected] (Jack Benkual)
[email protected] (Jack Benkual )
[email protected] (Charles Storer)
[email protected] (Chris Galas)
[email protected] (Christoph Splittgerber)
[email protected]
[email protected] (Case Larsen)
[email protected] (Craig Anderson)
[email protected] (Felix Finch)
[email protected] (Jonas Skeppstedt)
[email protected] (Michael Donald Dahlin)
[email protected] (Volker Herminghaus-Shirai)
[email protected] (Jeff Stai (Stai) x7644)
[email protected] (Dave Goldblatt)
david carlton 
david d `zoo' zuhn 
[email protected] (David Hoopes)
[email protected] (william E Davidsen)
[email protected] (Dan Wilson)
default root account 
[email protected]
des%[email protected] (Dave Skinner 2-361)
[email protected]
[email protected] (Dick Karpinski)
[email protected] (Dinah McNutt)
[email protected] (Dan Jones)
[email protected]
[email protected] (Dan Cummings)
[email protected] (Steven J Dorst)
[email protected] (Doug Wong)
[email protected] (Doug Lee)
[email protected] (Dennis Andrews)
[email protected] (david s. channin)
[email protected] (David Seal)
[email protected] (Gregory W. Edwards)
[email protected] (Edward Feustel)
[email protected] (Paul Eggert)
[email protected] (Edward K. Lee)
[email protected] (ethan miller)
[email protected] (Ed McGuire)
[email protected] (Edward Vielmetti)
[email protected] (Ernest Prabhakar)
[email protected] (Eugene N. Miya)
[email protected] (robert futrelle)
[email protected] (George A. Michael)
[email protected] (George Zerdian)
[email protected] (Gertjan van Oosten)
[email protected]
[email protected] (gl sprandel x4707)
[email protected] (Greg Kemnitz)
[email protected] (Geoff Barrett x2756)
[email protected] (Guy Chesnot)
[email protected] (Haimo G. Zobernig)
[email protected]
[email protected] (Ed Hamrick)
[email protected] (Hannu H Kari)
[email protected] (No matter where you go, there you are!)
[email protected] (W. R. Hartshorn)
[email protected] (Robert L. Henderson)
[email protected] (Hideaki Moriyama)
[email protected] (Harvard Holmes)
[email protected] (Hong Men Su)
[email protected] (Iain McClatchie)
[email protected] (Milton Scritsmier)
[email protected] (Sam Pendleton)
[email protected] (Jan Vorbrueggen)
[email protected] (Jim Ray)
[email protected] (Randell Jesup)
[email protected] (John H. Hartman)
[email protected] (John Hood)
[email protected] (James E. Leinweber)
[email protected] (Jon Kay)
[email protected] (Jeff Wannamaker)
[email protected] (Joe Smith)
[email protected] (John Stephens)
[email protected] (Jon Hurley)
[email protected] (Joel A. Fine)
[email protected]
[email protected] (John Warburton)
[email protected]	(V. John Joseph)
[email protected] (Jon Krueger)
[email protected] (John Pettitt)
[email protected]
[email protected]
[email protected] (Seiji Kaneko)
kaufmann 
[email protected] (Khien Mien Kennedy Chew)
[email protected] (Kevin Fitzpatrick)
[email protected] (Stephen Kneuper)
[email protected] (Krishna Kunchithapadam)
[email protected] (Kurt Everson)
[email protected] (Swift)
[email protected] (Larry Pajakowski)
[email protected] (Peter Lawthers)
[email protected] (ld kelley x-6857)
[email protected] (Mark Linimon)
[email protected] (Larry McVoy)
[email protected] (Ken Lutz)
[email protected] (Mike Olson)
[email protected]
[email protected] (Mark W. Bradley)
[email protected] (Mark D. Hill)
matwood%[email protected] (Mark Atwood)
[email protected] (Mauricio Breteinitz Jr.)
[email protected]
[email protected]
[email protected] (Michael Coxe)
[email protected]
[email protected] (Mike McDonnell)
[email protected] (Matt Jacob)
[email protected] (Sean Goggin)
[email protected] (Margaret L. Thornberry)
[email protected] (Mark Muhlestein)
[email protected] (Antoine Mourad)
[email protected] (William Moxley)
[email protected]
[email protected] (David Muir Sharnoff)
[email protected] (Mark Walker)
[email protected] (bill collins)
[email protected] (Sanjay Nadkarni)
[email protected]
[email protected]
[email protected] (Pat Conroy)
[email protected] (Matthew Newman)
[email protected]
[email protected] (Nancy Yeager)
[email protected] (Ed Reed)
[email protected] (Serge Issakov)
[email protected] (Phil Nguyen)
[email protected] (Dave Patterson)
[email protected]
[email protected] (Peter Galvin)
[email protected] (Pekka Nousiainen /DP)
[email protected] (Peter Heimann)
[email protected] (Rob Pfile)
[email protected] (Phil Chan)
[email protected] (Peter M. Chen)
[email protected]
[email protected] (Paul Wells)
[email protected] (Ravi Tavakley)
[email protected] (Richard Wilmot)
[email protected] (Robert Cain)
[email protected] (Richard Croucher)
[email protected]
[email protected] (Richard Commander)
[email protected] (Richard H. Miller)
[email protected] (Rick Ellis)
[email protected] (Rob Guttmann)
[email protected] (Roger L. Hale)
[email protected] (Rich Thomson)
[email protected] (Richard Ruef)
[email protected] (Ross Alexander)
[email protected] (Scott Colwell)
[email protected] (Stephen C. Woods)
[email protected] (Mark Seager)
[email protected]
[email protected] (Sharan Kalwani)
[email protected] ( Spiros Arguropoulos )
[email protected] (Wayne Lyle)
[email protected] (Stephen M. Johnson)
[email protected]
[email protected] (Russell Crook)
[email protected]
[email protected] (Steve Rogers)
[email protected]
[email protected] (Shi-Sheng Shang)
[email protected] (Steve Kohlenberger)
[email protected] (Steve Beats)
[email protected] (Tim Wood)
[email protected] (Tom Lathrop (588-0677))
[email protected] (Timothy Jones)
[email protected] (Thomas A. Layher)
[email protected] (Tom Noggle)
[email protected] (Ted "Theodore" W. Leung)
[email protected] (Theodore W. Leung)
[email protected]
[email protected] (vicki halliwell)
[email protected] (Don Robinson)
[email protected] (Keith Jackson)
[email protected] (William Wake)
[email protected] (Douglas Waterfall)
wayne@mr_magoo.sbi.com (Wayne Schmidt)
[email protected] (Wayne Anderson)
[email protected] (Wayne Hurlbert)
[email protected] (Rob Mersel)
[email protected] (Wong Weng Fai)
[email protected] (Wayne H Scott)
[email protected]
[email protected] (BettyJo Armstead)
[email protected] (Crystal Ratliff)
[email protected] (Norbert Seidel)
[email protected] (David A. Remaklus)
[email protected] (Ron Rivett)
[email protected] (Calvin Ramos)
zabback 
[email protected] (Mike Zwilling)

The following people voted NO:
===========================================


John Haugh 
Norman Yarvin 
Timothy VanFosson 
[email protected] (Andreas Roemer)
[email protected] (Austin G. Hastings)
blm%[email protected]
[email protected] (Cristy)
[email protected] (Christopher Ward)
[email protected] (mehta)
[email protected] (Bill Owens)
-- 
A. Lester Buck   [email protected]   ...!uhnix1!siswat!buck

From [email protected] Tue Sep 12 10:42:44 1995
Xref: rpi news.announce.newgroups:1732 news.groups:37200 comp.arch:20361 comp.infosystems:484 comp.object:5340 comp.os.misc:1585 comp.periphs:3238 comp.periphs.scsi:4884 comp.std.misc:408 comp.sw.components:652 comp.theory.info-retrieval:5
Newsgroups: news.announce.newgroups,news.groups,comp.arch,comp.infosystems,comp.object,comp.os.misc,comp.periphs,comp.periphs.scsi,comp.std.misc,comp.sw.components,comp.theory.info-retrieval
Path: rpi!bounce-back
From: [email protected] (Lester Buck)
Subject: RFD:  comp.storage
Followup-To: news.groups
Sender: [email protected]
Nntp-Posting-Host: cs.rpi.edu
Organization: Photon Graphics
Date: 13 Jan 92 16:14:47 GMT
Approved: [email protected]
Lines: 102
Status: O
X-Status: 

comp.storage - Request for New Group Discussion

As processors become faster and faster, a major bottleneck in computing
becomes access to storage services:  the hardware - disk, tape, optical,
solid-state disk, robots, etc., and the software - uniform and
convenient access to storage hardware.  An all-too-true observation is that
"A supercomputer is a machine that converts a compute-bound problem into
an I/O-bound problem."  As supercomputer performance reaches desktops, we
all experience the problems of:

o	hot processor chips strapped onto anemic I/O architectures
o	incompatible storage systems that require expensive systems
	    integration gurus to integrate and maintain
o	databases that are intimately bound into the quirks of an
	    operating system for performance
o	applications that are unable to obtain guarantees on when their
	    data and/or metadata is on stable storage
o	cheap tape libraries and robots that are under-utilized because
	    software for migration and caching to disk is not readily
	    available
o	nightmares in writing portable applications that attempt to
	    access tape volumes

This newsgroup would be a forum for discussions on storage topics including
the following:

1)	commercial products - OSF Distributed File System (DFS) based on
	    Andrew, Epoch Infinite Storage Manager and Renaissance,
	    Auspex NS5000 NFS server, Legato PrestoServer, AT&T Veritas,
	    OSF Logical Volume Manager, DISCOS UniTree, etc.
2)	storage strategies from major vendors - IBM System Managed Storage,
	    HP Distributed Information Storage Architecture and StoragePlus,
	    DEC Digital Storage Architecture (DSA), Distributed
	    Heterogeneous Storage Management (DHSM), Hierarchical Storage
	    Controllers, and Mass Storage Control Protocol (MSCP)
3)	IEEE 1244 Storage Systems Standards working group
4)	ANSI X3B11.1 and Rock Ridge WORM filesystem standards groups
5)	emerging standard high-speed (100 MB/sec and up) interconnects to
	    storage systems: HIPPI, Fibre Channel Standard, etc.
6)	POSIX supercomputing and batch committees' work on storage
	    volumes and tape mounts
7)	magnetic tape semantics ("Unix tape support is an oxymoron.")
8)	physical volume management - volume naming, mount semantics,
	    enterprise-wide tracking of cartridges, etc.
9)	models for tape robots and optical jukeboxes - SCSI-2, etc.
10)	designs for direct network-attached storage (storage as black box)
11)	backup and archiving strategies
12)	raw storage services (i.e., raw byte strings) vs. management of
	    structured data types (e.g. directories, database records,...)
13)	storage services for efficient database support
14)	storage server interfaces, e.g., OSF/1 Logical Volume Manager
15)	object server and browser technology, e.g. Berkeley's Sequoia 2000
16)	separation of control and data paths for high performance by
	    removing the control processor from the data path; this
	    eliminates the requirements for expensive I/O-capable
	    (i.e., mainframe) control processors
17)	operating system-independent filesystem design
18)	SCSI-3 proposal for a flat filesystem built into the disk drive
19)	client applications which bypass/ignore filesystems:
	    virtual memory, databases, mail, hypertext, etc.
20)	layered access to storage services - How low level do we want
	    device control?  How to support sophisticated, high-performance
	    applications that need to bypass the file abstraction?
21)	migration and caching of storage objects in a distributed
	    hierarchy of media types
22)	management of replicated storage objects (differences/similarities
	    to migration?)
23)	optimization of placement of storage objects vs. location
	    transparency and independence
24)	granularity of replication - filesystem, file, segment, record, ...
25)	storage systems management - What information does an administrator
	    need to manage a large, distributed storage system?
26)	security issues - Who do you trust when your storage is
	    directly networked?
27)	RAID array architectures, including RADD (Redundant Arrays
	    of Distributed Disks) and Berkeley RAID-II HIPPI systems
28)	architectures and problems for tape arrays - striped tape systems
29)	stable storage algorithm of Lampson and Sturgis for critical metadata
30)	How can cheap MIPS and RAM help storage? -  HP DataMesh, write-only
	    disk caches, non-volatile caches, etc.
31)	support for multi-media or integrated digital continuous media
	    (audio, video, other realtime data streams)

The current Usenet hierarchy has no central place for these discussions.
The issues are much broader than Unix (comp.unix.*, comp.os.*), as they
transcend operating systems in general.  The peripheral groups
(comp.periphs, comp.periphs.scsi) are much too hardware oriented for
these topics.  Distributed computer systems of the future will offer
standard network storage services; what operating system(s) they use (if
any) will be irrelevant to their clients.  The architecture and
massively parallel computing groups (comp.arch, comp.parallel) are also
inappropriate.  [Of course, Usenet should have a comp.distributed
newsgroup, but that is for another time.]  Several of these topics
involve active standards groups, but some of the standards aspects are
research topics in distributed systems.  Real products are evolving at a
furious rate, and commercial activity may outpace some standards efforts
underway.

I envision this group as being unmoderated.

-- 
A. Lester Buck   [email protected]   ...!uhnix1!siswat!buck
