VD Driver

This area contains the VD: drivers I use. Included are vddriver and vqdriver; I recommend using vddriver4ae.mar and vqdrivere.mar as driver sources; the others are older and less reliable. Asnvdm6.mar is the assign-vd: module. The VD: and VQ: drivers use contiguous files or contiguous regions of disk for storage. VQdriver also has the ability to shadow its contents to another file. VDdriver and VQdriver are high performance virtual devices, inserting only a few instructions into the QIO$ streams. (A sketch of the block translation such a driver performs appears with the examples after the device list below.)

FD: Driver

FD: works in VMS 4 and 5. Assemble with VMS$V5 defined for VMS V5.

FDDRV is a virtual disk that uses a PROCESS running a host image (FDHOST is such a one) to do all its I/O. Thus it is very flexible and useful for things like compressing disks, cryptodisks, remote virtual disks, and memory disks in paged, user-mode memory. Among other things, that is...

FDDRV draws somewhat on John Osudar's memory disk driver and got an assist from Chris Ho, who gave me some VMS V5 fixes for the V4.7 version to get it running right in V5. FDDRV DID work successfully in a cluster when MSCP served to the cluster. This was not at my site, but one I know.

To make a new flavor of FDDRV, modify FDHOST, which is a normal but privileged VMS image (it needs CMKRNL to read and set up the driver database), to do what you want. FDHOST tells FDDRV that it's there and carries on a dialogue with FDDRV to move the data; a sketch of that dialogue appears after the list below. (By the way, an assembly conditional in FDDRV will allow FDDRV to pass the address of the driver's buffer to the host process, in case it's thought faster to manipulate that data via change mode to kernel and copy rather than special QIO.)

+------------------+     +-------------+     +------------------+
! Host process     !     ! FDDRV       !     ! Client Process   !
! does actual      !     ! disk driver !     ! Uses FDDRV as a  !
! bit moving       !<--->! looks like  !<--->! disk.            !
!                  !     ! disk to     !     !                  !
!                  !     ! client      !     !                  !
+------------------+     +-------------+     +------------------+

FDDRV can be used for a variety of virtual disk types. These include:

1. Memory disk with memory in a process, therefore pageable. The process' working set determines how much physical memory is actually used. This is supplied in FDHOSTMEM.MAR and works.

2. Remote mountable virtual disk over DECnet. This allows a DECnet object on a remote system to cooperate with a host process on another system, so that a disk on the remote system can be mounted remotely (useful for remote backups and the like). This is supplied in FDREMSRV and FDHOSTREMOT and works. (I use this to make backups to/from TK50 on a remote machine; it works fine even at blocksize 32256. I don't use larger saveset block factors due to RMS limits which prevent the results from being TPC-able; there's no reason to think they'd fail with fddrv however.)

3. Remote mountable virtual disk over asynchronous lines. This works like remote mount over DECnet but with an asynchronous (terminal) line hookup instead of DECnet. This is supplied in FDREMASY and FDASYREMO and is not fully tested yet. As far as I know it doesn't work as supplied...

4. Local virtual disk on a file. The file is treated as a string of 512 byte blocks, and need not be contiguous. This is supplied in FDHOSTFILE. It was not fully tested at first, but later tests did get it working.

5. Crypto disk. This works like local virtual disk on a file, but the file is encrypted before being recorded and decrypted when read. This is supplied in FDHOSTCRY, or with a stronger encryption algorithm in FDHOSTCRY2. It works. The FDHOSTCRY2 version in particular has a VERY long encryption cycle and should resist attempts at cracking the code by all but people experienced in this sort of thing. The fdhostcry version uses a 64 bit XOR which is fairly easy to break but is OK against casual browsing (see the XOR sketch after this list). The fdhostcry2 host program supports a /WEAK keyword which will cause it to use the weaker algorithm of fdhostcry. FDHOSTCRY3 and FDHOSTCRY4 are stronger methods; I recommend the latter, which works nicely and closes some weaknesses of the older ones. I recommend that the host process be a subprocess of the user, and that the disk be privately mounted. That way, at logout or process exit, everything gets cleaned up.

6. Compression disk. This would use memory or a disk file to hold data, but compress it before recording it and decompress it on retrieval. It is not yet debugged; something in the compression logic isn't getting reset correctly between tracks. It uses LZ compression a track at a time above a "fence" block number, so one doesn't have to constantly compress and decompress the index file and/or directories, which are placed below the fence (see the fence sketch after this list). It also uses an ISAM file to hold all the data, trying to keep that compressed. This may not actually release data as it should, even when the records were compressed.

7. Various other flavors of FD: exist in addition. One makes a journal of disk accesses and updates a file once in a while from the journal; the disk is backed by a memory array (see the journal sketch after this list). There is also the "BOH" disk ("Bat Out of Hell"), a file disk backed by a memory array. It acts like a writethrough cached disk and is very fast on read. The container can be somewhere across the net, but the journalling disk is probably better for wide area nets, since the updates are occasional only and all at one go. One can modify the journalling disk a bit to keep journalling for a long time and back up the disk asynchronously with the main process. It would be cleanest to do the updates in a slave of the host process in this case, so the host can stay ready to handle user requests; adding ASTs to the host process introduces some thorny timing issues.

8. Striping driver. This is SDdriver and ASNSD. It is a full fledged disk striping driver, will work with up to 16 containers, and should work with DEC volume shadowing (see the block-mapping sketch after this list). Note: I've tested it with 2 containers and it ran fine. I haven't tried all the other combinations.

9. WQdriver. This is a shadowing driver. It is still experimental and may or may not work. It is specialized to 2 containers max, but does have the ability to do shadow catchup while the disk is in use. It's a virtual disk of the contiguous-file type that has 2 containers and writes to both, and reads from the one that was last accessed at the closer logical block number. It is similar to VQdriver except that it sends I/O to both containers in parallel, not in series, it is not tested as VQdriver is, and it returns the 2 bit in the status if the first container had an error and the 4 bit if the second one did (both bits if both had errors). A sketch of these policies appears after this list.
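The sketches promised above follow. They are in C rather than the MACRO-32 the drivers are actually written in, and every name, structure, and constant in them is an illustrative assumption, not the drivers' own code. First, the translation a contiguous-container device like VD: can get away with: because the container is contiguous, a virtual block maps to the underlying disk with one add and one bounds check, which is why so few instructions go into the QIO path.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: the real VD: driver is MACRO-32. */
    typedef struct {
        uint32_t base_lbn;   /* first block of the container on disk */
        uint32_t nblocks;    /* size of the virtual disk in blocks   */
    } vd_unit;

    /* Returns the underlying LBN, or -1 if out of range. */
    static int64_t vd_map(const vd_unit *u, uint32_t vlbn)
    {
        if (vlbn >= u->nblocks)
            return -1;
        return (int64_t)u->base_lbn + vlbn;
    }

    int main(void)
    {
        vd_unit u = { 20000, 4096 };   /* hypothetical container placement */
        printf("virtual LBN 10 -> real LBN %lld\n", (long long)vd_map(&u, 10));
        return 0;
    }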
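Next, a minimal sketch of the host side of the FDDRV dialogue, reduced to the memory-disk case of item 1. The real exchange rides on VMS QIOs against the driver; the request structure here is invented for illustration.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define BLOCK   512
    #define NBLOCKS 4096            /* a 2 MB virtual disk */

    /* Hypothetical request format standing in for the FDDRV dialogue. */
    typedef enum { FD_READ, FD_WRITE } fd_op;
    typedef struct {
        fd_op    op;
        uint32_t lbn;               /* starting logical block */
        uint32_t count;             /* blocks to move         */
        uint8_t *buf;               /* client data area       */
    } fd_request;

    /* Pageable, process-private backing store: physical memory use is
     * bounded by the host process' working set, as noted in item 1. */
    static uint8_t store[NBLOCKS][BLOCK];

    /* One pass of the host's service loop: validate and move the bits. */
    static int fd_serve(const fd_request *r)
    {
        if (r->lbn + r->count > NBLOCKS)
            return -1;              /* beyond end of virtual disk */
        size_t n = (size_t)r->count * BLOCK;
        if (r->op == FD_READ)
            memcpy(r->buf, store[r->lbn], n);
        else
            memcpy(store[r->lbn], r->buf, n);
        return 0;
    }

    int main(void)
    {
        uint8_t blk[BLOCK] = "hello from the memory disk";
        uint8_t out[BLOCK] = {0};
        fd_request w  = { FD_WRITE, 7, 1, blk };
        fd_request rd = { FD_READ,  7, 1, out };
        if (fd_serve(&w) == 0 && fd_serve(&rd) == 0)
            printf("read back: %s\n", (char *)out);
        return 0;
    }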
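For item 5, the weak fdhostcry-style transform: a 64 bit XOR applied symmetrically, so the same routine encrypts before recording and decrypts after reading. The key handling is assumed. Note that identical plaintext blocks encrypt identically under this scheme, which is part of why it is fairly easy to break.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define BLOCK 512

    /* XOR is its own inverse: call once to encrypt, again to decrypt. */
    static void xor64_block(uint8_t blk[BLOCK], uint64_t key)
    {
        for (size_t i = 0; i < BLOCK; i += 8) {
            uint64_t w;
            memcpy(&w, blk + i, 8);
            w ^= key;
            memcpy(blk + i, &w, 8);
        }
    }

    int main(void)
    {
        uint8_t b[BLOCK] = "secret payload";
        uint64_t key = 0x0123456789abcdefULL;   /* assumed key format */
        xor64_block(b, key);                    /* before recording   */
        xor64_block(b, key);                    /* after reading      */
        printf("%s\n", (char *)b);              /* "secret payload"   */
        return 0;
    }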
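For item 6, the fence idea: blocks at or below the fence (index file, directories) bypass compression entirely, and blocks above it are grouped into fixed-size tracks for LZ. The fence value and track size here are made-up examples.

    #include <stdint.h>
    #include <stdio.h>

    #define FENCE_LBN   1000u   /* example fence setting          */
    #define TRACK_BLKS  16u     /* blocks compressed as one track */

    static int is_compressed(uint32_t lbn)
    {
        return lbn > FENCE_LBN;           /* only above the fence */
    }

    static uint32_t track_of(uint32_t lbn)
    {
        /* tracks are numbered from the first block above the fence */
        return (lbn - FENCE_LBN - 1) / TRACK_BLKS;
    }

    int main(void)
    {
        uint32_t probe[] = { 4, 1000, 1001, 1017 };
        for (size_t i = 0; i < 4; i++) {
            if (is_compressed(probe[i]))
                printf("LBN %u -> LZ track %u\n", probe[i], track_of(probe[i]));
            else
                printf("LBN %u -> stored raw (below fence)\n", probe[i]);
        }
        return 0;
    }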
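For item 7, the journalled flavor: writes land in the memory array immediately (so reads stay fast) and append a journal entry; the container file is updated from the journal only once in a while, all at one go. Everything here is illustrative.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define BLOCK 512
    #define NBLK  1024           /* size of the virtual disk      */
    #define JMAX  64             /* journal capacity before flush */

    typedef struct { uint32_t lbn; uint8_t data[BLOCK]; } jentry;

    static uint8_t  cache[NBLK][BLOCK];   /* memory array backing reads */
    static jentry   journal[JMAX];
    static unsigned jcount;

    /* The once-in-a-while update: replay the journal against the
     * container all at one go, then reset it. */
    static void jflush(FILE *container)
    {
        for (unsigned i = 0; i < jcount; i++) {
            fseek(container, (long)journal[i].lbn * BLOCK, SEEK_SET);
            fwrite(journal[i].data, BLOCK, 1, container);
        }
        fflush(container);
        jcount = 0;
    }

    /* Writes hit the memory array at once and are remembered in the
     * journal for the deferred update. */
    static void jwrite(FILE *container, uint32_t lbn, const uint8_t *buf)
    {
        if (jcount == JMAX)
            jflush(container);            /* journal full: update now */
        memcpy(cache[lbn], buf, BLOCK);
        journal[jcount].lbn = lbn;
        memcpy(journal[jcount].data, buf, BLOCK);
        jcount++;
    }

    int main(void)
    {
        FILE *f = fopen("container.dsk", "w+b");
        if (!f) return 1;
        uint8_t b[BLOCK] = "journalled write";
        jwrite(f, 3, b);
        jflush(f);                        /* e.g. on timer or dismount */
        fclose(f);
        return 0;
    }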
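For item 8, a plausible round-robin block mapping for a striping driver; SDdriver's actual chunk geometry may differ from this sketch.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_CONTAINERS 16u   /* the driver's stated maximum */
    #define CHUNK_BLKS     4u    /* assumed blocks per stripe chunk */

    typedef struct {
        uint32_t container;      /* which container, 0..n-1     */
        uint32_t lbn;            /* block within that container */
    } stripe_loc;

    static stripe_loc stripe_map(uint32_t vlbn, uint32_t ncontainers)
    {
        uint32_t chunk  = vlbn / CHUNK_BLKS;     /* which chunk overall */
        uint32_t offset = vlbn % CHUNK_BLKS;     /* block within chunk  */
        stripe_loc loc;
        loc.container = chunk % ncontainers;     /* round-robin placement */
        loc.lbn = (chunk / ncontainers) * CHUNK_BLKS + offset;
        return loc;
    }

    int main(void)
    {
        for (uint32_t v = 0; v < 12; v++) {
            stripe_loc l = stripe_map(v, 2);     /* 2 containers, as tested */
            printf("virtual LBN %2u -> container %u, LBN %u\n",
                   v, l.container, l.lbn);
        }
        return 0;
    }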
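For item 9, WQdriver's described policies: read from the container whose last access was at the nearer logical block number, and fold per-container write errors into the 2 and 4 status bits. The structures are assumptions; only the policies come from the description above.

    #include <stdint.h>
    #include <stdlib.h>
    #include <stdio.h>

    typedef struct {
        uint32_t last_lbn[2];    /* last LBN touched, per container */
    } shadow_set;

    /* Pick the container that was last accessed closer to this LBN. */
    static int pick_reader(shadow_set *s, uint32_t lbn)
    {
        uint32_t d0 = (uint32_t)labs((long)s->last_lbn[0] - (long)lbn);
        uint32_t d1 = (uint32_t)labs((long)s->last_lbn[1] - (long)lbn);
        int c = (d1 < d0) ? 1 : 0;
        s->last_lbn[c] = lbn;    /* remember where this container is now */
        return c;
    }

    /* Combine the two write results (nonzero = success) into the
     * documented status bits: 2 for container one, 4 for container
     * two, both if both erred. */
    static unsigned write_status(int ok0, int ok1)
    {
        unsigned st = 0;
        if (!ok0) st |= 2;
        if (!ok1) st |= 4;
        return st;
    }

    int main(void)
    {
        shadow_set s = { { 100, 5000 } };
        printf("read LBN 4800 from container %d\n", pick_reader(&s, 4800));
        printf("read LBN 120  from container %d\n", pick_reader(&s, 120));
        printf("status when second container erred: %u\n", write_status(1, 0));
        return 0;
    }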
Others may occur to the reader. Enjoy!

Glenn Everhart
Everhart@arisia.dnet.ge.com
25 Sleigh Ride Rd
Glen Mills, PA 19342
215 358 386 home