VD: Driver

Some bugs existed in the VD: driver on the S89 tapes, depending on assembly conditionals. This version is fixed and is what I use in production. (In particular I use vddriver4a.mar, but the others have been tested also, though less heavily.)

Also, the image built from ASNVDM5.mar will now set the device geometry for VD: to the correct disk geometry for DEC disks of the same size as the container file, where such a size is known. This can be overridden with the /SEC64 or /SEC32 switches, which give a 64 by 1 by n or a 32 by 32 by n geometry respectively (sectors, tracks, cylinders). VDDRIVER makes partitioning VMS disks trivially easy at a tiny performance cost.

Incidentally, the ASNVD image can now also report the filename associated with a disk (asnvd/report) and (thanks to mods by Dave Hittner) will report assignment errors instead of silently failing to activate a VD: unit if anything goes wrong. My system runs with 10 of these virtual disks in heavy use, and the sig tapes are made up on one of them.

Be sure to edit the sources to define VMS$V5 if running VMS V5.x. They are supplied set up for VMS 4.x.

Note: Asnvdm6.mar supports a /LBN=number/LENGTH=number switch pair to allow virtual disks to be put anywhere on a disk, regardless of files. This can be used, for example, to re-use some of sysdump.dmp as a scratch disk when it isn't needed for anything else.

A variant, VQDRIVER (or VEDRIVER), with ASNVQ (or ASNVE), is a software-shadowed virtual disk. It writes to two files, and reads from the file whose last-accessed LBN is closest to the one wanted. It also implements a transparent catch-up mode, so a shadow set member can be added during operation and caught up while the disk is still in use. (Actually, the current implementation requires a dismount, adding the shadow member, and a remount, but the catchup can happen after the remount, so the delay can be exceedingly small. Modifying asnvq.mar could relieve this restriction.)

In testing, I have found that the container files remain IDENTICAL where I first use

$ asnvq/assign/shadow vqan: dev:primcont.fil dev2:seccont.fil

then use asnvq/catchup, and then mount/dismount/mount the disk. (Actually, I mount, catch up, then dismount/mount.) Where I don't do the dismount/mount, I have noticed differences past the end-of-file in the container files, but nothing that differs in any files on the shadow disk... they always compare identical, and analyze/disk shows no file structure problems, ever, on either container file treated as a separate disk.

The use of a shadowed unit allows the "transparent" backup, but only where VQDRIVER is used. It is perfectly OK to use asnvq/assign to assign a unit of VQDRIVER to a primary file only (it will work like a unit of VD:) and LATER to use asnvq/assign/shadow followed by asnvq/catchup. To do so, though, you'll have to dismount the VQ: unit prior to the second assign operation; the asnvq code checks to ensure the unit isn't already active. If you are willing to live dangerously you can remove that check in the asnvq kernel code, but doing so will let it reassign the container file of a mounted VQ: unit on the fly, which is a certain way to produce, at the least, massive file corruption on whatever containers are used. If you remove the check, at least have the sense to make sure the primary file start LBN, host UCB, and size are the same as what's already in the VQ: UCB before allowing addition of the secondary information! I feel that a minute's disruption, to allow use for the rest of the hour or two a backup of a huge disk takes, is OK.
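To make the ASNVD qualifiers described above concrete, here is a sketch. The foreign-command definition, directory paths, and unit names are illustrative, and the exact argument forms (especially for /LBN) are my guesses from the description, so check the sources or help text before relying on them:

$! Define the utility as a foreign command (path illustrative):
$ asnvd :== $mydisk:[vdtools]asnvd.exe
$! Assign a VD: unit to a contiguous container file.  Geometry is
$! set automatically for DEC-disk-like sizes, or forced with
$! /sec64 or /sec32:
$ asnvd/assign/sec64 vda1: mydisk:[somewhere]vdcont1.dsk
$! Report which file a unit is built on:
$ asnvd/report vda1:
$! The ASNVDM6 version can also map a unit onto raw blocks of a
$! device, regardless of file structure (e.g. over part of
$! sysdump.dmp):
$ asnvd/assign/lbn=100000/length=20000 vda2: dua0: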
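And here is what the add-a-member-then-catch-up sequence just described might look like in full, with the same caveats about illustrative names:

$ asnvq :== $mydisk:[vdtools]asnvq.exe
$! Run an ordinary single-file unit first:
$ asnvq/assign vqa0: dev1:[somewhere]primcont.fil
$ init/noverify vqa0: mylabel       ! first time only
$ mount vqa0: mylabel
$!   ... use the disk ...
$! Later, add a shadow member.  The unit must be dismounted for
$! the second assign (same primary file!), but can be remounted
$! before the catchup runs:
$ dismount vqa0:
$ asnvq/assign/shadow vqa0: dev1:[somewhere]primcont.fil dev2:[elsewhere]seccont.fil
$ mount vqa0: mylabel
$ asnvq/catchup vqa0:
$! (A final dismount/mount after the catchup kept the container
$! files identical in my tests, as noted above.)

The disk stays usable during the catchup; all reads are satisfied from the primary until the copy completes.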
Then too, one may use this driver to shadow critical data only, so it may be rare to use it for such backups. The logic to allow assigns of the PRIMARY file to arbitrary device, LBN, and size was moved from VDDRIVER (asnvd specifically) to asnvq. Such logic is still missing for the secondary file.

Incidentally, watch this space for a striping driver. Such a driver has to copy the IRPs and split the I/O across containers, rather than reusing the IRP as is done here, but given reasonable success in understanding how to copy the IRPs (and IRPEs, the major complication), the actual logic for such a driver is not too complex and is closely related to that of VDDRIVER or VQDRIVER.

Note: One thing I have NOT tried is putting one VD: on another using the same driver. It ought to work, but it's untried. There could be weird effects...

FD: Driver

FD: works in VMS 4 and 5. Assemble with VMS$V5 defined for VMS V5. FDDRV is a virtual disk that uses a PROCESS running a host image (FDHOST is one such) to do all its I/O. Thus it is very flexible and useful for things like compressing disks, cryptodisks, remote virtual disks, and memory disks in paged, user-mode memory. Among other things, that is...

FDDRV draws somewhat on John Osudar's memory disk driver and got an assist from Chris Ho, who gave me some VMS V5 fixes for the VMS V4.7 version to get it running right in V5. I have no idea whether FDDRV works MSCP-served or not. I doubt it will cause a crash, but it might not work either. Try it and see.

To make a new flavor of FDDRV, modify FDHOST, which is a normal but privileged VMS image (it needs CMKRNL to read and set the driver database up), to do what you want. FDHOST tells FDDRV that it's there and carries on a dialogue with FDDRV to move the data. (By the way, an assembly conditional in FDDRV will allow FDDRV to pass the address of the driver's buffer to the host process, in case it's thought faster to manipulate that data via change mode to kernel and copy, rather than by special QIO.)

+------------------+     +-------------+     +------------------+
! Host process     !     ! FDDRV       !     ! Client Process   !
! does actual      !     ! disk driver !     ! Uses FDDRV as a  !
! bit moving       !<--->! looks like  !<--->! disk.            !
!                  !     ! disk to     !     !                  !
!                  !     ! client      !     !                  !
+------------------+     +-------------+     +------------------+

FDDRV can be used for a variety of virtual disk types. These include:

1. Memory disk, with the memory in a process and therefore pageable. The process' working set determines how much physical memory is actually used. This is supplied in FDHOSTMEM.MAR and works.

2. Remote mountable virtual disk over DECnet. This allows a DECnet object on a remote system to cooperate with a host process on another system, so that a disk on a remote system can be mounted remotely (useful for remote backups and the like). This is supplied in FDREMSRV and FDHOSTREMOT and works.

3. Remote mountable virtual disk over asynchronous lines. This works like the remote mount over DECnet, but with an asynchronous (terminal) line hookup instead of DECnet. This is supplied in FDREMASY and FDASYREMO and is not fully tested yet. (My async line is VERY flaky between the two test systems and has trouble with 128 byte packets.)

4. Local virtual disk on a file. The file is treated as a string of 512 byte blocks, and need not be contiguous. This is supplied in FDHOSTFILE and works.

5. Crypto disk. This works like the local virtual disk on a file, but the file is encrypted before being recorded and decrypted when read. This is supplied in FDHOSTCRY, or with a stronger encryption algorithm in FDHOSTCRY2.
It works. The FDHOSTCRY2 version in particular has a VERY long encryption cycle and should resist attempts at cracking the code by all but people experienced in this sort of thing. The FDHOSTCRY version uses a 64-bit XOR, which is fairly easy to break but is OK against casual browsing. The FDHOSTCRY2 host program supports a /WEAK keyword which will cause it to use the weaker algorithm of FDHOSTCRY. I use the FDHOSTCRY2 version for production; even with the normal strong algorithm, speed is not a problem. CRYSET.COM is a sample command file to fire up a cryptodisk; it requires FDHOSTCRY2.CLD to be in one's DCL tables. (A sketch of such a startup appears at the end of this file.) Other variants are of course possible. FDHOSTCRY2 creates files which are almost incompressible by Huffman or LZW packing schemes, indicating it does an excellent job of randomizing the data.

6. Compression disk. This would use memory or a disk file to hold the data, but compress it before recording and decompress it on retrieval. Parts of this are supplied, but it is still buggy; something is not getting re-initialized correctly. The code here compresses and decompresses in 32-block "tracks", so the LZW algorithm can be effective. It will only compress blocks above a "fence" block number, so an index file can be stored uncompressed and accessed fast, with the data compressed. (Since its use requires privilege to install the driver, it cannot be used for sig tape compression, but get it working and it MIGHT be useful in your shop.)

Others may occur to the reader. Enjoy!

Glenn Everhart
Everhart%Arisia.decnet@crd.ge.com
25 Sleigh Ride Rd
Glen Mills, PA 19342
215 358 3866 (home)
215 354 7610 (work)

Additional notes:

To create the container files, one method would be something like this:

$ copy/contig/allocation=12000 sys$system:copy.exe mydisk:[somewhere]vdcont1.dsk
$ set file/end mydisk:[somewhere]vdcont1.dsk

(SET FILE/END moves the end-of-file pointer to the end of the allocated space, so all 12000 blocks are usable.) I suggest either running the analyze/media utility prior to INIT, or using the init/noverify command, to prevent some random bad blocks from being allocated that don't have to be. The VD: driver (and VE: driver) MUST have contiguous container files. For greatest compatibility the files should also have fixed-length 512-byte records, though VD: doesn't really care, as long as the files are contiguous. The FD: driver MUST have fixed 512-byte records, which is why copying a convenient .EXE file is handy; such files have the desired record size already.

VEdriver.mar and the ASNVE.* code are a version of the contiguous-file virtual disk driver that allows software shadowing. They are at best alpha software at this point. Be careful, but try 'em if you want. The idea is to allow shadowed disks on virtual disks, presumably on different physical devices, so that you can shadow what's important to you but not necessarily a whole bunch of other cruft.

You can (or will be able to, when the code's fully tested) assign VE: to a primary and a secondary file, then use the asnve/catchup command to catch the secondary file up to the primary. During this time you CAN USE the VE: unit, but all reads come from the primary file until the catchup is done. After that, reads come from whichever container file has the closest last logical block accessed. Writes go to both (one after the other, but you'll essentially never notice this). Errors on a write to the secondary are reported like normal write errors. Errors on a write to the primary, or on both, get unique error codes but are reported as errors. Writes succeed only if both parts succeed.
Reads are tried from only one container file, and get errors or success reported according to the I/O on that container file. VE: does NOT (currently, at least) attempt failover in any form. If writes start failing, it generally means something fairly catastrophic has happened and human intervention is needed. When writes start to fail, though, you know that the two container files are the same apart from possible differences in the blocks involved in the errors; therefore you CAN recover. The source code can be consulted to see which error reports correspond to which failures. The error codes for the primary or for both writes failing are chosen to be weird ones that won't occur on normal systems for disk writes.

The interlock used to allow use of VE: during catchup is actually held slightly longer than it has to be, but this will only have the effect of slowing the catchup down a bit. The catchup code in ASNVE tries to do 16 blocks at a time for its I/O; this should give adequate speed. It can potentially be stalled by application code that continually writes to the same blocks, since it has to retry a read/write if another application writes between its read of a set of blocks and its write of them. This is unlikely enough that I consider it a non-problem in the real world.

The current design of ASNVE and ASNVQ (see below; yet another confounded device name conflict!) assumes you'll specify the files for primary and secondary both at the same time. You cannot add a shadow volume while the disk is mounted. However, you CAN dismount the disk and run asnvq/assign/shadow to add a shadow file; just make sure you leave the same primary file there. Then an asnvq/catchup vqan: will catch up the shadow copy. By removing the check for an already allocated device, asnvq/assign/shadow could be made to work even with the VQ: device mounted. Note, though, that using the /RWBOTH qualifier under such conditions is likely to result in VMS seeing a corrupt file structure; the /RWBOTH qualifier to ASNVQ is meant for when you're first setting up a shadowed disk and know that the two container files hold the same (useful) data.

In setting up a virgin shadow set, one can create two files and do

$ asnvq/assign/shadow/rwboth vqa0: dev1:[somewhere]prim.dsk dev2:[dir]second.dsk

to allow one to INIT the VQ: unit, create directories, move files to it, etc. You can safely use /rwboth ONLY when the disks are the same (either because they're brand new, or because a /catchup was done before the last dismount). It avoids the need for a /catchup to copy all the data to the secondary, at the cost that YOU must be sure the files really are alike.

BEWARE: I've noticed while trying VE: out on a VS2000 that there are already drivers there named VEDRIVER and VFDRIVER. Lord knows what they are, but there's a name conflict, and the driver will have to be renamed. Ecch... Fortunately VDDRIVER seems to be an OK name, as does FDDRIVER. I've supplied ASNVQ.* and VQDRIVER as simple renames so that this (expletive deleted) name conflict will not occur.

In testing, ASNVQ and VQDRIVER appear to work OK under VMS 4.6, and the /catchup operation succeeds correctly as far as I can tell. After a shadow operation, mounting virtual disks of the individual container files gives valid disks for either, with no complaints from Analyze/Disk and no diffs in any files from the originals. A somewhat longer and more varied test should be done before production use of VQDRIVER (or VEDRIVER, if you don't have a VAXstation). My test configuration put the VQ container files on a VD: disk, by the way, and it works fine.
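Putting the pieces together, first-time setup of a virgin shadow set might run like this. The names are illustrative, and the container-creation recipe is the one from the notes above:

$! (asnvq defined as a foreign command, as before)
$! Two fresh, contiguous, identical-size container files on two
$! different physical disks:
$ copy/contig/allocation=12000 sys$system:copy.exe dev1:[somewhere]prim.dsk
$ set file/end dev1:[somewhere]prim.dsk
$ copy/contig/allocation=12000 sys$system:copy.exe dev2:[dir]second.dsk
$ set file/end dev2:[dir]second.dsk
$! Both files are brand new and hence alike, so /rwboth is safe
$! and no catchup pass is needed:
$ asnvq/assign/shadow/rwboth vqa0: dev1:[somewhere]prim.dsk dev2:[dir]second.dsk
$ init/noverify vqa0: shadvol
$ mount vqa0: shadvol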
LATE NOTE: I have read that on VAXstation I systems there is a driver named VDDRIVER already, which stands for Virtual Display driver. Be a bit careful with driver names, and rename VDDRIVER if necessary. If you do so, the irp$l_media_id field should be recomputed with the macro in this directory, to get the right device name.

Glenn Everhart
25 Sleigh Ride Rd
Glen Mills, PA 19342
Everhart%Arisia.decnet@crd.ge.com

Some sources are in OLDERSRC.ZOO and may be extracted with the ZOO utility. These are generally older and less-recommended versions of the drivers, however.
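If you have ZOO on hand, extraction is along these lines; the foreign-command definition and path are illustrative:

$ zoo :== $mydisk:[tools]zoo.exe
$ zoo x oldersrc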
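Finally, as promised under item 5 above, here is a very rough sketch of the shape of a CRYSET.COM-style cryptodisk startup. The SET COMMAND step and the /WEAK qualifier come from the description above; the verb's argument order, the unit name, and the paths are guesses, so read the real CRYSET.COM before trying this:

$! Make the FDHOSTCRY2 verb available to this process:
$ set command mydisk:[vdtools]fdhostcry2.cld
$! Start the host serving an FD: unit from an encrypted container
$! file.  (The host presumably prompts for or accepts a key, and
$! stays around to serve the unit's I/O; CRYSET.COM arranges that.)
$ fdhostcry2 fda0: mydisk:[crypt]crypt.dsk
$! Use /weak to fall back to the simple 64-bit XOR of FDHOSTCRY:
$!   $ fdhostcry2/weak fda0: mydisk:[crypt]crypt.dsk
$ mount fda0: cryvol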