Article 151249 of comp.os.vms:

Hi,

I've copied this reply to the list in case anyone else finds it interesting, or wants to correct me on details.

The following is a rough description of what happens when a VMScluster satellite boots:

The satellite sends out a MOP boot request. This takes the form of an ethernet (or maybe FDDI) multicast. A cluster member sees this request and passes it to its MOP server. The MOP server examines the ethernet address in the request and builds an image to download to the node. Some information specific to the node is built into the image. I think this is purely the SCSnode name, the system disk name, and the system root.

The MOP server and the bootstrap code co-operate to download this image to the satellite. The satellite then executes this image. This image is itself bootstrap code, whose function is to load and start VMS. I think it's largely the same as SYSBOOT or APB.

In order to load VMS, it must be able to read from the system disk. Consequently, it incorporates the boot-time disk class driver (a cut-down DU), some SCS code (to allow DU to talk to an MSCP server), and uses the boot-time LAN driver that allowed the MOP code to run.

As stated, the code must be able to talk to an MSCP server serving the system disk - i.e. it must establish a Virtual Circuit with the node where this server executes, and establish a Connection over this circuit with the MSCP server. To form this virtual circuit, a special form of multicast packet is sent that includes the name of the disk it wants to access. Only nodes that are MSCP-serving this disk respond. The boot image picks one of these nodes. At this stage, the image has access to the system disk at the logical block level.

[This is a feature not initially present in LAVCs. It used to be a requirement that the boot node had to MSCP-serve the system disk - presumably so that the boot image knew which node to communicate with.]
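The sequence above can be sketched in Python. This is purely illustrative - every name below is invented for the sketch and does not correspond to any real MOP, SCS, or VMS interface:

```python
# Illustrative sketch of the satellite boot sequence described above.
# All dictionary keys and function names are invented; they model the
# *ordering* of events, not any real protocol or data structure.

def satellite_boot(satellite, cluster):
    # 1. The satellite multicasts a MOP boot request on the LAN,
    #    identified by its ethernet address.
    request = {"ethernet_address": satellite["ethernet_address"]}

    # 2. A cluster member's MOP server looks up the requester and builds
    #    a boot image with node-specific data embedded in it:
    #    the SCSnode name, the system disk, and the system root.
    mop_server = cluster["mop_server"]
    config = mop_server["database"][request["ethernet_address"]]
    boot_image = {
        "scsnode": config["scsnode"],
        "system_disk": config["system_disk"],
        "system_root": config["system_root"],
    }

    # 3. The image is downloaded and executed.  To load VMS it needs the
    #    system disk, so it multicasts asking who MSCP-serves that disk;
    #    only nodes serving the disk respond.
    responders = [n for n in cluster["nodes"]
                  if boot_image["system_disk"] in n["served_disks"]]

    # 4. The boot image picks one responder and forms an SCS virtual
    #    circuit plus an MSCP connection with it.
    server = responders[0]
    return boot_image, server
```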
It has a primitive Files-11 file system, which is adequate for finding the necessary executive images in SYS$LOADABLE_IMAGES, and the SYSGEN parameter file. This primitive file system performs no locking (it can't, because the lock manager is not available). The appropriate files are loaded into the satellite's memory.

At this point, the boot-time disk driver is no longer required, and the SCS VC and connection are closed. (However, the boot-time drivers remain in place, since they are the mechanism used to perform crash dumps.)

VMS 'proper' is then started. LAN adaptors are configured, the other cluster members are discovered, and proper virtual circuits are formed. Once the cluster has re-configured itself, the satellite is a cluster member, and DUDRIVER is able to establish connections to the appropriate MSCP servers for the disks in the cluster.

Your question:

> Is my opinion wrong? The node which answers the boot request has mounted
> the system disk, and MSCP serving is enabled on all nodes (you have to
> mount the local disk on the print manager's node). In that case (this is
> my understanding) the satellite (with service enabled) will give all the
> files to the boot requestor. No fallback to the alpha boot server. But
> there are three nodes involved, and the network transfer happens twice.

The boot node services the MOP request. Once the bootstrap image is loaded, it will talk to the most appropriate node MSCP-serving the system disk. If the system disk is local to the boot node, this will be the boot node. If it's local to another node, it will be that node. (If it's on an HSC, shared SCSI bus, etc., it will be slightly different.) Likewise, the proper VMS DU driver will talk to the most suitable server node.

So, as I understand your opinion, it's wrong. MSCP traffic will go directly to the node hosting the disk.
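The server-selection point can be illustrated with a toy function. "Prefer a node that has the disk locally" is my reading of the paragraph above; the boot code's actual selection policy is internal and more involved, and all names here are invented:

```python
def pick_mscp_server(disk, nodes):
    """Toy illustration of MSCP server selection.

    Among the nodes MSCP-serving `disk`, prefer one that has the disk
    locally attached (so traffic goes directly to the node hosting the
    disk); otherwise take any serving node.  Returns None if nobody
    serves the disk.
    """
    servers = [n for n in nodes if disk in n["served"]]
    if not servers:
        return None  # no MSCP server for this disk; the satellite cannot boot
    local = [n for n in servers if disk in n["local_disks"]]
    return (local or servers)[0]
```

The point of the sketch: even if the boot node answered the MOP request, the disk traffic need not flow through it - the satellite talks to whichever node actually hosts (or best serves) the disk.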
With OpenVMS Alpha V6.2, performing a conversational bootstrap provides more informative messages about what it's doing in terms of SCS VCs, etc. You might find it interesting to look at this.

> Also, you didn't write anything about the platform-common databases of UCX.
> If you have the proxy and the host database of UCX on a common disk, how
> do you tell this to the UCX startup command procedures without editing them?

UCX looks for its files via SYS$SYSROOT:[SYSEXE] or SYS$SYSROOT:[SYSMGR], etc. Out of the box, SYS$SYSROOT is effectively defined as SYS$SPECIFIC:,SYS$COMMON:. I've effectively added CLUSTER$COMMON: on to the end of this. This is not supported, but that doesn't mean it's a bad idea, as long as you're aware of the potential consequences. You then simply decide whether a particular data file is specific to a node, a system disk, or the whole cluster, and locate it accordingly.

The supported way of doing this is to define logicals in SYLOGICALS.COM, for example:

    $ DEFINE/SYSTEM/EXEC SYSUAF COMMON_FILES:SYSUAF.DAT

I regard this as inelegant, but it's down to personal preference, and it has the advantage of being explicit and supported.

Mark

Mark Iline                                    system@meng.ucl.ac.uk
Dept Mech Eng, University College, London. UK     Read at your own risk.
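The SYS$SYSROOT search-list behaviour discussed above can be modelled roughly in Python. A real logical name search list is resolved by RMS, not by code like this, and the roots and filenames below are invented placeholders:

```python
import os

def resolve(filename, roots):
    """Rough model of a logical name search list such as SYS$SYSROOT.

    `roots` is an ordered list of directories standing in for, e.g.,
    [SYS$SPECIFIC, SYS$COMMON] out of the box, or
    [SYS$SPECIFIC, SYS$COMMON, CLUSTER$COMMON] with the unsupported
    tweak described above.  Each root is tried in order and the first
    existing file wins, so node-specific copies shadow common ones.
    """
    for root in roots:
        candidate = os.path.join(root, filename)
        if os.path.exists(candidate):
            return candidate
    return None
```

This is why the scheme works without editing the UCX startup procedures: a file placed only in the cluster-common root is found automatically, while a copy in a node-specific root overrides it for that node alone.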