From: CRDGW2::CRDGW2::MRGATE::"SMTP::CRVAX.SRI.COM::RELAY-INFO-VAX" 15-MAY-1991 18:14:24.22
To: ARISIA::EVERHART
CC:
Subj: SPAWN (et.al.) takes sooo long - here's the answer

From: RELAY-INFO-VAX@CRVAX.SRI.COM@SMTP@CRDGW2
To: Everhart@Arisia@MRGATE
Received: by crdgw1.ge.com (5.57/GE 1.97) id AA01649; Wed, 15 May 91 17:46:40 EDT
Message-Id: <9105152146.AA01649@crdgw1.ge.com>
Received: From CUNYVM.CUNY.EDU by CRVAX.SRI.COM with TCP; Wed, 15 MAY 91 10:54:01 PDT
Received: from DGOGWDG1.BITNET by CUNYVM.CUNY.EDU (IBM VM SMTP R1.2.2MX) with BSMTP id 3903; Wed, 15 May 91 13:53:47 EDT
Received: from dnet.gwdg.de by DGOGWDG1.BITNET (Mailer R2.07) with BSMTP id 5250; Wed, 15 May 91 19:41:22 MSZ
Date: Wed, 15 May 1991 19:41:20 +0200
From: "GWDGV1::MOELLER"
To: info-vax@sri.com
Subject: SPAWN (et.al.) takes sooo long - here's the answer

I'm probably not the only one who has noticed that SPAWN takes a considerable amount of time (like several seconds) to complete when there are one or more compute-bound processes at interactive priority (DEFPRI) and you're on a single-CPU VAX (like a 9000/210).

It turns out that the reason is a sort of "day-1 bug" in the scheduler: whenever a process gets preempted by a higher-priority process (editor users, or network ACPs, for example), it is placed at the *tail* of its scheduling queue, thereby in effect losing the rest of its QUANTUM of CPU time.

Surprisingly, the VMS 5.4/-1/-2 scheduler code already contains some support for the more obvious scheduling policy, which would be to place a process at the end of its scheduling queue only on quantum expiration or at its own request (the SYS$RESCHED system service), but otherwise leave a preempted process at the head of the queue, so it is allowed to finish up its quantum when the higher-priority process relinquishes the CPU. This made it easy to write a patch to obtain the latter behaviour, which I'm currently trying out.
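The difference between the two requeue policies can be sketched in a few lines of Python. This is only a toy model of a single ready queue, not VMS code; the process names, the quantum value, and the assumption that a higher-priority event preempts the running process after every tick are all made up for illustration:

```python
from collections import deque

def run(policy, slices=6, quantum=4):
    """Toy single-CPU scheduler: two compute-bound processes A and B
    share one priority queue, and a higher-priority event (an editor
    keystroke, say) preempts the running process after every tick.
    policy='tail': preempted process is requeued at the tail (the
                   "day-1 bug" behaviour).
    policy='head': preempted process stays at the head until its
                   quantum is really used up (the patched behaviour).
    Returns the order in which the processes got their ticks."""
    queue = deque([["A", quantum], ["B", quantum]])
    trace = []
    for _ in range(slices):
        proc = queue.popleft()      # run whoever is at the head ...
        proc[1] -= 1                # ... for one tick, then preempt it
        trace.append(proc[0])
        if proc[1] == 0:            # quantum expired: back to the tail
            proc[1] = quantum
            queue.append(proc)
        elif policy == "tail":      # bug: preemption also costs the
            queue.append(proc)      # remainder of the quantum
        else:                       # patch: keep finishing the quantum
            queue.appendleft(proc)
    return "".join(trace)

print(run("tail"))   # every preemption forfeits the quantum: A and B alternate
print(run("head"))   # A finishes its whole quantum first, then B runs
```

Under the 'tail' policy each preemption sends the process behind everyone else at its priority, so a frequently preempted process (or a freshly created subprocess, as during SPAWN) keeps waiting through the whole queue; under the 'head' policy QUANTUM is taken seriously and the preempted process merely pauses.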
So far, I can say that SPAWN performance has improved dramatically, and I have not noticed any other effects yet. (I have been using QUANTUM=5 for years; if you have a really large QUANTUM, I'd expect the patch to have more visible effects, since QUANTUM will be taken more seriously *with* the patch, and 0.2 seconds - the default - is a *long* time.)

N.B. This would not work on a *vector* uniprocessor (like the 9000/210) unless another dumb "undocumented feature" has been patched out: through some apparently ad-hoc code in 5.4, vector consumers get time slices of 32 * QUANTUM (while other processes get just QUANTUM). The observed effect at our site was that a vector consumer received slightly preferred service in the daytime (timesharing), but around 80% of the CPU at night, competing with one or two non-vector compute-bound processes. I have turned that off completely (it is probably meant for a multiprocessor configuration where not all CPUs have vector units attached). If those "quanta" were taken seriously, a vector user would obviously leave *very* little CPU to other (compute-bound) processes.

I'll gladly provide you with these patches (applicable to VMS 5.4/-1/-2) on request.

Wolfgang J. Moeller, GWDG, D-3400 Goettingen, F.R.Germany | Disclaimer ...
Bitnet/Earn: U0012@DGOGWDG5   Phone: +49 551 201516       | No claim intended!
Internet: Moeller@gwdgv1.dnet.gwdg.de                     | This space intentionally left blank.