---------------------------------------------------------------------
NOV-MEM2.DOC -- 19970118 -- Email thread on NetWare memory management
---------------------------------------------------------------------
Feel free to add or edit this document and then email it back to
faq@jelyon.com

Date: Fri, 15 Dec 1995 15:47:35 -0600
From: Joe Doupnik
Subject: Re: Memory Requirements Again (Sorry)

>In estimating Netware memory requirements I have been using the Red Book
>Calculation (I know... It leaves a LOT to be desired). But I saw the
>new Calculation in the listserve FAQ and tried it out. Help me out here:
>I am estimating a new 4.1 server with 9.6Gb. 1 Gig is mirrored (SYS). I
>ran the two Novell calculations and they came out to 104Meg and 118Meg.
> When I ran the FAQ calculation it came to about 48Meg! The big
>discrepancy seems to be in File/Directory Caching requirements and the
>Red Book calculation has an added addition for Cache Memory.
---------
Good question David. Let's take apart the memory requirements for disk drives.

Each disk allocation unit requires a bit of memory to hold the data structure. If we had a 1GB drive with 4KB allocation units that would be 1024*1024KB/4KB = 262K of them, and that will cost us about 2.5MB (mostly FAT, some for Directory/DET). So this works out to be about 10 bytes of memory per allocation unit, give or take a little, mostly in FAT but some in the directory entry table. When suballocation is employed then an amount equal to a small fraction of this memory is used to keep the suballocation data structure. Suballocation is not without cost. NW 4 lets us gracefully use 64KB allocation units, thus many fewer than with the default NW 3 case of 4KB allocation units. It takes back some savings for suballocation work, but not much. You can see these figures in a running server by walking through MONITOR. That gives us the disk management memory consumption part.

NLMs consume their quota, often large. Users consume some just being logged in, and open files consume a little just being opened. I lump these two together to be within Novell's new guideline of about 300KB per login. LAN adapters need memory, the disk system needs directory caching memory. Those two are readily seen and controlled. On top of all this we place a safety factor of a couple MB free memory to deal with transients which are not readily apparent watching MONITOR.

Sanity check from a running system. NW 3.12, 4KB allocation units. 7GB unmirrored on line. 64MB server memory, about 41MB free at this moment with only a couple of people on. One 2.7GB volume has dual namespace, the rest has only one. Tape software, NFS, NLSP, whatnot loaded. Thus 64MB is far more than needed, and a 32MB server would still leave 9MB free space as it sits right now. Thus we see your 48MB value is in line with these observations.

Sanity check #2: NW 3.12, 2GB with 4KB allocation units, one namespace, 32MB memory, NLSP, CD-ROM and software metering active, 15MB free with a dozen or so folks logged in right now.

Be sure to trim down the maximum receive packet size to match Ethernet frames because otherwise receive cache buffers waste A Lot of memory: 4.3KB per buffer by default, versus 2.2KB per buffer after trimming. Have a look at your running servers with MONITOR and see where memory has gone. Take into account the peak memory demands of your tape backup system (BackupExec v7 wants about 2+MB here, and gives it back when idle).
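[FAQ editor's note: a quick sketch, in Python, of the allocation-unit arithmetic above. The ~10 bytes per allocation unit is a rule of thumb inferred from Joe D.'s 2.5MB-per-GB example, not an official Novell figure.]

    # Rough FAT/DET memory cost of mounting a NetWare volume.
    # BYTES_PER_ALLOC_UNIT is a rule-of-thumb constant inferred from the
    # 1GB / 4KB-units / ~2.5MB example above; it is not a Novell number.
    BYTES_PER_ALLOC_UNIT = 10

    def volume_fat_memory_mb(volume_gb, alloc_unit_kb):
        # Number of allocation units on the volume, then bytes -> MB.
        units = volume_gb * 1024 * 1024 / alloc_unit_kb
        return units * BYTES_PER_ALLOC_UNIT / (1024.0 * 1024)

    print(volume_fat_memory_mb(1, 4))     # ~2.5 MB: the 1GB example above
    print(volume_fat_memory_mb(9.6, 4))   # ~24 MB: the 9.6GB server in the question
    print(volume_fat_memory_mb(9.6, 64))  # ~1.5 MB: same disk with NW 4's 64KB units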
Realize that NW leaks memory into the Allocation pool, so reserve a few MB for that event over the long term.

Don't take anyone's word for all this. Please go explore with MONITOR to reassure yourself about your servers. I'd be happy to take off your hands those SIMMs you are about to save!
	Joe D.
---------
Date: Fri, 15 Dec 1995 18:38:53 -0600
From: Joe Doupnik
Subject: Re: Memory Requirements Again (Sorry)

The last followup today on this server memory thread. As steady readers may recall, I have pooh-poohed Novell's estimate of allowing each logged in user about 300KB or so of server memory. I still do, and I think I have a better estimator. Below is the short story.

A logged in user consumes some tens of KB to represent the login information. Fine. If the user reads or writes files then for a short time the disk system needs buffers to transfer the data, and to keep track of file open states, and file locks, and the communications channel needs buffers to create packets, and so forth. The size of the instantaneous consumption is not readily available to us via MONITOR or other common tools. But let us suppose that Novell's figure of 300 KB is about right for many cases. This is step one of three to the solution.

Step two is to realize that people are not doing things in synchronization. They do different activities at different instants of time and hence their consumption differs from moment to moment, and from user to user. The overall consumption, summed over all users at any instant, is a random value.

Step three is to apply elementary stochastic process (random variable) theory to estimate the average consumption of many independent users. As you may recall, if each user has memory consumption standard deviation of U KB then at any instant the total memory consumed is not N users times U KB (the case of everyone doing the same thing at the same time) but much less. The standard deviation of N independent users, the whole system, is the square root of the sum of their individual variances (variance equals the square of the standard deviation). Thus the standard deviation of memory used by the population as a whole is W KB = Sqrt(sum over N of (U KB * U KB)), which is W = Sqrt(N) U KB. The system sees memory consumption proportional to the square root of the number of logged in users. At any instant some users consume more than 300 KB but lots of users consume much less.

As an example, if U were 300 KB and we have N = 100 users, then on average the system expends 10 * 300 KB = 3,000 KB servicing them, not 100 * 300 KB = 30,000 KB. This is simply the root mean square (RMS) summation of N independent random processes.

Thus I suggest that when we scale the free memory in a server, after all NLMs have as much memory as they will use, disk and packet queues are allocated adequate reserves, etc, that we estimate users will consume about 300 KB times THE SQUARE ROOT OF THE NUMBER OF LOGINS.

Rebuttals and further comments are welcomed.
	Joe D.
------------------------------
Date: Fri, 15 Dec 1995 20:59:24 -0800
From: Floyd Maxwell
Subject: Re: Novell, statistics and (ed)U

> Thus I suggest that when we scale the free memory in a server, after
>all NLMs have as much memory as they will use, disk and packet queues are
>allocated adequate reserves, etc, that we estimate users will consume about
>300 KB times THE SQUARE ROOT OF THE NUMBER OF LOGINS.
>
> Rebuttals and further comments are welcomed.
> Joe D.

Big problem here is the assumption that usage is perfectly-smoothly average.
I can think of times, Monday (and other) morning log-ins for example, or similar non-random moments (like deadlines for students at an .edu) where load will be much higher...you know, a wave every century is 100 ft high or whatever it is...so one load moment every ..xx.. days could cause a problem.

So, maybe we add a couple of sigmas of uncertainty to our "bell" curve, and then draw a line down to the x-axis and read off the RAM value...eh, Eureka!

More specifically, if 100 users...load avg. based on (100)^0.5 = 10 users, then Figure 1 is proposed:

100% > |
       |
       |                                     xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
       |                               xxxxxxxxxxxxxxxxxxxxxx
       |                         xxxxxxxxxxxxxxxx
       |                   xxxxxxxxxxxxx
 50% > |              xxxxxxxxxxx
Free   |          xxxxxxxxx
Cache  |       xxxxxxx                                          ^
Buffers|    xxxxx                                               |
  %    |   xxx                                                  |
       +xxxx---------------------------------------------------+---
         ^                         ^                            ^
        Joe                       FPM                         Novell
                     (User-portion RAM calculation)

Fig 1. - X's cover the "2 sigma" MarginalRange(TM) of free cache buffers
       - Safety lies to the right of the X's, by FPM's extension of
         Joe D.'s theory

...I have a feeling the "final" is going to be rough in this course...
------------------------------
Date: Fri, 15 Dec 1995 21:15:54 -0800
From: Floyd Maxwell
Subject: Novell, Stats & U...Part Deux

Correction to figure "2" below...
------------
> Thus I suggest that when we scale the free memory in a server, after
>all NLMs have as much memory as they will use, disk and packet queues are
>allocated adequate reserves, etc, that we estimate users will consume about
>300 KB times THE SQUARE ROOT OF THE NUMBER OF LOGINS.
>
> Rebuttals and further comments are welcomed.
> Joe D.

Big problem here is the assumption that usage is perfectly-smoothly average.

I can think of times, Monday (and other) morning log-ins for example, or similar non-random moments (like deadlines for students at an .edu) where load will be much higher...you know, a wave every century is 100 ft high or whatever it is...so one load moment every ..xx.. days could cause a problem.

So, maybe we add a couple of sigmas of uncertainty to our "bell" curve, and then draw a line down to the x-axis and read off the RAM value...eh, Eureka!

More specifically, if 100 users...load avg. based on (100)^0.5 = 10 users, then Figure 2 is proposed:

100% > |
       |!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
       |!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!xxxxxxxxxxxxxxxxxxxxxx...
       |!!!!!!!!!!!!!!!!!!!!!!!!!xxxxxxxxxxxxxxxx...............
       |!!!!!!!!!!!!!!!!!!!xxxxxxxxxxxxx........................
 50% > |!!!!!!!!!!!!!!xxxxxxxxxxx...............................
Free   |!!!!!!!!!!xxxxxxxxx.....................................
Cache  |!!!!!!!xxxxxxx                                          ^
Buffers|!!!!xxxxx                                               |
  %    |!!!xxx                                                  |
       +xxxx---------------------------------------------------+---
         ^                         ^                            ^
        Joe                       FPM                         Novell
                     (User-portion RAM calculation)

Fig 2. - X's cover the "2 sigma" MarginalRange(TM) of free cache buffers
         with safety lying *just* to the right of the x area, the "."
         area reflecting the amount of RAM bought but not needed, while
         the "!" area denotes the time-to-dredge-up-your-resume moment,
         by FPM's extension of Joe D.'s original theorem.

...I have a feeling the mid-term is going to be rough, never mind the final.
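[FAQ editor's note: Joe D.'s square-root estimator and Floyd's safety-margin concern fit in one small formula. A minimal sketch in Python; the 300 KB per-user figure and the safety factor are this thread's assumptions, not measured NetWare values.]

    import math

    def user_memory_kb(n_users, per_user_kb=300.0, safety=1.0):
        # RMS model: total consumption scales as sqrt(N), not N.
        # 'safety' scales the result upward for the non-random peak
        # moments Floyd describes (Monday logins, .edu deadlines).
        return safety * math.sqrt(n_users) * per_user_kb

    print(user_memory_kb(100))              # 3,000 KB: Joe D.'s 100-user example
    print(user_memory_kb(100, safety=2.0))  # 6,000 KB: with a 2x margin
    print(100 * 300)                        # 30,000 KB: the linear rule, for contrast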
------------------------------
Date: Fri, 15 Dec 1995 22:48:13 -0600
From: Joe Doupnik
Subject: Re: Novell, statistics and (ed)U

>
>> Thus I suggest that when we scale the free memory in a server, after
>>all NLMs have as much memory as they will use, disk and packet queues are
>>allocated adequate reserves, etc, that we estimate users will consume about
>>300 KB times THE SQUARE ROOT OF THE NUMBER OF LOGINS.
>>
>> Rebuttals and further comments are welcomed.
>> Joe D.
>
>Big problem here is the assumption that usage is perfectly-smoothly average.

Not really. The probability density curve of time used versus memory consumed can be a pretty weird looking thing, but it has a large value at very small memory consumption (users are working on their PC rather than doing file transfers). The theory works fine provided users aren't synchronized (Folks, on the count of 3 press...). The point is how all this scales with the number of users, mentioned again below.

>I can think of times, Monday (and other) morning log-ins for example, or
>similar non-random moments (like deadlines for students at an .edu) where
>load will be much higher...you know, a wave every century is 100 ft high
>or whatever it is...so one load moment every ..xx.. days could cause a
>problem.
>
>So, maybe we add a couple of sigmas of uncertainty to our "bell" curve,
>and then draw a line down to the x-axis and read off the RAM value...eh,
>Eureka!
>
>More specifically, if 100 users...load avg. based on (100)^0.5 = 10 users,
>then Figure 1 is proposed:
>
>100% > |
>       |
>       |                                     xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>       |                               xxxxxxxxxxxxxxxxxxxxxx
>       |                         xxxxxxxxxxxxxxxx
>       |                   xxxxxxxxxxxxx
> 50% > |              xxxxxxxxxxx
>Free   |          xxxxxxxxx
>Cache  |       xxxxxxx                                          ^
>Buffers|    xxxxx                                               |
>  %    |   xxx                                                  |
>       +xxxx---------------------------------------------------+---
>         ^                         ^                            ^
>        Joe                       FPM                         Novell
>                     (User-portion RAM calculation)
>
>Fig 1. - X's cover the "2 sigma" MarginalRange(TM) of free cache buffers
>       - Safety lies to the right of the X's, by FPM's extension of
>         Joe D.'s theory
>
>...I have a feeling the "final" is going to be rough in this course...
>
>Floyd Maxwell

Certainly valid concerns here Floyd. One can put safety factors in of whatever desirable size without changing the conclusion (it just scales the "standard deviation" of memory consumption). I too keep memory in reserve for large excursions. The 300KB figure quoted above was my attempt to give Novell a few points rather than stake out (without proof) a value of my own. 300KB is a rather large amount of memory for a user, as I've noted previously.

My point is: memory consumption very likely scales as the square root of the number of active users, not as the direct number of users.

To caution readers, it must be apparent that I am speculating and theorizing here without a set of measurements in NetWare to substantiate that this is how NW deals with matters. The stochastic analysis is standard stuff. As mentioned in my previous message, short term memory demands aren't visible in MONITOR nor in other convenient tools I have available. Our infallible network canary(*) is the user population.
	Joe D.

*Birds are extremely sensitive to poisonous air. Canaries were used to signal bad air in mines, before myriads of animal rights groups said that people would be better.
------------------------------
Date: Sat, 16 Dec 1995 12:55:07 +0100
From: Bo Persson
Subject: Re: Novell, statistics and (ed)U

>From Floyd Maxwell
>
> > Thus I suggest that when we scale the free memory in a server, after
> >all NLMs have as much memory as they will use, disk and packet queues are
> >allocated adequate reserves, etc, that we estimate users will consume about
> >300 KB times THE SQUARE ROOT OF THE NUMBER OF LOGINS.
> >
> > Rebuttals and further comments are welcomed.
> > Joe D.
>
> Big problem here is the assumption that usage is perfectly-smoothly average.
>
> I can think of times, Monday (and other) morning log-ins for example, or
> similar non-random moments (like deadlines for students at an .edu) where
> load will be much higher...you know, a wave every century is 100 ft high
> or whatever it is...so one load moment every ..xx.. days could cause a
> problem.

I have also seen the reverse, where NON-random behaviour improves the situation by reducing the amount of cache needed :-)

At our site, the main application (order entry & invoicing) is shared by most of the users. In this case, they also literally share the file and directory caches for the data files. Very soon most, if not all, of it will be in the server cache. I have seen long periods of hardly any disk reads, just a steady stream of writes as more data is added. In this case, adding a number of users would hardly make ANY difference to the server's memory consumption.

The situation above, where I had 50% free buffers and no disk reads(!), together with Joe D's initial thread on server memory, made me believe that I could add more disks without adding any more memory. So I did, and it works just fine with about 10 GB disk space in a 48 MB 3.12 server, having just over 16 MB free buffers (35 %).

As always, Your mileage may vary :-)
------------------------------
Date: Tue, 19 Dec 1995 11:53:40 -0600
From: Joe Doupnik
Subject: Re: How Much Server Ram

>>Is there a rule of thumb for determining how much server RAM is needed for
>>running Novell 3.11 on a 100 user system. Macintosh support is used, as
>>well as Mercury mail services.
>>
>>Current system is IBM 486DX-77 with 16 Meg RAM.
>
>There are various theories about RAM, but my personal rule of thumb is 16
>MB for every 1 GB of hard drive space. I also adjust this to have 70%+ of
>cache buffers after all volumes are mounted and other NLMs loaded, such
>as ArcServe, NW Connect, etc. The number of users is really not a
>significant factor in relation to the size of the volumes and the block
>size.
>------------------------------
>
>Ok guys.... This was just discussed in the FAQ a few days ago and,
>if my memory serves me correctly, Mr J.D. was rather animate (sic) about
>the fact that 16 Mb per 1024 Mb was "Outrageous".
>
>So, I will provide you with the formulas for calculating memory requirements
>
>Netware Volume:
>  A = .023
>  V = _Size Of Volume_
>  B = _Block Size_
>  M = _Memory required_
>
>  M = (A(V)) / B
>
>Netware Volumes with Added Name space support:
>  A = .032
>  V, B, M remain constant
>
>  M = (A(V)) / B
>
>Total Server memory = M + 2. Round to highest multiple of 1024.
>
>These formulas are not perfect, but they do come close. One must
>also take into consideration any additional services loaded
>and use a little common sense.
>
>BTW These equations are located in the NW V3.11 Installation Guide on
>Pages 113-14
----------
How quickly people forget!
First, the formula you quote, Tracy, after inserting proper units (which not even the Novell folks can get right; "Take more math!") works out to be my 2.5MB of memory for each GB of disk, assuming 4KB allocation units on disk and no suballocation.

Second, while the FAT is the single largest ordinary memory consumer it is neither alone nor necessarily the largest overall consumer. The formula totally ignores Everything else in the server, and hence is absurd.

Third, what counts is the memory available to do work. That means memory to service all the NLMs, which is difficult to predict so we must resort to observation over time, and memory to service clients. So observe. The client consumption issue has been discussed at length here, in detail. Please refer back to traffic as recent as the end of last week for strategic views on client consumption.

Fourth, the Novell formulae are poor and without adequate basis in fact. They appear to be good tries but don't come to grips with the underlying factors. To see us develop a basis for calculation please read file nov-mem.doc which is part of the enhanced list FAQ (for just the file see netlab1.usu.edu, cd novell.faq, else use a WWW browser there or at the many other WWW sites carrying the FAQ). And see last week's list for client memory requirements. THESE CONCEPTS SAVE MUCH MONEY.

It has been my hope and expectation that people (includes me) would learn from thought, analysis, experiment and observation what it takes to make a server work well. That means following a discussion, applying fingers to local observation, and spending a few minutes thinking about the matter. I'm sure many silent readers have done just that.

If spending money is not your problem, and not a problem for someone asking for memory advice, then clearly the answer is always insert as much memory as the motherboard can hold, or get one which will hold even more. Be sure to indicate in answers the basis for making the recommendation (eg, money is no problem here so we spend and pray). Else we need rational answers for sites trying to estimate needs, and I believe our discussion has produced such answers.
	Joe D.
------------------------------
Date: Mon, 15 Jan 1996 18:09:52 -0600
From: Joe Doupnik
Subject: Re: Alloc Short Term Memory Pool problems

>I have a strange occurrence happening on one of my two servers. The Alloc
>Memory Pool amount and amount used differs by about 50+ percent in
>monitor.nlm. Immediately after reboot all the memory pools are fine.
>This difference in the Alloc Pool slowly grows daily.
------------
The Alloc memory pool is a one-way acquisition device. Once acquired, memory is not returned to other pools. Some NLMs are especially hungry this way and cause the alloc pool to grow suddenly. The only way of reversing the flow is to reboot the server. In your case I shouldn't worry about it because you have so much memory available that there will be no impact on operations from the alloc pool growth.
	Joe D.
------------------------------
Date: Tue, 16 Jan 1996 11:16:04 -0600
From: Joe Doupnik
Subject: Re: cache buffers and large disk drives

>Our department and University use Novell netware for maintenance of the LAN.
>I was having some discussions with the staff who administer it. Quite
>frankly, I don't believe my ears.

Then it's best to apply eyes to the problem to ascertain the facts.

>Apparently, when one puts up a Netware system, it copies all the directory
>and file names into RAM. This does obviously improve performance. However,
>if one often needs to mount hard disks or CD-ROMs whose materials must be
>on-line. However the material is relatively infrequently requested. An
>example is user areas where many users might not log in each day.
>In such cases one might want to have less RAM on the server and specify
>that certain partitions should get their files as needed.

No, it does not copy all, but only some. Directories are an LRU cache. FAT disk structure information is fully cached. Try any VMS VAX to see what failure to cache the FAT does to performance (unbelievably slow under load). Please read up on Console commands to gain a better appreciation of the o/s, and see the list's FAQ for amplification of Novell guidelines.

>Certainly, there must be some way to do this.
>I also understand that what gets eaten up are "Cache Buffers". I have been
>told that if one runs out of buffers, the system locks up. I expect that
>if a system is in heavy use, it slows down and thrashes. It shouldn't
>just "lock up."

So what else is new. All machines go belly up when starved of memory, even Unix. System managers are paid to know about such things, and as the stereotype goes CS persons just expect there to be memory willy nilly. It's nice to see a CS person worry about memory quantity, for a change. "Thrashing" is a classical CS term, but it does not apply to NetWare because there is no swap file nor paging file, though there is a reusable directory cache. While NW does extensive caching of file components as they move between peripheral devices (lotsa queues) it does not treat memory the same way as Unix.

>Assuming that this isn't the case and one does have to have every single
>disk file name stored in RAM, I see a few work arounds.

They aren't.

>Is there any stacker type utility that can be put on the PC's in the lab
>that would take a NOVELL partition and keep a single file on it that
>would correspond to an entire user's areas. That way the user can mount
>his file area on the PC. However, NOVELL just sees one file so it doesn't
>use up precious RAM for every single disk file.

Oh my. As a CS Prof you ought to know better than to split responsibility for a file system between machines, particularly if some of those machines are desktop PCs subject to all forms of computer madness. Stacker compresses files, it doesn't reduce the number of files. NW 4 has file compression, but one would be foolish to just turn it on and expect everything to be "better."

>I am a UNIX system administrator as well as a Ph.D in computer science. I never
>heard of a system other than NOVELL that behaves this way and uses RAM in this
>manner.

Well, now you know more and have another performance design to think about. Btw, the name of the product is NetWare (tm), rather than the company name. First one needs to decide if there is a problem with the computers or with the observer's perception of the computers. Then one looks for solutions and consequences. In this case I think perceptions are the issue since the product is noted for being highly successful worldwide for a long time. You may also wish to skim the list's collection of commentary on aspects of NetWare: say use a WWW browser to netlab1.usu.edu.
	Joe D. (also a Prof type person)

>Dr. Laurence Leff Western Illinois University, Macomb IL 61455
>(309)298-1315 Pager: 309-367-0787 800-512 0787 mflll@uxa.ecn.bgu.edu
>Stipes 447 Assistant Prof. of Computer Sci.
>Moderator: Symbolic Math List, Technical Reports List
>alt.binaries.pictures.fine-art hierarchy || FAX: 298-2302
------------------------------
Date: Wed, 24 Jan 1996 00:33:00 MET
From: "Arthur B."
Subject: Re: Question about EMM386

>My name is Michael Beales from Southampton Institute. I am currently
>looking into the configuration of PCs allowing them to run
>Pathworks 5.1 and Infoserver software. As anyone who has used this
>software will know it uses a lot of conventional memory. The best way to
>combat this I have found is to use the monochrome graphics area (B000-B7FF)
>in the EMM386 line. Adding Highscan to EMM386 also gives more upper memory.
>What I want to know is: does anyone know of any problems using these two
>options in EMM386, as I have heard that they could cause problems.

B000-B7FF is a monochrome video adapter memory area. Most VGA cards don't have problems with it; however, Hercules and SVGA cards (in SVGA mode) can go down on it.

The HIGHSCAN option causes EMM386 to scan F000-FFFF for more upper memory. Since this part of memory is mostly used for shadowed ROM I wouldn't suggest it. Remove HIGHSCAN. I have even seen PCs with 72 KB of RAM used for shadowed ROM.

Which areas you can and can't use is 'easy' to find out. Start the PC. Press Ctrl-F5 when you see the message "Starting MS-DOS". Use MSD (or others if you prefer) to see which areas of upper memory aren't in use and which are. You now have a first map of your I= and X= parameters.

Then REM HIMEM.SYS out of CONFIG.SYS. Restart the PC (don't press Ctrl-F5) and use MSD again (if you have more than one card using upper memory, you may need to do this for each line). Adjust your 'first map' to X= (exclude) memory that is now in use.

Then unREM HIMEM.SYS and REM EMM386 in CONFIG.SYS (EMM386 without parameters, use DOS=HIGH,UMB). Restart the PC and use MSD again. Again adjust your 'map' to X= (exclude) memory that is now in use.

Then unREM EMM386 in CONFIG.SYS (no parameters yet). Restart and use MSD again. Any areas in use now that shouldn't be? X= them. Any areas not in use? I= them. Edit CONFIG.SYS accordingly, restart the PC.

Other upper memory saving tips:
EMM386 RAM NOEMS
If you need EMS sometimes this helps:
EMM386 RAM /FRAME=NONE
Lastly you could run MEMMAKER to get the last byte out.
------------------------------
Date: Wed, 24 Jan 1996 15:04:32 -0500
From: "A. Grant"
Subject: Re: Remote boot and Windows

Joe Doupnik writes:
> Then you can reference all server files via that single drive
>letter. I never use the "automatic" letters such as that y: above
>because they change depending upon setups. Be explicit, such as
>/y=F:\dos\386emm.exe rather than vague (y:) and wishy washy
>(no explicit path).

The following is quite legal and avoids drive letter problems entirely.

... /Y=ServerName\SYS:PUBLIC\IBM_PC\MSDOS\V6.20\EMM386.EXE
------------------------------
Date: Thu, 25 Jan 1996 08:41:04 -0600
From: Joe Doupnik
Subject: Re: Copyright Violation

>Here's a good one for you all to chew over. A customer of mine runs MPR 3.0
>on their 3.12 server; the other day MPR threw up a memory corruption error
>(IPXRTR 178) which accordingly reset the router with no problems, however
>since then the server has been saying that there is another server on the
>WAN with the same serial number and that there is therefore a 1.1.136
>Netware Copyright Violation.
-------------
Corrupted server memory can and does result in these false copyright violation notices. Think of them as meaning "Server memory is corrupt, please reboot right away."
Joe D.
------------------------------
Date: Thu, 8 Feb 1996 14:15:19 -0600
From: Joe Doupnik
Subject: Re: Losing Cache Buffers

>>I'm having a problem with a Netware 3.12 server losing cache.
>>It's currently a 486dx2 66, 8mb Ram. If I down the server and run
>>VRepair, I'll be back up to about 600 buffers. After a few days, it
>>will go down to about 550, and eventually down to 20. Any
>>suggestions? thanks in advance
>
>When the server is booted as much memory as possible is allocated to
>(file) Cache buffers. When needed, memory is allocated to the Permanent
>memory pool (from Cache Buffers) and once allocated it can not be returned.
>Likewise, the short term alloc memory pool gets supplied from the Permanent
>memory pool and this also can not be returned.
>
>To prevent too much memory being allocated to the Permanent memory pool,
>decrease the "maximum directory cache buffers". To prevent too much
>memory being allocated to short term alloc memory, decrease the "maximum
>short term alloc memory" (on a machine with 8MB, that should be set to
>2MB).
>
>If you then start getting "out of memory" errors, you should think about
>putting more memory in your server, or increasing the block size on your
>volumes.
>
>Bo Bonnevie
---------
Good advice. That alloc memory space is a puzzler item in practice. Watching one of my NW 3.12 servers over the past few days shows that area to grow and grow, and grow, never to be returned as cache buffers. It peaks eventually. My best guess is printer spooling is causing the largest consumption, with printing to an HP Deskjet color inkjet being the worst offender (would you believe that WP Presentations can create an 808MB file for one transparency? I didn't either until it happened several times. That takes real talent.) I haven't lowered the max alloc short term memory yet, but I may to see what side effects occur.
	Joe D.
------------------------------
Date: Wed, 21 Feb 1996 18:12:06 UT
From: Dave Kearns
Subject: Re: REGISTER MEMORY ON 3.12 (was Re: ? N

Joe Doupnik wrote:
> Well, maybe we are making progress here.
> NW 3.11 certainly does not permit REG MEM in startup.ncf, but my
>fading memory said NW 3.12 probably could since it is much more like NW 4
>in these matters. Could be that memory is faulty too. I'd RTFM but they
>aren't on the system at this time (space problems). I'd try it here except
>the NW 3.12 servers are populated by pesky users at the moment.
> Editorial comment:
> For what it's worth dept (US$0.00 real value): I too heartily wish
>Novell would fix up the memory sizing business to work "automatically"
>right at server.exe load time rather than us futzing with odd bits and
>pieces, and having to stuff disk things into autoexec.ncf etc. If himem.sys
>can do it then so could NetWare.
> Joe D.

Nope, for all versions of 3.x, REGISTER MEMORY can only go in AUTOEXEC.NCF. For all versions of 4.x (at least, 4.01 and above), it can go in STARTUP.NCF. However, the command REGISTER MEMORY is not recognized by 4.01, where the syntax (in startup or autoexec) is Load Memfrgfx

Look for a discussion of this in my "Under the Hood" column, in NetWare Solutions magazine's April issue (due out in about 4 weeks).
------------------------------
Date: Tue, 27 Feb 1996 08:14:27 EST
From: Larry Mascarenhas
Subject: Computing server RAM requirement

You could look on the Novell ftp site for a file called smem.exe. It will indicate the amount of RAM you need based on the input you provide.
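[FAQ editor's note: for readers who want to experiment before fetching smem.exe, here is a minimal sketch, in Python, of the kind of estimate this thread has been building up. It combines the ingredients discussed so far: FAT/DET memory per volume, NLM working space, the square-root user term, and a safety reserve. The constants are this thread's rules of thumb, not Novell's official numbers, and this is not the actual smem.exe algorithm.]

    import math

    # Rules of thumb from this thread (assumptions, not Novell's numbers):
    BYTES_PER_ALLOC_UNIT = 10   # FAT/DET cost, ~2.5MB per GB at 4KB units
    PER_USER_KB = 300           # Novell's per-login guideline, used with sqrt
    SAFETY_MB = 2               # Joe D.'s couple-MB reserve for transients

    def server_ram_mb(volume_gb, alloc_unit_kb, users, nlm_mb):
        units = volume_gb * 1024 * 1024 / alloc_unit_kb
        fat_mb = units * BYTES_PER_ALLOC_UNIT / (1024.0 * 1024)
        user_mb = math.sqrt(users) * PER_USER_KB / 1024.0  # RMS, not linear
        return fat_mb + nlm_mb + user_mb + SAFETY_MB

    # The 9.6GB NW 4.1 server from the start of this document, 100 logins,
    # 8MB allowed for NLMs: about 37MB at 4KB units, before any extra
    # file-cache headroom you choose to add on top.
    print(server_ram_mb(9.6, 4, 100, 8))    # ~36.9 MB
    print(server_ram_mb(9.6, 64, 100, 8))   # ~14.4 MB with 64KB units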
------------------------------
Date: Fri, 1 Mar 1996 09:08:47 CST
From: Richard Jordon
Subject: Re: FF6A error message

There have been a number of recent questions about NetWare 4.10 error message ff6a. According to the SDK Edition 4 documentation this error message means: "DSERR_NO_ALLOC_SPACE: attempt to write to file server which does not currently have enough free dynamic memory to process this request."

It is my understanding that an authenticated DS connection uses a RAM cache (actually a "Connection Table Handle" according to the SDK). While in the past I've been using the server RAM formula found in this mailing list's FAQ, I'm beginning to wonder if there is an adequate adjustment for the number of users in a 4.10 environment. The current formula uses 0.4MB per client and assumes that the requirement goes down as clients increase. Perhaps a more realistic number for 4.1 is to stay at 0.4MB per client regardless of scale.
------------------------------
Date: Sun, 10 Mar 1996 18:10:13 -0500
From: Debbie Becker
Subject: Re: Windows "out of memory" issues

>We have windows 3.1, Dos 6.22, and Novell 4.1 on an ethernet lan.
>Frequently our users get an "out of memory" message when running
>several windows apps, sometimes just three apps, other times more
>than three. This seems to be a new problem since novell 4.1, ..but
>it's hard to tell since we are always adding apps of one kind or
>another. I was wondering if nwpopup or another app could be causing
>this. Any help would be greatly helpful.

I've had some problems with this in the past and have found a couple of areas to check. As most others have mentioned, resources (not memory) are the real problem in Windows. I found that, after loading some new VLMs a while back, I was getting constant "out of memory" error messages. Thought I'd been really clever in loading so many programs high, and then read somewhere that Windows likes a contiguous chunk of upper memory of a certain size or larger (can't remember how much for the life of me). Went back to MEMMAKER, ran a simple configuration and loaded more files low. Much to my amazement, it seems to have made a difference! I've seen a few sample NET.CFG files on here in which people have loaded various VLMs low -- I would be interested in the reasoning here so that I can use it/pass it on!

Also picked up RAMDoubler a couple of months after that and it made a great difference! Can now run a bunch of resource-grabbing apps (and don't even have to worry about which ones I load first!). There have been a couple of programs my husband has had conflicts with, but it's not a big deal to load without it in that case. As someone else stated, it's easy to set up and use.

Have also been warned by various folks about the size of WIN.INI. Mine is now 34,000 bytes. Once I hit 35,000 I seem to have more problems with "memory."
------------------------------
Date: Wed, 10 Apr 1996 07:20:24 -0600
From: Joe Doupnik
Subject: Re: Memory above 64 meg doesn't register

>Correct me if I'm wrong, but Netware auto-registers this amount of
>ram:
>
>isa   16 megabytes
>eisa  32 megabytes
>pci   64 megabytes
>
>...so use a register memory command in your autoexec.ncf file.
---------
It's not quite right. Experience here says 16MB is the no-help default recognition on all machines, and Bios support is needed to go higher. This is certainly true of EISA bus machines. What is often a puzzle are PCI bus machines (does their Bios report correctly by itself or do we need to help in some way etc).
The Intel Triton 1 PCI chipset for low cost Pentium desktop machines, for example, caches only up to 64MB and thus has that inherent limit. My PCI/EISA motherboards don't report more than 16MB without me running the EISA configuration program, the same as for an ISA/EISA bus unit. It's a pretty muddle we have this decade, and even choosing the SIMMs is becoming a specialist task.
	Joe D.
------------------------------
Date: Thu, 11 Apr 1996 20:15:51 -0700
From: "Richard K. Acquistapace"
Subject: Re: Memory above 64megs does not register

>Why is it that I upgrade my server's memory to go from 64Megs to 130MB and
>Netware 4.1 will not recognize it. It goes up to 66MB and stops.
>
>I have checked the boot up and the memory is okay, all 130 MB are there. In
>dos when I type MEM it tells me I have 65MB of RAM (But I think that's as
>much as DOS recognizes). I took DEVICE=c:\dos\HIMEM out so nothing is
>affected. I have the latest SCSI drivers and all my scsi's are set up
>correctly from ID 1 to ID 5. I'm not sure where else to look. I have
>upgraded everything on the server including the loader and all the
>recommended patches.

Follow these instructions. You will have no problems.

Boot to MS-DOS 5.0, 6.0, 6.2 (no config.sys)
run server -ns -na (do not run startup.ncf, autoexec.ncf)
type in fileserver name
type in internal ipx number
type memory
 - if you see all your memory, use Procedure I
 - if you do not see all your memory, use Procedure II
   (this is due to a problem with server.exe and your motherboard)

Procedure I
===========
Netware 3.11, 3.12, 4.0x
------------------------
STARTUP.NCF
  set auto register memory above 16 megabytes = on
  set reserved buffers below 16 meg = 40
  (Add 20 buffers for each additional SCSI device supported by the
  Adaptec driver, with a minimum of 200 buffers for Sbackup, Arcserve)
  i.e. load AHA1540 port=xxx
  i.e. load AHA1640 port=xxx

Note: above16=y is not necessary anymore (154x, 1640);
      the driver autodetects memory.

Procedure II
============
Netware 3.11, 3.12:
-------------------
STARTUP.NCF
  set auto register memory above 16 megabytes = off
  set reserved buffers below 16 meg = 300
  (Add 20 buffers for each additional SCSI device supported by the
  Adaptec driver, with a minimum of 200-300 buffers for Sbackup, Arcserve)

AUTOEXEC.NCF (must be in same path as STARTUP.NCF, C: or A:)
  file server name
  ipx internal net
  register memory 1000000 ??????? (see Memory)
  i.e. load AHA1540 port=xxx
  i.e. load AHA1640 port=xxx
  (if you have a problem with volumes not mounting the first time, add
  the command line "scan for new devices")
  mount sys
  load .lan
  bind ...
  mount (all)
  load nlms

Netware 4.0x:
-------------
STARTUP.NCF
  set auto register memory above 16 megabytes = off
  set reserved buffers below 16 meg = 300
  (Add 20 buffers for each additional SCSI device supported by the
  Adaptec driver, with a minimum of 200 buffers for Sbackup, Arcserve)
  register memory 1000000 ??????? (see Memory)
  i.e. load AHA1540 port=xxx
  i.e. load AHA1640 port=xxx

AUTOEXEC.NCF (can be on SYS: Volume)
  file server name
  ipx internal net
  load .lan
  bind ...
  mount (all)
  load nlms

Memory:
-------
register memory <start> <length>    (both numbers are in hex)

Hex digit place values in decimal: 16777216 / 1048576 / 65536 / 4096 / 256 / 16 / 1
16Meg = 1 0 0 0 0 0 0 = 1'000'000

Example:
24M: register memory 1'000'000   800'000
32M: register memory 1'000'000 1'000'000
48M: register memory 1'000'000 2'000'000
64M: register memory 1'000'000 3'000'000
------------------------------
Date: Sun, 24 Mar 1996 14:03:09 -0600
From: Joe Doupnik
Subject: Re: NOVELL Digest - 20 Mar 1996 to 21 Mar 1996 - Special issue

>>I have not seen this brought up before but I have a persistent problem
>>since using VLM's. I periodically get an error message that states:
>>
>> "General failure reading device NETWORK. Abort, Retry, Fail?"
>
>I would be interested in the response to this as well. I upgraded to
>the latest VLM's I saw reference to on this list from an earlier
>version, and get the same "General failure reading device NETWORK.
>Abort, Retry, Fail?" message. I revert to the earlier version and the
>problem "goes away." I read the readme.txt file and tried
>inserting/massaging the various parameters in net.cfg. I am running
>Netware 3.12, 802.3 frame type (I know I should use Ethernet_II, but my
>remote-boot roms require 802.3).
-----------
The message means loss of communications. Device NETWORK is a channel within the DOS/Windows machine to join DOS calls to the networking stack.

The most common situation yielding this message is lack of careful memory management. VLMs are normally loaded into a relatively small piece of Upper Memory Blocks, with VLM.EXE often loaded below 640KB (though it need not be). VLM.EXE is the manager and it calls the various VLMs into that small UMB area, paging style, to execute below the 1MB boundary. Usually the VLMs are "parked" above 1MB in either extended or expanded memory, controlled by the /M switch on the VLM.EXE invocation line. You can see where components are loaded with a decent memory display tool, and by saying VLM /D at the DOS prompt after the network is going.

Should anything impact that UMB area execution will fail, and with the failure will be loss of communications. Thus it must be protected against intruders, at both DOS and Windows level individually. Similarly, if the lan adapter uses shared memory to exchange packets that area too must be protected at both levels individually. We need not mention that there can be only one hardware device per IRQ, regardless of whether you plan on using competing hardware. Ditto i/o ports, since each adapter uses a block of port addresses (we specify only the start of the block) of typically 16 or 32 slots.

DOS often does a fair to poor job finding such memory uses. Thus you must eXclude= such areas manually for safety's sake, or use a proper manager such as QEMM/386. Windows pays little to no attention to DOS memory management settings and here you need to eXclude= the areas in the [386Enh] section of system.ini. If you employ expanded memory, as I do, then it needs a 64KB block (actually one to four contiguous 16KB blocks, called pages) in UMB with which to make material visible from above 1MB. We must tell managers where that page frame is located, typically as frame= on their command line. NOEMS means no expanded memory.

There is NO need to use frame Ethernet_802.3 after remote booting. I have two student labs which use old boot roms and boot using that frame. The networking setup (net.cfg) uses only Ethernet_II. Try it and see. Net.cfg requires paying attention to its components and construction.
The list's FAQ has examples well worth reading.
	Joe D.
------------------------------
Date: Tue, 16 Apr 1996 07:49:11 +0200
From: Henno Keers
Subject: Re: PBURST NLM on a NW 3.12 network

>Also for pburst to be enabled, do I have to modify the startup.ncf
>file? SET MAXIMUM PHYSICAL RECEIVE PACKET SIZE = ??? Apparently the
>network board mfr's driver specifies the max packet size it can
>handle.

NetWare 3.12 has a default value of 4202 bytes on this one, rather a waste when you have ethernet: 100 packet receive buffers at 4202 bytes take 410 Kb of memory from the server, where, when the value is set correctly to 1518 bytes, the same receive buffers use only 148 Kb. So... modify it to what your topology and hardware can handle. In NetWare 3.11 you had the downside of the default value being 1514 bytes, even on Tokenring, hence poor performance.
------------------------------
Date: Fri, 26 Apr 1996 15:59:52 -0600
From: Joe Doupnik
Subject: Re: netware 4.0 (Insufficient directory space )

>>>You probably don't really have 90 Mb free on the drive -- it is
>>>probably occupied by deleted files. Do a CHKVOL on that volume
>>>to see what is really available. If all, or most, of the space
>>>is occupied by deleted files, do a PURGE /A from the root directory
>>>to clean it up. That should fix the problem.
>>
>>I know purge cleans out the deleted files, but isn't NetWare supposed
>>to write over the deleted files if the space is needed, or is purge
>>required to regain the space.
>>Brian L. Anderson
>
>Salvageable but deleted files still take up memory since they still
>represent a directory entry. You need to purge not to free up disk space
>but to free up memory, etc.
>Eliot T. Ware, CNE
-------------
Yes, but... Directory cache entries, the memory item above, are reused. The problem is likely that the volume has devoted more space to directory bookkeeping than is permitted. See the console, SET, file system, screen two, for the nominal 13% limitation. Purgeable files count against this limit, hence the good advice of purge \deleted.sav and purge /a from the root.
	Joe D.
------------------------------
Date: Wed, 1 May 1996 10:48:23 +0100
From: Richard Letts
Subject: Re: DET maxing out.

>You could try setting Maximum Percent Of Volume Used By Directory to a
>higher number than the default of 13. Also, you will need to run a
>purge *.* /all on a semi-regular basis otherwise all those temp files
>will take over your DET. I recently found 260,000 temp files in my
>etc\tmp directory (deleted of course).

A useful trick is to capture the output of purge *.* /all and discover if any directories regularly have lots of purgeable files in them. If they do, then set the 'purge' option on that directory, carefully considering the implications of that (salvage won't work, and you will have to go to backup tape to restore files).

The users here have a home filestore with a TEMP directory which all of the applications are configured to write their scratch files into; this is flagged delete-inhibit, rename-inhibit and purge. This reduces the amount of clutter in the user's filestore, and they can delete the contents of that whenever they like.

We also ensure the print-queues are flagged purgeable (a bug in NW caused them not to be, and every queue-status request created a 'deleted' file in the queues. Our sys: volumes grew to 500,000 directory entries in a month).
------------------------------
Date: Wed, 1 May 1996 09:24:37 -0400
From: "Eliot T. Ware"
Subject: Re: Remove MAC/Appletalk support

>Without MAC support, what is the recommended setting for minimum
>packet receive buffers? I currently have this set at 100.
>
>We have all our servers set to 2000, but then our network is also quite
>busy.

I believe general wisdom is 1 PRB per client and 10 per EISA or PCI NIC.
------------------------------
Date: Wed, 29 May 1996 10:15:55 +0100
From: Bruno Belhassen
Subject: Re : Re : About memory

>[NetWare 4.x has the] Same 5 pools [as NetWare 3.x does].

The architecture of Netware 4 is different from that of Netware 3. In fact, Netware 4 has only one memory allocation pool, compared to Netware 3, which has five allocation pools. With Netware 3, applications can run out of memory because some management routines don't release memory back to the operating system. Using one allocation pool, Netware 4 alleviates these conditions.

A new characteristic also exists in NW 4: memory protection. Two domains can be created in memory: the OS Domain and the OS PROTECTED domain. To perform this operation, you can load DOMAIN.NLM to create the OS PROTECTED domain. It allows you to test your NLM programs without risking server memory corruption. The OS Domain is used by the operating system and NLM programs.
------------------------------
Date: Fri, 14 Jun 1996 09:10:37 -0700
From: Floyd Maxwell
Subject: Re: VLM PROBLEMS

>We are using Novell Netware version 3.12 and it is working fine.
>However, every time we update the VLM (latest Ver.) on the client,
>that's when our nightmare begins. We are constantly getting MEMORY
>ERROR messages, "Close one or more applications to continue". Each
>machine has at least 8MB of RAM, and used to be able to run multiple
>windows applications at once. Now, can no longer do that. We've tried
>everything, and no success.

I have seen 3 replies on this...all on the same (right) track: "Low DOS memory problem".

- You (and all other WfWg'ers) should definitely run PCMag's 1MBFORT.EXE as the first program on the load= line of SYSTEM.INI.
- You should consider going to Client 32 for DOS/Win -- it will net you 70KB to 100KB of lower 1MB RAM, about 30 to 50KB of this being gained in the all-important lower 640KB RAM.
- If you do not go for Client 32, and running 1MBFORT does not cure the problem, then you should use a better memory manager than DOS's (ie. try QEMM 7.x or higher)
---------
Date: Mon, 17 Jun 1996 09:51:39 +0100 (BST)
From: philr@hwcces.demon.co.uk (Phil Randal)
Subject: Re: VLM PROBLEMS

>- You (and all other WfWg'ers) should definitely run PCMag's
>1MBFORT.EXE as first program on load= line of SYSTEM.INI.

I've found that it's beneficial to specify a 'block' size of 5000 bytes in 1mbfort.ini instead of the default 10000 bytes.
------------------------------
Date: Fri, 14 Jun 1996 08:28:33 -0600
From: Joe Doupnik
Subject: Re: VLM PROBLEMS

>The /mx switch does send it upstairs, but into upper memory, not
>extended. No matter if you have 4 or 8 or 16Meg, Vlm.exe will never
>get past the 1Meg mark.
>
>By the way, I did a little testing to reassure myself. After loading
>all other drivers, etc., into conventional memory, I tried three
>different combinations:
>
>-c:\nwclient\vlm.exe /mx - this loaded app. 48K into upper and 15 in
> conventional.
>-c:\nwclient\vlm.exe - same thing.
>-lh c:\nwclient\vlm.exe - this loaded app. 53K into upper and 10 in
> conventional. Basically, what the loadhigh command did that
> the /mx switch didn't do was load the 'program' part of Vlm.exe
> into upper memory.
>
>All three displayed the message 'The VLM.EXE file is using extended memory
>(XMS).', which tells me what I believed in the first place; that vlm.exe
>will try to load itself into 'extended' memory, a.k.a. upper memory, any
>chance it gets.
--------
Nope. *.VLM files execute below the 1MB address boundary because they are real-mode items. But they are stored above 1MB. VLM.EXE calls down the appropriate worker into memory below 1MB for the worker to execute. The space used below 1MB for such a worker (.VLM) is just enough to hold the largest .VLM executable, and it's reused over and over. The below 1MB swap area can be in conventional memory, in UMB, or in the page frame of expanded memory which is itself in UMB. VLM.EXE itself must live below 1MB, whether in conventional or UMB. Try VLM /D to see the "mapping" of one VLM over another; the counts wrap around. It also shows the segment address of items and so on.
	Joe D.
------------------------------
Date: Mon, 17 Jun 1996 11:28:36 -0400
From: Debbie Becker
Subject: Re: VLM PROBLEMS

>We are using Novell Netware version 3.12 and it is working fine.
>However, every time we update the VLM (latest Ver.) on the client,
>that's when our nightmare begins. We are constantly getting MEMORY
>ERROR messages, "Close one or more applications to continue". Each
>machine has at least 8MB of RAM, and used to be able to run multiple
>windows applications at once. Now, can no longer do that. We've tried
>everything, and no success.

I was having problems with Windows at one point in time and ran across an obscure bit of info that stated that Windows needs a certain size "contiguous" memory block in upper memory. We ran MEMMAKER, loaded some of the network components LOW (rather than HIGH as we're all conditioned to do) and that seemed to take care of it!
------------------------------
Date: Tue, 18 Jun 1996 17:05:21 -0600
From: Joe Doupnik
Subject: Re: registering memory before disk drivers...

>With all the traffic lately about registering memory, etc... I'll throw
>this one out.
>
>I have a 3.12 486dx2-66 ISA server with 24MB ram and only 2 500MB volumes
>mounted. It has the appropriate register memory command in autoexec.ncf,
>and I thought about moving the reg. mem. command to startup.ncf like the faq
>recommends, but then I noticed that the load statement for my scsi
>host-adapter already has something about memory above 16MB on the load line.
>The scsi adapter is an adaptec 1542 and the load line in startup.ncf
>looks something like:
>
>load aha1542 above16mb=y

That's to let the Adaptec driver know whether or not it should use "bounce buffers" as an intermediary to transfer data between the adapter (limited by the ISA system bus to 16MB) and the rest of NW (parts above 16MB). It does not make memory available, nor do what REG MEM tries to, etc.

>(sorry I can't get to the server console just now so I can't give the exact
>line)
>
>My question: in this particular case should I still move the reg. mem.
>command to startup.ncf or is the scsi adapter driver smart enough to use
>the memory above 16MB anyway?

NetWare needs to know about the memory, else all is in vain. The MEMORY command shows how much NW knows at that time, but it does not show where software components reside (because of what memory was available at their load time). Thus NW should know how much memory the system has before loading anything, particularly before loading disk drivers and allocating file/disk data structures.
The FAQ discusses what happens to the latter when memory isn't known properly (you have more or less a 16MB machine in that case, with disk structures digging down to where other items reside). Novell isn't going to tell us about the memory allocation details so that we could second guess the consequences of loading things one way or another, nor to easily know where major components reside in memory. Hence we feel out the situation on live servers and try to create plausible reasons for what we see. Not science.
	Joe D.

>I haven't changed anything just because it ain't broke so it doesn't need
>fixing... I'm just wondering.
------------------------------
Date: Wed, 19 Jun 1996 10:27:07 +0100
From: Phil Randal
Subject: Re: registering memory before disk drivers...

On 18 Jun 96 at 17:05, Joe Doupnik wrote:

> NetWare needs to know about the memory, else all is in vain.
>The MEMORY command shows how much NW knows at that time, but it does not
>show where software components reside (because of what memory was
>available at their load time). Thus NW should know how much memory the
>system has before loading anything, particularly before loading disk
>drivers and allocating file/disk data structures.

I discovered by accident the other day an interesting Novell TID ("Memory Segmentation", TID2908018, dated 23MAY96). It explains why memory should be registered before volumes are mounted, and makes very interesting reading.

"As the first 16MB of RAM is registered with NetWare a cache control structure is allocated at the top of the 16MB block. Later, when the manual register memory command is used to register the remaining memory, another block of memory will be added. However, the cache control structure will separate the memory into two blocks or segments..."
------------------------------
Date: Thu, 20 Jun 1996 12:32:55 -0600
From: Joe Doupnik
Subject: Re: more on Registering Memory

>Joe Doupnik wrote:
>>The MEMORY command shows how much NW knows at that time, but it does not
>>show where software components reside (because of what memory was avail.
>>at their load time). Thus NW should know how much memory the system has
>>before loading anything, particularly before loading disk drivers and
>>allocating file/disk data structures.
>
>I've read the FAQ and followed the recent postings regarding
>"register memory" and the related topics. I'm still wondering
>if our server is really registering all the memory it has, and
>at what point.
>
>The server is EISA/PCI (we use the eisa utilities to configure the stuff,
>so I guess that makes it an EISA-bus server) with 256 MB RAM and two 4-GB
>RAID-based volumes. It runs 3.12, all patched. Each RAID volume runs
>off an EISA DPT controller, and there is another PCI Adaptec SCSI for
>the tape and cd-rom.
>
>I don't "register memory" anywhere, and have the following in startup.ncf:
> set reserved buffers below 16 meg = 200
> set maximum alloc short term memory = 32000000
>
>How do I even know if I'm doing things correctly, since "memory"
>reports all 256 MB?

That looks just fine to me. My acid test is to bring up the server with no autoexec and no startup: server -na -ns, and then type MEMORY. If the number is correct then the hardware is set up correctly.
A machine with an EISA bus needs the full EISA configuration treatment, even those with another bus on the motherboard too (say PCI).
	Joe D.
------------------------------
Date: Mon, 24 Jun 1996 07:55:34 +0200
From: Henno Keers
Subject: Re: Novell Patches, 312pt8

>I downloaded the latest and greatest...but what is LSWAP.EXE,
>LSWAP.NLM, and LOADER.EXE? Loader wants server.nlm, and lswap???

If you had read the accompanying .txt and readme files then you would have found that loader.exe is a DOS real mode OS loader that loads server.nlm (the actual server program) and searches the hardware for its memory size & so on. The new loader should now recognize all the memory in PCI style machines without the use of register memory (I hope so).
------------------------------
Date: Tue, 25 Jun 1996 09:50:11 -0600
From: Joe Doupnik
Subject: Re: Memory calculations in FAQ

>I was looking over the NetWare Memory Requirements in the FAQ (H.25.2) and
>was surprised to see that it included a recommendation of 0.4MB file cache
>per client. That would mean if I had a server with 100 users I would need
>40MB of memory just for file cache!
>
>I used to administer a server with about 110 users (concurrent), 1.8 gigs
>of disk and a handful of NLMs, with 24MB of memory and had no problems.
------------
The figure is from Novell, and represents connection information, open file information, some file buffering in both directions, and so forth. While Novell suggests linearly adding this per user I have commented that that's nonsense. Instead one should add at a rate of about the square root of the number of active users; that's an RMS result. This is stochastic (random) processes in action, where the standard deviation of the sum of many similar independent variables is the std dev of each times the square root of their quantity (engineers: add power, not volts). The simple reasoning is that file buffering space is largely transitory on a very short time frame and thus is normally shared rather than dedicated to each user for the duration.

Even then 400KB per user is still a large number, larger than I think is realistic. But for safety's sake it is not a bad starting point. Just don't multiply it by the number of active users; multiply by about the square root of the number of active users.
	Joe D.
------------------------------
Date: Thu, 11 Jul 1996 10:15:43 GMT+1000
From: Michael Strasser
Subject: Some memory numbers

A few weeks ago we added a second 4GB disk to our NetWare 3.12 server to expand its capacity. We were able to examine some memory use figures as reported by MONITOR. They make interesting reading and provide useful information for the "How much memory do I need?" issue.

The server has:
* Pentium 100 CPU
* ISA/PCI ASUS Motherboard
* 64MB RAM
* Adaptec 2940UW SCSI-Wide controller
* Quantum Grand Prix 4GB SCSI-Wide disk
* Seagate Hawk 4GB SCSI-Wide disk
* Other stuff (CD-ROM, 2 tape drives, etc.)

Some comments on the hardware:

1. We will upgrade to an EISA motherboard if we can afford it. I don't like having AUTOEXEC.NCF on C:, but it is absolutely necessary so the Adaptec driver knows about all the memory via NetWare (via REGISTER MEMORY before the disk driver is loaded), unless Novell releases a server patch that recognises PCI as a 'real' bus rather than merely a 'local' one.

2. The Quantum drive runs much hotter than the Seagate, and is much noisier. We are going to install a pair of fans in the box to help with the heat.

Now some figures.
------------------------------
Date: Thu, 11 Jul 1996 10:15:43 GMT+1000
From: Michael Strasser
Subject: Some memory numbers

A few weeks ago we added a second 4GB disk to our NetWare 3.12 server to expand its capacity. We were able to examine some memory use figures as reported by MONITOR. They make interesting reading and provide useful information for the "How much memory do I need?" issue.

The server has:
* Pentium 100 CPU
* ISA/PCI ASUS motherboard
* 64MB RAM
* Adaptec 2940UW SCSI-Wide controller
* Quantum Grand Prix 4GB SCSI-Wide disk
* Seagate Hawk 4GB SCSI-Wide disk
* Other stuff (CD-ROM, 2 tape drives, etc.)

Some comments on the hardware:

1. We will upgrade to an EISA motherboard if we can afford it. I don't like having AUTOEXEC.NCF on C:, but it is absolutely necessary so that NetWare knows about all the memory before the Adaptec driver loads (REGISTER MEMORY must be issued before the disk driver is loaded), unless Novell releases a server patch that recognises PCI as a 'real' bus rather than merely a 'local' one.

2. The Quantum drive runs much hotter than the Seagate, and is much noisier. We are going to install a pair of fans in the box to help with the heat.

Now some figures. We loaded MONITOR.NLM immediately and watched what happened as various disk volumes were mounted. (View this in a monospaced font to see the columns line up.)

Total server memory                                          65,638,848  100%

Base:
  Cache buffers: SERVER.EXE + MONITOR.NLM (+ AIC7870.DSK?)   65,096,064   99%

New disk volumes (no files):
  Base + SYS  (4K blocks, 2000MB, no namespaces, empty)      59,934,336   91%
  Base + MAC  (4K, 1000MB, no NS, empty)                     62,037,888   95%
  Base + APPS (4K, 1004MB, no NS, empty)                     62,037,888   95%

Old disk volumes (with files):
  Base + OLDSYS (8K, 1015MB, OS2+MAC, nearly full)           61,294,464   93%
  Base + LEAF   (8K, 3065MB, OS2, 90% full)                  59,710,464   91%

Name spaces added:
  Base + SYS (4K, 2000MB, OS2+MAC, empty)                    59,849,856   91%

All volumes old and new:
  Base + all                                                 48,445,584   73%

It was not possible to experiment with all permutations and combinations. I have copied files from OLDSYS to SYS (OS2+MAC namespaces) and MAC (MAC), and some files from LEAF (OS2) to APPS (OS2). OLDSYS is not currently mounted and will soon be recycled as another volume with MAC and OS2 namespaces.

As I write this, the server has 55% cache buffers (35MB out of 64MB) with 57 connections, 194 open files and these programs loaded: PSERVER, all of Mercury, ARCserve 5.01g, FAXserve 3.01a, UPS, RDATE, etc. Disk volumes mounted, name spaces and their 'occupancy rates' are:

SYS       2000MB   MAC + OS2   39% used
APPS      1004MB   OS2         30% used
LEAF      3015MB   OS2         83% used
MAC       1000MB   MAC         31% used
EXCHANGE  1015MB   MAC + OS2   32% used

Next time I down the server I'll look at figures with our current volumes and usage levels, and before there are any logins, so we can get an estimate of memory usage per login. We know that the Red Book formula is hopelessly wrong with >16MB RAM and multi-gigabyte disks. I haven't reviewed the latest estimated formulae for memory use, but I hope this information will contribute to real knowledge on the subject. Comments?
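[Ed. note: Michael's figures let us estimate the memory cost per allocation unit of mounting an empty volume. The Python below is our arithmetic, not a Novell specification; treating 2000MB as binary megabytes is our assumption.]

# Editors' sketch: infer bytes of cache consumed per allocation unit
# from the MONITOR figures above (empty 2000MB SYS volume, 4KB blocks).
base = 65096064               # cache buffer bytes, no volumes mounted
after_sys = 59934336          # cache buffer bytes after mounting SYS
consumed = base - after_sys               # 5,161,728 bytes
units = 2000 * 1024 * 1024 // 4096        # 512,000 allocation units of 4KB
print(consumed / float(units))            # about 10 bytes per unit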
---------
Date: Thu, 11 Jul 1996 08:18:05 +0200
From: Henno Keers
Subject: Re: Some memory numbers

You didn't give us any info on the number of files and their average size, so I can't use the formula in the FAQ exactly. However, I made some assumptions:

65 connections
3 MB for the NLMs
150,000 files occupying 3.5 GB

The calculation came up with 50 MB RAM, not counting the extra RAM needed for OS2 and MAC. I presume you should be safe. The formula in the FAQ is pretty good, although it calculates too large a memory value for each connection (connections x 400KB). According to Joe, it should be closer to sqrt(connections) x 400KB.
------------------------------
Date: Thu, 18 Jul 1996 07:48:21 +0200
From: Henno Keers
Subject: Re: above 16 meg of memory on a PCI mboard

>I just installed Netware 3.12 on a server with a Tyan PCI motherboard
>with 64 meg of ram. However, Netware doesn't recognize more than 16
>meg of memory. I checked, and:
>
>Auto Register Memory Above 16 Megabytes: ON
>
>is automatically set (even though I thought this was for EISA
>anyway). I'm stumped now. I've searched on Novell some without any
>luck. Anyone got any suggestions? And yes, the server does count up
>64 meg of memory and does recognize it in the bios. This is probably
>something easy that I'm just missing, but I can't for the life of me
>find it.

Get file 312pt8 from your closest mirror. It includes a new Loader.exe which is said (in the readme file) to detect memory in a PCI machine correctly. Use Lswap to apply it to a backup copy of your server.exe.
------------------------------
Date: Tue, 12 Nov 1996 10:12:53 -0600
From: Joe Doupnik
Subject: Re: Memory 4.11

>Netware 4.11 on a Pentium with 11 GB storage.
>
>According to the 4.0 manual (I don't have the 4.11 manuals yet):

Start by looking in our fine FAQ, since we have beaten this subject to death on the list. The manual has no basis in reality; disregard it. SMEM is the same as what I worked out in public about two years ago. Do use 64KB allocation units, with suballocation, and watch your memory consumption plummet. 6GB of disk fits onto a 32MB server with 64KB allocation units, NW 4.11. Add more memory for extra programs and better cache space.

>1) I have told SMEM.EXE that I will use 4 KB blocks. Since I will
>enable sub block allocation, could I not use a 64 KB block size and
>dramatically decrease required server ram?

See above. Please do run an experiment, think, and set up accordingly.

>2) I seem to vaguely remember reading somewhere that a PCI / ISA bus
>Pentium motherboard would only support up to 64 MB ram when running
>Netware. It may have been a chip set issue if I remember correctly.
>Anybody know about this?

Vague reading won't cut the mustard. Please look hard at the particular machine you have, not some vague kind mentioned by others. Cheap Windows boxes are indeed cheap boxes, so avoid them. State your specs to the vendors, and test the products before final acceptance.

>3) As I don't need the full 8 GB vol1: (see below) is there any
>problem with creating vol1: as 6 GB and later on adding more ram and
>increasing the volume size to the full 8 GB available. Can I expand
>my volume painlessly?

Please do read the fine manuals on volume creation. Hint: do it right the first time, and play about on test machinery rather than production machinery. Joe D.
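[Ed. note: A quick check of why the 64KB allocation units Joe recommends help so much (our arithmetic): the count of allocation units, and hence the FAT bookkeeping that must stay in memory, drops by a factor of 16.]

# Editors' sketch: allocation-unit counts for the 6GB example above.
disk_kb = 6 * 1024 * 1024        # 6GB of disk, in KB
print(disk_kb // 4)              # 1,572,864 units at 4KB
print(disk_kb // 64)             # 98,304 units at 64KB, 16 times fewer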
------------------------------
Date: Sat, 16 Nov 1996 01:05:56 -0800
From: Marcus Williamson <71333.1665@compuserve.com>
Subject: Additions to Novell FAQ

A Server Memory Calculator (SMEM.EXE) and a REGISTER MEMORY Calculator (REGMEM.EXE) can be obtained from:
http://www.connectotel.com/ctsoft.html
------------------------------
Date: Fri, 29 Nov 1996 15:59:50 -0600
From: "Mike Avery"
To: netw4-l@bgu.edu
Subject: Re: Too Many GPFs - Reply

This is off topic in a NetWare discussion list, but all of us DO have to support Windows, even if we think it's a dog, and there have been a number of questions on the matter. I've been stabilizing Windows for a while, and here is a list of things I look at when there is trouble. Every tip in this list has been researched, and each could probably warrant a note the size of this one to explain it fully. So, this will be the abbreviated version of the tips. If you want more information, please ask for it, either in or out of the discussion list.

1. As mentioned before, I try to upgrade to 3.11 or WFW; the memory requirements are smaller, and the product is faster, tighter, and more reliable. But don't despair, 3.1 can be reasonably reliable.

2. Memory management must be impeccable. In general, I don't trust Microsoft's ability to identify and include or exclude the memory in the 640K to 1MB range. So, I use includes and excludes to handle the matter. Usually I find that including the B000 page memory is more trouble than it's worth. Still, I often do it. Also, a number of PCs have surprises in their memory maps, especially 386SXs. Make sure you are using the same versions of HIMEM.SYS and EMM386.EXE. If they are different versions, "funny" things will happen. (Hint: if you are using c:\dos\himem and c:\windows\emm386, they are almost certainly different versions.) Also, use the matching SMARTDRV. My usual emm386 statement looks something like this:

device=c:\dos\emm386.exe ram on noems x=a000-afff i=b000-b7ff x=b800-c7ff i=c800-efff x=f000-feff

Sometimes, depending on the video card, you have to exclude the b000-b7ff range and may include the b800-bfff range. Some versions of emm386 get upset if you exclude memory all the way out to ffff. So, use the smaller number.

3. Some other config.sys options. Set FCBS=1,0. FCBs are not used by contemporary programs, so setting them to 1,0 saves memory. Set BUFFERS=9. On all the PCs I've used, higher numbers actually hurt performance. Also, BUFFERS duplicates the effect of a cache, so having both a large cache and BUFFERS acting as a cache is wasteful. Make sure you have FILES set to a reasonable number; 65 is usually a good number to start with. Each file entry uses some memory, so not overdoing it here will save some memory.

4. Make sure all your network and video drivers are up to date. The Microsoft home page has a lot of information, as do the vendors' home pages. If you have problems you can't resolve, try using the Windows VGA or SVGA drivers and see if that resolves the problem. If it does, call your video card vendor. Video card problems crop up all over the place as a wide variety of symptoms.

5. Some preventative medicine. When Windows, and Windows applications, terminate normally they are supposed to remove their temp files. Sometimes that doesn't happen. It pays to make sure that the Windows temp directory is clean at boot time. Too many files in this directory, or a pre-existing file with the same name that Windows wants to use, can cause GPFs. I add some lines to my autoexec.bat file to deal with the matter. Here they are:

set temp=c:\temp
set tmp=%temp%
if not exist %temp%\nul md %temp%
if exist %temp%\~*.tmp del %temp%\~*.tmp

I like my temp directory off the root, rather than under the windows directory. Your choice. However, since many applications require the temp (and tmp) variables, I prefer to use them in the last two lines, which ensure that the directory exists and that the purgeable temp files are gone. This step has GREATLY reduced the number of GPFs and other oddities my users suffer.

6. Swap file musings. As per a fair amount of information in the press, if you do not have a swap file, or you limit its size to less than 1MB, you alter Windows memory management behavior, and you will have performance troubles. Also, the code that handles swapping is rather crude, according to some sources. If you have too much space in the swap file, Windows will swap just for the heck of it, just because it can. As a result, the optimum size for a swap file is 4 megs for PCs with 4 megs of RAM, and 1 meg (1024K) for PCs with 8 or more megs of RAM. If you allocate too little, the worst thing that will happen is an "out of memory" error. If that happens, increase the swap file size. If that helps, put more memory in the PC, and reduce the swap file size again. Keep it small.

7. In the [386enh] section of the Windows SYSTEM.INI file you should add a line that reads "MaxBPs=768". MaxBPs is the maximum number of breakpoints that Windows will support. The default is 128 or so. When 386SX-20s were considered power machines, that was adequate. With higher speed machines, the 768 value is more appropriate.
8. "Out of memory"... There are a number of reasons why you can run out of memory. You can run out of graphics memory if you are using too many fonts, or if you have too many icons on your desktop. The higher the resolution you are using, the sooner you will run out. You can run out of real memory; loading a large spreadsheet could do that for you. Or... all programs have to allocate at least 800 or so bytes of memory in the lower 640K. If you load a single program that allocates all of the available 640K of lower memory, you will not be able to start another program, even if you have 64 megs of RAM in your PC. This frustrates users. "I have 128 megs of RAM, and all I can run is a single copy of WORD!!!" The answer is a free program from PC Magazine (http://www.pcmag.com) called 1mbfort, or 1 Megabyte Fortress. It keeps programs from allocating all of the memory space available below the 640K barrier. With it, users have been able to run 15 programs where before they were stopped at 4.

9. Another way you can run out of memory is in the system resources. There are three areas of system resources: the graphics memory, which has been briefly discussed, plus the system and user memory areas. If these get too low, your system will become unstable. The easiest way to monitor them is to use the SYSMETER package bundled with the Windows Resource Kit, which is available at no cost from the Microsoft ftp site. Look at ftp.microsoft.com in the peropsys\windows\public\reskit directory, read the readme's, and take it from there. I usually run SYSMETER minimized on the PCs of people who are having trouble.

[Floyd: Plug-In for Program Manager has an excellent way of showing Free Sys Resources.]

There's another interesting thing to be aware of with regard to system resources. Some programs "leak" resources. That is, they allocate the resources when they are invoked, but do not restore them to the system when they terminate. After you load and unload a leaky program a few times, your resources can be at critical levels even though nothing but Windows is running. One of the worst offenders has been Microsoft Word for Windows. Some early versions of David Harris' Pegasus for Windows were leaky, but that has been fixed for quite some time. My advice to people who have to use Word is to load it as late as possible, and then keep it loaded. Instead of terminating, just save your document and then open another.
10. GPFs. Once the above is taken care of, you may still have problems, so here are some suggestions. Log the errors. Keep a notepad handy and write down the error messages. Also write down the names of all the programs you have running at the time and what you were trying to do. "I had 20 programs running and then tried to save an Excel file to the network drive on the fatchance 4.1 NetWare file server" would be OK, if you wrote down what the other programs were. Also write down the contents of the GPF error window. The usual error is something like "Application error. WAD.EXE caused a General Protection Fault in module WAD2.DLL at 00A2:0A34." Write down the program name and the module name. If it happens once, it's usually not worth hunting down. If it happens several times, then it's time to track it down. Use the /s option on your dir command to find all your copies of WAD2.DLL. You want to make sure that you have only one version of that file and that it is in the Windows\System directory. If you have multiple copies, you'll need to find out if they really are the same routine. Sometimes it's obvious, sometimes it's not. The only place you should have .dll files is in the Windows\System directory, unless you are VERY sure about what you are doing. Having them elsewhere, or having duplicates, can cause problems. Once that's taken care of, check with the vendor for updates to the offending DLL and application. Quite often there are free updates to your applications that will resolve problems. This may not resolve all your problems, but it should get you closer to having a liveable system.
------------------------------
Date: Sat, 7 Dec 1996 10:28:43 -0600
From: Joe Doupnik
Subject: Re: Adding Memory to 3.12 Server

>Ok folks, this is probably an easy one (hope so). Currently I have 32 megs
>of memory in NW 3.12 server. Everything is fine. Memory command sees all
>32. Have bus that does not need Register Memory statement.

So which bus in particular? Or is this a pop quiz?

>Adding additional 32 megs of memory. Steps:
>
> Brought server down
> Opened up - added two new chips (already had two)
> When server turned on, DOS recognized all 64 megs (so far so good)
> When typing SERVER, get message "Can't start Novell 386, need at
> least 3 megs of extended memory"
--------
Remove DOS memory management, if any. Please read the list's fine FAQ for memory advice. Please realize that your machine may not be able to use those particular memory SIMMs (cold boot memory counts are a joke, not serious testing). Please ensure the server is fully patched, especially for PCI buses. Do spend time with the machine's configuration material - every single piece of it - to ensure the configuration is proper. Joe D.
------------------------------
Date: Tue, 14 Jan 1997 12:23:11 -0600
From: Joe Doupnik
Subject: Re: NW312 Memory Requirements

>>>Any suggestions as to how I should calculate memory requirements for
>>>OS2 and MAC name space support?
>>----------
>> Namespaces don't use memory.
>> Namespaces do consume space in directory cache buffers, meaning
>>more namespaces result in fewer directory caches.
>> Joe D.
>
>Joe, wouldn't that have an effect on a memory calc - to the extent that
>more directory cache buffers mean fewer free cache buffers available? If
>I remember rightly, though, it *should* not be a major amount of memory
>consumption (a couple meg per namespace?) - especially considering the
>much cheaper price of RAM these days.
-----------
To repeat: namespaces don't use memory. All that is going on is holding the same number of file names in the same number of directory cache buffers, but now for fewer files (name slots divided by the number of name spaces). How many dir cache buffers one uses is a system manager's decision, hopefully based on local conditions. There is no formula worth anything which can predict this without knowing local usage details. If you follow the logic then the story is clear. Joe D.
---------
Date: Wed, 15 Jan 1997 11:00:24 -0600
From: "Lindsay R. Johnson"
Subject: Re: NW312 Memory Requirements

Perhaps I'm a victim of ignorance, conflicting information, and/or poor word selection. It has been written, Part 1A:

> To repeat: namespaces don't use memory.
> All that is going on is holding the same number of file names in the
>same number of directory cache buffers but now for fewer files (name slots
>divided by number of name spaces). How many dir cache buffers one uses is a
>system manager's decision, hopefully based on local conditions. There is no
>formula worth anything which can predict this without knowing local usage
>details.
> If you follow the logic then the story is clear.
> Joe D.
And Part 1B:

>>I just went through calculations for Long Name Space Support (OS2.NAM in
>>NW4.10). At risk of disagreeing with Joe let me relate my impression. In
> It's not disagreeing. I thought I plainly pointed out that ADD
>NAME SPACE foobar volume does not use memory. It doesn't. But if the
>system manager wishes to change the number of dir cache buffers then
>that does change memory usage (up or down, entirely a local affair).
> Keep in mind that directory cache buffers are a cache, a reusable
>and expirable resource. How many such buffers is strictly a local affair.

Joe, I believe I understand what you're saying here. I believe verbiage to be the problem; my apologies. Let me quote the FAQ: "Having multiple Name Spaces loaded does not increase the memory used to cache the files themselves, only the memory caching the DET. Thus, to accommodate the additional memory needed to prevent loss of performance, increase the number in Line 6 above by 25 percent for each additional Name Space loaded." Line 6 is described as "Calculate the memory requirement for file and directory caching using the following table:" This is the information upon which I based my comments and the plan at my site. While this may be somewhat of a "fudge", it's the only information I can locate - that's why I termed it my "impression". The size of my DET in this one case would go from 86MB to 172MB. I'd love to have more definitive information from which to work. Any specific recommendations you may have would be greatly appreciated.

Now Part 2:

>>In general, this FAQ was educational. Unfortunately it came out claiming my
>>server needed only 85MB. It's been running with 160MB for 140+ days and
>>remains at 75% Cache Buffers. I shudder to think where I'd be if I'd only
>>installed 85MB!
>
> Back to flying by seat of the pants. I'm glad I'm not financing
>such things.
>
The "seat of the pants" to which you refer is the conflict between the real-world situation and the information in NAEC classes, on Novell's website, and in this list's FAQ. Let me reiterate my lament about the lack of definitive information for server planning, and my request for such specifics.

Finally, Part 3:

>>I've never heard of a server dying because of too _much_ memory!
>
> Well, welcome to the 90's and PCI bus machines unable to utilize
>memory above 64MB. It's nice to know that 64MB can do the job in a great
>many cases, but it may take an instrument rating to appreciate it.
> Joe D.

Do you have a reference for this information? I'd like to ask some pointed questions of my servers' manufacturer about now...
---------
Date: Wed, 15 Jan 1997 10:25:26 -0600
From: Joe Doupnik
Subject: Re: NW312 Memory Requirements

>Perhaps I'm a victim of ignorance, conflicting information, and/or poor word
>selection. It has been written, Part 1A:

It's not complicated. The components are:
a) how many files can be referenced in a given amount of directory cache buffer memory (name space dependent) - (16 / name-spaces) names per buffer, as I recall vaguely, and a directory cache buffer is 4KB; and
b) how many files need to have their names cached.
Realizing that caches are useful only on the second and later references (otherwise they are a drag on the system), item b) is a totally system-dependent consideration based on patterns of use by clients. I find that a few hundred cache buffers do a good job in my environment. If I raise the limit and look at MONITOR later, I often find the limit is not reached in practice. Windows, alas, often tries to see "everything" upon a single click and hence drives the system into a cache-exhausted state. If a file name is not cached then NW looks at the disk drive, and often that is acceptable and unnoticed. These cache buffers have a reuse time too. Directory cache buffer space is a one-way accumulation: what's acquired is not returned to a general pool. Thus I put limits on it to prevent hoarding. I find it a little amazing that 86MB or more is really needed as directory cache buffer space. Clients simply can't remember that many filenames, and one has to work hard to even have that many files to name (unless you have NEWS stored on the machine). But I'll take your word that you find it necessary (having tried less and found things sluggish...). Joe D.
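[Ed. note: Joe's two components can be combined into a rough sizing sketch. The 16-names-per-4KB-buffer figure below is his vague recollection above, so treat it as an assumption, and the function is ours, not a Novell formula.]

# Editors' sketch of Joe's directory-cache sizing components.
# SLOTS_PER_BUFFER is a vaguely-recalled figure, not a documented constant.
SLOTS_PER_BUFFER = 16      # file names per 4KB buffer with one name space
BUFFER_KB = 4

def dir_cache_kb(active_files, name_spaces):
    # component a): each added name space means fewer names per buffer
    names_per_buffer = SLOTS_PER_BUFFER / float(name_spaces)
    # component b): only actively referenced files need their names cached
    buffers = active_files / names_per_buffer
    return buffers * BUFFER_KB

# a "few hundred" buffers covers a few thousand active names:
print(dir_cache_kb(3200, 2))   # 400 buffers -> 1600 KB, well under 2 MB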
---------
Date: Thu, 16 Jan 1997 05:00:35 +0100
From: Bo Persson
Subject: Re: NW312 Memory Requirements

>From {"Lindsay R. Johnson" }
>Perhaps I'm a victim of ignorance, conflicting information, and/or poor word
>selection. It has been written, Part 1A:
>
>> To repeat: namespaces don't use memory.
>> All that is going on is holding the same number of file names in the
>>same number of directory cache buffers but now for fewer files (name slots
>>divided by number of name spaces). How many dir cache buffers one uses is a
>>system manager's decision, hopefully based on local conditions. There is no
>>formula worth anything which can predict this without knowing local usage
>>details.
>> If you follow the logic then the story is clear.
>> Joe D.

I'll give you an example:

On my server we had about 35,000 files before adding long file name support. At the time it used 200 directory cache buffers. After running Windows 95 for about a year, we now have 55-60,000 files and use about 800 buffers.

At 4KB a buffer, the directory cache uses about 1-3 MB of RAM for a server with 8-10 GB of hard disks.

If you want to, you could say that adding name space support used 2 MB of RAM. On the other hand, 2 MB is really _nothing_ for a server, so namespaces don't use (hardly any) memory.
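[Ed. note: Bo's buffer counts translate directly into bytes (our arithmetic):]

# Editors' check of Bo's numbers: 200 buffers before long names,
# about 800 after a year of Windows 95, at 4KB per buffer.
before_kb = 200 * 4           # 800 KB of directory cache before long names
after_kb = 800 * 4            # 3200 KB, about 3 MB
print(after_kb - before_kb)   # 2400 KB -- roughly the "2 MB" Bo mentions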
>And Part 1B:
>
>>>I just went through calculations for Long Name Space Support (OS2.NAM in
>>>NW4.10). At risk of disagreeing with Joe let me relate my impression. In
>
>> It's not disagreeing. I thought I plainly pointed out that ADD
>>NAME SPACE foobar volume does not use memory. It doesn't. But if the
>>system manager wishes to change the number of dir cache buffers then
>>that does change memory usage (up or down, entirely a local affair).
>> Keep in mind that directory cache buffers are a cache, a reusable
>>and expirable resource. How many such buffers is strictly a local affair.
>
>Joe, I believe I understand what you're saying here. I believe verbiage to
>be the problem; my apologies. Let me quote the FAQ: "Having multiple Name
>Spaces loaded does not increase the memory used to cache the files
>themselves, only the memory caching the DET. Thus, to accommodate the additional
>memory needed to prevent loss of performance, increase the number in Line 6
>above by 25 percent for each additional Name Space loaded." Line 6 is
>described as "Calculate the memory requirement for file and directory
>caching using the following table:" This is the information upon which I
>based my comments and the plan at my site. While this may be somewhat of a
>"fudge", it's the only information I can locate - that's why I termed it my
>"impression". The size of my DET in this one case would go from 86MB to
>172MB. I'd love to have more definitive information from which to work.
>Any specific recommendations you may have would be greatly appreciated.

It is very difficult to give any hard figures, as the environment changes all the time. We had a long discussion last year (initiated by Joe D) about how much memory you "really" need. I think the conclusion was, as usual, "it depends". Look for it in the FAQ section:

S.25 NOV-MEMx.DOC - Email thread on NetWare memory management

It contains a discussion of how _little_ memory you actually need, and notes that the "Line 6" part of the calculation is much better than the original Novell Red Book figures, but still an overestimate of the "real" amount of memory needed. We just couldn't agree on exactly _how_ much off it was. Adding an additional 25% for an extra name space is just plain wrong. Time for a FAQ update?

>Now Part 2:
>
>>>In general, this FAQ was educational. Unfortunately it came out claiming my
>>>server needed only 85MB. It's been running with 160MB for 140+ days and
>>>remains at 75% Cache Buffers. I shudder to think where I'd be if I'd only
>>>installed 85MB!
>>
Well, you would be down at around 50% Cache Buffers and probably running just as well. _You_ might need 120 MB of cache buffers, but I doubt it. I have been running down at about 30% cache buffers without problems (16 MB free out of 48 MB). I didn't have the opportunity to run it for 140+ days, but it ran a month or two between maintenance several times.

>> Back to flying by seat of the pants. I'm glad I'm not financing
>>such things.
>>
>
>The "seat of the pants" to which you refer is the conflict between the
>real-world situation and the information in NAEC classes, on Novell's website,
>and in this list's FAQ. Let me reiterate my lament about the lack of definitive
>information for server planning, and my request for such specifics.
>
Times change; it's hard to be definite! I am just building a new server, and was amazed to find that 128 MB of RAM cost me less than _one_ 4 GB hard disk. Maybe we don't have to count the MBs too carefully after all...

>Finally, Part 3:
>
>>>I've never heard of a server dying because of too _much_ memory!
>>
>> Well, welcome to the 90's and PCI bus machines unable to utilize
>>memory above 64MB. It's nice to know that 64MB can do the job in a great
>>many cases, but it may take an instrument rating to appreciate it.
>> Joe D.
>
>Do you have a reference for this information? I'd like to ask some pointed
>questions of my servers' manufacturer about now...

Some chip sets, like the original Intel Triton, aren't really designed for servers! A couple of years ago 64 MB looked like an infinite amount of RAM, and it wasn't considered that you would ever need more than that in a PC. History always repeats itself!

"640K ought to be enough for anybody." - Bill Gates, president of Microsoft, 1981
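[Ed. note: Bo's "around 50% Cache Buffers" estimate above checks out arithmetically (our sketch, reading "75% cache buffers" as 75% of installed RAM held as cache): about 40MB of the 160MB server is doing real work, and that same working set on an 85MB server leaves roughly half as cache.]

# Editors' check of Bo's estimate for Lindsay's server.
installed_mb = 160
cache_fraction = 0.75                  # 75% cache buffers reported
in_use_mb = installed_mb * (1 - cache_fraction)    # about 40 MB of real work
proposed_mb = 85                       # the FAQ's recommendation
print((proposed_mb - in_use_mb) / proposed_mb)     # about 0.53, i.e. ~50%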
---------
Date: Thu, 16 Jan 1997 10:11:22 -0600
From: Joe Doupnik
Subject: Re: NW312 Memory Requirements

>>From {"Lindsay R. Johnson" }
>>Perhaps I'm a victim of ignorance, conflicting information, and/or poor
>>word selection. It has been written, Part 1A:
>>
>>> To repeat: namespaces don't use memory.
>>> All that is going on is holding the same number of file names in the
>>>same number of directory cache buffers but now for fewer files (name slots
>>>divided by number of name spaces). How many dir cache buffers one uses is a
>>>system manager's decision, hopefully based on local conditions. There is no
>>>formula worth anything which can predict this without knowing local usage
>>>details.
>>> If you follow the logic then the story is clear.
>>> Joe D.
>
>I'll give you an example:
>
>On my server we had about 35,000 files before adding long file name support.
>At the time it used 200 directory cache buffers. After running Windows 95
>for about a year, we now have 55-60,000 files and use about 800 buffers.
>
>At 4KB a buffer, the directory cache uses about 1-3 MB of RAM for
>a server with 8-10 GB of hard disks.
>
>If you want to, you could say that adding name space support
>used 2 MB of RAM. On the other hand, 2 MB is really _nothing_
>for a server, so namespaces don't use (hardly any) memory.
>Bo Persson
>bop@gandalf.se
--------------
To finish the logic here. How many files are present on the disk farm has not a thing to do with directory cache buffer counts. Those files could sit untouched for eons. It is how many files and directories are actively referenced in a short time (tens of seconds or less) by users that causes seeks to cache and then to disk. Those buffers age away and can be reused, so it is the peak short-term demand that counts. Total file count is reminiscent of Novell's attempts at server memory calculation (% free cache buffers), based on indefensible criteria. NW does not cache all file entries. It does cache all disk block pointers (the FAT), but not all file entries. The file entries (directory cache buffers) are for an *active* collection of references. We can control their lifetime (console SET commands). If one creates directories with zillions of filenames therein, and only a small subset of those files is touched, then one wastes good resources scanning the names of dead wood. A little thought on file system organization might well improve performance, particularly with undisciplined Windows poking around. Joe D.
------------------------------
Date: Sat, 18 Jan 1997 10:00:37 -0600
From: Joe Doupnik
Subject: That 64MB barrier, again

There has been a rash of msgs on servers seeing only 64MB while more SIMMs are actually installed. This is a short note on possible reasons why. NetWare does not blindly test memory when it starts; to do so is dangerous to the health of the system. It calls on the system Bios to report its memory capacity. The normal call is Int 15h, function 88h, which returns 16-bit register AX with the number of 1KB blocks of memory above 1MB. Simple binary math says that can count only to 64MB. The system's CMOS memory may hold more information from the original Power On Self Test (POST) cold boot probes, if we could get a decent answer. EISA bus machines use this CMOS technique, and NetWare obeys the findings. For EISA bus machines (and perhaps others) the memory capacity must be set into CMOS by running a configuration utility shipped with the motherboard; automatic detection is most often wrong. Cold boot tests are not what callers see. HIMEM.SYS can get memory size information by probing, and it may (not always) make it available to the Bios for NW to see. DPMI memory management code is able to do this, but we aren't running that stuff on server machines. Recent patches to NW should help. Here is a snippet from file updates\nwos\nw410\410pt6.exe:

LOADER EXE
============
SYMPTOM: 3) ADDED support for using the new BIOS call int 15 sub function E8 for memory detection on PCI machines and other ISA machines that have more that 64 Meg of memory.

If your machine's Bios does not support the call it may not be able to reveal more than 64MB to NetWare. A Bios upgrade from the maker might help. Joe D.
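[Ed. note: The 64MB ceiling Joe describes is plain 16-bit arithmetic; a one-line Python check (ours):]

# Editors' sketch: why Int 15h function 88h cannot report more than 64MB.
# AX is a 16-bit register holding the count of 1KB blocks above 1MB.
max_blocks = 0xFFFF           # largest value a 16-bit register can hold
print(max_blocks / 1024.0)    # just under 64 (MB), plus the first 1MB
------------------------------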