---------------------------------------------------------------------
NOV-MEM1.DOC -- 19951208 -- Email thread on NetWare memory management
---------------------------------------------------------------------
Feel free to add or edit this document and then email it back to
faq@jelyon.com

Date: Wed, 24 Aug 1994 17:46:53 -0600
From: Joe Doupnik
Subject: Re: formula for calculating server RAM

>What's the formula again? The formula in the 3.12 Electrotext doesn't
>look right to me, since it doesn't factor in block size or have you count
>up how many NLMs you have loaded, etc.
---------
A live example: NW 3.12 (EISA), 3GB of disk with 4KB allocation units, NFS
name space on 1GB, Mac name space on a 10MB courtesy volume, lots of NLMs
(but no Arcserve), > 4000 cache buffers free with 32MB memory. 4K buffers
times 4KB = 16MB of buffer space sitting around, which seems to be very
adequate indeed. More files mean more FAT entries and hence more memory
consumption, and this machine does have a very large number of files.

My rough rule of thumb: 2.5MB for each GB of disk, plus o/s and NLMs, plus
1000+ free cache buffers (say 4KB each). Check your thumb by looking at
memory figures in Monitor.
        Joe D.

------------------------------

Date: Thu, 25 Aug 1994 08:55:25 -0400
From: "Jed Proujansky Greenfield Ctr. School"
Subject: Re: formula for calculating server RAM

The formula is:

    ??? x volume size / blocksize

where ??? is replaced with .023 for DOS, or .032 for Name Space, Unix or
Mac. Then add to that 2 meg for the kernel RAM. Round up to the next whole
meg, and then round up to the unit of memory needed for your machine.

Example 1: 500 MB DOS system with 4k block size.
    0.023 (DOS) * 500 MB / 4 (block size) = 2.875
    2.875 plus 2 MB for kernel = 4.875, so you probably need at least 6 MB.

Example 2: 2.5 GB DOS system with 4k block size.
    0.023 * 2500 / 4 = 14.375
    14.375 plus 2 MB for kernel = 16.375, therefore the minimum would be
    20 MB.

This formula is based on the fact that most RAM is needed to do file
caching and directory caching. Therefore it is directly related to the
size of the drive and the operating system that is storing the
information.

My personal recommendation is to use more. The more RAM, the faster the
system and the fewer the problems. The best performance per dollar is
gained by adding RAM in a 2.x or 3.x system.

------------------------------

Date: Thu, 25 Aug 1994 14:47:54 LCL
From: Paul Badour
Subject: Re: formula for calculating server RAM

From the session titled: Server RAM Sizing and Cache Tuning

For 4.01 servers:

RAM:
  Core NetWare operating system           5 MB
  Media Manager                           .5 MB per GB of disk
  Connections in use                      2 KB per user connection
  Packet receive buffers                  2.3 KB per buffer
  Directory cache buffers                 4.3 KB per buffer
  Service processes                       9 KB per service process
  File compression enabled on any volume  250 KB
  FAT tables                              (total # of volume blocks) * 8.2 bytes
  Block suballocation enabled             ((Blocksize * 2) - 1) * 4096 bytes
                                          + (5 * Number of files) bytes
  Directory entry tables                  Number of files * 10 bytes

NLM Requirements:
  BTRIEVE.NLM   700 KB
  CLIB.NLM      500 KB
  INSTALL.NLM   600 KB
  PSERVER.NLM   200 KB
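[FAQ note: the component list above converts readily into a back-of-the-
envelope calculator. The Python sketch below just totals the session's
figures; the constants are as quoted, the Blocksize in the suballocation
term is assumed to be in KB, and NLM sizes must be added separately, so
treat the result as a ballpark only.]

  # Rough 4.01 server RAM estimate from the per-component figures above.
  def nw4_ram_kb(gb_disk, users, rx_buffers, dir_buffers, svc_procs,
                 volume_blocks, files, block_kb=4,
                 compression=False, suballocation=False):
      kb = 5 * 1024                        # core NetWare OS: 5 MB
      kb += gb_disk * 512                  # Media Manager: .5 MB per GB
      kb += users * 2                      # 2 KB per connection
      kb += rx_buffers * 2.3               # packet receive buffers
      kb += dir_buffers * 4.3              # directory cache buffers
      kb += svc_procs * 9                  # service processes
      if compression:
          kb += 250                        # any volume compressed
      kb += volume_blocks * 8.2 / 1024     # FAT tables
      if suballocation:                    # Blocksize taken as KB here
          kb += (((block_kb * 2) - 1) * 4096 + 5 * files) / 1024
      kb += files * 10 / 1024              # directory entry tables
      return kb                            # add your NLM sizes on top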
------------------------------

Date: Fri, 26 Aug 1994 10:13:00 +0100
From: Albert Eric Van Der Most
Subject: Formula for calculating server-RAM

When calculating server RAM for Novell 3.x you mustn't forget to add room
for the loading of NLMs. You should also check the memory requirements of
the NLMs themselves. When using a standard installation of Novell you
won't be using that many NLMs, but when you load several NLMs for UPSes,
anti-virus software, Btrieve, Interbase and things like that, you might
end up with very little room for cache buffers.

For instance: a standard file server with one 1GB disk and a blocksize of
4K needs at least:

    0.023 * 1000MB / 4 + 2 = 5.75 + 2 = 7.75 ~ 8 MB of RAM.

But that starts off with approx. 70% cache buffers. When you load some
NLMs like Btrieve, ArcServe etcetera you might start your server with less
than 50% cache buffers. While the file server is in use the NLMs might
take more memory. NetWare also gobbles up more memory for the permanent
memory pool. This can very soon leave you with less than 20% cache
buffers, which places your volumes at risk.
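[FAQ note: the .023/.032 formula quoted in the two messages above, as a
small Python sketch. The factors and the 2 MB kernel figure are exactly as
posted; nothing here accounts for extra NLMs, which is Albert's point.]

  def nw3_ram_mb(volume_mb, block_kb=4, factor=0.023, kernel_mb=2):
      # factor: 0.023 for DOS-only volumes, 0.032 with an added name space
      return factor * volume_mb / block_kb + kernel_mb

  print(nw3_ram_mb(1000))   # Albert's 1GB example -> 7.75, round up to 8 MB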
------------------------------

Date: Wed, 13 Sep 1995 22:05:49 -6
From: "Mike Avery"
To: netw4-l@bgu.edu
Subject: Re: memory issues

>We are having big-time memory problems in our division due to a
>combination of soundcard drivers, cdrom drivers, and all the network
>stuff taking up conventional memory, on top of having to run a few apps.

Doncha just hate it when the apps get in the way?

>Ideally, our users need to run a bare minimum of:
>a soundcard driver (which may vary)
>a cd rom driver (which, again, may vary)

I have to raise an eyebrow or two here..... it seems odd that they all, or
almost all, need sound cards with drivers. But, I can swallow that.

As to the CD-ROMs, there are a number of NetWare shareable solutions that
save time and money, both at purchase time and in maintenance. And, the
NetWare based solutions require no dedicated I/O cards in the PCs, and at
most one driver (if the software on the CD-ROM absolutely requires
MSCDEX). This move helps a lot in reducing ram-cram and maintenance
nightmares. The speed is entirely adequate in most cases.

>LSL
>MLID
>TCPIP (from LanWorkplace 4.2)
>IPTUNNEL
>IPXODI
>VLM

If you need IP connectivity, you might consider moving towards NWIP. This
will reduce the number of stacks you are supporting, and reduce the number
of drivers you are loading. Also, keep an eye out for the 32 bit DOS
drivers. They load entirely into extended memory.... which frees up high
and conventional memory.

>vi-spy (a virus checker)

I have mixed feelings about resident virus checkers. I'd rather do a
pre-login scan of memory and a few key files and then have an NLM based
anti-virus checker. Then again, I really don't care that much about my
users' PCs..... it's the network that really counts to me.

>On top of this, they want to run (from within windows):
>Eudora
>Meeting Maker
>Oracle tools
>bits and pieces of the MS Office suite (word and excel mostly)
>Netscape
>Filemaker Pro or their own DOS database
>Host Presenter (or some telnet client)
>Winsock FTP
>And some users need to run lotus 123 and/or WordPerfect.
>Right now, if they get all the first set running, they can't get more than
>eudora or maybe word to run.

This is a user education issue. No resources will ever be adequate if they
are not well managed. PC memory is such a resource. I'll return to some
specifics here in a few paragraphs.

>One of our memory gurus has spent a lot of time with EMM386
>including memory blocks to allow for more conventional memory, but
>it just ain't happenin'.

That's the first step...... and it's real easy to dig yourself a grave in
a number of ways. If you succeed, you've created a very labor intensive
cottage industry. And every time the users' needs change, you get to
re-optimize the memory. I've written programs to optimize the memory
setup.... and they aren't pretty either. On the other hand, if you fail,
then you face the problem that your users have reduced reliability and
robustness on their PCs.... and you and your staff get a bad reputation.

All in all, I suggest fairly conservative memory management. Reclaiming
the video space, I am becoming increasingly convinced, is about the most
expensive 32k of memory you can find.

>This, to me, is not a happy thing. If any of you have any great
>ideas as to things to try, or 3rd party apps which might help (QEMM
>makes things worse, by the way, since the soundcard drivers tend to
>be greedy), we'd very much like to hear from you.

QEMM can recover some memory compared to EMM386. At the price of stability
concerns. I consider QEMM to be an excellent product for the hacker or
tweakoid who has the time, desire, or need to play with their setups
endlessly. All in all, I feel that it is an unwarranted expense for
businesses.

Now then... let's talk about Windows..... Microsoft leads users to believe
that they can run as many programs at once as they want to. It just ain't
so. If the user tries this, they will be unhappy. The first symptom that
there is a problem is the dreaded "out of memory" error. Sadly, the out of
memory error message can have at least 3 different causes, and it won't
tell you what the problem is.

Each program under Windows 3.X MUST have at least 800 bytes of
conventional memory. If a large program has allocated all the conventional
memory, there isn't enough left to start another program. This leads to
angry phone calls: "I've got 32 megs, and I'm running out of memory." The
PC-Magazine free utility 1mbfort addresses this issue very well.

Each program also requires graphics resources. But all the graphics
resources are in a single 64k page of memory. Too many fonts can eat this
up.... as can too many icons in a folder. There is also user memory. And
under some circumstances, that can get filled up too. Using the SYSMETER
program that was shipped with the resource kit will help you determine
what the memory status is. I tend to load it onto troubled users' PCs, and
tell them it's like a gas gauge. This greatly reduces the number of user
calls.

A final problem to discuss is Word, and programs like it. For ever so long
Word has suffered resource leaks. That is, when the program is invoked it
allocates resources; when it terminates, it does not release them all.
Word is notorious for this. There are two solutions.... don't use Word, or
use it only when you have to: delay loading the program until you really
need it, and then leave it loaded until you shut down. Every time you
re-invoke and exit the program, you lose a little more memory to the leak.
The use of SYSMETER can tell you which programs suffer a resource leak.

Mike Avery

------------------------------

Date: Sat, 7 Oct 1995 09:52:23 CST
From: Mike Williams
Subject: Re[2]: I need help: Autistic Fileserver!

>>memory stats (source: MONITOR):
>>
>>  Perm. Mem Pool:   1,385,556 Bytes   9%
>>  Alloc Mem Pool:     259,564 Bytes   2%
>>  Cache Buffers:    4,726,400 Bytes  30%
>>  Cache Mov Mem:    7,114,920 Bytes  46%
>>  Cache NonM Mem:   2,059,360 Bytes  13%
>>  Total Server WM: 15,545,800 Bytes
>
>Don't know about the rest but I can tell you that you are begging for
>trouble with 46% Cache Movable Memory. I had a server thrash itself to
>death because of this low number.
>Didn't understand what was going on at the time (it was the first time I
>cored a server) so I called Novell and was told that any time this number
>gets below 50% you are on the fringe of disaster. My guess is that you
>have a pile of NLMs loaded and/or a huge number/quantity of files and
>other resources that the server must track.
>
>If you have a bunch of deleted files sitting on the server then you can
>improve your numbers and overall speed by purging them off the server.
>
>Unload anything you don't need and load those NLMs last in the ncf file.
>If you load and unload an NLM a lot you might want to get a handle on
>what resources it isn't giving back, if any, just so you are aware of
>what effect they have.
>
>Novell told me that a number over 60% is preferred and a number at or
>below 40% is an almost guaranteed crash and at minimum will slow your
>server to a crawl.

I think you mean "cache buffers", not "cache movable memory". Cache
buffers is the pool of available memory which gets allocated to other
areas as needed. This is the critical area that you want to keep above
60%. You should get nervous when it's around 50%, and you should start
watching the want ads if you let it get below 30%. You will definitely
lose data.

Cache movable memory is one of those pools which takes from cache buffers
as needed, then releases its memory back to cache buffers when no longer
needed. This happens automatically. This pool gets used for things like
the directory entry table (DET) and file allocation table (FAT).

BTW, cache non-movable memory is another pool which behaves the same as
cache movable memory (expanding and contracting). This is where the NLMs
get loaded and unloaded. The memory here gets returned to the cache buffer
pool when no longer needed.

Permanent memory also takes from cache buffers, but doesn't give it back
when it's done. That's why you have the 3rd column (Don doesn't show it
here) that shows how much of it is IN USE. You want to watch this, because
the difference between the size of the permanent memory pool and the
amount of it that is in use equals the amount of RAM that is being wasted.
This wasted memory is only available for things that take from the
permanent memory pool, such as the semi-permanent memory pool (disk
drivers, CD-ROM drivers, and LAN drivers) and the packet receive buffers.
(The alloc memory pool also takes from the permanent memory pool; see the
next paragraph.) The only way to return this memory to the cache buffer
area is by rebooting the server.

The alloc memory pool is another one to watch. It draws from the permanent
memory pool, and it doesn't give it back. So, you compare the size of the
alloc memory pool with the amount of it that is in use. The difference
between the two is being wasted. The only way to return this memory to
cache buffers is to reboot the server. The alloc memory pool is used for
things like popup window screens created by NLMs such as MONITOR.NLM. It
is also used to hold broadcast messages before they're sent, etc.

                      CACHE BUFFERS
                       |         |
          (returnable) |         | (nonreturnable)
           ____________|         |_____________
           |        |                         |
           |        |                         |
        MOVABLE  NONMOVABLE               PERMANENT
                                           |     |
                              (returnable) |     | (nonreturnable)
                                    _______|     |____
                                    |                |
                                SEMIPERM         ALLOC MEM
                                MEM POOL           POOL

[Thanks Mike Williams]
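[FAQ note: a tiny Python sketch of the "wasted RAM" arithmetic Mike
Williams describes. The permanent and alloc pools grow out of cache
buffers and never shrink, so pool size minus the IN USE figure is RAM you
only get back by rebooting. The pool size below is the stat quoted at the
top of this message; the in-use figure is made up, so read yours off
MONITOR's third column.]

  def wasted_bytes(pool_size, in_use):
      # RAM stranded in a one-way pool; reclaimable only by rebooting
      return pool_size - in_use

  print(wasted_bytes(1385556, 900000))   # hypothetical: ~474 KB stranded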
------------------------------

Date: Mon, 9 Oct 1995 10:27:56 -0600
From: Joe Doupnik
Subject: Re: eisa pc did not rec. mem bel. 16 mb

>Recently I tried to install NW 3.12 on an EISA Compaq PC
>with 40 meg of RAM. But NetWare only recognized 24 MB of RAM.
>And the surprising fact was that this was the RAM between
>16 and 40 MB. The server program started with the message
>"loading OS at 1000000", which means that the first 16 MB
>were not recognized at all.
>One of the results is that it is not possible to login from
>any workstation.
---------
Goodness, a new variation on this theme. I think we had better ask if, by
mistake, any DOS level memory management software were loaded before
starting server.exe. There should be none (with one possible exception).

I don't have a Compaq EISA machine to examine, but a suggestion is to run
the entire configuration process over again in detail, at both ISA BIOS
level and again with the EISA config utility. Watch out for cache memory
limits being set beyond real memory.

Now the exception mentioned above is that on a few EISA machines one needs
to run HIMEM.SYS only to get all of memory known at DOS level. Strange but
true. It does not manage memory, but it seems to get system parameters
straightened out at DOS level just enough to launch server.exe in good
shape. Such machines ought not be used as servers.

Finally, if the server has a really nifty video board, of say the ATI
equivalent, where its video memory page buffer maps itself to the system
just above all of real memory, then confusion can ensue.

Finally**2, Compaqs are known for being very picky about the memory chips
in the machine. If you have mixed kinds, or parity and non-parity, or
SIMMs for non-Compaq machines, then expect trouble.
        Joe D.

------------------------------

Date: Mon, 23 Oct 1995 08:20:05 -0600
From: Joe Doupnik
Subject: NW server memory

I thought I'd start a thread on the topic of how much memory needs to go
into a NW server, and start by indicating a little personal history of
this weekend and thoughts thereupon.

Conventional wisdom/Red Manuals say we should keep our servers well
stocked with memory such that after it has been running for some time the
free cache buffers (MONITOR, RESOURCE) should be around 66% or so. That
turns out to have been good advice many many years ago, but not today.

Being conservative I adopted this rule of thumb this weekend. NW 3.12
server netlab2.usu.edu has a 5GB disk farm and 64MB of SIMMs. After all
the sundry NLMs load and the BackupExec tape software eats buffers etc.
the machine has the magic 66% free quantity. But... volume sys: needs to
be doubled from 2GB to 4GB to hold the ever expanding Novell archives.
That done, I'd be below the mark after adding another disk drive. I backed
up to tape and rebuilt the server with 8KB disk allocation units (versus
the default 4KB), and that did give more free memory. You see, the largest
memory consumer is normally the FAT structure, followed by directory
tables. Taken together they eat about 2.5MB per GB of disk space with 4KB
allocation units, or half that with 8KB allocation units. I started with
about 44MB of free cache buffers and ended up with about 48MB free. But
both values are huge amounts of memory! There is no need for that quantity
of buffering/caching in this machine. Something does not compute.

Looking at a second NW 3.12 server: 2GB disk, 32MB, four dozen heavy
users, read-only to them. 15MB free cache buffers and a very happy server.
Now if that many people can work nicely with 15MB of buffering available
to handle dynamic conditions, then adding more disk space ought not
require more memory than the FAT+dir item mentioned above. If this server
were to have say 10GB additional disk farm added, then it would need about
64MB and still have that 15MB free for dynamics. The new disks would
consume 10 * 2.5MB plus a little for safety. The percentage free cache
buffers is a meaningless number.

The consequence is servers often need much less memory than folklore would
recommend, and that's important. How much memory is really needed? Good
question, no solid answers. The answer is really user dependent. First we
load up the server with all the many NLMs needed in active duty. That's
the base memory requirement. Then we add users. Each user and open file
consumes some memory, and the file read/write part needs comfy elbow room
to buffer material (and that's our free cache buffer target). Add users
and some files, then add many MB for elbow room. Many MB is vague,
purposefully so because I don't have a magic value, but a dozen is a
generous first guess.

From this we see that the first server with 64MB has a large capacity to
absorb more disk drives. Half the memory could have been removed without
causing problems, meaning 5GB in 32MB with 4KB allocation units and 12MB
free cache buffers (12/64 = 18% free, yikes). I'd say the average server
is probably fine in 32MB unless the disk farm is rather large. Take a look
at MONITOR to see where your memory is allocated.
        Joe D.
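[FAQ note: Joe D.'s per-gigabyte figures as a Python sketch. The 2.5 MB/GB
constant (FAT plus directory tables, at 4 KB allocation units) and the
halving at 8 KB units are his estimates from the message above, not Novell
numbers.]

  def static_disk_mb(gb_disk, alloc_unit_kb=4):
      per_gb = 2.5 * (4.0 / alloc_unit_kb)   # 2.5 MB/GB at 4 KB units
      return gb_disk * per_gb

  print(static_disk_mb(5))      # 5 GB farm, 4 KB units -> 12.5 MB
  print(static_disk_mb(5, 8))   # same farm, 8 KB units -> 6.25 MB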
------------------------------

Date: Mon, 23 Oct 1995 17:16:44 -0600
From: Joe Doupnik
Subject: NW server memory, cont'd

Well, my message on this topic earlier today generated exactly zero
interest so far, yet it's an often asked expensive question. Let me try
one more time in simpler terms.

How much memory does a NW server require? Divide server memory into two
categories:

System - memory for the operating system, which includes server.exe and
NLMs (many acquire memory dynamically as they load), backup tape program
buffers, comms receive buffers, directory cache buffers.

User - memory for each login, which includes both the formal login
bookkeeping as well as an average number of open files. Open files are
remembered on the server too, since we need to know who owns what and
locking and all that jazz.

Assume you have a nicely operating server, with 32MB as an example. You
decide to add 10GB more disk farm and are worried about server memory.
Each GB uses about 2.5MB to hold static disk structures (the FAT at 2MB/GB
and some directory info for the rest, all figured at 4KB disk allocation
units). That means another 25MB for disk stuff. Add 32MB and put an even
dozen new GB on the box. NW 4 suballocation is not a free lunch, because
suballocation consumes memory to keep books on the sub parts.

The now 64MB server has a much much reduced percentage free cache buffers,
well below the quoted 66% safe lower threshold. We gobbled the new memory
with disk static structures. Users do what they did previously and use
exactly as much memory as previously. Category User remains the same,
Category System needed more memory for disk structures. You are home free.
THE PERCENTAGE FREE MEMORY FIGURE MEANS NOTHING.

How much memory does an average user consume? We don't know at this
moment, but some crystal ball work says 3-6 users probably fit into 1MB of
buffering, maybe more per MB. Gee, that's not all that much on an average
server. It's the Category User area that is involved.

So, when sizing up a server, measure how much memory the system components
consume by loading up those NLMs. Allocate the disk/comms maximums and
load up your tape backup software.
Look at the memory used (total - free). That's the Category System part.
Estimate the user population, guess 3-6 users per MB of server memory for
bookkeeping. Add a few MB for general systems temporary use (such as dirty
cache buffers going to disk) and that's that. Double check the user figure
by firing up Excel/WordPerfect on each machine as a file quantity hog
test.

What we need to figure out is just how much an average user needs in
server memory. Because then, when someone asks the How Much Memory
question, we can ask in return How Many Users and what NLMs, rather than
knee-jerk repeating Novell's ancient and unrealistic memory formula based
on disk capacity.

What this means is a rather large number of servers probably have much
more memory on board than will be used, and that we can add disk drives to
them without adding memory. If NT and Unix can live in 32MB with lotsa
disk then NetWare can too, and do even better (even though it has no swap
file). Maybe NW servers have an undeserved reputation for needing lots of
memory.
        Joe D.
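[FAQ note: Joe D.'s sizing procedure, condensed into a Python sketch. The
3-6 users per MB of bookkeeping and the dozen-MB elbow room are his rough
guesses from the two messages above; measured_system_mb is a hypothetical
name for whatever MONITOR shows as used (total - free) with all NLMs
loaded and no users on.]

  def server_ram_mb(measured_system_mb, users,
                    users_per_mb=4, elbow_room_mb=12):
      return measured_system_mb + users / float(users_per_mb) + elbow_room_mb

  # e.g. 20 MB measured with everything loaded, 50 users:
  print(server_ram_mb(20, 50))   # -> 44.5, so a 48 MB box has margin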
------------------------------

Date: Mon, 23 Oct 1995 21:20:00 EST
From: SUMBILC
Subject: NW server memory, cont'd

*** Reply to note of 10/23/95 21:21

I guess you're right regarding NetWare's memory requirements. In our case,
we have a 3.12 server with 16MB RAM, all fixes and patches applied, with
an NLM-based antivirus tool and around fifty users logged in at the same
time. It works well for months with available cache buffers hovering
around 48-50% of the system's total RAM. Workstations also load Excel/Word
off the network without much in the way of server memory problems. This is
on a gig of disk space.

Del C. Sumbillo

------------------------------

Date: Tue, 24 Oct 1995 08:19:51 +0000
From: Robin Dinerstein
Subject: Re: NW server memory, cont'd

On 23 Oct 95, Joe Doupnik wrote:
{various snips}
>The now 64MB server has a much much reduced percentage free cache
>buffers, well below the quoted 66% safe lower threshold. We gobbled
>the new memory with disk static structures. Users do what they did
>previously and use exactly as much memory as previously. Category
>User remains the same, Category System needed more memory for disk
>structures. You are home free.
>
>THE PERCENTAGE FREE MEMORY FIGURE MEANS NOTHING.

I'd agree with you Joe, but I think there is something that needs to be
taken into account. On a smaller server, ie 25 users, 1GB, 16MB RAM or
lower, I think that the percentage figures would be very important, as
they bear some relation to NetWare's (and other NLMs') core requirements.
Drop too low and you end up with modules that won't load, or a poorly
performing server. When you get into the realms of more disk space, I
think the amount of memory really required is proportionally related to
the number of users. However, as this again would be dependent, as you
say, on how much memory an average user on your own system uses, a more
generic formula seems to have evolved which in general seems quite
wasteful.

------------------------------

Date: Tue, 24 Oct 1995 09:26:27 EST5EDT
From: David Jackson
Subject: Re: NW server memory, cont'd

>Well, my message on this topic earlier today generated exactly
>zero interest so far, yet it's an often asked expensive question. Let
>me try one more time in simpler terms.
>
>How much memory does a NW server require?

Joe, I hope--and think--you are wrong about interest. I have installed,
supported and administered NetWare servers since 2.0a and have subscribed
to this list for many months--and I can think of few strands more
intriguing or important than this one. Even folk who may have limited
interest in what's 'really going on under the hood' should recognize the
real world (read: $$$) upshot of your arguments.

My guess is that many of us are having some trouble suspending belief--not
in the Red Book formulae (those are patently silly, especially for those
of us who tend heavily patched 3.11 servers with all manner of things
going on in them that Novell did not speak to years ago), but in the
folklore gospel that NetWare has a virtually insatiable appetite for
memory and that the central key to keeping it purring contentedly is to
feed it, big time.

And I suspect we are having some trouble grasping your argument--again,
not because it is arcane or its upshot is baffling, but because it opens a
Pandora's Box of issues and questions, many of which are evidently very
situational. It also suggests that conditions vary a great deal (as we all
know, of course, but maybe much more than we have thought we knew) on
system tuning. These can be rather 'scary' notions: a calculus this
complex, and perhaps indeterminate, is not an attractive prospect to those
with many duties and a user clientele that has little patience for system
problems. (Just throw more memory at it and get/keep it up and running:
clearly this is a simpler and more expedient solution, especially when it
can be assumed endorsed by all the 'experts'....)

In any event, I for one hope this thread is caught hold of and we are able
to run with it. The issue is fundamental and not quickly or easily
resolved; it calls for considerable thought, research and--maybe most of
all--experimentation, for it is a matter that will be enlightened mostly
by empirical data. I wish I had something to contribute. At this point, at
least, I don't--but I have a server needing disk right now and I may be
able to summon the courage to do a bit of experimenting....

------------------------------

Date: Tue, 24 Oct 1995 11:12:52 -0600
From: Joe Doupnik
Subject: Re: NW server memory, cont'd

>On 23 Oct 95, Joe Doupnik wrote:
>{various snips}
>>The now 64MB server has a much much reduced percentage free cache
>>buffers, well below the quoted 66% safe lower threshold. We gobbled
>>the new memory with disk static structures. Users do what they did
>>previously and use exactly as much memory as previously. Category
>>User remains the same, Category System needed more memory for disk
>>structures. You are home free.
>>
>>THE PERCENTAGE FREE MEMORY FIGURE MEANS NOTHING.
>
>I'd agree with you Joe, but I think there is something that needs to
>be taken into account. On a smaller server, ie 25 users, 1GB, 16MB RAM
>or lower, I think that the percentage figures would be very important,
>as they bear some relation to NetWare's (and other NLMs') core
>requirements. Drop too low and you end up with modules that won't
>load, or a poorly performing server. When you get into the realms of
>more disk space, I think the amount of memory really required is
>proportionally related to the number of users. However, as this again
>would be dependent, as you say, on how much memory an average user on
>your own system uses, a more generic formula seems to have evolved
>which in general seems quite wasteful.
>
>Robin
---------
I don't blame anyone for being cautious here; I am too. Let's take apart
the situation analytically as far as we can.
Those NLMs and whatnot sit in the server when no one is logged in, and
they constitute the Category System memory consumption. That is
independent of the user count, license, etc. We see that figure in
MONITOR, RESOURCES, when users are shooed away. Look at the amount of free
memory then too, in bytes and not percentage. Percentage is a totally
meaningless number. That free memory is what your users can consume, and
some of it is needed for dynamic situations such as queueing to disk etc.
(very short term transients).

Imagine adding many GB (we can dream) and another set of memory SIMMs to
hold the static (Category System) disk data structures, at about 2.5MB per
GB of disk, and we keep adding nifty drives until the no-user free memory
is once again at the same number of bytes as when we started. Let's say we
got 12GB out of this setup. Now, the "Category User" memory is the same
number of bytes as originally. Let users login and work in exactly the
same way, and hence use exactly the same number of bytes in the server as
they did previously, etc. The percentage free memory is a tiny number. At
this point logic says that percentage free is meaningless.

You can see why I split memory into System and User categories: one
remains fixed in size regardless of the number of logins, the other
depends in size on the number of logins and files open but not on what
else exists in the server.

Conservative engineers always leave a margin for error. We do too, by
staying away from limits of the system. In this case we'd leave several MB
of memory free after all our calculations are done, and that's wise. We
explain it away as dynamic elbow room, and indeed it is just that.
        Joe D.
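[FAQ note: the thought experiment above in numbers. Given the free bytes
MONITOR shows with no users logged in, and Joe D.'s ~2.5 MB of static
structures per GB (4 KB allocation units), this Python sketch estimates
how much disk you can add before free memory drops back to a floor you
choose. The example figures are illustrative only.]

  def addable_gb(free_mb_no_users, keep_free_mb, mb_per_gb=2.5):
      return (free_mb_no_users - keep_free_mb) / mb_per_gb

  print(addable_gb(44, 14))   # 44 MB free, keep 14 MB -> room for 12 GB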
Workstations also load Excel/Word off the network with not much of >server memory problems. This is on a gig of disk space. > >Del C. Sumbillo Yes, but isn't Joe D. suggesting an even more radical change in memory requirement "theory". Your setup above is fairly standard and appears to fall within the traditional view of memory requirements. What Joe D. seems to be saying (and I am not qualified to put words in his mouth) is that the available cache buffers you show (48-50%) coupled with a logical assumption on observed memory usage means that a large portion of that RAM goes unused. I've been reading Joe D.'s two emails and don't see any flaw but I don't have the tools to test his supposition. One question that comes to mind is whether memory usage on a server experiences some transitory spiking that requires all of that elbow room. Let's face it, many of us have seen situations where throwing more memory at a server "solves" a problem. However, I guess the question then becomes does it solve a problem or simply mask some inefficiency we don't know about? Could said inefficiency (spiking of memory requirements) be built in to the traditional memory requirement model and, thus, necessary for happy and well-functioning servers? I don't know...gotta think about it. Jeez, Joe...next you'll be telling us there's no Santa Claus... ------------------------------ Date: Tue, 24 Oct 1995 13:41:20 -0600 From: Joe Doupnik Subject: Re: Adding 2 GIG drive all in 1 volume. Smart? Dumb? >I have a 3.12 server with a 1 gig hard drive and 20MB of RAM. >I'm not using mirroring or anything else fancy. >I need to add a 2 gig drive to the SCSI chain in the server. >Therefore, I'm going to add 32 Meg of RAM along with the extra drive. >(Make sense?) Not much. Please do read the current thread on server memory. The right question to ask is how much memory is free (bytes, not percentages) as shown by MONITOR, RESOURCES, after all of your users have logged in and are busy. As a guide, on one of my NW 3.12 servers 48 users live within 15MB "Free cache buffers" and there is no slow down. If you have more memory per user than this then consider just adding the drive. The drive will cost you about 2.5MB of memory for each GB of drive (based on 4KB disk allocation units), and that includes both FAT and a reasonable directory heirarchy. You can always add memory later. >I'm considering making the entire 2 gig drive one volume. >(Currently the 1 gig drive that's installed is the SYS volume.) By today's standards that's middle of the road for volume size. 2-4GB volumes are common. >Are there any advantages/disadvantages to creating a volume this large? Be sure you can back and restore from tape. Joe D. >Thanks for any info you can offer! >Jeff ------------------------------ Date: Tue, 24 Oct 1995 14:04:46 -0600 From: Joe Doupnik Subject: Re: NW server memory, cont'd >>*** Reply to note of 10/23/95 21:21 >>I guess you're right regarding Netware's memory requirements. In our >>case, we have a 3.12 server of 16MB RAM, all fixes and patches applied, >>with NLM-based antivirus tool and around fifty users log at the same time. >>Works well for months with available cache buffers hovering around 48-50% >>of the system's total RAM. Workstations also load Excel/Word off the >>network with not much of server memory problems. This with 1 GB of disk >>space. >> >>Del C. Sumbillo > >Yes, but isn't Joe D. suggesting an even more radical change in memory >requirement "theory". 
>Your setup above is fairly standard and appears to
>fall within the traditional view of memory requirements. What Joe D.
>seems to be saying (and I am not qualified to put words in his mouth) is
>that the available cache buffers you show (48-50%) coupled with a logical
>assumption on observed memory usage means that a large portion of that
>RAM goes unused. I've been reading Joe D.'s two emails and don't see any
>flaw but I

That's right.

>don't have the tools to test his supposition. One question that comes to
>mind is whether memory usage on a server experiences some transitory
>spiking that requires all of that elbow room. Let's

You are definitely on the right track. What we don't know are a) the
transitory requirements of some NLMs in the server (say, for example,
GroupWise, which is known to be a complete memory pig, or Arcserve, which
holds the title), and b) the transitory requirements of a user action
(read, write, delete files, etc).

We get very little to no information on NLM requirements. We can look with
MONITOR to see their static usage (some astounding values appear) but not
their rapid transitory ones.

We can do some crystal ball forecasting about user requirements, however.
One second of full-speed LAN send/receive is about 300-400KB, and during
that time the server is processing part and queueing part (it's busy,
after all). That means at worst we need several hundred KB per user file
send/receive operation, on a transitory basis, to be shared round with
other users. Cautious managers would say: OK, let's assume 250KB per user
as a rough allocation, or 4 users per MB of "Free cache buffers." Some
users are hitting the server while most aren't, and the overall space is
then enough to handle all but worst case scenarios. [Networking folks
immediately start asking about flow control mechanisms to keep traffic
from overwhelming any part of the system.]

None of us has tools or much time to decode this transitory business.
Right now we can collect anecdotes and infer a middle ground. That is
beginning to happen now, and the message at the top of this is an example.
A real test would be to take a happy server and start adding disk drives
but no user access to them; that merely consumes memory without pulling
SIMMs. I'm fresh out of disk drives to try this. If someone has them then
grab program Perform3 from directory apps on netlab2.usu.edu as but one
very simple minded non-random system exerciser, else try real humans.

>face it, many of us have seen situations where throwing more memory at a
>server "solves" a problem. However, I guess the question then becomes
>does it solve a problem or simply mask some inefficiency we don't know
>about? Could said inefficiency (spiking of memory requirements) be built
>into the traditional memory requirement model and, thus, necessary for
>happy and well-functioning servers? I don't know...gotta think about it.
>Jeez, Joe...next you'll be telling us
>there's no Santa Claus...

You know, I've been doing some back of the envelope calculations on that
sleigh speed problem, and ...
        Joe D.

>Eliot T. Ware
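[FAQ note: Joe D.'s transient-load guess in one line of Python: one second
of full-speed LAN traffic is roughly 300-400 KB, so budget about 250 KB of
free cache per user, i.e. about 4 users per MB. Crystal-ball numbers, as
he says himself.]

  def free_cache_mb_for(users, kb_per_user=250):
      return users * kb_per_user / 1024.0

  print(free_cache_mb_for(48))   # ~11.7 MB; consistent with the happy
                                 # 48-user server living in 15 MB free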
------------------------------

Date: Tue, 24 Oct 1995 14:45:26 -0700
From: Floyd Maxwell
Subject: Re: NW server memory

> Well, my message on this topic earlier today generated exactly
>zero interest so far, yet it's an often asked expensive question. Let
>me try one more time in simpler terms.
>
> How much memory does a NW server require?

Joe:

How about...
- "we" (volunteers??) design a program to monitor the variables you
  suggested?

Then...
- we run the program on our own servers, fine-tuning its calculations and
  "testing" subjectively to see if its results are accurate. I'll
  volunteer to compile the results/commentary and feed these back to the
  list & to the volunteer(s) above.

After that short (but cool!) beta period...
- we have a program that others/newbies can download & run on their
  machines.
- we also have something useful whenever we are considering upgrading our
  hardware...espec. if we design into it a couple of prompts like
  "Other/new NLMs (specify est. RAM needs):" & "Other/new disk storage:"
  that can factor our upgrade needs into the equation.

The program could even calculate memory requirements by several means:
 (1) The "Original" Red Books calculation
 (2) The "Calculating Memory Requirements for NetWare 3 and 4" Novell
     Application Note Supplement (Dec, 1994) (reproduced in the FAQ)
 (3) Using your proposed method

------------------------------

Date: Tue, 24 Oct 1995 17:28:08 -0600
From: Joe Doupnik
Subject: Re: NW server memory

>> Well, my message on this topic earlier today generated exactly
>>zero interest so far, yet it's an often asked expensive question. Let
>>me try one more time in simpler terms.
>>
>> How much memory does a NW server require?
>
>Joe:
>
>How about...
> - "we" (volunteers??) design a program to monitor the variables
>   you suggested?
>The program could even calculate memory requirements by several means:
> (1) The "Original" Red Books calculation
> (2) The "Calculating Memory Requirements for NetWare 3 and 4"
>     Novell Application Note Supplement (Dec, 1994) (reproduced in
>     the FAQ)
> (3) Using your proposed method
>
>Floyd
------------
Very much on target, Floyd. The final answer to our question will come
from just such experiments and combining of information. Deep Thought
takes too long to compute them. I am tempted to add option 4, an
amplification by Novell on the matter, but that's implied anyway.

For what it's worth dept, which is very little, here is the revised memory
formula from Novell. It has problems understanding units of things, so
deal with it carefully, and realize that its basis is probably as weak as
the original formulae. In particular, it assumes each active client needs
400KB of server memory, which I find to be unreasonable and too large.
Also note that it blithely ignores major consumers of memory, such as tape
backup programs as but a single example. I've commented to Novell on this
formula.

In case that's not understood, let me make two quick remarks on the above
paragraph. The amount of memory used by NLMs et al. is not readily known
until those programs have been loaded and run. They allocate memory upon
demand and the docs don't tell us how much. So we must do a live test to
find out. Secondly, the 400KB/user figure is much too large in my judgment
and needs to be strongly downsized. While downsizing, keep in reserve
several MB for general use under transitory peak demands; and that's not
MB per user but for the server as a whole. Finally, in system requirements
(Category System) we must include the receive cache buffer pool and the
directory cache buffer pool, since those resources are not returned to
general use. The # files item below is really the directory cache buffers,
in my estimation, and we have control of how large that can be.
        Joe D.
------
You need:
  V1) total number of megabytes of disk; remember mirroring counts as
      double, i.e. 10 gigs mirrored is 20 gigs here
  V2) number of megabytes of useable disk space, i.e. 10 gigs mirrored
      is 10 gigs
  V3) server volume block size
  V4) number of disk blocks per megabyte = 1024/V3
  V5) total number of disk blocks
  V6) max number of clients attached
  V7) max number of files on the server

Calculations (all calculations result in kilobytes until step 10):
  1) Base memory for the OS: 2048 for 3.x, 5120 for 4.x
  2) Media Manager: V1 * .01
  3) If using file compression, add 250
  4) If using suballocation, add V7 * .005
  5) FAT cache: V5 * .008
  6) File cache:
       up to 100 clients:     V6 * 400
       >100, <=250 clients:   40000 + ((V6 - 100) * 200)
       >250, <=500 clients:   70000 + ((V6 - 250) * 100)
       >500, <=1000 clients:  95000 + ((V6 - 500) * 50)
  7) Total memory for support NLMs: +2048 (BTRIEVE, CLIB, INSTALL, PSERVER)
  8) Total memory for other services
  9) Add lines 1-8 for total memory requirements in KB
 10) Divide by 1024 for megabytes

EXAMPLE for NetWare 4 with suballocation and compression turned on, 10 MB
required for support NLMs and 10 MB more required for other services:

  V1 - 30000          V2 - 30000        V3 - 64k
  V4 - 16 blocks/MB   V5 - 480000       V6 - 500 clients
  V7 - 50000 files

  1)   5120
  2)   3000
  3)    250
  4)    250
  5)   3840
  6)  95000
  7)  10000
  8)  10000
  9) 127460 KB
 10) 124.4656 MB
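[FAQ note: the revised Novell formula transcribed into Python, quirks and
all (note Joe D.'s warning about units). Two transcription choices made
here: the Media Manager term uses .1 rather than the .01 printed above,
because .1 is what makes the worked example's line 2 come out to 3000; and
the file-cache base figures are read as plain KB. A sketch, not gospel.]

  def novell_ram_mb(v1, v2, v3_kb, v6, v7, netware4=True,
                    compression=False, suballoc=False,
                    support_nlm_kb=2048, other_kb=0):
      v5 = v2 * (1024 // v3_kb)            # V4 blocks/MB = 1024/V3
      kb = 5120 if netware4 else 2048      # 1) base OS
      kb += v1 * 0.1                       # 2) Media Manager (see note)
      if compression:
          kb += 250                        # 3)
      if suballoc:
          kb += v7 * 0.005                 # 4)
      kb += v5 * 0.008                     # 5) FAT cache
      if v6 <= 100:                        # 6) file cache
          kb += v6 * 400
      elif v6 <= 250:
          kb += 40000 + (v6 - 100) * 200
      elif v6 <= 500:
          kb += 70000 + (v6 - 250) * 100
      else:
          kb += 95000 + (v6 - 500) * 50
      kb += support_nlm_kb                 # 7)
      kb += other_kb                       # 8)
      return kb / 1024.0                   # 9), 10)

  # The worked example: 30 GB, 64 KB blocks, 500 clients, 50000 files,
  # 10 MB support NLMs, 10 MB other services -> ~124.5 MB
  print(novell_ram_mb(30000, 30000, 64, 500, 50000,
                      compression=True, suballoc=True,
                      support_nlm_kb=10000, other_kb=10000))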
------------------------------

Date: Wed, 25 Oct 1995 05:00:57 +0100
From: Bo Persson
Subject: Re: NW server memory, cont'd

>I agree that Novell's formula for RAM size determination may not be right
>in general. I had to (well, non-profit company, not really a lot of money
>for equipment) run my file servers first at 4, then at 8 and then at 16MB
>of RAM. Basically when my file servers were at 4MB, the percentage of
>free cache buffers was around 30-40 (3.11, 300MB ESDI HD). And the
>servers were quite stable.

I think (or should that be "guess"? :-) that the magic numbers for the
amount of free memory were established when a "large" server had 100s of
MBs of hard disk space. A good rule of thumb then, but it might not be
valid any more.

As an example, I recently upgraded our server from 32 to 48MB RAM to
support a new 2GB hard disk and a CD tower. After the upgrade, it has 5 MB
more free memory, but the percentage has gone down! Should I worry? If so,
why?

To put it another way: to keep 66% free buffers, you have to add three
times the amount of RAM needed for the directory structure of a new disk.
Does this make any sense at all? I think not.

Unfortunately, the new problem is: how do you REALLY calculate the amount
of RAM needed??

------------------------------

Date: Wed, 25 Oct 1995 14:48:55 EST
From: Peter Medbury
Subject: Re[2]: NW server memory, cont'd

>>I agree that Novell's formula for RAM size determination may not be
>>right in general. I had to (well, non-profit company, not really a lot
>>of money for equipment) run my file servers first at 4, then at 8 and
>>then at 16MB of RAM. Basically when my file servers were at 4MB, the
>>percentage of free cache buffers was around 30-40 (3.11, 300MB ESDI HD).
>>And the servers were quite stable.
>
>I think (or should that be "guess"? :-) that the magic numbers for
>the amount of free memory were established when a "large" server
>had 100s of MBs of hard disk space. A good rule of thumb then,
>but it might not be valid any more.
>
>As an example, I recently upgraded our server from 32 to 48MB RAM
>to support a new 2GB hard disk and a CD tower. After the upgrade,
>it has 5 MB more free memory, but the percentage has gone down!
>Should I worry? If so, why?
>
>To put it another way: to keep 66% free buffers, you have to add
>three times the amount of RAM needed for the directory structure
>of a new disk. Does this make any sense at all? I think not.
>
>Unfortunately, the new problem is: how do you REALLY calculate
>the amount of RAM needed??

The Novell Application Notes (January 1995) provide a NEW formula for
calculating the amount of RAM required in file servers. The calculation is
based on disk configuration, NLMs in use, number of users & an estimate of
the number of files. The formula gives significantly different results
from the formula published in the Red Books.

I manage a network of 31 file servers (NW 3.12), all configured using the
formula, and I have found that increasing the RAM in accordance with these
calculations really does improve server performance & reliability.

------------------------------

Date: Wed, 25 Oct 1995 09:50:18 GMT
From: Joachim Koenen
Subject: Re: NW server memory

I agree! Novell's formula is not suitable for larger servers (>32 MB). I
have 64 MB and 50% of memory is cache buffers. That's a lot, and I am sure
16 MB of cache buffers would also be sufficient. But imagine a server with
256 MB of memory. According to Novell it has to have at least 128 MB of
cache buffers. Too much, I think.

Your approach to calculating the memory requirements includes some good
ideas (MB per user) but also requires more knowledge about user habits,
software etc. on the server. A server with 100 users all working with
Windows and WinWord does not have to buffer much, but 100 users who use
100 different programs need much buffering of code. In my opinion data
does not need as much buffering as code, because data is accessed only
once in most cases (open a document, alter it, save it, print it).
Databases are different of course. So perhaps cache buffers should also be
related to the amount of code (exe, dll, ...) stored on the server.

It would be very nice if this discussion resulted in a new memory
requirements formula.

------------------------------

Date: Wed, 25 Oct 1995 14:06:28 EDT
From: "Eliot T. Ware"
Subject: Re: NW server memory

Doesn't a lot of this indicate that the original memory model is based
largely on the assumption that an increase in user count results in an
increase in requirements well beyond the 250 KB or so of Category User per
head? If one were to assume that Category System requirements outside of
NetWare core requirements, ie "other" NLMs, increase linearly based on
user count, this would account for much of the apparent waste in the
current memory needs model.

In other words, if Novell assumed that the applications platform of choice
would be NLMs (and they did in the past), and that said NLMs would require
additional resources as new users of the resource/NLM were added, then the
additional memory might make sense. This could explain why true office
automation application NLMs (GroupWise, Lotus Notes, etc.) are notorious
memory hogs. Each appears to utilize memory in direct proportion to the
number of supported users.

However, since most NLMs are "system" related and result in fairly static
memory use (as best as we can tell), the traditional model falls on its
face. If you accept the above, then abandoning this model would be a tacit
admission by Novell that NLMs have largely failed as the platform of
choice for non-system related applications.

'Course all of this could be total BS resulting from a lack of sleep.
------------------------------

Date: Wed, 25 Oct 1995 14:41:50 -0600
From: Joe Doupnik
Subject: Re: NW server memory

>Doesn't a lot of this indicate that
>use (as best as we can tell) the traditional model falls on its face.

Much speculation above.

>If you accept the above, then abandoning this model would be a tacit
>admission by Novell that NLMs have largely failed as the platform of
>choice for non-system related applications.

Whoa. Net.lawyer in training perhaps? Whenever we start asking outfits
*why* they did what they did we might as well do something more
constructive instead, such as watching disks format. GW memory consumption
might well be due to ignorance (in deference to the adage of not ascribing
to malice what can be attributed to ignorance). On the same hand,
management of complex projects has often exceeded the grasp of major
corporations (and the software field is filled with examples). Customer
unpleasantness can sometimes yield better attempts, but only if sales take
a nosedive.

I'd rather think the formula creators were good guys trying to sort out a
muddle, and they guessed wrong, twice. We are good guys trying to sort out
a muddle, and we are still guessing but with shrewder questions.
        Joe D.

>'Course all of this could be total BS resulting from a lack of sleep.

------------------------------

Date: Wed, 25 Oct 1995 17:48:13 EDT
From: "Eliot T. Ware"
Subject: Re: NW server memory

Obviously, I am not making myself clear. I ascribe no malice to Novell re:
this or any other project. What I'm trying to say is that the reasoning
behind the traditional memory "requirement" model may have been premised
on factors that simply never materialized. This thread is obviously
timely, but after all these years, you need to understand it borders on
heresy. I just need to be able to place the original thought in some sort
of context in order to follow what you guys are saying in this thread.

No Net.lawyer here. Just a simple PC jock trying to keep up with the big
dogs.

------------------------------

Date: Wed, 25 Oct 1995 22:24:13 GMT
From: Teo Kirkinen
Subject: Re: NW server memory, cont'd

Joe Doupnik (JRD@CC.USU.EDU) wrote:
> THE PERCENTAGE FREE MEMORY FIGURE MEANS NOTHING.

Here is one example of the cache buffer % being erroneous in the other
direction. NetWare 4.1 seems to keep some interesting statistics which can
be used to evaluate the effectiveness of disk caching, for example
Monitor / Cache Utilisation / LRU sitting time.

We have a server holding a replica of a (too) big NDS partition, and doing
almost nothing else. With 64 MB memory and a (slow) 1 GB disk the cache
buffers were over 70%. But after extensive NDS updates the server always
ran at 100% utilisation for several hours (yes, all the latest patches are
applied). We observed that the Least Recently Used cache buffer sitting
time was about 7-15 seconds, in contrast to all the other servers with
6-30 minute sitting times.

We upgraded the server to 96 MB memory and the server is much more
responsive even when Arcserve is loaded. The LRU sitting time is now over
one hour.

So it seems that memory formulas for NetWare 4 should take account of the
NDS size (number of user accounts) in addition to concurrent users. Our
NetWare Service Center has also told us that the only way to get large NDS
partitions working stably is to install huge amounts of memory. I almost
long for NW 2.15 and 3.11.
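[FAQ note: Teo's LRU sitting time observation, turned into a rough health
check in Python. The thresholds come only from his two data points (7-15
seconds on the thrashing server, 6-30 minutes on healthy ones), so
calibrate against your own servers before trusting them.]

  def cache_health(lru_sitting_secs):
      if lru_sitting_secs < 60:
          return "thrashing - add RAM (Teo saw 7-15 s before upgrading)"
      if lru_sitting_secs < 6 * 60:
          return "marginal - watch it under peak load"
      return "comfortable (healthy servers sat 6-30 minutes or more)"

  print(cache_health(10))     # thrashing
  print(cache_health(3600))   # comfortable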
------------------------------

Date: Thu, 26 Oct 1995 13:42:00 -0800
From: Jan Bojeryd
Subject: Re[2]: NW server memory, cont'd

>So it seems that memory formulas for NetWare 4 should take account
>of the NDS size (number of user accounts) in addition to concurrent
>users. Our NetWare Service Center has also told us that the only
>way to get large NDS partitions working stably is to install huge
>amounts of memory. I almost long for NW 2.15 and 3.11.

I'm a devoted 3.12 fan and totally against customers "upgrading" to 4.x
just for the fact that it's a newer product. And I can understand Teo's
longing; 2.15 and 3.x were some wonderful products. Especially the little
statistic in 2.x called Cache Hit Rate, which was a wonderful little
figure that said it all. Keeping a 2.x server running at full speed meant
looking at that figure and not allowing it to go under some 92 to 94%!

My suggestion to the discussion: could this be an approach to clarify the
problem? Perhaps a Cache Hit Rate is what we are actually searching for as
a measurement of the optimum cache size.

------------------------------

Date: Thu, 26 Oct 1995 08:45:28 CST
From: Tom Bonvillain
Subject: Re: Enough about memory requirements

I have been tracking this list for 3+ years now, and I find it very
useful. But you guys are beating a dead horse. I have a 486 EISA running a
250 user version of 3.11 and a 254 user version of NetWare for SAA 1.2,
and I have over 225 users on constantly during the hours of 8am to 8pm. I
have 64 meg of RAM. I started getting the "short alloc memory error" blurb
on my console. I took out the AUTO REGISTER MEMORY = OFF statement. NO
MORE PROBLEMS! It comes down to whether or not the manufacturer of the
server and the BIOS have done their homework. NOVELL is not going to
change! If you have a problem, try switching that one parameter from ON to
OFF, or vice versa.

If you still have a problem, ADD MORE MEMORY, THIS STUFF IS CHEAP! You
can't tell me that you are running a big time shop, and you are going to
let a $500 commodity (about $550 for a 16MB SIMM) cause you this type of
grief. Furthermore, who cares what the exact formula is; one sure way to
fix it is to add more memory, a commodity. One does not need to know the
theory of a combustion engine to operate a car; one only needs to know
basic things about it, like oil in the engine, gas in the tank, and air in
the tires. The same with running a server: if it wants more memory, give
it more. I have better things to do with my time than trying to figure out
how Novell coded NetWare's memory usage.

[rant continued but deleted to protect the noble...]

------------------------------

Date: Thu, 26 Oct 1995 13:05:34 EDT
From: "Eliot T. Ware"
Subject: Re: Enough about memory requirements

>I have been tracking this list for 3+ years now, and I find it very
>useful. [snip]
>And don't flame me, flame Mr. Doupnik for letting this get way out of
>hand.
>
>Thomas M. Bonvillain

No flame intended, but you're talking about a different issue. When
purchasing a server it is of some use to be able to size the thing. In
order to size it, a reasonable method of calculating memory requirements
seems in order. If I didn't exercise that basic amount of professionalism,
they'd run me out of here.

I agree...it doesn't take knowledge of the theory of a combustion engine
to operate a car, but it does to fix the engine. I assume that lots of
folks on this list (myself included) are mechanics and not cab drivers.

My regards to your children.
------------------------------

Date: Thu, 26 Oct 1995 14:03:51 -0400
From: Don Voss
Subject: Whoa ..there ..

Tom [Bonvillain],

Your approach has its merits. It works. But.. I find reading these
detailed discussions, few and far between, a good exercise. If I knew more
I would contribute. Each pass brings me a bit further in my understanding
and directs my experiments. I feel these types of Novell-specific
operational details are what this list is all about.

I do not believe in the re-install theory of networking. It has a
sub-section concerning memory ..snap in a SIMM and see what happens.
That's just after the turn-it-off-and-on-and-see-if-it-works-this-time
chapter. I venture we all have had to put out a fire or two at 8:05 am. If
snapping a SIMM in brings people on-line ..so be it. I cannot go to a
staff meeting with that information ..I can play with it and get a laugh:
"the server has Alzheimer's ..." [please, all you with loved ones and
Alzheimer's ..it's just a small joke ..] The dean, et al., enjoy that
stuff; they feel secure that the task is being taken care of, and they
know I will hunt down the details, completely, to the point of "this X is
broken, we are moving to Y because Z ..so give me $$." That's when they
really laugh ...they say can we live with X? I say it's your call. You saw
what might happen. I personally want and need to know what is really
happening, as best I can understand.

I repeat myself: I never saw, in my 5+ years of following this list, a
mission statement that the list was an emergency help desk for all and
sundry hardware, software, wash'n wear snafus which might be in sight of
what could be termed a network connection; throw in async, strings and
cans too. With some of the need to limit the scope of the list and with
some reasonable concerns about bandwidth ..I think people can get the
wrong message, i.e. that every response needs to be a one-liner. What adds
to that is the "crisis" mode that some members might be in ..and need an
answer Quick .. that happens. We need not react that way when a
hit-and-run news-reader wants us to outline his company's network 1 year
plan. That's his job ..and his schedule.

If those brighter than I want to try and figure out the details of Novell
memory management ..I can't lose following along. This is exactly the
forum for it. IMHO Joe is not letting anything get out of hand. His
initiation of the topic was a none too subtle example of proper use of
this list. Shall we say I would rather see a sermon than hear one ...

No flame Tom ..but there are a few sides to all this.

------------------------------

Date: Thu, 26 Oct 1995 12:07:49 -0700
From: Floyd Maxwell
Subject: Re: Enough about memory requirements

Tom Bonvillain said:
>I have been tracking this list for 3+ years now, and I find it very
>useful. But you guys are beating a dead horse. I have a 486 EISA
>running a 250 user version of 3.11 and a 254 user version of NetWare
>for SAA 1.2, and I have over 225 users on constantly during the hours
>of 8am to 8pm. I have 64 meg of RAM. I started getting the "short

But no mention of how much hard disk space is on your server, thus making
your example approximately meaningless.

>alloc memory error" blurb on my console. I took out the AUTO REGISTER
>MEMORY = OFF statement. NO MORE PROBLEMS! It comes down to whether or
>not the manufacturer of the server and the BIOS have done their
>homework. NOVELL is not going to change! If you have a problem,
>try switching that one parameter
>from ON to OFF, or vice versa.

This is possibly good advice.
------------------------------
Date: Fri, 27 Oct 1995 10:13:57 MST
From: Jared Brown
Subject: NW 4.1, EISA, AND RECOGNIZING ALL THE MEMORY

I have been following the thread of messages on this subject for the last several days. I first realized that I had the same problem after reading a reply to someone posing the same question.

Here is the problem: a NW 4.1 server with 32 meg of RAM installed and a 6 gig disk farm. The motherboard is an AMI GA-586ID EISA board with one P90 installed. When the server boots, the POST tests 32 meg of RAM, but when I type MEMORY at the console it reports 15997 KB.

Looking through old list posts I found several references to a memory register setting in the EISA config utility. So I came in early a couple of mornings ago, downed the server, and loaded the utility, confident that I could find that setting and rectify the problem. The word "memory" was not mentioned anywhere in the utility, nor in the two readme files which accompanied it. The motherboard user's manual said that the memory configuration (in the standard CMOS setup) is read-only, determined by the POST. No further mention of memory configuration was made there either.

According to Joe D's post on memory (2.5 meg per mounted gig), I am using 15 of my almost 16 meg for disk caching, so you can imagine what happens when 20 or 30 users log in and load Windows off the network. I need some more suggestions.
------------------------------
Date: Fri, 27 Oct 1995 10:45:06 PST
From: "Ferrell, Bruce"
Subject: Re[2]: Enough about memory requirements

>Well, there are some problems with this "head in the sand" approach:
>
>- What if you want/need 100 GB of storage... or 1,000 GB? At a fairly
>  conservative 8 MB of RAM needed per GB of storage, that would
>  suggest one add 800 to 8,000 MB of RAM to the server... but I
>  haven't heard of a server with more than 512 MB of RAM capacity...
>  "oh well, back to the sand..."

See the ALR Revolution series... The trick is finding 128 meg SIMMs. It'll take 8 of 'em.
------------------------------
Date: Fri, 27 Oct 1995 13:19:35 -0600
From: Joe Doupnik
Subject: Re: NW 4.1, EISA, AND RECOGNIZING ALL THE MEMORY

>Here is the problem: a NW 4.1 server with 32 meg of RAM installed and
>a 6 gig disk farm. When the server boots, the POST tests 32 meg of
>RAM, but when I type MEMORY at the console it reports 15997 KB.
>I came in early a couple of mornings ago, downed the server, and
>loaded the utility. The word "memory" was not mentioned anywhere in
>the utility. The motherboard user's manual said that the memory
>configuration (in the standard CMOS setup) is read-only, determined
>by the POST.
>
>According to Joe D's post on memory (2.5 meg per mounted gig), I am
>using 15 of my almost 16 meg for disk caching, so you can imagine what
>happens when 20 or 30 users log in and load Windows off the network.
--------------
That's a good description of the problem. The memory setting is within the motherboard section of the EISA setup program. The setup program lists the motherboard along with the other boards; open up the motherboard section and review the details within. It won't be written up in a manual, most likely. This is the reason I often say to explore EVERY detail in the EISA configuration process. Those who are too shy about it miss those details. May I add that it happened to me too in the beginning.

Fire up MONITOR and look at the RESOURCE section. See the cache movable and non-movable sections. See the Server FAT section, which is where about 2MB of that 2.5MB-per-GB figure occurs; directory information uses the rest. Free cache buffers are outside of these "Category Server" allocations.
Joe D.
------------------------------
Date: Fri, 3 Nov 1995 20:44:40 GMT
From: Doug Horne
Subject: Re: Where oh where did my memory go?

:I have a 3.12 server.
:It has an EISA bus with an Adaptec 1740/42 SCSI card running in
:enhanced mode. It has the AUTO REGISTER MEMORY ABOVE 16 MEG setting
:set at the default, ON.
:It has 20 MEG of RAM (20,096K according to the power-on self test).
:It has 15,936K of RAM according to the MEMORY console command.
:Question: Shouldn't the MEMORY command report the same 20,096K that
:the POST does?

This may have been answered in a followup already. We recently put 64 meg in our 3.12 server and found that mounting additional volumes would cause us to run out of memory. On startup 65,536K would register, but it seemed like we were only getting 16 meg to use. It turns out that the bulk of the memory was not being seen by NetWare 3.12, and that the boot sequence had to be changed (placing the bulk of AUTOEXEC.NCF on the C: drive with AUTOEXEC.BAT...). Details are on the NetWire home page; I think this is a relatively new addition to the recognized bugs list, and it is solvable. When I saw this bug on that page, I figured there must be a heck of a lot of people running out of memory out there... After fixing the problem, though, I have been able to mount half a dozen new volumes.
------------------------------
Date: Sun, 5 Nov 1995 15:24:00 +0100
From: Achim Stegmeier
Subject: Re: PCI (no EISA) 64MB Memory limitation

>Anybody using a non-EISA PCI machine with more than 64MB?

We have several servers running with 128MB, built with Intel and Asus boards. You will have to register the memory manually.
Achim Stegmeier
------------------------------
Date: Sun, 5 Nov 1995 21:27:52 +0000
From: Rob Mcgillen
Subject: Out of Memory problems....

I too have encountered problems with RAM above 32 meg not registering on an EISA machine with numerous volumes. I spent 2 hours on the phone with Novell last spring about it and finally got an "official solution" about 3 days later: put AUTOEXEC.NCF on the DOS side in the SERVER.312 directory, and make sure you have REGISTER MEMORY {numeric numeric} near the top of the NCF. Maybe something to add to the FAQ??? (if it is not already there...)
Rob Mcgillen
------------------------------
Date: Thu, 2 Nov 1995 20:44:12 -0500
From: jeff
Subject: 16 MEG MEMORY BOUNDARY PROBLEM - SOLVED!!!

I've posted a few questions over the last several months concerning breaking the 16 meg boundary in my NetWare 3.12 EISA server with a SCSI controller attached. Well... I found it! The answer isn't in the FAQ!

I am running in Enhanced mode, I don't have any memory settings in AUTOEXEC.NCF or STARTUP.NCF, and DOS is reporting 20 meg while NetWare is reporting only 16 meg. Can you guess the problem?

Just before trying the REGISTER MEMORY command, I took one last look at the EISA configuration. I saw I was running in Enhanced mode and all looked fine. BUT WAIT!!! There's a settable switch for the amount of memory installed in the server! Hey, it's saying I only have the first bank filled, with four 1 meg SIMMs! Well, that ain't even correct! In bank one I have four 4 meg SIMMs. I wonder if NetWare is looking at this setting while DOS isn't. Hmmm... Let me set it to the correct setting! ...there it goes... Now let me reboot the system and try "server -ns -na"... there's the POST, still reporting 20 meg installed as usual. OK, now I'll run SERVER. ... OK! Server prompt! Let me try "memory". Here goes! ... Whoa!!! 20 MEG!!!! Bingo! The 16 meg boundary is shattered!!!

In conclusion: if you're using an EISA bus, don't forget the memory setting! (If your system has one!) Thanking those who were helpful to me in my search for the solution,
------------------------------
Date: Fri, 3 Nov 1995 07:45:45 +0100
From: Henno Keers
Subject: Re: PCI (No EISA) 64 MB RAM Barrier

>I'm glad I caught the discussion on the PCI bus memory limitation,
>because I'm in the process of spec'ing out a pair of servers for a
>customer installation that will probably require more than 64MB, and
>we were considering a machine that has no EISA bus. This thing really
>has me worried! The only responses have been from EISA/PCI users who
>have solved their problem through the EISA memory setup. What if there
>is no EISA? There are machines out there that have PCI and no EISA,
>you know.

Fred, most PCI/ISA chipsets, like Intel's Triton or the SiS models, don't support more than 128MB of RAM on the system board. Most of them have only 4 SIMM slots, so to fill that out you need 32MB SIMMs, which are still rare.
------------------------------
Date: Wed, 8 Nov 1995 05:22:29 GMT
From: Bryan
Subject: Re: NDS files

In article <01HXBNCDXOJS8WYB7H@cc.usu.edu>, Joe Doupnik says...
>>When we had the bindery I knew that all server objects were kept in
>>3 files: NET$OBJ.SYS, NET$PROP.SYS, NET$VAL.SYS. I was not able to
>>read those files, but I had a clear view of how the server was
>>working. Now things are different: where are NDS objects stored? How
>>much space do I need to allocate for that?
>----------
> They are very well hidden and not accessible by ordinary means.
>You don't "allocate" disk space, you get what's left over. As it turns
>out, the sundry NDS files are generally fairly small and you won't
>notice their disk space. To feel comfortable, imagine several MB being
>tucked away for such items, though often the space is much less
>(clearly, this is a topology- and object-count-dependent situation).
>The bindery is smaller, and NW 4 emulates a bindery or binderies as
>well as doing its relational database thingy.
> Joe D.

We have been testing some of the NDS space requirements for our environment and have found that, during testing, we needed 15K on average per object. While this is not much for most sites, our site will likely exceed the 10,000-object mark (6,000+ users alone). This becomes an issue now for replica placement and the planning of SYS volume sizes. Obviously not every server will have all 10,000 objects, but some will have nearly half, based on replication needs. Has anyone else looked at object sizes, with different results?
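[Editor's note: a sketch of Bryan's replica-placement arithmetic. The 15 KB/object average is his measured figure; the replica fraction is a made-up input standing in for whatever your replication design dictates.]

    # Rough SYS volume budget for NDS: average object size times the
    # number of objects the server's replicas will hold.
    KB_PER_OBJECT = 15       # Bryan's measured average

    def nds_space_mb(total_objects, replica_fraction):
        objects_held = total_objects * replica_fraction
        return objects_held * KB_PER_OBJECT / 1024.0

    # A server holding nearly half of a 10,000-object tree:
    print(f"{nds_space_mb(10000, 0.5):.0f} MB of SYS space for NDS")
    # -> about 73 MB, enough to matter when planning SYS volume sizes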
------------------------------
Date: Wed, 15 Nov 1995 16:07:03 -0600
From: Joe Doupnik
Subject: Re: Packet Size

>I have a NetWare network with three 3.12 servers. I am running the
>Ethernet_II frame type on the network. What should the maximum
>physical receive packet size be set to? By default it is set to 4202.
>I feel this needs to be reduced to better utilize memory. I have it
>set to 1524 and it works, but someone told me it should be 1518. What
>is the correct setting? What would the setting need to be if I ran
>802.2, 802.3, or SNAP frame types?
---------
This turns out to be an interesting question, folks. I replied that I leave the setting alone, thinking, as Novell has remarked, that the MLID (board driver) would automatically shrink the allocation to match the observed media. Today I ran an experiment to find out what's happening, and the result can be useful.

The MLID doesn't shrink the value. If you look in NW 3 MONITOR, Permanent Memory, SERVER.NLM: LSL receive buffers, the number there is 4.2KB plus a smidge, times the number of allocated receive buffers. That's fine for Token Ring and FDDI, which use 4KB frames, but it's much too large for Ethernet's 1500-byte frames. The total can be large, depending on the number of buffers allocated.

In STARTUP.NCF I added a line reading

  set maximum physical receive packet size=1516

and rebooted. The above MONITOR value now reads 1642 bytes per receive buffer.

So how did I arrive at 1516? An Ethernet frame looks like this (see Don Provan's Ethernet story in the FAQ, or your Ethernet tech refs):

  dest  src   type/len  data    crc
  6B    6B    2B        1500B   4B

The trailing CRC check is kept on the LAN adapter, but the other 1514 bytes are visible to ODI handlers. If we figure 32-bit transfers as optimum, then we get 1516 bytes to set aside (1514, rounded up so that division by 4 leaves no fraction). NetWare adds some bookkeeping space to arrive at the storage value shown above. Anything above 1514 ought to be fine, but leave a tiny bit of elbow room.

If you reserve 100 receive buffers you can save about 270KB. If you reserve 500, the savings are about 1350KB (roughly 337 4KB cache buffers). Naturally, if you have TRN or FDDI running to the server, then the original size is best.
Joe D.
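[Editor's note: Joe D.'s numbers, reworked as a sketch. The 1642 bytes-per-buffer figure is his measured MONITOR value after the change; the 4342-byte default is inferred from his "4.2KB plus a smidge" and his quoted savings, so treat it as approximate.]

    # Derive the 1516 setting and estimate the memory saved on Ethernet.
    ETHERNET_VISIBLE = 1514     # frame bytes seen by ODI (CRC stays on
                                # the adapter)
    setting = (ETHERNET_VISIBLE + 3) // 4 * 4   # round up to 32-bit words
    print(setting)                              # -> 1516

    DEFAULT_PER_BUFFER = 4342   # approx bytes/buffer at the 4202 default
    TUNED_PER_BUFFER = 1642     # measured bytes/buffer at 1516
    for buffers in (100, 500):
        saved_kb = buffers * (DEFAULT_PER_BUFFER - TUNED_PER_BUFFER) / 1000
        print(f"{buffers} receive buffers: about {saved_kb:.0f} KB saved")
    # -> about 270 KB and 1350 KB, matching the figures quoted above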
------------------------------
Date: Mon, 20 Nov 1995 12:48:06 -0600
From: Joe Doupnik
Subject: Re: NW 3.11 & ArcServe 4.02 problem

>16MB is nowhere close to what you should have for 5 gigs of available
>space... allow for at least 8MB per gig. Depending on your system
>config, you should have at least 48MB.
>
>Mike
-----------
Close, but incorrect reasoning. We went through this in much detail several weeks ago. 5GB of 4KB allocation units will use about 12.5MB for FAT entries and the like (it works out to about 2.5MB of bookkeeping per GB). That does not leave enough to run the server, though only just barely. Add what the server modules need, and that includes memory-hungry tape backup programs. Then, and only then, add what each user may need, figured at a maximum of 300KB per login and usually less. If all users are off the system while the tape backup software is loaded, and vice versa, then calculate accordingly.

There is no relationship I know of between your quoted 8MB/GB and the real world. Example: a NW 3.12 server with 4KB allocation units and a 7GB disk farm can work very nicely in 32MB (it has about 6MB left for allocating per-user work space).

Don't take my word for it. Use MONITOR and see these things for yourself. See that 2.5MB of bookkeeping per GB (4KB allocation unit style).
Joe D.
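[Editor's note: Joe D.'s sizing method as a sketch: disk bookkeeping first, then OS and NLMs, then per-user work space. The 8 MB OS+NLM figure and the 20-login count below are assumed inputs chosen to fit his 7GB-in-32MB example; read the real numbers off your own MONITOR screens.]

    # Estimate server RAM the way Joe D. describes, in order.
    BOOKKEEPING_MB_PER_GB = 2.5   # FAT + directory info, 4KB alloc units
    PER_USER_KB = 300             # worst case per login, usually less

    def server_ram_needed_mb(disk_gb, os_and_nlms_mb, logins):
        bookkeeping = disk_gb * BOOKKEEPING_MB_PER_GB   # 7GB -> 17.5MB
        user_space = logins * PER_USER_KB / 1024.0
        return bookkeeping + os_and_nlms_mb + user_space

    # His example: a 7GB farm running nicely in 32MB, with about 6MB
    # left over for per-user work space (roughly 20 worst-case logins).
    need = server_ram_needed_mb(7, os_and_nlms_mb=8, logins=20)
    print(f"about {need:.1f} MB needed out of 32 MB")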
------------------------------
Date: Thu, 7 Dec 1995 20:51:06 +0100
From: Bo Persson
Subject: Re: Not enough memory w/ 48 megs????

>>I've got a 486-80 w/ 48 megs and 4 gigs of drive space. I start w/
>>11,000+ original cache buffers and end up w/ 7500+ free. But when
>>ArcServe starts a scheduled backup I get this error:
>>
>>12/06/95 01:01:04 Start Backup Operation. Queue: CHEY_A_Q Client: SUPERVISOR
>>12/06/95 01:01:04 Target Services:
>>12/06/95 01:01:04 MAIN
>>12/06/95 01:01:04 Connection Established with TSA MAIN
>>12/06/95 01:01:04 SMS Name Spaces Supported: OS2
>>12/06/95 01:01:04 Source Directory: MAIN/SYS1:
>>12/06/95 01:01:32 Not Enough Memory on Host Server
>>12/06/95 01:01:34 Backup Operation is Cancelled
>>
>>I've got SET Reserved Buffers Below 16 Megs=250 in my AUTOEXEC.NCF.
>
>If you are running on a 3.11 server, you'll find that it will ignore
>the reserved buffers if set to over 200.

On the other hand, if you are running a 3.12 server, it should work fine. I have a 3.12 server with 48 MB RAM and 5 disks: a SYS: volume mirrored on two 1.8 GB drives, plus 3 additional volumes of 1080, 2150 and 4300 MB respectively. It runs with just over 4000 free buffers (35%!), which seems to be enough for the 40 users. We also have a DiscPort with 7 CDs and OS2 name space on all the hard disks, AND we run ArcServe without problems so far. I have set "Reserved Buffers Below 16 Meg" to 300, though :-)

Not enough memory COULD also mean that you have an ISA bus SCSI controller (like an AHA-1542) and load the disk driver before REGISTER MEMORY. I did that :-( until I read the FAQ, section H.7 (hint, hint :-)
------------------------------
Date: Thu, 7 Dec 95 08:23:00 -0500
From: Thomas Geoghegan
To: netw4-l@bgu.edu
Subject: RE: SERVER RAM

Following are Novell's instructions on registering memory. They are worth reading...

When a disk driver is loaded in STARTUP.NCF, a scan takes place to see whether the SYS volume is on one of its devices. If it is, the SYS volume is automatically mounted. If NetWare has recognized only 16 megabytes of memory at this point, the OS will define a memory block within this 16 MB to cache the SYS volume, and all other volumes will be cached within this memory block as well. To prevent the OS from defining this memory block before NetWare has recognized all available memory, the SYS volume must be kept from mounting until all memory is available to the OS. To do this, take the following steps.

NetWare 3.1x:

STARTUP.NCF
  Set auto register memory above 16 megabytes=off
  Set reserved buffers below 16 meg=200  (optional, device dependent)
  etc...

AUTOEXEC.NCF
  file server name
  ipx internal net
  load
  register memory 1000000
  mount all
  etc...

*** (See bottom for further info on REGISTER MEMORY.) ***

Note: for the 3.1x solution, the AUTOEXEC.NCF must be copied to the server boot directory, and the copy residing in the SYS:SYSTEM directory should be renamed.

NetWare 4.1:

STARTUP.NCF
  Set auto register memory above 16 megabytes=off
  Set reserved buffers below 16 meg=200  (optional, device dependent)
  Load
  Load Memfrgfx 1000000  (replaces REGISTER MEMORY)
  etc...

***** Failure to do this can result in errors indicating "cache memory allocator out of available memory", or insufficient memory to mount volumes, even when a considerable amount of memory is being recognized by NetWare. *****

A description of the REGISTER MEMORY XXXXXXX XXXXXXX command, along with expansion instructions, follows. The first set of Xs is the starting address in hex, usually 1000000 (16 MB in hex), at which memory will be added. The second set of Xs is the amount of memory above 16 MB (in hex) being added. The amount can be calculated as follows: 1 MB = 1024 x 1024 bytes = 1,048,576 bytes, and the hex conversion for 1 MB is 100000. Two examples:

1. For a server containing 64 MB, subtract 16 MB (the starting address) to get 48 MB (or 48 x 1,048,576 bytes), then convert this to hex (3000000). The following syntax would be required: REGISTER MEMORY 1000000 3000000

2. A server containing 17 MB would require: REGISTER MEMORY 1000000 100000
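[Editor's note: the hex arithmetic above is easy to get wrong at a console at 8:05 am. A small sketch (Python) that computes both REGISTER MEMORY arguments from the total RAM installed, following the instructions just given:]

    # First argument: the start address, 16 MB expressed in hex.
    # Second argument: everything above 16 MB, also in hex.
    def register_memory_args(total_mb, start_mb=16):
        start = start_mb * 1024 * 1024
        length = (total_mb - start_mb) * 1024 * 1024
        return f"REGISTER MEMORY {start:X} {length:X}"

    print(register_memory_args(64))  # -> REGISTER MEMORY 1000000 3000000
    print(register_memory_args(17))  # -> REGISTER MEMORY 1000000 100000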
------------------------------
Date: Fri, 8 Dec 1995 10:44:05 +0100
From: Bo Persson
Subject: Re: Not enough memory w/ 48 megs????

> From Mark Cramer
> On 7 Dec 95 at 20:51, Bo Persson wrote:
>>It runs with just over 4000 free buffers (35%!), which seems to be
>>enough for the 40 users. We also have a DiscPort with 7 CDs and OS2
>>name space on all the hard disks AND run ArcServe without problems so
>>far.
>Bo, I think you should be aware that Novell themselves state, "If
>cache buffers fall below 65%, you should consider adding memory to
>the server; if they fall below 45%, you are in immediate danger of a
>server crash." I am unsure of how old this quote is, and it may refer
>to the days when 16 meg was a large server. I've seen instability
>develop in a server that went below 45% free cache buffers; it was
>crashing once a week minimum. Installing extra memory cured the
>crashes, and we never had a subsequent problem with that machine.

Here, up north, we have always been told never to go below 50% free cache, but anyway...

A while ago, I added a 2GB drive and some CD players to the server. To keep up the amount of free memory, I also added 16 MB RAM (from 32 to 48 MB). After the upgrade, the server had about 6 MB MORE free memory, but the percentage had gone down from about 55% to barely 50%. I thought this was a bit strange: the server would now be "dangerously" close to the lower limit, yet in fact it had MORE space to work in. This didn't make much sense to me.

At about the same time, Joe D and others on the NOVELL list questioned the validity of the "percentage rule" for much the same reason. This convinced me (or fooled me into believing??) that I could add another disk without adding any more memory. So I did!

I started with a (3.12) server with 18-19 MB free. After adding disks, CDs, and some memory, I now have 17.5 MB free. Seems OK to me. I don't have enough statistics (yet!) to show that the server will run for months with this configuration, but at least it has run for weeks...
------------------------------
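[Editor's note: Bo's observation in numbers. The before/after free-memory amounts below are inferred from his 55% and 50% figures on 32 MB and 48 MB; only the percentages and totals are his exact words.]

    # The percentage rule and absolute free memory can move in opposite
    # directions, which is Bo's whole point.
    before_total_mb, before_free_pct = 32, 55
    after_total_mb, after_free_pct = 48, 50

    before_free = before_total_mb * before_free_pct / 100   # ~17.6 MB
    after_free = after_total_mb * after_free_pct / 100      # 24.0 MB

    print(f"before: {before_free:.1f} MB free ({before_free_pct}%)")
    print(f"after:  {after_free:.1f} MB free ({after_free_pct}%)")
    # The percentage fell, yet the server gained about 6 MB of working
    # space. A fixed percentage threshold can mislead on large servers.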