------------------------------------------------------------------------
NOV-HDW5.DOC -- 19980329 -- Email thread on NetWare file server hardware
------------------------------------------------------------------------
Feel free to add or edit this document and then email it back to
faq@jelyon.com

Date: Tue, 17 Mar 1998 15:31:31 -0600
From: Karl Klemm
Subject: Re: Changing Hardware

For additional research & info, you will want to check out an article in
the March 1998 NetWare Connection (by Mickey). It has a good guideline on
various upgrade methods. If you don't have the hardcopy, you can look at:

  http://www.novell.com/nwc/mar.98/techsp38/index.html

---------

Date: Tue, 17 Mar 1998 14:54:37 -0700
From: Joe Doupnik
Subject: Re: Changing Hardware

>>I have 2 intraNetWare servers that I need to move to different boxes.
>>What is the best way to do this? I use ARCserve 6.1 to back up my
>>servers; should I just do a full backup and then a restore? What do I
>>need to look for?
>
>I had just posed this question and have just returned from a meeting
>with our local CNE about this. I am doing the same thing this Friday,
>changing hardware for a server. Here is a quick rundown on what needs
>to happen. The server being replaced is the primary time provider and
>also holds the master replicas for [ROOT], the O= and the OU= in which
>it lives.
>
>1) Transfer time sync responsibilities to another system temporarily.
>2) Make another server the master of the replicas.
>3) Back up the old server at least twice (no need to worry about NDS
>   backup).
>4) Bring down the old server.
>5) Install the new server with the old name from scratch.

At this point the drill differs. Don't install into the same tree;
install into a new dummy tree. Use the tape program to restore NDS and
files, then reboot the server (which will come back up in the production
tree). Naturally the tape backup is done before touching the machine, and
again after removing responsibilities. Use the latter as the tape
restore.
Failing this, use DSREPAIR to "prepare for hardware upgrade" to remove
and save NDS material to user-level files. After the new server is in its
dummy/private tree, use those files to restore NDS. Reboot.

If you don't do this there will be a dustup, because a fake server
appeared with the right name and the wrong credentials. Plus signs will
appear against objects in NDS displays (netadmin/nwadmin), indicating
duplication of names; not a good thing. Ensure volumes are loaded back
into NDS. Let time go by for NDS to fully settle before moving replicas
and such.
	Joe D.

>   This installs it into the tree, on the proper network number, and
>   installs NDS onto it.
>6) Move a R/W replica onto the server.
>7) Restore from backups, being very careful about what you restore to
>   the SYS volume.
>8) Restore the time sync responsibilities to the new server.
>
>Somewhere in here it may be required to run DSRepair to get the old
>internal server number and the new internal server number resolved.

---------

Date: Tue, 17 Mar 1998 17:03:42 -0500
From: "McCown, Chad D."
Subject: Re: Changing Hardware

Not mentioned were print queues, home directories, and NAL objects. The
fact that print queues require the use of DSMAINT to move to a new server
is well documented. Recently we installed a new server and had to run
UIMPORT to update the users' home directory locations even though the
server and volume names were the same.

A potentially larger issue was NAL objects. They failed until we
reselected the EXE. It appears that the values were tied to the volume
object number rather than the name. Pretty obvious if you think about it,
but it caught me by surprise.

---------

Date: Tue, 17 Mar 1998 14:28:42 -0800
From: Barry Wenger
Subject: Re: Changing Hardware

Interesting remarks about removing NDS. There are two servers in the
container (OU) where I am upgrading the hardware. Can't this other server
maintain the NDS structure while the one server gets upgraded?
I guess I am having a problem understanding why NDS has to be removed
here. I could understand if it was the ONLY server in the container.

Also, if that is the case, and it has to be prepped for the hardware
upgrade, then do I need to build a new tree? Wouldn't I have to merge
them to ultimately get the upgraded server into the proper OU? Couldn't
one just build another OU=TEMP, install the new server there, and then
move it?

During my CNE training I was taught that having replicas of the
partitions was the backup. What did I learn wrong here?

---------

Date: Tue, 17 Mar 1998 18:13:31 -0600
From: Karl Klemm
Subject: Re: Changing Hardware

You REALLY want to do more "hands-on" research for this -- it'll make
more sense. Check out:

  http://www.novell.com/nwc/mar.98/techsp38/index.html

  http://support.novell.com/cgi-bin/search/search.pl?database_name=tid&search_term="hardware+upgrade"&maxhit=25
  (esp. document #2924224 - very similar to your situation)

These will yield some GREAT background and information on the task at
hand.

>Interesting remarks about removing NDS. There are two servers in the
>container (OU) where I am upgrading the hardware. Can't this other
>server maintain the NDS structure while the one server gets upgraded?

It sure can, and that gives you the fault tolerance. However, if you want
the simplest way to swap hardware, you've been getting good advice from
the LISTSERV.

>I guess I am having a problem understanding why NDS has to be removed
>here. I could understand if it was the ONLY server in the container.

If I understand your original request, you want to replace a server, oh
say "ABC", with new hardware and name the new one "ABC". It will perform
the same functions as the original "ABC", including exactly the same NDS
replication (and the same types of replicas), as well as time providing,
etc. If this is correct, this is a second reason why this option is
available under DSMAINT (for 4.10) and INSTALL (for 4.11).
You WILL want to do this to minimize the traffic involved with upgrading
the hardware.

If you remove the old server from the tree, you will have to ensure that
there are no masters on this server, and move them if there are (takes
time). You will also have to go through the removal process -- including
cleaning up all of the external references (XRefs) to the server and
related objects. These could, quite feasibly, be stored on many of the
servers within your tree. This will take (more) time to remove. Also, it
is advisable that EVERY server that holds an XRef or a copy of the real
object be functioning correctly, to ensure that the object(s) are removed
correctly. This is especially so since you want to reuse the same server
name and internal IPX address, and it is extremely difficult in a large
tree.

Now when you put the new server in, you will probably get a replica of
the partition that you put it into (traffic and time). As you add
replicas back, this will add to the traffic and time.

>Also, if that is the case, and it has to be prepped for the hardware
>upgrade, then do I need to build a new tree?

Now if you use a temporary "dummy" tree, there is very little to setting
it up (about 3 minutes -- tops!) and very little to removing it (required
before you restore the database from DSMAINT/INSTALL). Also, you don't
need more than one replica -- i.e., the server itself is the only one --
because the tree will be going away during the upgrade.

For comparison, the remove-server/add-server procedure will take 1.5+
hours. The "move database" option will take about 10 minutes (assuming
that your new server has NetWare already installed, patched, and running
fine in its own tree).

>Wouldn't I have to merge them to ultimately get the upgraded server into
>the proper OU?

Nope. (You are going to replace the NDS database on the new server.)

>Couldn't one just build another OU=TEMP and install the new server there
>and then move it?
That's going to be more work, and convoluted.

>During my CNE training I was taught that having replicas of the
>partitions was the backup. What did I learn wrong here?

Nope, that's fault tolerance. However, a "good" NDS backup is still worth
taking, to ensure that if critical objects do get corrupted, they can be
restored, saving time and headaches in the future.

I'm planning this same task for one of our main NDS servers in London (it
contains all of the Master partitions for the region -- about 50 in all
-- and is a time reference for the continent). Luckily, it only has a
SYS: volume.

In practice runs, it takes less than 45 minutes from running DSMAINT on
the first server to having the server up and communicating with the other
servers (projecting about 1/30th of the time it would take to remove the
server from the tree, given the WAN links, and re-add it). Then there is
only the restore time for files (being CAREFUL with files on the SYS:
volume).

------------------------------

Date: Tue, 17 Mar 1998 19:46:47 -0700
From: Joe Doupnik
Subject: Re: Changing Hardware

>>Interesting remarks about removing NDS. There are two servers in the
>>container (OU) where I am upgrading the hardware. Can't this other
>>server maintain the NDS structure while the one server gets upgraded?
>
>You REALLY want to do more "hands-on" research for this -- it'll make
>more sense. Check out:
>
>  http://www.novell.com/nwc/mar.98/techsp38/index.html
>
>  http://support.novell.com/cgi-bin/search/search.pl?database_name=tid&search_term="hardware+upgrade"&maxhit=25
>  (esp. document #2924224 - very similar to your situation)
>
>These will yield some GREAT background and information on the task at
>hand.
>
----------

Let's look at the situation in simple terms, rather than blasting a
checklist at us.

Suppose your healthy INW 4.11 server experiences a sudden loss of the
disk drive which held two NW volumes, SYS: and DATA:. Oh my. You acquire
a new drive and start to build a simple fresh installation.
Part way through you are asked which NDS tree to join, and you say "my
old tree, thanks." That's fine, sort of, but not really so fine. Quickly
there appears a problem: the system says "Hey dummy, there is a server by
that name already known to NDS, do you want to replace it with this
thingy you are just building?" If we say No then installation ceases to
progress. If we say Yes then we are asked for the credentials of admin
above, as usual, and the old server ident is placed in the known deadwood
dept. Our new server is present, but lacks a replica of other pieces of
the tree (a small deal, easily handled later).

So you continue and discover NDS information pouring across the net from
the tree to your new server. Nifty? Yes, but not complete. Our old volume
names are still known to NDS, and the replacement drive has the same
names but lacks the underlying numerical signatures. Those signatures
have to exist to avoid confusion when we merely rename (text) a volume
rather than create a fresh one, and the signature is on the disk drive
(in the volume area). So old and new volumes are present with the same
names, and the old stuff gets changed to NDS type "unknown". To put the
new volumes into the tree we use Load Install, Directory, add mounted
volumes to NDS.

Something like this happens with the server object. It too had a
numerical ident tucked away, supplementing its name. We can change the
server's name, but NDS knows that's just spelling and that the real
server is still present, by checking its number. With a fresh build that
old numerical ident is lost (deadwood, shows as plus signs in netadmin,
type unknown).

At this point we want to stand back and look at the situation. Forcing
fresh hardware on NDS, thereby reusing existing names, is not a swift
idea, because the real idents, the numbers, differ between the fresh
server and the rest of the tree. We have a couple of options.
One straightforward one is to use DSREPAIR (or DSMAINT) to remove NDS
from the server and write its data into files we can carry away on
floppies etc. (getting to be rather a large etc. these days). That strips
the existing drives. To exploit those files we build a fresh server into
a dummy tree, and thereby avoid clobbering the name/number business held
in other replicas. Then we remove NDS and add the old NDS from those
files. Ah ha! Now the numbers are carried along and stored on disk, and
the rest of the tree thinks we are legit.

The similar option is to first tape-record everything, including NDS, and
then build a fresh server into a dummy tree. A dummy tree again, to avoid
clobbering existing info outside. If, a big if, the tape software will
restore NDS information to the server, we do so and reboot. This is the
tape equivalent of those floppy files, but it does not strip down the
original disk drive. NDS looks fine and like its old self: our server
ident number and name are the same, and our volume numerical idents and
names are the same. Oh joy.

Simply pulling NDS info across the wire collides with the fresh numbers.
That is why we take the trouble to save the old NDS via
dsrepair/dsmaint/tape and put it back carefully *before* rejoining the
production tree. Once all this directory stuff is happy we restore user
files and go home.

The only pieces likely to be omitted are print queues (a long history of
that trouble, so just regenerate them) and any files which were open for
writing at the time of tape recording. Hint: for really good recordings,
unload the fancy NLMs and leave only the bare server going, to avoid open
files as much as possible.

That's the general idea. It so happens I have a little story at this
point (skip to the next msg if it's not story time at your place). This
evening I had to interchange two disk drives on an INW 4.11 server, SYS:
and USER:. Don't ask why.
I tape-recorded thoroughly (Backup Exec here) and did an NCOPY *.* /s/e
of all of SYS: to USER:. While Load Install was active I dismounted both
drives and used the Install Volumes option to rename them to SYSOLD: and
SYS:, respectively. Install asked if they could be remounted. Sure, I
said. They mounted, but note that I no longer have NDS on SYS:.

The next step was Install, add NDS to server. It did. It gave me that
"Hey dummy, this name exists already" message, and I said Yes to
continue. Fine; the server is now in the tree, with no replicas on it
yet. But there is an interesting fine point illustrating the name-number
dichotomy. I told Install to add all mounted volumes to NDS, and it added
only the newly named SYS:, and not the SYSOLD: one. Hmmm. Why? Because
SYSOLD: is a new name, but its numerical ident is the same and still
known to the outside world. SYS: (new edition) got a new number from
creating NDS afresh on it, and hence Install had a drive to be added to
NDS.

To clean up I used netadmin (DOS) to remove both volume names from NDS,
and then Load Install, Volumes, add to NDS, again, to get clean name
numbers into the tree. I removed a couple of stray +gibberish "unknown"
items. Finally I used partmgr to place the server back in its normal
replica ring. I did not replay that tape.

The point of the story is that one can force the issue with fresh
hardware, but there is a price to pay. Better is restoring from tape to
the server in a dummy tree and then rebooting. I did the switch the way I
did simply to see what would happen (today's question uppermost in my
mind).

There is no checklist here. Rather, we remember to collect all our NDS
information into a movable pile (floppy and/or tape), and rebuild the
server into a dummy tree but with its old name, hence avoiding
name-number confusion in the production tree. Once that server is
working, remove NDS from it and use the tape/floppy to replace NDS with
the movable pile (which replaces names and numbers with their original
values).
Reboot.
	Joe D.

---------

Date: Wed, 18 Mar 1998 09:35:17 -0800
From: David Nelson
Subject: Re: Changing Hardware

>We have a couple of options. One straightforward one is to use
>DSREPAIR (or DSMAINT) to remove NDS from the server and write its data
>into files we can carry away on floppies etc (getting to be rather a
>large etc these days). That strips the existing drives. To exploit those
>files we build a fresh server into a dummy tree, and thereby avoid
>clobbering the name/number business held in other replicas. Then we
>remove NDS and add the old NDS from those files. Ah ha! Now the numbers
>are carried along and stored on disk, and the rest of the tree thinks
>we are legit.

If he's got two servers to play with, wouldn't it be easier to remove all
replicas from the server to be upgraded (and reassign any Master replicas
to the other server), remove NDS via INSTALL (using a temp object to
store the server references), reinstall NetWare, insert it into the
production tree, and restore the server references?? All of the DSMAINT
operations seem to be an awful lot of trouble to go through unless you're
dealing with a single-server environment!?

---------

Date: Wed, 18 Mar 1998 14:10:50 -0700
From: Joe Doupnik
Subject: Re: Changing Hardware

Following up on my long message on this topic from last night. The
executive summary: keep all NDS material as a bundle, restore it as a
bundle. Now to the details.

After trying this and that approach to replacing drives on INW 4.11, the
only method of handling NDS that is reasonably secure is to use Load
Install (or dsmaint) to first remove NDS to a file on the server, then
copy that material to a safe place on a client, and later restore it to
the server. This gives a complete snapshot of NDS in a portable form.
Then build the new server with the old volume names, but in a dummy tree.
Use Load Install to remove NDS from the server, and continue by loading
it again from the save-file set made above. Reboot.
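The "keep all NDS material as a bundle, restore it as a bundle" rule
amounts to: record exactly what was saved, and refuse to restore anything
less. A minimal Python sketch of that discipline -- the file names are
hypothetical, and a real DSMAINT/INSTALL save set has its own internal
format; this only illustrates the manifest-and-verify idea:

```python
# Sketch of the "bundle" discipline: hash every file at save time,
# then verify the same set byte-for-byte before restoring.
# File names are hypothetical illustrations, not real NDS file names.
import hashlib
import pathlib

def make_manifest(save_dir: str) -> dict:
    """Hash every file in the save set so a partial or altered copy
    is detectable later."""
    return {p.name: hashlib.md5(p.read_bytes()).hexdigest()
            for p in sorted(pathlib.Path(save_dir).iterdir())
            if p.is_file()}

def verify_bundle(save_dir: str, manifest: dict) -> bool:
    """True only if the restore source matches the saved bundle exactly
    -- same files, same contents, nothing missing."""
    return make_manifest(save_dir) == manifest
```

The point is not the hashing itself but the all-or-nothing check:
restoring a partial bundle is how licenses and schema extensions get
lost.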
The reload puts NDS back, so the old disk has a working image as a safety
fallback.

We all recall why a rebuilt server should be installed into a dummy tree
initially: it is so the new server ident and the volume idents created
during a fresh build do not conflict with their real (original)
counterparts in the production tree and destroy them. Once in the dummy
tree we can restore the original NDS material and reboot to join the
original production tree.

There is a gaping hole waiting for us to plunge into with the save-set
versus tape approach. The tape system may require NDS to get to your new
server, and it may require schema extensions and particulars in NDS
*before* it will talk to that server. This is known as Catch-22. Thus I
would not depend upon that tape backup to restore NDS. Use the manual
save-file method too.

These days NDS material won't fit on a floppy. Use Rconsole to see how
large it will be: use the directory option on sys:_netware and look. To
ensure you can move the save-set file to a safe place, log in as a
privileged client before using Install and stay logged in through the
move.

If you don't treat NDS as a bundle then expect to lose things, such as
licenses and schema extensions etc. Not good, and fixing up NDS loose
ends can be a hair-tearing experience when the pressure is high. Keep NDS
in that portable bundle, plus whatever tape recordings you can manage.
	Joe D.

---------

Date: Sat, 21 Mar 1998 21:21:53 -0800
From: Barry Wenger
Subject: Re: Changing Hardware

Joe, thanks for pulling the cork here and letting the concepts take over
instead of trying to second-guess things. Yes, this does work; I have
transformed a clunky clone P-120 into a Compaq 3000 system!!! Everything
works as advertised here. The longest part of this whole deal is the
backups and restores.
It gives one an opportunity to perform some time sync, some partition
manipulation, some NDS work, a little bit of everything -- including a
gut-wrenching feeling when you miss a step late in the night. It's
actually quite easy, once you have done it once. I've been working this
for about a day and a half now, with still one more service to restore
(Unix print services), and then I'm outa here like a flash. I will post
the steps that I went through and some of the pitfalls that I met and
conquered along the way.

------------------------------

Date: Wed, 25 Mar 1998 09:23:44 +0200
From: Mike Glassman - Admin
Subject: HP NetRaid Controller / Netware partition - very long - a fwd

Got this email from my cousin in the States regarding HP servers and
RAID, and thought it would be of interest. It's very long, but important
info if you have one.

HP NetRaid Controller / NetWare partition
(Last modified: 23JUN1997)

Symptom

When doing a manual install, the customer comes up with 80 GB of free
space, which is impossible because he's only got 34 GB available on this
HP RAID running with the mega4xx.dsk driver. The customer had turned on
"virtual sizing", which gives him the capability to add drive space to
the RAID if necessary.

Solution

The customer talked to HP, and they said that with this NetRaid
controller, to allow you to add a drive to the RAID box you have to turn
on "virtual sizing". The problem with this is that if you do a normal
install of NetWare, it automatically creates a NetWare partition -- and
in this case it created an 80 GB NetWare partition with an 80 GB NetWare
SYS volume. After starting the install over and choosing a manual
install, the option to create a NetWare partition still showed 80 GB of
free disk space. Per HP, when virtual sizing is turned on, the available
disk space will show up as the maximum disk space the RAID controller can
handle.
This can create a problem with NetWare, because if the partition is
created larger than the actual physical disk capacity, and a volume is
created for the maximum space of the partition, then once it hits the end
block it will wrap around to the beginning -- which in turn will cause
data corruption and abends.

The following comes from an HP appnote regarding Virtual Sizing in
NetWare with their NetRaid controller:

Theory of Operation

Normally, when a logical drive is created on the NetRAID controller, it
presents this logical drive to the operating system as configured. The
drawback is that operating systems do not support expansion of a logical
drive where the partition and the physical capacity are the same size.
Adding capacity requires downing the server to reconfigure/restore an
existing volume, or adding the new storage space as a new volume. Using
the Capacity Expansion feature allows you to expand an existing volume
without downing the server.

Capacity Expansion is enabled on a per-logical-drive basis. When enabled,
the controller presents to the operating system a logical drive of 80
gigabytes. However, only a part of the 80 gigabyte logical drive exists
as actual physical storage. You configure volumes to use only the actual
physical space, while the virtual space allows room for on-line
expansion.

For example, assume you have 1 logical RAID-5 drive built from 4 physical
hard disk drives of 9 gigabytes each; the result is 27 gigabytes of
actual storage space. If you enable Virtual Sizing for this logical
drive, then the OS will see a logical drive of 80 gigabytes, but only the
first 27 gigabytes are real, while the last 53 gigabytes are virtual.
Under NetWare, you create an 80 gigabyte partition, but within that
partition you create only volume(s) totaling 27 gigabytes or less. Since
there is unused partition space, the physical storage of 27 gigabytes can
be expanded on-line by adding another hard disk drive, while the
partition remains at 80 gigabytes.
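The arithmetic in the appnote's example can be sketched in a few lines of
Python, using only the numbers quoted above (the 80 GB figure is simply
what the controller advertises when Virtual Sizing is on):

```python
# Capacity arithmetic from the appnote's example: 4 x 9 GB drives in
# RAID-5, with the controller advertising an 80 GB logical drive.

def raid5_usable_gb(num_drives: int, drive_gb: int) -> int:
    """RAID-5 spends one drive's worth of space on parity, so usable
    capacity is (n - 1) times the drive size."""
    return (num_drives - 1) * drive_gb

VIRTUAL_GB = 80  # logical drive size presented with Virtual Sizing on

physical = raid5_usable_gb(4, 9)       # 27 GB of real storage
virtual_space = VIRTUAL_GB - physical  # 53 GB virtual, must stay empty

print(physical)       # 27
print(virtual_space)  # 53
```

Volumes must be carved only out of the first 27 GB; the remaining 53 GB
exists purely so the partition never has to be recreated when drives are
added later.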
Precautions

When using the Capacity Expansion feature, it is very important not to
create volumes which exceed the actual physical capacity. You must add up
all volumes which may be using the physical storage space, such as a DOS
volume, the SYS volume, the Hot Fix area, and any user volumes. This is
most important if NetWare will be installed on the disk array (rather
than on a separate disk on an embedded SCSI controller). During
installation, if the total physical capacity is exceeded during volume
creation, a NetWare abend and the loss of the installation will occur. As
long as the physical capacity is not exceeded, the installation will be
successful.

Although undesirable, NetWare will allow you to create volumes into the
virtual space. This is because during volume creation NetWare only looks
at the beginning of the volume, and if there is real storage space there,
the volume will be created. However, when writing to this volume you will
not be able to write beyond the physical limit, and write errors will be
generated when the physical space is filled. Obviously you want to take
care when creating volumes in a partition containing virtual space. Use
the NetRAID Config module to check the actual physical capacity
available, and be sure the total size of the NetWare volumes does not
exceed this value. One other useful measure is to set the capacity alarms
under NetWare so that warnings will be generated when you approach the
limit of a volume.

When using capacity expansion, you should use a single logical drive,
since capacity expansion is controlled on a per-logical-drive basis.
Reconstruction (e.g., adding a drive to an array) can only be done on an
array having a single logical drive. It is also important to plan future
storage expansion into your installation. This will ensure that you can
easily expand capacity without the need for backup/restore operations or
reconfiguration.
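The "add up all volumes" precaution is just a sum check against the real
storage. A Python sketch, with illustrative names and sizes rather than a
real configuration:

```python
# The precaution above in code: every allocation (DOS partition, SYS,
# Hot Fix area, user volumes) must fit inside the real storage, leaving
# the virtual space untouched. Sizes below are illustrative only.

def fits_physical(allocations_mb: dict, physical_mb: int) -> bool:
    """True if the total of all allocations stays within the actual
    physical capacity of the array."""
    return sum(allocations_mb.values()) <= physical_mb

allocations = {
    "DOS partition": 500,
    "SYS":           2048,
    "Hot Fix area":  300,
    "VOL1":          24000,
}

# 27 GB of real storage from the appnote's 4 x 9 GB RAID-5 example:
print(fits_physical(allocations, physical_mb=27 * 1024))  # True
```

Anything that pushes the sum past the physical figure reported by the
NetRAID Config module lands in virtual space, which is exactly the
abend/write-error territory the appnote warns about.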
Setting Up Your Array for Capacity Expansion

For NetWare installations, you will need to plan ahead and consider your
storage use. Since NetWare only permits one NetWare partition per logical
drive, you need to make the NetWare partition the size of the virtual
logical drive in advance to be able to expand that volume. Under NetWare
you cannot grow a partition, but you can add additional segments within
an existing partition. The added segments can be "joined" to be part of
the same volume, or they can be made separate volumes. It does not matter
at this point whether NetWare is already installed or not, assuming
NetWare will reside on a separate drive.

If NetWare must be installed on the disk array, create a single logical
drive with Virtual Sizing enabled. Create a DOS partition of 500 MB to
2 GB for booting. NetWare volumes can then be added after the DOS
partition on the same logical drive. The unused space in the partition
can be used later for capacity expansion. Be sure to follow the
precautions above.

For this example, assume that the OS is installed on a drive connected to
the embedded SCSI channel A. The following steps are necessary to prepare
your array for capacity expansion.

1. Connect Drives to the NetRAID Controller.

Connect physical drives to the NetRAID controller. Example: assume there
are four drives of 4 GB each connected to the controller.

2. Configure the Disk Array.

Configure your DAC and create a logical drive (this can be done in either
NetRAID Assistant or in Express Tools).
If you create multiple arrays (groups of physical drives), you should
know which logical drive(s) will be designated for capacity expansion.
You should only assign one logical drive per array, otherwise the logical
drive will not be reconstructable. Save your configuration. For this
example, assume the 4 x 4 GB drives are configured as a single RAID 5
logical drive. This will produce a logical drive with 12 GB of real
storage capacity.

It is important to initialize your logical drives; if the drives have
been previously configured under an OS, there can sometimes be residual
partition/format information which can subsequently cause
misrepresentation of logical drives under NetWare's Install module.

3. Enable Virtual Sizing.

If not already, enter Express Tools. Select the logical drive to be set
up for capacity expansion by selecting Objects/Logical
Drives/Properties/Virtual Sizing and enabling Virtual Sizing. Virtual
Sizing is enabled on a per-logical-drive basis. Note: clearing a previous
configuration does not reset the Virtual Sizing setting previously used
for a logical drive; use Reset to Factory Defaults in Express Tools to
disable Virtual Sizing for all logical drives, or manually change the
setting.

4. Load NetWare and Create the NetWare Partition.

Load NetWare and load the "Install" module. Select "Disk options", then
"Modify disk partitions ...". Create a NetWare partition on the logical
drive (which has Virtual Sizing enabled); the partition size will be
81,917 MB (80 GB). Save the partition.

5. Create the NetWare Volume.

Select "Volume options" from the "Install" module. Add a segment up to
the actual physical capacity available; 12 GB for this example. (If this
were a NetWare SYS volume, you would want to use a smaller size of 2 GB,
or a size appropriate for your system, and use the balance for a user
volume.) Save and mount the volume.
At this point, the logical drive has a NetWare partition of 80 GB with a
12 GB segment set as a volume. The 12 GB volume is mounted and ready for
use. Be sure not to exceed the actual physical capacity when creating the
12 GB volume, and include other uses such as the Hot Fix area, etc.
Because NetWare will only allow you to create a volume which is no larger
than the actual available physical storage capacity, there is no concern
about writing data into "virtual" storage space and losing it.

The new volume is now ready for use. Assume for this example the volume
is called Vol1. Leave the leftover virtual storage space (81,917 MB minus
12 GB) unused. You can write up to 12 GB of data on the drive; NetWare
will not allow you to write beyond 12 GB and lose any data.

Reconstruction and New Volumes

After using the array created above, assume you are nearing the 12 GB
limit and you want to add another 4 GB drive to the existing array. This
can be done without downing the server or rebooting the system.

6. Add Capacity by Reconstruction.

Load the NetRAID Config utility (the megamgr.nlm module) under NetWare.
Select the "Advanced" menu, then "Reconstruct Logical Drive". Select the
logical drive to reconstruct. The controller scans for new drives. Select
the drive to be added per the screen instructions and enter the
Reconstruct menu. This allows you to add the drive and reconstruct the
4-drive RAID 5 array into a 5-drive RAID 5 array. Reconstruction is done
in the background, so there is no need to down the server. When
reconstruction finishes, the logical drive has 16 GB of available
physical capacity. The original 12 GB volume Vol1 is still intact. The
reconstruction rate is about 80 to 180 MB per minute (depending on drive
performance, system loading, etc.). Count the capacity to be
reconstructed as the number of physical drives participating in the
reconstruction times the drive capacity.

7. Make the Added Capacity Available.

Return to the "Install" module. Select "Volume options". Add a new
segment under the NetWare partition. You can either make the added
capacity a new volume or make it part of the original volume. If made
part of the original volume, the original volume need not be dismounted.
The new segment size must be 4 GB or less, as this is the amount of added
capacity (for this example). Save changes. If the new capacity is part of
an existing volume, it is mounted automatically (if the existing volume
was already mounted). If the new volume is separate, mount the volume.
The new capacity is now available for use. When adding to an existing
volume, be sure not to exceed the actual physical capacity available.
NetWare will only make available up to the actual physical capacity.

Existing Installations Without Virtual Sizing Enabled

If you are already using the NetRAID controller without Virtual Sizing
enabled, but now wish to add capacity to an existing volume, you will be
limited in your options. Here are the likely scenarios when Virtual
Sizing has not been enabled.

Without Reboot

You can only add capacity as a new volume. You will need to add enough
physical drives to create a new array and logical drive using NetRAID
Config. Then, under NetWare Install, you will need to Scan For New
Devices and configure the new logical drive as a new NetWare volume.

With Reboot

If a reboot is acceptable, then the server can be downed and Virtual
Sizing enabled in Express Tools. This assumes that you have not yet used
the logical drive, so that the 80 GB partition can be created. If the
logical drive has already been partitioned and used, then to enable
volume expansion you will need to save the data, enable Virtual Sizing,
then repartition the drive and restore the data. Now the volume can be
expanded on-line whenever required.
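The reconstruction-rate figures in step 6 give a rough time estimate. A
Python sketch using the appnote's counting rule (participating drives
times drive capacity) and its quoted 80-180 MB/min range:

```python
# Rough reconstruction-time estimate from the appnote's figures:
# capacity to reconstruct = participating drives x drive capacity,
# at a rate of roughly 80-180 MB per minute.

def reconstruct_minutes(drives: int, drive_mb: int,
                        rate_mb_per_min: int) -> float:
    """Minutes to reconstruct an array of `drives` disks of
    `drive_mb` MB each at the given rate."""
    return drives * drive_mb / rate_mb_per_min

# Growing the example array to 5 drives of 4 GB (4096 MB) each:
slow = reconstruct_minutes(5, 4096, 80)   # 256.0 minutes (~4.3 hours)
fast = reconstruct_minutes(5, 4096, 180)  # ~113.8 minutes (~1.9 hours)
```

Since reconstruction runs in the background, this is elapsed time rather
than downtime, but it is worth knowing before starting one late in the
evening.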
------------------------------
Date: Thu, 26 Mar 1998 08:22:16 -0600
From: "Gregory Gerard Carter (Mascot)"
Subject: OEM Server Builds VS the big guys.

I would like to know how many of you guys do what I do and build your own servers?? Right down to the motherboard??? I recently built a server with a PII DK440 INTEL motherboard. (INCIDENTALLY, imagine MY surprise when I COULD NOT turn off the peripheral components in the DK440LX BIOS. It seems you can, but they still take up the IRQs and memory areas even when they are off. So, you cannot add two 2940s like I tried to for a duplexed mirrored setup. Yep, if you buy it you are STUCK with the SCSI controller that is on the board. I don't recommend the DK440LX if you are expecting the server to last more than a year.)

I also have a 4.10 server running duplexed ADAPTEC 2940 controllers with removable cartridges, all of which I built myself, and saved a bundle over $13-14K "Enterprise Server" systems. I ended up paying out about 9K.

My logic in not buying a Compaq or an IBM server was that since it isn't proprietary, and most of the cards/boards already have 1 year warranties, or lifetime warranties like my SMCs do, what is the point in paying almost twice the amount when I can get it 2-3% above cost on the market?

Furthermore, given the market dynamics of hardware, every year I usually do a server upgrade on the processor or motherboard. I just down my server and in about 5 minutes have it back up again with minor changes to the PCI slot options and other minor things in my INETCFG or DSK section of my NCF file, with usually a huge whopping performance boost. (Last time I did this I went from a P75 to a P233.) What? A motherboard costs about $280 bucks?

Besides, I find it sort of warm and fuzzy that if my server dumps itself, I can just throw in a new motherboard myself, or take a trip to CompUSA for emergency parts if I have to.

So, do most of you guys do the "Compaq" thing because of politics??
My last boss... (thank GOD and I do mean LAST) made us buy all Compaq server equipment. He said we needed the support in case anyone decided to leave, as not everyone knows how to build a server.

???

Now, God forbid anyone should leave their job because the hours and pay suck, :), but that philosophy sort of wags a big red flag in my mind. If you are a Network Manager and you cannot build a server, why are you the Network Manager? I know Administrators just administrate server systems, but most of the time you don't have to do administration. It just runs! (Well, most of the time....)

In any case, your thoughts?

---------
Date: Thu, 26 Mar 1998 16:32:20 +0000
From: Guy Dawson
Subject: Re: OEM Server Builds VS the big guys.

>I would like to know how many of you guys do what I do and build your
>own servers?? Right down to the motherboard???

We've just bought a shiny new Compaq 3000R server, right down to the expensive rack with a Compaq badge on it!

I do know how to build PCs and servers and have been doing so since 1990. My first computer was a 6809-based board with a hex keypad & LED display in 1981/2.

Why did I do this? We're a medium-size company with a small IT department and I want to be able to go on holiday! When I'm not on holiday, as I live quite close to the office, I certainly could provide H/W support for a home build. However, if I'm on holiday in Scotland, well out of reach of even mobile phones, who's going to pick up the pieces? With a Compaq server even the MD can ring up Compaq and say 'fix it'!

---------
Date: Thu, 26 Mar 1998 17:55:00 -0400
From: "BURTT, PETER (AEL)"
Subject: Re: OEM Server Builds VS the big guys.

>I would like to know how many of you guys do what I do and build your
>own servers?? Right down to the motherboard???
>I built myself and saved a bundle over $13-14K "Enterprise Server"
>systems. I ended up paying out about 9K.
>
>My logic in not buying a Compaq or an IBM server was that since it isn't
>proprietary, and most of the cards/boards already have 1 year warranties,
>or lifetime warranties like my SMCs do, what is the point in paying
>almost twice the amount when I can get it 2-3% above cost on the market?

At my shop getting enough money to buy brand name isn't usually a big problem. I won't bore you with the myriad idiocies of purchasing in government; suffice it to say that we get infrequent, but large, chunks of money that have to be spent in a hurry.

Off the top of my head, the advantages of the current crop of big, expensive servers are (in no particular order):
- huge chassis for lots of disks
- redundant power supplies (this has saved our bacon on 2 occasions!!!)
- huge and plentiful fans in the case
- rack mounting, i.e. a _lot_ of computer power in not much space
- the built-in management stuff can be helpful, e.g. SNMP alerts if the internal thermometer goes above 30 deg Celsius.

We bought a bargain basement hard drive cabinet a couple of years ago, and I would cheerfully strangle each and every member of the design team given the opportunity. The airflow sucks, the wires are too short, the drives and their mounting screws are hard to reach... We'd have been better off building something out of plywood, scrap metal, and baler twine.

>Besides, I find it sort of warm and fuzzy that if my server dumps
>itself, I can just throw in a new motherboard myself, or take a trip to
>CompUSA for emergency parts if I have to.
>
>So, do most of you guys do the "Compaq" thing because of politics?? My
>last boss... (thank GOD and I do mean LAST) made us buy all Compaq server
>equipment. He said we needed the support in case anyone decided to
>leave as not everyone knows how to build a server.
>
>???
>
>Now, God forbid anyone should leave their job because the hours and pay
>suck, :), but that philosophy sort of wags a big red flag in my mind.
>If you are a Network Manager, and you cannot build a server, why are you
>the Network Manager?

Good people are very hard to find, harder to keep, and cost lots of $$$. Lately, the local trend is to hire people with no experience, fresh out of the various computer diploma-mill "schools". These jokers certainly wouldn't be able to build a PC from scratch, regardless of the nice CNE certificate on their wall. (That's not to imply that these guys could look after a Compaq server, either...)

---------
Date: Thu, 26 Mar 1998 07:00:00 MST
From: "Forrest H. Swick"
Subject: Re: OEM Server Builds VS the big guys.

>On a totally unrelated note, does anyone know where I could find
>myself a standalone SNMP thermometer?!?!? Or a really cheap A/D
>converter on an ISA card, so I could build my own?

APC - http://www.apcc.com/
JDR Microdevices - http://www.jdr.com
PC Power and Cooling - http://www.pcpowercooling.com/

---------
Date: Thu, 26 Mar 1998 20:54:51 -0500
From: Israel Forst
Subject: Re: OEM Server Builds VS the big guys.

I thought like you for a short (very short) while. What could Compaq have in there that I can't get? But now, as I look back over the last two years, which clients did I spend the most time scratching my head in front of some weird-looking problem? Which clients do we get our monthly check from and say to ourselves "this has got to be the best deal in the east!"?

You see, the way I look at it is as follows. Rule 1: The Sh*t always hits the fan! Rule 2: When it does, it usually will hit in the same place over and over again until you resolve it. You don't need to be a math genius to know that if you built the server yourself and your drive keeps dismounting, you're clueless (after the standard troubleshooting, of course). Yet if the drives dismount on your Compaq server, probability says they will dismount on 1,000 other servers too in the same week. What does that mean to me?
Well, if Compaq does not have their act together after 2 weeks they will lose a lot of business. Being business smart, they have a very knowledgeable support team. What does that mean to me? LESS DOWN TIME!!!! Which in turn gives me better odds of being hired next year, which is a good thing.

I still do assemble my own servers, but only for non-mission-critical servers, i.e. BorderManager, GWIA, CD-ROM, message gateway, etc. Production servers? No clones anymore! If the client says no, then they stick with the old equipment. In the end they usually see the light when you show them how much money they are saving in the lack of downtime. I mean, one day down can easily cost $10k or more! So where did all that money you saved go? It just ain't worth it.

---------
Date: Fri, 27 Mar 1998 08:39:00 -0500
From: David Weaver
Subject: Re: OEM Server Builds VS the big guys.

We build many servers and workstations and they work great and are reliable. No generic parts are used: Intel motherboards and processors, ATI video, Seagate hard drives, Adaptec controllers, blah blah blah...

After the box is built and the OS is installed, we document the hardware and all the settings and put it into our common contact database in a searchable field. Before a service tech shows up he can get the serial number and print out a hardware inventory and not have any surprises. We'll either have the part in stock or can have it in a day or so from the distributor.

Don't get me wrong, Compaq does have a stable product, but when it comes time to get a spare in a hurry you'd be better off with a clone. And by building boxes and doing the network service/installs we (our company) provide a full service platform. Kinda like one stop shopping.

---------
Date: Fri, 27 Mar 1998 02:57:21 +0100
From: "Arthur B."
Subject: Re: OEM Server Builds VS the big guys.

>I would like to know how many of you guys do what I do and build your
>own servers?? Right down to the motherboard???

I do.
For starters, because I can't cope with the response time the big guys give. Waiting 4 hours for someone to show up with replacement parts is just too long. And what they charge for that "service" doesn't sound cost effective to me. Standard is "next business day", I believe.

And I don't need one big case, one server that does it all. I'd rather have several servers so I can make use of the NDS replication feature, load balance between servers, provide fail-over features and create specialized servers that perform specific tasks. Thus creating an environment with decreased user downtime, whopping performance and more stability, because the chance of conflicting software is minimized. If I were to fulfil all those wishes with 'big guy' systems my budget would be spent rapidly. I'd rather pay less for a server, get more performance and still work with top-brand hardware components only. With the money saved I can obtain a complete test environment, which is also my spare stock if need be.

Many, if not all, of the features 'big guy' servers have can be purchased and put to use. If desired, that is. So no need to buy 'big guy' because of that.

Still, that leaves the valid point that the company must have a "fix it yourself" admin around. That's very true. Perhaps that's why companies should make their admin redundant, given the vital position most one-and-only admins usually have. If that's not an option, there's always the option to talk with a nearby service provider. Just in case the non-redundant admin had the misfortune to walk under a car or is otherwise unavailable just when the big-big server decided to start smoking. And the 'big guy' serviceman did replace the defective parts after 6 hours of downtime but now refuses or isn't able to perform the restore. Maybe "next working day" it must be?

---------
Date: Fri, 27 Mar 1998 14:14:38 -0500
From: Jeffrey Migliaro
Subject: Re: OEM Server Builds VS the big guys.

I agree [with Israel Forst].
Also, Compaq servers are NetWare tested and certified. A generic box with generic parts most likely is not NetWare certified. There is a stringent testing process to make hardware NetWare tested and approved.

---------
Date: Fri, 27 Mar 1998 07:00:00 MST
From: "Forrest H. Swick"
Subject: Re: OEM Server Builds VS the big guys.

Here's a good rule of thumb. If your budget can afford it, buy HP or Compaq. They have support; use it when necessary. If you can build a good system, know what you are doing and plan on being around to "babysit", then build it. If you build it, get the best parts; don't put El Cheapo parts in. Else you will pay. If you use/make the best boxes there will be few hardware problems.

---------
Date: Sat, 28 Mar 1998 07:19:51 +0800
From: "Jon L. Miller"
Subject: Re: server issues

Seems to me you need to have the correct "big guy" come in the first place. As a general practice, most hardware repair people will not restore anything software related, whereas a software/hardware engineer will. Also, I always advise my clients to make sure the repair engineer is certified in the hardware/software that they are using.

As for the type of servers, the main issues surrounding the use of "true server" vs "homemade" are the components, the reliability testing, and the internal design of the board and components. The data throughput is not the same, nor the memory path. There is a vast difference between the two. If performance isn't important then buy a clone; on the other hand, if performance and reliability are, then stay with the "big boys". Just have your buyer stock the most-replaced components or do it yourself.

---------
Date: Fri, 27 Mar 1998 22:52:31 -0500
From: Shawn Connelly
Subject: Re: OEM Server Builds VS the big guys.

Since Novell is a very robust NOS and I have an annual budget of just under 80k, the decision to build my own servers was easy.
Of course, I used only the best components, such as a genuine Intel motherboard (Advanced EV), TI ECC memory and a DPT RAID controller with Seagate 9GB drives. This is the key to success: use quality components that you understand.

Last summer we suffered through a series of long (and short-rapid) power outages that taxed the UPS to death; as a result I lost a motherboard. Because I had a spare motherboard, I brought the server up in less than two hours. I could not have done this with a name brand unless I had a completely redundant system. Incidentally, in three years this was the only crash I've experienced with NW 4.1!!! Yes, that simple Pentium 166 unit churns through over 4,550,058,140 packets and 48,650,592,219 bytes per year.

NT is another story. I think one is better off with an MS certified box because the NOS (um... just OS that I've OD'd on) is so damn finicky.

---------
Date: Fri, 27 Mar 1998 14:10:23 -0700
From: Joe Doupnik
Subject: Re: OEM Server Builds VS the big guys.

>I agree. Also, Compaq servers are Netware tested and certified. A generic
>box with generic parts most likely is not Netware certified. There is a
>stringent testing process to make hardware Netware tested and approved.
--------
My, there certainly is a lot of discussion of the obvious in this thread. Buying brand names gives service fallbacks. It does not make technical blockages disappear. It does not necessarily save any money. Building from parts can be, and often is, very successful when done carefully (engineering), but can be a disaster without the attention to detail and testing. Big brand names often do good engineering, but not always and not without penalty.

Novell YES certification applies only to that particular hardware combination with the specified software; it does not apply in every other circumstance. Alas, "every other circumstance" occurs when one or more items change. In short, neither Novell nor the original vendor is guaranteeing that problems will not arise.
The choice between these approaches often has more to do with the personality of the selector than with technical matters. So what else is new?
 Joe D.

---------
Date: Sat, 28 Mar 1998 21:16:19 +0100
From: "Arthur B."
Subject: Re: OEM Server Builds VS the big guys.

Israel Forst wrote:
>[pro "big guys" email...]

Typical. For me it went the other way around. I started with name brands only and paid hell. Then I started to build my own systems from the ground up and found that downtime and TCO went down.

I know what kind of service Compaq provides (at least in this part of the world) and what kind of service all the other name brands provide. To say the least, I'm not impressed. One thing I don't like is that they want to do their things during business hours. I also know what they dare say to non-technical customers that don't have the first clue, and how differently they respond if a techie calls them about the exact same thing.

In my job, then and now, I encounter many different systems at many different sites and sizes. In terms of uptime and TCO, the best results come from sites that don't make use of name brands and have a capable admin around. However, the sites that make use of mail-order systems and/or have incapable "admins" around have the worst results. As Joe D. pointed out, what makes the difference is the person building the machines, and what kind of experience and attitude that person has.

Turning systems inside out, tuning and testing them, has given me greater insight and knowledge about what makes systems work and which components really make the difference. As a result my troubleshooting skills have improved considerably, and identifying bottlenecks has become easier ever since. The overall end result: for less money, bigger, better and far more powerful systems; greatly increased insight, knowledge and experience in all areas; far less downtime; a severe decrease in troubleshooting time...
I guess the message is that if you know what you're doing, you're better off building your own system. But before you know what you're doing you need to stick your nose inside machines and think and test, think and test, think and test... test it, prove it, do it. Try it with a workstation first. Name brand versus home-made. You might be surprised.

------------------------------
Date: Fri, 27 Mar 1998 14:25:43 -0700
From: Joe Doupnik
Subject: On I2O support by NetWare

>>Sometime back, Novell promised support for I2O in 4.11, via add-in
>>modules. However, I have been unable to find anything in their kb,
>>with the exception of the aforementioned press release.
>>
>>Does anyone have any concrete information on this?
>>
>>I'm putting together a new kick-butt server, and I'll spring for an
>>I2O motherboard if Novell supports it, or will be supporting it soon.
>>Otherwise, the difference in motherboard cost will pay for a second
>>CPU. It seems strange to me that hot plug and play is supported, but
>>not a word is said about I2O.
--------------
Novell is a fundamental partner in the I2O consortium, and an active advocate of it. They have and will offer I2O support in NetWare 5 and in NW 4.11, but not this month. Please do not consider I2O either inexpensive (it is just the opposite) or a cure-all (it is not). I2O requires an I/O processor (say the typical Intel i960), plus software. This week Novell demonstrated I2O solutions in action before our very noses, at Brainshare. They made formal presentations too.

I2O is the way things will be in the future, period, given plenty of time to transition. Intel corporate has stated that I2O is vital to their forthcoming IA64 high performance server suite, and Intel is another very active partner in the consortium. Supporting I2O in hardware will require new hardware: motherboards. Peripherals can be, but really shouldn't be, regular PCI boards.
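As a conceptual sketch only (this is not the I2O messaging API, just a queue-based analogy in Python): the host CPU posts request descriptors and a dedicated I/O processor services them, so the host never touches the device-level driver work.

```python
import queue
import threading

# Conceptual analogy of I2O-style offload. The "iop" thread stands in
# for the dedicated I/O processor (e.g. an i960); the main thread
# stands in for the host CPU, which only exchanges message descriptors.
requests = queue.Queue()
completions = queue.Queue()

def iop_worker():
    # The I/O processor drains request descriptors and performs the
    # device-level work, posting completion messages back.
    while True:
        req = requests.get()
        if req is None:          # shutdown marker
            break
        completions.put(("done", req))

iop = threading.Thread(target=iop_worker)
iop.start()

# Host CPU: post requests, then stay free for other work until the
# completions arrive.
for block in range(3):
    requests.put(("read", block))

results = [completions.get() for _ in range(3)]
requests.put(None)
iop.join()
print(results)
```

The point of the analogy is the split: the host's cache only ever sees the short descriptor exchange, while the per-byte, per-interrupt driver detail stays on the I/O processor's side.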
As an example, Novell is getting nearly the same performance from an Intel 100B client-style Ethernet board in a Supermicro motherboard containing an i960 I2O processor as one gets from the Intel 100/Server board containing its own personal i960.

What does I2O do for us? It offloads driver details to a dedicated I/O processor system (LAN adapters, disk adapters, serial comms, etc). This frees the main CPU for other work. More importantly, it greatly reduces thrashing of that CPU's cache, which would otherwise cause it to stall fetching instructions and data. This effect is more pronounced with multiple CPUs. An I2O motherboard has the supporting hardware, so the peripherals can be lower cost units.

How do I know this much? First, I read the available I2O material. I am about to join that consortium. I talk with the Novell folks who do the I2O work. I paid attention at Brainshare. I'll know more once I run material.
 Joe D.

---------
Date: Sun, 29 Mar 1998 15:13:33 -0500
From: Israel Forst
Subject: Re: OEM Server Builds VS the big guys.

I guess the answer can depend on who is asking it. Most replies that were pro clone servers made a very valid point: replacement parts for clone servers are a lot easier to come by than for a Compaq server. No argument can be made against that. The best possible policy from an OEM can be next day, while we can run into CompUSA and purchase a new controller in a flash.

That argument can be made only for lower-end servers which have parts that are easily purchased from a reseller. But if you are putting together a high-end server, with powerful RAID controllers or high-end NICs, you can't purchase those parts from a reseller down the block, so you have to order them from a catalog. What do you get? The same next day. So you'll tell me that you keep a stock of spare parts in your office; well, that will cost you a pretty penny, and you need a set of spares for each vintage server you support.
In our case that would be 30 to 40 different vintages. So you say, standardize your clones and you only have to keep one set. Well, in that case, standardize your Compaqs and purchase one set of spares.

The issues I have had with servers were sometimes simply bad NICs or failing drives, but usually were more like unaccounted-for high utilization, spontaneous broken mirrors, deactivating drives, unaccounted-for high concurrent disk requests, unexplainable abends, etc. These issues are not resolved by replacement parts. They can and will hound you until you solve them. All the next-day service and replacement parts will not save you; decent support might. So to recap: yes, a competent and knowledgeable admin is crucial, but IMHO, support is the key.

I guess my perspective comes from my background. Before my company made the transition to integrator, we were Technology Consultants. You paid us to tell you where you needed improvement as it relates to your IS infrastructure. We didn't look at your budgetary constraints. All we said is: this infrastructure is not adequate; here is what we think you should be purchasing. If they were not ready to spend money, they didn't call us. We would also implement and support the recommendations we made, but our outlook was one of a Technology Consultant. So we have found that in the long run our clients were better served by the support that stands behind a more expensive server, rather than the money saved by clones. If you are on site in a small shop with a handful of low-end servers, yes, you can easily swap parts, but from the perspective of an integrator, it's just not worth it.

---------
Date: Mon, 30 Mar 1998 01:04:58 +0200
From: "Arthur B."
Subject: Re: OEM Server Builds VS the big guys.

Israel Forst wrote:
>I guess the answer can depend on who is asking it.

Interesting. Very interesting. Almost identical experiences. Different outcome. One might ask why.
I also work in the areas of consultant, integrator, auditor, troubleshooter, administrator, engineer, designer, maintenance concepts and management, the lot. In fact, "total solutions".

My main problem with name brand systems is that by standard they have next-day service, whereas an in-house spare stock doesn't have that problem. Furthermore, I don't work with clones. I make use of the same name brand components the name brand systems make use of, or better. Which I call home-made, not clone. Clone is what you get when you order in complete cheap systems that spell disaster. Home-made is what you get when you put a knowledgeable person to work.

The need for high-end servers doesn't arise that much because we like to scale down servers and divide the workload amongst several not-that-big-nor-complex servers. Aka applying KISS (Keep It Short and Simple) to the tech environment. Which greatly reduces SPOF (Single Point Of Failure), server downtime and user downtime (two very separate things!), TCO and budgetary constraints.

The spare parts are what we call our test servers / test environment, which helps greatly reduce operational downtime by testing new stuff before implementing it. And helps get new admins on track, and much, much more. So our need for next-day delivery of spares isn't that great (we can always take apart the test environment if need be). Besides, we couldn't afford it. Too much server downtime in case of next-day delivery. It's either now or never is what we tell our suppliers. DHL if you have to, but deliver now or find your business elsewhere. Very rarely needed though; today's hardware components can endure a lot.

In cases where we do find ourselves needing that special spare part (some of our customers have their own way of utilizing networks) it's just a matter of asking "how much". As we all know, the cost of hardware is just a small part of the TCO. So why bother over pennies?
If it only costs $200 more to get the part this day, then please do. $200 is nothing compared to an extra 4 hours of total user downtime.

The out-of-the-ordinary problems you mentioned (like unaccounted-for high utilization and such) are, IMO, mostly due to bad driver software, not the hardware. Which is rarely encountered when you use the software drivers that come with purchasing name brand components only. In cases where downloading the latest patch doesn't solve an unforeseen problem, I simply switch name brand component and am done with it. Try that with a name brand system. Also, before placing any kind of system into the operational environment, stress-test that system first.

All in all, seeing how this thread is being replied to, I think what an admin thinks is best depends on what kind of experiences that admin has had in the past. I say, today, try both approaches to test what meets your unique demands today. But never ever put your operational environment at risk. Because one thing is for certain: the road towards superior home-made is full of traps and one must be prepared to learn the hard way. If, and only when, that track becomes past tense does one become an admin with more overall insight into almost every aspect that concerns the systems they are administrating. Thus becoming a better admin, IMO.

However, the best admin in my eyes is still the admin that knows where to draw the line. Putting the company interest before his or her own interest. Meaning: know your limits. If you don't have the time, budget, management support and/or insight to start learning "home-made", the better admin will decide that that is where to draw the line. And stick to name brand systems. Because that's the next best thing after knowledgeable home-made, IMO.

That being said, if you, as an admin, do feel much more confident working with name brand systems then please do so. The one thing your company doesn't need is their admin feeling uneasy about the system components.
Otherwise, start testing and see (prove!) if you can make a difference for the better.

IMHO support is just part of the key. Insight, hands-on practice and knowledge are the greater part of the key. Because those factors will help you seek out the relevant parts of the support given that work in your unique situation. And help you deal with helpdesks and thus get what you need. Building your own systems from scratch will force you to look at all aspects of what a system is made of. And thus give you greater insight. However, err in that field and you will find yourself in a world of hurt as soon as your actions hurt the operational environment, or whenever you decide to try this without consulting management first.

Closing words: do start in the world of home-made. But start slow, easy and without risks. Learn while you're going. One step at a time. Never ever put your operational environment at risk. Be doubly sure before taking the next step. Rewards can be great. Penalties even higher.

---------
Date: Sun, 29 Mar 1998 01:02:34 -0800
From: Mike Neal
Subject: Re: OEM Server Builds VS the big guys.

We have a practice of building generic servers with brand name components. We have spares for everything, including a spare server that serves as a test bed. Right now, a P200 Intel motherboard, 64MB, mirrored 4GB SCSI, Adaptec 2940UW, 3C905 and Enlite case runs about $1300.

We could run everything off of a big (expensive) name brand server, but for less we have six separate servers for:
1. Arcserve + Faxserve
2. File + print services
3. GroupWise
4. Managewise + DHCP
5. Callware
6. Test bed / spare

If something dies, it is a smaller part of the overall system. We can replace a component or a server in a few hours. We have complete control over our systems. This approach requires a knowledgeable H/W and NOS admin (we have two) and top management's support for our philosophy of architecture. Of course this is not a good approach for an integrator's client.
------------------------------
Date: Thu, 19 Mar 1998 07:27:24 EST
From: "Robert L. Herron"
Subject: Re: 100BaseT and Patch Panels

>>We have a 100BaseT network, all the way to the user's desktop. Cables
>>terminate in comms closets at a punch-down block, with UTP from the punch
>>down block terminating in an RJ45 connector. There are no patch panels;
>>cables plug directly into the hubs.
>>
>Category 5 equipment is rated for 100Mb/s transmission; as long as
>the patch panel is installed according to Cat5 specs, using Cat5
>equipment, your 100Mb rating is safe. Best to get with an installer
>that is familiar with Cat5 (obviously not the original company) and
>go over it.

Another thing to remember: CAT 5 compliance is more than just equipment. The compliance is jeopardized if the pairs in the cable are untwisted more than 1/2 inch at the termination points. Also, running too close to fluorescent light fixtures, motors, and other power cables decreases cable performance. Since the CAT 5 spec does not forbid patch cables, I would be skeptical of the original contractor's cabling job. Like Nathan said, get independent verification.

------------------------------
Date: Tue, 24 Mar 1998 01:39:14 -0500
From: Derrick E Barbour
Subject: Re: I2O

>Sometime back, Novell promised support for I2O in 4.11, via add-in
>modules. However, I have been unable to find anything in their kb,
>with the exception of the aforementioned press release.
>
>Does anyone have any concrete information on this?
>
>I'm putting together a new kick-butt server, and I'll spring for an
>I2O motherboard if Novell supports it, or will be supporting it soon.
>Otherwise, the difference in motherboard cost will pay for a second
>CPU. It seems strange to me that hot plug and play is supported, but
>not a word is said about I2O.

Go to http://developer.novell.com .....
Select "Yes Bulletins Certification Info" --> Search Bulletins --> Field Level Search --> Search Full Document Text --> enter "i2o" without the quotes in that search box... You will get a list of 7 or 8 servers which are already "YES" Program Certified as I2O systems.

You don't find this info if you search the Knowledgebase at "support.novell.com" or at "www.novell.com". The search has to be done at the "developer.novell.com" site. That is where all the product certification information is kept.

------------------------------
Date: Tue, 24 Mar 1998 17:10:40 +0100
From: Jan Chochola
Subject: Re: waiting...port is inactive

>>I may be mistaken, but I think NetWare + standard or even
>>newer (with TRX buffers that work, 16550) COM port + high
>>speed is not a recommended combination. Thanks to little
>>'in silicon' processing (simple little buffers) there are
>>lots of serial comm interrupts and CPU cycles.
>
>I'm not sure if I understand you correctly, please forgive me if I
>don't. The 16550 is better for high speed communications because it
>includes a 16-byte buffer, which results in fewer interrupts (the
>old chips signalled an interrupt for every byte because the buffer
>size was 1 byte).
>
>>New motherboards often come with on-board serial ports
>>that can be configured in AUTO mode (BIOS setup). At least
>>for a server this should be avoided and the ports configured
>>permanently (statically).
>
>Agreed.

Almost true. There's a 14-byte receive buffer and a 16-byte transmit one. Interrupt thresholds can be set to several levels (I don't remember the watermarks exactly). There was the 16450, which had a silicon bug, and therefore the buffers could not be used reliably. (If not mistaken, there was a 450A or B, which should work.) So, the 16550 is much better than the older chips and can reduce the interrupt count significantly. But buffer data must still be transferred by the interrupt service routine byte by byte.
There's a timeout (I think a word-long gap on the line) that may cause an interrupt to retrieve the buffered data. The odd thing may come when one sets a high speed between modem and computer and the modem connects to the remote one at a relatively low speed (115k vs 28.8k, e.g.): an Rx interrupt with each received byte. I think even for today's fast processors there's a bottleneck in the slow ISA bus to which COM ports are usually attached (even on-board). And I think all this is also one of the reasons Digiboard and similar boards exist. IMHO.

---------
Date: Tue, 24 Mar 1998 16:26:42 +0000
From: Guy Dawson
Subject: Re: waiting...port is inactive

>was 16450, which had silicon bug and therefore buffers
>could not be used reliably.

This was the original single-byte buffer chip. The 16550 has 16 bytes, and the early ones were buggy. There are now 16C650, 16C750 and even 16C850 chips with 32, 64 and 128 byte buffers. There has been some fragmentation in the market, however, and these later chips don't all work in exactly the same way.

>I think even for today's fast processors there's a bottleneck in slow
>ISA bus to which COM ports are usually attached (even on-board).
>And I think all this is also one of the reasons Digiboard
>and similar boards exist. IMHO.

Ah, but... many of the high speed cards are ISA cards! An ISA Ethernet card (with the right OS and drivers) is quite capable of saturating a 10Mb/s Ethernet. An ISA serial card should be able to run at 1 or 2Mb/s given the extra interrupt overhead.

------------------------------
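The interrupt arithmetic behind this exchange is easy to sketch. Assuming 8N1 framing (start + 8 data + stop = 10 bits per byte on the wire) and the 16550's deepest receive trigger level of 14 bytes:

```python
# Interrupts per second for a serial link at 115,200 bit/s, assuming
# 8N1 framing, i.e. 10 bits per byte on the wire.
baud = 115200
bytes_per_sec = baud // 10            # 11,520 bytes/s

# Pre-FIFO UARTs: one interrupt per received byte.
irqs_single_byte = bytes_per_sec      # 11,520 interrupts/s

# 16550 with the receive trigger set to 14 bytes: roughly one
# interrupt per 14 bytes (ignoring timeout interrupts on idle gaps).
irqs_fifo = bytes_per_sec / 14        # about 823 interrupts/s

print(irqs_single_byte, round(irqs_fifo))
```

This is why the FIFO matters so much at high port speeds, and also why the thread's point stands: even with the FIFO, the ISR still moves every byte over the bus itself, so a dumb UART can never fully match an intelligent board like a Digiboard.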