------------------------------------------------------------------------
NOV-HDW1.DOC -- 19960308 -- Email thread on NetWare file server hardware
------------------------------------------------------------------------
Feel free to add or edit this document and then email it back to
faq@jelyon.com

Date: Fri, 7 Oct 1994 14:47:44 CDT
From: James Munnerlyn
Subject: Re: Network Cable

> The only other way I would know of to do this is to use an ohm meter,
> if you can get to both ends. That way you could test continuity.
> Someone else may know of another way but this is the only way that I
> know of without a cable tester.

There are three conditions you can look for with the RG58 type cabling.

1) Properly terminated cable. Read the resistance of the terminator.
Place the terminator at one end of the cable, then read the resistance
across the cable at the opposite end. If you get a reading close to the
value of the terminator, your cable is fine. If you get 0 or infinite
resistance, you have a short or an open circuit; perform tests 2 and 3.

2) Short one end of the cable, measure the opposite end. You should get
0 ohms, or less than 1 ohm. Cable appears fine; perform test 3. If you
do not read 0 ohms, there is a break in your cable. You can also use the
continuity test here: short one end and read continuity on the other.

3) Leave the terminator off the opposite end, leaving the segment open.
Measure the resistance from the other end. If it reads LO or infinite,
the connection is open. This is what you should read. If it reads 0
ohms, you have a short. The infinite or LO reading depends on what type
of meter you are using.

You must make sure all workstations are turned off, so that your meter
does not read ambiguous signals from the NICs on the workstations
producing voltages that your meter would interpret as a resistance
change.

Causes of open and shorted segments: faulty installation of a connector.
It is shorted, or the cable has stretched and pulled out the center
conductor.

---------

Date: Fri, 7 Oct 1994 21:20:29 -0600
From: Joe Doupnik
Subject: Re: Network Cable

>I don't think it's going to be easy unless you have a Time Domain
>Reflectometer (TDR) which costs $$$$.
>
>> I have a question on network cable. Is there a test that can be done
>> to test the network cable? I need to know if there is a break or fault
>> in the cable. I keep getting errors or the system will not log into the
>> network. Can you use a meter, anything other than buying some expensive
>> network cable device tester?
-----------
Much can be done with a US$19.95 volt/ohm meter. If your wiring is coax
(you never did tell us coax or twisted pair) measure the resistance at a
Tee connector. 25-28 Ohms is ok, even though the reading will bounce
from signal traffic. Higher is an open, lower is a short. Move along the
cable to find the bad spot. Wiggle wires and connectors to find flaky
connections. This can be done on a live network. Ethernet signals swing
between 0 and -2V.

For twisted pair measure a wire in a cable, pin to same pin, for
continuity. Usually just recrimping solves the problems in the short
term.

While TDRs are nifty, one needs to know what to look for. Also I lack a
formal TDR so I just make one from a scope with a calibrator test point.
Costs nothing. If you aren't comfortable with a scope then don't bother.
	Joe D.
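---------

[Editor's sketch of the arithmetic behind the readings above. This is
not from the original thread; it assumes only the standard 50 ohm
thin-Ethernet terminators. The 25-28 ohm figure in Joe D.'s reply is
simply two terminators in parallel, plus a little cable and connector
resistance.]

    # Why a healthy, doubly-terminated thin-Ethernet run reads about
    # 25 ohms at a Tee: both 50 ohm terminators are in parallel as
    # seen from any point along the cable.

    def parallel(*resistances):
        """DC resistance of resistors wired in parallel."""
        return 1.0 / sum(1.0 / r for r in resistances)

    TERMINATOR = 50.0  # ohms, standard 10BASE2 terminator

    print(parallel(TERMINATOR, TERMINATOR))  # 25.0 ohms
    # Cable and connections add "a couple" more ohms, giving the
    # quoted 25-28 ohm window. Test 1 above (one terminator, far end,
    # stations off) should instead read close to 50 ohms.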
------------------------------

Date: Tue, 10 Oct 1995 22:10:27 -0700
From: Virendra Rode
Subject: Re: Null cable

>I need the correct pin to pin connections to build a UTP cable to
>connect a server and a workstation directly together (no HUB) for
>training. Thanks in advance.

Ethernet UTP uses pins 1, 2, 3 and 6. 1 and 2 are for transmit, 3 and 6
are for receive. Connect 1 and 2 to 3 and 6 on the other end, and 3 and
6 to 1 and 2 on the other end. Let me put it in diagram form.

        C r o s s o v e r   C a b l e

    Tx+ 1 ----------------- 3 Rx+
    Tx- 2 ----------------- 6 Rx-
    Rx+ 3 ----------------- 1 Tx+
    Rx- 6 ----------------- 2 Tx-

[Thanks Joe Mathew and Virendra Kumar Rode]

---------

From: "Daryl Banttari"
Date: Sun, 29 Oct 1995 22:42:24 +0000
Subject: TP Ethernet "Crossover" Cable

An Ethernet "Crossover" or "Null-Modem" Cable

How you hook up pins 4, 5, 7, and 8 is moot. This is useful if you want
to connect a single workstation to a test server, for example. The other
possibility is to get 10Base2 (Ethernet Coax) cards, but those seem to
be getting rare these days...

The other semi-popular use is for daisy-chaining Ethernet hubs, but most
hubs I've seen lately have a crossover switch on one port to help you
with the process.

Be sure to label the cable, though :) Don't wanna try to reuse these as
"regular" patch cables.

------------------------------

Date: Tue, 21 Nov 1995 14:04:04 -0600
From: Joe Doupnik
Subject: Re: DX4-100 or P-100?

>Mansour Ahmadian wrote....
>> Our novell server is a DX2-66 with 16 M RAM and 2 * 1G HDD.
>>I plan to buy a new server with the following config.
>>DX4-100 or P-100 with 64M RAM and 2 * 4G HDD
>>
>>What is your suggestion about this configuration?
>>somebody told me that P-100 isn't good for server what do you think?
>>please note that this server must work for several months
>>continuously.
>>
>Install bus-mastering network and disk-controller cards at the expense
>of CPU power.
>
>Richard Letts
---------
Richard is so right. One of my NW 3.12 servers is a 486-33 EISA bus
machine using bus master NE-3200 Ethernet boards and a bus master
Adaptec 2742A SCSI board, 2GB, 32MB. It services four dozen Pentium-90
clients banging away with everything in the arsenal (inc Win 3.1 and
Win 95). The bottleneck had been time on the wire until I trifurcated
the feeders. The server mopes around looking for things to do, poor
thing; should call it Marvin. That's 48 P-90's hammering one 486-33, and
the latter is the coolest running of the bunch. Buses and bus mastering
boards count for a great deal.

The problem with tempting Pentium servers is the bus. Most have ISA and
PCI slots. ISA is what we wish to avoid. PCI boards are highly
problematic, still, unfortunately, and your boards WILL vary
considerably. EISA bus Pentium machines are almost entirely the dual
Pentium variety, one of which is now masquerading as netlab1.usu.edu
with UnixWare. A multi-cpu version of NW will arrive shortly, but we
don't need that kind of firepower serving clients (but it's terrific for
backend database servers). It turns out that ASUS (Taiwan) makes an EISA
bus single/dual Pentium board and it's rather nice (netlab1 has one),
but even so the cost of the board is steep; PCI/E-P54NP4 is the model
number before anyone asks.

Busing is where we should invest our server money. Conservative
managers, myself included, lag behind and prefer EISA for performance.
Later we'll move to PCI, after components have stabilised.
The cpu could be anything from a 386 on up, and clients would hardly
know the difference. What that implies is Novell's server code is very
tight indeed, wasting very few cpu cycles in either drivers or in the
kernel. Drew Major, chief architect, loves to talk about optimizing away
a couple of instructions in large bodies of code.
	Joe D.

------------------------------

Date: Tue, 14 Nov 1995 19:00:47 -0600
From: Joe Doupnik
Subject: Re: Impacted server

>I have a 250 user server, 486/66 EISA with 64MB of RAM and a 3GB drive.
>It supports Mac services and Discport with three nodes. The network
>card on the server is a 3COM 3c579.

I do believe, from comments by others, that the 3C579 is not a bus
mastering board. That's the first place to look for congestion.

>Here are some basic server statistics (numbers are rounded):
>    Permanent Memory Pool = 3MB (5%)
>    Alloc Memory Pool = 3MB (5%)
>    Cache Buffer = 47MB (72%)
>    Cache Movable Memory = 9MB (14%)
>    Cache Non-Movable Memory = 3MB (4%)

All look just fine or better. You might also look at the number of
processes reported by Monitor. If it is >> 10 then that suggests the
congestion is within the server and probably with the motherboard or a
peripheral board. The number of processes grows if more work appears
than can be finished by the current processes. For this I am discussing
NW 3.

As but one example of process count as an indicator, one of my NW 3.12
servers is a 486-33 EISA machine. For reasons no longer valid I had to
increase the number of wait states used on bus i/o transfers. The number
of processes was about 13-15 after a day or so. Recently I reduced the
wait state value to one notch more than minimum and the number of
processes dropped to 10. This weekend the last notch will be removed for
another examination. This server has four Ethernet segments and four
directly attached printers, and all seem to be used heavily all the
time.

>We are running most of the software from the server, including Netscape,
>Word, Excel, PageMaker, ClarisWorks. It takes about 20 seconds to get
>into any of these applications. The maximum number of simultaneous users
>has been 116. No more than 45 calls on all applications, with no more
>than 13 users on a single application at that time, have occurred within
>a 30 minute period. I am in the process of moving non-metered applications
>to individual hard drives.
>
>One of the tech support people came by with a sniffer and claimed that the
>server was heavily impacted, that it could not handle the traffic it was
>receiving (the CD-ROM's were not in use at the time). Here's his report:
>
>> Sniffer provided these results:
>>
>> 1) Athena is usually in a state of overload. This includes, but may
>>not be limited to, applications getting too many requests at one time,
>>and the network running as high as 80% load on peaks. 60-70% of the
>>load is Novell services, 20-25% is Appletalk services and the balance
>>is TCP.

Unfortunately that's not much information. The "overload" condition can
be revealed by monitoring particular IPX packets from the server, which
say "Hmmm. You already requested this and I'm still busy, please hold."
The network "load" probably means percentage bandwidth used, and 90% is
still a very nice value. Thus bandwidth is not a heavy weight. A far
better indicator is how many packets are dropped from excessive
collisions (after 16 retries by the lan adapter).
What can be inferred is that when traffic is intense, in terms of
packets per second not bandwidth used, then weak lan adapters fall
behind and drop packets, in addition to perhaps using lots of server cpu
cycles trying to stay abreast of the traffic. How much is "intense"? A
rough value is > 1000 pkts/sec. The effort to deal with a packet is the
same no matter what the packet length.

>The period of time compared to what is described in the second paragraph.
>According to the tracking program I am using, whose shortest polling time
>is one second, the CPU utilization averages 12%, and the figures for the
>LAN (packets and Kbytes per poll) and the disk activity (Read/Write)
>follow a similar percentage.
>
>We have not had problems with the server recently, although for a while
>everything was very slow (1 minute+ for loading software). Should I assume
>potential problems (utilization is bound to increase)? Where would the
>problem most likely reside--network card or CPU? Is a 66MHz machine really
>that slow? Or am I dealing here principally with line congestion (we plan
>to install some switches)? Any suggestions?

A 486-66 is a heck of a powerful server. But the peripherals need to be
swift too, and my interpretation is they aren't on your server. I
recommend you look at NE-3200 Ethernet adapters as the best there is
under heavy load. The disk system should be high quality SCSI, not IDE.
	Joe D.

------------------------------

Date: Mon, 27 Nov 1995 22:37:37 GMT
From: "Stephen M. Dunn"
Subject: Re: DX4-100 or P-100?

$ Our novell server is a DX2-66 with 16 M RAM and 2 * 1G HDD.
$I plan to buy a new server with the following config.
$DX4-100 or P-100 with 64M RAM and 2 * 4G HDD
$
$What is your suggestion about this configuration?
$somebody told me that P-100 isn't good for server what do you think?

There's nothing inherently wrong with a Pentium in a fileserver. But if
it's just a fileserver, you're spending money on the wrong thing. You
should be looking for plenty of RAM, a good I/O bus (consider both EISA
and PCI), and good I/O cards (that means 32-bit, busmastering host
adapter and network cards). If you have a fixed budget, then the price
difference between a P/100 and a DX4/100 should be invested in better
I/O cards and/or more memory rather than a faster CPU.

How would you characterize the performance of your present server? What
are the I/O cards in it? If the I/O cards are reasonably good, and if
the performance is reasonably good, then there's no reason to put in a
Pentium. If the I/O cards aren't much good, it's hard to draw
conclusions other than that your new server should use better ones.

------------------------------

Date: Thu, 7 Dec 1995 15:35:06 -0600
From: Joe Doupnik
Subject: Re: Opinions/Questions? on switch to 10baseT & 100Mbs??

>Q1: Several of my books (some by Novell Press) are quite hard on UTP
>(even Cat5). Is Cat5 UTP really as susceptible to interference as they
>claim? Is it something noticeable with an "average" installation?

Sure it is, more than vendors would like to say, and it radiates too.
But you don't have a choice if running 100Mbps (yes, I know about
100BaseT4, four pair in Cat 3, but that's to accommodate old wiring
plants). The HP VG stuff works, but some of us remain unconvinced about
its long term utility.

>Q2: If we were to install UTP (Cat5), and were to later switch to one of
>the fast ethernets (VG or 100), we would require no cabling changes,
>correct? Only NICs and Hubs?

You need to wait, please, for the market to settle down.
And it would be a good idea to monitor the Ethernet NEWS group
comp.dcom.lans.ethernet for a blow-by-blow discussion by the designers.

>Q3: Can we run 10Mbs and 100Mbs traffic over the same segment? How about
>over different segments (a multipurpose hub with some 10 and some 100
>segments)? Or when we _eventually_ make the move to 100Mbs do we need to
>do it in one fell swoop?

Same segment? Twisted pair is point to point only, no taps. Please talk
with vendors about their dual-speed hubs and get the real scoop.

>Q4: Any other comments about switching to 10baseT? Any good books on the
>100Mbs technologies?

Books lag behind by years. Read the current NEWS to stay in touch with
matters, and exercise a great deal of caution and skepticism about what
you read there too. 100Mb/s is not for the faint of heart right now.
	Joe D.

------------------------------

Date: Fri, 8 Dec 1995 03:26:14 GMT
From: Mike
Subject: Re: Opinions/Questions? on switch to 10baseT & 100Mbs??

>Q1: Several of my books (some by Novell Press) are quite hard on UTP
>(even Cat5). Is Cat5 UTP really as susceptible to interference as they
>claim? Is it something noticeable with an "average" installation?

I suspect it's not so much a matter of "being hard" on UTP as pointing
out its merits and shortcomings. All tools and techniques have good and
bad points.

Unshielded cable is susceptible to interference. This means you may not
want to use long runs of UTP, or that if you are in an electrically
noisy environment UTP might be a bad choice. However, UTP is
considerably cheaper than shielded cable, and cost savings are usually a
good thing. If you stay within the cable length guidelines given by the
vendors, you should be in good shape, assuming you have an electrically
quiet environment.

>Q2: If we were to install UTP (Cat5), and were to later switch to one of
>the fast ethernets (VG or 100), we would require no cabling changes,
>correct? Only NICs and Hubs?

UTP Cat 5 should be able to handle 100Mbps as well as 10Mbps. This
growth capacity makes it very attractive. However, if you are thinking
about cable, I'd look into what ATM uses and see if you can cable to
that standard. At times I think that ATM may blow fast ethernet away in
the not too distant future.

>Q3: Can we run 10Mbs and 100Mbs traffic over the same segment? How
>about over different segments (a multipurpose hub with some 10 and some
>100 segments)? Or when we _eventually_ make the move to 100Mbs do we
>need to do it in one fell swoop?

A segment is restricted to one speed. However, you can just move a
node's cable from one hub port to another to move the node to another
segment. Some hubs allow you to assign ports to segments in software, so
it becomes very easy to upgrade PCs. And this means that you can do
piecemeal upgrades. Upgrade a NIC, move the PC to the other segment, and
the PC is up again.

>Q4: Any other comments about switching to 10baseT? Any good books on
>the 100Mbs technologies?

We've migrated from a poorly laid out 10Base2 to a very professional
10BaseT over the past year. At this point, I have next to no use for
10Base2. 10BaseT is very very nice, especially with intelligent hubs.

------------------------------

Date: Mon, 11 Dec 1995 01:13:31 -0500
From: Paul Mujica Marchena
Subject: Re: Novell: How can I gain Novell experience?

>I will graduate with a Masters degree in Library Science this month.
>I have computer skills but no Novell experience. What is the best way
>to gain this experience?

Get Netware from Novell Inc. It's easy to learn.
If you are experienced with personal computers (IBM compatibles), and if
you used to be a programmer, you will have no problem with Netware.

Making a small LAN with 2 PCs, 2 NICs (network cards, e.g. Ethernet),
2 m of coaxial cable (RG-58), 2 BNC connectors, 2 BNC Tees and 2
terminators (50 ohm resistance) would be a minimum configuration. You
need 2 PCs:

- One PC, the server, the principal computer: a 386SX with 4 MB RAM,
about 100 MB of hard disk, and the installation disks for Netware 3.12
are all you need. This is the minimum configuration to install a server,
but if you have more money, better. A Pentium system is the best for a
big Novell NetWare LAN.

- The other PC, a workstation, can be like the one described above.

When connecting the NIC, take care with the correct configuration of the
card (I/O and IRQ) to prevent conflicts. Install the client software at
the station to get access to the server. Then begin to enjoy Netware.

------------------------------

Date: Fri, 15 Dec 1995 09:11:58 -6
From: "Mike Avery"
To: netw4-l@bgu.edu
Subject: Re: Novice Has Questions Regarding LAN and Ethernet

>Do you recommend the coax cable versus UTP? Someone suggested using
>category 5 UTP with a hub. Is a hub really necessary for only 3 pcs?

COAX is physically fragile. That is, it is easy to damage the connectors
on the cable, and as the network segment size increases, the probability
of failure increases. Worse yet, if the network segment is broken, every
node on that segment loses service until the segment is repaired or the
damage bypassed. This is similar to bad Christmas tree lights wired in
series, where when one bulb burns out the whole string goes dark. I have
spent more hours than I care to think about trying to figure out where
the break in the cable is *THIS* time.

With 10BaseT, if a node, or a node's cable, is damaged, only that node
is affected. This makes troubleshooting easier. Go to the node with the
screaming user and work back to the hub. No sweat.

In a small cooperative environment, this is not a problem. For me, the
point where I'd rather have UTP and a hub is somewhere around 10 nodes,
unless extremely high reliability is needed, in which case the number
would be much lower. Hubs are getting cheaper; some are in the $120
range for an 8 port hub.

A final thought: if you expect that your network will grow to the point
where 10BaseT would be advantageous, it might be a good idea to plan
ahead and start out with 10BaseT - that would reduce the number of
things that could go wrong in the conversion, since there wouldn't be a
conversion.

------------------------------

Date: Fri, 15 Dec 1995 22:18:17 -0500
From: GannonT@aol.com
To: netw4-l@bgu.edu
Subject: Re: Novice Has Questions Regarding LAN and Ethernet

With regards to Coax versus UTP, you do need a hub to connect all the
utp lines into in order to create an electrical star; thus a variation
of "STARLAN". However, with coax, you just daisy chain all the
connections as in a BUS, like Apple does with LocalTalk. Even a small
hub will set you back a couple hundred bucks and the cabling isn't
cheap. But RG-58AU coax goes for about $0.50 per foot. Fairly
inexpensive (e.g., CheapNet).

Definitely go coax as long as it is only a few machines, you can string
coax from one machine to the next and to the next, and you don't exceed
250 ft in any one cable.

As far as I know, Windows 95 has the same capabilities as WfW, and more.
You just don't have DOS underfoot anymore.

And, Mike Avery is correct about some of the problems with breaks.
With proper planning about where to run the cables and preventing the
connector ends from being jammed, you should have few problems. I concur
with the breakpoint in going to UTP -- around 10-12 nodes. And with the
speed at which things change in computing, you can expect 3-5 years life
with today's technology before you will want to upgrade anyway to
something better.

------------------------------

Date: Sun, 17 Dec 1995 23:17:02 -0800
From: rgrein@halcyon.com (Randy Grein)
To: netw4-l@bgu.edu
Subject: Re: Novice Has Questions Regarding LAN and Ethernet

>Even a small hub will set you back a couple hundred bucks and the cabling
>isn't cheap. But RG-58AU coax goes for about $0.50 per foot. Fairly
>inexpensive (e.g., CheapNet).

Inexpensive hubs can be had for about $120 for an 8 port hub; Cat 3 and
Cat 5 UTP are less per foot than coax, and while coax connectors are
$3-7, rated RJ45 cable ends (for UTP, or 10Base-T) are $0.10-0.50 each.
The biggest difficulty is that you MUST have a crimping tool to install
UTP cable, but many users get away with twist-on coax connectors. If you
go that route, PLEASE do yourself a favor and make sure they're for
RG-58. The RG-62 cable used for Arcnet and IBM terminals is thicker, but
the difference is small enough that the larger cable ends can sort of
work, for a little while. Maybe a few months, sometimes a year before
weird problems bring your net to a crashing halt. The point here is that
if you're going to do it yourself, take the time to learn how to do it
right or expect problems you won't be able to solve yourself.

BTW, a quick way to test for BASIC thin Ethernet cable problems is to
remove the T connector from the back of a computer, and apply an
ohmmeter between the outer casing and inner electrode of the T
connector. You should see between 26.5-28.5 ohms, depending on the
length of cable, condition and number of connections, etc. Much above or
below those values indicates problems that you may not want to solve
yourself.

>Definitely go coax as long as it is only a few machines, you can string coax
>from one machine to the next and to the next, and you don't exceed 250 ft in
>any one cable.

Sorry, but the limitation is 185 meters (about 607 feet) TOTAL length.
I'd hate to have to tell you how many networks I've spent time on, some
even installed professionally, that greatly exceeded this limit.

>As far as I know, Windows 95 has the same capabilities as WfW, and more.
>You just don't have DOS underfoot anymore.

Sorry again, but this is the biggest lie out of Redmond since they told
everyone that Windows NT was crashproof. Win95 still runs on DOS; the
easiest way to see it is click the start button, select shutdown, then
click the button to restart in DOS mode. If you enter the VER command at
the resulting prompt it may say WIN95, but it's really DOS 7. Sure, it's
got more capabilities and is marginally better than WFW, but be aware
that it has quite a few bugs, and is flat out incompatible with a number
of applications. Check out your software BEFORE you upgrade and you
should be safe, and have a much easier time.

------------------------------

Date: Tue, 19 Dec 1995 11:55:31 +1000
From: Richard Phillips
Subject: Designing a large, stable network

Ok. Personal experience here. I have a wan, the largest site/part of
which would have over 1000 nodes.
At this particular site I have a Netware 4.1 server that runs at around
350 concurrent users, peaking at well over 400, the majority of whom
load Windows and other applications such as cc:Mail, Excel, Word, and
Pagemaker off the server. They are very very active, but the server is
generally coping quite well, averaging well under 10% utilization. It is
a Compaq Proliant 2000 Pentium 66 with five Compaq "Packet Blasters",
128MB of RAM, and 8GB of RAID drives on a fast/wide SCSI array
controller. Words of wisdom (at least I think so) follow.

A Netware server can cope with a large number of very active users. In
my experience far more than NT can, at least at the moment (no arguments
here - I repeat _personal_ experience, plus who knows what the future
holds....).

If a lot of data is going to/from the server, then you need a big "pipe"
into the server. This means lots of _good_ network cards and/or fast
network cards. You do not want to use cards that drain the processor of
the server. I personally use a number of nics and put them through a
switch - this provides speed/bandwidth plus redundancy.

If you have a lot of users on a server, then do not skimp on the speed
of the server, the amount of memory, or the amount of disk space.

If you have a lot of users, be wary of using file compression as it does
take processor cycles when being used. I use it but have set it to only
compress files if they have not been accessed for over a month rather
than a week.

If you want your server to be reliable, make sure that you have properly
set up the server physically with no memory clashes etc. Make sure that
the server is in a protected environment - nice power, well situated,
clean filtered air, cool _locked_ room, good UPS, power cords firmly
fixed. Make sure that you are using current versions of drivers for the
server - if at all possible have a duplicate server to install and run
updates on for at least a month prior to installing on the "live"
server. Make sure that you run the _least_ amount of things on the
server that you can, so you can basically leave it alone. i.e. ideally
do _not_ load pserver, monitor, arcserve, remote, or anything else other
than the nic and disk drivers. To backup, I have a dedicated "backup"
backbone and a server running Arcserve that I also use to test programs
on prior to general release.

If you are using 4.x, upgrade everything to 4.1 with the latest version
of the NDS. Get yourself a decent network monitoring tool so that when
people blame the network and/or server for being too slow, you have a
better chance of confirming this or (more likely) locating the true
cause of the problem.

There are more things you can do to improve both performance and
reliability, but the above are probably amongst the most important.

------------------------------

Your coax isn't "fine"; it is in bad shape. A few pointers: never any
stubs; Tee connectors go directly on the computers. Don't mix batches of
coax; the run should be identical throughout. Check for bent/dinged coax
where chairs have rolled over it; replace such spots. Limit a run to 30
taps. Double check termination: 25 Ohms plus a couple when looking at a
running cable, and the two terminators are at the very ends of the run.
Look for broken lan adapters on the wire; they do strange things. Find
out the real cable length; 185 meters is the do-not-exceed value. If you
find any twist-on BNC connectors cut them off and kill the perpetrators
(or send to Bosnia/Thule, whichever is most painful).
	Joe D.
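---------

[Editor's sketch: the checklist above boils down to a handful of
numeric limits, collected here as a pre-flight check. The limits come
straight from Joe D.'s pointers; the function name and the exact
acceptance window for the meter reading are illustrative, not from the
original message.]

    MAX_SEGMENT_M = 185   # 10BASE2 do-not-exceed total run length
    MAX_TAPS = 30         # stations on one run, per the checklist

    def check_run(length_m, taps, tee_reading_ohms):
        """Flag obvious thin-Ethernet problems against the checklist."""
        problems = []
        if length_m > MAX_SEGMENT_M:
            problems.append("run exceeds 185 m")
        if taps > MAX_TAPS:
            problems.append("more than 30 taps")
        # "25 Ohms plus a couple" on a live, terminated cable
        if not 24.0 <= tee_reading_ohms <= 29.0:
            problems.append("Tee reading outside the ~25-28 ohm window; "
                            "check terminators and connectors")
        return problems or ["no obvious problems"]

    print(check_run(length_m=160, taps=22, tee_reading_ohms=26.5))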
------------------------------

Date: Sun, 24 Dec 1995 10:29:36 EST
From: Bob Earl
Subject: Re: IDE drives

>Was in the process of bringing up a new 3.12 server. It was my hope
>to buy a new mother board that has a built in super-IDE drive
>controller and run four ide drives. Alas, I can't seem to get the
>driver to find the two drives attached to the secondary channel. Is
>there some kind of magical switch or parameter that tells the driver
>to load the second channel? Maybe it needs to be loaded twice. What
>would the second load look like?

Though I don't have experience adding IDE drives on a second channel,
with other types of controllers (e.g. SCSI), you must add a second LOAD
DISK-DRIVER.DSK line in the STARTUP.NCF with the secondary address and
interrupt info. [Editor: an illustrative NCF fragment appears a couple
of messages below.]

As a side note, this message is one of many recently talking about
adding multiple IDE drives to a Netware fileserver. Since no one else
has talked about why NOT to do this, here is my two cents worth:

Having more than one IDE drive on a channel in a Netware fileserver
could be a VERY BAD IDEA. The second drive runs as a slave to the first
(master) drive. This means that if the first drive fails, so does the
second. This is especially bad if they were set up as mirrors of each
other, but even if not, a single drive failure would cause a second
drive failure! If you are willing to take the risk, that's OK; I know I
wouldn't be - not in a production server.

------------------------------

Date: Sun, 24 Dec 1995 19:03:26 -0600
From: Joe Doupnik
Subject: Re: Help: Increase the netware partition size

>I am sad to say you cannot. To get rid of or reduce the DOS
>partition, you need to use FDISK. Once you use FDISK, all your
>disk info. will be wiped, including NetWare. The easy way out
>is to add another drive. The more tedious way is to do a full
>backup of the data, including the binderies, do the fdisk,
>reinstall netware and backup software (if any), and then do a
>restore of user data and binderies. The second method is more
>difficult and you need to know NetWare enough to do it without
>loss of data.
>
>H P Siew, CNE
---------
Nope, not so. Fdisk may create and/or remove one or more partitions
without touching the others. What everyone has missed is NetWare creates
only one NW partition on a physical drive. CNE's take note.
	Joe D.

------------------------------

Date: Wed, 27 Dec 1995 21:26:50 -0800
From: tracy pillsbury
Subject: Re: Problem on initializing PSERVER on NOVELL 3.12

>Can somebody help me with a printserver problem on Novell 3.12. When I'm
>loading pserver.nlm on the server, using local mode, I get the answer
>that the print port printer 0 wants to use is already in use, and it
>won't initialize. LPT1 is not used, queues and pserver is

Give the server some time after unloading and reloading the NLM. Unload
the PSERVER, wait about 5 minutes and try again. If this works you know
it's a timing problem. I had the same problem with loading Rprinter: it
told me the printer was in use when it wasn't. You can write a batch
file for it.

Does the printer show up as online from a cold server boot? If not, is
anything else using IRQ7? If it is showing up, then perhaps you do have
a timing problem. Or an I/O port address is being stepped on, or there
is a bad I/O adapter.

------------------------------

Date: Thu, 28 Dec 1995 14:24:32 GMT
From: "Gordon A. Lew"
Subject: Re: IDE drives

>This goes against everything I have read and heard about IDE
>master/slave setup.
>In fact in my own hardware tests, if the master
>drive controller fails (simulated by removing master drive from
>chain), then the slave drive is not accessible. Is this not the case?

In your test, when you removed the master from the chain, you
effectively made the remaining drive the master, but it was still
jumpered as a slave. Thus when you turned your machine back on, and
either your BIOS or your disk driver interrogated the drive chain, there
was only a single drive present and yet it didn't respond because the
address was not correct for a single drive system. Had you re-jumpered
the remaining drive, it would have operated normally. Most IDE drives
are jumpered the same if they are the master in a 2 drive system or if
they are the only drive present.

For all the details on EIDE, you can get the full ATA-2 spec via ftp
from fission.dt.wdc.com/pub/ata

Consider that the majority of IDE disk failures are surface related
rather than controller related.

------------------------------

Date: Thu, 28 Dec 1995 23:05:57 -0500
From: "Philip R. Reznek"
Subject: Re: Server Yes, Dos No.

>I have installed a 2GB SCSI drive on one of the servers. Dos sees only
>1GB, however, Netware sees the whole 2GB. The Controller is an
>Adaptec 1740A.

You named the answer: 'Server Yes, DOS No'. Don't touch a thing. DOS
versions can't see more than 1GB on a drive without translation being
turned on in the controller. Netware doesn't like translation. It's a
no-no-no. (The third 'no' is for emphasis.)

If the server is booting from the drive, make the DOS partition the
first on the drive. DOS will be able to see enough for the 10MB or so
that's usually needed for NW material.

------------------------------

Date: Thu, 4 Jan 1996 20:36:58 -0600
From: Joe Doupnik
Subject: Re: Cache memory allocator out of available memory

>I've got a 486-80 VLB/ISA bus. It has a VLB IDE controller, and a VLB
>Adaptec SCSI controller. I've got a 1 gig EIDE drive and a 2 gig
>SCSI drive. 48 megs of memory. Cache resources are at 34 megs or
>70%.
>
>I'm consistently getting memory errors while xcopying a large amount of
>files from disk to disk.
>
>The xcopy gives an error:
>
>File creation error - Insufficient system resources exist to complete
>the requested service.
>
>The console gives these errors.
>
>1.3.39 Cache memory allocator out of available memory
>
>That is consistent. Occasionally I'll get
>1.3.52 Error expanding XXXXXXX directory because no more memory is
>available for tables.
>
>70% cache buffers should be MORE than sufficient? Are there some
>settings I can use to get more available memory???
---------
I'm a little surprised the server worked at all. The reason is Vesa
Local Bus is an unregulated bus which lacks control of who owns it. And
you put two disk controller boards on it under server conditions!
Hmmmm. It can't cope, and was never designed to.

This advice may cost you a small amount of money, but it will save much
grief later on. Leave VLB to individual PC users. Servers run on their
busses and strength there is vital. The cost of an EISA bus motherboard
is cheap, and a good Adaptec SCSI controller is about the same. I
recommend you consider using these components rather than the
personal/hobby level PC you now have. And if you do change then give
thought to moving that IDE drive to a person's desk and use only SCSI on
the server. You have plenty of server memory, actually more than you
likely need, so 32MB may well be just fine.
	Joe D.
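---------

[Editor's note, returning to the IDE second-channel question a few
messages above: the doubled driver load would look something like the
following STARTUP.NCF fragment. The port and interrupt values are the
conventional primary and secondary IDE channel settings; the driver name
and exact parameter spelling depend on your driver, so treat this as a
sketch and check the driver's documentation.]

    # STARTUP.NCF (illustrative fragment only)
    load ide port=1F0 int=E    # primary IDE channel, drives 0 and 1
    load ide port=170 int=F    # secondary IDE channel, drives 2 and 3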
------------------------------

Date: Mon, 8 Jan 1996 17:50:36 GMT
From: Michael E Willett
Subject: Re: Netware 3.X/4.X & RAID 7

>>Are there any particulars concerning Netware & RAID 7 that will impact
>>performance? The company I work for will be buying a RAID 7 system that
>>supports quite a few SCSI/SCSI-II connections & I will be
>>administering/setting up these servers. Are there any known
>>pitfalls/problems etc. that I should be aware of or avoid, except for the
>>normal SCSI ones? Any performance considerations I should know about?

Storage Computer RAID 7s are widely installed around the world in major
organizations, now in more than 30 countries. RAID 7s provide up to
12-host connectivity of different vendor hosts, the entire RAID 7
appearing as one large virtual storage area to be allocated among the
hosts. We have lots of information available about RAID 7 in the Nordic
Countries on our WWW server:

http://www.storage.com/
http://www.storage.com/norway.html
http://www.storage.com/sweden.html
http://www.storage.com/finland.html (just getting started with this)

We are just getting going with the Finland portion of our WWW server,
which is at http://www.storage.com/finland.html. We have just agreed
with a Finland WWW magazine to prepare our Finland WWW material in
Finnish, and this material is expected to be running in January, so
hopefully people in Finland will rapidly become more aware of RAID 7s.
In the Nordic Countries, so far we have RAID 7 technical information
available on the WWW only in Norwegian Bokmal and Swedish
(Finlandsvenska).

RAID 7s can provide storage server services to Novell and IBM RS/6000,
SGI, and Sun servers simultaneously, as an example.

Mike Willett, Webmaster, http://www.storage.com/

------------------------------

Date: Wed, 10 Jan 1996 09:55:52 EST
From: Peter Medbury
Subject: Re[2]: Questions on RAID5 server

I have been running a number of similar RAID 5 systems for some time
now. I fully populate the disk cabinet with 7 drives. The first 6 are
live (configured for RAID 5) whilst the 7th is configured as an online
spare. Perhaps it is overkill, however if a drive fails, the spare is
brought online automatically and the RAID 5 configuration is rebuilt.
Without the spare, if a drive fails you are effectively using a normal
disk system with little protection.

The disk system is configured to hold a Compaq EISA partition, a DOS
partition and a single Netware partition. On my systems, the Netware
partition is divided into a smaller SYS volume (1GB) and a large DATA
volume (9GB).

A major reason for this configuration has been to reduce the impact of
disk fragmentation. I am running NW3.12 systems at the moment. Print
queues are all held on the SYS volume, and because they autopurge, the
drive on which they are stored will fragment. Fragmentation doesn't
affect static data (such as software) but it certainly affects volatile
data systems. Elevator seeking does not seem as effective on highly
fragmented disk systems with large amounts of data transfer in large
user communities. I never purge the disk systems either.

The effectiveness of elevator seeking seems to be improved in large disk
systems when additional RAM is installed in the server. I have 150MB RAM
in 10GB systems. The result is a fast reliable disk system for your
users.
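---------

[Editor's sketch of the capacity arithmetic behind the layout above,
assuming 2 GB drives, which is what the 1 GB + 9 GB of usable space
implies. The hot spare contributes nothing to capacity until a rebuild.]

    DRIVE_GB = 2          # assumed per-drive capacity
    live_drives = 6       # configured for RAID 5
    hot_spares = 1        # online spare, idle until a drive fails

    # RAID 5 spends one drive's worth of space on distributed parity.
    usable_gb = (live_drives - 1) * DRIVE_GB
    print(usable_gb)      # 10 GB -> the 1 GB SYS + 9 GB DATA split above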
------------------------------

Date: Thu, 11 Jan 1996 09:45:23 -0600
From: Joe Doupnik
Subject: Re: Super server

>I'm considering the option to migrate our HP NetServer 486/66MHz to a
>Pentium 120MHz based machine (from Compaq); should I get significant
>performance improvement with the new server?
>
>How can I measure such an improvement?
>
>Are there any "Server Benchmarks" that I could use?
----------
There is no answer to your first question because the environment isn't
given. Just imagine only one user on the server, running WP 5.1.

NW servers are bus intensive machines, so the bus characteristics will
dominate performance. A faster cpu won't hurt, naturally, but may not be
necessary.

Novell's "standard" measuring tool is Perform3, and it's only an
approximation to some common environments. See netlab2.usu.edu, cd apps.

To discover the changes in your environment you'll need to evaluate
before and after performance as seen by the users. Naturally you will
also pay careful attention to the numbers in MONITOR to judge lan
traffic, disk traffic, etc. - characteristics which users don't sense.
Note that Perform3 is not a benchmark in the ordinary sense since no two
sites are alike.
	Joe D.

------------------------------

Date: Fri, 12 Jan 1996 11:08:22 -0600
From: Joe Doupnik
Subject: Re: Server room temperature

>>What is the maximum safe temperature for a server room? We just moved
>>to a new bldg and the server seems VERY hot to me. I'm getting a
>>temperature recording device to monitor temperature fluctuations over
>>a period of time.
>
>No idea - but my server room ranges from too cold to too hot
>and we've not had any failures over the 4-5 years with this setup.
>
>>What component(s) should be the first to fail from the heat?
>
>Usually the sysadmin :)
>
>Roy Coates
---------
Donning Engineer hat again. If the room is warm then it's too warm.
Computer electronics starts failing at about 80 F *in the cabinet*. True
for old huge machines as well as nifty new tiny boxes. Cool it or lose
it. It doesn't make much difference what seems to fail (and you aren't
about to discover it anyway); the system as a whole is deceased.

I'm going through yet another episode of just this problem, with
netlab2.usu.edu failing every few hours over the past 1.5 days. The room
is about 70+ F, but the case is much warmer from electronics and disks.
I *think* the problem has just been solved: the 5-9V converter module on
an Ethernet board was very very close to a memory SIMM, and that module
runs hot. The server crashes in the idle polling loop, meaning nothing
much is happening. I swapped Ethernet boards for a less heat-producing
unit, moved it down a slot in the mid-tower case (which is terrible for
servers; heat flows all the wrong way), and it's coming up again now.
The machine is triple fanned: tiny cpu fan, power supply fan, case inlet
(low down) fan. If trouble persists then Saturday I'll move the whole
thing to a conventional flat case. Apologies to callers for the
unexpected outages.
	Joe D.

------------------------------

Date: Fri, 12 Jan 1996 14:22:38 -0600
From: Joe Doupnik
Subject: Network meltdown, short story for you

This morning the phones rang off the hook, every one, with angry users
saying "the network is down", from all over the campus. It wasn't down,
but many things sure didn't communicate.

Bring up a wire snoop and see about 1600 packets/sec of short
Ethernet_802.2 packets all alike. The destination Ethernet address was
303030303030 hex, which turns out to be a multicast address.
That gets through all the routers and bridges, and it certainly did. The
interior had a DSAP of 42 and SSAP of 41 (or the reverse, can't recall),
and data of TESTMSG454545...

Panic ensued, which is the object lesson for system managers. The board
was not in the campus Bootp database, but the vendor ident was of the
kind we sell locally. Since the packets were spread everywhere we could
not pinpoint the source instantly. Ports on each main cisco router were
turned off one by one, slowly because the remote command packets had to
fight for time on the wires. Finally the critical port was located and
defeated; the rest were reenabled. Persons walked across the campus to
the locale of the problem and found the culprit.

A NW systems manager had installed a new Ethernet board in a PC in the
VP for Student Affairs office. It didn't work, so he started one of
these trashy vendor board test programs. That did not seem to work
either, according to this person, so he left it on and went to lunch.
Major mistake. That program generated the packets as fast as its shoddy
code could go.

We are now removing such "test" programs from the distribution floppy
sold with each board. Instructions for the program vaguely say don't use
on a live network, but who RTFM?

The outage lasted about an hour. That's very very long. Better
procedures to isolate wacko network segments are now better understood.

I think the lessons are fairly clear here.
	Joe D.

------------------------------

Date: Fri, 12 Jan 1996 19:07:35 GMT
From: Teo Kirkinen
Subject: Re: Buying a new server - PCI vs EISA? cache controllers?

>We are in the process of buying a new server and am wondering which
>is the best way to go. Are bus-mastering PCI SCSI controllers and PCI
>ethernet cards stable enough yet to run reliably in a netware server?

Like many other sites, we have been reluctant to change from EISA to
PCI, but now we have been using PCI-bus servers for half a year and are
quite happy. We use mostly AST Manhattan P servers. We went to PCI
because PCI cards are less expensive than EISA cards and many new cards
are only available as PCI. The only real complaint I still have is the
number of PCI slots: 2, 3 or 4. Of course they have EISA slots in
addition to that.

>What problems have people been encountering using PCI bus technology?

Quite a lot of compatibility problems at the beginning. Almost all of
them have been solved with BIOS upgrades or newer versions of PCI cards.
The last upgrade we had to do was the microcode of a DPT RAID
controller. It corrupted the data on the PCI bus when there was also a
Digital FDDI card in the server.

We are for example running (well - testing, and with NT, not Netware)
Zeitnet's PCI-bus ATM card and it works fine.

(The only problem we still have is an IBM PC Server 300 with an IBM
(really Adaptec) wide-SCSI controller for the disks and a narrow IBM
SCSI controller for the DAT drives. They have different ASPI drivers and
Cheyenne hasn't been able to tell us how to get Arcserve to recognize
the tape drives. I blame the PCI bus because IBM doesn't have its own
wide-SCSI controller which would use the same drivers as the narrow-SCSI
controller. I also couldn't boot the PC Server from an EISA-bus SCSI
adapter.)

The worst problems have been with workstation-grade motherboards, like
Intel's Zappa. For example PCI cards with PCI-to-PCI bridges (like the
AHA-3940) won't work, at least when there are any other PCI cards
installed.
PCI configuration is done with the ICU (ISA Configuration Utility) which
is far worse than the ECU ;-) Configuration of PCI servers is sometimes
more difficult than with EISA-only servers. Some servers (like the PC
Server 300) use one shared interrupt for all the PCI cards while some
drivers don't support interrupt sharing - even when the PCI specs say
that it is mandatory. Well designed servers, like AST and HP, use the
ECU to configure both PCI and EISA resources.

>Also, do cached disk controllers justify their cost since files
>are cached in main memory anyway? (Assume that server RAM more
>than exceeds spec etc.)

This has been discussed quite many times on this list. My opinion is
that at least write-back caching in the controller helps write
performance in RAID5 arrays.

------------------------------

Date: Fri, 12 Jan 1996 16:33:22 -0600
From: Joe Doupnik
Subject: Re: Buying a new server - PCI vs EISA? cache controllers?

Just a mention in passing. I purchased a dual Pentium EISA/PCI
motherboard this fall. It's an ASUS PCI/E P54NP4 unit, from a major
maker in Taiwan. 4 PCI, 4 EISA (one shared), all bus master. The PCI
support is PCI to PCI bridging on the motherboard. Setup is in the ECU
too. I mention it because of the kind of PCI support, and the fact that
it has worked like a champ with UnixWare (and hence probably with SMP
NetWare when that ships). Use one or two Pentium chips, 133MHz and
slower; a P6 board is in the works I understand. Neptune II chip set,
not the Triton I consumer grade stuff. Writeback cache. This board has
six SIMM sockets (72 pin, any size memory, paired SIMMs); the PCI/ISA
rendition has only four SIMM sockets. Intel SMP spec for the
multiprocessing.

Not that expensive. See Computer Shopper or www.asustek.asus.com.tw
	Joe D.

------------------------------

Date: Fri, 12 Jan 1996 23:49:54 -0800
From: rgrein@halcyon.com (Randy Grein)
To: netw4-l@bgu.edu
Subject: Re: Conner Raid Drives and NW4.1

>We have a Conner Raid (CR-12) and have added 6 additional 2GB drives.
>We had the original 6 drives as volumes:
>
>    SYS:  400MB
>    VOL1: 3GB
>    VOL2: 3GB
>    VOL3: 3GB
>
>The first 6 drives are Conner Pack 0; we added 6 additional drives
>(actually 5 and 1 standby for hotswap). These drives total 8GB
>and it appears that in NW 4.1 in the INSTALL.NLM we can only add this
>8GB free space to one Volume. We would like to add it to the individual
>volumes as VOL1: + 3GB, VOL2: + 3GB and VOL3: + 2GB. Is there a way to
>do this without backing up, recreating and restoring the Volumes?? We
>are using RAID-5..

I've been thinking about your message, and what you're not seeing is
that, as someone else explained, NetWare sees the raid system as a
single physical drive. You CAN segment the logical drive up, adding some
to each, but look at it this way: RAID 5 is MUCH slower than
mirroring/spanning - 3 times slower on write operations; cutting it up
in the fashion you envision will slow it down even more. (This is
apparently unknown information to most. Anybody who's interested, I can
find the references from Compaq and others for you.) If you're at all
concerned about throughput, I'd take the time to backup/delete/restore
all of the volumes, using one of the RAID units as SYS and the other as
VOL1.

------------------------------

Date: Sat, 13 Jan 1996 10:13:39 -0500 (CDT)
From: John_Cochran@odp.tamu.edu
To: netw4-l@bgu.edu
Subject: Re[2]: Conner Raid Drives and NW4.1

I would agree with Randy on RAID being much slower - that is, the OLD
RAID. I have been running RAID on Novell servers since '93.
The original RAID products out for Netware were software RAID, meaning
that the RAID was handled strictly through a software driver. My latest
RAID stack is the new Gandiva from Micropolis. This is hardware RAID,
meaning that there is a hardware controller handling all of the RAID
functions. This is MUCH faster. I cannot notice a speed decrease between
this and a single drive, but I get the added redundancy needed.

Back to the original question. Netware sees a RAID device as one large
drive, and you can create as many volumes out of that space as need be.
I am running an 8GB RAID box with a 300MB SYS:, 2GB VOL1, 2GB VOL2, 2GB
VOL3 and 1.7GB VOL4. This has been running on a Digital Prioris P90 with
128MB of RAM for 8 months. The drive is running on an Adaptec 2940W
(since the Gandiva is a Fast and Wide SCSI device). I can give you more
specifics if need be.

If you already have data on the volumes and you wish to change the
volume sizes, yes, you will need to backup/restore the data.

------------------------------

Date: Sun, 14 Jan 1996 18:52:00 -0800
From: rgrein@halcyon.com (Randy Grein)
To: netw4-l@bgu.edu
Subject: RAID, was Conner Raid Drives and NW4.1

>I would agree with Randy on RAID being much slower - that is, the OLD
>RAID. I have been running RAID on Novell servers since '93. The
>original RAID products out for Netware were software RAID, meaning
>that the RAID was handled strictly through a software driver.

Actually John, I was referring to RAID 5 without controller limitations.
The original Micropolis Radion units, being software raid as you
correctly identified, are far, far slower. The reason RAID 5 is slower
than mirrored/duplexed drives (RAID 0 combined with RAID 1) has to do
with the number of writes and seek/spin overhead per logical write
operation. I can quote the white paper later this week if anyone's
interested, but the bottom line is that RAID 5 is great for reducing the
number of drives needed for redundancy, but there is a performance
penalty on write operations. [Editor: a back-of-envelope model of this
penalty appears after the cache-controller discussion later in this
document.]

BTW, if anyone is interested, John's point about software RAID is
particularly appropriate with regard to Windows NT. Not only is it
slower in file/print services than Netware (unless MS is paying for the
tests) but they BRAG about their SW RAID 5, which places an even greater
burden on the CPU. MS appears to REALLY want everyone to buy quad
processor Pentium 133s!

------------------------------

Date: Mon, 15 Jan 1996 12:42:44 -0600
From: Joe Doupnik
Subject: Re: Buying a new server - PCI vs EISA? cache controllers?

>> Just a mention in passing. I purchased a dual Pentium EISA/PCI
>>motherboard this fall. It's an ASUS PCI/E P54NP4 unit, from a major
>>maker in Taiwan. 4 PCI, 4 EISA (one shared), all bus master. The PCI
>>support is PCI to PCI bridging on the motherboard. Setup is in the ECU
>>too. I mention it because of the kind of PCI support, and the fact that
>>it has worked like a champ with UnixWare (and hence probably with SMP
>>NetWare when that ships). Use one or two Pentium chips, 133MHz and
>>slower; a P6 board is in the works I understand. Neptune II chip set,
>>not the Triton I consumer grade stuff. Writeback cache. This board has
>>six SIMM sockets (72 pin, any size memory, paired SIMMs); the PCI/ISA
>>rendition has only four SIMM sockets. Intel SMP spec for the
>>multiprocessing.
>> Not that expensive. See Computer Shopper, or www.asustek.asus.com.tw.
>> Joe D.
>
>The disadvantage of the PCI/E P54NP4 compared with the P/I P55TB4XE is:
>- The memory to PCI performance of the Neptune chip-set is about
>  40% lower than the Triton set that is being used on the TB4XE.
>  This is pretty important in a server environment and thus can be
>  a major disadvantage in the P54NP4.
>- The PCI compatibility of the Triton is a lot better than the
>  Neptune, especially on PCI to PCI bridges.
>- Triton supports EDO-RAM in combination with pipelined burst SRAM;
>  Neptune only supports normal Page-Mode DRAM in combination with
>  synched SRAM.

Just to put back some perspective here. The cited PCI/E P54NP4 board
supports EISA and PCI for dual Pentium cpus. So far as I am aware the
Intel Triton chipset supports ISA and PCI for one Pentium. Apples and
oranges.

I'll take your word on the memory to PCI performance because I have no
tools to probe the situation. Observations of real life say the board is
faster than greased lightning when using UnixWare, and it has yet to
exhibit a problem at my place. Unix, like NetWare, pushes hardware to
the limits, and that's why I'm contributing my UnixWare observations to
this NetWare list.

>The greatest disadvantage of the TB4XE board are its limited
>number of PCI slots and its limited SIMM slots (remember: Triton
>supports "only" 512Mb RAM).

Don't heavily count MB, but do count SIMM sockets because they will
determine the practical expansion capabilities. Modern boards support up
to 64MB SIMMs, if anyone can afford them. At reasonable prices 16MB
SIMMs are available; 16MB x 4 sockets == 64MB on most boards and 16MB x
6 == 96MB on mine. Practicalities of finance and operation said I needed
only 32MB at this time.

>The newer Orion based Pentium-Pro boards by Asus are equipped with
>more PCI and a couple of EISA slots. However, these boards still lack
>the performance of the TB4XE because of less maturity in the Pentium-
>Pro and its chip-set, Orion. Wait a couple of months to see it grow
>up.

Yes, I totally agree with waiting for Intel to get its chipsets in
order, and the board makers to figure out how to manufacture stable
systems with them. And I concur that Intel makes weird motherboards.
There is always a pot of gold at the end of that rainbow, if we could
reach it.

My point, however, is the board I acquired is stable, highly effective,
and reasonably priced. I cannot speculate upon the combination of it and
the sundry PCI boards one may want to use; I have only a video PCI board
in the machine at this time, plus an EISA disk system. These work
together well in my experience, and that's saying something of interest
to NetWare managers.
	Joe D.

------------------------------

Date: Fri, 19 Jan 1996 08:05:46 +0100
From: Henno Keers
Subject: Re: Server upgrading

>I will be doing some server upgrading on one of our older servers and am
>not exactly sure of the steps to take. My concern is that I don't forget
>anything.
>
>What I will be doing is:
>-upgrading the motherboard

Take a good look at Triton (PCI+ISA) or Neptune (PCI+EISA) based boards.
Asus makes boards with a good name in PCI compatibility.

>-upgrading the controller card to 32 bit

Take a peek at Adaptec's venerable 2940 and 3940 series PCI cards.

>-upgrading the Nic card to 32 bit

Take a look at 3Com's 3c595 fast-etherlink 10/100 board.

>-change the frame type to 802.2

You might want to re-consider this if you are also running IP on the
wire. Then it might be wiser to use Ethernet_II and standardize on one
frame type for all traffic.
See Don Provan's story in the faq, http://netlab1.usu.edu

>I will not be changing the drive.

I presume it is a SCSI drive?

>My plan is:
>before bringing the server down

Make a couple of full backups on different tapes.

>-copy any needed driver files for the new cards to the server DOS partition

Copy the .LAN driver to SYS:SYSTEM so loading will be faster and you can
perform upgrades & backups.

>-modify autoexec.ncf and startup.ncf for new drivers and cards
>-modify autoexec.ncf for 802.2

See above.

>-bring server down

Write down the various older soft- and hardware settings.

>-remove old parts
>-install new parts -motherboard, memory and controller card
>-bring server back up

------------------------------

Date: Sat, 20 Jan 1996 07:10:45 -6
From: "Mike Avery"
To: netw4-l@bgu.edu
Subject: Re: Disk Drives

>Ok folks, a simple 'what is your drive of choice' question:
>The question is, what do you recommend for a cost effective (yet
>trouble free, reliable) Fast/Wide SCSI drive in the 2Gb range? I'd
>like to get as close to 2Gb for data as possible.
>What are your thoughts on:
>Maxtor?
>Quantum?
>Seagate? (I tense up even mentioning that, but just for kicks)
>Fujitsu? Others??

I like Maxtor, but haven't seen anything larger than 1.6 gigs from them.
If they have larger drives, it's pretty recent, so I'd wait on using
them - let someone else determine if the drives are good or bad.

Quantum - I've had good luck up until recently, but a few dealers I know
are dropping the line due to excessive return rates. Could be a bad
batch or two, could be a trend.

Seagate - I hate to say it, but their larger drives are quite good. If
you use the Barracuda series, make sure they have generous ventilation -
they run HOT. Once that issue is taken care of they are fast and
reliable. I can't comment on their support, as I've not used/needed it.
(The shop where I work bought a lot of Seagates before I got there. Not
my choice, but it seems to have been a good choice.)

Fujitsu - Highly recommended. I've never had to pull a Fujitsu drive out
of a system due to drive failure. I've replaced them with larger drives.

Other - DEC. Avoid at all costs. Dell uses them. Insider sources at Dell
indicate a 40% per YEAR failure rate. Not even in a RAID array! This may
have been on a particular model and be corrected now, but I am very
cautious.

IBM. Excellent. They are at least as good as Fujitsu, and the prices may
surprise you.

>I'm inclined to go with the Adaptec controllers regardless of drive
>make, but if there are any comments or suggestions along those
>lines, I'd love to hear that as well.

It's hard to beat Adaptec, but I've also had good luck with DPT. I
really like their RAID controllers.

Last time I was putting servers together, about a year ago, I heard
warnings that the PCI SCSI setups weren't quite ready for prime time. In
your position, I think I'd hedge my bets by getting a motherboard that
supports both EISA and PCI and get a 30 day return privilege on the PCI
Adaptecs. If they don't delight, go to the EISA bus cards.

------------------------------

Date: Wed, 24 Jan 1996 03:10:26 GMT
From: Michael E Willett
Subject: Re: DEC RAID System

>I am thinking of purchasing a DEC Prioris XL or HX system with RAID
>level 5. Currently I run a netware 3.12 server and use Arcserve for
>server backups (5.0g I think). I will be migrating to netware 4.1 on the
>new server.
>
>First, does anyone have any experience with the DEC raid system? Is it
>Is it worth the extra expense for what is a relatively small LAN (50-80
>users), probably only 6-8GB total storage?
>Will I run into problems with Arcserve and RAID (I am assuming that I
>will have to upgrade to a 4.1 version of Arcserve)?

NetWare applications are sped up rather significantly with RAID 7 storage servers. See the full-page story in the Jan. 22 issue of LAN TIMES.
------------------------------
Date: Tue, 23 Jan 1996 04:36:56 GMT
From: "Stephen M. Dunn"
Subject: Re: Buying a new server - PCI vs EISA? cache controllers?

>We are in the process of buying a new server and am wondering which is
>the best way to go. Are bus-mastering PCI SCSI controllers and PCI
>ethernet cards stable enough yet to run reliably in a netware server?

I believe that there are at least some which are stable. We generally sell Compaq servers. They integrate a PCI Fast SCSI-II host adapter and a PCI Ethernet controller on the motherboard, and these components seem to work well. We have also installed Compaq PCI Ethernet cards and Compaq PCI host adapters into the actual slots, with similar results. I think the Adaptec 294x host adapter is pretty reliable, too. Off the top of my head, I can't think of any other PCI NICs I've used.

>Also, do cached disk controllers justify their cost since files are
>cached in main memory anyway?

In general, they're a waste of money for NetWare. Let's say you have 20 MB of memory on the motherboard being used for disk cache, and 4 MB on the host adapter. When the system reads something from disk, it gets cached in that 4 MB, and cached in that 20 MB. It will disappear from the 4 MB cache sooner than from the 20 MB cache, and so at any future time when the data is needed again, it will either be in the 20 MB cache (so it doesn't matter whether it's in the 4 MB cache or not), or it won't be in either - either way, there's no benefit from the 4 MB cache.

The same basic idea applies to writes as well. NetWare already caches writes, and hardens the filesystem in the background, so it's not like the clients are sitting around waiting for the data to be written.

Even if there _were_ a benefit in general, you'd still be better off to put the extra memory on the motherboard. Access to system memory is faster than access to a peripheral device, and so cache in system memory is faster than cache on a peripheral device.

If the host adapter uses a write-back cache design and its drivers don't integrate with NetWare's TTS, there's also a slight possibility of problems should the server's power fail. TTS is very particular about the order in which writes are issued, and it has the ability to alter the behaviour of NetWare's cache algorithms to ensure that it gets its way. If it can't also do the same to the host adapter, then it's possible (though admittedly unlikely) that you might have problems if the server crashes after TTS thinks it's written something, but before the host adapter actually gets around to writing it.

If you're running a system which writes transaction data, such as a SQL database, the same problem is amplified, because your database engine lives for transactions whereas TTS is only a small part of what a typical NetWare server does. However, using a write-back cache on the host adapter _can_ significantly improve the performance of a transaction-oriented database engine in some cases, at the expense of data integrity in the case of a crash.
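Stephen's layered-cache argument is easy to see in a toy model. The Python sketch below is purely illustrative - plain LRU stands in for both NetWare's cache and the controller firmware, and the sizes and access pattern are invented:

# A toy of the two-level cache argument: a 20-block "motherboard" cache
# in front of a 4-block "host adapter" cache. Plain LRU is an assumption
# standing in for both NetWare's cache and the controller firmware;
# sizes and the access pattern are made up for illustration.
import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, size):
        self.size = size
        self.entries = OrderedDict()
    def lookup(self, block):
        if block in self.entries:
            self.entries.move_to_end(block)     # refresh recency
            return True
        return False
    def insert(self, block):
        self.entries[block] = True
        self.entries.move_to_end(block)
        if len(self.entries) > self.size:
            self.entries.popitem(last=False)    # evict least recently used

random.seed(1)
main_cache, card_cache = LRUCache(20), LRUCache(4)
main_hits = card_hits = disk_reads = 0

for _ in range(100000):
    # Skewed workload: low-numbered blocks are the popular ones.
    block = min(random.randrange(200), random.randrange(200))
    if main_cache.lookup(block):
        main_hits += 1                  # served by the big front cache
    elif card_cache.lookup(block):
        card_hits += 1                  # the rare win for the card cache
        main_cache.insert(block)
    else:
        disk_reads += 1
        card_cache.insert(block)        # a real read lands in both caches
        main_cache.insert(block)

print("main cache hits:", main_hits)
print("card cache hits:", card_hits)
print("disk reads:     ", disk_reads)

In runs of this toy the card cache ends up serving only the rare block that was fetched recently enough to still sit in its four slots yet has already aged out of the larger front cache - the inclusion effect Stephen describes.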
------------------------------
Date: Fri, 26 Jan 1996 19:25:38 -0600
From: Todd Herring
Subject: Problem with ARCSERVE 5.01g -Reply

Hold it right there! I had a VERY similar problem with an Adaptec 2940 card in a GW2000 Pentium. I was using all the latest drivers, patches, and software, but no one from Gateway 2000, Adaptec, or Cheyenne could explain the problem. The only thing I could come up with was that the 2940 card was in a PCI slot that did not support bus mastering. There's a little snippet in the Adaptec user manual that states, roughly, "the 2940 card must be inserted in a PCI slot that supports bus mastering". It doesn't say the card won't work or that it will burst into flames, just that it "must be" in a bus-mastering slot. I confirmed with GW2000 that their PCI computers DO NOT support bus mastering.

BTW, I got a very similar error on the same computer using SBACKUP. SBACKUP would not recognize the media and I could not format a tape. ALSO, I got the same SBACKUP problem in _another_ GW2000 with a _different_ 2940 card, using two different tape drives and several new/different tapes.
------------------------------
Date: Sun, 28 Jan 1996 19:17:13 -0600
From: Joe Doupnik
Subject: Re: What's a better server? (reply)

>What makes Novell's implementation of SMP hot is a technique called
>'strong affinity'. The SMP processes are mapped to processors in such a
>way that if the execution is interrupted, the process will resume on the
>same processor. The new NOS is also backward compatible with NetWare 4.1.
>If you are only using file and print services and some utility NLMs, the
>SMP won't do you any good. If your applications aren't multithreaded and
>haven't been written to take advantage of SMP, NetWare SMP won't help.
>CPU-bound applications will greatly benefit from this architecture.
>Though the current kernel has not been replaced, all of your non-SMP
>applications will run on processor 0.
>SMP is already a part of Unix and Windows NT NOSs, but neither of those
>systems makes use of strong affinity. They make use of 'weak affinity',
>in which there is no particular connection between process and processor.
>Although this technique can minimize the overall load, it can also
>create a massive load as the system grows.
>
>Virendra Kumar Rode
>-------------------

Very interesting (to hardware persons). I expect we will hear more later this spring on the multiprocessor issues. Not least is the muddle in the PCI support chip department at Intel, and the Pentium Pro stuff etc.

Observations of my own dual Pentium machine running UnixWare 2.03 say the processor-balancing act is economical of overhead. That is, threads tend to stick to a given CPU chip. But I haven't run formal tests nor have I loaded the machine very heavily (that's not easy with two P-100's). To give a sense of proportion here, that machine used to be a 386-33 which eventually got run flat out (100% utilization and stayed that way). Now it's at 5% utilization and less; it's very embarrassing to have that many horses standing around waiting to work. The inference is that Pentium-based NW servers of the ordinary kind will have no difficulty with available CPU cycles, until we load them up with many 100BaseT circuits.

Those interested in low-level performance issues might want to find a copy of the NDIS v3 spec from Microsoft (probably still kept under wraps, but I haven't looked recently). The problem of driver buffers in kernels figures prominently in the board driver spec; read that as NT design issues.
The point is that the overhead in swapping pointers around is not small (one has to lock/unlock/critical-section a bunch of things) and the difficulty of doing so within interrupt routines is large and messy. We may guess that LAN adapter and disk handler interrupt processing issues were of high importance to Novell.
        Joe D.
------------------------------
Date: Mon, 29 Jan 1996 05:43:53 GMT
From: Michael Farace
Subject: Re: What's a good reliable RAID 5 PCI->Fast Wide SCSI controller?

I just installed the Adaptec 3985 PCI RAID controller yesterday for a customer. I installed it into an ALR Revolution MP with a P100 and 100+ MB RAM. I did have to flash the M/B BIOS to the latest version to get it to work. Other than that, it was a piece of cake. It has a nice set of utilities to schedule regular testing/verifying of your RAID subsystem and also lets you test your spare drive, if you have one. The controller is available in Fast SCSI-2 and Wide SCSI, with 2 or 3 SCSI channels.
------------------------------
Date: Fri, 2 Feb 1996 10:58:32 +0100
From: Henno Keers
Subject: Re: 10baseT cable testers?

>I'm looking to buy a cable tester for 10baseT. This would be a
>handheld tester for field installation use. At the moment I need
>to know the cable distance to the hub and/or server, and get some kind
>of (valid!) quality rating of the link. I'm pulling 4-pair level 5
>UTP, so far we're just running 10Mbps over it but in the future...?
>This is going to come out of my personal pocket, but cost is not a
>MAJOR issue.

Check out http://www.Anixter.COM and look for the Microtest MT350 or PentaScanner.
---------
Date: Fri, 2 Feb 1996 13:40:11 -0500
From: Daniel Tran
Subject: Re: 10baseT cable testers?

>I'm looking to buy a cable tester for 10baseT

Take a look at the Microtest devices. They're easy to use. Depending on the model you buy, the more $$$, the more sophisticated.

Daniel Tran - dtran@ucla.edu
------------------------------
Date: Fri, 2 Feb 1996 11:24:34 -0500
From: "Larry C. Hansford"
Subject: Re: convert BNC to 10 BASE 2

>Our company is converting from BNC (10 BASE 2) to 10 BASE T. Our current
>system is giving us a lot of cabling problems, as can be imagined with any
>10 BASE 2 system.
>
>Anyway I have a few questions:
>
>1. My current NIC cards are dual BNC and 10 BASE T, therefore I should be
>able to use the NICs in the server? What configuration do I need to
>change, if any?

That depends on the cards. Most are auto-sensing and will look for the BNC connection, and if they don't find one they will look for the UTP connection. If your cards aren't auto-sensing, you will have to change a jumper or software switch.

>2. Should I connect my Hubs into the server using UTP or BNC?

That depends on your setup. If the hubs are co-located with the server, a UTP connection will work fine. If they are some distance from the server, and on the other side of a noisy environment, you may want to use Thinnet or UTP in conduit.

>3. Should I locate my hubs close to the server or does it not matter?

That depends on whether you are building workgroups or have a spread-out network. If you have workgroups, you can put a hub there, and have one cable to the area and then branch out. If you have distributed work areas you can probably run cables from the central server/hub area to each workstation location just as easily.

>4. What is the AUI connection for? Is it more reliable?

The AUI is for 10Base5 (Thicknet) cable. It provides better distance and reliability, but is a bear to install and even worse to change.
You probably won't want to use it.
------------------------------
Date: Wed, 7 Feb 1996 22:18:10 -0600
From: Joe Doupnik
Subject: Re: Workstations disconnecting

>I am having a problem with one of our networks. In one lab we have
>machines that seem to frequently disconnect from the server. It does
>not always do it and it may go for hours without a problem and then
>several of the workstations will disconnect. The server is running
>NetWare 3.11. The server is supporting three labs with each lab on its
>own network card. Two of the labs are using NE2000 cards and one is
>using an NP600 card. The lab that we are having the trouble with has
>just recently been connected to the Internet using the same cabling, so
>we are not using TCP/IP on the server.
-------------

The real cause is probably dropped packets. The reason they drop is likely too many packets/sec for the boards involved. You don't quote any figures for traffic, but a rough guide is that when NE-2000's see 1000 pkts/sec they are in trouble. Your server is working harder than the clients because it has all the disk stuff to perform too, and thus the server overloads before most clients. The NP600 is a venerable board, and should be retired with a pension. It's not up to current levels of activity.

To examine the traffic you need to put a monitor on the wires, one by one, and look carefully. Novell makes a nice one for this purpose, Lanalyzer/Windows. In lieu of that just watch MONITOR and the packet update rate. If it's near that 1000/sec (about one sec per screen update) you have too much traffic for the current equipment. Also see the section on "no ECBs available" under the LAN adapter heading; if that count is large then the server is clearly falling behind.

Coax is really good wiring, if done right. Often it isn't. Mixing cables, adding any stub whatsoever to a Tee, dinged cable, and over-long cable are common failings. Bad BNCs (twist-on being the very worst there is) cause strange errors too. Flakey LAN adapters are hard to track down, but not that hard.

While here let me relate yet another network story. This afternoon, just as my networking class was finishing, another system manager rushed in saying "The network is down! It's down!" I got a VOM (Volt/Ohm Meter to the rest of you) and looked at the coax involved. Measured about 25 Ohms, more or less. The clue is in that "more or less." Checked a NW server and it was ok, no bad counts in Monitor, but no comms to the rest of the world. A multiport repeater at the end of the coax run had its red light on and would not clear it. Small pause to think and I had it: d.c. on the wire upsetting all the Ethernet 0 to -2V signaling level sensing. Sure enough, an average of -1V, and that biases the Ohm meter readings. The manager said one room had a whiff of ozone about the same time as the outage. Hmmmm. No flames visible, no crisp'd students. Switched off machines one by one while watching the wire voltage, and all of a sudden it went to 0V. Ah ha! Cycled the machine back on and no problem, comms are normal. Cause: dust from vacuuming caused a monitor to arc (the ozone), and that zapped the computer, which in turn put the Ethernet adapter into wacko mode and jammed the wire hard.

It's a good thing the manager waited until the end of class. The campus network alarm system turned red, my phone rang, folks went looking for me while I droned on about nifty heuristics in TCP.
Had they stopped me in mid-heuristic I would have dragged the whole class along on the treasure hunt, and the result would have been a trail of debris, crushed machines, and a bunch of grinning grad students.

LAN adapters fail, mysteriously and not solidly. Multiport repeaters suffer similarly or worse.
        Joe D.
------------------------------
Date: Thu, 15 Feb 1996 18:09:59 -0500
From: Jerry White
Subject: Re: Which RAID 5 Controller? -Reply

Check out the Compaq Smart Array controller. Ours has never hiccupped once in two years (knock on wood). Of course, it's installed in a Compaq system and the drives are Compaq drives in a Compaq external cabinet. There's something to be said for a one-brand system. They test all the components together before they hit the market. It's solid!
------------------------------
Date: Sun, 18 Feb 1996 21:00:24 GMT
From: Joerg Trawinski
Subject: >1 Netware partition on 1 drive

Prompted by a question on this list, I've tested the ability of NetWare (3.12) to deal with more than one NetWare partition on one (IDE with IDE.DSK) drive. I've tested the behavior of one disk with Novell's INSTALL.NLM and unmirrored partitions only. _Whenever possible use the standard NetWare partition scheme_ (one physical NetWare partition only).

You have to manually edit the partition table. If you are not familiar with manually editing hard disk structures, be warned: one wrong bit and your valuable data may be lost. Test this procedure on a spare system, where data can't be corrupted.

The partition table is located at physical sector 0-0-1 (cylinder 0, side 0, sector 1) of each hard disk. This sector (normally) contains a small program (the master boot record, different for every OS which created it) starting at offset 0, and the partition table starting at offset 446 (hex: 1BE). The partition IDs are located at offsets 450, 466, 482 and 498 (1C2, 1D2, 1E2 and 1F2) for partition numbers 1, 2, 3 and 4. This one hex byte determines the partition type:

00  free
01  DOS 12-bit FAT
04  DOS 16-bit FAT
05  extended
06  BIGDOS
07  HPFS
0A  OS/2 boot manager
64  NetWare 286
65  NetWare 386

If you want to convert an existing partition to NetWare, delete it first (manually, with fdisk of any OS, or with INSTALL.NLM). Down the server. Mask the existing NetWare partition(s) by changing the ID to 07. Fire up NetWare and create a new NetWare partition. Down the server and change the other NetWare partition's ID back to 65. Fire up the server again.

The partition menu of INSTALL.NLM will now act like this: you can only change the Hot Fix of the _first_ NetWare partition; you can't create a NetWare partition (not new); you will be asked for the partition number to delete. Everything else in INSTALL.NLM should work.
------------------------------
Date: Mon, 26 Feb 1996 11:30:36 -0500
From: JRKUNKLE@aol.com
To: NETW4-L@bgu.edu
Subject: A new beginning - The Best Server

If you could start your own brand new NetWare network, what brand of server would you choose? And why would you choose it (cost, reliability, processing power, etc.)? My company has been buying IBM servers (which for whatever reason tend to crash a lot - I have not heard from our server group if the issue is hardware (IBM) or software related).

Thanks, JR Kunkle
---------
From: Ghhoffman@aol.com (George Hoffman, CNE)

Compaq Prosignia - Great, fast.
---------
From: Jan Burroughs

Compaq Prosignia - easy setup & easy to configure
---------
From: John_Cochran@odp.tamu.edu

Digital Prioris HX or MX. PCI bridging that works!!!
I have 6 EISA slots and 6 PCI slots, loaded with 3 EISA cards and 4 PCI cards. The bus actually works with all of these devices! ...Dual power supplies, and a GREAT StorageWorks enclosure. Have tried ALR, Compaq, HP. No one else had PCI bridging that worked with MANY devices...had too many bus errors. Great support services available for the Digital boxes too.
---------
From: Hussain Hasan

We are using IBM PC Server 720s with 128 meg of RAM and 40 GB RAID 5 storage. These servers have 8 PCI/MCA slots and up to 6 processors. Working out great so far.
---------
From: JST604@aol.com (Joe Thompson)

I tend to like the Compaq Proliant due to its overall flexibility, although I think a multiprocessor version would be wasted on Novell. The fastest box for straight file and print services that I've worked with is the TRICORD. The drive array and internal bus simply outperform any machine I've seen, and they seemed to have hyper-tuned it for Novell. Most of the truly big nets I've seen have one. Redundant everything! We have one handling 600 people consistently in a shared Windows environment. It really gets hit hard since we have 36 GB of usable storage and are at 92% capacity. We have several others but the Tricords are by far the best performers (and also the most expensive). Overall the most enjoyable server I've worked with is the Proliant, but that was with Windows NT Server so it's kinda off subject. I DO like the bootable CD though!! But I would rather work on the Proliant.
---------
From: LionsHair@aol.com

We (me & the co. I work for) chose a Compaq Proliant with two 2GB hot-pluggable HDs and run NW 4.1 (and still would!!). The server has proven itself over and over...very stable. A little trouble with ARCserve & Exabyte backups, but that was fixed with a controller card (seemed to have a problem with SCSI).
---------
From: "Mike Avery"
To: netw4-l@bgu.edu

Even before I saw the other answers I knew that a lot of people would say "Compaq". And they are good machines. I have about 4 of them, different flavors. They are fast, and they are powerful, and they are easy to set up... as long as you stay in the Compaq family. If you want to use someone else's disk controller, all the proprietary bells and whistles go away. You wanted to use a 3Com Ethernet card? Well...okay, but don't call us when the NIC starts ignoring the network.

As to overall reliability, I have found that more depends on the correct tuning of the box and the box's environment than on what box it is. I've used all sorts of monsters as servers, and the reliability is pretty consistent. Bring the patches up to date, avoid the X.0 release, make sure your power is conditioned, and things are usually pretty good.

My recommendation is to go with a super-server if you need the reliability and can cost-justify it. If not, go with an intelligently specified clone. I like ASUS motherboards, and the DPT SmartRAID boards myself. I also spend the extra $8.00 to get a UL-listed power supply. It might not matter, but I sleep better.

Another general comment about proprietary systems... For a while last year Compaq was having severe problems delivering memory, and most of the third-party memory does not work in their high-end machines, even some pretty expensive memory specifically designed to work there. It's galling to spend $25,000 on a server and be unable to use it because one vendor cannot deliver enough memory to make it usable, and another cannot deliver memory that works. As a result, we looked around. HP seems to be reselling Intel motherboards.
Not a bad product, but hardly the state of the art either. And that led to another thought... suppose we are using a Compaq server and it dies. Can I move its RAID controller to another system and have it work? To check this out, I put a Compaq EISA RAID controller in a Dell machine that was awaiting overhaul. The Dell EISA setup routine did not like the Compaq's setup file... it was "too big to fit in memory". Knowing full well what was about to happen, I called Dell and asked for help. Their friendly tech support guy asked me to repeat what I had done and then told me that "you have just voided your warranty. The combination you are trying is extremely ill-advised, we have never tried it, and if we had we would not comment upon it." I asked if he would have the same reaction if I was using an Adaptec or DPT controller. He replied that he'd never heard of DPT and avoided answering the question.

I called Compaq next. They were friendly and didn't get huffy, but they did say that they had never tried putting one of their controllers into other computers, so they could not advise me on how to make it work.

With a third-party controller, such as an Adaptec or DPT, the vendor has a vested interest in making it work, no matter what piece of junk you put it in. So, if your server dies and the accounting department is screaming about missing a critical deadline, you can move the controller to another box, almost any other box with the same bus, and feel fairly confident that it will work.

These comments are less important if you have already made a commitment to only buy one brand, and have only systems of that brand in house, or at least one spare. But most of us live in a heterogeneous world. Systems from a dozen vendors, ranging from 386 antiques (a place to test new NLM's or our own playthings if we're lucky, home to 200 users if we're not) to Pentiums and 586's, having buses from ISA to PCI with many strange things in between. In this environment it may not be wise to tie yourself too closely to a single vendor.
---------
From: OEVEREND

We are using Compaq Proliants a lot for installing NetWare 4.x and we have good experiences with these machines...
---------
From: GannonT@aol.com (Thomas F. Gannon, CDP, CNE, MCPS)

For my money, and at all the client sites I have worked at, the best server has been the Compaq machines. Longest uptime, fewest repairs, and rock solid, once configured properly. Another workhorse has been the Acer line. Very good clones.

P.S. IBM hasn't made a good box since the original AT, and that was too slow to begin with. Every competitor blew its doors off in the next 6 months. Digital has solid hardware and is faster than most.
---------
From: JRUSSO@Gems.VCU.EDU (Joe Russo)

>Just curious: What do you think of ALR servers?

I've got two with another on the way and I love 'em. They're reliable, well backed (I believe they still have the only 5-yr warranty around), and their tech support is excellent. They're also very open systems; there's nothing proprietary about them. In March they'll have a Pentium Pro version of their quad-processor Q-SMP server and dual-processor Revolution, along with the SMP NLM to take advantage of all that horsepower. They'll be pricey, but if you need the power...
---------
From: John_Cochran@odp.tamu.edu

We have several older ALR 486DX/2-66 EISA servers running both here and on my organization's drill ship... Have NEVER had any motherboard/memory failures. Not a power supply failure or anything. They have been running for 3.5 years.
I have used Adaptec cards and 3Com cards in them (ISA and EISA), and everything I have thrown in seems to work. Have had the occasional drive failure, but that has nothing to do with the ALR box. Personally, I prefer Digital boxes. But both have been just as reliable.
---------
From: Brian Howe

>Just curious: What do you think of ALR servers?

I'll have to agree with Joe. I have 2 also and they run nonstop. Fold-away side plates on both sides make it easy to get inside and work. Built great.
---------
From: "Dave Pacheco"

If I may jump in... I recently left a job in MN where we were transitioning from our purchases of no-name clones to the ALR machines. They are excellent workstations and wonderful servers: well-designed, and nice and roomy inside, so you're not scraping your knuckles raw every time you change a NIC. In the machines we were transitioning from, you had to remove the hard drive enclosure to change SIMMs, which was utterly brain-dead in terms of design. The ALRs never gave us any trouble and were very good performers. They did as well as the Compaqs we looked at/demo'd, and at somewhere around 2/3 the price.
---------
From: Patricia Thorp

>Knowing full well what was about to happen, I called Dell and asked
>for help. Their friendly tech support guy asked me to repeat what I
>had done and then told me that "you have just voided your warranty.
>The combination you are trying is extremely ill-advised, we have
>never tried it, and if we had we would not comment upon it." I asked
>if he would have the same reaction if I was using an Adaptec or DPT
>controller. He replied that he'd never heard of DPT and avoided
>answering the question.

This describes my experience with Dell in a nutshell. Unless something very dramatic happens, I will not be buying equipment from them in the near future. So far, I've had a machine that, *while* in warranty, died, and the tech support guy asked me to try some things. While I was doing this, the warranty expired (these phone calls occurred over a period of a few months, I should add), and I was informed that they could no longer help me. Then I found out that our university had some on-site contract with Dell and that they should never have asked me to do *anything* -- they should have seen I was from here, and said ok, we'll send someone out. Period. I did finally get someone higher up at Dell to send me the part I needed. *sigh* Now I have inherited a server (Dell, who else) and I can't get them to help me install a new drive (and all I want are the settings/parameters so I can do it myself). It's like pulling teeth. AUGH!!!

Compaq's support is definitely in a different league. I called them on New Year's Eve day, and got someone who was a CNE (and actually knew what he was talking about). Even though I had something that they didn't really "officially" support, he helped me figure out what I needed to do to make it work. He was polite and pleasant. Gold star for Compaq.
---------
From: Randy Grein

I've got a personal preference for Compaq, mostly because I'm most familiar with them, but IBM and HP are also quite good. Things to look for:

- Hardware-based management tools
- ECC memory
- Purpose-built servers
- Warranty
- Reputation

There MIGHT be a hardware-based problem, but more likely it's a patch/driver/update issue.
---------
From: Suskop@aol.com

>The ALRs never gave us any trouble and were very good performers.
>They did as well as the Compaqs we looked at/demo'd, and at somewhere
>around 2/3 the price.
I am curious. I have about 20 older/newer Compaq servers where I work. I guess they are expensive, but post-sales support and accessibility are really where it counts when all is said and done. I have no experience with ALR or any others; do their servers have all the same features as Compaq's? I know of a few Compaq features that have saved me many hours of troubleshooting, e.g. Insight Manager, re-mapping of defective RAM, and their very reliable RAID technologies. Also, their new Smart SCSI Array 2 card lets you add another drive to an existing array without destroying the existing array's data. How is ALR's support, have you called on them yet? Also, what about ease of setup and white papers, downloadable from Compaq's Web site, on configuration and optimization of the server and NOS, not just NetWare?
---------
Date: Thu, 7 Mar 1996 19:01:10 +0000
From: "Tim Stewart"
To: netw4-l@bgu.edu
Subject: Re: Tape backup

>>I have a Compaq Prosignia 300 server with an embedded 53c710 SCSI and an
>>add-on Adaptec 1540 SCSI running a CDNetrom 7-disk CD tower. My problem
>>comes in when I try to run Arcada Backup Exec ver. 7. My tape backup unit
>>(a Compaq Python) is on the embedded SCSI. When I run Arcada it doesn't
>>see the tape unit unless Compaq's CPQSASPI.NLM is loaded. CPQSASPI will
>>not load with ASPITRAN.DSK loaded, which is required for my AHA-1540 and
>>CD-ROM tower.
>
>Unfortunately you are kind of stuck. Compaq's ASPI shim is not the same
>as Adaptec's, and you can only have one loaded at a time. This isn't
>anyone's 'fault', and is universal with anyone using ASPI technology.
>Your choices are:
>
> 1. Replace the Adaptec card with a Compaq.
> 2. Add an Adaptec card for the tape drive.

Yet another plus for the clones... I know that people go with Compaq, DEC, etc., because they are superior machines in every way, EXCEPT compatibility. I absolutely refuse to purchase a machine from a manufacturer which insists that the solution to my 'incompatibility' problems is to purchase add-on hardware from them as well. If I can't buy a machine, install the peripheral hardware of my choice, and have it run flawlessly, then I WON'T buy the machine. End of story. As far as embedded interfaces go, the furthest I would ever go is a built-in EIDE interface and built-in serial/parallel ports... other than that, I insist on the ability to use whatever interface I choose... when it comes to SCSI, it usually ends up being Adaptec.

To qualify my philosophy, I have been running a 'clone' as a 3.1 server for almost 6 years, and I've always enjoyed using controllers and peripherals of my choice, and not once has the machine EVER gone down. Not once. In your case, if you have hardware which will not co-exist with other ASPITRAN-driven hardware, then the problem lies in your hardware, and at your next upgrade you should consider turning the Compaq into a workstation and replacing it with something which is truly 'Industry Standard'. In the meantime, take Randy's advice, put the tape machine on an Adaptec, and disable the built-in SCSI interface. Just make sure that you have separate SCSI controllers for your CD-ROM setup and your file server's hard drives... having this hardware on the same controller causes problems, no matter what the platform is.
OK, I'm off my soapbox, and I know that I haven't answered any of your questions, but I thought I'd vent... Compaq is excellent, DEC is excellent, but when they join the real world of ISA, EISA, VLB, and PCI - in other words, INDUSTRY STANDARDS - and get off their little proprietary trend, I'll consider buying them. Their system boards may be optimized for server environments, but when compatibility is compromised, what's the point???
---------
Date: Thu, 07 Mar 1996 23:15:56 -0500
From: fratus1@ncgroup.com (John Fratus (POP))
To: netw4-l@bgu.edu
Subject: Re: Tape backup

Tim Stewart wrote:
>Yet another plus for the clones

You're way off base here. I have a lot of experience with these "server class" machines. I've yet to see "industry standard" adapters refuse to work in them. In my mind, Compaq _defines_ the standards. They created the "clone" market, and in many respects continue to lead it. They are not like IBM, which defined a market and then left it, or like Gateway which, since it does no R&D, can only follow the market.

If you want to disable the internal SCSI and run an Adaptec card, great. You'd have to do the same on a "clone" that came with a SCSI card you didn't want to use: remove the card. Big deal. Are you complaining that they built SCSI into the system? And how is that different from the built-in IDE, I/O, video, sound, etc. etc. etc. on your beloved "clone"? Except that I _CAN_ disable it. I've run into many "clone" computers where truly disabling a built-in feature is difficult or impossible. On the Compaq (or HP or even IBM), I simply turn it off.

Now, if you want to talk price...
---------
Date: Thu, 7 Mar 1996 23:12:47 -0800
From: Randy Grein
To: "NetWare 4 list"
Subject: Re: Tape backup

Tim, I couldn't resist correcting several misapprehensions in your post... First, Compaq, HP, etc. ARE part of the group that gave us EISA, ISA, PCI, etc. They in fact do define the standards - well, actually they help define the standards.

A little background is in order. The original PC had a well-defined and documented bus architecture - IBM wanted people to develop to it, apparently. The result was a rush to clone the PC, and the attendant rush to boost performance. Compaq was in the forefront of this push for speed, market share and profits. With the advent of the AT, IBM now had a vested interest in a little ambiguity. In any case the new, faster and wider bus was never explicitly documented; the resulting clones were less than compatible. Only time and the obvious benefits of working with every machine have gradually forced a uniform standard.

Lest some doubt, I cite an example: the clock speed of the ISA bus is 8 MHz, right? Sometimes. Many, many clones allow different processor clock speed divisors to be used: busclock/2, busclock/3, etc. Depending on the exact configuration, I've had many machines run up to 16 MHz on the ISA bus; most will run at 12 MHz. The speed increase is quite noticeable. However, some cards can't reliably take this overdrive mode, and the only sure way to tell is to test it. Now the real truth is this: there's no such thing as a "compatible" PC; they are all incompatible to one degree or another.

Now as far as server hardware goes, consider this: I've seen incompatibilities with EVERY brand of machine I've worked on, from IBM down to the cheap, wonderful nameless clones that have made computing affordable.
Something as simple as a BIOS change can render a perfectly fine machine useless with the right hardware/software mix; I've tested and discovered 3-way incompatibilities - software, motherboard, and NIC, for example - that simply wouldn't show up in any rational testing program. I do agree with you that the test is to ensure compatibility with your desired peripherals/software/etc., but if it doesn't work now, wait 3 months and everything you think you know has changed regarding compatible pieces.

There are two final points I'd like to make vis-a-vis dedicated servers versus PC-based servers. Most machines designed as servers have features such as ECC memory and management capabilities that allow detection of hardware errors, sometimes BEFORE they get to the point of failure. This requires circuitry simply not available in no-name clones. The second point is performance. Workstations are essentially single-tasking; servers multitask heavily. This difference in focus places different weights on features: for example, EIDE drives are great for workstations because they are fast when fetching one item at a time, generally faster than SCSI. They are, however, useless on machines destined to be servers, as they quickly reach saturation during loads attempting to satisfy multiple requests. I recently had an opportunity to replace an EIDE drive and controller on a server with a SCSI of comparable speed ratings; the client was ecstatic with the doubled performance under load. Memory systems have similar issues; witness Compaq's TriFlex architecture. Useless for a workstation, it buffers multiple requests for the assorted memory resources, resulting in improved performance - but only on a server.
---------
Date: Fri, 08 Mar 96 07:36:09 CDT
From: John_Cochran@odp.tamu.edu
To: netw4-l@bgu.edu
Subject: Re[2]: Tape backup

DEC has some of THE most compatible boxes I have EVER worked on from the BIG names. I use them in my NetWare server farm, along with ALR, and they have been nothing BUT compatible. The Digital Prioris HX, for example, is the only box that I have worked with (out of Compaq, ALR, HP) that has PCI bridging that works! 6 PCI slots and 6 EISA slots. Have it almost loaded with controller cards of any flavor and it runs 24x7 without a hitch.

Now, you could be talking about the old days when DEC's VAX architecture was proprietary (still is in many instances), but the modern-day PCs are great. We have 6 various server models, and ~70 of their Pentium desktop XL models. On the desktop models, we have had 1 power supply failure and 2 CD-ROM drive failures in 2 years. Stable, compatible, and a wonderful warranty. Of course, we have a DEC VAXcluster, and 4 DEC AlphaServers (2100's).

My point here is that it might just be worth it to look at the DEC machines again. They have made vast strides in the PC architecture from the ones they had years ago.
------------------------------