------------------------------------------------------------------
NOV-NIC1.DOC -- 19970620 -- Email thread on Network Interface Cards
------------------------------------------------------------------
Feel free to add or edit this document and then email it back to faq@jelyon.com

Date: Sat, 20 Jan 1996 07:32:04 -6
From: "Mike Avery"
To: netw4-l@bgu.edu
Subject: NICs

>>PS: Any beefs about 3com for a PCI 10BaseT interface?

>Only a CLAIM that 3com uses the CPU heavily for I/O, even with
>busmastering. Bear in mind that this is a claim only, I've not had
>it verified by what I'd consider a reliable source.

A friend maintains that he's seen no evidence that links smoking and cancer. We ask him what reports and studies he's read. "None", he replies.

You have been given references on the matter. To repeat.... a general discussion of NICs and server usage can be found in "Optimizing NetWare Networks" by Rick Sant'Angelo, published by M&T Books, ISBN 1-55851-305-1. Starting about page 161 is a good discussion of NICs and processor utilization.

Bluntly, a second NIC doubles throughput on most EISA NICs. Then the point of diminishing returns sets in. With a number of brands, there is a reduction in throughput when moving from three to four NICs in a server. This is due to processor load, even with bus mastering cards.

Each of the recent PC-Magazine reviews of NICs has compared throughput with processor overhead. The NE3200 seems to do very well in that regard. The 3COM cards do less well. The 3COMs have somewhat higher throughput, and are well designed cards. But they extract the price of higher CPU utilization. This might be due to their drivers, but that is still what you get when you buy their product.

As a side comment on 3COM - we use them, so I have some experience with them. The EISA Ethernet card seems to cause strange problems in Compaq ProSignia EISA file servers. The system just starts ignoring the LAN. Annoying..... a quick glance at the server shows nothing's wrong. A closer look at the NIC statistics shows no data is moving. Unloading and reloading the drivers will often correct the problem, but not always... then a reboot is needed. As a result, I tend to use Compaq cards in Compaqs, or NE3200s if I can't get a Compaq in a timely manner.

------------------------------

Date: Tue, 23 Jan 1996 01:07:10 GMT
From: Warren Block
Subject: Re: Trying to identify ARCNET card

>>Hello. I'm trying to find some documentation on some ARCNET cards
>
>The card is made by DANPEX. Their web site is located at
>http://sweb.srmc.com/danpex/

I had four of those, quite some time back. Eventually each was removed from its workstation to cure lockup problems (which it did). Maybe they have improved in the meantime.

------------------------------

Date: Thu, 25 Jan 1996 02:44:01 GMT
From: Steve Kurz
Subject: Re: Best NIC for LZFW

>Can anybody recommend a good network interface card for Lanalyzer?
>The one that passes packet errors/bad frames to the application layer.

My suggestion is to use a plain vanilla NIC with a promiscuous mode driver. I use 3Com 3c509 or NE2000 Plus when I want a solid connection and I am not necessarily concerned with raw speed. They both have good drivers and work well with LZFW.

------------------------------

Date: Thu, 25 Jan 1996 16:41:33 GMT
From: Don Wolf
Subject: Re: Best NIC for LZFW

I can't recommend one that works, but the 3C579 and SMC Ultras didn't work for me. I'm still looking too, so this is one thread I'll follow.
Its been said to me by my NOVELL rep that NOVELL has a list of acceptable NIC's, but I don't know where its at. ------------------------------ Date: Thu, 25 Jan 1996 14:17:37 -0600 From: Joe Doupnik Subject: Re: Best NIC for LZFW >I can't recommend one that works, but the 3C579 and SMC Ultra's didn't >work for me. I'm still looking too, so this is one thread I'll follow. >Its been said to me by my NOVELL rep that NOVELL has a list of acceptable >NOC's, but I don't know where its at. --------- NE-2000 and NE-3200 and NE-2000 clone (a very close clone) work just fine here with LZFW. Latest drivers, no need for rxmonstk. Joe D. ------------------------------ Date: Wed, 7 Feb 1996 22:18:10 -0600 From: Joe Doupnik Subject: Re: Workstations disconnecting >I am having a problem with one of our networks. In one lab we have >machines that seem to frequently disconnect from the server. It does >not always do it and it may go for hours without a problem and then >several of the workstations will disconnect. The server is running >Netware 3.11. The server is supporting three labs with each lab on its >own network card. Two of the labs are using ne2000 cards and one is >using a NP600 card. The lab that we are having the trouble with has >just recently been connected to the Internet using the same cabling so >we are not using the TCP/IP on the server. ------------- The real cause is probably dropped packets. The reason they drop is likely too many packets/sec for the boards involved. You don't quote any figures for traffic, but a rough guide is when NE-2000's see 1000 pkts/sec they are in trouble. Your server is working harder than the clients because it has all the disk stuff to perform too, and thus the server overloads before most clients. The NP600 is a venerable board, and should be retired with a pension. It's not up to current levels of activity. To examine the traffic you need to put a monitor on the wires, one by one, and look carefully. Novell makes a nice one for this purpose, Lanalyzer/Windows. In lieu of that just watch MONITOR and the packet update rate. If it's near that 1000/sec (about one sec per screen update) you have too much traffic for the current equipment. Also see the section on no ECBs available under the lan adapter heading, and if it's a bunch then the server is clearly falling behind. Coax is really good wiring, if done right. Often it isn't. Mixing cables, adding any stub whatsoever to a Tee, dinged cable, cable too long are common failings. Bad BNCs (twist-on being the very worst there is) cause strange errors too. Flakey lan adapters are hard to track down, but not that hard. While here let me relate yet another network story. This afternoon just as my networking class was finishing another system manager rushed in saying "The network is down! It's down!" I got a VOM (Volt/Ohm Meter to the rest of you) and looked at the coax involved. Measured about 25 Ohms, more or less. The clue is in that more or less. Checked a NW server and it was ok, no bad counts in Monitor, but no comms to the rest of the world. A multiport repeater at the end of the coax run had its red light on and would not clear it. Small pause to think and I had it: d.c. on the wire upsetting all the Ethernet 0 to -2V signaling level sensing. Sure enough, and average of -1V and that biases the Ohm meter readings. The manager said one room had a wiff of ozone about the same time as the outage. Hmmmm. No flames visible, no crisp'd students. 
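[Note: the 25 Ohm figure itself is roughly what a healthy thin-coax segment should read, since the two 50 Ohm terminators at opposite ends of the cable appear in parallel to an ohmmeter. A quick sketch of the arithmetic in Python follows; the small cable resistance is an assumed, illustrative value, not a measurement from this incident.]

    # Expected DC resistance of a properly terminated coax segment, as seen
    # by an ohmmeter clipped onto the cable.
    r_term  = 50.0     # ohms, one terminator at each end of the segment
    r_cable = 2.0      # ohms, assumed center-conductor resistance (illustrative)

    r_reading = 1.0 / (1.0 / r_term + 1.0 / (r_term + r_cable))
    print("Expected reading: about %.1f ohms" % r_reading)    # ~25 ohms

    # Much higher suggests a missing terminator or an open cable; near zero, a
    # short.  A steady DC voltage on the wire (about -1V in the story above)
    # both biases the ohmmeter reading and is itself a sign that some
    # transmitter is jammed on.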
Switch off machines one by one while watching the wire voltage and all of a sudden it went to 0V. Ah ha! Cycle the machine back on and no problem, comms are normal. Cause: dust from vacuuming caused a monitor to arc (ozone) and that zapped the computer which in turn put the Ethernet adapter into wacko mode and the wire jammed hard. It's a good thing the manager waited until the end of class. The campus network alarm system turned red, my phone rang, folks went looking for me while I droned on about nifty heuristics in TCP. Had they stopped me in mid-heuristic I would have dragged the whole class along on the treasure hunt, and the result would have been a trail of debris, crushed machines, and a bunch of grinning grad students. Lan adapters fail, mysteriously and not solidly. Multiport repeaters suffer similarly or worse. Joe D. ------------------------------ Date: Mon, 4 Mar 1996 11:26:20 -0600 From: Jason Oliver Subject: Re: 3C595-TX 10/100 in server? I asked a question about 2 weeks ago about 3COM cards and have been researching the quality of these cards in a server. I have found a site that has some testing results that compares these 3COM cards (PCI and EISA Bus Master cards as well as others) to other vendor's cards: http://www.lanquest.com/pages/reports.html ------------------------------ Date: Wed, 13 Mar 1996 22:15:16 -0600 From: Joe Doupnik Subject: A note on Intel Pro 100 Ethernet boards Steven Shippee and I are working on some tests of Ethernet boards of the 100Mbps variety and I came across an interesting item for the group. These EtherExpress Pro/100 boards have a feature of wishing to start transmitting a packet after N buffer bytes have arrived, unless told otherwise. The default is a couple hundred bytes. Well, it turns out that such "parallel tasking"-like behavior reveals itself on the wire in the form of bad packets (fragments with no proper end). The cure is to tell the board driver to wait for all of a packet to be delivered before touching the wire. My setup is rather simple at the moment, just lashed together. A brand new, hours old, Pentium 100 EISA/PCI board as a NW 4.10 test server, with the Intel PCI board. Cat 5 wire strung across the hallway to my office where an Intel EISA board (same kind, different bus) is in my desktop 486-66 DX/2. VLMs on the client, with Pburst but no local cache. No hub ($$$). As I moved a few hundred MB back and forth I noticed that MONITOR reported some TX DMA underruns, and that's the emitted fragment effect. This was on the server, naturally. The obvious candidate for causing delays is the disk system, a rather primative Adaptec 1542 SCSI controller and an old SCSI I Seagate drive (you work with what you have). Now at 100Mbps a 128 byte fragment is about 1000 bits or 10^3 bits / 10^8 bps = 10 microsecs to send on the wire. That is a very small time for the rest of the system to deliver a missing buffer, even on a Pentium box. I suspect that the current 3Com Ethernet boards, and their SMC rivals, have the same problem about wishing to get on the wire too quickly. They may have software control to prevent it as does the Intel board, but I don't know that myself. But if you have these boards I suggest preventing the problem. That is the point of this message. Before folks ask, yes, it does go fast. Not nearly as fast as one might expect because there are all the other components of both systems in series. So how fast? 
I'm not going to say presently since I just hooked up things tonight, but less than a factor of two difference from 10 Mbps Ethernet. The throttle seems to be the slow drive on the server, as expected, and I'll find out later when I can move over a faster drive. That 1542 controller is an ISA busmaster board, rather old by now, and yet all things considered it is faster than the drive. Server utilization was up a ways, around the 40% mark, during big transfers, and I attribute it to the disk system. MONITOR showed no noticable overhead from the Intel Ethernet board, but it did show lots and lots when a plain jane NE-2000 was used on coax. Dirty cache buffers shot to the limit of 16MB memory (currently, will be doubled shortly) for files going to the server, and that is a sure sign that the disk system is slower than the comms link (simple queueing theory in action). Both Intel Ethernet boards have performed fine so far (a couple of hours since being hooked up). Now I have to roll up the wire before someone trips over it; a permanent link awaits the chore of dragging it through tiny conduit filled with other wires, and we know who gets to do that task. There will be more on these experiments as time becomes availble. Joe D. ------------------------------ Date: Thu, 14 Mar 1996 00:19:16 -0600 From: Joe Doupnik Subject: Addendum to note on Intel Pro 100 Ethernet boards I did manage to get some rough unscientific numbers between two ways of communicating between the same server and client machines. The iozone tests are shown side by side below. See netlab2.usu.edu, cd APPS for iozone.exe. The differences are a little larger than I thought. Joe D. Operating System: MS-DOS, VLMs, no local cache. Slow server disk system (which is the overall throttle on the exchange). 16MB server. Notice how performance drops once we run out of cache buffers in the server to buffer more disk requests (at which point server utilization climbs to above 80% as it slaves to move things to disk). Disk read-check after writes was turned off to get half way acceptable disk performance tonight. NW 4.10 server, Pentium, 32KB disk allocation units, no other users. The 10Mbps case shows that the server has to spend lots of time split between servicing the disk system and the NE-2000. Waiting a tad too long means waiting for the disk to come round again. Reading means wait on the disk; writing is buffered by the server and hence limited by server free memory. Reads are from server to client. 
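[Note: the transmit-underrun arithmetic in the note above works out as in the sketch below; the frame sizes and speeds are illustrative values plugged into the same formula, not additional measurements.]

    # Serialization time: how long a given number of bytes occupies the wire.
    # If the host cannot refill the adapter's transmit FIFO within this window,
    # the board underruns and a fragment goes out (the TX DMA underruns above).

    def wire_time_us(nbytes, mbps):
        """Microseconds needed to clock nbytes onto an Ethernet running at mbps."""
        return nbytes * 8.0 / mbps

    for nbytes in (128, 512, 1514):        # illustrative sizes, up to a max frame
        for mbps in (10, 100):
            print("%5d bytes at %3d Mbps: %8.1f microseconds"
                  % (nbytes, mbps, wire_time_us(nbytes, mbps)))

    # 128 bytes at 100 Mbps is only about 10 microseconds -- very little time
    # for a busy server to deliver the rest of the packet, which is why telling
    # the driver to wait for the whole frame before touching the wire avoids
    # the runt fragments.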
Server:  Intel PCI 100Mbps               Novell ISA NE-2000 10Mbps
Client:  Intel EISA 100Mbps              Novell EISA NE-3200 10Mbps

         IOZONE: auto-test mode          IOZONE: auto-test mode

IOZONE: Performance Test of Sequential File I/O -- V1.15 (5/1/92)

 MB  reclen  bytes/sec  bytes/sec     MB  reclen  bytes/sec  bytes/sec
             written    read                      written    read
  1     512     516539     516539      1     512     254508     235635
  1    1024     659481     825650      1    1024     353055     313007
  1    2048     504123    1191563      1    2048     398698     397187
  1    4096    1361787    1278751      1    4096     514007     466033
  1    8192    1476867    1456355      1    8192     529583     504123
  2     512     428865     483214      2     512     237234     244708
  2    1024     794375     779610      2    1024     328707     329223
  2    2048     979977    1158647      2    2048     449068     394201
  2    4096    1198372    1361787      2    4096     537731     423667
  2    8192    1416994    1416994      2    8192     626015     449068
  4     512     426684     505947      4     512     238583     249660
  4    1024     713317     755730      4    1024     319444     339619
  4    2048    1158647    1033079      4    2048     417343     414866
  4    4096    1252031    1271001      4    4096     483214     459901
  4    8192    1416994    1384258      4    8192     534306     489417
  8     512     430405     427771      8     512     236432     230013
  8    1024     655359     505642      8    1024     313007     299486
  8    2048     954335     425601      8    2048     399838     231729
  8    4096     954335     437590      8    4096     462692     387643
  8    8192     960894     465775      8    8192     512751     410401
 16     512     442670     451121     16     512     235139     232274
 16    1024     637674     557382     16    1024     311670     307613
 16    2048     718818     513378     16    2048     398224     245928
 16    4096     734232     543479     16    4096     458644     405148
 16    8192     730714     550433     16    8192     499024     433295

Completed series of tests             Completed series of tests

------------------------------

Date: Thu, 14 Mar 1996 12:00:57 -0600
From: Joe Doupnik
Subject: Addendum to addendum to note on Intel Pro 100 Ethernet boards

The last message on this topic for a while. To remove ambiguity about mixed experimental conditions, the table below is for the same setup as last night but using the pair of Intel EtherExpress Pro 100 boards set to 10Mbps rather than 100Mbps. One has to restart the drivers to change speeds.

IOZONE: Performance Test of Sequential File I/O -- V1.15 (5/1/92)
        By Bill Norcott
Operating System: MS-DOS
IOZONE: auto-test mode

 MB  reclen  bytes/sec written  bytes/sec read
  1     512     347210            317750
  1    1024     476625            455902
  1    2048     576140            579323
  1    4096     680893            635500
  1    8192     733269            680893
  2     512     318232            326659
  2    1024     476625            444311
  2    2048     587437            553338
  2    4096     683111            635500
  2    8192     748982            670016
  4     512     323634            333410
  4    1024     462947            454420
  4    2048     578524            549712
  4    4096     687590            626015
  4    8192     755730            668948
  8     512     324260            297152
  8    1024     460154            318111
  8    2048     572210            315479
  8    4096     664181            313007
  8    8192     666820            322266
 16     512     324260            308858
 16    1024     451243            336824
 16    2048     455284            381821
 16    4096     464228            261286  <<
 16    8192     298261            534136  << Note: MONITOR switched to show
                                             Processor util here, consumes resources

Completed series of tests

Comments. The results are similar to those of the NE-3200/NE-2000 pair over coax. Server utilization ramped up strongly only when the disk queue (writing to the server) used all available memory (16MB NW 4.10 box). Hence the Ethernet board driver was not eating the server alive at either speed. There is a marked improvement in performance by going from 10Mbps to 100Mbps with exactly the same equipment, even when the server's disk system is slow. Performance suffers when the cpu has to work harder, as when dealing with an ISA bus adapter and when dealing with excessive disk request backlogs and when running MONITOR in show processor utilization mode (even on a P100 machine).
        Joe D.
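[Note: one way to read the peak figures in the two runs above is against the raw wire rates. The sketch below copies the best bytes/sec value from each run; the wire rates used are nominal figures that ignore framing and protocol overhead.]

    # Peak iozone figures from the tables above, as a fraction of nominal wire rate.
    peaks = {                                   # best bytes/sec seen in each run
        "100Mbps Intel pair":        1476867,
        "10Mbps NE-2000/NE-3200":     626015,
        "Intel pair forced to 10":    755730,
    }

    for name, bps in peaks.items():
        wire = 100e6 / 8 if name.startswith("100") else 10e6 / 8   # bytes/sec
        print("%-25s %6.0f KB/s  (~%2.0f%% of nominal wire rate)"
              % (name, bps / 1024.0, 100.0 * bps / wire))

    # The 10Mbps runs reach half or more of the nominal wire rate; the 100Mbps
    # run uses only about a tenth of its wire.  The slow disk system and the
    # drivers, not the cable, set the ceiling here.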
------------------------------ Date: Fri, 15 Mar 1996 13:07:04 -0600 From: Joe Doupnik Subject: more performance misc notes Taking out 20 min before lunch today I decided to run a wire-melter test program which sends IPX packets to another board and gets back a reply. It's a Novell test program, and it does not use Packet Burst (just send and wait for reply). Sorry, I can't distribute the program. With 10Mbps Ethernet the speed is about 550KB/s using 1500 byte pkts. With 100Mbps Ethernet the speed is about 2200KB/s, ditto. 100Mbps Ethernet means 100Base-TX (two pair Cat 5). Neither is the capacity of the media (a tad over 1MB/sec and 8+MB/sec, respectively once we include framing overhead). The above figures basically represent driver plus board throughput, with no disk nor file system nor NCP interaction at all. Looping back through just my own board yields about 3.3MB/sec to an NE-3200 and 2.3+MB/sec (must be somewhat higher but no notes in front of me) to an Intel Pro/100 at 100Mb/s. NE-3200's are fast smart boards, and we must be seeing efficient buffering and loopback. Nothing went onto the wire in this test. Running iozone (or perform3 if you wish, similar results) we can achieve higher values by using Pburst and hence have fewer wait-for-ACK intervals per sending opportunity. But "higher" is what I sent out the other day, say 800KB/sec peak (more like 350-400KB/sec) for 10Mbps, and 1.4MB/sec using 100Mbps Ethernet. Neither of these is full wire capacity, and the 100Mbps values are especially disappointing. Thus a serious component of our lans is the driver and board efficiency, not that is startling news, but it does help reveal bottlenecks. I replaced the slow SCSI drive system on the test server with a much faster one, and the throughput numbers reported the other day remain very nearly the same. What changed was the number of dirty cache buffers for writes, which went way down because the disk system was keeping up with the incoming traffic better than the slow disk system. Cpu utilization still shot up, but I suspect that's polling the disk system for bytes (are we there yet?). If time and opportunity permit I'll discuss this driver throughput situation with the ODI group next week (assuming they are let loose to attend Brainshare). We don't expect wire-speed because there are many other considerations to make drivers hospitable in a general environment. However, I'm concerned that the 100Mbps technology now becoming available won't be usable at the capacity that folks may expect. Joe D. ------------------------------ Date: Sun, 14 Apr 1996 22:17:47 -0400 From: Glenn Fund Subject: NetWorld+Interop Las Vegas '96 In Review Matrox Shark 10/100 Multiport NIC, Matrox Networks, http://www.matrox.com, (800) 837-3611, $995, with onboard CPU $1395 Single slot PCI based server card that is configured to support four independent Fast Ethernet server segments. All ports can be configured for full duplex operation, yielding an amazing 800Mbits/sec throughput from this single card. Busmaster operation yields low CPU utilization. NetWare, NT, Windows 95, LAN Manager and LAN server drivers available. Matrox Piranha Switch NIC 10/8 Board, Matrox Networks, http://www.matrox.com, (800) 837-3611, $995, with onboard CPU $1195, Prianha RJ-45 Connector $125 (required) Single slot PCI based server card that is configured to support eight independent Ethernet server segments. All ports can be configured to run in full-duplex operation, yielding an effective 20Mbit/sec per port. 
Busmaster operation yields low CPU utilization. NetWare, NT, Windows 95, LAN Manager and LAN server drivers available. ------------------------------ Date: Tue, 30 Apr 1996 19:24:56 EST From: Jayson Agagnier Subject: Re: 100 mb test / minimum hardware required? >I am looking to test the performance of a particular network application >over a 100mb link. Can I do this between a 3.12 server and one workstation >with 2 cards and no hub? If the two are wired with a crossover cable? I >know you can do this with 10bt but what about 100mb? What cards would you >recomend? has anyone done this? A vendor told me you had to buy a 100mb >hub too. ??? truth or hype?? You certainly can attach one workstation and once server together using a twist or crossover cat5 cable, but the results won't give any indication to actual network performance. At this time, you will be hard pressed to find any decently priced 100Mbps hubs or switches, so a station to station test would most likely be your best choice for now. I have been testing various cards, hubs and switches at my lab at home, and have not found any significant difference between card vendors. Only within the last month was I able to set the cards and switches to full duplex, and even then, the increase was marginal. The network ODI drivers need allot of improvement before any significant speed gains can be had. I have tried out the Intel EtherExpress Pro 10/100 PCI & EISA, the Compaq NetFlex EISA & 3Com PCI 100 cards. They all operated pretty much within the same specs, only a few percent difference between each in terms of speed and performance. For now, we have standardized on the INTEL cards, they seem to have fewer problems than the Compaqs, and use less server resources than the 3Com. So our network backbone is 100Mbps over cat5 going to four Synoptics 28115 switches and a WellFleet BLN with four 100Mbps ports. We are also in the process of setting up our VLANS into workgroup format so users do not need to access other servers over the router. If you can avoid going to 100Mbps, try to, the market hasn't quite matured yet, and there still a couple of 100Mbps ethernet camps out there. Wait about a year or so if you can. 10Mbps is still by far fast enough for a properly segmented network. I'd say for the most part, 100mbps is still allot of hype by zealous sales people who hunger for those big commission cheques. ------------------------------ Date: Tue, 30 Apr 1996 21:46:15 -0400 From: Ron Johnson Subject: Re: 100 mb test / minimum hardware required? >>I am looking to test the performance of a particular network application >>over a 100mb link. Can I do this between a 3.12 server and one workstation >>with 2 cards and no hub? if the two are wired with a crossover cable? I >>know you can do this with 10bt but what about 100mb? What cards would you >>recomend? has anyone done this? A vendor told me you had to buy a 100mb >>hub too. ??? truth or hype?? >----------- > If you go back to early March in the list's material you will see >a number of lengthy messages from me on testing 100Mbps Intel Ethernet >boards, and the disappointing results. You can expect to get maybe 1MB/sec >or so. > Testing is a lot more subtle than just running Windows or WordPerfect. >No hub is needed (basics, an indication to tread with caution when interpreting >test results). One client to one server isn't reproducing normal lan activity, >as the simplest example of what such benchmarks don't cover. 
> I took my findings to Novell since I went to Brainshare a few days >after the tests. The ODI group is fully aware of the situation, and I talked >with Drew Major about it too (he has a home setup roughly similar to my lab >configuration, but with different software). This predated the recent gush >of concern in the trade rags. > The bottom line on all this is: board vendors have a long way to go >writing efficient high speed drivers. Novell's ODI material is getting >improvements for high speeds (the LSL contributes some load, as but one item). >I can indicate there is a great deal more concern and activity on fast >Ethernet (and ATM etc) than will appear in public forums. Don't expect new >drivers right away or even this summer. > Below the bottom line. Next year I'll be teaching EE undergrads about >writing high performance Ethernet drivers: Packet Driver and ODI. Response >from industrial contacts are very positive indeed since there is a great >demand for people with such skills. All I need to do is generate the course >successfully, using modern components, and with all the help from industry >that I can get. > Joe D. Hey Joe, How about thinking about putting together an internet based course. A person like myself, who can't have the pleasure of your presence would sure appreciate the clarity of your mind in dealing with such a course. Think about it seriously, this is the time to consider some alternative teaching AND *compensation* arrangements. That's my 2cents. RON JOHNSON ------------------------------ Date: Tue, 30 Apr 1996 21:39:19 -0600 From: Joe Doupnik Subject: Re: 100 mb test / minimum hardware required? >> Below the bottom line. Next year I'll be teaching EE undergrads about >>writing high performance Ethernet drivers: Packet Driver and ODI. Response >>from industrial contacts are very positive indeed since there is a great >>demand for people with such skills. All I need to do is generate the course >>successfully, using modern components, and with all the help from industry >>that I can get. >> Joe D. >> >Hey Joe, > > How about thinking about putting together an internet based course. >A person like myself, who can't have the pleasure of your presence would >sure appreciate the clarity of your mind in dealing with such a course. >Think about it seriously, this is the time to consider some alternative >teaching AND *compensation* arrangements. That's my 2cents. >RON JOHNSON -------------- That's very flattering and appreciated. But doing a course face to face is by far better than a talking head (video) or even magic fingers (as you read this). A real course also has a heavy practical component, well mine do, and that's very difficult in a diversified environment. The exchange of information coming back is difficult too, and yet that is often an excellent learning tool. I do my bit to slowly add information to the common pool on this list, as long term readers are aware, and it comes out of my professional hide (according to my superiors here). Do I hear violins playing in the background? A course to non-registered students counts for no points in academia, alas. What a system. Anyway, it's all I can do to teach what I have plus help out here and there plus, in the odd moments, try to do the creative things of interest. Some folks think Univ Profs have nothing to do during summers. Ha! It's worse than during the academic year because of all the delayed but imperative creative work. 
Interesting suggestions are welcomed, of course, with the proviso that personal time and energy are my limiting factors. My Dean would be interested in those that involve meaningful industry involvement.
        Joe D.

------------------------------

Date: Tue, 30 Apr 1996 19:43:46 +0100
From: Richard Letts
Subject: Re: ATM Fiber Cards or FDDI Cards in a NW4.11 Server

Brian wrote....
>We're looking at upgrading our Novell NW4.11 Servers and Unix Servers
>to ATM speed (155Mbps) and skipping over FDDI speed (100Mbps).
>
>We have 7 Novell NW4.11 Servers. Each has 3 Ethernet Cards for:
>IPX Frame=EtherNET_802.2 ( 10 Base-T PCI Card for now )
>IPX Frame=EtherNET_802.3 ( 10 Base-T PCI Card for now )
>IP Frame=EtherNET_II ( 10 Base-T PCI Card for now )
>
>We want to replace the 3 EtherNet cards with 3 PCI ATM Fiber Cards in
>all 7 Novell Servers. We would have an ATM style Fiber cable from
>each ATM server card to an ATM Switching Hub. Then ATM Fiber
>connections between 18 Buildings and finally keep our 10 Base-T live
>inside each building and upgrade Novell WorkStations to 100 Base-T or
>Fiber ATM as needed.

The first thing to note is that LAN Emulation is going to be tricky; I don't believe anyone is delivering ATMODI specification drivers yet; the ATM drivers are probably going to present the card to the NetWare operating system as an ethernet driver. There are SIGNIFICANT performance problems with that, as the card can de-multiplex the ATM streams in hardware, whereas the LSL has to do this in software.

Also, 3 155Mbps cards is 930Mbps of bandwidth (it's 155Mbps FULL DUPLEX), which is 116MBps -- I didn't think PCI or EISA went anywhere near that speed. There was an excellent presentation on this by Henk Bots from the MPR division on ATM support in NW servers (PRO303S) at Brainshare this year.

>Another option is to go with FDDI or 100 Base-TX in the Servers. Ick!
>At only 100Mbps, can this be better than ATM? Why? Expect What? War
>Stories? What can/will byte us? Is FDDI more standardized than ATM?

You are buying into tried and trusted and (relatively) well standardised equipment at reasonable prices with 100Mbps ethernet; however you are probably going to hit the same fundamental limitation in the LSL on the server.

FDDI is WAY more standardised than ATM. I can probably plug (almost) any vendor's FDDI equipment together and get it to work (we have cisco, 3com, Network Products (?) and SUN equipment on the campus FDDI network here, with bridges, routers and concentrators...). Personally, I think that ATM is 6-12 months down the road before it gets the same level of standardisation. Until then we are sticking with FDDI in existing installations, and 100baseTX in new installations (with slots free for ATM cards when they are delivered).

If you want to go ahead with ATM now, I'd suggest you buy everything that has to support LANE from the same supplier, with cast-iron guarantees that they will interwork with other suppliers in the future (a 10% retainer concentrates their minds on solving the problem).

------------------------------

Date: Tue, 30 Apr 1996 08:20:06 -0600
From: Joe Doupnik
Subject: Re: Ethernet

>I currently have my unmanaged ethernet hubs connected to my fileserver via
>coax with my workstations attached to the hub using Type 5 twisted pair
>cabling. I am in the process of upgrading my server. I want to put 10/100
>PCI ethernet cards into my new server. Can I still use my unmanaged
>ethernet hubs if I connect them using twisted pair? I made one crossover
>cable (1236 to 3612) to connect 2 hubs.
It worked fine. Can I connect 4 >hubs using this method? -------- Go no further. Hubs/repeaters in series equals asking for trouble big time. The normal limit is two hubs/repeaters between ANY two points, though the design permits up to four. More than two simply flunks often in the field. Bridge, please. That cuts junk packets, reduces the number of stations in a collision domain, resets the hub/repeater counting. Joe D. ------------------------------ Date: Tue, 30 Apr 1996 16:37:36 -0600 From: Joe Doupnik Subject: Re: Ethernet >What's your feeling on switches in series with routers? I've seen >some pretty odd stuff go on and I'm wondering if you have any >thoughts. ------------- Fundamentals time. Hubs/repeaters are bit level devices, where what comes out is amplified, wave shaped some, a few leading bits of the Ethernet preamble are often lost, wave shapes are often narrowed or widened, and what one port says every other port hears (same collision domain). Bits come out as the frame still comes in; only a bit or two delay through the box. Adding hubs/repeaters in series multiplies the packet shortening and bit width distortion (jitter is the term), to the point where bits become misunderstood (bit rot is the term). Not to mention funneling a lot of traffic to N-1 unwanted wire segments. Etherswitches are multiport bridges. Ethernet frames are taken in as a whole, they sit for a short time, and then fresh frames are generated on an output port (if the bridging rules say so). It's a full Ethernet receiver and transmitter, not a bit level device. Each port is a different collision domain. One packet time delay (to read it all in and check the bits before doing anything with it). Routers are bridges with a college degree and peer deeply into a packet to understand protocols, whereas bridges are often physical layer devices understanding ONLY hardware source/destination addresses. More work, higher pay, bigger box, very very much faster internal architecture. Adding stations to a collision domain means each gets less of the wire. Simple math here. That's one of the main reasons for bridges. Routers are smart enough to filter packets to keep down the bridge leakage (all broadcasts go through bridges), and do intelligent routing when there are alternative ports, and a great deal more. I say again: Etherswitches are nothing more than multiple port bridges. Joe D. --------- From: "Philip J. Koenig" Date: Sun, 26 May 1996 18:56:40 -0700 Subject: Addition to thread "Nov-Nic" Joe, Yes, "Etherswitches" (Kalpana trademark) or their equivalents are architecturally multiport bridges. However the speed advantages are gained not just by dedicated collision domains and port-to-port exclusivity, but also by cut-through forwarding that can forward the packet after only the header is read (~64 bits) rather than waiting for the whole packet (<=1514 bits) to arrive, as is the methodology of a traditional bridge, no? ------------------------------ Date: Fri, 24 May 1996 13:25:02 -0400 From: John Navarro Subject: Re[2]: How Many NICs in One Server? 16 nics per server allowed. Good luck getting 16 in. --------- Date: Fri, 24 May 1996 15:53:03 -0400 From: Rick Troha Subject: Re: How Many NICs in One Server? 
Sometimes I save messages that I find interesting, thus this blast from the past: >Newsgroups: bit.listserv.novell >From: donp@novell.com (don provan) >Subject: [none] >Date: Wed Aug 18 16:38:18 1993 > >>The consuensus here (at Salford) is that there is a limit of 65536 >>LOGICAL boards in a fileserver, and as many actual boards as you can >>fit in the server without an interrupt, DMA, I/O address or memory >>address clash. >> >>None of the Installation manuals mention a restriction, and The above >>was gleaned from looking at the Protocol Driver docs, and the hardware >>driver docs. > >I'm sorry. I suppose I should have spoken up sooner, but I just >assumed it was in the docs somewhere and someone would find it. The >correct answer in NetWare 3.11 is 64 logical boards. NetWare 4.0 >supports 256 logical boards. In either case, I think the odds are >fairly good that the original poster will run into other restrictions, >physical slots or DOS interrupts for example, before he runs into the >NetWare restriction. > >I believe, contrary to what someone else posted, that the NetWare 3.x >releases *before* 3.11 only supported 16 logical boards. > >>Well.... Is it any use connecting two cards to the same cable for >>differing purposes ? > >I can't say for sure if it would be any use, but my general attitude >is that if there's a problem with a NetWare server not being able to >stuff enough bits into a single cable, the volume of traffic on that >cable is probably going to cause other problems, anyway. I suspect >that in many cases partitioning the network into smaller networks >would be the better solution. That, of course, avoids entirely the >restriction against multiple NICs connected to the same cable. > >don provan >donp@novell.com ------------------------------ Date: Tue, 18 Jun 1996 15:52:30 -0600 From: Joe Doupnik Subject: Re: File Retransmission with Packet Sniffer >>From: Joe Doupnik > >OK, I'll bite. At the risk of asking a stupid question, What's so bad >about a 3C509? I read through the harware and NIC threads of the FAQ and >did not find anything conclusive. If this does turn out to be the >problem, then I'm up the proverbial creek, since we have an installed >base of over 1000 PC's about 60/40 split between 3C509/3C503. What's wrong with the 3C509 board are a couple of strategic things. Upon packet reception it interrupts to the board driver as soon as a handful of bytes have arrived, well before the rest of the frame has arrived and thus also before the frame CRC check has been applied. That means the driver is taking over with interrupts OFF at this point, it needs to get a buffer from the protocol stack, it cannot tell how large a buffer is needed so the stack has to provide the largest possible, and then it needs to sit and wait (ints still off) for the entire frame to arrive. Once arrived the frame is copied to the buffer, the board is cleaned up for the next frame, and the driver returns operations to normal. A full length Ethernet frame is 1500 bytes, or 1.5 millisec at 10Mbps; that is a looong time to keep interrupts off and the system locked up. Secondly, the Ethernet board buffer has been too small by far so there was almost no elasticity in the system for a stream of back to back packets. 3Com finally doubled the size but it is still small. Taken together these things mean the machine must pay careful attention to the board to get packets removed asap to prevent overruns. It gets blocked for long intervals from that premature interrupt. 
It must allocate full length buffers whereas just-right-sized buffers (from waiting until the frame has fully arrived) makes much better use of space. It may well go through all this for a damaged frame, and have to then back off that buffer allocation and appologize to the protocol stack, and that takes time too. Did you notice 3Com's comments on serial port speeds? Wonder what that is all about? See that long interval of interrupts off and premature interrupt stuff above. That "parallel tasking" stuff is a stupid design, in my opinion. Here's another, on the outgoing side. Intel Etherexpress 100/10 boards (I don't know if 3Com does the same) can transmit a piece of a frame if packet material arrives in clumps but not fast enough to keep up with the wire. Rather than waiting until all the packet bytes are ready and then sending a frame, it fires off bytes onto the wire as they arrive. Result: runt fragments on the wire. To stop this I had to configure the board to wait until all the bytes were available, just as any rational board wishes. Software timers (equals S L O W) on one end or the other must fire to recover from the lost frame. Another stupid design, and I'm speaking as an Electrical Engineer. Adding a suggestion on a point you did not cover. PCI bus master boards are normally configurable (via the PCI Bios) on how long they can continue to use the PCI bus after competition arrives. 32 clock ticks is nice and moderate, 80 is being a greedy pig. Too long and things get out of kilter in a machine. Be conservative, get off the bus when asked. The same applies to EISA bus controllers, for the benefit of other readers. SCSI controllers are items to watch. >The sniffer indicates that the problem is occuring at the application >layer rather than at the physical layer, so would the NIC have anything to >do with this? Each time I boot up the station, I get 83 instances of File >retransmissions, no more, no less. I don't know what the Sniffer is saying, lacking a Sniffer here. You should look below at your net.cfg and see that you have IPX checksums turned off. That means damaged packets (and there are plenty around) will not be recognized as such by IPX and above, and when they penetrate the stack then "interesting things" happen. We can't use IPX checksums with Ethernet_802.3 frames, so use Ethernet_II and turn on IPX checksums. >> It means your net can't stand the traffic. >Single server to single workstation? Yes, of course. Just put frames on the wire quickly, period. >> The first place to change things is that 3C509 board. Ditch it in >>favor of a non-"parallel tasking" unit such as a plain jane NE-2000 clone. >>The second place is the 3C579 board, but I know nothing about it; same >>suggestion. > >Thats 3C595, 3COM's PCI bus-mastering 10/100 Fast Ethernet card. We're >looking at getting these for all of our servers since we will be >installing ethernet switches with 100Mb ports before the fall term starts, >so I'd like to connect the servers at 100Mb. Since this is also a >parallel tasking card, would that make it suspect? If so, whats a >reasonable replacement? Is it the same controller chip as the 3C509? If so you now know a little more about the situation. >> Recall that PBurst puts lots of back to back packets on the wire, >> and the receiver can fumble them. > >I tried this with Pburst unloaded on the server, and the results were very >similiar in terms of File Retransmissions. 
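[Note: to put the bus-hogging numbers above into time units. The 33 MHz figure is the usual PCI clock of the period, assumed here rather than stated in the thread.]

    # How long can a PCI bus master hold the bus once another device asks for it?
    # The latency timer is counted in PCI clock cycles.
    pci_clock_hz = 33.0e6        # assumed conventional PCI clock

    for ticks in (32, 80, 248):  # moderate, "greedy pig", near the 8-bit maximum
        print("latency timer %3d clocks -> about %.1f microseconds on the bus"
              % (ticks, ticks / pci_clock_hz * 1.0e6))

    # 32 clocks is roughly a microsecond; at 80 clocks one device can tie up the
    # bus for about 2.4 microseconds after being asked to yield, which matters
    # when a SCSI controller and a busy NIC are both trying to move data.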
> >> None of this has anything to do with a particular protocol since >> it's hardware level stuff that is causing the problem (most likely). But >> your datacomms guy has the right instincts. > >As mentioned above, the sniffer says "application layer" problem, not >physical. But that doesn't tell me much, aside from possible IPX damage. Anyone else have clues to offer here? >> Then have a very careful look at your Pburst settings in net.cfg >>so that you don't ask for more information than can be buffered by the >>board. That's PBurst read window and PBurst write window, with the number >>being the number of KB outstanding (make it less than 8 to stay within >>normal board buffer constraints). > >Ok, I tried turning it down both to 8 and to 5 and neither seemed to >make any difference. Take your Ethernet board's buffer capacity, divide by 1.5KB per Ethernet packet, allow for one outgoing packet, and that's the largest amount of Pburst traffic one ought to use. Smaller buffers mean more chance of overruns, so use shorter bursts. Broadcast traffic impacts things too, for obvious reasons. Let's look at your net.cfg below. PB buffers= is a boolean, 0 meaning no PBurst activity, anything else means allow PBurst. That's why changing from 8 to 5 had no effect. PB read/write window are the number of KB allowed in a burst, and you have them set to 64. 64 = 64KB >> your Ethernet board's buffer capacity. See the Sniffer for a decoding of what your site's PBurst traffic stream looks like. Here is a clipping of my net.cfg for a NE-3200 bus mastering EISA bus Ethernet adapter: NetWare DOS Requester PB buffers=1 PBurst read window 6 PBurst write window 6 checksum=on ; off speeds up high quality links >Here's my NET.CFG file as it was originally. Should I set the PBurst read >and write windows down to 8 (or below?) even though there seems to be no >difference? Have I missed something really obvious here? This is my >first attempt at using VLM's so I admit that I'm a bit fuzzy on some of >the options even after going though the on-line docs. The three lines below go under major heading NETX rather than against the left margin as major headings themselves. >SHOW DOTS ON >PREFERRED SERVER=NORWAY >FILE HANDLES=150 > >Link Driver 3C5X9 > FRAME Ethernet_802.3 > >PROTOCOL IPX > IPX PACKET SIZE LIMIT 1500 Why bother with the line above? > IPX RETRY COUNT 10 > >NetWare DOS Requester > FIRST NETWORK DRIVE = F > NETWARE PROTOCOL = NDS BIND > AUTO RETRY=10 > BIND RECONNECT=ON > CACHE BUFFERS=40 > CACHE WRITES=ON If you cache locally then all bets are off about data integrity. I turn off local caches. > CHECKSUM=0 Not a good thing, above > LARGE INTERNET PACKET=ON > LOAD LOW CONN=ON > LOAD LOW IPXNCP=ON > > PB BUFFERS=8 > > PRINT BUFFER SIZE=256 > SIGNATURE LEVEL=0 > PBURST READ WINDOW SIZE=64 > PBURST WRITE WINDOW SIZE=64 See comments in the running text > MESSAGE TIMEOUT=10000 Joe D. ------------------------------ Date: Thu, 20 Jun 1996 15:14:59 -0600 From: Joe Doupnik Subject: Re: Better LAN configuration >I had not read any discussion on performance and throughput for the >following network configuration/connetions. If anyone had any test or >insights please feel free to share with me. Basically, I am thinking of >setting a LAN with one of two method. > >1) Using a 100Mbps connections to the server and rest of the worstations >are connected using 10Mbps to the ethernet switch hub. 
> > Netware server > || > || 100Mbps(1 line) > || > ---------------- > | switch hub | > ---------------- > / | | | \ 10Mbps > / | | | \ > workstations and/or regular hubs. > >2) Using 4 network cards in the server and connect it to the switch hub. >All connections are 10Mbps. Lines to the servers are full parity. I guess you meant "full duplex" > Netware server > | | | | > | | | | 10Mbps(4 lines) > | | | | > ---------------- > | switch hub | > ---------------- > / | | | \ 10Mbps > / | | | \ > workstations and/or regular hubs. > >Many users on my LAN use Microsoft Access Database that is around 10M in >size. As well, I also have a sql database running with over 50M of data. >As well, there is also a tcp/ip router coming in which is not shown. ------------ The top configuration is the easiest to achieve. The bottom one should have somewhat better throughput today (not necessarily tomorrow) because 100Mbps Ethernet still has performance problems in the drivers. On the other hand, finding a hub with four full duplex ports might be difficult (anyone know?), and you would depend on load balancing software such as say NLSP to run the boards in parallel like that. Load balancing with NLSP handles IPX traffic ok, but doesn't deal with IP traffic, so there's a minus to consider. By way of comparison, Win95 loading is about 7.5MB here. I've been contemplating the top configuration to replace a three wire spread, but the funds just aren't there right now. If I had the money I would use FDDI between server and hub to get almost all of 100Mbps rather than just 20Mbps or so with today's software; but FDDI == $$$$. Joe D. ------------------------------ Date: Thu, 27 Jun 1996 20:18:44 -0700 From: "Richard K. Acquistapace" Subject: Re: Switching... >I currently have 3 Nics in my server each running an unmanaged 16 port hub. >These three hubs are now full and I'm wondering what my best expansion >option is. My users are already experiencing some slowdowns. Because of >that I really don't want to daisy chain hubs. I guess I could add yet >another NIC and create another segment, but I'm wondering if I shouldn't >look into switching. Could somebody please explain the >advantages/disadvantages of switching. Would it even help in my case? >I'm wired cat 5 with 100Mbs Intel NICS in my server. I've got 10mbs card in >my client workstations. Go Switched Fast Ethernet. You will not regret it. As I see it, there are no disadvantages to Fast Ethernet, only positives. Your bandwith will increase dramatically. Your user community will benefit with 10MB wire transmission for each workstation. Check with Cisco, WWW.CISCO.COM. ------------------------------ Date: Wed, 3 Jul 1996 11:57:31 -0600 From: Joe Doupnik Subject: Re: Listing of Ethernet Addresses by Manufacturer >I was wondering if there is a listing of ethernet addresses by >manufacturer available?? I've just inherited IP coordination for my >subnet and can map IP addresses to ethernet addresses. I would like to >be able to tell what type of ethernet card is used by that IP address >-- it will help narrow down my search for the current user of that IP >address. -------------- It pays to look on the sites supporting this list. For example, see netlab2.usu.edu, cd misc, read file index, find ptr to file ethernum.txt. Joe D. ------------------------------ Date: Mon, 8 Jul 1996 17:06:55 -0400 From: Larry Hansford Subject: Re: 10BT and 100BT in Server >I'm interested in upgrading my current network from 10BT to 100BT. 
>Because of cost constraints, I plan to upgrade a few workstations >at a time. > >Are there any known problems running both a 10BT NIC and 100BT NIC >concurrently in the same file server? If this is possible, I plan >to add a 100BT NIC to my file server to support a separate network >segment supporting only 100BT devices using an Intel 100BT hub. This is perfectly feasible, assuming you have the wiring infrastructure to handle the 100 Base-T load. You have to have installed good quality, CAT-5 UTP wiring to handle the load. There is no problem running one or more 100Base-T NICs with one or more 10 Base-T NICs in the same server. We are testing Intel's Pro/100 LAN Adapters that autosense 10Mbps or 100Mbps, and they seem to be working great! (I don't know where you are, but if you're in the U.S. you might want to take advantage of Intel's $49 trial offer on a couple of these cards.) There is a tremendous amount of information on fast ethernet, etc., on the various web pages for the manufactures. Intel's is at: www.intel.com/comm-net/sns/turn/dma.htm ------------------------------ Date: Wed, 11 Sep 1996 14:08:36 EST From: Jayson Agagnier Subject: Re: 100Base-TX (In General) >I've seen some discussion off and on over the last couple weeks about the >maturity of 100Base-TX products and the technology in general. I'd like to >hear if anyone has found 100Base-TX at the server level to be AT ALL >faster over 10Base-T. [snip] >In general I find that the server performance is adequate (although not >significantly faster than a single 10Base-T link used to be)... but the >workstation performance is MUCH SLOWER than when using 10Base-T. >I've tried three flavors of cards, including Cogent PCI, 3Com PCI and >3Com ISA. Naturally, I've obtained the latest drivers for all of these. > >I'm starting to wonder if there are problems with the Bay 28115. Anyone else >using one of these monsters? Any other ideas out there? I have been using both 3Com PCI & Intel E100 EISA & PCI 100Base-TX network cards for over a year (Intel) with significantly noticeable speed increase of network throughput. We did run into a small problem initially with the Intel cards and the 28115's. We had a couple of cards set to half duplex, while the switch port was set to full duplex. This resulted in many frame alignment errors, about 50% of all packets sent were bad, tracked this down using Optivity and Intel Lan Desk Traffic Analyst. Experienced another problem with some poorly punched cabling as well, the twist was more than half an inch from the punch down, and I assume this resulted in quite a bit of cross talk. Once these two issues were addressed, have not had anything bad happen since, and all has been running fine for a year now. --------- Date: Wed, 11 Sep 1996 19:08:10 -0400 From: Jayson Agagnier Subject: Re: 100Base-TX (In General) - more stuff to add There was more that I forgot to add (gets so busy here), d >I've seen some discussion off and on over the last couple weeks about the >maturity of 100Base-TX products and the technology in general. I'd like to >hear if anyone has found 100Base-TX at the server level to be AT ALL >faster over 10Base-T. Naturally, I've obtained the latest drivers for all of these. >I'm starting to wonder if there are problems with the Bay 28115. Anyone >else using one of these monsters? Any other ideas out there? Depending on the age of the switch, you might require a ROM upgrade. Most of these can be does via flash upgrades, contact your switch/hub vendor for details. 
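[Note: the half/full duplex mismatch described in the message above is a common culprit whenever error rates run that high. Below is a minimal sketch of the sort of sanity check a management station performs; the port names, counter names and figures are invented for illustration and are not taken from Optivity or any real agent.]

    # A duplex mismatch (one end half duplex, the other full) typically shows up
    # as a high rate of alignment/FCS errors and late collisions on the port.
    ports = {
        "switch-1/3": {"frames": 120000, "align_errors": 55000, "late_collisions": 800},
        "switch-1/4": {"frames":  98000, "align_errors":    12, "late_collisions":   0},
    }

    for name, c in ports.items():
        bad = c["align_errors"] + c["late_collisions"]
        ratio = bad / float(c["frames"])
        flag = "  <-- check duplex settings on both ends" if ratio > 0.05 else ""
        print("%-12s %5.1f%% errored frames%s" % (name, 100.0 * ratio, flag))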
We had to update two 28115s, and two Wellfleet routers, as well as a CISCO before everything would operate at 100Mbps. Also, first generation Intel and Compaq EISA 100Base-TX NICs had problems dealing with smaller frame sizes for AppleTalk and IPX 802.3 (you should only use Ethernet_II, so the second point doesn't apply). This was fixed by flash updates to the NICs and LAN driver updates. Anything purchased today should interoperate with little or no trouble at all. ------------------------------ Date: Wed, 18 Sep 1996 11:15:38 -0600 From: Joe Doupnik Subject: Re: Link Driver Parameter >>>We're experiencing difficulties with a small setup which includes a CPQ >>>1500R with the NetFlex/3 100Mbit adapter. Hub is a Synoptics/Bay 100Mbit >>>Hub and PCs are Compaq Deskpro 5100 and 575s. The PCs are running Intel >>>Etherexpress Pro 100's. Problems consist of disconnects and "server not >>>found" at LOGIN. It's a 4.1 environment with VLM 1.20b on the stations. >>> >>>Compaq recommends using a setting under the link driver section of NET.CFG: >>> Threshold 200 >>> >>>Yet they don't adequately explain what it does. Can anyone shed any light? >>> >>>We're LANalyzing their net today to get more info but preliminary input on >>the problem as a whole is welcomed! >>----------- >> I went through that particular item this spring and again this >>summer. It is a cheapshot attempt at gaining points in certain lan >>performance benchmarks, not something one wants in real life. If you read >>the little booklet which comes with the board and the read.me file you will >>see the board wants to start transmitting bytes after a certain number have >>been delivered from above, in the hope the remaining bytes in the frame >>will be delivered before exhausting the existing set. If that hope is not >>fulfilled then a bad frame is emitted onto the wire, and perhaps two such >>bad pieces, and nothing much on the sending station says this happened. >> This is nearly the obverse of the 3Com parallel tasking early interrupt >>on reception. >> Tell your board to use threshold 200, always wait for all the bytes >>of a frame before attempting transmission. There is no loss of performance >>in real life (I've tested it with the Intel 100 board), and the rest of the >>lan will be much happier without junk on the wire. >> Joe D. >> >Clears up what's going on. We'll have to see if it corrects our >disconnects. Shame "Threshold 200" isn't too indicative of its purpose to >we lay-administrators ------------ Lest folks get the impression I'm dreaming up this stuff here is a second opinion (this week) from a person who is very knowledgable about VG-Anylan (a relative of Ethernet): X-NEWS: cc.usu.edu comp.dcom.lans.ethernet: 21841 Organization: HP Roseville Networks NNTP-Posting-Host: hprnd.rose.hp.com Rick Genesi (rgenesi@bcn.net) wrote: >I'm currently using the 3COM Lan Sentry tool to monitor a ethernet >segment. >I'm currently seeing high percentages of Short + CRC's? What are these >packets? Perhaps fragments? >Collision rates are low? I'd like to find root cause of such errors. >Please describe causes and effects senario of such like ethernet errors. A while back there was a discussion on the subject of PCI NICs which use "Early Transmit DMA". ET-DMA devices may begin to transmit before a packet is fully buffered onto the NIC. If a bus contention occurs, the NIC will abort transmission onto the LAN. This may result in runts, CRCs and other such phenomena. 
ET-DMA will tend to provide improvements in benchmark tests where PCI bus contentions are unlikely, and thus vendors have been put into a position of providing this feature to remain competitive for benchmark testing purposes. Unfortunately, this renders LAN troubleshooting a bit more difficult as you can not distinguish between LAN faults and bus-contentions. I wish I could tell you whose cards use ET-DMA, but that would be very difficult as many vendors have begun using it over the last year or so. Dan Dove WND ------------------------------ Date: Wed, 25 Sep 1996 17:34:35 -0600 From: Joe Doupnik Subject: Re: Client 32 32bit drivers -Reply >> I can't speak about the 3C509 boards since I avoid them and all >>"parallel tasking/simul-tasking" equivalents. >> Joe D. > >Joe, >Do you avoid them because of problems? I've been using Intel's EtherExpress >Pro/10 cards for the last two years, and I haven't had any trouble with them except >for in a few "odd" 486 PCs where I had to turn off the concurrent processing in the >Intel Epro card. ----------- Again? Every few months? It's simple. The boards interrupt after a few bytes of a frame have arrived. That suspends all other activity. What if the frame is bad? One won't know until the entire frame has arrived and been CRC checked by the Ethernet controller chip, and then appologies must be offered upward and a cleanup instituted for the allocated buffer. How large a buffer must be requested from above for the frame? Same answer as above, so either wait or allocate a max length buffer every single time and then wait for the rest of the frame to arrive before moving bytes. What happens while we wait for the rest of the frame, and how long a wait will that be? Nothing, and up to 1.5ms during which basically nothing else can happen. Now, if your machine has absolutely nothing else to do, not even disk i/o on the client or updating Windows pixels or paying attention to the mouse, then this is all just dandy. Interrupts are off and nothing cares. If your machine is busy, and/or it's a multitasking o/s, such as NetWare or Unix to name but two, then it's very much un-dandy. I've heard that at least one vendor has considerations of serial port speed (COM1 etc) to be accomodated because of the busy-waiting characteristic of these boards; at least this is an attempt at helping. Really smart capable boards bother the machine least. They take care of themselves and intrude only when necessary, for as short a time as necessary, and use fewest buffer bytes in the process. One such board is an NE-3200, and there are others. They work just dandy in servers, and in clients, but particularly in busy servers. Less smart boards again interrupt only when an entire frame has arrived without error (wire bit errors), again tell the upper level the size of the needed buffer, but move bytes to that buffer by asking the cpu to do the transfers. Better boards have sufficient memory on them to buffer incoming packets during delays in servicing (there are a few other things machines need to do *right now*, such as service other boards and disk channels etc). Bus master boards do byte movements with very little help from the cpu. Non-bus master boards have the cpu do all the work, and waiting too. The differences are marked when the lan is busy. We can't tell the bus masters by the bus type. Thus an EISA board may well be a non-bus master, and an ISA board may be a bus master (witness the venerable Adaptec 1542 SCSI controller). Similarly for PCI and MCA buses. 
We gotta read the fine print to know.

Please don't ask me for a list of recommended boards. I'm not in the benchmarking and/or magazine sales business, and being sued is not appealing. The best thing is to read reviews and manufacturers' literature in detail, think about the products, then run local stress tests.
     Joe D.

------------------------------

Date: Wed, 2 Oct 1996 08:09:44 +0100
From: Piotr Szafarczyk
Subject: what is wrong with 3com?

I've got some problems with an Acer P100 server (NW 4.1) with an EISA 3COM network card. The strange thing is that the problems occur with 3COM cards only. When I use an SMC card everything works fine. Here are the symptoms:

1. The first time I log into the network (after a server start) I can type my login name. After that the workstation stops for a while. In MONITOR.NLM I can see that the server receives packets but doesn't answer. There are some errors occurring (a few 'send packet misc. err.' and 'checksum errors'). The workstation displays 'This utility could not get the preferred Directory Services name or connection ID' and loses the network connection. The next time I log in, everything is OK (usually).

2. After a few hours of work the server has a lot of 'send packet' errors and checksum errors, packet receive buffers climb to as many as 2000, and the 'no ECB available' count rises. There are a lot of collisions too (like a card not detecting collisions - but see point 1). Occasionally workstations lose connections.

There are about 5 users on the network, and there is only one server. Everything looks like problems with NCP. And LANalyzer reports a lot of errors in packet burst packets.

What I've tried so far:

1. I've tried other 3COM cards, both EISA and ISA - no change.
2. I've tried other PORT and IRQ settings - no change.
3. I've disconnected the rest of the network (only the server and one workstation) - the same.
4. I've installed a completely new 4.1 server on that hardware, with all current patches and the newest drivers for the NIC and disks - the same.

As I've said before, when I change the network card to SMC, everything looks OK. I've got no idea what to do. HELP!!!

[Floyd: I'd suggest using a different NIC]

------------------------------

Date: Fri, 4 Oct 1996 08:27:27 -0600
From: Joe Doupnik
Subject: Re: what is wrong with 3com

>>We had a problem with 3com here, and 3com said change the zero wait option to disabled, even on your fastest machine.
>>
>The only settings I could change are:
>
>     network driver optimization: server
>     maximum modem speed: 9600
>     plug and play capabilities: disabled
---------
Ever wonder why a lan adapter would be concerned about the speed of a serial port? Hint: it uses too many resources (cpu cycles, and time with interrupts turned off). That should be a warning to never consider such boards in servers, and to think twice about them in clients.
     Joe D.

------------------------------

Date: Fri, 11 Oct 1996 15:48:13 -0600
From: Joe Doupnik
Subject: Re: 3200 NIC problem

>We called the European Support Center today asking if fairly high NIC polling resets were the cause of high utilization (long 100% periods, slow response, but no other unusual stat) during the login hours. It was a last-straw kind of question, as we didn't really think so. But the funny thing was the answer: the 3200 has been the cause of lots of problems, so check the latest drivers and if that is not a solution, then, ditch it!
>Having followed this list for a few years, I have not heard that before and am wondering if that is the experience of anyone else, or is this just "get off the line so I can go home Friday" blues. If so, just what is the best high-traffic bus-mastering EISA card out there? BTW, we will be setting up ManageWise tout de suite to get a closer look. TIA
---------
We presume you mean Novell's NE-3200 board. It's a complicated beast, and the Ethernet controller chip has a strong personality. The most common reason one sees adapter reset counts (a safety measure in the driver) is overheating. The Ethernet controller runs hot and must be cooled. The latest drivers are better about reset counts, but machines differ enough to make that a flexible statement.

You can reduce the count some by adding the undocumented phrase "POLL=5" to the LOAD line. The value is in hexadecimal and represents the number of 200 ns units to wait, in a watchdog-style loop, for an operation to complete. The controller can get stuck, and the watchdog rescues it very quickly. Values above 5 don't seem to do much (operations finish sooner than the loop expires).

Old-rev boards have more problems, so we recommend having the boards upgraded to rev H or later. And keep them well cooled. Also, tune your EISA bus to have devices relinquish it after only a few extra bus cycles rather than hogging it for nearly forever. Hint: disk controllers.

If you are going to chuck the boards I'm more than happy to catch.
     Joe D.
---------
Date: Fri, 11 Oct 1996 18:01:53 -0700
From: Dave Hammond
Subject: Re: 3200 NIC problem

Agree 100% with the keep 'em cool. We have a rev G card that actually burst into flames... We have that card, and the BT-742 that was next to it, hanging in my tech's shop.

Joe, I have a dozen or so NE3200's that my company would probably donate to a University of my choice... Send me shipping info.

------------------------------

Date: Fri, 18 Oct 1996 14:17:31 EST
From: "Robert L. Herron"
To: netw4-l@bgu.edu
Subject: Re: Managewise 2.1

>>Does the Lanalyzer agent put the card (in the server) into promiscuous mode? If you have only one frame type loaded, does
>
>What type of mode is this?

Your network interface card (NIC) listens to all packets transmitted on the wire. By default, the NIC discards any packets that are not broadcast packets or destined for it. In order for LANalyzer (or any protocol analyzer/sniffer) to work, the NIC must operate in promiscuous mode. In short, promiscuous mode allows the NIC to pick up any packet regardless of the packet's destination.

------------------------------

Date: Thu, 24 Oct 1996 09:48:47 -0500
From: RogerTaylor@usemail.com (Taylor, Roger W.)
To:
Subject: Re: 10BaseT to 100BaseT

>Quick question: I will soon be upgrading about half of a 10BaseT NW 4.1 LAN (100 users) to 100BaseT, with the other half remaining on 10BaseT. My question is: Do I need to bridge the 2 segments to keep the 10BaseT nodes separate from the 100BaseT nodes, or can they remain on the same segment?

The short answer is yes, you need to keep them separate; they cannot be on the same segment. This is true for both TX and VG. If you use 100Base-TX NICs on the same segment as 10BaseT, you will be reduced to 10BaseT speeds. 100BaseVG uses four pairs instead of the two pairs used by 10BaseT or 100Base-TX, making it impossible to share a segment. You must also have a 100BaseT concentrator. Also, in my experience with 100Base-TX, all NICs must be the same brand and possibly the same model. This answer is based on my very limited experience with 100BaseT.
More educated answers, I'm sure, will follow mine.
---------
Date: Fri, 25 Oct 96 10:59:32 EST
From: DBurkey@ck6.uscourts.gov
To: netw4-l@bgu.edu
Subject: Re[2]: 10BaseT to 100BaseT

A slightly longer answer would include switches that incorporate both 10 and 100BaseT(X) ports and that auto-sense the device being connected to each port and adjust accordingly.

------------------------------

Date: Tue, 5 Nov 1996 10:25:33 -0600
From: Joe Doupnik
Subject: Re: New name space stuff

>In net.cfg, how do I distinguish between two identical NICs? I have two 3c59x cards in my LX Pro in (according to INETCFG) slot 1 & slot 202. Novell talks about slot settings but in reference to EISA (my cards are PCI). Is slot now relevant to PCI?

One BINDs to #, where the number is the logical board. Logical boards are counted from 1: the first loaded adapter with its first-mentioned frame kind, then the same adapter with the next frame kind, and so on through the next adapter and its frame kinds. Thus the indented line BIND #2 would bind that particular protocol object to the second logical board, counting frame kinds within adapters. Note that ODIPKT makes a mistake and counts from 0.
     Joe D.

------------------------------

Date: Tue, 5 Nov 1996 17:02:05 -0600
From: Joe Doupnik
Subject: Re: updated on 2 NIC question

>Sorry... I should've been more clear: occasionally, I need my server to be a (very expensive) workstation. Our server room is in another building & I'm always forgetting to bring everything I need. So, during setup, it's helpful if I can get to the network to copy drivers to the new server's DOS partition. The problem is that I'm not sure how to configure the net.cfg to distinguish between my two identical PCI NICs.
----------
They have different hardware settings, right? So you load the driver twice, once for each board. The first board loaded gets logical board numbers from 1 up, and the second board gets logical board numbers above that. Under the major heading "Protocol IPX", indent a "BIND #" clause to pick one frame from one board. That's what I tried to explain last time I responded to this question. Look at the screen as you do the loading by hand.
     Joe D.

------------------------------

Date: Thu, 28 Nov 1996 20:26:24 +0000
From: Richard Letts
Subject: Re: Using a PC as a Gateway

>Has anyone tried putting two network cards in a PC and trying the 32-bit client to see if it will act as a multi-homed host?

I've spent 30 minutes trying to get it to work; ipx.nlm recognises that there are multiple networks available, but client32.nlm doesn't appear to be able to make use of this. This is with one network card in it.

PS: multi-homed == a host which has more than one physical interface but does not route packets between them.

------------------------------

Date: Thu, 12 Dec 1996 07:17:30 +0100
From: Geir Mork
Subject: Novell and FDDI

If using 3COM FDDI cards 3C770 and 3C771, be sure that your ASSY number does not end in -1 (or -A). This revision has been known to cause problems with Novell servers, and is especially tricky with Compaq BIOSes. Also be sure to use PCI cards, not EISA cards, if you have a server with both PCI and EISA slots. This applies to both NetWare 3.12 and 4.x. Typical symptoms: loss of packets, sudden connection drops and low throughput, especially with heavy backup jobs.

------------------------------

Date: Tue, 21 Jan 1997 11:35:35 -0600
From: "Mike Avery"
To: netw4-l@bgu.edu
Subject: Re: ..need to vent against a hardware manufacturer's blackmail

>BEWARE...
>
>I assembled my lan 2.5 years ago with pieces provided by the school board.
>
>This included Network Peripherals FDDI cards and a Network Peripherals EIFO switching hub (made/distributed? by Ungermann-Bass) for our backbone. They had also bought many UB concentrators.
>
>One concentrator died..... *6* weeks for repair.
>
>One FDDI card died... *ONLY* a 90-day warranty!!! My $70.00 D-Link 10BaseT cards have a lifetime warranty.
>
>My EIFO switch died... I was told it only had a 90-day warranty. I said for $8000.00 it had better have a longer one. They got back to me and revised their warranty on the switch to one year. I said we would still like to have it repaired. They responded by telling me THAT THEY WOULD NOT LOOK AT IT OR REPAIR IT UNTIL WE PAID AN UPFRONT FEE OF $800.00 FOR A ONE-YEAR SERVICE CONTRACT!!!!!!!!!
>This, in my mind, is akin to blackmail.

I've used Network Peripherals cards in the past. Their FDDI cards used to have a lifetime warranty, and their service was excellent. They overnight advance-shipped me replacements.

However, since then their warranty period has dropped. And there is reason for it. Every FDDI card of theirs I have used has failed on me. Every one. And the word I've gotten is that their performance is not that good. Novell Labs has found that the only FDDI cards that can consistently deliver more than 60% of the nominally available bandwidth are from Madge.

But... here's a better option. Limp along with 10BaseT, and as funds become available use 100BaseT - I don't think that there is enough performance difference to justify the cost difference.

------------------------------

Date: Wed, 26 Feb 1997 09:55:02 -0600
From: "Mike Avery"
To: netw4-l@ecnet.net
Subject: Re: promiscuity on the net

>Recently started looking at ManageWise, and so far pretty happy. Got one datacomm Q though. Some of the components, the LANalyzer for sure and I think NetExplorer, need promiscuous mode on the NIC. I got no problems with the fact, and understand the gist, but not the technical guts enough.

This was recently discussed in one of the usenet NetWare newsgroups, and I thought it would be helpful to forward the definitive comment to you and the mailing list:

>>>>>>>>> Found on a comp.dcom newsgroup...

>I'm using a 3COM EtherLink III 3C579-TP network adapter under Windows 95 and I'm trying to figure out how to set the card to be "promiscuous". I'm evaluating some network monitoring software (NetXRay from Cinco Networks http://www.cinco.com), but can't figure out how to get the LAN card to be able to see machines OTHER than my own (promiscuous mode on).

Well, that is a tough question, as those EtherLink adapters are very difficult to convince to be promiscuous.

You'll need to start with about a dozen roses, and make sure they are real!!! Fake roses are much less effective, even though they last longer.

You'll want to take your adapter out to dinner. Italian food works great, but some adapters go for Oriental, or even, on occasion, Mexican. It is important that you do not spill food on yourself during this process. Adapters notice details, and once a detail has been stored, it is very difficult to reset those particular memory registers. Also, make sure you tip the waiter. Adapters hate cheapskates.

Make sure you talk to your adapter. Not all adapters talk TCP/IP, so be prepared to talk about other protocols. Remember that if you talk about SNMP too long, your adapter will start to get bored, and may get up often to reconfigure itself in the adapters' room.
This is a sure sign that your attempt at making your adapter promiscuous is failing.

If your adapter brings up the subject of "Am I too fat?", you would be wise to change the subject. There is no correct answer, and any answer can cause your mission to fail. Also, your adapter always looks very lovely, no matter what it is wearing or how many curlers are in its hair. Do not mention facial features like moles, warts, or mustaches.

Now take your adapter to see a 'chick flick'. Something like "In Love and War", or the like. Make sure you put your arms around your adapter in the theater, but don't eat your adapter's popcorn unless it's understood from the start that you are sharing. Adapters will not raise a fuss in public, but they will hold it against you for life. If you do share, don't eat more than your half.

After repeating this process a random number of times (each adapter has a different value), your adapter will be promiscuous towards your management console. Through trial and error, and with enough fine-tuning, you can begin to reprogram your adapter to be promiscuous towards the entire enterprise network.

<<<<<<<<<
---------
Date: Wed, 26 Feb 1997 10:00:39 -0600
From: "Mike Avery"
To: netw4-l@ecnet.net
Subject: Re: promiscuity on the net

>Recently started looking at ManageWise, and so far pretty happy. Got one datacomm Q though. Some of the components, the LANalyzer for sure and I think NetExplorer, need promiscuous mode on the NIC. I got no problems with the fact, and understand the gist, but not the technical guts enough.
>
>Does this promiscuous mode affect the servers' abilities for normal print/share stuff for my users? Does it open any security holes? Would I be better off putting this on a separate box? The specific NIC in question right now is a SysKonnect FDDI using skfpnw.lan.

Promiscuous mode is needed to allow a NIC to see packets that are not addressed to it. Obviously, a packet-level monitor will need to do that.

As to security holes: in theory, yes. In practice, maybe. Anyone who has access to the client piece of the LanAlyzer module will definitely be able to monitor all the traffic that hits the servers that are able to monitor the net. This could compromise security. How badly depends on whether or not you allow unencrypted passwords, and on the nature of the data being passed around the net.

In theory at least, a patient person could get any information on the server - if someone accesses it and pulls it over the wire. In practice, it's difficult to go fishing for application-specific information, such as accounting data like your boss's salary. It's easier to get system-specific information, such as your boss's password when he logs in - again, if the LAN allows unencrypted passwords.

Like most powerful tools, the edge can cut both ways. A good monitor/analyzer can help you figure out why the LAN is in the weeds, making you a hero. And it can help people get information they shouldn't, making you a bum.

------------------------------

Date: Sun, 30 Mar 97 00:02:30 -0800
From: Randy Grein
To:
Subject: Re: Server with 2 network cards

>>Is there any advantage to having a server with 2 network cards if the cards are connected to the same hubs (two cascaded hubs)?
>
>With Novell you cannot have two nic cards on the same segment. Uncascade the hubs and life will be good. BTW only 4 nic cards per server...

I'm afraid your information is out of date.
You CAN have two cards on the same segment and use load balancing, a feature that was developed to enable multiple connections to a switch. It's not needed much now, as full-duplex fast Ethernet is available, and for standard, non-switched Ethernet a modern adapter can send/receive essentially at wire speed. Also, NetWare 4.10 can handle up to 256 adapters - 3.x can handle 16, if I'm not mistaken. The 4-adapter limit was for NetWare 2.x.
---------
Date: Sun, 30 Mar 1997 03:08:10 -0600
From: Kevin McIntosh
To: "'netw4-l@ecnet.net'"
Subject: RE: Server with 2 network cards

Randy is correct, except for the number of adapters in a server. I believe the limits of 16 and 256 are the numbers of print queues allowed. I think we'd have trouble getting enough IRQs for 16 or 256 adapters.

In NW 4.x, I've bound two adapters to the same network address and it's pretty slick, especially if the NICs are 10Mbit. It can really make a difference.
---------
Date: Sun, 30 Mar 97 01:57:18 -0800
From: Randy Grein
To:
Subject: RE: Server with 2 network cards

>Randy is correct, except for the number of adapters in a server. I believe the limits of 16 and 256 are the numbers of print queues allowed. I think we'd have trouble getting enough IRQs for 16 or 256 adapters.

Check the manuals. You're correct about the number of print queues supported, but the adapter count is the number supported by the OS, and it is 16 and 256. You could also look at SERVMAN; the number is actually the number of possible LANs - each binding of a protocol counts as a single LAN, so that could be 256 cards, or 64 cards with 4 bindings each. They were trying to get a number large enough to make certain they wouldn't exceed it for the expected life of the OS.

As an example, I was talking with an engineer working on the I2O initiative while at Brainshare, and he mentioned the possibility of logical connections to devices not actually within the server case. We were discussing network-attached drives, but his work was directed to network adapters that would be hot-swappable, and might not even be physically connected. It's weird thinking about it, but with a sufficient switching fabric there are some pretty wild things that are possible. I asked about the possibility of a switched PCI fabric assisting in his work, and all he would say is: "Yes. We're looking at it, and I can't tell you more." He was clearly pleased that I was able to discuss, at least in broad terms, the technologies he was working with, and could guess the directions they needed to go. It was a lot of fun, and educational, too. I had this vision of a switched fabric of PCI Ethernet and Fibre Channel cards attached to the servers...

------------------------------

Date: Mon, 7 Apr 1997 09:05:32 -0700
From: John Kerti
Subject: Re: Remote Boot Win 95 with PCI NIC?

On Fri, 4 Apr 1997, Joe Doupnik wrote:

>Let me stick my neck out more than I should. I'm examining some PCI bus Ethernet boards for resale by the University to students and staff. One of them is the 3Com 3C905 board. Don't bother with it. It's a 3C509 in PCI clothing and with all the faults of the 509 parallel tasking stuff (same Ethernet controller). I have a no-name-clone board which uses about

This is good to know. I would have hoped that the problems with the 509 would have been fixed by now. Oh well...

>The fastest of the current collection is the Intel EtherExpress Pro 100B and it is tame-able to use the least cpu cycles etc. The Intel board has nifty bootrom support in their Eprom.

Interesting.
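[Editor's note: a rough console sketch of the load-balancing setup Kevin and Randy describe in the "Server with 2 network cards" exchange above (two adapters bound to the same network number). Everything here is a placeholder - the driver, board names, slot numbers, frame type and net number - and depending on your NetWare version you may also need to enable a load-balancing option before the second BIND is accepted, so check Novell's documentation for your release:

     LOAD E100B SLOT=1 FRAME=ETHERNET_802.2 NAME=NIC_A
     LOAD E100B SLOT=2 FRAME=ETHERNET_802.2 NAME=NIC_B
     BIND IPX TO NIC_A NET=BEEF01
     BIND IPX TO NIC_B NET=BEEF01
]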
I've got a Pro 100B in a test server, seeing how it behaves (this is to potentially replace the 3C595's in our servers that were causing system hangs). The Intel card appears to work OK so far. I'll wait till after exams before trying it in a production server :)

Have you been able to get the Pro 100B to remote-boot Win95 from NetWare?

>You should recall that going to 100Mbps Ethernet will gain a factor of maybe three throughput increase at best given current technology; expect a factor of two or less. Don't give in to shiny new toys unless the network traffic numbers make the case for you.

It's not a case of going for the 100Mbit hype; it's just that since we're replacing a bunch of labs with new hardware, and since this is done only once in a many-year time frame, and since the cost difference between 10-only and 10/100 cards is not very much, I'd like to be as flexible as possible with what I get. I would really like to get away from ISA and go to PCI NICs, but this Win95 thing is a major concern.
---------
Date: Mon, 7 Apr 1997 10:19:20 -0600
From: Joe Doupnik
Subject: Re: Remote Boot Win 95 with PCI NIC?

>>The fastest of the current collection is the Intel EtherExpress Pro 100B and it is tame-able to use the least cpu cycles etc. The Intel board has nifty bootrom support in their Eprom.
>
>Interesting. I've got a Pro 100B in a test server seeing how it behaves (this is to potentially replace the 3c595's in our servers that were causing system hangs). The Intel card appears to work OK so far. I'll wait till after exams before trying it in a production server :)
>
>Have you been able to get the Pro 100B to remote-boot Win95 from NetWare?

I have had sys$no_such_time to get into that. Figure a few days of solid effort to explore that neighborhood. Just testing boards for performance factors is a full day, and that's all I had.

Don't forget to tell the Intel board in the server TXTHRESHOLD=200 to stop it from sending packet fragments (transmitter underruns). Ditto for the client in net.cfg, but without the equals sign, and in the client also say EARLYRCV 0 to stop "interrupt on early receive, and wait and wait for the rest of the packet to arrive over the wire."
     Joe D.

------------------------------

Date: Sun, 27 Apr 1997 10:26:51 -0600
From: Joe Doupnik
Subject: Re: Win95b and Client32

>I don't know if this has been covered, but has anyone else had problems with Client32 for Win95?
>
>I have a Vectra VL series 5 with an Intel 10/100B TX card; when I load Client32, it freaks - loss of network connectivity is the most minor of it. I had to reload Windows entirely. (Makes me happy that my main machine is a Macintosh.)
>
>Are there patches for Client32?
------------
Look for memory conflicts above the top of physical memory. The Intel EtherExpress 100B uses such a memory buffer, and alas so do some video adapters, and they can conflict, losing video and/or communications. There is an undocumented command to the Intel driver, IOMAPMODE=1, which changes the board from shared memory to port i/o. But so far I see no way of stating that change with the Win95 version of Client32, and I spent part of Saturday trying. The alternative is to tinker with the video board settings/driver while within Win95.

What is needed is a way of stating command-line options for the Intel driver when it loads. That is not present in the current Win95 Client32 material. Alternatively, the option ought to be expressible in the Intel board's CMOS setup, and that's on my list to probe.
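[Editor's note: gathering the Intel-board advice from Joe's message above into one hedged sketch. The slot number, board name, frame type and net number below are placeholders, and exact keyword spellings can vary between driver revisions, so check the driver's README. On the server, in AUTOEXEC.NCF, the threshold is a LOAD-line parameter:

     LOAD E100B SLOT=3 FRAME=ETHERNET_802.2 NAME=E100B_1 TXTHRESHOLD=200
     BIND IPX TO E100B_1 NET=CAFE01

In a DOS/VLM client's NET.CFG, the same ideas appear under the Link Driver heading, without the equals signs:

     Link Driver E100BODI
          TXTHRESHOLD 200
          EARLYRCV 0
]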
The memory conflict problem does not appear at DOS level; it happens when the Windows drivers get cute about dense graphics modes. The fallback is to use the 16-bit real-mode driver E100BODI.COM.

Additional info: the identical problem occurs with Client32 for Win31. But the cure is very easy, because we add that phrase to the LOAD line of the driver, like this:

SET NWLANGUAGE=ENGLISH
C:\NOVELL\CLIENT32\NIOS.EXE
LOAD C:\NOVELL\CLIENT32\LSLC32.NLM
LOAD C:\NOVELL\CLIENT32\CMSM.NLM
LOAD C:\NOVELL\CLIENT32\ETHERTSM.NLM
LOAD C:\NOVELL\CLIENT32\E100B.LAN slot=10001 SPEED=10 IOMAPMODE=1 TXTHRESHOLD=200 FRAME=Ethernet_II
(the line above is a single line; it may be broken into two by mailers)
LOAD C:\NOVELL\CLIENT32\E100B.LAN slot=10001 FRAME=Ethernet_802.2
LOAD C:\NOVELL\CLIENT32\E100B.LAN slot=10001 FRAME=Ethernet_802.3
LOAD C:\NOVELL\CLIENT32\E100B.LAN slot=10001 FRAME=Ethernet_SNAP
LOAD C:\NOVELL\CLIENT32\IPX.NLM
LOAD C:\NOVELL\CLIENT32\CLIENT32.NLM /c=c:\novell\client32\net.cfg
c:\qemm\loadhi c:\novell\client32\lsl
c:\qemm\loadhi c:\novell\client32\pdoseth
     Joe D.

------------------------------

Date: Sat, 10 May 1997 08:53:47 -0600
From: Joe Doupnik
Subject: Re: Weird Server Situation...

>Had a similar problem a couple of months back with an HP NetServer LF and an IBM PCI Ethernet card. After varying, random amounts of time, usually less than 2 hours, the network would just disappear. Replaced the card with another one of the same model. No help. Replaced the card with an NE2000 clone, and haven't had a problem since. Go figure.
---------
No big surprise here. I've had a few boards which become sleepy on a busy network, and after considerable time they awaken and are normal. The heavier the traffic, the more likely this is. The problems lie between the design of the board's chips and the driver, but most often, I think, in mistakes on silicon leading to latch-ups. The 3Com 3C503 8-bit board of years ago was a prime candidate, yet even one of my NE-2000 clones of the present does the same.
     Joe D.

------------------------------

Date: Fri, 20 Jun 1997 08:31:04 -0600
From: Joe Doupnik
Subject: Re: intel pro100B - good choice? was: Load balancing on local LAN

>I recently read a test of 100Mbit NICs in the German magazine Network & Communications which basically concluded with "Fast or Intel?" They tested about 15 NICs under NW4.1 and DOS in two Pentium 75s linked with a crossover cable, using perform3, with the same type of NIC in server and WS. As results they listed the max. speed, the arithmetic average and the NetWare-indicated server load. Blocksize was 128 to 8192 bytes in the first test run and 8 to 64 in the second.
>
>The values for the Intel Pro/100B, and as an example the values for the 3C905-Tx (cheaper!) in comparison, were:
>
>                    Max. throughput    Avg.    Server load
>    Intel Pro/100B       5994          4187        55%
>    3C905-Tx             9124          5763        38%
>
>The Intel adapters were by far the slowest in the test. Who is right? Did they overlook something?
-----------
It's all in the fine print, I'm sure. Let's see.

First, Intel boards have an early-transmission feature turned on as a default, and it's mentioned in the manual. TXTHRESHOLD= is the LOAD-line parameter. The default value is 8. That means the board can begin transmission after reception of 8 * 8 bytes from above. Should the rest of the bytes fail to arrive in time, then the packet is damaged on the wire and software must recover. DMA underrun is the Monitor stat. TXTHRESHOLD should be set to 200 (meaning 1600 bytes or a full packet, whichever occurs first, so the complete packet is the choice). More.
Those % utilization figures are considerably higher than shown directly in Monitor here. I get 20% cpu utilization on a PPro 200, PCI bus, at max wire load with perform3. Max throughput of 6MB/sec for a single station is a tad lower than what I observed with a Pentium 90 client, but close. That figure will increase as the client cpu speed goes up. In the server, the board can deliver 10MB/sec at 20% utilization (Monitor) with perform3 on that PPro 200 server.

The 3C905 board does not have adjustables. Its class does interrupt on early reception, and despite the setup program I could not turn off that "feature." It takes its sweet time about doing a packet reception operation, roughly a microsecond per packet byte at 10MHz Ethernet. Because that's also about the rate at which bytes arrive on 10MHz Ethernet, we are seeing the early-interrupt stuff again. The Intel board can also do early reception interruption, but I turn it off in the client, and the server driver turns it off automatically. I see the Intel board use about 20 microsec per packet, nearly independent of packet length.

I will say this for hopefully the last time, so bear with me. Packet Burst can have a major impact on benchmark numbers and on real-life work. It can go unstable under load, yielding short intervals of no transmission (sort of a "stunned" condition) followed immediately by truly furious transmission (trying to recover, we presume), overall leading to highly irregular average rates. The elements in this closed-loop control system include the machines at each end; hence their speeds etc. are important in the stability analysis. See this in Perform3 as drops in transmission rate when an increase would be expected from trends. See this with LZFW or another wire monitor.

These early-reception and early-transmission knobs are benchmark toys, designed to achieve large numbers under certain conditions. Change the conditions to normal practice and things are not quite so nice. That's my view. Experience here says turn them off and get solid, dependable performance.

Those are my findings and observations. I urge folks to think and experiment themselves, using whatever tools they wish. By no means am I recommending folks buy or use a particular board; please make no mistake about that. I will, however, say what I use, and I will recommend folks not make strategic mistakes (weak client boards in a server, etc).

As to who is "right", I suggest that's the wrong question to ask in a technical discussion. Ask for the details and put matters to local test as well.
     Joe D.

------------------------------
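[Editor's note: to close out the two-identical-PCI-NICs question from earlier in this thread, here is a minimal NET.CFG sketch of the logical-board counting Joe describes. Everything in it is illustrative - the driver section name (3C59X, from the original question), the Slot values and the frame type are placeholders, and how a particular ODI driver tells the two boards apart is driver-specific, so check its README. With one frame type per board, the first board loaded is logical board #1 and the second is logical board #2:

     Link Driver 3C59X
          Slot 1
          Frame Ethernet_802.2

     Link Driver 3C59X
          Slot 202
          Frame Ethernet_802.2

     Protocol IPX
          Bind #2

The "Bind #2" clause attaches IPX to the second logical board, as Joe explains; if the first board carried two frame types, the second physical board would instead begin at logical board #3.]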