Version 1.0, 4/10/02
Page 10 of 17
the number of times the PHY was not fed a cell in time to keep the wire busy, and thus had to
manufacture an idle cell. The number reported here is from the second counters query when two
“_VolgaGetChanCounters” commands are issued on the same line at the VxWorks prompt (this is
because “_VolgaGetChanCounters” prints the delta between the previous invocation and the
present invocation). IXF6012 Overflows are measured the same way, and they are generally the
result of the StrongARM* core overhead involved in running the “_VolgaGetChanCounters”
command itself. “Ethernet Transmit KFrame/s” captures the lowest and highest results as
received and reported by the SmartBits 600 over the 8 Ethernet ports.
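The delta-between-invocations behavior described above can be sketched as follows. This is an illustrative model only, not the actual VxWorks shell command; the counter names and the register-read stand-in are assumptions.

```python
# Minimal sketch of delta-style counter reporting, mimicking the behavior
# described for "_VolgaGetChanCounters": each invocation reports the change
# since the previous invocation, so the second of two back-to-back queries
# isolates only what accumulated between them.

_prev = {}  # snapshot taken by the previous query

def read_counter_regs():
    # Stand-in for reading the PHY/IXF6012 hardware counters.
    return {"tx_idle": 4000, "overflows": 12}

def get_chan_counters():
    """Return the per-counter delta since the previous invocation."""
    now = read_counter_regs()
    delta = {name: value - _prev.get(name, 0) for name, value in now.items()}
    _prev.update(now)
    return delta

first = get_chan_counters()   # counts accumulated since reset
second = get_chan_counters()  # counts accumulated between the two queries
print(first, second)
```

Because the hardware counters barely move between two back-to-back queries, the second report reflects only the overhead of running the query itself, which is why the text attributes the reported overflows to the StrongARM core's own activity.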
The test measurements are repeated with a variable number of full-bandwidth Ethernet ports
driving the design. The test with “0” Ethernet input ports shows the maximum possible ATM-to-
Ethernet performance, that is, when there is no Ethernet-to-ATM traffic to load down the system.
This is effectively a half-duplex ATM-to-Ethernet forwarding measurement. More Ethernet input
ports are added to show how the system handles the increase in load, even though for 40 and
1500-byte packet measurements, 6-8 Ethernet ports over-subscribe available ATM transmit
bandwidth.
Hardware 29-byte packet performance
Ethernet | ATM      | IXF6012  | ATM     | IXF6012   | Ethernet   | Ethernet
Input    | Transmit | Transmit | Receive | Overflows | Transmit   | Transmit
Ports    | Rate [%] | Idle     | Ports   |           | [KFrame/s] | [MB/s]
---------|----------|----------|---------|-----------|------------|----------
   8     |    84    |   N/A    |    1    |   4000    | 132 – 138  | 8 – 9
   7     |    73    |   N/A    |    1    |   1000    | 127 – 147  | 8.5 – 9.5
   6     |    63    |   N/A    |    1    |      0    | 133 – 148  | 8.5 – 9.5
   0     |     0    |   N/A    |    1    |      0    |   148.8    | 9.5
Figure 5 – Single-cell/PDU Performance using 133 MHz DRAM
The bottom entry in the table, with 0 Ethernet Input Ports, shows half-duplex performance, that
is, what the design does when it is only forwarding this workload from ATM to Ethernet. The
result is wire-rate ATM Receive and Ethernet Transmit performance, and the StrongARM core can
run “_VolgaGetChanCounters” without disturbing the data plane at all. As discussed above, this
workload is attempting to transmit 949Mbps out of the 800Mbps of aggregate Ethernet port
bandwidth. Indeed, 8 Ethernet ports × 148,808 frames/sec = 1.19M packets/second, while the
ATM Receive packet rate is 1.4M packets/sec. Looking at the microengine counters, the ratio
between the packets dropped due to full Ethernet Transmit queues and the packets dropped due
to a full IP Router input MSGQ shows that about 37% of the dropped packets are due to Ethernet
Transmit queues being full, and the remaining 63% are due to the IP Router Microengine not
being able to route 1.4M packets/second. This is consistent with the simulation result for the
same workload, which showed the IP router couldn’t keep up with 1.4M routes/second.
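The oversubscription arithmetic in the paragraph above can be spelled out directly. The frame rate and packet rates are the measured values quoted in the text; only the variable names are mine.

```python
# Oversubscription arithmetic for the single-cell/PDU workload: the 8
# Ethernet ports cannot drain packets as fast as the ATM side receives them.
ETH_PORTS = 8
WIRE_RATE_FPS = 148_808          # per-port wire-rate frame rate from the text

eth_tx_pps = ETH_PORTS * WIRE_RATE_FPS   # max aggregate Ethernet transmit rate
atm_rx_pps = 1_400_000                   # measured ATM receive packet rate

assert round(eth_tx_pps / 1e6, 2) == 1.19   # the 1.19M packets/second quoted
shortfall = atm_rx_pps - eth_tx_pps          # packets/s that must be dropped
print(f"{eth_tx_pps:,} pps drained vs {atm_rx_pps:,} pps offered; "
      f"{shortfall:,} pps dropped")
```

The shortfall of roughly 0.21M packets/second is what shows up as drops split between full Ethernet Transmit queues and the full IP Router input MSGQ.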
Transmitting from the SmartBits on 6 full-bandwidth Ethernet ports impacts Ethernet Transmit
performance, but only on a couple of ports; this is not enough Ethernet input to saturate
ATM Transmit.
Increasing the Ethernet workload to 7 ports, and then 8 ports, increases the ATM Transmit
performance, but with the ratio of 949Mbps Ethernet to 622Mbps ATM, this is still not enough
Ethernet input to saturate the ATM Transmitter. Also, Ethernet Transmit performance starts to