
TMS320C6000 TCP/IP

Network Developer’s Kit (NDK)


Technical Data Quick Reference Guide

Literature Number: SPRU568


October 2001

Printed on Recycled Paper


IMPORTANT NOTICE

Texas Instruments Incorporated and its subsidiaries (TI) reserve the right to make corrections,
modifications, enhancements, improvements, and other changes to its products and services at
any time and to discontinue any product or service without notice. Customers should obtain the
latest relevant information before placing orders and should verify that such information is current
and complete. All products are sold subject to TI’s terms and conditions of sale supplied at the
time of order acknowledgment.

TI warrants performance of its hardware products to the specifications applicable at the time of
sale in accordance with TI’s standard warranty. Testing and other quality control techniques are
used to the extent TI deems necessary to support this warranty. Except where mandated by
government requirements, testing of all parameters of each product is not necessarily performed.

TI assumes no liability for applications assistance or customer product design. Customers are
responsible for their products and applications using TI components. To minimize the risks
associated with customer products and applications, customers should provide adequate design
and operating safeguards.

TI does not warrant or represent that any license, either express or implied, is granted under any
TI patent right, copyright, mask work right, or other TI intellectual property right relating to any
combination, machine, or process in which TI products or services are used. Information
published by TI regarding third party products or services does not constitute a license from TI
to use such products or services or a warranty or endorsement thereof. Use of such information
may require a license from a third party under the patents or other intellectual property of that third
party, or a license from TI under the patents or other intellectual property of TI.

Reproduction of information in TI data books or data sheets is permissible only if reproduction
is without alteration and is accompanied by all associated warranties, conditions, limitations, and
notices. Reproduction of this information with alteration is an unfair and deceptive business
practice. TI is not responsible or liable for such altered documentation.

Resale of TI products or services with statements different from or beyond the parameters stated
by TI for that product or service voids all express and any implied warranties for the associated
TI product or service and is an unfair and deceptive business practice. TI is not responsible or
liable for any such statements.

Mailing Address:

Texas Instruments
Post Office Box 655303
Dallas, Texas 75265

Copyright © 2001, Texas Instruments Incorporated


Preface

Read This First

About This Manual


Welcome to the TMS320C6000 Network Developer’s Kit (NDK) Technical
Data Quick Reference Guide. This guide is the place to start when looking for
statistical information on the networking software included in the NDK. It
includes overviews of NDK features, network performance, MIPS requirements,
code size, and data memory requirements. There is also information on the
hardware and software used on the Ethernet MAC daughtercard included in
the kit.

How to Use This Manual


This document contains the following chapters:

- Chapter 1 – TCP/IP Stack Feature List, describes the design, programming,
and service features of the TCP/IP stack.

- Chapter 2 – TCP/IP Stack Performance, describes the TCP/IP stack
performance.

- Chapter 3 – TCP/IP Stack Code and Data Size, lists the code size of various
TCP/IP stack components compiled for the C6211 with compiler optimization
level 2.

- Chapter 4 – ETHC6000 Ethernet Hardware and Driver Information, contains
information for the ETHC6000 Ethernet hardware and driver.

Related Documentation From Texas Instruments


The following references are provided for further information:

TMS320C6000 TCP/IP Network Developer’s Kit (NDK) User’s Guide
(literature number SPRU523)

TMS320C6000 TCP/IP Network Developer’s Kit (NDK) Programmer’s
Reference Guide (literature number SPRU524)

TMS320C6000 TCP/IP Network Developer’s Kit (NDK) Porting Guide
(literature number SPRU030)



Trademarks

TMS320C6000, DSP/BIOS, and Code Composer Studio are trademarks of
Texas Instruments.

Contents

1 TCP/IP Stack Feature List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1


Describes the design, programming, and service features of the TCP/IP stack.
1.1 Design Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
1.2 Programming Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
1.3 Service Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4

2 TCP/IP Stack Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1


Describes the TCP/IP stack performance.
2.1 Peak Performance Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
2.2 MIPS Consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
2.3 TCP/IP “No-Copy” Socket Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5

3 TCP/IP Stack Code and Data Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1


Lists the code size of various TCP/IP stack components compiled for the C6211 with compiler
optimization level 2.
3.1 Base Code Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
3.2 Optional Component Code Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
3.3 Data Memory Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4

4 ETHC6000 Ethernet Hardware and Driver Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1


Contains information for the ETHC6000 ethernet hardware and driver.
4.1 Ethernet Hardware Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
4.2 Ethernet Driver Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4

Tables
2–1 Standard Sockets API vs. “No-Copy” API Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
2–2 Performance and CPU Loading with Different Driver Modes Tests . . . . . . . . . . . . . . . . . . . 2-4
2–3 Normal Network Load Conditions Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
3–1 Base Code Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
3–2 Optional Component Code Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
3–3 Data Memory Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4

Chapter 1

TCP/IP Stack Feature List

This chapter describes the design, programming, and service features of the
TCP/IP stack.

Topic Page

1.1 Design Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2


1.2 Programming Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
1.3 Service Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4


1.1 Design Features

1.1.1 Small Footprint


A system using the stack with around a dozen tasks (telnet, DHCP, DNS, etc.)
requires around 200K to 250K of program memory and 95K of data memory.
Stack size depends on the type of build and on which features are bound into
the libraries. Options like Network Address Translation and Point-to-Point
Protocol can be purged from the build.

1.1.2 Support for Both IP Clients and Routers


The base functional requirements of the TCP/IP stack are identical for both
clients and routers. By adjusting the stack configuration, and selecting which
network services are run on the platform, this software works just as well as
a TCP/IP client, protocol server, or router.


1.2 Programming Features

1.2.1 Supports Native Operating System


The stack allows developers to program applications using DSP/BIOS. Adding
sockets support to an application does not necessitate porting to a new
operating system.

1.2.2 File Descriptor-Based Stream IO Support


The stack adds a new stream IO file descriptor system to DSP/BIOS. This
allows standard Unix-like applications to be developed for, or ported to, the
stack even though the native OS looks nothing like Unix. In addition to sockets,
connection-oriented pipes are supplied for socket-compatible inter-task
communication. Some similar packages use TCP for inter-task communication,
which is a significant drain on resources.

1.2.3 Standard Sockets Compatibility


The system provides the application developer with nearly all the standard
sockets functions, including RAW sockets (only the sendmsg() and recvmsg()
socket functions are not supported). Any application that uses the standard
sockets API can execute on this stack.

TCP, IP, and Socket properties are fully configurable with the setsockopt()
function. There are also additional enhancements to socket layer options to
support protocols that have historically required kernel modifications, such as
DHCP, private broadcast-based servers, and traceroute.
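As an illustration of this compatibility, the short sketch below tunes a few common TCP and socket properties through setsockopt(). It is a minimal sketch only: the header name, the SOCKET handle type, and the option values shown are assumptions about the build environment, not a prescription from this document.

/* Minimal sketch: adjusting socket properties with the standard
 * setsockopt() call. The header name and SOCKET handle type are
 * assumptions about the NDK build environment. */
#include <netmain.h>    /* stack master include (name assumed) */

int configure_tcp_socket(SOCKET s)
{
    int rxbuf   = 8192;     /* illustrative receive buffer size */
    int nodelay = 1;        /* send small writes immediately */
    struct timeval to;

    to.tv_sec  = 5;         /* receive timeout guards against deadlock */
    to.tv_usec = 0;

    if (setsockopt(s, SOL_SOCKET,  SO_RCVBUF,   &rxbuf,   sizeof(rxbuf))   < 0)
        return -1;
    if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay)) < 0)
        return -1;
    if (setsockopt(s, SOL_SOCKET,  SO_RCVTIMEO, &to,      sizeof(to))      < 0)
        return -1;

    return 0;
}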


1.3 Service Features


In addition to providing a programming environment for the creation of new
networking applications, the stack package includes support for a variety of
services through its NETTOOLS library.

1.3.1 Addressable Configuration


The configuration of the stack and its related services can be programmed
through a configuration interface in the form of a database addressable by
identification and instance tags. Tags make the configuration easy to search
and easy to walk. The configuration database can also be converted from its
native form to a linear block of data for easy storage in non-volatile memory.

1.3.2 Home Networks


When acting as a router, the software supports the creation of virtual home
networks. This allows a user to operate a home network that is independent
of that provided by the ISP. Isolation provides protection, and allows multiple
devices to share the same public Internet address.

1.3.3 Network-Related Services


More services will be rolled out periodically, but the following are currently
included:

- Full Routing Support (CIDR)

- Home/Virtual network support with NAT (network address translation) and
user-definable proxy filters (protocol-specific NAT)

- Telnet server

- HTTP server, including a memory-based file system

- DHCP server and client

- DNS server for servicing the local network, with pass-through

- DNS client with multi-server capability

- TFTP client for file retrieval

- PPP client and server with PAP/CHAP support

- PPP over serial with HDLC-like framing

- PPPoE client and server

Chapter 2

TCP/IP Stack Performance

This chapter describes the TCP/IP stack performance.

Topic Page

2.1 Peak Performance Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2


2.2 MIPS Consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
2.3 TCP/IP “No-Copy” Socket Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5


2.1 Peak Performance Numbers


In simulation, a full duplex TCP/IP data stream (simultaneous send and
receive) requires about 1.66 processor cycles per bit of data per second. Thus,
running out of internal memory on a 150 MHz CPU, the TCP streaming rates
are 90.4 Mb/s on full duplex, or 181 Mb/s on half duplex. On a 300 MHz CPU
(like the TMS320C6203), the TCP streaming rates would be 180 Mb/s on full
duplex, and 360 Mb/s on half.
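To make the scaling explicit, the figures above follow from a single division (assuming the 1.66 cycles-per-bit/s figure and ideal conditions):

\[
R_{\text{full}} = \frac{150 \times 10^{6}\ \text{cycles/s}}{1.66\ \text{cycles per (bit/s)}} \approx 90.4\ \text{Mb/s in each direction},
\qquad
R_{\text{half}} \approx 2\,R_{\text{full}} \approx 181\ \text{Mb/s}
\]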

Running out of external memory on a 150 MHz TMS320C6211 with 4-way
cache, the stack operates at about 50 Mb/s on a full duplex TCP stream (Tx
and Rx) with a 4096-byte socket buffer (maximum TCP window), using internal
loopback. This translates to a half duplex TCP stream rate of about 100 Mb/s.
Memory copies can affect performance, but the overhead can be reduced on
socket receive operations by using the stack’s “no-copy” socket options.

Running on the TMS320C6211 DSK and the LogicIO/Macronix MAC with the
standard sockets API and classical sockets programming, the stack gets
about 27 Mb/s on a TCP stream for receive operations and 29 Mb/s on a TCP
stream for transmit.

When using the “no-copy” API, the stack gets 34 Mb/s on a TCP stream for
receive with the same hardware. UDP performance should be a bit faster since
it’s simpler and doesn’t copy packet data for either TX or RX operations.

Note:
The performance of the TCP/IP stack when using the LogicIO ETHC6000 is
somewhat bound by the long bus cycle access time of the Ethernet
hardware.


2.2 MIPS Consumption


In actual operation, the stack will rarely be executing at peak data rates (100%
CPU loading). However, some tests have been run to determine the CPU load
under both heavy and normal network traffic loads. Performance numbers
also depend on the Ethernet driver used. The tests shown here were obtained
from the three sample drivers.

The tests described in this section were run using the NDK on a 150 MHz
TMS320C6711 DSP.

2.2.1 Part 1: Standard Sockets API vs. “No-Copy” API


The following tests were run on the full EDMA version of the Ethernet driver,
but the relative numbers apply across all versions. Here, the same data
benchmark was run on the stack, but coded in four different fashions. The
variations in coding include: using SOCK_STREAM or SOCK_STREAMNC,
using recv() or recvnc(), and either calling select() or not.

The performance numbers represent the performance of a single-channel test
application executing on a Windows PC with a 100 Mb/s network adapter.

Table 2–1. Standard Sockets API vs. “No-Copy” API Tests

TCP Socket Type API Calls Used Sustained Data Rate CPU Loading
SOCK_STREAM select(), recv() 26.6 Mb/s 70.0%

SOCK_STREAMNC select(), recv() 28.6 Mb/s 69.3%

SOCK_STREAMNC select(), recvnc() 32.3 Mb/s 58.4%

SOCK_STREAMNC recvnc() 33.5 Mb/s 54.7%


2.2.2 Part 2: Performance and CPU Loading with Different Driver Modes

The following tests used the same benchmark with recvnc() described above,
along with a similar data benchmark to test data transmission. Here, the
performance and CPU loading of all three sample drivers can be compared
and contrasted. On the EDMA version of the driver, the CPU is freed to perform
other tasks during EDMA operations.

Table 2–2. Performance and CPU Loading with Different Driver Modes Tests

Driver Mode Used TCP Operation Sustained Data Rate CPU Loading
Polling Mode Receive 26.0 Mb/s 78.3%

Polling Mode Transmit 24.4 Mb/s 73.6%

Interrupt Mode Receive 26.3 Mb/s 78.3%

Interrupt Mode Transmit 28.6 Mb/s 84.4%

EDMA Mode Receive 33.5 Mb/s 54.7%

EDMA Mode Transmit 28.8 Mb/s 76.2%

2.2.3 Part 3: Normal Network Load Conditions


The following tests were run on the EDMA driver described above, but with a
slower test machine. This shows the CPU loading under more normal
conditions. Video streams are typically around 2.5 Mb/s, so these numbers still
represent about 3 times that typical load.

Table 2–3. Normal Network Load Conditions Tests

Driver Mode Used TCP Operation Sustained Data Rate CPU Loading
EDMA Mode Receive 7.57 Mb/s 11.2 %

EDMA Mode Transmit 7.29 Mb/s 17.6 %


2.3 TCP/IP “No-Copy” Socket Options


The performance of any data stream operation suffers when data copies are
performed. Although the NDK’s TCP/IP stack is designed to perform the
minimum number of data copies possible, memory efficiency and API
compatibility sometimes require the use of data copy operations.

By default, neither UDP nor RAW sockets use send or receive buffers.
However, on receive, the sockets API functions recv() and recvfrom() require
a data buffer copy because of how the calling parameters to the functions are
defined. In the stack library, two alternative functions, recvnc() and
recvncfrom(), are provided to allow an application to obtain received data
buffers directly from the stack without a copy operation. When the application
is finished with the data, it returns the buffers to the system by calling
recvncfree().

By default, TCP uses both a send and receive buffer. The send buffer is used
since the TCP protocol can require “reshaping” or retransmission of data due
to window sizes, lost packets, etc. On receive, the standard TCP socket also
has a receive buffer. The receive buffer is used to coalesce data received in
packet buffers. Coalescing data is important for protocols that transmit data in
very small bursts (like a telnet session).

For TCP applications that get data in large bursts (and tend not to use flags
like MSG_WAITALL on receive), the TCP receive buffer can be eliminated by
specifying an alternate TCP socket stream type of SOCK_STREAMNC.
Without the receive buffer, TCP queues up the actual network packet buffers
containing received data instead of coalescing the data into a receive buffer.
This eliminates a data copy. TCP sockets that use the SOCK_STREAMNC
stream type are 100% compatible with the standard TCP socket type.

Note that care must be taken when eliminating the TCP receive buffer, since
large numbers of packet buffers can be tied up for a very small amount of
received data. Also, since packet buffers come directly from the Ethernet
driver in the HAL, there may be a limited supply available. If the MSG_WAITALL
flag is used on a recv() or recvfrom() call, it is possible for all packet buffers
to be consumed before the specified amount of payload data is received. This
would cause a deadlock situation if no socket timeout is specified.


Sockets that use SOCK_STREAMNC have an added benefit in that they can
also be used with the recvnc() and recvncfrom() functions that UDP and RAW
sockets use to eliminate the final data copy from the stack to the sockets
application. Using these “no-copy” functions with SOCK_STREAMNC
eliminates both data copies used by the standard TCP socket. Note that when
recvnc() and recvncfrom() are used with TCP, out-of-band data is not
supported. If the SO_OOBINLINE socket option is set, the out-of-band data is
retained, but the out-of-band data mark is discarded. If the inline socket option
is not used, the out-of-band data is discarded.
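The receive flow described above reduces to a short loop: obtain a stack-owned buffer, consume it, and return it. The sketch below assumes the recvnc()/recvncfree() calling convention documented in the Programmer’s Reference Guide (SPRU524); the exact prototypes and the process_payload() helper are assumptions for illustration.

/* Minimal sketch of a no-copy receive loop on a SOCK_STREAMNC socket.
 * The recvnc()/recvncfree() prototypes are assumed; see SPRU524 for
 * the authoritative declarations. */
#include <netmain.h>    /* stack master include (name assumed) */

extern void process_payload(void *pData, int len);   /* hypothetical */

void no_copy_receive(SOCKET s)
{
    void   *pBuf;       /* points directly at a stack-owned packet buffer */
    HANDLE  hBuffer;    /* handle used to return the buffer to the stack */
    int     bytes;

    for (;;) {
        /* Get the next block of received data without a data copy */
        bytes = recvnc(s, &pBuf, 0, &hBuffer);
        if (bytes <= 0)
            break;                      /* connection closed or error */

        process_payload(pBuf, bytes);

        /* Return the packet buffer promptly; HAL packet buffers may be
         * in limited supply (see the caution above) */
        recvncfree(hBuffer);
    }
}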

Chapter 3

TCP/IP Stack Code and Data Size

This chapter lists the code size of various TCP/IP stack components compiled
for the C6211 with compiler optimization level 2.

Topic Page

3.1 Base Code Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2


3.2 Optional Component Code Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
3.3 Data Memory Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4


3.1 Base Code Size


The base code size includes the portions of the stack that are almost always
linked into an executable.

Table 3–1. Base Code Size

Stack Component Code Size in Bytes (–o2) Code Size in Bytes (–o2 –ms3)


TCP/IP Stack Base 92000 81440

Basic Network Tools 10016 8640

OS Adaptation Layer 10752 9280

Initialization Code 5344 4800

Hardware Device Drivers 7232 7264

Total Size 125344 111424


3.2 Optional Component Code Size


The optional components are those that are included in the stack package, but
can be purged from the system when not required. They mostly include
network “services”.

Table 3–2. Optional Component Code Size

Stack Component Code Size in Bytes (–o2) Code Size in Bytes (–o2 –ms3)


DNS Client/Server 18272 14816

Telnet Server 5696 5088

HTTP Server 10048 8256

DHCP Server 8256 7008

DHCP Client 7808 6784

TFTP Client 3072 2624

PPP Client/Server† 22944 19968

PPPoE Client/Server 9184 7744


† PPP number includes: PPP, LCP, IPCP, PAP, CHAP, and MD5


3.3 Data Memory Requirements


The amount of data memory required for the stack is heavily dependent on the
size of the data packet buffer pools in the HAL layers, and can be adjusted to
fit almost any target size. Table 3–3 below shows the relevant memory usage
expressed in some practical ranges.

Table 3–3. Data Memory Requirements

Memory Usage (bytes) Lower Typical Upper


BSS Section 6000 6000 6000

Packet Memory 16640 24960 41600

Scratch Memory 30960 37152 46440

Approximate Total 53600 68112 94040

Chapter 4

ETHC6000 Ethernet Hardware and Driver Information

This chapter contains information for the ETHC6000 ethernet hardware and
driver.

Topic Page

4.1 Ethernet Hardware Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2


4.2 Ethernet Driver Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4


4.1 Ethernet Hardware Information

The Ethernet DSK daughter card used in these tests was designed by LogicIO
using a Macronix MX98728EC MAC chip (GMAC). The design of the Ethernet
driver is partially determined by the operation of this hardware.

4.1.1 Bus Interface Timing and Performance

The EMIF timings required to access the chip are quite sensitive, and accesses
that are too aggressive will return invalid data. The EMIF settings for this board
are 0x34a31026, which translates to 23 cycles per write and 25 cycles per read
on a 100 MHz external interface. The board thus has a theoretical performance
limit of 128 Mb/s on receive, and about 58 Mb/s on transmit (the board cannot
send while the TX FIFO is being filled, so the 139 Mb/s FIFO fill rate and the
100 Mb/s line rate combine).
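These limits can be checked from the access times above, assuming the 32-bit EMIF accesses noted in section 4.1.5:

\[
R_{\text{rx}} = \frac{32\ \text{bits}}{25\ \text{cycles}} \times 100\ \text{MHz} = 128\ \text{Mb/s},
\qquad
R_{\text{fifo}} = \frac{32\ \text{bits}}{23\ \text{cycles}} \times 100\ \text{MHz} \approx 139\ \text{Mb/s}
\]
\[
R_{\text{tx}} = \left(\frac{1}{R_{\text{fifo}}} + \frac{1}{100\ \text{Mb/s}}\right)^{-1} \approx 58\ \text{Mb/s}
\]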

The total theoretical full duplex throughput of the card is bound by the transmit
rate. However, since the same memory interface used by the Ethernet card is
also used for other CPU operations (like cache line loads), the practical
performance limit is about half. The board has been measured at about
39 Mb/s full duplex on internal loopback tests.

4.1.2 Resources

The ETHC6000 is mapped into CE2 space at the address 0xA0000000. It
uses two interrupts, INT4 and INT5. INT4 is mapped to the GMAC’s general
interrupt signal and is similar to other Ethernet device interrupts. INT5 is
hooked to a DMA “burst ready” pin on the device that signals that a burst of
data (16 bytes) is available from the GMAC data read register. Depending on
the operational mode, the driver can be used with no interrupts, with INT4 only,
with both INT4 and INT5, or with INT4, INT5, and an EDMA completion
interrupt.

4.1.3 Transmit Operation

Packet transmission is performed by filling a transmission FIFO on the chip
and signaling a transmit command. The CPU can then poll the command
register for the completion of the transmit, or the GMAC can also interrupt the
CPU when transmission is complete. The transmit FIFO fill operation is a
simple memory copy of the packet data to a FIFO data register. The fill can be
performed directly by the CPU or by the EDMA.


4.1.4 Receive Operation


Packet reception operates a little differently from transmit. On receive, packet
data is copied into an external memory space controlled entirely by the GMAC
device. When a packet is available, two things happen. First, the INT4 interrupt
fires with an indication that a packet is available, and second, the INT5 interrupt
fires indicating that burst data (16 bytes) is ready. The latter continues to fire
every 16 bytes until the entire packet is read out of the GMAC device. The CPU
does not read the GMAC’s external memory directly; rather, the memory is
“burst” to the CPU by reading the GMAC data read register.

Depending on whether the CPU or the EDMA is used for fetching receive
packet data, the INT5 signal may or may not be used. It is not necessary when
the CPU is used to read the data, since the status of this signal can also be
determined from a GMAC register bit.

Also, for drivers designed to poll for incoming packets, the INT4 signal is not
necessary since the CPU is able to compare a head and tail pointer on the
GMAC device to determine if packets are available.
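As a rough illustration of the CPU-driven (non-EDMA) receive path, the sketch below polls a “data ready” status bit and copies 16-byte bursts out of the data read register. The register offsets, bit mask, and names are hypothetical placeholders; the real MX98728EC register map is defined by its data sheet and the driver source.

/* Illustrative CPU-driven burst read from the GMAC data read register.
 * The register offsets and DATA_READY_BIT below are hypothetical
 * placeholders, not the actual MX98728EC register map. */
#define GMAC_BASE        0xA0000000u     /* CE2 mapping (section 4.1.2) */
#define GMAC_STATUS_REG  (*(volatile unsigned int *)(GMAC_BASE + 0x10))  /* hypothetical */
#define GMAC_DATA_REG    (*(volatile unsigned int *)(GMAC_BASE + 0x14))  /* hypothetical */
#define DATA_READY_BIT   0x00000001u                                     /* hypothetical */

static void gmac_read_packet(unsigned int *dst, int packet_bytes)
{
    int words_left = (packet_bytes + 3) / 4;   /* 32-bit words on the C6000 */
    int i;

    while (words_left > 0) {
        /* Wait for the next 16-byte burst to become available */
        while (!(GMAC_STATUS_REG & DATA_READY_BIT))
            ;

        /* Copy one 16-byte burst (four 32-bit reads of the data register) */
        for (i = 0; i < 4 && words_left > 0; i++, words_left--)
            *dst++ = GMAC_DATA_REG;
    }
}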

4.1.5 Additional Hardware Notes


The GMAC is a convenient architectural match to the DSP, but there are a
couple of hardware issues other than the bus timing to keep in mind when
examining the driver source code.

Although the GMAC provides an 8-bit register interface, the chip can only be
accessed as a 32-bit device by the DSP. This requires the driver to view the
individual registers as groups of “super registers,” each containing four of the
intended register values. This can get clumsy, and in certain cases may limit
the functionality of the device. Luckily, the problem does not prevent the device
from operating in a fashion compatible with the DSP.
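In practice, the “super register” arrangement means reading or writing a 32-bit word and isolating the byte lane that holds the 8-bit register of interest. A minimal sketch follows; the byte-lane ordering and the read-modify-write behavior are assumptions for illustration only.

/* Accessing an 8-bit GMAC register through a 32-bit "super register".
 * The base address comes from section 4.1.2; the byte-lane ordering is
 * an assumption. */
#define GMAC_BASE  0xA0000000u

static unsigned char gmac_read_reg8(unsigned int reg_index)
{
    /* Four consecutive 8-bit registers share one 32-bit word */
    volatile unsigned int *super = (volatile unsigned int *)GMAC_BASE + (reg_index / 4);
    unsigned int lane = reg_index % 4;          /* assumed lane ordering */

    return (unsigned char)((*super >> (8 * lane)) & 0xFF);
}

static void gmac_write_reg8(unsigned int reg_index, unsigned char value)
{
    volatile unsigned int *super = (volatile unsigned int *)GMAC_BASE + (reg_index / 4);
    unsigned int lane = reg_index % 4;
    unsigned int word = *super;

    /* The whole 32-bit word must be rewritten to change one 8-bit field;
     * this is one reason the arrangement "can get clumsy" */
    word &= ~(0xFFu << (8 * lane));
    word |= ((unsigned int)value << (8 * lane));
    *super = word;
}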

Also, the GMAC device signals interrupts to the CPU in a level-sensitive
fashion, while the DSP only supports edge-triggered interrupts. If the GMAC
interrupt signal were to stay low (asserted) across back-to-back interrupts, the
DSP may fall out of sync and be unable to detect additional interrupts. The
software device driver must therefore incorporate a periodic polling routine
that handles lost interrupt conditions.


4.2 Ethernet Driver Information


The stack library is designed to operate in a variety of flexible modes.
Additionally, the stack library does not place any hardware resource
requirements on the HAL or OS adaptation layers. Thus, the driver writer has
the flexibility to choose options like polling over interrupts, or EDMA over CPU
memory copies.

4.2.1 Polling Mode

The simplest form of the Ethernet driver is the polling driver. This driver uses
a special “polling mode” of the stack library that invokes a single low priority
task to service the Ethernet device along with any other required background
tasks. This environment has the benefit that no interrupts are required to run
the network and hence real-time tasks can be deterministically scheduled. The
drawback to this approach is that the system programmer must integrate stack
polling code with any other background tasks in a single low priority “idle” task.

4.2.2 Interrupt Mode

As an alternative to polling, the driver can also run in an interrupt-driven
“semaphore” mode. Here, the Ethernet device’s polling function is called twice
a second instead of constantly, and it is the driver’s responsibility to use
interrupts to determine when network packets are available. This allows the
CPU to schedule time to the stack only when packet activity occurs. The driver
indicates packet events by signaling a global semaphore. This causes the half
second polling routine to be called early.

The advantage of this method is the stack can run independently of any low
priority “idle” task. The disadvantage is that the CPU must now take on the
overhead of one interrupt per transmit, and possibly one interrupt per receive.
Still, using the interrupt mode increases overall throughput.
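Conceptually, the interrupt mode reduces to an ISR that posts a semaphore and a scheduling loop that pends on it with a half-second timeout, so the poll still runs twice a second when the network is idle. The sketch below uses the DSP/BIOS SEM module; the handle, the tick count, and the driver entry point names are assumptions for illustration, not the actual driver symbols.

/* Conceptual sketch of the interrupt-driven "semaphore" mode.
 * hEtherSem, TICKS_PER_HALF_SECOND, and EtherServicePackets() are
 * illustrative assumptions, not actual NDK or driver symbols. */
#include <std.h>
#include <sem.h>                      /* DSP/BIOS semaphore module */

extern SEM_Handle hEtherSem;          /* global packet-event semaphore (assumed) */
extern void EtherServicePackets(void);/* hypothetical driver polling routine */

#define TICKS_PER_HALF_SECOND  500    /* assumes a 1 ms DSP/BIOS system tick */

/* Hardware ISR (hooked to INT4): note the packet event and return quickly */
void EtherIsr(void)
{
    SEM_post(hEtherSem);              /* wakes the scheduling loop early */
}

/* Scheduling loop: services the device on packet activity, or at least
 * twice a second when the network is idle */
void EtherSchedulerTask(void)
{
    for (;;) {
        SEM_pend(hEtherSem, TICKS_PER_HALF_SECOND);
        EtherServicePackets();
    }
}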

4.2.3 EDMA Transfer Mode

On both transmit and receive operations, the EDMA controller can be used to
copy data to and from the GMAC device instead of the CPU. In general, using
the EDMA is faster than using the CPU. This is especially true on the GMAC
device for receive, where each 16-byte burst requires a polling loop to verify
a “data ready” condition. The data ready pin is also hooked to INT5 on the
DSP, and the DSP’s frame-synced EDMA handles multiple bursts of 16 bytes
well, without the need to interrupt the CPU.


The main advantage of using the EDMA controller is performance and lower
CPU overhead. The disadvantage is that the driver ties up more system
resources. In addition to the normal device interrupt (INT4), this driver uses two
EDMA channels (channels 4 and 5), an EDMA synchronization interrupt from
the GMAC (INT5), and a CPU interrupt for the DMA completion signal (INT8).

4.2.4 Sample Drivers


The following sample drivers are included for the GMAC device:

- Polling driver that does not use interrupts or EDMA

- Interrupt driver that uses only INT4, and not EDMA

- EDMA driver that uses INT4, INT5, INT8, and two EDMA channels

