10G is available in advanced CPUs and Ethernet MACs for processing and storage subsystems. One pipe may handle both the operation of clusters and high-speed communications to the outside world. Resources, including network interfaces, may be allocated by virtualization. Components and interfaces are entirely based on worldwide industrial standards. Developers are now confronted with the questions of which 10G technology to choose and which embedded systems provide the ideal environment.
Why 10 Gigabit?
Multi-core CPUs come with 10 Gigabit/sec speed for inter-process communication. Several standards are available to provide the required transfer rates. So there are many good reasons to use 10 Gigabit Ethernet now:
• Interfaces – Prices have come down. SFP+ transceivers are available in the same small form factor as SFP, with passive copper links for short-range, price-sensitive applications (such as chassis interconnect, up to several meters) as well as optical connections for longer distances.
• Single Pipe – Different traffic classes can be allocated to one single pipe, with QoS providing many service levels (see the sketch after this list). With protocols such as iSCSI, storage can use the same interface technology as inter-process communication or exterior services (such as web traffic or Voice over IP). For traffic aggregation, a single 10G link replaces multiple 1G links and their complex link-aggregation mechanisms.
• Virtualization – With multi-core CPUs, the number of virtual machines per processor is increasing. 10G Ethernet MACs support virtualization, i.e., network interfaces can be shared between VMs in a hardware-independent way.
• LAN and WAN – The same technology is applied in local and remote communication, resulting in reduced complexity, reduced cabling and improved reliability. Carrier Ethernet services allow native connectivity for remote operation.
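How different traffic classes coexist in one pipe can be sketched at the host side: each flow is marked with a DSCP value, and switches along the shared link map those markings to service levels. The following minimal Python sketch assumes a Linux host; addresses, ports and the choice of EF/AF21 classes are illustrative assumptions, not taken from the text.

# Minimal sketch: two UDP flows sharing one link, marked with different DSCP classes.
import socket

def marked_socket(dscp: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)  # DSCP occupies the upper 6 bits of the TOS byte
    return s

voice = marked_socket(46)   # EF: low-latency traffic such as Voice over IP
bulk = marked_socket(18)    # AF21: bulk data such as storage or web traffic
voice.sendto(b"rtp payload", ("192.0.2.20", 5004))   # hypothetical endpoints
bulk.sendto(b"data block", ("192.0.2.21", 9000))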
Virtualization drives 10G
On many desktop devices, virtualization has been used to boot one operating system or the other. In this case, one Virtual Machine (VM) may own the complete hardware, with no need to share. In IT server infrastructure and in embedded systems, virtualization is increasingly used to run multiple VMs in parallel on the same hardware.
With current network interface cards (NICs), Ethernet ports cannot be shared easily between parallel VMs. The usual solution is to allocate Ethernet ports to individual VMs, as shown in Figure 1 (upper part). While this type of solution works with a limited number of VMs and a sufficient number of NICs, it has major drawbacks as the number of VMs increases:
• The number of network interfaces needs to match the number of VMs, resulting in extra cost, space requirements, cabling, as well as additional configuration and maintenance hassles.
• The Hypervisor must manage traffic between VMs (thus becoming a multi-port Ethernet switch), or alternatively traffic between VMs can be channeled over exterior network interfaces.
Fig. 1 – Allocation of network interfaces to virtual machines
This scenario is not far-fetched: VMs in Embedded Computing are increasingly being used to encapsulate parallel jobs. Today’s 4-core CPUs easily support 4 to 8 VMs per core. As shown in Figure 1 (lower part), this results in a number of network interfaces exceeding 16 ports. In addition, the traffic capacity of the CPU exceeds 1 Gigabit speed, requiring multiple 1GbE ports per VM. 10G NICs with virtualization support solve these problems: a single 10G Ethernet port can be shared between VMs without the need for link aggregation. Most importantly, inter-VM traffic is handled within the NIC without impact on the Hypervisor.
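On Linux-based hosts, this kind of NIC sharing is typically exposed through SR-IOV, where the 10G adapter presents multiple virtual functions that can be handed to individual VMs. The article does not name a specific mechanism, so the following Python sketch is only an illustration under that assumption; the interface name eth0 and the VF count are hypothetical.

# Minimal sketch: enable SR-IOV virtual functions on a virtualization-capable 10G NIC (Linux sysfs).
from pathlib import Path

IFACE = "eth0"      # hypothetical 10GbE interface
NUM_VFS = 8         # e.g., one virtual function per VM

numvfs = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")
totalvfs = Path(f"/sys/class/net/{IFACE}/device/sriov_totalvfs")

if totalvfs.exists():
    limit = int(totalvfs.read_text())
    wanted = min(NUM_VFS, limit)
    numvfs.write_text(str(wanted))   # the NIC now exposes 'wanted' virtual PCIe functions
    print(f"{IFACE}: {wanted} of {limit} possible virtual functions enabled")
else:
    print(f"{IFACE}: no SR-IOV support reported by the driver")

Each virtual function appears as a separate PCIe device that the Hypervisor can assign directly to a VM, so inter-VM traffic is switched inside the NIC as described above.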
Network Storage and Inter-Process Communications
At customer premises, IT infrastructure serves a variety of applications; whether for enterprise communication, facility management, process technology or a computer center, the general structure is similar. Front-end servers handle exterior connectivity to other sites (locally or remotely), including security screening and traffic distribution. On a second tier, application servers handle the computing tasks. A third tier handles the storage and safe backup of persistent data. Figure 2 shows the basic architecture.
Today, a diversity of technologies is used. While local network connectivity to clients and other sites is largely provided by Ethernet, connections to remote sites are provided over a choice of telecommunication interfaces (such as ISDN, leased lines or xDSL). Communication between servers (inter-process communication) requires high-speed connections over InfiniBand, aggregated GbE links or proprietary solutions. Storage networks (network-attached storage or storage area networks) connect over Fibre Channel. While all established solutions have their benefits, the result is a diverse environment with complex configurations, which requires a diversity of skills to operate.
Fig. 2 – One pipe fits all – reducing complexity lowers costs
The arrival of 10GbE technology removes the performance bottleneck of past Ethernet technology for server and storage networks and allows a diversity of applications to be placed on a single technology platform. For server interconnection, 10 GbE can accommodate the speed of today’s CPUs. Standards like iSCSI allow storage infrastructures equivalent to Fibre Channel solutions to be built without the need for specialized Fibre Channel switches and software environments. The speed of 10 GbE is also sufficient to support native Fibre Channel protocols over Ethernet (FCoE).
The result is a simplified architecture for computer networks. Reduced complexity also results in reduced costs because fewer components are needed, allowing technical skills to be focused elsewhere, thus lowering operational costs. Some server architectures allow multiple processor blades to be placed in a single server chassis, which can also contain 10GbE switches. Such systems considerably reduce space and cabling by providing Ethernet connectivity over the backplane. Embedded form factors for such architectures use low-power processors and reduce total cost of ownership through lowered operational expenses.
The use of Ethernet for server and storage interconnection is supported by worldwide standards: iSCSI is an IETF standard published in 2003, which defines the encapsulation of SCSI commands in TCP/IP, carried over Ethernet frames, with TCP handling the traffic flows. In this way, iSCSI builds on a foundation that has been proven worldwide on the Internet. For storage networks, iSCSI allows 10GbE NICs to work like SAN controllers. Without the need for dedicated hardware, storage devices such as RAID arrays or backup servers can be shared over IP.
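Because iSCSI rides on ordinary TCP/IP, no special adapter is needed to reach a storage target; a plain socket to the IANA-registered iSCSI port 3260 is the starting point. The sketch below is only a hedged illustration: the target address is a made-up example, and a real session would continue with an iSCSI login and SCSI command PDUs, normally handled by the operating system's initiator.

# Minimal sketch: the transport underneath iSCSI is just TCP/IP over Ethernet.
import socket

TARGET = ("192.0.2.10", 3260)   # hypothetical iSCSI target portal, standard port 3260

with socket.create_connection(TARGET, timeout=5) as s:
    print(f"TCP path to iSCSI portal {TARGET[0]}:{TARGET[1]} established")
    # From here an initiator would negotiate an iSCSI login and exchange SCSI
    # commands encapsulated in iSCSI PDUs, all over this plain TCP stream.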
Local and Remote Communications
While Ethernet is a traditional technology for local area networks, it is also gaining momentum in remote communications. Connections between two sites of an enterprise (customer premises) or to remote clients require a connection over public networks, provided by a network operator. Network operators offer a diversity of interfaces for mobile connectivity (over GSM, UMTS or WiFi hotspots) and fixed lines (over ISDN, xDSL or leased lines).
With increasing demand from corporate customers for high-speed interconnections, many network operators now offer Carrier Ethernet services: native Ethernet connectivity at a variety of speeds, including guaranteed availability and quality at different service levels. While speeds above 100 Megabits/sec need a dedicated optical fiber at the customer’s premises and today still represent the exception, the offer of high-speed links is increasing.
For the corporate customer, Carrier Ethernet represents native Ethernet links over long-distance networks, which can carry local Ethernet services to remote sites. Carrier Ethernet also provides a reliable connection with respect to downtime and traffic. Figure 3 shows the setup between customer premises and the public wide area networks. Between customer premises, Ethernet connectivity is transparent, like a local area network connection.
Fig. 3 – Long distance connectivity — Carrier Ethernet
Behind Carrier Ethernet stand considerable standardization activities by major equipment suppliers and network operators. One of the most prominent organizations is the Metro Ethernet Forum (MEF). Standardization activities span a wide range of IETF, IEEE and ITU standards. The MEF ensures that MEF-certified Carrier Ethernet equipment is interoperable over public networks and local connections.
At a much lower level, that of the physical interface, technological progress has also been considerable. Over just five years, the connector sizes and power consumption of fiber-optic and copper transceiver modules have shrunk consistently, from XENPAK to XFP and SFP+. Figure 4 shows the evolution. As a consequence, the new high-speed transceivers such as SFP+ 10 GbE can be easily incorporated into embedded products. The small form factors allow a high density of interfaces with low power per port (below 1 Watt).
The new form factors also facilitate 10G cabling considerably. SFP+ transceivers are available as direct-attach versions on pre-configured copper or optical cables. Passive copper cables cover distances of up to 7 meters with no power consumption; they are factory-terminated with SFP+ connectors on each end and use parallel shielded two-pair (twin-axial) cables. Optical cables cover distances of up to 300 meters at 850 nm over multi-mode fiber (10GBASE-SR), and up to 10 km at 1310 nm over single-mode fiber (10GBASE-LR, long range).
Fig. 4 – Evolution of 10G interfaces
All 10G form factors are industry standards. The initial standard for Ethernet at 10 Gbit/sec was published in 2002 as IEEE standard 802.3ae. The extensions SR, LR, ER and LX4 of the same year define fiber-optic connections. In 2004, the extension 802.3ak-CX4 defined the use of copper twin-axial cable (InfiniBand cable). The use of 10GBASE-T twisted-pair copper cable was specified in 2006 as 802.3an. In 2007, two further extensions followed: 802.3ap-KR for serial backplanes, and 802.3aq-LRM for optical cables. The SFP+ form factor is defined in the SFF-8431 specification of the SFF committee (an industry group concerned with small form factors that represents major IT manufacturers).
10G Embedded Technologies
While 10G has been available for embedded server technologies like AdvancedTCA for some time (and is now moving up to 40G), it is now becoming available in form factors with lower thermal budgets and hence lower power consumption, including CompactPCI, MicroTCA, VME and VPX as well as embedded rack mount servers.
CompactPCI
CompactPCI supports GbE on the backplane according to PICMG specification 2.16. While 1 Gigabit/sec speed is sufficient for many embedded applications, it represents a mismatch to the latest generation of CompactPCI processor blades, which handle traffic at speeds of 10 Gigabit/sec. When such a blade is placed in a CompactPCI system, the traffic capacity of the backplane becomes a bottleneck. This bottleneck can be solved by front cabling, as shown in Figure 5. An extension board (mezzanine board) can be placed on the processor blade in order to provide two 10 GbE links over SFP+. In this way, 10G links may be connected over standard 10G cabling. For multi-processor systems, the front cables will usually meet at a 10G Ethernet switch; the most compact solution is to place a 10G switch directly into the CompactPCI system. For rugged environments, M12 connectors are also becoming available for both 1G and 10G.
Fig. 5 – 10G arrives at CompactPCI
MicroTCA
Like CompactPCI, MicroTCA is a PICMG standard for embedded systems. Unlike the traditional parallel bus architectures, MicroTCA uses serial interfaces on the backplane. The AMC connector provides 21 ports, which can be used for high-speed links at 2.5 Gigabit/sec each. Thus, there is ample capacity for high-speed fabrics like 10 GbE, PCI Express, Serial RapidIO, SATA and others. MicroTCA defines 1GbE as the basic communication infrastructure between boards over the backplane.
In addition, extra fabrics can be used. 10 GbE is currently implemented on the backplane via four 2.5G lanes using the XAUI interface (according to 802.3ap-KX4); the lane arithmetic is sketched below. Future implementations will use 10 Gigabit/sec speeds per lane (according to 802.3ap-KR). Systems with 10G backplanes have been on the market for some time already. What will drive 10G is the next generation of processor blades in the AMC form factor. The double-wide (4U) form factor allows a thermal budget of up to 80 Watts per AMC. This makes it possible to implement quad-core CPU boards in MicroTCA applications, combining an extremely compact, modular system design with highly parallel computing power.
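The arithmetic behind the "four 2.5G lanes" figure can be made explicit. The short Python sketch below assumes the standard XAUI/KX4 signalling parameters (3.125 Gbaud per lane with 8b/10b line coding), which are not spelled out in the text above.

# Minimal sketch of the XAUI/KX4 lane arithmetic (assumed standard parameters).
LANES = 4
LINE_RATE_GBAUD = 3.125        # signalling rate per backplane lane
CODING_EFFICIENCY = 8 / 10     # 8b/10b line coding overhead

payload_per_lane = LINE_RATE_GBAUD * CODING_EFFICIENCY   # 2.5 Gbit/s per lane
total = LANES * payload_per_lane                          # 10 Gbit/s aggregate
print(f"{LANES} lanes x {payload_per_lane:.1f} Gbit/s = {total:.0f} Gbit/s payload")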
Fig. 6 – MicroTCA at full speed
VME & VPX
As the de facto standard for many deployed military applications, 6U VME successfully adopted Gigabit Ethernet on the backplane thanks to VITA 31.1. By leveraging a common baseline with the CompactPCI network infrastructure, such as 6U CompactPCI switches, VME computers can now offer the same evolutionary path towards 10 Gigabit Ethernet using common solutions: 10Gb-enabled switch boards and 10Gb XMC mezzanines. These products are already supported on x86-based computers as well as on PowerPC/AltiVec single-board computers. By offering computers that are compatible with existing application software, VME repeats its previous success of adopting a new standard without having to throw out the vast infrastructure developed over many years.
Designed from the start for high-speed serial interfaces, VPX provides transfer rates of up to 6.25 Gbit/s per lane with a maximum crosstalk of 3% and is well prepared for multi-gigabit Ethernet and other high-speed serial technologies such as PCI Express, SATA and Serial RapidIO. With a total of 464 signal contacts on 6U boards and 280 signal contacts on 3U boards, VPX offers enough capacity for even faster fat pipes that are not necessarily envisaged today.
Fig. 7 – 10G arrives at VME
Embedded Rack Mount Servers
10 Gigabit Ethernet is usually implemented via PCI Express-based extension cards. Embedded rack mount servers are usually based either on motherboards or on PICMG 1.x with a system host board and an application-specific backplane providing multiple PCI and PCI Express extension slots. Both system architectures can easily accept standard PCIe NICs to add 10G Ethernet interfaces in parallel with the 1 GbE ports on the system board. Low-profile extension cards, already available, make it possible to implement 10 Gigabit Ethernet even in space-saving 2U designs.
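Since such a 10G NIC is just another PCI Express Ethernet controller to the host, it can be discovered with standard means. The following Python sketch assumes a Linux host; the PCI class code 0x0200 for Ethernet controllers is standard, everything else is illustrative. It lists all Ethernet controllers, on-board 1G ports and plug-in 10G cards alike.

# Minimal sketch: enumerate PCI Ethernet controllers via Linux sysfs.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cls = int((dev / "class").read_text(), 16)
    if (cls >> 8) == 0x0200:                      # PCI class 02, subclass 00: Ethernet controller
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}: Ethernet controller {vendor}:{device}")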
Fig. 8 – Embedded Rack Mount Server
For over 25 years, Ethernet has proven itself able to adapt to meet the growing demands of computer networks while maintaining low cost of implementation, ease of installation and high reliability. These qualities have helped the Ethernet standard grow in popularity to the point where it handles the origination and termination of nearly all data traffic and has entered several vertical markets such as military, medical and automation. 10G Ethernet represents the next evolutionary step in the simplification of complex computer systems. Thanks to the x-over-Ethernet protocols, a diversity of applications can be placed on a single technology platform, helping to reduce development effort, minimize time-to-market and improve interoperability. 10G Ethernet has now started to become available in embedded form factors such as CompactPCI, MicroTCA and VME/VPX for intra- and inter-system communication. Taking all this into account, it is clear that 10G Ethernet is poised to become the next true standard in networking technology.
The Kontron CP6930 is a performance-optimized 10 Gigabit Ethernet switch. It fits into both CompactPCI (PICMG 2.x) and VME (via the VITA 31.1 specification) system chassis and is a managed 32-port Ethernet switch with six 10GbE ports (plus 2x 1GbE ports) on the front and 24x 1GbE ports on the rear (backplane or front transition modules).
The Kontron XMC401 dual 10G Ethernet Mezzanine is designed to fit on CompactPCI, VME, or VPX Single-Board Computers or any other carrier with XMC sockets.
The Kontron AM5030 with its Intel Xeon processor truly extends the performance range of MicroTCA into the realm of high-end processing.
The Kontron AM4910 provides two 10 GbE uplinks on the front plate via SFP+.