
Archive for October, 2012

OpenFlow in the Data Center

October 2, 2012

A QFabric perspective on the emerging network virtualization technologies

What is OpenFlow?

Literally quoting the openflow.org website: “OpenFlow is an open standard that enables researchers to run experimental protocols in the campus networks we use every day. OpenFlow is added as a feature to commercial Ethernet switches, routers and wireless access points – and provides a standardized hook to allow researchers to run experiments, without requiring vendors to expose the internal workings of their network devices. OpenFlow is currently being implemented by major vendors, with OpenFlow-enabled switches now commercially available.”

In a router or switch, the fast packet forwarding (data plane) and the high-level routing decisions (control plane) occur on the same device. In an OpenFlow Switch the data plane portion resides on the switch, while the high-level routing decisions are moved to a separate controller. The communication between the OpenFlow Switch and the OpenFlow Controller uses the OpenFlow protocol.

An OpenFlow Switch presents a clean flow table abstraction; each flow table entry contains a set of packet fields to match, and an action (such as send-out-port, modify-field, or drop). When an OpenFlow Switch receives a packet that does not match its flow table entries, it sends this packet to the controller. The controller makes the decision on how to handle this packet and adds a flow entry to the switch’s flow table directing the switch on how to forward similar packets in the future.
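To make the flow-table abstraction more concrete, here is a minimal Python sketch of the lookup-or-punt cycle described above. It is purely illustrative: the field names, the (match, action) tuples and the toy controller class are invented for this example and do not correspond to any real OpenFlow switch or controller API.

```python
# Illustrative sketch of the OpenFlow lookup-or-punt cycle (not a real API).

FLOW_TABLE = []  # list of (match_dict, action) tuples, in priority order


def lookup(packet_fields):
    """Return the action of the first flow entry whose fields all match."""
    for match, action in FLOW_TABLE:
        if all(packet_fields.get(k) == v for k, v in match.items()):
            return action
    return None  # table miss


def handle_packet(packet_fields, controller):
    action = lookup(packet_fields)
    if action is None:
        # Table miss: punt the packet to the controller, which decides and
        # installs a flow entry so similar packets are forwarded locally.
        match, action = controller.decide(packet_fields)
        FLOW_TABLE.append((match, action))
    return action  # e.g. ("output", port) or "drop"


class Controller:
    """Toy controller: forward known destinations, drop everything else."""
    def __init__(self, mac_to_port):
        self.mac_to_port = mac_to_port

    def decide(self, fields):
        port = self.mac_to_port.get(fields["eth_dst"])
        if port is None:
            return ({"eth_dst": fields["eth_dst"]}, "drop")
        return ({"eth_dst": fields["eth_dst"]}, ("output", port))


if __name__ == "__main__":
    ctrl = Controller({"00:11:22:33:44:55": 3})
    pkt = {"eth_dst": "00:11:22:33:44:55", "eth_src": "aa:bb:cc:dd:ee:ff"}
    print(handle_packet(pkt, ctrl))   # first packet: punted, entry installed
    print(handle_packet(pkt, ctrl))   # second packet: hits the new flow entry
```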

What is QFabric?

QFabric is a distributed device that creates a single-switch abstraction, using a central control plane with smart edge devices representing the data plane. Multiple edge devices are interconnected through a common backplane implemented by two or more dedicated interconnect devices. All high-level layer 2 and layer 3 decisions are controlled by a central director (control plane), which supplies the edge devices with information on how to forward packets. Edge devices are smart in the sense that they make their own local forwarding decisions, inform the control plane of their local topology and state, and take input from the central director for inter-edge-device forwarding decisions. The communication between the control plane and the distributed data planes is implemented using the mature, standardized MBGP protocol (Multiprotocol BGP, IETF RFC 4760).

The smart edge devices allow for better scalability. The backplane of the distributed device is implemented by very fast interconnects, providing the edge devices with consistent latency between any two ports across the whole fabric. Management is abstracted into the central control plane, leaving the administrator with a single-switch view.

QFabric is a distributed device, performing and behaving like a single switch, implemented with top-of-rack edge nodes for deployment flexibility. All components of QFabric are fully redundant, and the central control plane provides the single-switch abstraction and management view. Using QFabric, it is possible to create one flat network with consistent latency and performance, scaling up to 6144 ports.

OpenFlow in the datacenter

OpenFlow is an SDN protocol that I would position as an L2 network virtualization solution in the datacenter world, much like NVGRE, VXLAN, EVB and others. It provides a way to scale beyond the infamous limit of 4094 usable VLANs imposed by most of the datacenter network hardware in use today.

I see OpenFlow as one of the potential solutions for multitenant cloud datacenters. In public cloud datacenters the primary concern is scaling isolated environments as far as possible, with the option to go well beyond the 4094-VLAN limit. A second concern in multitenant clouds is the dynamic provisioning of services. In an IaaS public cloud, for example, it is imperative to provision the network layer dynamically as new virtual machines are created and deployed: orchestration platforms such as OpenStack, CloudStack and vCloud need to integrate with the network, and OpenFlow is probably one of the most dynamic ways available today to achieve this integration easily and in a vendor-agnostic way.

Dynamic as it is, an OpenFlow network can run into scaling issues. The central controller is the single brain and decision point for all the devices in the network. Apart from elephant flows, which are typical of big-data synchronization and backup applications, datacenter traffic consists mostly of large numbers of short-lived connections. Every new session or flow has to be set up through the central OpenFlow controller, making the controller a choke point that effectively limits the scale and performance of the datacenter.

For flexible deployment of cloud services, when scaling beyond one rack, it is imperative to have a flat network architecture. Such a flat architecture can only be delivered by a fabric that provides consistent latency and does not favor flows between two devices located in the same bubble. Traditional network design requires multiple layers to scale, which creates bubbles at the hardware layer, and location awareness then becomes the main restriction on flexible management. See also the IBM Redbook “Build a Smarter Data Center with Juniper Networks QFabric” for a more elaborate description of the bubble issue.

The only solution available today that provides a flat and scalable architecture with consistent low latency across all ports and racks is QFabric. This requirement becomes even more apparent when thinking about OpenFlow. OpenFlow delivers dynamic provisioning of virtual networks across physical servers. To avoid a management nightmare, and to avoid confining virtual machine motion to a subset of the network where optimal performance and latency exist between servers (a bubble), a flat network architecture is required. This makes QFabric the best architecture to run OpenFlow: it does not require location awareness and provides true dynamic scalability across the datacenter.

L2 Virtualization in the Datacenter

L2 virtualization answers the question of how to create virtualized networks on top of a physical network infrastructure that match the dynamic nature of server virtualization in today’s datacenters. Think of a few virtual servers that all reside in the same L2 VLAN but can move from one physical host to another. How do we handle VLANs dynamically moving from one physical host to another, and therefore from one physical network port to another? Look at this as a mapping problem between virtual and physical VLANs: the VLANs that live in the virtual switch and those configured on the physical network ports.

The most obvious approach to this mapping question is to define all used VLANs on every port connected to a physical server. By far the easiest solution, but, as so often, the easiest is not always the best. The most important issue with defining all VLANs as a trunk on every server access port is flooding. Since all VLANs are defined on all physical ports, whenever a MAC address is unknown to the switch, the switch floods the frame (an ARP request, for instance) to all ports carrying the VLAN in which that MAC address is supposed to live. This means that every server receives flooded frames for all VLANs, even if it is currently not hosting any virtual machine participating in that VLAN. Functionally there is no issue, as the server will drop the unwanted packets; however, each packet has to travel up the network interface device driver in software before it is discarded, wasting CPU cycles that could otherwise be allocated to virtual machines.

The problem described above is more apparent when you have many isolated L2 networks (VLANs) and lots of VMs joining and leaving them (e.g. machines starting, resuming or stopping), which is typical of VDI environments. If you have a limited set of VLANs and servers running 24/7, the problem is much less apparent, as flooding will be limited.

Another problem is the limit on the number of VLANs. If you are running a multitenant environment with many customers and allocate one or more VLANs per customer, your scalability will be limited by the maximum number of VLANs on traditional networking equipment (at most 4094 usable VLAN IDs).

To overcome the above limitations, a number of solutions have emerged and have been taking shape and going through standardization over the last few years. They can be classified into a few different approaches:

  • Dynamic VLAN assignment solutions
  • Layer 2 encapsulation over L3 networks (overlay networks)
  • OpenFlow (which could also be classed as an overlay solution, but is treated separately here)

Dynamic VLAN assignment solutions

Moving VLANs on and off physical port trunks depending on where virtual machines are active is an easy way to overcome the flooding issue. Dynamic motion of VLANs can be achieved with software controllers that integrate with virtualization platforms (e.g. the RESTful/SOAP APIs provided by VMware) to learn which machines are running on which physical hardware and which VLANs a particular vSwitch is serving. The software controller can then dynamically reconfigure the physical VLANs on the switch port trunks. This controller software can run on a separate server or be embedded in the switch’s control plane. Several vendors use this approach today: Arista VM Tracer and Force10 HyperLink are examples of controllers embedded in the switch’s control plane, while Juniper provides an application in its Junos Space network management platform, called Virtual Control, which runs on a separate server.
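To illustrate how such a controller could work, here is a hedged Python sketch of the reconciliation logic: derive the set of VLANs each physical port actually needs from the current VM placement, then trim the trunk accordingly. The hypervisor and switch objects and their method names are hypothetical placeholders for whatever API the virtualization platform (such as the VMware APIs mentioned above) and the switch expose.

```python
# Hypothetical sketch of a dynamic VLAN assignment controller.
# `hypervisor` and `switch` stand in for real platform/switch APIs;
# their method names are invented for illustration.

def required_vlans_per_port(hypervisor):
    """Map each physical switch port to the VLANs its host's VMs actually use."""
    needed = {}
    for host in hypervisor.list_hosts():            # hypothetical call
        vlans = set()
        for vm in hypervisor.list_vms(host):        # hypothetical call
            vlans.update(vm.vlans)                  # VLANs of the VM's vNICs
        needed[host.switch_port] = vlans
    return needed


def reconcile(hypervisor, switch):
    """Trim each trunk down to the VLANs that are currently needed."""
    for port, vlans in required_vlans_per_port(hypervisor).items():
        current = switch.get_trunk_vlans(port)      # hypothetical call
        if current != vlans:
            switch.set_trunk_vlans(port, vlans)     # hypothetical call
```

Run periodically or triggered by VM lifecycle events (power-on, power-off, live migration), this kind of logic confines flooding to the ports that actually host a member of the VLAN.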

The emerging Edge Virtual Bridging (EVB; 802.1Qbg) standard has a component that addresses the dynamic VLAN assignment described above by standardizing a negotiation protocol between the physical switch and the virtual switch. This protocol, the VSI Discovery and Configuration Protocol (VDP), is in my opinion the most elegant solution available today for small to medium-sized cloud datacenters.

Market adoption of EVB and VDP is growing, but today it is limited to the few vendors that have committed to the standard: Juniper on the physical side, and Open vSwitch and IBM’s Distributed Virtual Switch 5000V on the virtual side. VMware has not expressed interest in this open standard as of yet and is instead proposing VXLAN, a solution developed in collaboration with Cisco; VMware deployments therefore need to replace the standard distributed vSwitch with IBM’s 5000V to use VDP. KVM and Xen work with Open vSwitch and are compliant that way. Microsoft has not expressed specific interest for Hyper-V either and is proposing its own solution based on GRE (see below). In Windows Server 2012, however, Hyper-V provides a flexible, extensible virtual switch that allows third parties to write switch extensions using WFP and NDIS filter drivers, so it is certainly imaginable that a VDP extension for Hyper-V will appear over time.

Layer 2 encapsulation across L3 networks

A second approach to the L2 virtualization problem is to create isolated overlay L2 networks between virtual machines on top of an L3 IP-based network (MAC-over-IP). Each physical machine hosting virtual machines has only one IP address from the physical network’s point of view, and the virtual switches on the different physical servers build tunnels between themselves for each virtual L2 network. The virtual switches dynamically set up the tunnels for each VLAN required by the virtual machines they host, effectively creating overlay networks on top of the physical network. As of today, none of the implementations provides an intelligent MAC distribution mechanism in its control plane, so the ARP and MAC flooding behaviour of the physical world is preserved: broadcasts and multicasts are encapsulated inside the overlay tunnel, which in turn are translated into L3 multicasts on the physical network. Two very similar solutions are emerging in this area: NVGRE from Microsoft and VXLAN from VMware/Cisco.
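As a minimal sketch of the MAC-over-IP idea, the following Python snippet builds a VXLAN-style header, an 8-byte header carrying a 24-bit virtual network identifier (VNI), in front of the original Ethernet frame; the result would then travel inside a UDP datagram between the two virtual switches. NVGRE is conceptually similar but uses a GRE header with a 24-bit VSID instead. The frame bytes and VNI values below are made up for illustration.

```python
import struct


def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header (flags + 24-bit VNI) to an inner L2 frame.

    The result would be carried in a UDP datagram between the two virtual
    switches (VTEPs); destination port 4789 is the IANA-assigned VXLAN
    port, while early implementations commonly used 8472.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08 << 24                     # 'I' flag set: VNI field is valid
    header = struct.pack("!II", flags, vni << 8)
    return header + inner_ethernet_frame


# Example: two tenants get isolated overlay segments 5001 and 5002,
# far beyond the ~4094 VLAN IDs available on the physical network.
frame = b"\x00" * 14                       # placeholder Ethernet header
print(len(vxlan_encapsulate(frame, 5001))) # 8-byte header + inner frame
```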

It is imperative that the physical network provides a solid, high-performing L3 multicast implementation to transport the overlay L2 networks. The one-flat-network requirement also stands in this scenario: avoiding bubbles is imperative for any network virtualization technology. L3 multicast was taken into account in the design of QFabric; as a result, QFabric excels at handling multicast and provides the flat network architecture, making it the best choice for overlay virtual networks.

OpenFlow

Another emerging solution is provided through OpenFlow. The dynamic, per-flow forwarding of OpenFlow and its central control plane make it easy to design and manage overlay networks between virtual machines on top of a physical infrastructure. The traditional VLAN approach is abandoned, and isolation is provided through flows on the OpenFlow Switches.
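A hedged illustration of what “isolation through flows” could look like: instead of tagging traffic with VLAN IDs, the controller only installs forwarding entries between endpoints that belong to the same tenant, so no forwarding path exists between tenants. The (match, action) notation is the same simplified representation used in the earlier sketch, not a real controller API, and the tenants, MAC addresses and ports are invented.

```python
# Hypothetical per-tenant flow provisioning: the controller only creates
# forwarding entries between endpoints of the same tenant; anything else
# falls through to an explicit drop, so no VLAN tag is needed for isolation.

TENANTS = {
    "tenant-a": {"00:00:00:00:0a:01": 1, "00:00:00:00:0a:02": 2},  # MAC -> port
    "tenant-b": {"00:00:00:00:0b:01": 3, "00:00:00:00:0b:02": 4},
}


def build_flow_entries(tenants):
    flows = []
    for members in tenants.values():
        for src_mac, src_port in members.items():
            for dst_mac, dst_port in members.items():
                if src_mac != dst_mac:
                    flows.append((
                        {"in_port": src_port, "eth_src": src_mac, "eth_dst": dst_mac},
                        ("output", dst_port),
                    ))
    flows.append(({}, "drop"))   # lowest-priority catch-all: inter-tenant traffic
    return flows


for match, action in build_flow_entries(TENANTS):
    print(match, "->", action)
```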

For OpenFlow to succeed as an L2 virtualization solution in the datacenter, it needs to be present at all levels of the architecture. At the virtual switch level, Open vSwitch supports OpenFlow, NEC has an extension for the Microsoft Hyper-V virtual switch that integrates with its OpenFlow controller, and VMware recently acquired Nicira, so OpenFlow might become part of its strategy as well.

At the physical switch level, if no fabric technology such as QFabric, FEX, TRILL or SPB has been considered, it is imperative that all layers of interconnection support OpenFlow, not only the server access switches.

Juniper supports OpenFlow, is an active contributor to the standards process and plans to bring OpenFlow to QFabric. As QFabric provides the architecture of choice (one flat network) for an OpenFlow-based datacenter implementation, this uniquely positions Juniper in the SDN marketplace. Juniper’s Software Solutions EVP, Bob Muglia, recently talked with Jim Duffy about Juniper’s SDN strategy here.

QFabric and L2 virtualization

For every L2 virtualization technology known today, the requirement stands to simplify the datacenter network connecting the physical servers into one flat network. If the network consists of multiple hops with inconsistent latency between network ports, the transparent overlay network idea will fail, and careful planning and management for location awareness will be required to make it a success.

In all three scenarios, QFabric comes out today as the architecture of choice to support network virtualization. Whether you choose EVB/VDP, L2 overlay networks or OpenFlow, QFabric provides investment protection whatever direction or technology you decide to run in the future.

IBM and Juniper QFabric

IBM is committed to QFabric and the one-flat-network architecture as the future of datacenter networking. See here, and the two recent Redbooks IBM published in this regard:

Considerations on deploying OpenFlow in the Data Center

A flat network that is scalable, high-performing and provides consistent latency is the foundation of a good network virtualization strategy. If you do not want to be restricted to designing and managing bubbles in your network, this is unavoidable.

If no fabric technology such as QFabric, FEX, TRILL or SPB has been considered, all layers of interconnection need to support OpenFlow. Make sure all proposed devices support OpenFlow today, from the core to the access, including the virtual switch.

When considering storage convergence, and especially FCoE, compliance with DCB is required. Best practice is to split FCoE off into a completely separate VLAN and use DCBX for ease of deployment, keeping OpenFlow out of the storage VLAN. For pure OpenFlow devices, the OpenFlow controller would need some level of insight into the FC world, plus QoS extensions on the OpenFlow device.

When planning for OpenFlow, it is best to consider devices that provide a “traditional” L2/L3 mode besides pure OpenFlow. This allows OpenFlow to be used in the parts of the datacenter where it has the best use case, while mixing in “traditional” L2/L3 for critical and latency-sensitive protocols such as storage (FCoE, iSCSI, NAS).

Troubleshooting OpenFlow-based networks can be a daunting task because of the distributed nature of the data and control planes. In QFabric this is solved by providing troubleshooting tools that mirror those of traditional switches. Check the OpenFlow devices and controllers for troubleshooting tools and options.

Availability and scalability of the OpenFlow controller is another concern that cannot be taken lightly. If the controller becomes unreachable, the OpenFlow devices cannot function and, in the best case, fall back to their traditional L2/L3 functionality (if the device supports both L2/L3 and OpenFlow at the same time).

QFabric is fully redundant by design, from the control network up to the directors, nodes and interconnects. When deciding to run OpenFlow, you should design with the same redundancy in mind: fully redundant, physically separated out-of-band connectivity between the controller and the OpenFlow devices for the control plane.

Finally, one more point that needs looking into is how the OpenFlow Controller and the OpenFlow Switches handle multicast. As mentioned, QFabric is designed with multicast in mind and uses multicast trees in the interconnect layer, and multicast is one of the foundations of the overlay networks used for network virtualization.

Cloud computing, the internet of things, consumerization of IT and the Jericho forum

October 2, 2012

On a daily basis we interact countless times with clouds and cloud services: your favorite newspaper freshly delivered to your tablet every morning, archived digital copies of your favorite magazine, your music collection stored and streamed to your iDevice or AVR, on-demand archived TV programs and movies on your smart TV, your customer contacts and relationship management tools available from any device and anywhere in the world, your private and business schedules consolidated and synced between your tablet, phone, laptop and PC and accessible through your living room smart TV. Digitization projects like Project Gutenberg, the Human Genome Project and many others, antivirus, anti-spam and web filtering services in the cloud… just a few of the most popular services that are delivered through cloud computing and hosted in cloud datacenters today.

Cloud computing can no longer be considered hype; the examples are real, they put increasing demands on cloud service providers, and they are causing a revolution in IT technology and infrastructure as well as in their adoption in datacenters around the globe.

The internet of things is a nice example of what’s to come and of what is or will be possible in machine-to-machine interactions. The internet of things can literally change our daily lives in the not-so-distant future, while pushing the demands on current cloud services even further. Requirements for faster response times, continuous availability, more bandwidth, wireless connectivity, and privacy will drive cloud services to adopt and look for new technologies in the areas of security and network infrastructure. Faster, more reliable, agile and secure infrastructures are key to the advancement of the cloud and its services in the coming years.

Think of telemetry applications like the automated acquisition of consumption figures for water, gas and electricity; fridges connected to supply stores that automatically resupply your stock by submitting online orders; a public transportation system that informs you of the exact time and seat availability of the next bus, tram or train. Healthcare is changing radically through new developments in information technology: futuristic images of remote robotic surgery are becoming reality, and, more down to earth, smart bedside terminals provide video on demand, internet, medication tracking, and electronic patient records. An infusion drip can be centrally controlled and monitored over a wireless network, with the control services running on a virtual server in the datacenter. A smart alarm clock can wake you in time for your first meeting, taking into account the location of the meeting and anticipating traffic on a Monday morning. We are close to what was once considered ‘the future’, and we refer to it as ‘the internet of things’. You will find a nice video covering this subject here. Another nice look into the near future can be found in the videos ‘A day made of glass…’ and ‘A day made of glass 2’ by Corning.

Whilst watching the above videos, think about the requirements this will put on the bandwidth, connectivity, availability, and security of cloud services: what we are facing today in cloud datacenters is only the beginning. Some sources predict that by 2020 between 22 and 50 billion devices will be connected to the internet, corresponding to more than 6 devices for every person on earth (source).

The cloud is driving IT

The evolution of the cloud is driving a revolution in information technology. IT managers and CIOs face conflicting demands: they are asked to reduce costs, consumption and resources, while at the same time facing ever-growing demand for new and faster service deployment, reduced complexity and a better user experience. Services have ever-increasing demands for more bandwidth, lower latency and better availability. Cloud computing drives the need for agile infrastructures that optimize the total cost of ownership whilst meeting the requirements for security, performance and availability.

Datacenter trends

The need for more cost-effective solutions and services has led to the broad adoption of services hosted on the internet, referred to as cloud services. Cloud services allow fast service deployment, capacity on demand and predictable cost. This move of services from the local or private datacenter to public or private clouds has led to the consolidation of a large number of smaller datacenters into a smaller number of mega datacenters.

Quote taken from the NetworkWorld article ‘How Cloud Computing Is Forcing IT Evolution’: “Ron Vokoun, a construction executive with Mortenson Construction, a company that builds data centers, began by noting that the projects his firm is taking on are quickly shifting toward larger data centers. Mortenson is seeing small and medium-size enterprises leaving the business of owning and operating data centers, preferring to leverage collocated and cloud environments, leaving the responsibility for managing the facility to some other entity. The typical project size his firm sees has doubled or quadrupled, with 20,000 square feet the typical minimum.”

The need for efficient use of resources has been driving server virtualization for many years. Moore’s law still applies today, and every year the number of cores per CPU and the total computing power per server increases by a factor of two or more. This fuels the server virtualization trend and increases the potential average number of virtual machines consolidated per physical server. The adoption of increasingly powerful multi-core servers, higher-bandwidth interfaces and blade servers is increasing the need for faster connectivity. Access switches need to support growing numbers of 1GE and 10GE ports, moving to 40GE or 100GE access ports in the near future, as more and more compute resources are packed into one rack.

New software application architectures are being adopted to improve productivity and business agility. Architectures based on Web services and Service Oriented Architectures (SOA) have reshaped data center traffic flows, increasing server-to-server bandwidth as opposed to the lighter presentation layer (Web 2.0), which reduces client-to-server traffic. Several sources estimate server-to-server (east/west) traffic at 75% of total traffic. The trend towards VDI (Virtual Desktop Infrastructure) will only reinforce this number.

Another trend in modern datacenter networks is storage convergence. Converging data and storage communications onto one network has several obvious advantages; consider voice/data convergence, also referred to as VoIP, which is a commodity today. The storage convergence trend will accelerate as higher interface speeds become financially viable for data networks: 10 Gbps Ethernet is moving to 40GE and 100GE, while current Fibre Channel (FC) storage interfaces run at 8/16 Gbps and soon 32 Gbps, so the opportunity to converge both technologies onto the same interface and network infrastructure is clear. Higher interface bandwidths will ultimately lead to simpler and less expensive cabling, with each rack server connected to a top-of-rack (ToR) switch over just one or a couple of interfaces used for both data and storage.

The Jericho Forum

Cloud deployments also require a fresh approach to security. The traditional castle model of the legacy corporate security network does not provide the agility needed by today’s flexible application deployment patterns based on private, public or hybrid clouds. Since 2005, the Jericho Forum has been evangelizing a security architecture and design approach based on de-perimeterization: completely removing the DMZ pattern from the security design and securing and hardening every server and host to such an extent that it could face the internet directly. Reality shows that total de-perimeterization might be a bit far-reaching, but the forum still offers a lot of good ideas and information, and it has evolved into a new model that is considered more adequate for cloud security: the hotel model.

The Hotel Model

The hotel model adequately describes the security design pattern required in public or private cloud service deployments. The traditional corporate perimeter firewall, modeled by a castle, does not provide the granularity and level of security we require for modern hosted applications. Cloud services require a more granular, profile-based access method for a broader range of user profiles. These new requirements are modeled by analogy with a hotel: anyone can enter the lobby; at more prestigious five-star hotels that are conscious of their customers’ privacy, entering may require some proof of identity. Once inside the hotel, you are free to roam the public areas. After registering at reception you are given one or more electronic keycards that give you and your family access to your own private room(s). Everyone can walk around the hotel, but only the person holding the right key has access to a given private room. This closely models the security paradigm we face when deploying new cloud services.

Consumerization of IT

Let’s not forget about the main driver behind most technologies and investments: the user, or the consumer. The user will adopt any technology that increases his comfort and makes his daily tasks easier and more convenient. The user or consumer does not care about the leading-edge technology we might be adopting to provide him a basic service; he or she couldn’t care less whether we just invested in state-of-the-art infrastructure or whether we converge data, voice and storage onto one network.

User experience can relate to many things: availability, performance, ease of use, look and feel, and price. A user is ready to pay more for a better experience, not for better or newer technology. If the user can find the same experience at a better price elsewhere, or a better experience at the same price, the user will move his service. Many examples in the corporate space show the daily struggle of corporate IT to balance limited and shrinking budgets with providing the best experience for their users. BYOD (Bring Your Own Device) is probably one of the most popular examples of this experience clashing with corporate regulations and policies. The user likes the ease of use and experience of his tablet or smartphone at home and wants to extend this experience to his daily business. A serious problem faced when transitioning and integrating these devices into our corporate networks and policies is the lack of corporate management and control features built into the devices, because they are mainly targeted at consumers, leading to what we call the consumerization of IT. BYOD/A (Bring Your Own Device or App) is a very hot topic, and while many see the advantages, there are plenty of hurdles and struggles ahead before the holy grail of BYOD is reached and consumer devices are fully integrated into our enterprise networks while assuring the user experience. Nevertheless, CIOs and IT managers see the opportunity in consumerization, leveraging user-owned devices and increasing productivity.

Security versus a consistent experience for the user, independent of his device (corporate laptop, internet café PC, home PC, tablet, smartphone) and independent of his location (connected to the wired LAN, the wireless WLAN, a remote branch, at home over the internet, in the airport using public internet access, or on the road using mobile connectivity): this is one of the headaches modern IT managers face, and it is where the need arises for dynamic, adaptive security infrastructures.

Network Security Orchestration and IF-MAP

A user connecting to the corporate network is no longer limited to a single device, nor is he likely to use only one type of connectivity. Users come to the office with their smartphones, tablets and corporate-issued laptops. The laptop connects to the wired network, while the smartphone and tablet will likely use the corporate wireless network. The same user can be at his desk, joining a meeting in a conference room, taking a taxi to the airport, or sitting in his hotel room. Independent of his device, location or access method, the user expects one experience when accessing corporate resources, effectively giving the user access to corporate data at any time, anywhere, and from any device.

This one experience can only happen if we tie network access control to the identity or role of the user. A user connecting to the wireless network could use 802.1X, and a smart network would provision this wireless access with the VLAN corresponding to the user’s role and device posture. Whenever a user crosses security zones or protected networks and penetrates deeper into the corporate network towards the datacenter, his access should be controlled more granularly by dynamic security policies based on his identity, moving away from static security policies that base authorization solely on the location or connection point (network segment or subnet) of a device.
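As a purely illustrative sketch of such a policy decision, the snippet below maps an authenticated role and a device posture to a VLAN; in a real 802.1X deployment the chosen VLAN would typically be returned to the switch in RADIUS attributes during authentication. The roles, postures and VLAN IDs are made up.

```python
# Hypothetical role/posture to VLAN mapping for dynamic access provisioning.
# Roles, postures and VLAN IDs are invented for illustration.
POLICY = {
    ("employee", "compliant"):     100,  # full corporate access
    ("employee", "non-compliant"): 200,  # remediation VLAN
    ("contractor", "compliant"):   300,  # restricted access
    ("guest", None):               400,  # internet-only guest VLAN
}

GUEST_VLAN = 400


def assign_vlan(role, posture=None):
    """Return the VLAN to provision for this session, defaulting to guest."""
    return POLICY.get((role, posture), POLICY.get((role, None), GUEST_VLAN))


print(assign_vlan("employee", "compliant"))  # 100
print(assign_vlan("employee", "unknown"))    # unknown posture -> guest VLAN 400
```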

User- or role-based access control that is independent of user location and takes device posture into account can only be provided through a dynamic, automatically provisioned security policy, which is the topic of orchestrated network security, or network security orchestration.

The Trusted Computing Group is working on network security orchestration through its Trusted Network Connect architecture and open standards. One of these standards describes the IF-MAP (Interface for Metadata Access Points) protocol, which allows flexible and immediate sharing of metadata between IF-MAP clients and a central database server. IF-MAP clients can publish metadata and subscribe to changes in metadata provided by other clients. An example would be a device profiling appliance like Great Bay Beacon, which can detect personality changes of the device behind a specific MAC address and publishes such a change to the IF-MAP server. An 802.1X access control device like Juniper’s UAC/MAG appliances, subscribed to the information from this profiling appliance, receives a change notification from the IF-MAP server and reacts to the new information by revoking the dynamically provided VLAN and access from the port the client was connected to earlier.
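To illustrate the publish/subscribe pattern behind IF-MAP (not the actual SOAP/XML encoding or metadata schema), here is a toy in-memory sketch: a profiling client publishes new metadata about a MAC address, and a subscribed enforcement client is notified and revokes access. All class and field names are invented for illustration.

```python
# Toy illustration of the IF-MAP publish/subscribe pattern.
# This models the metadata flow only; the real protocol uses SOAP/XML
# and a standardized metadata schema.

class MetadataServer:
    def __init__(self):
        self.metadata = {}       # identifier (e.g. a MAC address) -> metadata
        self.subscribers = []    # callables notified on every change

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, identifier, data):
        self.metadata.setdefault(identifier, {}).update(data)
        for callback in self.subscribers:
            callback(identifier, data)


def enforcement_client(identifier, data):
    # e.g. an 802.1X enforcement point revoking a dynamically assigned VLAN
    if data.get("personality-changed"):
        print(f"revoking access for {identifier}")


server = MetadataServer()
server.subscribe(enforcement_client)

# A profiling appliance detects that the device behind this MAC now behaves
# like a different device type and publishes the change.
server.publish("00:11:22:33:44:55", {"personality-changed": True,
                                     "new-personality": "printer->laptop"})
```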

Internet Federation and SAML

IT managers are given the choice of hosting services or applications inside the corporate datacenter (private cloud) and/or leveraging externally hosted and managed infrastructures or applications (public clouds). Combining the best of both worlds results in what we call hybrid clouds. The flexibility, predictable cost and pay-as-you-grow model of hosted applications provide a very interesting alternative for IT managers, giving them more agility, less risk and faster deployment of new services for their users. But once again, leveraging public and hybrid clouds will only be successful if it provides users one experience. Imperative to this experience when mixing services between clouds are single sign-on and central identity management.

A technology like single sign-on has always been highly appreciated in the corporate network and has evolved towards the broader requirements of interconnecting and federating dispersed clouds and services hosted all over the world. Standards like SAML have emerged and, through their simple requirements, powerful features and flexibility, are revolutionizing the way we use internet services today.

Federation with SAML makes it possible to centralize and host the identities, roles and authorizations of corporate users inside the corporate network and to leverage this identity information through federated access control for internal and external services, giving the user a one-time authentication and seamless access to all services whilst keeping a single point of control and management. SAML effectively shares the corporate users’ identities and authorizations across private and public clouds in a secure and controlled manner.

Mobile explosion

The consumerization of IT reached an inflection point in 2011, when the number of mobile devices on the internet surpassed the number of traditional PCs. This has consequences for existing wireless networks, which have to accommodate an ever-increasing number of devices requesting more bandwidth and requiring flexible scalability while assuring availability and quality of experience. Quality of Service and Class of Service are becoming more widely deployed and require an end-to-end approach. Much like the dynamic security model described above, QoS needs to be provided granularly depending on the user, his device, his location and the application. The latter becomes more apparent as enterprises adopt social networking and start using rich media, online conferencing and unified communications as viable means of doing business.

Application Identification

The dynamic nature of orchestrated security networks providing security and quality of service to the user needs to be extended with intelligence about the applications carried across the infrastructure. Recent trends such as next-generation firewalling provide a deeper look into the protocols and datagrams carried by the network. Where traditionally IDP was used for deeper application intelligence and security, the same technology has been leveraged and reused by next-generation firewall devices to provide application-level firewalling and quality of service, giving the administrator the ability to dynamically influence the network based on the application. And the trend continues, stepping one level deeper into what are called nested applications, like FarmVille inside Facebook or InMail inside LinkedIn. Take the Facebook example: marketing departments are keen to leverage social media and appreciate employees making positive references to corporate events and announcements; at the same time, IT does not want that same user playing FarmVille and disrupting or slowing down more business-critical application flows. Traffic control, classification and prioritization based on applications and nested applications give the administrator and the network the tools needed to take back control and provide that all-important quality of experience for the user.

About the blog

We can only admit that the cloud revolution is accelerating, fueling our economic growth and general wellbeing. This acceleration requires fast-paced technology evolution, disruptive approaches to existing infrastructures and the adoption of new models and patterns in the networking industry. It is happening all around us today and will not slow down in the coming years. This blog aims to cover some of these evolutions and revolutions and will explore some of the solutions available today and their implications for networking now and in the near future.

I’m not a fortune teller, nor do I have all the knowledge of the world, and at times I may make assumptions or cut corners to shorten the technical explanations. While I might at times be biased, being passionate about my employer’s technology and generally having better knowledge of and insight into the accuracy of information on the products in our portfolio, I try to be open-minded and as accurate as possible given the available information. In any case, I welcome and encourage feedback, positive and negative! I admit that I’m on a continuous learning curve, like everyone in information technology…