Which SDN solution is the right one for me? NSX vs ACI vs Nuage vs Contrail

This is a question I've been getting A LOT in the last few years, and even though it sounds rather simple, it somehow gets really complex to convince all the parties (Developers, Systems/Virtualization engineers, Network engineers and the CEO/CTO) that the solution you're proposing is a perfect fit. There are 2 simple explanations for this:

  • A so-called "language barrier" between the different departments.
  • SDN vendors being way too aggressive pushing their solution in the environments where it doesn't fit [understandable when you consider how much money they've invested in SDN, and with how much fear and hesitation the new clients are considering the migration of their production network to SDN].

What I want to try to do in this post is help you get a more objective, non-vendor-biased picture of the SDN solutions out there, and the environments each of them should be considered for.
*If you're not sure you understand the difference between Underlay and Overlay please refer to my previous posts.


There are 2 types of SDN solutions at the moment:
  1. SDN as an Overlay (VMware NSX, Nokia Nuage and OpenContrail)
  2. Underlay and Overlay controlled via APIs (Cisco ACI and OpenDayLight)

SDN as an Overlay solutions tend to be much easier to understand, and more graphical and user friendly. This can be explained by the fact that they only handle the Overlay of the network, completely ignoring the physical network underneath and considering it a "commodity". Even though NSX and Nuage are both great solutions, and there are environments where they would definitely be the SDN solution I would recommend, there is a pretty serious conceptual problem with this approach, especially if your network isn't 100% virtual and if your physical topology has more than a few switches.

Systems and Virtualization engineers tend to love this kind of solution, due to 2 factors:
  • They don't have a deep understanding of Networking protocols.
  • They get the impression that they will handle both the Compute and the Networking environment in the Data Center, pushing out the Networking department [kinda true, if you ignore the fact that you actually end up with 2 departments handling your network, the Systems guys taking care of the Overlay and the Networking guys taking care of the physical infrastructure].


Network engineers tend to not like this kind of solution, due to 2 factors:
  • They lose visibility into what's going on in their Network.
  • They know that when things don't work, or when there is a performance issue, the CEO will knock on their door, and they will have no idea what to do or where to look.

Why isn't SDN as an Overlay as great as they explained in that PowerPoint?

Let me try to explain why SDN as an Overlay should not be considered for environments with a Physical Network Topology of more than just a few switches. Bear with me here, because the explanation might seem a bit complex at first.

The concept of Virtualization is based on optimising the use of physical resources in order to get better performance out of the same hardware. This concept should apply to both Server Virtualization and Network Virtualization. Now imagine a piece of software that handles Server Virtualization "as an Overlay", treating the Physical servers as a "commodity". For example, let's imagine that the 10 physical Servers in the picture below have 16GB of RAM, 4 Cores and 512GB of SSD each. Now let's say that we need to provision 100 VMs, each with 8GB of RAM and 2 Cores. Our Virtualization Software, having no visibility or control of the Physical Servers, will just randomly provision these machines across the physical infrastructure. This way some of our physical servers will end up with 20+ VMs and start having performance issues due to the insane oversubscription, while others will run at less than 20% capacity with just a few VMs.
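To make the analogy concrete, here is a toy Python sketch (purely illustrative, not a model of any real scheduler) comparing blind placement, where the scheduler has no view of the physical layer, with capacity-aware placement:

import random
from collections import Counter

HOSTS, VMS = 10, 100
random.seed(7)

# Blind placement: every VM lands on a random host, because the
# "overlay-only" scheduler has no view of the physical capacity.
blind = Counter(random.randrange(HOSTS) for _ in range(VMS))

# Capacity-aware placement: always pick the currently least-loaded host.
aware = Counter()
for _ in range(VMS):
    aware[min(range(HOSTS), key=lambda h: aware[h])] += 1

print("blind:", sorted(blind.values()))   # uneven - some hosts carry noticeably more VMs than others
print("aware:", sorted(aware.values()))   # even - every host gets the same share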



While this seems pretty easy to understand, most Systems departments have trouble understanding that exactly the same thing happens to our Network when we assume that our SDN should be treated as an Overlay only. Yes, RAM and the number of Cores are concepts far easier to understand than Switch Throughput, IP Flows and Interface Buffer Capacity, but the concept is the same - if we provision our applications to run over the network while ignoring the importance of the Physical Network, then even if your IP network is redundant and highly available like the topology below, some of our Links will have high drop counts while others carry almost no traffic, and some of our Switches will run at 99% CPU while others stay under 10% (this data is actually from real SDN implementations). What can we do? We have two options. We either over-provision our Network Infrastructure and spend way more money than planned, or we suffer the performance issues and blame the guys who take care of the Physical Network.



If after this paragraph you still don't understand why your traffic wouldn't be magically balanced across the Physical Network but would instead saturate a single group of Links and Switches, it's yet another sign that you should probably involve your Networking experts in the decision making process. Let's face it, the Overlay is based on VxLAN, and VxLAN is basically a tunnel between two VTEPs, and therefore - a single IP flow. What happens with an IP flow in an IP Network? It's routed via the best IP path, a decision made locally based on every router's routing table. This means that ALL the traffic between any two Hypervisors will always go through the same links and the same Network devices.
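To illustrate the "single IP flow" argument, here is a simplified Python sketch of ECMP-style 5-tuple hashing (the hash function and the number of equal-cost links are arbitrary assumptions, not the behaviour of any specific switch):

import hashlib

UPLINKS = 4  # assumed number of equal-cost links between two fabric switches

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
    # Hash the 5-tuple and map the flow to one of the equal-cost links.
    key = f"{src_ip}-{dst_ip}-{proto}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % UPLINKS

# The VxLAN tunnel between two hypervisors (VTEPs) looks like one outer flow,
# so it always hashes to the same uplink, no matter how many VMs talk inside it.
print(pick_uplink("10.0.0.1", "10.0.0.2", "udp", 4789, 4789))
print(pick_uplink("10.0.0.1", "10.0.0.2", "udp", 4789, 4789))  # same link every time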

The worst part is that none of these problems will show up in the Demo/PoC environment, as we are mostly testing the functionalities there. The problems will get more and more serious as we add more applications/Network loads and try to scale up the environment. In any case, every wrongly chosen SDN solution that I've seen ended up with the client's complete frustration and a rollback to the Legacy network, at least until SDN is "more mature". No... there are mature SDN solutions, you were just convinced too easily and chose the wrong solution.

Conclusion

Before I get to the recommendations on which solution is the right one for you, there is one thing that most of my clients are trying to avoid - every SDN solution is a vendor lock-in. Some of them lock you in with their Hardware, some with Software and Licences, and some with Support (including Upgrades and additional engineering when adding/upgrading other components in your Data Center).

To sum all this up, here is a simple list of recommendations to help you decide which SDN solution you should consider.

Is VMware NSX a perfect fit for my environment?

If your environment is 100% virtual and 100% VMware (or on the path to becoming 100% virtual in the next few years), and your Data Center Network Topology is rather simple and made of 100% high-end, high-throughput Network Devices - NSX is the way to go! With vRealize Network Insight you'll be able to get a basic picture of what's going on in the Physical Network and, as VMware says, do "Performance optimization across overlay and underlay", and the NSX micro-segmentation just works perfectly. Also keep in mind that Cisco and VMware are the two companies with the greatest number of experts, so you don't have to worry about product support.

*There's a multi hypervisor version of NSX, called NSX Transformers (previously known as NSX-mh). At the moment (December 2016) this is not something that you should consider, as it has a very limited number of functionalities, and there is no way to get your hands on it (not even as a VMware employee or a partner)

Is Nokia Nuage a perfect fit for my environment?

If you have a multi-hypervisor, 100% virtual environment (or are on the path to becoming 100% virtual in the next few years) and your Data Center Network Topology is rather simple and made of 100% high-end, high-throughput Network Devices - Nuage might be the way to go. Within the Nuage VSP (Virtualized Services Platform) there is a product called Nuage VSAP (Virtualized Services Assurance Platform). Keep in mind that VSAP can give you a basic overview of what's going on in your physical network, but it is more of a Monitoring than a Network Management platform. On the Nuage web page you will find that if, for example, a physical link goes down, the triggered action would be something like sending an email to the Networking department.

If you have many Branch Offices - you should definitely consider Nuage, as Nuage Networks Virtualized Network Services (VNS) solution can literally extend your VxLANs (and therefore your applications) in a matter of hours using a simple Physical or Virtual device.

Also worth mentioning - Nuage GUI is simply awesome, fast and intuitive. Your SDN admins will appreciate this (at least in the migration process, till you migrate to all-API Data Center environment).

Is Cisco ACI a perfect fit for my environment?

ACI is definitely one of my favourites on the market, and probably the only one that gives you control of the Overlay and the Underlay as a single Network, out of the box and with a defined Support model. The problem is that the only switch supporting Cisco ACI is the Cisco Nexus 9k. So if you have a serious Network Topology and you're planning a renovation of your Switches (or you already have a significant number of Nexus 9k) - ACI is definitely the way to go. It lets you control your network (Physical and Virtual) from a single controller, and the Troubleshooting tools are just INSANE. You can even do a trace-route including Overlay, Underlay and Security with a graphical output.

Is OpenDayLight a perfect fit for my environment?

OpenDayLight is an open source solution, which means that if you don't already have a big team of motivated R&D Network Engineers - you should go for one of the distributions out there by a major vendor, such as Ericsson, Huawei, NEC, HP etc.

The advantage of OpenDayLight is its flexibility, because it has numerous projects that you may or may not use in your environment. This allows you to build a custom solution that is a perfect fit, handling the Overlay and the Underlay using open source projects. There is again the issue of handling the Physical infrastructure with half-engineered protocols such as OpenFlow and OVSDB, but a good system integrator can overcome this, and I've seen it happen.

The disadvantage is that this kind of solution requires a great number of engineering hours, and an update of a certain component in your hardware may require re-engineering a part of your SDN solution. There is also the question of customer support, having in mind that the only one who knows the details of the personalised solution your system integrator of choice implemented is that very integrator.

Cisco ACI and OpenStack Integration: RedHat vs Mirantis

Note: This post requires basic knowledge of Cisco ACI architecture and ACI logical elements, as well as understanding of what OpenStack is, what the OpenStack elements (Projects) do, and the principles of what OVS and Neutron are and how they work. If you wish to get more information about these technologies, check out the Cisco ACI and OpenStack Section within the "SNArchs.COM Blog Map".

Let's get one thing clear about OpenStack before we even start:
  • “OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system” openstack.org.
  • Open source and open APIs allow the customer to avoid being locked in to a single vendor.
  • One thing to keep in mind is that OpenStack is meant for applications specifically designed for the Cloud; you should not even consider moving all your Virtual workloads to OpenStack.
  • Everyone who has gotten a bit deeper into the concept of a Private Cloud and OpenStack, how they operate and the basic use cases, understands that Neutron just wasn't designed to handle the entire OpenStack Networking.
To back this up, I'll get into a bit of a "bloggers loop" here by telling you to read this post by Scott Lowe, where he actually refers to one of my posts about Neutron and OVS.


There are 2 main advantages of the OpenStack + ACI integration:
  • ACI handles all the Physical and Virtual Networking that OpenStack requires.
  • Support. There are OpenStack plugins for OpenDayLight as well, but they require much, much more manual work and "tuning", and there is also the question of who gives you technical support when something goes wrong.




The concept of how the OpenStack and Cisco ACI integration works is shown in the diagram below.


  1. From OpenStack Horizon we create the Networks and Routers. ACI OpFlex Plugin translates this into EPG/Contract language that ACI understands, and these "instructions" on how to configure the connectivity are sent to the APIC controller.
  2. APIC sends the instructions to the Physical and Virtual network elements in order to configure them in accordance with OpenStack's needs (a quick way to verify the result from the APIC side is sketched below).
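If you want to verify from the APIC side what the plugin has pushed, the APIC REST API is the quickest way. Below is a minimal Python sketch (the APIC address and credentials are placeholders, and the exact tenant naming depends on the integration mode you chose) that logs in and lists the tenants, among which the OpenStack-created one should appear:

import requests

APIC = "https://10.20.70.92"    # placeholder APIC address
session = requests.Session()
session.verify = False           # lab only: self-signed certificate

# Log in; the APIC returns a token that the session keeps as a cookie.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# List all tenants; the one created by the OpenStack plugin should show up here.
tenants = session.get(f"{APIC}/api/node/class/fvTenant.json").json()
for obj in tenants["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])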


To be fair, we used completely different environments to deploy OpenStack before we started the Cisco ACI Integration. I hope this makes it clear that we did not and cannot compare performance here, only the way each distribution integrates and the features.

There are 2 ways of integrating OpenStack with Cisco ACI: using the ML2 Driver, or using GBP (Group Based Policy). The second one is still in BETA phase, and even though we did try it, and its concept is much more in line with the Cisco ACI Policy Model (read: recommended and to be used in the future) - I would highly recommend you stick to the ML2 driver until GBP gets more stable and supported. The differences are shown in the diagram below:




There are currently 3 OpenStack distributions officially supported by Cisco ACI. Out of these 3, we focused on testing the RedHat and Mirantis distribution integration.

Red Hat OpenStack (RHOS/RDO)
  • KILO Release.
  • VxLAN mode (not fully supported by Cisco ACI at this moment).
  • Deployed with PackStack and Red Hat Director.
  • UCS B Series (Director in a single Blade), FIs directly connected to Cisco Leafs.
  • Control and Compute Nodes on Blades.
  • We chose a mode where OpenStack creates a single ACI Tenant, and each OpenStack Project maps to an ACI ANP within that tenant (this used to be the default mode, but it no longer is).


Mirantis OpenStack
  • KILO Release.
  • VLAN mode.
  • Deployed in VMware vSphere environment.
  • IBM Flex Chassis connected to Cisco Leafs.
  • Control and Compute Nodes on VMs.

TIP: When you deploy OpenStack in a VMware environment, you need to "tune" your vSwitch/VDS in order to allow the LLDP packets between ACI and OpenStack Nodes by following these steps:
  1. Make sure the adapter passes the LLDP packets (in the case of UCS C Series, disable LLDP on the VIC through CIMC).
  2. Disable LLDP/CDP on the vSwitch (or the VDS, if that's what you are using).
  3. Make the Port Group and vSwitch "promiscuous".

INTEGRATION PROCEDURE


DISCLAIMER: In both of these cases we followed the official integration guides (in the References below). Keep in mind that these Plugins are being continuously updated, and you will often find that the integration guide doesn't correspond to the plugin you can currently download.

You should know that these plugins are designed by Cisco, RedHat and Mirantis, so it's a mutual effort. If you have problems with the documentation, or encounter a bug, we found that it's much easier to ask for Cisco's support, as the Cisco Lab guys really seem to be on top of things.


RedHat Integration


You can follow the step-by-step integration guide, but keep in mind that you will often not be sure what something does or why they are doing it. This will get better in time, but for now - you'd better sit your Networkers with ACI knowledge together with your Linux experts with OpenStack knowledge and make them talk and work through every step together, or you will not really be able to make it work.

Before you even start, define the External, Floating IP and SNAT subnets, and configure your L3_Out from the ACI Fabric. In our case we did OSPF with a Nexus 5500. Once your OpenStack is fully integrated, the Nexus 5500 will learn the SNAT and Floating IP Subnets from ACI via OSPF.

TIP: The External Network is a VLAN you extend from your production network for Director, Horizon etc. and it does NOT go out using the same route.

In RedHat you need to manually:
  • Install and deploy the plugin on both nodes.
  • Replace the Neutron Agents with the OpFlex ones.
  • Configure all the parameters of the Interconnections and Protocols.

During the integration process we felt like the guide was made for a very specific environment, and that many of the steps were poorly documented or not explained at all. Many times we had to stop, draw a diagram of how the Linux guys and how the Network guys understood the current step, and reach a conclusion together. I think this will happen in many organisations, as the Network and Systems engineers do not really "speak the same language", so to speak.

This is what you will see once your nodes get their IPs (VTEP addresses, actually) via DHCP from the ACI Infrastructure VLAN (in our case VLAN 3456, 172.1.0.0/16) and get LLDP connectivity with the ACI Leafs. Your OpenStack Nodes will show up as OpenStack Hypervisors, and you will be able to see all the VMs from ACI:

TIP: Since we did a VxLAN mode, all the traffic between the ACI Leafs and OpenStack Nodes (Hypervisors) is going via the Infrastructure VLAN that is carrying the VxLAN traffic, so be sure that you have the Jumbo Frames enabled VTEP-to-VTEP (This includes the Blades, FIs and all the Switches you might have in the path).

For some reason the L3_EPG within the OpenStack tenant did not correctly "pick up" the L3 Out. Once I manually assigned the OSPF peering that I had created before in the Common Tenant, OpenStack got the "ping" to the outside network working.

Once you have your OpenStack Tenant in ACI, you will be able to add other VMs or physical servers from other Domains (Physical or virtual, such as VMware or Hyper-V) to the Networks (EPGs). In the diagram below, you can see how the EPGs named "group1" and "group2" contain both OpenStack and VMware domains.

MIRANTIS Integration

Mirantis OpenStack was deployed in the VLAN mode (fully supported at this point by Cisco ACI), but it was deployed in the virtual environment. We were therefore expecting some operational difficulties.

Mirantis integration was a really pleasant surprise. While in RedHat you need to manually install the plugin on both nodes, replace the Neutron Agents with the OpFlex ones, and then make sure all the correct services are running, in Mirantis you have a graphical interface where you just Import the OpFlex plugin, after which the OpFlex menu auto-magically appears in your menu when you want to deploy a new OpenStack Environment.

While deploying a new OpenStack Environment you simply configure all the integration parameters, and the environment is built in accordance with the integration policy from the start. It all felt so easy, until we reached the point where everything was deployed correctly and Mirantis was giving the "all correct" messages, but our OpenStack simply wasn't appearing in ACI. To be fair - this was a virtual environment installation, so we were kind of expecting this type of problem.

After we deployed the VMware workaround described in the introduction of this post, we got the visibility, and the OpenStack Hypervisors were visible from the APIC GUI.



References

There is a lot of documentation out there for Kilo. These are the ones we used:





Cisco ACI Service Graph (L4-7), ADC: F5 vs NetScaler

Note: This post requires basic knowledge of Cisco ACI architecture and ACI logical elements, as well as understanding of what ADC is, and the basic principles of Load Balancing and SSL. If you wish to get more information about these technologies, check out the Cisco ACI Section within the "SNArchs.COM Blog Map".

I will not go all "Security is super important" on you, I assume that if you are reading this post - you already know that. Let's just skip that part then, and go directly to the facts we have so far:

  • ACI does not permit the flows we do not explicitly allow. ACI is therefore a stateless FW itself.
  • ACI Filters allow the basic L3-L4 FW rules. All additional L4-L7 "features" can be deployed in a form of a Service Graph.
  • Service Graph is directly attached to a Contract between 2 EPGs (End Point Groups).
  • Cisco ACI integrates with all the big L4-7 Services vendors using the "Device Package". A Device Package is a plugin that is deployed directly to APIC Controller, and allows a 3rd party device to be "instantiated" and later applied as a Service Graph.

Most of the "big players" in the area of Security have evolved their ACI integration (some more, some less):




Now that the concepts are clear, let's get a bit deeper into how CITRIX and F5 handle the ACI Integration.

1. What Service Functions do we get?

CITRIX allows you to deploy the NetScaler as a virtual (VPX) or a physical (SDX) device. The Device Package is the same for both. Once the Device Package is deployed, you will be able to see all the Functions or Services that NetScaler lets you deploy within the ACI Fabric. It was a pleasant surprise to see a big variety of ADC functions:

F5, on the other hand, has 2 ways of integrating with Cisco ACI:
- Direct BIG IP integration.
- BIG IP + BIG IQ integration.

We had the chance to test both of these, and I must say that I'm personally a big fan of how the second option works. Let me get deeper into that. Once I got the Device Package installed I was a bit disappointed to see that the only Services we could deploy were basic Load Balancing and Microsoft SharePoint (for some reason...).

Good thing I did some more digging and discovered the BigIQ integration. This is where F5 really impressed me. You basically need to configure the BigIP + BigIQ integration first, before you even deploy the Device Package. If you are not familiar with the concept of iApps - you should most definitely check them out. They allow you to create a Template of whatever Service Functions you need in your organisation, no matter how complex. Once you create these from BigIQ, you generate a "personalized" Device Package that you then deploy in APIC. You then get all the iApps you created as separate Service Functions that you can deploy between your EPGs.

2. FLEXIBILITY

The State of the Art is currently on a Beginner/Medium Level, so it might just be a bit early to talk about Flexibility, but even so - there are a few things worth mentioning:
  • Virtual Devices: This applies to both NetScaler VPX and virtual BigIP - they only support a Single Tenant for now. This is partly a vSphere/HyperV limitation as well, as they still do not allow a PortGroup/VMNetwork to carry a few VLANs of our choice, so APIC cannot send a command like "add this VLAN to a PortGroup on an ADC Interface". Instead, every time you deploy a new Service Graph - the VLAN gets replaced, and the old Service Graph stops working. Silly, right? Good thing I have the insider information that this will change soon :)
  • Service Functions: Both of the vendors found a way to give us a variety of Functions that we can apply via Service Graph. CITRIX does it in a native form, while F5 uses the iApp+BigIQ. They each have their advantages and disadvantages, but I predict a bright future for both, so - Good job Citrix and F5!

NetScaler has a "bonus feature" - an online Configuration Converter, which converts your NetScaler CLI configuration into XML/JSON that you can later deploy as an API call. You must admit that this is just cool:

3. USABILITY

One of the most common questions I get during the ACI Demos is - "It's all super impressive, but should we really consider using it in a production environment? How do you troubleshoot if something goes wrong?". This brings us to the question of usability.

The biggest problem we face when we try to apply a Service Graph Template to an ACI Contract is an entirely new interface for configuring a Service. Instead of the BIGIP/NetScaler interface we are used to, we get just a group of parameters with no comments or descriptions. Some of these are semi-intuitive, like Virtual IP and Port, while others are just awkward (fvPar2 or something like that). There is no doubt that this will evolve in the future, but right now - you'd better know exactly what you are doing. Just so you can get a "taste" of what these parameters look like, here are screenshots of a NetScaler (1st screenshot) and a BigIP (2nd screenshot) implementation of a basic LBaaS:

An alternative is using REST API calls. I'm personally a huge fan of this method, because it's just so fast and easy. Yes, in the beginning you will not trust it, and it does have a learning curve, but once you "get it" - that's it for you, no going back to the parameters. You will most probably do what I did - start building your own API Library, and tune it with love :)


Here are some examples of the NetScaler Function API CALLs in PostMan:



If you wish to use my Libraries, feel free to download the repository from my GitHub:

Temporary link [Official NetScaler Library]: https://github.com/citrix/netscaler_aci_poc_kit
Target Link: [Will be updated soon]

DISCLAIMER: These are designed for a personal use, and even though I decided to share them with the community, I take no responsibility if they do not work correctly in certain environments.

Bottom line, both NetScaler and BigIP work more or less the same way here. Once you figure out which parameters are obligatory and what exactly they are - you will have no problem configuring any Service Function. You will later log into the NetScaler/BigIP device to make sure that the configuration is accurate, and that the parameters are set correctly. For now there are parameters that you can only configure from the local interface, but I'm pretty sure that in time all of these will be added to the ACI Device Package.

So far neither of the two vendors has issued a complete list of parameters, together with how to use them. We sincerely hope they are working on it.

IMPORTANT TIP: Once you get your Service Graph deployed and working, you can do a right click on a deployed Service Graph, and Save the configuration in the XML/JSON format. Why is this awesome? Because you can just add this to your PostMan Library and later deploy LBaaS with a single click. If you're not impressed yet - you need to try this, trust me - you will be!
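The same saved JSON can be replayed outside of PostMan too. Here is a minimal Python sketch (the APIC address, credentials and file name are placeholders; it assumes the saved object is rooted at uni and uses the generic /api/mo/uni.json endpoint):

import json
import requests

APIC = "https://apic.example.local"   # placeholder
session = requests.Session()
session.verify = False                 # lab only

login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Replay the Service Graph configuration saved from the APIC GUI (right click -> Save).
with open("saved_service_graph.json") as f:    # placeholder file name
    payload = json.load(f)

resp = session.post(f"{APIC}/api/mo/uni.json", json=payload)
print(resp.status_code, resp.text)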


So, which one is better, then?

In my personal opinion, both of these are at more or less the same level of integration with Cisco ACI. This is really good if you want to keep using the same ADC that you've been using so far. It may be a bit disappointing if you are trying to choose one of the two based on how it integrates with ACI, because in my opinion a lot of work is yet to be done.

Cisco ACI Guide for Humans, Part 2: Upgrade Cisco ACI

The first time we "unpack" ACI, we will find a certain number of potential Spine and potential Leaf switches, and hopefully 3 (or 5) APIC Controllers. We rack the entire fabric, interconnect every Spine to every Leaf with a single 40G cable, and connect every APIC to 2 Leaf Switches. We power on the devices, and before we even start configuring the APIC Cluster, we need to console into each Switch and verify whether it's running ACI mode or NX-OS mode by executing the "show version" command. These are the details of the Fabric we used in our Lab:

Software
  BIOS: version 07.17
  NXOS: version 6.1(2)I3(3a)
  BIOS compile time:  09/10/2014
  NXOS image file is: bootflash:///n9000-dk9.6.1.2.I3.3a.bin
  NXOS compile time:  1/26/2015 11:00:00 [01/26/2015 19:45:44]

Hardware
  cisco Nexus9000 C9372PX chassis
  Intel(R) Core(TM) i3-3227U C with 16402544 kB of memory.
  Processor Board ID SAL1935N8A2

  Device name: switch
  bootflash:   51496280 kB
Kernel uptime is 0 day(s), 0 hour(s), 7 minute(s), 0 second(s)

Last reset
  Reason: Unknown
  System version: 6.1(2)I3(3a)
  Service:

plugin
  Core Plugin, Ethernet Plugin


By default your Leaf Switches will be in NX-OS mode. On the bootflash: of each Switch we will find the ACI image, the NX-OS image and the EPLD file. If there is no ACI image, we will have to download it from the Cisco website. Before we proceed with switching the operational mode from NX-OS to ACI, we first need to apply the EPLD upgrade:

switch# show install all impact epld bootflash:n9000-epld.6.1.2.I3.3a.img
Compatibility check:
Module        Type         Upgradable        Impact   Reason
------  -----------------  ----------    ----------   ------
     1            SUP           Yes       disruptive   Module Upgradable

Retrieving EPLD versions... Please wait.

Images will be upgraded according to following table:
Module  Type   EPLD              Running-Version   New-Version  Upg-Required
------  ----  -------------      ---------------   -----------  ------------
     1   SUP  MI FPGA                   0x12        0x11            Yes
     1   SUP  IO FPGA                   0x06        0x05            Yes
switch# install epld bootflash:n9000-epld.6.1.2.I3.3a.img module all
Compatibility check:
Module        Type         Upgradable        Impact   Reason
------  -----------------  ----------    ----------   ------
     1            SUP           Yes       disruptive   Module Upgradable

Retrieving EPLD versions... Please wait.

Images will be upgraded according to following table:
Module  Type   EPLD              Running-Version   New-Version  Upg-Required
------  ----  -------------      ---------------   -----------  ------------
     1   SUP  MI FPGA                   0x12        0x11            Yes
     1   SUP  IO FPGA                   0x06        0x05            Yes
The above modules require upgrade.
The switch will be reloaded at the end of the upgrade
Do you want to continue (y/n) ?  [n] y

Proceeding to upgrade Modules.

 Starting Module 1 EPLD Upgrade

Module 1 : MI FPGA [Programming] : 100.00% (     64 of      64 sectors)
Module 1 : IO FPGA [Programming] : 100.00% (     64 of      64 sectors)
Module 1 EPLD upgrade is successful.
Module        Type  Upgrade-Result
------  ------------------  --------------
     1         SUP         Success


EPLDs upgraded.


Once you have the entire fabric up and running on the same Firmware version, and in ACI mode, you can start configuring the APIC devices. It is of the utmost importance that you decide on and label each APIC Controller with its Number in the cluster, and that the Username and the Password you define match. Don't be surprised we're giving so much importance to this, because if you get the initial APIC configuration wrong, it will be difficult to recover.
Start by assigning the Out-of-Band Management IP addresses to all the Switches and all the APICs. In our case, as an example, we set up a simple 1 Spine - 2 Leaf - 1 APIC architecture as a PoC kit, but the same principles apply regardless of the number of devices you bought. Besides the management IPs you will need:


  • IP range for your VTEPs (tunnel endpoints). By default it's 10.0.0.0/8.
  • DNS and NTP Servers.
  • Dedicated infrastructure VLAN.


ACI can be upgraded before you build the entire fabric and perform a Fabric Discovery from the APIC Cluster, in which case you would have to separately upgrade every switch and every APIC controller manually (use a TFTP or SCP server or a USB stick, copy the new image to each device and boot from the ACI image), or you can start by building the fabric and perform an orchestrated upgrade, controlling it all from the APIC SSH line. I personally prefer the second option, even more so knowing that future upgrades will be performed that way.

The Upgrade Procedure is to start with the Switches, and then, once the entire fabric is on the new version, proceed with the APIC Upgrade.

STEP 1: Upgrade the Leaf and Spine Switches. Before we begin, let's check the Firmware version on the entire ACI architecture:

admin@APIC:/> firmware upgrade status
Node-Id    Role            Current-Firmware     Target-Firmware      Upgrade-Status  
------------------------------------------------------------------------------------------
1          controller      apic-1.0(3f)                              completeok      100
101        leaf            n9000-11.0(3f)                            notscheduled    0
102        leaf            n9000-11.0(3f)                            notscheduled    0
201        spine           n9000-11.0(3f)                            notscheduled    0

You will notice in the output above that the APIC controller has the “complete OK” status in the Upgrade Status column. This is because the APIC had been turned on for the first time out-of-the-box.

Start by upgrading ONE of the Leaf Switches. In my case, I upgraded Leaf1, or Node 101, to version 11.2(1m) (compatible with the Jan2016 Brazos release of ACI). First we make sure that the new image is in the repository, and then we execute the upgrade:

admin@APIC:fwrepo> pwd
/firmware/fwrepos/fwrepo
admin@APIC:fwrepo> ls
aci-catalog-dk9.1.0.3f.bin  aci-n9000-dk9.11.2.1m.bin  boot  md5sum

admin@APIC:/> firmware upgrade switch node 101 aci-n9000-dk9.11.2.1m.bin
Firmware Installation on Switch Scheduled


To check the upgrade status, use 'firmware upgrade status node <node-id>'.

admin@APIC:/> firmware upgrade status node 101
Node-Id    Role            Current-Firmware     Target-Firmware      Upgrade-Status  Progress-
-----------------------------------------------------------------------------------------------------
101        leaf            n9000-11.0(3f)       n9000-11.2(1m)       inprogress      5


You should repeat this procedure for all the Leafs and Spines. If you're in a production environment, be sure to rely on the High Availability you've previously taken care of (I hope), and upgrade one Leaf Switch at a time, then the Spine Switches one by one, and after a while:

admin@APIC:pam.d> firmware upgrade status node 101
Node-Id    Role            Current-Firmware     Target-Firmware      Upgrade-Status  Progress-
-----------------------------------------------------------------------------------------------------
101        leaf            n9000-11.2(1m)       n9000-11.2(1m)       completeok      100


admin@APIC:pam.d> firmware upgrade status node 102
Node-Id    Role            Current-Firmware     Target-Firmware      Upgrade-Status  Progress-
-----------------------------------------------------------------------------------------------------
102        leaf            n9000-11.2(1m)       n9000-11.2(1m)       completeok      100


admin@APIC:pam.d> firmware upgrade status node 201
Node-Id    Role            Current-Firmware     Target-Firmware      Upgrade-Status  Progress-
-----------------------------------------------------------------------------------------------------
201        spine           n9000-11.2(1m)       n9000-11.2(1m)       completeok      100


STEP 2: Upgrade the APIC controller. In this example I'm upgrading the APIC from 1.0(3f) to 1.2(1m) (the Jan2016 version of the ACI release called Brazos). The first step is to copy the new firmware to the APIC Controller:

$ scp /Users/iCloud-MJ/Downloads/aci-n9000-dk9.11.2.1m.bin admin@10.20.70.92:
Application Policy Infrastructure Controller
admin@10.20.70.92's password:
aci-n9000-dk9.11.2.1m.bin                                                   100%  532MB   6.4MB/s   01:23


IMPORTANT: When you copy your firmware files using SCP, make sure that you have the correct privileges in the destination folder. If you don't specify the destination folder on the APIC, it will be /home/admin/:

admin@APIC:~> pwd
/home/admin
admin@APIC:~> ls
aci  aci-apic-dk9.1.2.1m.iso  aci-n9000-dk9.11.2.1m.bin  debug mit

Add the newly copied Firmware to your Firmware Repository:

admin@APIC:~> firmware add aci-n9000-dk9.11.2.1m.bin
Firmware Image aci-n9000-dk9.11.2.1m.bin is added to the repository

Be sure all the images have been correctly added to the list before you proceed. Notice that the catalog image is automatically added to the Firmware Repository when you add the Nexus 9k and APIC upgrade Firmware images.

IMPORTANT: You will notice the CATALOG images in the below output. These are generated automatically once you have the Fabric and the Controller image correctly synchronized.

admin@APIC:~> firmware list
Name                 : aci-n9000-dk9.11.2.1m.bin
Type                 : switch
Version              : 11.2(1m)
Size(Bytes)          : 558351658
Release-Date         : 2016-01-29T07:07:15.000+01:00
Download-Date        : 2016-02-04T09:56:40.833+01:00

Name                 : aci-apic-dk9.1.2.1m.bin
Type                 : controller
Version              : 1.2(1m)
Size(Bytes)          : 3936555008
Release-Date         : 2016-01-29T01:57:59.000+01:00
Download-Date        : 2016-02-03T19:19:42.110+01:00

Name                 : aci-catalog-dk9.1.2.1m.bin
Type                 : catalog
Version              : 1.2(1m)
Size(Bytes)          : 25358
Release-Date         : 2016-01-29T00:19:57.000+01:00
Download-Date        : 2016-02-03T19:19:44.034+01:00

Name                 : aci-catalog-dk9.1.0.3f.bin
Type                 : catalog
Version              : 1.0(3f)
Size(Bytes)          : 18064
Release-Date         : 2015-02-10T01:27:12.000+01:00
Download-Date        : 2016-02-02T08:29:32.530+01:00


As you can see below, the APIC controller is still in the old 1.0(3f) version:

admin@APIC:~> firmware upgrade status
Node-Id    Role            Current-Firmware     Target-Firmware      Upgrade-Status  
-----------------------------------------------------------------------------------
1          controller      apic-1.0(3f)                              completeok      100
101        leaf            n9000-11.2(1m)       n9000-11.2(1m)       completeok      100
102        leaf            n9000-11.2(1m)       n9000-11.2(1m)       completeok      100
201        spine           n9000-11.2(1m)       n9000-11.2(1m)       completeok      100


Start the APIC Upgrade, and check the status:

admin@APIC:~> firmware upgrade controllers aci-apic-dk9.1.2.1m.bin
Firmware Upgrade on Controllers has been scheduled.
The upgrade will be performed on one controller at a time in the background.

admin@APIC:~> firmware upgrade status
Node-Id    Role            Current-Firmware     Target-Firmware      Upgrade-Status  Progress-
----------------------------------------------------------------------------------------------
1          controller      apic-1.0(3f)         apic-1.2(1m)         inprogress      0
101        leaf            n9000-11.2(1m)       n9000-11.2(1m)       completeok      100
102        leaf            n9000-11.2(1m)       n9000-11.2(1m)       completeok      100
201        spine           n9000-11.2(1m)       n9000-11.2(1m)       completeok      100


At a certain moment you will get this message:
admin@APIC:~>
Broadcast message from root@APIC
(unknown) at 10:57 ...

The system is going down for reboot NOW!


Once you get control back, do not panic: the commands have changed, but as you will see from the "show version" output, the entire ACI architecture has now been upgraded:

Application Policy Infrastructure Controller
admin@10.20.70.92's password:
APIC# firmware upgrade status
Error: Invalid argument 'status '. Please check syntax in command reference guide
APIC#
APIC# show ver
 Role        Id          Name                      Version
 ----------  ----------  ------------------------  --------------------
 controller  1           APIC                      1.2(1m)
 leaf        101         Leaf1                     n9000-11.2(1m)
 leaf        102         Leaf2                     n9000-11.2(1m)
 spine       201         Spine                     n9000-11.2(1m)


You can now SSH into any of the Nodes from ACI. Nexus Switches in ACI mode do have CLI, but it's different. For example, the “?” won’t work, but the Double-ESC will (quickly press the “escape” key twice). Also the “include” and “begin” commands won’t work, but “grep” will :)
What happens with the VTEPs within the Fabric? During the initial ACI configuration I defined the 172.1.0.0/16 range for the VTEPs. Let's first connect to one of the Leaf Switches and check the local interfaces that belong to the VTEP IP range, and the routing table in the overlay-1 VRF (the VRF that is internally used by the fabric for VTEP routing):

Leaf2# show ip interface brief | grep 172
vlan7                172.1.0.30/27        protocol-up/link-up/admin-up
lo0                  172.1.0.93/32        protocol-up/link-up/admin-up
lo1023               172.1.0.32/32        protocol-up/link-up/admin-up


From the output above we can clearly see that the loopbacks are in fact the VTEP interfaces. They are all /32, exactly as the VTEPs should be.

Leaf2# show vrf all
 VRF-Name                           VRF-ID State    Reason
 black-hole                              3 Up       --
 overlay-1                               4 Up       --

Leaf2# show ip route vrf overlay-1
IP Route Table for VRF "overlay-1"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

172.1.0.0/27, ubest/mbest: 1/0, attached, direct
    *via 172.1.0.30, vlan7, [1/0], 20:56:08, direct
172.1.0.1/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/12], 05:02:39, isis-isis_infra, L1
172.1.0.30/32, ubest/mbest: 1/0, attached
    *via 172.1.0.30, vlan7, [1/0], 20:56:08, local, local
172.1.0.32/32, ubest/mbest: 2/0, attached, direct
    *via 172.1.0.32, lo1023, [1/0], 20:54:12, local, local
    *via 172.1.0.32, lo1023, [1/0], 20:54:12, direct
172.1.0.93/32, ubest/mbest: 2/0, attached, direct
    *via 172.1.0.93, lo0, [1/0], 20:54:20, local, local
    *via 172.1.0.93, lo0, [1/0], 20:54:20, direct
172.1.0.94/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/2], 05:02:39, isis-isis_infra, L1
172.1.0.95/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/3], 05:02:39, isis-isis_infra, L1
172.1.208.64/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/2], 05:02:39, isis-isis_infra, L1
172.1.208.65/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/2], 05:02:39, isis-isis_infra, L1
172.1.208.66/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/2], 05:02:39, isis-isis_infra, L1
172.1.216.65/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/2], 05:02:39, isis-isis_infra, L1
172.1.216.66/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/2], 05:02:39, isis-isis_infra, L1
172.1.216.67/32, ubest/mbest: 1/0
    *via 172.1.0.94, eth1/49.1, [115/2], 05:02:39, isis-isis_infra, L1
Leaf2#


The internal protocol of the Spine-Leaf Fabric of ACI is IS-IS, and as we can see in the Routing Table above, on the Leaf Switch all the remote VTEPs are learned via IS-IS (isis_infra) through the fabric uplink, while the local VTEPs are directly attached loopbacks.
Now let's check the Spine Switch:

Spine# show ip interface brief | grep 172
lo0                  172.1.0.94/32        protocol-up/link-up/admin-up
lo1                  172.1.208.65/32      protocol-up/link-up/admin-up
lo2                  172.1.216.65/32      protocol-up/link-up/admin-up
lo3                  172.1.208.64/32      protocol-up/link-up/admin-up
lo4                  172.1.216.66/32      protocol-up/link-up/admin-up
lo5                  172.1.216.67/32      protocol-up/link-up/admin-up
lo6                  172.1.208.66/32      protocol-up/link-up/admin-up

In the output above we can see the 7 VTEP interfaces created on the Spine so far. An important thing to notice is that there are NO VLANs on the Spine Switch at this point, and on each Leaf there is only one automatically provisioned VLAN, which is used for the APIC connection (the APIC is plugged into port e1/1 of each of the 2 Leafs):

Spine# show vlan brief

 VLAN Name                             Status    Ports
 ---- -------------------------------- --------- -------------------------------
Spine#


Leaf1# show vlan br

 VLAN Name                             Status    Ports
 ---- -------------------------------- --------- -------------------------------
 7    infra:default                    active    Eth1/1
Leaf1#

Leaf2# show vlan br

 VLAN Name                             Status    Ports
 ---- -------------------------------- --------- -------------------------------
 7    infra:default                    active    Eth1/1
Leaf2#


Cisco ACI Guide for Humans, Part 1: Physical Connectivity

First of all, I need to explain why I decided to write such a post. The reason is quite clear to everyone who has ever tried to deploy, configure or simply understand how Cisco ACI works using the official Cisco documentation. Cisco ACI is a very powerful architecture, and once you learn it - you start loving it. My impression is that Cisco hired App Development experts to develop the ACI GUI and the ACI design and configuration guides, and the final product turned out to be hard to digest for both DevOps and Networking professionals. That is why I feel there is a need to explain the concepts in a way that is easier to understand for us, humans.

TIP: APIC maintains an audit log for all configuration changes to the system. This means that all the changes can be easily reverted.
Before the ACI installation starts, we need to connect every ACI controller (APIC) to 2 Leafs. There should be 3 or 5 APICs, for high availability, and a standard procedure, once the cabling is done, should be:

  • Turn ON and perform a Fabric Discovery.
  • Configure Out-of-Band Management.
  • Configure the NTP Server in the "Fabric Policies -> POD Policies" menu. This is very important, because if the Fabric and the Controllers are in different time zones, for example, ACI won't synchronise correctly.


Once the Fabric Discovery is done, you need to enter the mgmt tenant, and within the Node Management Addresses create the Static Entries for all your nodes. In our case, we have 3 nodes: Spine (201) and 2 Leafs (101 and 102). This means that since the nodes are not consecutive, you should create 2 Static Entries, one for nodes 101-102, and the second one for the node 201. You should choose the “default” Node Management EPG for now, and you will end up with:




When we are looking at a real world ACI deployment, in the Typical Migration Scenario a client would want us to migrate 2 different environments:
- Virtual Environment, where we would need to first define all VM types and "group" them (define EPGs).
- Physical Environment.
Once we have the environments defined, we need to build the ANPs (Application Network Profiles), where we will group all the EPGs that need to inter-communicate.
Once the initial design is done, we need to make a list of all the tasks we need to do, and start building up the Tenants. Be sure you understand what the Infra and Common tenants are before you start planning the Configuration. Configuration objects in the Common tenant are shared with all other tenants (things that affect the entire fabric):
- Private Networks (Context or VRF)
- Bridge Domains
- Subnets

1. Physical Connectivity/Fabric Policies
The communication with the outside world (external physical network) starts with a simple question: who from the outside world needs to access the "Service" (ANP in the ACI "language")? Once we have this answered, we need to define an EPG with these users. Let's say the financial department needs to access the ANP, which is a Salary Application. We will create an EPG called "Financial_Department_EPG", which might be an External L2 EPG where we group all the guys from Finances. This EPG will access the Financial Application Web Server, so the Financial_Web_EPG will need to provide a CONTRACT that allows access to the Financial_Department_EPG.
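As a rough reference, the sketch below shows what such a Tenant/ANP/EPG/Contract structure could look like as an APIC REST payload built in Python (the names match the example above but are otherwise made up, the EPG-to-Bridge-Domain associations are omitted for brevity, and the exact attribute set can differ between ACI versions):

import json

tenant = {
    "fvTenant": {
        "attributes": {"name": "Finance"},
        "children": [
            # The contract that the Web EPG provides and the Department EPG consumes.
            {"vzBrCP": {"attributes": {"name": "Salary_App_Contract"},
                        "children": [{"vzSubj": {"attributes": {"name": "http"}}}]}},
            # The ANP ("Salary Application") with its two EPGs.
            {"fvAp": {"attributes": {"name": "Salary_Application"},
                      "children": [
                {"fvAEPg": {"attributes": {"name": "Financial_Web_EPG"},
                            "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": "Salary_App_Contract"}}}]}},
                {"fvAEPg": {"attributes": {"name": "Financial_Department_EPG"},
                            "children": [{"fvRsCons": {"attributes": {"tnVzBrCPName": "Salary_App_Contract"}}}]}},
            ]}},
        ],
    }
}

print(json.dumps(tenant, indent=2))   # this body would be POSTed to https://<apic>/api/mo/uni.json after login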
Domains are used to interconnect the Fabric configuration with the Policy configuration. Different domain types are created depending on how a device is connected to the leaf switch. There are four different domain types:
- Physical domains, for physical servers (no hypervisor).
- External bridged domains, for a connection to L2 Switch via dot1q trunk.
- External routed domains, for a connection to a Router/WAN Router.
- VMM domains, which are used for Hypervisor integration. 1 VMM domain per 1 vCenter Data Center.
The ACI fabric provides multiple attachment points that connect through leaf ports to various external entities such as baremetal servers, hypervisors, Layer 2 switches (for example, the Cisco UCS fabric interconnect), and Layer 3 routers (for example Cisco Nexus 7000 Series switches). These attachment points can be physical ports, port channels, or a virtual port channel (vPC) on the leaf switches.
VLANs are instantiated on leaf switches based on AEP configuration. An attachable entity profile (AEP) represents a group of external entities with similar infrastructure policy requirements. The fabric knows where the various devices in the domain live and the APIC can push the VLANs and policy where it needs to be. AEPs are configured under global policies. The infrastructure policies consist of physical interface policies, for example, Cisco Discovery Protocol (CDP), Link Layer Discovery Protocol (LLDP), maximum transmission unit (MTU), and Link Aggregation Control Protocol (LACP). A VM Management (VMM) domain automatically derives the physical interfaces policies from the interface policy groups that are associated with an AEP.
VLAN pools contain the VLANs used by the EPGs the domain will be tied to. A domain is associated to a single VLAN pool. VXLAN and multicast address pools are also configurable. VLANs are instantiated on leaf switches based on AEP configuration. Forwarding decisions are still based on contracts and the policy model, not subnets and VLANs. Different overlapping VLAN pools must not be associated with the same attachable access entity profile (AAEP).

The two types of VLAN-based pools are as follows:

  • Dynamic pools - Managed internally by the APIC to allocate VLANs for endpoint groups (EPGs). A VMware vCenter domain can associate only to a dynamic pool. This is the pool type that is required for VMM integration.
  • Static pools - The EPG has a relation to the domain, and the domain has a relation to the pool. The pool contains a range of encapsulated VLANs and VXLANs. For static EPG deployment, the user defines the interface and the encapsulation. The encapsulation must be within the range of a pool that is associated with a domain with which the EPG is associated.

An AEP provisions the VLAN pool (and associated VLANs) on the leaf. The VLANs are not actually enabled on the port. No traffic flows unless an EPG is deployed on the port. Without VLAN pool deployment using an AEP, a VLAN is not enabled on the leaf port even if an EPG is provisioned. The Infrastructure VLAN is required for AVS communication to the fabric using the OpFlex control channel.
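As an illustration, this is roughly how a static VLAN pool with one encapsulation block is modelled as an object (class and attribute names as I know them from the APIC object model; the pool name and VLAN range are made-up example values):

import json

# A static VLAN pool with a single encapsulation block.
vlan_pool = {
    "fvnsVlanInstP": {
        "attributes": {"name": "Physical_Servers_Pool", "allocMode": "static"},
        "children": [
            {"fvnsEncapBlk": {"attributes": {"from": "vlan-500", "to": "vlan-600"}}}
        ],
    }
}

print(json.dumps(vlan_pool, indent=2))   # typically POSTed under /api/mo/uni/infra.json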

Now that this is all clear, we can configure, for example, a Virtual Port Channel between our Leaf Switches and an external Nexus Switch. In our case, we are using a Nexus 5548 (5.2). Physical Connectivity to ACI will generally be handled using the Access Policies. There is a somewhat non-intuitive procedure that needs to be followed here, so let's go through it together:

1.1 Create the Interface Policies you need.
You only need to create Interface Policies if you need a Policy on the Interface that is different from the Default policy. For example, the default LLDP state is ENABLE, so if you want to enable LLDP - just use the default policy. In this case you will most probably need only the Port-Channel Policy, because the Default Port-Channel policy enables the "ON" mode (Static Port-Channel).

1.2 Create the Switch Policy.
This is the step where you choose the Physical Leaf Switches where you need to apply your Policy. In our case we will choose both Leaf Switches (101 and 102). This is done under Switch Policies -> Policies -> Virtual Port Channel Default.

1.3 Create the Interface Policy Group.
In this step you need to create the Group that gathers the Interface Policies you want to use on the vPC. This means that we need to create a vPC Interface Policy Group and add to it the Interface Policies we created in step 1.1.

1.4 Create the Interface Profile.
This is the step that will let you specify on which ports the vPC will be configured. In our case we want to choose the interface e1/3 of each Leaf.

1.5 Create the Switch Profile.
Switch Profile lets you choose the exact Leaf Switches you want the policy applied on, and select the previously configured Interface Profile to specify the vPC Interfaces on each of those leaf switches.
Check if everything is in order:

Nexus# show port-channel summary

3     Po3(SU)     Eth      LACP      Eth1/17(P)   Eth1/18(P)

Leaf1# show vpc ext
Legend:
(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : Disabled
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 1
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
Auto-recovery status              : Enabled (timeout = 240 seconds)
Operational Layer3 Peer           : Disabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1           up     -

vPC status
---------------------------------------------------------------------------------
id   Port   Status Consistency Reason               Active vlans Bndl Grp Name
--   ----   ------ ----------- ------               ------------ ----------------
1    Po1    up     success     success              -            vPC_101_102


IMPORTANT: ID and port-channel number (Po#) are automatically created and will vary. Notice no active VLANs. They will appear once you have created and associated an AEP.
Multicast is also allowed in the ACI Fabric: the MCAST trees are built, and in the case of failure there is FRR (Fast Re-Route). The ACI Fabric knows the MCAST tree, and delivers the MCAST frames exactly on the ports of the Leaf Switches where they are supposed to go. This might be a bit confusing when you consider that ACI actually STRIPS the encapsulation to save Bandwidth when the frame gets to the Leaf Port (this applies to all external encapsulations: dot1q, VxLAN, NVGRE...), and adds it back on when the "exit" Leaf needs to forward the frame to the external network.

2. Tenant(s) and 3. VRF are concepts that I think are clear enough even from the official Cisco documentation, so I won't go too deep into them.

4. Bridge Domains and EPGs
Once you create the Bridge Domain, you need to define the Subnets that will reside within it. These Subnets are used as the Default Gateways within the ACI Fabric, and the Default Gateway of a Subnet is the equivalent of an SVI on a Switch.
In our case we created a Bridge Domain called "ACI_Local_BD", and decided to interconnect 2 Physical PCs on different subnets and see if they can ping each other when we put them in the same EPG. In order to do this we created the following Subnets within the Bridge Domain (a REST-style sketch of this object follows the list):

  • 172.2.1.0/24 with the GW 172.2.1.1 (configured as a Private IP on ACI Fabric, and as the Principal GW within the Bridge Domain)
  • 172.2.2.0/24 with the GW 172.2.2.1 (configured as a Private IP on ACI Fabric)
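For reference, this is roughly how that Bridge Domain and its two gateway Subnets look as an APIC REST payload (a hand-written sketch; the VRF association and other attributes are omitted, and the names follow our example):

import json

bridge_domain = {
    "fvBD": {
        "attributes": {"name": "ACI_Local_BD"},
        "children": [
            # Each subnet's gateway IP acts as the default gateway (the SVI equivalent).
            {"fvSubnet": {"attributes": {"ip": "172.2.1.1/24", "scope": "private"}}},
            {"fvSubnet": {"attributes": {"ip": "172.2.2.1/24", "scope": "private"}}},
        ],
    }
}

print(json.dumps(bridge_domain, indent=2))   # POSTed under the tenant, e.g. /api/mo/uni/tn-Connectivity_Tests.json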




Once we have the BD and the Subnets created, we need to define the EPG(s). Since in our case we are dealing with Physical Servers, we know exactly which physical port of each Leaf they are plugged into. This means that the easiest way to assign the Physical Servers to the EPG is to define Static Bindings.

IMPORTANT: If you use the Static Bindings (Leafs), all the ports within the Leaf you configure will statically belong to the EPG.

In our case we configured the ports e1/21 and e1/22 of Leaf 2, and the port e1/21 of Leaf 1, as shown on the screenshot below.

TIP: At one point you will need to manually define the encapsulation of the traffic coming from this Node within the ACI Fabric. This is not the Access VLAN number on the Leaf port - that VLAN is assigned locally by the Leaf. This is a VLAN that needs to come from the VLAN Pool you defined for the Physical Domain.
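A static binding is just another child object of the EPG. The sketch below shows roughly how one port binding with its encapsulation (taken from the Physical Domain's VLAN pool) is expressed; the path and VLAN are illustrative values matching our lab:

import json

# Static binding of Leaf 101, port eth1/21, with encapsulation vlan-502 from the pool.
static_binding = {
    "fvRsPathAtt": {
        "attributes": {
            "tDn": "topology/pod-1/paths-101/pathep-[eth1/21]",
            "encap": "vlan-502",
            "mode": "regular",   # trunk mode; "untagged" would behave like an access port
        }
    }
}

print(json.dumps(static_binding, indent=2))   # POSTed as a child of the EPG object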



Now comes the “cool” part (at least for the Networking guys). We will check what is happening with the VLANs on the Leaf Switches.

Leaf2# show vlan extended

 VLAN Name                             Status    Ports
 ---- -------------------------------- --------- -------------------------------
 7    infra:default                    active    Eth1/1, Eth1/5
 8    Connectivity_Tests:ACI_Local_BD  active    Eth1/21, Eth1/22
 9    Connectivity_Tests:Logicalis_Int active    Eth1/22
      ernal:Portatiles_Logicalis
 10   Connectivity_Tests:Logicalis_Int active    Eth1/21
      ernal:Portatiles_Logicalis

 VLAN Type  Vlan-mode  Encap
 ---- ----- ---------- -------------------------------
 7    enet  CE         vxlan-16777209, vlan-4093
 8    enet  CE         vxlan-16121790
 9    enet  CE         vlan-502
 10   enet  CE         vlan-501



Leaf1# show vlan ext

 VLAN Name                             Status    Ports
 ---- -------------------------------- --------- -------------------------------
 7    infra:default                    active    Eth1/1, Eth1/5
 10   Connectivity_Tests:ACI_Local_BD  active    Eth1/21
 11   Connectivity_Tests:Logicalis_Int active    Eth1/21
      ernal:Portatiles_Logicalis

 VLAN Type  Vlan-mode  Encap
 ---- ----- ---------- -------------------------------
 7    enet  CE         vxlan-16777209, vlan-4093
 10   enet  CE         vxlan-16121790
 11   enet  CE         vlan-502         
                                                                                                                                                 
First of all, keep in mind that the VLANs have only local significance on the Switch; they are NOT propagated within the ACI Fabric. Notice the following VLANs in the previous output:
- VLAN 7: The default infra VLAN. This VLAN number itself has no importance at all. The important part of the output is the "Encap" column, where VxLAN 16777209 and VLAN 4093 (the actual Infrastructure VLAN) appear. These 2 entities carry the traffic between the Spines and the Leafs.
- VLANs 8, 9, 10 and 11 are also not important for ACI, only for the Leafs. This means that on the Leaf Ports there is a "switchport access vlan 8" command configured. The important parts are VLANs 501 and 502, which carry the traffic within the ACI Fabric.
If you focus on how the local Leaf VLANs are named, you will figure out the following structure: Tenant -> ANP -> EPG. This is done by ACI to give you a better preview of what these local VLANs are for.

5. ANP and 6. Contracts will not be explained at this moment.

7. Virtual Machine Manager Integration
Virtual Machine Manager Domain or VMM Domain - Groups VM controllers with similar networking policy requirements. For example, the VM controllers can share VLAN or Virtual Extensible Local Area Network (VXLAN) space and application endpoint groups (EPGs).
The APIC communicates with the controller to publish network configurations such as port groups that are then applied to the virtual workloads.
Note: A single VMM domain can contain multiple instances of VM controllers, but they must be from the same vendor (for example, from VMware or from Microsoft).
The objective here is to create a VMM Domain. Upon creating the VMM domain, APIC will populate the datacenter object in vCenter with a virtual distributed switch (VDS). You need to create a VLAN pool to be associated with the VMM domain. Keep in mind that the VLAN Pool configuration is global to the ACI Fabric, because the VLANs apply to the physical Leaf Switches, and they are configured in the "Fabric -> Access Policies -> Pools" menu.
Apart from this, you will need to actually create the VMM Domain (VM Networking Menu), and define the Hypervisor IP and credentials and associate the previously created AEP to your VMM Domain. Once you have the VMM Domain created and all the hosts in the new VDS, you need to associate your EPGs with the VMM Domain, in order to add the Endpoints from the Hypervisor to the EPG.

TIP: Don't forget that you need to add the ESXi hosts to your newly created VDS manually, from vSphere.

8. RBAC - Works exactly the same as RBAC (Role-Based Access Control) on any other Cisco platform.

9. Layer 2 and 3 External Connectivity
L2 Bridge: Packet forwarding between an EP in bridge domain "BD1" and external hosts in VLAN 500 is L2 bridged.

IMPORTANT: We need one external EPG for each L2 external connection (VLAN).
Trunking multiple VLANs over the same link requires multiple L2 External EPGs, each in a unique BD. A Contract is required between the L2 External EPG and the EPG inside the ACI fabric.

10. Layer 4 to Layer 7 Services/Devices [Service Function Insertion]
There are 3 major steps we need to perform in order to integrate an external L4-7 Service with ACI:
  • Import the Device Package to ACI.
  • Create the Logical Devices.
  • Create the Concrete Devices.

The APIC uses northbound APIs for configuring the network and services. You use these APIs to create, delete, and modify a configuration using managed objects. When a service function is inserted in the service graph between applications, traffic from these applications is classified by the APIC and identified using a tag in the overlay network. Service functions use the tag to apply policies to the traffic. For the ASA integration with the APIC, the service function forwards traffic using either routed or transparent firewall operation.

