I am back at Cisco’s headquarters in Sydney for another ACI training. What makes this training different from the previous ones I have attended (so far), at Cisco Live Melbourne and here in Sydney, is that it is a deep dive into the technology rather than just being thrown a bunch of marketing jargon and hardware specs. This training is more focused on migrating a customer’s current datacentre infrastructure to ACI, or should I say a brownfield moving into ACI. I could get a lot from this, as my company needs to prepare to move its current DC and UCS platform into ACI. One of the things I am after is the multi-site capability, which was mentioned at Cisco Live and which they promised back then would be released in the third quarter of 2017.
Training was given by NIL, a Cisco partner. They provided us with a 336-page manual, which dwarfs all the materials I received from previous trainings, and that does not include the lab material itself. Before I get into the meat of what I learned, here is some news about the latest ACI revision launched at Cisco Live Vegas. ACI 2.3 can now support 400 nodes, and for multi-pod the allowed RTT over the pod-to-pod link has been relaxed from 10ms to 50ms, though the distance limit is still 500 miles. This is probably the main reason I would wait it out for the multi-site feature, which is expected to arrive with version 3.0. It is unclear whether Cisco will jump straight from 2.3 to 3.0, but I expect it to be a huge revision, especially with the official release of 3.0.
The idea of a cloud pod was floated: the concept of deploying the leaves of your pod in the cloud with AWS, Azure or some other cloud provider. Participants then brought up the question of virtualizing ACI in GNS3, but the trainer clarified that this would not be possible. There is a very expensive simulator available, but it would not interface with your actual infrastructure. He did hint, though, that with the proper Linux know-how it can be done.
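Whether against the simulator or a real fabric, everything in ACI is driven through the APIC REST API, so it is worth knowing what that looks like. Here is a minimal sketch of authenticating against an APIC; the hostname and credentials are placeholders, and a lab or simulator APIC will usually need certificate verification switched off.

```python
import requests

APIC = "https://apic.example.com"  # placeholder APIC address

session = requests.Session()

# Authenticate: POST the credentials to the aaaLogin endpoint. The APIC
# replies with a token cookie (APIC-cookie) that the Session object keeps
# and attaches to every subsequent request automatically.
resp = session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
    verify=False,  # lab/simulator APICs typically use self-signed certs
)
resp.raise_for_status()
```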
Cisco will also be integrating more security features into ACI, such as basic security, security groups and multi-tenancy security. It is not really clear whether Firepower capabilities will be inserted into the leaves; I guess we will just have to wait until they make an official announcement.
Another revision in terms of design is that you no longer need a full mesh topology, where for ACI to function each leaf needed a link to every spine, though I am not really sure what its benefits will be.
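If you want to see for yourself which leaves and spines are registered in a fabric, you can pull the inventory over the same REST API. A small sketch, reusing the authenticated session from above; fabricNode is the APIC class that holds one object per switch or controller.

```python
# Reuses the authenticated `session` and APIC base URL from the login sketch.
# fabricNode holds one object per registered node (leaf, spine or controller).
resp = session.get(f"{APIC}/api/node/class/fabricNode.json", verify=False)
resp.raise_for_status()

for obj in resp.json()["imdata"]:
    attrs = obj["fabricNode"]["attributes"]
    print(attrs["name"], attrs["role"], attrs["model"])
```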
Day 1: Overview of ACI, standard in any training. It was made clear that this course is not for beginners; you should have at least a professional-level understanding of routing and switching and some DC experience, or else you will just get lost. It is also best suited to participants who have taken a previous overview class or have read about ACI, because there is a lot of new jargon to learn, and I myself got confused when I first encountered it.
What makes this training unique among the trainings I have attended is that we were presented with a case study of an actual customer and the detailed steps of how their company was transitioned from a legacy, network-centric datacentre to an application-centric one: learning every detail of the company’s network, building out what it would look like once migrated to ACI, and then the migration and cutover itself.
I brought up the question: since ACI runs on the same concepts of multi-tenancy, VRFs and BDs, and will eventually have multi-site capability, it behaves very much like an MPLS network. Is ACI capable of acting as a transit network that would eventually replace MPLS? The instructor’s answer was no, not for now, though there is project GOLF.
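The MPLS analogy maps neatly onto ACI’s object model: a tenant plays the role of a customer, holding its own VRFs, with bridge domains hanging off a VRF much like L2 segments behind a PE. A hedged sketch of creating that hierarchy, again reusing the session from earlier; the tenant, VRF and BD names are purely illustrative.

```python
# Create a tenant containing one VRF (fvCtx) and one bridge domain (fvBD)
# bound to that VRF, mirroring the customer/VRF split in an MPLS VPN.
# The names below are illustrative, not from any real deployment.
tenant = {
    "fvTenant": {
        "attributes": {"name": "customer-a"},
        "children": [
            {"fvCtx": {"attributes": {"name": "vrf-1"}}},
            {
                "fvBD": {
                    "attributes": {"name": "bd-web"},
                    "children": [
                        # fvRsCtx ties the bridge domain to its VRF
                        {"fvRsCtx": {"attributes": {"tnFvCtxName": "vrf-1"}}}
                    ],
                }
            },
        ],
    }
}

# POST under the policy universe (uni); reuses `session` and APIC from above.
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)
resp.raise_for_status()
```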
As far as ACI multi-site is concerned, it looks like the plan is that a spine, or a couple of spines, will still run NX-OS, and these spines will provide IP transit, in our case through our MPLS network. RTT would no longer matter; however, the APIC controllers would need a separate point-to-point link between each other. It is still unclear whether this would be a dedicated link that does not go through the fabric.