
L2 IPv4 Multicast in ACI

Alexander Deca

Updated: Feb 6, 2022

I recently figured out that documentation and configuration guides can be ambiguous, meaning that my brain sometimes interprets things differently from what the original author of the document or guide intended.

This short blog will remind me in a couple of months what the problem was and how it got solved, even though the solution is straightforward.


As we migrate endpoints towards the production environment (virtual workloads, physical bare-metal devices, and the works), this inevitably includes applications that use multicast to determine which host is the master and which host is the slave.


However, before the migration to production, those applications are tested in the lab environment to confirm that, once migrated from the legacy environment, they still operate as expected.


In this case, those hosts are two virtual machines residing in the same Bridge Domain (BD) with an anycast gateway configured on ACI, and they belong to a single End-Point Group (EPG). Those working with ACI understand that this is considered a network-centric approach.

VM Application connectivity
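
For readers less familiar with that approach, here is a rough sketch of what such a one-BD/one-EPG setup could look like when pushed through the APIC REST API with Python. Everything in it (APIC address, credentials, tenant, BD, EPG and subnet names) is hypothetical; the same result is normally achieved through the GUI.

    import requests

    APIC = "https://apic.example.com"        # hypothetical APIC address
    s = requests.Session()
    s.verify = False                         # lab only; use proper certificates in production
    s.post(f"{APIC}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

    # One BD with an anycast gateway subnet and a single EPG bound to it:
    # the 1:1 BD-to-EPG mapping is what is usually called "network-centric".
    payload = {
        "fvTenant": {
            "attributes": {"name": "PROD"},
            "children": [
                {"fvBD": {"attributes": {"name": "BD_APP"},
                          "children": [{"fvSubnet": {"attributes": {"ip": "10.10.10.1/24"}}}]}},
                {"fvAp": {"attributes": {"name": "AP_APP"},
                          "children": [{"fvAEPg": {"attributes": {"name": "EPG_APP"},
                                                   "children": [{"fvRsBd": {"attributes": {"tnFvBDName": "BD_APP"}}}]}}]}},
            ],
        }
    }
    s.post(f"{APIC}/api/mo/uni.json", json=payload)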

With this particular setup, there is no need to configure L3 multicast on the ACI fabric, as the hosts are part of the same Bridge Domain (BD). The relevant settings of this BD (also sketched in the snippet after the list) are:

  • L2 Unknown Unicast is set to Hardware Proxy

  • L3 Unknown Multicast flooding is set to Flood

  • L3 IPv6 Unknown Multicast flooding is set to Flood

  • Multi-Destination flooding is set to Flood in BD
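
For reference, these four knobs map, to the best of my knowledge, to attributes on the fvBD object, so the same settings could be pushed via the REST API roughly as follows. The tenant and BD names are the hypothetical ones from the earlier sketch.

    import requests

    APIC = "https://apic.example.com"        # hypothetical APIC address
    s = requests.Session()
    s.verify = False                         # lab only
    s.post(f"{APIC}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

    # BD forwarding settings matching the list above (attribute names per the fvBD class).
    payload = {"fvBD": {"attributes": {
        "name": "BD_APP",
        "unkMacUcastAct": "proxy",           # L2 Unknown Unicast: Hardware Proxy
        "unkMcastAct": "flood",              # L3 Unknown Multicast flooding: Flood
        "v6unkMcastAct": "flood",            # L3 IPv6 Unknown Multicast flooding: Flood
        "multiDstPktAct": "bd-flood",        # Multi-Destination flooding: Flood in BD
    }}}
    s.post(f"{APIC}/api/mo/uni/tn-PROD/BD-BD_APP.json", json=payload)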

While those hosts were running in the lab environment, everything worked as expected and the failover tests were successful, so it was decided to include this application in the next batch of migrations.


Initially, everything worked as expected; however, after a couple of minutes the cluster broke because the hosts could no longer see each other at the multicast level. At first I did not understand what was going on with the application, but during troubleshooting I figured out that the virtual machines were running on different ESXi hosts, connected to different vPC leaf pairs within ACI. It hit me that there must be an issue with the IGMP snooping timeout.


I then reviewed the "Cisco APIC Layer 3 Networking Configuration Guide, Release 4.2(x)," specifically the part about IGMP snooping and the IGMP snooping querier, and here is where the ambiguity lies:


"Cisco ACI has, by default, IGMP snooping and IGMP snooping querier enabled. Additionally, if the Bridge Domain subnet control has “querier IP” selected, the leaf switch querier and sends query packets. Querier on the ACI leaf switch must be enabled when the segments do not have an explicit multicast router (PIM is not enabled). On the Bridge Domain where the querier is configured, the IP address used must be from the same subnet where the multicast hosts are configured."

The first sentence made me think that a default policy has IGMP snooping and the IGMP snooping querier enabled within the ACI fabric. The only configuration needed, so I thought, was to set the "Querier IP" flag under the L3 policy of the BD.
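
For what it is worth, that "Querier IP" flag corresponds, as far as I can tell, to the ctrl attribute of the BD subnet object, so it could be set via the API roughly like this (same hypothetical names as in the earlier sketches):

    import requests

    APIC = "https://apic.example.com"        # hypothetical APIC address
    s = requests.Session()
    s.verify = False                         # lab only
    s.post(f"{APIC}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

    # Tick "Querier IP" on the anycast gateway subnet of the BD (ctrl flag on fvSubnet).
    payload = {"fvSubnet": {"attributes": {"ip": "10.10.10.1/24", "ctrl": "querier"}}}
    s.post(f"{APIC}/api/mo/uni/tn-PROD/BD-BD_APP.json", json=payload)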

In all fairness, I did not initially verify whether the querier was deployed. Still, to my surprise, there was no querier active for the VLAN representing the BD, which explains the behavior seen in the production environment once both VMs were connected to different leaf switches: the multicast flow broke after a while. To solve this, an IGMP snooping policy with "enable querier" selected needed to be defined and applied to the BD, as shown in the screenshots and the sketch below.


Enable IGMP querier under the policy

Apply the IGMP policy to the BD
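
For completeness, here is a rough API equivalent of those two screenshots: an IGMP snooping policy with the querier enabled, attached to the BD. The policy name is hypothetical, and the ctrl value "querier" on igmpSnoopPol is my understanding of how the "Enable querier" checkbox is modelled.

    import requests

    APIC = "https://apic.example.com"        # hypothetical APIC address
    s = requests.Session()
    s.verify = False                         # lab only
    s.post(f"{APIC}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

    payload = {
        "fvTenant": {
            "attributes": {"name": "PROD"},
            "children": [
                # IGMP snooping policy with "Enable querier" selected
                {"igmpSnoopPol": {"attributes": {"name": "IGMP_QUERIER",
                                                 "adminSt": "enabled",
                                                 "ctrl": "querier"}}},
                # Apply the policy to the BD via its IGMP snoop policy relation
                {"fvBD": {"attributes": {"name": "BD_APP"},
                          "children": [{"fvRsIgmpsn": {"attributes": {"tnIgmpSnoopPolName": "IGMP_QUERIER"}}}]}},
            ],
        }
    }
    s.post(f"{APIC}/api/mo/uni.json", json=payload)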

Once you have applied this policy, your multicast flow should start working, and you can verify on the leaf switches that there is a querier defined for your specific VLAN (the internal VLAN that ACI uses to represent your BD on that leaf), for example with "show ip igmp snooping querier".

CLI verification IGMP Snooping querier
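
The screenshot above shows the leaf-side check; from the APIC side, a quick readback of the BD's IGMP snooping relation can at least confirm the policy was attached (the operational querier state itself lives on the leaf). Same hypothetical names as before:

    import requests

    APIC = "https://apic.example.com"        # hypothetical APIC address
    s = requests.Session()
    s.verify = False                         # lab only
    s.post(f"{APIC}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

    # Read back the fvRsIgmpsn child of the BD; it should reference the querier policy.
    r = s.get(f"{APIC}/api/mo/uni/tn-PROD/BD-BD_APP.json",
              params={"query-target": "children", "target-subtree-class": "fvRsIgmpsn"})
    print(r.json())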

Note:

L2 multicast within this blog refers to L2 IPv4 multicast packets with an IPv4 header, not packets that only carry a multicast destination MAC address or packets in the link-local IPv4 multicast address range 224.0.0.0/24.


Instead of defining an IGMP snooping policy with the querier enabled and activating the querier setting under the L3 policy of the BD, you can enable PIM on the BD and at the VRF level. The only tricky part is that, for this to work, a Rendezvous Point needs to be configured under the multicast configuration of the VRF, as it is required for the policies to be pushed completely to the individual leaf switches.
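
As a very rough illustration of that alternative: mcastAllow on the BD enables L3 multicast, and a pimCtxP child on the VRF enables PIM there. The tenant and VRF names are hypothetical, the class names are my assumptions, and the static Rendezvous Point is only described in a comment because I am not certain of its exact object names.

    import requests

    APIC = "https://apic.example.com"        # hypothetical APIC address
    s = requests.Session()
    s.verify = False                         # lab only
    s.post(f"{APIC}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

    # Enable L3 multicast (PIM) on the BD.
    s.post(f"{APIC}/api/mo/uni/tn-PROD/BD-BD_APP.json",
           json={"fvBD": {"attributes": {"name": "BD_APP", "mcastAllow": "yes"}}})

    # Enable PIM on the VRF (assumed class pimCtxP). A static Rendezvous Point must also be
    # defined under the VRF's multicast settings, otherwise the policy is not fully pushed to the leafs.
    s.post(f"{APIC}/api/mo/uni/tn-PROD/ctx-VRF_PROD.json",
           json={"fvCtx": {"attributes": {"name": "VRF_PROD"},
                           "children": [{"pimCtxP": {"attributes": {}}}]}})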
