Network multicast support on Azure

Feb 4, 2016 12:00:00 AM

Enabling multicast support on Microsoft Azure is a unique challenge. While it’s not a common requirement for typical virtualized workloads, certain enterprise applications rely on it for critical networking functions. A prime example is Oracle RAC (Real Application Clusters), which requires multicast for Highly Available IP (HAIP) on the interconnect starting with version 11.2.0.2.

If you are building a training or lab environment for Oracle RAC on Azure, multicast support is essential. This post explores how to validate your network and implement a workaround since Azure does not provide native multicast support.

The Challenge: Azure's Native Network Limitations

Before diving into solutions, it is important to understand what you are working with. You can check which multicast groups your interfaces have joined (confirming that the kernel itself has multicast enabled) with the netstat utility:

[root@oradb5 ~]# netstat -g | grep mcast
lo              1       all-systems.mcast.net
eth0            1       all-systems.mcast.net
eth1            1       all-systems.mcast.net

While the interfaces appear "ready," the underlying Azure Virtual Network (VNet) is the bottleneck. According to the Azure Virtual Network FAQ:

Do VNets support multicast or broadcast?
No.

What protocols can I use within VNets?
Multicast, broadcast, IP-in-IP encapsulated packets, and GRE packets are blocked. Only standard TCP, UDP, and ICMP work.

Validating the Multicast Failure

To confirm this, you can use Oracle’s mcasttest.pl script (from MOS Doc ID 1054902.1). In my test environment with two Azure VMs (oradb5 and oradb6), testing the eth1 interface yielded the following:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i eth1
Test for Multicast address 230.0.1.0
Nov 24 15:05:23 | Multicast Failed for eth1 using address 230.0.1.0:42000
Test for Multicast address 224.0.0.251
Nov 24 15:05:53 | Multicast Failed for eth1 using address 224.0.0.251:42001

The Workaround: Peer-to-Peer VPN with n2n

To bypass Azure's restrictions, we can use n2n, a peer-to-peer VPN developed by ntop. It creates a virtual "tap" interface that encapsulates multicast traffic in standard UDP packets, which Azure allows.
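n2n does the heavy lifting here, but the underlying trick is easy to illustrate: take a datagram destined for a multicast group and carry it inside an ordinary unicast UDP packet, which Azure permits. The sketch below uses a hypothetical framing (not n2n's actual wire format, which operates on Ethernet frames) that prefixes each payload with its original group and port so the far end can re-inject it:

```python
import socket
import struct

# Hypothetical framing for illustration only: 4-byte multicast group
# address + 2-byte port, followed by the raw payload. n2n's real wire
# format is different; this just shows the encapsulation principle.
def wrap(group: str, port: int, payload: bytes) -> bytes:
    return socket.inet_aton(group) + struct.pack("!H", port) + payload

def unwrap(datagram: bytes):
    # Recover the original group, port, and payload from a wrapped datagram.
    group = socket.inet_ntoa(datagram[:4])
    (port,) = struct.unpack("!H", datagram[4:6])
    return group, port, datagram[6:]
```

A wrapped datagram can then travel between the nodes as a plain unicast UDP packet; n2n's edge and supernode components do this transparently, along with MAC learning, optional encryption, and compression.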

1. Preparing the Environment

First, install the kernel headers and the Subversion client needed to fetch and compile n2n from source. Compiling from source is recommended so you can manually disable encryption or compression if desired.

[root@oradb5 ~]# yum install kernel-headers subversion.x86_64
[root@oradb5 ~]# svn co https://svn.ntop.org/svn/ntop/trunk/n2n

2. Compilation and Installation

Navigate to the n2n_v2 directory and compile the utility. Once finished, copy the binaries to your system path on all nodes.

[root@oradb5 n2n_v2]# make
[root@oradb5 n2n_v2]# cp supernode edge /usr/sbin/

Configuring the Multicast Tunnel

The n2n setup requires two components: a supernode (the directory/broker) and edge nodes (the actual VMs).

1. Starting the Supernode

The supernode only needs to run on one machine (it can even be a third, separate VM). Here, we use port 1200:

[root@oradb5 ~]# supernode -l 1200 

2. Starting the Edge Nodes

Run the edge command on both servers to create a virtual edge0 interface. We use the following parameters:

  • -l: The supernode's address and port (here 10.0.2.11:1200).
  • -c: The community name; edges in the same community form one virtual LAN.
  • -a: Assign a virtual IP for the tunnel (192.168.1.x).
  • -E: Accept multicast MAC addresses.
  • -r: Enable packet forwarding.

# On oradb5
[root@oradb5 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r

# On oradb6
[root@oradb6 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.2 -E -r

Validating the Solution

Once the edge0 interface is up on both nodes, it should show MULTICAST in the interface flags:

[root@oradb5 ~]# ifconfig edge0
edge0     Link encap:Ethernet  HWaddr 52:CD:8E:20:3D:E5
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1

Final Multicast Test

Now, run the mcasttest.pl script again, but target the edge0 interface instead of the physical eth1:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
########### testing Multicast on all nodes ##########
Test for Multicast address 230.0.1.0
Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000
Test for Multicast address 224.0.0.251
Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
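If you want a sanity check that does not depend on Oracle's script, a few lines of Python can exercise the same group and port. This is a sketch assuming a Linux host: it defaults to the loopback interface so it can be tried on a single machine, but on the RAC nodes you would pass each node's edge0 address (e.g. 192.168.1.1) as iface:

```python
import socket

GROUP, PORT = "230.0.1.0", 42000  # same group/port Oracle's script probes

def multicast_receiver(group=GROUP, port=PORT, iface="127.0.0.1"):
    """Open a socket joined to `group` on `iface` (loopback for a local test)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Join the multicast group on the chosen interface.
    mreq = socket.inet_aton(group) + socket.inet_aton(iface)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(5)
    return sock

def multicast_send(payload, group=GROUP, port=PORT, iface="127.0.0.1"):
    """Send one datagram to the group via `iface`, looped back locally too."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(iface))
    sock.sendto(payload, (group, port))
    sock.close()
```

Running multicast_receiver() on one node and multicast_send(b"ping", iface="192.168.1.1") on the other should deliver the payload across the tunnel; a timeout means multicast is still being dropped.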

The test is a success! Your Azure VMs now have a functional multicast layer that Oracle RAC can use for its interconnect. In my next article, I will demonstrate how to complete the Oracle RAC installation using this interface and shared storage.
