Our upgrade to 10Gb on the LAN is nearly complete. With the exception of one fiber run that refuses to support 10Gb (I suspect it has something to do with the 50′ fiber patch cable being used to get from the IDF to the fiber panel), we are up and running. A major relief, given that all of our multi-mode fiber is OM1 rated (read: old, slow, 62.5µm core, and 62.5µm + 10Gb = bad!).
Thank goodness for the 10Gb LRM! Designed to run 10Gb over old 62.5µm multimode fiber out to distances of 220m, these miracle modules have done the trick. We do have a few MDF links that are 0.1–0.5 dB hot on the receive side, but no link errors to date, and we are planning to add some attenuation to those links shortly. It looks like the Cisco 4500X is more sensitive than the Meraki MS320 switches.
Speaking of which, we’re running in a configuration I’ve started calling the Meraki Sandwich. We have a Cisco 4500X core switch connected at 10Gb to the Meraki MS320 IDF switches with our previous 3750X switch stacks hanging off of the Merakis at 1Gb. Since most of our heavy use is over Wireless, relegating phones and printers to a 1Gb uplink should be fine. It’s working great now. I wish I could say that was the case last week.
Some interesting things happen when you break the laws of nature and sandwich a Meraki switch in between two Cisco switches (and yes, I know the Meraki switch says Cisco on it, but it’s a lie!).
First, Cisco switches require Mode Conditioning Patch (MCP) cables with 10Gb LRM modules. Meraki switches do not. Good luck finding this in the documentation. We discovered it when we could not for the life of us get a Cisco to link up to a Meraki over 10Gb using MCP cables on both sides. That almost ended our project real quick. After much head scratching and a few days wasted troubleshooting, we decided to rotate through different module and cable combinations, and lo and behold: a Meraki with a 10Gb LRM SFP+ (Meraki or Cisco brand) and a regular SC-LC fiber patch cable, connected to a Cisco with a 10Gb LRM SFP+ and a Mode Conditioning Patch Cable on its end, worked!
The next thing we ran into was missing VLANs. Yes, missing. This problem almost sunk us. Intermittently, our staff and student VLANs would stop working. We saw this manifest as clients connecting to Wi-Fi, pulling an IP address and DNS settings from the DHCP server, and then disappearing from the network. It was happening sporadically across the district, which took us a few days to identify. Thankfully, we could consistently reproduce the symptoms in one IDF, and we began troubleshooting in earnest.
At first we suspected our VLAN trunks were having issues. We reviewed them across the district, both on the switches that were “working” (or at least where we didn’t see connectivity issues) and those that were not. Frustratingly, we would confirm a working wing one day only to come back and find clients unable to connect in that same wing later. After going round and round on our VLAN trunk settings, we finally decided something else had to be causing the problem and started looking deeper.
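For anyone doing a similar review, the trunk checks we were running on the Cisco side looked roughly like this (the interface name and VLAN IDs below are illustrative, not our actual config):

```
! Show which VLANs are allowed and actually forwarding on each trunk
show interfaces trunk

! Example trunk configuration on an IDF uplink (port and VLANs are made up)
interface TenGigabitEthernet1/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```

We compared the allowed VLAN lists on both ends of every uplink, which is why it was so maddening when trunks that checked out clean still dropped clients.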
Unfortunately, we were running up against a hard deadline: hosting a county edtech day at one of our middle schools on the Friday before teachers officially came back. Since there was an expectation that the wifi would work for the event, and we had narrowed the problem down to something related to the 3750X stack hanging off the Meraki, the night before the event we reconfigured the network, directly patching the Cisco stacks in the IDF through to the 4500X in the MDF. Luckily we had just enough free ports on the 4500X to cover the wings where we were hosting the event. That and using the Meraki NAT option for the event SSID got us through the day. I love the Meraki NAT option for event SSIDs. Totally awesome!
We continued troubleshooting on Saturday. Having narrowed down the issue to the interaction between the 3750X and the Meraki MS320, we used the packet capture tools built into the Meraki to see what was going on. Actually, throughout the entire ordeal, having the visibility provided by the Meraki dashboard was invaluable.
Our next step was to strip all the proprietary Cisco protocols off of the 4500X and 3750X switches. We removed EIGRP and went back to good old-fashioned static routes. We removed QoS and multicast routing and anything else that looked like it might cause a problem with the Meraki switches. And just when we thought that was it, the problem persisted.
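For the curious, the cleanup amounted to something like the following on the 3750X stacks (the EIGRP AS number and route addresses are made up for illustration, and the exact QoS/multicast commands vary by platform):

```
! Remove EIGRP in favor of static routes (AS 100 is illustrative)
no router eigrp 100

! Static default route pointing at the 4500X core (addresses are examples)
ip route 0.0.0.0 0.0.0.0 10.0.0.1

! Turn off multicast routing and QoS (3750X syntax; the 4500X differs)
no ip multicast-routing
no mls qos
```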
The next thing we tried was the MTU setting. Since we were seeing packets leave the MS320 but not come back, we figured maybe the Cisco core switches were dropping packets for some reason. It turns out that the default MTU on Meraki switches is 9600 (jumbo frames enabled), while the default for Cisco, even on 10Gb links, is 1500. In theory the mismatch shouldn’t matter, since switches don’t fragment oversized frames and our end hosts were all sending standard 1500-byte packets anyway, but we decided to play it safe and set the Meraki MTU to 1500. This required a switch reboot. Again, we let it sit overnight, came back, and things looked good. Until they didn’t, and the issues persisted. (Having read up on MTU and jumbo frames, we’ve decided to leave all switches at 1500; the performance gain on regular network traffic isn’t enough to justify reconfiguring every Cisco switch at this time.)
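On the Cisco side, checking and normalizing the MTU looks roughly like this (3750X syntax; the Meraki side is set in the Dashboard under the switch settings, not via CLI):

```
! Check the current system MTU
show system mtu

! Set it explicitly; on the 3750X this only takes effect after a reload
configure terminal
system mtu 1500
end
```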
After days of staring at configurations, we were getting cross-eyed. We had ruled out problems with the DHCP server, routing protocols, access point configs, clients, switch configs, pretty much everything, and yet we were still seeing the issue. I was ready to bypass the Meraki switches entirely while we continued to work through the issue with Cisco and Meraki support (an interesting back and forth experience, to be sure).
And then, in our darkest hour, out of the light came VTP. On Saturday, while rebooting a Cisco switch for the umpteenth time (one that had just been cleaned of any Cisco proprietary protocols in the running config), there, staring at us on the screen, was VTP: Cisco’s proprietary VLAN Trunking Protocol. Enabled by default, but hidden from view in a show run, VTP lets Cisco switches propagate VLAN information between each other. And apparently, when there is a non-Cisco switch in the middle, odd things can happen. Like in the Meraki Sandwich. As soon as we disabled VTP (put it into transparent mode) on the Cisco 4500X, no more missing VLANs.
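If you want to check your own switches, the fix is short, and unlike most of the running config, show vtp status will reveal the current mode even when show run doesn’t:

```
! See the current VTP mode, domain, and revision number
show vtp status

! Stop participating in VTP by going transparent
configure terminal
vtp mode transparent
end
```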
As it turns out, the Meraki wasn’t playing nice with the Cisco’s active VTP traffic, and VLANs were intermittently being dropped. This is a known issue with Cisco VTP domains and Meraki switches. So on Saturday night, the weekend before teachers started the new school year, we set VTP to transparent mode on all of the Cisco switches, put the Middle School network back together, and called it a day. High value, high impact 10Gb LAN upgrade project saved after a week of intense troubleshooting.
We’re now running with the Meraki Sandwich at 10Gb to every IDF. Had we not been up against the start-of-school deadline, this would not have been as stressful, but our cable project got off to a late start and faced several delays along the way, which meant we weren’t in a position to discover this issue until just two weeks before the start of school. The time crunch, combined with our unfamiliarity with 10Gb networking (it’s slightly more involved than 1Gb) and the Meraki/Cisco interoperability quirks, made for a challenging two weeks.
And before you ask, yes, we did pilot this configuration prior to going all in, and we thought we had all the configuration issues sorted out. But when doing a complete network overhaul, you never really know what you’re going to find until you’re in the weeds.
So that’s the Meraki Sandwich. If you are thinking about taking advantage of the affordable 10Gb options from Meraki while gaining the awesome network visibility of the Dashboard, and leveraging existing Cisco switches and 62.5µm fiber in the process, read the links below. They will save you some headaches along the way.
10Gb LRM SFP+ – https://meraki.cisco.com/products/switches/accessories