We currently have a front-end 100Mbps switch with ports for:
-- our two HSRP uplinks to Internap
-- our two redundant BIG-IP load balancers
-- our ultra-paranoid/secure ssh gateway to the internal network
-- goathack, and other misc. machines
Goathack/misc machines are only on the public network. The BIG-IPs and gateway have connections to the private network.
The private network is 3 different 100 Mbps switches, bridged with gigabit GBICs between them. All one big VLAN. All front-end proxies, mod_perl nodes, memcaches, and databases are on this.
In addition, we have a tiny 8-port gigabit network, used for large copies/backups between database servers and fileservers. We also do some iSCSI over this to a LUN on our NetApp.
We want to separate our private network into the existing 100 Mbps half, and a new Gigabit half. The existing half will contain just front-end proxies and one interface of mod_perl nodes.
All mod_perl nodes, memcaches, and DBs will also be on the new gigabit network. (latency is very important, esp. when making tons of tiny memcache requests.)
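To see why RTT matters more than raw bandwidth for this workload, here's a back-of-envelope sketch. All the numbers in it (request counts, RTTs, payload size) are illustrative assumptions, not measurements from our network:

```python
# Back-of-envelope: why per-request latency dominates when you make
# tons of tiny serial memcache gets. Numbers are made up for
# illustration.

def total_time_ms(n_requests, rtt_ms, payload_bytes, link_mbps):
    """Serial requests: each one pays a full round trip, plus the
    (usually negligible) transfer time for a tiny payload."""
    transfer_ms = (payload_bytes * 8) / (link_mbps * 1_000_000) * 1000
    return n_requests * (rtt_ms + transfer_ms)

# Say a page render does 500 gets of ~100-byte values (assumed workload):
slow = total_time_ms(500, rtt_ms=0.30, payload_bytes=100, link_mbps=100)
fast = total_time_ms(500, rtt_ms=0.15, payload_bytes=100, link_mbps=1000)
print(f"100 Mbps switch, 0.30 ms RTT: {slow:.1f} ms")   # 154.0 ms
print(f"gigabit switch,  0.15 ms RTT: {fast:.1f} ms")   # 75.4 ms
```

The transfer time for a 100-byte value is microseconds on either link; nearly all the win comes from shaving the round trip, which is the whole point of putting the mod_perl/memcache traffic on gigabit.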
For the new gigabit network we were thinking of getting four of these guys:
HP ProCurve Switch 2848
Well, actually 3, and then 1 of the 24 port version.
We have two cabinets isolated, then two that are connected to each other. One 48 port would be for the two connected to each other. The other 48 port would be for our new cabinet, which is empty, and needs a 100 Mbps switch as well. (we'd just make 2 VLANs out of the 48 port, and use half for the slow network). The 24 port would be for the old isolated cabinet. The remaining 48 port would be a cold standby, in case one dies.
48 port: 2 cabinets
48 port: 1 cabinet, 2 VLANs (front-end and back-end)
24 port: 1 cabinet
48 port: spare
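For the 48-port-with-2-VLANs case, the split would look something like this on the ProCurve CLI. Port ranges and VLAN IDs are made up for illustration, and I'm going from memory of the ProCurve docs, so double-check the syntax before trusting it:

```
; Sketch: split one 2848 into a front-end (100 Mbps) half and a
; back-end (gigabit) half. Port ranges and VLAN IDs are invented.
vlan 10
   name "frontend-100"
   untagged 1-24
   exit
vlan 20
   name "backend-gige"
   untagged 25-48
   exit
```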
Now, we were going to bridge all those with gigabit, or if the need comes up in the future, two gigabit links trunked. (but really, we should be fine with single links.... we've done measurements on our existing cross-switch traffic and it's not bad.... something like 40 Mbps IIRC)
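For what it's worth, the cross-switch number comes from sampling interface byte counters (SNMP ifInOctets/ifOutOctets or the switch's own port stats) and converting the delta to Mbps. A minimal sketch, with made-up counter values:

```python
# Average link throughput from two samples of an interface's octet
# counter. The counter values below are invented for illustration.

def mbps(octets_start, octets_end, seconds):
    """Average throughput between two counter samples, in Mbps."""
    return (octets_end - octets_start) * 8 / seconds / 1_000_000

# e.g. two samples of an uplink port's ifOutOctets, 60 s apart:
print(f"{mbps(1_234_000_000, 1_534_000_000, 60):.1f} Mbps")  # 40.0 Mbps
```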
So, to networking topology experts: any advice? If it's easier, I could call you at your earliest convenience, if you don't want to type, or you have too many questions. (I'd then summarize the phone call on lj_dev, if that's cool.)
Any complaints about HP ProCurve switches? We have all Ciscos now, and while they've been okay, I wasn't too thrilled when the Cisco vulnerability came out recently and we found out we couldn't get an update without paying for a support contract. Fuck that. (I think they later gave it away for free, after taking a lot of flak, but it pissed us off....) I get the idea that HP gives updates for free? I can't understand either HP's or Cisco's websites. I don't give a damn about TCO or case studies. I want your guys' real-world experience and recommendations.