January 6th, 2004 — LiveJournal Development — LiveJournal

Networking Advice Needed [Jan. 6th, 2004|05:28 pm]
LiveJournal Development


One thing the LJ staff isn't too knowledgeable about is networking. (Well, me especially. Nick and Lisa know a lot more from their previous jobs.) We'd appreciate some advice from people who know networking stuff well.

We currently have a front-end 100Mbps switch with ports for:

-- our two HSRP uplinks to Internap
-- our two redundant BIG-IP load balancers
-- our ultra-paranoid/secure ssh gateway to the internal network
-- goathack, and other misc. machines

Goathack/misc machines are only on the public network. The BIG-IPs and gateway have connections to the private network.

The private network is 3 different 100 Mbps switches, bridged with gigabit GBICs between them, all one big VLAN. All front-end proxies, mod_perl nodes, memcaches, and databases are on this.

In addition, we have a tiny 8-port gigabit network, used for large copies/backups between database servers and fileservers. We also do some iSCSI over this to a LUN on our NetApp.

The Plan
We want to separate our private network into the existing 100 Mbps half, and a new Gigabit half. The existing half will contain just front-end proxies and one interface of mod_perl nodes.

All mod_perl nodes, memcaches, and DBs will also be on the new gigabit network. (latency is very important, esp. when making tons of tiny memcache requests.)

For the new gigabit network we were thinking of getting four of these guys:

HP ProCurve Switch 2848

Well, actually 3, and then 1 of the 24 port version.

We have two isolated cabinets, and two that are connected to each other. One 48 port would be for the two connected to each other. The other 48 port would be for our new cabinet, which is empty, and needs a 100 Mbps switch as well. (We'd just make 2 VLANs out of that 48 port, and use half for the slow network.) The 24 port would be for the old isolated cabinet. The remaining 48 port would be a cold standby, in case one dies.

In review:

48 port: 2 cabinets
48 port: 1 cabinet, 2 VLANs (front-end and back-end)
24 port: 1 cabinet
48 port: spare

Now, we were going to bridge all those with gigabit, or if the need comes up in the future, two gigabit links trunked. (but really, we should be fine with single links.... we've done measurements on our existing cross-switch traffic and it's not bad.... something like 40 Mbps IIRC)

So, to networking topology experts: any advice? If it's easier, I could call you at your earliest convenience, if you don't want to type, or you have too many questions. (I'd then summarize the phone call on lj_dev, if that's cool.)

Any complaints about HP ProCurve switches? We have all Ciscos now, and while they've been okay, I wasn't too thrilled when the Cisco vulnerability came out recently and we found out we couldn't get an update without paying for a support contract. Fuck that. (I think they later gave it away for free, after taking a lot of flak, but it pissed us off....) I get the idea that HP gives updates for free? I can't understand either HP's or Cisco's websites. I don't give a damn about TCO or case studies. I want your guys' real-world experience and recommendations.

37 comments

Potential DBI change [Jan. 6th, 2004|09:12 pm]
LiveJournal Development


If you don't already know, DBI's behavior is different between fetchrow_arrayref and fetchrow_hashref.

Fetching arrayrefs always returns the same arrayref pointer, just with the values changed. So you can't do this:

push @items, $_ while $_ = $sth->fetchrow_arrayref;

Because each pushed $_ will be the same reference, until the final undef.
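To see the pitfall without a database handy, here's a minimal sketch. The fetch_reused_arrayref sub below is a hypothetical stand-in for $sth->fetchrow_arrayref, mimicking just the part that bites: it hands back the same array reference every call, only its contents change.

```perl
use strict;
use warnings;

# Hypothetical mock of fetchrow_arrayref: one shared buffer is reused
# for every row, like DBI does.
my @rows = ( [ 1, 'a' ], [ 2, 'b' ], [ 3, 'c' ] );
my @buf;
my $i = 0;

sub fetch_reused_arrayref {
    return undef if $i >= @rows;
    @buf = @{ $rows[ $i++ ] };    # overwrite the shared buffer in place
    return \@buf;
}

# The "can't do this" pattern from above:
my @items;
push @items, $_ while $_ = fetch_reused_arrayref();

# @items now holds three copies of the SAME reference, all showing
# only the last row (3, 'c').
```

Swap the mock for a real statement handle and you get the same silent data loss, which is why the pattern is forbidden for arrayrefs.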

But you can do that with hashrefs:

# Currently okay code:
push @items, $_ while $_ = $sth->fetchrow_hashref;

And we do that all over LJ code. However, I recently noted this in the DBI docs:

Currently, a new hash reference is returned for each row. This
will change in the future to return the same hash ref each time, so
don't rely on the current behaviour.

So, I think we need to audit our usage of fetchrow_hashref so we're not bitten in the future.

New code should be:

# safe version:
push @items, {%$_} while $_ = $sth->fetchrow_hashref;

But that's kinda lame, since it makes two copies. :-/
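Here's a sketch of why the copy matters once the documented change lands. The fetch_reused_hashref sub is a hypothetical mock of the future fetchrow_hashref behavior (one shared hash, reused per row); the {%$_} shallow copy is what keeps each pushed row distinct.

```perl
use strict;
use warnings;

# Hypothetical mock of the FUTURE fetchrow_hashref: one shared hash is
# reused for every row, as the DBI docs warn.
my @rows = ( { id => 1, name => 'a' }, { id => 2, name => 'b' } );
my %buf;
my $i = 0;

sub fetch_reused_hashref {
    return undef if $i >= @rows;
    %buf = %{ $rows[ $i++ ] };    # overwrite the shared hash in place
    return \%buf;
}

# Unsafe: every element aliases the same hash, so all show the last row.
my @aliased;
push @aliased, $_ while $_ = fetch_reused_hashref();

$i = 0;    # rewind the mock

# Safe: shallow-copy each row into a new anonymous hash.
my @copied;
push @copied, {%$_} while $_ = fetch_reused_hashref();
```

If the double copy grates, note that per the DBI docs $sth->fetchall_arrayref({}) returns a fresh hashref per row in a single call, at the cost of holding the whole result set in memory at once.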

In general, try to use fetchrow_arrayref if possible, but don't pass arrayrefs around over long distances of code and then do stuff later like:

my ($foo, $bar, $baz) = @{$arrayref}[2,4,8];

That just gets insane.
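If you do need arrayrefs at a distance, one way to keep your sanity is naming the offsets once instead of scattering magic indices. A sketch (the column positions and COL_* names here are hypothetical, not from LJ code):

```perl
use strict;
use warnings;

# Name the column offsets once, instead of writing [2,4,8] everywhere.
use constant {
    COL_FOO => 2,
    COL_BAR => 4,
    COL_BAZ => 8,
};

my $arrayref = [ 0 .. 10 ];    # stand-in for a fetched row

my ( $foo, $bar, $baz ) = @{$arrayref}[ COL_FOO, COL_BAR, COL_BAZ ];
```

DBI also has a native fix for this: $sth->bind_columns(\my ($foo, $bar, $baz)) followed by a while ($sth->fetch) loop gives you named variables right at the fetch site, and it's the fastest fetch style per the DBI docs.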
4 comments
