yarffaJ nalA (jnala) wrote in lj_dev,

Speeding up BML

I wrote a patch to speed up the BML processor, which handles almost all the pages on the site that aren't actual journals. It was re-eval'ing a lot of code per hit that it didn't need to. Here are some example timings from my random-Linux-box test server using Time::HiRes.

  Page                  Before     After      Improvement
  Front page            237ms      50ms       79%
  First userinfo hit    398ms      241ms      40%
  Later userinfo hits   360-380ms  100-110ms  72%

Basically, this is going to save us 200-250ms on almost every BML page on the site. Which is huge.

The patch and the README are on my website. The README is also included after the cut.

The purpose of this patch is to move require statements out of BML files 
and into lj-bml-init.pl, so we don't eval 2000+ lines of library code on
every hit.  This was complicated by the fact that ljlib.pl, the worst
offender for both omnipresence and size, puts some variables in its caller's
namespace, and BMLCodeBlock::* gets swept out after each hit.

All the variables in ljlib.pl outside package LJ were deleted or moved into
package LJ.  ljlib.pl includes ljconfig.pl, and so unqualified references to
$SENDMAIL, $IMGPREFIX, $SERVER_DOWN were also changed to the LJ::* form.

I also refactored some of the mail-sending code, so we have send_mail_raw
(localizing the call to sendmail) and send_admin_mail (localizing signoff
and signature text for emails from the LJ system); removed code for and
references to deprecated %monthname, %monthshortname, and %dayweekname;
and fixed a bug in LJ::Lang::enum_trans (upon which the replacement for
those hashes depends).

This patch also includes my previous protocol-groupmask and interest-findsim
patches, for no good reason.

I did not include the commenting out of requires from 100+ BML files in this
patch.  So, after applying this patch (with patch -p5), run this command to
do it yourself:

perl -pi.bak -e 's/^/# / if /^\s*require.*(ljlib|ljprotocol|cleanhtml)\.pl/' \
             `find /home/lj -name '*.bml' -print`

Possible further work:

Remove the guts of talkread/talkpost (400-500 lines each) and other
frequently hit pages, move them into libraries, and stop re-eval'ing
them on every hit.  This would probably save another 30-50ms on those
hits.

Find out why we're restarting the FCGI coprocess every 50 hits, and see
whether we can avoid it.  It's expensive both in CPU time and in setting
up new connections to the database.

Make cleanhtml faster.