As far as contributing to LJ goes, I don't see myself being able to do much actual coding for now, as I'm pretty involved in work and have other open source projects I'd like to finish before putting significant chunks of time into LJ. In the meantime, if there's a short enough fix in an area I feel competent to handle, I'll definitely send in patches.
Now, on to the real purpose of my posting here. I have skimmed the server documentation available through the links and haven't come across answers to my questions:
- Are developers expected to set up their own LJ server, apply their changes there, and then send their patches in?
- Are developers expected to test their changes by playing around with their browser or with clients? Are there no test scripts that simulate various user activities, preferably run under a test harness?
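For concreteness, here is a minimal sketch of the kind of test script I have in mind. Everything in it is an assumption for illustration: a local dev server answering the flat key/value client protocol at /interface/flat, a "test" account existing on that server, and the flat_request() helper name.

import unittest
import urllib.parse
import urllib.request

# Hypothetical local development server speaking the flat client protocol.
SERVER = "http://localhost/interface/flat"

def flat_request(**params):
    """POST a flat-protocol request and fold the alternating
    name/value response lines into a dict."""
    data = urllib.parse.urlencode(params).encode("ascii")
    with urllib.request.urlopen(SERVER, data=data) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return dict(zip(lines[0::2], lines[1::2]))

class LoginSimulation(unittest.TestCase):
    """Simulates one slice of user activity: logging in."""

    def test_bad_password_is_rejected(self):
        reply = flat_request(mode="login", user="test", password="wrong")
        self.assertNotEqual(reply.get("success"), "OK")

    def test_good_password_is_accepted(self):
        # Assumes a "test" account has been created on the dev server.
        reply = flat_request(mode="login", user="test", password="test")
        self.assertEqual(reply.get("success"), "OK")

if __name__ == "__main__":
    unittest.main()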
From what I've been able to glean from the documentation thus far (and the non-existent memories), the answers to those two questions appear to be yes and no, respectively. While I can understand not providing a server for developers to work on, given the resource commitment, I feel the point ought to be made explicit, possibly in a "So you wanna be an LJ Dev?" document, or an "LJ Dev HOWTO", depending on your cultural background. If the answer to the second question is, in fact, no, I wanted to start a discussion on whether writing up test kits would be worth the effort. Btw, the reason I am posting here rather than in

Various issues I can see with writing up LJ test kits:
- If none have been written yet, the initial effort will be high.
- Whenever a protocol changes, the corresponding test kit must also be updated to reflect those changes.
- Whenever a bug is fixed that the test kits did not catch, the corresponding test kit must be updated so it reproduces that bug on the unpatched code.
So, the proposed "Day in the Life" once the test kits have been written would be:
- Read bug report.
- Check whether the test kit reports the bug.
- If it does not, add appropriate transactions/queries/sequences until the bug is reproduced (a sketch of such a case follows this list).
- Submit test kit patch.
- Fix the bug until the (updated) test kit gives it the all clear. For added paranoia, the patch should pass the entire test harness.
- Submit bug fix patch.
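To make the check-reproduce-fix steps above concrete with a purely hypothetical bug report (say, mode=getfriends dropping birthdays), the new test-kit case might look something like this, reusing the flat_request() helper from the earlier sketch; the parameter and response keys here are assumptions for illustration, not a real report:

class GetFriendsRegression(unittest.TestCase):
    """Hypothetical regression case added to the kit to reproduce a
    bug report before the fix goes in."""

    def test_getfriends_includes_birthdays(self):
        reply = flat_request(mode="getfriends", user="test",
                             password="test", includebdays="1")
        self.assertEqual(reply.get("success"), "OK")
        # On unpatched code this key would be missing, so the kit
        # reports the bug; once the fix lands, the case goes green.
        self.assertIn("friend_1_birthday", reply)

Run under the harness, this case fails against the unpatched tree, which reproduces the report, and passes once the fix is applied, which is exactly the all-clear signal in the last steps above.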
As soon as the test kit patches are validated and incorporated into the development tree, anybody may attack the bug itself, of course.
In case you haven't noticed yet, I'm a strong subscriber to automated testing and the "write the test before you write the code" school of thought. I'm well aware that there are others, such as a colleague's motto of "don't write bugs." I do think, however, that this sort of methodology would be very beneficial in a setup like this, where many contributors may not realize the full effects of their code and could easily introduce side effects in their patches through blissful ignorance.
And now we return to your regular Saturday programming.