The lexer is done... token classes are written for comments, identifiers, keywords, punctuation, variables, string literals (with variable interpolation), integer literals, and whitespace.
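To give a feel for it, here's a rough Python sketch of token classes like the ones listed above. The names and shapes are my illustrative guesses, not the actual S2 lexer's classes:

```python
class Token:
    """Base token; keeps the exact source text so the input can be
    reconstructed character for character later."""
    def __init__(self, text):
        self.text = text

class CommentToken(Token): pass
class IdentToken(Token): pass        # identifiers
class KeywordToken(Token): pass
class PunctToken(Token): pass
class VarToken(Token): pass          # $variables
class IntLiteralToken(Token): pass
class WhitespaceToken(Token): pass

class StringToken(Token):
    """String literal; keeps sub-parts so interpolated variables
    ($var inside "...") stay distinct tokens."""
    def __init__(self, text, parts):
        super().__init__(text)
        self.parts = parts           # plain-string chunks and VarTokens
```

Because every token remembers its exact source text (whitespace and comments included), joining the token stream back together gives the original file byte for byte.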
The parser's framework is done, and I've started writing classes to parse/store a few constructs, but not all yet. It's pretty cool.... I got it set up so each node in the parsed S2 layer also contains all the tokens that make it up, and since the context of those tokens is known at this point, it can refine each token's type... so what was just an identifier token before can now become a type-identifier token. Then I can produce a character-for-character reconstruction of the input file, but formatted in HTML with syntax highlighting, links from datatypes/classes to pages that describe them, etc... fun stuff.
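A minimal sketch of that idea, with hypothetical names (the real compiler isn't written in Python): a node keeps its raw tokens, refines one of them now that context is known, and re-emits the source as highlighted HTML.

```python
import html

class Token:
    def __init__(self, text, type_):
        self.text = text
        self.type = type_            # refined once context is known

class Node:
    def __init__(self, tokens):
        self.tokens = tokens

class VarDeclNode(Node):
    """Stores 'var <type> <name>;'. At parse time we know the first
    identifier names a type, so its token type gets refined."""
    def __init__(self, tokens):
        super().__init__(tokens)
        first_ident = next(t for t in tokens if t.type == "ident")
        first_ident.type = "type-ident"

def to_html(node):
    # Character-for-character reconstruction, each token wrapped in a
    # span classed by its (possibly refined) type.
    return "".join('<span class="%s">%s</span>' % (t.type, html.escape(t.text))
                   for t in node.tokens)
```

Since whitespace tokens are kept too, stripping the spans back out of the HTML recovers the input exactly; the type-identifier spans are what get the links and special highlighting.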
Haven't started the Perl backend yet, but the base backend class is ready. The HTML backend is generating HTML now (so pretty), but only for the few node types the parser recognizes.
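Guessing at the shape of that split, the base class and per-format backends might look something like this (illustrative only; class and method names are assumptions):

```python
class Node:
    """Minimal stand-in for a parsed node the backends walk over."""
    def __init__(self, kind, text):
        self.kind = kind
        self.text = text

class Backend:
    """Base backend: walks the node list, subclasses decide the output."""
    def output(self, nodes):
        return "".join(self.emit(n) for n in nodes)
    def emit(self, node):
        raise NotImplementedError

class HTMLBackend(Backend):
    def emit(self, node):
        return '<span class="%s">%s</span>' % (node.kind, node.text)

class PerlBackend(Backend):
    def emit(self, node):
        # not started yet, per the entry above
        raise NotImplementedError("Perl backend still TODO")
```

The nice part of this arrangement is that the Perl backend only has to fill in `emit()`; the walking logic stays in the base class.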
From here on out it's just a "Simple Matter Of Programming".
Well, not exactly... we all still need to decide on the data tree and all the classes that'll be exposed. But that'll be easy to tweak incrementally later... those are all stored in the S2 core file.
Once I have the parser finished (tomorrow?) then I'll start making releases.