Reduced lookahead requirements in the lexer?

Posted By:   Joseph_Corazza
Posted On:   Friday, February 14, 2003 01:06 PM

Recently I changed all of my lexer rules that were of the following pattern:

RULE_1 : "abc" | "def" | "ghi";

to this pattern:

RULE_1 : "abc";
RULE_2 : "def" {$setType(RULE_1);};
RULE_3 : "ghi" {$setType(RULE_1);};

The lexer never has a problem tokenizing similar character sequences (i.e., sequences sharing a common prefix) when using the latter pattern. Am I correct in assuming that the required lookahead depth is reduced in the latter form, and that this is why it is more robust?
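As an aside, the questioner's intuition about lookahead depth can be illustrated with a small sketch (not ANTLR's analysis, just the underlying idea): the exact lookahead needed to pick among literal alternatives is the smallest prefix length at which they all differ, and shared prefixes push that depth up.

```python
def min_distinguishing_depth(literals):
    """Smallest k such that every pair of literals differs somewhere in
    its first k characters, i.e. exact lookahead of depth k decides."""
    k = 1
    while True:
        prefixes = {lit[:k] for lit in literals}
        if len(prefixes) == len(literals):
            return k
        k += 1

# Distinct first characters: one character of lookahead decides.
print(min_distinguishing_depth(["abc", "def", "ghi"]))  # 1
# A shared prefix forces the decision out to the third character.
print(min_distinguishing_depth(["abc", "abd"]))         # 3
```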

Re: Reduced lookahead requirements in the lexer?

Posted By:   Monty_Zukowski  
Posted On:   Tuesday, February 18, 2003 11:18 AM

It all has to do with how the nextToken() rule is synthesized out of all of the non-protected lexer rules. Take a look at the generated code and I think you'll get the idea. It also has to do with linear approximate lookahead and how lookahead tests are combined. You might want to look at FAQ entries about linear approximation.
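To give a feel for what "linear approximate lookahead" means (a sketch of the idea only, not ANTLR's actual implementation): instead of tracking exact lookahead sequences, the analysis keeps one pooled character set per lookahead depth across all alternatives. That approximation is cheap, but the combined test can accept strings that no single alternative actually matches, which is why restructuring rules changes how robust the generated decisions are.

```python
def approximate_lookahead_sets(literals, k):
    """Per-depth character sets, pooled across all alternatives,
    the way a linear approximate test collapses them."""
    return [{lit[d] for lit in literals if d < len(lit)} for d in range(k)]

def approx_matches(s, sets):
    """True if s passes the approximate test: each character need only
    be in the pooled set for its depth (a cross-product, not exact)."""
    return all(i < len(sets) and c in sets[i] for i, c in enumerate(s[:len(sets)]))

sets = approximate_lookahead_sets(["abc", "abd", "xyz"], 3)
# sets == [{'a','x'}, {'b','y'}, {'c','d','z'}]
print(approx_matches("abc", sets))  # True  -- a real token
print(approx_matches("xbd", sets))  # True  -- spurious: passes the pooled
                                    # test but matches no literal
```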