Question

Using fslex, I would like to return multiple tokens for one pattern, but I don't see a way to accomplish that. Even using another rule function that returns multiple tokens would work for me.

I am trying to use something like this:

let identifier = [ 'a'-'z' 'A'-'Z' ]+

// ...

rule tokenize = parse
// ...
| '.' identifier '(' { let value = lexeme lexbuf
                       match operations.TryFind(value) with
                       // TODO: here is the problem:
                       // I would like to return something like [DOT; op; LPAREN]
                       | Some op -> op
                       | None    -> ID(value) }

| identifier         { ID (lexeme lexbuf) }
// ...

The problem I am trying to solve here is to match predefined tokens (see the operations map) only if the identifier is between . and (. Otherwise, the match should be returned as an ID.

I am fairly new to fslex so I am happy for any pointers in the right direction.


Solution 3

(This is a separate answer)

For this specific case, this might solve your issue better:

...

rule tokenize = parse
...
| '.' { DOT }
| '(' { LPAREN }
| identifier { ID (lexeme lexbuf) }

...

And the usage:

let parse'' text =
    let lexbuf = LexBuffer<char>.FromString text
    let stack = ref []
    let rec tokenize lexbuf =
        if List.isEmpty !stack then
            stack := [Lexer.tokenize lexbuf]
        let (token :: stack') = !stack // can never get a match failure:
                                       // the lexer always yields a token (EOF at the end)
        stack := stack'
        // This match fixes an ID to an OP, if necessary.
        // It uses multiple nested matches (and not a unified large one),
        // else EOF may cause issues - this is quite important.
        match token with
        | DOT ->
            (match tokenize lexbuf with
             | ID id ->
                 (match tokenize lexbuf with
                  | LPAREN ->
                      let op = findOp id // look up the operation token for id
                      stack := op :: LPAREN :: !stack
                  | t -> stack := ID id :: t :: !stack)
             | t -> stack := t :: !stack)
        | _ -> ()
        token
    Parser.start tokenize lexbuf

This will fix up IDs to be operations if, and only if, they are surrounded by DOT and LPAREN.

P.S.: I use 3 separate matches, because a unified match would either require Lazy<_> values (which would make it even less readable) or fail on a sequence like [DOT; EOF], because it would expect an additional third token.
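
As a hedged usage sketch (the input string, the RPAREN token and the exact findOp behaviour below are assumptions for illustration, not part of the answer):

// Hypothetical call; findOp is assumed to map "op1" to an OP1 token.
let ast = parse'' "foo.op1(bar)"
// The parser then receives: ID "foo"; DOT; OP1; LPAREN; ID "bar"; RPAREN; EOF
// whereas the raw Lexer.tokenize stream would contain ID "op1" instead of OP1.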

Other suggestions

Okay, here it is.

Each lexer rule (i.e. rule <name> = parse .. cases ..) defines a function <name> : LexBuffer<char> -> 'a, where 'a can be any type. Usually, you return tokens (possibly defined for you by FsYacc), so you can then parse text like this:

let parse text =
    let lexbuf = LexBuffer<char>.FromString text
    Parser.start Lexer.tokenize lexbuf

Here Parser.start is the parsing function (from your FsYacc file), of type (LexBuffer<char> -> Token) -> LexBuffer<char> -> AST (Token and AST are your types; nothing special about them).
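
For reference, here is a minimal sketch of the fsyacc declarations that give rise to such a Parser.start; the token names, the AST type and the start symbol are assumptions for illustration, not taken from your grammar:

// In the .fsy file (illustrative only):
%token DOT LPAREN
%token <string> ID
%token EOF
%start start
%type <AST> start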

In your case, you want <name> : LexBuffer<char> -> 'a list, so then all you have to do is this:

let parse' text =
    let lexbuf = LexBuffer<char>.FromString text
    let tokenize =
        let stack = ref []
        fun lexbuf ->
            while List.isEmpty !stack do
                stack := Lexer.tokenize lexbuf
            let (token :: stack') = !stack // can never get match failure,
                                           // else the while wouldn't have exited
            stack := stack'
            token
    Parser.start tokenize lexbuf

This simply saves the tokens your lexer supplies, and gives them to the parser one-by-one (and generates more tokens as needed).
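
With a wrapper like this, the lexer actions themselves must return lists of tokens. Here is a hedged sketch of how the rule from the question might then look; the trimming of the matched text and the eof case are my assumptions:

rule tokenize = parse
// ...
| '.' identifier '(' { // the matched text is the whole ".name(",
                       // so strip the leading '.' and the trailing '('
                       let text = lexeme lexbuf
                       let value = text.Substring(1, text.Length - 2)
                       match operations.TryFind(value) with
                       | Some op -> [DOT; op; LPAREN]
                       | None    -> [DOT; ID(value); LPAREN] }
| identifier         { [ID (lexeme lexbuf)] }
| eof                { [EOF] }
// ...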

Try to keep semantic analysis such as "...only if the identifier is between . and (" out of your lexer (fslex), and instead save it for your parser (fsyacc). For example, one option would be to keep your lexer ignorant of operations:

let identifier = [ 'a'-'z' 'A'-'Z' ]+    
// ...
rule tokenize = parse
// ...
| '.' { DOT }
| '(' { LPAREN }
| identifier { ID (lexeme lexbuf) }
// ...

and then in fsyacc solve the problem with a rule like:

| DOT ID LPAREN { match operations.TryFind($2) with
                  | Some op -> Ast.Op(op)
                  | None    -> Ast.Id($2) }
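
For context, here is a hedged sketch of the supporting definitions this rule assumes; the Operation cases and the Ast shape are invented for illustration and will differ in your code:

// Hypothetical AST and operations map - adjust to your own definitions:
type Operation = Op1 | Op2

type Ast =
    | Op of Operation
    | Id of string

let operations : Map<string, Operation> =
    [ "op1", Op1
      "op2", Op2 ] |> Map.ofList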

UPDATE in response to comment:

Perhaps the following then in your lexer:

let identifier = [ 'a'-'z' 'A'-'Z' ]+
// Note: define this map in the header { } block of your .fsl file,
// since it is ordinary F# code, not a regex macro.
let operations =
  [
    "op1", OP1
    "op2", OP2
    //...
  ] |> Map.ofList

// ...
rule tokenize = parse
// ...
| '.' { DOT }
| '(' { LPAREN }
| identifier
  {
    let input = lexeme lexbuf
    match operations |> Map.tryFind input with
    | Some(token) -> token
    | None -> ID(input)
  }
// ...

and in your parser:

| DOT ID LPAREN { ... }
| DOT OP1 LPAREN { ... }
| DOT OP2 LPAREN { ... }
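
Note that OP1 and OP2 must also be declared as tokens in your fsyacc file for these rules to compile; a minimal sketch (the token names are the illustrative ones from above):

// In the .fsy file:
%token OP1 OP2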

Thus you have enforced, in your parser, the rule that IDs and operations must come between a DOT and an LPAREN, while keeping your lexer simple, as it should be (it just provides a stream of tokens, with little concern for how the tokens relate to each other).
