
Combinator parsing questions ... is performance problematic?

15 replies
marius
Joined: 2008-08-31,
User offline. Last seen 3 years 19 weeks ago.
Hi all,

I'd like to parse a large CSV file using a combinator parser, so I have the following simple example:

import scala.util.parsing.combinator.Parsers
import scala.util.parsing.input.StreamReader
import java.lang.Character.{isLetterOrDigit, isSpace}

class CSVParser extends Parsers {
  type Elem = Char
 
  var fnc: List[String] => Unit = l => ()
 
  lazy val item = elem("elem", c => {isLetterOrDigit(c) || c == '_'}).+ ^^ {_ mkString("")}
  lazy val spaces = elem("elem", isSpace).*
  lazy val sep = spaces ~ elem(',') ~ spaces
  lazy val expr = item ~ (sep ~> item).* ^^ {case l ~ r => l :: r}
  lazy val exprs = ((expr <~ (elem('\r') ~ elem('\n'))) ^^ (fnc(_))).+
 
  def parse(in: java.io.Reader)(f: List[String] => Unit) {
    fnc = f
    exprs(StreamReader(in))
  }
}

I have a 64 MB CSV file and pretty soon I get:

Exception in thread "main" java.lang.StackOverflowError
    at scala.collection.immutable.Page.latest(PagedSeq.scala:220)
    at scala.collection.immutable.Page.latest(PagedSeq.scala:220)
    at scala.collection.immutable.Page.latest(PagedSeq.scala:220)


The second thing is that if I read the file using a Java BufferedReader and feed my parser line by line, it takes more than one minute to parse the 64 MB. A pure Java StringTokenizer approach takes about 5 seconds. Any thoughts?
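The line-by-line hybrid can be sketched as follows. This is only a sketch of the idea, not the code actually benchmarked: `LineCsv` and `parseByLine` are hypothetical names, and the grammar is simplified to a RegexParsers row parser.

```scala
import java.io.{BufferedReader, FileReader}
import scala.util.parsing.combinator.RegexParsers

// Simplified row grammar, standing in for the CSVParser above.
object LineCsv extends RegexParsers {
  val item: Parser[String] = """[\w]+""".r
  val row: Parser[List[String]] = repsep(item, ",")
}

def parseByLine(path: String)(f: List[String] => Unit): Unit = {
  val br = new BufferedReader(new FileReader(path))
  try {
    var line = br.readLine()
    while (line != null) {
      // The parser only ever sees one line, so the combinator machinery
      // never pages the whole 64 MB input into memory.
      if (line.nonEmpty)
        LineCsv.parseAll(LineCsv.row, line) match {
          case LineCsv.Success(fields, _) => f(fields)
          case _                          => () // skip malformed lines
        }
      line = br.readLine()
    }
  } finally br.close()
}
```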

Are combinator parsers fitful for parsing large files?



Br's,
Marius
Randall R Schulz
Joined: 2008-12-16,
User offline. Last seen 1 year 29 weeks ago.
Re: Combinator parsing questions ... is performance problematic

On Thursday September 24 2009, Marius Danciu wrote:
> Hi all,
>
> I'd like to parse large csv file using combinator parser ...
>
> ...
>
> Are combinator parsers fitful for parsing large files?

Fitful?

I tend to think a format as simple as CSV (even if you have variations
on the literal "comma-separated" part) is not best served by a parser
formalism as rich as an LL* parser.

Similarly, I don't think it will be easy, if at all possible, to make a
combinator parser perform as well as a simpler, hand-written parser (or
the Java StreamTokenizer), at least not for a parsing task that is so
manageable using such simpler techniques.

> Br's,
> Marius

Randall Schulz

Michael Davey
Joined: 2009-09-10,
User offline. Last seen 42 years 45 weeks ago.
Re: Combinator parsing questions ... is performance problematic

hi Marius

I use String.split and happily parse CSV files with 1,000,000+ records in
less than a minute

michael
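The split-based approach can be sketched like this. It is a minimal sketch under stated assumptions: `parseCsv` is a hypothetical name, and it assumes fields contain no quoted commas (plain String.split does no grammar work).

```scala
import scala.io.Source

def parseCsv(path: String)(f: List[String] => Unit): Unit = {
  val source = Source.fromFile(path)
  try {
    // One row per line; reading lazily, so the whole file
    // never sits in memory at once.
    for (line <- source.getLines() if line.nonEmpty)
      f(line.split(",").map(_.trim).toList)
  } finally source.close()
}
```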


marius
Joined: 2008-08-31,
User offline. Last seen 3 years 19 weeks ago.
Re: Combinator parsing questions ... is performance problematic
Hi,

I understand that split/StringTokenizer are really easy to use in many cases. However, there may be more complex grammars that I'd like to handle with combinator parsers on pretty large files.

Can anyone explain why combinator parsers don't even compare (based on simple tests I made) with raw imperative parsing? I expected some overhead from the functional style of combinator parsers, but the difference is quite significant.


Br's,
Marius


marius
Joined: 2008-08-31,
User offline. Last seen 3 years 19 weeks ago.
Re: Combinator parsing questions ... is performance problematic
Any other thoughts please?

Thanks,
Marius



Jorge Ortiz
Joined: 2008-12-16,
User offline. Last seen 29 weeks 4 days ago.
Re: Combinator parsing questions ... is performance problematic
I think there's a version of combinator parsers somewhere (in 2.8-land? in paulp's hidden attic of awesome Scala code?) that is memoized and thus substantially faster.

I don't think the original parser combinator library was written with performance particularly in mind.

--j




marius
Joined: 2008-08-31,
User offline. Last seen 3 years 19 weeks ago.
Re: Combinator parsing questions ... is performance problematic
Thanks, Jorge. I would definitely like to see those goodies!

Br's,
Marius





Ben Hutchison 2
Joined: 2009-02-14,
User offline. Last seen 42 years 45 weeks ago.
Re: Combinator parsing questions ... is performance problematic

Marius Danciu wrote:
> Can anyone explain why the combinator parsers is not even compare (based on
> simple tests that I made) with raw imperative parsing? .. I mean I expected
> some overhead induced by functional style of comb. parsers but the
> difference is quite significant.
>
Marius,

Bernie Pope gave a talk about parser combinators at the Melbourne Scala
User Group that discussed performance issues related to backtracking.

See the following slides, esp. slide 47:
http://www.cs.mu.oz.au/~bjpop/parser_combinators.pdf

-Ben
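One way the Scala library lets you limit backtracking is the non-backtracking sequencing combinator `~!` (commit). A hedged illustration follows; `Cmds` and its grammar are hypothetical examples, not taken from the slides.

```scala
import scala.util.parsing.combinator.RegexParsers

object Cmds extends RegexParsers {
  val ident = """[a-zA-Z_]\w*""".r

  // With plain ~, a failure after "set" would backtrack and retry `get`;
  // ~! commits once "set" has matched, so the error is reported where the
  // malformed assignment actually went wrong, and no work is redone.
  val set: Parser[Any] = "set" ~! ident ~ ("=" ~> ident)
  val get: Parser[Any] = "get" ~> ident
  val cmd: Parser[Any] = set | get

  def parse(s: String): ParseResult[Any] = parseAll(cmd, s)
}
```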


marius
Joined: 2008-08-31,
User offline. Last seen 3 years 19 weeks ago.
Re: Combinator parsing questions ... is performance problematic
Thank you very much, Ben!


etorreborre
Joined: 2008-09-03,
User offline. Last seen 1 year 22 weeks ago.
Re: Combinator parsing questions ... is performance problematic

The Scala 2.8 preview notes mention "efficiency" as a side effect of using
packrat parsers in the parser combinator library:
http://www.scala-lang.org/node/1564.

I'm not sure, however, whether that will make a big difference for Marius's use case.

Eric.
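For reference, the packrat variant is used roughly like this. This is a sketch assuming the 2.8 PackratParsers mixin; `PackratCsv` and its grammar are hypothetical, and the memoization pays off mainly for grammars with shared or backtracking alternatives, not necessarily for flat CSV.

```scala
import scala.util.parsing.combinator.{PackratParsers, RegexParsers}

object PackratCsv extends RegexParsers with PackratParsers {
  // Each PackratParser memoizes its result per input position, so
  // alternatives re-tried at the same position are not re-parsed.
  lazy val item: PackratParser[String] = regex("""[\w]+""".r)
  lazy val row: PackratParser[List[String]] = repsep(item, ",")

  def parseRow(s: String): ParseResult[List[String]] = parseAll(row, s)
}
```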

Casey Hawthorne
Joined: 2009-09-10,
User offline. Last seen 42 years 45 weeks ago.
Re: Combinator parsing questions ... is performance problematic

I wonder how Haskell combinator parsers "score" against Scala
combinator parsers.

The JVM JIT compiler seems pretty good these days, so I wonder if the
Scala combinator parsers aren't JIT friendly.

--
Regards,
Casey

marius
Joined: 2008-08-31,
User offline. Last seen 3 years 19 weeks ago.
Re: Combinator parsing questions ... is performance problematic
I really hope that the 2.8 packrat parsers will significantly improve performance (thanks for pointing this out, Eric). I'm not sure backtracking support is the most expensive part (though it could be), since even for simple grammars where backtracking is apparently not used (like the CSV parser above), it is pretty slow.

As far as the JIT goes, I tend to think that there aren't many "JIT friendly" problems.

The other concern I have is the Readers: StreamReader doesn't seem to like large inputs. I know I could probably write my own Reader implementation, but it would be nice to have something that plays nicely with large inputs.

Br's,
Marius


Randall R Schulz
Joined: 2008-12-16,
User offline. Last seen 1 year 29 weeks ago.
Re: Combinator parsing questions ... is performance problematic

On Sunday September 27 2009, Casey Hawthorne wrote:
> I wonder how Haskell combinator parsers "score" against Scala
> combinator parsers.

I refer you to this message from earlier in this very thread:

On Sunday September 27 2009, Ben Hutchison wrote:
> ...
>
> Marius,
>
> Bernie Pope gave talk about Parser Combinators at Melbourne Scala
> User Group that discussed performance issues related to backtracking.
>
> See the following slides, esp slide 47
> http://www.cs.mu.oz.au/~bjpop/parser_combinators.pdf
>
> -Ben

> The JVM JIT compiler seems pretty good these days, so I wonder if the
> Scala combinator parsers aren't JIT friendly.
>
> --
> Regards,
> Casey

Randall Schulz

ounos
Joined: 2008-12-29,
User offline. Last seen 3 years 44 weeks ago.
Re: Combinator parsing questions ... is performance problematic

2009/9/28 Randall R Schulz :
> On Sunday September 27 2009, Casey Hawthorne wrote:
>> I wonder how Haskell combinator parsers "score" against Scala
>> combinator parsers.
>
> I refer you to this message from earlier in this very thread:

Thus, apart from empty hijacked threads, we now have recursive threads
too. Let's see what other types of threads are waiting to be found :)


Randall R Schulz
Joined: 2008-12-16,
User offline. Last seen 1 year 29 weeks ago.
Re: Combinator parsing questions ... is performance problematic

On Monday September 28 2009, Jim Andreou wrote:
> 2009/9/28 Randall R Schulz :
> > On Sunday September 27 2009, Casey Hawthorne wrote:
> >> I wonder how Haskell combinator parsers "score" against Scala
> >> combinator parsers.
> >
> > I refer you to this message from earlier in this very thread:
>
> Thus, apart from empty hijacked threads, we now have recursive
> threads too. Lets see what other types of threads are waiting to be
> found :)

Judging from some lists I subscribe to,
infinite threads are a possibility.

> ...

RRS

Mohamed Bana
Joined: 2008-12-20,
User offline. Last seen 3 years 19 weeks ago.
Re: Combinator parsing questions ... is performance problematic
I'd look at Pandoc, which uses parser combinators.  I use it very often and it's pretty fast.

Copyright © 2012 École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland