An efficient way to determine if two context-free grammars are equivalent?
I'm wondering if there's an efficient way of checking whether two context-free grammars are equivalent, besides working out "test cases" by hand (i.e., trying by trial and error to see whether both grammars generate the same strings, and only the same strings).
Thanks!
There is not. In fact, there isn't even an inefficient way!
That is, the problem of determining whether two given CFGs generate the same language is undecidable. In fact, an even stronger statement is true: it is already undecidable whether a given CFG generates every string over its alphabet (the universality problem).
The proof of this can be found in chapter 5 of Sipser's Introduction to the Theory of Computation. The basic idea is that, for any Turing machine $M$ and input, we can construct a context-free grammar that generates exactly the strings that do not encode an accepting computation history of $M$ on that input (under a specific encoding that's more involved than I want to get into here). This grammar generates every string if and only if $M$ does not accept its input, so deciding universality for it would decide acceptance for $M$, which is undecidable.
This paper offers a practical (though necessarily incomplete) approach: "Comparison of Context-Free Grammars Based on Parsing Generated Test Data", http://link.springer.com/chapter/10.1007%2F978-3-642-28830-2_18. The authors generate test sentences from one grammar and parse them with a parser generated from the other; statistical analysis and visualization help interpret the results.
Quoting the abstract:
There exist a number of software engineering scenarios that essentially involve equivalence or correspondence assertions for some of the context-free grammars in the scenarios. For instance, when applying grammar transformations during parser development—be it for the sake of disambiguation or grammar-class compliance—one would like to preserve the generated language. Even though equivalence is generally undecidable for context-free grammars, we have developed an automated approach that is practically useful in revealing evidence of nonequivalence of grammars and discovering correspondence mappings for grammar nonterminals. Our approach is based on systematic test data generation and parsing. We discuss two studies that show how the approach is used in comparing grammars of open source Java parsers as well as grammars from the course work for a compiler construction class.
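In the same spirit, here is a toy sketch of that idea (not the authors' tooling; the grammar encoding, helper names, and example grammars are all my own): randomly sample sentences from one grammar and check each one against the other grammar by a brute-force derivation search.

```python
import random
from collections import deque

# Toy grammars as dicts from nonterminal to alternative right-hand sides
# (hypothetical examples, not from the paper):
# G1: S -> a S b | epsilon     generates a^n b^n for n >= 0
# G2: S -> a S b | a b         generates a^n b^n for n >= 1
G1 = {"S": [["a", "S", "b"], []]}
G2 = {"S": [["a", "S", "b"], ["a", "b"]]}

def sample(grammar, symbol="S", depth=10):
    """Randomly expand nonterminals into one sentence (None if too deep)."""
    if symbol not in grammar:
        return symbol                      # terminal symbol
    if depth == 0:
        return None
    parts = []
    for sym in random.choice(grammar[symbol]):
        part = sample(grammar, sym, depth - 1)
        if part is None:
            return None
        parts.append(part)
    return "".join(parts)

def derivable(grammar, target, start="S"):
    """Brute-force leftmost-derivation search; exponential, toy sizes only.
    Terminates here because sentential forms never hold more than one
    nonterminal and the terminal count is pruned against len(target)."""
    queue, seen = deque([(start,)]), set()
    while queue:
        form = queue.popleft()
        if form in seen:
            continue
        seen.add(form)
        for i, sym in enumerate(form):
            if sym in grammar:             # expand leftmost nonterminal
                for rhs in grammar[sym]:
                    new = form[:i] + tuple(rhs) + form[i + 1:]
                    # prune forms that already carry too many terminals
                    if sum(1 for s in new if s not in grammar) <= len(target):
                        queue.append(new)
                break
        else:                              # all terminals: compare with target
            if "".join(form) == target:
                return True
    return False

# Sample sentences of G1 and try to derive them in G2; any failure is
# evidence of nonequivalence (here: "" is in L(G1) but not in L(G2)).
for _ in range(200):
    s = sample(G1)
    if s is not None and not derivable(G2, s):
        print("counterexample:", repr(s))
        break
```

Like the paper's method, this only produces evidence: a found counterexample disproves equivalence, but the absence of one proves nothing.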
In some cases, it is relatively easy to prove that two formal grammars are equivalent. If both grammars generate finite languages, then you only need to compare the two sets of generated strings to prove that the grammars are weakly equivalent.
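That comparison can be automated. A minimal sketch (with hypothetical toy grammars encoded as dicts): enumerate every sentence up to a length bound by breadth-first search over sentential forms and compare the resulting sets. For a finite language the bound can cover it entirely; for infinite languages the same check is still useful, but inconclusive.

```python
from collections import deque

def language(grammar, max_len, start="S"):
    """All terminal strings of length <= max_len derivable from `start`.
    Caveat: only terminates when nonterminals cannot proliferate without
    bound (no rules like S -> S S with nullable S)."""
    words, seen = set(), set()
    queue = deque([(start,)])
    while queue:
        form = queue.popleft()
        if form in seen:
            continue
        seen.add(form)
        for i, sym in enumerate(form):
            if sym in grammar:             # expand leftmost nonterminal
                for rhs in grammar[sym]:
                    new = form[:i] + tuple(rhs) + form[i + 1:]
                    if sum(1 for s in new if s not in grammar) <= max_len:
                        queue.append(new)
                break
        else:                              # no nonterminal left: a sentence
            words.add("".join(form))
    return words

# Same language (a^n b^n), different nonterminal names:
G1 = {"S": [["a", "S", "b"], []]}
G2 = {"T": [["a", "T", "b"], []]}
print(language(G1, 10) == language(G2, 10, start="T"))  # True
```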
In some other cases, an equivalence proof can be simplified by substituting a non-terminal's definition for the symbol itself. In this example, `A1` is equivalent to `A`, because `D` can be replaced by its definition `B C`:

```
A  --> A B C
A1 --> A D
D  --> B C
```
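That substitution can even be done mechanically. A small sketch (the dict encoding and helper name are my own) that inlines `D`'s single right-hand side into every production and then drops the rule:

```python
# Productions of A1 from the example above, as a dict:
G = {"A1": [["A", "D"]], "D": [["B", "C"]]}

def inline(grammar, nt):
    """Replace each occurrence of `nt` by its single right-hand side,
    then drop `nt`. Only valid when `nt` has exactly one alternative
    and is not (directly or indirectly) recursive."""
    assert len(grammar[nt]) == 1, "nt must have a single alternative"
    rhs = grammar[nt][0]
    return {
        head: [[t for s in alt for t in (rhs if s == nt else [s])]
               for alt in alts]
        for head, alts in grammar.items()
        if head != nt
    }

print(inline(G, "D"))  # {'A1': [['A', 'B', 'C']]} -- same right-hand side as A's rule
```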
Equivalence proofs can sometimes be simplified by rewriting grammars into normal forms, such as Chomsky normal form.
Similarly, there are some recurring patterns that can be proven to be weakly equivalent. All three of the following grammars are equivalent to `(B | A)*`, where `*` is the Kleene star:
```
A1 --> (B*) ((A B*)*)
A2 --> ((B | A)*) (B*)
A3 --> (B | A | (A B))*
```
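Since these right-hand sides are regular, the claim can at least be spot-checked mechanically with Python regexes (a sketch; `A1` is written here using the standard identity `(B|A)* = B*(A B*)*`): enumerate every string over {A, B} up to a length bound and compare memberships.

```python
import itertools
import re

# The reference pattern and the three grammars above as regexes; A1 uses
# the standard decomposition B*(AB*)* of (B|A)*.
patterns = {
    "ref": r"(B|A)*",
    "A1":  r"B*(AB*)*",
    "A2":  r"(B|A)*B*",
    "A3":  r"(B|A|AB)*",
}

def language_up_to(pattern, n):
    """All strings over {A, B} of length <= n matched by `pattern`."""
    return {
        "".join(chars)
        for length in range(n + 1)
        for chars in itertools.product("AB", repeat=length)
        if re.fullmatch(pattern, "".join(chars))
    }

reference = language_up_to(patterns["ref"], 8)
assert all(language_up_to(patterns[g], 8) == reference
           for g in ("A1", "A2", "A3"))
print("all three agree with (B|A)* up to length 8")
```

As with the test-data approach above, agreement up to a bound is evidence, not a proof; for regular patterns, though, full equivalence is decidable via DFA minimization.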