Problem Detail: I'm referring to regular expressions as a language: \begin{equation*} \Sigma = \{ \text{``a''}, \text{``b''}, \text{``(''}, \text{``)''}, \text{``*''}, \dots \} \end{equation*} and \begin{equation*} L \subseteq \Sigma^* \text{, the set of strings that form a legal regular expression.} \end{equation*} I am not referring to their computational power. I did some research but wasn't able to find anything relevant. Intuitively, I would imagine they're context-free, but I am not sure how to attempt a proof. I am trying to use this as a lemma, so a reference, or a recommendation for how to attempt a proof, would be extremely helpful.
Asked By : Steve Peters
Answered By : jmite
Are we talking about regular expressions with only union, concatenation, and star? Consider the following grammar: $$R \to a \mid b \mid c$$ $$R \to R + R$$ $$R \to RR$$ $$R \to R^*$$ $$R \to (R)$$ This captures all regular expressions (over the alphabet $\{a, b, c\}$). So if we're only talking about the Chomsky hierarchy, the language is clearly context-free. Since it allows parentheses, it's not regular: the language of well-nested parentheses is known not to be regular. If you want a proof, apply a homomorphism that erases every symbol except the parentheses; it maps the language of regular expressions onto the language of well-nested parentheses, and regular languages are closed under homomorphism, so the language of regular expressions cannot be regular. I don't know where it fits in terms of a more detailed classification (e.g., $LL(1)$, $LR(0)$, etc.). You could probably find something about this if you searched for "parsing regular expressions."
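To make the context-freeness concrete, the grammar above can be turned into a working recognizer. The sketch below is my own illustration, not part of the original answer: the left-recursive rules are rewritten into the usual precedence layers (union below concatenation below star) so a recursive-descent parser can handle them, and the alphabet $\{a, b, c\}$ and function names are choices made here for the example.

```python
def is_regex(s: str) -> bool:
    """Recognize the language generated by
    R -> a | b | c | R+R | RR | R* | (R),
    with the left recursion removed via precedence layers:
        expr   -> term ('+' term)*      (union, loosest)
        term   -> factor factor*        (concatenation)
        factor -> base '*'*             (star, tightest)
        base   -> 'a' | 'b' | 'c' | '(' expr ')'
    """
    pos = 0

    def peek():
        return s[pos] if pos < len(s) else None

    def expr():
        nonlocal pos
        term()
        while peek() == "+":
            pos += 1
            term()

    def term():
        # Concatenation: one or more factors in a row.
        factor()
        while peek() in ("a", "b", "c", "("):
            factor()

    def factor():
        nonlocal pos
        base()
        while peek() == "*":   # star may be iterated, e.g. a**
            pos += 1

    def base():
        nonlocal pos
        c = peek()
        if c in ("a", "b", "c"):
            pos += 1
        elif c == "(":
            pos += 1
            expr()
            if peek() != ")":
                raise ValueError("unmatched '('")
            pos += 1
        else:
            raise ValueError(f"unexpected symbol at position {pos}")

    try:
        expr()
    except ValueError:
        return False
    return pos == len(s)   # reject trailing input such as 'a)b'
```

Because the parser uses only single-token lookahead, this layered form of the grammar is in fact $LL(1)$, which is one concrete answer to the finer-classification question above.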
Question Source : http://cs.stackexchange.com/questions/16109