Question

This was given as a homework problem but I have already submitted the assignment. I'd like to resolve it at this point for my own satisfaction.

Given that $L_1$ is a linear language and $L_2$ is a regular language, show that $L=L_1L_2$ is a linear language.

Recall that a linear grammar $G=(\Sigma, V, P, \sigma)$ has productions of the form $A\to yBz$ for some $y,z \in \Sigma^*$ and $A,B \in V$.

I use the theorem that every regular language can be generated by a right-linear grammar.

Then I use the theorem that reversing the production bodies of a right-linear grammar yields a left-linear grammar (being a little careful about what I mean by reverse), with $L(\mathrm{rev}(G))=\mathrm{rev}(L(G))$.

Next, each left-linear grammar generates the reverse of a regular language; but the reverse of a regular language is regular, so left-linear grammars also generate exactly the regular languages.

So the productions in a left-linear grammar for $L_2$ are of the form $A \to Ca \mid a$ for some $A, C \in V_{L_2}$ and $a \in \Sigma_{L_2}$.
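For a concrete (purely illustrative) instance of this shape: the regular language $a^+$ is generated by the left-linear grammar

```latex
S \to Sa \mid a
```

which derives $a^k$ right-to-left, adding one $a$ at each step.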

Now on to the show...

What we are looking for is $L = L_1 L_2$, and we must show that $L$ is linear.

So this has the form $S \to yBzCa \mid yBza$

So far so good; the second production is already linear, so it is within the form we need.

I'm having a devil of a time reducing $yBzCa$ however ...

If I introduce $V\to BzC$ that linearizes $S$ but $V$ is not linear ...

If I give $T\to z$ to get $V\to BTC$ I'm not much better off

If I use $V_1\to Bz$ (ok linear!) but then $V\to V_1C$ (not linear)

What is the piece of the puzzle I'm missing?

I have a suspicion that my woes are because I failed to have a production giving $B\Rightarrow^* a$ for some terminal $a \in \Sigma_{L_1}$, but I haven't observed that in the definitions thus far. And further, unless $B$ derives only a terminal, I'm in the same mess. (If $B\to t$ where $t \in \Sigma_{L_1} \cup \{\epsilon\}$, then I think I'm finished, but how do I justify it?)


Solution

The piece of the puzzle is not to start $L_1$ and $L_2$ in parallel, but to first generate the regular suffix (strings in $L_2$), then (when the left-linear grammar finishes) continue with the linear prefix (strings in $L_1$).

(added, deadline closed) Thus we assume we have grammars $G_i = (\Sigma,V_i,P_i,S_i)$ for the languages $L_i$, where $G_1$ is linear and $G_2$ is left-linear (following your own suggestion), generating the regular language "in reverse". Given two strings $w_1$ and $w_2$ in the respective languages, with derivations $S_1 \Rightarrow^*_{G_1} w_1$ and $S_2 \Rightarrow^*_{G_2} B w' \Rightarrow_{G_2} a w' = w_2$ (the last step applying a final production $B \to a$), we glue these together as $S_2 \Rightarrow^*_{G_2} Bw'\Rightarrow S_1 a w' = S_1 w_2 \Rightarrow^*_{G_1} w_1 w_2$. Thus, first generate $w_2$, in the last step switching to the axiom $S_1$ of $G_1$, then generate $w_1$.
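As a concrete illustration (toy grammars chosen only for this example): take $S_1 \to aS_1b \mid ab$ for $L_1 = \{a^nb^n : n\ge 1\}$ and $S_2 \to S_2a \mid a$ for $L_2 = a^+$. After changing the final production $S_2 \to a$ into $S_2 \to S_1a$, a derivation of $aabb\cdot aa$ runs:

```latex
S_2 \Rightarrow S_2 a \Rightarrow S_1 a a \Rightarrow a S_1 b a a \Rightarrow a a b b a a
```

Note that every sentential form contains at most one nonterminal, so the combined grammar is indeed linear.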

Technically this is achieved by requiring $V_1 \cap V_2 = \varnothing$ and taking all productions in $P_1$ and $P_2$, except that each "final" production $A\to a$ with $a\in\Sigma$ in $P_2$ is changed into $A\to S_1a$; the start symbol of the new grammar is $S_2$.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange