One of the issues you'll run into the most is consistency... new articles and new product names are always coming in, so you'll have an "eventual consistency" problem. Three approaches come to mind that I have used to tackle this kind of problem.
1. As suggested, use full-text search in MySQL: loop over your products table, and for each product name run a MATCH ... AGAINST query and insert the product key and article key into a tie table. This is fast; I used to run a system in SQL Server with over 90,000 items being searched against 1B sentences. If you had a multithreaded Java program that chunked up the categories and executed the full-text query per chunk, you might be surprised how fast this is. The downside is that it can hammer your DB server.
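A minimal sketch of that batch idea, assuming a `product_article` tie table and an `articles` table with a full-text index on `body` (all names are made up for illustration); the chunking helper is what each worker thread would be handed:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the batch approach: split the product list into chunks so each
// worker thread can execute the full-text query for its own slice of products.
// Table and column names (articles, body, product_article) are assumptions.
public class FullTextBatch {

    // The per-product query a worker would execute via JDBC, with the
    // product id and product name bound to the two placeholders.
    static final String MATCH_SQL =
        "INSERT INTO product_article (product_id, article_id) " +
        "SELECT ?, id FROM articles " +
        "WHERE MATCH(body) AGAINST (? IN NATURAL LANGUAGE MODE)";

    // Split the products into roughly equal chunks, one per worker thread.
    static <T> List<List<T>> chunk(List<T> items, int numChunks) {
        List<List<T>> chunks = new ArrayList<>();
        int size = (int) Math.ceil(items.size() / (double) numChunks);
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return chunks;
    }
}
```

Each chunk then goes to its own thread (e.g. via an `ExecutorService`), which runs `MATCH_SQL` once per product in its slice.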
2. Use regex. Put all the products in a collection in memory, and run a regex find with that list against every document. This CAN be fast if your docs live in something like Hadoop, where it can be parallelized. You could run the job at night and have it populate a MySQL table. This approach means you will have to start storing your docs in HDFS or some NoSQL solution, or import from MySQL to Hadoop daily, and so on.
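The core of that job (the part each mapper would run) can be sketched like this; it builds one alternation pattern from the product list and scans each document with it. The product names here are made up:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the in-memory regex approach: one alternation pattern over all
// product names, scanned against each document. Names are Pattern.quote()d
// so regex metacharacters inside a product name can't break the pattern.
public class ProductScanner {

    static Pattern buildPattern(List<String> productNames) {
        List<String> quoted = new ArrayList<>();
        for (String name : productNames) {
            quoted.add(Pattern.quote(name));
        }
        // Word boundaries keep short names from matching inside longer words.
        return Pattern.compile("\\b(" + String.join("|", quoted) + ")\\b",
                Pattern.CASE_INSENSITIVE);
    }

    // Return every product mention found in the document (may repeat).
    static List<String> findProducts(String document, Pattern products) {
        List<String> hits = new ArrayList<>();
        Matcher m = products.matcher(document);
        while (m.find()) {
            hits.add(m.group(1));
        }
        return hits;
    }
}
```

In a Hadoop job the pattern would be built once per mapper and the (document key, product hit) pairs emitted for the reducer to write into MySQL.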
3. You can try doing it "at index time": when a record is indexed in Elasticsearch, the extraction happens right then and your facets are built from the result. I have only used Solr for stuff like this. The problem here is that when you add new products you will have to reprocess in batch again anyway, because the previously indexed docs will not have had the new products extracted from them.
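The index-time idea boils down to enriching each document with a facet field before it is sent to the search engine. A rough sketch, where the field names ("text", "products") and the naive substring scan are purely illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of "extract at index time": before a record goes to Elasticsearch
// or Solr, scan its text for known products and attach the hits as a field
// you can facet on. Field names and the contains() scan are assumptions.
public class IndexTimeEnricher {

    static Map<String, Object> enrich(Map<String, Object> doc,
                                      List<String> products) {
        String text = ((String) doc.getOrDefault("text", "")).toLowerCase();
        List<String> found = new ArrayList<>();
        for (String p : products) {
            if (text.contains(p.toLowerCase())) {
                found.add(p);
            }
        }
        Map<String, Object> enriched = new HashMap<>(doc);
        enriched.put("products", found); // facet on this field in ES/Solr
        return enriched;
    }
}
```

The enriched map is what you'd actually index; the consistency caveat above still applies, since docs indexed before a product existed never get the new field value.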
There may be better options, but the one that scales the furthest (if you can afford the machines) is option 2, the Hadoop job... but that also means the biggest architectural change.
These are just my thoughts, so I hope others come up with more clever ideas.
EDIT: As for using NER, I have used NER extensively, mainly OpenNLP, and the problem is that what it extracts will not be normalized. To put it another way, it may extract pieces and parts of a product name, and you will be left dealing with things like fuzzy string matching to align the NER results to the table of products. OpenNLP 1.6 trunk has a component called the EntityLinker, which is designed for this type of thing (linking NER results to authoritative databases). Also, NER/NLP will not solve the consistency problem, because every time you change your NER model, you will have to reprocess everything.
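To make the alignment problem concrete, here is a minimal sketch of matching a raw NER hit back to the product table with plain Levenshtein distance (a real system would generate candidates first, e.g. via n-grams, rather than scanning the whole table per hit; the product names are made up):

```java
import java.util.List;

// Sketch: align a raw NER extraction to the closest product name, rejecting
// matches that are too far away. Pure edit distance is the simplest possible
// choice here, shown only to illustrate the alignment step.
public class NerAligner {

    // Classic two-row Levenshtein distance.
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                        prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    // Return the closest product name, or null if nothing is within maxDist.
    static String align(String nerHit, List<String> products, int maxDist) {
        String best = null;
        int bestDist = maxDist + 1;
        for (String p : products) {
            int d = levenshtein(nerHit.toLowerCase(), p.toLowerCase());
            if (d < bestDist) {
                bestDist = d;
                best = p;
            }
        }
        return best;
    }
}
```

This is roughly the job the EntityLinker component is meant to take off your hands.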