Question

I have a supplier's product file which I've read into memory via CSV. My next step is to make updates and additions to an existing BigCommerce product list (9k products). This is the first time I'll be dealing with an API.

My supplier file doesn't have the BC product ID in it, only its own product ID which is a piece of data on the product in BC.

In terms of requests, I think I'd need to:

  • GET the BC products in chunks (I think it's up to 200 per request)
  • iterate over each chunk, and where a BC product's stored supplier ID matches a row in the supplier file, PUT the updates
  • keep fetching chunks until all pages are done
  • any supplier rows left unmatched are new products, added via POSTs
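The matching step above could be sketched roughly like this, assuming each BC product carries the supplier's ID somewhere on the record (the field names `:supplier_id` and `:bc_id` here are illustrative, not BigCommerce's actual schema):

```ruby
# Partition supplier rows into updates (matched to an existing BC product)
# and creates (no match), given products already fetched from the API.
def partition_rows(supplier_rows, bc_products)
  # Index the fetched BC products by the supplier ID stored on each one
  index = bc_products.each_with_object({}) do |product, h|
    h[product[:supplier_id]] = product[:bc_id]
  end

  updates, creates = supplier_rows.partition { |row| index.key?(row[:supplier_id]) }

  # Attach the BC ID to each matched row so a PUT can target /products/{id}
  updates.each { |row| row[:bc_id] = index[row[:supplier_id]] }
  [updates, creates]
end
```

The actual PUT/POST calls (via HTTParty or the BigCommerce gem) would then loop over `updates` and `creates` separately.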

I think HTTParty is an applicable gem (along with the official BigCommerce Ruby one).

My question is: does the above fit the 'normal' process of how you would attack a problem like this? Or is there a better/standard way this would be approached?

The main thing I'm concerned about is how to iterate over all 9,000 records when I don't know the product IDs ahead of time, short of requesting them all.
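The iteration concern comes down to a standard pagination loop: keep requesting pages until a short (or empty) page signals the end. A minimal sketch, with the HTTP call injected as a lambda so the loop itself is shown without network plumbing (in practice `fetch_page` would wrap an HTTParty GET to something like `/v3/catalog/products?page=N&limit=200` — check the current BigCommerce docs for the exact endpoint):

```ruby
# Walk every BC product page by page. `fetch_page` stands in for the
# HTTP GET and must return an array of products for the given page.
def each_bc_product(limit: 200, fetch_page:)
  page = 1
  loop do
    batch = fetch_page.call(page, limit)
    batch.each { |product| yield product }
    break if batch.size < limit  # a short page means we've reached the end
    page += 1
  end
end
```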


Solution

The above is pretty much how I ended up doing the updates.

The main issue was the order of getting/putting data so that nested objects worked correctly.

E.g. brand and category updates need to happen before you can add products that reference them, and products need to exist before you can add their images or Options/Option Sets.
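That dependency order can be made explicit as a fixed pipeline, so each stage only runs once its prerequisites exist. The step names and the `importer` object here are hypothetical placeholders for whatever methods wrap the actual API calls:

```ruby
# The import order from the answer, as an explicit pipeline.
IMPORT_STEPS = [
  :sync_brands,      # must exist before products reference a brand
  :sync_categories,  # must exist before products reference categories
  :sync_products,    # creates/updates the products themselves
  :sync_images,      # requires the product to already exist
  :sync_options      # Options/Option Sets also require the product
].freeze

def run_import(importer)
  IMPORT_STEPS.each { |step| importer.public_send(step) }
end
```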

For my first version I've stuck with HTTParty, but the next refactor will use Typhoeus to get through the data quicker. Be mindful of BC's API limits, though: for this sort of process you'll run into them pretty quickly (e.g. 4k product updates).
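One way to stay inside the limits is to check the rate-limit response headers after each call and sleep when the window is exhausted. The header names below (`X-Rate-Limit-Requests-Left`, `X-Rate-Limit-Time-Reset-Ms`) are my understanding of BigCommerce's API; verify them against the current API reference before relying on this:

```ruby
# Given the response headers from a BigCommerce API call, return how many
# seconds to sleep before making the next request (0 if quota remains).
def backoff_seconds(headers)
  left  = headers["X-Rate-Limit-Requests-Left"].to_i
  reset = headers["X-Rate-Limit-Time-Reset-Ms"].to_i
  return 0 if left > 0
  reset / 1000.0  # out of quota: wait until the window resets
end
```

The caller would do `sleep backoff_seconds(response.headers)` between requests; this also matters if a parallel client like Typhoeus is firing several requests per window.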

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow