In general, updating millions of rows at once isn't a good idea, especially if you have a database cluster (there will almost surely be replication delays). A better strategy is to split the update into batches.
Yes. There is always a possibility of failure.
See 1 :) Split your table into batches of N records (N from 100 to 1000) and update them batch by batch. One strategy is to make a client job that initiates and monitors these batch updates. (One possible way: add an indexed field that stores the date of the last update, then choose N rows with last_update_date < current_date.)
Comment: by "splitting the table" I didn't mean physically splitting, just the following:
add a field that keeps the date of the last sync, and make it indexed (e.g. last_sync_date);
when the job starts, do the following in a loop:
retrieve the IDs of the next N records (e.g. N=500) with last_sync_date < curdate();
if you didn't get anything, you are done, exit the loop;
otherwise, set interest = (money*rate)/100 and last_sync_date = curdate() for the records with these IDs.
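Sketched as SQL, assuming a table named userTable and a new last_sync_date column (the table, column, and index names here are placeholders, not part of the original schema), the setup and one iteration of the client job could look like:

```sql
-- one-time schema change: add the sync-tracking column and index it
ALTER TABLE userTable ADD COLUMN last_sync_date DATE DEFAULT '1970-01-01';
CREATE INDEX idx_last_sync ON userTable (last_sync_date);

-- one loop iteration: fetch the next batch of not-yet-synced IDs...
SELECT id
FROM userTable
WHERE last_sync_date < CURDATE()
LIMIT 500;

-- ...and, for the IDs returned above, apply the interest and mark them synced
UPDATE userTable
SET interest = (money * rate) / 100,
    last_sync_date = CURDATE()
WHERE id IN (/* the IDs from the previous SELECT */);
```

The client job repeats the SELECT/UPDATE pair until the SELECT returns no rows; because last_sync_date is updated in the same statement, a killed or failed run simply resumes where it left off the next time the job starts.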
I would rather do it as a job written outside of MySQL and scheduled via e.g. cron (because then it's easier to monitor how the job runs and kill it if necessary), but you can, in theory, do it in MySQL too, for example (untested) something like this (I assume that your records have unique IDs stored in the field id):
delimiter |
create event cal_interest
on schedule every 1 day
do
begin
  declare keep_sync int default 1;

  create temporary table if not exists temp_ids (id int) engine=memory;

  repeat
    truncate temp_ids;

    -- pick the next batch of rows that haven't been synced today
    insert into temp_ids (id)
      select id from userTable
      where last_sync_date < curdate()
      limit 500;

    select count(1) into keep_sync from temp_ids;

    -- apply the interest calculation to this batch only
    update userTable
      set interest = (money * rate) / 100,
          last_sync_date = curdate()
      where id in (select id from temp_ids);

  until keep_sync = 0 end repeat;

  drop temporary table temp_ids;
end |
delimiter ;
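One more thing to check if you go the event route: events only fire when the MySQL event scheduler is enabled, and on many installations it is off by default. This is a general property of MySQL, not of this particular job:

```sql
-- check whether the scheduler is currently running
SHOW VARIABLES LIKE 'event_scheduler';

-- enable it for the running server (requires an appropriate privilege)
SET GLOBAL event_scheduler = ON;
```

To make the setting survive a server restart, set event_scheduler = ON in the server's configuration file as well.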