
In order to create more 'data recovery' points than the standard automatic backup policy allows, I'm going to run a cron job that exports SQL dumps from my Cloud SQL instances at intervals throughout the day. I know I can do this with the gcloud command as follows:

gcloud sql export sql [INSTANCE_NAME] gs://[BUCKET_NAME]/sqldumpfile.gz \
    --database=[DATABASE_NAME] --table=[TABLE_NAME1,TABLE_NAME2, ...]
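
For context, the kind of schedule I have in mind is a plain crontab entry along the lines of the sketch below (my-instance, my-bucket and mydb are placeholder names, the export runs every four hours, and % has to be escaped in crontab):

# hypothetical crontab entry: dump one database to Cloud Storage every 4 hours
0 */4 * * * gcloud sql export sql my-instance gs://my-bucket/sqldump-$(date +\%Y\%m\%d-\%H).gz --database=mydb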

There is not much information in Google's support docs about what is actually happening here. What matters to me is whether this process locks tables during the export, the way a mysqldump would (without the appropriate flags).

I know that the gcloud export command includes the --skip-triggers flag, but there is no information about table locking. Can anybody help?

  • Any particular reason you don't want to ask Google support, for which you are probably paying? Commented Jan 23, 2019 at 13:25
  • @molenpad were you able to find an answer? I'm in the same boat. Commented Jan 20, 2020 at 16:18
  • 1
    @MarkS Yes, there is no table locking involved with mysql on cloudsql. To be honest, we managed to avoid doing it this way by running a cron job in kubernetes which issues a gcloud sql backup command every few hours. I'd suggest looking at something like that. Commented Jan 31, 2020 at 14:39
  • Thanks @Molenpad! Commented Jan 31, 2020 at 18:26
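
For anyone landing here later, a minimal sketch of the workaround described in the comments above, written as a plain crontab entry rather than a Kubernetes CronJob for brevity (my-instance is a placeholder instance name):

# hypothetical crontab entry: trigger an on-demand Cloud SQL backup every 6 hours
0 */6 * * * gcloud sql backups create --instance=my-instance --description="cron backup"

You can check that the backups are actually being created on schedule with gcloud sql backups list --instance=my-instance.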
