
I have a Laravel application that must insert/update thousands of records per second inside a for loop. My problem is that my database insert/update rate is only 100-150 writes per second. I have increased the amount of RAM dedicated to my database, but with no luck.

Is there any way to increase the MySQL write rate to thousands of records per second? Please suggest configuration changes.

My storage engine is InnoDB

  • It is pretty hard to give you a definitive answer without knowing the hardware and software environment you are running the mysql instance under. Does the table in question have proper index coverage for the updates? Are we talking about simple inserts and updates or are they dependent on some other db operation? Commented Jun 14, 2017 at 19:15
  • It's a simple insert/update. I have 8 GB of RAM and a Core i5 CPU. It's not a code problem; it's a misconfiguration. Commented Jun 14, 2017 at 19:18
  • There are a lot of factors which control the speed of inserts: host server configuration, MySQL installation configuration (data volumes, etc.), table design - field types, indexes, etc. Can you provide more information about these areas to let us help you more? Commented Jun 14, 2017 at 19:30
  • "its not a code problem . its a misconfiguration" -- how did you determine that? Commented Jun 14, 2017 at 20:49
  • The insert/update issue might be caused by indexes. We can help you only if you share your table schema with us. Commented Jun 15, 2017 at 6:35

2 Answers


InnoDB is an ACID-compliant storage engine: it makes sure your data (inserts/updates) is committed to disk, and that durability is why writes take time:

  1. For columns that change frequently (insert/update), avoid putting indexes on those columns.
  2. For large batches of DML, turn autocommit off (SET autocommit = 0) and commit in batches; be aware that any uncommitted work is lost if the server crashes.
  3. Tune your query if your DML operation is based on a subquery:

Example:

UPDATE TEST SET NAME='XXXX' WHERE (subquery).
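The subquery form above can often be rewritten as a multi-table UPDATE with a join, which MySQL usually optimizes better than an `IN (subquery)` predicate. A rough sketch (the `test` and `staging` table names and columns are made up for illustration):

```sql
-- Subquery form: on older MySQL versions the optimizer may
-- execute the inner query as a dependent subquery, once per row.
UPDATE test
SET name = 'XXXX'
WHERE id IN (SELECT test_id FROM staging WHERE status = 'pending');

-- Join form: the same update expressed as a multi-table UPDATE,
-- which can use the indexes on the join columns directly.
UPDATE test
JOIN staging ON staging.test_id = test.id
SET test.name = 'XXXX'
WHERE staging.status = 'pending';
```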

  • Batch INSERTs -- That is, do one INSERT with up to 100 rows.
  • Group into transactions -- BEGIN; do several things; COMMIT;
  • Change innodb_flush_log_at_trx_commit=2 (see documentation for discussion of speed versus security tradeoff.)
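Putting those three points together, the write path looks roughly like this (table and column names are invented for the sketch; read the MySQL manual on `innodb_flush_log_at_trx_commit` before relaxing durability):

```sql
-- my.cnf / my.ini, under [mysqld]:
--   innodb_flush_log_at_trx_commit = 2
-- The log is written at commit but flushed to disk only about once
-- per second, so up to ~1s of transactions can be lost on a crash.

START TRANSACTION;

-- One multi-row INSERT instead of many single-row statements:
INSERT INTO sensor_readings (sensor_id, reading, taken_at) VALUES
  (1, 20.4, NOW()),
  (1, 20.6, NOW()),
  (2, 19.8, NOW());
  -- ...up to roughly 100 rows per statement

COMMIT;
```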

If you need further discussion, show us SHOW CREATE TABLE and some of the queries involved.
