Single or Multiple Transactions?

I have a section of MySQL database code that occasionally runs very slowly. It takes about 100 rows of 35 columns of data from an array called ExamList() and writes them to the database. When it runs slowly, I can actually watch the rows being added to the table. I’m using a quad-core Mac Pro that doesn’t normally suffer from any performance issues, but I’m trying to determine whether this is a scenario unique to running MySQL on my computer or whether it will run slowly on the server as well.

I thought that using InsertRecord stores up the statements in memory and that the db.commit statement runs them all at once. Is that not how it works? If not, what would be the fastest way to add the data? I’ve read the recommendations about dropping triggers and indexes and then adding them back after the transaction, but I don’t use either of those right now.

Here’s a segment of what I’m currently using:

dim dr as DatabaseRecord
dr = new DatabaseRecord

dim x as integer

for x = 0 to ubound(ExamList) - 1
  
  dr.IntegerColumn("savedexamsID") = isavedExamsID
  dr.IntegerColumn("calid") = calid
  dr.column("username") = Session.username
  dr.column("examname") = SimExam.examname
  dr.integercolumn("questionid") = val(ExamList(x,0))
  dr.column("heading") = ExamList(x,1)
  dr.column("category") = ExamList(x,2)
  dr.column("question") = ExamList(x,3)
  dr.column("opta") = ExamList(x,4)
  dr.column("optb") = ExamList(x,5)
  dr.column("optc") = ExamList(x,6)
  dr.column("optd") = ExamList(x,7)
  dr.column("answer") = ExamList(x,8)
  dr.column("rationale") = ExamList(x,9)
  dr.column("citations") = ExamList(x,10)
  dr.column("examnum") = ExamList(x,11)
  dr.column("html") = ExamList(x,12)
  dr.column("questiontype") = ExamList(x,13)
  dr.column("draga") = ExamList(x,14)
  dr.column("dragb") = ExamList(x,15)
  dr.column("dragc") = ExamList(x,16)
  dr.column("dragd") = ExamList(x,17)
  dr.column("opte") = ExamList(x,18)
  dr.column("optf") = ExamList(x,19)
  dr.column("optg") = ExamList(x,20)
  dr.column("opth") = ExamList(x,21)
  dr.column("article1") = ExamList(x,22)
  dr.column("citations1pgfrom") = ExamList(x,23)
  dr.column("citations1pgto") = ExamList(x,24)
  dr.column("useranswer")= ExamList(x,25)
  dr.column("correct") = ExamList(x,26)
  dr.integercolumn("timespent") = val(ExamList(x,27))
  dr.column("hotspotimg") = ExamList(x,28)
  dr.column("tagged") = ExamList(x,29)
  dr.column("studyplanname") = "NONE"
  
  mydb.InsertRecord "savedexamdetails",dr
next x

mydb.commit

100 rows as one transaction isn’t much IF the columns are small.
But I see a lot of those are string columns - how many of them are inserting very large textual data?
That likely factors into it.

The other question I have is: is it important that all 100 rows be in one transaction?

Hm… well, at first blush I don’t see anything that raises red flags. The question I have is: why store them up? I would think writing them as the data changes would be considerably faster…

The purpose of transactions is to guarantee that either all the data makes it in or none of it does. If you are willing to allow some of the data to be written and some not when an error occurs, then you don’t require transactions.

Good point. Are you doing error checking on the inserts and using transactions? Putting all of this inside a transaction should greatly speed it up, since nothing is committed to the database until the last one is done.

There are other ways to achieve “all or none” without doing it all in a transaction from the client code:

Shove the rows into a temporary table without a transaction, and watch those inserts for errors. Then, if inserting into the temporary table worked without errors, insert into the real table in a transaction via a SELECT statement - that’s all done server side. You get the speed of not using transactions between the client and the server, plus the safety of a transaction inserting into the real table. A sketch follows below.
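For illustration, a minimal Xojo sketch of that staging approach, reusing dr and ExamList from the code above and assuming a hypothetical staging table savedexamdetails_tmp with the same columns as savedexamdetails:

// Stage the rows without a transaction (autocommit applies to each insert)
mydb.SQLExecute "CREATE TEMPORARY TABLE IF NOT EXISTS savedexamdetails_tmp LIKE savedexamdetails"

dim stagedOK as boolean = true

for x = 0 to ubound(ExamList) - 1
  // ... fill dr exactly as in the original loop ...
  mydb.InsertRecord "savedexamdetails_tmp", dr
  if mydb.Error then
    stagedOK = false
    exit for
  end if
next x

// If staging succeeded, copy everything to the real table in one server-side transaction
if stagedOK then
  mydb.SQLExecute "BEGIN"
  mydb.SQLExecute "INSERT INTO savedexamdetails SELECT * FROM savedexamdetails_tmp"
  if mydb.Error then
    mydb.Rollback
  else
    mydb.Commit
  end if
end if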

If you can see the rows being inserted into the table one by one then I doubt that a transaction is actually being used. If you are in AutoCommit mode then you need to specifically send a command to start the transaction. One way to test this is to perform a rollback rather than a commit. If you see the records in the database then a transaction was not used.
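A sketch of that rollback test, reusing the insert loop above:

mydb.SQLExecute "BEGIN"
// ... run the InsertRecord loop as before ...
mydb.Rollback // instead of mydb.Commit
// If the new rows still show up in savedexamdetails, no transaction was in effect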

One thing to check is the engine assigned to your table. For example, if you are using MyISAM then you cannot use transactions. Unless you have a specific need you should really be using InnoDB.
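A quick way to check the engine from Xojo (a sketch; information_schema is standard MySQL, and the ALTER is only needed if the engine turns out to be MyISAM):

dim rs as RecordSet
rs = mydb.SQLSelect("SELECT ENGINE FROM information_schema.TABLES WHERE TABLE_NAME = 'savedexamdetails'")
if rs <> nil and not rs.EOF then
  System.DebugLog "Engine: " + rs.Field("ENGINE").StringValue
end if

// Switching an existing table to InnoDB:
// mydb.SQLExecute "ALTER TABLE savedexamdetails ENGINE=InnoDB"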

The main purpose of transactions is to guarantee atomicity. However, they also have the benefit of improving performance due to less disk I/O.

Thank you for the replies! Most of the fields are less than 128 characters in length, but there are two fields that can be a couple of sentences long. The longest either of those two fields could be is about a 4-5 sentence paragraph.

I’m only doing error checks when the database connection is established earlier in the routine. I don’t have an error check on the insert itself, and I now realize I should.

The data is saved all at once because this forms a ‘crash protection’ mechanism and is set up at the beginning of the exam. The routine creates a line-by-line copy of the exam, then updates several fields in each row as the user moves from question to question. That way, if anything crashes, the app can replicate the exact exam, where the user left off, etc.

It is definitely an ‘ALL OR NONE’ write to the database, so it sounds like a transaction would be helpful.

What I couldn’t figure out by reading other threads on transactions, though, is: in order to execute multiple statements like this, do I need to create one big SQL string consisting of a series of individual INSERT statements and then perform a single SQLExecute() to run it as one transaction? I use transactions, but so far I’ve only used them for single SQL statements like a DELETE.

From the MySQL docs:

“By default, MySQL runs with autocommit mode enabled. This means that as soon as you execute a statement that updates (modifies) a table, MySQL stores the update on disk to make it permanent.”

So you only use a COMMIT if you have started a transaction with BEGIN:

db.SQLExecute("BEGIN")
For i ...
  db.InsertRecord ...
Next
db.Commit()

As Kevin noted, you’re probably NOT using a transaction, as there’s no SQLExecute("begin transaction") at the outset.
So you’re very likely inserting one row at a time, which can be slower.

dim dr as DatabaseRecord
dr = new DatabaseRecord

mydb.sqlexecute "begin transaction" // <<<<<< start the transaction explicitly

dim x as integer

for x = 0 to ubound(ExamList) - 1

  dr.IntegerColumn("savedexamsID") = isavedExamsID
  ... etc ...
  dr.column("studyplanname") = "NONE"

  mydb.InsertRecord "savedexamdetails",dr

  // <<<<<< check here if this insert worked
next x

mydb.commit
// <<<<<< check if the commit worked
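Filling in those two checks, a sketch using the classic Database API’s Error and ErrorMessage properties:

dim failed as boolean

// inside the loop, after each insert:
mydb.InsertRecord "savedexamdetails", dr
if mydb.Error then
  System.DebugLog "Insert failed: " + mydb.ErrorMessage
  failed = true
  exit for
end if

// after the loop:
if failed then
  mydb.Rollback
else
  mydb.Commit
  if mydb.Error then
    System.DebugLog "Commit failed: " + mydb.ErrorMessage
  end if
end if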

Hi Peter.

I don’t do much SQL using Xojo but the usual way is that you:

  1. start a transaction
  2. execute several statements individually
  3. commit the transaction

You should check the result after each execute and after the commit, because an error might cause the entire transaction to roll back.

Ideally, you should be running MySQL with autocommit enabled: if you are only executing single statements, you don’t need to wrap them in a transaction (you only need a transaction when you want to execute multiple statements together). As Eli says above, autocommit is on by default, but the config may have been changed.
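A one-query sketch for checking the current setting from Xojo (@@autocommit is a standard MySQL session variable; 1 means enabled):

dim rs as RecordSet = mydb.SQLSelect("SELECT @@autocommit")
if rs <> nil then
  System.DebugLog "autocommit = " + rs.IdxField(1).StringValue
end if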

I did notice that you said you were copying the exam rows. If the master copy of the exam already exists in the database, it is usually better to use INSERT ... SELECT rather than reading the rows into Xojo and re-inserting them.
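For example, a sketch along these lines, assuming a hypothetical master table exammaster and a hypothetical iExamNum variable identifying the exam (the column list is abbreviated):

mydb.SQLExecute "BEGIN"
mydb.SQLExecute "INSERT INTO savedexamdetails (savedexamsID, questionid, heading, category, question) " _
  + "SELECT " + str(isavedExamsID) + ", questionid, heading, category, question " _
  + "FROM exammaster WHERE examnum = " + str(iExamNum)
if mydb.Error then
  mydb.Rollback
else
  mydb.Commit
end if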

I think AutoCommit explains exactly what is happening. In particular, it could slow the system to a crawl if 35 users were doing this at the same time.

I currently use Bob Keeney’s SQLExecuteRaiseOnError method. Would I position the InsertRecord loop between Try and Catch, like the abbreviated example below? (I’m trying to make sure I have the format correct.)

mydb.SQLExecuteRaiseOnError("START TRANSACTION")
try

  for x = 0 to ubound(ExamList) - 1
    dr.integercolumn("questionid") = val(ExamList(x,0))
    dr.column("heading") = ExamList(x,1)
    dr.column("category") = ExamList(x,2)
    dr.column("question") = ExamList(x,3)
    mydb.InsertRecord "savedexamdetails", dr
  next x

  mydb.commit // commit belongs inside the try, so it is skipped if an insert raises

catch
  mydb.rollback
end try

Hi.

You can send a JSON string to a stored procedure (SP), so you just make one call to the server and it processes the data there.

Regards
Mauricio
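A minimal sketch of that idea, assuming a hypothetical stored procedure save_exam_json that parses the JSON server side:

dim js as new JSONItem
// ... append one child JSONItem per exam row ...

dim ps as MySQLPreparedStatement
ps = MySQLPreparedStatement(mydb.Prepare("CALL save_exam_json(?)"))
ps.BindType(0, MySQLPreparedStatement.MYSQL_TYPE_STRING)
ps.Bind(0, js.ToString)
ps.SQLExecute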

I can’t say, as I don’t know how his code works, but that seems likely.

Hi Peter.

If you are using transactions a lot and you have several users hitting the database at once, you may receive deadlock errors. When this error occurs, your entire transaction may be rolled back automatically, and you need to retry the entire transaction. For safety, if we receive a deadlock error in our code, we always execute a rollback before we retry, as I have read that different types of deadlocks can roll back either the last statement or the entire transaction.

Googling for InnoDB Deadlocks should give you more information.
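A sketch of that retry pattern (MySQL reports deadlocks as error 1213; the retry limit here is arbitrary):

dim attempts as integer
dim done as boolean

do
  attempts = attempts + 1
  mydb.SQLExecute "BEGIN"
  // ... run the inserts and mydb.Commit as above ...
  if mydb.Error and mydb.ErrorCode = 1213 then
    mydb.Rollback // always roll back before retrying a deadlock
  else
    done = true // success, or an error that should not be retried
  end if
loop until done or attempts >= 3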

Update: The transaction code I tried functions normally, so that seems to be a valid way to implement it.

However, I’ve discovered that the delay still occurs, but I now think the row inserts aren’t the root of the problem.

So, I click a button, the database code begins to run, and the program hangs at random positions in the routine. (I am logging every line to try to find the culprit, and the position changes nearly every time.)

It also happens with different data each time, so the data itself doesn’t seem to be the issue. There’s a countdown clock running in the app and it stops running.

But, the second I click another window like a text editor, IDE, Microsoft Excel, anything, and the browser loses focus, everything updates immediately and the clock starts running again. I’ve replicated this every time it hangs up. Just click any window other than the browser and all the data pops onto the screen and the clock starts running again.

Is there a screen refresh for web apps that needs to be forced or something?