MySQL - load data local infile?

I’m getting an error whenever I try to execute the MySQL “load data local infile” command in Xojo to import a file of csv data. The same command works when used in Sequel Pro or Workbench. The error I get is: The used command is not allowed with this MySQL version.

Well, it is allowed, because I’m using it from those other tools.

I understand it may be a my.cnf issue as well as a security issue.

If someone can share some experience with this, I would appreciate it. I know there are other ways to read the data in.

Thanks.

This issue has come up again. Why can’t I run “load data local infile” from Xojo when I can run it from MySQL Workbench and Sequel Pro on the same local OS X machine?

I’m connecting to an Amazon MySQL instance.

Anyone with any experience on this?

Is the “local-infile” directive set in my.cnf?

I am a newbie to AWS RDS as well, and their docs are hard to follow, so I don’t know. I’m looking at that now. I just think that if it weren’t set, I would not be able to run the command from Workbench or Sequel Pro. Puzzling.

You have a point. local_infile needs to be enabled on BOTH sides of the connection for it to work, since it is a security concern. My guess is that it is not enabled by default in the Xojo MySQL plugin.
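
On the server side you can at least check it with a quick query. On RDS the setting normally comes from the DB parameter group rather than SET GLOBAL, but the check itself is the same:

SHOW GLOBAL VARIABLES LIKE 'local_infile';
-- expect ON (or 1); on a server you control you could also run:
-- SET GLOBAL local_infile = 1;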

Maybe someone from Xojo will chime in and clarify the use of client configuration options besides the basic host, port, etc.?

With the mysql command line client you can put a file in your home directory “~/.my.cnf” and put options in a “[client]” section. You could give that a try and see if the plugin reads it.
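
Something like this, just as a sketch (assuming the plugin honors the same [client] options the command-line tools do, which is not guaranteed):

[client]
# allow LOAD DATA LOCAL INFILE from this client
local-infile = 1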

It does appear to be enabled in AWS RDS. The value of the local_infile parameter is “1”; the other option is “0”.

Tried creating the .my.cnf file. No joy. I’m just fishing around on this so I’ll wait and see what someone else may say. I assume I’m not the first person to try this. Thanks.

I’m sure you know, but if you can put the file on the server and load it from there you can bypass the “local” part.
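
That would be the plain LOAD DATA INFILE form, with a path that exists on the database server itself (the path below is hypothetical, and on RDS you don’t get filesystem access anyway):

LOAD DATA INFILE '/var/lib/mysql-files/TestTargetList3000Tabs.txt'
INTO TABLE temp
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;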

The users are getting spreadsheets that they prepare and save as CSV or tab-delimited files. They couldn’t handle uploading to a server. Thanks for the tips.

How are you trying to execute this from Xojo?

Basically I’m building the SQL string, which I also test in Sequel Pro and/or Workbench. The string looks like this:

vSQL = "LOAD DATA LOCAL INFILE '/Users/dm/Desktop/TestTargetList3000Tabs.txt' INTO TABLE temp FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\r\n' IGNORE 1 LINES;"

And I’m trying to execute it both as a prepared statement and as a regular SQLExecute:

Dim ps As MySQLPreparedStatement
db.SQLExecute(vSQL)
'ps = db.Prepare(vSQL)
'ps.SQLExecute

Both the client & server have to be configured to allow this, and I don’t think there’s any way for you to configure the client side from inside Xojo.

Is there some way to get confirmation on this? It’s quite a difference in performance.

Not sure if this is worth pursuing since I can’t download the results as a file. It takes about 3.5 minutes to create a 3000-record table using Xojo to read the tab-delimited file one row at a time. Using load data local infile it takes 0.75 seconds. For now we can live with that, and I’ll look elsewhere for performance improvements. Thanks for the input.

Are you inserting each row inside its own transaction?
I would say:

  1. create a temp table
  2. insert into that with one giant transaction
  3. select from this temp table into the real one, again inside one transaction
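
In SQL terms, the shape would be something like this (table and column names are made up for the sketch):

CREATE TABLE temp_targets LIKE targets;

START TRANSACTION;
INSERT INTO temp_targets (name, email) VALUES ('a', 'a@example.com'), ('b', 'b@example.com');
-- ...more multi-row inserts for the rest of the file...
COMMIT;

START TRANSACTION;
INSERT INTO targets SELECT * FROM temp_targets;
COMMIT;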

Duane, you could get significantly better performance by writing fewer & larger SQL inserts instead of going line by line, inserting each one (if that is what you mean). I do some much larger tables like this and the only way to get reasonable speed is to reduce the number of inserts by combining lines. Locking the tables also helps a lot. 3000 records is not very much so this could be greatly improved, I think.
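
As a rough Xojo sketch of the combined-insert idea (rows(), EscapeField, the column names, and the batch size are all placeholders, not your actual code):

Dim values() As String
Const kBatchSize = 500   ' arbitrary; tune to taste

For i As Integer = 0 To rows.Ubound
  ' escape each field before embedding it in the statement
  values.Append("('" + EscapeField(rows(i).Name) + "', '" + EscapeField(rows(i).Email) + "')")

  If values.Ubound + 1 >= kBatchSize Or i = rows.Ubound Then
    db.SQLExecute("INSERT INTO temp (name, email) VALUES " + Join(values, ", "))
    If db.Error Then
      ' handle db.ErrorMessage here
      Exit
    End If
    ReDim values(-1)   ' start the next batch
  End If
Next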

I think Norman and I are on the same wavelength.

To Norman’s comment I would add:

  1. create temp table
  2. lock table & disable keys
  3. insert data in as few inserts as possible
  4. re-enable keys and unlock table
  5. rename tables (this is a nearly instantaneous, atomic operation)

You can do it this way on a live DB and swap/update the data without issue, at least for read-only use.
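
In MySQL terms that recipe is roughly the following (table names are made up; note that ALTER TABLE ... DISABLE KEYS only skips non-unique index maintenance and mainly pays off on MyISAM):

CREATE TABLE targets_new LIKE targets;

LOCK TABLES targets_new WRITE;
ALTER TABLE targets_new DISABLE KEYS;

INSERT INTO targets_new (name, email) VALUES ('a', 'a@example.com'), ('b', 'b@example.com');
-- ...more combined inserts...

ALTER TABLE targets_new ENABLE KEYS;
UNLOCK TABLES;

RENAME TABLE targets TO targets_old, targets_new TO targets;
DROP TABLE targets_old;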

This is what I will do, but I’m not sure exactly how. I’ll look around for a solution, and if I can’t find one I’ll come back here and post.

Yes, unfortunately, that is what I mean. I’ve not done the larger transactions before, so I will have to figure it out. To keep this thread on topic, I’ll post back separately about making these sorts of larger SQL inserts.

I agree. I think you’ll see me back here shortly asking about this subject.

begin transaction
insert a zillion rows
commit
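
In Xojo terms, with db being the existing MySQLCommunityServer connection and statements() a hypothetical array of already-built INSERT strings, a sketch of that would be:

Dim ok As Boolean = True

db.SQLExecute("START TRANSACTION")

For i As Integer = 0 To statements.Ubound
  db.SQLExecute(statements(i))
  If db.Error Then
    ok = False
    Exit   ' stop on the first failure
  End If
Next

If ok Then
  db.Commit
Else
  db.Rollback   ' undo the partial import
End If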