Database at Max Size Issue

I took the advice of this board and concatenated 5000 INSERT statements. These statements are executed with a single SQLExecute call inside a transaction. The performance is fantastic: I am able to insert 25 million records in about 8 hours. I found a problem when the Microsoft SQL Server database reached its size limit. The file is set to autogrow by 1 MB when needed, but only a fraction of the 5000 records are inserted, and the database reports no error code. I fixed the problem by manually increasing the size of the database, but I was wondering how to prevent this issue in the future.
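For reference, the batch I send looks roughly like the sketch below, assuming a hypothetical table orders(id INT, amount MONEY); the application concatenates the INSERTs into one long string and passes it to a single execute call.

BEGIN TRANSACTION;
INSERT INTO orders (id, amount) VALUES (1, 10.00);
INSERT INTO orders (id, amount) VALUES (2, 12.50);
-- ... 4998 more concatenated INSERT statements ...
COMMIT;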

Why not start a transaction, run one insert at a time in a loop until all 5000 are done, and then end the transaction?

You may not need to put everything in one statement to get good speed.
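Expressed purely in Transact-SQL (reusing the hypothetical orders table from above), the loop version would look something like:

BEGIN TRANSACTION;
DECLARE @i INT = 1;
WHILE @i <= 5000
BEGIN
    -- one row per statement, but all inside a single transaction
    INSERT INTO orders (id, amount) VALUES (@i, 0.00);
    SET @i = @i + 1;
END;
COMMIT;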

I haven’t tested that approach for speed against concatenating the insert statements, because another poster suggested concatenating and the results were so fast. I will run a speed test; perhaps concatenating is not worth the effort.

You’d likely have to be the database admin to do it, but you can issue a Transact-SQL command to increase the size:

https://msdn.microsoft.com/en-us/library/bb522469.aspx

USE master;
GO
-- database_name and logical_file_name are placeholders; SIZE takes the
-- new total size for the file, e.g. 500MB
ALTER DATABASE database_name
MODIFY FILE
    (NAME = logical_file_name,
     SIZE = 500MB);
GO

But this does take time.
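If you want to stop the problem recurring rather than fix it after the fact, the usual approach (not specific to this thread, just standard SQL Server practice) is to raise the autogrow increment, or pre-size the file before a big load, so the engine isn't growing 1 MB at a time. A sketch with placeholder names:

USE master;
GO
-- grow in larger chunks instead of 1 MB at a time; names are placeholders
ALTER DATABASE database_name
MODIFY FILE
    (NAME = logical_file_name,
     FILEGROWTH = 256MB);
GO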

Thanks Norman,

I am the database admin, and increasing the size solved the problem. I just wonder what I can do to get an error raised when not all of the records are inserted.

When you try to insert and eventually commit the transaction, you should get an error that the transaction failed, and you can issue a rollback.

At that point you should get an error message saying that the database is full, and you can take appropriate action.
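Server side you can make that explicit with TRY/CATCH; a sketch, assuming SQL Server 2012 or later for THROW (a full filegroup typically raises error 1105):

BEGIN TRY
    BEGIN TRANSACTION;
    -- ... the 5000 concatenated INSERT statements go here ...
    COMMIT;
END TRY
BEGIN CATCH
    -- roll back whatever partially succeeded, then re-raise the error
    IF @@TRANCOUNT > 0
        ROLLBACK;
    THROW;
END CATCH;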

You are a genius, Norman. I was checking for an error after executing the SQL statement and received no error; the error shows up after the commit.