"ERROR:  duplicate key value violates unique constraint" on an index.

I’m having an issue with a database update and I’m wondering if anyone has some insight as to what I may be doing wrong or if I’ve discovered a bug.

Error is: ERROR: duplicate key value violates unique constraint "emp_idx_empid"

“empid” is the employee ID and it is unique, “emp_idx_empid” is its index.

Windows service, PostgreSQL 8.2, Xojo 2014r2 through 2015r1.

When my Windows service inserts new records to the PostgreSQL database I get the above error for each new row to be inserted.

If I rebuild the index there are no errors on the first run, but the second and subsequent runs throw errors. The inserts are actually happening, but I’m afraid the index is getting out of whack and will become inefficient.

In my design there is a class (sync_class) that contains the database property. The class gets created (as dim xxx as new sync_class), the database is set up and connected to, and then the sync begins. After the sync the database connection is closed and the sync_class xxx is nilled. 24 hours later it repeats.

Am I doing something wrong in how I’ve implemented this, or is something in the PostgreSQL plugin not letting go of a reference to the indexes?

This all seems weird to me because 1 - I’ve created several desktop and Windows services that access and update databases without this happening, and 2 - it seems to me that adding the index value is PostgreSQL’s job, not something Xojo or my app should be doing. The index has been dropped and recreated with new names several times without affecting the problem.

So, to recap: I get errors about unique constraint violations on a unique index. Rebuilding the index makes the error go away once. Indexes have been recreated using new names multiple times from two different management tools (Navicat Premium Essentials and pgAdmin). None of my other applications doing inserts on PostgreSQL databases that have indexes on unique fields exhibit this behavior. Manually doing an insert from a management tool succeeds with no such error.

The plugin doesn’t hold on to indexes - it’s a conduit between your Xojo code and PostgreSQL - not much more.
I’ve never run into PostgreSQL reporting such an error when it wasn’t real.

I’d really go over the code that is reporting this with a fine-toothed comb.

Maybe it’s this issue: How to fix PostgreSQL error “duplicate key violates unique constraint”
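For reference, the fix in that article boils down to resyncing the table’s serial sequence with the data already in the table. A hedged sketch - the table, column, and sequence names below are examples, not necessarily this thread’s schema:

```sql
-- If the serial sequence has fallen behind the data (e.g. after a bulk
-- import that supplied explicit ids), point it back at the current max.
-- 'employees' and 'id' are assumed names here.
SELECT setval(
    pg_get_serial_sequence('employees', 'id'),
    COALESCE((SELECT MAX(id) FROM employees), 1)
);
```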

Yeah, gone through several combs.

It’s a pretty simple routine, construct an insert command, execute, check for error and if present - log it.

I was pretty sure that’s the way it was.

Forgot to mention, if I stop and start the service it’s fine. For 1 run.

Eli, the insert works, the table’s sequence is not out of sync for the field, and the index is what it’s complaining about.

Hi Zane,

Can you post the DDL for the index? Something like:

CREATE UNIQUE INDEX emp_idx_empid
    ON employees
    USING btree (empid);

You say that the first insert works after you rebuild the index, but the second one fails.
Can you post both insert statements? The actual statements your code generates, not the code.

Just standard INSERT INTO statements. All in a loop, all generated the same. All include an employee ID. Some remaining fields may be empty.

The loop processes the contents of an array holding all the employees who are new. Another loop handles updates.

INSERT INTO employees (employee_id, first_name, last_name, active, email, dept_id, job_code, manager_id, building, entity) VALUES ('123456789', $$FirstName$$, $$LastName$$, 'Y', $$EMailAddress$$, 'Department_ID', 'Title_ID', 'Manager_ID', 'WorkLocation', 'Entity')

So you are inserting an actual value for the employee id as opposed to using the PostgreSQL serial or bigserial data type (autoincrement), right? If you are not using one of those two data types how are you generating / getting the next unique number?
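(For anyone following along: a serial column pulls its values from an automatically attached sequence, so you never supply the id yourself. Illustrative names only, not this thread’s schema:)

```sql
CREATE TABLE demo (
    id   serial PRIMARY KEY,  -- backed by an auto-created sequence
    name text
);
INSERT INTO demo (name) VALUES ('A');  -- id assigned by the sequence
```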

If employee id is supposed to be unique across all records, why use a unique index instead of a primary key index?

It has to be the actual value from the HRIS system. (This DB is our company directory.) There is an auto-incrementing integer ID primary key field on the employee table. The actual insert works, it just complains about the index.

The Employee ID is used to tie the employee to their respective department and title in those tables, among other things, so I want an index on the employee ID. It is unique so I assumed the index can be set as unique as well.

The management tool where I initially set this up doesn’t word it as a primary key index; it uses the term unique.

So you are getting a value from another system (HRIS) and using that as the employee id. It is unique on the source system so then it should be unique here. That makes sense.
Having the column indexed also makes sense.

That tool sounds…interesting…

PKs & unique indexes differ in a couple ways:

  1. A PK will not allow NULL values in its column(s), while a unique index will still allow NULLs - and in PostgreSQL, multiple rows can have NULL in a uniquely indexed column, because NULLs never compare equal to each other.
  2. Most tools will, by default, visually identify the table’s PK for you if it is specified as a PK, but they typically don’t show you the column(s) in a unique index without changing a setting or switching to a different view.

I know both of those things sound a little trivial, but the industry assumes, rightly so, that each table will have a PK, so it is generally good practice to create them as PKs instead of just unique indexes.
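A quick sketch of the NULL difference (throwaway table names, not the thread’s schema):

```sql
CREATE TABLE pk_demo  (empid text PRIMARY KEY);
CREATE TABLE uix_demo (empid text);
CREATE UNIQUE INDEX uix_demo_empid ON uix_demo (empid);

INSERT INTO pk_demo  VALUES (NULL);        -- fails: PK implies NOT NULL
INSERT INTO uix_demo VALUES (NULL);        -- ok
INSERT INTO uix_demo VALUES (NULL);        -- ok too: NULLs never compare equal
INSERT INTO uix_demo VALUES ('1'), ('1');  -- fails: duplicate key
```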

Just for grins, can you drop the index and run your inserts to see what happens or have you tried that?

Run the inserts after dropping the unique index and do a GROUP BY … HAVING COUNT(*) > 1 to see what duplicates you have.
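That duplicate check, spelled out (using the column name from the INSERT statement quoted earlier in the thread):

```sql
SELECT employee_id, COUNT(*) AS n
FROM employees
GROUP BY employee_id
HAVING COUNT(*) > 1;
```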

Tried that, it works fine.

The only thing I haven’t tried is dropping the whole employee table and re-creating it. Since it’s tripping over the index, and the index has been re-created, dropped, and created again, it doesn’t “feel” like that will help anything.

There are no duplicates.

  • I have checked this, and the table has a primary key on the employee ID field, so an insert of a duplicate would fail.

I can look at the source beforehand, find a new entry, confirm that it’s not in the table, watch the sync spout an error on that insert, and then look at the table and it’s there. It just complains about the index.

Zane, I can honestly say this is an odd one and doesn’t make any sense.
Maybe the table is corrupted somehow and it is impacting the index.
Since plain inserts from non-Xojo programs work fine (multiple, right?) this feels more like a code issue, but maybe it is something to do with the Xojo PostgreSQL driver?

Based on your original post it looks like this is a service that the clients connect to, right? Is there any way two inserts could be happening at the same time? Some kind of odd timing issue?

So here are a couple more suggestions:
Go ahead and try dropping / recreating the table. If the database object is corrupted somehow this should resolve that.
Otherwise try to connect to the database differently. Perhaps ODBC or Christian’s excellent SQL drivers and see if that clears up the issue.
Are the inserts happening inside a database transaction?
At what point are you doing your commit?
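For the transaction question, an explicit batch would look something like this (a sketch with the statements abbreviated, not the service’s actual SQL):

```sql
BEGIN;
INSERT INTO employees (employee_id, first_name) VALUES ('123456789', 'First');
INSERT INTO employees (employee_id, first_name) VALUES ('234567890', 'First');
COMMIT;  -- or ROLLBACK; if any statement reports an error
```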

You mean the source has the PK, right?
You don’t have a unique index AND a PK over the same columns right? I would think PostgreSQL would complain if you tried that, but maybe not…and even if you did I am not sure why having both over the same column would create that issue. Except it might be some kind of index updating issue if you have both over the same column.

I always get the odd problems, no Google help for me sometimes.

It’s just the directory so it’s not mission critical. This problem has been bugging me for a while and if I’m doing something wrong I want to learn so I finally gave in and submitted this.

It’s a service that lives to sync the directory with the HRIS system, Active Directory, and the security software where it gets pictures and credentials. There are other apps that use the data, but they just read the DB. My service is responsible for modifying it. No clients connect to the service; it just does its sync every 24 hours and sends e-mail notifications when someone is added or moves.

Not using a transaction, just building an insert and executing it, using the built-in commit. Other than this weird problem with this one service on this one table in this one database, everything works fine. No missing inserts, no duplicates.

My employee table has two primary key fields: an auto-incremented integer and the employee ID. Both default to NOT NULL when you make them the PK. The index setup (GUI, sorry, I’m lazy sometimes) allows you to check a box if the field the index is on is unique. No complaints at that time. I created the index using pgAdmin via script last time, same result.

As far as a corrupt table goes, I did leave out the very beginning of my adventure with this problem. I have two copies of this database, one for dev and the other for live. While making changes to the service (sourcing data from a file instead of direct DB access) it would update the dev DB first, and all was well. Then it would update the live DB and have these errors. In my early troubleshooting I switched those two. The problem followed the second update, so when they were switched and the live DB was updated first it was fine, and the dev DB would have the issue. So it didn’t seem to be the table at that point, since these are two tables in two different databases (with different names on the same server). I thought that maybe the indexes got cross-linked (for lack of a better term) and removed one and re-created it with a new name. No effect.

I will try connecting to it via ODBC just to see what happens, even though I shiver a little when I think ODBC. I can also wrap the inserts in a transaction; we’re bigger now, so more than one person is added at a time.

Of course not. But you are trying to add them.

Any SQL table can have only one primary key.

Again I post the same link – it matches exactly what you are doing and getting: How to fix PostgreSQL error “duplicate key violates unique constraint”

A NOT NULL constraint does not prevent having more than one record with the same employee ID; it only ensures the value is not NULL.

One record in the table has the autoid (record) number set to 1234 and employeeID = 123456789
Then another record has the autoid number set to 2345 with employeeID = 123456789.

Since your Primary Key is made up of the combination of the auto integer and the employee id both example records would ‘insert’ with no problem. The PK would be unique because of the serial number.

However, trying to create a unique index on the employeeID alone would complain about the duplicates.
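The scenario described above as a sketch (hypothetical table and index names):

```sql
CREATE TABLE emp_demo (
    autoid      serial,
    employee_id text,
    PRIMARY KEY (autoid, employee_id)   -- composite PK
);

INSERT INTO emp_demo (employee_id) VALUES ('123456789');  -- ok
INSERT INTO emp_demo (employee_id) VALUES ('123456789');  -- ok: autoid differs

-- But a unique index on employee_id alone now fails to build:
CREATE UNIQUE INDEX emp_demo_uix ON emp_demo (employee_id);
-- ERROR: could not create unique index "emp_demo_uix"
```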