How to delete duplicate rows without a unique identifier

Tags: Sql, Database, Postgresql, Duplicates, Netezza

Sql Problem Overview


I have duplicate rows in my table and I want to delete the duplicates in the most efficient way, since the table is big. After some research, I have come up with this query:

WITH TempEmp AS
(
SELECT name, ROW_NUMBER() OVER(PARTITION by name, address, zipcode ORDER BY name) AS duplicateRecCount
FROM mytable
)
-- Now Delete Duplicate Records
DELETE FROM TempEmp
WHERE duplicateRecCount > 1;

But it only works in SQL Server, not in Netezza. It would seem that Netezza does not like a DELETE that targets the CTE from the WITH clause?

Sql Solutions


Solution 1 - Sql

I like @erwin-brandstetter's solution, but wanted to show a solution with the USING keyword:

DELETE   FROM table_with_dups T1
  USING       table_with_dups T2
WHERE  T1.ctid    < T2.ctid       -- delete the "older" ones
  AND  T1.name    = T2.name       -- list columns that define duplicates
  AND  T1.address = T2.address
  AND  T1.zipcode = T2.zipcode;

If you want to review the records before deleting them, then simply replace DELETE with SELECT * and USING with a comma (,), i.e.:

SELECT * FROM table_with_dups T1
  ,           table_with_dups T2
WHERE  T1.ctid    < T2.ctid       -- select the "older" ones
  AND  T1.name    = T2.name       -- list columns that define duplicates
  AND  T1.address = T2.address
  AND  T1.zipcode = T2.zipcode;

Update: I tested some of the different solutions here for speed. If you don't expect many duplicates, then this solution performs much better than the ones that have a NOT IN (...) clause as those generate a lot of rows in the subquery.

If you rewrite the query to use IN (...) then it performs similarly to the solution presented here, but the SQL code becomes much less concise.
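
For reference, here is a sketch of what such an IN (...) rewrite might look like (an illustration, not necessarily the exact query that was benchmarked), using the same duplicate definition as above:

DELETE FROM table_with_dups
WHERE  ctid IN (
   SELECT T1.ctid
   FROM   table_with_dups T1
   JOIN   table_with_dups T2
     ON   T1.ctid    < T2.ctid       -- rows with a "newer" duplicate get deleted
    AND   T1.name    = T2.name
    AND   T1.address = T2.address
    AND   T1.zipcode = T2.zipcode);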

Update 2: If you have NULL values in one of the key columns (which you really shouldn't IMO), then you can use COALESCE() in the condition for that column, e.g.

  AND COALESCE(T1.col_with_nulls, '[NULL]') = COALESCE(T2.col_with_nulls, '[NULL]')
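
For illustration, the full USING query would then look like this, assuming address is the nullable column (the '[NULL]' sentinel is arbitrary; pick a value that cannot occur in the real data):

DELETE   FROM table_with_dups T1
  USING       table_with_dups T2
WHERE  T1.ctid < T2.ctid
  AND  T1.name = T2.name
  AND  COALESCE(T1.address, '[NULL]') = COALESCE(T2.address, '[NULL]')
  AND  T1.zipcode = T2.zipcode;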

Solution 2 - Sql

If you have no other unique identifier, you can use ctid:

delete from mytable
    where exists (select 1
                  from mytable t2
                  where t2.name = mytable.name and
                        t2.address = mytable.address and
                        t2.zipcode = mytable.zipcode and
                        t2.ctid > mytable.ctid
                 );

It is a good idea to have a unique, auto-incrementing id in every table. Doing a delete like this is one important reason why.

Solution 3 - Sql

In a perfect world, every table has a unique identifier of some sort.
In the absence of any unique column (or combination thereof), use the ctid column:

DELETE FROM tbl
WHERE  ctid NOT IN (
   SELECT min(ctid)                    -- ctid is NOT NULL by definition
   FROM   tbl
   GROUP  BY name, address, zipcode);  -- list columns defining duplicates

The above query is short, conveniently listing column names only once. NOT IN (SELECT ...) is a tricky query style when NULL values can be involved, but the system column ctid is never NULL.
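
A quick illustration of the pitfall (a minimal sketch):

-- a single NULL in the subquery makes NOT IN match nothing at all
SELECT 1 WHERE 1 NOT IN (SELECT NULL::int);   -- returns no rows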

Using EXISTS, as demonstrated by @Gordon, is typically faster. So is a self-join with the USING clause, like the one @isapir added later. Both should result in the same query plan.

Important difference: these other queries treat NULL values as not equal, while GROUP BY (or DISTINCT, or DISTINCT ON ()) treats NULL values as equal. This does not matter for columns defined NOT NULL; otherwise, depending on your definition of "duplicate", you'll need one approach or the other. Alternatively, use IS NOT DISTINCT FROM to compare values (which may rule out index usage), as sketched below.
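
A minimal sketch of that NULL-safe variant, assuming the same three columns as above:

DELETE   FROM tbl T1
  USING       tbl T2
WHERE  T1.ctid < T2.ctid
  AND  T1.name    IS NOT DISTINCT FROM T2.name
  AND  T1.address IS NOT DISTINCT FROM T2.address
  AND  T1.zipcode IS NOT DISTINCT FROM T2.zipcode;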

Disclaimer:

ctid is an implementation detail of Postgres, it's not in the SQL standard and can change between major versions without warning (even if that's very unlikely). Its values can change between commands due to background processes or concurrent write operations (but not within the same command).

Aside:

The target of a DELETE statement cannot be a CTE, only the underlying table. Expecting the CTE to be deletable is a spillover from SQL Server - as is your whole approach.

Solution 4 - Sql

Here is what I came up with, using GROUP BY:

DELETE FROM mytable
WHERE id NOT IN (
  SELECT MIN(id)
  FROM mytable
  GROUP BY name, address, zipcode
);

It deletes the duplicates, preserving the record with the lowest id from each group of duplicates.
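
To verify the result, a quick check for remaining duplicate groups should return no rows (a sketch, assuming the same columns):

SELECT name, address, zipcode, count(*)
FROM   mytable
GROUP  BY name, address, zipcode
HAVING count(*) > 1;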

Solution 5 - Sql

We can use a window function for very effective removal of duplicate rows:

DELETE FROM tab 
  WHERE id IN (SELECT id 
                  FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), id 
                           FROM tab) x 
                 WHERE x.row_number > 1);

A PostgreSQL-optimized version (using ctid):

DELETE FROM tab 
  WHERE ctid = ANY(ARRAY(SELECT ctid 
                  FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), ctid 
                           FROM tab) x 
                 WHERE x.row_number > 1));
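
Note that without an ORDER BY inside OVER (...), which row survives in each group is arbitrary. If that matters, add one; a sketch assuming the id column from the first query:

DELETE FROM tab
  WHERE ctid = ANY(ARRAY(SELECT ctid
                  FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values
                                                  ORDER BY id), ctid
                           FROM tab) x
                 WHERE x.row_number > 1));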

Solution 6 - Sql

The valid syntax is specified at http://www.postgresql.org/docs/current/static/sql-delete.html

I would ALTER your table to add a unique auto-incrementing primary key id so that you can run a query like the following, which will keep the first of each set of duplicates (i.e. the one with the lowest id). Note that adding the key is a bit more complicated in Postgres than in some other DBs; see the sketch after the query.

DELETE FROM mytable d USING (
  SELECT min(id) AS id, name, address, zipcode
  FROM mytable
  GROUP BY name, address, zipcode HAVING count(*) > 1
) AS k
WHERE d.id <> k.id
AND d.name = k.name
AND d.address = k.address
AND d.zipcode = k.zipcode;
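
For completeness, here is a minimal sketch of adding such a key in Postgres, assuming the table has no primary key yet (bigserial creates the backing sequence and fills existing rows automatically):

ALTER TABLE mytable ADD COLUMN id bigserial PRIMARY KEY;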

Solution 7 - Sql

If you want to keep just one row out of each set of duplicate rows in the table, you can create a deduplicated copy:

create table some_name_for_new_table as
(select * from (select *, row_number() over (partition by pk_id) row_n
                from your_table_name_where_duplicates_are_present) a
 where row_n = 1);

This creates a new table that you can copy back over the original. Before copying, drop the helper column row_n.
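
A minimal sketch, assuming the table name from the example above:

ALTER TABLE some_name_for_new_table DROP COLUMN row_n;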

Solution 8 - Sql

If you want a unique identifier for every row, you could just add one (a serial, or a guid), and treat it like a surrogate key.


CREATE TABLE thenames
        ( name text not null
        , address text not null
        , zipcode text not null
        );
INSERT INTO thenames(name,address,zipcode) VALUES
('James', 'main street', '123' )
,('James', 'main street', '123' )
,('James', 'void street', '456')
,('Alice', 'union square' , '123')
        ;

SELECT * FROM thenames;

        -- add a surrogate key
ALTER TABLE thenames
        ADD COLUMN seq serial NOT NULL PRIMARY KEY
        ;
SELECT * FROM thenames;

DELETE FROM thenames del
WHERE EXISTS(
        SELECT * FROM thenames x
        WHERE x.name=del.name
        AND x.address=del.address
        AND x.zipcode=del.zipcode
        AND x.seq < del.seq
        );

        -- add the unique constraint, so that new duplicates cannot be created in the future
ALTER TABLE thenames
        ADD UNIQUE (name,address,zipcode)
        ;

SELECT * FROM thenames;

Solution 9 - Sql

From the documentation on deleting duplicate rows:

A frequent question in IRC is how to delete rows that are duplicates over a set of columns, keeping only the one with the lowest ID. This query does that for all rows of tablename having the same column1, column2, and column3.

DELETE FROM tablename
WHERE id IN (SELECT id
          FROM (SELECT id,
                         ROW_NUMBER() OVER (partition BY column1, column2, column3 ORDER BY id) AS rnum
                 FROM tablename) t
          WHERE t.rnum > 1);

Sometimes a timestamp field is used instead of an ID field.
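
For example, with a hypothetical created_at timestamp column (and ctid standing in for the missing ID), keeping the earliest row per group; a sketch:

DELETE FROM tablename
WHERE ctid IN (SELECT ctid
          FROM (SELECT ctid,
                         ROW_NUMBER() OVER (PARTITION BY column1, column2, column3
                                            ORDER BY created_at) AS rnum
                 FROM tablename) t
          WHERE t.rnum > 1);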

Solution 10 - Sql

For smaller tables, we can use the rowid pseudocolumn (available in Netezza and Oracle; the Postgres equivalent is ctid) to delete duplicate rows.

You can use the query below:

DELETE FROM table1 t1
WHERE  t1.rowid > (SELECT min(t2.rowid)
                   FROM   table1 t2
                   WHERE  t1.column = t2.column);

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: moe
Solution 1 - Sql: isapir
Solution 2 - Sql: Gordon Linoff
Solution 3 - Sql: Erwin Brandstetter
Solution 4 - Sql: Bruno Calza
Solution 5 - Sql: Vivek S.
Solution 6 - Sql: Joe Murray
Solution 7 - Sql: Aditya Nathireddy
Solution 8 - Sql: wildplasser
Solution 9 - Sql: Chad Crowe
Solution 10 - Sql: Ansih Mukherjee