MySQL remove duplicates from big database quick

Sql, Mysql, Duplicates

Sql Problem Overview


I've got a big (>1M rows) MySQL database messed up by duplicates. I think from 1/4 to 1/2 of the whole db could be filled with them. I need to get rid of them quickly (I mean query execution time). Here's how it looks:
id (index) | text1 | text2 | text3
The text1 & text2 combination should be unique; if there are any duplicates, only one row per combination should remain, and a row with text3 NOT NULL should be preferred. Example:

1 | abc | def | NULL  
2 | abc | def | ghi  
3 | abc | def | jkl  
4 | aaa | bbb | NULL  
5 | aaa | bbb | NULL  

...becomes:

1 | abc | def | ghi   #(doesn't really matter whether id:2 or id:3 survives)   
2 | aaa | bbb | NULL  #(if there's no NOT NULL text3, NULL will do)

New ids could be anything; they do not depend on the old table's ids.
I've tried things like:

CREATE TABLE tmp SELECT text1, text2, text3
FROM my_tbl
GROUP BY text1, text2;
DROP TABLE my_tbl;
ALTER TABLE tmp RENAME TO my_tbl;

Or SELECT DISTINCT and other variations.
While they work on small databases, query execution time on mine is just huge (I never got to the end, actually; it ran for more than 20 minutes).

Is there any faster way to do that? Please help me solve this problem.

Sql Solutions


Solution 1 - Sql

I believe this will do it, using on duplicate key + ifnull():

create table tmp like yourtable;

alter table tmp add unique (text1, text2);

-- duplicates collide on the new unique key; keep the first non-NULL text3 seen
insert into tmp select * from yourtable
    on duplicate key update text3=ifnull(text3, values(text3));

rename table yourtable to deleteme, tmp to yourtable;

drop table deleteme;

This should be much faster than anything that requires GROUP BY, DISTINCT, a subquery, or even ORDER BY. It doesn't even require a filesort, which would kill performance on a large temporary table. It will still require a full scan over the original table, but there's no avoiding that.
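
If you want to double-check before the final DROP, a quick sanity check (a sketch using the names from the script above) is to compare the deduplicated row count against the number of distinct pairs in the original:

-- optional sanity check, run after the RENAME but before DROP TABLE deleteme
SELECT COUNT(*) FROM yourtable;                       -- rows after dedup
SELECT COUNT(DISTINCT text1, text2) FROM deleteme;    -- distinct pairs before dedup; should match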

Solution 2 - Sql

Found this simple 1-line code to do exactly what I needed:

ALTER IGNORE TABLE dupTest ADD UNIQUE INDEX(a,b);

Taken from: http://mediakey.dk/~cc/mysql-remove-duplicate-entries/
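
Adapted to the question's table, a sketch would be the following (note that the IGNORE clause for ALTER TABLE was removed in MySQL 5.7, so this only works on older servers, and it silently drops whichever duplicate rows collide with the new key, with no preference for a non-NULL text3):

ALTER IGNORE TABLE my_tbl ADD UNIQUE INDEX (text1, text2);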

Solution 3 - Sql

DELETE FROM dups
WHERE id NOT IN (
    SELECT id FROM (
        SELECT DISTINCT id, text1, text2
        FROM dups
        GROUP BY text1, text2
        ORDER BY text3 DESC
    ) AS tmp
);

This queries all records, groups by the distinguishing fields, and orders by text3 DESC (so a row with a non-NULL text3 is picked for each group). Then we select the ids from that result (these are the good ids; they won't be deleted) and delete all ids that aren't among them.

Any query like this that affects the entire table will be slow. You just need to run it once and let it finish, and then prevent duplicates in the future.

After you have done this "fix" I would apply a UNIQUE INDEX (text1, text2) to that table to prevent the possibility of duplicates in the future.

If you want to go the "create a new table and replace the old one" route, you could use the very inner select statement to create your insert statement.

MySQL specific (assumes new table is named my_tbl2 and has exactly the same structure):

INSERT INTO my_tbl2
SELECT DISTINCT id, text1, text2, text3
FROM dups
GROUP BY text1, text2
ORDER BY text3 DESC;

See MySQL INSERT ... SELECT for more information.
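
To complete that route you would then swap the tables, much like Solution 1 does (a sketch; the dups_old name is assumed, the other names come from the snippet above):

RENAME TABLE dups TO dups_old, my_tbl2 TO dups;
DROP TABLE dups_old;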

Solution 4 - Sql

Remove duplicates without dropping the original table, so its foreign keys stay intact:

create table tmp like mytable;
ALTER TABLE tmp ADD UNIQUE INDEX(text1, text2, text3, text4, text5, text6);
-- copy one row per unique combination; IGNORE silently skips the rest
insert IGNORE into tmp select * from mytable;
-- then remove every row whose id did not make it into tmp (kept rows keep their original ids)
delete from mytable where id not in ( select id from tmp);

Solution 5 - Sql

If you can create a new table, do so with a unique key on the text1 + text2 fields. Then insert into it, ignoring duplicate-key errors (using the INSERT IGNORE syntax), selecting from the old table:

select * from my_tbl order by text3 desc
  • I think the order by text3 desc will put the NULLs last, but double check that.

Indexes on all those columns could help a lot, but creating them now could be pretty slow.
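
Putting the pieces together, a minimal sketch of this approach might look as follows (the new table name my_tbl_new is assumed; the unique key makes INSERT IGNORE drop every duplicate after the first, and the ORDER BY feeds non-NULL text3 rows in first so they are the ones kept):

CREATE TABLE my_tbl_new LIKE my_tbl;
ALTER TABLE my_tbl_new ADD UNIQUE KEY (text1, text2);

INSERT IGNORE INTO my_tbl_new (id, text1, text2, text3)
SELECT id, text1, text2, text3
FROM my_tbl
ORDER BY text3 DESC;   -- non-NULL text3 rows first, so they win (check the NULL ordering, as noted above)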

Solution 6 - Sql

For large tables with few duplicates, you may want to avoid copying the whole table to another place. One way is to create a temporary table holding the rows you want to keep (for each key with duplicates), and then delete duplicates from the original table.

An example is given here.
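
Since the linked example isn't reproduced here, a rough sketch of the idea under the question's schema might look like this (it keeps the lowest id per duplicated (text1, text2) pair and, for brevity, ignores the preference for a non-NULL text3):

-- collect one surviving id per (text1, text2) pair that actually has duplicates
CREATE TEMPORARY TABLE keep_rows AS
SELECT text1, text2, MIN(id) AS keep_id
FROM my_tbl
GROUP BY text1, text2
HAVING COUNT(*) > 1;

-- delete every other row that shares such a pair
DELETE m
FROM my_tbl m
JOIN keep_rows k ON m.text1 = k.text1 AND m.text2 = k.text2
WHERE m.id <> k.keep_id;

DROP TEMPORARY TABLE keep_rows;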

Solution 7 - Sql

I don't have much experience with MySQL. If it has analytic functions try:

delete from my_tbl
where id in (
    select id
    from (
        select id,
               row_number() over (partition by text1, text2 order by text3 desc) as rn
        from my_tbl
        /* optional: where text1 like 'a%' */
    ) as t2
    where rn > 1
)

The optional where clause means you'll have to run it multiple times, once for each letter, etc. Create an index on text1?

Before running this, confirm that "text3 desc" will sort NULLs last in MySQL.
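
A tiny standalone check of that assumption (in MySQL, NULLs sort first ascending and last descending):

SELECT x FROM (SELECT 'a' AS x UNION ALL SELECT NULL) t ORDER BY x DESC;
-- returns 'a' first and NULL last, so ORDER BY text3 DESC puts non-NULL rows ahead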

Solution 8 - Sql

I know this is an old thread, but I have a somewhat messy method that is much faster and customizable; in terms of speed I'd say 10 seconds instead of 100 seconds (10:1).

My method does require all that messy stuff you were trying to avoid:

  • Group by (and Having)
  • group concat with ORDER BY
  • 2 temporary tables
  • using files on disk!
  • somehow (php?) deleting the file after

But when you are talking about MILLIONS (or in my case tens of millions) of rows, it's worth it.

Anyway, here is my sample:

EDIT: if I get comments I'll explain further how it works :)

START TRANSACTION;

DROP temporary table if exists to_delete;

CREATE temporary table to_delete as (
	SELECT
		-- pick every duplicated ID except the one that stays in the DB
		-- which one stays is decided by "ORDER BY campos_ordenacao DESC": the first ID in that order is kept
		right(
			group_concat(id ORDER BY campos_ordenacao DESC SEPARATOR ','),
			length(group_concat(id ORDER BY campos_ordenacao DESC SEPARATOR ','))
				- locate(",",group_concat(id ORDER BY campos_ordenacao DESC SEPARATOR ','))
		) as ids,

		count(*) as c

	-- table to deduplicate
	FROM teste_dup

	-- columns used to identify duplicates
	group by test_campo1, test_campo2, teste_campoN
	having count(*) > 1 -- it is a duplicate
);

-- raise the limit of this system variable to its maximum
SET SESSION group_concat_max_len=4294967295;

-- dump all the IDs to delete into a file
select group_concat(ids SEPARATOR ',') from to_delete INTO OUTFILE 'sql.dat';

DROP temporary table if exists del3;
create temporary table del3 as (select CAST(1 as signed) as ix LIMIT 0);

-- load the IDs to delete from the file into a temporary table
load data infile 'sql.dat' INTO TABLE del3
LINES TERMINATED BY ',';

alter table del3 add index(ix);

-- delete the selected IDs
DELETE teste_dup -- table
from teste_dup -- table
join del3 on id=ix;

COMMIT;

Solution 9 - Sql

You can remove all the duplicate entries with this simple self-join delete. For each (text1, text2) pair it removes every row that has a duplicate with a higher id, so only the highest-id row of each pair survives.

DELETE i1
FROM my_tbl i1
JOIN my_tbl i2
  ON i1.text1 = i2.text1
 AND i1.text2 = i2.text2
 AND i1.id < i2.id;

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | bizzz | View Question on Stackoverflow
Solution 1 - Sql | ʞɔıu | View Answer on Stackoverflow
Solution 2 - Sql | liorq | View Answer on Stackoverflow
Solution 3 - Sql | Kevin Peno | View Answer on Stackoverflow
Solution 4 - Sql | Gadelkareem | View Answer on Stackoverflow
Solution 5 - Sql | Scott Saunders | View Answer on Stackoverflow
Solution 6 - Sql | user1931858 | View Answer on Stackoverflow
Solution 7 - Sql | redcayuga | View Answer on Stackoverflow
Solution 8 - Sql | JDuarteDJ | View Answer on Stackoverflow
Solution 9 - Sql | kamran Sheikh | View Answer on Stackoverflow