Remove duplicate rows in MySQL
I have a table with the following fields:
id (Unique)
url (Unique)
title
company
site_id
Now, I need to remove rows having the same title, company and site_id. One way to do it would be to use the following SQL along with a server-side script (PHP):
SELECT title, company, site_id, id, COUNT(*)
FROM jobs
GROUP BY site_id, company, title
HAVING COUNT(*) > 1
After running this query, I can remove duplicates using a server-side script.
But I want to know if this can be done using only an SQL query.
Solution 1:
A really easy way to do this is to add a UNIQUE index on the 3 columns. When you write the ALTER statement, include the IGNORE keyword. Like so:
ALTER IGNORE TABLE jobs
ADD UNIQUE INDEX idx_name (site_id, title, company);
This will drop all the duplicate rows. As an added benefit, future INSERTs that are duplicates will error out. As always, you may want to take a backup before running something like this...
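Note that ALTER IGNORE was deprecated in MySQL 5.6 and removed in 5.7, so on newer servers the statement above won't run as-is. A minimal sketch of the 5.7+ route, assuming you first remove the duplicates with one of the other solutions here:
-- MySQL 5.7+: deduplicate first (see the other solutions), then add the
-- unique index without IGNORE so future duplicates are still rejected
ALTER TABLE jobs
ADD UNIQUE INDEX idx_name (site_id, title, company);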
Solution 2:
If you don't want to alter the column properties, then you can use the query below. Since you have a column which has unique IDs (e.g., an auto_increment column), you can use it to remove the duplicates:
DELETE `a`
FROM
`jobs` AS `a`,
`jobs` AS `b`
WHERE
-- IMPORTANT: Ensures one version remains
-- Change "ID" to your unique column's name
`a`.`ID` < `b`.`ID`
-- Any duplicates you want to check for
AND (`a`.`title` = `b`.`title` OR `a`.`title` IS NULL AND `b`.`title` IS NULL)
AND (`a`.`company` = `b`.`company` OR `a`.`company` IS NULL AND `b`.`company` IS NULL)
AND (`a`.`site_id` = `b`.`site_id` OR `a`.`site_id` IS NULL AND `b`.`site_id` IS NULL);
In MySQL, you can simplify it even more with the NULL-safe equality operator <=> (often nicknamed the "spaceship operator"):
DELETE `a`
FROM
`jobs` AS `a`,
`jobs` AS `b`
WHERE
-- IMPORTANT: Ensures one version remains
-- Change "ID" to your unique column's name
`a`.`ID` < `b`.`ID`
-- Any duplicates you want to check for
AND `a`.`title` <=> `b`.`title`
AND `a`.`company` <=> `b`.`company`
AND `a`.`site_id` <=> `b`.`site_id`;
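If you want to preview which rows the DELETE would remove before running it, the same join can be expressed as a SELECT first. A minimal sketch, assuming the same ID column as above:
-- Dry run: list the rows the DELETE above would remove
SELECT DISTINCT `a`.`ID`, `a`.`title`, `a`.`company`, `a`.`site_id`
FROM `jobs` AS `a`
JOIN `jobs` AS `b`
    ON  `a`.`ID` < `b`.`ID`
    AND `a`.`title`   <=> `b`.`title`
    AND `a`.`company` <=> `b`.`company`
    AND `a`.`site_id` <=> `b`.`site_id`;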
Solution 3:
MySQL has restrictions on referring to the table you are deleting from within the same statement. You can work around that with a temporary table, like this:
create temporary table tmpTable (id int);
insert into tmpTable
(id)
select id
from YourTable yt
where exists
(
select *
from YourTable yt2
where yt2.title = yt.title
and yt2.company = yt.company
and yt2.site_id = yt.site_id
and yt2.id > yt.id
);
delete
from YourTable
where ID in (select id from tmpTable);
From Kostanos' suggestion in the comments:
The only slow query above is the DELETE, in cases where you have a very large database. This query could be faster:
DELETE FROM YourTable USING YourTable, tmpTable WHERE YourTable.id=tmpTable.id
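Whichever DELETE you use, the helper table can be dropped afterwards (being a temporary table, it would also disappear on its own when the session ends):
-- Clean up the helper table once the delete has finished
DROP TEMPORARY TABLE IF EXISTS tmpTable;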
Solution 4:
Deleting duplicates on MySQL tables is a common issue that's generally the result of a missing constraint to avoid those duplicates beforehand. But this common issue usually comes with specific needs... that do require specific approaches. The approach should be different depending on, for example, the size of the data, which duplicated entry should be kept (generally the first or the last one), whether there are indexes to be kept, or whether we want to perform any additional action on the duplicated data.
There are also some specificities of MySQL itself, such as not being able to reference the same table in a FROM clause when performing a table UPDATE (it'll raise MySQL error #1093). This limitation can be overcome by using an inner query with a temporary table (as suggested in some approaches above). But this inner query won't perform especially well when dealing with big data sources.
However, a better approach does exist to remove duplicates, one that's both efficient and reliable, and that can be easily adapted to different needs.
The general idea is to create a new temporary table, usually adding a unique constraint to avoid further duplicates, and to INSERT the data from your former table into the new one, while taking care of the duplicates. This approach relies on simple MySQL INSERT queries, creates a new constraint to avoid further duplicates, and avoids the need for an inner query to search for duplicates and a temporary table that has to be kept in memory (thus fitting big data sources too).
This is how it can be achieved. Given we have a table employee, with the following columns:
employee (id, first_name, last_name, start_date, ssn)
In order to delete the rows with a duplicate ssn column, keeping only the first entry found, the following process can be followed:
-- create a new tmp_employee table
CREATE TABLE tmp_employee LIKE employee;
-- add a unique constraint
ALTER TABLE tmp_employee ADD UNIQUE(ssn);
-- scan over the employee table to insert employee entries
INSERT IGNORE INTO tmp_employee SELECT * FROM employee ORDER BY id;
-- rename tables
RENAME TABLE employee TO backup_employee, tmp_employee TO employee;
Technical explanation
- Line #1 creates a new tmp_employee table with exactly the same structure as the employee table
- Line #2 adds a UNIQUE constraint to the new tmp_employee table to avoid any further duplicates
- Line #3 scans over the original employee table by id, inserting new employee entries into the new tmp_employee table, while ignoring duplicated entries
- Line #4 renames tables, so that the new employee table holds all the entries without the duplicates, and a backup copy of the former data is kept on the backup_employee table
⇒ Using this approach, 1.6M rows were reduced to 6k in less than 200s.
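To get a feel for how many duplicate groups a table holds before (or after) running the process, a quick count along the lines of the query in the question helps; a sketch for the employee example:
-- ssn values that appear more than once, and how often
SELECT ssn, COUNT(*) AS occurrences
FROM employee
GROUP BY ssn
HAVING COUNT(*) > 1;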
Chetan, following this process, you could quickly and easily remove all your duplicates and create a UNIQUE constraint by running:
CREATE TABLE tmp_jobs LIKE jobs;
ALTER TABLE tmp_jobs ADD UNIQUE(site_id, title, company);
INSERT IGNORE INTO tmp_jobs SELECT * FROM jobs ORDER BY id;
RENAME TABLE jobs TO backup_jobs, tmp_jobs TO jobs;
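Once you've verified the result, the backup table left by the RENAME can be dropped. A cheap sanity check first, using the backup_jobs name from the rename above:
-- Compare row counts before discarding the backup
SELECT
    (SELECT COUNT(*) FROM jobs)        AS deduplicated_rows,
    (SELECT COUNT(*) FROM backup_jobs) AS original_rows;
-- Only when you're happy with the result:
-- DROP TABLE backup_jobs;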
Of course, this process can be further modified to adapt it for different needs when deleting duplicates. Some examples follow.
✔ Variation for keeping the last entry instead of the first one
Sometimes we need to keep the last duplicated entry instead of the first one.
CREATE TABLE tmp_employee LIKE employee;
ALTER TABLE tmp_employee ADD UNIQUE(ssn);
INSERT IGNORE INTO tmp_employee SELECT * FROM employee ORDER BY id DESC;
RENAME TABLE employee TO backup_employee, tmp_employee TO employee;
- On line #3, the ORDER BY id DESC clause makes the last IDs take priority over the rest
✔ Variation for performing some tasks on the duplicates, for example keeping a count on the duplicates found
Sometimes we need to perform some further processing on the duplicated entries that are found (such as keeping a count of the duplicates).
CREATE TABLE tmp_employee LIKE employee;
ALTER TABLE tmp_employee ADD UNIQUE(ssn);
ALTER TABLE tmp_employee ADD COLUMN n_duplicates INT DEFAULT 0;
INSERT INTO tmp_employee (id, first_name, last_name, start_date, ssn) SELECT * FROM employee ORDER BY id ON DUPLICATE KEY UPDATE n_duplicates=n_duplicates+1;
RENAME TABLE employee TO backup_employee, tmp_employee TO employee;
- On line #3, a new column n_duplicates is created
- On line #4, the INSERT INTO ... ON DUPLICATE KEY UPDATE query is used to perform an additional update when a duplicate is found (in this case, increasing a counter); the explicit column list is needed because tmp_employee now has one column more than employee. The INSERT INTO ... ON DUPLICATE KEY UPDATE query can be used to perform different types of updates for the duplicates found (see the example query just below).
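After the rename, the counter makes it easy to inspect which entries actually absorbed duplicates; for example, with the columns defined above:
-- Entries that absorbed at least one duplicate, most duplicated first
SELECT ssn, n_duplicates
FROM employee
WHERE n_duplicates > 0
ORDER BY n_duplicates DESC;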
✔ Variation for regenerating the auto-incremental field id
Sometimes we use an auto-incremental field and, in order to keep the index as compact as possible, we can take advantage of the deletion of the duplicates to regenerate the auto-incremental field in the new temporary table.
CREATE TABLE tmp_employee LIKE employee;
ALTER TABLE tmp_employee ADD UNIQUE(ssn);
INSERT IGNORE INTO tmp_employee (first_name, last_name, start_date, ssn) SELECT first_name, last_name, start_date, ssn FROM employee ORDER BY id;
RENAME TABLE employee TO backup_employee, tmp_employee TO employee;
- On line #3, instead of selecting all the fields in the table, the id field is left out of the column list so that the DB engine generates a new one automatically
✔ Further variations
Many further modifications are also doable depending on the desired behavior. As an example, the following queries use a second temporary table to 1) keep the last entry instead of the first one; 2) increase a counter on the duplicates found; and 3) regenerate the auto-incremental field id while keeping the entry order as it was in the former data.
CREATE TABLE tmp_employee LIKE employee;
ALTER TABLE tmp_employee ADD UNIQUE(ssn);
ALTER TABLE tmp_employee ADD COLUMN n_duplicates INT DEFAULT 0;
INSERT INTO tmp_employee (id, first_name, last_name, start_date, ssn) SELECT * FROM employee ORDER BY id DESC ON DUPLICATE KEY UPDATE n_duplicates=n_duplicates+1;
CREATE TABLE tmp_employee2 LIKE tmp_employee;
INSERT INTO tmp_employee2 (first_name, last_name, start_date, ssn, n_duplicates) SELECT first_name, last_name, start_date, ssn, n_duplicates FROM tmp_employee ORDER BY id;
DROP TABLE tmp_employee;
RENAME TABLE employee TO backup_employee, tmp_employee2 TO employee;