Background

More than 5,300 duplicate rows:

"id","latitude","longitude","country","region","city"
"2143220","41.3513889","68.9444444","KZ","10","Abay"
"2143218","40.8991667","68.5433333","KZ","10","Abay"
"1919381","33.8166667","49.6333333","IR","34","Ab Barik"
"1919377","35.6833333","50.1833333","IR","19","Ab Barik"
"1919432","29.55","55.5122222","IR","29","`Abbasabad"
"1919430","27.4263889","57.5725","IR","29","`Abbasabad"
"1919413","28.0011111","58.9005556","IR","12","`Abbasabad"
"1919435","36.5641667","61.14","IR","30","`Abbasabad"
"1919433","31.8988889","58.9211111","IR","30","`Abbasabad"
"1919422","33.8666667","48.3","IR","23","`Abbasabad"
"1919420","33.4658333","49.6219444","IR","23","`Abbasabad"
"1919438","33.5333333","49.9833333","IR","34","`Abbasabad"
"1919423","33.7619444","49.0747222","IR","24","`Abbasabad"
"1919419","34.2833333","49.2333333","IR","19","`Abbasabad"
"1919439","35.8833333","52.15","IR","35","`Abbasabad"
"1919417","35.9333333","52.95","IR","17","`Abbasabad"
"1919427","35.7341667","51.4377778","IR","26","`Abbasabad"
"1919425","35.1386111","51.6283333","IR","26","`Abbasabad"
"1919713","30.3705556","56.07","IR","29","`Abdolabad"
"1919711","27.9833333","57.7244444","IR","29","`Abdolabad"
"1919716","35.6025","59.2322222","IR","30","`Abdolabad"
"1919714","34.2197222","56.5447222","IR","30","`Abdolabad"

Additional details:

  • PostgreSQL 8.4 database
  • Linux

Problem

Some of the values are obvious duplicates ("Abay", because the region matches, and "Ab Barik", because the two locations are in such close proximity), while others are not so obvious (and might not even be actual duplicates):

"1919430","27.4263889","57.5725","IR","29","`Abbasabad"
"1919435","36.5641667","61.14","IR","30","`Abbasabad"

The goal is to eliminate all of the duplicates.

Question

Given a table of values such as the CSV data above:

  • How would you eliminate the duplicates?
  • Which geo-centric PostgreSQL capabilities would you use?
  • What other criteria would you use to cull the duplicates?

Update

Semi-working sample code to select duplicate city names in the same country that are in close proximity (within 10 km):

select
  c1.country, c1.name, c1.region_id, c2.region_id,
  c1.latitude_decimal, c1.longitude_decimal,
  c2.latitude_decimal, c2.longitude_decimal
from
  climate.maxmind_city c1,
  climate.maxmind_city c2
where
  c1.country = 'BE' and
  c1.id <> c2.id and
  c1.country = c2.country and
  c1.name = c2.name and
  (c1.latitude_decimal <> c2.latitude_decimal or c1.longitude_decimal <> c2.longitude_decimal) and
  -- earth_distance returns metres, so 10 km is 10000
  earth_distance(
    ll_to_earth( c1.latitude_decimal, c1.longitude_decimal ),
    ll_to_earth( c2.latitude_decimal, c2.longitude_decimal ) ) <= 10000
order by
  c1.country, c1.name
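
The earth_distance and ll_to_earth calls come from the cube and earthdistance contrib modules. PostgreSQL 8.4 predates CREATE EXTENSION, so here is a sketch of loading them plus a quick unit check; the database name and script paths are assumptions that vary by installation:

-- Load the contrib modules into the target database (run once, as a superuser);
-- the paths below are typical packaged locations and only an assumption:
--   psql -d your_database -f /usr/share/postgresql/8.4/contrib/cube.sql
--   psql -d your_database -f /usr/share/postgresql/8.4/contrib/earthdistance.sql

-- Sanity check on units: approximate Brussels-to-Antwerp distance.
select earth_distance(
         ll_to_earth( 50.85, 4.35 ),   -- Brussels (approximate coordinates)
         ll_to_earth( 51.22, 4.40 ) ); -- Antwerp (approximate coordinates)
-- The result is on the order of 41000, i.e. earth_distance reports metres.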

Ideas

A two-phase approach:

  1. Eliminate the obvious duplicates (same country, region, and city name) by deleting the one with the smallest id (a sketch follows this list).
  2. Eliminate those with the same name and country that are in close proximity to one another. This might remove a few legitimate cities, but with little to no consequence.
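
A minimal sketch of phase one, assuming the climate.maxmind_city table and columns used in the sample query above (id, country, region_id, name), and keeping the largest id in each group so that the smaller ids are the ones removed:

-- Phase 1: remove exact administrative duplicates (same country, region
-- and city name), keeping only the row with the largest id per group.
delete from climate.maxmind_city
where id not in (
  select max(id)
  from climate.maxmind_city
  group by country, region_id, name
);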

Thanks!


Solution 2

This deletes the second of two same-named cities in the same country that are close to one another:

delete from climate.maxmind_city mc where id in (
  select
    max(c1.id)
  from
    climate.maxmind_city c1,
    climate.maxmind_city c2
  where
    c1.id <> c2.id and
    c1.country = c2.country and
    c1.name = c2.name and
    -- earth_distance returns metres; use 35000 here if 35 km is intended
    earth_distance(
      ll_to_earth( c1.latitude_decimal, c1.longitude_decimal ),
      ll_to_earth( c2.latitude_decimal, c2.longitude_decimal ) ) <= 35
  group by
    c1.country, c1.name
  order by
    c1.country, c1.name
)
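
Note that the subquery returns at most one id per (country, name) group, so a single execution removes only one duplicate from each group; if three or more same-named cities sit within range of each other, the delete has to be repeated until it affects zero rows. A follow-up query to see which groups still contain near-duplicate pairs (same table and column assumptions as above):

-- Groups that still have at least one near-duplicate pair after the delete.
select
  c1.country, c1.name, count(*) as remaining_pairs
from
  climate.maxmind_city c1,
  climate.maxmind_city c2
where
  c1.id < c2.id and
  c1.country = c2.country and
  c1.name = c2.name and
  earth_distance(
    ll_to_earth( c1.latitude_decimal, c1.longitude_decimal ),
    ll_to_earth( c2.latitude_decimal, c2.longitude_decimal ) ) <= 35
group by
  c1.country, c1.name
order by
  c1.country, c1.name;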

Other tips

Finding duplicates is easy:

select
  max(id) as this_should_stay,
  latitude,
  longitude,
  country,
  region,
  city
from
  your_table
group by
  latitude,
  longitude,
  country,
  region,
  city
having count(*) > 1;

Adding code to delete the duplicates based on this is simple:

delete from your_table where id not in (
    select
      max(id) as this_should_stay
    from
      your_table
    group by
      latitude,
      longitude,
      country,
      region,
      city
)

Note that, unlike the query above, the delete query does not need the HAVING clause.
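
Because the delete is destructive, one option is to run it inside a transaction and inspect the result before committing; a sketch using the same placeholder table:

begin;

delete from your_table where id not in (
    select max(id)
    from your_table
    group by latitude, longitude, country, region, city
);

-- Inspect the surviving rows (or re-run the duplicate-finding query above);
-- if anything looks wrong, issue ROLLBACK instead of COMMIT.
select count(*) from your_table;

commit;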

If the data was imported from a CSV file using code (PHP, for example), you can prevent duplicates by adding a guard condition to that code: if the city being inserted already exists, skip the current record and continue the loop with the next one.

That is, assuming the data was imported into the database that way.
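
The same guard can also be pushed into the insert statement itself rather than the importing code; a sketch using the CSV columns above, with the literal values as illustrative placeholders only:

-- Insert a city only when no row with the same country, region and city exists.
insert into your_table (id, latitude, longitude, country, region, city)
select 2143220, 41.3513889, 68.9444444, 'KZ', '10', 'Abay'
where not exists (
  select 1
  from your_table
  where country = 'KZ'
    and region  = '10'
    and city    = 'Abay'
);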

Thanks.
