The Problem
Imagine you have thousands of documents in an archive, many of which are duplicates of one another: the contents are identical even though the titles differ. Now suppose your boss asks you to free up some space by deleting the unnecessary duplicates.
The question is: how do you filter for titles that are similar enough that the contents are probably the same? And how do you do it so that, when you are finished, you have not deleted too many documents and are left with a set of unique ones? Let's make this concrete with some code:
titles = [ "End of Year Review 2020", "2020 End of Year", "January Sales Projections", "Accounts 2017-2018", "Jan Sales Predictions" ] # Desired output filtered_titles = [ "End of Year Review 2020", "January Sales Projections", "Accounts 2017-2018", ]
This article is for anyone who wants a quick, practical overview of how to solve a problem like this, while also getting a broad understanding of what each step is doing.
Below I walk through the steps I took to solve the problem. Here is an outline of the control flow (a minimal sketch of the first three steps follows the list):
Pre-process all the title text
Generate every pair of titles
Test every pair for similarity
If a pair fails the similarity test (i.e., the two titles are too alike), drop one of them and build a new list of titles
Keep testing the new list until no similar titles remain
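To make the first three steps concrete, here is a minimal sketch that strips stopwords, lemmatizes, and scores every pair, assuming spaCy and its en_core_web_md model are installed (the sample titles and variable names here are just for illustration):

import spacy
from itertools import combinations

nlp = spacy.load("en_core_web_md")

sample = ["End of Year Review 2020", "2020 End of Year", "January Sales Projections"]

# Step 1: remove stopwords and lemmatize each title
cleaned = [" ".join(tok.lemma_ for tok in nlp(t) if not tok.is_stop) for t in sample]

# Steps 2-3: generate every unique pair and score its similarity
for a, b in combinations(cleaned, 2):
    print(f"{a!r} vs {b!r}: {nlp(a).similarity(nlp(b)):.2f}")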
Expressed in Python, this maps nicely onto a recursive function!
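Stripped of the NLP details, the shape of that recursion looks like the sketch below; find_similar and remove_one_of_each are hypothetical placeholders for the logic implemented in full in the next section:

def filter_until_unique(items):
    similar = find_similar(items)  # hypothetical: returns the items that are too alike
    if not similar:
        return items  # base case: nothing similar is left
    # recursive case: drop one item from each similar pair, then re-check
    return filter_until_unique(remove_one_of_each(items, similar))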
The Code
Below are the two functions that implement this in Python.
import spacy
from itertools import combinations

# Set globals
nlp = spacy.load("en_core_web_md")


def pre_process(titles):
    """
    Pre-processes titles by removing stopwords and lemmatizing text.

    :param titles: list of strings, contains target titles.
    :return: preprocessed_title_docs, list containing pre-processed titles.
    """

    # Preprocess all the titles
    title_docs = [nlp(x) for x in titles]
    preprocessed_title_docs = []
    lemmatized_tokens = []
    for title_doc in title_docs:
        for token in title_doc:
            if not token.is_stop:
                lemmatized_tokens.append(token.lemma_)
        preprocessed_title_docs.append(" ".join(lemmatized_tokens))
        del lemmatized_tokens[:]  # empty the lemmatized tokens list as the code moves onto a new title

    return preprocessed_title_docs


def similarity_filter(titles):
    """
    Recursively check if titles pass a similarity filter.

    :param titles: list of strings, contains titles.
    If the function finds titles that fail the similarity test, the above param will be the function output.
    :return: this method upon itself unless there are no similar titles; in that case the feed that was passed in is returned.
    """

    # Preprocess titles
    preprocessed_title_docs = pre_process(titles)

    # Remove similar titles
    all_summary_pairs = list(combinations(preprocessed_title_docs, 2))
    similar_titles = []
    for pair in all_summary_pairs:
        title1 = nlp(pair[0])
        title2 = nlp(pair[1])
        similarity = title1.similarity(title2)
        if similarity > 0.8:
            similar_titles.append(pair)

    titles_to_remove = []
    for a_title in similar_titles:
        # Get the index of the first title in the pair
        index_for_removal = preprocessed_title_docs.index(a_title[0])
        titles_to_remove.append(index_for_removal)

    # Get indices of similar titles and remove them
    similar_title_counts = set(titles_to_remove)
    similar_titles = [
        x[1] for x in enumerate(titles) if x[0] in similar_title_counts
    ]

    # Exit the recursion if there are no longer any similar titles
    if len(similar_title_counts) == 0:
        return titles

    # Continue the recursion if there are still titles to remove
    else:
        # Remove similar titles from the next input
        for title in similar_titles:
            idx = titles.index(title)
            titles.pop(idx)

        return similarity_filter(titles)


if __name__ == "__main__":
    your_title_list = ['title1', 'title2']
    similarity_filter(your_title_list)
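To try it on the sample titles from the start of the article, swap them into the __main__ block. One detail worth knowing: similarity_filter mutates the list it receives (it pops items in place), so pass a copy if you want to keep the original around:

sample_titles = [
    "End of Year Review 2020",
    "2020 End of Year",
    "January Sales Projections",
    "Accounts 2017-2018",
    "Jan Sales Predictions",
]

# Pass a copy, since similarity_filter() pops items from its input
unique_titles = similarity_filter(list(sample_titles))
print(unique_titles)

With en_core_web_md, the near-duplicate pairs ("End of Year Review 2020" / "2020 End of Year" and "January Sales Projections" / "Jan Sales Predictions") should score above the threshold and collapse to a single title each, though the exact scores depend on the model's word vectors. The 0.8 cutoff is a judgment call: raise it to keep more titles, lower it to remove duplicates more aggressively.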