
A Python text-processing recipe (jieba word segmentation with symbol removal)

python · 搞代码 (gaodaima) · 2022-01-07

This article presents a Python text-processing recipe: segmenting text with jieba and stripping punctuation symbols. It is offered as a practical reference; corrections and suggestions for cases not covered here are welcome.

Here is the code:

```python
import re
import codecs
import jieba
import pandas as pd


def simplification_text(xianbingshi):
    """Extract the target text from each line of the source file."""
    xianbingshi_simplification = []
    with codecs.open(xianbingshi, 'r', 'utf8') as f:
        for line in f:
            line = line.strip()
            # NOTE: this pattern was corrupted when the article was scraped;
            # extracting the text between <b>...</b> markers is a best-effort
            # reconstruction.
            matches = re.findall(r'(?<=<b>).*?(?=</b>)', line)
            for match in matches:
                xianbingshi_simplification.append(match)
    with codecs.open(r'C:\Users\Administrator.SC-201812211013\PycharmProjects\untitled29\yiwoqu\code\xianbingshi_write.txt', 'w', 'utf8') as f:
        for line in xianbingshi_simplification:
            f.write(line + '\n')


def jieba_text():
    """Segment the extracted text with jieba and deduplicate the tokens."""
    word_list = []
    data = open(r"C:\Users\Administrator.SC-201812211013\PycharmProjects\untitled29\xianbingshi_write.txt", encoding='utf-8').read()
    seg_list = jieba.cut(data, cut_all=False)  # precise mode
    for i in seg_list:
        word_list.append(i.strip())
    # Deduplicate while keeping the first occurrence of each token
    data_quchong = pd.DataFrame({'a': word_list})
    data_quchong.drop_duplicates(subset=['a'], keep='first', inplace=True)
    word_list = data_quchong['a'].tolist()
    with codecs.open('word.txt', 'w', 'utf8') as w:
        for line in word_list:
            w.write(line + '\n')


def word_messy(word):
    """Refine the word list: drop tokens that are pure numbers or ASCII strings."""
    word_sub_list = []
    with codecs.open(word, 'r', 'utf8') as f:
        for line in f:
            line_sub = re.sub(r"^[1-9]\d*\.\d*|^[A-Za-z0-9]+$|^[0-9]*$|^(-?\d+)(\.\d+)?$|^[A-Za-z0-9]{4,40}.*?", '', line)
            word_sub_list.append(line_sub)
    word_sub_list.sort()
    with codecs.open('word.txt', 'w', 'utf8') as w:
        for line in word_sub_list:
            w.write(line.strip("\n") + '\n')


if __name__ == '__main__':
    xianbingshi = r'C:\Users\Administrator.SC-201812211013\PycharmProjects\untitled29\yiwoqu\xianbingshi_sub_sen_all(1).txt'
    # word = r'C:\Users\Administrator.SC-201812211013\PycharmProjects\untitled29\word.txt'
    simplification_text(xianbingshi)
```
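The deduplication step in `jieba_text` relies on pandas `drop_duplicates(keep='first')`. If pandas is not available, the same first-occurrence deduplication can be done with the standard library alone; the sketch below (the sample tokens are made up) uses `dict.fromkeys`, which preserves insertion order on Python 3.7+.

```python
def dedupe_keep_first(words):
    """Remove duplicate tokens, keeping the first occurrence of each.

    dict.fromkeys preserves insertion order (Python 3.7+), so this mirrors
    pandas drop_duplicates(keep='first') without the DataFrame round-trip.
    """
    return list(dict.fromkeys(words))


print(dedupe_keep_first(["头痛", "发热", "头痛", "咳嗽", "发热"]))
# ['头痛', '发热', '咳嗽']
```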

Addendum: jieba segmentation in Python, with symbols removed via re

Here is the code:

```python
import re
import jieba

# Build the stop-word dictionary
stopwords = {}
fstop = open('stop_words.txt', 'r', encoding='utf-8', errors='ignore')
for eachWord in fstop:
    stopwords[eachWord.strip()] = eachWord.strip()  # stop-word dictionary
fstop.close()

f1 = open('all.txt', 'r', encoding='utf-8', errors='ignore')
f2 = open('allutf11.txt', 'w', encoding='utf-8')
line = f1.readline()
while line:
    line = line.strip()  # strip leading/trailing whitespace
    # Strip ASCII and full-width (Chinese) punctuation
    line = re.sub(r"[0-9\s+\.\!\/_,$%^*()?;;:-【】+\"\']+|[+――!,;:。?、~@#¥%……&*()]+", " ", line)
    seg_list = jieba.cut(line, cut_all=False)  # jieba segmentation, precise mode
    outStr = ""
    for word in seg_list:
        if word not in stopwords:
            outStr += word
            outStr += " "
    f2.write(outStr)
    line = f1.readline()
f1.close()
f2.close()
```

The above reflects personal experience; I hope it serves as a useful reference, and I hope you will continue to support gaodaima (搞代码).


