This article is a Python 3 crawler case study on storing scraped data in a txt file. The example code is explained in detail and should be a useful reference for study or work.
In the previous post we scraped Zhihu's trending topics; this time we save the results to a local txt file.

The code comes first. There are quite a few details and pitfalls to work around, and it took two and a half hours to get right.
```python
import re

import requests

headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/73.0.3683.86 Safari/537.36",
    # Session cookie copied from a logged-in browser -- replace it with
    # your own, since these values expire.
    "cookie": '_xsrf=H6hRg3qQ9I1O8jRZOmf4ytecfaKdf2es; _zap=296584df-ce11-4059-bc93-be10eda0fdc1; '
              'd_c0="AKBmB5e-PA-PTkZTAD1nQun0qMf_hmcEH14=|1554554531"; '
              'capsion_ticket="2|1:0|10:1554554531|14:capsion_ticket|44:Yjc0NjAzNDViMTIzNDcyZDg2YTZjYTk0YWM3OGUzZDg=|'
              '2d7f136328b50cdeaa85e2605e0be2bb931d406babd396373d15d5f8a6c92a61"; l_n_c=1; '
              'q_c1=ad0738b5ee294fc3bd35e1ccb9e62a11|1554554551000|1554554551000; n_c=1; '
              '__gads=ID=9a31896e052116c4:T=1554555023:S=ALNI_Mb-I0et9WvgfQcvMUyll7Byc0XpWA; '
              'tgw_l7_route=116a747939468d99065d12a386ab1c5f; '
              'l_cap_id="OGEyOTkzMzE2YmU3NDVmYThlMmQ4OTBkMzNjODE4N2Y=|1554558219|a351d6740bd01ba8ee3494da0bd8b697b20aa5f0"; '
              'r_cap_id="MDIzNThmZjRhNjNlNGQ1OWFjM2NmODIxNzNjZWY2ZjY=|1554558219|ff86cb2f7d3c6e4a4e2b1286bbe0c093695bfa1d"; '
              'cap_id="MGNkY2RiZTg5N2MzNDUyNTk0NmEzMTYyYzgwYzdhYTE=|1554558219|18ed852d4506efb2345b1dbe14c749b2f9104d54"; '
              '__utma=51854390.789428312.1554558223.1554558223.1554558223.1; __utmb=51854390.0.10.1554558223; '
              '__utmc=51854390; __utmz=51854390.1554558223.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); '
              '__utmv=51854390.000--|3=entry_date=20190406=1',
    "authority": "www.zhihu.com",
}

url = "https://www.zhihu.com/explore"
response = requests.get(url=url, headers=headers)
text = response.text
# print(text)

# NOTE: the HTML tags inside the two regular expressions below were
# stripped when the article was published (only ".*?(.*?).*?" survived).
# The class names used here are reconstructions based on the Zhihu
# explore-page markup of that period, not the author's exact patterns.
titles = []
f_titles = re.findall(r'<a class="question_link"[^>]*?>(.*?)</a>', text, re.S)
for title in f_titles:
    titles.append(title.strip())
# print("*" * 30)

authors = []
f_authors = re.findall(r'<a class="author-link"[^>]*?>(.*?)</a>', text, re.S)
for author in f_authors:
    authors.append(author.strip())
```
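The listing above is cut off before the step the article's title promises: writing the scraped data to a local txt file. Below is a minimal sketch of that step, pairing each title with its author via `zip` and writing one record per line. The sample data and the file name `explore.txt` are my own illustrative choices, not from the original article; in practice `titles` and `authors` would come from the scraping code above.

```python
# Hypothetical sample data standing in for the scraped results.
titles = ["How do I learn Python?", "Is the universe finite?"]
authors = ["Alice", "Bob"]

# Open the file in write mode with an explicit UTF-8 encoding --
# a common pitfall on Windows, where the default encoding is not UTF-8.
with open("explore.txt", "w", encoding="utf-8") as f:
    for title, author in zip(titles, authors):
        # One "title : author" record per line.
        f.write(title + " : " + author + "\n")
```

Use mode `"a"` instead of `"w"` if you want repeated runs to append to the file rather than overwrite it.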
That concludes this case study on storing scraped data in a txt file with a Python 3 crawler.