
Implementing "get next page" in a Python crawler


This article walks through a Python crawler that follows "next page" links automatically; readers who need this pattern can use the code below for reference.

First, the example code:

 import faker
 import requests
 from lxml import etree

 fake = faker.Faker()
 base_url = "http://angelimg.spbeen.com"

 def get_next_link(url):
     # return the absolute URL of the "next page" link, or False when there is none
     content = downloadHtml(url)
     html = etree.HTML(content)
     next_url = html.xpath("//a[@class='ch next']/@href")
     if next_url:
         return base_url + next_url[0]
     return False

 def downloadHtml(url):
     # fetch a page with a random User-Agent and the site itself as Referer
     user_agent = fake.user_agent()
     headers = {'User-Agent': user_agent, "Referer": "http://angelimg.spbeen.com/"}
     response = requests.get(url, headers=headers)
     return response.text

 def getImgUrl(content):
     # extract the image URL and the article title from the page
     html = etree.HTML(content)
     img_url = html.xpath('//*[@id="content"]/a/img/@src')
     title = html.xpath("//div[@class='article']/h2/text()")
     return img_url[0], title[0]

 def saveImg(title, img_url):
     # download the image and save it under the article title
     if img_url is not None and title is not None:
         with open("txt/" + str(title) + ".jpg", 'wb') as f:
             user_agent = fake.user_agent()
             headers = {'User-Agent': user_agent, "Referer": "http://angelimg.spbeen.com/"}
             content = requests.get(img_url, headers=headers)
             # request_view(content)
             f.write(content.content)

 def request_view(response):
     # debugging helper: dump the response to tmp.html, injecting a <base> tag
     # so relative links resolve, then open it in the browser
     import webbrowser
     request_url = response.url
     base_tag = ('<base href="%s">' % request_url).encode()
     content = response.content.replace(b"<head>", b"<head>" + base_tag)
     with open('tmp.html', 'wb') as tem_html:
         tem_html.write(content)
     webbrowser.open_new_tab('tmp.html')

 def crawl_img(url):
     content = downloadHtml(url)
     img_url, title = getImgUrl(content)
     saveImg(title, img_url)

 if __name__ == "__main__":
     url = "http://angelimg.spbeen.com/ang/4968/1"
     while url:
         print(url)
         crawl_img(url)
         url = get_next_link(url)
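The heart of this crawler is the loop at the bottom: extract the "next" anchor from each page and keep re-crawling until no such link is found. A minimal sketch of just that pagination pattern, stripped of the image-saving logic (next_link here is a hypothetical helper; the URL and XPath are the ones from the example above):

 import requests
 from lxml import etree

 def next_link(url, base="http://angelimg.spbeen.com"):
     # fetch the page and look for the "next page" anchor
     html = etree.HTML(requests.get(url).text)
     hrefs = html.xpath("//a[@class='ch next']/@href")
     return base + hrefs[0] if hrefs else None

 url = "http://angelimg.spbeen.com/ang/4968/1"
 while url:           # the loop ends when next_link() returns None
     print(url)       # process the current page here
     url = next_link(url)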

How a Python crawler can loop through "next page" links to load text automatically

 from bs4 import BeautifulSoup
 import requests
 import time
 from lxml import etree

 # This demo shows how to scrape some text with BeautifulSoup
 def start():
     # make the request
     html = requests.get('http://www.baidu.com')
     # fix the encoding
     html.encoding = html.apparent_encoding
     # build the soup
     soup = BeautifulSoup(html.text, 'html.parser')
     print(type(soup))
     print('printing the document')
     print(soup.prettify())
     # grab the title (plain attribute access, shown directly)
     title = soup.head.title.string
     print(title)
     # write it to a text file
     with open(r'C:/Users/a/Desktop/a.txt', 'w') as f:
         f.write(title)
     print(time.localtime())

 url_2 = 'http://news.gdzjdaily.com.cn/zjxw/politics/sz_4.shtml'

 def get_html_from_bs4(url):
     # extract the "next page" link with a CSS selector
     # response = requests.get(url, headers=data, proxies=ip).content.decode('utf-8')
     response = requests.get(url).content.decode('utf-8')
     soup = BeautifulSoup(response, 'html.parser')
     next_page = soup.select('#displaypagenum a:nth-of-type(9)')[0].get('href')
     print(next_page)
     next2 = 'http://news.gdzjdaily.com.cn/zjxw/politics/' + next_page

 def get_html_from_etree(url):
     # extract the "next page" link with an XPath expression
     response = requests.get(url).content.decode('utf-8')
     html = etree.HTML(response)
     next_page = html.xpath('.//a[@class="PageNum"][8]/@href')[0]
     print(next_page)
     # next2 = 'http://news.gdzjdaily.com.cn/zjxw/politics/' + next_page

 get_html_from_etree(url_2)

 if __name__ == '__main__':
     start()
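Note that get_html_from_bs4 and get_html_from_etree each extract only a single "next page" link; to actually loop, the extracted link has to be fed back into the next request. A minimal sketch of that loop, assuming the same #displaypagenum selector and URL prefix as above (follow_pages is a hypothetical helper, capped at a few pages to stay polite):

 import time
 import requests
 from bs4 import BeautifulSoup

 def follow_pages(start_url, prefix, max_pages=5):
     url = start_url
     for _ in range(max_pages):
         print(url)   # process the current page here
         soup = BeautifulSoup(requests.get(url).content.decode('utf-8'), 'html.parser')
         links = soup.select('#displaypagenum a:nth-of-type(9)')
         if not links or not links[0].get('href'):
             break                   # no further "next page" link
         url = prefix + links[0].get('href')
         time.sleep(1)               # pause between requests

 follow_pages('http://news.gdzjdaily.com.cn/zjxw/politics/sz_4.shtml',
              'http://news.gdzjdaily.com.cn/zjxw/politics/')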

That concludes this look at fetching the next page with a Python crawler. For more on pagination in Python crawlers, search gaodaima (搞代码) for earlier articles or browse the related articles below.

