
Ten Ways to Fetch Web Resources with Python 3

python · gaodaima · 4 years ago (2022-01-09) · 18 views · 0 comments

Over the past couple of days I have been studying how to fetch web resources with Python 3. There turn out to be quite a few approaches, so here are some short notes.


1. The simplest approach

import urllib.request

response = urllib.request.urlopen('http://python.org/')
html = response.read()

2. Using a Request object

import urllib.request

req = urllib.request.Request('http://python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()
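Before it is sent, a Request can be inspected and extended; a minimal sketch (the User-Agent string is made up, and no network access happens here):

```python
import urllib.request

req = urllib.request.Request('http://python.org/')
req.add_header('User-Agent', 'my-crawler/0.1')  # hypothetical UA string

print(req.full_url)                  # the URL the request targets
print(req.get_method())              # 'GET' until a data payload is attached
print(req.get_header('User-agent'))  # headers are stored with capitalized keys
```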

3. Sending data

#!/usr/bin/env python3
import urllib.parse
import urllib.request

url = 'http://localhost/login.php'
values = {
    'act': 'login',
    'login[email]': '[email protected]',
    'login[password]': '123456',
}

# In Python 3 the POST body must be bytes, so encode the urlencoded string
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data)
req.add_header('Referer', 'http://www.python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()
print(the_page.decode('utf8'))

4. Sending data and headers

#!/usr/bin/env python3
import urllib.parse
import urllib.request

url = 'http://localhost/login.php'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {
    'act': 'login',
    'login[email]': '[email protected]',
    'login[password]': '123456',
}
headers = {'User-Agent': user_agent}

# In Python 3 the POST body must be bytes, so encode the urlencoded string
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data, headers)
response = urllib.request.urlopen(req)
the_page = response.read()
print(the_page.decode('utf8'))

5. HTTP errors

#!/usr/bin/env python3
import urllib.request
import urllib.error

req = urllib.request.Request('http://www.python.org/fish.html')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.read().decode('utf8'))

6. Exception handling, take 1

#!/usr/bin/env python3
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

req = Request('http://twitter.com/')
try:
    response = urlopen(req)
except HTTPError as e:
    print("The server couldn't fulfill the request.")
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)
else:
    print('good!')
    print(response.read().decode('utf8'))

7. Exception handling, take 2

#!/usr/bin/env python3
from urllib.request import Request, urlopen
from urllib.error import URLError

req = Request('http://twitter.com/')
try:
    response = urlopen(req)
except URLError as e:
    if hasattr(e, 'reason'):
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
    elif hasattr(e, 'code'):
        print("The server couldn't fulfill the request.")
        print('Error code: ', e.code)
else:
    print('good!')
    print(response.read().decode('utf8'))
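Both styles work because HTTPError is a subclass of URLError. The hierarchy can be shown without any network traffic by constructing an HTTPError by hand, much as urlopen would for a 404 response (the URL and body here are made up):

```python
import io
import urllib.error

e = urllib.error.HTTPError(
    'http://example.com/missing',  # url
    404,                           # code
    'Not Found',                   # msg
    None,                          # hdrs
    io.BytesIO(b'gone'),           # fp: the error body
)

print(isinstance(e, urllib.error.URLError))  # True: one handler can catch both
print(e.code)    # 404
print(e.read())  # b'gone' -- an HTTPError doubles as a response object
```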

8. HTTP authentication

#!/usr/bin/env python3
import urllib.request

# Create a password manager.
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()

# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "https://cms.tetx.com/"
password_mgr.add_password(None, top_level_url, 'yzhang', 'cccddd')

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

# Create an "opener" (OpenerDirector instance).
opener = urllib.request.build_opener(handler)

# Use the opener to fetch a URL.
a_url = "https://cms.tetx.com/"
x = opener.open(a_url)
print(x.read())

# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)

a = urllib.request.urlopen(a_url).read().decode('utf8')
print(a)
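Behind the scenes, the handler asks the password manager for credentials when the server replies 401. The lookup itself can be tried directly, without a server (same placeholder URL and credentials as above):

```python
import urllib.request

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'https://cms.tetx.com/', 'yzhang', 'cccddd')

# With a default realm, any realm matches for URLs under the registered one.
print(password_mgr.find_user_password('some-realm', 'https://cms.tetx.com/admin'))
# ('yzhang', 'cccddd')
```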

9. Using a proxy

#!/usr/bin/env python3
import urllib.request

# ProxyHandler keys are URL schemes ('http', 'https'); urllib has no
# built-in SOCKS support, so a SOCKS5 proxy needs a third-party package.
proxy_support = urllib.request.ProxyHandler({'http': 'http://localhost:1080'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)

a = urllib.request.urlopen('http://g.cn').read().decode('utf8')
print(a)
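When no ProxyHandler is passed, urllib builds one from the environment via getproxies(); a sketch of that lookup (the proxy address is made up):

```python
import os
import urllib.request

# Hypothetical proxy, configured through the environment instead of code.
os.environ['http_proxy'] = 'http://localhost:1080'

proxies = urllib.request.getproxies()
print(proxies['http'])  # http://localhost:1080

# Equivalent to installing the handler explicitly:
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
```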

10. Timeouts

#!/usr/bin/env python3
import socket
import urllib.request

# Timeout in seconds.
timeout = 2
socket.setdefaulttimeout(timeout)

# This call to urllib.request.urlopen now uses the default timeout
# we have set in the socket module.
req = urllib.request.Request('http://twitter.com/')
a = urllib.request.urlopen(req).read()
print(a)
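setdefaulttimeout changes the default for every socket in the process; urlopen also accepts a per-call timeout argument, which is usually the safer choice. A sketch of both (no request is actually sent here):

```python
import socket
import urllib.request

socket.setdefaulttimeout(2)        # process-wide default, as above
print(socket.getdefaulttimeout())  # 2.0

# Per-call override -- applies to this request only and raises
# socket.timeout (a subclass of OSError) when the server is too slow:
#   urllib.request.urlopen('http://twitter.com/', timeout=5)
```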


That concludes the ten ways to fetch web resources with Python 3.

