Source file
http://theday.guohongfu.top/letter.txt
Its content is abcdefghijklmnopqrstuvwxyz
Fetch the content from byte 20 to the end
<code class="Python">import requests

url = 'http://theday.guohongfu.top/letter.txt'
headers1 = {
    'Range': "bytes=20-"  # fetch from byte 20 to the end
}
response = requests.get(url, headers=headers1)
print('data={}'.format(response.content.decode()))
# Output:
# data=uvwxyz
</code>
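To see how the server interprets the `Range` header without touching the network, here is a small local sketch. The `slice_range` helper is made up for illustration (it only covers the `bytes=start-end` and `bytes=start-` forms; real servers also handle suffix ranges like `bytes=-6`):

```python
def slice_range(content: bytes, range_header: str) -> bytes:
    # Hypothetical helper: mimics how a server slices the body for a
    # "bytes=start-end" or "bytes=start-" Range header.
    unit, _, spec = range_header.partition('=')
    assert unit == 'bytes'
    start, _, end = spec.partition('-')
    if end:  # "bytes=start-end": the end position is inclusive
        return content[int(start):int(end) + 1]
    return content[int(start):]  # "bytes=start-": from start to the end

letters = b'abcdefghijklmnopqrstuvwxyz'
print(slice_range(letters, 'bytes=20-'))  # b'uvwxyz'
print(slice_range(letters, 'bytes=0-5'))  # b'abcdef'
```

This matches the output above: byte 20 of the 26-letter file is `u`, so `bytes=20-` yields the last six letters.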
Setting If-Match
Check whether the file has changed between two requests
<code class="Python">import requests

url = 'http://theday.guohongfu.top/letter.txt'
headers1 = {
    'Range': "bytes=0-5"  # fetch bytes 0-5
}
response = requests.get(url, headers=headers1)
print('data={}'.format(response.content.decode()))  # abcdef

# Save the ETag from the first response
req_etag = response.headers['ETag']
headers1['If-Match'] = req_etag  # check whether the file changed between the two requests
headers1['Range'] = 'bytes=6-10'  # fetch bytes 6-10
response = requests.get(url, headers=headers1)
print('data={}'.format(response.content.decode()))  # ghijk
</code>
Output:
<code class="Python"># data=abcdef
# data=ghijk
</code>
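If the file does change between the two requests, the stored ETag no longer matches, and the server should answer 412 Precondition Failed instead of 206 Partial Content. A minimal local sketch of that server-side check (the `check_if_match` helper and the md5-based ETag scheme are assumptions for illustration; real servers compute ETags however they like):

```python
import hashlib

def make_etag(content: bytes) -> str:
    # Assumed ETag scheme: quoted md5 hex digest of the body.
    return '"{}"'.format(hashlib.md5(content).hexdigest())

def check_if_match(content: bytes, if_match: str) -> int:
    # Status code a server would use for a conditional Range
    # request carrying this If-Match value.
    if if_match == make_etag(content):
        return 206  # Partial Content: the file is unchanged
    return 412      # Precondition Failed: the file was modified

original = b'abcdefghijklmnopqrstuvwxyz'
etag = make_etag(original)
print(check_if_match(original, etag))            # 206
print(check_if_match(b'something changed', etag))  # 412
```

On a real 412 response you would typically restart the download from byte 0, since the earlier bytes may no longer be valid.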
Downloading a file in chunks with Python
<code class="Python">import requests

mp4url = 'https://mp4.vjshi.com/2020-11-20/1c28d06e0278413bf6259ba8b9d26140.mp4'
response = requests.get(mp4url, stream=True)  # stream=True: don't load the whole body at once
with open('test.mp4', 'wb') as f:
    for chunk in response.iter_content(chunk_size=512):
        if chunk:  # skip keep-alive chunks
            f.write(chunk)
</code>
Downloading 512 bytes at a time prevents a large file from being read into memory all at once, which could exhaust memory.
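Chunked writing combines naturally with the `Range` header to resume an interrupted download: request `bytes=<current size>-` and append to the existing file. The `resume_headers` helper below is a hypothetical sketch; the request itself is only indicated in comments, since it needs the network:

```python
import os

def resume_headers(path: str) -> dict:
    # Hypothetical helper: ask only for the bytes we don't have yet.
    if os.path.exists(path):
        return {'Range': 'bytes={}-'.format(os.path.getsize(path))}
    return {}  # nothing downloaded yet: fetch the whole file

# Sketch of how it would be used (requires the network):
# response = requests.get(mp4url, headers=resume_headers('test.mp4'), stream=True)
# with open('test.mp4', 'ab') as f:  # append, don't truncate
#     for chunk in response.iter_content(chunk_size=512):
#         if chunk:
#             f.write(chunk)
```

Note that this assumes the server honors `Range` (it answers 206 with a `Content-Range` header); a server that ignores it returns 200 with the full body, in which case appending would corrupt the file.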