Preface
The pymongo tutorials floating around online are all a bit dated; only the runoob (菜鸟教程) one is worth a look, and the other blog posts can be skipped — I went through a pile of them and they were all out of date. I'd recommend going straight to the official documentation: there isn't much of it, and it's easy to pick up.
Issues I found
- Unlike the usage shown online, MongoClient() is now the only way to connect, and the safe=true parameter no longer exists; maxPoolSize defaults to 100 and can be set yourself when connecting.
- Contrary to the claim online that inserting is faster without an explicit _id, my tests show the opposite — specifying _id was slightly faster. The difference is small, but specify it when you can.
- logging.info() is surprisingly expensive: inserting 10,000 documents took about 16 seconds with that line and only about 7 seconds without it.
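The logging overhead in the last point can be measured on its own, without MongoDB in the loop. This is a minimal, self-contained sketch (the buffer, function names, and loop count are my own choices, not from the original test) that times 10,000 iterations with and without a logging.info() call:

```python
import io
import logging
import time

# Send log output to an in-memory buffer so the benchmark is self-contained
stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.DEBUG,
                    format='%(asctime)s [%(threadName)s] %(levelname)s: %(message)s')

def timed(with_logging, n=10000):
    """Return seconds spent looping n times, optionally logging each pass."""
    start = time.perf_counter()
    for i in range(n):
        if with_logging:
            logging.info(i)
    return time.perf_counter() - start

quiet = timed(False)
noisy = timed(True)
print(f'no logging: {quiet:.4f}s, with logging.info: {noisy:.4f}s')
```

Formatting and writing 10,000 log records dominates the bare loop, which is why dropping the logging.info() line in the insert test cut the total time roughly in half.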
Using pymongo
```python
import requests
from pymongo import MongoClient
import logging
import time
import json
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s [%(threadName)s] %(levelname)s: %(message)s')

# Reuse connections via a session; this greatly improves throughput
s = requests.Session()
# Raise the defaults (10) to 20 hosts and a pool of 200 connections;
# http and https adapters must each be mounted separately
s.mount('https://', requests.adapters.HTTPAdapter(pool_connections=20, pool_maxsize=200))
s.mount('http://', requests.adapters.HTTPAdapter(pool_connections=20, pool_maxsize=200))


def getHTML(uid):
    # url = ''
    return uid


def toDB(obj):
    res = obj.result()
    x = collect.insert_one({'_id': res + 20000, 'uid': res})
    # # Specifying _id seems faster; the claim online that omitting it is faster is nonsense
    # logging.info(x.inserted_id)


if __name__ == "__main__":
    logging.info('start')
    # client = MongoClient('127.0.0.1', 27017)  # default connection
    client = MongoClient('127.0.0.1', 65500, maxPoolSize=200)  # pool of 200 connections
    db = client['douyin']
    collect = db['user']
    pool = ThreadPoolExecutor()  # default max_workers is CPU count * 5 (on Python 3.5-3.7)
    t = time.perf_counter()
    for short_id in range(10000):
        pool.submit(getHTML, short_id).add_done_callback(toDB)
    pool.shutdown(wait=True)
    client.close()
    t = time.perf_counter() - t
    print(t)
```
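The submit()/add_done_callback() pattern above can be tried without a MongoDB server. This stdlib-only sketch (the work/collect_result names and the doubling task are illustrative, not from the original) collects results into a plain list in place of the insert_one call; the callback runs in the worker thread that completed the future, and shutdown on exiting the with-block waits for all tasks:

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def work(uid):
    # Stand-in for getHTML(): just return a derived value
    return uid * 2

def collect_result(fut):
    # Callback fires when the future completes; fut.result() is work()'s return value
    results.append(fut.result())

with ThreadPoolExecutor() as pool:  # exiting the block calls shutdown(wait=True)
    for uid in range(5):
        pool.submit(work, uid).add_done_callback(collect_result)

print(sorted(results))  # → [0, 2, 4, 6, 8]
```

Note that completion order is not guaranteed, which is why the output is sorted; in the MongoDB version this doesn't matter because each document carries its own _id.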