1. pandarallel (pip install pandarallel)
For a simple use case with a pandas DataFrame df and a function func to apply, just replace the classic apply with parallel_apply.
from pandarallel import pandarallel

# Initialization
pandarallel.initialize()

# Standard pandas apply
df.apply(func)

# Parallel apply
df.parallel_apply(func)
Note that if you do not want to parallelize the computation, you can still use the classic apply method.
You can also display one progress bar per worker CPU by passing progress_bar=True to the initialize function, as in the sketch below.
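A minimal sketch of this, assuming a toy DataFrame (the column names and the squaring function are purely illustrative, not part of the original example):

import pandas as pd
from pandarallel import pandarallel

# One progress bar per worker; nb_workers is optional and defaults to the number of CPUs
pandarallel.initialize(progress_bar=True, nb_workers=4)

# Toy data: the column name 'x' and func are illustrative assumptions
df = pd.DataFrame({'x': range(100_000)})

def func(v):
    return v ** 2

df['y'] = df['x'].parallel_apply(func)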
2. joblib (pip install joblib)
https://pypi.python.org/pypi/joblib
# Embarrassingly parallel helper: to make it easy to write readable parallel code and debug it quickly
import time  # needed for the timing calls below
from math import sqrt
from joblib import Parallel, delayed

def test():
    start = time.time()
    result1 = Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10000))
    end = time.time()
    print(end - start)
    result2 = Parallel(n_jobs=8)(delayed(sqrt)(i**2) for i in range(10000))
    end2 = time.time()
    print(end2 - end)
------- Output -------
0.4434356689453125
0.6346755027770996
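Here n_jobs=8 is actually slower than n_jobs=1: computing sqrt is so cheap that worker startup and dispatch overhead dominate. As a rough illustration (the slow_task function and its 0.01 s sleep are assumptions, standing in for real per-item work), parallelism pays off once each call does enough work:

import time
from joblib import Parallel, delayed

def slow_task(i):
    time.sleep(0.01)  # stand-in for a genuinely expensive computation
    return i * i

start = time.time()
serial = Parallel(n_jobs=1)(delayed(slow_task)(i) for i in range(200))
print("n_jobs=1:", time.time() - start)

start = time.time()
parallel = Parallel(n_jobs=8)(delayed(slow_task)(i) for i in range(200))
print("n_jobs=8:", time.time() - start)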
3. multiprocessing
import multiprocessing as mp

with mp.Pool(mp.cpu_count()) as pool:
    df['newcol'] = pool.map(f, df['col'])

multiprocessing.cpu_count()
Returns the number of CPUs in the system.
This number is not the same as the number of CPUs the current process can use; the number of usable CPUs can be obtained with len(os.sched_getaffinity(0)).
May raise NotImplementedError.
See also os.cpu_count().
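A small sketch that queries both values; the AttributeError fallback is an assumption for platforms (such as macOS or Windows) where os.sched_getaffinity does not exist:

import os
import multiprocessing as mp

print("CPUs in the system:", mp.cpu_count())

# os.sched_getaffinity is only available on some Unix systems
try:
    usable = len(os.sched_getaffinity(0))
except AttributeError:
    usable = os.cpu_count()
print("CPUs usable by this process:", usable)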
4. Performance comparison of the approaches
(1) Code
import sys
import time
import pandas as pd
import multiprocessing as mp
from joblib import Parallel, delayed
from pandarallel import pandarallel
from tqdm import tqdm, tqdm_notebook


def get_url_len(url):
    url_list = url.split(".")
    time.sleep(0.01)  # sleep for 0.01 seconds
    return len(url_list)


def test1(data):
    """No optimization"""
    start = time.time()
    data['len'] = data['url'].apply(get_url_len)
    end = time.time()
    cost_time = end - start
    res = sum(data['len'])
    print("res:{}, cost time:{}".format(res, cost_time))


def test_mp(data):
    """Optimized with multiprocessing"""
    start = time.time()
    with mp.Pool(mp.cpu_count()) as pool:
        data['len'] = pool.map(get_url_len, data['url'])
    end = time.time()
    cost_time = end - start
    res = sum(data['len'])
    print("test_mp \t res:{}, cost time:{}".format(res, cost_time))


def test_pandarallel(data):
    """Optimized with pandarallel"""
    start = time.time()
    pandarallel.initialize()
    data['len'] = data['url'].parallel_apply(get_url_len)
    end = time.time()
    cost_time = end - start
    res = sum(data['len'])
    print("test_pandarallel \t res:{}, cost time:{}".format(res, cost_time))


def test_delayed(data):
    """Optimized with joblib delayed"""
    def key_func(subset):
        subset["len"] = subset["url"].apply(get_url_len)
        return subset

    start = time.time()
    data_grouped = data.groupby(data.index)
    # data_grouped is iterable, so tqdm can be used to show a progress bar
    results = Parallel(n_jobs=8)(delayed(key_func)(group) for name, group in tqdm(data_grouped))
    data = pd.concat(results)
    end = time.time()
    cost_time = end - start
    res = sum(data['len'])
    print("test_delayed \t res:{}, cost time:{}".format(res, cost_time))


if __name__ == '__main__':
    columns = ['title', 'url', 'pub_old', 'pub_new']
    temp = pd.read_csv("./input.csv", names=columns, nrows=10000)
    data = temp
    """
    for i in range(99):
        data = data.append(temp)
    """
    print(len(data))
    """
    test1(data)
    test_mp(data)
    test_pandarallel(data)
    """
    test_delayed(data)
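One possible refinement, offered only as a sketch: test_delayed groups by data.index, which produces one group per row and therefore one joblib task per row. Splitting the DataFrame into a few larger chunks usually reduces scheduling overhead; the apply_len helper and the chunk count below are assumptions, reusing get_url_len and the url column from the benchmark above:

import numpy as np
import pandas as pd
from joblib import Parallel, delayed

def apply_len(chunk):
    # Apply get_url_len (defined in the benchmark above) to a whole chunk at a time
    chunk = chunk.copy()
    chunk['len'] = chunk['url'].apply(get_url_len)
    return chunk

def test_chunked(data, n_jobs=8):
    # A few chunks per worker instead of one group per row
    n_chunks = n_jobs * 4
    bounds = np.linspace(0, len(data), n_chunks + 1, dtype=int)
    chunks = [data.iloc[bounds[i]:bounds[i + 1]] for i in range(n_chunks)]
    results = Parallel(n_jobs=n_jobs)(delayed(apply_len)(c) for c in chunks)
    return pd.concat(results)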