APR_DECLARE_NONSTD(unsigned int) apr_hashfunc_default(const char *char_key,
                                                       apr_ssize_t *klen)
{
    unsigned int hash = 0;
    const unsigned char *key = (const unsigned char *)char_key;
    const unsigned char *p;
    apr_ssize_t i;

    /*
     * This is the popular `times 33' hash algorithm which is used by
     * perl and also appears in Berkeley DB. This is one of the best
     * known hash functions for strings because it is both computed
     * very fast and distributes very well.
     *
     * The originator may be Dan Bernstein but the code in Berkeley DB
     * cites Chris Torek as the source. The best citation I have found
     * is "Chris Torek, Hash function for text in C, Usenet message
     * <27038@mimsy.umd.edu> in comp.lang.c , October, 1990." in Rich
     * Salz's USENIX 1992 paper about INN which can be found at
     * <http://citeseer.nj.nec.com/salz92internetnews.html>.
     *
     * The magic of number 33, i.e. why it works better than many other
     * constants, prime or not, has never been adequately explained by
     * anyone. So I try an explanation: if one experimentally tests all
     * multipliers between 1 and 256 (as I did while writing a low-level
     * data structure library some time ago) one detects that even
     * numbers are not useable at all. The remaining 128 odd numbers
     * (except for the number 1) work more or less all equally well.
     * They all distribute in an acceptable way and this way fill a hash
     * table with an average percent of approx. 86%.
     *
     * If one compares the chi^2 values of the variants (see
     * Bob Jenkins ``Hashing Frequently Asked Questions'' at
     * http://burtleburtle.net/bob/hash/hashfaq.html for a description
     * of chi^2), the number 33 not even has the best value. But the
     * number 33 and a few other equally good numbers like 17, 31, 63,
     * 127 and 129 have nevertheless a great advantage to the remaining
     * numbers in the large set of possible multipliers: their multiply
     * operation can be replaced by a faster operation based on just one
     * shift plus either a single addition or subtraction operation. And
     * because a hash function has to both distribute good _and_ has to
     * be very fast to compute, those few numbers should be preferred.
     *
     *                  -- Ralf S. Engelschall <rse@engelschall.com>
     */

    if (*klen == APR_HASH_KEY_STRING) {
        for (p = key; *p; p++) {
            hash = hash * 33 + *p;
        }
        *klen = p - key;
    }
    else {
        for (p = key, i = *klen; i; i--, p++) {
            hash = hash * 33 + *p;
        }
    }
    return hash;
}
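For readers who want to try the algorithm outside of APR, here is a minimal standalone sketch of the string-keyed branch of the function above. The helper name times33_hash and the sample keys are illustrative assumptions, not part of APR; the real entry point is apr_hashfunc_default, called with APR_HASH_KEY_STRING as the key length, exactly as in the source.

#include <stdio.h>

/* Hypothetical standalone re-creation of the `times 33' loop above;
 * it mirrors only the string-keyed branch of apr_hashfunc_default. */
static unsigned int times33_hash(const char *s)
{
    unsigned int hash = 0;
    const unsigned char *p = (const unsigned char *)s;
    while (*p) {
        hash = hash * 33 + *p++;    /* same update step as the APR code */
    }
    return hash;
}

int main(void)
{
    const char *keys[] = { "apple", "banana", "cherry" };
    size_t i;
    for (i = 0; i < sizeof(keys) / sizeof(keys[0]); i++) {
        /* A real hash table would reduce the value modulo the table size. */
        printf("%-8s -> %u (bucket %u of 16)\n",
               keys[i], times33_hash(keys[i]), times33_hash(keys[i]) % 16);
    }
    return 0;
}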
Translation of the comment block:

This is the well-known `times 33' hash algorithm, which is used by Perl and also appears in Berkeley DB. It is one of the best known hash functions for string keys because it is both very fast to compute and distributes very well.

The originator may be Dan Bernstein, but the code in Berkeley DB cites Chris Torek as the source. The best citation the author found is "Chris Torek, Hash function for text in C, Usenet message in comp.lang.c, October 1990," referenced in Rich Salz's 1992 USENIX paper about INN.

As for the magic number 33: why does it work better than so many other constants, prime or not? No one has ever adequately explained it, so the author offers his own explanation. If one experimentally tests every multiplier between 1 and 256 (as he did while writing a low-level data structure library some time ago), one finds that even numbers are not usable at all, while the remaining 128 odd numbers (except for 1) all work more or less equally well: they all distribute acceptably and fill a hash table to an average load of roughly 86%.

If one compares the chi^2 values of these variants (translator's note: chi^2 is a statistic measuring how far the observed distribution deviates from the expected one; see Bob Jenkins' "Hashing Frequently Asked Questions" at http://burtleburtle.net/bob/hash/hashfaq.html for a description of chi^2), 33 does not even have the best value. (Translator's note: ordinarily a smaller deviation would mean a more stable distribution, but since the exact formula is not given here, and it is not stated whether more or less dispersion is desirable for a hash table, it is not obvious whether "best" means a larger or smaller value.) However, 33 and a few other equally good numbers such as 17, 31, 63, 127 and 129 have one big advantage over the rest of the possible multipliers: the multiplication can be replaced by a single shift plus one addition or subtraction, which is faster to compute. Since a hash function has to both distribute well and be very fast to compute, those few numbers should be preferred.
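To make that last point concrete: multiplying by 33 can be rewritten as one shift plus one addition, because 33*h = 32*h + h = (h << 5) + h (and similarly 31*h = (h << 5) - h). The sketch below, with hypothetical helper names that are not part of APR, checks that the two forms of the update step agree.

#include <assert.h>

/* Multiplication form, as written in apr_hashfunc_default. */
static unsigned int step_mul(unsigned int hash, unsigned char c)
{
    return hash * 33 + c;
}

/* Shift-and-add form: 33 * hash == (hash << 5) + hash, so the multiply
 * becomes a single shift plus an addition. Constants such as 31 or 127
 * work the same way with a subtraction instead. */
static unsigned int step_shift(unsigned int hash, unsigned char c)
{
    return (hash << 5) + hash + c;
}

int main(void)
{
    unsigned int h_mul = 0, h_shift = 0;
    const unsigned char *p;
    for (p = (const unsigned char *)"times 33"; *p; p++) {
        h_mul = step_mul(h_mul, *p);
        h_shift = step_shift(h_shift, *p);
    }
    assert(h_mul == h_shift);   /* the two formulations always agree */
    return 0;
}

On modern compilers this strength reduction is usually applied automatically for a constant multiplier, so the two forms typically compile to the same instructions; the point made in the comment dates from when that could not be taken for granted.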