Brandon Cieslak
2013-07-17 15:54:21 UTC
Hello Computer Go mailing list,
We were wondering if anyone has ever seen or heard of a method of storing
patterns similar to the one we are currently trying to implement and test.
Here is our method:
We are using multiple hash tables to store win rates for large patterns.
Each table uses a different Zobrist hash function. The tables are sloppy
in that, when two patterns inevitably hash to the same location, we simply
combine the information.
When we store a win or a loss for a pattern (encountered in a recorded game
or a playout), we store it in each table. When looking for the win rate of
a pattern, we look in each of the tables and get the average of those
results.
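For concreteness, here is a minimal Python sketch of the scheme we have in mind. It is not our actual implementation; the table count, table size, Zobrist key layout, and the 0.5 prior returned for never-seen patterns are placeholder assumptions chosen just to make the example self-contained.

import random

class MultiHashTable:
    """n sloppy tables, each indexed by an independent Zobrist-style hash.
    Colliding patterns within a table share a slot; a lookup averages the
    win rates found for the pattern across all n tables."""

    def __init__(self, num_tables=4, table_size=1 << 16, num_points=361):
        self.num_tables = num_tables
        self.table_size = table_size
        rng = random.Random(0)
        # One independent set of Zobrist keys per table:
        # keys[table][point][color], with color 0 = black, 1 = white.
        self.keys = [[[rng.getrandbits(64) for _ in range(2)]
                      for _ in range(num_points)]
                     for _ in range(num_tables)]
        # Per-slot (wins, runs) counters; collisions merge into the same slot.
        self.wins = [[0] * table_size for _ in range(num_tables)]
        self.runs = [[0] * table_size for _ in range(num_tables)]

    def _slot(self, t, pattern):
        # Zobrist hash of a pattern (an iterable of (point, color) pairs)
        # under table t's key set, folded down to a table index.
        h = 0
        for point, color in pattern:
            h ^= self.keys[t][point][color]
        return h % self.table_size

    def store(self, pattern, won):
        # Record one win or loss for the pattern in every table.
        for t in range(self.num_tables):
            s = self._slot(t, pattern)
            self.runs[t][s] += 1
            if won:
                self.wins[t][s] += 1

    def win_rate(self, pattern):
        # Average the win rates the tables report for this pattern.
        rates = []
        for t in range(self.num_tables):
            s = self._slot(t, pattern)
            if self.runs[t][s] > 0:
                rates.append(self.wins[t][s] / self.runs[t][s])
        return sum(rates) / len(rates) if rates else 0.5

# Example: one stored win for a two-stone pattern.
table = MultiHashTable()
table.store([(60, 0), (61, 1)], won=True)
print(table.win_rate([(60, 0), (61, 1)]))  # 1.0

The point is simply that every store touches all n tables and every lookup averages across them; collisions within a table are never resolved, only merged.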
The advantage of this approach (over one big table) is that, if there are n
tables, each lookup averages n copies of the signal but only one copy of
each collision: the true pattern's statistics appear in every table, while a
pattern that collides with it in one table almost certainly hashes elsewhere
in the others, so averaging dilutes each collision's noise by roughly a
factor of n. Preliminary experiments suggest that this is a better use
of memory than one large table.
(We also plan to store patterns at several different scales this way, but
that's another story.)
The only reference we have found to this approach is a single section of
Narayanasamy et al., "Catching Accurate Profiles in Hardware"
(http://cseweb.ucsd.edu/~calder/papers/HPCA-03-Multihash.pdf; Section 6 is
the most relevant). They call this approach multi-hashing.
Thank you in advance,
The Orego Team