Monday, November 10, 2008
Wow... (update on Zilch research)
I grossly underestimated the size of the game space. I was thinking it'd be in the billions. It turns out to be about 64 trillion states. Searching the space will require some memoization to speed things up (the expected-value nodes are 55,987 branches wide), but I can't memoize the entire space unless I have an exabyte of RAM available (need someone to get on that real quick :D). I'll use some strategies to reduce the size of the space, and maybe a caching strategy that only memoizes the states needed at the moment. More updates as I progress tomorrow. I need my sleep.
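For the curious, here's a minimal sketch of the kind of bounded cache I have in mind; the key type and the capacity are just placeholders I made up, and LinkedHashMap's access-order mode gives a cheap LRU eviction policy:

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: memoize expected values for the most recently used states
// instead of the whole 64-trillion-state space. Key type and capacity
// are placeholders, not the final design.
public class ValueCache<K> extends LinkedHashMap<K, Double> {
  private final int capacity;

  public ValueCache(int capacity) {
    super(16, 0.75f, true);  // accessOrder=true gives LRU iteration order
    this.capacity = capacity;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, Double> eldest) {
    return size() > capacity;  // evict the least recently used entry
  }
}

Something like new ValueCache<Long>(50000000) would cap memory at whatever fits, at the cost of recomputing evicted states.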
Sunday, November 9, 2008
I officially dislike Blogger's formatting...
It is impossible to format code in Blogger without Blogger ripping it apart... is there a solution for this or do I need to move to WordPress?
Saturday, November 8, 2008
Study of the game Zilch part 1
You know you're a math nerd when you analyze a game to determine the best strategy. Be forewarned, the following post is really math-intensive; abandon all hope of avoiding nerdhood ye who enter here...
I have been introduced to a wonderful game called Zilch on Kongregate. If you want to lose hours of your life, go to Kongregate and check out all of their cool flash games, all of which you can play for free. :) The reason I wanted to talk about Zilch is that it is a great game that illustrates some of the interesting corner cases of game theory. I will be breaking up this analysis into several segments in order to discuss different aspects of the game, and the process one could go through in order to come up with a winning strategy for the game. This first segment will discuss the scoring and expected value of a hand, and will end with an analysis of an interesting situation in the game that has an unexpected strategy.
Let's start with a quick summary of the rules. Players play in turns and attempt to score points by rolling dice for a hand. Once one player reaches 10,000 points, the other player gets one more turn to try to beat the score of the player who crossed. Scoring for a hand works as follows: you roll 6 dice, and on every throw you have to score something and then either bank or roll again. You score based on the following rules:
- 1 one: 100 points
- 1 five: 50 points
- a set of three or more of a kind: 12.5*base*2^n, where n is the number of dice in the set and the base is the die's face value (or 10 for ones). For example, a set of 4 sixes scores 12.5*6*2^4 = 1200 points; 6 ones is the largest score in the game, worth 8000 points. (A quick sketch of this formula in code follows this list.)
- a long straight 1-6: 1500 points
- three pairs (the pairs need not be of distinct values): 1500 points
- nothing: if a roll of 6 dice results in no scorable dice, then it scores 500 points instead.
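Here's the quick sketch of the set formula I promised above; setScore is my own throwaway helper for illustration, not part of the scorer I use later:

// Score for n-of-a-kind, following the 12.5 * base * 2^n rule
// (base is the face value, or 10 for ones).
static int setScore(int face, int n) {
  int base = (face == 1) ? 10 : face;
  return (int) (12.5 * base * Math.pow(2, n));
}
// setScore(6, 3) -> 600, setScore(6, 4) -> 1200, setScore(1, 6) -> 8000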
After the first few playthroughs, some interesting strategies jump out. First, if you have a single die remaining and you can bank, it's usually a good idea to do so, since only a 1 or a 5 scores on one die and the probability of zilching in that position is 2/3. If you have two zilches against you, you want to bank at your first opportunity to avoid the zilch penalty. When scoring, it's not always best to score everything available to you, especially in the case of three twos. How daring you should be is really a function of your opponent's score relative to your own.
So what does it mean to know the perfect strategy for a game? A perfect strategy tells you, from any game position, the play that maximizes some quality of the game state. That quality is usually winning the game, but in the case of the online game there are also strategies for obtaining some of the achievement badges, such as winning a game without zilching, finishing a game in the fewest turns, or scoring over a certain value on every non-zilched play. At first I will analyze the basic question of best play to win the game.
Therefore, we need to start by defining the game state. Since for now we do not care how many turns it takes, whether the win is zilch-free, or any other quality of the game besides winning, a lot of historical values can be discarded. The remaining qualities of the game break down as follows (a rough sketch of such a state class follows the list):
- Whose turn
- Your score
- Opponent's score
- Your zilch run (number of consecutive zilches before this turn)
- Opponent's zilch run
- Current bank value
- Dice state
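Purely as an illustration of that list, here's a rough sketch of how those fields might be bundled into a class (the names are made up for the example):

// Rough sketch of the game state described above; field names are my own.
public class GameState {
  boolean myTurn;           // whose turn it is
  int myScore;              // your banked score
  int opponentScore;        // opponent's banked score
  int myZilchRun;           // consecutive zilches before this turn
  int opponentZilchRun;
  int bank;                 // points accumulated this turn, not yet banked
  int[] dice;               // dice still in play this throw
}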
I could approach this from an elegant mathematical angle, using clever reasoning to determine each step in the process. Instead I decided to brute force the numbers, and get the elegance later when summing up the results. My tool in this quest is Java. Later on I will see how much faster this would have been in a language like Haskell, where this kind of search is better structured.
I represent my dice as an integer array. I then write a function that evaluates a set of dice for the score you'd get if you took everything (no banking instituted yet). After writing some tests to assert that I was good, I then wrote a clumsy for loop to evaluate across all dice:
// Brute-force over all 6^6 = 46656 rolls of six dice.
// (Uses java.util.Map and java.util.TreeMap; calculateScore is the
// take-everything scoring function described above.)
int dice[] = new int[]{1,1,1,1,1,1};
int fullscore = 0;
Map<Integer,Integer> scorecount = new TreeMap<Integer,Integer>();
for(dice[0]=1; dice[0]<=6; dice[0]++)
  for(dice[1]=1; dice[1]<=6; dice[1]++)
    for(dice[2]=1; dice[2]<=6; dice[2]++)
      for(dice[3]=1; dice[3]<=6; dice[3]++)
        for(dice[4]=1; dice[4]<=6; dice[4]++)
          for(dice[5]=1; dice[5]<=6; dice[5]++){
            int sc = calculateScore(dice);
            // Skipping scores under 300 restricts the tallies to bankable
            // throws (the 341.78 figure below); drop this line to get the
            // stats over all throws (the 426.60 average and 250 median).
            if(sc < 300) continue;
            if(scorecount.containsKey(sc)){
              scorecount.put(sc, scorecount.get(sc)+1);
            }else scorecount.put(sc, 1);
            fullscore += sc;
          }
double avgscore = (double)fullscore/46656.0;
System.out.println("Average score:" + avgscore);
boolean medflag = false;
System.out.println("Breakdown:");
int ssf = 0;  // cumulative count of throws, walking the scores in order
for(int x : scorecount.keySet()){
  ssf += scorecount.get(x);
  // score : count : percent : cumulative count : cumulative percent
  System.out.print(x + ":\t" + scorecount.get(x) + "\t"
      + (double)scorecount.get(x)/466.56
      + "\t" + ssf
      + "\t" + (double)ssf/466.56);
  if(!medflag && ssf >= 23328){  // halfway through the 46656 rolls
    medflag = true;
    System.out.println("<=== Median");
  }else System.out.println("");
}
My goal here was to get the average and median scores from the first throw. This data would be useful later in understanding strategies and would allow me to develop the infrastructure needed to walk the state space of this game.
Results? You can execute the code yourself to get the score layouts. :) Here are the highlights. I get an average score from the first throw of 426.60. That's a lot of potential... what that tells me is that with average throws I should be able to score my 10,000 points in about 23 turns (10,000 / 426.60 ≈ 23.4). The median first throw is 250 points, and your chances of having a bankable score at the start are 44.58%. If I only count bankable throws, and take the worst-case scenario of zilching otherwise, I get an average score of 341.78.
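As a small aside of my own (this isn't in the code above), the 44.58% figure falls straight out of the scorecount map from the loop, since with the sc < 300 filter it only tallies bankable first throws:

// With the sc < 300 filter, scorecount only holds bankable throws, so the
// bankable probability is the total tally over the 6^6 = 46656 rolls.
int bankable = 0;
for (int count : scorecount.values()) {
  bankable += count;
}
System.out.println("P(bankable first throw) = " + (double) bankable / 46656.0);  // ~0.4458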
That told me a lot, but not everything I needed. I now wanted to evaluate the second and subsequent throws, meaning I'd have to evaluate with fewer dice. The idea of writing that horrendous for-loop structure again was scary, so I decided to go a bit more functional. And man, Java didn't make it easy. :D
In order to evaluate across all combinations of any number of dice, I decided to stay with my int[] strategy. I started by writing a couple of generic interfaces:
public class ZilchCalculations {
  // A calculation performed on one concrete roll of dice.
  public interface DiceCalculation<X>{
    public X evaluate(int[] dice);
  }
  // Combines per-roll results: ret wraps a single evaluation,
  // collate merges two partial results as the recursion unwinds.
  public interface TreeCollate<X,Y>{
    public X collate(X values, X nv);
    public X ret(Y st);
  }
The first interface lets me make a calculation across a set of dice. The second lets me collate those calculations across all dice combinations. Notice how I've abstracted the evaluation structure and the collation structure; essentially these interfaces act as containers for function objects, much like C# delegates. Next is the actual evaluation and collation:
public static <X,Y> X acrossAllDice(int numdice, DiceCalculation<Y> eval, TreeCollate<X,Y> coll){
  // Enumerate all 6^numdice combinations and fold the results together.
  // (Needs java.util.Arrays.)
  int[] dice = new int[numdice];
  Arrays.fill(dice, -1);
  return acrossDice(0, dice, eval, coll);
}
public static <X,Y> X acrossDice(int make, int[] dice, DiceCalculation<Y> eval, TreeCollate<X,Y> coll){
  if(make < dice.length){
    // Fix die 'make' to each face in turn, recurse on the remaining dice,
    // and collate the subtree results as we go.
    X res = null;
    for(int d=1; d<=6; d++){
      dice[make] = d;
      if(d==1) res = acrossDice(make+1, dice, eval, coll);
      else res = coll.collate(res, acrossDice(make+1, dice, eval, coll));
    }
    return res;
  }else return coll.ret(eval.evaluate(dice));  // leaf: all dice have values
}
This allows me to evaluate across all dice combinations for an arbitrary number of dice. The acrossAllDice function creates a holder array for the dice states, and the recursion updates the numbers in dice in order. When I reach a leaf of the tree, I call the evaluate function of the DiceCalculation to get my evaluation object, and then the TreeCollate's ret function to wrap that into a singleton collation object. It then continues to collate as it goes back up the tree. I can easily wrap my scoring function in an evaluator:
// Wrap the take-everything scorer as a DiceCalculation.
public static final DiceCalculation<Double> SCORE_CALC
    =new DiceCalculation<Double>(){
  public Double evaluate(int[] dice){ return (double)calculateScore(dice); }
};
and then my collation function for now will just sum the results:
// Sum results across all rolls.
public static final TreeCollate<Double,Double> SUM_UP
    =new TreeCollate<Double,Double>(){
  public Double collate(Double values, Double nv){
    return values + nv;
  }
  public Double ret(Double x){ return x; }
};
Now to get the average score for 6 dice? acrossAllDice(6,SCORE_CALC,SUM_UP)/Math.pow(6.0, 6.0). For 5 dice it's just acrossAllDice(5,SCORE_CALC,SUM_UP)/Math.pow(6.0, 5.0). That might seem like a lot of machinery for such a small change, so let me show the real power of this approach. Say I want to collect all the scores into a list for stat evaluation:
// Collect every roll's result into a list for further stats.
// (Needs java.util.List and java.util.ArrayList.)
public static final TreeCollate<List<Double>,Double> COLLECT
    =new TreeCollate<List<Double>,Double>(){
  public List<Double> collate(List<Double> values, List<Double> nv){
    values.addAll(nv);
    return values;
  }
  public List<Double> ret(Double x){
    // Must return a mutable list; Collections.singletonList would throw
    // an UnsupportedOperationException in the addAll above.
    List<Double> l = new ArrayList<Double>();
    l.add(x);
    return l;
  }
};
and change SUM_UP to COLLECT above to reap the benefits. Say I wanted to calculate the chances of zilching on a throw?
// 1.0 if the roll scores nothing (a zilch), 0.0 otherwise, so summing
// over all rolls and dividing by 6^n gives the zilch probability.
public static final DiceCalculation<Double> ZILCH_CALC
    =new DiceCalculation<Double>(){
  public Double evaluate(int[] dice){ return calculateScore(dice)>0 ? 0.0 : 1.0; }
};
}
and substitute ZILCH_CALC for SCORE_CALC. To get a tabular evaluation:
System.out.println("Number of dice\tAverage Score\tZilch Prob.");
for(int ndice=1;ndice<=6;ndice++){
System.out.println(ndice+"\t"
+(acrossAllDice(ndice,SCORE_CALC,SUM_UP)/Math.pow(6.0, ndice))+"\t"
+(acrossAllDice(ndice,ZILCH_CALC,SUM_UP)/Math.pow(6.0, ndice))+"\t");
}
which gives me the wonderful table:
Number of dice Average Score Zilch Prob.
1 25.0 0.6666666666666666
2 50.0 0.4444444444444444
3 86.80555555555556 0.2777777777777778
4 143.5185185185185 0.1574074074074074
5 225.7908950617284 0.07716049382716049
6 426.60108024691357 0.0
This table explains a great general rule that's initially counterintuitive. Say you're under two zilches and you rolled 2-2-2-5-3-4, a rather unpleasant roll. Do you score the 5, the three 2s, or both? If you look at the table, it makes sense to score only the 5. Scoring the 5 gives you an expected 225.79 additional points over the remaining 5 dice, so your expected total is 275.79 (still not that good), but your chance of zilching is only about 7.7%. If you take the three 2s, you have 200 points and your expected total is slightly higher (286.81), but you've nearly quadrupled your chance of zilching (to ~28%). If you take both, your expected total is 300, but your chance of zilching is now 4 in 9, an unhealthy prospect. It makes sense to reroll as much of this crappy roll as you can.
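Here's that comparison spelled out in a few lines of code, just plugging in the table values above by hand (so treat it as a back-of-the-envelope sketch, not part of the solver):

// avg[n] and zilch[n] come straight from the table above (index = dice left).
double[] avg   = {0.0, 25.0, 50.0, 86.81, 143.52, 225.79, 426.60};
double[] zilch = {0.0, 2.0/3.0, 4.0/9.0, 0.2778, 0.1574, 0.0772, 0.0};

// Take only the 5 (50 pts) and reroll five dice...
System.out.println("5 only:   " + (50 + avg[5]) + " expected, zilch " + zilch[5]);
// ...or only the three 2s (200 pts) and reroll three dice...
System.out.println("three 2s: " + (200 + avg[3]) + " expected, zilch " + zilch[3]);
// ...or both (250 pts) and reroll two dice.
System.out.println("both:     " + (250 + avg[2]) + " expected, zilch " + zilch[2]);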
However, this isn't an airtight argument for maximizing your score, only a good hunch based on some statistical calculations. The airtight argument will come from calculating across the entire space, and that will happen in the next installment.
Labels: Kongregate, math, probability, strategy, Zilch