Author Mason Malmuth
Synopsis of Blackjack Essays
Blackjack Essays by Mason Malmuth is designed to help the serious player beat the games in today's modern casino environment. The days when you could simply master a count and expect to be a winner are gone. As a result, winning at blackjack requires not only numerous skills, but also a lot of thinking about the game. Unlike most blackjack books, this text assumes that the reader already knows how to count cards, and it introduces techniques that should be useful to the successful player well into the 1990s and beyond. The topics covered are card domination -- more commonly known as shuffle tracking -- theoretical concepts, blackjack biases, current blackjack, mistaken ideas, supplemental strategies, playing in a casino, obsolete techniques, and front loading. In addition, advice is offered on gambler's ruin, the one-deck game, back counting, betting strategies, heads-up play, becoming a professional, casino preparation, first basing, and much more.
The book is designed to make the reader do a great deal of thinking about the game. In fact, very few readers will agree with everything the text offers, but the information provided should help most people become better players. (224 pages, plus a foreword by Arnold Snyder; ISBN #1-880685-05-1...$19.95)
Blackjack Essays Book Excerpt: Blackjack Biases: Part I
There's a controversy brewing in the blackjack world. Anyone who has followed the blackjack literature for the last couple of years knows about it, but resolving it is another matter. What's the controversy? It is the idea that systems based on non-random shuffles are a viable approach to beating the game of twenty-one.
Unfortunately, even though one can read about claims and counter-claims, very little theoretical information is available explaining exactly how these systems work. Also, most of the claims and counter-claims were made either by those individuals who sold these systems or by those who sold the more conventional systems. It seemed to me that what was needed was for someone who had no interest in system selling, who did not own a blackjack school, who was both knowledgeable and independent of the controversy, whose interest in blackjack was mainly theoretical, and who was not involved in the name-calling and mud-slinging that had taken place between many of the "experts" in this field to try to evaluate this approach to the game of twenty-one. Well, I decided to undertake this task. I thought I was a good candidate, especially since I had written on the subject of non-random shuffles before. What follows are the results and the conclusions that I obtained.
Basically, these systems claim that you can walk into a casino, and by matching certain characteristics to what is actually happening at the table, you can select those tables where the dealer is breaking more than she should, where you catch more than your share of 10s when you double down on a total of 11, and where all the other good things that can happen at a blackjack table are currently happening. Similarly, bad tables, which the skilled player will want to avoid, can also be identified. (By the way, the proponents of these non-random shuffle systems don't claim that it is easy to identify these tables, but they do claim that it can be done.)
First, let's define a shuffle to be random when the following two conditions occur.
- After the shuffle any card is equally likely to be located any place in the deck, no matter where it was located before the shuffle began.
- For any particular card in the deck after the shuffle, each of the remaining cards in the deck is equally likely to succeed it.
This second condition is very important. It is a measure of closeness, and specifically it says that if two cards are located near each other (or far away from each other) before the shuffle, there is no reason to believe that this relationship will hold true after the shuffle. Now, this is contradictory to the idea of card clumping when a deck is shuffled. Card clumping after the shuffle means that the cards which were near each other before the shuffle will still be near each other after the shuffle is completed, thus producing a non-random effect.
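Both conditions are easy to test empirically. Here is a minimal sketch (my own code, not from the book) using Python's built-in shuffle: condition one says a given card should land in every position about equally often, and condition two says a card that started directly behind another should be no more likely than any other card to end up directly behind it.

```python
import random
from collections import Counter

TRIALS = 51_000
positions = Counter()   # where card 0 lands after each shuffle
adjacent = 0            # how often card 1 still immediately follows card 0

for _ in range(TRIALS):
    deck = list(range(52))          # cards 0 and 1 start next to each other
    random.shuffle(deck)
    i = deck.index(0)
    positions[i] += 1
    if i < 51 and deck[i + 1] == 1:
        adjacent += 1

# Condition 1: every one of the 52 positions should be hit roughly equally.
print(f"positions hit: {len(positions)} of 52")
# Condition 2: card 1 should follow card 0 no more often than chance,
# which works out to about TRIALS/52 times.
print(f"card 1 followed card 0 in {adjacent} shuffles "
      f"(expected about {TRIALS // 52})")
```

If a shuffle showed a much higher adjacency count than chance predicts, that would be exactly the card clumping described below.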
Here's an example. In a single-deck game, suppose you knew that the 10 of spades and the five of hearts were near each other before the shuffle, and suppose you expected this relationship to hold after the shuffle. Then, if you had a total of 16 versus the dealer's 10 up, if the count was slightly positive, and if the 10 of spades had just been dealt, you would probably want to hit -- even though the count says not to -- since you know that the five of hearts is nearby.
An interesting theoretical concept is a perfect non-random shuffle. Now you would know the exact order of the deck, and there would be no need to count at all. This means that you would be able to play much better blackjack than even the computers that play "perfectly."
Is this the way these non-random shuffle systems work? Are they some valid way of obtaining this additional information, which is currently unobtainable through the conventional and accepted methods? Also, even if this information can be obtained, how can it be used?
I would argue that blackjack is a very non-self-weighting game. (See the essay titled "Self-Weighting" in Part Two.) This means that it is highly possible for the dealer to have long periods of time where she breaks more than expected and long periods of time where she always seems to draw out on the players. And these periods happen completely at random. This is worth repeating. These periods happen completely at random! But if additional information is known as to how the cards are interwoven, then it might just be possible to predict whether there is such a thing as a "dealer-breaking table."
Even though most of the literature I have seen in this area is vague, there is one concept I have run across that is very specific: the idea of not playing blackjack after a new deck (or decks) is brought into a game. The claim is that the initial shuffles create a bias that even the most highly skilled counters cannot overcome.
In his book The Theory of Blackjack, Peter Griffin also has some interesting comments on this subject. He says, "If the dealer performs a perfect shuffle of half the deck against the other half, then of course the resultant order is deterministic rather than random. Three perfect shuffles of a new deck give the head-on basic strategist about a 30 percent advantage (where the cards are cut is considered uniformly random), whereas five random shuffles give an edge of 25 percent in favor of the house."
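Griffin's point that perfect shuffles are deterministic is easy to demonstrate. Here is a sketch (my own code, not Griffin's) of a perfect "out" riffle, which splits the deck exactly in half and interleaves the halves one card at a time. After any number of such shuffles, the entire order of the deck is knowable in advance.

```python
def faro(deck):
    """Perfect out-shuffle: interleave the two halves exactly,
    keeping the original top card on top."""
    half = len(deck) // 2
    out = []
    for left_card, right_card in zip(deck[:half], deck[half:]):
        out.extend([left_card, right_card])
    return out

deck = list(range(52))
for _ in range(3):
    deck = faro(deck)

# Fully deterministic: after three out-shuffles, the card that started
# at position p now sits at position (8 * p) % 51, with card 51 fixed.
print(deck[:8])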
It seemed to me that if the idea of not playing against a new deck was valid, then it would be fairly easy to simulate on my computer, and this I set out to do. I wrote a program that worked as follows. First, a new deck was created. The cards in the deck, as those in a new Bee package (the brand most casinos use), were ordered ace to king, ace to king, king to ace, king to ace, except that I called all of the face cards "10" since this is their value in blackjack. Next, the deck was split into two stacks of 26 cards each. Then one of the two stacks as selected at random. Suppose it was the left one. Now a random number between one and four was selected. Suppose it was two. This meant that the top two cards of the left stack became the first two cards in the shuffled deck.
Next, the computer would switch to the right stack and again pick a random number between one and four. Suppose it was three. This meant that the third, fourth, and fifth cards of the shuffled deck were the first three cards of the right stack. Then the computer would switch back to the left stack, and the process would continue until one of the stacks was exhausted. When this happened, the remaining cards in the unexhausted stack became the rest of the shuffled deck.
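The two-paragraph shuffle procedure above can be sketched in code. This is my own reconstruction, not Malmuth's program: split the deck into two stacks of 26, pick a starting stack at random, and then alternately drop a random packet of one to four cards from each stack.

```python
import random

def clumped_riffle(deck):
    """Riffle as described in the text: alternate stacks, dropping a
    random packet of 1-4 cards each time, until one stack runs out."""
    half = len(deck) // 2
    stacks = [deck[:half], deck[half:]]
    side = random.randrange(2)             # starting stack chosen at random
    shuffled = []
    while stacks[0] and stacks[1]:
        take = random.randint(1, 4)        # packet of one to four cards
        shuffled.extend(stacks[side][:take])
        stacks[side] = stacks[side][take:]
        side = 1 - side                    # switch to the other stack
    # the remaining cards of the unexhausted stack finish the deck
    shuffled.extend(stacks[0] or stacks[1])
    return shuffled

deck = list(range(52))
shuffled = clumped_riffle(deck)
print(shuffled[:10])
```

Note that within each half the cards keep their relative order, which is exactly the clumping property discussed below.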
Notice that a non-random shuffle was produced. One blackjack "expert" has claimed that this is impossible to do. In fact, these claims of impossibility are often cited as "proof" that the experts' systems are correct since the mathematician cannot verify or reject them; in other words, it appears that the user must accept them on faith.
This non-random shuffle has the following two characteristics. First, cards that are close together in a stack before the shuffle remain close together afterwards, and second, a card that occurs in the stack before another card will occur in the shuffled deck before that card.
By the way, since it is so easy to program non-random shuffles--anyone with even a minimal knowledge of computer programming should know this--the reader should be skeptical of those who say that it is impossible to do so. Now this doesn't mean that these systems are not any good, but it does mean that the reader should be cautious before purchasing material of questionable worth. If this material were not worthless, why would these system sellers be negative toward the idea of verifying their methods?
Continuing with the program, four complete riffles were performed, and then the deck was cut by choosing a random number between 12 and 41. For example, suppose the number 20 was selected. Then the first 20 cards in the deck (after the four riffles) became the last 20 cards in the deck, just as if one of the players at the table had cut the cards.
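The cut step can be sketched the same way (again my own code, using the 12-to-41 range from the text):

```python
import random

def cut_deck(deck):
    """Cut as described: the top `point` cards move to the bottom."""
    point = random.randint(12, 41)   # cut point range from the text
    return deck[point:] + deck[:point]

print(cut_deck(list(range(52)))[:5])
```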
Now it was time to play blackjack. The player used perfect basic strategy, and the dealer had to abide by standard Las Vegas Strip rules. The top card was burned, and when the number of cards remaining became less than 25, a new deck was created and the above process was repeated.
In all honesty, even though my computer work was done in a totally objective manner, I thought I already knew what the results would be. I expected to (1) produce a clumping effect, and (2) play dead even with the house, which is the expectation for a basic strategy player against a single deck using Las Vegas Strip rules. I would then write that the impossible non-random shuffle, which was extremely easy to program, had no effect on the game. That is, I would say that the claims of the non-random shuffle system proponents were just a "bunch of baloney" and be done with it. Also, I would say that my unbiased approach clearly showed that these systems were worthless. But sometimes when one undertakes research into unexplored areas, results are obtained that are totally unexpected. That is what happened in this case. To my complete amazement, my computer did not break even. It consistently lost! I had managed to create a dealer bias, so it appears that there may be something to the warning not to play against a new deck or shoe. These results are given in the following table.
Rounds | Total Action | Profit | Win Rate (%)
Notice that instead of breaking even, the shuffle procedure that the program followed caused my computer to lose at a rate of slightly more than 1 percent. That is a significant amount! There was now no doubt about it--dealer biases can be produced. And if dealer biases can be produced, it seems logical that player biases can also be produced. The next step is to correlate these biases with something realistic that a player can identify.
One last comment, however. The idea that if a particular dealer has already broken a lot, you have a player bias, is flawed. If you inspect a lot of tables, the laws of probability (assuming all events do occur randomly) tell us that you eventually expect to find a table where things seem to be out of kilter. These "data points," known as "statistical outliers," are expected in large statistical samples. They don't mean that something funny is going on. In fact, if outliers did not exist, something funny would be going on. This type of faulty reasoning is commonly seen in lottery analysis. The fact that some number has been hit a lot does not mean that it is "hot." Remember, in a large sample of a lot of numbers, it is expected that some numbers will show up more times than you thought was likely, just as it is expected that some numbers will show up fewer times than you thought was likely. Don't let this faulty reasoning extend to hot and cold blackjack tables.
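The outlier point is easy to see in a quick simulation (my own sketch; the 28 percent dealer-break rate is an assumption for illustration, not a figure from the book). Every "table" below is perfectly fair, with an identical break probability, yet the extremes still look like hot and cold tables.

```python
import random

random.seed(1)
BREAK_PROB = 0.28        # assumed dealer-break rate, same at every table
TABLES, HANDS = 1000, 100

# number of dealer breaks at each of 1,000 identical, unbiased tables
breaks = [sum(random.random() < BREAK_PROB for _ in range(HANDS))
          for _ in range(TABLES)]

print(f"expected breaks per table: {BREAK_PROB * HANDS:.0f}")
print(f"most 'dealer-breaking' table: {max(breaks)} breaks")
print(f"coldest table: {min(breaks)} breaks")
```

With enough tables inspected, some will break well above or below expectation purely by chance, which is exactly why a table that has "already broken a lot" tells you nothing about its future.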