Blackjack bankroll management for card counters

cyph

Trusted Member
COUNTER'S BANKROLL
Two bosses were once watching an aggressive black-chip player. One boss wanted to back the player off because there was clearly an indication of skill. The other boss disagreed, commenting, "This guy is a waiter across the street; he doesn't have the bankroll to play this high." He believed that the player was outright gambling and treading on very dangerous ground.

From an industry standpoint, we must assume that every player is adequately financed; after all, we never really know how much money is behind any player. Many play to a pocket bankroll. When they lose, they go back to work for a couple of weeks, and they're right back again. It's one big game in this respect. For these players, their bankroll may turn out to be future income.

Determining how much money is needed to be a successful player is not an industry issue—we know that our bankroll is sufficient—but it is an issue for the player. In fact, it's a vital skill that must be mastered before he can be a contender. For completeness' sake, here are some of the basics.

Unit Size is the Key

Bankroll requirements are usually discussed in terms of unit size. When Thorp first went out to test his famous system, he believed that $6,000 to $7,000 was adequate capital to bet $50 to $500. Revere suggested 120 units and playing until one had won or lost 30 units, or one had played for an hour, whichever came first. Humble suggested 50 top bets for true count wagering, and 100 top bets for betting according to the running count. Snyder likes to recommend a bankroll size that will fade two and three standard deviations of negative fluctuation. There are no hard and fast rules.

The bankroll can be any amount of money a player or team can afford. A team may start with a certain bank, say $50,000, and from there they will determine unit size based on some acceptable element of ruin. This is defined as the chance of losing everything before they double the bank. A mathematical formula might calculate that a 200-unit bankroll has a 10% chance of ruin; with 400 units, the element of ruin might only be 1%. Each player/team finds their own comfort zone.
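The "element of ruin" numbers above can be approximated with the classic gambler's-ruin formula. The sketch below is an illustration I'm adding, not part of the original discussion: it models play as flat, even-money bets with an assumed 0.5% edge per hand (both the edge value and the flat-bet model are assumptions), so the percentages it prints are rough guides, not blackjack-exact figures.

```python
def ruin_probability(units, edge=0.005):
    """Chance of losing a bankroll of `units` flat bets before doubling it.

    Models play as even-money coin flips with win probability 0.5 + edge
    (classic gambler's-ruin formula). Blackjack's pushes, doubles, and
    splits are ignored, so treat the output as a rough guide only.
    """
    p = 0.5 + edge
    r = (1 - p) / p                          # loss/win probability ratio
    # P(reach 2*units before 0, starting from units)
    p_double = (1 - r ** units) / (1 - r ** (2 * units))
    return 1 - p_double

for units in (100, 200, 400):
    print(f"{units:4d} units -> {ruin_probability(units):7.3%} chance of ruin")
```

As expected, doubling the number of units cuts the chance of ruin dramatically, which is exactly the trade-off each player or team weighs when picking a comfort zone.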

Kelly Criterion

With any discussion of unit size come the ubiquitous references to the Kelly Criterion. John Kelly proved mathematically in 1956 that in any even-money proposition where the player always has an advantage, the fastest way to make money is to bet a proportion of your bankroll equal to your advantage, called the "optimal bet size" (more formally, the optimal bet size is the player's advantage times his bankroll divided by the variance). To keep it simple, assuming a $10,000 bankroll and a 1% advantage, the player bets $100. As the bankroll grows, say to $15,000, the same 1% advantage calls for a $150 bet. If a portion of the bankroll is lost, say $2,000, a 1% advantage indicates an $80 wager (.01 x $8,000). The player is always betting in proportion to his money.
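The proportional rule reduces to a one-line formula. A minimal sketch (the `kelly_bet` name and the default `variance=1.0` are my illustrative choices, matching the simple even-money examples above):

```python
def kelly_bet(bankroll, advantage, variance=1.0):
    """Full-Kelly bet: advantage * bankroll / variance.

    The simple examples in the text implicitly take variance = 1 (a pure
    even-money model); blackjack's true per-hand variance is higher
    (roughly 1.3), which would shrink the bet accordingly.
    """
    return advantage * bankroll / variance

print(kelly_bet(10_000, 0.01))  # 100.0
print(kelly_bet(15_000, 0.01))  # 150.0
print(kelly_bet(8_000, 0.01))   # 80.0
```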

What is truly fascinating about Kelly betting occurs when the player overbets. If the player bets twice the optimal amount, his bankroll will go up and down indefinitely without growing; if he bets more than twice the optimal amount, believe it or not, he's more likely to lose everything, even with the best of it. It's easy to go broke by overbetting.
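The overbetting claim can be checked with the expected log-growth of the bankroll per bet (a standard Kelly-theory quantity; the code below is my illustration, using an even-money model where a 1% edge makes a 1% bankroll fraction the optimal bet):

```python
import math

def log_growth(f, advantage=0.01):
    """Expected log-growth of the bankroll per bet when wagering a
    fraction f of it on an even-money proposition with the given edge."""
    p = 0.5 + advantage / 2        # win probability that yields this edge
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

adv = 0.01
for mult in (1, 2, 3):
    print(f"{mult}x optimal (f = {mult * adv:.2f}): "
          f"growth per bet = {log_growth(mult * adv, adv):+.7f}")
```

At the optimal fraction, growth is positive; at exactly twice optimal it is essentially zero (the bankroll drifts without growing); beyond twice optimal it turns negative, which is the mathematical face of "going broke by overbetting."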

It's important to note that the Kelly Criterion wasn't devised with blackjack in mind. In blackjack, it's difficult to recalculate your bankroll after every bet, and there are risk considerations with the additional money bet on doubles and splits. Blackjack is not truly an even-money proposition, and, unless you're back-counting, how does Kelly deal with those hands when the player is at a disadvantage?

Despite these issues, the essence of Kelly betting remains a central theme in bankroll management, and the concept of optimal bet size is a consideration with all pros. The most common way to incorporate Kelly is for teams to reassess the bankroll at various plateaus. A common practice is to reassess bet size after halving a bank (losing half the bankroll). With these adjustments, they can reduce their element of ruin significantly.

In the end, it often comes down to time versus money. If a bet-to-bank ratio is too small, it could take forever to double a bank; if the ratio is too high, it could be financial suicide. The goal of any serious player is to win the money without being detected as a skilled player, and to maximize the win rate while protecting his bankroll. It’s a balancing act for most, no problem for the pros.

PSEUDO COUNTING SYSTEMS

The industry is occasionally exposed to nontraditional count systems, best described as pseudo card counting systems since these systems attempt to parallel the card counting science by evaluating the remaining cards in nontraditional ways. These systems are often pitched as easier than card counting (which they are), but they never come with any scientific support, outside of irrelevant charts and tables. These systems attract the less sophisticated players, and you'll find them everywhere, regularly sold in newspapers, magazines, and on the Internet. If you have ever questioned the approach or merit of these systems, here's a quick overview of the subject.

The majority of these systems share a common theme: they look for indicators, defined as some event or result that signals a favorable player opportunity.

For example, consider a system for single-deck, heads-up play that has the player counting the total number of cards played in each round. The system instructs the player to increase his bet any time a single round uses eight or more cards. Since the average number of cards needed to complete a single round under these conditions is about 5.4 cards (about 2.7 cards per hand), the extra cards suggest a round rich in small cards, and, therefore, an increase in bet size is probably in the right direction. But just the opposite may also be true. A round comprised of eight cards may also include the player splitting aces, catching 10-10, and the dealer hitting 8-7-A-10—not much of a sign of good things to come. Indicators along these lines are generally weak for spotting player advantages, and get worse when the information is carried from round to round.
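The weakness is easy to see in code. This sketch (the `bet_signal` helper and its threshold parameter are illustrative names of mine, not part of any published system) shows two very different eight-card rounds producing the identical signal:

```python
def bet_signal(cards_used, threshold=8):
    """The pseudo system's rule: raise the bet after any round that
    consumed `threshold` or more cards (an average heads-up single-deck
    round uses about 5.4)."""
    return "raise" if cards_used >= threshold else "flat"

# Two eight-card rounds with opposite implications, per the text:
small_card_round = ["2", "5", "3", "6", "4", "2", "5", "3"]   # small cards out
mixed_round = ["A", "A", "10", "10", "8", "7", "A", "10"]     # big cards out too
print(bet_signal(len(small_card_round)))  # raise
print(bet_signal(len(mixed_round)))       # raise -- same signal either way
```

Counting only how many cards came out, rather than which cards, throws away exactly the information a real count system keeps.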

Other systems like to use previous results as an indicator, contending that the probability of winning is higher following certain events such as a win, loss, or push. An important development in this area goes back to 1978. At the Fourth Gambling Conference in Reno, Nevada, John Gwynn and Armand Seri presented a paper showing what happens to the player's chances of winning following a win, loss, or push. The study was titled "Experimental Comparisons of Blackjack Betting Systems".

A thirty-million-hand simulation indicated that player expectation goes up by 0.1% after a loss. This is logical, as losses are generally the result of extra small cards playing unfavorably. For example, the dealer doesn't bust, the dealer makes a four-card 21, the player catches a small card on a double down, and so on, so the removal of small cards helps the player's subsequent chances of winning. The study also indicated that player expectation goes down by about 0.1% following a win. This, too, makes sense, as winning hands are generally the result of big cards (blackjacks and twenties), and the removal of such cards hurts your subsequent chances of winning. But we are only talking about a swing in advantage of 0.1%, simply not enough to form the basis for a winning strategy.

Stanford Wong tested the premise further by looking at the probability of winning following the results from the previous two hands. The nine possible permutations of win, lose, and push were tested via a twenty-million-hand simulation. All permutations produced similar results: a 43.2% chance of winning, a 47.8% chance of losing, and an 8.9% chance of pushing. It's interesting how the 0.1% swing in advantage concluded in the Gwynn/Seri study seems to have evaporated when the indicator was comprised of two consecutive results.
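The tallying method behind such studies can be sketched with a toy model. The stream below is drawn i.i.d. from Wong's overall frequencies rather than from an actual blackjack simulator (that substitution is my assumption, made purely to show the bookkeeping); because independent outcomes carry no memory, every two-hand permutation should show roughly the same 43.2% win rate, which is what Wong's real simulation found:

```python
import random
from collections import defaultdict

random.seed(1)

# Toy outcome stream drawn i.i.d. from Wong's overall frequencies
# (win 43.2%, lose 47.8%, push 8.9%); a real test would replace this
# with a full blackjack simulator.
results = random.choices("WLP", weights=[43.2, 47.8, 8.9], k=200_000)

wins, totals = defaultdict(int), defaultdict(int)
for prev2, prev1, nxt in zip(results, results[1:], results[2:]):
    key = prev2 + prev1            # result of the previous two hands
    totals[key] += 1
    wins[key] += (nxt == "W")

for key in sorted(totals):
    print(f"after {key}: P(win next) = {wins[key] / totals[key]:.3f}")
```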

Along the same lines, streaks are also cited as the basis for many systems. In Basic Blackjack Betting (1980), Charles Einstein introduced his rhythm system: bet small until a win, then bet big until a loss. He reasoned that blackjack is streakier than other games, and that once in a rich deck, the deck tended to stay rich; once in a poor deck, it tended to stay poor. This is not the case. For a deck to get richer, it must continue to use up small cards; in any rich deck you are always more likely to see big cards, evening out the composition. The Gwynn/Seri simulation is in agreement: one should expect better results after a loss, not after a win, but not by enough to matter.
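The rhythm rule itself is simple to state in code. A sketch of my own (the bet sizes are arbitrary, and I've assumed a push resets to the small bet, since the book's handling of pushes isn't described here):

```python
def rhythm_bets(results, small=1, big=4):
    """Einstein's rhythm system: bet small until a win, then big until
    a loss. Results are 'W', 'L', or 'P'; a push is assumed to reset
    to the small bet (an assumption, not from the book)."""
    bets, next_bet = [], small
    for r in results:
        bets.append(next_bet)
        next_bet = big if r == "W" else small
    return bets

print(rhythm_bets(["L", "W", "W", "L", "W"]))  # [1, 1, 4, 4, 1]
```

Note that the rule only ever reacts to results, never to deck composition, which is why the simulations discussed above give it no edge.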

Most of these systems were dissected and discarded years ago, thanks to extensive computer simulations. Virtually every definable occurrence has been tested to see what happens to the player's probability of winning the next hand. To date, no evidence of any kind suggests an alternative, comparable, or superior system for playing blackjack as compared to traditional card counting strategies.
 