History Of Handicapping, Part III: USGA Leads The Way October 18, 2011 By Hunki Yun, USGA

The USGA's Leighton Calkins is considered the pioneer of handicapping in the United States. (USGA Museum)

This is the third installment of a four-part series reviewing the history of handicapping. This month, the USGA celebrates the 100th anniversary of the Handicap System with a centennial dinner at Baltusrol Golf Club in Springfield, N.J., the site where the system was officially adopted in 1911.

By the dawn of the 20th century, the rapid growth of golf – on both sides of the Atlantic Ocean – outstripped the capabilities of existing handicapping methods, which lacked a centralized system that offered the same standards for all golfers. The biggest source of inequality was the lack of a uniform procedure for determining par, bogey or scratch scores.

A letter to the editor in a British newspaper in 1898 summed up the frustrations of the club golfer: “In the absence of any authoritative legislation on the subject, most golf clubs have made Bogey laws for themselves. The net result is hopeless confusion, and it seems time that some attempt was made to indicate the lines on which the laws for Bogey should be laid down.”

In Great Britain and Ireland, the various governing bodies attempted to make handicapping more uniform and widespread, but their efforts hardly covered all courses and golfers. Whether you were part of an equitable handicapping system often depended on where you lived and your gender.

Working with a smaller pool of courses and golfers, the Ladies Golf Union (LGU) achieved early success in standardizing handicaps, largely due to the efforts of Issette Pearson. In the 1890s, she assigned course ratings to member courses instead of relying on them to determine their own standards. 

“No doubt it was uphill work at the start,” wrote Robert Browning in A History of Golf, “but within eight or ten years the LGU had done what the men had signally failed to do – established a system of handicapping that was reasonably reliable from club to club.”

As the game made the leap across the ocean to the United States, so did handicapping – both the good and the bad. But unlike Great Britain and Ireland, America had a single central golf authority. And after years of study and experimentation, the USGA adopted the first nationwide handicap format at a meeting on October 11, 1911, at Baltusrol Golf Club in Springfield, N.J.

Much of the credit for developing the USGA Handicap System goes to Leighton Calkins, a member of the USGA Executive Committee and the pioneer of handicapping in the United States. In March 1905, Calkins introduced his methodology, which adapted the British system of averaging the three best scores, in a work titled A System for Club Handicapping.

Before taking the system to the USGA, Calkins tested his ideas, first at Plainfield Country Club in Edison, N.J., then on wider scales with the Metropolitan Golf Association and the New Jersey State Golf Association. Calkins was the chairman of the Handicap Committee for both organizations.

By using the three best scores of the season as the basis for determining handicaps, the USGA made clear from the start that its Handicap System would measure potential rather than typical performance. Since better players are much more consistent and have a smaller range of scores than high handicappers, they have a much better chance of matching their best rounds.

As Calkins wrote: “The principal feature of this system is that not only is the good player handicapped because he is a good player, but the bad player is also handicapped because he is a bad player.

“The reason is this: The object of handicapping is to put all players on the same level, and if an allowance of a certain number of strokes is to be made to the less skillful player because he cannot play as well, some allowance ought to be made to the more skillful player because he cannot improve as much.”

It appears Calkins was responding to the disadvantages that better players were enduring at the time. “It is fairly well proved by actual results in handicap events,” he wrote, “that the scratch player and the player with a low handicap has not, under the usual methods of handicapping, as good a chance to win as the player with the high handicap.”

Calkins introduced several visionary concepts that survive to this day. He was adamant in insisting that each club “have a Handicap Committee which is willing to work.” Calkins also introduced the concept of a par rating, which later became known as the USGA Course Rating, the baseline from which all players would receive strokes. When established in 1912, the par rating was based not on a theoretical standard but the ability of an actual player, Jerome “Jerry” Travers, who won four U.S. Amateurs (1907, 1908, 1912, 1913) and the 1915 U.S. Open.

Now as then, the USGA uses a player’s Handicap Index to determine eligibility for championships. Introduced in 1912 and encompassing 324 member clubs, the first handicap list identified the players who were eligible to enter the U.S. Amateur, which required a handicap of 6 or better. (The current maximum Handicap Index for entry into the U.S. Amateur is 2.4.)

At first, the USGA allowed clubs to set their own par or course ratings, a decision that Calkins protested vehemently, calling the practice “useless” and a “farce.” The USGA soon changed its methodology, and the golf associations began issuing official Ratings, establishing a system that is still in place today. Without an accurate USGA Course Rating, it would be impossible to determine accurate handicaps.

More than a decade later, in 1924, the British and Irish golf unions formed the British Golf Unions Joint Advisory Committee, which developed a uniform, definitive system of handicapping and course rating in Great Britain and Ireland. Later known as the Council of National Golf Unions, the committee took over the responsibility of assigning handicaps to players and Standard Scratch Scores to courses.

Improving the System

As the overseas golf unions were launching their course-rating systems, the USGA was refining the calculation of handicaps and Course Ratings, the backbone of the USGA Handicap System. During the first several decades of the USGA Handicap System, the improvements came from various regional golf associations, which had a close working relationship with courses and players. 

The Massachusetts Golf Association, for example, recommended using decimals to make Course Ratings more accurate. Similarly, the Chicago District Golf Association adopted a fractional rating method.

Suggestions like these, from coast to coast, were eventually incorporated into the Course Rating system in place today. For years, while ratings for individual holes were determined to within a tenth of a stroke, the overall rating for the course was rounded to a whole number. In 1967, the USGA began to issue Course Ratings in decimals, the way they are still presented today.

There have been alterations to other aspects of the Handicap System. In 1947, the USGA increased the number of scores used to determine handicaps, from the three lowest scores to the 10 best rounds ever – with a minimum of 50 scores needed to obtain a handicap. The change was a welcome one for average players, who now had a better chance of playing to their handicaps.

Unfortunately, that increase triggered a confusing landscape, as regional golf associations could not agree on the number of rounds from which to take the 10 best scores for handicap purposes. During the middle of the century, Tom McMahon of the Chicago District Golf Association wanted to count 10 of 15 scores. Richard Tufts, later the president of the USGA, called for using 10 of 50 scores.

For a while, both systems, in addition to a third method introduced for women, were sanctioned by the USGA, causing further chaos. The USGA finally ended the confusion in 1958 with a compromise that computed handicaps using the best 10 of 25 scores. In 1967, the USGA reduced the requirement to 10 of the last 20 scores, a formula that remains operative today.

There were other sources of perplexity over the years, including handicap allowances that varied according to whether players were competing at stroke or match play. While players could always count on their full allotment of strokes in stroke play, in match play they could receive anywhere from two-thirds to 85 percent of their allowance, depending on whether they were playing singles or four-ball. These allowances also shifted over the years as the standards of the day evolved.

When it comes to adjusting numbers, the USGA instituted other variables that have remained in the System to the present. One is the little-known “Bonus for Excellence” multiplier that determines a player’s final Handicap Index by taking 96 percent of the actual figure. This percentage rewards better golfers by giving them a slight edge over higher-handicap players in matches. (When first instituted, the bonus was 85 percent, which was deemed too advantageous for low-handicap players.)
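The arithmetic described above – averaging a player's best recent scores and applying the 96 percent "Bonus for Excellence" multiplier – can be sketched in a few lines of Python. This is a simplified illustration of the calculation as the article describes it, not the USGA's official formula; real Handicap Index computation also involves Course Rating and Slope adjustments that are omitted here.

```python
def handicap_index(differentials):
    """Simplified sketch of the Handicap Index arithmetic:
    average the best 10 of the last 20 score differentials
    (the 10-of-20 formula adopted in 1967), then apply the
    96% 'Bonus for Excellence' multiplier. The official USGA
    computation includes further adjustments not shown here."""
    if len(differentials) < 20:
        raise ValueError("need at least 20 scores")
    last_20 = differentials[-20:]       # most recent 20 rounds
    best_10 = sorted(last_20)[:10]      # lowest 10 differentials
    return round(0.96 * sum(best_10) / 10, 1)
```

Because only the 10 lowest of the last 20 scores count, a run of bad rounds leaves the index untouched, which is exactly why the system measures potential rather than average play.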

Another is Equitable Stroke Control, which sets the maximum number a player can post on any hole depending on the player’s Course Handicap. For years, the USGA opposed stroke control; in 1966, USGA Executive Director Joe Dey explained the USGA’s stance. Among his thoughts, he argued that stroke control violated the Rules of Golf, discriminated against players with high handicaps and artificially lowered handicaps. 

In the end, math won out. Equitable Stroke Control was instituted in 1974, and has remained in force ever since – with modifications in 1991 and 1998 that adjusted the maximum number of strokes a player may post on a hole depending on that player's Course Handicap.
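Equitable Stroke Control can be sketched as a simple lookup. The bands below reflect the commonly published post-1998 revision (a maximum of double bogey for Course Handicaps of 9 or less, then flat caps of 7, 8, 9 and 10 for successive bands); treat the exact thresholds as an assumption for illustration rather than a definitive statement of the Rules.

```python
def esc_max(course_handicap, par):
    """Maximum score a player may post on a hole under
    Equitable Stroke Control. Bands follow the post-1998
    revision; thresholds are assumed for illustration."""
    if course_handicap <= 9:
        return par + 2        # double bogey
    elif course_handicap <= 19:
        return 7
    elif course_handicap <= 29:
        return 8
    elif course_handicap <= 39:
        return 9
    else:
        return 10

def adjust_hole_scores(scores, pars, course_handicap):
    """Cap each hole score at the player's ESC maximum
    before the round is posted for handicap purposes."""
    return [min(score, esc_max(course_handicap, par))
            for score, par in zip(scores, pars)]
```

Capping disaster holes this way keeps one blow-up from distorting a player's record – the very outcome Dey feared would "artificially lower" handicaps, and the trade-off the USGA ultimately accepted in 1974.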
