
As an agronomist, I am curious about what well-performing putting greens have in common. Do good putting greens share certain traits, or are there many different paths to the same destination? How does performance fluctuate during a year, or from one year to the next? When I ask these questions in the field, I find that few golf courses collect and consolidate the information about putting green performance and management inputs that would allow them to provide definitive answers.

In 2018, USGA agronomist Addison Barden and I embarked on a project with six different golf courses to answer these questions by collecting daily putting green management information. Through this process of data collection and analysis, we hoped the participating golf course superintendents would use this newly accumulated information to make decisions that would smooth out the peaks and valleys in putting green performance and optimize the allocation of resources in managing their putting greens. This article will share a few details about the project, what we learned, and how you might use data collection to improve management at your golf course.


The Project

When we embarked on this putting green surface management data collection project, we wanted to make data collection as efficient as possible by measuring only the variables that would provide the most helpful information. We chose to keep the project simple and asked for measurements to be taken on only one putting green for each course. We asked the superintendents to select an average putting green and avoid their best or worst putting greens.

Step 1: What to Measure

Our first decision was to determine how we were going to measure performance. Ultimately, we identified green speed and clipping volume as our key performance indicators. Each day, superintendents measured green speed from the exact same location on the same putting green and collected the clippings from that green to record their volume.

Next, we identified the variables we thought contributed most to those performance indicators. In other words, we had to decide which inputs and practices contributed most to green speed and clipping volume. We created a spreadsheet for each course and asked the superintendents to enter information every day about 12 distinct items that fit under the broad categories of key performance indicators, cultural inputs and conditions, and surface maintenance practices. A sketch of one possible spreadsheet layout follows the lists below.

Key Performance Indicators

  • Green speed
  • Clipping volume

Cultural Inputs and Conditions

  • Nitrogen applications and rates
  • Topdressing applications and rates
  • Plant growth regulator applications and rates
  • Temperature – daily high and low

Surface Maintenance Practices

  • Mowing height
  • Mowing frequency
  • Vertical mowing
  • Grooming
  • Brushing
  • Rolling

Step 2: Visualize the Data with Graphs and Tables

As data was collected, we created simple graphs and tables that showed the key performance indicators over time and summarized the frequency and quantity of maintenance practices and inputs. This exercise proved to be helpful but challenging. The graphs were helpful because they showed the data in a form other than numbers on a spreadsheet. Expressing the information in graph form proved to be challenging because there is an almost unlimited number of graphs and tables that can be created. Along the way, we received helpful feedback from participating superintendents on what they found most useful and adjusted the data presentation accordingly.
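As a simple illustration of the kind of chart we relied on, the sketch below plots daily green speed and clipping volume over time. It assumes a log file with the hypothetical column names from the earlier sketch.

```python
# A minimal plotting sketch. The file name and column names are the
# hypothetical ones from the earlier log example.
import matplotlib.pyplot as plt
import pandas as pd

log = pd.read_csv("putting_green_log.csv", parse_dates=["date"]).sort_values("date")

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(8, 6))

# Daily green speed with a 7-day rolling average to show the trend
ax1.plot(log["date"], log["green_speed_ft"], label="Daily green speed")
ax1.plot(log["date"], log["green_speed_ft"].rolling(7, min_periods=1).mean(),
         label="7-day average")
ax1.set_ylabel("Green speed (ft)")
ax1.legend()

# Daily clipping volume
ax2.plot(log["date"], log["clipping_volume_l"], color="tab:green")
ax2.set_ylabel("Clipping volume (L)")
ax2.set_xlabel("Date")

fig.tight_layout()
plt.show()
```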

Step 3: Statistical Analysis

The participating courses were collecting data, but how did we know if we were measuring the variables that accounted for daily fluctuations in those key performance indicators? The answer is that we didn’t know, but we could see if we were on the right track by using a multivariable regression analysis.

Dr. Andy Tiger, a professor at Angelo State University, conducted a multivariable regression analysis to see how well the variables we chose to measure explained the variations in the key performance indicators. The analysis of green speed revealed coefficient of multiple determination values (R² values) of 70%, 90%, 44%, 97%, 65% and 72% at the six courses. The closer an R² value is to 100%, the more confidence there is that the variables measured explained the variability in the key performance indicators. All the values except the 44% R² value showed a strong predictive relationship. Interestingly, we observed that the course with the 44% R² value made the most day-to-day adjustments in their inputs and practices. Overall, we felt confident that we were measuring the inputs and practices that had the most explanatory effect on the key performance indicators.
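For readers curious how an R² value like this is obtained, here is a minimal sketch of an ordinary least squares fit in Python. It uses the hypothetical file and column names from the earlier sketches and is not the exact analysis Dr. Tiger performed.

```python
# A sketch of a multivariable regression for green speed using statsmodels.
# File and column names are the hypothetical ones from the earlier sketches.
import pandas as pd
import statsmodels.api as sm

log = pd.read_csv("putting_green_log.csv", parse_dates=["date"])

predictors = [
    "nitrogen_lbs_per_1000sqft", "topdressing_rate", "pgr_rate",
    "temp_high_f", "temp_low_f",
    "mow_height_in", "mow_frequency", "vertical_mowing",
    "grooming", "brushing", "rolling",
]

X = sm.add_constant(log[predictors])
y = log["green_speed_ft"]

model = sm.OLS(y, X, missing="drop").fit()

# Coefficient of multiple determination, reported as a percentage above
print(f"R-squared: {model.rsquared:.0%}")

# Per-variable coefficients and p-values, useful for the significance
# checks described in the next paragraphs
print(model.summary())
```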

Another part of the analysis was to assess whether an individual variable, such as mowing frequency or topdressing, had a statistically significant relationship with the key performance indicators – i.e., a relationship unlikely to be due to chance. If a variable was significant, we wanted to determine the relative influence it had on the key performance indicators.

Our results in this analysis varied from site to site. Factors such as mowing frequency and temperature were always significant and were major contributors to performance. This was not surprising. However, it has proven difficult to quantify the relative impact of inputs such as growth regulators and nitrogen because their effect lasts multiple days or weeks and does not necessarily appear on the day they are applied. Further, the impact of these inputs may vary from day to day during that window of time.
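One possible way to represent these lingering effects in a data set like ours is to add lagged or rolling versions of the inputs as extra columns. The sketch below shows the idea using the hypothetical column names from earlier; the window lengths are arbitrary choices for illustration, not agronomic recommendations.

```python
# A sketch of rolling "carryover" features for inputs whose effect spans
# multiple days. Column names are the hypothetical ones used earlier and
# the window lengths are arbitrary illustrations.
import pandas as pd

log = pd.read_csv("putting_green_log.csv", parse_dates=["date"]).sort_values("date")

# Total nitrogen applied over the previous 7 days
log["nitrogen_last_7d"] = (
    log["nitrogen_lbs_per_1000sqft"].rolling(window=7, min_periods=1).sum()
)

# Total plant growth regulator applied over the previous 14 days
log["pgr_last_14d"] = log["pgr_rate"].rolling(window=14, min_periods=1).sum()

# These derived columns could then be added to the predictor list in the
# regression sketch above to test whether they improve the fit.
```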

As we refine the analysis, it may be possible to develop predictive modeling for green speed and clipping volume for one, two or three days into the future after various practices or applications. Think of it this way – if we are measuring the variables that contribute the most to the key performance indicators, and if there is a large enough data set – e.g., a year or more – to assess their relative impact, we may be able to develop a predictive model that allows superintendents to test different combinations of inputs and maintenance practices in the model before implementing them in the field. If the model proves to be accurate, this will offer superintendents an opportunity to be both more efficient and effective in reaching their goals.
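To make the idea of testing combinations in a model concrete, here is a minimal sketch of comparing two hypothetical plans with the fitted model from the regression sketch above. The scenario values are made up for illustration, and as noted above, a model like this would need a large data set behind it before its predictions are worth trusting.

```python
# A sketch of "what-if" scenario testing. 'model' and 'predictors' are assumed
# to come from the regression sketch above; the scenario values are made up.
import pandas as pd
import statsmodels.api as sm

scenarios = pd.DataFrame(
    [
        {"nitrogen_lbs_per_1000sqft": 0.0, "topdressing_rate": 0.0, "pgr_rate": 0.0,
         "temp_high_f": 85, "temp_low_f": 64,
         "mow_height_in": 0.120, "mow_frequency": 1, "vertical_mowing": 0,
         "grooming": 0, "brushing": 0, "rolling": 0},   # mow only
        {"nitrogen_lbs_per_1000sqft": 0.0, "topdressing_rate": 0.0, "pgr_rate": 0.0,
         "temp_high_f": 85, "temp_low_f": 64,
         "mow_height_in": 0.120, "mow_frequency": 1, "vertical_mowing": 0,
         "grooming": 0, "brushing": 0, "rolling": 1},   # mow and roll
    ],
    index=["mow only", "mow and roll"],
)

# Predicted green speed for each plan, using the same column order as the fit
X_new = sm.add_constant(scenarios[predictors], has_constant="add")
print(model.predict(X_new))
```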


What We Learned

Below are six items we learned throughout this project:

1. Data collection was not overly burdensome.

We had 100% participation for one year’s worth of data collection at all but one of the courses. That course had a change in superintendents during the year and its data collection was disrupted. Participating superintendents said that their teams made data collection part of the daily routine. They usually assigned the logging of data into the spreadsheet to an assistant superintendent and had a backup for days when the assistant was not working.