
Rusty Acree got a call from Bill Carollo this past year, asking him to provide three years of Power Five (P5) replay data from the QwikRef system — a software program for evaluating football and basketball officials. Acree, president and founder of QwikRef, was heartened. The software he’d built over the years was initially designed to evaluate and analyze the performance of officials, but it had now crossed a new threshold: he was being asked to provide data to help evaluate the effectiveness of collaborative replay in NCAA Division I college football.

Millions of dollars have been spent by the P5 conferences building command centers for college football replay. Carollo, coordinator of football officials for the Big Ten, wanted replay trend data from the previous three years to help evaluate the cost effectiveness of those command centers. With it, he’d be armed to assess the effect collaborative replay might have in the Big Ten, as well as the impact it has already had in the other P5 conferences currently using it.

Within a few hours, Acree sent Carollo the information, which showed replay results and review times all trending in the right direction over the past three seasons. “That information is now available for use by the NCAA Football Rules Committee as well as for decision-makers at the P5 conferences related to collaborative replay and its associated cost,” Acree explained.


These types of trends are an outgrowth of the data QwikRef has accumulated since its inception some 20 years ago. Originally envisioned as a program to eradicate paperwork and help with basic evaluations of football officials, it’s evolved into much more. “We can now identify and analyze performance trends by both onfield and replay officials, as well as the respective impacts the conference command centers and collaborative replay are having on the game,” Acree said.

“Aided by collaborative replay and more effective evaluation of onfield performance and training enabled by the QwikRef system, college football officiating is now getting more plays right,” he said.

Viewers complaining about what they perceive as a missed call, whether on the Jumbotron or on wide screens at home, fuel what Acree describes as a “we gotta get it right” emphasis for the officiating crew.

“The outcome of one game can determine who goes to the conference championship game or a postseason bowl game, and whether or not the bowl game is a major or minor bowl game. Millions of dollars are potentially in play. Livelihoods of hundreds of coaches and athletic administrators are at stake. We gotta get it right … both on the field and in the replay booth,” he said.


Not only does QwikRef provide detailed, comprehensive officiating performance data, it also provides a 20,000-foot view for the senior decision-makers who administer college football officiating at the national level and who bring rules tweaks to the NCAA Football Rules Committee for consideration.


“Data compiled and generated by QwikRef is showing that we’re trending in the right direction, that we are getting better both on the field as well as in the replay booth,” Acree said. In his opinion, the collaborative replay model — having several people in the command center make a collaborative decision — in combination with more effective onfield and replay performance evaluation and training, are having a positive impact on officiating of the college game.

Within a week of Carollo’s request, Rogers Redding, the national coordinator of college football officials, asked Acree for targeting data through the first three weeks of the 2018 season. Redding was particularly interested in the number of targeting fouls that had been called and suspected there had been a large uptick in crown-of-the-helmet targeting fouls. Acree provided the data, and Redding’s instincts were correct: QwikRef showed crown-of-the-helmet targeting fouls were up significantly in FBS games through the first three weeks of the 2018 college season.


Reports, Ratings, Communication

Carollo has employed the product for 10 years. He uses it as a tool to write reports, rate officials and communicate with coaches and officials, and as a repository to capture information and video. He also captures trends during the season through QwikRef’s analytics. “I need someone smarter than me to break it down,” he joked. He can use it to analyze plays by down, by what happened in the fourth quarter or by “plays under pressure.”

All fouls are entered into the system by officials, and they also have the option to put in a “no flag” play for examination later, according to Tony Buyniski, assistant commissioner for the Big Ten Conference. The program contributes to staff evaluations and it is used for comments on no-calls. Every play is graded. Video is submitted to officials and those who train the officials. Correct, not correct, marginal and no-calls are all recorded. The system is used to track and provide data immediately, in season and for postseason consideration.
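That grading workflow lends itself to simple tallies. The sketch below is illustrative only; the field names, grade labels and filters are assumptions for the example, not QwikRef’s actual data model. It shows how graded plays could be stored and summarized:

from dataclasses import dataclass

# Hypothetical play-grading record; fields are illustrative, not QwikRef's schema.
@dataclass
class GradedPlay:
    official: str
    quarter: int
    down: int
    ruling: str   # "flag" or "no flag"
    grade: str    # "correct", "incorrect" or "marginal"

def accuracy(plays, include_no_flags=True):
    # Share of graded plays marked correct, optionally excluding "no flag" plays.
    pool = [p for p in plays if include_no_flags or p.ruling == "flag"]
    return sum(p.grade == "correct" for p in pool) / len(pool) if pool else None

# The kind of filter a coordinator might run: fourth-quarter plays only.
def fourth_quarter(plays):
    return [p for p in plays if p.quarter == 4]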

“We’re tracking the new fair catch rule on kickoffs this year,” Buyniski said, referring to a rule that allows a team to start its series on its own 25 yardline as opposed to returning the kick. Officials are also entering game duration variables, which Acree then checks. Fouls, stoppages, the length of stoppages and average game duration are other variables tracked and provided to the commissioner for in-season consideration.

QwikRef is also used by the consortium to compare statistics from year to year. “We are able to compare numbers for two to three years. We’re perfectly happy with the system. It provides a lot of statistical analysis,” Buyniski explained, “though it can take time to sort through.

“It’s a blessing and a curse with all the data. We need to review it very meticulously and stay on top of trends. Currently, we use the data in-season, take a deep dive into it after the season, and provide weekly reports to the Big Ten, Mid-American and Missouri Valley,” he said.

Idea Genesis

Acree, a veteran Division I football official and former Division II basketball official with more than 36 years of officiating experience, retired from the U.S. Navy in 1995 with the rank of commander after 21 years of service. He got the idea for QwikRef while using his master’s degree in information system technology to lead a website development project supporting U.S. Navy labs engaged in a Navy combat systems collaborative engineering project. He developed a web-based application for Navy combat systems engineers to share engineering test data. At the time, he was officiating football in the Mid-Eastern Athletic Conference (MEAC) and “the light popped on. If I could do it with engineering test data, I could do it with football penalty data.”

Johnny Grier, then an NFL referee, was his conference coordinator (1999-2000), and Acree examined the paper forms Grier used to report postgame NFL data. “Johnny explained the entire process for postgame reporting,” Acree said.
“I basically reverse-engineered the process and digitally replicated the postgame paper-reporting processes used by the NFL circa 1999.”

He created a website that MEAC football officials could access to enter their foul reports.


“Johnny had the vision and encouraged me,” Acree explained. The system languished until 2000, when it caught the eye of Jim Maconaghy, who began using it in the Atlantic 10, Ivy and Patriot leagues.

Looking for a Tool to Hire Officials and Measure Performance

With a background in engineering, Maconaghy was looking at the tool for hiring purposes and to measure and rate the performance of his officials. “We were doing everything longhand. We looked at what Rusty had and liked it,” Maconaghy said.

He reviewed the results at the end of the year and “Rusty would improve the program and make it more precise. In my 53 years of officiating, next to replay, QwikRef is the best development program to identify trends to grade officials.

“A coach could give you a call in week six of the season and ask about personal fouls on one of his players. The system is so precise it can respond to almost any situation, allowing coaches to review plays,” Maconaghy said.

Maconaghy used the information captured for postseason reports and showed the reports to fellow Division I coordinators. He and Grier helped Acree get an audience with the late Dave Parry, the national coordinator for NCAA football officials at the time, to demo the system at a winter meeting of all the Division I football officiating coordinators. At the time, the system was only capturing “fouls and a few other rudimentary things,” Acree observed.

At the meeting, there were a “lot of great football officiating minds in the room,” Acree said. But there were many “just not attuned to the technology and how it could be integrated into officiating of the game.” Though disheartened, Acree did not give up.

Fast Forward

Fast forward from 2000 to 2006, and acceptance of the system was still slow in coming. In came Walt Anderson, recently hired as the Big 12 coordinator of football officials. He saw some of the reports Maconaghy brought each winter to the Division I coordinators’ meeting and placed discreetly on the table where Anderson was seated. As a “techie guy, Walt Anderson was all over this stuff. He wanted a demo. He decided to implement QwikRef into the Big 12’s officiating program for 2006, including replay. Walt Anderson created the model and expanded the functional requirements of the system to meet his needs as a Power Five coordinator,” Acree said.

Anderson fed Acree the functional requirements, “to fry the eggs and cook the bacon. He expanded the system’s foul and replay reporting and officiating performance evaluation functionality,” according to Acree.

From there, it was off and running. “Walt Anderson’s hands are all over QwikRef,” Acree said. “He got us started developing it further for a Division I coordinator, established the framework for it, promoted its expansion and advocated its use to other coordinators. He gave QwikRef the national exposure in the Big 12 it needed to launch it into other D-I conferences.” Many of the features today are a direct result of requirements established by both Anderson in the Big 12 in 2006 and later by Carollo, who adopted it for the Big Ten in 2009.

Creating Transparency on Rulings and Comments From Observations

Anderson’s initial application of QwikRef was to collect data for coordinators and was limited to officiating. But he kept pushing for expanded information, including reports coaches could see, creating transparency in rulings on the field and comments from observations. He added, for example, more specific foul categories, like offside and delay of game. He appreciated the ability to edit and build upon the parameters of QwikRef, citing a need to generate useful and accessible data.

“We compounded the information available to coaches,” Anderson said. “It got to the point where we were adding comments to the grades and both the evaluator and coordinator’s comments could be distinguished. Coaches could check on individual plays, see the comments and why the call was rated correct or incorrect. That helped make our officials better and provided transparency for coaches so they could learn.”

The coaches could see the foul and why it was called. “We made it clear to the coaches the information was not to be used to beat up on officials, or their access would be over,” he added.

Anderson joked that “you can’t call Bill Gates and ask him to modify software at Microsoft,” but you can call Rusty Acree at QwikRef and get the program quickly revised. “Rusty customized it on a per-conference basis,” he said. The Big 12 might have different requirements than the Atlantic Coast Conference (ACC), for example, and the program would be set up to meet each specifically.

Early on, one of the statistics Anderson captured from QwikRef demonstrated the importance of moving officials from one side of the field to the other so they were exposed to both coaches. The data argued officials should be switched, rather than staying on the same side of the field throughout the game, to ensure greater objectivity.

Anderson also sought specific statistics on targeting and helmets coming off. Based on data provided, he found players weren’t strapping their helmets on. “When they did, the helmets stopped flying off,” he said.

“Granite can be just a piece of granite or turned into a sculpture. Data is like granite. It is nothing unless it is molded into something, like a Michelangelo statue,” Anderson observed. “My job was to find the block of granite; Rusty’s job was to make it look like David.”

Anderson would like to see the program expanded to capture more injury data on knees and concussions, for example. “It’s important to have accurate data to make rules, adjust rules,” he said. “Otherwise, we’re speculating on emotion.”

By 2007, “We could quantify performance on the field and in the replay booth. Using a number of factors, we grade correct calls, foul types, how many rulings are wrong,” Anderson explained.

Blocking below the waist (BBW) fouls have drawn extra attention. “Because the rules were difficult to officiate correctly, BBW trend data compiled from QwikRef has enabled the rules committee to progressively tweak the BBW rules each season,” Acree added.

Other rules or officiating mechanics issues for which QwikRef has provided data directly to the NCAA/CFO include:

  • Game duration creep. As a direct result of game duration trend data reported in QwikRef for more than 12 years, actions and rules changes have been adopted to arrest game-duration creep;
  • Adding an eighth official. Provided performance metrics for evaluating the effectiveness of the center judge to support the 2014 decision to add an eighth official;
  • Changing sides of the field. Provided performance metrics for evaluating the effect of the head linesman, side judge, line judge and field judge changing sides of the field at halftime;
  • Targeting. Provided metrics for this player-safety issue, which has become and remains a high-interest, high-visibility issue throughout the country;
  • Helmets-off data;
  • Kickoff data to evaluate the effectiveness of rules enacted in 2018 to reduce the number of kickoff returns.

Ranking Officials Based on Evaluations

Today, QwikRef’s strength and primary utility for coordinators of officials is the evaluation, analysis and training of officials. Ranking officials based on data-driven evaluations and using metrics to quantify performance have become key tools for a coordinator of officials. Coordinators use the data in their own subjective ways for regular-season assignments, postseason assignments, training, and the hiring and retention of officials. “Quantifiable data to back up decisions is available,” Acree said.
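As a rough sketch of what a data-driven ranking could look like, a staff might be ordered as below; the per-official counts and the simple accuracy metric are invented for illustration and are not QwikRef’s actual grading formula.

# Hypothetical season summaries per official; numbers are made up for illustration.
staff = {
    "Official A": {"correct": 180, "incorrect": 9, "marginal": 6},
    "Official B": {"correct": 172, "incorrect": 15, "marginal": 10},
    "Official C": {"correct": 190, "incorrect": 7, "marginal": 4},
}

def accuracy(grades):
    total = sum(grades.values())
    return grades["correct"] / total if total else 0.0

# Rank the staff from highest to lowest graded-call accuracy.
for name in sorted(staff, key=lambda n: accuracy(staff[n]), reverse=True):
    print(f"{name}: {accuracy(staff[name]):.1%}")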

Similar to rule changes, the QwikRef system adapts to new issues every year as administrators, coordinators and the NCAA seek information to improve how the game is officiated. “We collect data to verify, validate and evaluate the effects of rule changes. Every Division I conference is using QwikRef for football,” Acree said.

Expanding Into Other Sports

In 2012, the ACC men’s basketball program requested a basketball version of QwikRef be developed to support performance evaluation of ACC men’s basketball officials.

Working closely with Ben Tario in the ACC office, Acree’s team leveraged the existing football QwikRef technology and quickly had a basketball version of QwikRef up and running for the ACC, based on Tario’s basketball functional requirements. The ACC women’s program soon followed. Additional functional and reporting capability was also added.

The Pac-12 men’s and women’s programs are also using the system to evaluate basketball officials’ performance and grade their staffs, as is the West Coast Conference for its men’s officials.
The Ivy, MAAC, American, Atlantic 10 and Big East women’s basketball programs are all using the system. Debbie Williamson, women’s basketball coordinator for those five conferences, has been “a huge advocate and champion” of the QwikRef basketball system for women’s programs, Acree noted.

At that time, Williamson had to break down calls to develop an overall percentage score for officials. She found that her score, whether high or low, based on grading every call or no-call, could be very different from that of an evaluator who might simply be “eyeballing” an overall game performance. “I may think a room is hot and you may think it is cold,” she said. “That’s how opinions work, but if we know the exact temperature in the room, we can make decisions based on fact and not just on how something feels. We needed a number — the temperature. QwikRef provides those numbers based on specific criteria.”

Charlene Curtis, coordinator of women’s basketball officials for the ACC, was using the system at that time. Williamson found Curtis had the same “call by call” breakdowns, but also had a platform that could house the information and produce reports. “We’ve found our officials are approximately 93 percent accurate on fouls and violations regarding correct or incorrect calls. If you factor in no-calls, that drops to 86-87 percent,” she said. “It shows you cannot hold your whistle when there should’ve been one.”
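One plausible reading of that arithmetic, with counts invented purely to illustrate why factoring in no-calls lowers the percentage, is that missed whistles enlarge the denominator:

# Made-up counts to illustrate the effect Williamson describes.
correct_calls = 93      # whistles graded correct
incorrect_calls = 7     # whistles graded incorrect
missed_no_calls = 8     # plays where a whistle was warranted but never blown

calls_only = correct_calls / (correct_calls + incorrect_calls)
with_no_calls = correct_calls / (correct_calls + incorrect_calls + missed_no_calls)

print(f"Calls only: {calls_only:.0%}")             # 93%
print(f"Including no-calls: {with_no_calls:.0%}")  # about 86%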

Williamson realizes that even with the system, observation is still one person’s judgment against another’s. She can take two or three looks at the replay while the official has only one shot at the call. But she tells her team that if an official can defend a call, she will change her score on it. “I want to be wrong in those situations and will always give the official the benefit of the doubt,” she observed.

QwikRef is improving call accuracy for Williamson’s staff and changing the statistics they are capable of tracking. She recently broke down calls in the last four minutes of games and found “we’re finishing the games as strong as, and in many cases stronger than, the average for the game.”

She jokes that her “inter-rater reliability” (the reliability of her personal ratings) is very high because, “I agree with myself,” and she recognizes that to retain reliability she needs to hire observers who use similar standards, such as an expert-level knowledge of current NCAA women’s basketball rules and mechanics.
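Inter-rater reliability is usually quantified by comparing two observers’ grades on the same plays. The snippet below computes simple percent agreement, one basic measure among several, using made-up grades to show the idea:

# Two observers' grades on the same five plays (made-up data).
observer_a = ["correct", "correct", "incorrect", "marginal", "correct"]
observer_b = ["correct", "incorrect", "incorrect", "marginal", "correct"]

# Percent agreement: the share of plays on which the two graders match.
matches = sum(a == b for a, b in zip(observer_a, observer_b))
print(f"Percent agreement: {matches / len(observer_a):.0%}")  # 80%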

She uses stats from the system to provide feedback and build her offseason training programs around the calls being missed. “The system shows year-by-year, side-by-side statistics. We’re improving our rebounding calls, block/charges, fouls in general, and screening when we put a whistle to it. Our people want to get this right,” she explained.

The system provides customized reports so Williamson can produce the numbers she wants to track. She uses the results to have conversations with her officials about call accuracy.

“I don’t get to see my team every day so I coach when I have the opportunity to coach, and my team is doing everything they can do to get better. I have so much respect for my team,” she said. QwikRef gives her tools to analyze the game, success factors and how to improve. “Coaches would love a 93 percent success rate from their shooters. Heck, they’d take 86 percent.”

The numbers from QwikRef help Williamson “carry the banner” for her staff. “It helps me display my staff’s good work. That’s what they do for everybody.”

Big data continues to influence sport. As more quality information is gathered, distributed and used to evaluate plays for officials, the quality of officiating continues to improve. That’s good for the game, for officials, teams and the fans.



