If you’re a typical Referee reader, you’re college educated and have spent 15 or 20 years in school. That means you’ve probably taken thousands of tests in dozens of formats in your life. Early on, they included the never-gets-old game of “flashcards.” Later, they would be multiple-choice behemoths and essay-type brain busters, requiring your best BS to help earn a BS. The purpose of tests was simple, but the process of passing them was the stuff of daydreams to some and nightmares for others: Prove that you know enough to be dangerous, but not enough to be a danger in your chosen field of study.
After you graduated, you finally had time for other interests, like officiating — and here they come with the @$#$ tests again.
But this time it’s different. Tests in school were about objectively ranking a student among peers, and the same went for the institutions administering them. We all know someone who did or didn’t get into a professional school because of a few marks here or there. We’ve probably all heard the debate about whether a 3.8 GPA means the same thing coming from School A, B or C, too: Grades meant everything in academia.
In sports, meanwhile, A, B or C are three esoteric choices on a rules quiz and most jurisdictions require passing one as a condition of registration or advancement. The norm these days is for the test to be of the “open book” variety. It can be taken multiple times, if necessary, and often as a group activity with others. To top it off, the candidate only needs a minimum score before being released into the wild; getting a higher score often doesn’t change anything.
So, our psyches burn with indignation and confusion. We know what it took to pass some college exams. Then we look at the test we take for basketball officiating every year and proclaim some sort of comedown: “They can’t be serious,” we might sniff. “Some guy who made 76 on his second try is just as qualified as moi?
“My rulebook and I spent our summer vacation together and I had a 98 to show for it. If it hadn’t been for that one dumb question, I would have been perfect!”
Or would I?
Virtually every state office is unique in its approach to testing. Most agree on the motivation for testing, but they often diverge in the method. They view the annual rules exam as an audit that accomplishes two important things: First, it demonstrates that an individual official is continuing to put effort into learning the rules. Second, the pattern of incorrect answers turned in by all the candidates tells them about the rules themselves — which ones need improvement and which ones need more attention from officials. Just as a right answer shows whether one official knows what targeting is, enough wrong answers show whether officials as a group get what targeting is. That can be a critical distinction, and the states weave what they see in test responses into their training. What follows is a sampling of how various states prepare officials to work high school sports. We’ll talk about testing requirements but include the other arrows in their quivers, too, like training programs and performance assessments.
Mark Dreibelbis is an associate commissioner of the North Carolina High School Athletic Association (NCHSAA), responsible for officials. The NFHS, through its rules committees, drafts two 100-question tests each year that states may use for testing in a given sport. The Part I test is more straightforward, while the Part II test is designed to be a stiffer evaluation of overall rules knowledge. Dreibelbis’ officials take the Part I test and must pass it with a 75 percent score on the first try to be regular-season eligible for football, as an example. In itself, that doesn’t seem like much of an achievement. The test is open book, an approach that is a departure from the proctored format that North Carolina used about 10 years ago. Dreibelbis says the change was made because studies have shown the open-book format promotes better learning. “(It’s) better than me just going in blind (on a closed-book test) and having a question that I don’t know the answer to and just taking a guess,” he explains. “What I’m going to do is answer the 70 or 80 or so questions (on the Part I test) I know and then … I’m going to go stick my nose in the rulebook and look up the rest.”
The concern often heard is that open-book tests lead to … collusion in achieving passing grades. Dreibelbis has a good answer to this: Cheating on the test won’t get you very far in North Carolina because of the other requirements. “You have to go to a minimum six preseason clinics where they work on rules and mechanics,” Dreibelbis says. “You have a local association meeting. You have a mandatory state rules clinic where we go over all the rules changes, and interpreters now use video review. You have to work three scrimmages and then we add the NFHS concussion test.
“But you have to do all of those every year to be eligible. So, it’s much more than just a 100-question test.”
Some of us can feel the treads shallowing on our tires at the thought of crisscrossing the state to find “a minimum” of six clinics, let alone putting in the other work. The North Carolina system of local officiating chapters provides broad access to available clinics, and Dreibelbis gives his clinicians healthy leeway, with some direction, to present their subjects in relatable formats. The proof is in the pudding: NCHSAA officiating numbers have declined only about 5 percent in recent years, against the trend in many areas. The officiating pool is getting older, however, and the qualification requirements may deter younger officials who balk at the time and financial commitment, Dreibelbis admits. He says the NCHSAA has taken advantage of the new NFHS program to recruit new officials and also waives registration fees for first-year officials. That leaves about $25,000 in potential fees on the table annually, but it pays for itself by sparing newcomers the initial sticker shock and getting more of them over the hump.
Matt Bennett is the instructional chair of the California Basketball Officials Association (CBOA), representing about 3,500 officials in California. He says the California Interscholastic Federation (CIF) controls high school sports in the state but is not involved in officiating development in any way. Bennett says the CBOA is the dominant entity in Southern California that trains, evaluates and assigns basketball officials to work in the state. Rather than using NFHS materials, CBOA has written and developed its own program for officials. The focal point is the CBOA Study Guide, something akin to a workbook combining rule discussions with questions like those in Referee’s “Test Yourself” features. The guide becomes a tour of NFHS rules and mechanics with 215 multiple-choice questions to develop the user’s comprehension of the concepts presented (see sidebar for an example).
The first task for an official each year is to submit online answers to the questions in the guide, with numerous opportunities to achieve an 80 percent score. After that comes a classification test for assignment to games; it requires another 80 percent on a selection of 100 of the same questions, this time administered by individual chapters in their own way. This is where officials get more face time with mentors at the unit level, and if they fail that test, they get one more try at a makeup test after some individual coaching. If they don’t pass the makeup test, they don’t work basketball that season.
“The testing is a guideline to get the knowledge base for our folks,” Bennett says. “But just because you pass the quiz you still have to have confidence. … You probably run into people that are really book smart, but they don’t apply it very well.
“So, you’ve got a guy who could run off rule references by the number and letter all day long, but he can’t apply it. He doesn’t have judgment or is not physically able to move up and down the floor or he’s just got a real irritating personality.” This is where interaction with mentors comes in.
Bennett emphasizes the face-to-face element of developing officials. He says one important thing the CBOA requires officials to do as they get into their third and fourth years is go and observe other officials. “We have them basically fill out a card, make a few notes, get a signature from both officials that they were there. … Just the exposure of getting out to the games is so valuable; there’s really no substitute for that.
“You can look at video, you can take quizzes, you can read books, but to get out to see games and watch people do it — it’s just the basic things you’re looking at in terms of … appearance, physical fitness, judgment, mechanics, rule knowledge.”
The same approach is used in evaluating officials themselves. Bennett thinks the methodology is helping keep CBOA’s numbers steady, but he feels like they can always use more officials. Most high school ball in California is worked by two-person crews, so somewhere out there is room for considerable growth, if or when California embraces three-person mechanics.
So far, we’ve looked at two examples of existing structures evolving into concerted training approaches validated by testing. Now let’s see what happens when a system is built from the ground up. Jeff Cluff is an assistant director for the Utah High School Activities Association (UHSAA), responsible for officials and baseball. Cluff says that when he took over the job five years ago, basketball officials had two chances to make 90 percent on the NFHS Part II exam to qualify, but there was a problem. “I could have a guy who scored 99 percent on the Part II, which is supposed to be harder than the Part I test,” Cluff explains. “Then he’ll walk in and he’ll mess up the rules on the floor.”
That changed. UHSAA now requires returning officials to “only” make an 85 percent on the more basic Part I exam during a window early in each season. New officials must pass the test before they can officiate, but for Cluff, that’s only where the story begins. In his view, online, open-book testing wasn’t producing valid results. “I believe that’s one of the problems with the testing mechanism,” he says, “which is why I reduced its importance and we’ve tried to refocus in Utah in other areas to try to gauge their level of competency.
“In Utah, we did not have a preseason clinic or require training meetings during the season that were at a high enough standard. We were requiring 50 percent participation.”
The new requirement is that officials attend a six-hour preseason clinic every year to be eligible for postseason. The clinic staff includes many professional and college officials. Officials can apply for a waiver if they happen to have a college game on the clinic date but that’s a one-year-only deal: Miss two years in a row and you can watch the playoffs from the bleachers.
“It’s been overwhelmingly successful,” says Cluff. He says they offer a five-hour makeup clinic that’s actually more difficult than the original and the net result has been buy-in at the rate of about 70 percent for a deal that costs about $20 a head to facilitate. Cluff then follows up the training by going out and observing prospective playoff officials personally before they can work later rounds. It also helps that the policy in Utah, as in some other western states, is that officials can’t work a state final in successive years. That creates more incentive to improve.
The result is that officiating numbers are holding their own in Utah. Cluff also takes advantage of the NFHS recruitment program and makes a point of personally calling every prospective official who inquires through it. He says about 80 percent of those who came in through the program at its inception last year have stuck with it this season. That’s a lot of phone calls for a busy man, but the Welcome Wagon approach seems to have worked.
These administrators understand that the success of their officials and of their training programs correlates with the level of effort they put in.
The people we’ve quoted here will tell you they assign little or no importance to an official’s test score once the minimum qualification is achieved. For them, it’s performance on the field or court and active participation in learning the theory of officiating that matters most. Demonstrating that you’ve had your nose in the rulebook is simply a box to be checked and not a means of advancement in itself. If you work in a state that seems to imply differently, you might notice there are more problems with retention and the quality of officiating.
The takeaway from all this is to heed your state office’s advice: Don’t kill yourself to make a 100 percent on the test if they say that “X” percent is plenty. There’s enough subjectivity in testing that pursuit of the perfect score is quixotic anyway. However, if you only read the rulebook long enough to stumble through the test, your lack of commitment will show in the other elements of training and evaluation noted here. There’s the story of a football crew that once, collectively, ruled a field goal attempt “too high”; the rules, they explained on Monday morning, said the ball had to go between the uprights, and this attempt went clearly above them. They proved that reading the rules is different from comprehending them, and a test score alone can’t always differentiate the two — only performance can.
Study the rules, but not just as a way of getting by. Learn the rules and how they apply as a complement to your mechanics and game management skills. The test will look after itself.
Note: This article is archival in nature. Rules, interpretations, mechanics, philosophies and other information may or may not be correct for the current year.
This article is the copyright of ©Referee Enterprises, Inc., and may not be republished in whole or in part online, in print or in any capacity without expressed written permission from Referee. The article is made available for educational use by individuals.