Do you believe that every statement in your audit report needs to be backed up with evidence? Some auditors do and some don’t.
The word “every” is troublesome, isn’t it? Most statements need backup – but not “every” statement does. For instance, the date of the report doesn’t need much evidence to back it up. Yes, it is May. Yes, it is 2015. You don’t need to tie the date in your audit report to the printout of a calendar in your working papers! But, if we say something controversial, such as “The executive team is using university funds for personal travel,” then you had better have plenty of evidence to back you up.
But how do we tell whether we have enough? And how do we come to agreement with those that review our audit quality about what constitutes enough?
I was privileged to hear Amanda Lamb, Senior Research and Evaluation Analyst at the Multnomah County Sheriff’s Office, speak at an Association of Local Government Auditors Conference about this issue. She has developed a way to determine whether evidence is sufficient that is more objective than just “going with your gut.”
Going with your gut
“Going with your gut” is a frustratingly fuzzy directive for an auditor because most auditors are equipped with analytical minds. For years, I have left my audiences unfulfilled by telling them that the amount of evidence they need is based on how comfortable they would be standing in front of an incredulous board. If the auditor can imagine making a convincing argument to the board based on the evidence they gathered, they have enough. If not, they need to gather more.
Not having a refined method to judge evidence causes silly arguments
One of my clients called me last year complaining that her auditors constantly argue about how much evidence is enough as they review the audit working papers and audit report at the end of the audit process. Instead of the review process taking a few days, it takes several weeks, and the report undergoes major changes as a result. Obviously, something the team is doing is wrong.
Amanda ran into the same problem, but rather than argue about it, she came up with a technique to help her team decide how much evidence was necessary. Her team still disagrees sometimes, but the intensity and frequency of the disagreements have decreased.
Appropriate is easier to discern than sufficient
The audit standards require evidence to be “appropriate” and “sufficient.” As Amanda so aptly pointed out, replace the word “appropriate” with “relevant,” and you have a pretty objective way to judge one of the qualities of evidence. If the evidence does not help you answer the audit objective (in other words, if your evidence was gathered as you hopped down a bunny trail or while visiting another planet!), then the evidence is not appropriate. Case closed.
But the term “sufficient” causes all sorts of debate among auditors. How much evidence is “enough”? The GAO gives us some tips in the Yellow Book, such as: the greater the audit risk, the more evidence is required; and stronger evidence may allow less evidence to be used. But the guidance never tells us how many pieces of evidence we must have. That is left up to our “professional judgment,” which, as you know, means we are on our own.
The scale of certainty
Amanda started by presenting a language to help us grade the intensity of our statements. When you make a particular statement in the report, do you want to:
- Be certain,
- Be convinced beyond a reasonable doubt,
- Be clear and convincing,
- Present a preponderance of evidence,
- Present some credible evidence, or
- Present no evidence?
Auditors have to work a lot harder to “be certain” than they do to “present some credible evidence.”
Questions help us decide where we should be on the scale
And to help us decide where we are on the scale, needing certainty or simply a smattering of credible evidence, she asked us not to consider our “gut” but, instead, to consider a series of questions in over a half-dozen categories.
The first question asks what type of statement you are making. Are you making a background statement, a statement regarding a minor administrative lapse, or a statement regarding a major agency malfunction?
A background statement, such as “The University was established in 1893…,” is low risk, and therefore needs less evidence to back it up. A minor administrative lapse, such as “2 of the reported errors were keying entry errors and were immaterial,” is a moderate risk. And a major agency malfunction, such as “30% of students receiving aid were ineligible,” is high risk and demands more evidence to back it up.
The following factors are listed from the lowest risk, requiring less evidence, to the highest risk, demanding more evidence.
Type of statement:
- A statement of fact is low risk.
- A summarizing statement is moderately risky.
- A translation of a technical subject is high risk.
Statements using the words:
- About, could, or possible are low risk.
- Likely, should, or usually are moderate risk.
- Exactly, must, all, or always are high risk.
Testing results that:
- Conform to expectations are low risk.
- Generated no expectations are moderate risk.
- Are contrary to expectations are high risk.
If the audit staff is:
- Competent and experienced, the statements they make are lower risk.
- A mix of experienced and inexperienced members, their statements are moderately risky.
- Made up of a new in-charge or new staff, their statements are high risk.
If the relationship with the client is:
- Collaborative, the statements are low risk.
- Suspicious, the statements are moderately risky.
- Adversarial, the statements are high risk.
If the audit resources are:
- Sufficient, with a flexible deadline, the statements are low risk.
- Churning (team members turned over during fieldwork), the statements are moderately risky.
- Limited, with the audit scope exceeding the available time and resources, the statements are high risk.
If the political content is:
- Banal, the statement is low risk.
- Debatable, the statement is moderately risky.
- Controversial, the statement is high risk.
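To make the idea concrete, the factors above could be tallied like a worksheet: rate each factor low, moderate, or high, then let the overall score point you to a rung on the scale of certainty. The sketch below is purely illustrative and is not Amanda Lamb’s actual matrix; the 1/2/3 scoring, the averaging, and the cutoffs are my own assumptions.

```python
# Hypothetical worksheet for the risk factors above.
# Scoring (1 = low, 2 = moderate, 3 = high) and cutoffs are
# illustrative assumptions, not Amanda Lamb's actual matrix.

RISK_SCORES = {"low": 1, "moderate": 2, "high": 3}

# Rungs on the scale of certainty, from least to most demanding.
SCALE = [
    "present some credible evidence",
    "present a preponderance of evidence",
    "be clear and convincing",
    "be convinced beyond a reasonable doubt",
    "be certain",
]

def evidence_target(factor_ratings):
    """Map low/moderate/high factor ratings to a rung on the scale."""
    avg = sum(RISK_SCORES[r] for r in factor_ratings.values()) / len(factor_ratings)
    # Spread the 1..3 average across the five rungs of the scale.
    index = round((avg - 1) / 2 * (len(SCALE) - 1))
    return SCALE[min(index, len(SCALE) - 1)]

ratings = {
    "type of statement": "high",        # translation of a technical subject
    "wording": "high",                  # "exactly", "all", "always"
    "testing results": "moderate",      # generated no expectations
    "audit staff": "low",               # competent and experienced
    "client relationship": "moderate",  # suspicious
    "resources": "low",                 # sufficient, flexible deadline
    "political content": "high",        # controversial
}

print(evidence_target(ratings))  # -> be clear and convincing
```

The payoff is the same one Amanda gets from her matrix: the team argues about seven concrete ratings instead of one vague feeling about sufficiency.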
Amanda presented these options in a matrix that also included the elements of a finding. I disagree with her on that facet of her matrix, so I don’t include it here. But the benefit of it being in a matrix is that the team can easily circle or highlight the results and get a visual sense of how all of the risk factors stack up.
Play with that on one of your engagements, and let me know whether it reduces friction on your audit team.