What Elements College Students Should Expect To See in a Report When They Pay For Updated Testing
Students with learning disabilities and Attention Deficit Disorder (ADD) should know that graduate admission exams such as the GRE and MCAT do offer disability accommodations. However, some students may find that the documentation (i.e., proof of disability) they provided to their college in order to receive accommodations there is not considered sufficient when they apply for accommodations on these exams. The testing agencies may tell them that their testing is too old or is missing certain required elements. Students should see Elizabeth’s advice on how to research what documentation they will need in order to get a sense of whether or not they will have to get new testing before applying for accommodations on graduate admission exams.
Elements to Look For in a Good Private Report
Any report should contain some basic elements. It should be printed on the professional’s letterhead, state the reason for testing, present the testing results, and be signed. Many evaluators who do testing have their own template that they use, and that’s perfectly appropriate. But what separates a basic report from a detailed one is how much information the evaluator includes on top of this basic framework.
It is a very good idea for students to ask the evaluators they are considering for a copy of a redacted report, which is a sample report in which any identifying information about the student has been removed or blacked out. Many professionals will not have such a report readily available; this means nothing more than that they may never have been asked for one. However, students should know that it is worth waiting a few weeks to receive one, as it will give them a good idea of how much information a professional includes in a report aside from the test scores. Students may be surprised to find that some of the evaluators who charge the most money do not provide the most detailed reports.
As they review these redacted reports, students should look for certain elements. A good report should contain:
- A fairly thorough history. This part of the narrative should include information about early development (Did the student walk and talk within a normal age range?), acquisition of academic skills (When did the student learn to read, write, and do simple calculations?), and any history of speech and language therapy, occupational therapy, tutoring, or after-school help provided by teachers. The report should also mention any other members of the family who have been diagnosed with or are suspected of having a disability. There should be a discussion of the family’s situation, too (Are the parents married? How many siblings are there? Are there any family stressors?).
- A description of the student’s performance during testing. Good evaluators will describe the student’s demeanor throughout the testing (Was she eager to try? Was he shy and withdrawn?). They will talk about how the student worked on the tasks (Did she rush through? Did he take five minutes to work on each question?). They will sometimes ask a student to explain how he/she arrived at an answer, or give the student a chance to try a problem again after it has been marked wrong, to see whether the student simply wasn’t paying attention or genuinely doesn’t know how to do the problem.
- A thorough discussion of the results. Good evaluators talk about what students did well and analyze their errors. For instance, on a spelling task, did the student’s responses come close to the correct spelling (ex. spelling “phone” as “fone”), or were they phonetically plausible but further from the conventional spelling (ex. spelling “table” as “tabul”)? Good evaluators also make connections between the patterns seen in the cognitive testing and the achievement tests (ex. you would expect a student with strong verbal skills to do well on reading and writing measures, and a student who shows weaknesses in verbal skills to struggle with reading and writing).
- Standard (or scaled) scores and percentile ranks for every measure administered. Evaluators should provide a score for every test and subtest given in the evaluation. For instance, an evaluator should provide not only the student’s Verbal Comprehension Index score from the WAIS-IV but also the student’s scores on the Vocabulary, Similarities, and Information subtests (and any supplemental measures administered).
Some evaluators like to include students’ test scores within the narrative as they are being discussed. This is fine, as long as the scores are provided for each measure administered. Others provide a score summary or table at the end. For an example of what one looks like for an achievement battery, see this page.
When evaluators use computer scoring programs, these usually present students’ scores as standard (or scaled) scores, percentile ranks, age equivalents, and grade equivalents (as seen on the sample), and they may provide some additional information. It is important for students to know that most testing agencies and colleges will reject testing that does not provide standard scores and percentile ranks. The other information is not typically required, so while its presence in a report will not be a problem, it will also not be seen as adding value.
- A summary. At the end of the report, good evaluators will summarize the overall picture and point out students’ strengths as well as particular areas of concern, if there are any. They will also mention that they have ruled out other causes of the difficulty (such as anxiety, depression, or other conditions that could affect the student’s performance). Good evaluators will not necessarily find that a client has a learning disability or ADD, because not everybody who gets tested does have a disability. But they should at least be able to give a sense of what the root of the difficulty is (if, indeed, there is one).
- Recommendations. Good evaluators not only suggest adjustments to be made at school, but they recommend strategies and technology that the student can use to improve his/her skills and bypass the area of weakness. These recommendations for accommodation should be supported by evidence from the testing and make sense based on what the testing says. This is important, as learning disabilities and ADD are lifetime issues, and students need to know how to help themselves in their daily lives as well as in educational settings.
How Students Can See Whether an Evaluator Offers a Basic or Detailed Report
As stated earlier, most evaluators have a template that they use for every report, which provides some descriptions of the subtests administered so that the reader can understand the tasks that the student completed. If students are paying for testing, they should make sure that they get their money’s worth by looking to see how far evaluators go beyond their standard framework.
The best way to see whether the sample report they receive from an evaluator contains what it should is to run a highlighter over any section that explains what the student had to do in a test, the student’s scores, and any other information regarding scores or percentile ranks. If most of the report gets highlighted, it is a very basic report. This is not a bad thing, but students may not wish to pay a high price for such a report.
Here’s how a basic report describing a student’s performance on the Woodcock-Johnson Writing Samples test would look after highlighting:
Student X earned a score of 121 on a task that asked him to write sentences to describe an object, explain something factual, or complete a passage (Writing Samples). This placed Student X in the Superior range on this task as compared to other students his age. His RPI score for this test was 98/90. His age equivalent score was >30, and his grade equivalent score was >30.
Here’s how a detailed report would look:
Student X earned his lowest writing score for the entire battery on a task that asked him to write sentences to describe an object, explain something factual, or complete a passage (Writing Samples). This task assigns two points to responses that answered the prompt and provided some detail, one point for items that provided simple, straightforward answers, and either 1.5 or .5 points for answers in between (answers that were very far off the mark did not receive credit). His performance on this task showed a lot of inconsistency. He earned maximum credit on only one item, where he provided a response that was simple in content and vocabulary but did describe the action and its result (analogy – “The woman cooked some pasta and ate it.”). However, he did not provide a two-part answer on a different item where he should have done so (analogy – answering “You might get thirsty” to the question “Why should you carry water with you when you go on a hike?”). Student X received only half a point for this response because, without the first part of the sentence, his response was too vague to make the reader understand what he had been asked to explain.
On most other items, Student X earned two points. On an item that asked Student X to describe the action and predict what would happen next, he used a run-on sentence.
Some of Student X’s answers used vague language. On two items that asked him to describe the action in the picture, his answers were technically correct, but they lacked a fairly obvious, more specific verb (analogy – “The boy has ice cream” rather than “The boy licks his cone.”). On one item that asked him to explain the relationship between two people, his answer was so vague that he only earned .5 points (analogy – writing “They are in a family” for a picture that shows a father and son). When asked verbally what the relationship was, Student X was able to respond correctly (the vocabulary was simple and was clearly not outside his knowledge base).
Surprisingly, Student X provided one response that was not a sentence (analogy – writing “One word – a car” when asked to say what vehicle is pictured). When asked to try the item again, Student X was able to provide a simple, correct response. This item is not at all representative of his abilities, and since his sense of humor was in evidence throughout the session, it was likely just a manifestation of this. But it is curious that he did not self-correct this response before testing moved on to the next item. He only self-corrected when prompted to by the examiner.
While these elements were not part of the scoring, Student X used punctuation and capitalization correctly and spelled nearly every word correctly (he misspelled only one or two words). His sentence structures were simple and straightforward, so except where noted already, he used grammar correctly.
Students should be able to see a lot of text that is not highlighted in a report; this shows that the evaluator puts detail and thought into writing reports.
College students who need to get tested for graduate admission exams need to make sure that their new testing contains all of the tests they need. They should see Elizabeth’s instructions on how to check the documentation requirements for their exam. They should also see Elizabeth’s advice on how to be an educated consumer of such testing.