Is Repeated Testing of Declarative Knowledge in Mathematics Moderated by Feedback?

This study set out to examine the effects of repeated testing of students' declarative knowledge in mathematics in grade 7 (13-14 years old) and to what extent feedback moderates the effect of continually testing students' declarative knowledge. Students who have automated the 400 basic arithmetical combinations (200 addition combinations and 200 subtraction combinations) have gained declarative knowledge. Mastering these combinations gives students an advantage in carrying out various calculations and performing different mathematical procedures (Dowker, 2012). If a student has automated the basic combinations, their attention will not be diverted from the procedure when solving calculation tasks, and there is thereby less risk of incorrect answers (Dowker, 2012). Previous studies have also shown that declarative knowledge in mathematics predicts future results in more advanced mathematics (Hasselbring, Goin, & Bransford, 1988; Gersten, Jordan, & Flojo, 2005; Rathmell & Gabriele, 2011).

One important aspect debated concerning the acquisition of declarative knowledge in arithmetic is how factual arithmetic combinations should be learned. Some researchers argue that practicing arithmetic facts to a certain level of fluency is important, and that teachers need to continually evaluate students' progress (Gersten et al., 2009). Other researchers claim that students should elaborate on and explore the various number combinations, thereby constructing a clear sense of numbers (Baroody, 2011) that will help them solve simple arithmetic tasks with well-developed strategies. However, there is hardly any research into how repeated testing affects students' declarative knowledge in mathematics; the assumptions presented above are grounded more in rhetoric than in forceful evidence-based research.
To enhance students' learning, teachers can use feedback. There is considerable variation in the effect sizes reported in feedback meta-analyses. The most effective forms of feedback are video-, audio-, or computer-assisted instructional feedback (Hattie & Timperley, 2007). The impact of feedback is also influenced by the complexity and difficulty of the task.
Some types of feedback are more effective with respect to the task at hand. Feedback on tasks with a low level of complexity, such as basic combinations, should provide information on correct answers (effect size .43) rather than incorrect answers (effect size .25) (Hattie & Timperley, 2007). Feedback does not automatically promote development: Kulhavy (1977) showed that learners can accept, modify, or reject feedback, and feedback that is not accepted has no effect on the learner.
In this study, our first aim was to examine whether repeatedly testing 7th-graders on simple arithmetic facts had an effect on their test performance. Our second aim was to examine whether feedback moderated the effect of testing.

Method
Two classes comprising 46 students participated in this study. Over a period of 26 days, the students were tested on their declarative knowledge in arithmetic three times a week. The two classes were randomly assigned to begin the period of repeated testing either with or without feedback. The feedback procedure consisted of (a) a summarized positive comment about the student's results and progress (cf. Hattie & Timperley, 2007); and (b) information regarding which tasks were correct, incorrect, or unfinished.
During three lessons each week, the students took a 90-second test on addition facts (number combinations).
Four different but equivalent tests were used to measure students' declarative knowledge. The tasks involved single-digit additions that require "carrying" the ten (e.g., 4 + 9; see Appendix 1 for the complete tests). The dependent variable was the number of correct answers.
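The complete test items are given in Appendix 1. Purely as an illustration of the task type (this sketch is our own and is not the study's item pool), the candidate single-digit addition combinations that involve carrying can be enumerated as all pairs of non-zero digits whose sum reaches ten:

```python
def carrying_additions():
    """Return all (a, b) pairs of non-zero single digits whose sum
    requires "carrying" the ten, i.e. a + b >= 10 (e.g., 4 + 9 = 13)."""
    return [(a, b) for a in range(1, 10) for b in range(1, 10) if a + b >= 10]

if __name__ == "__main__":
    items = carrying_additions()
    print(len(items))    # 45 candidate combinations
    print(items[:3])     # [(1, 9), (2, 8), (2, 9)]
```

A 90-second test form would draw its items from a pool such as this; which combinations appear on each of the four equivalent forms is determined by Appendix 1, not by this sketch.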
Although all students were given feedback in the planned order, it was not possible to administer all the tests simultaneously (e.g., some students were absent from class due to sickness). The average test-retest correlation between the four time points was r̄ = .77. To test the two research questions, a repeated-measures ANOVA with class affiliation as a between-subjects factor was calculated. The Greenhouse-Geisser correction was applied because Mauchly's W was statistically significant, indicating that the sphericity assumption (equal variances of the differences between time points) was not met. The ANOVA showed a main effect of time, F(1.75, 70.16) = 20.10, p < .001, partial η² = .33, as well as a statistically significant interaction with class, F(1.75, 70.16) = 6.47, p < .001, partial η² = .14. Both classes thus improved on arithmetic facts from start to finish, and the class that started with feedback did better than the class that was given feedback only at time points 3 and 4; see Figure 1.
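Two of the quantities in this analysis can be made concrete with a short sketch: the average test-retest correlation across the four time points, and the Greenhouse-Geisser epsilon that shrinks the degrees of freedom when sphericity is violated (the reported df of 1.75 corresponds to ε × (k − 1) for k = 4 time points). The scores below are simulated for illustration only; they are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, k = 46, 4                      # 46 students, 4 time points
base = rng.normal(20, 5, size=(n_students, 1))   # stable individual level
scores = base + np.arange(k) * 1.5 + rng.normal(0, 2, size=(n_students, k))

# (1) Mean pairwise Pearson correlation between the k time points.
corr = np.corrcoef(scores, rowvar=False)
pairs = [corr[i, j] for i in range(k) for j in range(i + 1, k)]
mean_r = float(np.mean(pairs))

# (2) Greenhouse-Geisser epsilon from the double-centered covariance
# matrix S~ of the repeated measures:
#   epsilon = trace(S~)^2 / ((k - 1) * sum(S~_ij^2))
# epsilon ranges from 1/(k-1) (maximal violation) to 1 (sphericity holds).
S = np.cov(scores, rowvar=False)
centered = (S - S.mean(axis=0, keepdims=True)
              - S.mean(axis=1, keepdims=True) + S.mean())
eps = np.trace(centered) ** 2 / ((k - 1) * np.sum(centered ** 2))

print(f"mean test-retest r = {mean_r:.2f}")
print(f"Greenhouse-Geisser epsilon = {eps:.2f}")
print(f"corrected df for the time effect = {eps * (k - 1):.2f}")
```

In practice a library routine (e.g., a repeated-measures ANOVA implementation) would compute the correction; the sketch only shows where the reported fractional degrees of freedom come from.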

Discussion
The results of this explorative study provide evidence for the effects of repeated testing on seventh-grade students' declarative knowledge in mathematics. Repeated testing yielded a significant improvement in students' declarative knowledge with respect to fluency in simple arithmetic. How a certain level of fluency can be reached has been discussed in earlier studies: should students practice arithmetic facts regularly to a certain level of fluency (Gersten et al., 2009), or should they elaborate on and explore the different number combinations and thereby construct a clear sense of number (Baroody, 2011)? Our results are consistent with earlier studies that found that regular practice, in this case repeated testing of arithmetic facts, affects students' fluency. Whether repeated testing is sufficient on its own, and whether it is better or worse than other methods aimed at developing students' fluency, such as ordinary practice (e.g., Gersten et al., 2009) or elaboration and exploration (e.g., Baroody, 2011), remains to be investigated.
In our study, we used computer-assisted feedback and provided information on both correct and incorrect answers; computer-assisted feedback has been found to be an effective form of feedback (Hattie & Timperley, 2007). One explanation for the moderating effect we observed could be that the feedback was accepted, and not modified or rejected (cf. Kulhavy, 1977).
One practical implication of the results is that feedback can render the assessments more valid. In the present study, the students who started without feedback performed more poorly even once they were given feedback; this indicates that the validity of the instrument can actually be impaired if feedback is not incorporated when applying it. We suggest that this phenomenon is probably more pronounced when the student is already familiar with the subject area and only needs to build fluency in it. Whether this is the case, however, needs to be empirically evaluated.
This study has limitations; for instance, the number of participating students was relatively small. Moreover, it should be noted that students were not randomized at the individual level; for practical reasons, we were forced to decide at the class level whether or not to provide feedback. The results of our study should therefore be viewed with caution and need to be confirmed or refuted by more rigorously designed studies in the future.