Focus on Technology and Assessment

Within the realm of educational technology, assessment has multiple faces. Teachers and administrators increasingly encounter assessment as a key concept in any discussion of technology implementation. In particular, the recent emphasis on program accountability for federally and state-funded initiatives means that districts across the country will need to come up to speed rapidly on how to assess the impact of their technology funds. For a number of years, Sun Associates has been helping local school districts grapple with the intersection between technology and assessment. In this article, we explore the various implications of "assessment" for educational technology, as well as some approaches for making technology more accountable.

Curriculum, Instruction, and Assessment

As classically understood by teachers, assessment is the measurement component of the "curriculum, instruction, and assessment" triptych. Most teachers are familiar with the triangular arrangement of these elements and how each element drives and supports the others. Curriculum drives instruction, and assessment measures the effectiveness and outcomes of both. In this regard, assessment truly stands at the crossroads of curriculum and instruction. This pivotal position has direct implications for how assessment comes into play as a vital part of educational technology efforts: assessment is a tool both for measuring the outputs of the learning process and for monitoring its inputs.

Technology as a Tool for Assessment

In terms of measuring output, teachers are increasingly turning to technology tools to measure student performance. Electronic portfolios are a classic example of the integration of technology into assessment methodology. As schools grow their technology infrastructure — that is, as multimedia authoring tools, flexible storage systems, and presentation tools come into wider use by teachers and students — electronic portfolios become increasingly viable tools for long-term, systemic student assessment. A number of excellent resources on electronic portfolios have been published recently. For a start, see Dr. Helen Barrett’s work on the subject.

More often, we find teachers implementing electronic assessment tools that are not as broad-based as a full-on electronic portfolio. Rather, we find many instances of teachers creating classroom activities that require technology-infused demonstrations of student learning. For example, a teacher may ask students to create a multimedia presentation or a website as the "final product" of a particular curriculum unit. These products or demonstrations do not necessarily become part of a larger portfolio of student work that follows the student over multiple years, but they do lend a measure of authenticity to the assessment of learning that some more traditional assessments lack. This use of technology as a tool for presenting and demonstrating learning appears to be the thrust of much current technology integration work; the curriculum ideas available from online resources such as Marco Polo or Blue Web’n bear this out.

Assessing the Impact of Technology

One intersection of technology and assessment that teachers often overlook is the assessment of how students use technology as a tool for those final products and presentations. Likewise, teachers often do not assess students’ use of technology for information acquisition and analysis. This is particularly noteworthy because it is precisely this sort of assessment that touches upon the other major theme of our exploration: assessing the impact of technology as a tool for student learning.

For example, how does a teacher determine whether a student has effectively used technology as a research tool? Further, how does a teacher determine that a technology tool is in fact an improvement over a more traditional tool for accomplishing a similar task? What benchmarks or indicators does a teacher employ to assess the effectiveness of student technology use? One approach to this sort of assessment is an essential component of WebQuests. Many sites cover the development of WebQuests in detail (in particular, see Bernie Dodge’s WebQuest Page at http://webquest.sdsu.edu/webquest.html), but note that the final element of any WebQuest is the evaluation or assessment component. Most WebQuest authors create a rubric for student assessment, and usually this rubric is carefully keyed not just to what technology a student uses (e.g., visiting a certain number of websites), but also to how the student uses it. For example, a rubric might attempt to assess the depth of a student’s understanding of online materials central to a key component of the Quest. In this way, the teacher has created a rubric that qualitatively examines the impact of a technology tool (or tools) on student learning. An excellent resource for investigating and developing performance rubrics is HPR*TEC’s RubiStar.

Assessing Technology Initiatives

Considered at the classroom, school, and district levels, this same sort of qualitative assessment turns out to be very useful in measuring the impact of a technology initiative. Indeed, this sort of technology assessment or evaluation is a critical component of the reauthorized federal Elementary and Secondary Education Act (ESEA), the No Child Left Behind (NCLB) legislation. A number of NCLB’s performance indicators pertain to assessing the impact of programs funded through the law. Does this affect you? It certainly does if you plan to use federal funds — which might pass through your state as district grants — to fund technology initiatives. For example, funds previously labeled "TLCF" funds are now part of ESEA, and when your district receives these funds, it will be accountable for showing how they resulted in improved student achievement.

A critical feature of NCLB’s accountability requirements is that the federal government does not specify exactly what constitutes "improved student achievement." Rather, the Feds leave it to the states to set these standards. Nevertheless, all states are expected to set standards, and all districts receiving ESEA funds will need to show that they are meeting them.

This federally mandated (and state-interpreted) accountability effort has major implications for districts as they consider technology planning and implementation. In the past, districts could get by with a "we promise to do it" attitude toward technology evaluation. Most often, if a district developed an evaluation plan for technology at all, that plan related primarily to counting computers and the frequency with which classroom and laboratory computers were used. In this new age of technology planning, districts will be expected to provide actual benchmarks and indicators for how technology is having an impact on teaching and learning. In other words, districts will now be accountable for showing not just that technology is being used, but what it is being used for and to what ends.

This brings us full circle in our discussion of technology evaluation and assessment. We started with an examination of the relationship between curriculum, instruction, and assessment. In that framework, it is clear that assessment is relevant only in terms of standards (curriculum) and methods of delivery (instruction). The same holds true for technology assessment: it only makes sense to evaluate technology in terms of what it is supposed to do within the context of curriculum and instruction. For example, "counting" how often a computer is used is meaningless unless there is some standard for the sorts of student learning or teacher use that computer is supposed to support. In our work around the country, we continually make this point. Often we are asked to perform a technology "audit" to address concerns that "the students are not using the computers." In such cases, one of our usual first findings is that the district has done little to set student and/or teacher expectations for technology use, and that there has been little reflection on how technology can best support teaching and learning. In short, without standards, there can be no assessment.

Much of Sun Associates’ work involves helping districts establish meaningful assessments of instructional technology use and impact. Sometimes we start with a district’s desire to audit, or get a snapshot of, the current level of technology use and integration in its classrooms. Other times we start with a district’s interest in creating truly strategic goals for instructional technology. In either case, we ultimately come around to working with the district to create a set of benchmarks or rubrics for measuring success in meeting technology-related expectations. This work establishes credibility and accountability for the district’s technology effort. It also inevitably leads to serious reflection on what the district wants from its technology implementation effort. When technology is viewed as a tool for reformed learning, this reflection can only help inspire a broader reflection on district-wide learning goals.
