Information and Communication Technology (ICT) Literacy:
Integration and Assessment in Higher Education
Irvin R. KATZ
Educational Testing Service
Princeton, NJ 08541 USA
Alexius Smith MACKLIN
Purdue University
West Lafayette, IN 47907 USA

ABSTRACT
Despite coming of age with the Internet and other technology,
many college students lack the information and communication
technology (ICT) literacy skills—locating, evaluating, and
communicating information—necessary to navigate and use the
overabundance of information available today. This paper
presents a study of the validity of a simulation-based
assessment of ICT literacy skills. Our overall goal for the
assessment is to support ICT literacy instructional initiatives at
colleges and universities.
Keywords: Higher Education, ICT Literacy, Information
Literacy, Instructional Initiatives, Psychometrics, Validity
INTRODUCTION
Discussions of Information Technology in Education
typically emphasize the Technology rather than the
Information. Widespread technology has meant that
people encounter more information, in a greater variety of
formats, than ever before. Technology is the portal
through which we interact with information, but people’s
ability to handle information—to solve problems and
think critically about information—tells us more about
their future success than their knowledge of specific
hardware or software. These skills—known as
Information and Communications Technology (ICT)
Literacy—comprise a 21st century form of literacy, in
which researching and communicating information via
digital environments are as important as reading and
writing were in earlier centuries.
ICT literate students master content faster, are better
problem-solvers, become more self-directed, and assume
greater control over learning [1]. Beyond the classroom,
ICT literacy is essential for being productive citizens in a
knowledge-driven society [16], and employers want their
employees to have these skills [6]. As a result, college
and university administrators are beginning to require
them as competencies for graduation. This focus has led
to campus-wide initiatives (e.g., [3], [15]) to improve
students’ ICT literacy.
However, there are several challenges to designing and
implementing effective ICT literacy instruction. First,
students in higher education often believe themselves to
be competent users of information resources because of
their daily interactions with the Internet [13]. This can
lead to disinterest in learning skills to improve their use
of search engines and electronic research databases.
Second, the ease of moving between social and
academic environments using the same technology can
disrupt classroom activity. For example, anecdotal
evidence suggests that students receiving ICT literacy
instruction in a computer lab frequently disengage and go
off-task: they read their email, instant message their
friends, play games, or search for something of interest to
them. These behaviors indicate that current instructional
strategies neither meet students' perceived needs nor
deliver the material in an engaging way. Finally, without
effective assessment it is difficult to know if instructional
programs are paying off – are students’ ICT literacy skills
improving? Educators who accept the challenge of
teaching ICT literacy skills must be prepared to:
• Find a strategy to reach the user who believes she is
already proficient
• Make the learning relevant to the user’s needs,
including using the technologies the student already
knows, to anchor the learning in something familiar
• Create active learning opportunities to keep the
students on task
• Assess the impact of instruction on student-learning
outcomes
This paper describes the ICT Literacy Assessment,
developed by Educational Testing Service (ETS), an
Internet-based assessment of ICT literacy skills. The
assessment was designed to support instructional efforts
in ICT literacy by providing data on students’ skills that
can help inform decisions for instituting and evaluating
information literacy programs.
ETS ICT LITERACY ASSESSMENT
In January 2001, ETS convened an International ICT
Literacy Panel to study the growing importance of
existing and emerging information and communication
technologies and their relationship to literacy. The
members agreed that little was being done to address
critical ICT literacy skills in higher education [7]. In
response, a consortium of experts in ICT literacy
assembled to advise ETS test developers in the design
of an Internet-delivered assessment that measures
students’ abilities to research, organize, and communicate
information using technology [9].
The assessment focuses on the cognitive problem solving
and critical thinking skills associated with using
technology to handle information. As such, scoring
algorithms target cognitive decision-making, rather than
technical competencies [8]. The assessment measures
ICT literacy through seven performance areas, which
represent important problem-solving and critical thinking
aspects of ICT literacy skill (Table 1).
Table 1: Components of ICT literacy (from [5])
Define: Using digital tools to identify and represent an information need
Access: Collecting and/or retrieving information in digital environments
Manage: Using digital tools to apply an existing organizational or classification scheme for information
Integrate: Interpreting and representing information, such as by using digital tools to synthesize, summarize, compare, and contrast information from multiple sources
Evaluate: Judging the degree to which digital information satisfies the needs of an information problem, including determining authority, bias, and timeliness of materials
Create: Adapting, applying, designing, or constructing information in digital environments
Communicate: Disseminating information relevant to a particular audience in an effective digital format
Figure 1. Students demonstrate their skills at handling information through interaction with simulated software. In this
example task (designed to take about five minutes), students develop a search query as part of a research assignment on
earthquakes. Figure is © 2007, Educational Testing Service. All rights reserved.
Students solve information-handling tasks in the context
of simulated software (e.g., email, web browser, library
database). The interactive tasks come in two lengths, five
minutes and 15 minutes, and each uses simulated software
with the look and feel of typical applications. The five-minute
tasks target a single proficiency, whereas the 15-minute tasks
comprise more complex scenarios. The simpler tasks
contribute to the overall reliability of the assessment,
whereas the more complex tasks focus on the richer
aspects of ICT literacy performance.
In the assessment, a student might encounter a scenario
that requires her to access information from a database
using a search engine (Figure 1). Her actions are tracked
and her strategies scored based on how she searches for
information, such as her choice of key words and whether
she sequentially refines her searches. Her proficiency is
estimated from her ability to identify how well the returned
information meets the needs of the task.
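As a concrete illustration, the sketch below shows one way a rubric for such a search-query task could be scored automatically. The term lists, point values, and function name are hypothetical, chosen only to mirror the earthquake example in Figure 1; this is not ETS's actual scoring algorithm.

```python
# Hypothetical sketch of rubric-based scoring for a search-query task like
# the one in Figure 1. The term lists, point values, and function name are
# invented for illustration; this is not ETS's actual scoring algorithm.

RELEVANT_TERMS = {"earthquake", "earthquakes", "magnitude", "seismic", "fault"}
OVERLY_BROAD_TERMS = {"science", "disaster", "nature", "information"}

def score_query(query: str, previous_queries: list) -> int:
    """Score one search query on a simple 0-3 rubric."""
    terms = set(query.lower().split())
    score = 0
    if terms & RELEVANT_TERMS:           # chose topic-relevant key words
        score += 1
    if not terms & OVERLY_BROAD_TERMS:   # avoided overly broad terms
        score += 1
    if previous_queries and terms != set(previous_queries[-1].lower().split()):
        score += 1                       # refined the search instead of repeating it
    return score

# A refined second query scores higher than an initial, overly broad one.
history = ["nature disaster"]
print(score_query("nature disaster", []))                       # -> 0
print(score_query("earthquake magnitude california", history))  # -> 3
```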
The real-world, scenario-based simulation tasks represent
a critical aspect of our assessment approach. Knowledge
gained about information retrieval and use within an
authentic setting, such as a computer lab where students
are working on a research project or assignment, or a
workplace environment where employees are trying to
solve an information-based problem, is more useful than
knowledge generated about information seeking
behaviors from outside of an authentic context [10].
ICT LITERACY ASSESSMENT SCORES AND
ICT LITERACY SKILLS
Before using an assessment to support instructional
initiatives, there should be evidence of its validity. In this
section, we present an investigation into the validity of
the ICT Literacy Assessment: the extent to which scores
on the assessment reflect students’ ICT literacy skills. A
common approach to validating an assessment is to
administer the assessment and other measures to a sample
drawn from the population of interest (e.g., college
students). Convergent validity is supported if assessment
scores correlate with other measures expected to be
related to ICT literacy. Discriminant validity is supported
if scores do not correlate with measures thought to be
distinct from ICT literacy. In this study, comparison
measures were developed from questionnaires
administered to test-takers before they completed the ICT
Literacy Assessment.
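As an illustration of this correlational design, the sketch below computes convergent and discriminant correlations from a hypothetical data file; the file name and column names are placeholders, not the study's actual data or analysis code.

```python
# Minimal sketch of the convergent/discriminant correlation check described
# above. The file name and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("ict_validity_sample.csv")   # assumed: one row per student

convergent = ["confidence_ict", "skills_course_tech", "figured_out_on_own"]
discriminant = ["frequency_ict"]

for measure in convergent + discriminant:
    r, p = pearsonr(df["ict_score"], df[measure])
    kind = "convergent" if measure in convergent else "discriminant"
    print(f"{measure:22s} ({kind}): r = {r:+.2f}, p = {p:.4f}")

# Convergent validity: related measures should show significant positive r.
# Discriminant validity: measures expected to be distinct should show r near zero.
```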
Participants
Participants were 4048 undergraduate students recruited
in January 2005 to take the ETS ICT Literacy Assessment.
The students represented 30 college and university
campuses, primarily in the western United States.
Students were recruited at their local campuses and all
data were collected at the campuses. All but two
campuses recruited using a convenience sample (e.g.,
campus flyers). Table 2 shows the demographic and
academic characteristics of the participants.
Procedure
Between January and April 2005, the ICT Literacy
Assessment was administered at different campuses, and
so each administration differed on a number of details
such as the time of day, the nature and timing of incentives
(e.g., raffles for iPods, $25 gift certificates), the number of
students within each session, and location of computer
lab on campus. However, certain characteristics remained
consistent. Students first completed a demographic
questionnaire and academic experiences questionnaire
(approximately 30 minutes) before beginning the
assessment (two one-hour sections). All testing sessions
were proctored. If a student did not complete the
assessment within the allotted time, the testing software
stopped the section and asked the student to alert the
proctor, who moved the student to the next section of the
test or to the exit survey. After completing both assessment
sections, students completed an on-line survey
concerning their experiences in taking the assessment.
Table 2: Characteristics of analysis sample
Gender: Female 2400 (59%); Male 1648 (41%)
Year: Freshman 1261 (31%); Sophomore 625 (15%); Junior 1258 (31%); Senior 904 (22%)
GPA: D or lower 40 (1%); C- 98 (2%); C 336 (8%); C+ or B- 831 (21%); B 1178 (29%); B+ or A- 1157 (29%); A 408 (10%)
Race/Ethnicity: African American 367 (9%); Asian 935 (23%); Hispanic 682 (17%); White (non-Hispanic) 1734 (43%); All others 330 (8%)
Instruments
ETS ICT Literacy Assessment scores. The
purpose of the ICT Literacy Large Scale Assessment
delivered in early 2005 was to describe the ICT literacy
levels of a student population or group in the aggregate
(no individual scores). The assessment was delivered
using a spiraled design, wherein each participant received
tasks that targeted two of the seven proficiencies.
Samples of students at each campus were distributed
evenly across forms. Raw scores for each test form were
separately scaled to a mean of 150 and standard deviation
of 35. To simplify analyses, each student’s score is
treated equally, regardless of the particular test form
received. This equating across forms is justified by
preliminary analyses that showed high (mid .80s) intercorrelations
among the seven ICT literacy proficiency
scores.
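As a rough illustration of the per-form scaling just described, the sketch below linearly rescales one form's raw scores to a mean of 150 and a standard deviation of 35; the actual ETS scaling and equating procedures are likely more elaborate than this simple transformation.

```python
# Rough illustration of the per-form scaling described above: raw scores on
# each test form are linearly rescaled to a mean of 150 and an SD of 35.
# The actual ETS scaling and equating procedures may be more elaborate.
import numpy as np

def scale_scores(raw, target_mean=150.0, target_sd=35.0):
    """Linearly transform a form's raw scores to the target mean and SD."""
    raw = np.asarray(raw, dtype=float)
    z = (raw - raw.mean()) / raw.std()        # standardize within the form
    return target_mean + target_sd * z

form_a_raw = [12, 15, 9, 20, 17]              # made-up raw scores for one form
scaled = scale_scores(form_a_raw)
print(scaled.round(1))
print(scaled.mean(), scaled.std())            # -> 150.0, 35.0 (approximately)
```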
Because of the spiraled design, reliability metrics could
not be calculated. However, in other administrations of
comprehensive test forms (all proficiencies represented)
that contained fewer assessment tasks, Cronbach alpha
reliabilities were .85 and higher.
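For readers unfamiliar with this reliability statistic, the sketch below computes Cronbach's alpha from a generic examinees-by-items score matrix using the standard formula; the item scores are invented and the code is not ETS's.

```python
# Standard Cronbach's alpha computation from an examinees-by-items score
# matrix (generic formula, not ETS-specific code). The item scores below
# are invented purely to exercise the function.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array-like, shape (n_examinees, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

scores = [[1, 1, 1, 0],
          [1, 0, 1, 1],
          [0, 0, 0, 0],
          [1, 1, 1, 1],
          [0, 1, 0, 0]]
print(round(cronbach_alpha(scores), 2))   # -> 0.74 for this toy data
```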
Self-report measures. Three types of self-report
measures were developed from the demographic and
academic experiences questionnaire administered prior to
the ICT Literacy Assessment. Table 4 provides more
details on the measures as well as descriptive statistics.
1. Self Assessment measures gauged students’ reports of
their abilities with activities and skills related to ICT
literacy. Self-assessments have been used both for
academic and workplace competencies as an
alternative to objective testing (e.g., [2]), in
comparison with others’ judgments, and to validate
objective measures (see [14] for a review). Research
on self-assessment measures has revealed moderate
correlations (mid .20s to mid .30s) between self-assessment
and performance measures (e.g., [11],
[12]), although correlations differ by domain and
self-assessment instrument.
2. Self sufficiency measures provide insights into
students’ capabilities for self-directed learning. ICT
literate students identify their own need for
information (e.g., “I need to learn about…”) and can
locate appropriate sources for meeting those needs.
Thus, ICT literate students should be able to take
greater responsibility for their own learning, having
the skills to figure out information problems they
encounter on their own (or, at least, know where to
go to find answers). Several authors posit a
correspondence between ICT literacy and self-directed
learning (e.g., [4]), although we are not
aware of any empirical studies investigating this
connection.
3. Academic performance measures reflect students’
general academic performance (GPA). Any
investigation into the validity of an assessment must
investigate whether the instrument assesses the skills
of interest rather than reflecting only general
academic performance (i.e., good students tend to
score better on a wide range of assessments). Of
course, some connection between ICT literacy and
academic ability is to be expected. For example,
better students might be more likely to recognize the
importance of ICT literacy skills for their academic
and workplace careers.
Results and Discussion
Correlations between the self-report measures and ICT
literacy scores are shown in Table 3. Except for
frequency of ICT literacy activities, all measures
correlate significantly with performance on the ICT
Literacy Assessment, supporting the convergent validity
of the assessment. The correlations are at a level
consistent with research comparing self-report measures
of skills to assessment scores (e.g., [11]). GPA correlated
only weakly with the self-assessment and self sufficiency
measures (not shown, but all rs close to zero). Thus, ICT
literacy confidence and self-sufficiency are each distinct
from academic performance even though, as just stated,
all three measures contribute to ICT literacy skill (i.e.,
correlate with ICT literacy scores).
Table 3. Correlations with ICT Literacy Scores
Measure Name Correlation
Self-Assessments of Skills
Confidence in ICT literacy activities .27***
Frequency of ICT literacy activities -.01
Skills in course technology .29***
Self sufficiency
Figured out problems on own .29***
Asked for help (reverse coded) .26***
No. of ICT literacy skills learned on own .15***
Academic Performance
Overall GPA .23***
***p < .001
The low correlation between ICT literacy scores and
frequency of ICT literacy activities is especially
important. Many students believe they have good ICT
literacy skills because of their frequent interactions with
the Internet. Indeed, there was a strong correlation
between the frequency and confidence scales (r = .67, p
< .001). However, only the students’ confidence in their
skills aligned with their performance on the ICT literacy
assessment, supporting the discriminant validity of the
assessment. This result provides strong support for
instructors’ claims that frequency-of-contact does not
translate to good ICT literacy skills, and points to the
need for ICT literacy instruction.
Table 4. Measures developed from questionnaires

Self-Assessments of Skills

Confidence in ICT literacy activities (Cronbach alpha .94; mean 2.4, SD .4)
Mean response for each of 30 ICT literacy activities: How confident are you in your ability to do this activity? (1=Not Confident to 3=Very Confident)

Frequency of ICT literacy activities (Cronbach alpha .95; mean 3.2, SD .7)
Mean response for each of 30 ICT literacy activities: About how often have you done the activity over the past two years? (1=Never to 4=Very Often)

Skills in course technology (Cronbach alpha .62; mean 3.1, SD .6)
Mean of responses to the following three questions:
At the beginning of [your most recent technology-related course], how familiar were you already with the technology you were to use in the course? (1=Not Familiar to 4=Very Familiar)
In thinking about your experience using the technology in this course, how often did you have trouble using the course technology? [reverse coded] (1=Very Often to 4=Never)
How confident are you that you could effectively use similar course technologies for another course? (1=Not Confident to 4=Very Confident)

Self sufficiency

Figured out problems on own, dummy coded (reliability N/A*; mean .35, SD .5)
When you encountered any types of difficulties or problems using the course technology, from what source did you most often seek a solution? (1=Figured it out myself; 0=all other responses, e.g., faculty, peers)

Asked for help, reverse coded (reliability N/A*; mean 3.4, SD .7)
In thinking about your experience using the technology in this course, how often did you ask for help from others on how to use the course technology? (1=Very Often to 4=Never)

Number of ICT literacy skills learned on own (Cronbach alpha .67; mean 3.6, SD 1.9)
Mean responses for seven ICT literacy skill areas: From what source did you learn the most about this topic over the past two years? (1=On my own; 0=Coursework or other training)

Academic Performance

Overall GPA (reliability N/A*; mean 5.0, SD 1.3)
1=D or lower; 2=C-; 3=C, C+; 4=B-; 5=B, B+; 6=A-; 7=A

*Because this measure consists of a single question, reliability cannot be calculated.
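To show how scale scores like those in Table 4 are typically derived from raw questionnaire responses, the sketch below averages items into a scale, reverse codes a 1-to-4 item, and dummy codes a categorical response; the DataFrame and column names are hypothetical, not the study's data.

```python
# Sketch of how scale scores like those in Table 4 can be built from raw
# questionnaire responses: averaging items, reverse coding, and dummy coding.
# The DataFrame and column names are hypothetical, not the study's data.
import pandas as pd

raw = pd.DataFrame({
    "conf_item_1": [3, 2, 1],                  # 1-3 confidence ratings
    "conf_item_2": [3, 3, 2],
    "item_to_reverse": [2, 4, 1],              # a 1-4 item whose direction needs flipping
    "help_source": ["self", "faculty", "peer"],
})

measures = pd.DataFrame()
# Scale score: mean of the confidence items.
measures["confidence_ict"] = raw[["conf_item_1", "conf_item_2"]].mean(axis=1)
# Reverse coding on a 1-4 scale: new value = (4 + 1) - old value.
measures["item_reversed"] = 5 - raw["item_to_reverse"]
# Dummy coding: 1 if the student figured problems out alone, else 0.
measures["figured_out_on_own"] = (raw["help_source"] == "self").astype(int)
print(measures)
```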
CONCLUSIONS
This study provides some evidence for the convergent and
discriminant validity of the ETS ICT Literacy Assessment,
paving the way for its use to evaluate instructional
programs on ICT literacy. In current work, we are
assessing the effectiveness of an innovative ICT literacy
instructional method by comparing student performance
on the ICT Literacy Assessment before and after
instruction. Our overall goals are to understand how first-year
students acquire information-processing skills,
identify best practices for integrating information literacy
into the curriculum, and assess the impact of skill
acquisition on overall academic achievement.
REFERENCES
[1] American Association of School Librarians &
Association for Educational Communications and
Technology (1998). Information literacy for
student learning: Standards and indicators.
Retrieved June 5, 2006 from
http://www.ala.org/ala/aasl/aaslproftools/informationpower/InformationLiteracyStandards_final.pdf
[2] Anaya, G. (1999). College impact on student
learning: Comparing the use of self-reported gains,
standardized test scores, and college grades.
Research in Higher Education, 40(5), 499-526.
[3] The California State University (2006).
Information competence initiative web site.
Retrieved June 4, 2006 from
http://calstate.edu/ls/infocomp.shtml.
[4] Candy, P. C., Crebert, G., & O’Leary, J. (1994).
Developing lifelong learners through
undergraduate education. National Board of
Employment, Education and Training. Sydney:
Australian Government Publishing Service.
[5] Educational Testing Service (2003). Succeeding in
the 21st century: What higher education must do to
address the gap in information and communication
technology proficiencies. Princeton, NJ: Author.
[6] Herman, A. M. (2000, April 11). A skills shortage,
not a worker shortage. Remarks at the National
Skills Summit. Washington, D. C.: U.S.
Department of Labor.
[7] The International ICT Literacy Panel (2002).
Digital transformation: A framework for ICT
literacy. Princeton, NJ: Educational Testing
Service.
[8] Katz, I. R. (2005). Beyond technical competence:
Literacy in information and communication
technology. Educational Technology Magazine,
45(6), 44-47.
[9] Katz, I.R., Williamson, D.M., Nadelman, H.L.,
Kirsch, I., Almond, R.G., Cooper, P.L., Redman,
M.L., & Zapata-Rivera, D. (June, 2004). Assessing
information and communications technology
literacy for higher education. Paper presented at
IAEA, Philadelphia, PA.
[10] Lave, J., & Wenger, E. (1991). Situated learning:
Legitimate peripheral participation. Cambridge
University Press.
[11] Love, K. G., & Hughes, F. V. (1994). Relationship
of self-assessment ratings and written test score:
Implications for law enforcement promotional
systems. Public Personnel Management, 23(1), 19-
30.
[12] Mabe, P. A., & West, S. G. (1982). Validity of self-evaluation
of ability: A review and meta-analysis.
Journal of Applied Psychology, 67, 280-296.
[13] Macklin, A., & Fosmire, M. (2003). Becoming an
information leader at Purdue University. College
and Research Library News, 64, 192-195.
[14] Powers, D. (2002). Self-assessment of reasoning
skills. ETS Research Report No. RR-02-22.
Princeton, NJ: Educational Testing Service.
[15] University of Central Florida (2006). Information
fluency initiative web site. Retrieved June 4, 2006
from http://www.if.ucf.edu.
[16] Zurkowski, P. G. (1974). The information service:
Environment relationships and priorities.
Washington, D. C.: National Commission on
Librarians and Information Science.
