For a More Robust Evaluation of 1 to 1 ICT for Education Adoption Projects


The rapid change of information and communication technology (ICT) increases the challenge of determining how best to evaluate proficient use of these technological advances and their impact on learning. Through an overview of different initiatives, this paper illustrates the benefits of implementing a mixed-methods approach and of analyzing projects over a prolonged period of time. Looking at a program over a longer timeframe can make us more aware of the impact the program has on an individual and a community. Mixed methods allow us to analyze a program in various ways, studying variables that are measurable and generalizable as well as elements that are specific to a particular situation. By incorporating these elements into evaluation studies, we can increase the quality and usability of the reports generated. To illustrate the benefits of mixed methods and the continued analysis of a project, this paper discusses the 1 to 1 iPad project at the University of Minnesota.

Rapid Rate of Change – A Relevant Characteristic of ICT for Education Projects

It was only a few decades ago, in 1978, that top MIT computer scientists had reservations about the usability of the personal computer for tasks such as an address book or a personal calendar (Tippett & Turkle, 2011). Today, universities in the United States increasingly consider remodeling their computer labs, as almost all college students (89.1% at the University of Minnesota in 2009) bring their own laptops to campus (Walker & Jorn, 2009). The share of students bringing their laptops to college increased from 36% in 2003 to 83% in 2008 (Terris, 2009).

The rapid improvement of technology results in the rapid depreciation of gadgets and contributes to the difficulty of evaluating them. The increased computational power and capabilities of these tools have encouraged educational institutions and other industries to adopt them. The spread of information and communication technologies (ICTs) has decreased the cost of transferring data and increased workers’ potential productivity (Friedman, 2007). Influential ICTs such as the mobile phone, the television, the internet, and the radio have greatly expanded the quantity of information available to individuals. The economic benefits from improvements in information and data transfer have led to increased investment. There has also been growing interest in digital literacy as a necessary skill for the 21st century (Flannigan, 2006; Jenkins et al., 2006). While not all of the changes brought by increased access to technology are positive, greater access to information and the rapid improvement of these technologies have had a major impact on society (Carr, 2011; Kurzweil, 2000). Unlike traditional fields such as mathematics or history, where most basic concepts have remained unchanged, new media and their prevalence in society have changed substantially over the past few decades, and with them the difficulty of evaluating these projects has grown. Mobile subscriptions alone increased from fewer than 3 billion in 2006 to 5.9 billion in 2011 (ITU, 2012).

This rapid change makes it difficult to determine the essential skills a learner will need in the workplace of tomorrow (Cobo & Moravec, 2011). With hundreds of thousands of computer applications and many types of hardware, some highly complex, it can take a person a significant amount of time to become adept in any one of them. A high level of specialization is often the norm, as using a complex program successfully may require, for example, a degree of mastery of statistical analysis or qualitative research methods. Similarly, tools such as Adobe Photoshop or the Python programming language have considerable learning curves, and entire courses are devoted to teaching them. Being a specialist in a particular program can lead to a very successful career, but mastering even a single program can take hundreds of hours of practice. While it may take 10,000 hours to become a successful reporter, violinist, or writer (Gladwell, 2008), ICTs encompass thousands of possibilities, each requiring a different amount of time to reach proficiency, from unique musical instruments to new ways of writing (via text message or Twitter).

Understanding this rapid change is important in evaluating ICT adoption programs because it influences what we consider effective use of these technologies by the general population. Texting, for example, is increasingly common and is considered by some experts to be a nascent dialect (Thurlow & Brown, 2003). How important is it to know how to send texts and use a mobile phone effectively in the 21st century? Such questions are hard to answer because a technology may be displaced within a few years. The rapid change of technology complicates how we measure digital literacy and, through it, the effectiveness of 1 to 1 adoption and usability programs. These complications are at times difficult to perceive because of generational differences between evaluators and younger users (Prensky, 2001).

Today young adults (18-24) send an average of 109.5 text messages a day, or roughly 3,200 a month, and many of them prefer communicating by text message over email. Email, itself a relatively recent invention, is already considered old-fashioned and impractical by some (Smith, 2011). With this in mind, does an individual’s capacity to use email effectively remain a 21st-century digital literacy requirement? While the International Society for Technology in Education (ISTE) has developed ICT for education standards that can aid the evaluation of technology adoption programs (ISTE, 2008), these standards emphasize broad competencies and must be operationalized to reflect the distinctiveness of each 1 to 1 ICT program.

In this essay I propose evaluating 1 to 1 technology projects over a long period in order to assess their impact on individuals over time. One of the key advantages of 1 to 1 initiatives is that participants are able to take the devices home. It is easier to become proficient with a device that one can use at home than with one whose use is limited to the classroom. As Seiter (2008) argues, “[t]here is an overestimation of access to computers in terms of economic class, and an underestimation of specific forms of cultural capital required to maintain the systems themselves and move beyond the casual, recreational uses of computers to those that might lead directly to well-paid employment” (p. 29). If Seiter is right, most of the economic benefits of ICTs come from their long-term use.

ICT investment can be expensive, and many ICT projects could not be developed without the support of private industry and government (Heshmati & Addison, 2003). While ICT may not be as important as basic education, food, and health services, governments around the world have spent large amounts on ICT for education initiatives, hoping to imitate the success many advanced economies have obtained from their ICT industries and byproducts (MSC, 1996). “Investment in ICT infrastructure and skills helps to diversify economies from dependence on their natural-resource endowments and offsets some of the locational disadvantages of landlocked and geographically remote countries” (Heshmati & Addison, 2003, p. 5).

Adequately evaluating 1 to 1 technology adoption initiatives is increasingly important, as different education interventions have different cost-effectiveness and cost-benefit ratios, with some interventions being much more effective than others (Yeh, 2011). Working with limited resources, governments must administer their funds in the best possible way to enable their citizens to meet their various needs, from food and shelter to self-actualization. That one intervention is more cost-effective does not mean another should necessarily be discarded. As Maslow (1943) suggested, many needs can, and should, be met simultaneously. An improvement in one area of life, such as shelter, does not occur in a vacuum and is not separate from the individual’s desire to feel accepted by others or to improve their problem-solving ability (ibid.). Investing in ICT is important for states as they move towards becoming economically diverse, robust, and more competitive, relying more on their human capital than on their natural resources. To evaluate these projects more precisely, this paper encourages evaluators to use mixed methods with a long-term perspective.

Evaluating Information and Communication Technology Projects

Evaluation can help increase the effectiveness of programs and improve the distribution of the limited resources available to a society. The decisions informed by an evaluator can affect the lives of many individuals. Evaluators can help improve a program as well as inform the decision of whether the program should be continued (Fitzpatrick et al., 2011). Discussing the methodology of evaluation, Scriven (1967) differentiated between formative evaluation (focused on development and improvement) and summative evaluation (focused on whether the program is meeting its stated goals). By commissioning an evaluation, a decision-making body is able to make an informed decision about the future of a program. Yet when dealing with complex programs with many moving parts, it is difficult to frame an evaluation so as to obtain the most valuable information, particularly when there is limited time to conduct it, and when the brevity of a report can be one of its strengths (Krueger, 1986). Different methods provide valuable lenses through which to look at a problem, frames the evaluator should consider before beginning an evaluation.

Possibly the most important elements to consider in a 1 to 1 ICT project are its cost and its use by learners. The best-known 1 to 1 initiative is the One Laptop Per Child (OLPC) program, which has delivered hundreds of thousands of units (http://one.laptop.org/). Yet at over $100 per student, it could cost $500 billion to provide a computer to every person who currently lacks access to the internet worldwide (roughly 5 billion people), not counting ongoing maintenance and electricity costs or the cost of internet access. Is access to ICT really that important? According to a recent UNESCO publication, while 1 to 1 laptop projects are very costly, in Latin America “in the last three years, the 1:1 model has become increasingly widespread, and 1:1 programmes are now the primary focus of national policies for ICT in education in the region. Policy-makers are no longer discussing whether the 1:1 model is worthy of investment but rather how best to achieve it” (Lugo & Schurmann, 2012).

While a price tag of $100 may appear an expensive investment for developing countries, especially when some countries spend less than $100 per student per year in their educational budgets, it is important to remember that all programs have costs, even when those costs are not financial. Even 1 to 1 programs that are “free” (through donations) have costs, including e-waste disposal. Even programs based on volunteer effort carry, at minimum, an opportunity cost for instructors and learners. The cost of a program can be assessed most effectively by measuring its different ingredients. This allows programs to be quantified, various elements to be weighted, and, as a result, programs to be compared through a cost-effectiveness analysis (Levin, 2001). The financial benefit of a program can also be determined through a cost-benefit analysis. Through a qualitative study, “thick”, rich descriptive information can be obtained and thematically organized, helping key stakeholders to better understand elements that would otherwise go unnoticed (Geertz, 1973).
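
To make the ingredients approach concrete, the sketch below (in Python) totals hypothetical per-student costs for a 1 to 1 program and divides them by a hypothetical effect size to produce a cost-effectiveness ratio. It is a minimal illustration of the logic of Levin’s (2001) framework, not a reproduction of any actual program’s figures; every ingredient name and number is invented.

```python
# Illustrative sketch of the ingredients method (Levin, 2001).
# All figures are hypothetical and for demonstration only.

ingredients = {
    "hardware (annualized)": 70.0,      # per student, per year
    "maintenance and repairs": 12.0,
    "connectivity": 20.0,
    "teacher training": 15.0,
    "technical support staff": 18.0,
    "facilities and electricity": 5.0,
}

cost_per_student = sum(ingredients.values())

# Suppose the evaluation measured a gain of 0.05 standard
# deviations in achievement (a hypothetical effect size).
effect_size = 0.05

# Cost-effectiveness ratio: dollars per standard deviation of gain.
ce_ratio = cost_per_student / effect_size

print(f"Total cost per student per year: ${cost_per_student:.2f}")
print(f"Cost-effectiveness: ${ce_ratio:.2f} per SD of achievement gain")
```

A competing intervention with a lower ratio would deliver the same gain at a lower cost, which is precisely the comparison the ingredients method enables.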

Programs can also be mapped through a logic model, which can include inputs, activities, outputs, and outcomes (Alter & Murty, 1997; McLaughlin & Jordan, 1999). The order in which the elements of a program are implemented, and the context, may also influence its results. There are also likely to be competing program alternatives, some of which may be more effective than the program being considered. Hoping to increase the transferability or generalizability of a study, an evaluation can also be theory-driven (Weiss, 1997). These and other elements can improve the quality and usability of the data obtained by an evaluation. However, with limited time and resources, the methodology used to evaluate a program depends on both the strengths of the researcher and what key stakeholders consider of principal importance.

Over time, every practicing evaluator becomes, or is in the process of becoming, a “connoisseur” (practicing the art of appreciation) as well as a “critic” (practicing the art of disclosure) (Eisner, 1994, p. 215). This knowledge allows him or her to recommend to key stakeholders the most effective methods of evaluation to pursue in a particular scenario. However, the interests of secondary stakeholders are also important in many ICT adoption programs.

The Relevance of Mixed Methods and Triangulation

“The underlying rationale for mixed-methods inquiry is to understand more fully, to generate deeper and broader insights, to develop important knowledge claims that respect a wider range of interests and perspectives” (Greene & Caracelli, 1997, p. 7).

 

Mixed methods can greatly benefit a study, as they allow the researcher to ask questions that he or she might otherwise ignore, obtaining additional information. While “purists” oppose the use of mixed methods due to potential epistemological and ontological contradictions, many evaluators take a more “pragmatic” approach (Greene et al., 1989). One concern regarding the use of mixed methods is that they may compromise the methodological integrity of an experimental study. These are valid concerns, and it is important to consider carefully how methods are being combined, to avoid unintended conflicts. Some of the theoretical objections raised by researchers may not apply as strongly to evaluators, since evaluators do not share all of researchers’ goals. While researchers focus to a greater extent on theory, generalizability, and transferability, many evaluators focus on utilization and on the practical implications of their analysis for their key stakeholders and the future of the program (Patton, 2007). To the “pragmatist” evaluator, “philosophical assumptions are logically independent and therefore can be mixed and matched, in conjunction with choices about methods, to achieve the combination most appropriate for a given inquiry problem. Moreover, these paradigm differences do not really matter very much to the practice” (Greene et al., 1989, p. 8).

Mixed methods often refers to the combination of methods from different paradigms, pairing a qualitative method, such as unstructured interviews or participant observation, with a quantitative method, such as academic achievement scores or another statistical measure, within the same study (Johnson & Onwuegbuzie, 2004). While it seems beneficial to analyze a problem in multiple ways, experts in both qualitative and quantitative methods have expressed concerns about this approach. Johnson and Onwuegbuzie (2004) argued that some of these “purist” concerns stem from a “tendency among some researchers to treating epistemology and method as being synonymous,” which is not necessarily the case (p. 15). They argue instead for a contingency theory approach to research, which holds that while no method is inherently superior, there are instances when one is preferable to another.

One of the biggest benefits of using mixed methods is that they allow for the triangulation of findings. According to Denzin (1978), triangulation is “the combination of methodologies in the study of the same phenomenon” (p. 291). Denzin describes four types of triangulation: data triangulation, investigator triangulation, theory triangulation, and methodological triangulation, each possible within methods or between methods (ibid.). The ways in which methods are mixed vary: sometimes all methods carry the same weight, while at other times one method holds preeminence. Triangulation is a common way to strengthen the generalizability and transferability of a study and the strength of its claims. Other benefits of using mixed methods include complementarity, where the results of one method are clarified by another; development, where one method informs the other; expansion, which seeks to increase the scope of a methodology; and initiation, which seeks the discovery of paradox by recasting results or questions from one method to another (Greene et al., 1989). Regardless of the initial results, this approach usually provides richer data. Comparisons between the data sources can lead to “convergence, inconsistency, or contradiction” (Johnson et al., 2007, p. 115).

If there is a conflict or an inconsistency within the data, establishing a causal relationship becomes more difficult, and the project may require further study and explanation. That explanation can be provided through structural corroboration, further analysis, or by sharing both sets of findings with the key stakeholder, who can then use both pieces of information to make decisions (Eisner, 1994). While most evaluators feel a responsibility to provide recommendations to stakeholders, these recommendations do not necessarily have to address the problem scientifically; rather, a “connoisseur” may state that, based on his or her experience, a particular path may be the best one to follow. ICT adoption includes many invisible elements, which increases the difficulty of evaluating it (Cobo & Moravec, 2011). Because of this complexity, it is helpful for the evaluator to share his or her opinion as a “connoisseur”. Social programs are generally complex; by providing key stakeholders with a focused report that emphasizes the main findings of a mixed-methods evaluation, evaluators make it more likely that stakeholders will reach a sound formative or summative decision. As will be illustrated, this was an objective pursued by the 1 to 1 iPad initiative at the University of Minnesota.

Encouraging the Long-Term Study of ICT Projects

The limited timeframe of a study can result in a restricted analysis. Iterative formative evaluations allow key stakeholders to continually reevaluate ways to improve a program (Mirijamdotter et al., 2006). Iterative and continuous evaluations are very important for internet-based companies. Google, for example, is known to regularly test new algorithms and versions of its search engine simultaneously to obtain usability comparisons, trying hundreds of variations a year in an attempt to improve its product (Levy, 2010). Similarly, many ICT adoption projects include an iterative process in their analysis; yet in discussing their findings, evaluations regularly omit the potential long-term benefits of the programs, focusing instead on short-term costs and benefits. While evaluations of 1 to 1 laptop programs face time constraints and financial limitations, they would benefit from more attention to measuring the long-term benefits of the interventions, including gains in cultural capital (Seiter, 2008).

Methodologies such as longitudinal studies, ethnographic research, and time series are among those that can help illustrate the potential benefits of the long-term analysis of an intervention. Some of these studies can be very expensive, but they allow for the observation of changes that would otherwise go unnoticed. Another example of the possibilities of looking at change over time was recently made possible by the Google Books Ngram Viewer (http://books.google.com/ngrams), which allows word frequencies to be analyzed over a span of 200 years. This type of study, called culturomics, is one of the newest ways in which analysis over time provides additional insight into an issue (Michel et al., 2010). While the Ngram Viewer itself is not very useful for evaluators, other forms of longer-term analysis can be of greater support.

Ethnography is a field of study in which time spent in the field is an important component of validity. Ethnographers focus primarily on the quality of the data, and validity increases when the researcher has lived in a community for a longer time and, in so doing, has obtained a greater understanding of the local culture. Some of the subtleties ethnographers analyze require time and involvement to be discovered. To some researchers, ethnography implies a study that takes more than a year (Fuller, 2008); however, some projects may last a single long day, while other “projects are developed throughout the whole of a researcher’s life; an ethnography may become a long, episodic narrative” (Jeffrey & Troman, 2004). In quantitative analysis, time series, as the name implies, also emphasize the importance of collecting data over time. Such statistical data can be collected at various intervals: monthly for unemployment data, daily for financial exchange rates, or even every two seconds for EEG brainwave activity. A commonly used and informative time series is population census data, which many countries collect at regular intervals to help their governments better understand broad demographic changes, migratory patterns, and the future outlook of various indicators (Zhang & Song, 2003).
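
As a minimal illustration of how such interval data might be organized, the following sketch (in Python, using the pandas library) constructs an invented monthly series of average device-usage hours for a hypothetical 1 to 1 program and smooths it with a six-month rolling mean to expose the longer-term trend. None of the values come from a real program.

```python
# Hypothetical monthly time series of average device-usage hours
# per student in a 1 to 1 program; all values are invented.
import pandas as pd

months = pd.date_range(start="2010-09-01", periods=24, freq="MS")
usage_hours = pd.Series(
    [5, 9, 12, 11, 14, 15, 13, 16, 18, 17, 16, 19,
     21, 22, 20, 23, 25, 24, 26, 27, 25, 28, 29, 30],
    index=months,
)

# A rolling mean smooths month-to-month noise, making the
# longer-term trend easier to see.
trend = usage_hours.rolling(window=6).mean()
print(trend.dropna().round(1))
```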

Longitudinal studies can also be very helpful in understanding how an intervention at an early stage of a person’s development influences them throughout the rest of their life. Various longitudinal studies have been conducted in early education, including interventions in pre-natal care, youth reading programs, and the observation of children as they grow older, among many others. One of the most famous longitudinal studies in education was the Student/Teacher Achievement Ratio (STAR) Tennessee class size reduction study, which began in 1985 and continued until 1999 (Finn & Achilles, 1999; Hanushek, 1999). The study tracked students who were assigned at random to kindergarten classes of 13 to 17 students or to larger classes of 22 to 26. Over 6,000 students took part; participants remained in their assigned class sizes for four years, and monitoring continued after the intervention ended. The study found statistically significant gains in student achievement on the three measures used (see the sketch below for how such effects are commonly quantified). The conclusions of this study strengthened claims regarding the positive impacts of class size reduction, encouraging the enactment of class-size reduction policies in California (1996) and other states. While later studies have contradicted its findings, its experimental design, its magnitude, and its longitudinal analysis strengthened its claims. A number of important longitudinal studies in early childhood and other early interventions have followed children’s development for decades (NCES, 2010).
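
For readers unfamiliar with how such achievement differences are typically quantified, the sketch below computes a two-sample t-test and a standardized effect size (Cohen’s d) on invented score data. It illustrates the general technique only; it uses none of the actual STAR data.

```python
# Illustrative effect-size calculation on invented achievement
# scores (not the actual STAR data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
small_class = rng.normal(loc=52.0, scale=10.0, size=300)    # hypothetical
regular_class = rng.normal(loc=50.0, scale=10.0, size=300)  # hypothetical

t_stat, p_value = stats.ttest_ind(small_class, regular_class)

# Cohen's d: mean difference divided by the pooled standard deviation.
pooled_sd = np.sqrt((small_class.var(ddof=1) + regular_class.var(ddof=1)) / 2)
cohens_d = (small_class.mean() - regular_class.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```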

Another popular long-term longitudinal study is the British Up series, which has followed a group of 14 children since 1964, when they were seven years old, and is still in production. Similar documentaries have been produced in Australia (since 1975), Belgium (1980-1990), Canada (1991-1993), the Czech Republic (since the 1980s), Germany (1961-2006), Denmark (since 2000), Japan (since 1992), the Netherlands (since 1982), South Africa (since 1982), Sweden (since 1973), the USSR (since 1990), and the USA (since 1991). While these long-term studies can be expensive to conduct, they provide a different dimension to findings, a dimension that is rarely available in 1 to 1 technology adoption evaluations.

The key benefit of including this dimension in an evaluation derives from the difficulty of knowing how the skills obtained from using new ICT devices will give an individual the confidence and background needed to develop future ICT competencies that may serve them in the job market. Will familiarity with ICT at an early age bring broader benefits later in life? A short-term outlook may, at times, provide a negatively skewed view of the impact of these projects, expecting more from a pilot project than is realistic. In addition, it is common for program designers to overstate the potential outcomes of a project, expecting a greater impact than is likely possible. For example, as an evaluation of USAID basic education projects (1990-2005) showed, most of the projects produced less than a 4% increase in student achievement scores, despite the efforts of many specialists and the expenditure of millions of dollars (Chapman & Quijada, 2007). One to one technology adoption projects can also be very expensive and, as such, can show a very negative cost-benefit ratio in the first years of the program. Evaluations should also take into account the future, longer-term benefits of the investment.

This article therefore encourages the continued evaluation of a program over a number of years, at regular intervals, with recommendations and reports on the program’s benefits and drawbacks as they change over time. This type of long-term evaluation is best suited to an internal evaluator, or to a combination of internal and external evaluators. When considering the cost of 1 to 1 programs over time, it is also important to keep in mind the rapid depreciation of technology. Given how quickly computer equipment loses value, should 1 to 1 programs focus on purchasing the most up-to-date gadgets and tools? This question is best analyzed through a cost-effectiveness analysis that accounts for the depreciation of technologies, for example by annualizing device costs as sketched below.
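
One standard way to fold depreciation into such an analysis is to annualize a device’s purchase price over its assumed useful life at a given interest rate, as in cost analyses following Levin (2001). The sketch below applies that annualization formula to hypothetical figures; the price, lifespan, and rate are all assumptions chosen for illustration.

```python
# Annualizing a device's cost over its useful life, a standard
# practice in cost analysis. All figures are hypothetical.

def annualization_factor(rate: float, years: int) -> float:
    """a(r, n) = r(1 + r)^n / ((1 + r)^n - 1)"""
    growth = (1 + rate) ** years
    return rate * growth / (growth - 1)

purchase_price = 500.0  # hypothetical tablet price
useful_life = 3         # years before the device is considered obsolete
interest_rate = 0.05    # opportunity cost of capital

annual_cost = purchase_price * annualization_factor(interest_rate, useful_life)
print(f"Annualized device cost: ${annual_cost:.2f} per year")

# Faster obsolescence (a shorter useful life) raises the annual cost:
print(f"With a 2-year life: "
      f"${purchase_price * annualization_factor(interest_rate, 2):.2f} per year")
```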

One Laptop Per Child – An Evaluation of Peru’s Project

Possibly the most controversial and most commonly cited 1 to 1 initiative is One Laptop Per Child (OLPC), started by Nicholas Negroponte, the founder of the MIT Media Lab (TED, 2008). According to Negroponte, by thinking in bits instead of atoms and by learning how to operate a computer, a child can learn that the world is increasingly available at the click of a button, and that they can construct and build anything they can imagine by programming new and amazing environments (Negroponte, 1996). Following Papert’s constructionism, Negroponte believes that programming teaches an individual how to learn, as they must go back, revisit their code, and figure out why there is a mistake (Papert, 1980). As an ICT evangelist, Negroponte highlighted how simply giving a child a computer would expand the child’s possibilities (Negroponte, 1996). Since the beginning of OLPC in 2005, over 2.5 million laptops have been delivered (http://one.laptop.org/about/faq). However, despite the high level of investment, particularly in Latin America, project evaluations have not shown significant gains in achievement scores (Cristia et al., 2012).

A recent evaluation of OLPC in Peru showed that, despite a high level of investment in new machines (902,000 laptops) and an increase in the ratio of computers per student from 0.12 to 1.18, student performance in math and reading had not increased substantially. The evaluation did find that students’ cognitive skills improved over the course of the study. While analysts have since highlighted that the program had only limited effects on math and language achievement (0.003 standard deviations), little emphasis has been given to the potential impact of the improvement in cognitive skills and, perhaps more importantly, to what improved digital literacy skills will mean for these individuals in the future, as they are asked to learn other task-specific digital and information literacy skills (Cristia et al., 2012).

It is also difficult to know from the available data whether a different investment would have been more cost-effective or yielded a higher cost-benefit ratio in Peru. One of the unmet goals of OLPC was to produce a $100 laptop; the machines currently cost around $200 (ibid.). As a project unaffiliated with Microsoft, Google, or Apple, the OLPC laptops came with a Linux-based operating system (OS) known as Sugar. While all operating systems share similarities, did the use of Sugar limit or expand the possibilities for students? When testing students’ computer literacy, evaluators found that students quickly became more adept at using the devices; as discussed earlier in this paper, the evaluators also had difficulty deciding which skills should be tested (ibid., p. 15). Another unmet goal was connectivity: Peru’s OLPC participants lacked internet access, even though OLPC was partly designed so that students could benefit from increased connection, whether through OLPC’s exclusive mesh network or the internet. The impacts of lacking internet access are hard to measure; however, they may have affected the individuals’ development of information literacy skills. Peru’s evaluation of the OLPC project was very insightful, but while it contained a qualitative element, it had a quantitative focus, limiting readers’ understanding of how the initiative affected individuals. For a project that centers on the individual, learning more about its impact on the person becomes ever more relevant as ICT becomes more personalized. Apart from not discussing potential long-term gains, the evaluation also failed to mention the full cost of the program. With the laptop itself accounting for only a tenth to a seventh of a program’s total cost, it is important to consider whether this is a cost-effective investment (Lugo & Schurmann, 2012). The evaluation would have benefited from a broader implementation of mixed methods, particularly on the qualitative side, and from emphasizing changes over a longer span of time. An element of time particularly important to first-year initiatives is the instructor’s learning curve, as teachers slowly learn better ways to use the devices and integrate them into the classroom.

A Case Study – University of Minnesota One iPad Per Student Initiative

The discussion surrounding the digital divide has traditionally centered on access to the internet and a personal computer, yet the rapid change of technologies leads us to question whether the divide will center on these devices in the future (Warschauer, 2008; Zickuhr & Smith, 2012). What role will smartphones, augmented-reality glasses, 3D printers or, farther into the future, nanotechnology implants play in the digital divide (Kurzweil, 2000)? A current technology that may further displace the purchase of paper books for K-12 and higher education is the e-reader, the most successful examples being Apple’s iPads (I, II, and III) and Amazon’s Kindle readers. A recent NPD report indicated that tablets may outsell laptop computers by 2016, with sales expanding from 81.6 million units in 2011 to 424.9 million units in 2017 (Morphy, 2012). Will we then measure the digital divide in terms of who has access to an iPad and who does not?

Pilot projects at universities such as the University of Minnesota, the University of San Diego, and Oberlin College have moved toward answering this question. The iPad, the first commercially successful tablet, was released in April 2010; that same year, the University of Minnesota decided to purchase 447 units to provide a tablet to every incoming undergraduate in its College of Education and Human Development (CEHD). It was one of the first major initiatives of its type in the country. Because of its uniqueness as an early adoption project, its evaluation was based partly on conclusions from previous 1 to 1 projects such as the OLPC initiative and Maine’s statewide 1 to 1 adoption program. However, because the device differed substantially from previous ICT devices, the operationalization of the NETS standards for it, and an in-depth analysis of its potential uses, had not yet been closely studied (ISTE, 2008). So far, only a few articles have been published regarding the use of the iPad in the classroom (EDUCAUSE, 2011). To better understand the possible educational implications of adopting this device, a CEHD research team conducted a mixed-methods evaluation (Wagoner et al., 2012). In addition, a commitment was made to continue evaluating the project for several consecutive years; the support of the dean was integral to the continuation of the program.

In its first year, the project set goals of increasing the use of the devices by both faculty and students and of providing support so that faculty members could familiarize themselves with the devices and consider the best ways to incorporate them into their classrooms. Soon after the distribution of the iPads, evaluators drafted a post-test and organized a series of interviews. The interviews asked faculty members how they had learned to use their iPads, what their plans were for using them in the classroom, how the iPad had affected their teaching, and whether the support they received had been appropriate (from field notes).

A similar set of questions was asked of faculty members at the end of the school year, when they were asked what projects they had actually implemented, what students thought of ebooks, and what pedagogical concerns they had. Twenty-two interviews were coded and themes were developed from the qualitative study, including faculty concerns about time investment, how the iPad compares with other technologies, the impact of the iPad on faculty members’ pedagogy and classroom management, and details about faculty members’ technology learning processes. At the end of the year, a series of faculty focus groups was also conducted. Many of the details learned through the qualitative portion of the study would have been difficult to obtain otherwise, and the common elements between the focus group data and the interviews allowed us to verify some observations. Below is an illustrative quote from one of the participating faculty members:

“What I want, in terms of their behaviors, is for [the students] to be active explorers in the classroom, to bring the machines, and to actually utilize them for historical research … One of the things that we did as a first conversation is to describe the level of trust that is going to be involved … and they live up to those expectations. I’ve been really happy so far with what we’re learning. It conveys to them that they’re smart, capable discoverers that we’re co-creating knowledge—historical knowledge” (Wagoner, Hoover, & Ernst, 2012, p. 3).

 

While the quote above illustrates a very positive experience, it would likely not have been visible through an analysis of student achievement alone, illustrating the benefit of mixed methods. Two student focus groups were also conducted; however, unlike the faculty, whose entire population could be interviewed, the 447 students were more than the team could interview. To analyze student responses more systematically, a survey was conducted that included a number of questions about students’ use of and experience with the iPad; 241 CEHD first-year students responded (Wagoner et al., 2012). Access to broader demographic data also allowed the evaluation team to compare student attitudes with socio-economic variables. Several strong correlations and significant relationships were found regarding the impact of iPads on student learning. In particular, the evaluation found that students felt the devices had had a positive effect on their motivation. Students also expressed a high level of comfort using the devices and reported that the iPad helped them feel more engaged in some of their classes.
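
To give a sense of how such survey relationships might be computed, the sketch below correlates an invented Likert-scale engagement item with an invented program-participation flag using Spearman’s rank correlation, which is appropriate for ordinal data. The column names and values are hypothetical and are not drawn from the CEHD dataset.

```python
# Hypothetical sketch: relating a Likert-scale survey item to a
# demographic flag. All data and column names are invented.
import pandas as pd
from scipy import stats

survey = pd.DataFrame({
    # 1-5 Likert responses to an engagement item
    "engagement_rating": [4, 5, 3, 4, 5, 2, 4, 5, 3, 5, 4, 3],
    # 1 = participates in an access program (e.g., TRIO), 0 = does not
    "access_program":    [1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0],
})

# Spearman's rank correlation suits ordinal Likert data.
rho, p_value = stats.spearmanr(survey["engagement_rating"],
                               survey["access_program"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```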


The study also showed that students who were part of Access to Success (ATS) or had been part of the TRIO program, usually students of color or from low socio-economic backgrounds, reported feeling more engaged and connected during classes. From the qualitative data the evaluators also learned that for some students the iPad had become a window onto the internet and a digital device for the whole household to use.

The success of the first-year implementation, and the questions evaluators were still unable to answer, led to the continuation of the program for a second and third year. A similar number of iPads (now iPad 2s) was purchased. Once again the rapid change of technology provided new possibilities for evaluators, as the iPad 2 includes cameras, permitting students to record HD video and hold audio-visual conversations with anyone on FaceTime or Skype. After analyzing the potential savings from some students’ extensive use of iPads for e-reading, CEHD also decided to support a pilot project for testing and adopting open textbooks, as well as a support desk where interested faculty members could obtain assistance building iBooks and ePubs.

The project is now planning its third year. Adapting to the results of the first-year evaluation, many questions on the second-year survey were modified to capture additional valuable information. One limitation of the evaluation so far has been the lack of a cost-effectiveness or cost-benefit study. Such a study should take into account not only the rapid depreciation of the devices but also whether students are learning skills that could aid them when they join the workforce. While the costs have been high, over $300,000 per year, it is difficult to assess the long-term benefits for participants. The rapid devaluation of the devices is an important consideration: in a couple of years these devices may cost only a fifth of their original price while being even more feature-rich and powerful, allowing students to obtain a similar skill set for a fraction of the cost. It is also possible that many of the skills obtained are not very different from those gained from using other ICTs, reducing the importance of the investment.

Currently, a website is available where individuals interested in the results of the project can learn about the various innovative classroom projects that were developed, how they can be adapted to other classrooms, and suggested best practices. In a report, CEHD concluded that the iPad had helped address concerns about the digital divide by increasing access to the tools needed for media production, providing tools that facilitate personal productivity, improving students’ options for information access and consumption, reducing the cost of printed readings, and facilitating students’ learning outside the classroom (Wagoner et al., 2012). For year two, the program also hopes to further analyze the usability of the devices and recently created a space for students to submit their creative productions made with the iPads.

Despite the insights provided by the use of mixed methods in this evaluation, the limited timeframe of the study makes it difficult to determine whether the program is a worthwhile investment. With the program costing over $400 per student, excluding the cost of administrative staff, is this the best investment a university can make in technology adoption? When will it be determined that the program is no longer worth its cost and no longer helping to find innovative ways of learning? One limitation of CEHD’s 1 to 1 iPad program has been its limited emphasis on the device’s possibilities for informal learning. Some of these concerns will be better analyzed using data from the second-year survey recently administered to students. A new wave of interviews and focus groups is also planned for the evaluation of the program’s third year.

With over 500,000 applications available, there are almost endless possibilities for integrating the devices into the classroom, and the production of apps that match the goals of individual users more closely is likely to increase. Because of these devices’ future relevance, and the high level of creativity and innovation in this industry, constant evaluation is important, as it allows for the continued improvement of the project. The use of mixed methods allowed the evaluation team to find many interesting details that the study would not otherwise have uncovered. These details enriched the quality of the findings and provided faculty with valuable information for improving their use of the iPad and for learning how their peers were using the devices.

Conclusion

 

It is difficult to understand the repercussions of an event while it is taking place. Only with hindsight do we notice how many unexpected turns have led society to where it is today. Evaluators do not have the luxury of looking only at the past, as they are focused on improving tomorrow. With an emphasis not just on understanding programs but on helping them improve in quality, decisions are made based on what seem the most likely outcomes. Yet, without anyone realizing it, a project could be cancelled before it demonstrates its true strengths. Too often, ICT one-to-one projects focus on student achievement gains after the first year of implementation. Treating the technology as a magic bullet, some stakeholders may expect that merely having the device will make individuals more competitive; projects such as OLPC have helped promote this viewpoint. Yet while technologies have helped improve society, it may take years for them to demonstrate benefits in the lives of individuals. Changing cultures or behaviors takes time, and, as has been the case with a large number of development projects, impact is usually moderate. Nevertheless, some investments will be more cost-effective than others, and an evaluation of ICT needs to analyze carefully the costs of the ingredients of the intervention. Depreciating these ingredients and considering how students can best develop competitive ICT skills are primary objectives for ICT one-to-one adoption projects. This paper contends that using mixed methods and a longer-than-usual time horizon for ICT evaluations will provide more useful information to key stakeholders, resulting in better decision-making.


Works Cited

Alter, C., & Murty, S. (1997). Logic modeling: A tool for teaching practice evaluation. Journal of Social Work Education, 33.

Carr, N. (2011). The Shallows: What the Internet is Doing to Our Brains. New York City: W. W. Norton & Company.

Chapman, D., & Quijada, J. J. (2007). What does a billion dollars buy? An analysis of USAID assistance to basic education in the developing world, 1990-2005. Washington DC: USAID.

Cobo, C., & Moravec, J. (2011). Aprendizaje Invisible: Hacia Una Nueva Ecologia de la Educacion. Barcelona: Universitat de Barcelona.

Cristia, J., Cueto, S., Ibarraran, P., Santiago, A., & Severin, E. (2012). Technology and Child Development: Evidence from the One Laptop per Child Program. Washington DC: IDB.

Denzin, N. K. (1978). The Research Act, 2nd Ed. New York: McGraw-Hill.

EDUCAUSE. (2011, September 02). 7 Things You Should Know About iPad Apps for Learning. EDUCAUSE Learning Initiative (ELI). Retrieved from http://www.educause.edu/Resources/7ThingsYouShouldKnowAboutiPadA/223289

Eisner, E. W. (1994). The forms and functions of educational connoisseurship and educational criticism. In E. W. Eisner, The educational imagination: On the design and evaluation of school programs (pp. 212-249). New York: Macmillan.

Finn, J. D., & Achilles, C. M. (1999). Tennessee’s class size study: Findings, implications, misconceptions. Educational Evaluation and Policy Analysis, 97-109.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program Evaluation: Alternative Approaches and Practical Guidelines. Upper Saddle River: Pearson Education.

Flannigan, B. R.-K. (2006). Connecting the Digital Dots: Literacy of the 21st Century. EDUCAUSE Quarterly, 8-10.

Friedman, T. L. (2007). The World is Flat 3.0: A Brief History of the Twenty-first Century. New York: Picador.

Fuller, H. G. (2008). What does the term ‘ethnography’ mean to you? Quirk’s Marketing Research Review, pp. 48-50.

Geertz, C. (1973). Thick Description: Toward an Interpretive Theory of Culture. In C. Geertz, The Interpretation of Cultures: Selected Essays (pp. 3-30). New York: Basic Books.

Gladwell, M. (2008). Outliers: The Story of Success. New York: Little, Brown and Company.

Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. New Directions for Evaluation, 5-17.

Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a Conceptual Framework for Mixed-Method Evaluation Designs. Educational Evaluation and Policy Analysis, 255-274.

Hanushek, E. A. (1999). Some findings from an independent investigation of the Tennessee STAR experiment and from other investigations of class size effects. Educational Evaluation and Policy Analysis, 143-163.

Heshmati, A., & Addison, T. (2003). The New Global Determinants of FDI: Flows to Developing Countries. Helsinki: World Institute for Development Economics Research.

ISTE. (2008). The National Educational Technology Standards. Washington D.C.: International Society for Technology in Education.

ITU. (2012). The World in 2011: ICT Facts and Figures. Geneva: International Telecommunication Union.

Jeffrey, B., & Troman, G. (2004). Time for Ethnography. British Educational Research Journal, 535-548.

Jenkins, H., Purushotma, R., Clinton, K., Weigel, M., & Robison, A. (2006). Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. Chicago: The MacArthur Foundation.

Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed Methods Research: A Research Paradigm Whose Time Has Come. Educational Researcher, 14-26.

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a Definition of Mixed Methods Research. Journal of Mixed Methods Research, 112-130.

Krueger, R. A. (1986). Reporting Evaluation Results: 10 Common Myths. Kansas City: American Evaluation Association.

Kurzweil, R. (2000). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. London: Penguin.

Levin, H. M. (2001). Cost-Effectiveness Analysis. Thousand Oaks: SAGE.

Levy, S. (2010, February 22). Exclusive: How Google’s Algorithm Rules the Web. Wired. Retrieved from http://www.wired.com/magazine/2010/02/ff_google_algorithm/

Lugo, M. T., & Schurmann, S. (2012). Turning On Mobile Learning in Latin America. Paris: UNESCO.

Maslow, A. (1943). A Theory of Human Motivation. Psychological Review, 370-396.

McLaughlin, J., & Jordan, G. (1999). Logic models: A tool for telling your program’s performance story. Evaluation and Program Planning, 65-72.

Michel, J.-B., Shen, Y. K., Aiden, A. P., Veres, A., Gray, M. K., The Google Books Team, . . . Lieberman Aiden, E. (2010, December 16). Quantitative Analysis of Culture Using Millions of Digitized Books. Science. Retrieved from http://www.sciencemag.org/content/early/2010/12/15/science.1199644

Mirijamdotter, A., Somerville, M. M., & Holst, M. (2006). An Interactive and Iterative Evaluation Approach for Creating Collaborative Learning Environments. The Electronic Journal of Information Systems Evaluation, 88-92.

Mohammed, N. (2007). Facing Difficulties in Learning Computer Applications. Mount Pleasant: Central Michigan University.

Morphy, E. (2012, May 05). Tidal Wave of Tablets on the Horizon. Retrieved from E-Commerce Times: http://www.ecommercetimes.com/rsstory/75039.html

MSC. (1996). Smart School Road Map 2005-2020. Kuala Lumpur: Multimedia Development Corporation.

NCES. (2010). Early Childhood Longitudinal Study, Kindergarten Class of 1998-99. Washington DC: U.S. Department of Education.

Negroponte, N. (1996). Being Digital. New York: Vintage.

Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Patton, M. Q. (2007). Utilization-Focused Evaluation. Thousand Oaks: SAGE Publications.

Prensky, M. (2001). Digital Natives Digital Immigrants. On the Horizon, http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf.

Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagne, & M. Scriven, Perspectives of curriculum evaluation (pp. 39-83). Chicago: Rand McNally.

Seiter, E. (2008). Practicing at Home: Computers, Pianos, and Cultural Capital. In T. McPherson, Digital Youth, Innovation, and the Unexpected (pp. 27-52). Cambridge: MIT Press.

Smith, A. (2011). Americans and Text Messaging. Washington D.C.: Pew Internet.

TED. (2008, December 05). Speakers Nicholas Negroponte: Tech visionary. Retrieved from TED: Ideas Worth Spreading: http://www.ted.com/speakers/nicholas_negroponte.html

Terris, B. (2009, December 6). Rebooted Computer Labs Offer Savings for Campuses and Ambiance for Students. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/Computer-Labs-Get-Rebooted-as/49323/

Thurlow, C., & Brown, A. (2003). Generation Txt? The sociolinguistics of young people’s text-messaging. Discourse Analysis Online, http://extra.shu.ac.uk/daol/articles/v1/n1/a3/thurlow2002003.html.

Tippett, K., & Turkle, S. (2011, August 25). On Being: Alive Enough? Retrieved from American Public Media: http://being.publicradio.org/programs/2011/ccp-turkle/transcript.shtml

Wagoner, T., Hoover, S., & Ernst, D. (2012). CEHD iPad Initiative. Minneapolis: CEHD.

Walker, J., & Jorn, L. (2009). 21st Century Students: Technology Survey. Minneapolis: University of Minnesota.

Warschauer, M. (2008). Whither the Digital Divide? In D. L. Kleinman, K. A. Cloud-Hansen, & J. C. Matta (Eds.), Controversies in Science & Technology: From climate to chromosomes (pp. 140-152). New Rochelle: Liebert.

Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 501-524.

Willoughby, T. (2008). A Short-Term Longitudinal Study of Internet and Computer Game Use by Adolescent Boys and Girls: Prevalence, Frequency of Use, and Psychosocial Predictors. Developmental Psychology, 195-204.

Yeh, S. S. (2011). The Cost-Effectiveness of 22 Approaches for Raising Student Achievement. Charlotte: Information Age Publishing.

Zhang, K. H., & Song, S. (2003). Rural–urban migration and urbanization in China: Evidence from time-series and cross-section analyses. China Economic Review, 386-400.

Zickuhr, K., & Smith, A. (2012). Digital differences. Washington DC: Pew Research Center’s Internet & American Life Project.