For a More Robust Evaluation of 1 to 1 ICT for Education Adoption Projects
May 12, 2012
The rapid change of information and communication technology (ICT) increases the challenge of determining how best to evaluate proficient use of these technological advances and their impact on learning. Through an overview of different initiatives, this paper illustrates the benefits of implementing a mixed-methods approach and of analyzing projects over a prolonged period of time. Looking at a program over a longer timeframe can make us more aware of the impact a program has on an individual and a community. The use of mixed-methods helps us take into account different ways of analyzing a program, studying variables that are measurable and generalizable as well as elements that are specific to a particular situation. By incorporating these elements into evaluation studies we can potentially increase the quality and usability of the reports generated. To illustrate the benefits of mixed-methods and the continued analysis of a project, this paper discusses the 1 to 1 iPad project at the University of Minnesota.
Rapid Rate of Change – A Relevant Characteristic of ICT for Education Projects
It was only a few decades ago, in 1978, that top MIT computer scientists had reservations about the usability of the personal computer and whether people would use it for tasks such as keeping an address book or a personal calendar (Tippet & Turkle, 2011). Since then, many technology adoption projects have been promoted, and devices that were once available only to a few are now commonplace. Today, universities in the United States increasingly consider remodeling their computer labs, as almost all college students (89.1% at the University of Minnesota in 2009) bring their own laptops to campus (Walker & Jorn, 2009). The share of students bringing laptops to college increased from 36% in 2003 to 83% in 2008 (Terris, 2009).
The rapid improvement of technology results in the rapid depreciation of gadgets, as well as in the difficulty of evaluating them. The increasing capacity and computational power of these technologies have encouraged educational institutions and other industries to adopt them. The ownership of Information and Communication Technologies (ICTs) has decreased the costs of transferring data and increased workers' potential productivity (Friedman, 2007). Other influential ICTs, such as the mobile phone, the television, the internet, and the radio, have augmented the quantity of information available to individuals. The economic benefits from improvements in information and data transfers have led to increased investments. There has also been an increased interest in information and digital literacy as necessary skills in the 21st century (Flannigan, 2006; Jenkins, Purushotma, Clinton, Weigel, & Robison, 2006). While not all of the changes brought by increased access to technology are positive, the increased access to information and the rapid improvement of these technologies have a major impact on society (Carr, 2011; Kurzweil, 2000). Unlike traditional fields such as mathematics or history, where most basic concepts have remained unchanged, the impact of new media and its prevalence in society have changed substantially in the past few decades, and with them the difficulty of evaluating these projects. Mobile subscriptions alone increased from less than 3 billion in 2006 to 5.9 billion in 2011 (ITU, 2012).
This rapid change makes it difficult to determine the essential skills a learner must have in the workplace of tomorrow (Cobo & Moravec, 2011). With hundreds of thousands of computer applications and many types of hardware, some highly complex, it can take a person a significant amount of time to become adept in any complex program. Many users of NVivo, a qualitative research package, may not know how to use SPSS, a quantitative research package, successfully. A high level of specialization is often the norm, as using specialized programs successfully requires a degree of mastery over statistical analysis or qualitative research methods. Similarly, programs such as Adobe Photoshop, Bryce, Python, Android OS, Excel, and Audacity, among others, have a considerable learning curve (there are courses available for learning any of these programs). Being a specialist in a particular program can lead to a very successful financial career, but mastering even a single program can take dozens or hundreds of hours of practice. While it may take 10,000 hours to become a successful reporter, violinist, or writer (Gladwell, 2008), ICT contains thousands of possibilities, each with its own proficiency levels (this includes unique musical instruments and new ways of writing [via text or Twitter]).
Rapid change is relevant when evaluating ICT adoption programs because it influences what we consider to be effective use of these technologies by the general population. Texting, for example, is increasingly commonplace and is considered by some experts to be a nascent dialect (Thurlow & Brown, 2003). How important, then, is knowing how to send text messages effectively and use a mobile phone in the 21st century? Such questions are hard to answer because a technology may be displaced in a few years' time. The rapid change of technology complicates how we measure digital literacy, and through it the effectiveness of 1 to 1 adoption and usability programs. These complications are at times difficult to perceive because of generational differences between the evaluator and younger generations (Prensky, 2001).
Today young adults (18-24) send an average of 109.5 text messages a day, or about 3,200 text messages a month, and many of them prefer communicating over text messages rather than email. Email, a relatively recent invention, is to some already considered old-fashioned and impractical (Smith, 2011). With this in mind, does an individual's capacity to use email effectively remain a 21st century digital literacy requirement? While the International Society for Technology in Education (ISTE) has developed ICT for education standards which can aid the evaluation of technology adoption programs (ISTE, 2008), these standards emphasize broad competencies and must be operationalized to the distinctiveness of each 1 to 1 ICT program.
If technology continues to improve at a very rapid, perhaps even exponential, rate, it raises questions about how best to evaluate a 1 to 1 technology project (laptops, mobiles, e-readers, etc.). In this essay I propose analyzing programs over a long period of time to assess their impact on the individual. As argued by Seiter (2008), increasing access to technology will likely help individuals become more proficient at using the devices, yet as with playing a piano, it takes many hours of practice to become a skilled pianist. One of the key advantages of 1 to 1 initiatives is that the participants are able to take the devices home. It is easier to become proficient using a device one has access to at home than one whose use is limited to the classroom setting. As Seiter (2008) argues, "There is an overestimation of access to computers in terms of economic class, and an underestimation of specific forms of cultural capital required to maintain the systems themselves and move beyond the casual, recreational uses of computers to those that might lead directly to well-paid employment" (p. 29). If Seiter (2008) is accurate, most of the economic benefits of ICT come from long-term use, which strengthens the case for evaluating these programs over longer timeframes.
ICT investment can be very expensive, and many ICT projects could not be developed without the support of private industry and government (Heshmati & Addison, 2003). While ICT may not be as important as basic education, food, and health services, governments throughout the world have spent large quantities of funds on ICT for education initiatives, hoping to imitate the success many advanced economies have obtained from their ICT industries and byproducts (MSC, 1996). "Investment in ICT infrastructure and skills helps to diversify economies from dependence on their natural-resource endowments and offsets some of the locational disadvantages of landlocked and geographically remote countries" (Heshmati & Addison, 2003, p. 5).
Adequately evaluating 1 to 1 technology adoption initiatives is increasingly important, as different education interventions can have different cost-effectiveness and cost-benefit ratios, with some interventions being much more effective than others (Yeh, 2011). Working with limited funds, governments must administer those funds in the best possible way to help their citizens meet their various needs, from food and shelter to self-actualization. Just because one intervention is more cost-effective does not mean that another intervention should necessarily be discarded (countries can implement multiple interventions if funds are available). As Abraham Maslow (1943) suggested, many needs can be and should be met simultaneously. While there is a clear hierarchy to human needs, an improvement in one area of life, such as shelter, does not occur in a vacuum, and is not exclusive of the individual's desire to feel accepted by others or to improve their problem-solving ability (Maslow, 1943). Investing in ICT is important for states as they move toward becoming economically diverse, robust, and more competitive, relying more on their human capital than their natural resources. To evaluate these projects more precisely, this paper encourages evaluators to consider conducting a mixed-methods analysis with a long-term time perspective.
Evaluating Information and Communication Technology Projects
Evaluation can help increase the effectiveness of programs and improve the distribution of the limited resources available to a society. The decisions made by an evaluator can impact the lives of many individuals. Evaluators can help improve a program as well as inform whether or not the program should be continued (Fitzpatrick, Sanders, & Worthen, 2011). Discussing the methodology of evaluation, Scriven (1967) differentiated between formative evaluation (focused on development and improvement [the cook tasting the soup]) and summative evaluation (focused on whether the program is meeting its stated goals [the guest tasting the soup]). By conducting an evaluation, a decision-making body is able to make an informed decision about the future of the program. Yet, when dealing with complex programs with many pieces and unique elements, it is difficult for an evaluator to frame an evaluation that yields the most valuable information about a program, particularly when there is limited time to conduct it, and when the brevity of a report can be one of its strengths (Krueger, 1986). Different methods provide different valuable lenses through which to look at a problem, frames that the evaluator should consider before conducting the evaluation.
Possibly the most important elements to consider in a 1 to 1 ICT project are its cost and its use by the learners. The best-known 1 to 1 idea is the One Laptop Per Child (OLPC) program, which has been most successful in Latin America, delivering hundreds of thousands of units (http://one.laptop.org/). Yet at a cost of over $100 per student (closer to $200 in practice), it could cost $500 billion to provide a computer to every person who currently lacks access to the internet worldwide (roughly 5 billion people), and this would not include continued maintenance and electricity costs or the cost of internet access. Is access to ICT really that important? According to a recent UNESCO (2012) publication, while 1 to 1 laptop projects are very costly, in Latin America "in the last three years, the 1:1 model has become increasingly widespread, and 1:1 programmes are now the primary focus of national policies for ICT in education in the region. Policy-makers are no longer discussing whether the 1:1 model is worthy of investment but rather how best to achieve it" (Lugo & Schurmann, 2012).
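A back-of-envelope sketch of the arithmetic above, using only the per-unit figures cited; the recurring costs named in the comments are deliberately excluded, since no estimates are given for them:

```python
# Rough scale of a worldwide 1 to 1 rollout, hardware only; excludes
# maintenance, electricity, and internet access, for which the paper
# gives no estimates.
people_without_internet = 5_000_000_000  # rough figure cited above

for unit_cost in (100, 200):  # OLPC's target price vs. its actual price
    total = people_without_internet * unit_cost
    print(f"${unit_cost} per unit -> ${total / 1e9:,.0f} billion for hardware alone")
```

Even at the nominal $100 price the hardware alone reaches $500 billion, which is why the recurring costs left out above matter so much to the full picture.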
While a price tag of $100 appears an expensive investment for developing countries, especially when some countries spend less than $100 per student a year in their educational budget, it is also important to consider that all programs have costs, even when these are not financial. Even 1 to 1 programs that are "free" (through donations) have costs, including an e-waste disposal cost. Even when based on volunteer efforts, programs still carry at a minimum an opportunity cost for instructors and learners. The cost of programs can be most effectively assessed by measuring their different ingredients. This allows programs to be quantified, their various elements to be weighted, and, as a result, programs to be compared with each other through a cost-effectiveness analysis (Levin, 2001). The financial benefit of a program can also be determined through a cost-benefit analysis. Through a qualitative study, "thick," rich descriptive information can be obtained and thematically organized, helping key stakeholders better understand elements that would otherwise go unnoticed (Geertz, 1973).
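As a minimal sketch of how such an ingredients-based comparison might be computed, assume hypothetical ingredient costs and effect sizes; the annualization of capital costs via a capital-recovery factor follows standard cost-analysis practice, and none of the figures below come from Levin (2001):

```python
# Illustrative ingredients-method comparison of two hypothetical programs.
# Costs are per student per year; effects are in standard-deviation units.

def annualize(cost, rate, years):
    """Spread a capital cost over its lifetime (capital-recovery factor)."""
    factor = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return cost * factor

programs = {
    "1 to 1 laptops": {
        "ingredients": {"hardware": annualize(200, 0.05, 4),
                        "training": 30, "support": 25, "electricity": 10},
        "effect": 0.10,  # hypothetical achievement gain
    },
    "alternative intervention": {
        "ingredients": {"personnel": 150, "materials": 20},
        "effect": 0.20,  # hypothetical achievement gain
    },
}

for name, p in programs.items():
    cost = sum(p["ingredients"].values())
    print(f"{name}: ${cost:,.2f}/student/year, ${cost / p['effect']:,.0f} per SD")
```

The point of the exercise is the final ratio: once every ingredient is costed and annualized, otherwise dissimilar programs become directly comparable on dollars per unit of effect.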
Programs can also be mapped through a logic model, which can include inputs, activities, outputs, and outcomes (Alter & Murty, 1997; McLaughlin & Jordan, 1999). The order in which the elements of a program are implemented, and the context in which it is implemented, may also influence its results. There are also likely to be competing program alternatives, some of which may be more effective than the particular program being considered. Hoping to increase the transferability or generalizability of a study, an evaluation can also be theory driven (Weiss, 1997). These and other elements can improve the quality and usability of the data obtained by an evaluation. However, with limited time and resources, the methodology used to evaluate a program depends on both the strengths of the researcher and what is considered of principal importance by key stakeholders.
Over time, every practicing evaluator is, or is in the process of becoming, a "connoisseur" (the art of appreciation) as well as a "critic" (the art of disclosure) (Eisner, 1994, p. 215). This knowledge allows him or her to propose more effectively to key stakeholders the best methods of evaluation to pursue in a particular scenario. However, the interests of secondary stakeholders are also important in many ICT adoption programs.
The Relevance of Mixed Methods and Triangulation
“The underlying rationale for mixed-methods inquiry is to understand more fully, to generate deeper and broader insights, to develop important knowledge claims that respect a wider range of interests and perspectives” (Greene & Caracelli, 1997, p. 7).
Mixed-methods can greatly benefit a study, as they allow the researcher to ask questions that he or she might otherwise ignore, obtaining additional information. While "purists" oppose the use of mixed-methods due to potential epistemological and ontological contradictions, many evaluators take a more "pragmatic" approach to their use (Greene, Caracelli, & Graham, 1989). One concern regarding mixed-methods is that they may compromise the methodological integrity of an experimental study. These are valid concerns, and it is important to consider carefully how methods are being utilized, to avoid unintended conflicts that jeopardize the integrity of the study. Some of the theoretical concerns raised against mixed-methods may be less applicable to evaluators, who do not have the same goals as researchers. While researchers focus to a greater extent on theory, generalizability, and transferability, many evaluators focus on utilization and the practical implications of their analysis for key stakeholders and the future of the program (Patton, 2007). To the "pragmatist" evaluator, "philosophical assumptions are logically independent and therefore can be mixed and matched, in conjunction with choices about methods, to achieve the combination most appropriate for a given inquiry problem. Moreover, these paradigm differences do not really matter very much to the practice" (Greene, Caracelli, & Graham, 1989, p. 8).
Mixed-methods often refers to the use of methods from different paradigms, combining a qualitative method, such as unstructured interviews or participant observation, with a quantitative method, such as academic achievement scores or another statistical measure, within the same study (Johnson & Onwuegbuzie, 2004). While it seems beneficial to analyze a problem in multiple ways, experts in both qualitative and quantitative methods have expressed concerns about this approach. Johnson and Onwuegbuzie (2004) argued that part of the "purist" concern stems from a "tendency among some researchers to treating epistemology and method as being synonymous," which is not necessarily the case (p. 15). To Johnson and Onwuegbuzie, most researchers who use mixed-methods use them when they consider their use to be most appropriate. Johnson and Onwuegbuzie (2004) argue for a contingency theory of research, which emphasizes that while no method is superior, there are instances when one is preferable to another.
One of the biggest benefits of using mixed methods is that they allow for the triangulation of findings. According to Denzin (1978), triangulation is "the combination of methodologies in the study of the same phenomenon" (p. 291). Denzin (1978) describes four types of triangulation: data triangulation, investigator triangulation, theory triangulation, and methodological triangulation, each possible within methods or between methods. The ways in which methods are mixed vary: at times both have the same amount of influence, while at other times one method holds preeminence. Triangulation is a common way to strengthen the generalizability and transferability of a study and the strength of its claims. Other benefits of using mixed-methods include complementarity, where the results of one method are clarified by another; development, where one method informs the other; expansion, which seeks to increase the scope of the inquiry; and initiation, which seeks the discovery of paradox by recasting results or questions from one method to another (Greene, Caracelli, & Graham, 1989). Regardless of the initial results, mixing methods usually provides richer data. Comparisons between the data could lead to "convergence, inconsistency, or contradiction" (Johnson, Onwuegbuzie, & Turner, 2007, p. 115).
If there is a conflict or an inconsistency within the data, it becomes harder to establish a causal relationship, and the findings may require further analysis and explanation. This explanation can be provided through structural corroboration, through further analysis, or by sharing both sets of findings with the key stakeholders, who can then use both pieces of information to make their decisions (Eisner, 1994). While most evaluators feel a responsibility to provide recommendations to stakeholders, these recommendations do not necessarily have to resolve the contradiction scientifically; rather, a "connoisseur" may state which path he or she believes, based on experience, is the best to follow. ICT adoption includes many invisible elements, which increases the difficulty of evaluating it (Cobo & Moravec, 2011). Because of this complexity, it is helpful for the evaluator to share his or her opinion as a "connoisseur." Social programs are generally complex. By providing key stakeholders a focused report that emphasizes the main findings of the mixed-methods evaluation, they will be more likely to make a good formative or summative decision. As will be illustrated, this was an objective pursued by the 1 to 1 iPad initiative at the University of Minnesota.
Encouraging the Long-Term Study of ICT Projects
The limited timeframe of a study can result in a restricted analysis. Iterative formative evaluations allow key stakeholders to constantly reevaluate ways to improve a program (Mirijamdotter, Somerville, & Holst, 2006). Iterative and continuous evaluations are very important for internet-based companies. Google, for example, is known to regularly test new algorithms and versions of its search engine simultaneously with consumers to obtain helpful usability comparisons. It tries hundreds of variations of its search engine a year, attempting to improve the product without customers noticing the minor modifications (Levy, 2010). Many other ICT firms regularly test new features. Similarly, many ICT adoption projects include an iterative process in their analysis, yet in the discussion of their findings, the evaluations regularly omit the potential long-term benefits of the programs, focusing instead on short-term costs and benefits. While there are time constraints and financial limitations to evaluations of 1 to 1 laptop programs, these evaluations would benefit from a stronger effort to measure the long-term benefits of the interventions, including gains in cultural capital (Seiter, 2008).
Longitudinal studies, ethnographic research, and time series are among the methodologies that can help illustrate the potential benefits of the long-term analysis of an intervention. Some of these studies can be very expensive, but they allow for the observation of changes that would otherwise go unnoticed. Another recent example of the possibilities of looking at changes over time was made possible by the Google Books Project Ngram Viewer (http://books.google.com/ngrams), which allows word frequencies to be analyzed over a span of 200 years. This type of study, called culturomics, is one of the newest ways in which the analysis of a subject over time provides additional insight into an issue (Michel, et al., 2010). While the Ngram Viewer is not very useful for evaluators, other forms of longer-term analysis can be of greater support.
Ethnography is a field of study in which time spent in the field is an important validity variable. Ethnographers focus primarily on the quality of the data, whose validity increases when the researcher has lived in a community for a longer timeframe and has obtained, through this extended stay, a greater understanding of the local culture. Some of the subtleties analyzed by ethnographers require time and involvement to be discovered. To some researchers, ethnography implies a study that takes more than a year (Fuller, 2008). While some projects could last perhaps a single long day, other "projects are developed throughout the whole of a researcher's life; an ethnography may become a long, episodic narrative" (Jeffrey & Troman, 2004). In quantitative analysis, a time series, as the name implies, also emphasizes the importance of collecting data over time. Such statistical data can be collected at various intervals: monthly for unemployment benefits data, daily for financial exchange rates, over an exercise period for an individual's pulse, or even every 2 seconds for EEG brain wave activity. A commonly used and informative time series is population census data, which is collected by many countries at regular intervals to help their governments better understand broader demographic changes, migratory patterns, and the future outlook of various variables (Zhang & Song, 2003).
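To make the idea of interval-based collection concrete, here is a small sketch that simulates a reading taken every two seconds and aggregates it into per-minute averages; the data and intervals are invented for illustration:

```python
import numpy as np
import pandas as pd

# A simulated reading sampled every 2 seconds for 30 minutes, then
# aggregated to one value per minute; the sampling interval should
# match how quickly the phenomenon of interest changes.
index = pd.date_range("2012-05-12 09:00", periods=900, freq="2s")
readings = pd.Series(70 + 0.1 * np.cumsum(np.random.randn(900)), index=index)

per_minute = readings.resample("1min").mean()
print(per_minute.head())
```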
Longitudinal studies can also be very helpful in understanding how an intervention at an early stage of a person's development influences them throughout the rest of their lives. Various longitudinal studies have been conducted within early education to identify the changes these interventions may bring to individuals' lives, including studies of pre-natal care, youth reading programs, and the observation of children as they grow older, among many others. One of the most famous longitudinal studies in education was the Student/Teacher Achievement Ratio (STAR) Tennessee class size reduction study, which began in 1985 and continued until 1999 (Finn & Achilles, 1999; Hanushek, 1999). The study tracked students who were assigned at random to kindergarten classes of between 13 and 17 students or to larger classes of between 22 and 26. Over 6,000 students took part in the study, in which they were kept in smaller classrooms for 4 years and continued to be monitored after the end of the intervention. The study found statistically significant gains in student achievement on the three measurements used. The conclusions of this study strengthened claims regarding the positive impacts of class size reduction, which encouraged the enactment of class reduction policies in California (1996) and other states. While other studies have contradicted its findings, its use of an experimental design, its magnitude, and its longitudinal analysis strengthened its claims. There have been a number of important longitudinal studies of early childhood and other early interventions that have followed children's development for decades (NCES, 2010). The longitudinal approach is also used frequently within the health sciences.
Another popular long-term longitudinal study is the British Up Series, which has followed a group of 14 children since 1964, when they were seven years old, and is still in production. Similar documentaries have been produced in Australia (since 1975), Belgium (1980-1990), Canada (1991-1993), the Czech Republic (1980s), Germany (1961-2006), Denmark (from 2000), Japan (from 1992), the Netherlands (from 1982), South Africa (from 1982), Sweden (from 1973), the USSR (from 1990), and the USA (from 1991). While these long-term studies can be expensive to conduct, they provide a different dimension to findings, a dimension that is absent from most 1 to 1 technology adoption evaluations.
The key reason to include this dimension within an evaluation is the difficulty of knowing how the skills obtained from using new ICT devices will give an individual the confidence and background needed to develop future ICT competencies that may be beneficial in the job market. Will their familiarity with ICT at an early age bring about broader benefits later in their lives? A short-term outlook may at times provide a negatively skewed view of the impact of these projects, expecting more out of a pilot project than should be expected. In addition, it is common for program designers to overstate the potential outcomes of a project, expecting it to have a greater impact than is likely possible. For example, as an evaluation of USAID basic education projects (1990-2005) showed, most of its projects produced less than a 4% gain in student achievement scores, despite the efforts of many specialists and the expenditure of millions of dollars (Chapman & Quijada, 2007). One to one technology adoption projects can also be very expensive and as such can show a very negative cost-benefit ratio in the first years of the program. It is very important to take into account the rapid depreciation rate of ICT, but evaluations should also take into account future, longer-term benefits of the investment.
Having access to a personal computer is essential for most of the workforce in the 21st century. As argued by Seiter (2008), having a computer at home is almost a necessity for developing competent skills: "The likelihood of gaining strong digital literacy skills on this type of machine [a computer lab] is much slimmer than on a home computer. In other words, learning to use computers at school is like the music education class in which you have forty minutes to hold an instrument in your hands once a week, along with thirty other kids" (p. 37).
Many of the computer programs that students may eventually learn to use will require them to invest dozens, hundreds, or perhaps thousands of hours to master. In addition, individuals who are less familiar with computers tend to be less confident about becoming proficient in new programs (Mohammed, 2007). While a television, a radio, or a "feature" mobile phone may have a short learning curve, the same cannot be said of personal computers, the internet, or smartphones, each of which is complex to a different extent. Digital literacy programs such as RIA can teach a digital immigrant a basic set of skills in 72 hours, but many more hours are needed for complex use of a personal computer or an internet-capable device (http://www.ria.org.mx). Just learning how to type rapidly on a QWERTY keyboard takes many hours of practice.
To consider a project's impact over a longer frame of time, this article encourages the continued evaluation of a program over a number of years, at regular intervals, providing recommendations and reporting on the benefits and drawbacks of the program as it is modified over time. This type of long-term evaluation is best suited to an internal evaluator, or a combination of internal and external evaluators. When thinking of the cost of 1 to 1 programs over time, it is also important to keep in mind the rapid depreciation of technology. Given that depreciation, should 1 to 1 programs focus on purchasing the most up-to-date gadgets and tools? This question is best analyzed through a cost-effectiveness analysis that accounts for the depreciation of technologies.
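One hedged sketch of what the depreciation side of such an analysis could look like, comparing the newest device against a cheaper previous-generation model; the prices and the 35% declining-balance rate are assumptions for illustration only:

```python
# Average yearly cost of ownership under declining-balance depreciation:
# purchase price minus assumed resale value, spread over the years kept.

def yearly_cost(price, years_kept, dep_rate=0.35):
    resale_value = price * (1 - dep_rate) ** years_kept
    return (price - resale_value) / years_kept

for label, price in [("newest model", 500.0), ("previous generation", 300.0)]:
    for years in (2, 4):
        print(f"{label}, kept {years} years: ${yearly_cost(price, years):.2f}/year")
```

Folding numbers like these into the effectiveness side of the ratio would let an evaluator ask whether the newest gadget, or an older and cheaper one, delivers more learning per dollar.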
A Case Study – University of Minnesota One iPad Per Student Initiative
As previously discussed, the evaluation of technology adoption programs has tended to focus on short-term analysis, without sufficiently addressing the implications of adoption over a longer time spectrum. As advanced economies are increasingly fueled by the ownership of patents and new inventions, so too have other countries attempted to develop these sectors (Heshmati & Addison, 2003). The information transferred through ICT can help countries develop into more diverse and sustainable economies. It is through ingenuity, creativity, innovation, or "Mindware," that groups and individuals come together to form new industries and adapt to different types of crises (Cobo & Moravec, 2011). Via technology adoption programs, individuals can increasingly access the information that will help them develop valuable skills. By evaluating with a long-term focus, and incorporating both qualitative and quantitative elements, an evaluation will be better able to address the questions of key stakeholders. This paper illustrates the limitations and strengths of a recent evaluation of a one to one iPad initiative at the University of Minnesota.
One Laptop Per Child – An Evaluation of Peru’s Project
Possibly the most controversial, and also most commonly cited, 1 to 1 initiative is One Laptop Per Child (OLPC), started by Nicholas Negroponte, the founder of the MIT Media Lab (TED, 2008). According to Negroponte, by thinking in bits instead of atoms, and by learning how to operate a computer, a child can learn that the world is increasingly available at the click of a button, and that they can construct and build anything they can imagine by programming new and amazing environments (Negroponte, 1996). Following Papert's constructionism, Negroponte believes that programming teaches an individual how to learn, as they must go back, revisit their code, and figure out why there is a mistake (Papert, 1980). As an ICT evangelist, Negroponte highlighted how simply giving a child a computer would expand the child's possibilities (Negroponte, 1996). Since the beginning of OLPC in 2005, over 2.5 million laptops have been delivered (http://one.laptop.org/about/faq). However, despite the high level of investment, particularly in Latin America, project evaluations have not shown significant gains in achievement scores (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012).
A recent evaluation of OLPC in Peru showed that despite a high level of investment in these new machines (902,000 laptops), which increased the ratio of computers per student from 0.12 to 1.18, student performance in math and reading had not increased substantially. The project did find that students' cognitive skills had improved over the time of the study (measured by Raven's Progressive Matrices, a verbal fluency test, and a coding test). While analysts have since highlighted that the program had only limited effects on math and language achievement (0.003 standard deviations), little emphasis has been given to the potential impact of the improvement in cognitive skills, and perhaps more importantly to what improved digital literacy skills will mean for these individuals in the future, as they are asked to learn other task-specific digital and information literacy skills (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012). As mentioned by Seiter (2008), high-level ICT skills may take many years to fully demonstrate themselves as marketable skills in the lives of students.
It is also difficult to know from the available data whether a different investment would have been more cost-effective or resulted in a higher cost-benefit ratio in Peru. One of the unmet goals of OLPC was to produce a $100 laptop; the laptops currently cost around $200 (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012). As a project not affiliated with Microsoft, Google, or Apple, the OLPC laptops came with an operating system (OS) known as Sugar. While all operating systems share similarities, did the use of the Linux-based Sugar limit or increase the possibilities for students? When testing student computer literacy skills, the evaluators found that the students quickly became more adept at using these devices. As explained earlier in this paper, they also had difficulties in deciding which skills should be tested (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012, p. 15). Unfortunately, another unmet goal of the project was connectivity: Peru's OLPC participants lacked internet access. OLPC was partly designed so that students could benefit from increased connectivity, either through OLPC's exclusive mesh network or through the internet. The impacts of lacking access to the internet are hard to measure; however, they may have affected the individuals' development of their information literacy skills. In conclusion, Peru's evaluation of the OLPC project was very insightful, but while it contained a qualitative element, it had a quantitative focus, limiting readers' understanding of how the initiative affected individuals. As a project that centers on the individual, learning more about a project's impact on the person is increasingly relevant as ICT becomes more personalized. Apart from not discussing potential long-term gains, the evaluation also failed to mention the full cost of the devices. With the laptop itself accounting for only a tenth to a seventh of the total cost of ownership, it is important to consider whether this is a cost-effective investment (Lugo & Schurmann, 2012). The evaluation would have benefited from a broader implementation of mixed methods, particularly on the qualitative side, while also examining these changes over a longer span of time. An element of time that is particularly important to first-year initiatives is the teachers' or instructors' learning curve, as they slowly learn better ways to use the devices and integrate them within the classroom.
University of Minnesota iPad Initiative
The discussion surrounding the digital divide has traditionally centered on access to the internet and a personal computer, yet the rapid change of technologies leads us to question whether the divide will center on these devices in the future (Warschauer, 2008; Zickuhr & Smith, 2012). What role will smartphones, augmented-reality glasses, 3D printers, or, farther into the future, nanotechnology implants play in the digital divide (Kurzweil, 2000)? A current technology that may further displace the purchase of paper books for K-12 and higher education is the e-reader, the most successful of which are the iPads (I, II, and III) and Amazon's Kindle readers. A recent NPD report indicated that tablets may outsell laptop computers by 2016, with sales expanding from 81.6 million units (2011) to 424.9 million units (2017) a year (Morphy, 2012). Will we then measure the digital divide in terms of who has access to an iPad and who does not?
Pilot projects at universities such as the University of Minnesota, the University of San Diego, Oberlin College, and a few others have moved toward answering this question. The first successful tablet, the iPad, was released in April 2010; that same year, the University of Minnesota decided to purchase 447 units to provide a tablet to every CEHD student in the incoming undergraduate cohort, one of the first major initiatives of its type in the country. Because of its uniqueness as an early adoption project, its evaluation was based partly on the conclusions of previous 1 to 1 projects such as the OLPC initiative and Maine's statewide 1 to 1 adoption program. However, as the iPad was substantially different from previous ICT devices, the operationalization of NETS standards for it, and an in-depth analysis of its potential use, had not yet been studied closely (ISTE, 2008). So far, only a few articles have been published regarding the use of the iPad in the classroom (EDUCAUSE, 2011). To better understand the possible educational implications of adopting this device, a CEHD research team decided to conduct a mixed-methods evaluation (Wagoner, Hoover, & Ernst, 2012). In addition, an initial commitment was made to continue evaluating the project for a number of consecutive years. The support of the dean was integral to the continuation of the program.
In the first year, the project's goals were to increase the use of the devices by both faculty and students and to provide aid to faculty members so that they could familiarize themselves with the devices and consider the best ways to incorporate them into their classrooms. Faculty members were then encouraged to incorporate the devices as they saw fit within their syllabi, with various graduate assistants serving as support staff. Soon after the distribution of iPads, evaluators also drafted a post-test and organized a series of interviews. The interviews asked faculty members a number of questions, including how they learned to use their iPads, what their plans were for using them within the classroom, how the iPad had affected their teaching, and whether the support received had been appropriate (from field notes).
A similar set of questions was asked of faculty members at the end of the school year, covering the projects they had actually implemented, students' opinions of e-books, and pedagogical concerns, among other topics. Twenty-two interviews were coded, and themes were developed from the qualitative study, including faculty concerns about time investment, how the iPad compares with other technologies, the impact of the iPad on faculty members' pedagogy and classroom management, and details about faculty members' technology learning process. At the end of the year, a series of faculty focus groups was also conducted. Many of the details learned through the qualitative portion of the study would have been difficult to obtain otherwise. The common elements between the data from the focus groups and the interviews also allowed us to verify some observations. Below is an interesting quote from one of the participating faculty members:
“What I want, in terms of their behaviors, is for [the students] to be active explorers in the classroom, to bring the machines, and to actually utilize them for historical research … One of the things that we did as a first conversation is to describe the level of trust that is going to be involved … and they live up to those expectations. I’ve been really happy so far with what we’re learning. It conveys to them that they’re smart, capable discoverers that we’re co-creating knowledge—historical knowledge” (Wagoner, Hoover, & Ernst, 2012, p. 3)
While the quote above conveys a very positive reaction, this experience would likely not have been visible through an analysis of student achievement alone, illustrating the benefit of utilizing mixed-methods. Two student focus groups were also conducted, in which students shared some of their favorite apps and how they had used the iPad through the semester; yet unlike the faculty, whose entire population the evaluators were able to interview, 447 students were more than the team could interview.
To obtain a broader analysis of the student response, a survey was conducted which included a number of questions related to students' use of and experience with the iPad. The survey was completed by 241 CEHD first-year students (Wagoner, Hoover, & Ernst, 2012). Having access to broader demographic data also allowed the evaluation team to compare student attitudes with socio-economic variables. Various strong correlations and significant relationships were found regarding the impact of iPads on student learning. In particular, the evaluation found that students felt the devices had a positive effect on their motivation; students also expressed a high level of comfort using the devices and reported that the iPad helped them feel more engaged in some of their classes.
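As a sketch of the kind of quantitative analysis such survey data supports, the snippet below correlates a Likert-scale engagement item with a binary program-membership indicator; the variable names and responses are hypothetical and are not drawn from the study's actual instrument:

```python
# Hypothetical analysis: relate a 5-point engagement rating to a 0/1
# program-membership indicator (a point-biserial correlation, computed
# here as a Pearson correlation with a dichotomous variable).
from scipy import stats

engagement = [4, 5, 3, 4, 5, 2, 4, 5, 3, 4, 5, 4]      # invented responses
program_member = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0]  # invented indicator

r, p = stats.pearsonr(engagement, program_member)
print(f"r = {r:.2f}, p = {p:.3f}")
```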
The study also showed that students who were part of Access to Success (ATS) or had been part of the TRIO program, usually students of color or from low socio-economic backgrounds, reported feeling more engaged and connected during classes. From the qualitative data, the evaluators also learned that for some students the iPad had become a window into the internet, and a digital item for their whole household to use.
The success of the first-year implementation, and the questions that evaluators were still unable to answer, led to the continuation of the program for a second and third year. A similar number of iPads (now iPad 2s) was purchased in the second year of the program. Once again the rapid change of technology provided new possibilities for evaluators, as the iPad 2 includes cameras, permitting students to record HD video and hold audio-visual communications with anyone with access to FaceTime, Skype, or other programs. After analyzing the potential savings from the extensive use of iPads for e-reading by some students, CEHD also decided to support a pilot project for the testing and adoption of open textbooks, as well as the establishment of a work desk where faculty members could obtain assistance and build iBooks and ePubs if interested.
The project is now planning its third year. Adapting to the results of the first-year evaluation, many of the questions on the second-year survey were modified to gather additional valuable information. One of the limitations of the evaluation so far has been the lack of a cost-effectiveness or cost-benefit study. Such a study should not only take into account the rapid depreciation of the devices, but also consider whether students are learning, through the use of the devices, skills that could aid them when they join the workforce. While the cost has been high, at over $300,000 per year, it is difficult to assess the long-term benefits for participants (students and faculty members). The rapid devaluation of the devices is an important consideration, as it is possible that in a couple of years these devices will cost only a fifth of their original price while being even more feature-rich and powerful, allowing students to obtain a similar skill set for a fraction of the cost. It is also possible that many of the skills obtained are not very different from those obtained from using other ICTs, reducing the importance of the investment.
Currently, a website is available where individuals interested in the results of the project can learn about the various innovative classroom projects that were developed, how they can be adapted to other classrooms, and suggested best practices. Some of the innovative uses of the iPads by students include the creation of digital stories; access to unique applications, including interactive stories and data visualization; rapid access to websites; and the development of an e-book library. In a report, CEHD concluded that the iPad had been helpful in addressing the concerns of the digital divide: increasing access to the tools needed for media production, increasing access to tools that facilitate personal productivity, improving students' possibilities for information access and consumption, helping reduce the cost of printed readings, and facilitating students' learning outside of the classroom (Wagoner, Hoover, & Ernst, 2012). For year two, the program also hopes to further analyze the usability of the devices, and it recently developed a space for students to submit their creative productions with the iPads.
Despite the insights provided by the use of mixed-methods in this evaluation, the limited timeframe of the study makes it difficult to determine whether or not the program is a worthwhile investment. With the program costing over $400 per student, apart from the cost of administrative staff, is this the best investment for a university to make in terms of technology adoption? When will it be determined that the program is no longer worth its cost and is no longer helping to find innovative ways of learning? One of the limitations of CEHD's 1 to 1 iPad program has been its limited emphasis on the possibilities of the device for informal learning. Some of these concerns will be better analyzed with the data collected from the second-year survey that was recently administered to students. A new wave of interviews and focus groups is also planned for the evaluation of the third year of the program.
With over 500,000 applications available, there are almost endless possibilities for integrating the devices into the classroom, and the production of apps that match the goals of individual users more closely is likely to increase. Because of these devices' future relevance, and the high level of creativity and innovation within this industry, constant evaluation is important, as it allows for the continued improvement of the project. The use of mixed-methods allowed the evaluation team to find many interesting details that the study would not have found otherwise. These details enriched the quality of the findings and provided faculty with valuable information for improving their use of the iPad and for learning how their peers were using the devices.
Conclusion
ICT 1 to 1 adoption projects are difficult to evaluate, and the short-term focus of some evaluations results in a limited view of their potential impact. One of the difficulties in evaluating these programs is a consequence of rapid technological change.