Posts by Sintjago

Doering’s Course – Designing Online Learning – Week 1 Blog

Posted on Sep 10, 2012 in Fall 2012

1.     Why do you want to become an online teacher?

Leaving Latin America was a difficult decision for my family. We felt that our opportunities would be greater in the USA, but that by moving we would no longer be able to contribute to the development and improvement of the country where we were born. I am glad we came to America, and when I am able to I intend to become a U.S. citizen, as my parents recently decided to do. Yet the Internet allows us to contribute to anywhere from anywhere. The Internet and online education can help reduce some of the problems caused by transnational migration, including mitigating the effects of the brain drain. For years, it has been my goal not only to help from a distance and promote the growth of open education, but also to encourage other immigrants to give back, if not in the form of financial remittances, then through knowledge remittances. Online education may help improve the economies of developing states, of impoverished regions throughout the United States, and of the rest of the world. I fully believe that while we are all different, we should all have the opportunity to develop our talents. Online education can reduce some of these traditional barriers by increasing access to education.

2.     What are your concerns about being an online teacher?

That other voices will be heard less, both those of individuals and those of cultures. While open educational programs such as MOOCs greatly increase access, they are currently developed primarily in countries with higher economic standards and strong traditions of innovation. I fully support the exchange of ideas from north to south and center to periphery, but I would like to see a greater transfer of knowledge and information from south to north and periphery to center. This is particularly important when we take into account that roughly 25 native tongues are lost every year (out of 6,000+ languages). When visiting developing regions, such as on my recent trip to the Dominican Republic, it was evident that an underfunded and understaffed educational system has left individuals with little awareness of their national history and has contributed to the globalization of their identity (accompanied by the erosion of their traditional, local identity). The transfer of ideas has been a positive force in human history, but I am concerned about the potential impact that the increased massification of education may have on local cultures and their historical knowledge.

3.     What impact do you believe online learning will have on education in the future?

Online education will continue to grow and will increase competition as more courses are offered to a wider population. As more courses are offered and the cost of schooling per student decreases, there may be an overall increase in the number of college graduates, yet many of them may be unable to find jobs comparable (in financial remuneration) to those that were available before. Online technologies will help increase access, but graduates will likely see smaller financial gains than previous generations of college graduates did. Competition is increasingly global, and online education strengthens this trend. Overall, the increase in the number of students who obtain advanced degrees should result in a stronger national and global economy, a better-informed citizenry, an improved standard of living, and an even faster rate of technological advancement. Yet some of the statistics used to justify high tuition rates (such as a gain of over $1 million in lifetime earnings for those who graduate from a higher education institution) may not necessarily play out as they did before. Some current projections are based on the gains of previous generations, and basing future statistics on what was true before can lead to economic miscalculations. While I am all for increasing access, it may not necessarily result in higher average incomes for graduates, although with a greater number of graduates, society as a whole should improve.


OLPD Tech Redesign Report

Posted on Jun 13, 2012 in Fall 2012

Year One Report (2011-2012) – Copy with Images at: http://z.umn.edu/olpdredesign

OLPD Student-led Technology Redesign

Task Group 


Table of Contents

Vision Statement

Introduction

Creating a Collaborative Space to Promote and Maximize Innovative Technology Use

Innovation Lab: Potential Uses for a Collaborative Space

Proposed Collaborative Independent Study Course Within the Innovation Lab

Technology Showcase

Purpose

Presentations

Logistics

Technology Survey

I. Technology is evolving quickly and classroom culture needs to reflect those changes. What do you suggest for etiquette, best practices, and policies for the use of technology in the classroom?

II. How can technical support be improved? And what topics need the most support?

III. How would you envision a new collaborative space for OLPD students? And what would you like to see in it?

Innovative Uses of the iPad

Useful Links

Technology Proficiency Expectations for OLPD Faculty

NETS for Teachers (2008) – Digital Age Teaching Standards

Internet and Computing Core Certification Requirements

Computing Fundamentals

Key Applications

Living Online

Moving Forward

Useful Links

Continuous Evaluation and Discussion

Conclusions

Recommendations

Prospective Timeline

References

Appendix 1

Appendix 2

Appendix 3


Vision Statement

 

The Department of Organizational Leadership, Policy and Development (OLPD) will be a campus-wide leader in the use of technology to enhance students’ educational experiences, academic praxis, and intellectual pursuits.

 

To achieve this vision, we aim to co-develop a culture within OLPD that anticipates and embraces technological change and encourages the use of existing and emerging technologies to support innovation and constructive collaboration among students, faculty and staff.

Introduction

This report describes the outcomes of the activities of the first student-led OLPD Technology Redesign Task Group (TRTG) during the spring semester of 2012. The purpose of the TRTG was to explore ways that OLPD can use technology to enhance students’ educational experiences (see Appendix 1). In fall 2011, OLPD students were invited to submit applications to participate in a task group charged with “redesigning the use of technologies in the department”. The following five students were selected on the basis of their knowledge and experience of working with technology in educational settings:

 

  1. Kit Alvis, MA Student, CIDE
  2. Andrew Plovanich, B.S. Student, Business and Marketing Education & Human Resource Development
  3. Alfonso Sintjago, Ph.D. Student, CIDE
  4. Tryggvi Thayer, Ph.D. Candidate, CIDE
  5. T. Ann Tyler, Ph.D., EdAD

 

Dr. John Moravec was faculty coordinator and member ex-officio.

 

It has been suggested that we have entered a period of exponential technological change, making it unrealistic to predict with any meaningful accuracy what technological landscapes will look like in the near-term or distant future (Kurzweil, 2006). For this reason, the TRTG adopted a broad definition of “technology” that includes not only information and communication technologies (i.e., computer hardware, software, peripherals, cell phones, etc.), but also essential technologies common to academic learning environments, such as classrooms and furniture (Kelly, 2011). Information and communication technologies, however, as particularly significant drivers of change, are considered to play an especially important role in the department’s technological landscape (Oblinger, 2012). Technological improvements can increase efficiency, transforming the “iron triangle” of cost, quality, and access. As modern technologies increase their influence, it is important to train experts with a strong understanding of technology, development, education, and pedagogical theory (Heeks, 2008; Koehler and Mishra, 2008).

 

The TRTG focused primarily on exploring ways that the department’s students, faculty, and staff can promote and foster innovative uses of modern and emerging information and communication technologies.

The task group focused its efforts around the following five broad themes:

 

  • Collaborative spaces to promote and maximize innovative technology use.
  • How is technology being used by students, staff and faculty in OLPD?
  • New computing/communication/collaboration platforms (iPads, smartphones, social networking, media).
  • Establishing support networks for new technologies.
  • Establishing an Innovation Lab – Opportunities/coursework to promote and foster innovative practices.

 

The task group sought out information relative to these themes through several activities in addition to general discussions. Activities included:

 

  • Exploring existing collaborative spaces (i.e., CoCo co-working space, LT Media Lab – University of Minnesota) – Determining ways that the department can better utilize available space.
  • Technology Showcase – Students, staff and faculty were invited to share and learn about how technology is currently being used in OLPD.
  • OLPD innovation lab – Exploring ways that an innovation lab can be made available to OLPD students and faculty to experiment with technology in collaborative settings.
  • Survey on technology needs – Looking at students’ needs for access to technology and support services.

 

This report is organized around the previously mentioned themes. Each theme was discussed collaboratively, and analyzed in detail by specific group members who intertwined their expertise with relevant publications in policy, technology and innovation. Each section details the task group’s activities relevant to the theme and summarizes what was learned. Finally, the sections conclude with a list of recommendations for further action. The task group’s recommendations are also summarized in the final section.

Creating a Collaborative Space to Promote and Maximize Innovative Technology Use

 

We define a “collaborative space” as a physical area that is conducive to collaboration, experimentation, exploration and knowledge sharing.

 

Our brainstorming sessions on collaborative space and the design of the innovation lab were informed by visits to the University’s Learning Technologies Media Lab (http://lt.umn.edu/) and to CoCo Minneapolis (http://cocomsp.com/locations/minneapolis/), a commercial collaborative work environment in downtown Minneapolis, as well as by a review of the relevant literature on collaboration and collaborative spaces (esp. Oblinger, 2006; Makitalo et al., 2010). The collaborative space will help facilitate a more student-centered and flexible learning experience.

 

Some features of a collaborative space:

 

1. The space is multifunctional – The space can serve a range of purposes depending on the needs or wishes of users at any given time. It can be used for multiple learning experiences and can benefit individuals with different learning styles (Cisco, 2008). The space is a flexible, community-organized site, balancing solitude, sharing, deep thinking, and collaboration (Gee, 2006).

 

2. The space is organic – It can be repurposed to meet a range of needs. Furniture serves many potential purposes and is easily organized and reorganized. For example, rather than a single large “meeting table” that takes considerable space, a number of units can be organized into a large meeting table if and when needed. Subsections of the space can be temporarily modified around particular objectives.

 

3. Information and communication technologies (ICT) are available to fill potential gaps in users’ technology where they exist – Users are expected to have their own computers that they will use. However, technologies like printers, whiteboards and projectors, that users of the space cannot be expected to bring with them, should be made available and easily accessible, along with the software necessary to operate such devices.

 

What our collaborative space is not:

 

1. The collaborative space is not an office for graduate assistants – Collaborative spaces should be readily available to all OLPD students, both undergraduate and graduate. If possible, the space should be accessible all day and year-round, regulated by an appropriate security system. Students using the collaborative spaces should have a reasonable expectation of being able to work independently, in a low-stress environment, without faculty or staff assuming that they are “on call”. Some office-like spaces, such as the cubicles on the 4th floor of Wulling Hall, should remain available for graduate assistants as needed.

 

2. The collaborative space is not a computer lab for the department’s students – Services relating to students’ work for the department and faculty that students can reasonably expect the department to provide, such as computers with specialized, expensive software (e.g., SPSS, NVivo), are not an integral component of the collaborative space.

 

Problem Statement:

 

OLPD students currently lack a designated space where they can get together and work collaboratively on projects. Increasing interaction in student-led projects can improve collaborative skills, leadership skills, and strengthen personal relationships that aid students in their future professional careers. We believe it is important to explore ways to create this space to improve collaborative opportunities for students. We suggest:

 

Recommendations:

 

1. Conduct an audit of currently available space in Wulling Hall and how it can be made to better serve students, faculty and staff:

 

a. Determine what physical space is available in Wulling Hall.

b. Determine who and what groups are competing for space in Wulling Hall.

c. Prepare a cost analysis for repurposing spaces in Wulling Hall (Multiple Floors).

d. Consider how space redesign will be paid for and a timeline for each modification.

 

2. Consider administrative aspects for collaborative space:

 

a. Consider needs for an administrative team/group.

b. Consider the optimal make-up of an administrative team/group and what it should do.

c. Consider steps needed to promote and support collaborative work.

d. Determine when an administrative group needs to be formed (and whether it should take responsibility for planning the space).

e. Consider needs for a security system to safeguard equipment and the space.

 

 

The TRTG had the opportunity to learn about the design of collaborative spaces during its visit to the CoCo Minneapolis Co-working and Collaborative Space. Figure 1 below illustrates the unique collaborative space design at CoCo Minneapolis. In addition, Table 1 and Figure 2, from EDUCAUSE’s (2006) Learning Spaces, highlight the importance of re-envisioning space in the 21st century to encourage collaboration, autonomy, mastery, and purpose. Labor market and social changes require a reconsideration of what is demanded of students and how they are motivated (Pink, 2011).

 

 

 

 

Figure 1. CoCo Coworking space in Minneapolis, MN.

Source: CoCo’s Flickr Images (2010–2012)

 

Table 1. Features of contemporary common spaces.

Source: Brown and Long (2006)

 

Figure 2. Example of reconfigurable space.

Source: Gee (2006)

Innovation Lab: Potential Uses for a Collaborative Space

 

Promoting student leadership and entrepreneurial skills as well as increased student collaboration and maximizing the use of innovative technology.

 

The development of innovative ideas is often the result of an exchange of ideas and of individuals’ intended or unintended collaboration. Without access to information, it is difficult to stand on the shoulders of giants. While a single person may not know how best to address the multiple steps of a complex challenge, by encouraging students to share their ideas with other students and faculty, abstract concepts can be prototyped, tested, refined, and implemented. Rather than solving problems individually, the lab will encourage students to organically organize into teams and intertwine their interests, ideas, skills, and objectives. Successful innovation often requires multiple roles and areas of expertise (Kelly and Littman, 2005). To further innovation, promote great ideas, and solve current and future problems, it is important to provide a space that allows for the blending of thoughts, skills, and passions (Johnson, 2010).

 

A Virtual and Physical Bulletin Board:

 

We consider it important for students to regularly think outside the box and to have a way to make their ideas known through an open bulletin board, allowing them to potentially find interested peers. Finding one or two peers to collaborate with on a project may mean the difference between a successful start-up and an unfulfilled promise. Many writers, innovators, and entrepreneurs have benefited from collaborating with like-minded individuals or taking part in a mutually beneficial relationship. Openly sharing an idea allows it to be analyzed, criticized, and modified to better address its intended objectives (Kelly and Littman, 2005). We hope to encourage collaboration by placing a physical bulletin board within the collaborative space or in a common area within Wulling Hall. The purpose of such a bulletin board is to encourage students to share ideas at different stages of development and to find co-innovators to partner with in addressing a challenge. As an OLPD initiative, only OLPD students will be able to post new ideas, while team members can include any student within the university, encouraging inter-departmental collaboration.

 

The bulletin board will also include a virtual component where students can post, or further develop, ideas at any time. The virtual bulletin board will also make it possible to search through a database of interesting ideas that have yet to be further developed. The innovation lab and the collaborative space will not be limited to the physical space of Wulling Hall; we hope that they develop into a network that extends far beyond students’ time within the department. For privacy and copyright reasons, the virtual board will be password protected. A student-moderator, most likely a member of the collaborative space administration, will review students’ proposals and decide whether they violate the rules of the bulletin board. Ideas will not be discarded based on preference or feasibility; rather, students will be encouraged to think beyond the limitations of current conventions.
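
As a concrete illustration only, the posting, moderation, and search behavior described above could be modeled roughly as follows. This is a minimal Python sketch; all class and method names are hypothetical, since the report does not specify an implementation.

```python
# Hypothetical sketch of the virtual bulletin board's workflow.
# Names (Idea, BulletinBoard, post, moderate, search) are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Idea:
    title: str
    description: str
    author: str                 # only OLPD students may post new ideas
    anonymous: bool = False     # ideas may be submitted openly or anonymously
    approved: bool = False      # set by the student-moderator
    collaborators: list = field(default_factory=list)  # any U of M student may join


class BulletinBoard:
    def __init__(self):
        self._ideas = []

    def post(self, idea: Idea):
        # New ideas are held until the student-moderator reviews them.
        self._ideas.append(idea)

    def moderate(self, idea: Idea, ok: bool):
        # Moderation only checks the board's rules, not preference or feasibility.
        idea.approved = ok

    def search(self, keyword: str):
        # Search the database of approved ideas awaiting further development.
        kw = keyword.lower()
        return [i for i in self._ideas
                if i.approved and (kw in i.title.lower()
                                   or kw in i.description.lower())]
```

For example, a posted idea would not appear in search results until the moderator approves it, mirroring the password-protected, moderated workflow described above.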

 


Figure 3. Three innovation boards at three institutions.

 


 

Benefiting from a Collaborative Space:

 

A technologically enhanced, organic, multifunctional space will encourage and enable students to brainstorm, discuss ideas, and test and further develop projects. While the laboratory will not provide personal computers to its users, it will include technologies that help promote students’ creative thinking and imagination. Among these tools would be projectors and other technologies that facilitate the explanation of complex concepts and ideas and help develop important media literacy skills. For example, the TRTG discussed the potential of acquiring a “Padzilla” so that students can explore the potential of a table-sized touchscreen tablet device (http://crunchylogistics.com/portfolio/padzilla-70-inchipadiphonecase/).

 

Various other technological gadgets and innovative tools will also be available for testing. The lab will also include a list of literature resources on the innovative process. The innovation lab will additionally benefit from the flexibility that the furniture of the collaborative space will provide. A new technological device may, in a few years’ time, become a common classroom technology.

 

Innovation Workshops

 

The innovation lab will host a series of workshops to discuss topics such as the innovative process, the importance of innovation for society and education, and emerging innovations that have the potential to transform or modify education throughout the world. Students can also benefit from an “idea” series, or brown bag sessions, where participants will be invited to convey their ideas effectively to their peers in 10–20 minutes. The recent success of the Technology, Entertainment, and Design (TED) Conference in the diffusion of innovative ideas and information is an example of the increased importance of video and of learning to convey ideas effectively through multiple media formats (Prensky, 2009).

 

In addition, the innovation lab will provide a setting for a semester-long innovation course that will allow students to invest additional time in co-produced projects. The course will be student-led under the supervision of a faculty member. All participating students will alternate as co-instructors, emulating the different faces of innovation (Kelly and Littman, 2005). Approval by students’ advisers will be required for course participation. The course will closely resemble an independent study, but will emphasize co-production and collaboration.

 

Recommendations for next steps:

1. Survey students about their interest in an innovation lab and course, including preferred format (brief intensive courses or semester-long courses, separate graduate/undergraduate or mixed groups, etc.).

2. Compile descriptions of administrative needs for innovation labs, workshops, and courses.

3. Compile descriptions of administrative needs for the physical and virtual bulletin boards.

4. Produce an action plan for implementing innovation labs based on students’ interests.

5. Produce a report of financial needs for the collaborative space and innovation lab.

6. Produce a proposal for the syllabus/academic needs of the innovation lab (including intended audience, faculty, and visiting instructors/facilitators).

 

 

 (Click here to see a video describing the Innovation Lab: http://www.youtube.com/watch?v=51sBkmSBN3U)

 

Proposed Collaborative Independent Study Course Within the Innovation Lab

The course described here will allow students to gain valuable experience as part of a student-led group project. While students are sometimes able to join departmental projects, the number of opportunities is limited, and it is often difficult for students to find the time to invest in student-led group projects while completing their coursework and independent projects. Working on collaborative projects is an important 21st-century skill. By encouraging the use of the collaborative space for group projects, students will be able to fulfill their academic requirements while improving a number of valuable skills for the workplace of tomorrow, including project management, conflict resolution, evaluation, and collaborative work.

 

Credit Hours: 1 (Optional)

Number of Students: 5 to 10

Course Grading: Pass or Fail (No A through F)

Prerequisite: Adviser’s Approval

Developed in Collaboration with Isaac Bolger (M.A. CIDE)

 

CEHD students learn about a wide number of innovative ideas within the field of education. There are, however, few opportunities for students to go beyond the textbook and develop projects that can impact the local and global community during their studies at the University of Minnesota. This course will provide CEHD students an opportunity to expand on their innovative ideas in collaboration with other students within the school of education. They will develop projects relating to their interests and contribute to other projects being developed by their peers. With a very capable, highly educated, and self-motivated group of students, this laboratory will serve as an empowering space where individuals learn to develop, collaborate on, and implement innovative projects, transforming and improving society.

The course will promote the regular discussion of ideas. The physical and virtual bulletin boards discussed previously will provide a forum where ideas can be visualized and discussed by all the course members. Through the bulletin board, students will be able to share their innovative ideas and obtain feedback and support for their concepts from other students. Everyone will be encouraged to promote various ideas throughout the course of the semester. Students will be able to submit these ideas either openly or anonymously. As ideas are explored and refined, the most captivating ideas each year will be co-pursued. Students can either choose to collaborate on a particular project or use the space to develop their own unique idea. New flexible learning possibilities (see Figure 5) transform the traditional conceptualization of learning spaces and allow for the incorporation of audio-visual production technologies and augmented reality capabilities (Ally, 2009).

This course will be located in Wulling Hall, where students will have access to examples of educational technology available on the market (such as the Kno, Nook, Kindle, $100 laptop, etc.). Students may occasionally meet off campus to visit other innovative learning spaces. The space will include whiteboards, bulletin boards, conference tables, and small workspaces. There will be areas where students can pin up photos, physical articles, and other physical objects related to education. There will also be a powerful computer set up to provide access to innovative and thought-provoking educational software. Through the use of collaborative web services, such as Moodle, Ning, and Facebook, students will be able to communicate and discuss ideas, not only within the confines of the physical laboratory, but with people anywhere in the world. Innovative ideas will be prototyped every year, and successful ideas will be continued and expanded after the end of the semester. Particular attention will be given to national and international educational innovation competitions. The class will meet 2 hours/week for 7 weeks and can be re-taken (for up to 3 credits) by students wishing to undertake larger projects. Students will not be required to sign up for credit to participate; they will, however, be required to apply for admission to the laboratory.

 

Figure 5. Flexible Learning Network

 

Source: Peters, 2007

 

Some of the topics that may be discussed include:

 

  • the impact of incentives on students’ motivation,
  • the growing use of open educational resources (OER),
  • the “gamification” of education,
  • rapid growth and spread of online education,
  • growing use of blended learning,
  • one computer (tablet, laptop) per student initiatives,
  • the implications of cloud computing,
  • intersections between formal, informal, non-formal and serendipitous learning,
  • mobile learning,
  • invisible learning,
  • the impact of innovative design,
  • diversity within the charter school movement,
  • the movement towards a personalized education,
  • increased access to educational materials outside of traditional education settings, and its implications for lifelong learning and formal schooling.

 

There are numerous other innovative education policies currently being discussed and implemented in different communities and countries across the world that can be added to this list. Students will be evaluated based on their attendance, their contributions to the innovation lab, and on a final report and presentation.

 

Technology Showcase

 

Technology Redesign Symposium: Event Summary

Department of Organizational Leadership, Policy, and Development (OLPD)

 

Like the flame of Franklin’s candle, both ideas and their expressions can now be given without being given away. This ability to give expressions of knowledge without giving them away provides us with an unprecedented capacity to share—and thus an unprecedented ability to educate. (Wiley, 2010) – Openness as Catalyst for an Educational Reformation

Purpose

Digital technology offers numerous opportunities for collaboration, creativity, and academic assistance. Products evolve and new applications are created at a rapid pace. The Technology Redesign Symposium offered an opportunity for OLPD students and faculty to share what they have accomplished with digital technology in their scholarly, intellectual, and creative endeavors. The primary purpose of this event was to highlight the many ways that students, faculty and staff within the department currently use technology. Additionally, this symposium facilitated conversations around collaborative spaces that support technology and best practices within the OLPD community. By sharing our ideas and experiences virtually or in a physical space we can contribute to the transfer of helpful information and the improvement of society.

Presentations

Students and faculty were invited to share their accomplishments with digital technology. The categories of presentations and presenters were:

 

  • Moodle – Lou Quast
  • Citation Tools, Zotero – Heidi Eschenbacher
  • Choosing a Platform for Online Focus Groups – Joe Wohkittel
  • Online Focus Groups – Alison Link and Patrick O’Leary from Dr. Krueger’s class
  • Authoring and Publishing iBooks – John Moravec
  • Top 10 iPad Apps for College Students – Andrew Plovanich
  • Mobile Devices and Informal Learning – Alfonso Sintjago (See slides in Appendix 2)
  • Research Presentation on iPads in Secondary School Classrooms – Kit Alviz
  • Google+ Hangout – Kit Alviz

 

Students, staff, and faculty of OLPD were invited to view presentations and speak individually with presenters to learn about the technology tools and practices at the symposium. Approximately 40 individuals responded to email invitations, flyers, and other advertisements about the symposium to share a technology-rich learning experience. The event’s structure allowed for a close and mutually beneficial exchange between presenters and attendees. It would be beneficial to continue hosting this showcase on a yearly basis.

Logistics

 

The Technology Redesign Symposium took place on Friday, March 23, 2012 from 2-4PM in the Science and Technology Student Services (STSS) building, room 420B. This room provided an ideal environment for a technology showcase due to the round table working stations, ample electrical outlets, and hook-ups to flat screen televisions. Presenters brought their own devices to set up on a working station.

 

The Office of the Dean of CEHD generously contributed a new iPad to be raffled off at the end of the technology symposium. The winner was Nicole Murray (B.A. student in BME, OLPD).

 



Technology Survey

 

Three questions were posed during the Technology Redesign Symposium and again in the evaluation of the event.

I. Technology is evolving quickly and classroom culture needs to reflect those changes. What do you suggest for etiquette, best practices, and policies for the use of technology in the classroom?

 

Participant responses fell into two categories:

 

A. Establishing Norms: Technological devices can be a wonderful asset to a student, but they can also be distracting. Thus, norms or rules of engagement about the use of technology need to be established between the students and the professor.

 

B. Professor’s Responsibility: Most participants responded that professors should incorporate or continue to utilize technology in pedagogy for a variety of reasons: to make the classroom more interactive, to promote dialogue, and to prepare students to be successful in their future work environments. One interesting comment was that technology should appear naturally in the classroom and not be forced. Participants also recognized that finding the time to learn about and attend trainings on technology is an obstacle for professors.

II. How can technical support be improved? And what topics need the most support?

 

The participants offered a very wide variety of responses to this question. Some responses were about learning opportunities, while others were more specific requests.

 

A. Technology learning opportunities:

Participants would like more opportunities to learn about applications and programs, specifically in small training sessions. Trainings for professors should continue and should focus on establishing the value of technology: how can technology improve learning or the classroom environment?

 

B. Specific recommendations:

-Help students with organization via citation managers

-Help professors transform online learning to become more interactive

-Have a scholarship search through OneStop

-Have a supply of Mac chargers on reserve for graduate assistants

-Establish an event calendar

 


 

III. How would you envision a new collaborative space for OLPD students? And what would you like to see in it?

 

Participants want to feel a sense of community: a place on campus that is their own, where they can work, meet their peers, and collaborate with their colleagues. Large round tables, private work areas, an announcement board, coffee machines, fridges, and Internet access are some of the things they want in such a space. Participants like the idea of having a student-led space to share resources and referrals, work together on technology issues, and provide peer counseling and advising.

 

 

Innovative Uses of the iPad

“Education is deep in Apple’s DNA” – Quote from Apple Education Event

Philip W. Schiller – Introducing iBooks 2 and iTunes U (January 19, 2012)

 

There are many uses for iPads in the classroom. They can replace paper textbooks, and advanced apps can make schoolwork more efficient and effective (Educause, 2011). In combination with Apple’s iTunes U, which allows instructors to upload audio and video to iTunes, students’ access to lectures and other materials is greatly increased (http://www.apple.com/education/itunesu/). There are currently over 500,000 free lectures, books, videos, and other resources in iTunes U, exploring thousands of subjects.

 

CEHD’s Open Textbook Catalog (https://open.umn.edu/opentextbooks/) demonstrates the growing interest in, and availability of, digital textbooks. Digital textbooks can potentially lower costs and are convenient for students: a student can carry all of their textbooks and notes on an iPad, rather than only the few that weight and size would otherwise allow. Students can effortlessly print out lists of quotes and use embedded interactive infographics, audio-visuals, and social elements. For an example of what is now possible with digital textbooks, visit: http://ourchoicethebook.com/.

 

The iTunes store contains numerous apps useful for students, e.g., for taking notes, storing and transferring files, reading books and articles, making and viewing presentations, recording audio, and much more. Novel student uses of available apps include photographing notes on boards in class, syncing notes to an account to make them available on other computers and devices, and recording lectures to be played back later for optimal studying. Other applications use gamification concepts to increase student motivation and engagement (Bobo Explores Light – http://goo.gl/wdqif).

Like students, faculty can also benefit considerably from using iPads in their work. It is expected that by 2015 most access to the Internet will be from mobile devices (IDC, 2011). As previously stated, faculty are able to use iTunes U to upload audio and video for students to access outside of the classroom. iTunes U course materials can be shared privately or publicly; publicly shared courses benefit students and scholars worldwide and increase visibility for the university and the faculty (Bonk, 2009; Walsh, 2010). For faculty who choose to take attendance, applications have been developed to track this. Other faculty uses of iPads include lecture planning, recording lectures, and reading scholarly articles. In addition, innovative uses of the iPad have been documented by the CEHD iPad Mobile Learning Initiative, illustrating their potential benefits for data visualization, storytelling assignments, instant surveying, photo-essays, etc. (http://www.cehd.umn.edu/mobile/Projects/).

Through our personal experience, research, and CEHD investigations, we believe that careful consideration of how iPads can be used in education will help OLPD redesign how technology is used within the department in general. Outcomes of CEHD iPad research (see “Useful Links” below) are very useful in this regard. Additionally, the TRTG has outlined a draft framework for evaluating iPad apps for use within the department (see Appendix 3).

 

Using various whiteboard, flashcard, and other educational apps, we have found easier and more efficient ways to study. With over 500,000 applications for iOS, over 200,000 of them for the iPad alone, this technology allows students to create effective personalized learning networks and join relevant virtual learning communities.

 

Useful Links:

 

CEHD iPad Research Findings – http://www.cehd.umn.edu/mobile/About.html

EDUCAUSE iPad Related Articles – http://www.educause.edu/Resources/Browse/iPad/37605

PEW – The Rise of E-Reading – http://libraries.pewinternet.org/2012/04/04/the-rise-of-e-reading/

Technology Proficiency Expectations for OLPD Faculty

Increasingly, people access information and educational materials through the Internet. The quality of available information and the extent of document digitization are growing. Journals, books, newspapers, and other information providers are increasingly embracing digitization, not only as a form of diversification, but as a way to increase their influence and relevance among their target audiences. ICT greatly decreases the cost of duplicating and transferring information (Friedman, 2007). Most courses today provide access to e-readings, and as digitization continues this trend will likely strengthen. The high use of technology by youth can act as a catalyst for faculty members to prioritize understanding their students’ preferences and technology-related behaviors.

 

With the average youth texting over 100 times a day, and many of them accessing most of their media through the Internet and working collaboratively on documents online, familiarization with these technologies can help improve the learning experience for current and future students (Smith, 2011). The International Society for Technology in Education (ISTE) has regularly revisited lists of skills that students (NETS-S) and faculty members (NETS-T) would benefit from strengthening (ISTE, 2008). We support this assertion and also encourage consideration of the eventual development of a basic set of required skills. As illustrated in Figure 6, the NETS-T focus on broad skills development and emphasize constant change, yet they are conservative in failing to emphasize the increasing speed of these changes.

 

Figure 6. Teaching skills in the digital age.

Source: ISTE-T, 2008.

 

 

NETS for Teachers (2008) – Digital Age Teaching Standards

 

ISTE (2008) recommends that faculty “facilitate and inspire student learning and creativity” (ISTE, 2008, Pg. 1) by promoting innovative thinking, exploring real-world issues, using collaborative tools, and modeling collaborative knowledge construction by engaging students in face-to-face and virtual environments. It also recommends that faculty “develop digital-age learning experiences and assessments” (ibid, Pg. 1) by incorporating digital tools and resources, enabling the pursuit of individual curiosity, addressing students’ diverse learning styles, and providing students with varied formative and summative assessments.

 

Furthermore, ISTE (2008) recommends that faculty “model digital-age work and learning” (ibid, Pg. 1) by demonstrating fluency in technology systems and adaptability, using digital tools and resources to support student success, communicating effectively with students through a variety of digital-age media formats, and demonstrating to students how to find and use information effectively. Faculty should also “promote and model digital citizenship and responsibility” (ibid, Pg. 1) by teaching and advocating the ethical use of digital information and technology, using learner-centered strategies while providing equitable access, promoting and modeling digital etiquette, and developing and modeling cultural understanding.

 

Finally, ISTE (2008) encourages faculty to “engage in professional growth and leadership” (ibid, Pg. 2) by participating in global and local learning communities, demonstrating a vision of technology infusion, and reflecting on current research and professional practice, while contributing to the effectiveness and self-renewal of the teaching profession. The elements highlighted in the NETS-T document are strengths that we hope current faculty will continuously improve and that should be expected of future faculty. As ISTE (2008) recommends, students would also benefit from developing the various ICT-related skills highlighted in the NETS-S publication.

 

A competency test currently required of some educators that could be considered as a template for requirements of OLPD faculty and staff is the Internet and Computing Core Certification (IC3). Its broad standards are included below. We hope that faculty members respond positively to these suggestions and consider requiring the equivalent of a certificate of technological competence of the Department’s teaching staff within a number of years of obtaining a position. In collaboration with CEHD Academic Technology Services and the Office of Information Technology, faculty members are able to attend a number of free courses to help them improve their ICT skills (http://www.oit.umn.edu/training). Technology changes quickly and is expected to continue changing. Because technology holds a strong appeal for youth, mastering it can help captivate students and help them learn more effectively.


Internet and Computing Core Certification Requirements

Computing Fundamentals

 

Computer Hardware – Identify types of computers and the functions of their components. Perform basic troubleshooting of computer hardware and identify key factors when purchasing computer equipment.

Software – Identify different types of software, how software relates to hardware, and how software is upgraded.

Using an Operating System – Identify the basic functions and problems of an operating system. Know how to operate a Mac, a PC, and a mobile device, and how to install and remove programs.

Key Applications

 

Common Program Functions – Know how to start an application and find its help documentation. Perform common editing and printing functions.

Word Processing Functions – Format documents, including document commenting, the creation of tables, and auto-formatting tools.

Spreadsheet Functions – Create and modify data within a worksheet. Sort data and use basic formulas, functions, and graphic options.

Presentation Software Functions – Create and edit basic presentations. Become familiar with different presentation tools currently used by educators.

Living Online

 

Networks and the Internet – Identify key benefits and risks of computer networks. Understand the relationship between computer networks and other communication networks.

Electronic Mail – Become familiar with the basic elements of “netiquette” and with the basic functions of email applications.

Using the Internet – Become familiar with web browsers and web applications, including blogs, wikis, apps, RSS feeds, and search engines.

Using Mobile Devices* – Identify the benefits of texting and mobile devices. Become familiar with the educational possibilities of mobile technologies. (Added by the TRTG)

Moving Forward

 

The ISTE standards and IC3 certification are shared as recommendations of what could be expected of incoming faculty and as competencies that could be strengthened by the current faculty of OLPD. The effective use of technology can improve the efficiency and effectiveness of teaching and learning. In addition, these are skills that students increasingly expect of their instructors and that the future job market will require of students themselves.

Useful Links:

 

ISTE – Effective Teacher Model –

http://www.iste.org/standards/netsforteachers.aspx

http://www.iste.org/Libraries/PDFs/NETST_Standards.sflb.ashx

 

Internet and Computing Core Certification – IC3

http://www.certiport.com/Portal/

 

Continuous Evaluation and Discussion

The rapidly changing nature of technology and its broad impact on society and education encourage us to remain attentive to current technological developments and to continually evaluate their potential relevance to and impact on OLPD and on local and global education futures. As illustrated by the Technological Pedagogical Content Knowledge (TPACK) framework, for students to benefit most from a learning environment, an instructor must utilize current and effective technological, pedagogical, and content skills (Koehler & Mishra, 2008; Mishra & Koehler, 2006). Balancing the complex relationships between content, pedagogical, and technological knowledge can improve the pedagogical practice of an instructor.

 

Technology often poses a “wicked problem” (Koehler & Mishra, 2008, Pg. 3), as it tends to change at a faster rate than content and pedagogical knowledge. Flexibility and an understanding of the “affordances and constraints” (ibid, Pg. 6) of various technologies can improve the design of a module. Instructional designers must consider each technology’s propensities and biases, whether it is an old technology such as the blackboard or a newer one such as Twitter, a microblogging service recently named the top Internet-based learning tool in an annual poll by the Centre for Learning and Performance Technologies (http://www.c4lpt.co.uk/).

 

The rapidly changing nature of technology supports the continued evaluation of OLPD’s adoption and use of technology. Focusing on program improvement, we believe that a yearly formative evaluation, including an annual data collection process, will provide the department with the information it needs to assess how technology needs are being addressed, how satisfied students are with the level of technology application and support, and how the initiative is contributing to the preparation of students for their future professional careers. A formative evaluation allows for program improvement (Fitzpatrick et al., 2011). As the importance of technology is not expected to wane, it is critical to invest resources in the continued improvement of its utilization.

 

A summative evaluation of specific initiatives may be conducted after the first couple of years of a particular project (the innovation lab, the adoption of open textbooks, a one-to-one tablet adoption program, etc.), helping to determine which of the initiatives suggested by this task force and other OLPD-specific ICT initiatives should be continued, restructured, or terminated. Summative evaluations can be conducted on a biannual basis, utilizing both qualitative and quantitative methodologies. This evaluation component can be student-driven, providing OLPD students an opportunity to apply evaluation skills. Specific questions for these evaluations are currently being formulated. We expect the collaborative space, the innovation lab, the technology showcase, and the various technology seminars to improve over time as a result of these evaluations.

 

The first year of a program’s implementation may face unexpected circumstances as concepts and ideas are transferred from planning to practice. An ambitious but realistic timetable for these changes will benefit the initial success of the project. Feedback from an internal formative evaluation should result in continuous improvement. Yearly questions may include: What elements of the program are currently working most successfully? What elements need further improvement? What concerns of OLPD’s student body were not addressed? How can elements of the program be improved with a limited infusion of funding (cost-benefit analysis)? Other questions may measure student satisfaction with these initiatives and with the provisions of the collaborative space. As Figure 7 illustrates, the project will re-adapt itself to the needs of OLPD students.

 

Figure 7. Iterative evaluation process.

(UW-Madison – Technology Solutions for Teaching and Research)

 

The student-led evaluation will utilize diverse methodologies, including focus groups, interviews, surveys, and observations. An analysis of the long-term cost-effectiveness of the project components would also be beneficial in evaluating this initiative (Levin & McEwan, 2000). Increasing student achievement and satisfaction, contributing to the development of collaborative and independent research projects, establishing a collaborative culture, and helping to better prepare students for the job market are at the core of this project. We also hope that, through the project workshops and the available equipment, students will have opportunities to improve their digital and information literacy skills while working with cutting-edge technologies and methodologies. Given the growing importance of information and communication technologies, this will help improve students’ capacity to use technology effectively during their graduate studies while developing the skills that will help them succeed in a rapidly changing and technologically driven future.


Conclusions

There is a remarkable abundance of information and communications technologies, and of other technologies, available to individuals in OLPD. Staff and faculty generally have access to computers (desktops, laptops, and tablets) through the department, while students are well equipped with their own personal devices. The department also has access to a range of peripheral devices and services, such as network services, printers, and a couple of whiteboards. In addition, the University of Minnesota provides students, faculty, and staff a range of support services related to the everyday use of such technologies. What is absent within the department is support for, and opportunities to explore, novel uses of existing and emerging technologies, which are generally not covered by existing support services within the department or the university. The TRTG therefore chose to focus its attention on ways that the department can promote active innovation and the sharing of experiences and existing knowledge about effective uses of technology.

 

The TRTG’s Technology Showcase event revealed considerable interest in using technology in novel ways and demonstrated excellent examples of how students, faculty, and staff are using technology. Currently, however, whether technology is used in novel ways depends heavily on an individual’s personal interest and technological prowess. There are few opportunities for individuals who are experimenting with technology to share their experiences with colleagues, and even fewer opportunities to engage in collaborative exploration of the new possibilities afforded by technology and to develop the skills needed to realize them.

 

The lack of opportunities to explore novel uses of technology is due, in part, to the organization of the physical space available to students, faculty, and staff within the department, which is not conducive to collaborative activities. Meeting rooms are small and need to be booked well in advance to ensure that activities will not be disrupted. Large blocks of space have been arranged in a cramped and compartmentalized manner, in particular the graduate assistants’ cubicle spaces on the 4th floor of Wulling Hall, which are underused. Available technologies are spread among meeting rooms and classrooms with seemingly little thought to how they might be used: some meeting spaces have no projector, only one meeting room has an interactive whiteboard, and students’ access to the department’s printers is very limited. Consequently, there is little chance for groups to engage in collaborative activities in the department’s currently available space, especially impromptu, unstructured activities, because any such endeavor must be planned in advance and in considerable detail to ensure access to all needed resources.

 

In light of insufficient support services for innovative uses of technology in OLPD and the university as a whole, the TRTG is of the view that the department should work to harness the abundance of enthusiasm, experience, and knowledge present in its students, faculty, and staff. The department can use resources already available to create an environment that encourages and incentivizes the sharing of knowledge and experience, for example by considering how available space may be reorganized to be more conducive to collaboration and by offering opportunities to participate in creative innovation labs as part of students’ coursework.

 

Recommendations

The TRTG’s recommendations, as they relate to each theme explored by the group, are listed at the end of each of the preceding chapters of this report. In addition, the TRTG strongly recommends that there be an ongoing student-led technology task group. OLPD students bring a broad range of experiences and knowledge to the department; many have worked with technology in various educational settings and have a keen sense of the possibilities offered by current and emerging technologies. In the near future, one of the primary tasks of the TRTG will be to follow up on the recommendations in this report. Future task groups will also continue to monitor technological developments, consider their usefulness for OLPD, and make recommendations to the department when such needs arise. Future iterations of the TRTG will be supervised by a faculty member and will work in close collaboration with relevant institutes within the department, in particular the Leapfrog Institutes and the Jandris Center for Innovative Higher Education.

 

Many of the recommendations made in this report will require significant planning and action over the next year. However, some can be acted on immediately, in particular the recommendations regarding the use of available space in Wulling Hall. The TRTG suggests that considerations concerning the use of the space and technology already available to OLPD students begin in Summer 2012, with the aim of having at least one area within Wulling Hall available to students that incorporates some of the elements of a collaborative space as described in this report.

 

Prospective Timeline

Summer 2012

 

  • Begin transformation of graduate assistants’ area in 410 Wulling Hall to a collaborative space under the leadership of CIDE faculty and students

○      Discussion of OLPD space use and utility by students

○      Consideration of layout and furniture possibilities

 

Fall 2012

 

  • Appoint new TRTG
  • Ongoing discussion of OLPD space use and utility by students
  • TRTG to meet with representatives of the Leapfrog Institutes and the Jandris Center to explore possibilities for collaboration
  • Identify possibilities for funding and develop a budget
  • TRTG to work with at least one faculty member to develop an innovation lab and course
  • TRTG to develop an improved Technology Showcase based on participants’ experiences of the Spring 2012 showcase
  • TRTG to work with faculty to pilot an initiative to increase the use of mobile technology in OLPD

 

Spring 2013

 

  • Remodeling and purchase of equipment for the collaborative space
  • First offering of a student-led collaborative study course
  • Start of the mobile technology graduate student pilot project
  • OLPD Technology Showcase – 2nd Year
  • Survey and additional data collection about recent changes

 

Summer 2013

 

  • Formative evaluation report to OLPD faculty on the activities carried out by the TRTG during the 2012–13 academic year
  • Completion of remodeling of collaborative space
  • Planning for TRTG 3rd year (2013-2014)

References

Ally, M. (2009). Mobile Learning: Transforming the Delivery of Education and Training. Edmonton, AB: Athabasca University Press.

 

Bonk, C. J. (2009). The World Is Open: How Web Technology Is Revolutionizing Education. New York, NY: Jossey-Bass.

 

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program Evaluation: Alternative Approaches and Practical Guidelines (4th ed.). Upper Saddle River, NJ: Prentice Hall.

 

Friedman, T. L. (2007). The World Is Flat 3.0: A Brief History of the Twenty-first Century. New York, NY: Picador.

 

Heeks, R. (2008). ICT4D 2.0: The next phase of applying ICT for international development. Computer, 41(6), 26–33.

 

IDC. (2011). IDC: More Mobile Internet Users Than Wireline Users in the U.S. by 2015. Framingham: IDC. Retrieved from http://www.idc.com/getdoc.jsp?containerId=prUS23028711.

 

ISTE. (2008). NETS·Teachers. Washington D.C.: ISTE.

 

Johnson, S. (2010). Where Good Ideas Come From: The Natural History of Innovation. New York, NY: Riverhead Trade.

 

Katz, R. N. (2008). The Tower and the Cloud: Higher Education in the Age of Cloud Computing. Washington D.C.: EDUCAUSE.

 

Kelley, T., & Littman, J. (2005). The Ten Faces of Innovation: IDEO’s Strategies for Defeating the Devil’s Advocate and Driving Creativity Throughout Your Organization. New York, NY: Currency / Doubleday.

 

Kelly, K. (2011). What Technology Wants. New York, NY: Penguin.

 

Koehler, M. J., & Mishra, P. (2008). Introducing TPCK. In J. A. Colbert, K. E. Boyd, K. A. Clark, S. Guan, J. B. Harris, M. A. Kelly, & A. D. Thompson, Handbook of Technological Pedagogical Content Knowledge for Educators (pp. 1–29). New York, NY: Routledge.

 

Kurzweil, R. (2006). The Singularity Is Near: When Humans Transcend Biology. New York, NY: Penguin.

 

Levin, H. M., & McEwan, P. J. (2000). Cost-Effectiveness Analysis: Methods and Applications. Thousand Oaks, CA: Sage Publications.

 

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

 

Oblinger, D. G. (2006). Learning Spaces. Washington D.C.: EDUCAUSE.

 

Oblinger, D. G. (2012). Game Changers: Education and Information Technologies. Washington D.C.: EDUCAUSE.

 

Peters, K. (2007). m-Learning: Positioning educators for a mobile, connected future. IRRODL, Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/350/894.

 

Pink, D. H. (2011). Drive: The Surprising Truth About What Motivates Us. New York, NY: Riverhead Trade.

 

Prensky, M. (2009). Why YouTube matters: Why it is so important, why we should all be using it, and why blocking it hurts our kids’ education. On the Horizon, 17(2), 124–131.

 

Smith, A. (2011). Americans and Text Messaging. Washington D.C.: Pew Internet & American Life Project.

 

Sutter, J. D. (2012, January 19). Apple: School should center on the iPad. CNN Tech. Retrieved from http://articles.cnn.com/2012-01-19/tech/tech_mobile_appleibooks-2_1_tabletcomputeripadtextbookindustry.

 

University of Wisconsin – Madison. (2012, June 1). Evaluate Effectiveness. Retrieved from Technology Solutions for Teaching and Research: http://academictech.doit.wisc.edu/evaluate

 

Walsh, T. (2010). Unlocking the Gates: How and Why Leading Universities Are Opening Up Access to Their Courses. Princeton, NJ: Princeton University Press.

 

Wiley, D. (2010). Openness as Catalyst for an Educational Reformation. EDUCAUSE Review, 14–20.

 

 

 

Appendix 1.

OLPD Technology Redesign Task Group

 

Charge statement

 

The Department of Organizational Leadership, Policy, and Development is a leader in advancing knowledge about educational and organizational change in local, national, and international contexts. Our research, teaching, and outreach reflect a commitment to interdisciplinary and intercultural engagement with educators, scholars, and policy makers seeking to enhance leadership, policy, and development around the globe.

 

The College of Education and Human Development and OLPD are committed to supporting technology-enhanced learning. To this end, OLPD is appointing a five-member Technology Redesign Task Group to think creatively about how to use technology to enhance the educational experiences of OLPD students. In addition, the department’s technology innovations coordinator is appointed to serve as an ex-officio member of the task group.

 

Objective

 

To create a list of actionable recommendations that OLPD may act on to enhance the educational experiences of its students through the use of technology. Imaginative, creative, and innovative approaches to using technologies are encouraged. The task group should develop its own statement of objectives to help facilitate its work.

 

Scope of work

 

The five students selected to co-lead this initiative will participate in a series of meetings through spring term 2012, collaborate on a recommendations report, and present their recommendations to the OLPD faculty. Activities should include the following:

  • Develop a work plan to complete its assignments
  • Develop specific, actionable recommendations that the department may take to enhance the educational experiences of OLPD students
  • Solicit information and consider suggestions from the OLPD community
  • If necessary, form sub-task groups from the task group members to complete tasks

 

The work of the task group must be completed by May 12, 2012, when its final recommendations will be presented to the faculty at a department meeting.

 


Appendix 2

Mobile Devices and Informal Learning


Appendix 3.

Draft Framework for the Evaluation of Educational Apps (Apps/Tools, Etc.)

 

Technical Considerations

  • Is it cross platform?
  • Consider field testing with students in the course in which it will be used
  • Even if it’s new, its reliability will depend on the provider; early adopters online will also likely report on how well it has worked.

○      Related to the above, is there an exit strategy? (i.e., what happens if the provider ends the service? Do we have an alternative?)

  • Are the computers in your labs compatible? Will you need to upgrade software in order to use it? Are students’ devices compatible?
  • Can students learn to use it quickly?
  • Can instructors learn it quickly?
  • Can it be used by students and instructors who have slow Internet connections?
  • Does it have good published help information, both for answering questions as they arise and for learning to use it? For example, we recently bought 25 Camtasia Studio 7 licenses for faculty, and its tutorials are excellent.
  • Will information be secure? What are the risks to confidentiality?
  • Is the technology sustainable or will it be replaced with something new within a short period of time?

Logistical Considerations

  • Is it free?
  • How time-intensive is it, and how easy is it to use?
  • Will there be advance notice of updates and new versions and are there costs involved?
  • How regularly is it updated?
  • Is there online support either from the provider or on a blog or similar resource? Is there a good manual?
  • Does it require students to sign up for an account?
  • Does it require installation of an app or other software on students’ computers or on lab computers? (This matters most where support staff control what software can be installed or must perform the installation themselves.)
  • How does the technology interact with other programs or applications?

 

Pedagogical Considerations

  • Begin with the principle that “Technology is not the point, learning is” and have that guide our use of any tool.
  • How does it increase and/or facilitate learning? Or is it just fun?
  • Can the same function be achieved without the technology?
  • What types of student centered and/or collaborative activities can this technology facilitate?
  • What levels of learning are facilitated by this technology, i.e. from initial knowledge acquisition to higher level problem solving and thinking skills?
  • If the purpose is to facilitate learning, what approach does the s/w use?
  • Is it to be used individually or in groups? If in groups, does it provide students guidance to carry out the tasks?
  • What support documents or guidelines will the instructor need to create to guide students through learning activities while using this technology?
  • Is it the right tool for the job? There are lots of programs available that do many of the same things, but all are not equal. The selection criteria ought to first consider how well this particular tool will help students accomplish the learning task.

      ○ Alternatively, are there existing guidelines already available online that students and instructors can use?

  • Does the software really provide the benefit you’re looking for, or does other current software/hardware do just as well? New isn’t necessarily best. The old saying about old wine in new bottles may actually apply.
  • Good technology is not necessarily good pedagogy.
  • Can instructors evaluate in a fair and cost-effective manner the level of participation, effort exerted, and learning outcomes of individual and/or groups of students?

Accessibility

  • Can it be adapted to accommodate students with physical disabilities?
  • Can it be adapted to accommodate students with learning disabilities?
  • Does it require access to devices (i.e. smartphone, iPad) that are cost-prohibitive to some students?
  • Is it cross-platform?
  • Can it be accessed by people who have slow Internet connection speeds?
  • Is it available to people without U of M X500 – for workshops and courses offered outside of the traditional U of M courses for the public?
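A checklist like the one above can also be operationalized for comparing candidate tools. As a rough sketch (the category weights and 0–5 ratings here are purely illustrative assumptions, not part of the framework), one could combine per-category ratings into a single weighted score:

```python
# Hypothetical weighted scoring of a tool against the four categories above.
# Weights and ratings are illustrative assumptions only.
WEIGHTS = {
    "technical": 0.25,
    "logistical": 0.15,
    "pedagogical": 0.40,
    "accessibility": 0.20,
}

def score_tool(ratings):
    """Combine per-category average ratings (0-5) into one weighted score."""
    return sum(WEIGHTS[category] * ratings.get(category, 0) for category in WEIGHTS)

# Example: one reviewer's ratings for a candidate tool.
example = {"technical": 4.5, "logistical": 3.0, "pedagogical": 4.0, "accessibility": 3.5}
print(score_tool(example))
```

Weighting pedagogy most heavily reflects the principle above that “technology is not the point, learning is”; any real rubric would need locally agreed weights.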


Voting – Jim Owens for TED-ED

»Posted by on May 21, 2012 in Spring 2013 | 0 comments

Voting – Jim Owens for TED-ED

May 21, 2012

I have never met a teacher who has touched so many. When was the last time you knew of a teacher with over 1,700 Facebook friends when he himself is not very fond of computers? Teaching life and critical thinking: those are two of the gifts he attempts to impart to his students.

A world traveler, connoisseur, and inspiring orator, Jim Owens instilled in many of his students a desire to make a difference, to enjoy their lives, to impact the community, and maybe more. He taught us that while a straight line may be the shortest route, it is also the one most travelled. Travelling through the world, Jim worked various odd jobs: as a construction worker, a sailor, a balloon seller, and many more. Doing so allowed him to meet a beautiful French woman, and then travel with her on foot through Latin America, even landing in a Peruvian jail. His travels and readings allowed him to gain the knowledge and experience the relationships that he then shared with our course. He started with one humanities course; today that is all he teaches, and I am sure that once students graduate they wish they could revisit the experience. Through sharing his life experiences, Jim captivated the classroom, transformed the lives of many students, and encouraged them to seek their own self-discovery.

Working with his students, he collectively redesigned the classroom as “the Sanctuary”: one student painted a mural for the course, while others sanded and polished an old table, set up the sound equipment, and placed the fountain on the wall. His classroom became a place of refuge for our minds, our passions, and our bodies in an increasingly test-driven educational system. While I passed countless tests to be where I am, I didn’t pass a single test in his courses; there were none. From the first day he told us his course was about learning about life, not just a book, not just a course. Surprisingly, the memories of that class are the ones most present in my mind, constantly rising through my subconscious, impacting my trajectory and my daily life.

Owens stimulated students by telling them to pick teachers rather than courses, for a good teacher would make a student enjoy the experience regardless of whether the student was originally interested in the subject. Owens is an artist who nurtured in us a love for learning. In his course, he encouraged individuals to express themselves through regular journal entries, brought in interesting guest speakers, and helped students learn to question. His students sometimes took hold of this encouragement and decided to bring about social change. As members of his course, we organized a group, S.U.R.E., and spoke at the School Board in 2001, where students expressed their opinions to the board and shared their discontent. Owens also encouraged students to share their knowledge, and at the end of the semester each classroom member would present their passion to the rest of the course.

Owens is a magician, and hopefully, if others were to hear him talk about any of the subjects through which, as an orator, he captivated our minds, through TED he will be able to captivate many more. Partly because of his inspiration I did not relinquish, and I am today a PhD student at the University of Minnesota. Owens is a writer and a highly skilled speaker. I would argue that he is someone who could explain to others what the value or the goal of an education should be. He has been a graduation speaker and has delivered other motivational speeches. There are many lessons that he could share; hopefully some of them will reach members of the TED community.


Back Up Sites – From Zero PC to CrashPlan

»Posted by on May 21, 2012 in Spring 2012 | 0 comments

Back Up Sites – From Zero PC to CrashPlan

May 21, 2012

I recently got tired of having information in different places and not being able to access or organize all of it. My wife considers me a digital squirrel, and I would agree. Originally I kept things on external hard drives, and I still do. But after hearing an unpleasant click from one of the twins, I decided it was time to go to the cloud. Originally I thought of using Carbonite. They recently released an interesting mobile app, and I felt their unlimited storage would be a good investment. Unlike with Mozy, I could upload all of my content to Carbonite.

Yet, after looking at a series of reviews, I decided to go with CrashPlan instead. Surprisingly, they are also located in Minneapolis, MN. That was a pleasant surprise, as I ended up feeling as if I was contributing to the local economy. That aside, they do not seem to slow down your upload speed after a certain number of files (a common criticism of Carbonite). Still, I have not been able to go over 3.0 Mbps on the upload, despite using a connection that goes over 50 Mbps for upload. Because of that, I have since been uploading from a slower connection, as I cannot upload faster despite having the connection to do so. I could seed the hard drive, but that is an extra cost I would rather avoid since I am living in Mpls.
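To put that cap in perspective, a back-of-envelope estimate (assuming a sustained 3.0 Mbps upload with no protocol overhead, and decimal terabytes) shows why the initial backup takes so long:

```python
# Rough upload-time estimate for the initial backup; the numbers are assumptions.
data_bits = 2 * 1e12 * 8        # ~2 TB expressed in bits (decimal TB)
rate_bps = 3.0e6                # observed ~3.0 Mbps upload cap, in bits/second
seconds = data_bits / rate_bps
days = seconds / 86400          # 86,400 seconds per day
print(round(days, 1))           # on the order of two months
```

At roughly two months for the first full upload, the appeal of a seeded drive (or a faster sustained link) is obvious.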

Regardless, I am very happy with CrashPlan so far. I should have the close to 2 TB of data that I want to upload available to me through the cloud, via an iPad or a laptop, soon enough. Unfortunately, as a backup site, its file management system once files are uploaded is not that flexible. I expected that, but I wanted an alternative, so I downloaded three iPad apps that would allow me to connect to various cloud services simultaneously. I came across ZeroPC, iFiles, and OrganicDocs HD, and I have been able to connect to most of my cloud services through them. Another interesting site I came across is called Jolicloud. It is not yet available on iOS, but it seems lightweight yet capable, and I might soon add it to an old computer I would like to make a bit faster. I am not too keen on DSL (Damn Small Linux) or various other small distributions, but I have a couple of computers with half-burned GPUs that turn off after a few minutes. I have been trying to have them run an operating system that is flexible, modern, and doesn’t crash the old PCs. Maybe Jolicloud OS can help there. Either way, the Chrome App version of it is pretty neat.


Gun vs. Defibrillator (AED)

»Posted by on May 21, 2012 in Spring 2012 | 0 comments

Gun vs. Defibrillator (AED)

May 21, 2012

For the past two years, I have been thinking of buying a defibrillator, more specifically an AED (1). Not for me, but for my father, or more specifically for his home. It takes two people to use one, since the person it is used on will be incapacitated, but most of the time there are two individuals at home. Anyhow, they run anywhere from $1,500 to much more. In my opinion it is a worthwhile investment. It is like a fire extinguisher in a house: it is there as a preventative measure. My dad is fine now, but it is one of those things where you just never know, and anything that can improve someone’s chances of survival by as much as 40% is worthwhile (2).

The problem, however, is that he and my mom see it as buying a cemetery plot, a jinx in some morbid way. To me it is preventative, and I would be happy to buy it if they were OK with it. My argument is that it doesn’t cost much more than other things they have, and it could save their lives. I know quite a few people, for example, who own guns that just sit there unused. Guns that I myself, when I was a kid, “borrowed” from my father’s locker and played with. It was a revolver and there was nothing in the chamber, but it was silly. I found most of the hiding spots my dad had for this and other things. They were good hiding spots, but I liked to explore.

I have nothing against hunting, and in many ways I think it is a good thing. It can help make natural areas more sustainable, as people have a vested interest in preserving them and they can be commercially sustained. Yet many guns in homes are useless and, in my opinion, are usually an accident waiting to happen (3). For this reason, I support the “get rid of your guns and buy a defibrillator” campaign. No, it doesn’t exist, but perhaps it should.

Image

http://www.geograph.org.uk/photo/2823421 – David Larry CC-BY

Article – http://en.wikipedia.org/wiki/Automated_external_defibrillator

Article – http://www.lifecareconsultants.co.nz/defibrillators

Article – http://www.momlogic.com/2008/08/protect_your_kids_from_guns.php


Online Focus Groups Presentation

»Posted by on May 21, 2012 in Spring 2012 | 0 comments

Online Focus Groups Presentation 

May 21, 2012

More information about this project can be found at – http://z.umn.edu/onlinefocusgroups

This past semester, 11 participants joined efforts to research various online tools in the hope of finding ways to conduct an online focus group effectively with limited resources. After conducting focus groups for over 30 years, Dr. Richard Krueger, with the help of Dr. David Ernst, Director of Academic Technologies, organized a course around the idea of testing various online platforms’ strengths and weaknesses for hosting focus groups. The project involved 11 co-investigators at the University of Minnesota, all with strong backgrounds in conducting focus groups and using technologies in innovative ways. The group analyzed potential platforms for online focus groups in terms of their cost, information privacy, administrative requirements, ease of navigation, hardware requirements, data capturing process, and other criteria. Our goal was to come up with cost-effective solutions for translating the anatomy and the essence of a face-to-face focus group to an online environment.

Exploring focus groups through focus groups was an integral element of our research design, as we were ourselves knowledgeable in online environments and focus group theory. Following an initial brainstorming session, the research team categorized various online social platforms based on their strengths and weaknesses. Some tools discussed included: Facebook, Google Docs, Desire2Learn (D2L), Moodle, Free Forums, Co Meeting, Google Groups, VoiceThread, listservs, Skype, Ning, and Adobe Connect, among others. These tools were then classified according to whether they allowed for real-time (synchronous) interaction, or allowed participants to log in at different times to participate (asynchronous).

Other variables that were considered included: cost, security, data ownership, ease of use, data capture, ability to participate anonymously, additional and unique features (“bells and whistles”), multimedia capabilities, bandwidth requirements, and the platform’s visual appeal.  The team focused its exploration by settling on a “short list” of online social platforms for further testing representing both synchronous and asynchronous options (see Figure 1).  Research team members divided up the roles of moderator and participants, modifying each platform to fit the purpose of a focus group, and using each other as “test subjects” to try out the various platforms.

Synchronous: Skype, Adobe Connect (UM Connect)

Asynchronous: Google Groups, Ning

Figure 1. Platforms tested by different research members.

We are currently writing a brief chapter that illustrates our experience with each platform in more detail. Various other details about our work, including the following PowerPoint, can be found at: http://z.umn.edu/onlinefocusgroups

[googleapps domain=”docs” dir=”a/umn.edu/presentation/embed” query=”id=1X1ZhYId_ycVwYc_-OfVfRpV7gbtOwy4gBEbp4-1KdB4&start=false&loop=false&delayms=3000″ width=”960″ height=”749″ /]

Lessons Learned – Through the process of conducting several focus groups, the group came up with a list of lessons learned, summarized below.

Environment

  • Make it Welcoming – Take some time to think through the way your online environment looks. Is it inviting? Is it user-friendly? You may want to film a brief introductory video to introduce the moderator(s), the purpose of the group, and the features of the online platform.
  • Personalize it – Allow your participants to personalize their presence. Even in an anonymous group, you can have participants pick fun profile pictures, write a brief personal bio, answer a few introductory questions, upload a few personal photos, or even film a brief introductory video, if appropriate. There are several simple video tools available, such as: http://intervue.me
  • Simple Layout and Design – Make it easy to find things on the site, build in some navigational redundancy, and avoid clutter. Make sure you only have on the site what is necessary for conducting the focus group.

Technology

  • Teach the technology – Participants will come with varying levels of technology expertise and anxiety, and it is important to get them more or less on the same page before starting the focus group. Try creating a “how-to” guide or filming an introductory video that introduces the platform, and ask participants to look at it prior to joining the focus group. Consider incorporating a “test run” or a “warm-up” activity at the beginning of the focus group to introduce and test out the features of your focus group platform.
  • Stay behind the curve – Choose platforms and features that participants have the maximum amount of familiarity with. Remember: focus groups are about getting rich information, not about demonstrating the latest technology. If all goes well, the technology should be as transparent as possible.
  • Keep the technology support “quiet” and omnipresent – Technology should remain as invisible as possible, in order to focus on participants’ voices and ideas. Build in multiple avenues for troubleshooting, and be explicit about how participants can request help if something breaks down. For example, if a microphone or webcam isn’t working, encourage participants to ask for help via text chat; if participants are confused about how to engage in an asynchronous forum, consider holding “office hours” where moderators are available for live text chat support; if a participant’s computer breaks down altogether, have a phone number or e-mail address available to participants for “last ditch” tech support.

Participants

  • Keep the group small – Somewhere around five participants is ideal. This reduces the amount of reading for asynchronous text-based focus groups, and reduces the bandwidth and troubleshooting issues for synchronous voice/video focus groups.
  • Know your audience – Make sure the platform you choose and the features you use will resonate with your participants. There are some generational and even gendered patterns in the ways people prefer to engage with technology that may be helpful to consider when selecting a platform. To get a sense of what you might reasonably expect from participants, you may want to look at the Pew Internet & American Life Project’s work on technology user types (see Pew Internet & American Life Project, 2009).
  • Consider how you recruit – Recruiting for an online focus group can be very different than for a face-to-face group. You have to consider motivations and ability. Do the people you are recruiting like communicating online? Or is communicating online the only way to participate in the group? When are good times to get together? What is their level of technology sophistication, and will they have sufficient access to the technologies and bandwidth you require?
  • Furnish the right incentives – Incentives help participants stay engaged throughout the online focus group. The incentive could be intangible (e.g. “You are helping the community.” or “This research project will help others in need.”) or tangible (e.g. a gift card from a popular store or a movie ticket). If the incentive is intangible, be sure to describe the benefit. Don’t assume that it is obvious.
  • Establish expectations for engagement – Social norms vary much more online than in a face-to-face environment. Being explicit about how and how much you expect participants to engage will help avoid “culture clashes” and confusion. In synchronous environments, you’ll want to spell out clear expectations of who should talk and when, or even try calling on people in turn. In asynchronous environments, you can communicate guidelines for how and how frequently participants should comment on each other’s contributions. You can also encourage participants to use bullet points, bold key ideas, and give brief explanations to avoid rambling.

Moderator

  • Be socially present – As a moderator, it is important to appear present in the conversations, perhaps more so than in a face-to-face environment. Online environments can feel impersonal, so you may want to make a particular effort to address participants by name. In synchronous environments, you may find you need to moderate more actively and “fill in the gaps” in conversation. In asynchronous environments, you can create daily summaries and bullet points to highlight key ideas in the discussion and guide further discussion for participants who don’t have time to read all the posts.
  • Have multiple moderators – Having two or more moderators is important. You may want to divide roles into “talk-moderator” (to guide the discussion) and “tech-moderator” (to help with troubleshooting and tech questions).
read more

GAPSA Proposal – Promoting Openness (Long Version)

»Posted by on May 14, 2012 in Spring 2012 | 0 comments

GAPSA Proposal – Promoting Openness (Long Version)

The increasing need for a highly trained workforce, together with decreasing financial support for higher education by state governments across the country, encourages higher education institutions to search for new ways to break away from the iron triangle of higher education, or the difficult balance between cost, quality, and access (Immerwahr, Johnson, & Gasbarra, 2008; SHEEO, 2012). I believe that further “openness” is one of the potential game changers for the future of higher education (Oblinger, 2012). EDUCAUSE, UNESCO, and COL, among other organizations, have highlighted the potential of openness to greatly reduce the cost of education by encouraging the adoption of open access journals, open teaching, and open textbooks. Various institutions in the United States are currently using and testing the potential for open textbooks to improve textbooks as well as reduce the cost of educational materials (Carpenter, 2010). The open access movement is of relevance to graduate students, as libraries are increasingly unable to subscribe to many valuable journals (http://righttoresearch.org/). In recent years there has been a rapid rise in the subscription costs of academic journals, which have increased by 145% over the past six years, despite the decreased cost of production due to distribution and communication advances. Spending over $3.5 million a year on subscriptions, Harvard University libraries recently argued that “many large journal publishers have made the scholarly communication environment fiscally unsustainable and academically restrictive” (Harvard Library, 2012). They are not alone; thousands of educators have also joined a boycott of Elsevier (Sample, 2012).
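For a sense of scale, the 145% rise over six years cited above implies a steep compound annual growth rate (a quick sketch; the only inputs are the figures quoted in this paragraph):

```python
# Annualized growth implied by a 145% total increase over six years.
total_growth = 1 + 1.45                # a 145% increase means prices end at 2.45x the start
years = 6
annual_rate = total_growth ** (1 / years) - 1
print(f"{annual_rate:.1%}")            # roughly 16% per year, compounding
```

Compounding at that pace, subscription prices double roughly every four and a half years, which is what makes library budgets unsustainable.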

While the production of high-quality products is, and will likely continue to be, expensive, and while it is important to pay content producers a competitive rate for developing quality educational products, production models adopted by organizations such as Flat World Knowledge are able to do so while offering their textbooks for a fraction of the cost of the printed format. College students spend an average of over $900 on textbooks every year, making textbooks a major expense (Wiley & Green, 2012; GAO, 2005). In contrast, open textbooks are free to download digitally, and the faculty member gains the liberty to legally modify the modules included within the e-book by adding, formatting, or deleting the specific content of any module.

CEHD’s Open Textbook initiative is promoted by CEHD Dean Jean Quam; David Ernst, CEHD Director of Academic and Information Technology; and Provost Karen Hanson, who state that “the University of Minnesota should be a leader in enabling faculty and students to benefit from open content and electronic textbook options” and who are hopeful about the potential for this new initiative to help improve higher education (Anderson, 2012). I support this statement and hope that more students, both within the University of Minnesota and outside of it, will increasingly benefit from open textbooks and other high-quality open educational resources.

Open access to academic articles is another way in which the University of Minnesota could help further reduce long-term costs for students. With journal subscriptions costing major universities millions of dollars, lower subscription costs would ease the financial requests of the University of Minnesota libraries, freeing resources to improve other services. Open access journals do not detrimentally affect many researchers, as research is regularly paid for by grants or by the university as part of the researcher’s salary. As the recent reaction against Elsevier illustrates, there is support within parts of the academic community for a new equation that supports greater access to high-quality information. It is difficult for a young scholar to “stand on the shoulders of giants” without access to high-quality resources and the most recent information. A recent study showed that 40% of researchers in the UK could not access the articles they needed on a daily or weekly basis (RIN, 2009).

Open teaching, which includes the recent rise of large open online courses (also known as MOOCs) at MIT, Harvard, Stanford, Yale, and various other prestigious universities, has led to the increased sharing of high-quality information and increased the visibility of the university and its researchers (Masters, 2011; DeSantis, 2012). Other initiatives such as Khan Academy and educational resources on sites such as TED, YouTube EDU, iTunes U, or Sophia.org are examples of the impact an instructor can have, not only statewide but across the world (Bonk, 2009).

The increased visibility provided by open teaching can benefit students and faculty members. Short segments of courses or speeches that are particularly captivating can help the career of a scholar while aiding the diffusion of an innovation or a good idea. Recently the University of Minnesota had the opportunity to host a TEDx event, successfully organized by a number of undergraduate students (http://tedxumn.com/). TED is a good example of the power of openness. While TED has been active since 1984, its lectures have only been available through the Internet since 2007. Since then, the best TED lectures have reached millions of viewers. Advocating for further openness, I believe it is important to support initiatives such as TEDxUMN to increase the impact of our best ideas and innovations. The university could also host a similar initiative on a yearly basis to provide a platform for the diffusion of the best ideas and innovations by university scholars.

Unfortunately, openness is often criticized as an agent of change. Yet while some may argue that the current system is working properly, students are increasingly leaving higher education indebted, with little certainty about their financial future. Further openness can help reduce the rising cost of education while increasing the sustainability of the system. Without changes, it may become increasingly difficult for some students, particularly those from lower socio-economic backgrounds, to attend the University of Minnesota. As state funding decreases and other costs continue to rise, it is important to search for new and innovative ways to reduce the cost of schooling while maintaining a high level of quality and a level of access in accordance with the mission of the university as a land-grant institution. “The land-grant university system is being built on behalf of the people, who have invested in these public universities their hopes, their support, and their confidence.” – President Abraham Lincoln upon signing the Morrill Act, July 2, 1862 (http://landgrant150.umn.edu/)

I believe that open textbooks can impact the overall cost of school for undergraduates, professional students, and some graduate programs. I also believe that by promoting further openness through open access, open educational resources (in addition to open textbooks), and open teaching, the university community can contribute to a more affordable experience for students and increased visibility for university projects, while also giving back to the state, the nation, and the world, fulfilling part of the mission of the University of Minnesota.

Works Cited

Anderson, K. (2012, April 23). U creates Open Academics textbook catalog to reduce student costs. Retrieved April 30, 2012, from University of Minnesota News Release: http://www1.umn.edu/news/news-releases/2012/UR_CONTENT_383497.html

Bonk, C. J. (2009). The World Is Open: How Web Technology Is Revolutionizing Education. New York: Jossey-Bass.

Carpenter, M. A. (2010). Flat World Knowledge: Creating a Global Revolution in College Textbooks! Irvington: Flat World Knowledge.

DeSantis, N. (2012, May 5). Harvard and MIT Put $60-Million Into New Platform for Free Online Courses. The Chronicle of Higher Education. http://chronicle.com/blogs/wiredcampus/harvard-and-mit-put-60-million-into-new-platform-for-free-online-courses/36284

GAO. (2005). College Textbooks: Enhanced Offerings Appear to Drive Recent Price Increases. Washington DC: United States Government Accountability Office.

Harvard Library. (2012, April 17). Faculty Advisory Council Memorandum on Journal Pricing – Major Periodical Subscriptions Cannot Be Sustained. Retrieved April 29, 2012, from Harvard University: http://isites.harvard.edu/icb/icb.do?keyword=k77982&tabgroupid=icb.tabgroup143448

Immerwahr, J., Johnson, J., & Gasbarra, P. (2008). The Iron Triangle: College Presidents Talk about Costs, Access, and Quality. San Jose: The National Center for Public Policy and Higher Education and Public Agenda.

Masters, K. (2011). A Brief Guide To Understanding MOOCs. The Internet Journal of Medical Education. http://www.ispub.com/journal/the-internet-journal-of-medical-education/volume-1-number-2/a-brief-guide-to-understanding-moocs.html

Oblinger, D. G. (2012). Game Changers: Education and Information Technologies. Louisville: EDUCAUSE.

RIN. (2009). Overcoming barriers: access to research information content. London: Research Information Network.

Sample, I. (2012, April 24). Harvard University says it can’t afford journal publishers’ prices. The Guardian. http://www.guardian.co.uk/science/2012/apr/24/harvard-university-journal-publishers-prices

SHEEO. (2012). State Higher Education Finance FY 2011. Boulder: State Higher Education Executive Officers (SHEEO).

Wiley, D., & Green, C. (2012). Why Openness in Education? In D. G. Oblinger, Game Changers – Education and Information Technologies (pp. 81-89). Louisville: EDUCAUSE.


For a More Robust Evaluation of 1 to 1 ICT 4 Ed Adoption Projects

»Posted by on May 12, 2012 in Spring 2012 | 0 comments

For a More Robust Evaluation of 1 to 1 ICT for Education Adoption Projects 

The rapid change of information and communication technology (ICT) increases the challenge of determining how best to evaluate proficient use of these technological advances and their impact on learning. Through an overview of different initiatives, this paper illustrates the benefits of implementing a mixed-methods approach and analyzing projects over a prolonged period of time. Looking at a program over a longer timeframe can make us more aware of the impact the program has on an individual and a community. The use of mixed methods allows us to analyze a program in various ways, studying variables that are measurable and generalizable as well as elements specific to a particular situation. By incorporating these elements into evaluation studies, we can potentially increase the quality and usability of the reports generated. To illustrate the benefits of mixed methods and the continued analysis of a project, this paper discusses the 1-to-1 iPad project at the University of Minnesota.

Rapid Rate of Change – A Relevant Characteristic of ICT for Education Projects

It was only a few decades ago, in 1978, that top MIT computer scientists had reservations about the usability of the personal computer for tasks such as an address book or a personal calendar (Tippet & Turkle, 2011). Today, universities in the United States increasingly consider remodeling their computer labs, as almost all college students in the United States (89.1% at UMN in 2009) bring their own laptops to the university (Walker & Jorn, 2009). The share of students bringing their laptops to college increased from 36% in 2003 to 83% in 2008 (Terris, 2009).

The rapid improvement of technology results in the rapid depreciation of gadgets and contributes to the difficulty of evaluating them. The increased computational power and capabilities of these technologies have encouraged educational institutions and other industries to adopt them. Ownership of information and communication technologies (ICTs) has decreased the costs of transferring data and increased workers’ potential productivity (Friedman, 2007). Influential ICTs such as the mobile phone, the television, the Internet, and the radio have augmented the quantity of information available to individuals. The economic benefits from improvements in information and data transfers have led to increased investments. There has also been growing interest in digital literacy as a necessary skill for the 21st century (Flannigan, 2006; Jenkins et al., 2006). While not all of the changes brought by increased access to technology are positive, greater access to information and the rapid improvement of these technologies have a major impact on society (Carr, 2011; Kurzweil, 2000). Unlike some traditional fields such as mathematics or history, where most basic concepts have remained unchanged, the impact of new media and its prevalence in society have changed substantially over the past few decades, and with them the difficulty of evaluating these projects. Mobile subscriptions alone increased from fewer than 3 billion in 2006 to 5.9 billion in 2011 (ITU, 2012).

This rapid change makes it difficult to determine the essential skills a learner must have in the workplace of tomorrow (Cobo & Moravec, 2011). With hundreds of thousands of computer applications and many types of hardware, some of them highly complex, it can take a person a significant amount of time to become adept in any single program. A high level of specialization is often the norm, as using complex programs successfully can require a degree of mastery of statistical analysis or qualitative research methods. Similarly, tools such as Adobe Photoshop or Python have considerable learning curves (entire courses are available for learning any of them). Being a specialist in a particular program can lead to a very successful career, but simply mastering a single program can take hundreds of hours of practice. While it may take 10,000 hours to become a successful reporter, violinist, or writer (Gladwell, 2008), ICTs encompass thousands of possibilities, each requiring a different amount of time in which to become proficient (this includes unique musical instruments and new ways of writing, via text or Twitter).

Understanding this rapid change is important in evaluating ICT adoption programs because it influences what we consider to be effective use of these technologies by the general population. Texting, for example, is increasingly common and is considered by some experts to be a nascent dialect (Thurlow & Brown, 2003). How important is it to know how to effectively send texts and use a mobile phone in the 21st century? Such questions are hard to answer because a technology may be displaced within a few years’ time. The rapid change of technology complicates how we measure digital literacy and, through it, the effectiveness of 1-to-1 adoption and usability programs. These complications are at times difficult to perceive because of generational differences between the evaluator and younger generations (Prensky, 2001).

Today young adults (18-24) send an average of 109.5 text messages a day, or about 3,200 a month, and many of them prefer communicating by text message over email. Email, a relatively recent invention, is to some already considered old-fashioned and impractical (Smith, 2011). With this in mind, does an individual’s capacity to use email effectively continue to be a 21st-century digital literacy requirement? While the International Society for Technology in Education (ISTE) has developed ICT-for-education standards that can aid the evaluation of technology adoption programs (ISTE, 2008), these standards emphasize broad competencies and must be operationalized to reflect the distinctiveness of each 1-to-1 ICT program.

In this essay I propose to evaluate a 1-to-1 technology project over a long period of time in order to assess the program’s impact on the individual. One of the key advantages of 1-to-1 initiatives is that participants are able to take the devices home. It is easier to become proficient using a device that one has access to at home, compared to one whose use is limited to the classroom setting. As argued by Seiter (2008), “[t]here is an overestimation of access to computers in terms of economic class, and an underestimation of specific forms of cultural capital required to maintain the systems themselves and move beyond the casual, recreational uses of computers to those that might lead directly to well-paid employment” (p. 29). If Seiter (2008) is accurate, most of the economic benefits from ICTs come from their long-term use.

ICT investment can be expensive, and many ICT projects could not be developed without the support of private industry and the government (Heshmati & Addison, 2003). While ICT may not be as important as basic education, food, and health services, governments around the world have spent large amounts on ICT-for-education initiatives, hoping to imitate the success many advanced economies have obtained from their ICT industries and byproducts (MSC, 1996). “Investment in ICT infrastructure and skills helps to diversify economies from dependence on their natural-resource endowments and offsets some of the locational disadvantages of landlocked and geographically remote countries” (Heshmati & Addison, 2003, p. 5).

Adequately evaluating 1-to-1 technology adoption initiatives is increasingly important, as different education interventions may have different cost-effectiveness and cost-benefit ratios, with some interventions being much more effective than others (Yeh, 2011). Working with limited resources, governments must administer their funds in the best possible way to enable their citizens to meet their various needs, from food and shelter to self-actualization. Just because one intervention is more cost-effective does not mean that another should necessarily be discarded. As Maslow (1943) suggested, many needs can, and should, be met simultaneously. An improvement in one area of life, such as shelter, does not occur in a vacuum and is not exclusive of the individual’s desire to feel accepted by others or to improve their problem-solving ability (ibid.). Investing in ICT is important for states as they move toward becoming economically diverse, robust, and more competitive, relying more on their human capital than on their natural resources. To evaluate these projects more precisely, this paper encourages evaluators to use a mixed-methods analysis with a long-term time perspective.

Evaluating Information and Communication Technology Projects

Evaluation can help to increase the effectiveness of programs and improve the distribution of the limited resources available to a society. The decisions made by an evaluator can affect the lives of many individuals. Evaluators can help improve a program as well as decide whether or not it should be continued (Fitzpatrick et al., 2011). Discussing the methodology of evaluation, Scriven (1967) differentiated between formative evaluation (focused on development and improvement) and summative evaluation (focused on whether the program is meeting its stated goals). By conducting an evaluation, a decision-making body is able to make an informed decision about the future of a program. Yet, when dealing with complex programs with many components, it is difficult to frame an evaluation so as to obtain the most valuable information, particularly when there is limited time to conduct it, and the brevity of a report can be one of its strengths (Krueger, 1986). Still, different methods provide valuable lenses through which to look at a problem, frames that the evaluator should consider before conducting an evaluation.

Possibly the most important elements to consider in a 1-to-1 ICT project are its cost and its use by the learners. The best-known 1-to-1 initiative is the One Laptop Per Child (OLPC) program, which has delivered millions of units (http://one.laptop.org/). Yet, at over $100 per student, it could cost $500 billion to provide a computer to every person who currently lacks access to the internet worldwide (roughly 5 billion people). This does not include continued maintenance and electricity costs, or the cost of accessing the Internet. Is access to ICT really that important? According to a recent UNESCO (2012) publication, while 1-to-1 laptop projects are very costly, in Latin America “in the last three years, the 1:1 model has become increasingly widespread, and 1:1 programmes are now the primary focus of national policies for ICT in education in the region. Policy-makers are no longer discussing whether the 1:1 model is worthy of investment but rather how best to achieve it” (Lugo & Schurmann, 2012).
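The rough arithmetic behind that $500 billion figure can be sketched in a few lines; the inputs come from the discussion above, not from any actual budget:

```python
# Back-of-the-envelope estimate of worldwide 1-to-1 hardware provision.
# Figures come from the discussion above and exclude maintenance,
# electricity, and internet access.

cost_per_laptop = 100            # USD, the OLPC target price per student
people_without_internet = 5e9    # roughly 5 billion people worldwide

total_cost = cost_per_laptop * people_without_internet
print(f"Estimated hardware cost: ${total_cost / 1e9:,.0f} billion")
# → Estimated hardware cost: $500 billion
```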

While a price tag of $100 appears to be an expensive investment for developing countries, especially when some countries spend less than $100 per student per year in their educational budgets, it is also important to consider that all programs have costs, even when they are not financial. Even 1-to-1 programs that are “free” (through donations) have costs, including e-waste disposal. Even when they are based on volunteer efforts, programs still carry, at a minimum, an opportunity cost for instructors and learners. The cost of programs can be most effectively assessed by measuring their different ingredients. This allows programs to be quantified, various elements to be weighted, and, as a result, programs to be compared through a cost-effectiveness analysis (Levin, 2001). The financial benefit of a program can also be determined through a cost-benefit analysis. Through a qualitative study, “thick,” rich descriptive information can be obtained and thematically organized, helping key stakeholders to better understand elements that would otherwise go unnoticed (Geertz, 1973).
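A minimal sketch of this ingredients approach might look like the following; the ingredient names, dollar figures, and effect size are hypothetical illustrations, not data from any actual program:

```python
# Hypothetical ingredients-method costing for a 1-to-1 program (after
# Levin, 2001). Every figure below is invented for illustration.

ingredients = {
    "hardware": 200.0,         # device purchase, per student
    "maintenance": 40.0,       # annual upkeep, per student
    "teacher_training": 25.0,  # professional development, per student
    "electricity": 10.0,       # power and connectivity, per student
    "volunteer_time": 15.0,    # opportunity cost of donated labor
}

total_cost_per_student = sum(ingredients.values())

# A cost-effectiveness ratio divides cost by a measured effect,
# e.g. the achievement gain expressed in standard deviations.
hypothetical_effect_sd = 0.10
cost_per_sd_gained = total_cost_per_student / hypothetical_effect_sd

print(f"Total cost per student: ${total_cost_per_student:.2f}")
print(f"Cost per SD of achievement gained: ${cost_per_sd_gained:.2f}")
```

Listing ingredients explicitly is what makes non-financial costs, such as donated labor, visible in the comparison between competing programs.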

Programs can also be mapped through a logic model, which can include inputs, activities, outputs, and outcomes (Alter & Murty, 1997; McLaughlin & Jordan, 1999). The order in which the elements of a program are implemented, as well as the context, may also influence the program’s results. There are also likely to be competing program alternatives, some of which may be more effective than the particular program being considered. Hoping to increase the transferability or generalizability of a study, an evaluation can also be theory-driven (Weiss, 1997). These and other elements can improve the quality and usability of the data obtained by an evaluation. However, with limited time and resources, the methodology used to evaluate a program depends on both the strengths of the researcher and what is considered of principal importance by key stakeholders.

Over time, every practicing evaluator is, or is in the process of becoming, a “connoisseur” (the art of appreciation) as well as a “critic” (the art of disclosure) (Eisner, 1994, p. 215). This knowledge allows him or her to more effectively recommend to key stakeholders the best methods of evaluation to pursue in a particular scenario. However, the interests of secondary stakeholders are also important in many ICT adoption programs.

The Relevance of Mixed Methods and Triangulation

“The underlying rationale for mixed-methods inquiry is to understand more fully, to generate deeper and broader insights, to develop important knowledge claims that respect a wider range of interests and perspectives” (Greene & Caracelli, 1997, p. 7).


Mixed methods can greatly benefit a study, as they allow the researcher to ask questions that he or she might otherwise ignore, obtaining additional information. While “purists” oppose the use of mixed methods due to potential epistemological and ontological contradictions, many evaluators take a more “pragmatic” approach (Greene et al., 1989). One concern regarding the use of mixed methods is that they may compromise the methodological integrity of an experimental study. These are valid concerns, and it is important to consider carefully how methods are being utilized, to avoid unintended conflicts. Some of the theoretical concerns of researchers against using mixed methods may not be as applicable to evaluators, as evaluators do not have the same goals as researchers. While researchers focus to a greater extent on theory, generalizability, and transferability, many evaluators focus on utilization and the practical implications of their analysis for their key stakeholders and the future of the program (Patton, 2007). To the “pragmatist” evaluator, “philosophical assumptions are logically independent and therefore can be mixed and matched, in conjunction with choices about methods, to achieve the combination most appropriate for a given inquiry problem. Moreover, these paradigm differences do not really matter very much to the practice” (Greene, et al., 1989, p. 8).

Mixed methods often refers to the use of methods from different paradigms, combining a qualitative method, such as unstructured interviews or participant observation, with a quantitative method, such as academic achievement scores or another statistical measure, within the same study (Johnson & Onwuegbuzie, 2004). While it seems beneficial to analyze a problem in multiple ways, experts in both qualitative and quantitative methods express concerns about this approach. Johnson and Onwuegbuzie (2004) argued that some of these “purist” concerns stem from a “tendency among some researchers to treat epistemology and method as being synonymous,” which is not necessarily the case (p. 15). Johnson and Onwuegbuzie (2004) argue for a contingency theory approach to research, which emphasizes that while no method is superior, there are instances when one is preferable to the other.

One of the biggest benefits of using mixed methods is that they allow for the triangulation of findings. According to Denzin (1978), triangulation is “the combination of methodologies in the study of the same phenomenon” (p. 291). Denzin describes four types of triangulation: data triangulation, investigator triangulation, theory triangulation, and methodological triangulation. He describes these as possible within methods or between methods (ibid.). The ways in which methods are mixed vary: sometimes all methods have the same amount of influence, while at other times one method holds preeminence. Triangulation is a common way to strengthen the generalizability and transferability of a study and the strength of its claims. Other benefits of using mixed methods include complementarity, where the results of one method are clarified by another; development, where one method informs the other; expansion, which tries to increase the scope of one methodology; and initiation, which seeks the discovery of paradox by recasting results or questions from one method to another (Greene et al., 1989). Regardless of the initial results, this approach usually provides richer data. Comparisons between the data could lead to either “convergence, inconsistency, or contradiction” (Johnson et al., 2007, p. 115).

If there is a conflict or an inconsistency within the data, it becomes harder to establish a causal relationship, and the project may require further study and explanation. This explanation can be provided through structural corroboration, further analysis, or by sharing both findings with the key stakeholder, who can then use both pieces of information to make his or her decisions (Eisner, 1994). While most evaluators feel a responsibility to provide recommendations to stakeholders, these recommendations do not necessarily have to address the problem scientifically; rather, a “connoisseur” may state that, based on his or her experience, a given path may be the best to follow. ICT adoption includes many invisible elements, which increases the difficulty of evaluating it (Cobo & Moravec, 2011). Because of this complexity, it is helpful for the evaluator to share his or her opinion as a “connoisseur”. Social programs are generally complex. By providing key stakeholders with a focused report that emphasizes the main findings of the mixed-methods evaluation, evaluators make it more likely that stakeholders will reach a good formative or summative decision. As will be illustrated, this was an objective pursued by the 1-to-1 iPad initiative at the University of Minnesota.

Encouraging the Long-Term Study of ICT Projects

The limited timeframe of a study can result in a restricted analysis. Iterative formative evaluations allow key stakeholders to constantly reevaluate ways in which to improve a program (Mirijamdotter et al., 2006). Iterative and continuous evaluations are very important for internet-based companies. Google, for example, is known to regularly test new algorithms and versions of its search engine simultaneously to obtain helpful usability comparisons, trying hundreds of variations of the search engine a year in an attempt to improve the product (Levy, 2010). Similarly, many ICT adoption projects include an iterative process in their analysis, yet in the discussion of their findings, evaluations regularly omit the potential long-term benefits of the programs, focusing instead on short-term costs and benefits. While there are time constraints and financial limitations to evaluations of 1-to-1 laptop programs, these evaluations would benefit from more attention to measuring the long-term benefits of the interventions, including gains in cultural capital (Seiter, 2008).

Methodologies such as longitudinal studies, ethnographic research, and time series are among those that can help illustrate the potential benefits of the long-term analysis of an intervention. Some of these studies can be very expensive, but they allow for the observation of changes that would otherwise go unnoticed. Another example of the possibilities of looking at changes over time was recently made possible by the Google Books Ngram Viewer (http://books.google.com/ngrams), which allows word frequencies to be analyzed over a span of 200 years. This type of study, called culturomics, is one of the newest ways in which the analysis of a subject over time provides additional insight into an issue (Michel, et al., 2010). While the Ngram Viewer is not very useful for evaluators, other forms of longer-term analysis can be of greater support.

Ethnography is a field of study in which time spent in the field is an important validity variable. Ethnographers focus primarily on the quality of the data, and validity can be increased if the researcher has lived in a community for a longer time and, in so doing, has obtained a greater understanding of the local culture. Some of the subtleties analyzed by ethnographers require time and involvement to be discovered. To some researchers, ethnography implies a study that takes more than a year (Fuller, 2008). However, some projects could last perhaps a single long day, while other “projects are developed throughout the whole of a researcher’s life; an ethnography may become a long, episodic narrative” (Jeffrey & Troman, 2004). In quantitative analysis, time series, as their name implies, also emphasize the importance of collecting data over time. Such statistical data can be collected at various intervals: monthly for unemployment benefits data, daily for financial exchange rates, or even every two seconds for EEG brainwave activity. A commonly used and informative time series is population census data, which many countries collect at regular intervals to help their governments better understand broader demographic changes, migratory patterns, and the future outlook of various variables (Zhang & Song, 2003).

Longitudinal studies can also be very helpful in understanding how an intervention at an early stage of a person’s development influences them throughout the rest of their lives. Various longitudinal studies have been conducted in early education, including interventions in prenatal care, youth reading programs, and the observation of children as they grow older, among many others. One of the most famous longitudinal studies in education was the Student/Teacher Achievement Ratio (STAR) Tennessee class-size reduction study, which began in 1985 and continued until 1999 (Finn & Achilles, 1999; Hanushek, 1999). The study tracked students who were assigned at random to kindergarten classes of between 13 and 17 students, or to larger classes of between 22 and 26. Over 6,000 students took part in the study, in which they were kept in smaller classrooms for 4 years, and monitoring continued after the end of the intervention. The study found statistically significant changes in student achievement scores on the three measures used. Its conclusions strengthened claims regarding the positive impacts of class-size reduction, encouraging the enactment of class-reduction policies in California (1996) and other states. While later studies have contradicted its findings, its experimental design, its magnitude, and its use of longitudinal analysis strengthened its claims. There have been a number of important longitudinal studies in early childhood and other early interventions that have followed children’s development for decades (NCES, 2010).

Another popular long-term longitudinal study is the British Up Series, which has followed a group of 14 children since age seven in 1964 and is still under production. Similar documentaries have been produced in Australia (since 1975), Belgium (1980-1990), Canada (1991-1993), the Czech Republic (1980s), Germany (1961-2006), Denmark (from 2000), Japan (from 1992), the Netherlands (from 1982), South Africa (from 1982), Sweden (from 1973), the USSR (from 1990), and the USA (from 1991). While these long-term studies can be expensive to conduct, they provide a different dimension to findings, a dimension that is often not available in 1-to-1 technology adoption evaluations.

The key benefit of including this dimension within an evaluation derives from the difficulty of knowing how the skills obtained from using new ICT devices will give an individual the confidence and background needed to develop future ICT competencies that may be beneficial in the job market. Will their familiarity with ICT at an early age bring about broader benefits later in their lives? A short-term outlook in an evaluation may at times provide a negatively skewed view of the impact of these projects, expecting more out of a pilot project than is realistic. In addition, it is common for program designers to overstate the potential outcomes of a project, expecting it to have a greater impact than is likely possible. For example, as an evaluation of USAID basic education projects (1990-2005) showed, most of its projects produced less than a 4% increase in student achievement scores, despite the efforts of many specialists and the expenditure of millions of dollars (Chapman & Quijada, 2007). One-to-one technology adoption projects can also be very expensive and, as such, can have a very negative cost-benefit ratio in the first years of the program. Evaluations should also take into account future, longer-term benefits of the investment.

Considering a project’s impact over a longer time, this article encourages the continued evaluation of a program over a number of years, at regular intervals, providing recommendations and reporting on the benefits and drawbacks of the program as it is modified over time. This type of long-term evaluation is best suited to an internal evaluator, or to a combination of internal and external evaluators. When thinking of the cost of 1-to-1 programs over time, it is also important to keep in mind the rapid depreciation of technology. Given how quickly computer equipment depreciates, should 1-to-1 programs focus on purchasing the most up-to-date gadgets and tools? This question is best analyzed through a cost-effectiveness analysis that accounts for the depreciation of technologies.
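One way to fold depreciation into such an analysis is to annualize the hardware cost over a device’s useful life rather than charging the full purchase price to year one. The sketch below uses straight-line depreciation; all of the figures are hypothetical:

```python
# Straight-line depreciation of a hypothetical 1-to-1 device, so a
# multi-year cost-effectiveness analysis can spread hardware costs over
# the device's useful life. All figures are illustrative.

purchase_price = 500.0     # USD per device
salvage_value = 50.0       # residual value at end of life
useful_life_years = 4

annual_cost = (purchase_price - salvage_value) / useful_life_years

for year in range(1, useful_life_years + 1):
    book_value = purchase_price - annual_cost * year
    print(f"Year {year}: annualized cost ${annual_cost:.2f}, "
          f"remaining book value ${book_value:.2f}")
```

Under these assumptions, a multi-year evaluation would charge each program year the annualized cost rather than penalizing the first year with the full purchase price.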

One Laptop Per Child – An Evaluation of Peru’s Project

Possibly the most controversial and most commonly cited 1-to-1 initiative is One Laptop Per Child (OLPC), started by Nicholas Negroponte, the founder of the MIT Media Lab (TED, 2008). According to Negroponte, by thinking in bits instead of atoms, and by learning how to operate a computer, a child can learn that the world is increasingly available at the click of a button, and that they can construct and build anything they can imagine by programming new and amazing environments (Negroponte, 1996). Following Papert’s constructionism, Negroponte believes that programming teaches an individual how to learn, as they must go back, revisit their code, and figure out why there is a mistake (Papert, 1980). As an ICT evangelist, Negroponte highlighted how simply giving a child a computer would expand his or her possibilities (Negroponte, 1996). Since the beginning of OLPC in 2005, over 2.5 million laptops have been delivered (http://one.laptop.org/about/faq). However, despite the high level of investment, particularly in Latin America, project evaluations have not shown significant gains in achievement scores (Cristia et al., 2012).

A recent evaluation of OLPC in Peru showed that, despite a high level of investment in these new machines (902,000 laptops) and an increase in the computer-per-student ratio from 0.12 to 1.18, student performance in math and reading had not increased substantially. The project did find that students’ cognitive skills improved over the course of the study. While analysts have since highlighted that the program had only limited effects on math and language achievement (0.003 standard deviations), little emphasis has been given to the potential impact of the improvement in cognitive skills and, perhaps more importantly, to what improved digital literacy skills will mean for these individuals in the future, as they are asked to learn other task-specific digital and information literacy skills (Cristia et al., 2012).
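For readers unfamiliar with the metric, an effect size like the “0.003 standard deviations” above is a standardized mean difference. The sketch below computes one from invented scores, not from the Peruvian data:

```python
# Standardized mean difference (Cohen's d), the statistic behind effect
# sizes such as "0.003 standard deviations". The scores are invented.
import statistics

treatment = [52.1, 49.8, 51.5, 50.9, 50.2, 51.8]  # e.g., laptop group
control   = [50.7, 49.5, 51.2, 50.1, 50.6, 51.0]  # e.g., comparison group

mean_diff = statistics.mean(treatment) - statistics.mean(control)

# Pool the two groups' standard deviations, weighting by sample size.
n1, n2 = len(treatment), len(control)
s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

cohens_d = mean_diff / pooled_sd
print(f"Effect size (Cohen's d): {cohens_d:.2f}")
```

An effect of 0.003 on this scale means the treatment group’s average sits three thousandths of a standard deviation above the control group’s, which is why analysts describe the achievement effect as negligible.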

It is also difficult to know from the available data whether a different investment would have been more cost-effective or produced a higher cost-benefit ratio in Peru. One of the unmet goals of OLPC was to produce a $100 laptop; the machines currently cost around $200 (ibid.). As a project not affiliated with Microsoft, Google, or Apple, the OLPC laptops came with a Linux-based operating system known as Sugar. While all operating systems share similarities, did the use of Sugar limit or increase the possibilities for students? When testing students’ computer literacy skills, evaluators found that the students quickly became more adept at using the devices. As explained earlier in this paper, evaluators also had difficulties in deciding which skills should be tested (ibid., p. 15). Another unmet goal was connectivity: Peru’s OLPC participants lacked internet access. OLPC was partly designed so that students could benefit from increased connections, either through the OLPC mesh network or the Internet. The impact of lacking access to the internet is hard to measure, but it may have affected individuals’ development of information literacy skills. Peru’s evaluation of the OLPC project was very insightful. However, while it contained a qualitative element, the project had a quantitative focus, limiting readers’ understanding of how the initiative affected individuals. For a project that centers on the individual, learning more about the project’s impact on the person is increasingly relevant as ICT becomes more personalized. Apart from not discussing potential long-term gains, the evaluation also failed to mention the full cost of the devices. With the laptop itself accounting for only a tenth to a seventh of a program’s total cost, it is important to consider whether this is a cost-effective investment (Lugo & Schurmann, 2012).
The evaluation would have benefited from a broader implementation of mixed methods, particularly on the qualitative side, while also examining these changes over a longer span of time. An element of time that is particularly important to first-year initiatives is the teacher’s or instructor’s familiarity, or learning curve, as they slowly learn better ways to use the devices and integrate them into the classroom.

A Case Study – University of Minnesota One iPad Per Student Initiative

The discussion surrounding the digital divide has traditionally centered on access to the internet and a personal computer, yet the rapid change of technologies leads us to question whether the divide will be centered on these devices in the future (Warschauer, 2008; Zickuhr & Smith, 2012). What role will smartphones, augmented-reality glasses, 3D printers or, farther into the future, nanotechnology implants play in the digital divide? (Kurzweil, 2000). One current technology that may further displace the purchase of paper books for K-12 and higher education is the e-reader, the most successful examples of which are the iPad (I, II, and III) and Amazon’s Kindle. A recent NPD report indicated that tablets may outsell laptop computers by 2016, with annual sales expanding from 81.6 million units (2011) to 424.9 million units (2017) (Morphy, 2012). Will we then measure the digital divide in terms of who has access to an iPad and who does not?

Pilot projects at universities such as the University of Minnesota, the University of San Diego, and Oberlin College, among a few others, have moved toward answering this question. The first commercially successful tablet, the iPad, was released in April 2010; that same year, the University of Minnesota decided to purchase 447 units to provide a tablet to every incoming CEHD undergraduate. It was one of the first major initiatives of its type in the country. Because of its uniqueness as an early adoption project, its evaluation drew partly on the conclusions of previous 1-to-1 projects such as the OLPC initiative and Maine’s statewide 1-to-1 adoption program. However, for a device so substantially different from previous ICT devices, the operationalization of NETS standards and an in-depth analysis of its potential uses had not yet been closely studied (ISTE, 2008). So far, only a few articles have been published regarding the use of the iPad in the classroom (EDUCAUSE, 2011). To better understand the possible educational implications of adopting this device, a CEHD research team conducted a mixed-methods evaluation (Wagoner et al., 2012). In addition, a commitment was made to continue evaluating the project for a number of consecutive years. The support of the dean was integral to the continuation of the program.

In the first year, the project set a goal of increasing use of the devices by both faculty and students, and of supporting faculty members so that they could familiarize themselves with the devices and consider the best ways to incorporate them into their classrooms. Soon after the distribution of the iPads, evaluators also drafted a post-test and organized a series of interviews. The interviews asked faculty members how they had learned to use their iPads, what their plans were for using them in the classroom, how the iPad had affected their teaching, and whether the support they received had been appropriate (from field notes).

A similar set of questions was asked of faculty members at the end of the school year, when they were asked what projects they had actually implemented, what students’ opinions of ebooks were, and what pedagogical concerns they had. Twenty-two interviews were coded, and themes were developed from the qualitative study, including faculty concerns about time investment, how the iPad compares with other technologies, the impact of the iPad on faculty members’ pedagogy and classroom management, and details about faculty members’ technology learning processes. At the end of the year, a series of faculty focus groups was also conducted. Many of the details learned through the qualitative portion of the study would have been difficult to obtain otherwise. The common elements between the focus group and interview data also allowed us to verify some observations. Below is an interesting quote from one of the participating faculty members:

“What I want, in terms of their behaviors, is for [the students] to be active explorers in the classroom, to bring the machines, and to actually utilize them for historical research … One of the things that we did as a first conversation is to describe the level of trust that is going to be involved … and they live up to those expectations. I’ve been really happy so far with what we’re learning. It conveys to them that they’re smart, capable discoverers that we’re co-creating knowledge—historical knowledge” (Wagoner, Hoover, & Ernst, 2012, p. 3).

While the quote above illustrates a very positive aspect, it is likely that this experience would not have been visible through an analysis of student achievement alone, illustrating the benefit of utilizing mixed-methods. Two student focus groups were also conducted; yet, unlike the faculty, whose entire population the evaluators were able to interview, the 447 students were more than the team could interview. To better analyze student responses, a survey was conducted that included a number of questions related to students’ use of and experience with the iPad. 241 CEHD first-year students responded to the survey (Wagoner et al., 2012). Having access to broader demographic data also allowed the evaluation team to compare student attitudes with socio-economic variables. Several strong correlations and significant relationships were found regarding the impact of iPads on student learning. In particular, the evaluation found that students felt the devices had a positive effect on their motivation. Students also expressed a high level of comfort using the devices and reported that the iPad helped them feel more engaged in some of their classes.


The study also showed that students who were part of Access to Success (ATS) or had been part of the TRIO program, usually students of color or from low socio-economic backgrounds, mentioned feeling more engaged and connected during classes. From the qualitative data the evaluators also learned that for some students the iPad had become a window into the internet, and a digital item for their whole household to use.

The success of the first-year implementation, and the questions that evaluators were still unable to answer, led to the continuation of the program for a second and third year. A similar number of iPads (now iPad 2s) were purchased. Once again the rapid change of technology provided new possibilities for evaluators, as the iPad 2 includes cameras, permitting students to record HD video and hold audio-visual conversations with anyone with access to FaceTime or Skype. After analyzing the potential savings from the extensive use of iPads for e-reading by some students, CEHD also decided to support a pilot project for the testing and adoption of Open Textbooks, as well as a help desk where interested faculty members could obtain assistance building iBooks and ePubs.

The project is now planning its third year. Adapting to the results of the first-year evaluation, many of the questions on the second-year survey were modified to gather additional valuable information. One limitation of the evaluation so far has been the lack of a cost-effectiveness or cost-benefit study. Such a study should not only take into account the rapid depreciation of the devices, but also consider whether students are learning skills that could aid them when they join the workforce. While the costs have been high, over $300,000 per year, it is difficult to assess the long-term benefits for participants. The rapid devaluation of the devices is an important consideration: in a couple of years these devices may cost only a fifth of their original price while being even more feature-rich and powerful, allowing students to obtain a similar skill set for a fraction of the cost. It is also possible that many of the skills obtained are not very different from those obtained from using other ICTs, reducing the importance of the investment.
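The depreciation concern raised above can be made concrete with a small sketch. The purchase price and yearly rate below are hypothetical; only the "a fifth of the original cost in a couple of years" figure comes from the discussion.

```python
# A minimal depreciation sketch. A ~55% yearly loss in value is a
# hypothetical rate chosen so that a device is worth roughly a fifth
# of its original price after two years, as suggested above.
def depreciated_value(price: float, annual_rate: float, years: int) -> float:
    """Geometric depreciation: value falls by `annual_rate` each year."""
    return price * (1 - annual_rate) ** years

original = 500.0  # hypothetical purchase price per device
rate = 0.55       # hypothetical yearly loss in value

for year in range(4):
    print(f"year {year}: ${depreciated_value(original, rate, year):,.2f}")
```

The point for evaluators is that any cost-effectiveness calculation must use the depreciated, not the purchase, value of the ingredient.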

Currently, a website is available where individuals interested in the results of the project can learn about the various innovative classroom projects that were developed, how they can be adapted to other classrooms, and suggested best practices. In a report, CEHD concluded that the iPad had helped address concerns of the Digital Divide: it increased access to the tools needed for media production and personal productivity, improved students’ possibilities for information access and consumption, reduced the cost of printing readings, and facilitated students’ learning outside of the classroom (Wagoner et al., 2012). For year two, the program also hopes to further analyze the usability of the devices and recently developed a space for students to submit their creative productions with the iPads.

Despite the insights provided by the use of mixed-methods for this evaluation, the limited timeframe of the study makes it difficult to determine whether or not the program is a worthwhile investment. With the program costing over $400 per student, excluding the cost of administrative staff, is this the best investment a university can make in terms of technology adoption? When will it be determined that the program is no longer worth its cost and no longer helping to find innovative ways of learning? One limitation of CEHD’s 1-to-1 iPad program has been its limited emphasis on the possibilities the device offers for informal learning. Some of these concerns will be better analyzed with the data collected from the second-year survey recently administered to students. A new wave of interviews and focus groups is also planned for the evaluation of the third year of the program.

With over 500,000 applications available, there are almost endless possibilities for integrating the devices into the classroom, and the production of apps that match the goals of individual instructors more closely is likely to increase. Because of these devices’ future relevance, and the high level of creativity and innovation within this industry, constant evaluation is important, as it allows for the continued improvement of the project. The use of mixed-methods allowed the evaluation team to uncover many interesting details that the study would not have found otherwise. These details enriched the quality of the findings and provided faculty with valuable information for improving their use of the iPad and for learning how their peers were using the devices.

Conclusion

It is difficult to understand the repercussions of an event while it is taking place. Only with hindsight do we notice how many unexpected turns have led society to where it is today. Evaluators do not have the luxury of looking only at the past, as they are focused on improving tomorrow. With an emphasis not just on understanding but on helping projects and programs improve in quality, decisions are made guided by the most likely outcomes. Yet, without realizing it, a project could be cancelled before it demonstrates its true strengths. Too often, one-to-one ICT projects focus on student achievement gains after the first year of implementation. Expecting a magic bullet, some stakeholders may assume that merely having the device will make individuals more competitive. Projects such as OLPC have helped to promote this viewpoint. Yet, while technologies have helped improve society, it may take years for them to demonstrate benefits in the lives of individuals. Changing cultures or behaviors takes time, and as has been the case with a large number of development projects, impact is usually moderate. Nevertheless, some investments will be more cost-effective than others, and an evaluation of ICT needs to carefully analyze the costs of the ingredients of the intervention. Depreciating these ingredients and considering the best ways in which students can develop competitive ICT skills is a primary objective for one-to-one ICT adoption projects. This paper contends that using mixed-methods and a longer-than-usual time span for ICT evaluations will provide more useful information to key stakeholders, resulting in better decision making.


Works Cited

Alter, C., & Murty, S. (1997). Logic modeling: A tool for teaching practice evaluation. Journal of Social Work Education, 33.

Carr, N. (2011). The Shallows: What the Internet is Doing to Our Brains. New York City: W. W. Norton & Company.

Chapman, D., & Quijada, J. J. (2007). What does a billion dollars buy? An analysis of USAID assistance to basic education in the developing world, 1990-2005. Washington DC: USAID.

Cobo, C., & Moravec, J. (2011). Aprendizaje Invisible: Hacia Una Nueva Ecologia de la Educacion. Barcelona: Universitat de Barcelona.

Cristia, J., Cueto, S., Ibarraran, P., Santiago, A., & Severin, E. (2012). Technology and Child Development: Evidence from the One Laptop per Child Program. Washington DC: IDB.

Denzin, N. K. (1978). The Research Act, 2nd Ed. New York: McGraw-Hill.

EDUCAUSE. (2011, September 02). 7 Things You Should Know About iPad Apps for Learning. EDUCAUSE Learning Initiative (ELI), p. http://www.educause.edu/Resources/7ThingsYouShouldKnowAboutiPadA/223289.

Eisner, E. W. (1994). The forms and functions of educational connoisseurship and educational criticism. In E. W. Eisner, In The educational imagination: On the design and evaluation of school programs (pp. 212-249). New York: Macmillan.

Finn, J. D., & Achilles, C. M. (1999). Tennessee’s class size study: Findings, implications, misconceptions. Educational Evaluation and Policy Analysis, 97-109.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program Evaluation: Alternative Approaches and Practical Guidelines. Upper Saddle River: Pearson Education.

Flannigan, B. R.-K. (2006). Connecting the Digital Dots: Literacy of the 21st Century. Educause Quarterly, pp. 8-10.

Friedman, T. L. (2007). The World is Flat 3.0: A Brief History of the Twenty-first Century. New York: Picador.

Fuller, H. G. (2008). What does the term ‘ethnography’ mean to you? Quirk’s Marketing Research Review, pp. 48-50.

Geertz, C. (1973). Thick Description: Toward an Interpretive Theory of Culture. In C. Geertz, In The Interpretation of Cultures: Selected Essays (pp. 3-30). New York: Basic Books.

Gladwell, M. (2008). Outliers: The Story of Success. New York: Little, Brown.

Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. New Directions for Evaluation, 5-17.

Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a Conceptual Framework for Mixed-Method Evaluation Designs. Educational Evaluation and Policy Analysis, 255-274.

Hanushek, E. A. (1999). Some findings from an independent investigation of the Tennessee STAR experiment and from other investigations of class size effects. Educational Evaluation and Policy Analysis, 143-163.

Heshmati, A., & Addison, T. (2003). The New Global Determinants of FDI: Flows to Developing Countries. Helsinki: World Institute for Development Economics Research.

ISTE. (2008). The National Educational Technology Standards. Washington D.C.: International Society for Technology in Education.

ITU. (2012). The World in 2011: ICT Facts and Figures. Geneva: International Telecommunication Union.

Jeffrey, B., & Troman, G. (2004). Time for Ethnography. British Educational Research Journal, 535-548.

Jenkins, H., Purushotma, R., Clinton, K., Weigel, M., & Robison, A. (2006). Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. Chicago: The MacArthur Foundation.

Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed Methods Research: A Research Paradigm Whose Time Has Come. Educational Researcher, 14-26.

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a Definition of Mixed Methods Research. Journal of Mixed Methods Research, 112-130.

Krueger, R. A. (1986). Reporting Evaluation Results: 10 Common Myths. American Evaluation Association. Kansas City: American Evaluation Association.

Kurzweil, R. (2000). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. London: Penguin.

Levin, H. M. (2001). Cost-Effectiveness Analysis. Thousand Oaks: SAGE.

Levy, S. (2010, February 22). Exclusive: How Google’s Algorithm Rules the Web. Wired, p. http://www.wired.com/magazine/2010/02/ff_google_algorithm/.

Lugo, M. T., & Schurmann, S. (2012). Turning on Mobile Learning in Latin America. Paris: UNESCO.

Maslow, A. (1943). A Theory of Human Motivation. Psychological Review, 370-396.

McLaughlin, J., & Jordan, G. (1999). Logic models: A tool for telling your program’s performance story. Evaluation and Program Planning, 65-72.

Michel, J.-B., Shen, Y. K., Aiden, A. P., Veres, A., Gray, M. K., Team, T. G., . . . Lieberma, E. (2010, December 16). Quantitative Analysis of Culture Using Millions of Digitized Books. Science, p. http://www.sciencemag.org/content/early/2010/12/15/science.1199644.

Mirijamdotter, A., Somerville, M. M., & Holst, M. (2006). An Interactive and Iterative Evaluation Approach for Creating. The Electronic Journal Information Systems Evaluation, 88-92.

Mohammed, N. (2007). Facing Difficulties in Learning Computer Applications. Mount Pleasant: Central Michigan University.

Morphy, E. (2012, May 05). Tidal Wave of Tablets on the Horizon. Retrieved from E-Commerce Times: http://www.ecommercetimes.com/rsstory/75039.html

MSC. (1996). Smart School Road Map 2005-2020. Kuala Lumpur: Multimedia Development Corporation.

NCES. (2010). Early Childhood Longitudinal Study, Kindergarten Class of 1998-99. Washington DC: U.S. Department of Education.

Negroponte, N. (1996). Being Digital. New York: Vintage.

Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Patton, M. Q. (2007). Utilization-Focused Evaluation. Thousand Oaks: SAGE Publications.

Prensky, M. (2001). Digital Natives Digital Immigrants. On the Horizon, http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf.

Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagne, & M. Scriven, Perspectives of curriculum evaluation (pp. 39-83). Chicago: Rand McNally.

Seiter, E. (2008). Practicing at Home: Computers, Pianos, and Cultural Capital. In T. McPherson, Digital Youth, Innovation, and the Unexpected (pp. 27-52). Cambridge: MIT Press.

Smith, A. (2011). Americans and Text Messaging. Washington D.C.: Pew Internet.

TED. (2008, December 05). Speakers Nicholas Negroponte: Tech visionary. Retrieved from TED: Ideas Worth Spreading: http://www.ted.com/speakers/nicholas_negroponte.html

Terris, B. (2009, December 6). Rebooted Computer Labs Offer Savings for Campuses and Ambiance for Students. The Chronicle of Higher Education, pp. http://chronicle.com/article/Computer-Labs-Get-Rebooted-as/49323/.

Thurlow, C., & Brown, A. (2003). Generation Txt? The sociolinguistics of young people’s text-messaging. Discourse Analysis Online, http://extra.shu.ac.uk/daol/articles/v1/n1/a3/thurlow2002003.html.

Tippet, K., & Turkle, S. (2011, August 25). On Being: Alive Enough? Retrieved from American Public Media: http://being.publicradio.org/programs/2011/ccp-turkle/transcript.shtml

Wagoner, T., Hoover, S., & Ernst, D. (2012). CEHD iPad Initiative. Minneapolis: CEHD.

Walker, J., & Jorn, L. (2009). 21st Century Students: Technology Survey. Minneapolis: University of Minnesota.

Warschauer, M. (2008). Whither the Digital Divide? In D. L. Kleinman, K. A. Cloud-Hansen, & a. J. C. Matta, Controversies in Science & Technology: From climate to chromosomes. (pp. 140-152). New Rochelle: Liebert.

Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 501-524.

Willoughby, T. (2008). A Short-Term Longitudinal Study of Internet and Computer Game Use by Adolescent Boys and Girls: Prevalence, Frequency of Use, and Psychosocial Predictors. Developmental Psychology, 195-204.

Yeh, S. S. (2011). The Cost-Effectiveness of 22 Approaches for Raising Student Achievement. Charlotte: Information Age Publishing.

Zhang, K. H., & Song, S. (2003). Rural–urban migration and urbanization in China: Evidence from time-series and cross-section analyses. China Economic Review , 386-400.

Zickuhr, K., & Smith, A. (2012). Digital differences. Washington DC: Pew Research Center’s Internet & American Life Project.



 


For a More Robust Evaluation of 1 to 1 ICT for Education Adoption Projects

»Posted by on May 12, 2012 in Spring 2012 | 0 comments


May 12, 2012

The rapid change of information and communication technology (ICT) increases the challenge of determining how best to evaluate proficient use of these technological advances and their impact on learning. Through an overview of different initiatives, this paper illustrates the benefits of implementing a mixed-methods approach and of analyzing projects over a prolonged period of time. Looking at a program over a longer timeframe can make us more aware of the impact the program has on an individual and a community. The use of mixed-methods helps us take into account the different ways in which we can analyze a program, studying variables that are measurable and generalizable as well as elements that are specific to a particular situation. By incorporating these elements into evaluation studies we can potentially increase the quality and usability of the reports generated. To illustrate the benefits of mixed-methods and the continued analysis of a project, this paper discusses the 1-to-1 iPad project at the University of Minnesota.

Rapid Rate of Change – A Relevant Characteristic of ICT for Education Projects

It was only a few decades ago, in 1978, that top MIT computer scientists had reservations about the usability of the personal computer and whether people would use it for tasks such as keeping an address book or a personal calendar (Tippet & Turkle, 2011). Since then, many technology adoption projects have been promoted, and items that were originally available only to a few are now far more common. Today, universities in the United States increasingly consider remodeling their computer labs, as almost all college students (89.1% at UMN in 2009) bring their own laptops to the university (Walker & Jorn, 2009). The share of students bringing laptops to college increased from 36% in 2003 to 83% in 2008 (Terris, 2009).

The rapid improvement of technology results in the rapid depreciation of gadgets, as well as difficulty in evaluating them. The increased capacity of technologies and their computational power has encouraged educational institutions and other industries to adopt them. The ownership of Information and Communication Technologies (ICTs) has decreased the costs of transferring data and increased workers’ potential productivity (Friedman, 2007). Other influential ICTs, such as the mobile phone, television, the internet, and radio, have augmented the quantity of information available to individuals. The economic benefits from improvements in information and data transfer have led to increased investment. There has also been growing interest in information and digital literacy as necessary skills for the 21st century (Flannigan, 2006; Jenkins, Purushotma, Clinton, Weigel, & Robison, 2006). While not all of the changes brought by increased access to technology are positive, the increased access to information and the rapid improvement of these technologies have a major impact on society (Carr, 2011; Kurzweil, 2000). Unlike some traditional fields such as mathematics and history, where most basic concepts have remained unchanged, the impact of new media and its prevalence in society have changed substantially in the past few decades, and with them the difficulty of evaluating these projects. Mobile subscriptions alone increased from less than 3 billion in 2006 to 5.9 billion in 2011 (ITU, 2012).

This rapid change makes it difficult to determine the essential skills a learner must have in the workplace of tomorrow (Cobo & Moravec, 2011). With hundreds of thousands of computer applications and many types of hardware, some highly complex, it can take a person a significant amount of time to become adept in any complex program. Many users of NVivo, a qualitative research software package, may not know how to use SPSS, a quantitative research software package, successfully. A high level of specialization is often the norm, as using specialized programs successfully requires a degree of mastery over statistical analysis or qualitative research methods. Similarly, programs such as Adobe Photoshop, Bryce, Python, Android OS, Excel, and Audacity, among others, have considerable learning curves (there are courses available for learning any of these programs). Being a specialist in a particular program can lead to a very successful financial career, but mastering even a single program can take dozens or hundreds of hours of practice. While it may take 10,000 hours to become a successful reporter, violinist, or writer (Gladwell, 2008), ICT contains thousands of possibilities, each with its own proficiency levels (including unique musical instruments and new ways of writing [via text or Twitter]).

The relevance of rapid change when evaluating ICT adoption programs is important because it influences what we consider to be effective use of these technologies by the general population. Texting, for example, is increasingly commonplace and is considered by some experts to be a nascent dialect (Thurlow & Brown, 2003). How important, then, is it to know how to send texts effectively and use a mobile phone in the 21st century? These questions are hard to answer because a technology may be displaced in a few years’ time. The rapid change of technology complicates how we measure digital literacy and, through it, the effectiveness of 1-to-1 adoption and usability programs. These complications are at times difficult to perceive because of generational differences between the evaluator and younger generations (Prensky, 2001).

Today young adults (18-24) send an average of 109.5 text messages a day, or roughly 3,200 a month, and many of them prefer communicating by text message rather than email. Email, a moderately recent invention, is to some already considered old-fashioned and impractical (Smith, 2011). With this in mind, does an individual’s capacity to use email effectively remain a 21st-century digital literacy requirement? While the International Society for Technology in Education (ISTE) has worked on developing ICT-for-education standards that can aid the evaluation of technology adoption programs (ISTE, 2008), these standards emphasize broad competencies and must be operationalized to the distinctiveness of each 1-to-1 ICT program.

If technology continues to improve at a very rapid, perhaps even exponential, rate, it raises questions about the best ways to evaluate a 1-to-1 technology project (laptops, mobiles, e-readers, etc.). In this essay I propose analysis over a long period of time to assess the impact of a program on the individual. As argued by Seiter (2008), increasing access to technology will likely help individuals become more proficient at using the devices; yet, as with playing the piano, it takes many hours of practice to become skilled. One of the key advantages of 1-to-1 initiatives is that participants are able to take the devices home. It is easier to become proficient with a device that one has access to at home than with one whose use is limited to the classroom. As Seiter (2008) argues, “There is an overestimation of access to computers in terms of economic class, and an underestimation of specific forms of cultural capital required to maintain the systems themselves and move beyond the casual, recreational uses of computers to those that might lead directly to well-paid employment” (p. 29). If Seiter (2008) is accurate, most of the economic benefits from ICT use come from long-term use, which makes take-home access all the more valuable.

ICT investment can be very expensive, and many ICT projects could not be developed without the support of private industry and government (Heshmati & Addison, 2003). While ICT may not be as important as basic education, food, and health services, governments throughout the world have spent large sums on ICT-for-education initiatives hoping to imitate the success many advanced economies have obtained from their ICT industries and byproducts (MSC, 1996). “Investment in ICT infrastructure and skills helps to diversify economies from dependence on their natural-resource endowments and offsets some of the locational disadvantages of landlocked and geographically remote countries” (Heshmati & Addison, 2003, p. 5).

Adequately evaluating 1-to-1 technology adoption initiatives is increasingly important, as different education interventions can have different cost-effectiveness and cost-benefit ratios, with some interventions being much more effective than others (Yeh, 2011). Working with limited funds, governments must administer their resources in the best possible way to help their citizens meet their various needs, from food and shelter to self-actualization. Just because one intervention is more cost-effective does not mean that another should necessarily be discarded (countries can implement multiple interventions if funds are available). As Abraham Maslow (1943) suggested, many needs can and should be met simultaneously. While there is a clear hierarchy of human needs, an improvement in one area of life, such as shelter, does not occur in a vacuum, and is not exclusive of the individual’s desire to feel accepted by others or to improve their problem-solving ability (Maslow, 1943). Investing in ICT is important for states as they move towards becoming economically diverse, robust, and more competitive, relying more on their human capital than on their natural resources. To evaluate these projects more precisely, this paper encourages evaluators to conduct a mixed-methods analysis with a long-term perspective.

Evaluating Information and Communication Technology Projects

Evaluation can help increase the effectiveness of programs and improve the distribution of the limited resources available to a society. The decisions made by an evaluator can impact the lives of many individuals. Evaluators can help improve a program as well as decide whether or not it should be continued (Fitzpatrick, Sanders, & Worthen, 2011). Discussing the methodology of evaluation, Scriven (1967) differentiated between formative evaluation (focused on development and improvement [the cook tasting the soup]) and summative evaluation (focused on whether the program is meeting its stated goals [the guest tasting the soup]). By conducting an evaluation, a decision-making body is able to make an informed decision about the future of a program. Yet, when dealing with complex programs with many pieces and unique elements, it is difficult for an evaluator to frame an evaluation that will yield the most valuable information, particularly when there is limited time to conduct it, and when the brevity of a report can be one of its strengths (Krueger, 1986). Different methods provide different valuable lenses through which to look at a problem, frames that the evaluator should consider before conducting the evaluation.

Possibly the most important elements to consider in a 1-to-1 ICT project are its cost and its use by the learners. The best-known 1-to-1 effort is the One Laptop Per Child program (OLPC), which has been most successful in Latin America, delivering hundreds of thousands of units (http://one.laptop.org/). Yet at over $100 per student (closer to $200 in practice), it could cost $500 billion to provide a computer to every person who currently lacks access to the internet worldwide (roughly 5 billion people), and this would not include continued maintenance, electricity, or the cost of internet access. Is access to ICT really that important? According to a recent UNESCO publication, while 1-to-1 laptop projects are very costly, in Latin America “in the last three years, the 1:1 model has become increasingly widespread, and 1:1 programmes are now the primary focus of national policies for ICT in education in the region. Policy-makers are no longer discussing whether the 1:1 model is worthy of investment but rather how best to achieve it” (Lugo & Schurmann, 2012).
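A back-of-the-envelope check of the $500 billion figure, using the text's own assumptions (a nominal $100 laptop and roughly 5 billion unconnected people):

```python
# Worldwide cost of one laptop per unconnected person, at the nominal
# OLPC target price. Excludes maintenance, electricity, and connectivity.
unconnected = 5_000_000_000   # rough estimate from the text
cost_per_laptop = 100         # nominal OLPC price; closer to $200 in practice

total = unconnected * cost_per_laptop
print(f"${total / 1e9:,.0f} billion")  # matches the $500 billion figure
```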

 

While a price tag of $100 is an expensive investment for developing countries, especially when some spend less than $100 per student per year in their educational budgets, it is also important to consider that all programs have costs, even when those costs are not financial. Even 1-to-1 programs that are “free” (through donations) have costs, including the cost of e-waste disposal. Even when they are based on volunteer efforts, programs have at a minimum an opportunity cost for instructors and learners. The cost of a program can be most effectively assessed by measuring its different ingredients. This allows programs to be quantified, various elements to be weighted, and, as a result, programs to be compared with each other through a cost-effectiveness analysis (Levin, 2001). The financial benefit of a program can also be determined through a cost-benefit analysis. Through a qualitative study, “thick”, rich descriptive information can be obtained and thematically organized, helping key stakeholders better understand elements that would otherwise go unnoticed (Geertz, 1973).
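The ingredients approach described above can be sketched as follows. The ingredient names, dollar amounts, and effect size are all hypothetical; the point is only that costing every ingredient, including non-financial ones, makes programs comparable.

```python
# A toy sketch of Levin's (2001) ingredients method: list every ingredient,
# cost it (even "free" ones, like e-waste disposal or volunteer time), and
# compare programs by cost per unit of effect. All figures are hypothetical.
def total_cost(ingredients: dict[str, float]) -> float:
    return sum(ingredients.values())

program_a = {
    "devices": 300_000,
    "support_staff": 80_000,
    "training": 20_000,
    "e_waste_disposal": 5_000,
}
effect_a = 0.15  # hypothetical effect size on some outcome measure

ce_ratio = total_cost(program_a) / effect_a  # cost per unit of effect
print(f"total cost: ${total_cost(program_a):,.0f}")
print(f"cost-effectiveness ratio: ${ce_ratio:,.0f} per effect-size unit")
```

Two competing programs can then be ranked by their ratios, which is precisely the comparison the ingredients method is meant to enable.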

Programs can also be mapped through a logic model, which can include inputs, activities, outputs, and outcomes (Alter & Murty, 1997; McLaughlin & Jordan, 1999). The order in which the elements of a program are implemented, and the context where it is implemented, may also influence its results. There are also likely to be competing program alternatives, some of which may be more effective than the particular program being considered. Hoping to increase the transferability or generalizability of a study, an evaluation can also be theory-driven (Weiss, 1997). These and other elements can improve the quality and usability of the data obtained by an evaluation. However, with limited time and resources, the methodology used to evaluate a program depends both on the strengths of the researcher and on what key stakeholders consider of principal importance.
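A logic model of the kind mentioned above can be represented as a simple mapping. The entries below are hypothetical examples for a 1-to-1 program, not categories from any actual evaluation.

```python
# A minimal logic-model sketch (inputs -> activities -> outputs -> outcomes),
# with hypothetical entries for a 1-to-1 device program.
logic_model = {
    "inputs": ["iPads", "support staff", "faculty time"],
    "activities": ["device training", "classroom projects", "e-reading pilot"],
    "outputs": ["courses using iPads", "apps adopted", "readings distributed"],
    "outcomes": ["student engagement", "digital literacy", "employability"],
}

for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```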

 

Over time, every practicing evaluator is, or is in the process of becoming, a “connoisseur” (the art of appreciation) as well as a “critic” (the art of disclosure) (Eisner, 1994, p. 215). This knowledge allows him or her to propose to key stakeholders the methods of evaluation best suited to a particular scenario. However, the interests of secondary stakeholders are also important in many ICT adoption programs.

The Relevance of Mixed Methods and Triangulation

“The underlying rationale for mixed-methods inquiry is to understand more fully, to generate deeper and broader insights, to develop important knowledge claims that respect a wider range of interests and perspectives” (Greene & Caracelli, 1997, p. 7).

 

Mixed-methods can greatly benefit a study, as they allow the researcher to ask questions that he or she might otherwise ignore, obtaining additional information. While “purists” oppose the use of mixed-methods due to potential epistemological and ontological contradictions, many evaluators take a more “pragmatic” approach (Greene, Caracelli, & Graham, 1989). One concern regarding the use of mixed-methods is that they may compromise the methodological integrity of an experimental study. These are valid concerns, and it is important to consider carefully how methods are being combined, to avoid unintended conflicts that jeopardize the integrity of the study. Some of the theoretical objections researchers raise against mixed-methods may not be as applicable to evaluators, as evaluators do not have the same goals as researchers. While researchers focus to a greater extent on theory, generalizability, and transferability, many evaluators focus on utilization and the practical implications of their analysis for key stakeholders and the future of the program (Patton, 2007). To the “pragmatist” evaluator, “philosophical assumptions are logically independent and therefore can be mixed and matched, in conjunction with choices about methods, to achieve the combination most appropriate for a given inquiry problem. Moreover, these paradigm differences do not really matter very much to the practice” (Greene, Caracelli, & Graham, 1989, p. 8).

Mixed methods often refers to the use of methods from different paradigms, combining a qualitative method, such as unstructured interviews or participant observation, with a quantitative method, such as academic achievement scores or another statistical measure, within the same study (Johnson & Onwuegbuzie, 2004). While it seems beneficial to analyze a problem in multiple ways, experts in both qualitative and quantitative methods have expressed concerns about this approach. Johnson and Onwuegbuzie (2004) argued that part of the “purist” concern stems from a “tendency among some researchers to treat epistemology and method as being synonymous,” which is not necessarily the case (p. 15). To Johnson and Onwuegbuzie, most researchers who use mixed methods do so when they consider their use to be most appropriate. Johnson and Onwuegbuzie (2004) argue for a contingency theory of research, which emphasizes that while no method is superior, there are instances when one is preferable to the other.

One of the biggest benefits of using mixed methods is that they allow for the triangulation of findings. According to Denzin (1978), triangulation is “the combination of methodologies in the study of the same phenomenon” (p. 291). Denzin (1978) describes four types of triangulation: data triangulation, investigator triangulation, theory triangulation, and methodological triangulation, each possible either within methods or between methods. The ways in which methods are mixed vary: at times both methods carry the same amount of influence, while at other times one method holds preeminence. Triangulation is a common way to strengthen the generalizability and transferability of a study and the strength of its claims. Other benefits of using mixed methods include complementarity, where the results of one method are clarified by another; development, when one method informs the other; expansion, which seeks to increase the scope of one methodology; and initiation, which seeks the discovery of paradox by recasting results or questions from one method to another (Greene, Caracelli, & Graham, 1989). Regardless of the initial results, this approach usually provides richer data. Comparisons between the data can lead to “convergence, inconsistency, or contradiction” (Johnson, Onwuegbuzie, & Turner, 2007, p. 115).

If there is a conflict or an inconsistency within the data, it becomes more difficult to establish a causal relationship, and the finding may require further study and explanation. This explanation can be provided through structural corroboration, through further analysis, or by sharing both findings with the key stakeholder, who can then use both pieces of information to make his or her decisions (Eisner, 1994). While most evaluators feel a responsibility to provide recommendations to stakeholders, these recommendations do not necessarily have to resolve the contradiction scientifically; rather, a “connoisseur” may state, based on his or her experience, which path he or she believes is the best one to follow. ICT adoption includes many invisible elements, which increases the difficulty of evaluating such projects (Cobo & Moravec, 2011). Because of this complexity, it is helpful for the evaluator to share his or her opinion as a “connoisseur”. Social programs are generally complex. By providing key stakeholders a focused report that emphasizes the main findings of the mixed-methods evaluation, evaluators make it more likely that stakeholders will reach a good formative or summative decision. As will be illustrated, this was an objective pursued by the 1 to 1 iPad initiative at the University of Minnesota.

Encouraging the Long-Term Study of ICT Projects

 

The limited timeframe of a study can result in a restricted analysis. Iterative formative evaluations allow key stakeholders to constantly reevaluate ways in which to improve a program (Mirijamdotter, Somerville, & Holst, 2006). Iterative and continuous evaluations are very important for internet-based companies. Google, for example, is known to regularly test new algorithms and versions of its search engine simultaneously with consumers, to obtain helpful usability comparisons. It tries hundreds of variations of its search engine a year in an attempt to improve the product without customers noticing the minor modifications and changes (Levy, 2010). Many other ICT firms regularly test new features. Similarly, many ICT adoption projects include an iterative process in their analysis, yet in the discussion of their findings the evaluations regularly omit the potential long-term benefits of the programs, focusing instead on short-term costs and benefits. While there are time constraints and financial limitations on evaluations of 1 to 1 laptop programs, these evaluations would benefit from a stronger effort to measure the long-term benefits of the interventions, including gains in cultural capital (Seiter, 2008).

Longitudinal studies, ethnographic research, and time series are among the methodologies that can help illustrate the potential benefits of the long-term analysis of an intervention. Some of these studies can be very expensive, but they allow for the observation of changes that would otherwise go unnoticed. Another recent example of the possibilities of looking at changes over time was made possible by the Google Books Ngram Viewer (http://books.google.com/ngrams), which allows word frequencies to be analyzed over a span of 200 years. This type of study, called culturomics, is one of the newest ways in which an analysis of a subject over time provides additional insight into an issue (Michel, et al., 2010). While the Ngram Viewer is not very useful for evaluators, other forms of longer-term analysis can be of greater support.

Ethnography is a field of study in which time spent in the field is an important validity variable. Ethnographers focus primarily on the quality of the data, whose validity can be increased when the researcher has lived in a community for a longer timeframe and has obtained, through this extended stay, a greater understanding of the local culture. Some of the subtleties analyzed by ethnographers require time and involvement to be discovered. To some researchers, ethnography implies a study that takes more than a year (Fuller, 2008). While some projects may last perhaps a single long day, other “projects are developed throughout the whole of a researcher’s life; an ethnography may become a long, episodic narrative” (Jeffrey & Troman, 2004). In quantitative analysis, a time series, as its name implies, also emphasizes the importance of collecting data over time. Such statistical data can be collected at various intervals: monthly for unemployment benefits data, daily for financial exchange rates, over an exercise period when monitoring an individual’s pulse, or even every 2 seconds for EEG brain wave activity. A commonly used and informative time series is population census data, which many countries collect at regular intervals to help their governments better understand broader demographic changes, migratory patterns, and the future outlook of various variables (Zhang & Song, 2003).
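As a minimal illustrative sketch (the data below are hypothetical, not drawn from any study cited here), a simple moving average is one of the most basic ways to summarize a time series collected at regular intervals, smoothing short-term noise so that a longer-term trend becomes easier to see:

```python
# Hypothetical example: smoothing a regularly sampled time series with a
# simple moving average to reveal its longer-term trend.

def moving_average(series, window):
    """Average each run of `window` consecutive observations."""
    if window < 1 or window > len(series):
        raise ValueError("window must be between 1 and len(series)")
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Hypothetical monthly measurements (e.g., average program survey scores).
monthly = [3.1, 3.4, 3.2, 3.6, 3.8, 3.7, 4.0, 4.1]
print([round(v, 2) for v in moving_average(monthly, window=3)])
```

The same smoothing idea underlies many of the interval-based series mentioned above, from monthly unemployment figures to daily exchange rates.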

Longitudinal studies can also be very helpful in understanding how an intervention at an early stage of a person’s development influences them throughout the rest of their lives. Various longitudinal studies have been conducted within early education to identify the changes these interventions may make in the lives of individuals. Longitudinal studies include interventions such as prenatal care and youth reading programs, or the observation of children as they grow older, among many others. One of the most famous longitudinal studies in education was the Student/Teacher Achievement Ratio (STAR) Tennessee class size reduction study, which began in 1985 and continued until 1999 (Finn & Achilles, 1999; Hanushek, 1999). The study tracked students who were assigned at random to kindergarten classes of between 13 and 17 students, or to larger classes of between 22 and 26. Over 6,000 students took part in the study, where they were kept in smaller classrooms for 4 years and continued to be monitored after the end of the intervention. The study found statistically significant changes in student achievement scores on the three measurements used. The conclusions of this study strengthened claims regarding the positive impacts of class size reduction, which encouraged the enactment of class reduction policies in California (1996) and other states. While other studies have contradicted its findings, the study’s experimental design, its magnitude, and its use of a longitudinal analysis strengthened its claims. There have been a number of important longitudinal studies in early childhood and other early interventions that have followed children’s development for decades (NCES, 2010). The approach is also used frequently within the health sciences.

Another popular long-term longitudinal study is the British Up Series, which has followed a group of 14 children since 1964, beginning at age seven, and is still in production. Similar documentaries have been produced in Australia (since 1975), Belgium (1980-1990), Canada (1991-1993), the Czech Republic (1980s), Germany (1961-2006), Denmark (from 2000), Japan (from 1992), the Netherlands (from 1982), South Africa (from 1982), Sweden (from 1973), the USSR (from 1990), and the USA (from 1991). While these long-term studies can be expensive to conduct, they provide a different dimension to findings, a dimension that is not available in most 1 to 1 technology adoption evaluations.

 

The key benefit of including this dimension within an evaluation stems from the difficulty of knowing how the skills obtained from using new ICT devices will give an individual the confidence and the background needed to develop future ICT competencies that may be beneficial in the job market. Will their familiarity with ICT at an early age bring about broader benefits later in their lives? A short-term outlook may at times provide a negatively skewed view of the impact of these projects, expecting more out of a pilot project than should be expected. In addition, it is common for program designers to overstate the potential outcomes of a project, expecting it to have a greater impact than is likely possible. For example, as an evaluation of USAID basic education projects (1990-2005) showed, most of its projects produced less than a 4% gain in student achievement scores, despite the efforts of many specialists and the expenditure of millions of dollars (Chapman & Quijada, 2007). One to one technology adoption projects can also be very expensive and as such can show a very negative cost-benefit ratio in the first years of the program. It is very important to take into account the rapid depreciation rate of ICT, but evaluations should also take into account future, longer-term benefits of the investment.

Having access to personal computers is essential for most of the workforce in the 21st century. As argued by Seiter (2008), having a computer at home is almost a necessity for developing competent skills in the subject: “The likelihood of gaining strong digital literacy skills on this type of machine [a computer lab] is much slimmer than on a home computer. In other words, learning to use computers at school is like the music education class in which you have forty minutes to hold an instrument in your hands once a week, along with thirty other kids” (p. 37).

 

Many of the computer programs that students may eventually learn to use will require them to invest dozens, hundreds, or perhaps thousands of hours to master. In addition, individuals who are less familiar with computers tend to be less confident about becoming proficient with new programs (Mohammed, 2007). While a television, a radio, or a “feature” mobile phone may have a short learning curve, the same cannot be said of personal computers, the internet, or smartphones, each of which is complex to a different extent. Digital literacy programs such as RIA can teach a digital immigrant a basic set of skills in 72 hours, but many more hours are needed for complex use of a personal computer or an internet-capable device (http://www.ria.org.mx). Just learning how to type rapidly on a QWERTY keyboard takes many hours of practice.

By considering a project’s impact over a longer frame of time, this article encourages the continued evaluation of a program over a number of years, at regular intervals, while providing recommendations and reporting on the benefits and drawbacks of the program as it is modified over time. This type of long-term evaluation is best suited for an internal evaluator, or a combination of internal and external evaluators. When thinking of the cost of 1 to 1 programs over time, it is also important to keep in mind the rapid depreciation of technology. Given this depreciation, should 1 to 1 programs focus on purchasing the most up-to-date gadgets and tools? This question is best analyzed through the inclusion of a cost-effectiveness analysis that accounts for the depreciation of technologies.
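As a minimal sketch of such an account (all figures below are hypothetical assumptions, not program data), straight-line depreciation spreads a device’s cost, net of its residual value, evenly over its useful life, yielding an annualized cost per student:

```python
def annual_cost_per_student(unit_cost, salvage_value, useful_life_years,
                            n_students, yearly_support_cost=0.0):
    """Annualized program cost per student under straight-line depreciation."""
    # Straight-line depreciation: (cost - salvage) spread evenly over the useful life.
    depreciation_per_device = (unit_cost - salvage_value) / useful_life_years
    total_yearly_cost = depreciation_per_device * n_students + yearly_support_cost
    return total_yearly_cost / n_students

# Hypothetical: $500 tablets worth $100 after a 4-year useful life, 450 students,
# plus $45,000 a year in support staff and infrastructure.
print(annual_cost_per_student(500, 100, 4, 450, 45_000))  # → 200.0
```

A fuller cost-effectiveness analysis would compare this annualized figure against measured learning outcomes, but even this simple form makes the effect of depreciation assumptions visible.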

A Case Study – University of Minnesota One iPad Per Student Initiative

As previously discussed, the evaluation of technology adoption programs has tended to focus on short-term analysis, without sufficiently addressing the importance of analyzing the implications of adoption over a longer time spectrum. As advanced economies are increasingly fueled by the ownership of patents and new inventions, so too have other countries attempted to further develop these sectors (Heshmati & Addison, 2003). The information transferred through ICT can help countries develop into more diverse and sustainable economies. It is through ingenuity, creativity, innovation, or “Mindware” that groups and individuals come together to form new industries and adapt to different types of crises (Cobo & Moravec, 2011). Via technology adoption programs, individuals can increasingly access the information that will help them develop valuable skills. By evaluating with a long-term focus, and by incorporating both qualitative and quantitative elements, an evaluation will be better able to address the questions of key stakeholders. This paper illustrates the limitations and strengths of a recent evaluation of a one to one iPad initiative at the University of Minnesota.

One Laptop Per Child – An Evaluation of Peru’s Project

Possibly the most controversial, and also the most commonly cited, 1 to 1 initiative is One Laptop Per Child (OLPC), started by Nicholas Negroponte, the founder of the MIT Media Lab (TED, 2008). According to Negroponte, by thinking in bits instead of atoms, and by learning how to operate a computer, a child can learn that the world is increasingly available at the click of a button, and that they can construct and build anything they can imagine by programming new and amazing environments (Negroponte, 1996). Following Papert’s constructionism, Negroponte believes that programming teaches an individual how to learn, as they must go back, revisit their code, and figure out why there is a mistake (Papert, 1980). As an ICT evangelist, Negroponte highlighted how simply giving a child a computer would expand that child’s possibilities (Negroponte, 1996). Since the beginning of OLPC in 2005, over 2.5 million laptops have been delivered (http://one.laptop.org/about/faq). However, despite the high level of investment, particularly in Latin America, project evaluations have not shown significant gains in achievement scores (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012).

A recent evaluation of OLPC in Peru showed that despite a high level of investment in these new machines (902,000 laptops), which increased the ratio of computers per student from 0.12 to 1.18, student performance in math and reading had not increased substantially. The evaluation did find that students’ cognitive skills improved over the course of the study (measured by Raven’s Progressive Matrices, a verbal fluency test, and a coding test). While analysts have since highlighted that the program had only limited effects on math and language achievement (0.003 standard deviations), little emphasis has been given to the potential impact of the improvement in cognitive skills, and, perhaps more importantly, to what improved digital literacy skills will mean for these individuals in the future as they are asked to learn other task-specific digital and information literacy skills (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012). As mentioned by Seiter (2008), high-level ICT skills may take many years to fully demonstrate themselves as marketable skills in the lives of students.
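The “0.003 standard deviations” figure above is a standardized effect size. As a hedged sketch of how such a figure is obtained (using invented scores, not the Peruvian data), a standardized mean difference in the style of Cohen’s d divides the difference in group means by a pooled standard deviation:

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical achievement scores for laptop and non-laptop groups.
laptop = [52, 55, 50, 58, 54]
no_laptop = [51, 54, 50, 57, 53]
print(round(cohens_d(laptop, no_laptop), 3))
```

On this scale, an effect of 0.003 is far below the thresholds conventionally described as even a “small” effect, which is why analysts characterized the achievement gains as negligible.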

It is also difficult to know from the available data whether a different investment would have been more cost-effective or resulted in a higher cost-benefit ratio in Peru. One of the unmet goals of OLPC was to produce a $100 laptop; the laptops currently cost around $200 (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012). As a project not affiliated with Microsoft, Google, or Apple, the OLPC laptops came with an operating system (OS) known as Sugar. While all operating systems share similarities, did the use of the Linux-based Sugar limit or expand the possibilities for students? When testing student computer literacy skills, the evaluators found that students quickly became more adept at using these devices. As explained earlier in this paper, they also had difficulties in deciding which skills should be tested (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012, p. 15). Unfortunately, another unmet goal of the project was connectivity: Peru’s OLPC participants lacked internet access. OLPC was partly designed so that students could benefit from increased connection, either through OLPC’s exclusive mesh network or through the Internet. The impacts of lacking access to the internet are hard to measure; however, they may have affected the individuals’ development of their information literacy skills. In conclusion, Peru’s evaluation of the OLPC project was very insightful, but while it contained a qualitative element, the project had a quantitative focus, limiting the reader’s understanding of how the initiative affected individuals. For a project that centers on the individual, learning more about the project’s impact on the person is increasingly relevant as ICT becomes more personalized. Apart from not discussing potential long-term gains, the evaluation also failed to mention the full cost of the devices.
With the laptop itself accounting for only a tenth to a seventh of the total cost of the intervention, it is important to consider whether this is a cost-effective investment (Lugo & Schurmann, 2012). The evaluation would have benefited from a broader implementation of mixed methods, particularly on the qualitative side, while also examining these changes over a longer span of time. An element of time that is particularly important to first-year initiatives is the teacher’s or instructor’s familiarity, or learning curve, as they will slowly learn better ways to use the devices and integrate them within the classroom.

University of Minnesota iPad Initiative

The discussion surrounding the digital divide has traditionally centered on access to the internet and a personal computer, yet the rapid change of technologies leads us to question whether the divide will be centered on these devices in the future (Warschauer, 2008; Zickuhr & Smith, 2012). What role will smartphones, augmented reality glasses, 3D printers, or, farther into the future, nanotechnology implants play in terms of the digital divide (Kurzweil, 2000)? A current technology that may further displace the purchase of paper books for K-12 and higher education is the e-reader, the most successful of which are the iPads (I, II, and III) and Amazon’s Kindle readers. A recent NPD report indicated that tablets may outsell laptop computers by 2016, with sales expanding from 81.6 million units a year (2011) to 424.9 million units (2017) (Morphy, 2012). Will we then measure the digital divide in terms of who does and who does not have access to an iPad?

Pilot projects at universities such as the University of Minnesota, the University of San Diego, and Oberlin College, among a few others, have moved toward answering this question. The iPad, the first commercially successful tablet, was released in April 2010; that same year, the University of Minnesota decided to purchase 447 units to provide a tablet to every CEHD student in the upcoming undergraduate cohort. It was one of the first major initiatives of its type in the country. Because of its uniqueness as an early adoption project, its evaluation was based partly on the conclusions obtained from previous 1 to 1 projects such as the OLPC initiative and Maine’s statewide 1 to 1 adoption program. However, because the device was substantially different from previous ICT devices, the operationalization of the NETS standards and an in-depth analysis of its potential use had not yet been closely studied (ISTE, 2008). So far, only a few articles have been published regarding the use of the iPad in the classroom (EDUCAUSE, 2011). To better understand the possible educational implications of the adoption of this device, a CEHD research team decided to conduct a mixed-methods evaluation (Wagoner, Hoover, & Ernst, 2012). In addition, an initial commitment was made to continue evaluating the project for a number of consecutive years. The support of the dean was integral to the continuation of the program.

In the first year, the project’s goals were to increase the usability of the devices by both faculty and students, and to provide aid to faculty members so that they could familiarize themselves with the devices and consider the best ways to incorporate them within their classrooms. Faculty members were then encouraged to incorporate the devices as they saw best within their syllabi. Various graduate assistants served as support staff. Soon after the distribution of iPads, evaluators also drafted a post-test and organized a series of interviews. The interviews asked faculty members a number of questions, including how they learned to use their iPads, what their plans were for using them within the classroom, how the iPad had affected their teaching, and whether the support received had been appropriate (from field notes).

A similar set of questions was asked of faculty members at the end of the school year, when they were asked what projects they had actually implemented, the opinions of students regarding e-books, and pedagogical concerns, among other topics. Twenty-two interviews were coded and themes were developed from the qualitative study, including concerns from faculty about time investment, how the iPad compares with other technologies, the impact of the iPad on faculty members’ pedagogy, the impact of the iPad on their classroom management, and details about faculty members’ technology learning process. At the end of the year, a series of faculty focus groups was also conducted. Many of the details learned through the qualitative portion of the study would have been difficult to obtain otherwise. The common elements between the data from the focus groups and the interviews also allowed us to verify some observations. Below is an interesting quote from one of the participating faculty members:

“What I want, in terms of their behaviors, is for [the students] to be active explorers in the classroom, to bring the machines, and to actually utilize them for historical research … One of the things that we did as a first conversation is to describe the level of trust that is going to be involved … and they live up to those expectations. I’ve been really happy so far with what we’re learning. It conveys to them that they’re smart, capable discoverers that we’re co-creating knowledge—historical knowledge” (Wagoner, Hoover, & Ernst, 2012, p. 3)

While the quote above illustrates a very positive emotion, this experience would likely not have been visible through an analysis of student achievement alone, illustrating the benefit of utilizing mixed methods. Two student focus groups were also conducted, in which students shared some of their favorite apps and how they had used the iPad through the semester. Yet unlike the faculty, whose entire population the evaluators were able to interview, 447 students were more than the team could interview.

To obtain a better analysis of the student response, a survey was conducted that included a number of questions related to students’ use of, and experience with, the iPad. The survey was completed by 241 CEHD first-year students (Wagoner, Hoover, & Ernst, 2012). Having access to broader demographic data also allowed the evaluation team to compare student attitudes with socio-economic variables. Various strong correlations and significant relationships were found regarding the impact of iPads on student learning. In particular, the evaluation found that students felt the devices had had a positive effect on their motivation; students also expressed a high level of comfort using the devices and said the iPad helped them feel more engaged in some of their classes.
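As a hypothetical sketch of the kind of correlation analysis involved (the numbers below are invented for illustration, not the actual survey data), a Pearson correlation between two sets of survey responses can be computed as follows:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equally sized samples."""
    if len(xs) != len(ys) or len(xs) < 2:
        raise ValueError("need two samples of equal length >= 2")
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical 5-point Likert responses: comfort with the iPad vs. reported engagement.
comfort = [4, 5, 3, 4, 2, 5, 3]
engagement = [4, 5, 2, 4, 2, 4, 3]
print(round(pearson_r(comfort, engagement), 2))
```

A coefficient near +1 would indicate that students who reported more comfort with the device also tended to report more engagement, though, as with any correlation, it would say nothing about causation.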


The study also showed that students who were part of Access to Success (ATS) or had been part of the TRIO program, usually students of color or from low socio-economic backgrounds, mentioned feeling more engaged and connected during classes. From the qualitative data, the evaluators also learned that for some students the iPad had become a window into the internet, and a digital item for their whole household to use.

 

The success of the first-year implementation, and the questions that evaluators were still unable to answer, led to the continuation of the program for a second and third year. A similar number of iPads (now iPad 2s) was purchased in the second year of the program. Once again, the rapid change of technology provided new possibilities for evaluators, as the iPad 2 includes cameras, permitting students to record HD video and to have audio-visual communications with anyone via FaceTime, Skype, and other programs. After analyzing the potential savings from the extensive use of iPads for e-reading by some students, CEHD also decided to support a pilot project for the testing and adoption of open textbooks, as well as the establishment of a work desk where interested faculty members could obtain assistance building iBooks and ePubs.

The project is now planning its third year. Adapting to the results of the first-year evaluation, many of the questions on the second-year survey were modified to gather additional valuable information. One of the limitations of the evaluation so far has been the lack of a cost-effectiveness or cost-benefit study. Such a study should not only take into account the rapid depreciation of the devices, but also consider whether students are learning, through the use of the devices, skills that could potentially aid them when they join the workforce. While the costs have been high, at over $300,000 per year, it is difficult to assess the long-term benefits for participants (students and faculty members). The rapid devaluation of the devices is an important consideration, as it is possible that in a couple of years these devices will cost only a fifth of their original price while being even more feature-rich and powerful, allowing students to obtain a similar skill set for a fraction of the cost. It is also possible that many of the skills obtained are not very different from those gained from using other ICTs, reducing the importance of the investment.

 

Currently, a website is available where individuals interested in the results of the project can learn about the various innovative classroom projects that were developed, how they can be adapted to other classrooms, and suggested best practices. Some of the innovative uses of the iPads by students include the creation of digital stories; access to unique applications, including interactive stories and data visualization; rapid access to websites; and the development of an e-book library. In a report, CEHD concluded that the iPad had been helpful in addressing the concerns of the digital divide: it increased access to the tools needed for media production, increased access to tools that facilitate personal productivity, improved students’ possibilities for information access and consumption, helped reduce the cost of printing readings, and facilitated students’ learning outside of the classroom (Wagoner, Hoover, & Ernst, 2012). For year two, the program also hopes to further analyze the usability of the devices, and it recently developed a space for students to submit their creative productions with the iPads.

Despite the insights provided by the use of mixed methods for this evaluation, the limited timeframe of the study makes it difficult to determine whether or not the program is a worthwhile investment. With the program costing over $400 per student, apart from the cost of the administrative staff, is this the best investment for a university to make in terms of technology adoption? When will it be determined that the program is no longer worth its cost and is no longer helping to find innovative ways of learning? One of the limitations of CEHD’s 1 to 1 iPad program has been the limited emphasis on the possibilities of the device for informal learning. Some of these concerns will be better analyzed using the data collected from the second-year survey that was recently administered to students. A new wave of interviews and focus groups is also planned for the evaluation of the third year of the program.

With 500,000 applications available, there are almost endless possibilities as to how the devices can be integrated within the classroom, and the production of apps that match the goals of each individual more closely is likely to increase. Because of these devices’ future relevance, and the high level of creativity and innovation within this industry, constant evaluation is important, as it allows for the continued improvement of the project. The use of mixed methods allowed the evaluation team to find many interesting details that the study would not have found otherwise. These details enriched the quality of the findings and provided faculty with valuable information for improving their use of the iPad and for learning how their peers were using the devices.

Conclusion

 

ICT 1 to 1 adoption projects are difficult to evaluate, and the short-term focus of some evaluations results in a limited view of their potential impact. One of the difficulties in evaluating these programs comes as a consequence of rapid technological change.


For a More Robust Evaluation of 1 to 1 ICT for Education Adoption Projects

»Posted by on May 8, 2012 in Spring 2012 | 0 comments


May 8, 2012

The rapid change of information and communication technology (ICT) increases the challenge of determining how best to evaluate proficient use of these technological advances and their impact on learning. Through an overview of different initiatives, this paper illustrates the benefits of implementing a mixed-methods approach and of analyzing projects over a prolonged period of time. Looking at a program over a longer timeframe can make us more aware of the impact a program has on an individual and a community. The use of mixed methods helps us take into account different ways to analyze a program, studying variables that are measurable and generalizable, as well as elements that are specific to a particular situation. By incorporating these elements into evaluation studies, we can potentially increase the quality and usability of the reports generated. To illustrate the benefits of mixed methods and the continued analysis of a project, this paper discusses the 1 to 1 iPad project at the University of Minnesota.

Rapid Rate of Change – A Relevant Characteristic of ICT for Education Projects

It was only a few decades ago, in 1978, that top MIT computer scientists had reservations about the usability of the personal computer and whether people would use it for tasks such as keeping an address book or a personal calendar (Tippet & Turkle, 2011). Since then, many technology adoption projects have been promoted, and items that were originally available only to the few are much more common in the present. Today, universities in the United States increasingly consider remodeling their computer labs, as almost all college students in the United States (89.1% at UMN in 2009) bring their own laptops to the university (Walker & Jorn, 2009). The share of students bringing laptops to college increased from 36% in 2003 to 83% in 2008 (Terris, 2009).

The rapid improvement of technology results in the rapid depreciation of gadgets, as well as difficulty in evaluating them. The increased capacity of these technologies and their computational power has encouraged educational institutions and other industries to adopt them. The ownership of Information and Communication Technologies (ICTs) has decreased the cost of transferring data and increased workers' potential productivity (Friedman, 2007). Other influential ICTs, such as the mobile phone, the television, the internet, and the radio, have augmented the quantity of information available to individuals. The economic benefits from improvements in information and data transfers have led to increased investments. There has also been an increased interest in information and digital literacy as a necessary skill in the 21st century (Flannigan, 2006; Jenkins, Purushotma, Clinton, Weigel, & Robison, 2006). While not all of the changes brought by increased access to technology are positive, the increased access to information and the rapid improvement of these technologies have a major impact on society (Carr, 2011; Kurzweil, 2000). Unlike some traditional fields, such as mathematics and history, where most basic concepts have remained unchanged, the impact of new media and its prevalence in society has changed substantially in the past few decades, and with it the difficulty of evaluating these projects. Mobile subscriptions alone increased from less than 3 billion in 2006 to 5.9 billion in 2011 (ITU, 2012).

This rapid change makes it difficult to determine the essential skills a learner must have in the workplace of tomorrow (Cobo & Moravec, 2011). With hundreds of thousands of computer applications and many types of hardware, some of them highly complex, it can take a person a significant amount of time to become adept in any complex program. Many users of NVivo, a qualitative research program, may not know how to use SPSS, a quantitative research program, successfully. A high level of specialization is often the norm, as using specialized programs successfully requires a degree of mastery over statistical analysis or qualitative research methods. Similarly, programs such as Adobe Photoshop, Bryce, Python, Android OS, Excel, and Audacity, among others, have a considerable learning curve (there are courses available for learning any of these programs). Being a specialist in a particular program can lead to a very successful financial career, but mastering even a single program can take dozens or hundreds of hours of practice. While it may take 10,000 hours to become a successful reporter, violinist, or writer (Gladwell, 2008), ICT contains within it thousands of possibilities, each with different proficiency levels (including unique musical instruments and new ways of writing, via text or Twitter).

The rapid rate of change matters when evaluating ICT adoption programs because it influences what we consider to be effective use of these technologies by the general population. Texting, for example, is increasingly commonplace and is considered by some experts to be a nascent dialect (Thurlow & Brown, 2003). How important is it, therefore, to know how to send texts effectively and use a mobile phone in the 21st century? It is hard to answer these questions, as a technology may be displaced in a few years' time. The rapid change of technology complicates how we measure digital literacy and, through it, the effectiveness of 1 to 1 adoption and usability programs. These complications are at times difficult to perceive because of generational differences between the evaluator and younger generations (Prensky, 2001).

Today young adults (18-24) send an average of 109.5 text messages a day, or about 3,200 text messages a month, and many of them prefer communicating over text messages rather than email. Email, a moderately recent invention, is to some already considered old fashioned and impractical (Smith, 2011). With this in mind, does an individual's capacity to use email effectively remain a 21st-century digital literacy requirement? While the International Society for Technology in Education (ISTE) has worked on developing ICT for education standards which can aid the evaluation of technology adoption programs (ISTE, 2008), these standards emphasize broad competencies and must be operationalized to the distinctiveness of each 1 to 1 ICT program.

If technology continues to improve at a very rapid, perhaps even exponential, rate, it raises questions about the best ways to evaluate a 1 to 1 technology project (laptops, mobiles, e-readers, etc.). In this essay I propose analysis over a long period of time, to assess the impact of the program on the individual over time. As argued by Seiter (2008), increasing access to technology will likely help individuals become more proficient at using the devices, yet as with playing a piano, it takes many hours of practice to become a skilled pianist. One of the key advantages of 1 to 1 initiatives is that the participants are able to take the devices home. It is easier to become proficient using a device that one has access to at home than one whose use is limited to the classroom setting. As argued by Seiter (2008), "There is an overestimation of access to computers in terms of economic class, and an underestimation of specific forms of cultural capital required to maintain the systems themselves and move beyond the casual, recreational uses of computers to those that might lead directly to well-paid employment" (Pg. 29). If Seiter (2008) is accurate, and most of the economic benefits from ICT come from long-term use, then evaluations should be designed to capture that longer horizon.

ICT investment can be very expensive, and many ICT projects could not be developed without the support of private industry and the government (Heshmati & Addison, 2003). While ICT may not be as important as basic education, food, and health services, governments throughout the world have spent large quantities of funds on ICT for education initiatives, hoping to imitate the success many advanced economies have obtained from their ICT industries and byproducts (MSC, 1996). "Investment in ICT infrastructure and skills helps to diversify economies from dependence on their natural-resource endowments and offsets some of the locational disadvantages of landlocked and geographically remote countries" (Heshmati & Addison, 2003, p. 5).

Adequately evaluating 1 to 1 technology adoption initiatives is increasingly important, as different education interventions can have different cost-effectiveness and cost-benefit ratios, with some interventions being much more effective than others (Yeh, 2011). Working with limited funds, governments must administer them in the best possible way to provide their citizens with the ability to meet their various needs, from food and shelter to self-actualization. Just because one intervention is more cost effective does not mean that another intervention should necessarily be discarded (countries can implement multiple interventions if funds are available). As Abraham Maslow (1943) suggested, many needs can be and should be met simultaneously. While there is a clear hierarchy to human needs, an improvement in one area of life, such as shelter, does not occur in a vacuum, and is not exclusive of the individual's desire to feel accepted by others or to improve their problem-solving ability (Maslow, 1943). Investing in ICT is important for states as they move towards becoming economically diverse, robust, and more competitive, relying more on their human capital than their natural resources. To evaluate these projects more precisely, this paper encourages evaluators to consider conducting a mixed-methods analysis with a long-term perspective.

Evaluating Information and Communication Technology Projects

Evaluation can help increase the effectiveness of programs and improve the distribution of the limited resources available to a society. The decisions made by an evaluator can impact the lives of many individuals. Evaluators can help improve a program as well as decide whether or not the program should be continued (Fitzpatrick, Sanders, & Worthen, 2011). Discussing the methodology of evaluation, Scriven (1967) differentiated between formative evaluation (focused on development and improvement [the cook tasting the soup]) and summative evaluation (focused on whether the program is meeting its stated goals [the guest tasting the soup]). By conducting an evaluation, a decision-making body is able to make an informed decision about the future of a program. Yet, when dealing with complex programs with many components and unique elements, it is difficult for an evaluator to frame an evaluation that will yield the most valuable information, particularly when there is limited time to conduct it, and when the brevity of a report can be one of its strengths (Krueger, 1986). Different methods provide different valuable lenses through which to look at a problem, frames that the evaluator should consider before conducting the evaluation.

Possibly the most important elements to consider in a 1 to 1 ICT project are its cost and its use by the learners. The best-known 1 to 1 effort is the One Laptop Per Child program (OLPC), which has been most successful in Latin America, delivering hundreds of thousands of units (http://one.laptop.org/). Yet at over $100 per student (closer to $200), it could cost $500 billion to provide a computer to every person who currently lacks access to the internet worldwide (5 billion people), and this would not include continued maintenance and electricity costs or the cost of internet access. Is access to ICT really that important? According to a recent UNESCO (2012) publication, while 1 to 1 laptop projects are very costly, in Latin America "in the last three years, the 1:1 model has become increasingly widespread, and 1:1 programmes are now the primary focus of national policies for ICT in education in the region. Policy-makers are no longer discussing whether the 1:1 model is worthy of investment but rather how best to achieve it" (Lugo & Schurmann, 2012).

While a price tag of $100 appears to be an expensive investment for developing countries, especially when some countries spend less than $100 per student a year within their educational budget, it is also important to consider that all programs have costs, even when they are not financial. Even 1 to 1 programs that are "free" (through donations) have a cost, including an e-waste disposal cost. Even when they are based on volunteer efforts, programs still have at a minimum an opportunity cost for instructors and learners. The cost of programs can be most effectively assessed by measuring their different ingredients. This allows programs to be quantified, various elements to be weighted, and, as a result, programs to be compared with each other through a cost-effectiveness analysis (Levin, 2001). The financial benefit of a program can also be determined through a cost-benefit analysis. Through a qualitative study, "thick," rich descriptive information can be obtained and thematically organized, helping key stakeholders better understand elements that would otherwise go unnoticed (Geertz, 1973).
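As a rough sketch of the kind of comparison an ingredients-based cost-effectiveness analysis makes possible, consider the following. Every ingredient list, dollar figure, and effect size below is hypothetical, invented for illustration and not drawn from any of the studies cited in this paper:

```python
# Hypothetical ingredients-method comparison of two education interventions.
# All figures are illustrative placeholders, not data from the cited studies.

def total_cost(ingredients):
    """Sum the annual per-student cost of every ingredient."""
    return sum(ingredients.values())

# Ingredient lists (USD per student, per year) - hypothetical.
laptops = {"hardware": 200.0, "maintenance": 40.0, "electricity": 10.0,
           "teacher_training": 30.0, "connectivity": 50.0}
tutoring = {"instructor_time": 120.0, "materials": 15.0}

# Hypothetical effect sizes, in standard deviations of achievement.
effects = {"laptops": 0.05, "tutoring": 0.20}
costs = {"laptops": total_cost(laptops), "tutoring": total_cost(tutoring)}

for name in costs:
    # Cost-effectiveness ratio: dollars spent per SD of achievement gained.
    print(f"{name}: ${costs[name]:.0f}/student, "
          f"${costs[name] / effects[name]:.0f} per SD gained")
```

A lower ratio means more achievement per dollar spent; as noted above, a less cost-effective option need not be discarded if funds allow for both.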

Programs can also be mapped through a logic model, which can include inputs, activities, outputs, and outcomes (Alter & Murty, 1997; McLaughlin & Jordan, 1999). The order in which the elements of a program are implemented, and the context where a program is implemented, may also influence the results of the program. There are also likely to be competing program alternatives, some of which may be more effective than the particular program being considered. Hoping to increase the transferability or generalizability of a study, an evaluation can also be theory driven (Weiss, 1997). These and other elements can improve the quality and usability of the data obtained by an evaluation. However, with limited time and resources, the methodology used to evaluate a program depends on both the strengths of the researcher and what is considered of principal importance by key stakeholders.
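As a minimal illustration, a logic model of the kind just described can be written down as a simple ordered structure. The entries here are hypothetical placeholders for a 1 to 1 device project, not taken from any evaluation cited in this paper:

```python
# A minimal logic-model sketch for a hypothetical 1 to 1 device project.
# The chain reads: inputs enable activities, which produce outputs,
# which are expected (but not guaranteed) to lead to outcomes.
logic_model = {
    "inputs":     ["devices", "funding", "support staff", "faculty time"],
    "activities": ["distribute devices", "train faculty", "redesign courses"],
    "outputs":    ["devices in daily use", "faculty trained", "courses redesigned"],
    "outcomes":   ["improved digital literacy", "greater engagement",
                   "longer-term employability gains"],
}

for stage, elements in logic_model.items():
    print(f"{stage}: {', '.join(elements)}")
```

Laying the model out this way makes it easy for an evaluator to check, stage by stage, which links in the chain an evaluation actually measures and which are merely assumed.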

Over time, every practicing evaluator is, or is in the process of becoming, a "connoisseur" (the art of appreciation) as well as a "critic" (the art of disclosure) (Eisner, 1994, p. 215). This knowledge allows him or her to more effectively recommend to key stakeholders the best methods of evaluation to pursue in a particular scenario. However, the interests of secondary stakeholders are also important in many ICT adoption programs.

The Relevance of Mixed Methods and Triangulation

“The underlying rationale for mixed-methods inquiry is to understand more fully, to generate deeper and broader insights, to develop important knowledge claims that respect a wider range of interests and perspectives” (Greene & Caracelli, 1997, p. 7).

Mixed methods can greatly benefit a study, as they allow the researcher to ask questions that he or she might otherwise ignore, obtaining additional information. While "purists" oppose the use of mixed methods due to potential epistemological and ontological contradictions, many evaluators take a more "pragmatic" approach to their use (Greene, Caracelli, & Graham, 1989). One concern regarding mixed methods is that they may compromise the methodological integrity of an experimental study. These are valid concerns, and it is important to consider carefully how methods are being utilized, to avoid unintended conflicts that jeopardize the integrity of the study. Some of the theoretical concerns of researchers against using mixed methods may not be as applicable to evaluators, as evaluators do not have the same goals as researchers. While researchers focus to a greater extent on theory, generalizability, and transferability, many evaluators focus on utilization and the practical implications of their analysis for their key stakeholders and the future of the program (Patton, 2007). To the "pragmatist" evaluator, "philosophical assumptions are logically independent and therefore can be mixed and matched, in conjunction with choices about methods, to achieve the combination most appropriate for a given inquiry problem. Moreover, these paradigm differences do not really matter very much to the practice" (Greene, Caracelli, & Graham, 1989, p. 8).

Mixed methods often refers to the use of methods from different paradigms, combining a qualitative method, such as unstructured interviews or participant observation, with a quantitative method, such as academic achievement scores or another statistical measure, within the same study (Johnson & Onwuegbuzie, 2004). While it seems beneficial to analyze a problem in multiple ways, experts in both qualitative and quantitative methods express concerns about this approach. Johnson and Onwuegbuzie (2004) argued that part of the "purist" concern stems from a "tendency among some researchers to treat epistemology and method as being synonymous," which is not necessarily the case (Pg. 15). To Johnson and Onwuegbuzie, most researchers who use mixed methods use them when they consider their use to be most appropriate. Johnson and Onwuegbuzie (2004) argue for a contingency theory of research approach, which emphasizes that while no method is superior, there are instances when one is preferable to the other.

One of the biggest benefits of using mixed methods is that they allow for the triangulation of findings. According to Denzin (1978), triangulation is "the combination of methodologies in the study of the same phenomenon" (Pg. 291). Denzin (1978) describes four types of triangulation: data triangulation, investigator triangulation, theory triangulation, and methodological triangulation, each possible within methods or between methods. The ways in which methods are mixed vary, with both at times having the same amount of influence, while at other times one method holds preeminence. Triangulation is a common way to strengthen the generalizability and transferability of a study and the strength of its claims. Other benefits of using mixed methods include complementarity, where the results of one method are clarified by another; development, when one method informs the other; expansion, which tries to increase the scope of one methodology; and initiation, which seeks the discovery of paradox by recasting results or questions from one method to another (Greene, Caracelli, & Graham, 1989). Regardless of the initial results, mixing methods usually provides richer data. Comparisons between the data could lead to either "convergence, inconsistency, or contradiction" (Johnson, Onwuegbuzie, & Turner, 2007, p. 115).

If there is a conflict or an inconsistency within the data, it increases the difficulty of establishing a causal relationship, and the finding may require further study and explanation. This explanation can be provided by a form of structural corroboration, by further analysis, or by sharing both findings with the key stakeholder, who can then use both pieces of information to make his or her decisions (Eisner, 1994). While most evaluators feel a responsibility to provide recommendations to the stakeholders, these recommendations do not necessarily have to address the contradiction scientifically; rather, a "connoisseur" may state, based on his or her experience, which path may be the best to follow. ICT adoption includes many invisible elements, which increases the difficulty of evaluating it (Cobo & Moravec, 2011). Because of this complexity, it will be helpful for the evaluator to share his or her opinion as a "connoisseur". Social programs are generally complex. By providing key stakeholders with a focused report that emphasizes the main findings of the mixed-methods evaluation, they will be more likely to make a good formative or summative decision. As will be illustrated, this was an objective pursued by the 1 to 1 iPad initiative at the University of Minnesota.

Encouraging the Long-Term Study of ICT Projects

The limited timeframe of a study can result in a restricted analysis. Iterative formative evaluations allow key stakeholders to constantly reevaluate ways in which to improve a program (Mirijamdotter, Somerville, & Holst, 2006). Iterative and continuous evaluations are very important for internet-based companies. Google, for example, is known to regularly test new algorithms and versions of its search engine simultaneously with consumers, to obtain helpful usability comparisons. It tries hundreds of variations of its search engine a year in an attempt to improve the product without customers noticing minor modifications and changes (Levy, 2010). Many other ICT firms regularly test new features. Similarly, many ICT adoption projects include an iterative process in their analysis, yet in the discussion of their findings, the evaluations regularly omit the potential long-term benefits of the programs, focusing instead on short-term costs and benefits. While there are time constraints and financial limitations to evaluations of 1 to 1 laptop programs, these evaluations would benefit from a stronger effort to measure the long-term benefits of the interventions, including cultural capital gains (Seiter, 2008).

Longitudinal studies, ethnographic research, and time series are among the methodologies that can help illustrate the potential benefits of the long-term analysis of an intervention. Some of these studies can be very expensive, but they allow for the observation of changes that would otherwise go unnoticed. Another recent example of the possibilities of looking at changes over time was made possible by the Google Books Ngram Viewer (http://books.google.com/ngrams). The Ngram Viewer allows word frequencies to be analyzed over a span of 200 years. This type of study, called culturomics, is one of the newest ways in which the analysis of a subject over time provides additional insight into an issue (Michel, et al., 2010). While the Ngram Viewer is not very useful for evaluators, other forms of longer-term analysis can be of greater support.

Ethnography is a field of study in which time spent in the field is an important validity variable. Ethnographers focus primarily on the quality of the data, whose validity increases when the researcher has lived in a community for a longer timeframe and has obtained through this extended stay a greater understanding of the local culture. Some of the subtleties analyzed by ethnographers require time and involvement to be discovered. To some researchers, ethnography implies a study that takes more than a year (Fuller, 2008). While some projects could last perhaps a single long day, other "projects are developed throughout the whole of a researcher's life; an ethnography may become a long, episodic narrative" (Jeffrey & Troman, 2004). In quantitative analysis, a time series, as the name implies, also emphasizes the importance of collecting data over time. Such statistical data can be collected at various intervals: monthly for unemployment benefits data, daily for financial exchange rates, over an exercise period for an individual's pulse, or even every 2 seconds for EEG brain wave activity. A commonly used and informative time series is population census data, which is collected by many countries at regular intervals to help their governments better understand broader demographic changes, migratory patterns, and the future outlook of various variables (Zhang & Song, 2003).

Longitudinal studies can also be very helpful in understanding how an intervention at an early stage of a person's development influences them throughout the rest of their lives. Various longitudinal studies have been conducted within early education to identify the changes these interventions may bring to the lives of these individuals. Longitudinal studies include interventions such as pre-natal care and youth reading programs, or the observation of children as they grow older, among many others. One of the most famous longitudinal studies in education was the Student/Teacher Achievement Ratio (STAR) Tennessee class size reduction study, which began in 1985 and continued until 1999 (Finn & Achilles, 1999; Hanushek, 1999). The study tracked students who were assigned at random to kindergarten classes of between 13 and 17 students, or larger classes of between 22 and 26. Over 6,000 students took part in the study, where they were kept in smaller classrooms for 4 years and continued to be monitored after the end of the intervention. The study found statistically significant changes in student achievement scores on the three measurements used. The conclusions of this study strengthened claims regarding the positive impacts of class size reduction, which encouraged the enactment of class reduction policies in California (1996) and other states. While other studies have contradicted its findings, its use of an experimental design, its magnitude, and its longitudinal analysis strengthened its claims. There have been a number of important longitudinal studies in early childhood and other early interventions that have followed children's development for decades (NCES, 2010). Longitudinal designs are also used frequently within the health sciences.

Another popular long-term longitudinal study is the British Up Series, which has followed a group of 14 children since age seven in 1964 and is still in production. Similar documentaries have been replicated in Australia (since 1975), Belgium (1980-1990), Canada (1991-1993), the Czech Republic (1980s), Germany (1961-2006), Denmark (from 2000), Japan (from 1992), the Netherlands (from 1982), South Africa (from 1982), Sweden (from 1973), the USSR (from 1990), and the USA (from 1991). While these long-term studies can be expensive to conduct, they provide a different dimension to findings, a dimension that is often missing from 1 to 1 technology adoption evaluations.

The key benefit of including this dimension within an evaluation lies in the difficulty of knowing how the skills obtained from using new ICT devices will give an individual the confidence and background needed to develop future ICT competencies that may benefit them in the job market. Will their familiarity with ICT at an early age bring about broader benefits later in their lives? A short-term outlook may at times provide a negatively skewed view of the impact of these projects, expecting more out of a pilot project than should be expected. In addition, it is common for program designers to overstate the potential outcomes of a project, expecting it to have a greater impact than is likely possible. For example, as an evaluation of USAID basic education projects (1990-2005) showed, most of its projects produced less than a 4% gain in student achievement scores, despite the efforts of many specialists and the expenditure of millions of dollars (Chapman & Quijada, 2007). One to one technology adoption projects can also be very expensive and as such can have a very negative cost-benefit ratio in the first years of the program. It is very important to take into account the rapid depreciation rate of ICT, but evaluations should also take into account future, longer-term benefits of the investment.

Having access to a personal computer is essential for most of the workforce in the 21st century. As argued by Seiter (2008), having a computer at home is almost a necessity for developing competent skills in the subject: "The likelihood of gaining strong digital literacy skills on this type of machine [a computer lab] is much slimmer than on a home computer. In other words, learning to use computers at school is like the music education class in which you have forty minutes to hold an instrument in your hands once a week, along with thirty other kids" (Pg. 37).

Many of the computer programs that students may eventually learn to use will require them to invest dozens, hundreds, or perhaps thousands of hours to master. In addition, individuals who are less familiar with computers tend to be less confident about becoming proficient in new programs (Mohammed, 2007). While a television, a radio, or a "feature" mobile phone may have a short learning curve, the same cannot be said of personal computers, the internet, or smartphones, each of which is complex to a different extent. Digital literacy programs such as RIA can teach a digital immigrant a basic set of skills in 72 hours, but many more hours are needed for complex use of a personal computer or an internet-capable device (http://www.ria.org.mx). Just learning how to type rapidly on a QWERTY keyboard takes many hours of practice.

By considering a project's impact over a longer frame of time, this article encourages the continued evaluation of a program over a number of years, at regular intervals, while providing recommendations and reporting on the benefits and drawbacks of the program as they are modified over time. This type of long-term evaluation is best suited to an internal evaluator, or a combination of internal and external evaluators. When thinking of the cost of 1 to 1 programs over time, it is also important to keep in mind the rapid depreciation of technology. With the rapid depreciation of computer equipment, should 1 to 1 programs focus on purchasing the most up-to-date gadgets and tools? This question is best analyzed through the inclusion of a cost-effectiveness analysis which accounts for the depreciation of technologies.
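One conventional way to fold depreciation into such an analysis, following the annualization approach common in the cost-analysis literature (e.g., Levin, 2001), is to spread a one-time purchase over the device's useful life at a given interest rate. The price, lifespan, and rate below are hypothetical, chosen only for illustration:

```python
def annualized_cost(price, years, rate):
    """Annual cost of a one-time purchase spread over its useful life.

    Uses the standard annualization factor
    a(r, n) = r * (1 + r)**n / ((1 + r)**n - 1),
    which accounts for both depreciation and the interest forgone.
    """
    growth = (1.0 + rate) ** years
    return price * rate * growth / (growth - 1.0)

# Hypothetical: a $500 device with a 3-year useful life at a 5% interest rate.
per_year = annualized_cost(500.0, 3, 0.05)
print(f"annualized device cost: ${per_year:.2f}")
```

Comparing this annualized figure with the annual cost of alternative interventions puts a rapidly depreciating device on the same footing as recurring expenditures such as instructor salaries, rather than charging its full price against the program's first year.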

A Case Study – University of Minnesota One iPad Per Student Initiative

As previously discussed, the evaluation of technology adoption programs has tended to focus on short-term analysis, without sufficiently addressing the importance of analyzing the implications of adoption over a longer time spectrum. As advanced economies are increasingly fueled by the ownership of patents and new inventions, so too have other countries attempted to develop these sectors further (Heshmati & Addison, 2003). The information transferred through ICT can help countries develop into more diverse and sustainable economies. It is through ingenuity, creativity, innovation, or "Mindware", that groups and individuals come together to form new industries and adapt to different types of crises (Cobo & Moravec, 2011). Via technology adoption programs, individuals can increasingly access the information that will help them develop valuable skills. By evaluating with a long-term focus, and incorporating both qualitative and quantitative elements, an evaluation will be better able to address the questions of key stakeholders. This paper illustrates the limitations and strengths of a recent evaluation of a one to one iPad initiative at the University of Minnesota.

One Laptop Per Child – An Evaluation of Peru’s Project

Possibly the most controversial, and also the most commonly cited, 1 to 1 initiative is the One Laptop Per Child (OLPC) initiative, which was started by Nicholas Negroponte, the founder of the MIT Media Lab (TED, 2008). According to Negroponte, by thinking in bytes instead of atoms, and by learning how to operate a computer, a child can learn that the world is increasingly available at the click of a button, and that they can construct and build anything they can imagine by programming new and amazing environments (Negroponte, 1996). Following Papert's constructionism, Negroponte believes that programming teaches an individual how to learn, as they must go back, revisit their code, and figure out why there is a mistake (Papert, 1980). As an ICT evangelist, Negroponte highlighted how simply giving a child a computer would expand the child's possibilities (Negroponte, 1996). Since the beginning of OLPC in 2005, over 2.5 million laptops have been delivered (http://one.laptop.org/about/faq). However, despite the high level of investment, particularly in Latin America, project evaluations have not shown significant gains in achievement scores (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012).

A recent evaluation of OLPC in Peru showed how, despite a high level of investment in these new machines (902,000 laptops) and an increase in the ratio of computers per student from 0.12 to 1.18, student performance in math and reading had not increased substantially. The project did find that students' cognitive skills had improved over the time of the study (measured by Raven's Progressive Matrices, a verbal fluency test, and a coding test). While analysts have since highlighted that the program had only limited effects on math and language achievement (0.003 standard deviations), little emphasis has been given to the potential impact of the improvement in cognitive skills, and perhaps more importantly to what having improved their digital literacy skills will mean for these individuals in the future, as they are asked to learn other task-specific digital and information literacy skills (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012). As mentioned by Seiter (2008), high-level ICT skills may take many years to fully demonstrate themselves as marketable skills in the lives of students.

It is also difficult to know from the available data whether a different investment would have been more cost-effective or would have resulted in a higher cost-benefit ratio in Peru. One of the unmet goals of OLPC was to produce a $100 laptop; the machines currently cost around $200 (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012). As a project not affiliated with Microsoft, Google, or Apple, the OLPC laptops came with an operating system (OS) known as Sugar. While all operating systems share similarities, did the use of the Linux-based Sugar limit or expand the possibilities for students? When testing student computer literacy skills, the evaluators found that students quickly became more adept at using the devices. As explained earlier in this paper, the evaluators also had difficulties deciding which skills should be tested (Cristia, Cueto, Ibarraran, Santiago, & Severin, 2012, p. 15). Unfortunately, another unmet goal of the project was connectivity: Peru's OLPC participants lacked internet access. OLPC was partly designed so that students could benefit from increased connectivity, either through OLPC's exclusive mesh network or the Internet. The impacts of lacking internet access are hard to measure; however, they may have affected these individuals' development of information literacy skills. In conclusion, Peru's evaluation of the OLPC project was very insightful; while it contained a qualitative element, the project had a quantitative focus, limiting readers' understanding of how the initiative affected individuals. For a project that centers on the individual, learning more about the project's impact on the person is increasingly relevant as ICT becomes more personalized. Apart from not discussing potential long-term gains, the evaluation also failed to mention the full cost of the devices.
With the laptop itself accounting for only a tenth to a seventh of the total cost of the initiative, it is important to consider whether this is a cost-effective investment (Lugo & Schurmann, 2012). The evaluation would have benefited from a broader implementation of mixed methods, particularly on the qualitative side, while also tracking these changes over a longer span of time. An element of time that is particularly important to first-year initiatives is the teachers' or instructors' familiarity, or learning curve, as they will slowly learn better ways to use the devices and integrate them within the classroom.
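The hardware-fraction claim above can be turned into a rough total-cost estimate. The sketch below is back-of-the-envelope arithmetic assuming the roughly $200 unit price cited earlier; the implied totals are illustrative, not figures reported by the evaluation:

```python
# Back-of-the-envelope total cost of ownership (TCO) implied by the
# claim that hardware is only a tenth to a seventh of total cost.
# The $200 unit price is from the text; the totals follow from it.

laptop_price = 200.0  # USD, per the OLPC figure cited above

# If hardware is 1/7 to 1/10 of the total, total cost per child is:
tco_low = laptop_price * 7    # hardware = 1/7 of total
tco_high = laptop_price * 10  # hardware = 1/10 of total

print(f"Implied TCO per child: ${tco_low:.0f}-${tco_high:.0f}")
# Implied TCO per child: $1400-$2000
```

The point of the exercise is that training, maintenance, electricity, and support dwarf the hardware itself, which is why evaluations that report only device prices understate the true cost.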

University of Minnesota iPad Initiative

The discussion surrounding the digital divide has traditionally centered on access to the internet and a personal computer, yet the rapid change of technologies leads us to question whether the divide will be centered on these devices in the future (Warschauer, 2008; Zickuhr & Smith, 2012). What role will smartphones, augmented-reality glasses, 3D printers, or, farther into the future, nanotechnology implants play in terms of the digital divide? (Kurzweil, 2000). A current technology that may further displace the purchase of paper books for K-12 and higher education is e-reader technology, the most successful examples of which are the iPads (I, II, and III) and Amazon's Kindle readers. A recent NPD report indicated that tablets may outsell laptop computers by 2016, with sales expanding from 81.6 million units a year (2011) to 424.9 million units a year (2017) (Morphy, 2012). Will we then measure the digital divide in terms of who does and who does not have access to an iPad?

Pilot projects at universities such as the University of Minnesota, the University of San Diego, Oberlin College, and a few others have moved toward answering this question. The iPad, the first commercially successful tablet, was released in April 2010; that same year, the University of Minnesota decided to purchase 447 units to provide a tablet to every CEHD student in the upcoming undergraduate cohort. It was one of the first major initiatives of its type in the country. Because of its uniqueness as an early-adoption project, its evaluation was based partly on the conclusions obtained from previous 1 to 1 projects such as the OLPC initiative and Maine's statewide 1 to 1 adoption program. However, because the iPad was substantially different from previous ICT devices, the operationalization of NETS standards and an in-depth analysis of its potential uses had not yet been closely studied (ISTE, 2008). So far, only a few articles have been published regarding the use of the iPad in the classroom (EDUCAUSE, 2011). To better understand the possible educational implications of adopting this device, a CEHD research team decided to conduct a mixed-methods evaluation (Wagoner, Hoover, & Ernst, 2012). In addition, an initial commitment was made to continue evaluating the project for a consecutive number of years. The support of the dean was integral to the continuation of the program.

In the first year, the project's goals were to increase the usability of the devices by both faculty and students, and to aid faculty members so that they could familiarize themselves with the devices and consider the best ways to incorporate them within their classrooms. Faculty members were then encouraged to incorporate the devices into their syllabi as they saw fit. Various graduate assistants served as support staff. Soon after the distribution of iPads, evaluators also drafted a post-test and organized a series of interviews. The interviews asked faculty members a number of questions, including how they learned to use their iPads, what their plans were for using them within the classroom, how the iPad had affected their teaching, and whether the support received had been appropriate (from field notes).

A similar set of questions was asked of faculty members at the end of the school year, when they were asked which projects they had actually implemented, students' opinions regarding e-books, and pedagogical concerns, among other topics. Twenty-two interviews were coded, and themes were developed from the qualitative study, including faculty concerns about time investment, how the iPad compares with other technologies, the impact of the iPad on faculty members' pedagogy, the impact of the iPad on their classroom management, and details about faculty members' technology learning process. At the end of the year, a series of faculty focus groups was also conducted. Many of the details learned through the qualitative portion of the study would have been difficult to obtain otherwise. The common elements between the data from the focus groups and the interviews also allowed us to verify some observations. Below is an interesting quote from one of the participating faculty members:

“What I want, in terms of their behaviors, is for [the students] to be active explorers in the classroom, to bring the machines, and to actually utilize them for historical research … One of the things that we did as a first conversation is to describe the level of trust that is going to be involved … and they live up to those expectations. I’ve been really happy so far with what we’re learning. It conveys to them that they’re smart, capable discoverers that we’re co-creating knowledge—historical knowledge” (Wagoner, Hoover, & Ernst, 2012, p. 3)

While the quote above illustrates a very positive reaction, this experience would likely not have been visible through an analysis of student achievement alone, illustrating the benefit of mixed methods. Two student focus groups were also conducted, in which students shared some of their favorite apps and how they had used the iPad through the semester. Yet unlike the faculty, whose entire population the evaluators were able to interview, the 447 students were more than the team could interview.

To better analyze the student response, a survey was conducted that included a number of questions related to students' use of, and experience with, the iPad. The survey was completed by 241 CEHD first-year students (Wagoner, Hoover, & Ernst, 2012). Having access to broader demographic data also allowed the evaluation team to compare student attitudes with socio-economic variables. Several strong correlations and significant relationships were found regarding the impact of iPads on student learning. In particular, the evaluation found that students felt the devices had positively affected their motivation; students also expressed a high level of comfort using the devices and said the iPad helped them feel more engaged in some of their classes.

The study also showed that students who were part of Access to Success (ATS) or had been part of the TRIO program, usually students of color or from low socio-economic backgrounds, mentioned feeling more engaged and connected during classes. From the qualitative data, the evaluators also learned that for some students the iPad had become a window into the internet, and a digital item for their whole household to use.

The success of the first-year implementation, and the questions that evaluators were still unable to answer, led to the continuation of the program for a second and third year. A similar number of iPads (now iPad 2s) was purchased in the second year of the program. Once again, the rapid change of technology provided new possibilities for evaluators, as the iPad 2 includes cameras, permitting students to record HD video and hold audio-visual communications with anyone with access to FaceTime, Skype, or other programs. After analyzing the potential savings from some students' extensive use of iPads for e-reading, CEHD also decided to support a pilot project for the testing and adoption of open textbooks, as well as the establishment of a work desk where faculty members could obtain assistance and, if interested, build iBooks and ePubs.

The project is now planning its third year. Adapting to the results of the first-year evaluation, many of the questions on the second-year survey were modified to elicit additional valuable information. One limitation of the evaluation of the program so far has been the lack of a cost-effectiveness or cost-benefit study. Such a study should take into account not only the rapid depreciation of the devices, but also whether students are learning, through the use of the devices, skills that could potentially aid them when they join the workforce. While the cost has been high, at over $300,000 per year, it is difficult to assess the long-term benefits for participants (students and faculty members). The rapid devaluation of the devices is an important consideration, as it is possible that in a couple of years these devices will cost only a fifth of their original price while being even more feature-rich and powerful, allowing students to obtain a similar skill set for a fraction of the cost. It is also possible that many of the skills obtained are not very different from those obtained from using other ICTs, reducing the importance of the investment.
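The "fifth of the original cost in a couple of years" scenario implies a steep annual depreciation rate. A quick hedged calculation: the roughly $400 per-student figure appears in the program's costs above, but the three-year horizon and the geometric-depreciation model are assumptions made for illustration:

```python
# Illustrative depreciation arithmetic for the "fifth of the original
# cost" scenario mentioned above. The per-student cost is from the
# text; the 3-year horizon and geometric model are assumptions.

initial_cost = 400.0      # USD per student, per the program figures
target_fraction = 1 / 5   # "a fifth of their original cost"

# Constant annual rate needed to fall to 1/5 of value over 3 years:
years = 3
annual_rate = 1 - target_fraction ** (1 / years)
print(f"Implied annual depreciation: {annual_rate:.0%}")
# Implied annual depreciation: 42%
```

A depreciation rate of that magnitude strengthens the argument that any cost-benefit study must be revisited yearly rather than computed once at adoption.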

Currently, a website is available where individuals interested in the results of the project can learn about the various innovative classroom projects that were developed, how they can be adapted to other classrooms, and suggested best practices. Some of the innovative uses of the iPads by students include the creation of digital stories; access to unique applications, including interactive stories and data visualization, among others; rapid access to websites; and the development of an e-book library. In a report, CEHD concluded that the iPad had been helpful in addressing the concerns of the digital divide: it increased access to the tools needed for media production, increased access to tools that facilitate personal productivity, improved students' possibilities for information access and consumption, helped reduce the cost of printing readings, and facilitated students' learning outside of the classroom (Wagoner, Hoover, & Ernst, 2012). For year two, the program also hopes to further analyze the usability of the devices, and it recently developed a space for students to submit their creative productions with the iPads.

Despite the insights provided by the use of mixed methods in this evaluation, the limited timeframe of the study makes it difficult to determine whether or not it is a worthwhile investment. With the program costing over $400 per student, apart from the cost of the administrative staff, is this the best investment for a university to make in terms of technology adoption? When will it be determined that the program is no longer worth its cost and is no longer helping to find innovative ways of learning? One limitation of CEHD's 1 to 1 iPad program has been the limited emphasis on the possibilities of the device for informal learning. Some of these concerns will be better analyzed with the data collected from the second-year survey that was recently administered to students. A new wave of interviews and focus groups is also planned for the evaluation of the third year of the program.

With 500,000 applications available, there are almost endless possibilities as to how the devices can be integrated within the classroom. The production of apps that match more closely the goals of each individual is likely to increase. Because of these devices' future relevance, and the high level of creativity and innovation within this industry, constant evaluation is important, as it allows for the continued improvement of the project. The use of mixed methods allowed the evaluation team to find many interesting details that the study would not have found otherwise. These details enriched the quality of the findings and provided faculty with valuable information for improving the use of the iPad and for learning how their peers were using the devices.

Conclusion

ICT 1 to 1 adoption projects are difficult to evaluate, and the short-term focus of some evaluations results in a limited view of their potential impact. One of the difficulties in evaluating these programs comes as a consequence of rapid technological change.
