Category: Digital Transformation

Mending the Meaning Gap

Higher education is being buffeted by trends that have been building for years, if not decades. The release of the National Student Clearinghouse Research Center Final Report on Enrollment, which showed significant enrollment declines across the board, has stimulated many recent conversations about the present and future health of education. Declines were particularly deep among more disadvantaged populations. My good friend Bryan Alexander analyzed these numbers in a recent blog post. The ice sheets are clearly moving here, and everyone is trying to figure out what’s going to happen next.

On his Future Trends Forum, Bryan has hosted several conversations that touched, implicitly or explicitly, on this data. In the first, the Forum interviewed W. Joseph King and Brian C. Mitchell, the authors of Leadership Matters: Confronting the Hard Choices Facing Higher Education. Although both authors came from the liberal arts college space, they were concerned about the decline in equity reflected in the 2-year college numbers.

As someone who teaches at a 2-year college, I asked whether they thought the problem was partly a cultural one. Over the past century, we have attempted to scale systems originally designed to educate an elite student body. With growing equity following World War II, the implied promise to the middle class was that they too could join that elite if only they could gain admission to its exclusive colleges. Simultaneously, however, we established alternative tiers of higher education, ranging from large public colleges to community colleges, that claimed to offer “the same” quality educational experience.

Despite the best efforts of these next-level institutions to ape their more hallowed cousins, there was a widespread acknowledgement that they could not hope to match the experience of the older, elite institutions. What they did import, however, were the systems and cultures of those institutions. Grades, credit hours, and a general elite mentality that “this is the best way to learn” transferred over from the Carnegie systems established in the early part of the 20th century. These systems assumed a general acceptance of “academic culture.” “Rigor” came to imply close conformity to those systems of operation.

However, for most of the new students, these systems were alien. At best, they learned how to “play along” just enough to “get a degree” with little appreciation of what that degree really meant other than opening the doors to jobs. At worst, they bounced off and decided that college “was not for me” and settled (often with crushing debt from their incomplete work) for jobs that did not require a degree.

I see a lot of these students in my courses. They do not know how to learn in the prescribed manner. At best, they have learned a work ethic that keeps them in the game but keeps them focused on the game of school rather than the task of learning. At worst, they quickly lose interest, stop doing the work, and end up failing or dropping the class. In either case, the work has little meaning to them.

When I asked King and Mitchell about this idea that the college system was disconnected from the realities of most students, they focused on student and teacher preparation. However, I think the problem is far deeper and more systemic than that. What they were talking about was easing the students’ conformity to the game. What I’m suggesting is perhaps we need to look at the game itself.

Most of us are products of this system, and it’s no surprise when people miss the elephant in the room. It’s hard to shift paradigms far enough to view our own development critically. We are happy with our accomplishments, and deservedly so.

Some college professors may have risen from cultures traditionally excluded by the educational establishment, but they have done so by adopting the vestments of the new culture. Others, myself included, grew up within these systems (my father was a university professor) and take them for granted.

I think there is a reckoning happening with this cultural disconnect and that this is a big part of the decline among more marginalized higher education students. They rationalize: “What’s the point of going to college? Everyone I know who has tried has dropped out. And they tell me that the kinds of stuff they are teaching there, like Shakespeare, calculus, history, and government, aren’t really very useful for getting ahead in life.”

So what’s the solution? Perhaps if we break it up into smaller chunks and then monetize those chunks so that education becomes like currency, that will make it easier for people to get through the process. This was essentially what the Web3 advocates who appeared on the Forum a week later were arguing. Modularized education is the answer, if only we can operationalize it. People can get exposed to the system in smaller chunks and still get credit for the experience.

There is something to recommend in this approach. One challenge my students face is maintaining consistent effort over a long 16-week semester while life happens to them. Again, this is a legacy of the luxuries of industrial education. If you are living on campus or your life revolves exclusively around school, this length makes sense. Most of my students don’t have that luxury. They must deal with jobs, family, and a host of other issues, many of which have been exacerbated by the pandemic. Disruptions are a matter of course. Life gives them short attention spans.

The problem with Web3 is that it applies an economic logic to something that isn’t quantifiable: what does it mean to learn something? Learning is an internalized process. College changed me. It changed how I thought about problems. It changed how I viewed the world. Web3 does nothing to make that easier for my students. The problem isn’t just one of time. More deeply, it’s the aforementioned lack of meaning in the activities that we ask them to pursue. We ask them to “trust us” that this has value. Arbitrarily assigning extrinsic value to something doesn’t give it an intrinsic value, especially for something as ephemeral as learning.

We can use digital tools to create deeper meaning for every learner, whatever their background or capabilities. The problem is that the Industrial Thinking systems that we have constructed around education do not do this for most students. In an article for Current Issues in Education last year, I described how we could set up systems using the same technologies advocated by Web3 proponents. Instead of using the tools to verify “playing the game,” we could use these tools to establish ownership of meaningful artifacts of learning and connect those to a networked community of learning that extends far beyond the traditional boundaries of any institution, or, for that matter, higher education itself.
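To make this concrete, here is a minimal sketch of what an artifact-centered record might look like, using nothing more exotic than a content hash and a portable JSON structure. The field names, the sample essay, and the idea of an “endorsements” list are purely illustrative assumptions on my part, not a reference to any particular platform or standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def describe_artifact(learner, title, content_bytes, reflection):
    """Build a portable, verifiable record of a learning artifact.

    The record points at the learner's own work (via a content hash)
    rather than at seat time. All field names are illustrative.
    """
    return {
        "learner": learner,
        "title": title,
        "created": datetime.now(timezone.utc).isoformat(),
        # Hash of the actual work product; anyone holding the original
        # file can verify that this record refers to it.
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "reflection": reflection,
        # Endorsements could come from peers, mentors, or communities
        # well beyond a single institution.
        "endorsements": [],
    }

if __name__ == "__main__":
    essay = "My analysis of faction in Federalist No. 10 ...".encode("utf-8")
    record = describe_artifact(
        learner="student@example.org",
        title="Analysis of Federalist No. 10",
        content_bytes=essay,
        reflection="How this changed the way I read founding-era arguments.",
    )
    print(json.dumps(record, indent=2))
```

The point is not the technology; it is that the record describes something the learner made and can carry with them, rather than a box the institution checked.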

Ultimately, we are what we build. We build families. We build ourselves. We build our imaginations. Higher education may help us in some of these tasks, but it is itself only a tool to do so. I tell my students that I don’t teach them. They teach themselves. They probably see this as a shirking of my responsibility to them initially. However, a lot of them discover over the course of the semester that what I mean is that I can’t build learning in them. They must do that themselves.

Colleges don’t educate. They provide resources to help their students educate themselves. Until we get away from the thinking that you have to be good at college to succeed, we’re never going to overcome the meaning gap that we have created for ourselves. Students will continue to be nothing more than cogs in our industrial wheels. It should come as no surprise when more and more of them refuse to allow themselves to be drawn into the grinder.

Universitas Technologica: The Distributed University

“Why stay in college? Why go to night school? It will all be different this time.”
“Life During Wartime” – Talking Heads

 

“[T]he earliest universities, in the twelfth and thirteenth centuries – at Bologna and Paris – were not deliberately founded; they simply coalesced spontaneously around networks of students and teachers, as nodes at the thickest points in these networks…. [U]niversities had no campuses, no bricks and mortar. The term universitas referred to a group of people, not a physical place.”
(McNeely and Wolverton, Reinventing Knowledge, 2008, pp. 79, 80).

 

We have access to more knowledge than at any time in human history. We have access to more people than at any time in human history. I can remember the breathless pronouncements of distributed intelligence, the hive mind, and “colleges without walls” in the 1990s. Yet we still structurally operate the business of learning at all levels – from instruction to scholarship – in pretty much the same way as we did in 1995. While we may have models of distributed learning as exemplified by everything from MOOCs to online colleges, these are not distributed universities but are merely the scaling of existing, entrenched models of learning.

Teaching students, however, is only one part of the university’s mission. Indeed, its first mission was to serve as a center of scholarship for the faculty itself. Concentrations of scholars quickly attracted students, but that’s where it started in places like Paris, Bologna, and Oxford. Scholars used to travel to universities to “read” for a degree because that’s where the books literally were. After Gutenberg, this was decentralized somewhat, but the idea of concentrating knowledge never really went away. Industrializing learning in the 19th century made it a logical efficiency to bring the students to where the books were. The internet increasingly makes it possible to explode those notions. I know I am increasingly frustrated when I can’t access a book or article instantly to move my work forward. There is no “technical” reason this can’t happen.

There is precedent for decentralized scholarship. The Republic of Letters of the Enlightenment was central to the advancement of scientific and philosophical thinking. Vannevar Bush conceived of the Memex in 1945 as a means of using technology to drive knowledge forward. Tim Berners-Lee originally conceived of the web as a way of coordinating scientists working at CERN with partner facilities around the world.

While physical universities of place were essential, especially early on, in hosting and spreading the internet, they may have sown the seeds of their own successors, because in so doing they have laid the groundwork to finally flip the idea of a “center” of learning.

But the more important caveat is that universities were, and are, critical “social nodes” for scholars and innovators to come together. The community of learning is perhaps the most important role that the best universities continue to play. At the same time, this reality also poses the greatest limitation and threat to a true universitas.

These physical communities invite siloes that run counter to the needs of the universitas technologica. While Harvard may be on the internet, at the end of the day Harvard is still in Cambridge. Most universities are even more divided, with sub-nodes of colleges insulating and competing with one another. These competitions extend to siloes that rise above the walls of the university itself into disciplinary organizations that create dogma and channel creative energies in very specific directions. Corporate interests, particularly in publishing, have served to further exacerbate the channeling and masking of knowledge.

The rationale for these “disciplines” is rapidly fading. Answers to complex issues are increasingly more likely to be found in the spaces between disciplines than within them. Furthermore, as I argued in another context, constructing castles (for siloes are just that) invites attack, if for no other reason than their fixed nature. The more we have an Ivory Tower, the more ivory hunters will emerge to pillage it.

What we need is a completely new vision for our learning enterprises that redefines place and discipline. In 1994 Kevin Kelly wrote, “it is possible that a human mind may be chiefly distributed, yet, it is in artificial minds where distributed mind will certainly prevail” (Kelly, p. 19). We now have the technological capacity to create such minds – a collective hive of intelligence – even if the platforms we by and large use to communicate tend to perpetuate rather than erode siloes of knowledge.

The most successful companies in the new economy, such as Google and Apple, have managed to create interdisciplinary hives. The university, ironically the originator of this concept, has after two centuries of specialization surprisingly lagged behind in this regard. Bryan Alexander recently argued that this is one of the reasons it finds itself under attack as enrollments have declined.

Ironically, the attacks on the edifices of higher learning come at a time when there is increasing consensus that our children will require ever more complex mental tools to navigate their futures. Now more than ever we need to reinvent communities of learning to harness distributed intelligence in order to solve ever more complex problems.

Instead, when we try to construct and redefine our communities of learning, we are often confronted by an atomized landscape, limited in its interdisciplinarity and ability to cope with the level of change and complexity likely in the near future. The needs of knowledge creation have outstripped the physical and organizational infrastructures of universities (and, by extension, the business specializations that they have in turn created) more than ever before. This reality has been a constant challenge to me over the last few years as I have been involved in numerous projects where different cultures, disciplines, and skills have had to come together to make them work (most haven’t, yet).

Organizationally, this problem became particularly acute with the dissolution of the New Media Consortium, as that grouping provided a number of mechanisms for cross-disciplinary collaboration. It provided a center of gravity for activities that were not tied to discipline, institution, or even profession. However, even it often struggled to figure out how to operationalize its hive mind against centrifugal forces. Without it as a center, there was nothing stopping those forces from scattering the universitas that had come together under its broad tent.

In the last year I have been involved in a number of interesting conversations and attempts to “irrigate the sands of the desert” at various levels and these might start to point us toward the visions of how the internet might finally disrupt the Ivory Tower and create a true “universitas technologica.”

On a very micro level, several of my colleagues and I decided last summer to start meeting regularly to discuss, in free form, various readings brought to the table. Call it a “book club” or a graduate seminar, but it created a lot of interesting trans-disciplinary thought. We met using Zoom videoconferencing and so were able to create a very effective universitas on a very small scale. We are currently discussing how to expand this effort, make it more public, and possibly complement it with more asynchronous tools. It has been a very rewarding effort that has introduced me to more concentrated, new thought (and entire streams of literature) than at any time since I left grad school.

On the macro end, FOEcast is attempting to create a “distributed innovation network.” The idea here is to spin up nodes of innovation in teaching and learning with technology around the world and interconnect them. We hope to create a professional organization with no physical “center,” one that leverages the connective tissue of the internet and is united through a federative structure of ideas and principles. Like all attempts to colonize virgin territory, it is struggling to lay down its roots. The vision is there, however, and we are probably closer to realizing it than we have been at any time in the last 20-30 years.

The power of both of these efforts is not due to any technological revolutions. True, videoconferencing over the web has reached a level of simplicity and affordability that was not available a few years ago, but that’s not really what’s driving things here. What is unique to both the seminar and FOEcast is that we’re willing to cast off any preconceived notions of what an educational effort is. Indeed, I don’t think many of either effort’s participants have even thought of them as “educational” until very recently – at least not in the sense of their potential for being self-educational.

The last couple of years for me have been spent trying to tie together strands of brilliance and resources to create spaces (both real and virtual) that will accelerate the processes of reinventing education. So far, this has often been an exercise in frustration as I have constantly run up against cultural norms and institutional barriers.

What I have come to realize, however, is that we may collectively be starting to create a “Distributed University,” or universitas technologica, to replace the physical university that has evolved from its medieval genesis. Now that the readings are everywhere, recreating the university’s communities of learning in a distributed fashion is our next big task. Just as the internet itself was designed to be redundant and resistant to failure, we must construct a global interdisciplinary community to achieve the next steps in the evolution of scholarship and learning. Maybe we are finally seeing the emergence of that.

Unbound: Teaching in the New Media Landscape

(Originally published on the nmc.org website February-September 2013)

In the Spring Semester of 2013 I embarked on an experiment to see if I could teach a government class based on the principles that I have been advocating:

  1. Create opportunities for collaboration and teamwork among the students
  2. Allow the students to explore the material for themselves
  3. Create an iterative, yet cumulative, format that allowed students to be creative, to take risks, fail without “failing” and build on that for later success in class
  4. Gamify the course to create positive incentives for the students to push themselves
  5. Focus on skills mastery rather than content mastery

This series of four blogs under the broad rubric of “Unbound” chronicles that experience and analyzes the outcomes.

 

Unbound: The Role of Textbooks in the New Media Environment


Unbound: Observations on the Structure of the Class

As you may recall from my column in January, I launched a radical redesign of my American Government class this semester. I always start out the semester with a certain spirit of optimism, but I felt it was important in this case to talk about what worked and what didn’t work as I tried to leverage New Media to reshape how I teach.

I structured this class with five goals in mind:

  1. Create opportunities for collaboration and teamwork among the students
  2. Allow the students to explore the material for themselves
  3. Create an iterative, yet cumulative, format that allowed students to be creative, to take risks, fail without “failing” and build on that for later success in class
  4. Gamify the course to create positive incentives for the students to push themselves
  5. Focus on skills mastery rather than content mastery

 

In this installment, I’ll focus on my efforts to meet the first three objectives. In addressing the first goal, my aim was to create incentives for my students to prepare for the kind of collaboration that they are likely to encounter in the professional world. Very few things in the real world reflect the individualism of what we ask students to do in class, and with the interconnectedness of social technology, this is becoming even more pronounced. Therefore, with the exception of the final portfolio, I randomly assigned students to groups. I reshuffled those groups four times over the course of the semester.

So, how well did this work? Getting students to work effectively in groups has been one of my greatest struggles. There is a lot of data to suggest that peer learning is one of the most effective teaching methods. My own experience validates this. I had a class once (in a mythical fairy land – no really) that spontaneously formed study groups and pulled each other along. As a group, they made higher grades than any class I’ve ever taught, before or since.

This was a spontaneous result of the chemistry of the class. I have never been able to make this happen. More often, typical groupwork can be summed up with this graphic from Endless Origami.

To evaluate group work, I used ballots and self-evaluation techniques. However, students tended to inflate everyone’s grades, giving even the biggest slacker full points. This was not a good way to figure out who was pulling their weight in the group. So, in classic economic fashion (I am a social scientist after all), I introduced scarcity into the equation. I limited the number of points that any individual could give out to less than the number of other group members. It turns out that my students would have made excellent communists. Their response was to aggregate the points the group had as a whole and distribute them equally throughout the group.
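For illustration only, here is a rough sketch of the arithmetic (the group size and point budget are hypothetical); it shows how pooling every rater’s budget and splitting it evenly produces identical scores that say nothing about who actually did the work.

```python
# Hypothetical five-person group and the scarcity rule described above:
# each rater's budget is smaller than the number of other group members.
group = ["Ana", "Ben", "Cruz", "Dee", "Eli"]
budget_per_rater = len(group) - 2      # 3 points to spread across 4 peers

# What the students actually did: treat the budgets as one collective pool
# and hand the points back out in equal shares.
total_pool = budget_per_rater * len(group)   # 15 points available in all
equal_share = total_pool / len(group)        # 3.0 points apiece

scores = {member: equal_share for member in group}
print(scores)  # everyone ends up with the same score, so the evaluation
               # carries no information about individual effort
```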

Considering the self-avowed conservatism of most of my students, I think this sheds an interesting light on the relative value that they place on educational incentives. Grades are so disconnected from financial goals that they assume little or no value in the minds of most students. I could be reading too much into this considering the small sample size (and I’ve had other classes that behaved quite differently), but it’s a point worth considering.

The students’ unwillingness to give each other meaningful evaluations didn’t stop them from complaining about group members who did not pull their own weight, which made it extremely hard to evaluate the internal workings of a particular group. As a consequence, most of the learning here was of a soft variety, consisting mainly of counseling students on how to handle these kinds of situations based on my own experiences.

Regrettably, neither a shared sense of purpose nor the competitive incentives I had laid out inspired students to explore educationally through the group work scenarios. In other words, I once again failed to recreate the effect of the class that pulled itself along collectively.

The second major departure in the class structure I implemented was to eliminate the requirement for a standard textbook. Instead, I gave them information frameworks such as my class notes and Wikipedia and then encouraged them to explore more widely. My goal was to give them open-ended information with a basic structure to allow them to follow their creative impulses.

I set this up on the very first day of class by engaging the class in a discussion of creativity and the role of information and education in today’s world. I then showed them Ken Robinson’s video on education and creativity.

We then discussed how we could construct the course so that artificial boundaries, such as those imposed by a standard textbook, did not limit our creative efforts. For structure, I provided my standard Study Guide, which is simply a list of questions organized by topics from the course, as well as the class notes, which answer those questions.

As I discovered the last time I did this, giving students the freedom to map their own intellectual journeys is usually a recipe for them not leaving home. While I set minimum work requirements, students continued to do only the minimum necessary. For instance, they would post a random article without meaningful commentary. Many students failed to even look at the basic starting points in my notes or Wikipedia, and then complained about a lack of structure in the course. Over the course of the semester the situation improved, although some students never understood or took advantage of the freedom to let their curiosity guide their educational path.

This issue relates to a basic dilemma I seem to face every time I deviate from the standard course structure. Students are unprepared for novelty. They have been well trained in “this is what a college course looks like.” In theory, they love the idea of freedom from their shackles but relatively few of them know what to do with their freedom once it is given to them. This is a fundamental cultural issue we all face as we try to innovate our way out of some of the contradictions of the educational system we have created. Change in this area is likely to come slowly, especially among those students who have never understood the purpose of education or their responsibilities to their own educational enterprise.

The third structural element I built into the course comes from the insights of Dan Ariely’s work in The Upside of Irrationality and his discussion of motivation. He discovered that relatively meaningless tasks discourage people even if they are incentivized with extrinsic rewards such as grades (or money).

I tried to address this issue in two ways. First, I created a point system that was completely cumulative in nature. Students could accumulate points, but they were never taken away. No assignment was graded as a ratio of achieved points/possible points. Second, I constructed the course so that the pieces would theoretically build upon one another. The weekly assignments led to the four projects and the work produced in the course contributed to the Final Portfolio assignment, which I deliberately put in a format that was both public and would persist after the course was over. I also provided incentives for going back and citing earlier student work on Google+ in later postings or presentations.
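As a rough sketch of the difference, with hypothetical point values, the cumulative model only ever adds to a student’s running total, while a conventional ratio model re-scores every assignment against its maximum:

```python
# Hypothetical scores on four successive efforts in the course.
earned = [40, 0, 65, 80]
possible = [100, 100, 100, 100]

# Cumulative model: points are only ever banked; a weak week never subtracts.
cumulative_total = sum(earned)                      # 185 points accumulated

# Ratio model: each assignment is judged against its maximum, so early
# failures permanently drag the average down.
ratio_average = 100 * sum(earned) / sum(possible)   # 46.25 percent

print(cumulative_total, ratio_average)
```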

Getting students to look above the individual task and to take a longer view over the course of the semester has always been a challenge. I usually attempt this in some form, but this is the first time I explicitly wove it into the structure of the class. I was also hoping that the relative permanence and openness of the Final Portfolio would make it more of a tangible object than something that ends up in my filing cabinet.

It was a mixed success. On the negative side, students, as in my more traditional classes, failed to use the earlier parts of the class to gain insights into the later parts, such as recognizing how the priorities of Congressmen tend to undermine long-term goals in foreign or economic policy. Furthermore, almost none of them cited their fellow classmates’ prior work. Even when I relaxed the rules and allowed students to go back and post in earlier topics on Google+ in order to bring up their point totals, very few of them exercised that option.

On the other hand, I was pleasantly surprised by the quality of the effort on the Final Portfolios (web pages). On the whole, the students improved their performance as they created their individual web pages. No one did worse on this assignment than they had done previously in their presentations and Google+ postings and a significant number of them did considerably better. This is in marked contrast to the traditional finals I had given in previous iterations of the class where students did not typically raise their overall grades through that one assessment.

Overall, the structure of the course did not significantly improve learning outcomes, but it also did not harm them. The class did about as well as past classes did in terms of what they learned and retained from the class. It was my hope, however, that the strong emphasis on group work, the focus on creative exploration, and the open-ended structure of the syllabus would foster better outcomes. Although quite a few students commented that they liked the class and had more fun with it, this was not, for the most part, reflected in the quality of their work. In the future, I may build on the one clear success, the Final Portfolio/web page, to create more assignments like this.

I am still scratching my head, however, about how to incite peer learning and group work as well as how to propel students to outline their own educational journeys. In the next installment of this series, I will discuss my experience and related struggles in creating a more explicit assessment rubric with the goal of gamifying extrinsic motivators for learning.



The Grader’s Dilemma


Unbound: Teaching Questioning Instead of Answering

Pivoting the World

(Originally published on nmc.org, April 2013)

I’ve had a lot of interesting strands come together this past week as I have been pondering the next steps in our technological evolution. For the last 20 years or more we have been obsessed with transforming the real world into virtual worlds. Books have become eBooks. Magazines have become blogs. Cocktail parties have become Facebook. Meetings have become webinars, and so on. Now, however, I’m beginning to think more and more about technology bending back in on itself. Our next steps will be to bleed the virtual worlds we created back into the physical world, and this will reshape how we iterate on the physical world itself.

The virtualization of everything was a trend some embraced more than others. It brought with it new opportunities to connect (from Flickr to Facebook to Twitter). Perhaps its height was the creation of virtual worlds such as Second Life. While I agree that there are huge advantages to bringing in larger, more diverse crowds, there has always been the concern that we are sacrificing depth for breadth. Marshall Davis Jones expressed this sentiment in his riveting poetry slam.

Jones forcefully expresses some real contradictions inherent in living in virtual spaces. I was thinking about him while working on the Horizon Report Technology Outlook for Community Colleges, when I came across an interesting article in the Toronto Globe about a professor who is building “pet rocks” using Arduino devices to physicalize social media. Consider the implications of making virtual social spaces real again and making your community extend online as well as on very real personal levels. This has implications for how we interact in education as well.

I am working with my science faculty to create a STEM-focused campus here at Houston Community College (HCC). There has always been a certain amount of resistance from the science faculty when it comes to virtualizing aspects of their curriculum. At our level of instruction, teachers need to be able to put science in the hands of their students, not just on a screen. However, I think that their wariness of mixing the virtual in with their instruction is increasingly missing the mark. There are emerging technologies exemplified by the “pet rock” project that represent a potential tipping point. They are making the virtual real.

It doesn’t stop with “pet rocks.” There are similar things going on right now, from hackerspaces to Ingress, where people are taking what were formerly virtual spaces and turning them into reality. As I was contemplating the science labs in our new campus, I thought the “pet rock” professor’s approach would be a great model. Students would be able to design items virtually and then make them real using technologies such as 3D printers or micro-computing devices such as Arduino or Raspberry Pi to test real-world physical concepts. Imagine a moderated scientific hackerspace where app development meets experimentation and 3D printing meets iteration.

The promise of iteration provides a final strand to this argument. I have been thinking a lot about how to teach both our faculty and staff to become more entrepreneurial in how they approach their present and future jobs. I am convinced that this will be a critical skill moving forward as we are forced (and blessed) by technology to reinvent our jobs and ourselves repeatedly as we move forward.

Last year I stumbled across Eric Ries and his concept of the “lean” startup.

Ries makes a critical point here. He says, “if we can reduce the time between pivots we can increase the chances of success before we run out of money.” Or, in the educational context, before we run out of time. After all, in the immortal words of Billy Crystal in Spinal Tap, “Mime is money.”

Ries argues that technology is helping us reduce the time between pivots and therefore, assuming we can pivot quickly, our chances of success in any entrepreneurial effort go up. Technology hardware is becoming cheaper and cheaper. Raspberry Pi and Arduino units go for much less than $50. You can get a 3D printer for $1000. Software is becoming more accessible too, as simple programs are easier and easier to create and implement using mobile devices or the aforementioned hardware devices.

Teaching and entrepreneurism have a lot in common. A colleague of mine likes to argue that one of the reasons it is so hard to manage faculty is that it is like managing an organization of entrepreneurs. The problem I often run into is that faculty fail to see how technology can improve their entrepreneurial efforts in the classroom. Instead, they often see it as limiting their scope for action or as a mechanism to oversee and control how they teach. However, there are a lot of parallels between what Ries is talking about and teaching. We have a limited quantity of time in which to interact with our students. Semesters end before you know it. The question is how many times you can pivot before you lose them.

Let’s go back to the idea of scientific hackerspaces. Science is fundamentally about trial and error, and the best way to get students to understand this is by allowing them to experiment and fail. The problem is that every pivot requires an initial failure, and we don’t do a good job of rewarding failure in higher education. We are also constantly up against the clock and calendar when it comes to allowing our students to fail within the context of an academic semester. It’s much easier for the instructor to drill them on the “right” answer and enable them to “pass” some sort of evaluation. That is a tried-and-true method, but it doesn’t teach our students the key tenets of entrepreneurism that will be critical to their future success. If you have any doubt about this, watch this 2007 TED talk from Alan Kay:

The solution here is to make pivots much less costly in terms of time. Let’s assume our scarcity is the amount of time given to us in a given academic term (usually 48 contact hours). How quickly can we iterate physical concepts in that time and what can we do to maximize those iterations? This is where we get back to the promise of making concrete the virtual. If students can quickly program or design physics experiments virtually and then test them in the real world, we can maximize the number of pivots they could have in any kind of course requiring experimentation (and almost all courses, science or otherwise, should require some sort of experimentation). This would also have the beneficial side effect of teaching entrepreneurial, computing, and design skills critical to many occupations in today’s world.
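As a purely illustrative example of what a cheap virtual pivot might look like (the experiment and the numbers are hypothetical, not drawn from my course), a student could sweep a design space in a few lines of code before spending any lab time building the physical version:

```python
import math

def projectile_range(speed_m_s, angle_deg, g=9.81):
    """Ideal, drag-free range of a projectile launched from ground level."""
    angle = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * angle) / g

# Virtual pivots: explore launch angles in seconds, then pick one or two
# candidates to build and test on a real launcher during class time.
speed = 6.0  # m/s, a hypothetical launcher speed
for angle_deg in range(15, 90, 15):
    print(f"{angle_deg:2d} degrees -> {projectile_range(speed, angle_deg):.2f} m")
```

Each run is a pivot that costs seconds instead of a lab period, and the inevitable mismatch between the simulation and the physical test becomes the interesting part of the experiment.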

The problem with many traditional science labs is that the inflexibility of the equipment often steers students toward predictable outcomes. What the new technology/employment environment demands, however, is people willing to create unpredictable outcomes and to be able to assess them creatively. Fortunately, this is also a core tenet of the scientific method and therefore something that we need to encourage our students to do.

A final exciting possibility is that this opens up a completely new kind of mashup. Until recently we tended to think of mashups in terms of visual or audio media. We can now start to think of them in terms of creating physical objects. This takes the concept of mashups out of the English classroom and opens them up in new ways to STEM disciplines. Remember that all fundamental breakthroughs in science are mashups. Newton’s Principia was a mashup of the thinking of Galileo, Kepler, and many other theorists who had been struggling with the new cosmology for a century before Newton combined their work and added his own innovations. Shouldn’t we be creating learning spaces that enable everyone to be a Newton?

Again, imagine a science campus that is more of a hackerspace than what we traditionally imagine science labs to be. In this space, students and faculty will be able to create, fail, and recreate. It will require close cooperation between computer science and the natural sciences, but isn’t this what we are striving for in STEM in the first place?

I would leave you with one last question that is critical to the success of this kind of model: Can we graft an entrepreneurial maker culture onto our educational processes? I think we can. I think we have to.

Small Steps for Big Changes

(Originally published on pbk.com December 2017)

At the beginning of 2016, my former Instructional Technology Team at Houston Community College and I opened the Design Lab at the HCC Alief campus. It was designed to prototype the MakerSpace we were developing at the West Houston Institute. This space, which occupied a repurposed computer lab, used surplus furniture, and only required about $20,000 in new equipment, was designed to give us practical experience in running a MakerSpace. It became so much more than that.

Within months, students using the space developed everything from an augmented-reality sand table to drones to senior engineering projects for the University of Texas-Tyler Engineering program. The space quickly became the first stop on every campus tour given by our president and soon thereafter attracted the attention of the chancellor, the trustees, and the local community. Most importantly, the space completely changed the conversation at the campus and the larger college around what NextGen learning should look like.

D-Lab when it was opened

What does the example of the HCC Design Lab teach us? Many of the projects we undertook at HCC used what is called the “Lean Startup” method. Under this methodology, small experiments are tried out with the expectation that a high percentage of them will fail, and iteration is assumed as part of the process. The strategy essentially argues that you “build” something small, “measure” its impact, and “learn” from your mistakes in order to “pivot” quickly, leveraging the advantages of technology.

The Design Lab itself was a lean startup project but even within the D-Lab itself we undertook many experiments, particularly with regard to the mix of equipment in the space. The most expensive piece of initial equipment we purchased was something called a Z-Space Tablet. This system allows you to use 3D glasses technology to view and manipulate objects in space. We thought this would be a cool piece of technology that would be used by students to do 3D visualization before sending the items to be 3D printed. The problem was that no one ever used it. It was too complicated and clunky for the neophyte user and 2D screens worked fine, even for 3D projects. So we quickly moved it out of the D-Lab and used its space for other projects. In its place, we purchased a $300 vinyl cutter, which quickly became one of the most popular parts of the tool package we presented to users.

The Z-Space was a failure for our purposes and we quickly moved on from it. We saw a need for a new kind of tool in the vinyl cutter and quickly pivoted around to getting one. Both of these decisions were made using the Lean Startup method, leveraging the falling costs of technology, and observing how the students in the space behaved. Both could be considered failures of the original design. Both represented what Lean Startup practitioners call “pivots” and both were necessary failures that allowed us to achieve success in the space. Note the specific mention of the vinyl cutter in the Houston Chronicle.

The D-Lab a little over a year after it was opened

Sustainability and Next Generation learning share one trait: they both represent cultural changes in the way we deal with teaching, learning, and the very spaces that we construct to do them in. Cultural changes are hard by definition. People fall into established patterns of thought and action that are very hard to dislodge. Doing this at scale is exponentially more difficult. The Lean Startup method presents a strategy that can be used to make incremental instead of wholesale changes to an organization.

It is easy to underestimate how hard cultural changes are going to be. Active learning represents a significant cultural shift from traditional practice. It’s asking a lot for an educational community to suddenly pivot to NextGen teaching and learning. Further complicating this is the reality that we do not have a shared understanding of next steps on the path. We have an imperfect understanding of the nature of the technological changes sweeping through your classrooms and society as a whole. We also have an imperfect understanding of how humans learn. We can see general trends and certainly understand that teaching and learning need to change but there is no general agreement on how to get there.

The D-Lab presents a useful model for how to approach this problem. Experimentation has to become the norm as we feel our way into the future. However, experimenting with a $150 million high school is not really an option. Nor will it work very well, because the scale of change is simply too large for most organizations to digest. Instead, the Lean Startup path is to make changes on a much smaller scale: a single classroom or lab, an outbuilding, or a library makeover. The investment is much smaller, and it is easier to retrain a core group of faculty or staff to maximize the use and effectiveness of the space and to allow them to experiment with new modalities of teaching and learning.

If the D-Lab is any guide, demonstration spaces will also form a catalyst for change when the rest of the school sees the impact of what’s going on. With all of the pieces in place (a much easier proposition on a small scale), there will be demands for duplication or scaling of the activity. In my decade redesigning and experimenting with innovative learning spaces at HCC, this is precisely what my team did. Not everything was as successful as the D-Lab has been, but the costs of failure were much more easily digestible than they were on larger projects. And we learned crucial lessons that we applied to subsequent efforts. Furthermore, over time this built a community of support and also started to change the culture of the institution, as faculty and staff more readily accepted entrepreneurism and felt empowered to attempt it themselves.

Without small-scale projects such as the D-Lab and others, we would never have built the West Houston Institute, which was recently named a finalist for the SXSW Learn by Design Award. As we discussed in a recent Knowledge Base Article, the West Houston Institute is designed to be one giant Lean Startup incubator, one that will adapt and reconfigure itself and its programs to meet changing demands and also act as a teaching tool for its students and visitors in the methods, skills, and mindsets that will carry them into the world of the future.

Digital Space and Time

(Originally Published on pbk.com October 2017)

I was reading an interesting blog entry recently in which Ryan O’Connor argued that one of the dictums of modern architecture, “form follows function,” should be replaced by “structure follows strategy.” In other words, he was saying that in a digital world function is no longer a guide to form. He was making an argument about user interface design and the virtual world, but in the last decade we have seen the virtual world encroach upon the physical world in ways that will profoundly impact how we work, live, and learn. Furthermore, architecture is arguably an exercise, on a vast scale, in user interface design. Technology is creating a world that is vastly more customizable, and subject to change on far shorter timeframes, than ever before in human experience. This presents significant opportunities and challenges for both education and the architecture that supports it.

A telephone existed for one reason: sending voice signals from Point A to Point B. “Form follows function” certainly applied to these kinds of analog technologies. The shape of a Walkman was driven by the cassette tape inside it. The shape of the cassette, in turn, was driven by the nature of the magnetic tape that formed its core.

The digital age explodes all of these constraints and, over the last decade, has increasingly encroached on tangible, physical technologies from telephones to cars to buildings themselves. The implications of this will be profound. When everything is reducible to zeros and ones the shape of anything can suddenly be changed. The ripples of this new reality extend far beyond computers or even the iPods that replaced the cassette and CD Walkmans. Those were just the first step.

These shifts have implications for both architecture and education. Consider how much of education has been driven by the necessity of seeing the chalkboard (or whiteboard or display) at the front of the room and the limits that this technology imposed on a teacher trying to convey information. In turn, the necessity of moving large groups of students through the school day is driven in part by the constraint of putting them into distinct learning spaces organized, in large part, around an immobile blackboard. As a consequence, learning, hard enough in an unconstrained environment, is now required to happen according to schedule. The logistical need to segregate also feeds specialization in the academic environment. From 10-11 am you are in History, not English.

Taking this to a more abstract level, consider how much learning is driven by the medium of the book. The linear narrative of the textbook drives classes. Tests come as a seemingly natural consequence of that linear narrative.

In this way the textbook drives the structure of learning in a school and that, in turn, can drive the very structure of the building itself. We have specialized boxes that we call classrooms. They are a direct product of the technologies of the blackboard and the book in much the same way as a cassette Walkman was the product of the cassette tape.

Despite the literature emphasizing the importance of communities of learning, classrooms often serve to divide communities as much as create them, especially in the older grades where specialization becomes pronounced. However, the logic of the schedule, the textbook, and test-centric instruction continue to drive the design of schools.

The fundamental assumptions we make as we consider what learning spaces should be are all subject to disruption in a digital age. There is less and less logic to having a physical textbook, and the information it contains can easily be acquired through digital means that are much cheaper and more adaptable to the needs of the teacher and learner. While the blackboard and whiteboard are still very useful tools, they can easily be supplemented and/or replaced by other means of transmitting information to the learner. The digital age effectively decouples information exchange from the physical space, allowing us to optimize those spaces with the human element as the primary consideration. This can happen either within the context of a particular space (digital displays, interactive touch, augmented reality) or by leveraging the cloud to put those environments onto mobile devices.

These technologies allow us to create buildings that functionally disconnect what we traditionally understand as a classroom from the learning experience. Learning spaces can become more communal and less driven by specialization. In essence, the school can become a bazaar of interconnecting ideas rather than an egg carton of disconnected concepts.

The world of work is being driven by increased demand for diverse skillsets. The need for highly specialized workers is declining, while those with broad-ranging skillsets are in high demand. Working backward from this fact, does it still make sense to teach in the segregated, highly specialized environment represented by the classroom? Instead, we need to mold the tools around the needs of the teacher and the learner, not the reverse. Strategies of teaching and learning should drive the technology and space design that supports them. The rationale for constraining education based on technological limitations makes less and less sense every day.

We also have to be mindful that strategies of teaching and learning are still very much in flux. Schools are driven by the concerns of today even as they struggle with the implications of building for tomorrow.

Therefore, the other key takeaway from these technological shifts is that we need to build for flexibility wherever possible. Teaching and learning are likely to change considerably over the lifetime of the spaces we are now creating. We need to recognize the limitations driven by technology that we impose on our structures and constantly re-evaluate whether or not they are still relevant. Furthermore, we need to recognize that these constraints are likely to be eliminated at an exponential rate going forward. Schools will need to adapt to innovations just like everyone else and the extent to which we can create spaces that can be easily adapted over time will determine the long-term vitality of the structure.

Adaptable technology is giving people a vast range of choices, and this can become overwhelming, especially to those with many other decisions to make in their day-to-day existence. The strategy for mastering these new realities is to focus on the intent of the activity and to do so critically. While the technological means may be constantly shifting, the ends typically don’t change much. We want learners to emerge from schooling with the knowledge, skillsets, and mindsets that will help them succeed in life. Let’s put that up front and build backwards from there. We have the tools to do it.

Hacking School

(Originally published on pbk.com September 2017)

For over a decade, authors as diverse as Frans Johansson (The Medici Effect – https://www.fransjohansson.com/books-by-frans-johansson/), Daniel Pink (A Whole New Mind – http://www.danpink.com/books/whole-new-mind/), and Steven Johnson (Where Good Ideas Come From – https://stevenberlinjohnson.com/where-good-ideas-come-from-763bb8957069) have argued that innovation comes from a diverse approach to solving problems. This means either developing a broad educational approach within a given individual or bringing together diverse people in a collaborative setting. As Johansson says in Medici, “Leonardo da Vinci, the defining Renaissance man and perhaps the greatest intersectionalist of all times, believed that in order to fully understand something one needed to view it from at least three different perspectives.”

The industrial age was one of specialization, and our schools reflect this. Instead of Da Vinci’s dictum that you have to apply a broad spectrum of views to any particular subject, the educational world for the last century has been one of increasing specialization. The further you go up the educational ladder, the more specialized this gets. But even at the lower grades there is usually separation between “art” and “math,” for instance. These distinctions are the product of an industrialized educational system that purports to prepare our students for specific careers requiring specialized, in-depth knowledge of a particular subject. Alan Kay shows just how siloed thinking undermines the learning process in this TED talk from 2007.

 

The problem is compounded by magical thinking. Arthur C. Clarke once stated, “Any sufficiently advanced technology is indistinguishable from magic.” This is exactly what has happened to our relationship with technology. Most users have no idea how it works or why it works the way that it does. The same is true for most technologies in schools (and is becoming worse as technological systems become more complex). It is common for schools to approach technology as something expensive that needs to be protected from the students. It is also separated from instruction in the sense that it is used to facilitate certain kinds of experiences but is rarely the focus of instruction itself. As a result, both students and teachers tend to view technological systems as black boxes rather than as learning opportunities.

The struggle to align the educational world with technological realities is a direct product of the specialization that has characterized our educational experiences for a century or more. Is coding a technical subject taught in a Career and Technical Education environment? Is it an exercise in mathematics and logic best taught by engineers? Most controversially, should coding be taught as a foreign language class? The answer is all three and yet our curriculum insists on trying to insert it into traditional boxes that don’t really create effective coders. Good coders have to understand logic and the basic technical limitations of the various languages but also need to be able to speak to the machines in a natural way. These kinds of people are incredibly sought after in the technology world but our systems do a very poor job of producing them. And coding is only the first level of the problem. To get to an iPhone-level device you have to integrate coding with hardware engineering and design (and ultimately entrepreneurship). You need a multidisciplinary approach to practically any problem these days.

How do we work toward a system that creates these kinds of thinkers? One way to do this is to create environments where it is okay to experiment. In the 1950s early computer scientists at MIT and elsewhere often came from the model railroading community. Model railroads are complex electrical switching systems so the transition was a natural one. Bundles of wires often had to be cut, or hacked, and reconfigured in order to make a system work. This was true in computing as well. This is where we get the term “hacker.”

Over the last 20 years this term has returned to its definitional roots through technology spaces that allow users to build, break, and repurpose technologies. Advanced tools such as 3D printers and laser cutters have been added to the mix to allow the rapid fabrication of prototypes. Microcomputers such as Raspberry Pis and Arduinos provide rapid access to electronic brains. Hackerspaces, and their more commonly referenced cousins, MakerSpaces, have started making their way from community spaces into education. It is not always a natural fit.

MakerSpaces go against the grain of the industrialized educational model. Like coding, they raise questions about where they belong. Viewed purely as a set of tools, they can augment traditional programs in Career and Technical Education or the Fine Arts. In higher education they are most commonly placed in the context of engineering or other STEM-based programs. While these programs can greatly benefit from access to these technologies, it is only through broad access that the true potential of developing the next generation of da Vincis can be realized.

If he were a young student today, Steve Jobs might never have had access to this technology because he wasn’t an engineer. Fortunately, he had access to Steve Wozniak, who was an engineer. Together they developed what was to become Apple Computer, an early example of blending computing with design. MakerSpaces offer the opportunity of developing diverse kinds of communities within our schools. However, they can only reach their true potential if they are built with those goals in mind and are accessible and visible to the entire student body, not just those who are technically inclined. As a resource area like a library, they can provide a vital bridge for students between a specialized, industrial-focused curriculum and the realities of a post-industrial age. If they are tucked away supporting specialized programs and/or are invisible, they are as useful as a library of unread books.

The Hidden Future

(Originally Published on nmc.org, July 2017)

I seem to have been hit by the Futurism bug recently. I say “hit” rather than “bit” deliberately. I’ve been fascinated by Futurism ever since I read Isaac Asimov’s classic Foundation trilogy as a child. The idea of psychohistory has stayed with me. Part of the reason is that I recognize prediction as largely an effort centered around the human rather than the machine. With psychohistory, Asimov argued that, given a large enough sample size, one could predict the future through the aggregated actions of vast numbers of people. Note that his version of futurism was based on societal factors, not technological ones. It is only when societies accept the new cultural norms imposed by technological change that we can say that change has actually occurred. When we focus on specific technologies, we miss the point.

In a recent post, Bryan Alexander remarked how little the world has apparently changed since the 1980s. With the exception of smartphones, there is scant visible evidence of change while walking down a street in Boston or DC. The cars and buildings are similar to what existed in 1985. Bryan and I are almost exactly the same age, and we both graduated from high school that year. I suspect we both expected our hoverboards by 2015.

In supporting his argument, Bryan references a great book, The Shock of the Old by David Edgerton, which makes a point similar to Asimov’s: namely, that we miss the pace of change because we focus on technology, not the human element. Edgerton remarks on the persistence of traditional technologies well past the time when histories imply they were long gone. Everything from the continued use of horses to the longevity of radio as a medium is cited in this excellent book.

Edgerton’s book, written in 2007, somewhat misses a fundamental shift that has since occurred in the way technology evolves. Most of the technologies Edgerton highlights require heavy capital expenditure to change, and large swathes of this planet do not have access to the latest and greatest. I can understand why seemingly anachronistic technologies persist and/or are adapted into odd hybrids like the tuk-tuk in Asia (a cross between the rickshaw and the scooter). Modern taxis are simply not economically feasible in many cities in less prosperous parts of the world. It makes sense that the locals try to maximize the functionality of the limited resources available to them. This is what humans do; they adapt their tools to their needs.

At the same time, I wonder if this argument only makes sense when applied to physical media (and that may be shifting, too). The barriers to entry for virtual technology are much, much lower than those required to operate a New York City-style taxi in Phnom Penh. In the developing world, hacking can now be applied in revolutionary new ways. Several years ago, I read an article about how West African “hackers” were converting discarded PCs (did you ever wonder where that seven-year-old Dell winds up after your IT department carts it away?) into 3D printers.

Circuit boards are the physical manifestation of the rapid diffusion of “invisible” tech throughout the world. I carry around the rough equivalent of a 1985 Cray supercomputer on my belt. Those seven-year-old Dells are still more powerful machines than my iPhone 6 — and we’re throwing them away. Even more accessible, the Raspberry Pi delivers similar performance for just $35 — making it far more prevalent than an $8 million Cray ever could be (bench not included, however). Moreover, that is only the tip of the iceberg. The utility of these supercomputers on our belts and the basic ones embedded in almost every facet of our lives is that many of them are connected to a much larger “computer” that is the network. I don’t need to store the entire contents of Google Maps on my belt-mounted computer (or in my car, for that matter), as long as I’m connected.
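To make the price-performance point concrete, here is a back-of-the-envelope comparison. The throughput figures are rough, order-of-magnitude assumptions rather than benchmarks, and the Pi model is assumed to be a Raspberry Pi 3; the point is the ratio, not the decimals.

```python
# Back-of-the-envelope price-per-performance comparison.
# All throughput figures are rough, order-of-magnitude assumptions,
# not benchmarks; the point is the ratio, not the precise numbers.
devices = {
    #                       (approx. peak GFLOPS, approx. price in USD)
    "Cray-2 (1985)":         (1.9,       8_000_000),
    "Raspberry Pi 3 (2016)": (3.0,              35),  # assumed model and figure
}

for name, (gflops, price) in devices.items():
    print(f"{name}: ~{gflops} GFLOPS at ${price:,} "
          f"=> ~${price / gflops:,.0f} per GFLOPS")
```

However loose the numbers, the cost per unit of computing has collapsed by several orders of magnitude, which is why this capability can now be scattered through belts, pockets, and junk piles rather than locked in a machine room.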

The one area that Edgerton really doesn’t discuss is communications technology, and it is in this area that everything else is being upended. This is true both globally and locally. You can now hail a tuk-tuk on the streets of Phnom Penh using Uber. In many fundamental ways my own life has shifted, especially in the last few years, from what it used to be. I would put the blame for this squarely on one invisible technology: the Network. When I refer to “the Network,” I’m talking about a vast array of wired and wireless communications, from broadband internet in the home to Bluetooth in my car to cellular networks that pervade every fiber of my existence as well as the ubiquitous computing technology that binds it together. I think we often underestimate the pervasive ways that it changes everything we do.

This may surprise many of you, but I am a technological conservative. I rarely get excited about the “Next Big Thing” that will “change the world.” They rarely do — and I don’t adopt things until I can personally see utility that supersedes the old model by enough of a margin to justify the opportunity cost of changing my habits. I frustrate some of my more technologically progressive friends by resisting change until it suits me. I also get mad when I’m forced into a technological downgrade due to a company’s commercial interests. Some good examples of my recent frustrations in this area include Apple’s decision to get rid of most of the ports on their “Pro” laptops and Facebook’s forced transition to a separate app for Facebook Messenger.

That being said, I have noticed some significant changes in my behavior over the past half-decade or so, particularly in media consumption and transportation, both related to the Network. I am more likely to buy books on Kindle than I used to be, and that is usually because of a need for immediate gratification and the added benefit of synchronization across various devices. I recently took my first trip ever where I didn’t pack a paper book. While I still enjoy the serendipity of browsing through CDs, I find that I’m increasingly buying my audio in downloadable form. Both of these are partial shifts, impacting perhaps 50% of my media purchasing these days.

My video consumption, on the other hand, has shifted almost 100%. Other than sports, I watch almost nothing live anymore. I can’t be bothered to conform my schedule to that of some random network scheduler. That means that, unlike in 1985, there are no more “Must-See Thursdays” in my life. If I want to go out or my kids have a game, I can do that when I want, regardless of what’s on. I don’t feel the need to fuss with a DVR or a VHS recording anymore, either. I watch what’s available on Netflix, Amazon Prime, or HBOGo. I buy a very limited selection of movies, and increasingly I won’t buy a DVD/Blu-ray unless it comes with a free digital download. In most cases, the physical media don’t even get used.

In the area of transportation, I have to ask myself: how many of the fossil fuel-driven “dumb” cars that Bryan Alexander references have Google Maps running on a phone inside them? I am a heavy user of Google Maps. Even when I know where I am going, I run the route before departing to warn me of unexpected traffic. In the past few years, rideshare services like Uber and Lyft have exploded onto the scene. They are heavily dependent on Google Maps-like software to get their drivers where they are needed as quickly as possible. While not a Tesla, my Mazda 6 includes sensors that warn me of nearby objects when I back up or when someone is sitting in my blind spot; the car also adapts its cruise control to the vehicles in front of me. When I drive a car without these features now, I really notice it.
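As a sketch of the kind of matching such dispatch software performs, consider pairing a rider with the nearest available driver by great-circle distance. This is a toy illustration, not how Uber or Lyft actually works; the driver names and coordinates are invented.

```python
# Toy nearest-driver dispatch, illustrating the kind of matching that
# rideshare software performs. Names and coordinates are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearest_driver(rider, drivers):
    """Return the (driver, distance_km) pair closest to the rider."""
    return min(
        ((d, haversine_km(rider[0], rider[1], d["lat"], d["lon"])) for d in drivers),
        key=lambda pair: pair[1],
    )

# Hypothetical drivers circling central Phnom Penh
drivers = [
    {"id": "tuk-tuk-1", "lat": 11.5621, "lon": 104.9160},
    {"id": "tuk-tuk-2", "lat": 11.5449, "lon": 104.8922},
]
driver, km = nearest_driver((11.5564, 104.9282), drivers)
print(f"Dispatching {driver['id']} ({km:.1f} km away)")
```

The real systems layer live traffic, pricing, and routing on top, but the essential trick is the same: the Network knows where everyone is, so the match takes seconds instead of a wave from the curb.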

The one thing I don’t know, and that neither Bryan Alexander nor I can see with our eyes, is just how widely diffused these technologies are. There is plenty of evidence that “the Network” has proliferated far more rapidly than “physical” technologies, such as those described by Edgerton. While the developing world may have become a technological junkyard, those pieces are being repurposed into cutting-edge technologies, perhaps leapfrogging the ideas being produced in the technologically fat and happy “developed” world. While transportation in crowded urban centers remains challenging, friction-free dispatching and navigation services are now facilitating it.

What Edgerton argues persuasively is that when people see a need, they will adapt their tools to that need. “The Network” facilitates tool adaptation in ways unimaginable in 1985 or even when Edgerton was writing 20 years later. While I may not have my hoverboard, I can watch Marty McFly anytime without physical media (he inhabits my iTunes library). In the very near future, I will be able to experience him in a VR environment from my sofa so I can get the hoverboard experience without risking injury to my knees. The outside world will continuously adapt to these new realities. In the meantime, the street will look much the same, but those of us who have taken the red pill will see the matrix that imperceptibly lies beneath it.

Squaring the Circle: The Learning Analytics Dilemma

(Originally Published on nmc.org, May 2012)

Learning analytics presents some serious dilemmas for the academic community. It is tempting to think that our advancing capabilities in data collection and processing make understanding the learning process an achievable goal. However, because we have never really managed to define what successful learning looks like, figuring out what and how to measure, much less how to use that data to change what we do, poses some major challenges. My experience teaching this semester, as well as my intellectual effort in imagining a learning analytics platform, has shown me that there are at least three major, interconnected hurdles to overcome before we can proceed: definition, doctrine, and culture.

Before we can go into the technicalities of developing a learning analytics system, we need to come to some definitional agreement over the ultimate purpose of higher education. Is it to teach a basic canon of knowledge? Is it to teach critical thinking skills? Or does critical thinking not go far enough? Do we need to teach what Michael Wesch calls “knowledge-ability”?

Our only hope of developing meaningful variables to measure learning is by clearly defining our educational goals. Making meaningful comparisons across multiple classes, and even more so across disciplines, depends on coming to some sort of consensus around these often-divisive issues.

In addition to being clear about definitions, we also need to address the doctrinal dichotomy over whether learning is essentially a cognitive or a behavioral process. A behaviorist would argue that learning could, in theory, be broken down into a set of discrete steps or benchmarks that lead to success. This approach would lend itself quite well to the quantitative possibilities that big data would seem to offer us. Its misuse is also, unfortunately, the basis for much of the standardized testing regime that has been inflicted on the K-12 environment.
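To make the behaviorist version concrete, here is a minimal sketch of the sort of analytic it implies: counting how many defined benchmarks each student has completed from an event log and flagging those below a threshold. The benchmark names, event format, and cutoff are all assumptions for illustration.

```python
# Minimal sketch of a behaviorist-style analytic: given an event log of
# completed benchmarks, compute each student's completion rate and flag
# those below a threshold. Data layout and threshold are assumptions.
from collections import defaultdict

BENCHMARKS = {"read_syllabus", "quiz_1", "draft_essay", "peer_review", "final_essay"}
AT_RISK_THRESHOLD = 0.6  # hypothetical cutoff

# (student_id, benchmark) pairs, e.g. exported from an LMS
events = [
    ("s01", "read_syllabus"), ("s01", "quiz_1"), ("s01", "draft_essay"),
    ("s02", "read_syllabus"),
]

completed = defaultdict(set)
for student, benchmark in events:
    if benchmark in BENCHMARKS:
        completed[student].add(benchmark)

for student, done in sorted(completed.items()):
    rate = len(done) / len(BENCHMARKS)
    flag = "AT RISK" if rate < AT_RISK_THRESHOLD else "on track"
    print(f"{student}: {rate:.0%} of benchmarks complete ({flag})")
```

Numbers like these are exactly what a big-data pipeline could churn out at scale, which is both the appeal of the approach and, as the counter-argument below suggests, its danger.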

There is a powerful cognitive counter-argument, recently expressed by Gardner Campbell in addressing George Siemens’ LAK12 MOOC. The argument here is that learning is essentially impossible to quantify because it is a voyage of discovery with ill-defined markers along the way. The job of a teacher is to facilitate this voyage among his or her students. How do you measure the learning process using this approach? Can we develop a tool to measure curiosity, for instance? Furthermore, quantification risks taking education even further over to the dark side of standardized formulas for “learning.”

Philosophically, I wholeheartedly agree with Gardner’s position on this subject. I spent most of graduate school arguing for a cognitive model in international relations, and it certainly appeals to my right-brained nature. Furthermore, I know my own learning process has been governed to a large extent by my insatiable curiosity. At the same time, that curiosity attracts me to the possibility of dissecting the learning process in a meaningful way. That is partly because, as a teacher, I remain frustrated by my inability to motivate the vast majority of my students to undertake curiosity-driven, self-directed learning.

Even if we can get past the definitional and cognitive/behaviorist issues, there is yet a third hurdle: cultural and institutional inertia. My recently concluded semester teaching a government class is a good example of the perils of getting too far ahead of the curve, or outside the norms of traditional teaching approaches. As usual, I started the semester with high hopes that trying something new would lead to greater success for my students. I refocused the syllabus around a skills-based approach and attempted to give my students the freedom to explore areas of interest to them. I also tried some novel technological solutions, such as incorporating Google+ and Google Docs into the workflow, with the goal of mentally taking students out of the classroom as much as possible. At the same time, I wanted to give them exposure to technical tools that they will need to master in future work environments.

This experience left me disillusioned about my ability to break through cultural norms even in my own class, much less convince other faculty to attempt similar efforts in theirs. From the perspective of a faculty member, breaking down my assignments into their skill components was hard work. As a teacher with a built-in motivation (my interest in learning analytics) and an unusual propensity to take risks in class, I was able to convince myself to see it through. My experience with other faculty is that they are often understandably risk-averse and resistant to change. In other words, I’m not sure what I attempted would be scalable even if it could be proven effective.

Furthermore, the changes required a significant paradigm shift on the part of both my students and myself. The students resisted, for the most part without complaint or feedback, my attempt to shift the paradigm of their learning experience. I ultimately gave up on trying to impress the significance of this approach upon them. For the most part they were only concerned with the bottom line of what their final grade would be. The freedom to explore did not agree with many of them, either: the class had a high failure rate because many used that freedom to avoid doing even the minimum amount of work necessary to succeed.

The bottom line is that both students and faculty will resist this kind of cultural shift. It’s hard work and it involves an intellectual paradigm shift many won’t be willing to undertake. It’s hard to imagine a meaningful learning analytics project without a significant re-evaluation of teaching and learning being a part of the process.

To reinforce the point, I recently attended an iPad workshop put on by Apple. In it, they demonstrated how the new iBooks app, coupled with iBooks Author, could be used to create self-tests and flashcards for the readings. When I pointed out to the presenter that this was a classic McLuhanesque mistake and, furthermore, perpetuated an outmoded form of teaching (rote memorization), he replied that, while he agreed with me, this was what most of the audience wanted to see pitched. Unfortunately, I could not argue with him on that point.

I am down but not out when it comes to learning analytics and the related mission of reinventing teaching and learning to meet the realities of the modern world. While I don’t see it as a magic bullet for student success, I continue to hope that it will provide critical insights to enable us to find a pathway to achieving those ideals. It also offers me an irresistible intellectual puzzle with the hope of making a real difference in the lives of our students.

As the saying often attributed to Yeats goes, “Education is not the filling of a pail, but the lighting of a fire.” We clearly need more matches and less water. At the same time, we can’t overlook the theoretical, doctrinal, and cultural barriers that will impede meaningful measurement of the learning process, much less allow us to reshape that process based on the data produced. However, we ignore the coming seismic shifts in teaching and learning at our peril. There is still a lot of discussion and hard thinking to be done here. In order to square this circle, we have to figure out how to be disruptors while preserving the essence of what makes education such a special experience.

The Visualization Gap

(Originally Published on nmc.org, August 2015)

I was recently reading Bryan Alexander’s excellent post about making effective use of PowerPoint. This got me thinking about the much bigger challenge lurking out there, of which bad presentations are only the tip of the iceberg: Visualization Literacy. This theme has concerned me for some years now. Indeed, it arguably spans the 30 years of my photographic experience, since photography is a struggle to achieve a particular form of visual storytelling. I have given photography presentations in which I discuss photographic composition as a form of visual narrative. Many of these issues also overlap with my yearlong discussion of technology design, which relates to my own struggles both to teach and to master visual literacy.

My presentation on narrative photography on Slideshare

To call on one of my favorite quotes, Marshall McLuhan wrote in “The Medium is the Message” that “The serious artist is the only person able to encounter technology with impunity, just because he is an expert aware of the changes in sense perception.” This is because technology inevitably forces us to reframe our perspectives in ways that are uncomfortable to those accustomed to the linear forms imposed on us by industrial and textual narratives. McLuhan dissected this mode of thinking in much of his work, which was written at the beginning of the electronic age and highlighted how the emerging media of the time, especially visual media, had reshaped a largely linear textual landscape. Film, television, and photography were often dismissed as lower art forms in part because visualization was perceived as literally “cartoonish.”

The academic and literary elite saw the book as the highest form of art. This has resulted in an ingrained bias against visual communication, particularly in higher education. We don’t teach our students how to visualize, in part because the vast majority of faculty went through textual training in their undergraduate and, especially, their graduate experience. That was certainly the case for me.

You see this deficit in a vast range of areas, from the aforementioned presentations to textbooks littered with bad visualizations. There are exceptions, and some disciplines are forced, by necessity, to visualize their content. Outside of dedicated programs, however, there is little training in visual information creation and presentation. “Technology” programs often teach the mechanics of using software such as PowerPoint, Illustrator, or Photoshop, but little thought is given to the purpose and the power of these packages to reshape narrative.

Why is this suddenly an issue? We now have increasing power to create visual narratives ourselves and to lift dialog out of the linear tyranny of text. This is my attempt to show what I can do with a visual creation tool. It serves both a narrative and a metanarrative function, illustrating what I’m talking about while also demonstrating my (and the tool’s) limitations in conveying it exactly.

Going back to an insight from a great American technologist, Vannevar Bush, one of the key motivations for the computing revolution was to keep up with the increasingly complex, non-linear problems that were confronting society. While the Simple Linear path of text is the shortest route between thinking and understanding, more and more of our problems lie along the path that requires Holistic and Contextual thinking. Visual media are much better at creating an understandable representation of that complexity.

Textual narrative assumes there is a beginning, middle, and end to a story, argument, or problem. Visual narrative is far more fluid. It does a better job of showing the intertwingled nature of many of today’s problems. To cite just one example, there is no beginning, middle, or end of the global warming crisis. There are myriad inputs, combinations of solutions, and outcomes that are often poorly expressed in linear format. This is precisely the kind of problem that motivated Vannevar Bush, Doug Engelbart, JCR Licklider, and Ted Nelson to think about technology in the first place. Yet, we still approach learning and communication in much the same way as was done when they were writing decades ago.

So, how do we begin to address this critical challenge? First, we should enlist the talents of visual artists to explain the potential and unique challenges of visual storytelling. Scott McCloud makes a good start of this in his books and in his TED Talk, given a decade ago.

There are a lot of parallels between web comics and MindMaps, as Randall Munroe has repeatedly demonstrated. Both are about manipulating visual representations in ways that, when done well, fundamentally shift your perspective on an issue. I often enlist my photography in my own visual storytelling, but that is just one way to create a visual narrative.

Graphics created through MindMapping software are a much simpler way to create a diagram or idea flow chart that can be incorporated into presentations. They can even be created live, on the fly, during brainstorming sessions and meetings to capture the complex interrelationships of discussions that happen (and are often lost) in the linear, albeit fragmented, nature of meeting minutes and notes. I am still looking for a truly collaborative way to make MindMaps. Some of the online mapping platforms, such as MindMeister, offer some promise for working collaboratively and persistently. There are still some frustrating limitations in this area, but I am confident that incremental improvements will increase the accessibility of visualization software.
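For instance, a simple idea map can even be generated programmatically. The sketch below uses the graphviz Python package (one tool among many, and it assumes the Graphviz binaries are installed); the topic nodes are invented purely for illustration.

```python
# Small sketch: generating a simple idea map programmatically with the
# graphviz Python package (one option among many diagramming tools).
# Topic nodes here are invented purely for illustration.
from graphviz import Digraph

mindmap = Digraph("visualization_gap", format="png")
mindmap.attr(rankdir="LR")  # lay the map out left-to-right

mindmap.node("root", "Visualization Literacy")
for branch, leaves in {
    "Presentations": ["slides", "live talk"],
    "Narrative": ["linear text", "nonlinear visuals"],
    "Tools": ["mind maps", "photography"],
}.items():
    mindmap.node(branch, branch)
    mindmap.edge("root", branch)
    for leaf in leaves:
        mindmap.node(f"{branch}-{leaf}", leaf)
        mindmap.edge(branch, f"{branch}-{leaf}")

mindmap.render("visualization_gap_map", cleanup=True)  # writes a PNG
```

A dozen lines like these will not make anyone a designer, but they lower the barrier to sketching relationships visually instead of burying them in prose.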

The bigger problem, however, is our mental limitations in both teaching and thinking visually. Most classes that “teach” PowerPoint gloss over the narrative changes it imposes on us in the transition from a linear textual narrative to a nonlinear visual one. They also fail to examine the information transfer capacities of various media. PowerPoint is software that complements a performance and often fails as a container for information. It needs to be augmented by more persistent visual and textual media. I’ve worked around this by creating websites as a mechanism to gloss my presentations, provide background links, and create a persistent, living complement to what happens live. Slideshare fails at this because it only gives you half of the presentation, the visual part, which may or may not stand on its own. Part of visual literacy is understanding how visual media complement other media, such as audio and text.

Finally, we need to start embedding design thinking into our processes. Design thinking is, by its very nature, closely tied to the visual. Not all design is based on images, but even textual design relies on layout and other visual composition techniques for its power. Most importantly, it teaches us to think about these issues in fundamentally different ways than simple critical thinking and textual composition do. These two strands need to be interwoven if we have any hope of preparing our students and ourselves for the coming challenges of the visual age.

In a sense, technology challenges us all to become serious artists. Those of us who teach or have taught have learned the power of a good visualization in presenting complex information to our students. As the world gets more complex, visualization becomes even more critical to our methods of teaching, learning, and communicating. The trick is not being afraid of it. Get out the finger paints. Try to tell your next story through pictures. If you mess up, play with it and refine it just like you would with text. Above all, apply a visual eye to your presentations, websites, and videos and be aware of the narrative shifts created by the media.
