“On one hand, information wants to be expensive, because it’s so valuable…. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.” – Stewart Brand, quoted in Hackers (p. 360)

Why do we build fences? We build them to control nature and to protect scarcity. But we build fences around intangible assets as well, using them to control knowledge and information.

These are the kinds of fences that AI threatens. This is a good thing. We can either hoard knowledge or profit from wisdom. It’s our choice. One path is dangerous; the other is liberating. Universities will play a key role in deciding which path AI follows going forward.

Recently, AI has dominated debates in the media and academic discourse. Much of the rhetoric has been about how AI impacts the flows of information and knowledge in our societies and how it threatens certain fences.

We can take a lesson from the early hackers because this was also a central concern in the early years of the computing revolution. Steven Levy, in his classic book Hackers, describes a crucial fork in how we view information: as closed or open systems. He writes that “crucial to the Hacker Ethic was the fact that computers, by nature, do not consider information proprietary” (Levy, p. 323).

The public face of AI follows the hacker ethic, at least in its execution. However, most of the AI systems in the news today have at their core the other side of the Stewart Brand quote that led off this blog: they hide information to give it value. AI’s value, as seen by most AI companies, lies in its proprietary algorithms.

Which produces more value, the closed or the open approach to information? Without the Hacker Ethic we would never have had the personal computer, the internet, the graphical user interface, and a host of applications that have undeniably augmented human intellect. The proprietary model gave us competing operating systems, incompatible applications, proprietary algorithms, and fenced-off knowledge systems.

It is no accident that the initial crop of hackers emerged from the post-World War II mass university environment, particularly at MIT. Their ethic of knowledge distribution goes back at least as far as the Enlightenment. This ethos is central to what universities are. However, even within their respective universities, these hackers ran into systems that tried to contain their explorations.

Levy’s book is filled with stories of hackers running up against barriers as humble as physical locks and going around them to get what they needed. They did this out of a quest for knowledge, not profit, so the universities tolerated it (to a point).

To a hacker, a closed door is an insult, and a locked door is an outrage. Just as information should be clearly and elegantly transported within a computer, and just as software should be freely disseminated, hackers believed people should be allowed access to files or tools which might promote the hacker quest to find out and improve the way the world works. When a hacker needed something to help him create, explore, or fix, he did not bother with such ridiculous concepts as property rights. (Levy, p. 78)

These tensions never went away. Over the last 40 years, we’ve seen the gradual encroachment of closed technological systems on the digital idea space, even in academia. Hacking is now confined to specific “safe” places. The larger technological and information environment has become more and more locked down as proprietary software platforms have taken over the digital world.

Despite these barriers, information insists on becoming more accessible. When I started my learning journey, I spent long hours sifting through stacks of books in various libraries. Now, I rarely go to a physical library. My first instinct is to see what I can find online. Usually, it’s enough to get the job done.

However, even here I am constantly confronted by locked doors. Some of them I can pick with my college’s subscriptions. Others are too difficult to open, so I simply pass them by. This choice has little to do with their intrinsic value as ideas and everything to do with their extrinsic value to a publisher.

The hacker in me finds these fences incredibly frustrating. I can say the same about opening files created by proprietary software I don’t own. In both cases, commercial forces have constructed fences around ideas.

Richard Stallman told Levy that “American society is already a dog-eat-dog jungle, and its rules maintain it that way. We [hackers] wish to replace those rules with a concern for constructive cooperation” (Levy, p. 360).

Universities are a nexus for constructive cooperation. They will be essential if we hope to solve the complex problems facing humanity. Constructive cooperation is also essential for the future development of AI.

On the surface, LLM-based AI weaponizes the Hacker Ethic. In its current state, it’s capable of raiding informational cupboards and reassembling their contents into weird creative stews. New knowledge happens when we reconstruct old information into unexpected paradigms. This has traditionally been the role of scholarship.

GPT helps me work my way out of creative funks by repurposing existing knowledge into novel pathways. It helps me play with ideas like a good scholarly article or debate does, but in a more dynamic fashion. AI doesn’t replace academic debate; it complements it.

Digital technologies have helped us play with ideas in ways that were difficult, if not impossible, before the digital world. Play creates knowledge by letting us explore alternative realities. AI is just the latest toy in our idea-generating cupboard.

Every technology brings with it dangers. AI is no exception, although I think we often overestimate just how much has changed. The danger in AI comes from hiding the “valuable” algorithms that drive its creations.

This is not a new debate. My chapter “Living in the Panopticon” in Discovering Digital Humanity argues that the problem with these algorithms isn’t the technology but their lack of transparency.

The companies that control large swathes of the social media and AI landscape see value in hiding the calculations that take place within these algorithms. They use them to manipulate our preferences. Generative AI has the same potential.

This brings us back again to Stewart Brand’s dichotomy between free and valuable information. By treating these algorithms as a “proprietary business secret,” we are introducing a false scarcity into the equation.

Companies can use algorithms to facilitate work or to track and direct human activity (or both). However, without access to their workings, we will never know what they were designed for.

The dangers of AI do not lie in the technology’s advancement itself. They lie in the “proprietary business secret” aspect of its development. This is where government regulation should concentrate.

Trying to control the technology is futile and counterproductive. However, we can insist that the underlying processes be transparent. Those who argue that this would eliminate the business logic that drives AI development overlook a glaring counterexample: the open development of digital technology since the 1960s.

Creative platforms must be open if they are to support collaborative work. If we’re worried about competing with other actors like the Chinese, we must have more faith in the power of openness to drive knowledge development.

Sure, the Chinese will have access to that information too, but we’ll be better at figuring out how to use it. Open systems of knowledge have an inherent collaborative advantage, a central feature of all innovation.

Universities are natural places to create that kind of environment. We just need to nurture the chaos, much as MIT, Stanford, and other institutions did for the computer hackers in the 1960s. Systems create realities. Open systems will create open realities. On one path lies danger; on the other, progress. Technology is not the deciding factor here; humans are.