July 24, 2023

Transparency: A Way to Regulate Technology

“On one hand, information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”
– Stewart Brand (quoted in Steven Levy, Hackers, p. 360)

Debates over AI have dominated the first half of 2023. The US Congress held hearings, and regulation has been demanded from many quarters. This kind of furor is not a new thing. We have been debating similar regulations for social media for years now.

The technology community has often reacted to these debates with a mixture of fear and derision. If legislators who don’t understand the difference between Twitter and Wi-Fi were to create regulations about social media or AI algorithms, the results would likely be hilarious, but also harmful to innovation.

The speed of politics is also out of sync with the speed of technological change. Technology moves fast, while most political systems are slow by design. It is hard to imagine regulations that aren’t outdated even before they are discussed in a congressional committee, much less implemented.

Even enforcing existing regulations is a challenge for bureaucracies and law enforcement. Microsoft was punished for Internet Explorer long after the questions that prompted the original antitrust lawsuit had become irrelevant.

It’s important to step back from technology and ask more fundamental questions about what’s really going on here and where the roots of potential abuses lie. We live in a complex world with lots of moving pieces and rapidly shifting environments. It’s often difficult to see the core issues through the confusing noise. Ironically, it is here that an AI designed around information legitimacy could be transformative.

Transparency should be a cornerstone of all technology regulations. A recurring theme I hear in the AI debate is that we don’t know what’s going on. Some of that is internal to the technology itself (which doesn’t make it undiscoverable), but much of that is hidden behind the veil of corporate secrets.

In the AI world, almost no one understands how ChatGPT gets from a query to an answer because OpenAI locks almost all of that process in a proprietary box. In the social media world, the same is true of the algorithms that bias newsfeeds on Facebook and Twitter.

My dog is smart enough to understand that if something interesting is going on underneath a blanket, the solution is to pull the blanket off. It’s high time we pulled the blanket off these processes.

Ignorance is a way to make money. If you don’t know that you can get a product down the street for less money, I can sell you that same product at a higher profit to myself. The digital age has threatened this tried-and-true practice: when you can access all of that information on your computer or phone, my profit potential is undermined.

Over the last 30 years, economic actors have had to adjust to this reality. The solutions have ranged from adding layers of complexity to a product, making it difficult to compare with other products, to creating proprietary black boxes that conceal some sort of “secret sauce.”

Much of today’s industry, and I include the tech industry in this, operates behind a hall of mirrors as a way of protecting profit. Complexity has replaced scarcity as a profit screen. The practice of deception hasn’t changed.

As those of you who have read my work know, I am a vigorous proponent of technology as a creativity enabler. When I spend half my time dodging around unseen obstacles in various platforms as I try to create, I waste valuable creative time.

Imagine being a regulator trying to decipher the code that drives these processes. AI could streamline that work of decoding complexity; it is good at looking for patterns and connections.

As I discussed in a previous blog, AI is a powerful tool, but it needs to be supported by open systems in order to fulfill its potential. Systems of openness are a key to keeping AI systems firmly in check.

Those who say that innovation will be crippled if we create open systems haven’t read the literature on what drives innovation. Generative AI is also showing that hiding stuff is a fool’s errand as AI crawlers find their way into more and more systems.

There are plenty of ways to generate profit from open systems. As systems become more complex, even if they are open, people will need help to maximize their own use of technology. This is a far deeper well of profit because it is tied to productivity and growth, not to a temporary state of ignorance.

Once you lock something up, it stagnates. While you may make a marginal profit at the beginning by having a technology that no one else has, hiding it is not a sustainable profit model.

Transparency is evergreen. You don’t have to make special regulations for this technology or that technology. You just insist that all technologies and businesses follow clear and open rules.

The job of the regulator becomes much simpler and focused when it is targeted at opening doors to public scrutiny. Societies can enforce existing regulations much more easily because people can figure out what’s going on. Transparency smashes the hall of mirrors.

We need to use the technologies that currently exist and are being developed to police the technologies of the future. For instance, we can design AIs to look for patterns of suspicious activity in technological systems.

I’m not talking about policing the users. I’m talking about policing the algorithms. AIs could investigate companies suspected of nefarious practices.
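
To make this concrete, here is a minimal sketch, in Python, of what such an algorithmic audit might look like. Everything in it is hypothetical: score_content stands in for a platform’s proprietary ranking function, which a regulator would query through whatever access transparency rules mandate, and the audit simply checks whether ranking scores differ systematically across a single attribute.

import random
import statistics

def score_content(post: dict) -> float:
    # Hypothetical stand-in for a platform's proprietary ranking algorithm.
    # In a real audit, this would be the black-box system under investigation.
    return random.random()

def audit_disparity(posts: list[dict], attribute: str, threshold: float = 0.1) -> dict:
    # Group posts by the chosen attribute, score each one, and flag the case
    # where the gap between group averages exceeds the threshold.
    groups: dict[str, list[float]] = {}
    for post in posts:
        groups.setdefault(post[attribute], []).append(score_content(post))
    means = {value: statistics.mean(scores) for value, scores in groups.items()}
    gap = max(means.values()) - min(means.values())
    return {"group_means": means, "gap": gap, "flagged": gap > threshold}

if __name__ == "__main__":
    # Hypothetical sample: posts from "independent" and "partner" publishers.
    sample = [{"publisher": random.choice(["independent", "partner"])} for _ in range(1000)]
    print(audit_disparity(sample, "publisher"))

A real audit would probe many attributes and far subtler patterns, but the point stands: once a process is open to inspection, even a simple automated check can flag behavior worth investigating.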

These kinds of investigations could gradually reshape a destructive paradigm of exploitation if they are part of a culture of public watchfulness and are legally protected. This runs against current business culture.

Transparency represents a paradigm shift for American business. However, in the long run, it is a more profitable strategy for success.

This strategy of technology development gives open societies a competitive advantage over economies that refuse to follow suit. If we’re worried about AI competition from China, the best way to win that competition is to leverage advantages that closed societies find difficult to replicate. This is how we won the Cold War. This is how we can win going forward.

Transparency also represents a rallying cause for proponents of effective regulation. Right now, it’s easy, or at least it seems to be easy, for people to understand what they’re against, but very difficult to understand what they are for.

As a student of systems thinking, I understand how this challenges certain paradigms. Those paradigms are already being challenged by the march of technology.

Stewart Brand’s quote that leads off this blog is not inaccurate. Just because information wants to be free doesn’t mean it is.

We need to stop making a political prisoner out of information, or a revolution will occur in which that prisoner wreaks revenge on an unprepared society. It is humans and the systems that they create that pose the real danger in a world of technological amplification.

Open is progress. Closed stagnates. Let’s choose the open path.

This is the first in a series of blogs on the power of transparency in technology.

Idea Fences: How They Will Shape the Future of the AI World

“On one hand, information wants to be expensive, because it’s so valuable…. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.” – Stewart Brand (quoted in Steven Levy, Hackers, p. 360)

Why do we build fences? We build fences to control nature. We build them to protect scarcity. However, we build fences for intangible assets as well. We use them to control knowledge and information.

These are the kinds of fences that AI threatens. This is a good thing. We can either hoard knowledge or profit from wisdom. It’s our choice. One path is dangerous, the other is liberating. Universities will play a key role in deciding which path AI follows going forward.

Recently, AI has dominated debates in the media and academic discourse. Much of the rhetoric has been about how AI impacts the flows of information and knowledge in our societies and how it threatens certain fences.

We can take a lesson from the early hackers because this was also a central concern in the early years of the computing revolution. Steven Levy, in his classic book Hackers, describes a crucial fork in how we view information between closed and open systems. He writes that “crucial to the Hacker Ethic was the fact that computers, by nature, do not consider information proprietary” (Levy, p. 323).

The public face of AI follows the hacker ethic, at least in its execution. However, most of the AI systems in the news today have at their core the other side of Levy’s quote from Stewart Brand that led off this blog: they hide information to give it value. AI’s value, as seen by most AI companies, lies in its proprietary algorithms.

Which produces more value, the closed or the open approach to information? Without the Hacker Ethic we would never have had the personal computer, the internet, the graphical user interface, and a host of applications that have had an undeniable effect on augmenting human intellect. The proprietary model gave us competing operating systems, incompatible applications, proprietary algorithms, and fenced-off knowledge systems.

It is no accident that the initial crop of hackers emerged from the post-World War II mass university environment, particularly at MIT. Their ethic of knowledge distribution goes back at least as far as the Enlightenment. This ethos is central to what universities are. However, even within their respective universities, these hackers ran into systems that tried to contain their explorations.

Levy’s book is filled with stories of hackers running up against barriers as humble as physical locks and going around them to get what they needed. They did this out of a quest for knowledge, not profit, so the universities tolerated it (to a point).

To a hacker, a closed door is an insult, and a locked door is an outrage. Just as information should be clearly and elegantly transported within a computer, and just as software should be freely disseminated, hackers believed people should be allowed access to files or tools which might promote the hacker quest to find out and improve the way the world works. When a hacker needed something to help him create, explore, or fix, he did not bother with such ridiculous concepts as property rights. (Levy, p. 78)

These tensions never went away. Over the last 40 years, we’ve seen the gradual encroachment of closed technological systems on the digital idea space, even in academia. Hacking is now confined to specific “safe” places. The larger technological and information environment became more and more locked down as proprietary software platforms took over the digital world.

Despite these barriers, information insists on becoming more accessible. When I started my learning journey, I spent long hours sifting through stacks of books in various libraries. Now, I rarely go to a physical library. My first instinct is to see what I can find online. Usually, it’s enough to get the job done.

However, even here I am constantly confronted by locked doors. Some of them I can pick with my college’s subscriptions. Others prove too difficult, and I bypass them when I can’t access them. This choice has little to do with their intrinsic value as ideas and everything to do with their extrinsic value to a publisher.

The hacker in me finds these fences to be incredibly frustrating. I can say the same thing about opening files created by proprietary software I don’t own. In both cases, commercial forces have constructed fences around ideas.

Richard Stallman told Levy that, “American society is already a dog-eat-dog jungle, and its rules maintain it that way. We [hackers] wish to replace those rules with a concern for constructive cooperation.” (Levy, p. 360)

Universities are a nexus for constructive cooperation. They will be essential if we hope to solve the complex problems facing humanity. Constructive cooperation is also essential for the future development of AI.

On the surface, LLM AI weaponizes the Hacker Ethic. In its current state, it’s capable of raiding informational cupboards and reassembling them into weird creative stews. New knowledge happens when we reconstruct old information into unexpected paradigms. This has traditionally been the role of scholarship.

GPT helps me work my way out of creative funks by repurposing existing knowledge into novel pathways. It helps me play with ideas like a good scholarly article or debate does, but in a more dynamic fashion. AI doesn’t replace academic debate; it complements it.

Digital technologies have helped us play with ideas in ways that were difficult, if not impossible, before the digital world. Play creates knowledge by letting us explore alternative realities. AI is just the latest toy in our idea-generating cupboard.

Every technology brings with it dangers. AI is no exception, although I think we often overestimate just how much has changed. The danger in AI comes from hiding the “valuable” algorithms that drive its creations.

This is not a new debate. My chapter “Living in the Panopticon” in Discovering Digital Humanity argues that the problem with these algorithms isn’t the technology but their lack of transparency.

The companies that control large swathes of the social media and AI landscape see value in hiding the calculations that take place within these algorithms. They use them to manipulate our preferences. Generative AI has the same potential.

This brings us back again to Stewart Brand’s dichotomy between free and valuable information. By treating these algorithms as a “proprietary business secret”, we are introducing a false scarcity into the equation.

Companies can use algorithms to facilitate work or to track and direct human activity (or both). However, without access to their workings, we will never know what they were designed for.

The dangers of AI do not lie in the technology’s advancement itself. The dangers lie in the “proprietary business secret” aspect of their development. This is where government regulation should concentrate.

Trying to control the technology is futile and counterproductive. However, we can insist that the underlying processes be transparent. Those who argue that this would eliminate the business logic that drives its development miss the glaring example of the effects of the open development of digital technology since the 1960s.

Creative platforms must be open if they are to work collaboratively. If we’re worried about competing with other actors like the Chinese, we must have more faith in the power of openness in driving knowledge development.

Sure, the Chinese will have access to that information too, but we’ll be better at figuring out how to use it. Open systems of knowledge have an inherent collaborative advantage, a central feature of all innovation.

Universities are natural places to create that kind of environment. We just need to nurture the chaos, much like MIT, Stanford, and other institutions did for the computer hackers in the 1960s. Systems create realities. Open systems will create open realities. On one path lies danger, on the other, progress. Technology is not the deciding factor here, humans are.
