Month: July 2023

Transparency: Seeing a Wider World

I argued last week that one of the chief challenges in today’s world is a lack of transparency in our complex systems of information. However, even if we regulate to create transparency, we will need to develop tools for navigating the resultant deluge of new information.

We have those tools at hand, but most of us don’t use them. We miss opportunities, such as using the tools of mapmaking to help us learn. Instead, we persist in the learned behavior of following textual narrative pathways.

Learning and innovation are about perceiving information in new ways. We miss so much context when we never deviate from linear textual narratives.

Visualizing our ideas changes what we see. It allows us to cut through information soup and perceive what’s important in complex systems.

Humans excel at pattern recognition. “The speed of this kind of human visual processing contrasts dramatically with relatively slow and error-prone performance in strictly logical analysis (such as mathematics).” (MacEachren and Ganter, 1990, p. 67) Pattern recognition is the human strength that visualization unlocks.

I use maps to help my students grasp complex ideas and patterns as they navigate the political systems of Texas and the United States. We can use the same technique to navigate the complex information environment that we face today.

In Discovering Digital Humanity, I noted that many groups found their voice because of the democratization of the network through technology. We heard voices that had been ignored, relegated, or suppressed because they no longer had to pass through the filters of “mass” media. However, that same technology democratized the spread of misinformation and the deliberate manipulation of information streams.

All stories lack context. We have been conditioned through our use of text in education to accept linear, text-based narratives as being the most legitimate forms of communication. Challenges to that supremacy, such as comics or “hot” McLuhanesque media, have traditionally been characterized as less legitimate.

Text introduces a set of blinders. A careful author will list sources to describe the constellation of ideas that influenced their book or article. This is a good start, but it’s still looking outward from inside a linear narrative. The choices an author makes are never linear, but once they are committed to text, they look that way.

In the last blog, I advocated for a high level of transparency as the first step toward gaining traction on the complex problems facing our societies, both inside and outside of technology. This is only the first step. Assuming for a moment that this strategy is effective, what we have done is unlock a whole new stack of information to add to the flood we are already confronted with.

Most of this information will be textual. A stack of papers or even PDF files is not transparent. Search tools are very useful for digital documents, but they also have limitations if you don’t know the words to search for. Adding AI assistants will help a lot, but visualizations of complex systems will reveal hidden patterns.

I have been using concept mapping for over a decade to decode my own thoughts. I also use it in brainstorming and teaching activities as a mechanism for discovering new ideas and fostering their exploration.

It is one thing to write down your thoughts, and doing so has great value, but I have pages of ideas and notes buried on my hard drive or in a box somewhere. I have forgotten what was in many of them. I can’t even keep up with the stuff I’ve published half the time.

Article List: a map of the blogs and articles I’ve written since 2022 and how they relate to a model of thinking about technology that I developed last year.

There is a connective tissue to all of this, but I often take that for granted because it is so implicit in the way I approach the world. However, I like to think that my thinking evolves. Understanding that evolution (and my shifting biases), along with the common elements of my thinking, leads me to new ideas that do not follow a linear path.

A recent attempt to connect and map my thinking

When I’m working with groups, I have the same problem but multiplied. Now I am mapping the collective evolution of multiple minds. As MacEachren and Ganter point out, this has deep neurocognitive roots: “[visualization] utilizes ‘preconscious’ processes to sort out patterns before conscious (i.e., logical) processing of the information is required.”

The technology to create these kinds of maps was once limited to a small group who developed special talents and were blessed with a high level of artistic skill. My friend, Karina Branson, does amazing work capturing the thoughts of groups visually, but she possesses unique cognitive and dexterous talents.

For the last decade, however, we have had tools available to us that allow us to create our own cognitive maps. They are much more accessible.

We no longer need technical proficiency with pen and ink to create the graphics that help us perceive patterns in information.

Karina was one of the people who exposed me to Miro, which adds a collaborative element to concept mapping. Miro made my pivot to remote teaching possible because it opened so many possibilities for collaborative active learning and seeing.

A course map I created in Miro to help my students navigate my US Government class

The pandemic further narrowed our vision. Suddenly, we were communicating through digital pinholes, whether we were teaching classes or conducting business. Many bemoaned the loss of context and interactivity that this process of “Zoomification” created.

There is a power and spontaneity in having groups of humans gather in a physical space and toss ideas around. There are limitations to that model as well. We lose a lot in the process of debate. We leave good ideas on the table. Very little actionable material remains after the fact unless the brainstorming activity is well-designed and structured.

Tools like Miro, however, create a persistent object that users can access asynchronously. This is a power that I have been using for concept mapping my own ideas for years. Now groups can go back and look at where they were going yesterday or last week. They can also change it.

This is an incredibly useful teaching tool. But almost no one uses it.

However useful this tool is for mapping our own ideas, we can also use it to map complex systems, allowing us to see outside of single narratives much more clearly. Imagine teaching US history, for instance, as an interlinked concept map rather than a linear narrative. You wouldn’t have to choose between competing narratives; you could see them all.

Teaching would be about finding the connections between narratives and understanding how context produces bias. Instead of a textbook, we would have a visual map to guide us.

These maps could also guide us in understanding complex systems, such as climate change or computational algorithms. For instance, to regulate AI and social media algorithms, companies could be forced to provide visual maps detailing their sources of information and how they connect them to create text, graphics, or media streams.
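To make this concrete, here is a minimal sketch in Python of what such a machine-readable “source map” might look like. Everything in it (the class, the node labels, the toy recommendation feed) is a hypothetical illustration rather than any company’s actual disclosure format; the point is only that a published graph of sources and connections can be rendered visually by standard tools.

```python
# A hypothetical "source map": nodes are information sources or processing
# steps, and edges describe how one feeds into another. All names below are
# invented placeholders for illustration.
from dataclasses import dataclass, field


@dataclass
class SourceMap:
    nodes: dict = field(default_factory=dict)   # node id -> description
    edges: list = field(default_factory=list)   # (from_id, to_id, relation)

    def add_source(self, node_id: str, description: str) -> None:
        self.nodes[node_id] = description

    def connect(self, src: str, dst: str, relation: str) -> None:
        self.edges.append((src, dst, relation))

    def to_dot(self) -> str:
        """Emit Graphviz DOT text so the map can be drawn as a diagram."""
        lines = ["digraph sources {"]
        for node_id, desc in self.nodes.items():
            lines.append(f'  {node_id} [label="{desc}"];')
        for src, dst, rel in self.edges:
            lines.append(f'  {src} -> {dst} [label="{rel}"];')
        lines.append("}")
        return "\n".join(lines)


# Toy example: disclosing what feeds a recommendation stream.
m = SourceMap()
m.add_source("licensed_news", "Licensed news articles")
m.add_source("user_history", "User click history")
m.add_source("ranker", "Feed-ranking model")
m.connect("licensed_news", "ranker", "training data")
m.connect("user_history", "ranker", "personalization signal")
print(m.to_dot())
```

Rendered with an ordinary graph-drawing tool, even this tiny example becomes a picture that a regulator, a teacher, or a student can trace, which is the whole point of a map.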

As Erasmus wrote 500 years ago, “in the kingdom of the blind, the one-eyed man is king.” We are the kingdom of the blind.

The one-eyed men of today are those that control the narrative, but even they only see imperfectly. Visual mapping can open many eyes and democratize our stories in the process.

We have it in our power to restore our sight, but it will require tools we haven’t grasped yet. These tools are out there. With them, we can map the information we already have and see the context and complexity that we may be missing.

Transparency means nothing without context. Maps provide context.

Next Up: The Promise of Layered Concept Mapping

Transparency: A Way to Regulate Technology

“On one hand, information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”
– Stewart Brand (quoted in Levy, Hackers, p. 360)

Debates over AI have dominated the first half of 2023. The US Congress held hearings, and regulation has been demanded from many quarters. This kind of furor is not a new thing. We have been debating similar regulations for social media for years now.

The technology community has often reacted to these debates with a mixture of fear and derision. If legislators who don’t understand the difference between Twitter and Wi-Fi were to create regulations about social media or AI algorithms, the results would likely be hilarious but also harmful to innovation.

The speed of politics is also out of sync with the speed of technological change. Technology moves fast, while most political systems are slow by design. It is hard to imagine regulations that aren’t outdated even before they are discussed in a congressional committee, much less implemented.

Even enforcing existing regulations is a challenge for bureaucracies and law enforcement. Microsoft was punished for Internet Explorer long after the questions that prompted the original antitrust lawsuit had become irrelevant.

It’s important to step back from technology and ask more fundamental questions about what’s really going on here and where the roots of potential abuses lie. We live in a complex world with lots of moving pieces and rapidly shifting environments. It’s often difficult to see the core issues through the confusing noise. Ironically, it is here that an AI designed around information legitimacy could be transformative.

Transparency should be a cornerstone of all technology regulations. A recurring theme I hear in the AI debate is that we don’t know what’s going on. Some of that is internal to the technology itself (which doesn’t make it undiscoverable), but much of that is hidden behind the veil of corporate secrets.

In the AI world, almost no one understands how ChatGPT gets from a query to an answer because OpenAI locks almost all of that process in a proprietary box. In the social media world, the same is true for the algorithms that bias newsfeeds on Facebook and Twitter.

My dog is smart enough to understand that if something interesting is going on underneath a blanket, the solution is to pull the blanket off. It’s high time we pulled the blanket off these processes.

Ignorance is a way to make money. If you don’t know that you can get a product down the street for less money, I can sell you that same product at a higher profit to myself. The digital age has threatened this tried-and-true practice: if you can access all of that information on your computer or phone, my profit potential shrinks.

Over the last 30 years, economic actors have had to adjust to this reality. The solutions have ranged from adding layers of complexity to a product, making it difficult to compare with other products, to creating proprietary black boxes that conceal some sort of “secret sauce.”

Much of today’s industry, and I include the tech industry in this, operates behind a hall of mirrors as a way of protecting profit. Complexity has replaced scarcity as a profit screen. The practice of deception hasn’t changed.

As those of you who have read my work know, I am a vigorous proponent of technology as a creativity enabler. When I spend half my time dodging around unseen obstacles in various platforms as I try to create, I waste valuable creative time.

Imagine being a regulator trying to decode the code that drives these processes. AI could streamline that decoding; it’s good at looking for patterns and connections.

As I discussed in a previous blog, AI is a powerful tool, but it needs to be supported by open systems in order to fulfill its potential. Systems of openness are a key to keeping AI systems firmly in check.

Those who say that innovation will be crippled if we create open systems haven’t read the literature on what drives innovation. Generative AI is also showing that hiding stuff is a fool’s errand as AI crawlers find their way into more and more systems.

There are plenty of ways to generate profit from open systems. As systems become more complex, even if they are open, people will need help to maximize their own use of technology. This is a far deeper well of profit because it is tied to productivity and growth, not to a temporary state of ignorance.

Once you lock something up, it stagnates. While you may make a marginal profit at the beginning by having a technology that no one else has, hiding it is not a sustainable profit model.

Transparency is evergreen. You don’t have to make special regulations for this technology or that technology. You just insist that all technologies and businesses follow clear and open rules.

The job of the regulator becomes much simpler and focused when it is targeted at opening doors to public scrutiny. Societies can enforce existing regulations much more easily because people can figure out what’s going on. Transparency smashes the hall of mirrors.

We need to use the technologies that currently exist and are being developed to police the technologies of the future. For instance, we can design AIs to look for patterns of suspicious activity in technological systems.

I’m not talking about policing the users. I’m talking about policing the algorithms. AIs could investigate companies suspected of nefarious practices.
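As a toy illustration of what policing the algorithms rather than the users could mean in practice, here is a hedged sketch in Python. The black-box recommend function, the synthetic user groups, and the 10% threshold are all invented for the example; a real audit would query the actual system through a legally mandated interface and use properly designed statistics.

```python
# A toy audit of an algorithm's outputs rather than its users. The black box,
# the groups, and the threshold below are invented placeholders.
import random


def recommend(profile: dict) -> list[str]:
    # Placeholder standing in for the black-box system under audit.
    ad_rate = 0.3 if profile["group"] == "A" else 0.6
    return ["ad" if random.random() < ad_rate else "news" for _ in range(100)]


def audit(groups: list[str], trials: int = 50) -> dict[str, float]:
    """Compare how often each synthetic group is shown ads."""
    rates = {}
    for g in groups:
        feeds = [recommend({"group": g}) for _ in range(trials)]
        rates[g] = sum(feed.count("ad") for feed in feeds) / (trials * 100)
    return rates


rates = audit(["A", "B"])
print(rates)
if max(rates.values()) - min(rates.values()) > 0.10:  # arbitrary threshold
    print("Flag for human review: groups are seeing very different content.")
```

The hard work, of course, lies in the statistics and in the legal right to query the system at all; the sketch simply shows that the object of the investigation is the algorithm’s behavior, not the people who use it.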

These kinds of investigations could gradually reshape a destructive paradigm of exploitation if they are part of a culture of public watchfulness and are legally protected. This runs against current business culture.

Transparency represents a paradigm shift for American business. However, in the long run, it is a more profitable strategy for success.

This strategy of technology development gives open societies a competitive advantage over economies that refuse to follow suit. If we’re worried about AI competition from China, the best way to win that competition is to leverage advantages that closed societies find difficult to replicate. This is how we won the Cold War. This is how we can win going forward.

Transparency also represents a rallying cause for proponents of effective regulation. Right now, it’s easy, or at least it seems to be easy, for people to understand what they’re against, but very difficult to understand what they are for.

As a student of systems thinking, I understand how this challenges certain paradigms. Those paradigms are already being challenged by the march of technology.

Stewart Brand’s quote that leads off this blog is not inaccurate. Just because information wants to be free doesn’t mean it is.

We need to stop making a political prisoner out of information, or a revolution will occur in which that prisoner wreaks revenge on an unprepared society. It is humans and the systems that they create that pose the real danger in a world of technological amplification.

Open is progress. Closed stagnates. Let’s choose the open path.

This is the first in a series of blogs on the power of transparency in technology.

Idea Fences: How They Will Shape the Future of the AI World

“On one hand, information wants to be expensive, because it’s so valuable…. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.” – Stewart Brand (quoted in Levy, Hackers, p. 360)

Why do we build fences? We build fences to control nature. We build them to protect scarcity. However, we build fences for intangible assets as well. We use them to control knowledge and information.

These are the kinds of fences that AI threatens. This is a good thing. We can either hoard knowledge or profit from wisdom. It’s our choice. One path is dangerous, the other is liberating. Universities will play a key role in deciding which path AI follows going forward.

Recently, AI has dominated debates in the media and academic discourse. Much of the rhetoric has been about how AI impacts the flows of information and knowledge in our societies and how it threatens certain fences.

We can take a lesson from the early hackers because this was also a central concern in the early years of the computing revolution. Steven Levy, in his classic book Hackers, describes a crucial fork in how we view information between closed and open systems. He writes that “crucial to the Hacker Ethic was the fact that computers, by nature, do not consider information proprietary” (Levy, p. 323).

The public face of AI follows the hacker ethic, at least in its execution. However, most of the AI systems in the news today have at their core the other side of Levy’s quote from Stewart Brand that led off this blog: they hide information to give it value. AI’s value, as seen by most AI companies, lies in its proprietary algorithms.

Which produces more value, the closed or the open approach to information? Without the Hacker Ethic, we would never have had the personal computer, the internet, the graphical user interface, and a host of applications that have had an undeniable effect on augmenting human intellect. The proprietary model gave us competing operating systems, incompatible applications, proprietary algorithms, and fenced-off knowledge systems.

It is no accident that the initial crop of hackers emerged from the post-World War II mass university environment, particularly at MIT. Their ethic of knowledge distribution goes back at least as far as the Enlightenment. This ethos is central to what universities are. However, even within their respective universities, these hackers ran into systems that tried to contain their explorations.

Levy’s book is filled with stories of hackers running up against barriers as humble as physical locks and going around them to get what they needed. They did this out of a quest for knowledge, not profit, so the universities tolerated it (to a point).

To a hacker, a closed door is an insult, and a locked door is an outrage. Just as information should be clearly and elegantly transported within a computer, and just as software should be freely disseminated, hackers believed people should be allowed access to files or tools which might promote the hacker quest to find out and improve the way the world works. When a hacker needed something to help him create, explore, or fix, he did not bother with such ridiculous concepts as property rights. (Levy, p. 78)

These tensions never went away. Over the last 40 years, we’ve seen the gradual encroachment of closed technological systems on the digital idea space, even in academia. Hacking is now confined to specific “safe” places. The larger technological and information environment became more and more locked down as proprietary software platforms took over the digital world.

Despite these barriers, information insists on becoming more accessible. When I started my learning journey, I spent long hours sifting through stacks of books in various libraries. Now, I rarely go to a physical library. My first instinct is to see what I can find online. Usually, it’s enough to get the job done.

However, even here I am constantly confronted by locked doors. Some of them I can pick with my college’s subscriptions. Others are too difficult, and I bypass them when I can’t get access. This choice has little to do with their intrinsic value as ideas and everything to do with their extrinsic value to a publisher.

The hacker in me finds these fences to be incredibly frustrating. I can say the same thing about opening files created by proprietary software I don’t own. In both cases, commercial forces have constructed fences around ideas.

Richard Stallman told Levy that, “American society is already a dog-eat-dog jungle, and its rules maintain it that way. We [hackers] wish to replace those rules with a concern for constructive cooperation.” (Levy, p. 360)

Universities are a nexus for constructive cooperation. They will be essential if we hope to solve the complex problems facing humanity. Constructive cooperation is also essential for the future development of AI.

On the surface, LLM AI weaponizes the Hacker Ethic. In its current state, it’s capable of raiding informational cupboards and reassembling them into weird creative stews. New knowledge happens when we reconstruct old information into unexpected paradigms. This has traditionally been the role of scholarship.

GPT helps me work my way out of creative funks by repurposing existing knowledge into novel pathways. It helps me play with ideas like a good scholarly article or debate does, but in a more dynamic fashion. AI doesn’t replace academic debate; it complements it.

Digital technologies have helped us play with ideas in ways that were difficult, if not impossible, before the digital world. Play creates knowledge by letting us explore alternative realities. AI is just the latest toy in our idea-generating cupboard.

Every technology brings with it dangers. AI is no exception, although I think we often overestimate just how much has changed. The danger in AI comes from hiding the “valuable” algorithms that drive its creations.

This is not a new debate. My chapter “Living in the Panopticon” in Discovering Digital Humanity argues that the problem with these algorithms isn’t the technology but their lack of transparency.

The companies that control large swathes of the social media and AI landscape see value in hiding the calculations that take place within these algorithms. They use them to manipulate our preferences. Generative AI has the same potential.

This brings us back again to Stewart Brand’s dichotomy between free and valuable information. By treating these algorithms as a “proprietary business secret”, we are introducing a false scarcity into the equation.

Companies can use algorithms to facilitate work or to track and direct human activity (or both). However, without access to their workings, we will never know what they were designed for.

The dangers of AI do not lie in the technology’s advancement itself. The dangers lie in the “proprietary business secret” aspect of their development. This is where government regulation should concentrate.

Trying to control the technology is futile and counterproductive. However, we can insist that the underlying processes be transparent. Those who argue that this would eliminate the business logic that drives its development miss the glaring example of the effects of the open development of digital technology since the 1960s.

Creative platforms must be open if they are to enable collaborative work. If we’re worried about competing with other actors like the Chinese, we must have more faith in the power of openness in driving knowledge development.

Sure, the Chinese will have access to that information too, but we’ll be better at figuring out how to use it. Open systems of knowledge have an inherent collaborative advantage, a central feature of all innovation.

Universities are natural places to create that kind of environment. We just need to nurture the chaos, much like MIT, Stanford, and other institutions did for the computer hackers in the 1960s. Systems create realities. Open systems will create open realities. On one path lies danger, on the other, progress. Technology is not the deciding factor here, humans are.
