Information Architecture in the Age of AI, Part 4: The IA-powered AI Future
or: I, For One, Welcome Our New Robot Overlords
This is the fourth in a four-part series of posts on IA and AI, mainly inspired by talks at IAC24. Read Part 1, Part 2, and Part 3.
“As artificial intelligence proliferates, users who intimately understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock AI’s full innovative potential.” -Ethan Mollick, Co-Intelligence
As we’ve seen in the previous posts in this series, AI is seriously useful but potentially dangerous. As Kat King put it, AI is sometimes a butter knife and sometimes a bayonet.
It’s also inevitable. As Bob Kasenchak pointed out at IAC24, AI has become a magnet for venture capital and investment, and companies are experiencing major FOMO; nobody wants to be left behind as the GenAI train pulls out of the station.[1]
So, if that’s true, what do we do about it? Specifically as information architects: what does the IA practice have to say about an AI-spiced future? I think we need to do what we’ve always done: make the mess less messy and bring deep, systemic thinking to AI-ridden problems.
In short, we need to:
- Define the damned thing
- Make AI less wrong
- Champion AI as a tool for thinking, not thought
Define the damned (AI) thing
As I suggested in Part 1 of this series, AI needs to be better understood, and no one is better at revealing hidden structures and defining complex things than information architects.[2] Let’s work as a community to understand AI better so that we can have and encourage better conversations around it.
To do that, we need to define the types of AI, the types of benefits, the types of downsides, and the use cases for AI. In addition, easily accessible catalogs of prompts and contexts, lists of AI personas, taxonomies for describing images, and so forth would help users interact more effectively with LLM-based AI agents.
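As a toy illustration of the “types of AI” piece, here’s what a first pass at such a taxonomy might look like as a simple data structure. Every category, definition, and risk below is my own placeholder, not an authoritative classification:

```python
# A toy AI taxonomy as a nested Python dict. All categories,
# definitions, examples, and risks are illustrative placeholders.
AI_TAXONOMY = {
    "Generative AI": {
        "definition": "Models that produce new content (text, images, audio).",
        "examples": ["large language models", "diffusion image models"],
        "risks": ["hallucinated facts", "confident-sounding errors"],
    },
    "Predictive AI": {
        "definition": "Models that score or classify existing data.",
        "examples": ["spam filters", "demand forecasting"],
        "risks": ["biased training data", "opaque decisions"],
    },
}

def describe(term: str) -> str:
    """Return a one-line gloss, so we stop saying just 'AI'."""
    entry = AI_TAXONOMY.get(term)
    if entry is None:
        return f"{term}: undefined (go define the damned thing)"
    return f"{term}: {entry['definition']}"

print(describe("Generative AI"))
```

Even a structure this small forces the useful questions: which distinctions actually matter, and which risks attach to which type?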
Here’s a starter list of things any of us could do right now:
- Figure out your own personal taxonomy of AI. Let’s talk about GenAI or Artificial General Intelligence or Large Language Models as appropriate. Don’t fall into the trap of saying just “AI” when you want to talk about a specific AI technology. To get you started with some AI taxonomies in progress, try here, here, and here.
- Get clear on the risks of AI, and the nuanced risks for each type of AI. Define what AI does well, and what it does badly. Follow Ethan Mollick, Kurt Cagle, Mike Dillinger, and Emily Bender, for a start. They’ll point the way to more experts in the field.
- Talk with clients and colleagues about AI definitions. Help them get clear on what they mean when they say “AI.”
- Help out with projects like “Shape of AI,” which is creating a pattern library of AI interactions.[3] For instance, GenAI interfaces can’t just be an empty text box. IAs know that browse behavior is a necessary complement to search behavior. How do we ensure that’s part of a GenAI experience?
- Create and distribute resources like this ChatGPT Cheat Sheet to help people get more out of GenAI experiences.
- Think about what a resource that lists and evaluates AI use cases might look like. How might we help people understand how to use AI better?
Make AI less wrong
Y’all, this is our moment. As IAs, I mean: the world needs us now more than ever. LLMs are amazing language generators, but they have a real problem with veracity; they make up facts. For some use cases, this isn’t a big issue, but there are many other use cases where there’s real risk in having GenAI make up factually inaccurate text.
The way to improve the accuracy and trustworthiness of AI is to give it a solid foundation. Building a structure to support responsible and trustworthy AI requires tools that IAs have been building for years. Meaning things like the following (with a rough code sketch after the list):
- Ontology: an accurate representation of the world, which feeds:
- Knowledge graphs: structured, well-attributed, and well-related content, which needs:
- Content strategy: understanding what content is needed, what’s inaccurate or ROTting, what’s missing, and how to create and update it, which needs:
- User experience: to understand what the user needs and how they can interpret and use AI output.
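To make that stack a bit more concrete, here’s a minimal sketch in Python using rdflib (a real RDF library; everything else here, the vocabulary, the facts, and the prompt wording, is invented for illustration). The shape is the point: the ontology declares what kinds of things exist, the knowledge graph holds attributed facts, and the GenAI prompt is constrained to those facts instead of being left to free-associate:

```python
# Minimal sketch: ontology -> knowledge graph -> grounded prompt.
# Uses rdflib; the EX vocabulary and all facts are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Ontology: a (tiny) accurate representation of the world.
g.add((EX.Policy, RDF.type, RDFS.Class))

# Knowledge graph: structured, attributed facts.
g.add((EX.ReturnPolicy, RDF.type, EX.Policy))
g.add((EX.ReturnPolicy, RDFS.label, Literal("Return policy")))
g.add((EX.ReturnPolicy, EX.windowDays, Literal(30)))
g.add((EX.ReturnPolicy, EX.source, Literal("policy-handbook-v7")))

def grounding_context(subject) -> str:
    """Serialize what we actually know about a subject, so the LLM
    summarizes vetted facts instead of inventing them."""
    facts = [
        f"{p.n3(g.namespace_manager)} {o}"
        for _, p, o in g.triples((subject, None, None))
    ]
    return "\n".join(facts)

prompt = (
    "Answer using ONLY the facts below; say 'unknown' otherwise.\n"
    + grounding_context(EX.ReturnPolicy)
)
print(prompt)  # in a real system, this prompt goes to the LLM
```

This is retrieval-augmented generation in miniature, and the IA work (modeling, attribution, content hygiene) is exactly what makes the retrieval trustworthy.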
As Jeffrey MacIntyre said at IAC24, “Structured data matters more than ever.” As IAs, our seat at the AI table is labeled “data quality.”
To get there, we need to define the value of data quality so that organizations understand why they should invest in it. At IAC24, Tatiana Cakici and Sara Mae O’Brien-Scott from Enterprise Knowledge gave us some clues to this when they identified the values of the semantic layer as enterprise standardization, interoperability, reusability, explainability, and scalability.
As an IA profession, we know this is true, but we’re not great at talking about these values in business terms. What’s the impact on the bottom line of interoperable or scalable data? Defining this will solidify our place as strategic operators in an AI-driven world. (For more on how to describe the value of IA for AI, pick up IAS18 keynoter Seth Earley’s book “The AI-Powered Enterprise,” and follow Nate Davis, who’s been thinking and writing about the strategic side of IA for years.)
Finally, as Rachel Price said at IAC24, IAs need to be the “adults in the room” ensuring responsible planning of AI projects. We’re the systems thinkers, the cooler heads with a long-term view. In revealing the hidden structures and complexities of projects, we can help our peers and leaders recognize opportunities to build responsible projects of all kinds.[4]
AI as a tool for thinking, not thought
In 1968, Robert S. Taylor wrote a paper titled “Question-Negotiation and Information Seeking in Libraries.” In it, he proposed a model for how information-seekers form a question (an information need) and how they express that to a reference librarian. Taylor identified the dialog between a user and a reference librarian (or a reference system) as a “compromise.” That is, the user with the information need has to figure out how to express their need in a way that the librarian (or system) can understand. This “compromised” expression may not perfectly represent the user’s interior understanding of that need. But through the process of refining that expression with the librarian (or the system), the need may become clarified.
This is a thinking process. The user and the librarian both benefit from the process of understanding the question, and knowledge is then created that both can use.
In his closing keynote at IAC24, Andy Fitzgerald warned us that “ChatGPT outputs things that LOOK like thinking.” An AI may create a domain model or a flow chart or a process diagram or some other map of concepts; but without the thinking process behind them, are they truly useful? People still have to understand the output, and understanding is a process.
As Andy pointed out, the value of these models we create is often the conversations and thinking that went into the model, not the model itself. The model becomes an external representation of a collective understanding; it’s a touchstone for our mental models. It isn’t something that can be fully understood without context. (It isn’t something an AI can understand in any sense.)
AI output doesn’t replace thinking. “The thinking is the work,” as Andy said. When you get past the hype and look at the things that Generative AI is actually good for – summarizing, synthesizing, proofreading, getting past the blank page – it’s clear that AI is a tool for humans to think better and faster. But it isn’t a thing that thinks for us.
As IAs, we need to understand this difference and figure out how to ensure that end users and people building AI systems understand it, too. We have an immensely powerful set of tools emerging into mainstream use. We need to figure out how to use that power appropriately.
I’ll repeat Ethan Mollick’s quote from the top of this post: “As artificial intelligence proliferates, users who intimately understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock AI’s full innovative potential.”
If we understand AI deeply and well, we can limit its harm and unlock its potential. Information Architecture is the discipline that understands information behavior, information seeking, data structure, information representation, and many other things that are desperately needed in this moment. We can and should apply IA thinking to AI experiences.
Epilogue
I used ChatGPT to brainstorm the title of this series and, of course, to generate the images for each post. Other than that, I wrote all the text without using GenAI tools. Why? I’m sure my writing could have been improved, and these posts could have been a lot shorter, had I fed all this into ChatGPT. I know it still would have been good, even. But it wouldn’t have been my voice, and I don’t think I would have learned as much.
That’s not to say I have a moral stance or anything against using AI tools to produce content. It’s just a choice I made for this set of posts, and I’m not even sure that it was the right one. After all, I’m kind of arguing in this series that the responsible use of AI is what we should be striving for, not that using it is bad or not using it is good (or vice versa). But I guess, as I said above, echoing Andy Fitzgerald, I wanted to think through this myself, to process what I learned at IAC24. I didn’t want to just crank out some text.
I do believe that with the rise of AI-generated text and machine-generated experiences, there’s going to be an increasing demand for authentic human voices and perspectives. You can see, for example, how search engines are becoming increasingly useless these days as more AI-generated content floods the search indexes. Human-curated information sources may become more sought-after as a result.
Look, a lot of content doesn’t need to be creative or clever. I think an AI could write a pretty competent Terms of Service document at this point. No human ever needs to create another one of those from scratch. But no GenAI is ever going to invent something truly new, or have a new perspective, or develop a unique voice. And it is never going to think. Only humans can do that. That’s still something that’s valuable.
So, use GenAI. Use it a lot. Experiment with it and figure out what it’s really good at. I think that’s our responsibility with these tools: to understand them. But don’t forget to use your own voice, too. No AI is going to replace you, but it might just make you think faster and better. Understanding AI and using it to make a better you… that’s the best of all outcomes.
1. Bob wanted us to pump the brakes on building new AI experiences, but I think that’s pretty unlikely at this point. Sorry, Bob. ↩
2. Our inability to define our own profession aside, of course. Exception that proves the rule, etc. ↩
3. I learned about this project on Jorge Arango’s podcast, The Informed Life. If you’re not already following Jorge, what are you waiting for? ↩
4. Rachel’s full talk is available on her site. ↩