This week I’m sharing an interview I had with fellow UH student Philip Jones. Philip made enough intelligent comments about AI in class for me to want to pick his brain, and of course I love sharing this kind of thing with my wonderful readers. Some of this has a staying-up-way-too-late-in-the-dorm-talking kind of vibe, in the best way. Enjoy!
Introduction
Philip introduces himself, talks about his work as a Responsible AI Business Strategist at Salesforce[1], building a business model on trust, the ubiquity of 5ish-point responsible AI frameworks, and how he found his way from a pre-law track into technology consulting and change management.
Epistemology
Here I derail the conversation to ask Philip about core philosophy, and we talk about practical epistemology, the role of language in thought and imagination, and how our worldviews influence our response to change. Then I pull it back by asking how AI will affect epistemology, and we discuss the role of recommendation algorithms, generative AI “closing the loop” by enabling the creation of perfect echo chambers (Philip calls them “echo helmets”) that leave no room for a moderating influence, and how the existence of these tools further erodes a shared reality.
Trust and Identity
Here Philip shares a few possibilities and weak signals pointing toward a solution to our current and emerging problems of trust: the re-emergence of deep human specialization with AI creating the connections, the power of humility, the likelihood of a verification arms race, and the unequal global distribution of technology. This gets deep into questions of human identity and the emotional reactions we deploy to protect our sense of self. The public reaction to the UnitedHealthcare CEO shooting comes up as an example of latent energy building toward change in how society is organized, with information flow as a potential leverage point for social systems.
AI in Healthcare
Philip shares opportunities for AI to help patients and families identify rare diseases, identify new uses for drugs, and create new opportunities for simulation. We also talk about the future of interactions between companies and other parties via an AI layer.
Progress in AI: Reasons for Short-Term Optimism
Despite my meandering opening question, this is an interesting discussion about measuring AI performance, and about how AI being a general-purpose technology, a natural-language interface to all of our technology, means that change will be slow at first and difficult to predict once it picks up.
Progress in AI: Reasons for Short-Term Skepticism
The importance of reasoning across multiple perspectives, de Bono’s Six Thinking Hats, AI agents (or maybe we just mean personas), the difference between reasoning and aggregating reason, and how tools such as Python and calculators can help fill the gaps in large language model capabilities.
Longer Horizons for AI Futures
On future possibilities of AI as a simulation tool for evaluating policies and interventions, how most of the investment will almost certainly be funneled into ad targeting, and how the materiality of use cases determines the speed of adoption.
Wrapping Up
I really appreciate Philip taking the time to talk, and I hope we get more opportunities to work together. I enjoyed his company and we actually ended up chatting about all kinds of things for a while after the interview. If you get the chance to be a part of the University of Houston program, don’t sleep on the opportunity to connect with all the fascinating and brilliant people you’ll be associated with[2].
[1] Salesforce has its own futures team, led by Peter Schwartz, but that’s separate from Philip’s work.
[2] This is probably good advice in general, but I can vouch for UH.