Associate Professor, Department of Philosophy
Director, Centre for Digital Philosophy
Co-Director, PhilPapers Foundation
University of Western Ontario
A Q&A between David Bourget, director of the Centre for Digital Philosophy and philosophy professor at Western University, and Eric Piper, senior editor at Wiley. Topics discussed include exciting new tools and methods that can change the nature of philosophical scholarship, the possibilities and ethics of artificial intelligence, and how Bourget developed a career combining philosophy and computing.
Q: Your academic interests combine both philosophy and computing. How did you first begin to think about bringing these areas together?
I’ve been struggling to bring these interests together for much of my life. When I was 10 or 11, something I saw on TV made me ask my dad if there are limits to what computers can do. He told me that the only limit is how well they are programmed. This idea—that computers can do anything if programmed right—stayed with me and drove a lifelong fascination with computers (even after I learned about the theoretical limits of computation, which don’t really impinge on the practical import of the idea). A few years down the road my parents bought a computer, I found myself a QuickBasic programming book, and I was off. But I didn’t spend my teenage years blissfully immersed in techy concerns. Not long after I picked up programming, I had a major existential crisis, as many kids that age do. It all started when I saw one of those multiyear calendars (I think it was a 10-year calendar) and realized that my whole life would be just a few of those. It was like I woke up for the first time, and life was tragic. I started seeking the meaning of it all. This led me down a tortuous path from religion to new age to Enlightenment philosophy to contemporary analytic philosophy. I emerged as fascinated by philosophical questions as I was by computers. When the time came to choose a career path, I was hugely anxious about having to choose between my two passions. I couldn’t see how to reconcile them, and nobody I asked had anything to say about it.
The choice I had to make was drastic because I couldn’t even do something like a combined degree—not at my home university in Quebec City. For this I would have had to go out of town, which I didn’t have the money to do. So, I ended up picking a career in computing with an eye to possibly doing philosophy after. It made sense to study computer science first because it would enable me to find a good job. By the time I reached 3rd year, I was itching to go back to philosophy. I ended up visiting the University of Waterloo for a year because there was a professor there—Paul Thagard—who straddled the fields of computing and philosophy, and he seemed willing to tutor me a little. He was tremendously helpful in inspiring and guiding my transition to philosophy (including once telling me in no uncertain terms how bad a paper I had written was). From Waterloo I went to the University of Toronto for a PhD in philosophy and became a pure philosophy guy… for a few years.
Pretty soon after I joined the program at Toronto, I started seeing ways that I could put my programming skills to work in philosophy. Dave Chalmers was maintaining a list of online papers by various scholars, as well as a separate online bibliography of the philosophy of mind. I found these pages extremely helpful, but I could see various possible improvements. I figured I could crawl the web to find links for the works listed in the bibliography and add them right there on the page. In 2004 I contacted Dave to offer my help, and he was happy to let me have a go at it—we’ve been doing projects together ever since. I kept programming more tools and features for the bibliography. Eventually we renamed it to something catchier (“MindPapers”) and it kept growing fast. The crawling tools allowed us to add a lot more material, and we had tools for classifying the papers. In 2006 I transferred to the Australian National University to finish my PhD with Dave, and pretty soon it became clear that we could use the tools we had to expand to all of philosophy. This is how PhilPapers was born. The first version (launched in January 2009) had about 200,000 indexed items. We’re now at about 2.5M. The site has changed tremendously over time. Growth and improvements have accelerated over the past couple of years thanks to increased funding and the excellent team that we now have at the CDP, including in particular Steve Pearce, our lead developer, and Jen McKibbon, our content manager.
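For the curious, here is a minimal sketch of the sort of link-finding a crawler like this might do. It is an illustration of the general technique only, not the actual PhilPapers code; the function name and matching threshold are invented for the example.

```python
# Illustrative only: a toy version of matching a bibliography entry's title to
# links found on a web page. Not the actual PhilPapers tooling.
import re
from difflib import SequenceMatcher
from urllib.request import urlopen

def find_link_for_title(page_url: str, title: str, threshold: float = 0.8):
    """Return the first link on the page whose anchor text resembles the title."""
    html = urlopen(page_url).read().decode("utf-8", errors="ignore")
    for href, text in re.findall(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>', html, re.S):
        text = re.sub(r"<[^>]+>", "", text).strip()  # drop tags nested in the anchor
        if SequenceMatcher(None, text.lower(), title.lower()).ratio() >= threshold:
            return href
    return None
```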
Q: Can you tell us about the resources you’ve developed at the Centre for Digital Philosophy? Looking ahead, what tools and technology do you see as having the most potential in philosophy?
The Centre for Digital Philosophy at Western is really the continuation of what was the Centre for Computing in Philosophy at the University of London, itself a continuation of the work that Dave Chalmers and I did without an official organization back at the ANU. Over time, we’ve launched five major services: PhilPapers, PhilEvents, PhilJobs, PhilArchive, and PhilPeople. All these sites have the same overall mode of operation: they are partly or fully crowdsourced databases recording information on a certain kind of thing (works in philosophy, academic events in philosophy, jobs in philosophy, open access works in philosophy archived by us, and people in philosophy, respectively). These services make it easier to find the things they collect information about (papers, events, people, etc.) using fine-grained criteria. There is nothing else quite like these tools in terms of comprehensiveness and community participation. The crowdsourcing element is important because it’s what enables us to offer very rich information about people, papers, etc.
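To make that shared pattern concrete, here is a toy sketch of the idea behind all five services: a crowdsourced record type plus filtering on fine-grained criteria. The field names are invented for illustration and bear no relation to the actual schemas.

```python
# A toy illustration of the common pattern: crowdsourced records filtered on
# fine-grained criteria. Field names are invented; not the actual data model.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Entry:
    kind: str            # "work", "event", "job", "person", ...
    title: str
    topics: List[str]    # community-assigned categories
    year: int

def search(entries: List[Entry], kind: Optional[str] = None,
           topic: Optional[str] = None, year: Optional[int] = None) -> List[Entry]:
    """Filter the database on any combination of fine-grained criteria."""
    return [e for e in entries
            if (kind is None or e.kind == kind)
            and (topic is None or topic in e.topics)
            and (year is None or e.year == year)]
```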
These services allow philosophers to search the literature and perform other professional activities faster and more effectively. I expect the new services that we’re working on now to have an even greater impact by changing the nature of scholarship itself. In particular, we’re developing a service called “PhilSurvey” that will collect the positions of large numbers of philosophers (I hope virtually all professional philosophers in the English-speaking world) on essentially all philosophical claims that one might care to think about. I believe that analyzing this data will illuminate the structure of philosophical debates and, through the insights that we gain, help restructure these debates.
I also think that the mere publication of the data collected by PhilSurvey (without any fancy analysis) will by itself help move debates forward and improve the quality of communication. Right now philosophers are largely in the dark regarding where others stand on philosophical questions. This is something that Dave and I showed with a pilot survey and an accompanying “meta-survey”, in which we asked respondents to guess the results of the main survey. We found that on average professional philosophers are off by 15 percentage points in their estimates of where the community stands on philosophical claims. For a view that boils down to an answer to a yes/no question, this could mean, for example, that the community on average believes the distribution of views is 50/50 when in fact it’s 35/65. For many issues the discrepancy between the expected and actual distribution of views is much larger. This is problematic—in order to discuss and debate effectively, the first thing you need to know is where your interlocutors stand. For those who would like to learn more about our survey, see our paper “What do philosophers believe?”, published in Philosophical Studies.
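To make the figure concrete, here is a trivially small worked example of the kind of discrepancy at issue; the function is purely illustrative and is not the paper's actual methodology.

```python
def estimation_error(guessed_yes: float, actual_yes: float) -> float:
    """Gap, in percentage points, between the community's average guess and the
    actual share of philosophers endorsing one side of a yes/no question."""
    return abs(guessed_yes - actual_yes)

# The example from the text: the community expects a 50/50 split,
# but the actual distribution of views is 35/65.
print(estimation_error(50, 35))  # -> 15
```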
Looking much farther down the road, I can see PhilSurvey or something else along these lines developing into a much more comprehensive service that partly replaces traditional writing and publishing (I’m looking at you, Wiley-Blackwell!). To see why this might happen, you have to appreciate that we have a bit of a crisis on our hands at the moment—there is far too much to read. The number of works published in philosophy over the past fifty years follows a scary accelerating curve, and everyone I talk to has the feeling that too much is getting submitted and published. The crisis is most obvious on the side of journals right now because there’s been a surge in submissions that hasn’t been matched by a surge in publishing opportunities. One way or another this pressure to publish is going to continue to power a rapid increase in annual publications. Sooner or later, we are going to be forced to re-examine how we do things and look for efficiencies. To my mind traditional publishing is hugely inefficient. Most of the words in an average paper that is considered well written are in some sense superfluous: for the right audience, you can usually boil it down to a few statements. The audience has to know exactly how you’re using the key terms and you have to be allowed to refer to claims introduced elsewhere, but, in the right context, a paper’s main contribution can be summarized very quickly. A lot of it is setup (background, definitions), rhetoric, and forays down the garden path of objections and replies to try to anticipate others’ thinking. PhilSurvey could become a huge “context” in which you can make major contributions very succinctly.
I know we can do this on the web because we are currently doing it over sound waves. We already have a better way of interfacing with each other’s ideas than by producing these huge blurbs of text (papers) or long monologues (talks); it’s called “conversation”. In conversation we can often make enormous progress in terms of understanding or persuading each other with minimal setup (definitions, etc.). In conversation you figure out what the other person thinks and tell them just what they need to hear, rather than throwing everything you have at them on the off chance that something will stick. Obviously there are obstacles to scaling conversation to thousands of people—we all know what sort of mess an online “conversation” can be (look at Reddit)—but ordinary conversation is a useful model in some respects. It shows us how to use context to save on words, and it’s not very hard to see how written interactions could be made more conversation-like in this respect. On the web, we can make all the background that’s relevant to a compact claim (definitions, other claims, etc.) immediately accessible to provide context. There is no reason why a point should always be delivered as part of a big blurb of text that includes some often distorted version of the relevant context.
Q: Fields such as literature and history have used artificial intelligence techniques to perform text mining and analysis across large numbers of texts and historical documents. Do you see a use for artificial intelligence in philosophy? It would seem more difficult to mine arguments as opposed to discrete phrases or birth records, for example.
Yes, it’s much harder to mine information that’s relevant to current research on philosophical problems than it is to mine information that’s relevant to historical research (including research in the history of philosophy). The fact is that what we would want to extract are claims, arguments, and other abstract items that even professional philosophers often find hard to extract from texts. I think it’s possible to fairly reliably tag sentences making key claims and to make a good guess about the sentiment (agreement or disagreement) expressed, but this would be a very big and expensive project. This is a project that requires cutting-edge natural language processing AND machine learning AND large-scale data entry. Arguments are much harder. I’m pessimistic about our ability to use AI to extract arguments using any technique I know of.
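As a very rough illustration of the easier half of that project, here is a minimal sketch of claim-sentence tagging with off-the-shelf text classification. The toy sentences and labels are invented, and this is not a system the CDP has built; a real project would need far more labeled data and much stronger NLP, as noted above.

```python
# A minimal sketch of claim-sentence tagging -- an illustration of the idea
# only. A real project would need many thousands of hand-labeled sentences
# (the "large-scale data entry" part) and cutting-edge NLP.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "I argue that phenomenal consciousness is physical.",
    "Smith's argument rests on a conflation of two senses of 'content'.",
    "The paper is organized as follows.",
    "See Chalmers (1996) for the canonical statement of the problem.",
]
is_key_claim = [1, 1, 0, 0]  # 1 = states or assesses a substantive claim

tagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
tagger.fit(sentences, is_key_claim)

# A second classifier of the same shape could guess the sentiment
# (agreement vs. disagreement) expressed by each tagged sentence.
print(tagger.predict(["I contend that zombies are conceivable."]))
```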
More importantly, I think the information that we really need to help philosophers is something other than claims and arguments (which humans can easily find), and it’s not there in the corpus to be extracted. Think about what might count as the “answer” or “solution” to a traditional philosophical question, say, whether the mind is physical or not. In a sense the answer in this case is either “physical” or “non-physical”. But just giving the right answer in this narrow sense is not very helpful. What we should expect from philosophers is a detailed working out and weighting of all the relevant considerations (clarifications, distinctions, definitions, pieces of empirical evidence, etc.). It’s really a synthesis of the correct and important things that can be said about the question. I like to think of solutions to philosophical problems as structures resembling Choose Your Own Adventure books. A complete resolution of a philosophical question would be an artifact (presumably made mostly of text with added structure) that any intelligent person can use to convince themselves rationally of the correct answer. The end point is much less important than the ability of the artifact to guide anyone there.
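Here is a toy rendering of that Choose Your Own Adventure structure. The node contents and response labels are invented for illustration; the point is only the shape of the artifact: text whose branches depend on what the reader already accepts.

```python
# A toy rendering of the "Choose Your Own Adventure" picture of a resolved
# question. The claims and labels are invented; only the structure matters.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str                       # a claim, clarification, or piece of evidence
    branches: dict = field(default_factory=dict)  # response -> next consideration

def walk(node: Node) -> None:
    """Guide a reader through the artifact, one consideration at a time."""
    while node.branches:
        print(node.text)
        choice = input(f"Your response {list(node.branches)}: ")
        node = node.branches.get(choice, node)
    print(node.text)

conclusion = Node("Given the considerations above, physicalism is the better-supported answer.")
start = Node(
    "Do you find zombies (physical duplicates of us without consciousness) conceivable?",
    {"no": conclusion,
     "yes": Node("Then consider whether conceivability entails possibility...",
                 {"continue": conclusion})},
)
# walk(start) would steer any reader toward the conclusion along their own path.
```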
We can’t extract such artifacts from the philosophical literature because, in general, the information for building such an artifact isn’t there. My guess is that a lot of the central arguments and definitions and clarifications are out there. What is sorely missing is information regarding which are the right considerations to include in the resolution of a problem. Perhaps a machine could (with much better AI) find that author A said P while B said Q, and so on, but it needs a way of deciding which claims to put in the Choose Your Own Adventure book. The only way that a machine could get to “know” what these are is by deriving this information somehow from what philosophers believe, but there is in general no record of what philosophers believe. There are thousands of philosophers with an opinion on the mind-body problem, for example, but relatively few have published their view on any given question that is relevant to this problem. We have no way of knowing what their views are. Obviously, we can’t assume that the right points are those in the most cited papers, because being wrong (and at least a little famous) is one of the ways to get cited. Someone could write a paper that makes all the key points that had to be made about an issue and thereby essentially settle it, and there would be no obvious trace of this in bibliographic records or anywhere else, even if the paper was hugely influential. It would get some citations, but how many depends on whether the issue settled is relevant to something else (e.g., an issue in logic) or was just something that we were interested in for its own sake. Now maybe you can see better why I think PhilSurvey can be revolutionary: it will help us with the biggest hurdle that stands in the way of using machine power to help resolve philosophical debates, which is a lack of information on where philosophers stand on fine-grained questions and what they think about what they’ve read. It will also help us get around the difficulty of extracting propositions, arguments, and other abstract items from raw text because it will encode these things in a structured way from the get-go.
Q: Does your research on consciousness shape how you view artificial intelligence? Is it possible for machines to be intelligent in the way that humans are, for example? Do you see any ethical concerns with how artificial intelligence is developing?
My philosophical views on consciousness definitely inform what I think about artificial intelligence in general, but not really how I approach the machine learning problems that I have in practice as part of my digital projects. I have this view, which I’m currently working out in a book called Grasping that I hope to publish soon, that consciousness plays an important causal role in cognition. I think consciousness is the medium of thought. If that seems surprising, I’m not surprised: part of the aim of the book is to show how we often don’t think as much as we tend to think we do, and how this explains a lot of dumb things we do. So, I think that human intelligence is crucially dependent on consciousness. A closely related idea is that consciousness is probably not that easily replaceable. If it really plays an important causal role in brain activity, you probably can’t build a machine that works sort of like the brain but without consciousness and produces the same behavior. For this reason I think AI techniques that try to mimic neural mechanisms on ordinary (presumably non-conscious) computers or even specialized circuits are probably more limited in power than we tend to think (unlike Roger Penrose, I’m not suggesting that consciousness can “compute” noncomputable functions or anything of the sort, just that it is part of the workings of the brain; I think it is probably just an efficient way of performing certain computations that it has the job of performing; in principle other mechanisms could perform them, and a lot of computing taking place in the brain is free of consciousness). So, I view artificial intelligence as limited by the presumably non-conscious nature of the hardware we have, but that’s a theoretical consideration that’s not really relevant in practice—at least until we know how to make machine consciousness.
I don’t want to rule out that there could be machine consciousness. In fact, I lean toward a kind of panpsychism, a view on which all matter is to some degree conscious (albeit not in the interesting way that we are). In principle we might be able to create machines that leverage consciousness in the same way as the brain. I just don’t think neural networks running on current hardware do this. It may well be that the machine that will do this is some kind of quantum computer. So, my answer to your second question is that, yes, it’s probably possible for machines to be intelligent in the way that humans are, but this will likely require conscious machines.
There are many ethical or policy questions raised by predictable applications of AI, for example, what to do about the effects of AI on labor markets (think about drivers and assembly-line workers) and what sorts of trade-offs should be programmed into AIs making decisions where they have to distribute risks among people. Perhaps the most consequential ethical question is whether we should shy away from developing powerful AIs altogether in order to avoid a catastrophic scenario in which they take over humanity. I don’t lose sleep over that one because I’m a bit of a pessimist regarding the prospects for AI in the medium term (say 50 years).
This pessimism is based in some part on what I just said regarding the role of consciousness in cognition, but it’s also based in large part on my own experience as a software developer. One thing that you butt up against all the time in machine learning, and in programming generally, is that it’s often relatively easy to make a program that gets things right 80 to 95% of the time, but nearly impossible to do better. You run up against things like noise in the training data and the intractably long tail of possible cases. I think the most impressive AI applications that we see today, such as self-driving cars and AI personal assistants, are at best showing off the easy 80% (regarding the personal assistants I’ve tried, I feel we’re not even there). Voice recognition (transcribing sound into written text) is nothing at all compared to understanding. Self-driving cars on select streets among law-abiding drivers are really not that impressive. I’d like to see a self-driving car negotiate the country roads in Greece, where the shoulder gets used as an extra lane, doubling the road’s capacity. Unless it understands things like tacit norms and compromise, the AI will use the regular lane (as opposed to the shoulder) to drive within the speed limit throughout its journey. But the convention is that you drive about 50% over the speed limit in the regular lane. This will lead some very unwise drivers it’s sharing the road with to engage in all sorts of dangerous maneuvers, which will greatly increase the risk to its occupants and other drivers. An AI just couldn’t decide to follow Greeks’ unwritten road conventions on the spot. If an AI really could drive as well as a human anywhere, I would be impressed, because that problem is essentially AI-complete. Driving-in-carefully-selected-conditions is not a very impressive feat compared to other human activities, such as anticipating other humans’ actions. Obviously, AIs that cannot reliably anticipate our actions aren’t going to take over the world.