The Epistle of Q — Chapter 105 (b)

Thursday at AME and more thanks to Lawrence Kohlberg!!

As I mentioned in the opening section of this chapter, this year the Association for Moral Education is meeting in Seattle. I decided to return to AME for this year’s AGM & Conference in large part to experience a refresher on Kohlberg’s work and legacy, as well as to gain some insights into the emerging questions around ethics and artificial intelligence (which was also a feature of this, the 45th Annual Conference of AME).

While not wishing to bore you with all the ideas, information, and research data that I took in throughout the conference, I will try to give you a sense of why I found this conference to be a worthwhile investment of my time and resources. First of all, I should say that the agenda has more symposiums than presentation sessions, and when used correctly these are considerably more interesting than those moments when people simply read their papers. As well, each section of the agenda is seventy-five minutes in length, which gives a reasonable amount of time to address the topic and then enable a decent conversation that engages the participants. Now if I can only get presenters to learn how to prepare their slides/pictures for their presentations. They need to take a page from the CMA courses we used to teach in BC – four lines to a screen…and if you are going to have more, then roll them in and out. Make your screens readable, with big print and pictures only if they are clearly relevant and visible at the back of the room. A couple of people were good at this – but the vast majority were not. Some slides were so bad they couldn’t even be read by the presenter. But let me not dwell on the few negative things that popped up.

The initial keynote was titled: Digital Equity – Oxymoron or Moral Imperative by Dr. Mahiri of the University of California, Berkeley. He led off by raising the idea of digital citizenship and then asking whether there are obstacles or opportunities given the potential divide between racial and digital equities. Referencing Don Collins Reed, he wondered if there is a significant difference between natural morality and inspired morality. He is concerned that people not treat ethics as a siloed add-on but rather weave it into all that we do. Separate can never be equal, no matter what we are talking about; but in terms of learning, high quality pre-schooling is essential. He went on to point out that race is not a scientific fact but a social fact, and even raised an interesting concept – the prism. This is where white light is broken down into an array of colours – and this is better than the reverse: trying to get the colours to meld into whiteness!!

He also made reference to Rucker Johnson’s work Why Integration Works and talked about the pigmentocracy or illegible skin. He is concerned that technology may not necessarily bring us together automatically in constructive ways. Sometimes when we assemble it may simply be because we are agreeing to be alone together. At the same time, internet usage is not in and of itself a universal equity – while Scandinavia has over 95% usage, the Sub-Saharan region has under 3%. Moreover, we need to see blogs as often being the same as the wall tags that others leave on railway cars and city buildings. In the end it seems that we ought not be too complacent that the internet and related communications technologies will in and of themselves promote a more equal or even equitable world. We need to ask ourselves what our values are and thus how we want to use the digital communications systems. In a way I was reminded of something that has resonated throughout my own vocational journey – how do we get to better? That question needs to be asked before we can hope for a truly ethical digital world…

Next was a dynamic, well-managed, interactive symposium on AI (artificial intelligence) and ethics…the first comment, which almost seemed too obvious, was that this is a challenging issue in part because of the global nature of it all. But then the representatives of Microsoft and Amazon proceeded to describe the behind-the-scenes challenges. Firstly there is the UN Declaration of Human Rights, which stresses integrity, individuality and humanity, and all data generators are expected to address these in their efforts. So Microsoft developed an internal ethics committee that is cross-disciplinary. At the same time it developed partnerships on AI that bring all sorts of interests together – hoping to get it all right. But then, as the Amazon representative pointed out: how do you get people (e.g. engineers) to actually do ethics? The challenge often is to ask the question: what don’t you want this creation (i.e. program or instrument or machine) to do? What could go badly? In many ways the ethics question is embedded in a failure metric. An example was given of a program utilized to help determine parole release that didn’t adequately consider recidivism. At this point the representative from the Paul Allen Institute for Brain Science interjected that what is most needed, in addition to broad input, is imagination. Moreover it is important to make the data readily available. But then the challenge becomes whether Open Data will lead to its own pitfalls.

Questions such as: What is data? How is it collected? What is missing? Can we interrogate the AI data? began to demonstrate just how difficult it is to establish a clear path to an ethical approach to the emerging world of AI. The panelists then raised further issues including: In what way was the data collected? What is the impact of data when experiments are done on animals? Are there protocols from independent review processes? And have privacy and security been addressed? It is obvious that compliance is a very real challenge.

As I listened to the conversation, it was only somewhat comforting to learn that many times problems get caught early on, or at least it is determined that “we can’t let this do that”… But does everyone hold to the ideals? I wondered whether moral education is actually being done inside the organization. Are the leaders continually asking if the data and the processes are accurate? Moreover, is the project equitable and is it fair? An example was given relative to facial recognition technologies. When it was determined that one was too white-male oriented (a gender-shade study), the only way to properly correct it was to return to ground zero and re-develop the entire concept, which is very expensive. Moreover, how do we know failure modes were adequately addressed and accommodated in each project?

This in turn led to a concern about public confidence. Is there a need for international agreements? Can we program moral AI into current AI ventures? It was only somewhat reassuring to learn there are organizations working on principles, because it seems essential that we have confidence in this entire world of AI. While learning will help, it is critical to beware of not knowing the overrides (e.g. how to shut down certain driving-assist mechanisms in new cars!!), and so there is a corporate obligation to properly inform the potential user(s). Always remember that tools are NOT neutral – they are designed for a specific purpose. The advice: improve one’s engagement with the AI community and focus on the benefits (while recognizing there needs to be some acknowledgment of potential pitfalls)! AI will only do what it is programmed to do – it cannot think on its own. All in all, an informative conversation, but not totally reassuring!!

Faith & AI…
The next symposium gave a much different look into the world of AI. It examined the issue of speaking faith values to AI and included a Muslim, a Buddhist, a Roman Catholic and a Protestant (although I’m not sure the individual was of the reformed tradition). The primary question is: what are the implications of the moral results? What about the humanity of it all? The example was put forward of Uber and the taxi business. Many taxi drivers are from the poorer economic classes; they have invested heavily in their taxi licences and lost this investment. Meanwhile the Uber drivers have often found that their net earnings are much lower than seemingly was promised, and now their employer is far more remote – they can’t just walk into the office and have a conversation with the manager or dispatcher.

The concept of human interiority was raised and with it an interesting fact. Many philosophy grads are ending up in high tech, often in actual careers translating technology-talk into plain English. This could have a positive impact on the incorporation of moral thinking, except it is not always clear how they are being initially trained. And do faith traditions establish a wider world view? In one study, 60% of the respondents felt their high tech jobs could hurt people. Moreover, is there a hierarchy in who is at the table in the standards conversation? Technology is not value-free: is efficiency more or less valuable than empathy? Ought there to be moral prescription? Most religions agree, but what is the foundation for AI? It was suggested that faith can overcome the loneliness/disconnect aspect that seems rampant in today’s high-tech, social media-based community. Can faith be a mitigator when faith may not be a singular reality? And what is multi-dimensional faith in an interdisciplinary world?

Some additional questions emerged related to how we ought to view AI from a faith perspective:
• can we expect a machine to solve a problem we can’t even figure out ourselves, let alone solve?!?
• can we occupy someone else’s space?
• too often ethicists are dealing with yesterday’s problems, but can we undo things already done?
Another interesting aspect of the panel’s conversation with the participants was the seeming single focus on what they labeled as progressive faith. For one thing, the Apocalyptic Tradition was once again marginalized, if not totally ridiculed, in the discussion about the possibility that AI could bring about the end of humanity (let alone human nature) itself. In fact, the pejorative dismissal of anything other than what the panel’s collective definition of reason would embody raised the question of whether they were subtly acting on a predestined theology of their own. Certainly there was a sense they saw a day of judgment coming, but just not a terminal one – provided people of faith kept themselves deeply involved, even invested, in the evolving world of AI.

All in all an interesting session to sit in on, although most of the time I felt more like an observer. When an opening question asks how one’s deeply ingrained beliefs about the universe, humanity, and the self inform one’s attitudes towards morality in a world shaped by AI, and then the panel, for all its diversity, doesn’t want to discuss diverse perspectives of faith itself, it seems less likely that they really are looking for extensive response from a somewhat diverse crowd!!

In any event, by the time this symposium concluded, it was time for a break and some nourishment…(but I will be back with another installment in this doubtless lengthy chapter!!)