By solving long-standing problems, can cutting-edge technology also create ethical issues in its wake? Can issues spurred by technology be preempted during the research and development stages? How can businesses and individuals forecast the next problem? These (and other) questions were raised and explored by start-ups and large corporates at EY’s Journey 2016 – Israel’s most prestigious annual business conference.
I was one of more than 2,000 people who attended the conference in Tel Aviv last month. For the past 20 years, Journey has connected Israeli innovators and entrepreneurs with global corporate executives, business advisers, investors and opinion leaders to seek out the hottest industry trends and share the latest ideas.
The conference halls buzzed with app developers, technologists, executives and other enthusiasts who wrestled with some very big topics.
Israel – Start-Up Nation
Israel is a unique place. As the only independent Jewish state in the world, its roots stretch back millennia, yet the modern Israeli state is a comparatively new invention. Founded in 1948, it is essentially a start-up nation. Israel embraces the risk that comes with innovation and entrepreneurship. It has become a vibrant hub for technology innovation, defined by perseverance and a willingness to try again when something does not work the first time.
While I was in Tel Aviv, Shimon Peres, the former President of Israel, passed away. Widely admired, Peres’ lasting legacy will be that of a leader with foresight. In one of his last public letters, which was quoted at Journey, Peres seemingly understood what we face on the horizon: “Science which develops without a moral spine may destroy the world. On the other hand, morality which prevents science from developing may starve the world.”
As we now routinely consider the impact of new technologies – on business, labor markets and our lives more broadly – we are in urgent need of a wider debate on technoethics – that is, the moral and ethical issues associated with technology.
Emer Coleman, a colleague and Chair of the Open Data Governance Board in Ireland, recently underscored this urgency at a talk in Dublin.
Coleman pointed out that search engines, which are not constrained by ethical frameworks, already know a huge amount about us through our email content, demographic backgrounds, geographic presence, and lifestyle interests. As a result, software developers are positioned not only to reinforce our social behaviors through our search habits, but to engineer them. The big question for technologists, concluded Coleman, is: “We can do this, but should we?”
Businesses and governments need to grapple with the ethics of technology. From all of this discussion, several immediate takeaways come to mind:
The downside risk.
Technology can usher in unintended consequences, some of which are not apparent upfront. A technology platform is adopted one deployment at a time, but its cumulative effects matter – the whole is more than the sum of its parts. Automation, for example, is replacing humans with machines, and to date the associated job losses have substantially outweighed the creation of new jobs. What enhances efficiency and profitability for an individual business could, cumulatively, produce large-scale social displacement. Unintended consequences like this are problematic and need to be considered in the development stages.
I love my gadgets. But we should ask whether some technologies actually make the world work better, or whether they are eroding people’s ability to connect with one another, both publicly and privately. It seems to me they do both, sometimes driving more connection rather than less. Still, a 2015 Pew Research Center study found that 82 percent of adults felt that using their phones in social settings hurts conversation; and a 2010 University of Michigan study found a 40 percent decline in empathy among college students, with most of the decline taking place after 2000.
The digital divide is entering a new era of sophistication, in which the issue is no longer just access to technologies but also the operational and strategic digital literacy they require. Individuals may know how to operate their smartphones and personal home assistants, but how many of us can configure those technologies to our best advantage? Similarly, we know that health literacy is low even in our most mature societies, yet advances in health and biotechnology are staggering and unprecedented, particularly in areas such as neuroscience and genetics. Healthcare is getting better at obtaining informed consent for administrative processes – the small things – but how are the major choices about which technologies are ethical, and under which circumstances, being made?
In light of these takeaways, Coleman raises several questions worth thinking about.
It is clear that as technology moves on, we get used to it and willingly give up some of our privacy in exchange for convenience. It is equally clear, however, that we do not understand all the consequences of ubiquitous technology. The smartphone epitomizes this trade-off, and there will undoubtedly be future technologies we will be hard-pressed to live without.
That’s why it’s so important that we heed President Peres’ advice and examine ethical issues now to ensure that science and technology do indeed develop with a moral spine.