Friday, May 31, 2019

We Need to Build Up ‘Digital Trust’ in Tech

For months, there's been a steady march of controversies over how tech companies collect, manage, process, and share massive (and passive) amounts of data. And even though the executives and founders of these companies profess a renewed commitment to privacy and corporate responsibility, people are beginning to worry about surveillance and power—and reconsider how much faith they should put in both the leaders and services leveraging these quickly evolving technologies. The latest manifestation of these concerns came out of San Francisco, home to the tech economy: the city banned facial recognition technology to "regulate the excesses of technology."

Daniel Dobrygowski is head of governance and policy at the World Economic Forum’s Centre for Cybersecurity. William Hoffman is the World Economic Forum’s project lead for data policy. Both are based in New York City.

As tech winds its way deeper into our lives, harder questions arise: How can you trust someone you'll never see? How can you trust an algorithm that makes thousands of decisions a second without your even being aware of them? How can you trust a company that tracks your movements every day? And the biggest question of all: given that trust is such a foundational principle for the global economy, and the global economy is digital, what is a meaningful definition of "digital trust"?

To start, trust in digital products and the companies that produce them is already eroding. Edelman's 2019 Trust Barometer shows that more than 60 percent of respondents globally believe tech companies have too much power and won't prioritize our welfare over their profits. "If the lifeblood of the digital economy is data, its heart is digital trust," notes a recent PwC report, which argues that the most consequential companies of the next generation will be the ones that prioritize security, privacy, and data ethics. The ones that don't face a costly problem. A recent study by Accenture found that over the next five years, CEOs could reclaim more than $5 trillion in lost value with new governance approaches for safeguarding the internet; for a global company, that could mean the equivalent of 2.8 percent in revenue growth. Yet a recent report on Digital Trust and Competitiveness from Tufts University found that few business leaders are confident they have sufficient "digital trust" controls in place.

So, how do you build "digital trust" and what does it look like? At the World Economic Forum, our new report provides a framework for a more efficient and effective global dialogue on digital trust built on two main components: mechanical trust and relational trust.

Mechanical trust, especially as it relates to cybersecurity, is the heart of digital trust. It is the means and mechanisms that deliver predefined outputs reliably and predictably. An automobile's braking system provides a good metaphor. Step on the brakes. The car stops. No ambiguity, no uncertainty. Predictable, reliable outputs are expected to be delivered every time. If a system is secure and performs predictably, individuals will be more willing to use it. They'll be able to trust it.
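To make this concrete, here is a minimal sketch, in Python, of mechanical trust at work: verifying a file against a published checksum. Like the brake pedal, the check either passes or it fails, with no ambiguity. The file name and digest in the usage comment are hypothetical placeholders, not real values.

import hashlib

def verify_download(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches exactly."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so arbitrarily large files can be verified.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Hypothetical usage: the check passes every time the bytes are intact
# and fails every time they are not, with nothing in between.
# verify_download("firmware-update.bin", "<published sha-256 digest>")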

But we need another, equally important, form of trust to support this: relational trust. Even if all the mechanical systems work, if people don't believe that we're all playing by the same rules, trust breaks down. That is why relational trust—the social norms and agreements that address life's complex realities—is vital. While the brakes in a car may be highly reliable, we also need a shared agreement that a red light means to use them. Similarly, we need a shared agreement on when, where, why, and how technologies are used.

To establish these rules, we need people, processes, and tools. For emerging tech, that means creating frameworks that incorporate accountability, auditability, transparency, ethics, and equity. By incorporating these principles in the early stage design of digital products and services, stakeholders can have a more meaningful say in how emerging networked technologies are bound by (and in turn affect) our long-standing normative and social structures. Relational trust also ensures that the promise and value apportionment of new technologies can be more equitably delivered, fostering a virtuous cycle of trust leading to improved outcomes, which leads to greater trust.
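As a rough sketch of what building auditability into a digital product could look like, the Python below records every automated decision in an append-only log, so a reviewer can later reconstruct what was decided, when, on what basis, and under which policy. The field names, example values, and log format are illustrative assumptions, not a standard; a real system would also need tamper evidence and access controls.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system: str          # which model or service made the decision
    subject: str         # whom the decision affects (pseudonymized)
    decision: str        # the output that was acted on
    rationale: str       # human-readable basis for the decision
    policy_version: str  # which rules were in force at the time
    timestamp: str       # when the decision was made (UTC, ISO 8601)

def log_decision(record: DecisionRecord, path: str = "audit.log") -> None:
    """Append one decision as a JSON line; history is never overwritten."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Illustrative entry with made-up values.
log_decision(DecisionRecord(
    system="loan-scoring-v2",
    subject="applicant-8412",
    decision="declined",
    rationale="debt-to-income ratio above threshold",
    policy_version="2019-05",
    timestamp=datetime.now(timezone.utc).isoformat(),
))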

Considered this way, trust is an amalgam of many elements: a combination of tools and rules. If global trust is to be strengthened, this is the lens through which to understand digital trust.

We need this new lens because cybersecurity failures, by businesses and by governments, erode digital trust globally. These breakdowns in mechanical trust leave citizens wondering whom they can rely on to protect them. Unless companies and governments take cybersecurity seriously, their credibility, and the relational trust placed in them, will continue to wear away.

Failures of relational trust are both difficult to recognize and difficult to resolve because they stem from a lack of accountability. If no one is accountable for the problem, it's hard to find someone to blame and even harder to find someone to fix it. This breakdown in relational trust fuels the current "techlash."

This brings us back to the San Francisco facial recognition ban. At least part of the reason such technologies are seen as creepy or dangerous is the belief that they will be used to harm rather than help citizens and consumers. The worry is not that such tech isn't secure; the worry is that the owners of these technologies build them in order to exert control. This legitimate concern comes from the fact that these technologies seem unaccountable and their uses are not transparent or responsible. In other words, there's no trust here and no mechanisms for establishing it.

Unless implementers take digital trust seriously, more technologies will be similarly received. This is where so-called "ethics panels," intended to advise on the ramifications of new technologies such as AI, come in. While these panels laudably attempt to include some components of relational trust in decisions about technology use, the process of creating them lacks transparency, accountability, and auditability. So, despite being aimed at ethical use and building trust, they succumb to the very distrusted mechanisms that made them seem necessary in the first place.

Establishing digital trust is a team sport, and one that requires significant effort on the part of businesses and governments: it requires prioritizing security and developing systems that ensure transparency and accountability. The costs of distrust, however, are significantly greater. New, innovative technologies require data to work, and that data will only be made available to trusted actors. More importantly, national and international institutions rely on trust to function; without digital trust now, we won't be able to build the institutions we need for the future. We'll retreat to isolation, suspicion, and uncertainty. Our response needs to be global in scale yet local enough to address contextual and cultural differences.

The users and subjects of technologies all have to agree that the goal is a world open to innovation with equal chances at achieving the prosperity that new technologies bring. Building in both mechanical and relational digital trust ensures that we can do that.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Submit an op-ed at opinion@wired.com.

