The artificial-intelligence revolution has arrived. One of us is a venture capitalist, the other a philanthropist, and we see leaders in every field placing bets, by the billions, on what comes next.
That makes this a perilous moment. Machine learning is poised to radically reshape the future of everything, for good and for ill, much as the internet did a generation ago.
And yet, the transformation under way will probably make the internet look like a warm-up act. AI has the capacity to scale and spread all our human failings, disregarding civil liberties and perpetuating the racism, caste and inequality that are endemic to our society.
Machine learning mimics human learning, synthesising data points and experiences to formulate conclusions. Along the way, these algorithms replicate human error and bias, often in ways not discernible until the consequences are before us – intolerable cruelty, unjust arrests and the loss of critical care for millions of black people, to name a few.
AI trains on our flawed human data sets, unrestrained by a moral compass, social pressure or legal restrictions. It ignores fundamental guardrails. This is a profound test for everyone – private sector, public sector and civil society.
Businesses that research and develop AI are sharing a powerful tool with a public that might not be ready to absorb or wield it responsibly. Governments are poorly equipped to regulate this technology in a way that safeguards the people who use it or those who might be dislocated by it, and neither group feels much urgency to understand or work with the other.
All of this has our alarm bells ringing. The time has come for new rules and tools that provide greater transparency on both the data sets used to train AI systems and the values built into their decision-making calculus. We are also calling for more action to address the economic dislocation that will follow the rapid redefinition of work.
Software developers should commit to continuous monitoring through “algorithmic canaries” – models designed to spot malign content like fake news – and external, independent audits of their algorithms.
We are heartened by OpenAI CEO Sam Altman’s commitment to open the company’s research to independent auditing, as well as his challenge to the industry to avoid a reckless race to release models as fast as possible.
Policymakers and regulators must catch up on protections for privacy, safety and competition. More than half of American workers with AI-related PhDs work for a handful of big-name companies, so elected US representatives should initiate a whole-of-government effort across all relevant departments to build a regulatory framework to match.
This would require widely shared technical literacy and expertise – one reason the Ford Foundation and others are supporting efforts to place technologists across offices on Capitol Hill, one element of the gathering movement for public-interest technology.
In addition to government oversight, the venture capital and start-up community must evolve – and quickly. Investors cannot count on others to mitigate unintended consequences. We must pursue intended consequences from the start, which means in-depth investigations, scenario-planning and boundary-setting before investment. We must set responsible-innovation guidelines that standardise how we unlock the possibilities and avoid the pitfalls of this transformational technology.
No one wants capitalism to destroy itself, which is why the private sector must broaden its definition of value to include the interests of all stakeholders, not just shareholders. By re-tethering wages to rising productivity, firms can ensure the money pouring into new technologies flows beyond the wealthy investor and founder class. Every company that endeavours to use AI ought to build retraining capabilities for its people.
Finally, business leaders must stop assuming they can reap the profits of disruption and then repent through philanthropy. All too often, corporate leaders use the language of philanthropy and corporate social responsibility to mitigate harm on the back end rather than designing with intentionality from the start.
If we are to survive the AI test, everyone must do business differently.
Darren Walker is president of the Ford Foundation; co-author Hemant Taneja is CEO of General Catalyst