AI, why can't I leave you?




















For each poem, Hafez responded to a given topic—say, presidential elections—then collected rhyming words from within the database and strung them together using its ever-improving neural networks. Ghazvininejad had spent years studying language processing, but even she was impressed by the results. Training also helped create coherence and style—that most elusive of human touches.

People picking up electric chronic.
The balance like a giant tidal wave,
Never ever feeling supersonic,
Or reaching any very shallow grave.

An open space between awaiting speed,
And looking at divine velocity.
A faceless nation under constant need,
Without another curiosity.

Or maybe going through the wave equation.
An ancient engine offers no momentum,
About the power from an old vibration,
And nothing but a little bit of venom.

Moreover, there is a sense of thematic coherence, glimmers of narrative.

I read it and find myself getting jammed rather than enjambed. I feel the presence of the machine, stuck in the literary uncanny valley. As it turns out, Hafez was dreamed up and executed as a submission to the Turing Tests in the Creative Arts at Dartmouth, which I helped organize.

A machine passes the Turing Test if its behavior is indistinguishable from a human's. Hafez was better than the other machines but still distinguishable from a person. The Turing Test has long been a standard for assessing artificial intelligence, but, in the context of making art—rather than simulating consciousness—it may not be the most valuable, or the most interesting, metric.

One of my colleagues, Mary Flanagan, a poet, artist, and professor of digital humanities, thinks the notion that machine-generated poems should be expected to pass the Turing Test is boring. Do something new! As we interact more and more with machines, both knowingly and unknowingly, our own expectations around both work and art will change, and labels will start to dissolve. The uncanny valley both widens and narrows as humans and writing gadgets evolve together.

In fact, almost all the work we read—including poetry—is touched by machines. Some writers—just a few now, but surely more in the future—are using computers as creative collaborators. John Seabrook has examined what this artificial intelligence could mean for the future of everyday, utilitarian acts of writing.

After paragraphs come essays, short stories, novels. There is growing recognition of the importance of causal understanding to more robust machine intelligence. Developing AI that understands cause and effect remains a thorny, unsolved challenge, and making progress on it will be a key unlock to the next generation of more sophisticated artificial intelligence. Consider Tay, the chatbot that Microsoft released on Twitter in 2016. It did not go well. The basic problem with Tay was not that she was immoral; it was that she was altogether amoral.

Tay recited toxic statements as a result of toxic language in the training data and on the Internet—with no ability to evaluate the ethical significance of those statements. The challenge of building AI that shares, and reliably acts in accordance with, human values is a profoundly complex dimension of developing robust artificial intelligence. It is referred to as the alignment problem. As we entrust machine learning systems with more and more real-world responsibilities—from granting loans to making hiring decisions to reviewing parole applications—solving the alignment problem will become an increasingly high-stakes issue for society.

Yet it is a problem that defies straightforward resolution. We might start by establishing specific rules that we want our AI systems to follow. In the Tay example, this could include listing out derogatory words and offensive topics and instructing the chatbot to categorically avoid these. Yet, as with the Cyc project discussed above, this rule-based approach only gets us so far.
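The rule-based approach described above can be sketched as a simple blocklist filter. The entries and function below are illustrative placeholders, not drawn from any real system; the sketch exists mainly to show why a fixed word list is so easy to sail past:

```python
# A naive rule-based content filter of the kind described above:
# a fixed blocklist of banned words. Entries here are placeholders.
BANNED = {"slur1", "slur2"}

def passes_filter(message: str) -> bool:
    """Return True if no banned word appears as a token in the message."""
    tokens = message.lower().split()
    return not any(tok in BANNED for tok in tokens)

print(passes_filter("hello there"))    # True
print(passes_filter("you slur1"))      # False
# Harmful language that avoids every listed word sails straight through,
# which is exactly the limitation the text describes:
print(passes_filter("people like you do not deserve rights"))  # True
```

The failure mode is structural, not an implementation bug: no finite word list captures the harm language can carry.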

Language is a powerful, supple tool: bad words are just the tip of the iceberg when it comes to the harm that language can inflict. It is impossible to manually catalog a set of rules that, taken collectively, would guarantee ethical behavior—for a conversational chatbot or any other intelligent system. Part of the problem is that human values are nuanced, amorphous, at times contradictory; they cannot be reduced to a set of definitive maxims.

This is precisely why philosophy and ethics have been such rich, open-ended fields of human scholarship for centuries. How can we hope to build artificial intelligence systems that behave ethically, that possess a moral compass consistent with our own? Perhaps the most promising vein of work on this topic focuses on building AI that does its best to figure out what humans value based on how we behave, and that then aligns itself with those values. This is the premise of inverse reinforcement learning, an approach formulated in the early 2000s by Stuart Russell, Andrew Ng, Pieter Abbeel, and others.

In inverse reinforcement learning, the AI observes human behavior and works backward to infer the reward function that the behavior appears to be optimizing. It can then internalize this reward function and behave accordingly. A related approach, known as cooperative inverse reinforcement learning, builds on the principles of IRL but seeks to make the transmission of values from human to AI more collaborative and interactive.
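As a rough illustration of the IRL premise, here is a stripped-down sketch, loosely in the spirit of apprenticeship learning with a reward assumed linear in state features. The states, features, and demonstrations are all invented for the example, and a real IRL algorithm would solve a harder inference problem than this:

```python
import numpy as np

# Each state is described by a hand-picked feature vector; the unknown
# reward is assumed linear in those features: R(s) = w . phi(s).
PHI = {
    "home":  np.array([1.0, 0.0]),
    "work":  np.array([0.0, 1.0]),
    "ditch": np.array([0.0, 0.0]),
}

def feature_expectations(trajectories, gamma=0.9):
    """Average discounted feature counts over demonstrated trajectories."""
    mu = np.zeros(2)
    for traj in trajectories:
        for t, state in enumerate(traj):
            mu += (gamma ** t) * PHI[state]
    return mu / len(trajectories)

# "Expert" demonstrations: the human repeatedly chooses work over ditch.
demos = [["home", "work", "work"], ["home", "work", "work"]]

# Crude reward-weight estimate: states the expert visits often (discounted)
# accumulate weight, so the inferred reward mirrors revealed preferences.
w = feature_expectations(demos)
reward = {s: float(w @ phi) for s, phi in PHI.items()}
best = max(reward, key=reward.get)
```

Here the inferred reward ranks "work" above "ditch" because that preference is implicit in the demonstrations, which is the core move of IRL: recovering values from behavior rather than specifying them by hand.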

In the words of its proponents, value alignment should be formulated as a cooperative and interactive reward maximization process. As the real-world dangers of poorly designed AI become more prominent—from algorithmic bias to facial recognition abuses—building artificial intelligence that can reason ethically is becoming an increasingly important priority for AI researchers and the broader public.

As artificial intelligence becomes ubiquitous throughout society in the years ahead, this may well prove to be one of the most urgent technology problems we face.



