Safeguarding AI Is Up to Everyone

August 16, 2023



Artificial intelligence is everywhere, and it poses a monumental problem for those who should monitor and regulate it. At what point in development and deployment should government agencies step in? Can the many industries that use AI police themselves? Will these companies allow us to peer under the hood of their applications? Can we develop artificial intelligence sustainably, test it ethically and deploy it responsibly?

Such questions cannot fall to a single agency or type of oversight. AI is used one way to create a chatbot, another way to mine the human body for possible drug targets and yet another way to control a self-driving car, and each use has as much potential to harm as it does to help. We recommend that all U.S. agencies come together quickly to finalize cross-agency rules to ensure the safety of these applications; at the same time, they must carve out specific recommendations that apply to the industries under their purview.

Without sufficient oversight, artificial intelligence will continue to be biased, give wrong information, miss medical diagnoses, and cause traffic accidents and fatalities.

There are many remarkable and beneficial uses of AI, including in curbing climate change, understanding pandemic-potential viruses, solving the protein-folding problem and helping identify illicit drugs. But the outcome of an AI product is only as good as its inputs, and this is where much of the regulatory problem lies.

Fundamentally, AI is a computing process that looks for patterns or similarities in enormous amounts of data fed to it. When asked a question or told to solve a problem, the program uses those patterns or similarities to answer. So when you ask a program like ChatGPT to write a poem in the style of Edgar Allan Poe, it doesn’t have to ponder weak and weary. It can infer the style from all the available Poe work, as well as Poe criticism, adulation and parody, that it has ever been presented with. And although the system does not have a telltale heart, it seemingly learns.
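
To make the pattern idea concrete, here is a toy sketch in Python: a word-level Markov chain, vastly simpler than a system like ChatGPT, that records which words tend to follow which in a small training text and then generates new text from those observed patterns. The corpus here is just a fragment of “The Raven” standing in for a real training set; all names in the sketch are illustrative, not any actual model’s internals.

```python
import random
from collections import defaultdict

# Tiny stand-in "training data": a fragment of Poe's "The Raven".
corpus = (
    "once upon a midnight dreary while i pondered weak and weary "
    "over many a quaint and curious volume of forgotten lore"
)

# Learn the patterns: map each word to the words observed to follow it.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    output = [seed]
    for _ in range(length):
        candidates = transitions.get(output[-1])
        if not candidates:
            break  # no observed continuation for this word
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("once"))
```

The point of the sketch is the same one the paragraph makes: the program has no understanding of Poe, only statistics about what followed what in the data it was given, which is why the quality and provenance of that data matter so much.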

Right now we have little way of knowing what information feeds into an AI application, where it came from, how good it is and whether it is representative. Under current U.S. regulations, companies do not have to disclose the code or training material they use to build their applications. Artists, writers and software engineers are suing some of the companies behind popular generative AI programs for turning original work into training data without compensating or even acknowledging the human creators of those images, words and code. This is a copyright issue.

Then there is the black box problem—even the developers don’t quite know how their products use training data to make decisions. When you get a wrong diagnosis, you can ask your doctor why, but you can’t ask AI. This is a safety issue.

If you are turned down for a home loan or not considered for a job that goes through automated screening, you can’t appeal to an AI. This is a fairness issue.

Before releasing their products to companies or the public, AI creators test them under controlled circumstances to see whether they give the right diagnosis or make the best customer service decision. But much of this testing doesn’t take into account real-world complexities. This is an efficacy issue.

And once artificial intelligence is out in the real world, who is responsible? ChatGPT makes up answers; it hallucinates, so to speak. DALL-E lets us create images from prompts, but what if an image is fake and libelous? Is OpenAI, the company that made both of these products, responsible, or is the person who used it to make the fake? There are also significant concerns about privacy. Once someone enters data into a program, who does it belong to? Can it be traced back to the user? Who owns the information you give a chatbot to solve the problem at hand? These are among the ethical issues.

The CEO of OpenAI, Sam Altman, has told Congress that AI needs to be regulated because it could be inherently dangerous. A group of technologists has called for a moratorium on development of new products more powerful than ChatGPT while all these issues get sorted out (such moratoria are not new; biologists did this in the 1970s to put a hold on moving pieces of DNA from one organism to another, work that became the bedrock of molecular biology and the understanding of disease). Geoffrey Hinton, widely credited with laying the groundwork for modern machine-learning techniques, is also worried about how AI has grown.

China is trying to regulate AI, focusing on the black box and safety issues, but some see the nation’s effort as a way to maintain governmental authority. The European Union is approaching AI regulation as it often does matters of governmental intervention: through risk assessment and a framework of safety first. The White House has offered a blueprint of how companies and researchers should approach AI development—but will anyone adhere to its guidelines?

Recently Lina Khan, head of the Federal Trade Commission, said that based on the agency’s prior work safeguarding the Internet, the FTC could oversee the consumer safety and efficacy of AI. The agency is now investigating ChatGPT’s inaccuracies. But that is not enough. For years AI has been woven into the fabric of our lives through customer service systems and assistants such as Alexa and Siri. AI is finding its way into medical products. It’s already being used in political ads to influence democracy. As we grapple in the judicial system with the regulatory authority of federal agencies, AI is quickly becoming the next and perhaps greatest test case. We hope that federal oversight allows this new technology to thrive safely and fairly.



