This hearing is on the oversight of artificial intelligence, the first in a series of hearings intended to write the rules of AI. Our goal is to demystify and hold accountable those new technologies, to avoid some of the mistakes of the past. And now for some introductory remarks.
Too often we have seen what happens when technology outpaces regulation, the unbridled exploitation of personal data, the proliferation of disinformation and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.
If you were listening from home, you might have thought that voice was mine and the words from me. But in fact, that voice was not mine. The words were not mine, and the audio was an AI voice cloning software trained on my floor speeches.
The remarks were written by ChatGPT when it was asked how I would open this hearing, and you heard just now the result. I asked ChatGPT, why did you pick those themes and that content? And it answered, and I'm quoting: "Blumenthal has a strong record in advocating for consumer protection and civil rights. He has been vocal about issues such as data privacy and the potential for discrimination in algorithmic decision making. Therefore, the statement emphasizes these aspects."
Mr. Altman, I appreciate ChatGPT's endorsement.
In all seriousness, this apparent reasoning is pretty impressive. I am sure that we'll look back in a decade and view ChatGPT and GPT-4 like we do the first cell phone, those big clunky things that we used to carry around. But we recognize that we are on the verge, really, of a new era.
The audio and my playing of it may strike you as curious or humorous, but what reverberated in my mind was: what if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or of Vladimir Putin's leadership? That would have been really frightening. And the prospect is more than a little scary, to use the word, Mr. Altman, that you have used yourself. And I think you have been very constructive in calling attention to the pitfalls as well as the promise. And that's the reason why we wanted you to be here today. And we thank you and our other witnesses for joining us.
For several months now, the public has been fascinated with GPT, DALL-E, and other AI tools. These examples, like the homework done by ChatGPT, or the articles and op-eds that it can write, feel like novelties. But the underlying advancements of this era are more than just research experiments.
They are no longer fantasies of science fiction. They are real and present. The promises of curing cancer, of developing new understandings of physics and biology, or of modeling climate and weather are all very encouraging and hopeful.
But we also know the potential harms, and we've seen them already. Weaponized disinformation, housing discrimination, harassment of women, and impersonation fraud. Voice cloning, deep fakes.
These are the potential risks, despite the other rewards. And for me, perhaps the biggest nightmare is the looming new industrial revolution: the displacement of millions of workers, the loss of huge numbers of jobs, and the need to prepare for this new industrial revolution in the skill training and relocation that may be required.
And already industry leaders are calling attention to those challenges. To quote ChatGPT: this is not necessarily the future that we want. We need to maximize the good over the bad.
Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment.
The result is predators on the Internet, toxic content exploiting children, creating dangers for them. And Senator Blackburn and I and others like Senator Durbin on the Judiciary Committee are trying to deal with it through the Kids Online Safety Act. But Congress failed to meet the moment on social media.
Now we have the obligation to do it on AI before the threats and the risks become real. Sensible safeguards are not in opposition to innovation. Accountability is not a burden, far from it.
They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science, but also in promoting our democratic values. Otherwise, in the absence of that trust, I think we may well lose both.
These are sophisticated technologies, but there are basic expectations common in our law. We can start with transparency. AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access.
We can establish scorecards and nutrition labels to encourage competition based on safety and trustworthiness. Limitations on use: there are places where the risk of AI is so extreme that we ought to impose restrictions or even ban their use, especially when it comes to commercial invasions of privacy for profit and decisions that affect people's livelihoods. And of course, accountability, or liability.
When AI companies and their clients cause harm, they should be held liable. We should not repeat our past mistakes, for example, Section 230. Forcing companies to think ahead and be responsible for the ramifications of their business decisions can be the most powerful tool of all.
Garbage in, garbage out. The principle still applies. We ought to beware of the garbage, whether it's going into these platforms or coming out of them.
And the ideas that we develop in this hearing, I think, will provide a solid path forward. I look forward to discussing them with you today, and I will just finish on this note. The AI industry doesn't have to wait for Congress. I hope there will be ideas and feedback from this discussion and from the industry, and voluntary action, such as we've seen lacking in many social media platforms.
And the consequences have been huge. So I'm hoping that we will elevate rather than have a race to the bottom. And I think these hearings will be an important part of this conversation.
This one is only the first. The ranking member and I have agreed there should be more. And we're going to invite other industry leaders.
Some have committed to come. Experts, academics, and the public, we hope, will participate.