Dear HN. Now that Geoff Hinton said it openly, I wanted to post this discussion.
I remember how many here were bullish on Blockchain back when it first came out 10 years ago; these days, any news about Blockchain or Smart Contracts is rightly viewed with suspicion and downvoted.
I propose that the same will happen with Generative AI. Over the last few months I have delved into the space, built applications leveraging ChatGPT at scale, and put in as many ethics guardrails as I could without destroying the applications’ purpose. This is my “Moxie Marlinspike writes about Web3” moment.
In Web3, the early adopters reaped the most rewards, extracted from later adopters, but by and large the applications of the technology didn’t generate much value for society at all. I have come to believe that with Generative AI deployed at scale, the situation is even worse. My claim is that:
Nearly all applications of Generative AI APIs, by their very design and value proposition, externalize costs onto society, resulting in a net negative that grows superlinearly the more they are used at scale. As Geoff said:
>Such fierce competition might be impossible to stop, resulting in a world with so much fake imagery and text that nobody will be able to tell what is true anymore.
The core value proposition, like that of any new tech, is to leverage the short-term profit motive: to generate work at scale that passes for real humans doing the work.
If ChatGPT generates an essay for you, or a homework assignment, that is fake. You didn’t do your homework. You didn’t write the essay. You’re passing it off as if you did. If MidJourney “painted” your painting, and you pass it off as your own to others, you’re lying. This guy won a photography contest with a non-real photo:
https://www.cbsnews.com/amp/news/artificial-intelligence-photo-competition-won-rejected-award-ai-boris-eldagsen-sony-world-photography-awards/
He was honest enough to reveal it and reject the prize. But had he not done that, humans and honest photos would be out of the running, just as if a chess player with a hidden chess engine entered a tournament. It’s cheating, plain and simple.
It’s bad enough to cheat yourself, but using the API of any generative AI service is an incentive to cheat many people, in ways that can destroy our trust in ALL content and interactions online.
I propose a law that people should not be able to distribute works that leverage Generative AI without fully disclosing it. There can be exceptions for small-time operations with a small volume. To head off slippery-slope arguments, I’d mandate a “BuiltWith” disclosure all the way down: the compilers you used for code, the graphics editors for images, what you used for sounds, etc. Like the ingredients the FDA demands be listed on locally produced food in the USA, but for a different reason.
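To make the idea concrete, here is a minimal sketch of what such a disclosure manifest could look like. The schema, field names, and tools listed are all hypothetical illustrations, not an existing standard:

```python
# A minimal sketch of a "BuiltWith"-style disclosure manifest, shipped
# alongside a work like an ingredients label. The schema and every field
# name here are hypothetical; no such standard exists today.
import json

manifest = {
    "work": "product-page-hero.png",
    "generative_ai": [
        {"tool": "MidJourney", "role": "base image generation"},
    ],
    "conventional_tools": [
        {"tool": "GIMP", "role": "color correction and cropping"},
    ],
    "human_in_the_loop": True,
}

# Print the manifest so it can be published next to the work itself.
print(json.dumps(manifest, indent=2))
```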
Also, having a human in the loop to spot-check accuracy, and be responsible for it, could earn a specific certification. An AI “describing” a product it has never seen is inherently as fake as a photograph of a scene that never happened.
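A minimal sketch of what such a human-in-the-loop gate could look like, assuming a rule that AI-generated copy cannot be released until a named reviewer vouches for it. The workflow, names, and field choices are hypothetical:

```python
# A sketch of a human-in-the-loop publishing gate: AI-generated copy is
# refused until a named human reviewer approves it. All names and the
# workflow itself are hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    ai_generated: bool
    approved_by: Optional[str] = None  # reviewer who vouches for accuracy

def publish(draft: Draft) -> str:
    """Refuse to release AI-generated copy nobody has vouched for."""
    if draft.ai_generated and draft.approved_by is None:
        raise ValueError("AI-generated draft needs a human reviewer")
    return draft.text

draft = Draft(text="Lightweight, durable, fits any standard rack.",
              ai_generated=True)
draft.approved_by = "reviewer@example.com"  # human takes responsibility
print(publish(draft))
```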
There should be explicit penalties for passing off AI-generated work without honest disclosure: the bigger the scale, the bigger the penalty.
I just recently found out that Huawei has a long history of not-so-honest practices, and the US government, at the least, heavily warns against using their telecom products. More about this specific concern can be found here: http://www.cbsnews.com/news/huawei-probed-for-security-espionage-risk/
I actually learned this from a comment here on HN from rl3 - https://news.ycombinator.com/item?id=10297879#up_10300668
As somebody who ordered the 6P, I was very concerned and wanted to see what the community thought of this. On further thought, I'm not much concerned about backdoors for the Chinese government; why would they be interested in me, an American citizen? What concerns me more is the NSA's ability to find and exploit such a backdoor. In fact, I believe the Snowden docs showed this was the plan.
I don't know if this is even worth worrying about, considering the baseband.
Watch the video as well. What's fascinating is that the records of the army corps show climate change, but the interviewee cannot "speculate." Similarly, many of the farmers see "climate change" but struggle to say those words, even as some start adapting.
>Walking over soggy lifeless crops, Brett Adams, a fifth generation Nebraska farmer, paused to catch his breath. Under the dark grey clouds of the Midwestern spring, he was forced to come to terms with an alarming reality: 80% of his farmland was under freezing floodwater.
...
>The floods damaged public infrastructure and led to the loss of crops, livestock and the evacuation of thousands of people from their homes. Nebraska's governor said that in that state alone, the cost of damage has surpassed $1.3 billion.
...
>Modern agriculture and food production aren't just impacted by climate change — they also contribute to it. According to the EPA, more than 8% of all U.S. greenhouse gas emissions in 2017 came from the agriculture sector.
>While some farmers in conservative parts of the country may be reluctant to define increasingly extreme weather as climate change, Christensen says with each storm, more attitudes start to change.
I read what appear to be ridiculous numbers (e.g., "total number of new cases to 79" at https://www.cbsnews.com/news/coronavirus-china-cases-beijing-cluster-areas-locked-down-fears-covid-19-second-wave-today-2020-06-15/), which I find impossible to believe, and was wondering if anyone is monitoring the situation in mainland China?
Reporter Bob Woodward (whose reporting led to Nixon's resignation) claims that the US military has a new secret technique that's so revolutionary that it's on par with the tank and the airplane.
Here's the relevant quote from his interview with Scott Pelley on 60 Minutes last night:
Woodward: This is very sensitive and very top secret, but there are secret operational capabilities that have been developed by the military to locate, target, and kill leaders [in Iraq].
Pelley: What is this? Some kind of surveillance, some kind of targeted way of taking out just the ... leadership?
Woodward: ... It is the stuff of which military novels are written.
Pelley: Do you mean to say that this special capability is such an advance in military technique and technology that it reminds you of the advent of the tank and the airplane?
Woodward: Yeah.
Quoted from the 60 Minutes video starting at 7 minutes 55 seconds: http://www.cbsnews.com/stories/2008/09/04/60minutes/main4415771.shtml
I'd like to ask the readers here what they think this groundbreaking technology might be. I'll start off with these two guesses, which might be feasible today:
(1) facial recognition by satellite or high-flying aircraft
(2) wholesale tracking and transcription of all cell phone calls
The second clip on the site said that a nation-state had developed a BIOS bug that would brick a computer[1], but aren't there lots of problems with creating a bug for each BIOS? I forget the name of the post, but this problem came up quite recently with badBIOS.
While doing research for my newsletter I came across this CBS News report on Boston Dynamics from yesterday: https://www.cbsnews.com/news/boston-dynamics-robots-humans-animals-60-minutes-2021-03-28/. You can see the robot around the 11:15 timestamp of the video.
The concept seems very similar to the Handle robot, minus the balancing, as the robot is closer to a traditional wheeled ground robot. It's interesting to see Boston Dynamics focusing on projects that can be quickly commercialized.