Humans in the Loop
AI doesn’t hurt people, people hurt people. As a technology, artificial intelligence is neutral, like power or money. It depends on what we do with it. And by we, we mean the Humans in the Loop who have the power to use AI and choose to deploy it in poor taste, instead of choosing to support other humans who make our planet a meaningful and beautiful place to live.
The term human-in-the-loop refers to a human who assists in training or interactively operating a machine learning model. Typically, this arrangement is used in contexts where a model operating on its own would be dangerous because it lacks sufficient perceptive or decision-making ability, and in that sense, the human is a good thing to have in the system.
Here at AI FROM HELL, we believe that all AI systems have Humans in the Loop, because at least one human must have taken the initiative to fund, create, approve, and deploy that AI in the first place. And so whenever AI exhibits hellworthy behavior, we must consider that some human or group of humans decided to let that happen, whether out of neglect, incompetence, or self-interest.
On this site, we want to explicitly point out where this is happening in the world we live in today, because:
- people who do not work on AI but are affected by it (e.g. it threatens their employment) can better understand why things are happening the way they are, instead of resorting to the gut reaction that all AI is bad;
- doomer discussions often focus on the potential impact of artificial general intelligence (AGI) in the more distant future, but miss the ways that AI is causing harm right now;
- we can point at the deeper incentives driving these human behaviors and take a stab at fixing them cooperatively.
AI has the potential to empower humanity by automating things we don’t want to do, solving problems that are as yet too hard to solve, and raising us out of hell on Earth. With great power comes great responsibility.