AI: Radioactive
The common reaction to the term radioactive is to think about the dangers of radioactivity and forget the benefits we get from safely harnessing it.
If you want love on social media, dunk on AI. AI is less popular than ICE, with only 25% of Americans holding a positive view of it.
Why would we like it?
We've been sold scientific breakthroughs and given slop, hallucinations, and CSAM, all at the cost of depleted water supplies tainted with PFAS and skyrocketing electricity bills. All of that is true, and more.
Positive uses of AI are everywhere.
AlphaFold by Google DeepMind has accelerated drug discovery, understanding of diseases, and bio-engineering.
Huawei’s Pangu model has been applied to cement manufacturing, leading to higher product quality, lower coal consumption, and significant reductions in emissions.
Jobs will be destroyed, and new jobs will be created. I’ve worked with AI, and even in the use cases where it shines, it still needs human collaboration. This isn’t changing any time soon.
Many professional services will be more affordable, lowering barriers to creating new businesses and non-profits. Much like the internet lowered the cost of mass communication and collaboration, we are going to be opening doors most never dreamed of.
Like most tools, it's neither good nor bad. It has both good and bad uses, and both come with a cost.
I have a foot in both worlds. Both my work as a software engineer and my hobby of producing content are areas heavily impacted by generative AI. I see how it can enable. I also see the absolute waste of resources with no purpose, like OpenAI’s failed Sora “social network”.
It also affects my values. I'm a fan of affordable electricity and clean water. I wonder how our economy will change and what room will be left for people to make a living.
I am both optimistic about the potential and concerned about the implementation. In general, I see it as an enabling force because that is how I use it.
Our approach to technology influences whether its net impact is positive or negative. So how can we help shape that impact? With “use policies”.
Many are familiar with technology use policies: the declarations we sign off on when starting a new job that detail how company equipment can be used.
AI use policies are similar. They detail which tools can be used and for what purposes. The goal is to protect the company from risk while taking advantage of the opportunities the technology provides. What would individual use policies look like?
Opportunities
The greatest opportunities are in learning and research.
AI is a great tool for learning. Five years ago, when I wanted to learn a new programming language, I spent $50 on Udemy, and a few weeks later I was good enough to start a new project. This year, I started learning another language using AI. Within days, I was further along and well on my way to building an actual project.
Learning with AI is like having a private tutor that can design the course just for you and be there to answer every question you have along the way. If you are looking for a first use case, start with using AI to learn how to use AI.
Research is another great use case. When I am doing initial research for an article, planning travel, or thinking about a purchase, I don’t ask for a simple recommendation. I give some background on what I’m looking for and ask it to compare and contrast multiple options. After a few rounds of questions, I then head off to dig deeper into the details.
Don’t forget to check Consumer Reports.
Chatbots are great brainstorming buddies. If you are looking for lists or ideas around a topic, they can give you a lot to work with. Keep in mind that they are good at regurgitating ideas that already exist. That can be helpful, but they won’t come up with novel ideas. That’s your job.
Risks
There are risks to ourselves and our shared resources.
You have to practice critical thinking. Like social media, AI is designed in part to please you and keep you coming back. If you ask it whether your idea is good, you are likely to hear every reason it can find that it is. It’s not going to tell you otherwise.
Make sure you are asking for reasons both for and against your position or idea. You are looking for well-rounded advice, not a yes-man who gives you confidence unsupported by reality.
You have to check the sources. Not only does AI hallucinate, it is also trained on misinformation and disinformation. A study last year found that every chatbot tested repeated Kremlin-linked propaganda up to 5% of the time. Grok is trained on the lies and political propaganda shared on X and is tuned to support Elon Musk’s personal viewpoints.
Don’t get lazy. Over-reliance on AI to do things we can do ourselves will cause our skills to atrophy. While I use it for programming at work, I make sure I'm still an active participant, keeping my skills alive and me employed.
Protect shared resources like electricity and water. There is a cost to using chatbots, but not all questions cost the same. The simple questions we often ask LLMs are like running the microwave for a second. Image generation is much more expensive, and video generation is more expensive still.
Asking for a comparison of hosting services is not the same as generating a 30-second video of a giant cat fighting Godzilla. No one wants to see that anyway.
Often, a simple search is all you need. Not everything needs to be a visit with your favorite chatbot.
Not using it can be a risk, too. I was discussing the idea of having the major LLMs debate each other on policy issues. The reaction I got from my friends was borderline disgust. Then I explained that LLMs have outperformed communications experts, beaten humans in debate two-thirds of the time, and been 10 times more effective than political ads. Suddenly, the idea of using LLMs to discuss and persuade people on political issues seemed worth taking seriously.
The biggest personal risk may be not developing skills to help you thrive in the economy that AI is shaping. We set ourselves up for failure if we don’t do our best to understand the technology and view it not just with a critical perspective but also with an open mind.
The reality is undeniable: both the opportunities and the risks are real. AI is the economic equivalent of nuclear technology. Its presence is exciting, dangerous, and inevitable. The societies and people that choose to shun it are sure to be dominated by those who harness it.