Believe in my cause? PLEASE send me a donation!

Feel free to reach out if you would like to be featured and explain why sharing your perspective matters to you.
Currently, donations are the lifeblood of this movement; we are not government-funded, and we do not have deep pockets.

Zelle transfer: 01@AntiAI.com

As of this moment, donations are not tax-deductible.

AI Bias

If people built AI, does human bias carry over?

Bias in AI can be introduced at various stages of the development process, including data collection, data processing, model training, and deployment.

After all, humans are the ones training the algorithms, and anyone working in the technology space will know that humans are also the greatest liabilities. Without even meaning to, human biases inevitably influence the language used to train artificial intelligence.
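As a toy illustration (the data, groups, and "model" below are entirely hypothetical), here is a minimal sketch of how a skew in historical training data carries straight into the model trained on it:

```python
from collections import Counter

# Hypothetical historical hiring records: (group, was_hired).
# Group A was hired 80% of the time, group B only 30% of the time.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """'Train' by memorizing the majority outcome for each group."""
    outcomes = {}
    for group, hired in records:
        outcomes.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(history)
print(model)  # {'A': True, 'B': False} -- the historical bias carries over verbatim
```

Real models are far more complex than this majority-vote toy, but the principle is the same: whatever pattern is in the data, biased or not, is what the model learns.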

A familiar example is how ChatGPT would write a poem praising Joe Biden but not Donald Trump. It simply hadn't been trained on objective information. We all deserve a nice poem!

AI bias has serious consequences, particularly in areas like criminal justice, healthcare, and finance. AI is only as strong as the data you feed it, and as of right now, it is ultimately people who feed it that data.

Because AI has been widely adopted to gain information from the internet [Update: AI has recently run out of internet data to train on; did we really let AI train on the whole internet without guardrails?], it is important to have diverse teams working on AI development, to use representative and inclusive data, and to continually monitor and evaluate AI models for potential biases.

We have already seen harms caused by facial recognition technology, predictive policing, credit-scoring algorithms, healthcare algorithms, and hiring algorithms, just to name a few. Having people build new AI software from scratch, some of it trained on internet communities, no doubt introduces potentially unsavory results.

So what is the answer? Careful curation to ensure that all people, communities, views, and opinions are represented in AI training data, but also careful monitoring to ensure accurate AI outputs. This is a complex task that will require regulatory oversight and corporate responsibility.
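What might "careful curation" look like in practice? A minimal sketch (the dataset and the 30% threshold are hypothetical) of one concrete step, auditing group representation before training:

```python
from collections import Counter

# Hypothetical training set: 900 examples from group A, 100 from group B.
dataset = [{"text": "...", "group": g} for g in ["A"] * 900 + ["B"] * 100]

# Audit representation before training; 30% is an arbitrary example threshold.
counts = Counter(row["group"] for row in dataset)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    share = n / total
    status = "OK" if share >= 0.30 else "UNDER-REPRESENTED"
    print(f"group {group}: {share:.0%} {status}")
```

An audit like this only catches the simplest kind of imbalance, which is exactly why ongoing monitoring of the model's outputs is needed as well.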

AI Job Displacement

Sci-fi leads us to believe that AI will take over people's jobs overnight, and while that may or may not happen as we imagine, is your job safe?

AI can already handle repetitive tasks with specific rules and boundaries far better than humans can. On top of that, it can work around the clock. This video is a crazy insight into what stage AI has reached in replacing us in our jobs.

AI has the potential to disrupt manufacturing, transportation, customer service, data entry and processing, financial services, healthcare and analysis, and research, just to name a few fields. AI works for a fraction of the cost and could easily displace hundreds of millions of jobs, meaning a complete shift in job market demand. I want to postulate, for example, that as the bottom 25% of jobs are replaced by AI, those displaced workers will push upward, disrupt every industry, and make jobs scarcer across the board; you won't see it coming until the tipping point spills over.

AI will likely change the job landscape rapidly within the next decade. Thus far, there seems to be a lack of regulation on which job roles AI is allowed to be applied to, and many jobs are in the middle of transitioning to a hybrid model that mixes humans and AI. This hybrid model actually allows the AI to learn from the workers using it, essentially training the model for free. With AI improving exponentially over time, one can only imagine the disruption given enough time, especially with robotics on the rise. Without regulation, AI will affect you and everyone you know. No wonder there's speculation of 'Luddite riots' again.
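To make the "training for free" point concrete, here is a minimal sketch of that feedback loop; the workflow, function, and data below are all hypothetical:

```python
# Hypothetical hybrid workflow: every human correction becomes free
# training data for the vendor's next model version.
training_pairs = []

def hybrid_assist(task: str, model_draft: str, human_final: str) -> str:
    """The worker reviews the model's draft; any fix is harvested as data."""
    if human_final != model_draft:
        training_pairs.append({"input": task, "target": human_final})
    return human_final

hybrid_assist("summarize ticket #42",
              "Customer angry.",
              "Customer reports a recurring billing error.")
print(training_pairs)  # ready-made fine-tuning data, supplied by the worker
```

The worker is paid to do the job once; the correction they make teaches the system to do that job without them next time.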

Societal Dependence

Do you think a new generation of students who use ChatGPT for their homework will have the critical thinking skills they will need to survive after leaving an academic setting?

Society's increasing dependence on AI systems could lead to a lack of resilience and flexibility, making it difficult to adapt to unexpected events or disruptions. If a critical AI system were to fail, it could have far-reaching consequences across many aspects of society, given how well AI could perform in areas such as autonomous systems, healthcare, emergency response, financial systems, and academia. All of a sudden, people would have to think again!

A movement away from professional careers in these fields could lead to a black swan event that completely capsizes society, given how deeply people have come to rely on AI. AI has already been seen to outperform trained professionals in medicine, law, and coding, after first outcompeting humans creatively. The potential lack of job openings for graduates in a field dominated by AI could deter students from pursuing those career paths, leaving society without human knowledge of how certain things work if AI fails.

Overall, as you can imagine, overreliance on AI could result in a lack of flexibility and resilience, making it difficult to adapt to unexpected events or disruptions. If you have watched the movie WALL-E, you have seen the sci-fi version of AI taking over society portrayed negatively; even the captain of the ship forgets what his mission is! It is important to strike a balance between using AI to improve our lives and maintaining our ability to function without it.

Let's return to the example of students using ChatGPT for their assignments. Education is our society's future, and with students and even higher-level academics using AI as a placeholder for actual learning, where does that leave us? Thus far, this industry is unchecked aside from a few start-ups like GPTZero.

Accountability Issues

Who is accountable for the results that AI causes?

This topic is probably the most intertwined with all of the other topics mentioned on this page. Because of the nature of AI systems, when people use them as a place to obtain information or get updates on world news, it is very difficult to assign blame for what the AI comes up with. OpenAI has already partnered with many major news sources, and it is only a matter of time before AI starts reporting the news for us. Perplexity AI got into the mainstream news cycle for ripping off news articles.

Some of the accountability issues with AI are as follows: bias, lack of transparency, responsibility, safety, privacy, lack of regulation, and ultimately inaccurate information.

We have all seen the news piece about the professor who was falsely accused of sexual assault by an AI. Whom would you pursue in court, the AI itself or the people who built it? Can you even pursue legal action? As you can imagine, a precedent has not been set for how to deal with the potential harm AI brings to society and its lack of accountability.

To address these accountability issues, it is important to develop ethical standards for AI development and use, and to ensure that these standards are enforced. The closest we have come to enforcement is Elon Musk calling for a pause on rapid AI development, which hardly offered any accountability. Not to mention that he turned around and started his own AI startup shortly after.

Privacy Concerns

What do you do when AI uses your information, voice, likeness, personality, and physical features without your permission?

AI systems require vast amounts of data to function properly, which can include personal data. The collection and use of personal data raise concerns about privacy and the potential for misuse as AI develops and becomes entwined with our daily lives. Note the recent rollout of Apple Intelligence and big tech drooling over putting AI in every smartphone and app.

Everyone was alarmed by the data Meta collects, and by how Apple and other smartphone makers gather so much sensitive data on the population because everyone has a phone in their pocket. Now imagine AI gaining access to biometric data and using it to recreate deceased individuals, or even people who are still alive. Have you heard of nefarious deepfakes and AI-generated voice videos and music?

Another potential nightmare is data breaches. Because AI is built with close ties to the internet, it can be hacked and suffer breaches of personal data. What would you do to stop identity theft and financial fraud? And what about the lack of transparency that makes it difficult for individuals to know what data is being collected about them and how it is being used in AI systems?

Lack of Transparency

Does an everyday person understand all of the intricacies of the internet (a 30-year-old technology)? What about AI?

AI systems' lack of transparency is a major concern raised by many experts in the field of artificial intelligence. Transparency refers to the ability to understand and explain how an AI system works, including its decision-making process and the data it uses.

There are several reasons why AI systems may lack transparency, including complexity, black box algorithms, data bias, and intellectual property.

Ideally, the world would prioritize transparency in AI development and use. Some examples include developing explainable AI systems and opening up data sets and algorithms to external review and analysis. How many people actually know exactly how AI models are trained?
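As a toy illustration of what "explainable" can mean in practice (the model, weights, and features below are all made up), a simple linear scorer can show exactly why it reached a decision, which a black-box model cannot:

```python
# Hypothetical linear loan-scoring model: weights and inputs are invented.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.9, "years_employed": 0.5}

# Each feature's contribution to the decision is visible and auditable.
contributions = {f: weights[f] * applicant[f] for f in weights}
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>15}: {value:+.2f}")
print(f"{'total score':>15}: {sum(contributions.values()):+.2f}")
```

Modern AI models have billions of weights rather than three, which is precisely why this kind of line-by-line accounting is no longer possible and explainability research exists at all.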

Due to the nature of capitalism, new technology is shrouded in mystery and examined largely through the positive lens of what it could do for society. Many major corporations are invested in AI development in the interest of improving their profitability, but unless they have an in-house development team, they probably don't understand exactly how it works [see the recent fallout of OpenAI's safety team]. That is the problem.

Weaponization

If AI can be used for good, what happens when it is used nefariously?

The weaponization of AI is a concern that has been raised by many experts in the field of artificial intelligence. This refers to the development of AI systems for military purposes, which can have serious implications for global security and civilian safety.

There are several ways in which AI could be weaponized, including autonomous weapons, cyberattacks, surveillance, and propaganda, to name a few major applications. It may seem far away, since no major weaponization of AI has appeared yet, but it only takes one incident for the world to be changed forever. Even now, in the recent conflict in Gaza, both sides have been using AI to identify potential attack targets. The fact that this already exists in modern warfare should be very, very alarming.

The weaponization of AI raises ethical and legal concerns, including questions about accountability, transparency, and the potential for unintended consequences. There have been calls for international agreements to regulate the development and use of AI for military purposes, but what if these weapons integrated with AI fall into the wrong people's hands? Everyone is fearful of AI cyborgs destroying humans, but it’s more probable that it will be humans destroying humans with the aid of AI.

Usage of AI for Mass Auto-Radicalization

AI + Automation: Tools of Mass Influence.

"GPT powered sock puppet accounts promoting an endless stream of propaganda, especially propaganda meant to radicalize individuals. This is my personal AI doomsday scenario, mass scale automated radicalization or “reprogramming” of the human mind. I think it’s something everyone is susceptible to a larger degree than they’re willing to admit. It’s something that’s been known to psychologists for decades and big tech for ~11 years about."

This is a new section added by a supporter, and I agree fully.

To add to the potential for AI bias, how about AI-backed tools used to sway public opinion toward biased views? A good example is China's infamous 50-cent army, which does whatever the CCP directs. Now imagine a zero-cost army that never gets tired and can propagate and amplify whatever messages its creator desires. Recently, Israel has even been reported using AI to spread misinformation.

Once AI unlocks the means to bypass human-verification checks, it can generate new fake users on its own and spread chaos in a way never seen before, to the point where you can't trust anything you read on the internet anymore.

Legal and Regulatory Challenges

AI regulation is a joke: how do you regulate something they intentionally opened the floodgates for?

We see governments like the EU and the US, along with tech leaders and other concerned individuals, trying to implement some regulation of AI development. However, these efforts have largely been swept under the rug or used as attempts to thwart rivals, not as genuine attempts to regulate AI itself.

I believe this is an area of AI risk that a lot of people are overlooking or choosing to live in blissful ignorance about. Legal frameworks and rules must evolve alongside technological advancements, yet every effort we see is chasing AI, hitting its mark long, long after AI has raced ahead.

I believe this is due to the accelerating nature of AI development; yes, that's right, I believe it's speeding up. By the time they regulate AI chatbots, AI will already be in your smartphone; by the time they look at smartphones, AI will already have scraped half the art on the internet; by the time art is taken care of, AI will be in every facial recognition system and government database. You get my point: the old skeletons in Congress have no idea what AI is or how fast it is being developed.

We may never see a day when proper legislation can meaningfully regulate AI, let alone shoot ahead of what could be coming. This is a disaster waiting to happen, as we will always be one step behind.

AI Detection Difficulties

As AI becomes more prominent, AI detection is on the rise. But what about false positives?

I thought this topic was important enough to warrant its own section. If we simplify things to an arms race between AI ghostwriting every part of our society and AI detectors trying to counter it, an important subject still needs to be discussed: false positives.

Beyond just false positives, where do we ultimately draw the line? How will we know with absolute certainty whether AI has become undetectable, even to the best anti-AI technology? Now apply this to any type of media, be it videos, art, stock images, voice, music, movies, newsletters, blogs, businesses, projections, androids, etc. What value does a person have?
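To see why false positives alone are a crisis, here is a minimal sketch (the detector and the essays are entirely hypothetical) of what even a 5% false-positive rate means at scale:

```python
import random

random.seed(0)

def hypothetical_detector(_text: str) -> bool:
    """Stand-in for a real AI-text detector; wrongly flags ~5% of human prose."""
    return random.random() < 0.05

# 100,000 essays, every one genuinely written by a human.
human_essays = [f"essay {i}" for i in range(100_000)]
flagged = sum(hypothetical_detector(e) for e in human_essays)
print(f"{flagged} of {len(human_essays)} honest essays falsely flagged as AI")
```

Roughly five thousand innocent writers accused, and that is with a detector performing at an error rate many real detectors would envy.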

In the meantime, it will just be pain. How will teachers grade students if students grow up with AI around them? Maybe the students will learn to write like an AI because everything they absorb was already made by AI. Is it absolute taboo to talk like an AI? Monkey see, monkey do. This is already too dystopian for me.

Other AI Concerns

Do you think I missed a pillar for AntiAI?

Come join my team and work with me to bring regulation to the space.

I want to cover plagiarism next.

I am always in need of partners to create new media stories and people to work on the website with me!
