AI isn't necessarily taking your job: 12 reasons to ease those worries

Despite fears of job disruption, we can harness the best of generative AI. David Gewirtz delivers good and bad news while exploring the latest in AI.
Written by David Gewirtz, Senior Contributing Editor

Over the years, ZDNET readers have challenged, informed, and inspired me in countless ways. Recently, I wrote about how using ChatGPT as a virtual assistant for a data project saved me many hours of work. Reader @f8lee pushed back on my entire premise, arguing that my use of ChatGPT cost an intern or worker either experience or income. Here's the note:

The important takeaway here is that some intern type did not get the workload (and commensurate experience) so it's merely another example of tech replacing humans. Which is fine -- I'm no Luddite -- but begs the question of how people starting up in new jobs will get some "grunt work" experience?

It's a great question that warrants further exploration. 

Also: AI will have a big impact on jobs this year. Here's why that could be good news

As with nearly every technology that came before it, AI has a dual nature: it presents both challenges and opportunities. It's up to all of us where it goes. Will we integrate AI into our lives as a force multiplier, or will we find ourselves fighting Skynet? Only time will tell.

Challenges facing the long-term use of generative AI

AI is undoubtedly a rapidly evolving field. We've only had widespread access to generative AI and ChatGPT for just over a year, and in that short time we've already seen rapid improvement, real pushback, and ongoing problems.

Also: GPT-4 is getting significantly dumber over time, according to a study

Here are 12 factors that could diminish the usefulness of these tools as we move forward:

  1. Guardrails: As the owners of AI implementations discover that their creations respond in troubling ways, they're adding guardrails that limit the topics the AIs are willing to engage with. This does prevent the AI from saying something heinous, but it may also limit the AI's value as a content production tool.
  2. Limited response length: Currently, most AIs start breaking down if the response requires more than approximately 500 words. This article, for example, is coming in at just under 3,000 words. None of the AIs would be able to do this level of analysis.
  3. Laziness: I have repeatedly tried to get various AIs to do aggregated research -- examining as many sources on a given topic as possible and constructing a literature review. But the AIs tend to fall back on their existing knowledge bases -- and, scarily enough, Wikipedia -- for the knowledge fueling their responses. Today, there's no way to give an AI a research assignment and send it off to read all the literature on a topic; you have to gather the sources yourself and hand them over (see the sketch after this list), which also means there's no way to get a true analysis of the full body of results.
  4. Increased testing for AI-generated content: We're concerned about student plagiarism and about publications using AI to create stories that haven't been vetted by human experts. As detection tools get better at distinguishing AI-generated content, that content might lose its overall value.
  5. Search engines optimizing for what is perceived to be human-generated content: The same filtering applies in search, where engines may deprioritize content they identify as machine-generated.
  6. Data privacy: A big concern about public AI platforms is that the information passed into the AI becomes part of its overall knowledge base. Anything considered confidential shouldn't be given to the AI, so projects that require access to confidential information may have to go without AI assistance.
  7. Intellectual property issues: This goes hand-in-hand with data privacy, in that the AI may produce content that creates liability for its users. In my Halloween image generation test, for example, I noticed how much copyrighted content Dall-E 3 used to produce its output.
  8. Bias and fairness: While AI can help level the playing field for some who are challenged with writing, it also has a well-documented record of bias. While there are many efforts to counter that bias, it's still a substantial problem -- and is likely to remain one -- because humans are inherently biased.
  9. Accuracy: As we've seen over the past year, AIs are capable of generating whopping big lies. While there are techniques for reducing AI hallucinations, the overall concern about accuracy is quite valid.
  10. Adaptation to breaking news situations: Sometimes our world changes instantly. Whether it's a disaster we're suddenly facing or a business change that happened overnight, AIs may not have the necessary knowledge to respond accurately and relevantly. There's a counterargument that AIs constantly sifting through live data may be able to notice changes and trigger alerts, which is particularly relevant for blocking cyberattacks.
  11. Misinformation and misuse: A big concern with AI is just how quickly it can create and spread misinformation. Whether it's generating pixel-perfect fakes of real people or churning out completely false text pitches, AI has no inherent ethical or moral compass and can be corrupted for misuse. Fighting back against that behavior may limit AI use overall.
  12. Regulatory and ethical governance: Depending on how worrisome AI gets, we'll likely see regulatory or statutory defenses against AI behaviors, investment, and use. It's too soon to tell how far that will go, but don't expect the AI freedom era of 2023 and 2024 to remain as wide open as we've become accustomed to.
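
For readers who want to make item 3 concrete, here's a minimal Python sketch of the manual workaround: because you can't send an AI off to do the reading, you gather the source texts yourself and paste them into the prompt. The model name, file names, and prompt wording below are illustrative assumptions, not a recommendation of any particular setup.

```python
# Minimal sketch of the manual workaround for item 3: the AI won't go find and read
# the literature for you, so you collect the sources yourself and feed them in.
# The model name, file names, and prompt wording are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Sources you gathered and saved locally (hypothetical file names).
source_files = ["paper_one.txt", "paper_two.txt", "trade_article.txt"]
sources = "\n\n---\n\n".join(Path(f).read_text() for f in source_files)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You are a research assistant writing a short literature review."},
        {"role": "user",
         "content": "Summarize the main findings, points of agreement, and "
                    "disagreements across the following sources:\n\n" + sources},
    ],
)
print(response.choices[0].message.content)
```

Even then, the model only analyzes what you hand it; it isn't doing the research assignment itself.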

I've just presented a dozen factors that could limit or change how AI is used and how useful it may be. When we look at the question of whether AIs are destined to take our jobs, keep these mitigating factors in mind.

But the impact of generative AI on jobs is not a simple issue. (Back in 2010 -- well before the advent of generative AI -- I explored the subject of job disruption and creation in my book How To Save Jobs.) Today, AI has the potential to destroy some jobs (possibly including my own), but it also has the potential to empower -- and provide deep value to -- workers and employers. We call that disruption, and it's nothing new -- even though every disruption feels new.

Let the disruption begin

Anyone who has spent any time working in or with tech is familiar with disruptive tidal forces.

When the Internet first became practical, we could feel the shifting sands. This was going to change everything.

When smartphones stopped being novelties and became ubiquitous universal computing and communications devices, we could feel the changing currents. This was going to change everything.

And now, with generative AI, the winds of change are blowing hard enough that, with a huff and a puff, AI -- clearly -- is going to blow the roof off of some of the old ways, and change everything.

AI is, fundamentally, one of those classic "good news, bad news" stories. On the one hand, it presents a challenge to employees and a cost-saving opportunity to employers that -- even with an inevitable reduction in quality -- might be hard to resist.

That's because AI is good enough that some companies are considering replacing some of their human writers, programmers, or designers with a bunch of ChatGPT prompts. After all, $20/month is a lot less expensive than a fully burdened salary for an entire department of creatives with demanding personalities. And that's deeply worrisome to workers who need to make a living.

Also: Saving hours of work with AI: How ChatGPT became my virtual assistant for a data project

On the other hand, it presents a powerful opportunity for budget-constrained individuals and small businesses to turbocharge their work output, compete with larger players, and possibly even regain personal time. Add to that the democratizing effect of generative AI, as it enables those challenged linguistically or artistically to produce output that they might not otherwise have been able to offer.

Evolution of work roles and career paths

Technological advancement is inherently disruptive. For those doing the disrupting, the cycle of advancement and obsolescence -- driven by the pursuit of better solutions, more efficiency, more convenience, cost savings, and new capabilities -- introduces a period of promise and growth. But for those in careers left behind, the transition can be painful.

Also: Want to work in AI? How to pivot your career in 5 steps

Take the telephone. Today, we all have smartphones. A century ago, the telephone system was a network of wires, switched by human operators. When electronic switching became practical, driven in part by the use of dial phones, many of those operators lost their jobs. There's a parallel argument to be made that when those jobs went away, others opened up to serve new needs.

The migration was also a challenge for communities and individuals. A Western Electric video, probably from around 1940, shows how the company helped inform its users about the benefits, scope, and new skills the transition would require.

Here are some other technologies that disrupted traditional work roles:

  • Horse-drawn carriages replaced by cars: All those skills related to harness making, animal tending, and blacksmithing were replaced by the skills necessary to make our cars go vroom! And even those skills are changing again, because electric vehicles require specialized knowledge that internal combustion engine mechanics have to learn.
  • Film cameras replaced by digital cameras: Those digital cameras, in turn, were mostly replaced by smartphone cameras. The transition put thousands of film processing labs out of business, with the concomitant loss of jobs. Also devastated were companies that couldn't make the shift from producing film to producing digital media.
  • Pagers and public payphones: These two industries were also replaced by the smartphone and its Internet-based mobile infrastructure.
  • Movable type replaced by electronic typesetting: There used to be an industry of specialized tradespeople who set physical blocks of type to create newspapers and other publications. These were replaced, first by electronic typesetting machines, then by page layout programs like PageMaker, and then by the Web and social media. We now have SEO experts and social media consultants, and we bloggers do most of our own "typesetting" by entering our articles into a content management system.

Examples could go on and on. Robots have replaced workers in factories. Automated tellers have replaced many bank workers. There has been a never-ending stream of disruption. The fact that generative AI can produce text and images that approach the quality produced by skilled writers and designers is scary. But technological disruption has shaken up the jobs of blue-collar workers, white-collar workers, and knowledge workers since, well, since there were collars.

Also: The incredible evolution of smartphone cameras and how AI powers a dazzling future

There's no doubt that today's generative AI offerings have limitations that keep them from producing at the quality of a trained professional journalist, for example. But there is the risk that even if the quality can't be reproduced, some companies might choose to sacrifice quality in exchange for very inexpensive quantity.

Automation, whether it's used to replace writers like me or auto workers, can be deeply worrisome when it comes to job security, professional identity, perceived quality, and the actual quality of the work. No doubt, we'll be facing the consequences of unintentional misinformation caused by AI errors, the spamming of lower-quality content, and the devaluation of creative work and artistic contribution.

But, as we've seen, change will change. And, as we'll see next, there can be benefits to change as well.

Augmenting and amplifying individual skills

My wife runs a small e-commerce business based on a popular hobby. As part of that business, she hosts a very active Facebook group. Each month, she produces a creative hobby challenge for her group members. Because I have years of experience as a designer and art director for my earlier businesses, my job is to produce an evocative image for each month's challenge.

Before I started using Dall-E 3 and Midjourney, I did my best with clipart and Photoshop. My illustrations were workable, but they weren't awesome. By tapping into text-to-image AI capabilities, I've taken to writing very evocative prompts, and the AIs have given me back results that have improved her presentation game by orders of magnitude. Her group members love the new challenge images, which help inspire them to participate.
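
For anyone curious about what that looks like in practice, here's a rough sketch of generating an image from a text prompt with the OpenAI Images API. The prompt, model name, and size are illustrative placeholders (and Midjourney works through its own chat-style interface rather than code like this).

```python
# Rough sketch of prompt-driven image generation via the OpenAI Images API.
# The prompt text, model name, and output size are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

result = client.images.generate(
    model="dall-e-3",
    prompt=("A cozy autumn craft table scattered with yarn, fabric scraps, and a "
            "steaming mug, warm evening light, whimsical illustration style"),
    size="1024x1024",
    n=1,
)

# The API returns a URL (or base64 data) pointing at the generated image.
print(result.data[0].url)
```

The craft of it is almost entirely in the prompt; the code is just plumbing.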

Also: Can AI detectors save us from ChatGPT? I tried 5 online tools to find out

My use of the AI doesn't take work from anyone else. It simply makes my work output, as intended for this application, much better.

When @f8lee wrote in their comment that it was "merely another example of tech replacing humans," that wasn't true in my case. No humans were replaced. I was never going to hire anyone to do these jobs. However, the quality of my work output has increased far beyond my base skillset. And I got to spend a few more hours with my family.

Sure, some people are using AIs to do their work for them. I'm concerned that students will use AIs to write their papers, losing out on important hands-on experience that's a major factor in learning.

AIs can enable folks to go beyond their skillsets, or offload work to recover time. I often think about business owners who are great at the technical aspects of their businesses but aren't great at writing, and who need to produce letters and marketing content for their businesses. An AI could help those people produce output at parity with folks who are more skilled at writing.

Also: AI is changing cybersecurity and businesses must wake up to the threat

For the past year, we've talked about many places where AI can serve as a "force multiplier" -- enabling creative work, innovation, and efficiency that would be impossible for individuals or small teams alone. I've discussed a few ways that these tools have worked for me, including helping me set up an Etsy store and creating much of the marketing for my two record albums.

There's a fine line here, though. I wanted the AI's help with the art and the marketing copy (and, besides, I do these projects to learn and then report what I've learned back to you). But I also wanted my listeners to know that my music was mine, not created by an AI -- even though I used the AI for help with the art and promo copy. As such, each of my albums on my personal music site ends with the phrase, "No AI was used in the creation or mixing of this music."

Then there was the original article that inspired @f8lee's comment. I used ChatGPT to help me quickly chop down some data, saving me clerical time that would never have been delegated but would have come out of my sleep or family time.

Another project involved feeding ChatGPT Plus some sentiment data I'd gathered from users of my software projects and enlisting it to do a wide range of analytical evaluations on that data. I got helpful, informative results in under an hour. The alternative would have been a few weeks of work writing custom software to parse, manage, and process all the data.
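
To give a flavor of that kind of delegation in code form -- I did mine interactively in ChatGPT Plus, so the file name, model, and questions below are purely illustrative assumptions -- the same idea via the API might look something like this:

```python
# Sketch of delegating data analysis to a chat model via the API. I actually worked
# interactively in ChatGPT Plus; the file name, model, and questions here are
# illustrative assumptions, not my exact workflow.
import csv
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical export of user feedback with a sentiment score and a comment column.
with open("user_feedback.csv", newline="") as f:
    rows = list(csv.DictReader(f))

feedback = "\n".join(f"{row['sentiment']}\t{row['comment']}" for row in rows)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You analyze user-feedback data and report clear, specific findings."},
        {"role": "user",
         "content": "Here is tab-separated feedback (sentiment score, then comment). "
                    "Summarize overall sentiment, recurring complaints, and feature "
                    "requests, and flag anything surprising:\n\n" + feedback},
    ],
)
print(response.choices[0].message.content)
```

The point isn't the specific code; it's that the heavy lifting of parsing and summarizing moves to the model.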

Also: The moment I realized ChatGPT Plus was a game-changer for my business

Again, neither of these projects would have been delegated. They either would have taken me a lot longer, or they would never have been done, robbing me of the insights and understandings I got from them.

This is how generative AI has the potential to be a force multiplier for individuals working on projects. That said, it's hard to foresee whether AI will increase its value and threat, or find its value reduced -- thereby also reducing its potential for world domination.

Also: Is AI in software engineering reaching an 'Oppenheimer moment'? Here's what you need to know

What do you think? Let us know in the comments below.


If you're interested in reading How To Save Jobs, it's available in a fancy print edition, as a free PDF via a non-profit, and for a buck -- because Amazon won't let me give it away for free -- as a Kindle book.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
