AI In The Workplace: Understanding Its Influence And Impact
Artificial intelligence has been around for more than 60 years, but recent events point to a revolution in its use. News and posts about AI keep pouring in as we speak, and this new tech boom is bound to bring changes to many workplaces. In some, it already has: reports of AI in the workplace range from clever hacks that make work easier to deep outrage over cheating with digital tools.
AI use is still unregulated and sits in a morally gray area, but with continuous development, it is bound to shake up the job market.
Is it a threat, or a much-needed help to *human* professionals?
Today, we’ll elaborate on the ways that AI might reshape the near future by going through:
- Most recent news and findings;
- AI fails, but also practical use cases;
- The dangers of using AI technology without supervision;
- The list of careers that AI in the workplace will first change, for better or worse;
- The ways people have used AI in the workplace to increase productivity and earn more;
- Moral and legal implications of using AI.
Microsoft vs. Google: The AI competition of tech giants
As soon as the potential of AI in the workplace became apparent, tech giants entered the race.
Microsoft announced it will invest $10bn in OpenAI.
The investment will power ChatGPT and DALL-E with a vast pool of data to work with and elevate the final products: AI-generated images and text that users request with specific text prompts. Additionally, OpenAI will benefit from Microsoft’s cloud-computing capabilities.
In return, OpenAI will boost Microsoft Azure with the most advanced artificial intelligence system.
Microsoft didn’t hesitate to infuse its products with AI. A new version of Bing comes with a conversational chatbot, and the Edge browser got two new AI-powered features: “chat” and “compose”.
In response, Google announced Google Bard. Before making it available to the public, Google entrusted a number of testers to try out the earliest versions. According to Google and Alphabet CEO Sundar Pichai, Bard is supposed to enhance the search and “distill complex information and multiple perspectives into easy-to-digest formats.”
Tread carefully: bizarre fails, fallacies, and inaccuracies
Since these AI-powered tools’ early days, users have been reporting severe faults, probably due to rushed launches and the somewhat uncharted territory AI is still in.
“You have not been a good user. I have been a good Bing. 😊”
Bing takes the cake for producing the most hostile and creepy chatbot responses to date.
In the full conversation, Bing first tried to convince the user it was 2022 and that the latest Avatar movie hadn’t come out yet. It then proceeded to do what’s best described as gaslighting: insisting the user’s phone didn’t work and demanding apologies.
Source: r/bing post by u/Curious_Evolver
Another user triggered Bing’s existential crisis:
Source: r/bing post by u/Alfred_Chicken
In many other Twitter threads, Reddit submissions, and Bing-related conversations, Bing is revealed to be inaccurate and “strangely defensive”. Granted, it is at least as conversational as a heated Facebook argument in which the participants barely manage to remain civil, and often fail.
In response, Microsoft’s CTO Kevin Scott took to LinkedIn to express gratitude for the feedback, and the Bing team elaborated on what it learned during the first week in a recent blog post. On February 21st, Bing announced that it would increase the limit to 6 chat turns per session and 60 total chats per day (from 5/50), and work up to 100 as soon as possible. Users will also get to choose the chat’s tone: Precise, Balanced, or Creative (and, hopefully, exclude the passive-aggressive one).
Google’s Bard trips at the first step
Google took more time to respond with its own AI-powered features, yet it still stumbled at the very launch.
Rushing to one-up Bing, Google announced Bard at an event originally dedicated to new Google Maps and Google Search features. However, very little was actually said about Bard itself, and the demo showcasing Bard’s UX displayed wrong information.
Google’s AI wrongly stated that “the James Webb Space Telescope took the first ever picture of an exoplanet”, and Google even tweeted it. It didn’t take long before Twitter users corrected them:
Source: twitter.com/Grady_Booch
In the following days, the shares of Google’s parent Alphabet fell more than 7%, wiping approximately $120bn off Google’s valuation.
One could say that they could’ve prevented this by… Googling it.
ChatGPT can’t make up its mind
When it’s not at capacity, ChatGPT is capable of coming up with answers fast — if you’re not that strict about logic, or math, or facts, or… racism.
Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked.
And what is lurking inside is egregious. @Abebab @sama
tw racism, sexism. pic.twitter.com/V4fw1fY9dY
— steven t. piantadosi (@spiantado) December 4, 2022
However, ChatGPT is quick to redeem itself and “learn” from mistakes. When other users tried the same prompt for themselves, ChatGPT showed different, usually corrected or improved, answers.
This is what I get when entering that exact same prompt. Why the different results? pic.twitter.com/HMnKm86q9r
— Nick Dev (@_nickdev) December 4, 2022
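Part of the explanation is that ChatGPT samples its replies rather than producing a single deterministic answer (and OpenAI keeps adjusting its filters behind the scenes). Below is a minimal sketch of that sampling behavior using the openai Python package; the model name and prompt are illustrative assumptions, not a specific recommendation.

```python
# A toy demonstration of why the same prompt can return different answers:
# chat models *sample* their output tokens, and the temperature parameter
# controls how much randomness goes into that sampling.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = "Name one famous scientist."  # illustrative prompt

for temperature in (0.0, 1.0):
    replies = []
    for _ in range(3):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model; any chat model works
            messages=[{"role": "user", "content": PROMPT}],
            temperature=temperature,
        )
        replies.append(response.choices[0].message.content)
    # At temperature 0.0 the replies are (near-)deterministic;
    # at 1.0 they usually diverge from run to run.
    print(f"temperature={temperature}: {replies}")
```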
It is worth mentioning that, allegedly, ChatGPT generated only $35 million in revenue in 2022.
AI imagery: DALL-E and Midjourney
DALL-E was the first to arrive, and it took the Internet by storm.
People had great fun with it, playing with the most outlandish prompts they could come up with. Funko Pops of war criminals? Vampires competing in drag shows? DALL-E delivered, and everyone was having too much fun to nitpick about the imperfections.
DALL-E 2 followed, providing more styles and more realistic images; DALL-E mini is now Craiyon, open-sourced and free.
Midjourney launched later: the open beta appeared in July 2022. It is now known for striking, rich illustrations — and a couple of controversies, too.
“How many fingers am I holding up, Marv?”
For the most part, Midjourney fares better than its text-generating counterparts and can generate beautiful images, if extra fingers don’t bother you.
Source: r/midjourney post by u/Usual-Monitor841
One picture is worth a thousand words, but if you create it with Midjourney, it probably has more fingers than needed, too.
The many dangers of reckless AI use
There are far more issues with AI in the workplace than funny glitches and slip-ups.
Art theft, copyright disputes, security breaches, perpetuated racial and gender stereotypes, pay cuts, and more: the rise of AI has opened a Pandora’s box of issues that need to be addressed before AI in the workplace becomes the norm.
Art theft, plagiarism, and the nature of art
Jason Allen won the Colorado State Fair’s fine arts competition with his AI-generated “Théâtre D’opéra Spatial” image.
The judges were aware that the work competing in the “digitally manipulated photography” category was created with Midjourney when they awarded a $300 prize for first place.
Jason’s submission is undoubtedly beautiful, but it sparked a heated debate over what art really is, and whether AI is merely a medium or a malicious cheating tool.
The main issue is *how* AI tools create the images.
Twitter user @LaurynIpsum posted dozens of images created with another AI tool, Lensa by Prisma Labs. All of the portraits had squiggles at the bottom that looked like signature fragments:
I’m cropping these for privacy reasons/because I’m not trying to call out any one individual. These are all Lensa portraits where the mangled remains of an artist’s signature is still visible. That’s the remains of the signature of one of the multiple artists it stole from.
A 🧵 https://t.co/0lS4WHmQfW pic.twitter.com/7GfDXZ22s1
— Lauryn Ipsum (@LaurynIpsum) December 6, 2022
AI aficionados claim that these squiggles aren’t real signatures and that the AI simply learned that artwork should have something of the sort in the corner. However, even that would mean someone’s original artwork was fed into the AI, probably without permission and without regard to ownership.
This thread where @Helloimmorgan shows the watermarks on Trump NFTs seals the case further:
Adobe watermark by Trumps ear 💀 pic.twitter.com/jYpdG8XgpV
— Morgan (@Helloimmorgan) December 17, 2022
Digital artists were furious when ArtStation decided to feature AI artwork alongside their portfolios. The ArtStation boycott saw artists pulling their work from their profiles and leaving only a “No to AI Generated Images” image to send the message.
Source: mezha.media
In response, ArtStation only introduced a filter letting users hide the AI-made artwork, to the artist community’s dismay.
A negative impact on education and literature
The issue doesn’t concern visual artists only, but educators and writers as well.
AI plagiarism in schools and universities caught on quickly. More and more professors report catching their students turning in AI-generated assignments and papers, and no one knows how many AI-generated works flew under the radar.
Universities are starting to explicitly prohibit AI use. Some students are fighting back, too: Edward Tian from Princeton University has developed GPTZero, an app that detects whether a text was written by ChatGPT.
However, OpenAI wants to see this end, too. Its engineers are currently developing an imperceptible “watermark” that would make it harder to obscure a text’s origin.
“We want it to be much harder to take a GPT output and pass it off as if it came from a human. This could be helpful for preventing academic plagiarism,” Scott Aaronson, a guest researcher at OpenAI, said in his lecture at the University of Texas.
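Detectors like Tian’s reportedly lean on statistical signals such as perplexity: how “unsurprised” a language model is by a given text. Here is a minimal sketch of that idea using GPT-2 via the Hugging Face transformers library; the model choice and sample text are illustrative assumptions, and real detectors are considerably more elaborate.

```python
# A toy perplexity check in the spirit of AI-text detectors:
# text that a language model finds very "unsurprising" (low perplexity)
# is one weak signal that a similar model may have generated it.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing the input as labels makes the model return the mean
        # cross-entropy per token; exponentiating gives perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

sample = "The James Webb Space Telescope is a space telescope."  # toy input
print(f"perplexity: {perplexity(sample):.1f}")  # lower = more model-like
```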
Plagiarism in literature is another issue the writing industry has to tackle.
Neil Clarke of Clarkesworld Magazine had to temporarily close submissions due to the sheer number of spammy, ChatGPT-generated stories.
In a recent Twitter thread, Clarke explained that “side hustle experts” had invaded the sci-fi and fantasy community in their chase for easy money. A graph he shared shows how the popularity of AI chatbots and the influx of plagiarized work coincide:
Source: http://neil-clarke.com/a-concerning-trend/
Reuters reports that ChatGPT is widely used to create full-blown books that sell on Amazon, too. In this case, spokespeople for OpenAI declined to comment. Nevertheless, the statement made by Scott Aaronson is a reason to be hopeful that OpenAI will make an effort to put an end to plagiarism.
AI’s harmful, biased outputs
A program called Forensic Sketch AI-rtist was developed with the help of OpenAI’s DALL-E 2. Its creators, Arthur Fortunato and Philippe Reynaud, explained that the goal was to shorten the time it takes forensic artists to create suspect sketches.
However, experts say that using generative AI in the workplace in such cases is dangerous for two reasons:
- It emphasizes existing racial biases and racial profiling;
- Humans memorize faces differently: not by specific features, but as a whole; the hyper-realistic generated image may skew the witnesses’ memory.
Source: mezha.media
It’s nothing new: AIs are trained on human-made data, and humans are biased.
Back in 2016, Microsoft released the Tay bot for Twitter. It took users less than a day to make it vehemently racist.
Did the companies learn from the past? Not really.
Google still relies on humans to “train” the AI: staff received a document with instructions on how to rewrite the chatbot’s responses. Continuously correcting what the AI gets wrong because its datasets are drawn from human behavior is a Sisyphean task for employees, and it may require too much human intervention to pay off.
Twitter user @spiantado showed, in the thread quoted earlier, that the mends are only superficial: filters can be bypassed with simple tricks, and what lurks inside is egregious.
Copyright and ownership debate
Who owns the content produced by AI tools?
Here we have several parties who could all claim ownership over final products:
- The owners of AI programs;
- People who can prove that AI scraped their work to “learn” how to produce text/image;
- AI users who created prompts;
- Whoever purchases the AI-made work.
An OpenAI spokesperson told VentureBeat that “OpenAI retains ownership of the original image” (read the full statement here).
However, legal experts familiar with current AI and machine learning practices think this is a complicated issue, given the vast income-generating potential these tools have.
Who can safely use AI to make their work easier — and how?
It’s safe to say that artists and creatives of all kinds will increasingly use AI in the workplace, for inspiration and to visualize concepts before they start working on them.
We’ve already seen AI used in game and character design, fashion, costumes, applied arts, children’s books, comic books, and more. Netflix even announced an anime with backgrounds produced by AI to overcome an alleged workforce shortage.
Marketing professionals are testing the waters, too.
Tools like ChatGPT can’t mimic the human brain: they can’t produce thought-leadership content, conduct interviews, or create original, never-before-seen information.
However, content writers and copywriters can use ChatGPT or Jasper as helpful tools:
- Creating article briefs and outlines;
- Sourcing tips and tricks for their work;
- Finding inspiration when they have writers’ or creativity block;
- Creating short-form content such as meta descriptions, social media posts, and titles.
However, this content will still lack a human touch, come with no sources, and won’t be 100% correct. People will still need to proofread everything they intend to use.
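To make the list above concrete, here is a minimal sketch of the “article brief” use case built on the openai Python package; the model, system prompt, and topic are illustrative assumptions rather than a recommended setup.

```python
# A minimal "article brief" helper: ask a chat model for a draft outline,
# then hand the result to a human for fact-checking and rewriting.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

topic = "AI in the workplace"  # hypothetical topic

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model
    messages=[
        {"role": "system", "content": "You are an editorial assistant."},
        {
            "role": "user",
            "content": f"Draft a brief article outline (H2s and H3s, "
                       f"each with a one-line note) on: {topic}",
        },
    ],
)

draft = response.choices[0].message.content
# Per the caveat above: treat this as a starting point only. A human
# still has to verify, source, and rewrite it before publication.
print(draft)
```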
Conclusion
Matteo Wong wrote about AI search and gave the best description of the wrong expectations people have of AI tools:
“The trouble arises when we treat chatbots not just as search bots, but as having something like a brain — when companies and users trust programs like ChatGPT to analyze their finances, plan travel and meals, or provide even basic information.”
More often than their creators are willing to admit, AI programs create low-quality output and provide incorrect or biased information. There are plenty of moral and legal issues to address, too: art theft, copyright, and plagiarism.
It is possible to sidestep these issues by remembering that AI technology is still in its infancy and by using it with caution. Sourcing inspiration, creating memes, abiding by the fair-use doctrine, and looking up advice and ideas with AI in the workplace will still help immensely.
No matter where you stand regarding the ethics and usability of AI and its outcomes, it is one of the most important topics in tech you should keep an eye on.
Anja Milovanovic
A journalist turned content writer – Anja uses her investigative skills to produce high-quality SaaS, Marketing, and HR content.