Workers Speak Out Against Software Company [24]7.ai



Earlier this month, in a series of viral TikToks, a remote worker alleged her employer—an undisclosed customer service company that runs campaigns for financial technology company Klarna—spied on her through her webcam, locked her out of her computer for getting up from her desk, and fired her for speaking out on TikTok. The Daily Dot spoke with the TikToker in a recent article.

Following the report, five former employees came forward to the Daily Dot to name the accused company as artificial intelligence (AI) technology company [24]7.ai. Klarna has since confirmed [24]7.ai is the company in question. These former employees anonymously shared various stories of a “toxic” work environment allegedly maintained by the company, which runs customer services campaigns for a variety of notable clients—including AT&T, Verizon, Walmart, Target, Kohl’s, and Urban Outfitters. The Daily Dot reviewed paystub documentation and a termination letter to confirm the individuals were once employed by [24]7.ai.

The former employees alleged verbal abuse; discriminatory behavior toward women and LGBTQ people; worked hours missing from pay stubs; a broken human resources (HR) reporting system; and unreliable, disruptive software at [24]7.ai. Furthermore, two of these sources claim they were encouraged to suppress customer satisfaction surveys while at [24]7.ai.


A spokesperson for [24]7.ai told the Daily Dot in a statement it is investigating these claims.

“[24]7.ai takes these allegations very seriously. We are committed to the success of our employees, customers, and partners and are proud to be recognized as a Great Place to Work around the globe. We promote a work culture where all employees are valued. Our agents have access to many employee programs including career development opportunities, diversity and inclusion programs, and accelerated pay incentives. We provide training on workplace ethics and access to an ethics hotline. We do not tolerate any form of abuse, discrimination, or falsification of campaign results,” [24]7.ai said. 

Multiple former employees who came forward said the AI software created by [24]7.ai to monitor customer service agents via webcam photographs—which was formerly named “Orange Eye”—was faulty and often wrongly marked employees as not working, even when they were, affecting pay and productivity. 

A former manager told the Daily Dot “Orange Eye” took photos of remote workers at intervals ranging from roughly five to 30 minutes. She said the software was supposed to detect phones, work-from-home agents away from their computers, or unauthorized people in the room. However, she and another anonymous manager said the software frequently erred, mistaking faces on posters, T-shirts, and even animals for unauthorized people, and cups, mugs, and other household items for cellphones.

“Despite the fact that ‘AI’ is in the company’s name, and that’s actually their main business, the AI behind that software taking photos and everything was junk. It was absolutely terrible,” a former manager said. “As someone that was in management, I would constantly bring up issues like this, but they would always get hand-waved and dismissed. Nobody ever really took it seriously.”


Although the remote monitoring software was supposed to allow employees to step away for lunch and bathroom breaks, one former manager alleged the software took photos of employees after they locked their computers. In these situations, the manager said, the software would lock employees out of their laptops and require supervisor approval before they could regain access to the device. 

“Because of Orange Eye lockouts, I had to remain on call pretty much basically the entire time of our hours of operation, which was from 7-1am in case something happened, and there wasn’t a manager available,” the former manager said. 

Multiple former employees said the constant surveillance took a mental toll. 

“People knowing they’ve got Big Brother watching their every move, that the camera might misfire and still be going off while their computer is locked, just the stress of knowing they’re essentially off the clock because their system hasn’t been unlocked in a timely manner—that absolutely impacted the mental health of everybody involved,” one former employee said. “For the company’s branding in a bunch of their stuff, they’re really into the color orange, and so I think it had to do with why they named it that. It’s proprietary; they developed it themselves. … Not Big-Brother-y at all.” 

[24]7.ai told the Daily Dot that webcam monitoring is a security measure it takes to protect its clients and customers. 


“We use webcams for remote monitoring of agents working from home (WFH) to ensure the confidentiality and data privacy of our clients and that of their consumers. Agents are trained on information security best practices, equipment use, remote monitoring via webcam, workspace set-up to ensure their own privacy, and processes for logging on and off when they need to step away from their workspace. Employee acknowledgement of these policies and procedures is recorded in our learning management system,” [24]7.ai said in a statement. 

When asked what kind of breaks are allowed for employees to step away from the computer, [24]7.ai responded by stating the following: “We have agents working in different countries around the world. We follow the local employment law based on where the employee is physically located.”

[24]7.ai also said it no longer uses the name Orange Eye to refer to the software and denied the system wrongly deducted pay for time agents spent locked out of computers. 

“Yes, ‘Orange Eye’ was an internal name we used to communicate the remote monitoring webcam to employees. We no longer refer to it as such, instead we refer to it as [24]7 Remote. … [24]7 Remote, previously noted as ‘Orange Eye,’ has reported false positives and like any AI technology uses machine learning and human intervention to improve the AI models to reduce these false positives. We have a very efficient process for agents to report false positives allowing them to quickly get back to work within minutes. Agent pay is not negatively impacted when this occurs,” [24]7.ai said in a statement. 

Despite [24]7.ai’s denial that pay is affected by AI errors, all five former employees said they witnessed workers losing pay for hours they had worked because of technological errors. In addition, two managers and one team leader told the Daily Dot the timekeeping system created by [24]7.ai, which they say was named “Time On,” was flawed. These employees told the Daily Dot the software would often crash, wrongly deduct time for breaks that were supposed to be paid, and completely miss chunks of worked hours for some employees. 


“We could have made what we called ‘manual time cards,’ or a time card by hand, to make up the missing hours [for an upcoming paycheck], … but, oftentimes, it got to the point that the managers were getting written up or threatened to be written up because of the constant manual timecards. And the tickets we submitted for Time On, our technology team would close out as user error,” one former manager said. 

A former team leader similarly criticized the manual time cards.

“We had to compare [the manual time cards] to the login system, because the tools that [24]7 used for agents to clock in on didn’t always work correctly and that in and of itself was a nightmare,” a former team leader said. 

[24]7.ai responded to criticisms of Time On with the following statement: “Time On is a dashboard but it is not specifically used to track time for payroll purposes. Payroll uses reports from the agent workspace consoles. We have an established exception management process for agents to dispute any payroll concerns and have reviewed the data to find that the accuracy rate of our reporting is over 99%.” 

Beyond concerns over hours and pay, two former managers also said they were pressured by upper management to suppress customer satisfaction surveys in customer service campaigns. When responding to customer complaints and inquiries on social media, the two former managers said, customer service agents were supposed to auto-generate surveys to collect customer service feedback. They alleged agents had workarounds to avoid automatically sending surveys when they anticipated the feedback would be negative. 


“The work environment was extremely toxic for the entire time I worked there. … We were encouraged to suppress customer satisfaction surveys if we thought that they were going to be negative,” one former manager said. 

The aforementioned manager said she was personally asked to suppress negative surveys on at least three occasions. Another manager said he was pressured to suppress survey results for one campaign he worked on.  

“The management that was under these guys at the top, we kind of felt like we always had a gun to our head, and we were always sort of under pressure and feeling threatened. It kind of created this culture where everybody had to lie and hide and fake to be able to feel comfortable and keep their job. … We literally got pressured by the management above us to close these social media cases that we were working on with dispositions that would suppress surveys,” the manager said. 


A former agent said she faced similar pressures to manipulate campaign outcomes. 

“[We] always found ways to manipulate the tools in our favor. We manipulated the metrics on a regular basis because we were given unrealistic goals in our work. The clients never had a clue we were fudging a lot of stuff,” the agent told the Daily Dot. 


As previously mentioned, [24]7.ai said it is investigating these claims and does “not tolerate” any “falsification of campaign results.” 

Former employees also complained of improper hiring and HR practices. Two former managers said they discovered from a Google search that a registered sex offender was working at the company in a management role.

“Not only did they put somebody that is a registered sex offender in a position of authority over others, but they also put him in a position where he had access to cellphone accounts, and data, and records, and things like that,” a former manager said. 

[24]7.ai confirmed the individual’s identity and employment with the company in an email to the Daily Dot.  

“Yes, [24]7.ai is aware of this previous employee. He was hired through one of our staffing partners and after 6 months was hired directly by [24]7.ai. A background check was conducted by both parties, his offender status acknowledged, and no recent complaints identified. He was hired by [24]7.ai for a work from home position where he had no physical contact with employees, clients or consumers,” [24]7.ai said in a statement. 


The Daily Dot confirmed the individual identified by the company is a registered sex offender through both the national sex offender public website and the Texas public sex offender registry. 

Other former employees claimed there was a lack of consequences for poor behavior, leading to a culture of bullying and harassment. 

“They hired anyone that could breathe, whether they were professional or not. They had one goal and that was to keep the seats full because the client paid for headcount. Half the agents were slackers. Many were aggressive people. One had guns and bragged about it. He chewed a bullet at his desk, and we complained many times that he did that. HR would not listen to or validate our complaints,” one former agent said. 

One former manager said a former executive-level HR employee made misogynistic comments.

“We revamped our dress code, I believe in 2018, and, as we were rolling out that dress code to the managers to pass down to the reps, she made the comment of, ‘Men cannot help but look so it is a woman’s responsibility to cover up,’” the manager said.  


Multiple former employees said the company also had poor HR practices for protecting LGBTQ people. One former manager alleged an executive-level HR employee, who is still with the company, dead-named and misgendered transgender employees.

“They continuously have poor practicing standards, is the best way I can say it, towards the LGBTQ community. Up to and including [a current executive-level HR employee] saying that they cannot utilize preferred pronouns and continuously dead-naming or misgendering management and reps,” one manager alleged. 

Other employees made similar allegations of discrimination against transgender individuals at the company.

“There was a particular issue I had with an agent who liked to dead-name another agent, and it got to a point where it just escalated to a shouting match at like 7 o’clock in the morning. Of course, I had reported it to HR because I was the team lead. … The only thing HR told me was, ‘Well, sorry. But she doesn’t have to respect that agent’s preferences because that’s not their legal name,’” a former agent said. 

[24]7.ai said the following about the alleged incidents of executive-level HR employees making comments about the company dress code and misgendering trans employees: “We are unable to comment on specific internal HR matters but I can confirm that we investigated these allegations at the time they were reported and the individual is no longer with the company…We do not tolerate any form of abuse or discrimination.” 


Former managers and agents said they were also pressured to regularly work overtime for low wages. One employee said that being overworked in a stressful environment contributed to her decision to leave the company.

“It was a highly stressful thing because we were salaried, but we never really got time off. We were constantly having to be available 24 hours, seven days a week. … At the point when I left, my anxiety was so bad that I was sick every morning. … It was exhausting, and it was hard,” a former team lead said. 

Another former employee, an agent, reflected on her decision to depart from the company. 

“By the time I left, I was broken. Anyone who has been in my shoes there has described quitting as a lot of heaviness being lifted. I was crying daily, and it would turn out many of my colleagues were too up till the moment they left,” the former agent told the Daily Dot. 

Michae Jay, the TikToker who was fired from [24]7.ai after speaking out about her working conditions, told the Daily Dot the story of her termination. She said she was fired by an HR employee and then un-fired in a Zoom call. In the Zoom call, she said, managers said they were conducting an investigation and that she was only suspended. However, two days later, she said she was informed that she was in fact permanently terminated.  


“I hated the job, sure enough,” the TikToker told the Daily Dot in an email. “It was weird how they fired me. … I expressed concerns when I first started working at the job about how I don’t feel secure because every time I come to HR, they never have an answer or resolution to anything. … I don’t have a job now, I’m a single mother trying to figure it out. But I will be ok. I wasn’t going to last in that environment anyways.” 

Some of the former employees said they learned of the Daily Dot’s reporting on Jay’s termination through a network of those who had left the company. 

“We often swap war stories of our time there when we are together. People ask why we stayed so long, and it’s just because we were a band of misfits in life who found family in each other. It was easy for that company to manipulate people who felt they had no other choice. I personally had no work experience because I was a stay at home wife for years, and this was the first job I took as a way to gain my own independence. And, of course, we didn’t wanna abandon each other either so most of us stuck it out for years. Eventually most of my friends and I made it out of there and on to greener pastures,” a former agent said. 

Former employees also said some of the onus should fall on [24]7.ai’s clients.

“I cannot say how much the client knew with regard to the environment while I was there. They should of known. Shame on them if they did not know. … I will not give them a dime of my money. I know who they do business with and the type of people they employ,” one former agent said. 


“The only goal was to make it seem like it was a perfect work environment any time the client came to visit. When they came there were awards and big shows of enthusiasm. We were asked to dress up, and then they would present all these awards and take photos. Once they were gone, we returned to the norm, which was micromanagement, a lot of team leaders yelling at agents, and working in conditions that are intolerable,” another former employee alleged. 

The Daily Dot reached out to some of [24]7.ai’s clients—AT&T, Verizon, Walmart, Target, Kohl’s, Urban Outfitters, and Klarna—for comment via email. Only Klarna responded. 

“We take these matters extremely seriously and have asked [24]7.ai to investigate these claims so that the high standards we have at Klarna are being upheld by the companies we partner with. We have asked [24]7.ai to stop webcam monitoring immediately for any employee working on Klarna related campaigns; they confirmed this change has been made. Klarna will not tolerate any form of discrimination or workplace abuse and will take appropriate action following [24]7.ai’s investigation,” a Klarna spokesperson told the Daily Dot.

