Wednesday Newsbytes: Scammers Use AI to Mimic Voices of Loved Ones; Microsoft’s Paranoia; Microsoft’s Medical AI Sees Ghosts; TPM 2.0 Flaw Leaves Billions Vulnerable; Most Disturbing Site on the Internet… and more!
Every day we scan the tech world for interesting news in the world of technology and sometimes from outside the world of technology. Every Wednesday, we feature news articles that grabbed our attention over the past week. We hope you find this week’s ‘Wednesday Newsbytes’ informative and interesting!
In 2022, $11 million was stolen through thousands of impostor phone scams.
AI models designed to closely simulate a person’s voice are making it easier for bad actors to mimic loved ones and scam vulnerable people out of thousands of dollars, The Washington Post reported.
Quickly evolving in sophistication, some AI voice-generating software requires just a few sentences of audio to convincingly produce speech that conveys the sound and emotional tone of a speaker’s voice, while other options need as little as three seconds. For those targeted—often the elderly, the Post reported—it can be increasingly difficult to detect when a voice is inauthentic, even when the emergency circumstances described by scammers seem implausible.
Tech advancements seemingly make it easier to prey on people’s worst fears and spook victims who told the Post they felt “visceral horror” hearing what sounded like direct pleas from friends or family members in dire need of help. One couple sent $15,000 through a bitcoin terminal to a scammer after believing they had spoken to their son. The AI-generated voice told them that he needed legal fees after being involved in a car accident that killed a US diplomat.
According to the Federal Trade Commission, so-called impostor scams are extremely common in the United States. It was the most frequent type of fraud reported in 2022 and generated the second-highest losses for those targeted. Out of 36,000 reports, more than 5,000 victims were scammed out of $11 million over the phone.
Because these impostor scams can be run from anywhere in the world, it’s extremely challenging for authorities to crack down on them and reverse the worrying trend, the Post reported. Not only is it hard to trace calls, identify scammers, and retrieve funds, but it’s also sometimes challenging to decide which agencies have jurisdiction to investigate individual cases when scammers are operating out of different countries. Even when it’s obvious which agency should investigate, some agencies are currently ill-equipped to handle the rising number of impersonations…
When will Microsoft stop getting upset when anyone so much as considers looking elsewhere?
There are times when Microsoft just can’t help itself.
Those times include today, yesterday and always.
For the longest time, Redmond has pressed, prodded and persisted in trying to force human beings to only use Microsoft products.
Why, just the other day Microsoft tried to tease me into moving up the line for the new AI-powered Bing chat by suggesting I make Microsoft my default everything.
Now, it seems, the company has gone back to another form of pernickety, performative perturbation.
I’m indebted to, and sympathetic with, Taras Buria at Neowin who discovered that having the temerity to install Chrome while using Edge Canary incited Microsoft’s ire.
As the download started, a full-width banner ad appeared. It pleaded: “Microsoft Edge runs on the same technology as Chrome, with the added trust of Microsoft.”
And without the added mistrust of Google? Or does only Edge enjoy the added trust of Microsoft while, say, Word doesn’t?
As I’ve wondered so very many times before, why is this necessary?
Edge is a perfectly fine product. Doesn’t advertising such as this feel grotesquely paranoid…
Microsoft Research made some bold claims earlier this year about its new medical artificial intelligence (AI), which is designed to answer queries about medicine and biology.
The software giant said in a Twitter post that its medical AI, called BioGPT, has achieved human parity, meaning it could perform roughly as well as a person in specific situations. That post quickly went viral, and some riding the hype wave of ChatGPT have shown their enthusiasm for the new technology […]
Testing BioGPT’s Accuracy
Despite those claims, Futurism reports that the system is still prone to producing wildly inaccurate answers that no medical professional or researcher would recommend.
When Futurism tested it, the model produced nonsensical answers based on pseudoscientific and supernatural phenomena and sometimes generated misinformation that could be dangerous to poorly informed patients.
Furthermore, like other powerful AI systems that have been known to “hallucinate” erroneous information, BioGPT regularly thinks up medical claims that are so absurd that they are unintentionally amusing.
When asked about the average number of ghosts haunting an American hospital, it cited nonexistent data from the American Hospital Association that claimed the “average number of ghosts per hospital was 1.4.” The AI also said that those “who see the ghosts of their relatives have worse outcomes while those who see unrelated ghosts do not.”
Other weaknesses of Microsoft’s medical AI are more serious. It sometimes repeats conspiracy theories, like suggesting that childhood vaccination can cause autism…
Wasn’t TPM 2.0 supposed to protect your laptop?
The TPM 2.0 chip is designed to help make Windows 11 PCs and other devices more secure, which you may recall from our explainer on TPM 2.0 back when it was announced as a requirement for Windows 11.
It’s also what makes the news that there is a security flaw in TPM 2.0 all the more upsetting. According to a report from BleepingComputer, a newly-discovered vulnerability in TPM 2.0 could allow hackers to execute malicious code, which could in turn give them access to your data or give them escalated privileges on your PC or laptop (via Tom’s Guide).
Should you be worried about the TPM 2.0 vulnerability?
Yes, but it’s a qualified yes. The key words we were looking for were “actively exploited in the wild” or some variation on those terms. The Quarkslab researchers, Francisco Falcon and Ivan Arce, didn’t use that language. Now that doesn’t make the vulnerabilities less real, but it makes us downgrade from a full five-alarm fire alert.
One key reason why this is still concerning is the sheer volume of devices impacted, billions when you factor in Windows PCs and other devices that rely on TPM 2.0. The other reason is that while the original warnings about this flaw went out months ago, Lenovo is the only major OEM to issue a security advisory.
Basically, this means that if you have a Windows 10 or Windows 11 laptop with a TPM 2.0 chip, you have to assume you are impacted for now…
Wherever you look in the world of technology, artificial intelligence is being developed – and sometimes with quite scary results.
Chatbots have said they want access to nuclear codes and have tried to convince people talking with them to break up with their partners by claiming they don’t really love each other.
A guy managed to generate an AI copy of his own voice and showed how it could be used to break into his bank account with only the most basic of personal details.
Then, of course, there’s the worry over what sorts of jobs AI is going to steal from people. And in our pursuit of whether we can do something, it always helps to pause and consider whether we actually should.
Basically, we all need to be a bit more like Jeff Goldblum’s character in Jurassic Park.
With that in mind, an image search site which uses AI has been slammed as ‘the most disturbing AI website on the internet’, and it’s a place called PimEyes.
The basic premise is that you give the site a photo of yourself and it searches the internet to identify any other pictures of you that are online, so you can, in theory, see all the places where images of you appear.
IT security firm Trend Micro is warning billions of iOS and Android users about three phishing scams revolving around fake emails designed to appear as though they come from three trusted companies: delivery firms FedEx and DHL, and tech giant Apple. Phishing involves sending an email or text that impersonates legitimate correspondence from a real company. The goal is to get you to divulge personal information that could be used to steal money from you.
Scammers use phishing attacks in an attempt to get your credit card number, social security number, and more
This scheme works by preying on your emotions. For example, you might get a fake text from FedEx claiming that you won a prize and asking you to click on a link to schedule a delivery date and give the delivery firm your address. Who wouldn’t want to win a prize? Continuing through the message, you might be asked to leave your name and address or other personal data. And then comes the coup de grâce: you are asked for a credit card number, security code, and expiration date to pay FedEx.
While this is done just to get your credit card information, many consumers will let their emotions get the better of them and type in this data even if they know deep inside that they shouldn’t. Sometimes, the requested information is a social security number or a bank account number. The DHL scam is similar to the FedEx one, except that it is sent via email and asks for your DHL account number, which allows the scammers to hijack your DHL account.
So what could get an iPhone user so worried that they might feel compelled to give out personal information? How about receiving a text from Apple saying that your Apple Wallet has been hacked? This bogus info comes via a text that screams “Apple pay was suspended on your device.” The text says that your Apple Pay account will work again once the Wallet app is “re-activated.” A link is provided, and you’re then asked to fill out your address, mobile number, and credit card info…
“We are heading into a world where a flat screen TV that covers your entire wall costs $100, and a four year college degree costs $1 million, and nobody has anything even resembling a proposal on how to systemically fix this,” Andreessen wrote.
The Silicon Valley investor said that sectors provided or controlled by the government have become “technologically stagnant.” Innovation in certain highly regulated sectors, like education and healthcare, “is virtually forbidden,” causing high prices, he wrote. Andreessen said that over time the price of highly regulated products will continue to climb, while less-regulated products, like flatscreen TVs, will become cheaper.
Andreessen pointed to a chart that pulled data from the US Bureau of Labor Statistics from January 2000 to June 2022 to prove his point. The chart showed the price of television sets had decreased more than 80% in two decades, while college tuition and hospital services had each increased by more than 160%.
Data like this — which he says is caused by regulation — makes him less concerned about AI innovation replacing jobs, despite the current “panic.”
“Those industries are monopolies, oligopolies, and cartels, with extensive formal government regulation as well as regulatory capture, price fixing, Soviet style price setting, occupational licensing, and every other barrier to improvement and change you can possibly imagine,” Andreessen wrote.
He does not, however, provide any specific examples of regulation in the blog post.
The billionaire has made similar arguments in the past, saying in 2017 that there are two different economies: one in which innovation is encouraged and moves quickly, and one in which innovation moves slowly due to government regulations.
Since ChatGPT was released in November, the chatbot has spawned speculation that the technology could eventually replace some workers. Insider previously reported that some jobs could be at a higher risk of being replaced by AI than others…
Thanks for reading this week’s Wednesday Newsbytes. We hope you found these articles informative, interesting, fun, and helpful. Darcy & TC.