Infosec Reading List - August 2025
Each month I publish my reading recommendations, which mainly focus on Information Security (InfoSec) and Outdoor Sports. All InfoSec Reading Lists can be found here. Text in italics represents quotes from the original article.
InfoSec
- FileFix - A ClickFix Alternative - [link]
- ChatGPT is getting ‘memory’ to remember who you are and what you like - the idea of ChatGPT “knowing” users is both cool and creepy - we are entering a situation similar to the one we had with search engines some decades ago, only the technology is far more powerful this time - [link]
- Writing is thinking - Nevertheless, outsourcing the entire writing process to LLMs may deprive us of the opportunity to reflect on our field and engage in the creative, essential task of shaping research findings into a compelling narrative — a skill that is certainly important beyond scholarly writing and publishing. - [link]
- Weak password allowed hackers to sink a 158-year-old company - Incidents have almost doubled to about 35-40 a week since she took over the unit two years ago, Ms Grimmer says. - [link]
- Breaking down ‘EchoLeak’, the First Zero-Click AI Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot - Aim Security discovered “EchoLeak”, a vulnerability that exploits design flaws typical of RAG Copilots, allowing attackers to automatically exfiltrate any data from M365 Copilot’s context, without relying on specific user behavior. - Unlike “traditional” vulnerabilities that normally stem from improper validation of inputs, inputs to LLMs are extremely hard to validate as they are inherently unstructured - this reminds me of discussions in the early 00s where we talked about the importance of differentiating “data channels” from “command channels” in order to be able to validate commands properly. For LLMs, it’s all one channel. Lastly, the article talks about “zero click”, but it seems the victim still has to chat with Copilot about the email - that doesn’t sound entirely “zero click” to me. - [link]
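The data-vs-command-channel point is the same lesson SQL injection taught two decades ago: once untrusted data travels inside the command channel, validation becomes impossible. A minimal sketch with Python's `sqlite3` (illustrative only, not taken from the article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "x' OR '1'='1"

# Data mixed into the command channel: the input rewrites the query itself
# and the WHERE clause becomes always-true.
mixed = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()

# Separate channels: the driver transmits query and data independently,
# so the input can only ever be treated as a value, never as syntax.
separated = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(mixed)      # leaks the secret
print(separated)  # empty: no user is literally named "x' OR '1'='1"
```

For LLM prompts there is no equivalent of the parameterized second form - instructions and retrieved documents arrive as one undifferentiated token stream, which is exactly why EchoLeak-style injection is so hard to filter.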
- SonicWall SMA devices hacked with OVERSTEP rootkit tied to ransomware - [link]
- Malware in DNS - Put very simply, files can be partitioned and stored in DNS TXT records. They can then be retrieved via DNS requests and put back together. This also means these files may persist until the DNS server removes the records or overwrites them thereby providing a form of unwitting file or data storage - In summary, in 2021-2022 an actor was using DNS TXT records to store and possibly deliver ScreenMate malware and stagers for likely Covenant C2 malware infections. - [link]
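The partition-and-retrieve scheme described above can be sketched in a few lines. This simulates the zone's TXT records with a dict instead of real DNS lookups; the domain, chunk size, and naming scheme are all made up for illustration:

```python
import base64

# A file is base64-encoded and split across numbered TXT records.
# (Real TXT strings allow up to 255 bytes; a tiny chunk is used here.)
payload = b"example stager bytes"
encoded = base64.b64encode(payload).decode()
CHUNK = 8
zone = {
    f"{i}.files.example.com": encoded[i * CHUNK:(i + 1) * CHUNK]
    for i in range((len(encoded) + CHUNK - 1) // CHUNK)
}

def fetch_txt(name: str) -> str:
    """Stand-in for a real DNS TXT query against the zone."""
    return zone[name]

# The retriever issues one lookup per chunk and reassembles the file.
parts = [fetch_txt(f"{i}.files.example.com") for i in range(len(zone))]
restored = base64.b64decode("".join(parts))
assert restored == payload
```

Until the zone operator deletes or overwrites those records, the file persists on infrastructure the actor does not own - the "unwitting file or data storage" the article describes.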
- npm ‘accidentally’ removes Stylus package, breaks builds and pipelines - [link]
- The ‘white-collar bloodbath’ is all part of the AI hype machine - Amodei told Cooper. “AI is going to get better at what everyone does, including what I do, including what other CEOs do.” To be clear, Amodei didn’t cite any research or evidence for that 50% estimate. - [link]
- SharePoint Vulnerabilities (CVE-2025-53770 & CVE-2025-53771): Everything You Need to Know - [link]
- Visionary MYNDs: Insights on Teufel’s First Open-Source Speaker - as customers, we should do more to support commercial products built with this kind of mindset - [link]
- The accelerating enshittification of search - For example, only ~1% of ChatGPT users pay today. But as these models become fixtures in workflows, free tiers will disappear. Once users are dependent, monetization begins. It’s only a matter of time. - We are watching the slow-motion collapse of a search ecosystem that once connected people to knowledge and rewarded the humans who created it. Instead of fostering discovery, search is becoming a walled garden of monetized, hallucinated answers, where creators are unpaid ghostwriters for platforms they didn’t agree to feed. - interesting and important article / topic - [link]
- When we get komooted - [link]
- The CISO code of conduct: Ditch the ego, lead for real - [link]
- A bar? A church? China? A stolen iPhone’s baffling journey around the globe - [link]
- Every Reason Why I Hate AI and You Should Too - If a machine is consuming and transforming incalculable amounts of training data produced by humans, discussed by humans, and explained by humans. Why would the output not look identical to human reasoning? - In reality, all we’ve created is a bot which is almost perfect at mimicking human-like natural language use, and the rest is people just projecting other human qualities on to it. - Well, it’s basically glorified plagiarism. While I’d argue LLMs in general are just Plagiarism-as-a-Service, RAG is a lot closer to actual plagiarism than typical LLM behavior. - LLMs act as a sort of magic funnel where users only see the output, not the incalculable amounts of high-quality human-produced data which had to be input. - [link]
- Attacking GenAI applications and LLMs – Sometimes all it takes is to ask nicely! - [link]
- What if A.I. Doesn’t Get Much Better Than This? - I don’t hear a lot of companies using A.I. saying that 2025 models are a lot more useful to them than 2024 models, even though the 2025 models perform better on benchmarks, - [link]
- Ransomware crews don’t care about your endpoint security – they’ve already killed it - RealBlindingEDR is an open-source tool designed to disable endpoint detection and response products, and Crypto24’s custom version is programmed to disable kernel-level hooks from a hardcoded list of 28 security vendors. - [link]
- From Bing Search to Ransomware: Bumblebee and AdaptixC2 Deliver Akira - [link]
This post is licensed under CC BY 4.0 by the author.