📡 Cybersecurity Innovation Pulse #29: SandboxAQ CISO Council; SQL-Based K8s Detections; Scattered Spider; And Product News
Covering Nov. 23rd - Nov. 30th
Welcome to Issue 29 of the Cybersecurity Innovation Pulse! I'm Darwin Salazar, Product Manager at Monad and a recovering detection engineer. Each week, I distill the latest and most exciting developments + trends in cybersecurity innovation into digestible, bite-sized updates. Want to stay ahead of the curve? Hit subscribe and get these insights delivered straight to your inbox 🚀
After 2 weeks away, it feels good to be back in the saddle crafting this week’s TCP issue. While AWS re:Invent has grabbed all the headlines this week, I won’t dive into any of that in this piece. For re:Invent news, check out Jonathan Rau’s recaps or the official AWS blog. I highly recommend the former.
Now, let’s jump into all the non-AWS innovative stuff that’s transpired in the security world over the past couple of weeks 🏄🏽♂️
SandboxAQ Launches CISO Council
SandboxAQ, an Alphabet-incubated start-up specializing in combining AI and quantum technologies, has established a CISO Council to spearhead innovation in cryptographic management. This council includes cryptographic and security luminaries such as Taher Elgamal ("father of SSL"), Teresa Shea, and Steven Ramsden.
The council will guide the development of SandboxAQ's security suite, focusing on zero-trust architecture and preparing for quantum-era threats.
SandboxAQ is a special company working on some of technology’s toughest challenges. It’s shaping up to be a once-in-a-generation company like Google, Apple, and Palo Alto Networks. One to watch for sure.
Overreliance on AI Dev Tools?
Snyk recently released a report revealing the widespread adoption of AI coding assistants among developers. Of the 537 engineers and security practitioners surveyed, 96% reported using these tools, and over half said they use them most or all of the time. 92% also acknowledged that these tools frequently generate insecure code.
The report also surfaces a contradiction in the use of AI coding tools: although 86% of the developers surveyed are worried about the security risks of AI code completion tools, they continue to use them.
This is one of those dev velocity + convenience vs. security trade-offs that many organizations are unknowingly making. I personally don’t think this will change until there’s a high-profile attack that is traced back to an AI-generated code vulnerability. You can dig into the report here.
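To make the "insecure code" finding concrete, here's an illustrative (not from the report) example of a pattern AI assistants are often observed suggesting: building a SQL query with string interpolation, which opens the door to SQL injection, alongside the parameterized fix.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Commonly suggested anti-pattern: SQL built via string
    # interpolation, vulnerable to SQL injection.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 — injection returns every row
print(len(find_user_safe(conn, payload)))    # 0 — no user literally named that
```

Both functions look correct at a glance, which is exactly why this class of bug slips through code review when velocity wins out.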
Joint Guidelines for Secure AI System Development by CISA and NCSC
CISA, the U.K.'s National Cyber Security Centre (NCSC), and many other global security agencies have endorsed guidelines for secure AI system development. The guidelines, which have been reviewed and approved by big tech and G7 members, focus on "secure by design" principles across the AI development lifecycle, including secure design, development, deployment, and operation.
The guidelines are fairly practical, and the doc includes a great list of additional resources that dive into secure AI development, adversarial ML, and the G7 Hiroshima AI Process… O.o
Using Snowflake and Panther to Detect K8s Threats
This joint blog post from Panther Labs and Snowflake is a masterclass on Kubernetes threat detection leveraging Panther, Snowflake, and SQL. The post dives into the anatomy of K8s audit logs and detections for a few attacker tactics, including Initial Access, Privilege Escalation, Defense Evasion, and Discovery. It also includes sample logs for each detection, which makes it super easy to connect the dots for the detection use cases.
Source: Panther Labs
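The post expresses its detections in SQL over Snowflake, but the underlying logic is simple enough to sketch in a few lines of Python. Here's a hedged toy version (my own, not from the post) of two classic K8s audit-log detections: `exec` into a running pod and requests from the anonymous user. Field names follow the standard Kubernetes audit event schema.

```python
# Toy detection over Kubernetes audit log events. Flags:
#  - `kubectl exec` into pods (verb=create on pods/exec subresource)
#  - any request made by system:anonymous
SUSPICIOUS_USERS = {"system:anonymous"}

def detect(event):
    alerts = []
    obj = event.get("objectRef", {})
    if (event.get("verb") == "create"
            and obj.get("resource") == "pods"
            and obj.get("subresource") == "exec"):
        alerts.append("pod-exec")
    if event.get("user", {}).get("username") in SUSPICIOUS_USERS:
        alerts.append("anonymous-access")
    return alerts

sample = {
    "verb": "create",
    "user": {"username": "system:anonymous"},
    "objectRef": {"resource": "pods", "subresource": "exec",
                  "name": "web-7f9c"},
}
print(detect(sample))  # ['pod-exec', 'anonymous-access']
```

In the Panther/Snowflake setup, the same predicates become `WHERE` clauses over an audit-log table, which is what makes SQL such a natural detection language here.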
Ransomware Crews Developing GenAI Tools for Cyber-Attacks
Trellix’s CyberThreat Report finds increasing collaboration between ransomware groups and nation-state operations. The report also shows a rise in ransomware attacks using GenAI for phishing, increased nation-state threat activity, and the emergence of new ransomware groups. It highlights the growing use of Go for ransomware, backdoors, and trojans as well.
This report pretty much highlights that the threat landscape is as dicey and political as ever, especially with the wars and geopolitical situations across the globe.
A Look at Yet Another Scattered Spider Ransomware Attack
ReliaQuest's recent report on Scattered Spider highlights the sophistication and speed with which the group executes. The group is said to be responsible for the recent MGM cyberattack, so we already know they’re a force to be reckoned with.
In this attack, which was discovered through a retroactive threat hunt, the group rapidly pivoted from a third-party cloud service to an on-prem network within an hour. Leveraging stolen Okta credentials from a help-desk employee, they conducted socially engineered MFA fatigue attacks and privilege escalation. Tactics included IDaaS cross-tenant impersonation and exploitation of enterprise apps, leading to significant data encryption and exfiltration.
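MFA fatigue is one of the easier tactics in this chain to hunt for: the telltale is a burst of push challenges to one user in a short window. Here's a minimal sketch of that idea; the event shape and thresholds are my own illustrative assumptions, not any vendor's schema.

```python
from collections import defaultdict

def mfa_fatigue_candidates(events, threshold=5, window=600):
    """Flag users who receive `threshold`+ MFA pushes within
    any `window`-second span (hypothetical event format)."""
    by_user = defaultdict(list)
    for e in events:
        if e["event"] == "mfa_push_sent":
            by_user[e["user"]].append(e["ts"])
    flagged = []
    for user, times in by_user.items():
        times.sort()
        for i in range(len(times)):
            # Count pushes inside a sliding window starting at times[i].
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= threshold:
                flagged.append(user)
                break
    return flagged

# 5 pushes to one help-desk account in 5 minutes → flagged.
events = [{"event": "mfa_push_sent", "user": "helpdesk1", "ts": t}
          for t in range(0, 300, 60)]
print(mfa_fatigue_candidates(events))  # ['helpdesk1']
```

In practice you'd run this kind of aggregation over your IdP's system log (e.g. Okta's) rather than in application code, but the detection logic is the same.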
Skyhawk Security's AI-Based Purple Teaming
Skyhawk Security recently announced an AI-based purple team capability called Continuous Proactive Protection. It combines AI-driven red and blue team techniques to identify security weaknesses in cloud infrastructure and simulate attacks. Below are details on how it works:
Discover: Discover the environment inventory and continuously identify the crown jewels
Analyze: Analyze the least resistance paths to the organization’s most important assets
Simulate Attacks: Determine the attack recipes against the high priority crown jewels
Evaluate Defenses: Understand how your defenses detect and respond to threats, identify gaps in your posture, and generate suggestions for pre-verified automated responses as well as remediation recommendations
Automated Learnings: The platform uses the results of this continuous process to adapt its detection capabilities, resulting in “adaptive cloud detection and response”: detections tuned to each customer’s cloud infrastructure
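The "Analyze" step above (finding least-resistance paths to crown jewels) is, at its core, a graph search problem. Here's a toy sketch of that idea, where "least resistance" is simplified to shortest hop count over a hypothetical attack graph; real products weight edges by exploitability, permissions, exposure, and so on.

```python
from collections import deque

# Hypothetical attack graph: nodes are cloud assets, edges are
# possible pivots an attacker could make between them.
GRAPH = {
    "internet": ["load_balancer"],
    "load_balancer": ["web_vm"],
    "web_vm": ["iam_role", "cache"],
    "iam_role": ["customer_db"],   # the crown jewel
    "cache": [],
}

def shortest_attack_path(graph, start, target):
    """BFS: return the fewest-hop path from entry point to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # crown jewel unreachable from this entry point

print(shortest_attack_path(GRAPH, "internet", "customer_db"))
# ['internet', 'load_balancer', 'web_vm', 'iam_role', 'customer_db']
```

Once you have the path, the "Simulate Attacks" and "Evaluate Defenses" steps amount to replaying each pivot and checking whether your detections fire along the way.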
This is similar to the Google Cloud Security Command Center Attack Path Simulation feature that was introduced during RSA earlier this year.
Lacework's AI Assistant for Cloud Security
Lacework has introduced their GenAI assistant, similar to what we’ve seen from Orca, GCP Security Command Center, SentinelOne, and several other solutions. You can ask it questions such as “Which S3 buckets tagged with X in Y region are publicly exposed?” and it will return the assets that meet the criteria.
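Under the hood, that kind of natural-language question boils down to a structured filter over an asset inventory. Here's a hedged sketch of the query the assistant might translate to; the inventory records are made up for illustration, and a real tool would pull them from the cloud provider's APIs.

```python
# Hypothetical bucket inventory records (illustrative only).
buckets = [
    {"name": "logs-prod", "region": "us-east-1",
     "tags": {"team": "platform"}, "public": True},
    {"name": "assets", "region": "us-east-1",
     "tags": {"team": "web"}, "public": True},
    {"name": "backups", "region": "eu-west-1",
     "tags": {"team": "platform"}, "public": False},
]

def public_buckets(inventory, tag_key, tag_value, region):
    """Answer: which buckets tagged X in region Y are publicly exposed?"""
    return [b["name"] for b in inventory
            if b["region"] == region
            and b["tags"].get(tag_key) == tag_value
            and b["public"]]

print(public_buckets(buckets, "team", "platform", "us-east-1"))
# ['logs-prod']
```

The value these assistants add is translating the English question into that filter (or its query-language equivalent) so you don't have to learn each vendor's DSL.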
These types of assistants and copilots, which we covered in depth in TCP Byte #1, are becoming table stakes for any security solution looking to stay competitive.
Wiz's Secure Cloud Development module
During re:Invent, Wiz unveiled the Secure Cloud Development module for their CNAPP. The module provides a great deal of AppSec coverage, including scanning GitHub code repositories and enriching findings with context such as code authorship, plus some cool capabilities around leaked secrets and SBOMs as well.
That's all for this week! I hope you found this issue insightful. Your feedback shapes the future of this newsletter, so drop me a line on what resonated with you or what you'd like to see more of. If you believe others can benefit from these insights, share the love and encourage them to subscribe. Every week, I dive deep into a sea of headlines to curate the most pivotal stories in cybersecurity innovation just for you. Your continued support is a testament to the value this brings. Catch you in the next issue!