This is what I will be up to today

While Snowflake provides strong default security capabilities, enterprises must take additional steps to lock down the identity and access layer to prevent unauthorized data access. In this article, we focus on front-end hardening, including authentication mechanisms, federated identity controls, network access constraints, and token lifecycle management. The goal is to ensure that only the right identities, coming from the right paths, under the right conditions, ever reach Snowflake.
Snowflake supports SAML 2.0 and OAuth 2.0-based SSO. You should disable native Snowflake usernames/passwords entirely for production accounts.
Map identity attributes from your IdP, such as role, email, and department. Avoid provisioning ACCOUNTADMIN via SAML; manage this break-glass account separately.
Prefer EXTERNAL_OAUTH over SAML for modern applications and service integrations. Snowflake supports OAuth for both interactive and non-interactive use; restrict token scope, audience, and expiration windows to reduce risk.
Validate the audience claim (aud) and scopes strictly. Avoid issuing tokens with SESSION:ALL unless required, and block ACCOUNTADMIN access through any OAuth app.

| Scope | Description |
|---|---|
| session:role:<role_name> | Restrict token to one role only |
| session:warehouse:<wh> | Bind session to specific warehouse |
```sql
SELECT SYSTEM$EXTRACT_OAUTH_TOKEN_CLAIM('aud', '<access_token>');
```
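Outside Snowflake, the same claim checks can be sketched in plain Python. This is a minimal, illustrative helper (the function names and the policy rules are my own, not part of any Snowflake SDK), and it deliberately skips signature verification, which remains the IdP's and Snowflake's job:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload of a JWT without verifying its signature.

    For inspection and debugging only -- cryptographic validation
    must still happen at the IdP and in Snowflake.
    """
    payload_b64 = token.split(".")[1]
    # Restore stripped base64 padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def check_token_policy(claims: dict, expected_aud: str) -> list:
    """Return a list of policy violations for a decoded token."""
    problems = []
    if claims.get("aud") != expected_aud:
        problems.append("unexpected audience")
    scope = claims.get("scope", "")
    if "SESSION:ALL" in scope.upper():
        problems.append("over-broad SESSION:ALL scope")
    return problems
```

Run this against tokens your OAuth integration issues in a lower environment to confirm the aud and scope claims match what you configured.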
Even if federated auth is enforced, native logins can still exist and be exploited. Use Snowflake Network Policies to harden this path.
- admin_policy: only allows break-glass IP ranges
- default_policy: allows enterprise proxy or VPN egress IPs only

```sql
ALTER ACCOUNT SET NETWORK_POLICY = default_policy;
```
Assign admin_policy only to named break-glass users. Consider a Cloudflare or Zscaler front door to enforce geo/IP conditions dynamically.
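The effect of a network policy can be sketched in a few lines of Python using the standard ipaddress module: a client IP is admitted only if it falls inside an allowed CIDR block. The ranges below are documentation-reserved examples, not real egress addresses:

```python
import ipaddress

# Illustrative CIDR ranges -- substitute your real VPN/proxy egress blocks
ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # enterprise VPN egress (example)
    ipaddress.ip_network("198.51.100.0/28"),  # break-glass range (example)
]

def is_ip_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside any allowed CIDR block."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_RANGES)
```

A helper like this is also handy in CI to sanity-check a proposed policy change before you run the ALTER ACCOUNT statement.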
Identity hardening isn’t complete without strict RBAC practices. Prevent role escalation and sprawl.
Avoid granting SET ROLE = ANY unless strictly required, and prevent the PUBLIC role from having any privileges:

```sql
REVOKE ALL PRIVILEGES ON DATABASE <db> FROM ROLE PUBLIC;
```
Snowflake itself doesn’t natively enforce MFA — this must be handled at the IdP or via authentication proxy.
Snowflake OAuth integrations (especially EXTERNAL_OAUTH) rely on JWT signing keys.
```sql
SHOW INTEGRATIONS;
```
Watch for related failures in LOGIN_HISTORY or EVENT_HISTORY. Service identities are often the weakest link: use automation to provision and deprovision service roles, and tokenize all secrets via a secrets manager like Vault.
Never grant service identities SECURITYADMIN or ACCOUNTADMIN. Capture session:user_agent and session:client_ip, and audit the usage patterns. Don’t trust Snowflake alone for auditing: bring the data into your SIEM.
Export these views:

- LOGIN_HISTORY
- QUERY_HISTORY
- ACCESS_HISTORY
- EVENT_HISTORY

Pipe them into Splunk, Snowflake, or S3 via Snowpipe and monitor for anomalies.
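As a rough sketch of what one such SIEM rule might look like, here is a minimal detector over LOGIN_HISTORY-style records. The field names and thresholds are illustrative, not Snowflake's exact view schema:

```python
from collections import Counter

def flag_suspicious_logins(events, max_failures=5):
    """Flag users with repeated failed logins or native-password auth.

    `events` is a list of dicts loosely modeled on LOGIN_HISTORY rows;
    the field names here are illustrative, not Snowflake's schema.
    """
    alerts = []
    failures = Counter()
    for e in events:
        if not e.get("is_success", True):
            failures[e["user_name"]] += 1
        # Federated auth should be the only path; a native password login
        # suggests the SSO-only posture has a gap
        if e.get("authentication_method") == "PASSWORD":
            alerts.append(f"native password login by {e['user_name']}")
    for user, count in failures.items():
        if count >= max_failures:
            alerts.append(f"{count} failed logins for {user}")
    return alerts
```

In practice the equivalent logic would live as a saved search or detection rule in your SIEM, keyed on the exported history tables.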
In an age where cybersecurity threats are relentless and disinformation moves faster than truth, we need leaders who are brave enough to speak facts—even when it’s inconvenient. Chris Krebs, the former director of the Cybersecurity and Infrastructure Security Agency (CISA), has been that kind of leader from day one.
Krebs didn’t seek fame. He didn’t seek a fight. He sought the truth.
As director of CISA, he led one of the most critical missions in modern government: protecting the integrity of U.S. elections and critical infrastructure. Under his leadership, CISA declared the 2020 election “the most secure in American history”—a statement backed by career security experts, intelligence assessments, and hard data.
That statement, grounded in evidence, got him fired.
And now, years later, in a deeply concerning escalation, the current administration has reportedly revoked his security clearance and ordered an investigation into his work at CISA. Let’s be clear—this isn’t about security. It’s about political revenge.
Krebs has since continued to serve the public good, both as co-founder of the Krebs Stamos Group and in his role at SentinelOne. He remains one of the few voices in the field who speaks plainly, refuses to bend to political pressure, and puts the country before career.
If we want to live in a world where facts matter, where professionals are empowered to do the right thing, and where public servants don’t fear retaliation for speaking truth, then we must stand by Chris Krebs.
This isn’t about party. It’s about principle.
We owe our respect—and our support—to those who prioritize the safety of the country over the safety of their own jobs. Krebs did exactly that.
And we should all be damn grateful he did.
Messing around with Terraform this weekend, I dove into some new functionality for storing data in HashiCorp Vault, and I was blown away by how much I could automate using Terraform Cloud. The integration between these two tools has helped me automate a lot in my home lab, making it more efficient and secure.
HashiCorp Vault is a powerful tool for securely storing and accessing secrets. It provides a centralized way to manage sensitive data, such as API keys, passwords, and certificates. Vault’s dynamic secrets feature is particularly impressive, allowing for the automatic generation and rotation of secrets. This significantly reduces the risk of secret sprawl and unauthorized access.
Terraform Cloud is a robust platform for infrastructure as code (IaC) management. It enables teams to collaborate on Terraform configurations, providing a consistent and reliable way to manage infrastructure. Terraform Cloud’s powerful automation capabilities allow for the continuous integration and deployment of infrastructure changes, ensuring that environments are always up-to-date and compliant.
Combining Terraform Cloud with HashiCorp Vault has been a game-changer for my projects. Here’s how I utilized these tools over the weekend:
Happy automating!
Backing Up Settings with Python Scripting
PyTorch stands out as one of the most popular frameworks due to its flexibility, ease of use, and dynamic computation graph. Managing settings and configurations across different experiments or projects can sometimes become a mess. In this blog, I’ll explain a streamlined approach to managing settings in PyTorch using Python scripting, allowing for easy backup and retrieval of configurations.
Understanding the Importance of Settings Management:
Leveraging Python for Settings Backup:
Designing the Backup Script:
Here is a good example.
```python
import json

def extract_settings():
    # Example: extract settings from PyTorch code
    settings = {
        'learning_rate': 0.001,
        'batch_size': 32,
        'num_epochs': 10,
        # Add more settings as needed
    }
    return settings

def backup_settings(settings, filepath):
    # Serialize the settings dict to a JSON file
    with open(filepath, 'w') as file:
        json.dump(settings, file)

def main():
    settings = extract_settings()
    backup_settings(settings, 'settings_backup.json')
    print("Settings backup complete.")

if __name__ == "__main__":
    main()
```
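A natural companion to the backup script is a restore helper. This one is not part of the original script; it is a small sketch showing how the JSON file round-trips back into a settings dict:

```python
import json

def restore_settings(filepath):
    """Load a previously backed-up settings dict from JSON."""
    with open(filepath) as file:
        return json.load(file)
```

The restored dict can then be unpacked straight into your training setup, e.g. `optim.Adam(model.parameters(), lr=settings['learning_rate'])`.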
Introduction: In the ever-evolving landscape of data security, understanding the tools at our disposal is crucial. Two such tools, HashiCorp Vault and Hardware Security Modules (HSMs), often get mentioned in the same breath but serve distinctly different purposes. This blog post aims to demystify these technologies, highlighting why a Vault is not an HSM and how they complement each other in securing our digital assets.
What is HashiCorp Vault? HashiCorp Vault is a software-based secrets management solution. It’s designed to handle the storage, access, and management of sensitive data like tokens, passwords, certificates, and encryption keys. Vault’s strengths lie in its versatility and dynamic nature, providing features like:
What is a Hardware Security Module (HSM)? An HSM is a physical device focused on protecting cryptographic keys and performing secure cryptographic operations. Key aspects include:
Key Differences:
Why Vault is Not an HSM: Simply put, Vault is not an HSM because it operates in a different realm of data security. Vault is a software layer providing a broad spectrum of secrets management capabilities. It doesn’t offer the physical security inherent in HSMs but excels in managing access to secrets and encrypting data. Conversely, HSMs provide a hardened, secure environment for cryptographic operations but don’t have the extensive management features of Vault.
Complementary, Not Competitive: In a comprehensive security strategy, Vault and HSMs are not competitors but collaborators. Vault can integrate with HSMs to leverage their physical security for key storage, combining the best of both worlds: the flexibility and extensive management of Vault with the robust, physical security of HSMs.
Creating the perfect PowerPoint presentation is an art—an equilibrium between compelling content and striking visuals. However, for professionals and developers who need to test the efficiency of co-authoring tools or presentation software, the content itself can sometimes be secondary to the functionality being tested. That’s where the power of automation comes in, particularly in generating mock data for PowerPoint presentations.
I’ve been working on a fun side project. It’s a script that allows users to create ‘fake’ PowerPoint data to simulate various scenarios and test how long it takes to read through the content, in a process akin to co-authoring. For those intrigued by how this automation operates and its potential benefits, you can delve into the details on my GitHub repository.
The reasons for automating data generation are numerous, especially in a corporate or development setting:
The automation script I developed is designed to be intuitive. It populates PowerPoint slides with random text, images, and data. The script takes into account different factors like text length and complexity, mimicking real-world presentations without the need for manual data entry.
Moreover, I incorporated a timing mechanism to assess how long a ‘co-authoring’ read-through would take. This feature is invaluable for software developers who aim to improve the collaborative aspects of presentation tools.
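The timing idea can be sketched as a simple words-per-minute estimate. The 200 wpm default and the function name here are my illustration, not the repository's actual code:

```python
def estimate_read_seconds(slide_texts, wpm=200):
    """Estimate how long a read-through of a deck takes.

    slide_texts: list of strings, one per slide.
    wpm: assumed reading speed in words per minute (illustrative default).
    """
    total_words = sum(len(text.split()) for text in slide_texts)
    return total_words / wpm * 60
```

Feeding the generated slide text through an estimator like this gives a baseline to compare against the measured co-authoring read-through.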
It’s up now on my GitHub.
As someone who hasn’t been using Terraform for years, some things I’m about to say will be obvious to those of you who have: you likely already know that it’s a powerful infrastructure-as-code (IaC) tool that allows you to automate the provisioning and management of your cloud resources. With Terraform, you can define your infrastructure using a declarative language, and then use that definition to create, update, and destroy your resources in a consistent and repeatable way.
It has been a fantastic tool to get to know. Most fun I’ve had in technology in a long time.
One of the key benefits of using Terraform is that it allows you to abstract away the complexity of the underlying cloud APIs and services. Instead of having to write custom scripts or manually configure each individual resource, you can define your infrastructure in a high-level, human-readable format that can be version-controlled and shared with your team. This makes it easier to collaborate, track changes, and ensure consistency across your infrastructure.
Terraform also provides a number of built-in features and plugins that make it easy to work with a wide range of cloud providers, services, and tools. For example, you can use Terraform to provision infrastructure on AWS, Azure, Google Cloud, and many other cloud providers. Additionally, Terraform supports a wide range of resource types, including compute instances, load balancers, databases, and more.
Another benefit of using Terraform is that it allows you to automate your infrastructure changes with confidence. Because Terraform is declarative, you can see exactly what changes will be made to your infrastructure before you apply them. This helps you avoid unexpected changes and ensures that your infrastructure remains stable and secure.
Terraform is a fantastic tool for automating your infrastructure and managing your cloud resources. Whether you’re working on a small project or a large-scale enterprise deployment, Terraform can help you achieve your goals quickly and efficiently.
I often wonder why more companies haven’t rolled out DKIM at this point, as it clearly addresses so many phishing and spam issues.
DKIM, which stands for DomainKeys Identified Mail, is an email authentication method designed to detect email spoofing and phishing. It works by allowing an organization to attach a digital signature to an email message, which can be validated by the recipient’s email server. DKIM is an important security feature for any organization that sends email, as it helps to prevent fraudulent emails from being delivered to the recipient’s inbox.
In Office 365 and Exchange Online, not using DKIM can pose several dangers: increased risk of phishing attacks and email spoofing, increased risk of email interception, and decreased email deliverability. Therefore, it is highly recommended that organizations use DKIM to help ensure the security and authenticity of their email communications.
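As a rough illustration of what DKIM adds to a message, here is a sketch using Python's standard email module that checks whether an inbound message even carries a DKIM signature. Note the heavy caveat: presence of the header proves nothing by itself; real validation means cryptographically verifying the signature against the public key published in the sender's DNS, which this sketch does not attempt:

```python
from email import message_from_string

def has_dkim_signature(raw_message: str) -> bool:
    """Return True if the message carries a DKIM-Signature header.

    Presence alone proves nothing -- the receiving server must still
    verify the signature against the d= domain's public key in DNS.
    """
    msg = message_from_string(raw_message)
    return msg.get("DKIM-Signature") is not None
```

Receiving servers (including Exchange Online) perform the full verification automatically and record the result in the Authentication-Results header.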
Introduction: The world of technology is continually evolving, and with it comes new challenges in ensuring the safety and security of our digital systems. One such challenge is the ever-present threat of memory exploits. These security breaches occur when hackers manipulate a program’s memory to gain unauthorized access, allowing them to steal sensitive data or execute malicious code. This article will discuss the dangers of memory exploits, the importance of developers securing their memory usage, and why using Rust, while helpful, is only part of the solution.
The Dangers of Memory Exploits: Memory exploits are a severe concern for several reasons. They have the potential to impact not only individual users but also large organizations and government institutions. Some of the most critical dangers include:
The Need for Developers to Secure Memory Usage: Developers play a crucial role in mitigating the risks associated with memory exploits. They can implement various measures to ensure that the software they create is less susceptible to such attacks. Some of these measures include:
Rust as a Partial Solution: Rust is a systems programming language designed with safety and performance in mind. Its syntax and unique features, such as its ownership system and the borrow checker, help prevent memory-related issues like data races, null pointer dereferences, and buffer overflows. While adopting Rust can significantly reduce the risk of memory exploits, it is not a magic bullet.
Conclusion: The dangers of memory exploits are very real and have far-reaching consequences. Developers play a vital role in securing their memory usage and should employ a multi-faceted approach to minimize the risk of memory exploits. While adopting Rust can be a step in the right direction, it is important to recognize that it is only part of the solution. By combining Rust with secure coding practices, regular software updates, and security audits, developers can create more secure software and help defend against the threat of memory exploits.