In Defense of Integrity: Standing with Chris Krebs

In an age where cybersecurity threats are relentless and disinformation moves faster than truth, we need leaders who are brave enough to speak facts—even when it’s inconvenient. Chris Krebs, the former director of the Cybersecurity and Infrastructure Security Agency (CISA), has been that kind of leader from day one.

Krebs didn’t seek fame. He didn’t seek a fight. He sought the truth.

As director of CISA, he led one of the most critical missions in modern government: protecting the integrity of U.S. elections and critical infrastructure. Under his leadership, CISA declared the 2020 election “the most secure in American history”—a statement backed by career security experts, intelligence assessments, and hard data.

That statement, grounded in evidence, got him fired.

And now, years later, in a deeply concerning escalation, the current administration has reportedly revoked his security clearance and ordered an investigation into his work at CISA. Let’s be clear—this isn’t about security. It’s about political revenge.

Krebs has since continued to serve the public good, both as co-founder of the Krebs Stamos Group and in his role at SentinelOne. He remains one of the few voices in the field who speaks plainly, refuses to bend to political pressure, and puts the country before career.

If we want to live in a world where facts matter, where professionals are empowered to do the right thing, and where public servants don’t fear retaliation for speaking truth, then we must stand by Chris Krebs.

This isn’t about party. It’s about principle.

We owe our respect—and our support—to those who prioritize the safety of the country over the safety of their own jobs. Krebs did exactly that.

And we should all be damn grateful he did.

Terraform Cloud with Vault

Messing around with Terraform this weekend, I dove into some new functionality for storing data in HashiCorp Vault, and I was blown away by how much I could automate using Terraform Cloud. The integration between these two tools has helped me automate a lot in my home lab, making it more efficient and secure.

Simplifying Secrets Management with Vault

HashiCorp Vault is a powerful tool for securely storing and accessing secrets. It provides a centralized way to manage sensitive data, such as API keys, passwords, and certificates. Vault’s dynamic secrets feature is particularly impressive, allowing for the automatic generation and rotation of secrets. This significantly reduces the risk of secret sprawl and unauthorized access.
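As a rough sketch of what dynamic secrets look like in practice, here is how the database secrets engine can be wired up with the Terraform Vault provider. The connection URL, role names, and TTL are placeholders for illustration, not my actual setup:

```hcl
# Hypothetical example: configure Vault's database secrets engine so it can
# issue short-lived PostgreSQL credentials on demand instead of static ones.
resource "vault_mount" "db" {
  path = "database"
  type = "database"
}

resource "vault_database_secret_backend_connection" "postgres" {
  backend       = vault_mount.db.path
  name          = "homelab-postgres"
  allowed_roles = ["app-role"]

  postgresql {
    connection_url = "postgresql://{{username}}:{{password}}@db.local:5432/postgres"
  }
}

resource "vault_database_secret_backend_role" "app" {
  backend = vault_mount.db.path
  name    = "app-role"
  db_name = vault_database_secret_backend_connection.postgres.name
  creation_statements = [
    "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';"
  ]
  default_ttl = 3600 # credentials expire after an hour
}
```

With a role like this in place, every credential Vault hands out is unique and expires on its own, which is what kills secret sprawl.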

Automating Infrastructure with Terraform Cloud

Terraform Cloud is a robust platform for infrastructure as code (IaC) management. It enables teams to collaborate on Terraform configurations, providing a consistent and reliable way to manage infrastructure. Terraform Cloud’s powerful automation capabilities allow for the continuous integration and deployment of infrastructure changes, ensuring that environments are always up-to-date and compliant.

Unleashing the Potential of Terraform Cloud and Vault

Combining Terraform Cloud with HashiCorp Vault has been a game-changer for my projects. Here’s how I utilized these tools over the weekend:

  1. Automated Secrets Storage: Using Terraform Cloud, I automated the process of storing and managing secrets in Vault. This eliminated the manual steps typically required, ensuring that secrets are securely stored and easily accessible when needed.
  2. Dynamic Secret Generation: I leveraged Vault’s ability to generate dynamic secrets, automating the creation of temporary credentials for various services. This not only improved security but also simplified the management of credentials.
  3. Infrastructure Provisioning: With Terraform Cloud, I automated the provisioning of infrastructure components that require access to secrets. By integrating Vault, these components could securely retrieve the necessary credentials without hardcoding them in configuration files.
  4. Policy Management: I used Terraform Cloud to define and manage Vault policies, ensuring that the right permissions were in place for different users and applications. This centralized approach made it easier to enforce security best practices across the board.
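A minimal sketch of steps 1 and 4 using the Terraform Vault provider. The mount path, secret name, and policy are made up for illustration; the actual secret value comes in as a sensitive Terraform Cloud workspace variable rather than living in the config:

```hcl
# Hypothetical example: store a static secret in Vault's KV v2 engine and
# define a read-only policy for it, all managed from Terraform Cloud.
resource "vault_mount" "kv" {
  path = "homelab"
  type = "kv-v2"
}

resource "vault_kv_secret_v2" "api_key" {
  mount = vault_mount.kv.path
  name  = "services/api"
  data_json = jsonencode({
    api_key = var.api_key # supplied as a sensitive Terraform Cloud variable
  })
}

resource "vault_policy" "api_reader" {
  name   = "api-reader"
  policy = <<-EOT
    path "homelab/data/services/api" {
      capabilities = ["read"]
    }
  EOT
}
```

Because the policy lives in the same configuration as the secret, a change to either one goes through the same plan-and-apply review in Terraform Cloud.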

Happy automating!

Backing up PyTorch Settings


Backing Up Settings with Python Scripting

PyTorch stands out as one of the most popular deep learning frameworks due to its flexibility, ease of use, and dynamic computation graph. Managing settings and configurations across different experiments or projects, however, can sometimes become a cluster f*@%. In this blog, I’ll explain a streamlined approach to managing settings in PyTorch using Python scripting, allowing for easy backup and retrieval of configurations.

Understanding the Importance of Settings Management:

  • In any machine learning project, experimentation involves tweaking various hyperparameters, model architectures, and training configurations.
  • Keeping track of these settings is crucial for reproducibility, debugging, and fine-tuning models.
  • Manual management of settings files or notebooks can lead to errors and inefficiencies, especially when dealing with multiple experiments or collaborators.

Leveraging Python for Settings Backup:

  • Python’s versatility makes it an ideal choice for automating repetitive tasks, such as backing up settings.
  • We can create a script that parses relevant settings from our PyTorch code and stores them in a structured format, such as JSON or YAML.

Designing the Backup Script:

  • Define a function to extract settings from PyTorch code. This may involve parsing configuration files, command-line arguments, or directly accessing variables.
  • Serialize the settings into a suitable format (e.g., JSON).
  • Implement a mechanism for storing the settings, such as saving them to a file or uploading them to a cloud storage service.
  • Optionally, add functionality for restoring settings from a backup.

Here is a good example.

import json

def extract_settings():
    # Example: extract settings from PyTorch code
    settings = {
        'learning_rate': 0.001,
        'batch_size': 32,
        'num_epochs': 10,
        # Add more settings as needed
    }
    return settings

def backup_settings(settings, filepath):
    # Serialize the settings dictionary to a JSON file
    with open(filepath, 'w') as file:
        json.dump(settings, file)

def main():
    settings = extract_settings()
    backup_settings(settings, 'settings_backup.json')
    print("Settings backup complete.")

if __name__ == "__main__":
    main()
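For the optional restore step, a matching loader is just the mirror image of the backup. A minimal sketch, assuming the same JSON file name used for the backup:

```python
import json

def restore_settings(filepath):
    # Load previously backed-up settings from a JSON file
    with open(filepath) as file:
        return json.load(file)

# Round-trip check: back up, then restore
settings = {'learning_rate': 0.001, 'batch_size': 32, 'num_epochs': 10}
with open('settings_backup.json', 'w') as file:
    json.dump(settings, file)

restored = restore_settings('settings_backup.json')
print(restored == settings)  # True
```

Because JSON round-trips dicts of plain numbers and strings faithfully, the restored settings can be fed straight back into a training script.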

Vault is not a HSM…

Introduction: In the ever-evolving landscape of data security, understanding the tools at our disposal is crucial. Two such tools, HashiCorp Vault and Hardware Security Modules (HSMs), often get mentioned in the same breath but serve distinctly different purposes. This blog post aims to demystify these technologies, highlighting why Vault is not an HSM and how the two complement each other in securing our digital assets.


What is HashiCorp Vault? HashiCorp Vault is a software-based secrets management solution. It’s designed to handle the storage, access, and management of sensitive data like tokens, passwords, certificates, and encryption keys. Vault’s strengths lie in its versatility and dynamic nature, providing features like:

  • Dynamic Secrets: Generating on-demand credentials that have a limited lifespan, thus minimizing risks associated with static secrets.
  • Encryption as a Service: Allowing applications to encrypt and decrypt data without managing the encryption keys directly.
  • Robust Access Control: Offering a range of authentication methods and fine-grained access policies.

What is a Hardware Security Module (HSM)? An HSM is a physical device focused on protecting cryptographic keys and performing secure cryptographic operations. Key aspects include:

  • Physical Security: Built to be tamper-resistant and safeguard cryptographic keys even in the event of physical attacks.
  • Cryptographic Operations: Specialized in key generation, encryption/decryption, and digital signing, directly within the hardware.
  • Compliance-Ready: Often essential for meeting regulatory standards that require secure key management.

Key Differences:

  1. Nature and Deployment:
    • Vault is a flexible, software-based tool deployable across various environments, including cloud and on-premises.
    • HSMs are physical, tamper-resistant devices, providing a secure environment for cryptographic operations.
  2. Functionality and Scope:
    • Vault excels in managing a wide range of secrets, offering dynamic secrets generation and encryption services.
    • HSMs focus on securing cryptographic keys and performing hardware-based cryptographic functions.
  3. Use Case and Integration:
    • Vault is suitable for organizations needing a comprehensive secrets management system with flexible policies and integrations.
    • HSMs are ideal for scenarios requiring high-assurance key management, often mandated by compliance standards.

Why Vault is Not an HSM: Simply put, Vault is not an HSM because it operates in a different realm of data security. Vault is a software layer providing a broad spectrum of secrets management capabilities. It doesn’t offer the physical security inherent in HSMs but excels in managing access to secrets and encrypting data. Conversely, HSMs provide a hardened, secure environment for cryptographic operations but don’t have the extensive management features of Vault.


Complementary, Not Competitive: In a comprehensive security strategy, Vault and HSMs are not competitors but collaborators. Vault can integrate with HSMs to leverage their physical security for key storage, combining the best of both worlds: the flexibility and extensive management of Vault with the robust, physical security of HSMs.
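One concrete point of integration: Vault Enterprise can use an HSM for auto-unseal through a PKCS#11 seal stanza in its server configuration. A hedged sketch, where the library path, slot, PIN, and key labels are all placeholders that depend on your HSM vendor:

```hcl
# Hypothetical Vault server config: auto-unseal using an HSM via PKCS#11.
# lib, slot, pin, and the key labels all depend on your HSM and its setup.
seal "pkcs11" {
  lib            = "/usr/lib/softhsm/libsofthsm2.so"
  slot           = "0"
  pin            = "AAAA-BBBB-CCCC"
  key_label      = "vault-unseal-key"
  hmac_key_label = "vault-hmac-key"
}
```

In this arrangement the HSM protects the root of trust while Vault keeps doing what it is good at: policies, dynamic secrets, and access management.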


Streamlining Presentations: The Power of Automation in PowerPoint Data Generation

Creating the perfect PowerPoint presentation is an art—an equilibrium between compelling content and striking visuals. However, for professionals and developers who need to test the efficiency of co-authoring tools or presentation software, the content itself can sometimes be secondary to the functionality being tested. That’s where the power of automation comes in, particularly in generating mock data for PowerPoint presentations.

I’ve been working on a fun side project: it’s a script that allows users to create ‘fake’ PowerPoint data to simulate various scenarios and test how long it takes to read through the content, in a process akin to co-authoring. For those intrigued by how this automation operates and its potential benefits, you can delve into the details on my GitHub repository.

Why Automate PowerPoint Data Generation?

The reasons for automating data generation are numerous, especially in a corporate or development setting:

  • Testing Efficiency: For software developers and IT professionals, having a tool that automatically generates data can significantly aid in testing the efficiency of co-authoring tools and other collaborative features in presentation software.
  • Training: Automated mock presentations can serve as training material for new employees, helping them get acquainted with presentation tools and company-specific templates.
  • Benchmarking: By standardizing the length and complexity of the generated content, teams can benchmark the performance of their software or the productivity of their staff.

How Does the Automation Work?

The automation script I developed is designed to be intuitive. It populates PowerPoint slides with random text, images, and data. The script takes into account different factors like text length and complexity, mimicking real-world presentations without the need for manual data entry.

Moreover, I incorporated a timing mechanism to assess how long a ‘co-authoring’ read-through would take. This feature is invaluable for software developers who aim to improve the collaborative aspects of presentation tools.
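The core of the idea can be sketched in plain Python: generate pseudo-random slide text, then estimate how long a read-through would take. The word pool and the 200-words-per-minute reading speed here are assumptions for illustration, not necessarily what my script uses:

```python
import random

WORDS = ["strategy", "growth", "quarterly", "synergy", "pipeline",
         "revenue", "roadmap", "metrics", "alignment", "forecast"]

def fake_slide_text(num_words):
    # Build a fake slide body from a fixed pool of filler words
    return " ".join(random.choice(WORDS) for _ in range(num_words))

def estimate_read_seconds(text, words_per_minute=200):
    # Rough reading-time estimate based on word count alone
    return len(text.split()) / words_per_minute * 60

deck = [fake_slide_text(50) for _ in range(10)]
total = sum(estimate_read_seconds(s) for s in deck)
print(f"Estimated read-through: {total:.0f} seconds")  # 500 words -> 150 seconds
```

Writing the generated text into actual slides is then a small step with a library like python-pptx, and standardizing the word counts is what makes the timing numbers comparable across runs.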

It’s up now on my GitHub.

Terraform learning

As someone who hasn’t been using Terraform for years, some of what I’m about to say will be obvious to those of you who already know it’s a powerful infrastructure-as-code (IaC) tool that allows you to automate the provisioning and management of your cloud resources. With Terraform, you can define your infrastructure using a declarative language, and then use that definition to create, update, and destroy your resources in a consistent and repeatable way.

It has been a fantastic tool to get to know. Most fun I’ve had in technology in a long time.

One of the key benefits of using Terraform is that it allows you to abstract away the complexity of the underlying cloud APIs and services. Instead of having to write custom scripts or manually configure each individual resource, you can define your infrastructure in a high-level, human-readable format that can be version-controlled and shared with your team. This makes it easier to collaborate, track changes, and ensure consistency across your infrastructure.

Terraform also provides a number of built-in features and plugins that make it easy to work with a wide range of cloud providers, services, and tools. For example, you can use Terraform to provision infrastructure on AWS, Azure, Google Cloud, and many other cloud providers. Additionally, Terraform supports a wide range of resource types, including compute instances, load balancers, databases, and more.

Another benefit of using Terraform is that it allows you to automate your infrastructure changes with confidence. Because Terraform is declarative, you can see exactly what changes will be made to your infrastructure before you apply them. This helps you avoid unexpected changes and ensures that your infrastructure remains stable and secure.
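To make that concrete, here is a minimal sketch of a declarative resource definition. The provider, region, and AMI ID are placeholders; the point is that running `terraform plan` against this shows exactly what would change before anything is applied:

```hcl
# Hypothetical example: a single AWS instance declared in HCL.
# `terraform plan` previews the create/update/destroy actions;
# `terraform apply` carries them out.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "homelab-web"
  }
}
```

Change `instance_type` and the plan output will show a single in-place update; delete the block and it will show a destroy. That preview is what makes changes safe to automate.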

Terraform is a fantastic tool for automating your infrastructure and managing your cloud resources. Whether you’re working on a small project or a large-scale enterprise deployment, Terraform can help you achieve your goals quickly and efficiently.

Figuring out DKIM

I often wonder why more companies haven’t rolled out DKIM at this point, as it is clearly a fix for so many phishing/spam issues.

DKIM, which stands for DomainKeys Identified Mail, is an email authentication method designed to detect email spoofing and phishing. It works by allowing an organization to attach a digital signature to an email message, which can be validated by the recipient’s email server. DKIM is an important security feature for any organization that sends email, as it helps to prevent fraudulent emails from being delivered to the recipient’s inbox.

In Office365 and Exchange online, not using DKIM can pose several dangers. Here are a few of them:

  1. Increased risk of phishing attacks: Phishing attacks are a type of cyber attack that involve tricking users into revealing sensitive information, such as login credentials or credit card details. Without DKIM, it becomes easier for attackers to impersonate legitimate senders and convince recipients to provide their personal information.
  2. Increased risk of email spoofing: Email spoofing is when an attacker sends an email that appears to be from a legitimate sender, but is actually from a fraudulent source. DKIM helps to prevent email spoofing by verifying that the email actually came from the sender’s domain. Without DKIM, it becomes easier for attackers to impersonate legitimate senders and deceive recipients.
  3. Increased risk of email tampering: DKIM does not encrypt mail, but its digital signature lets the recipient’s server detect whether a message’s content was altered in transit. Without DKIM, a message that has been modified along the way is much harder to detect, because there is no signature to validate against.
  4. Decreased email deliverability: Many email providers, including O365, use DKIM as a factor in their spam filtering algorithms. Without DKIM, emails may be more likely to be flagged as spam or rejected by the recipient’s email server, resulting in decreased email deliverability.
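Rolling out DKIM mostly comes down to publishing key records in DNS and letting the mail service sign outbound messages. A hedged sketch of the DNS side for a hypothetical domain; the tenant name and key material below are placeholders, and Office 365 gives you the exact CNAME targets to create:

```
; Hypothetical DKIM records for example.com.
; Office 365 uses two selectors, published as CNAMEs to Microsoft-hosted keys:
selector1._domainkey.example.com.  IN  CNAME  selector1-example-com._domainkey.contoso.onmicrosoft.com.
selector2._domainkey.example.com.  IN  CNAME  selector2-example-com._domainkey.contoso.onmicrosoft.com.

; A self-hosted mail setup would instead publish the public key directly as TXT:
; selector1._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0G..." (truncated placeholder key)
```

Once the records resolve, signing is enabled per-domain in the Microsoft 365 Defender portal, and receiving servers can start validating your mail.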

Not using DKIM in O365 can pose several dangers, including increased risk of phishing attacks and email spoofing, increased risk of email interception, and decreased email deliverability. Therefore, it is highly recommended that organizations use DKIM to help ensure the security and authenticity of their email communications.

The Dangers of Memory Exploits: Why Developers Need to Do More

Introduction: The world of technology is continually evolving, and with it comes new challenges in ensuring the safety and security of our digital systems. One such challenge is the ever-present threat of memory exploits. These security breaches occur when hackers manipulate a program’s memory to gain unauthorized access, allowing them to steal sensitive data or execute malicious code. This article will discuss the dangers of memory exploits, the importance of developers securing their memory usage, and why using Rust, while helpful, is only part of the solution.

The Dangers of Memory Exploits: Memory exploits are a severe concern for several reasons. They have the potential to impact not only individual users but also large organizations and government institutions. Some of the most critical dangers include:

  1. Data breaches: Hackers can use memory exploits to gain access to sensitive information, such as personal data, financial information, or trade secrets, which can lead to identity theft, financial losses, or corporate espionage.
  2. System instability: When memory exploits occur, it can cause system crashes or introduce new vulnerabilities, leaving the door open for further exploits or rendering the system inoperable.
  3. Loss of trust: Security breaches erode the trust users place in software and hardware products, potentially leading to reduced adoption, market share, and revenue.

The Need for Developers to Secure Memory Usage: Developers play a crucial role in mitigating the risks associated with memory exploits. They can implement various measures to ensure that the software they create is less susceptible to such attacks. Some of these measures include:

  1. Adopting secure coding practices: Developers should follow industry best practices for secure coding, which can help prevent memory exploits by eliminating vulnerabilities from the outset.
  2. Regularly updating and patching software: By keeping software up-to-date, developers can close known security vulnerabilities, reducing the risk of memory exploits.
  3. Conducting security audits: Performing security audits can help identify and fix vulnerabilities in software, providing another layer of defense against memory exploits.
  4. Leveraging secure programming languages: Using languages like Rust can help minimize memory-related vulnerabilities, but it is essential to recognize that this is only part of the solution.

Rust as a Partial Solution: Rust is a systems programming language designed with safety and performance in mind. Its syntax and unique features, such as its ownership system and the borrow checker, help prevent memory-related issues like data races, null pointer dereferences, and buffer overflows. While adopting Rust can significantly reduce the risk of memory exploits, it is not a magic bullet.

  1. Rust’s learning curve: Rust’s unique features and syntax can be challenging for developers familiar with other programming languages, which can slow down adoption.
  2. Existing software: Many applications are already written in other languages, and rewriting them entirely in Rust would be a time-consuming and expensive task.
  3. Rust is not immune to all vulnerabilities: While Rust reduces the risk of memory exploits, it is not entirely immune to other vulnerabilities or programmer errors.
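To illustrate the kind of bug the ownership system rules out: in C, freeing a buffer and then reading from it compiles without complaint, while the equivalent Rust simply refuses to compile. A small sketch:

```rust
// Rust's ownership model: once a value is moved, the old binding is invalid,
// so the classic use-after-free / use-after-move pattern cannot compile.
fn main() {
    let secret = String::from("s3cr3t");
    let moved = secret; // ownership moves here; `secret` is no longer usable

    // println!("{}", secret); // compile error: borrow of moved value `secret`

    assert_eq!(moved, "s3cr3t");
    println!("{}", moved); // prints "s3cr3t"
}
```

The compiler catches this class of error at build time, which is exactly why Rust helps with memory exploits while still leaving logic bugs, unsafe blocks, and dependency vulnerabilities for the other practices above to cover.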

Conclusion: The dangers of memory exploits are very real and have far-reaching consequences. Developers play a vital role in securing their memory usage and should employ a multi-faceted approach to minimize the risk of memory exploits. While adopting Rust can be a step in the right direction, it is important to recognize that it is only part of the solution. By combining Rust with secure coding practices, regular software updates, and security audits, developers can create more secure software and help defend against the threat of memory exploits.

Found an old BlackBerry today…

BlackBerry was a pioneer in the smartphone industry, introducing the concept of an integrated email device in the early 2000s. The company’s devices became extremely popular among business professionals, who appreciated the ability to stay connected while on the go.

However, BlackBerry’s market dominance was short-lived. Despite the company’s early success, it struggled to keep up with the rapidly evolving smartphone market. Here are some of the key reasons why BlackBerry ultimately failed:

  1. Failure to innovate: BlackBerry was slow to innovate and failed to keep pace with the rapidly changing market. The company was slow to adopt touchscreen technology, which became the standard for smartphones, and instead stuck with physical keyboards. This made its devices less appealing to consumers who were looking for more advanced features and designs.
  2. Limited app ecosystem: BlackBerry had a limited app ecosystem compared to competitors like Apple and Google. Developers were less likely to create apps for BlackBerry devices due to the complexity of the platform, which made it more difficult to build and distribute apps.
  3. Lack of focus: BlackBerry’s lack of focus also contributed to its downfall. The company attempted to expand into other areas, such as tablets and smartwatches, but these efforts were largely unsuccessful. By spreading itself too thin, BlackBerry failed to maintain its core business.
  4. Strong competition: BlackBerry faced fierce competition from Apple and Android devices, which quickly took over the smartphone market. These competitors offered a wider range of features and more advanced technology, which made BlackBerry devices less appealing to consumers.
  5. Mismanagement: Finally, BlackBerry’s mismanagement was a significant factor in its failure. The company made several strategic mistakes, including the decision to delay the release of BlackBerry 10, which was meant to be a major overhaul of the platform. This delay allowed competitors to gain more ground and further erode BlackBerry’s market share.

Welcome home Teams

The first shirt I ever bought myself was a Slackware shirt. I was about 15 years old, and Slackware Linux was my obsession. I would come home from school every day to try to make different features work. To this day, one of my proudest moments was making my sound driver work for the first time. Most people I talked to about this thought I was insane; they had Windows, where things just worked.

While I have grown an appreciation for the comforts of technology just working, I would not be where I am today if it was not for those learning curves I had in Slackware. I will always keep a Slackware machine around my house. Why am I ranting about this? Microsoft announced today that there is a Teams package for Linux released.

A lot of folks don’t realize how close Microsoft and Linux have become over the last five years. Recently, when I was at the Ignite conference, I spent a lot of time talking to Red Hat (a sponsor at the event) about how these two companies and technologies have come together.