After years of looking exactly the same, this blog finally got a fresh coat of paint. I’ve switched things over to a darker look: black background, light text, and a more code-friendly feel that matches how I actually spend my time these days. If you’re reading this at night or in a dim room, it should be a little easier on the eyes than the old blinding white theme I set up years ago and never touched again.

Behind the scenes, I also moved the site off WordPress.com and over to a local provider. That gives me more control over backups, themes, and customization, and lets me treat this blog a bit more like the rest of my projects instead of something I only poke at every few years. Nothing dramatic is changing about the content, but the plumbing and paint are finally caught up with the present. If something looks weird or broken in the new setup, feel free to let me know—and thanks for still stopping by after all this time.

My Split Heart: Why I’m Defensive of the Linux That Saved Me

There’s a war going on inside me, and it’s fought in terminal commands and neural networks.

On one hand, I am euphoric. The gates have been blown wide open. For decades, the biggest barrier to entry for Linux wasn’t the technology itself—it was the gatekeeping, the assumed knowledge, the sheer terror of being a “moron” in a world of geniuses. You’d fumble with a driver, break your X server, and be met not with a helpful error message, but with a cryptic string of text that felt like the system mocking you.

But now? AI has changed the game. That same cryptic error message can be pasted into a chatbot and, in plain English, you get a step-by-step guide to fix it. You can ask, “How do I set up a development environment for Python on Ubuntu?” and get a coherent, working answer. The barrier of “having to already be an expert to become an expert” is crumbling. It’s a beautiful thing. I want to throw the doors open and welcome everyone in. The garden is no longer a walled fortress; it’s a public park, and I want to be the guy handing out maps.

But the other part of my heart, the older, more grizzled part, is defensive. It’s protective. It feels a pang of something I can’t fully explain when I see this new, frictionless entry.

Because Linux, for me, wasn’t frictionless. It was friction that saved my life.

I was a kid when I first booted into a distribution I’d burned onto a CD-R. It was clunky. It was slow. Nothing worked out of the box. But for a kid who felt out of place, who was searching for a sense of agency and control in a confusing world, it was a revelation. Here was a system that didn’t treat me like a consumer. It treated me like a participant. It demanded that I learn, that I struggle, that I understand.

Fixing that broken X server wasn’t just a task; it was a trial by fire. Getting a sound card to work felt like summiting a mountain. Every problem solved was a dopamine hit earned through sheer grit and persistence. I wasn’t just using a computer; I was communicating with it. I was learning its language. In a world that often felt chaotic and hostile, the terminal was a place of logic. If you learned the rules, you could make it obey. You could build things. You could break things, and more importantly, you could fix them.

That process—the struggle—forged me. It taught me problem-solving, critical thinking, and a deep, fundamental patience. It gave me a confidence that came not from being told I was smart, but from proving it to myself by conquering a system that asked no quarter and gave none. In many ways, the command line was my first therapist. It was a space where my problems had solutions, even if I had to dig for them.

So when I see AI effortlessly dismantling those very same struggles, I feel a strange, irrational bias. It’s the bias of a veteran who remembers the trenches, looking at new recruits with high-tech gear. A part of me whispers, “They didn’t earn their stripes. They don’t know what it truly means.”

I know this is a fallacy. It’s the “I walked uphill both ways in the snow” of our community. The goal was never the suffering; the goal was the empowerment. If AI can deliver that empowerment without the unnecessary pain, that is a monumental victory.

But my love for Linux is tangled up in that pain. It’s personal. It’s the technology that literally saved me by giving me a world I could control and a community I could belong to. I am defensive of it because it’s a part of my identity. I feel a need to protect its history, its spirit, and the raw, hands-on knowledge that feels sacred to me.

So here I am, split.

One hand is extended, waving newcomers in, thrilled to see the community grow and evolve in ways I never dreamed possible. “Come on in! The water’s fine! Don’t worry, the AI lifeguard is on duty.”

The other hand is clenched, resting protectively on the old, heavy textbooks and the logs of a thousand failed compile attempts, guarding the memory of the struggle that shaped me.

Perhaps the reconciliation is in understanding that the soul of Linux was never the difficulty. It was the freedom, the curiosity, and the empowerment. The tools are just changing. The spirit of a kid in a bedroom, staring at a blinking cursor, ready to tell the machine what to do—that remains. And if AI helps more people find that feeling, then maybe my defensive, split heart can finally find peace.

The gates are down. The garden is open. And I’ll be here, telling stories about the old walls, even as I help plant new flowers for everyone to enjoy.

The Hypocrisy of “Strengthening” 

The Leadership Immunity

So, if the business is so strong, why the massive layoff?

The answer is simple, and it has nothing to do with “strengthening culture” or “increasing ownership.” It’s about increasing shareholder value.

Layoffs, especially during periods of high profit, are a well-worn tactic to immediately boost stock prices and profit margins. By cutting 14,000 salaries, benefits, and overhead, Amazon can present a more favorable balance sheet to Wall Street. It’s a direct transfer of stability from employees to investors.

This is the heart of the hypocrisy: Framing a decision made for financial markets as a necessary step for innovation and customer obsession.

While the memo speaks of shared sacrifice and a leaner future, it’s vital to ask: who is actually sharing in the sacrifice?

The executive who signed this letter, Beth Galetti, is not feeling this “leanness.” According to Amazon’s own regulatory filings, her total compensation for 2023 was nearly $14 million, the vast majority of which came in the form of stock awards.

Let that sink in.

The executive presiding over the elimination of 14,000 jobs—jobs that provided mortgages, healthcare, and stability for families—was rewarded with a compensation package worth over 300 times the median Amazon employee’s pay.

This is the ultimate hypocrisy. The “tough decisions” are not being made by people who face any financial insecurity. Their multi-million dollar packages, tied directly to stock performance, incentivize short-term cost-cutting like mass layoffs. For them, “strengthening the organization” means boosting the metrics that directly inflate their own net worth.

AI’s Unintended Path to Self-Destruction

You feel it, don’t you? A low hum of unease beneath the surface of daily life. It’s there when you scroll through your phone, a feed of curated perfection alongside headlines of impending collapse. It’s there in conversations about work, the economy, the future. It’s a sense that the wheels are still turning, but the train is heading for a cliff, and everyone in the locomotive is just arguing over the music selection.

This isn’t just burnout. This isn’t just the news cycle. This is a collective, intuitive understanding that our world is barreling towards a future that feels… self-destructive. And nowhere is this feeling more acute than in our relationship with the breakneck rise of Artificial Intelligence.

We were promised a future of jetpacks and leisure, a world where technology would free us. Instead, we’re handed opaque algorithms that dictate our choices, social media that fractures our communities, and an AI arms race that feels less like progress and more like a runaway train with no one at the controls.

The path we’re on is not the only one. The feeling that something is wrong is the first, and most crucial, step toward changing course. It’s the proof that we haven’t yet surrendered our vision for a world that is not just smart, but also wise; not just connected, but also compassionate.

The future isn’t a destination we arrive at. It’s a thing we build, every day, with our attention, our choices, and our voices. Let’s start building one we actually want to live in.

An Open Question to Microsoft: Let Me Get This Straight…

Let’s rewind the tape for a second.

It’s March 2020. The world screeches to a halt. Offices empty out. A grand, unplanned, global experiment in remote work begins. We were told to make it work, and we did. We cobbled together home offices on kitchen tables, mastered the mute button, and learned that “I’m not a cat” is a valid legal defense.

And you know who thrived in this chaos? You, Microsoft.

While the world adapted, you didn’t just survive; you absolutely exploded. Your products became the very bedrock of this new, distributed world.

Teams became the digital office, the school, the family meeting space.
Azure became the beating heart of the cloud infrastructure that kept everything running.
Windows and Office 365 were the essential tools on every single one of those kitchen-table workstations.

And the market noticed. Let’s talk about the report card, because it’s staggering:

  • 2021: You hit $2 trillion in market cap for the first time.
  • 2023: You became only the second company in history to reach a $3 trillion valuation.
  • You’ve posted record-breaking profits, quarter after quarter after quarter, for four consecutive years.

Your stock price tripled. Your revenue soared. You, Microsoft, became the poster child for how a tech giant could not only weather the pandemic but emerge stronger, more valuable, and more essential than ever before.

All of this was achieved by a workforce that was, by and large, not in the office.

Which brings us to today. And the recent mandate. And the question I, and surely thousands of your employees, are asking:

Let me get this straight.

After four years of the most spectacular financial performance in corporate history…
After proving, unequivocally, that your workforce is not just productive but hyper-productive from anywhere…
After leveraging your own technology to enable this very reality and reaping trillions of dollars in value from it…
After telling us that the future of work was flexible, hybrid, and digital…

You are now asking people to return to the office for a mandatory three days a week?

What, and I cannot stress this enough, the actual fuck?

Where is the logic? Is this a desperate grasp for a sense of “normalcy” that died in 2020? Is it a silent, cynical ploy to encourage “quiet quitting” and trim the workforce without having to do layoffs? Is it because you’ve sunk billions into beautiful Redmond campuses and feel the existential dread of seeing them sit half-empty?

Because it can’t be about productivity. The data is in, and the data is your own stock price. The proof is in your earnings reports. You have a four-year, multi-trillion-dollar case study that says the work got done, and then some.

It feels like a profound betrayal of the very flexibility you sold the world. It feels like you’re saying, “Our tools empower you to work from anywhere! (Except, you know, from anywhere).”

You built the infrastructure for the future of work and are now mandating the past.

So, seriously, Microsoft. What gives? Is the lesson here that even with all the evidence, all the success, all the innovation, corporate America’s default setting will always, always revert to the illusion of control that a packed office provides?

It’s not just wild. It’s a spectacular disconnect from the reality you yourself helped create. And for a company that prides itself on data-driven decisions, this one seems driven by something else entirely.

Implementing OAuth with an MCP (Model Context Protocol) AI Test Server: A Technical Deep Dive

I wasn’t really sure I even wanted to write this up — mostly because there are some limitations in MCP that make things a little… awkward. But I figured someone else is going to hit the same wall eventually, so here we are.

If you’re trying to use OAuth 2.0 with MCP, there’s something you should know: it doesn’t support the full OAuth framework. Not even close.

MCP only works with the default well-known endpoints:

  • /.well-known/openid-configuration
  • /.well-known/oauth-authorization-server

Before we get going, let me write this in the biggest and most awkward text I can find.

Run the Device Flow outside of MCP, then inject the token into the session manually.

And those have to be hosted at the default paths, on the same domain as the issuer. If you’re using split domains, custom paths, or a setup where your metadata lives somewhere else (which is super common in enterprise environments)… tough luck. There’s no way to override the discovery URL.

It also doesn’t support other flows like device_code, jwt_bearer, or anything that might require pluggable negotiation. You’re basically stuck with the default authorization code flow, and even that assumes everything is laid out exactly the way it expects.

So yeah — if you’re planning to hook MCP into a real-world OAuth deployment, just be aware of what you’re signing up for. I wish this part of the protocol were a little more flexible, but for now, it’s pretty locked down.
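
If you’re wondering what that workaround actually looks like, here’s a rough Python sketch of it: run the standard OAuth 2.0 Device Authorization Grant (RFC 8628) against your own identity provider, completely outside MCP, then hand the resulting token to your MCP session yourself. The IdP URLs and client ID below are placeholders, and exactly how you attach the token at the end depends on which MCP client library you’re using.

import time
import requests

# Placeholders: swap in your own identity provider's device authorization and
# token endpoints, plus your real client_id.
IDP_DEVICE_ENDPOINT = 'https://idp.example.com/oauth/device/code'
IDP_TOKEN_ENDPOINT = 'https://idp.example.com/oauth/token'
CLIENT_ID = 'your_client_id'

# Step 1: start the device flow and tell the user where to go
device = requests.post(IDP_DEVICE_ENDPOINT, data={
    'client_id': CLIENT_ID,
    'scope': 'openid mc:inference'
}).json()
print(f"Visit {device['verification_uri']} and enter code {device['user_code']}")

# Step 2: poll the token endpoint until the user approves (or something fails)
access_token = None
while access_token is None:
    time.sleep(device.get('interval', 5))
    resp = requests.post(IDP_TOKEN_ENDPOINT, data={
        'grant_type': 'urn:ietf:params:oauth:grant-type:device_code',
        'device_code': device['device_code'],
        'client_id': CLIENT_ID
    })
    body = resp.json()
    if resp.status_code == 200:
        access_token = body['access_token']
    elif body.get('error') not in ('authorization_pending', 'slow_down'):
        raise RuntimeError(f"Device flow failed: {body}")

# Step 3: inject the token into the MCP session manually, e.g. as a plain
# bearer token on whatever transport your MCP client uses.
mcp_headers = {'Authorization': f'Bearer {access_token}'}

It’s ugly, but it sidesteps the discovery limitations entirely, because MCP never has to talk to your identity provider at all.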

Model Context Protocol (MCP) is an emerging standard for AI model interaction that provides a unified interface for working with various AI models. When implementing OAuth with an MCP test server, we’re dealing with a specialized scenario where authentication and authorization must accommodate both human users and AI agents.

This technical guide covers the implementation of OAuth 2.0 in an MCP environment, focusing on the unique requirements of AI model authentication, token exchange patterns, and security considerations specific to AI workflows.

Prerequisites

Before implementing OAuth with your MCP test server:

  1. MCP Server Setup: A running MCP test server (v0.4.0 or later)
  2. Developer Credentials: Client ID and secret from the MCP developer portal
  3. OpenSSL: For generating key pairs and testing JWT signatures (or see the Python alternative sketched right after this list)
  4. Understanding of MCP’s Auth Requirements: Familiarity with MCP’s auth extensions for AI contexts
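
On the OpenSSL point: if you’d rather stay in Python, here’s a minimal sketch using the cryptography package that writes the private_key.pem file the assertion code in Section 3.2 reads. The equivalent openssl genrsa / openssl rsa -pubout commands get you the same files.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key pair for signing MCP assertion JWTs (RS256).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

with open('private_key.pem', 'wb') as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption()
    ))

with open('public_key.pem', 'wb') as f:
    f.write(key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo
    ))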

Section 1: MCP-Specific OAuth Configuration

1.1 Registering Your Application

MCP extends standard OAuth registration with AI-specific parameters. In the request below, the ai_service client type, the mc:* scopes, and the ai_metadata block are the MCP-specific extensions:

curl -X POST https://auth.modelcontextprotocol.io/register \
  -H "Content-Type: application/json" \
  -d '{
    "client_name": "Your AI Agent",
    "client_type": "ai_service",
    "grant_types": ["authorization_code", "client_credentials"],
    "redirect_uris": ["https://your-domain.com/auth/callback"],
    "scopes": ["mc:inference", "mc:fine_tuning"],
    "ai_metadata": {
      "model_family": "your-model-family",
      "capabilities": ["text-generation", "embeddings"]
    }
  }'

1.2 Understanding MCP’s Auth Flows

MCP supports three primary OAuth flows:

  1. Standard Authorization Code Flow: For human users interacting with MCP via UI
  2. Client Credentials Flow: For server-to-server AI service authentication (sucks, doesn’t work, even the workarounds don’t; don’t do it)
  3. Device Flow: For headless AI environments

Section 2: Implementing Authorization Code Flow

2.1 Building the Authorization URL

MCP extends standard OAuth parameters with AI context:

import json
from urllib.parse import urlencode

auth_params = {
    'response_type': 'code',
    'client_id': 'your_client_id',
    'redirect_uri': 'https://your-domain.com/auth/callback',
    'scope': 'openid mc:inference mc:models:read',
    'state': 'anti-csrf-token',
    'mcp_context': json.dumps({  # MCP-specific context
        'model_session_id': 'current-session-uuid',
        'intended_use': 'interactive_chat'
    }),
    'nonce': 'crypto-random-string'
}

auth_url = f"https://auth.modelcontextprotocol.io/authorize?{urlencode(auth_params)}"

2.2 Handling the Callback

The MCP authorization server will return additional AI context in the callback:

from flask import Flask, request
import json
import jwt
import requests

app = Flask(__name__)

@app.route('/auth/callback')
def callback():
    auth_code = request.args.get('code')
    mcp_context = json.loads(request.args.get('mcp_context', '{}'))  # MCP extension

    token_response = requests.post(
        'https://auth.modelcontextprotocol.io/token',
        data={
            'grant_type': 'authorization_code',
            'code': auth_code,
            'redirect_uri': 'https://your-domain.com/auth/callback',
            'client_id': 'your_client_id',
            'client_secret': 'your_client_secret',
            'mcp_context': request.args.get('mcp_context')  # Pass context back
        }
    )

    # MCP tokens include AI-specific claims. Signature verification is skipped
    # here for brevity; see Section 4.1 for full validation against the JWKS.
    id_token = jwt.decode(
        token_response.json()['id_token'],
        options={'verify_signature': False}
    )
    print(f"Model Session ID: {id_token['mcp_session_id']}")
    print(f"Allowed Model Operations: {id_token['mcp_scopes']}")

    return 'Authenticated', 200

Section 3: Client Credentials Flow for AI Services

3.1 Requesting Machine-to-Machine Tokens

import requests

response = requests.post(
    'https://auth.modelcontextprotocol.io/token',
    data={
        'grant_type': 'client_credentials',
        'client_id': 'your_client_id',
        'client_secret': 'your_client_secret',
        'scope': 'mc:batch_inference mc:models:write',
        'mcp_assertion': generate_mcp_assertion_jwt()  # MCP requirement
    },
    headers={'Content-Type': 'application/x-www-form-urlencoded'}
)

token_data = response.json()
# MCP includes additional AI context in the response
model_context = token_data.get('mcp_model_context', {})

3.2 Generating MCP Assertion JWTs

MCP requires a signed JWT assertion for client credentials flow:

import jwt
import datetime

def generate_mcp_assertion_jwt():
    now = datetime.datetime.utcnow()
    payload = {
        'iss': 'your_client_id',
        'sub': 'your_client_id',
        'aud': 'https://auth.modelcontextprotocol.io/token',
        'iat': now,
        'exp': now + datetime.timedelta(minutes=5),
        'mcp_metadata': {  # MCP-specific claims
            'model_version': '1.2.0',
            'deployment_env': 'test',
            'requested_capabilities': ['inference', 'training']
        }
    }

    with open('private_key.pem', 'r') as key_file:
        private_key = key_file.read()

    return jwt.encode(payload, private_key, algorithm='RS256')

Section 4: MCP Token Validation

4.1 Validating ID Tokens

MCP ID tokens include standard OIDC claims plus MCP extensions:

import jwt
from jwt import PyJWKClient
from jwt.exceptions import InvalidTokenError

def validate_mcp_id_token(id_token):
    jwks_client = PyJWKClient('https://auth.modelcontextprotocol.io/.well-known/jwks.json')

    try:
        signing_key = jwks_client.get_signing_key_from_jwt(id_token)
        decoded = jwt.decode(
            id_token,
            signing_key.key,
            algorithms=['RS256'],
            audience='your_client_id',
            issuer='https://auth.modelcontextprotocol.io'
        )

        # Validate MCP-specific claims
        if not decoded.get('mcp_session_id'):
            raise InvalidTokenError("Missing MCP session ID")

        return decoded
    except Exception as e:
        raise InvalidTokenError(f"Token validation failed: {str(e)}")

4.2 Handling MCP Token Introspection

def introspect_mcp_token(token):
    response = requests.post(
        'https://auth.modelcontextprotocol.io/token/introspect',
        data={
            'token': token,
            'client_id': 'your_client_id',
            'client_secret': 'your_client_secret'
        }
    )

    introspection = response.json()
    if not introspection['active']:
        raise Exception("Token is not active")

    # Check MCP-specific introspection fields
    if 'mc:inference' not in introspection['scope'].split():
        raise Exception("Missing required inference scope")

    return introspection

Section 5: MCP-Specific Considerations

5.1 Handling Model Session Context

MCP tokens include session context that must be propagated:

def call_mcp_api(endpoint, access_token):
    headers = {
        'Authorization': f'Bearer {access_token}',
        'X-MCP-Context': json.dumps({
            'session_continuity': True,
            'model_temperature': 0.7,
            'max_tokens': 2048
        })
    }

    response = requests.post(
        f'https://api.modelcontextprotocol.io/{endpoint}',
        headers=headers,
        json={'prompt': 'Your AI input here'}
    )

    return response.json()

5.2 Token Refresh with MCP Context

def refresh_mcp_token(refresh_token, mcp_context):
    response = requests.post(
        'https://auth.modelcontextprotocol.io/token',
        data={
            'grant_type': 'refresh_token',
            'refresh_token': refresh_token,
            'client_id': 'your_client_id',
            'client_secret': 'your_client_secret',
            'mcp_context': json.dumps(mcp_context)
        }
    )

    if response.status_code != 200:
        raise Exception(f"Refresh failed: {response.text}")

    return response.json()

Section 6: Testing and Debugging

6.1 Using MCP’s Test Token Endpoint

curl -X POST https://test-auth.modelcontextprotocol.io/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=test_client" \
  -d "client_secret=test_secret" \
  -d "scope=mc:test_all" \
  -d "mcp_test_mode=true" \
  -d "mcp_override_context={\"bypass_limits\":true}"

6.2 Analyzing MCP Auth Traces

Enable MCP debug headers:

headers = {
    'Authorization': 'Bearer test_token',
    'X-MCP-Debug': 'true',
    'X-MCP-Traceparent': '00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01'
}

Why did this suck so much?

Implementing OAuth with an MCP test server requires attention to MCP’s AI-specific extensions while following standard OAuth 2.0 patterns. Key takeaways:

  1. Always include MCP context parameters in auth flows
  2. Validate MCP-specific claims in tokens
  3. Propagate session context through API calls
  4. Leverage MCP’s test endpoints during development

For production deployments, ensure you:

  • Rotate keys and secrets regularly
  • Monitor token usage patterns
  • Implement proper scope validation
  • Handle MCP session expiration gracefully

Kernel Hell: SLUB Allocator Contention on NUMA Machines

So here’s a weird rabbit hole I went down recently: trying to figure out why Linux memory allocation slows to a crawl under pressure — especially on big multi-socket systems with a ton of cores. The culprit? Good ol’ SLUB. And no, I don’t mean a rude insult — I mean the SLUB allocator, one of the core memory allocators in the Linux kernel.

If you’ve ever profiled a high-core-count server under load and seen strange latency spikes in malloc-heavy workloads, there’s a good chance SLUB contention is part of it.

The Setup

Let’s say you’ve got a 96-core AMD EPYC box. It’s running a real-time app that’s creating and destroying small kernel objects like crazy — maybe TCP connections, inodes, structs for netlink, whatever.

Now, SLUB is supposed to be fast. It uses per-CPU caches so that you don’t have to take locks most of the time. Allocating memory should be a lockless, per-CPU bump-pointer operation. Great, right?

Until it’s not.

The Problem: The Slow Path of Doom

When the per-CPU cache runs dry (e.g., under memory pressure or fragmentation), you fall into the slow path, and that’s where things get bad:

  • SLUB hits a global or per-node lock (slub_lock) to refill the cache.
  • If your NUMA node is short on memory, it might fall back to a remote node — so now you’ve got cross-node memory traffic.
  • Meanwhile, other cores are trying to do the same thing. Boom: contention.
  • Add slab merging and debug options like slub_debug into the mix, and now you’re in full kernel chaos mode.

If you’re really unlucky, your allocator calls will stall behind a memory compaction or even trigger the OOM killer if it can’t reclaim fast enough.

Why This Is So Hard

This isn’t just “optimize your code” kind of stuff — this is deep down in mm/slub.c, where you’re juggling:

  • Atomic operations in interrupt contexts
  • Per-CPU vs. global data structures
  • Memory locality vs. system-wide reclaim
  • The fact that one wrong lock sequence can deadlock the kernel

There are tuning knobs (/proc/slabinfo, slub_debug, etc.), but they’re like trying to steer a cruise ship with a canoe paddle. You might see symptoms, but fixing the cause takes patching and testing on bare metal.
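
To make that slightly less hand-wavy, here’s the kind of quick-and-dirty watcher I reach for: sample /proc/slabinfo twice and print the caches whose active object counts grew the most in between. It needs root, assumes the slabinfo 2.1 column layout (cache name first, active_objs second), and is a sketch rather than a tool.

import time

def snapshot():
    counts = {}
    with open('/proc/slabinfo') as f:
        for line in f:
            if line.startswith(('slabinfo', '#')):
                continue  # skip the version and column header lines
            fields = line.split()
            counts[fields[0]] = int(fields[1])  # cache name -> active_objs
    return counts

before = snapshot()
time.sleep(5)
after = snapshot()

deltas = {name: after.get(name, 0) - count for name, count in before.items()}
for name, delta in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f'{name:30s} {delta:+d} active objects in 5s')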

Things I’m Exploring

Just for fun (and pain), I’ve been poking around the idea of:

  • Introducing NUMA-aware slab refill batching, so we reduce cross-node fallout.
  • Using BPF to trace slab allocation bottlenecks live (if you haven’t tried this yet, it’s surprisingly helpful; there’s a rough sketch after this list).
  • Adding a kind of per-node, per-type draining system where compaction and slab freeing can happen more asynchronously.
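
For the BPF bullet, here’s roughly what I’ve been doing with the bcc Python bindings: count how often each CPU drops into the SLUB slow path by putting a kprobe on ___slab_alloc(). Treat this as a sketch under some assumptions: it needs root plus bcc installed, and ___slab_alloc() can be inlined or renamed depending on your kernel version and config, in which case the attach simply fails.

from bcc import BPF
import time

# Count entries into the SLUB slow path, keyed by CPU.
bpf_text = """
#include <uapi/linux/ptrace.h>

BPF_HASH(slowpath_count, u32, u64);

int trace_slowpath(struct pt_regs *ctx) {
    u32 cpu = bpf_get_smp_processor_id();
    slowpath_count.increment(cpu);
    return 0;
}
"""

b = BPF(text=bpf_text)
# ___slab_alloc() is the slow-path entry in mm/slub.c; this attach will fail if
# your kernel inlined or renamed it.
b.attach_kprobe(event="___slab_alloc", fn_name="trace_slowpath")

print("Counting SLUB slow-path entries per CPU for 10 seconds...")
time.sleep(10)

for cpu, count in sorted(b["slowpath_count"].items(), key=lambda kv: kv[0].value):
    print(f"cpu {cpu.value:3d}: {count.value} slow-path allocations")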

Not gonna lie — some of this stuff is hard. It’s race-condition-central, and the kind of thing where adding one optimization breaks five other things in edge cases you didn’t know existed.

SLUB is amazing when it works. But when it doesn’t — especially under weird multi-core, NUMA, low-memory edge cases — it can absolutely wreck your performance.

And like most things in the kernel, the answer isn’t always “fix it” — sometimes it’s “understand what it’s doing and work around it.” Until someone smarter than me upstreams a real solution.

Fixing the App registration permission issues with the new Azure AD roles

So I’ve been wanting to write about this for some time now. For the longest time, I’ve managed our Azure AD. The problem came in when it was time to set up new application registrations: you needed entirely too powerful a set of permissions to create them. It was a nightmare, because I found myself on calls setting up these registrations for no reason. I asked a few folks at Ignite about this, and Microsoft assured me and others that they were working on it. At the following Ignite (this year), they released an update to Azure AD roles.

 

https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Hallelujah-Azure-AD-delegated-application-management-roles-are/ba-p/245420

I am not sure you can fully understand how painful this was as a directory architect. Having the ability to selectively allow people to create application registrations lets me automate so many workflows.

 

Good job Microsoft

 

 


Setting up remote PowerShell from different domains

So I have had the pleasure (sarcasm, massive amounts of sarcasm) of dealing with remote PowerShell over the last couple of days, so I figured I would write a quick guide on how you can connect to another machine, outside of your domain, with remote PowerShell. This is useful if you want to run Exchange cmdlets from your local machine, run tests in your local PowerShell instance while connecting to a test lab, or countless other scenarios. First, let’s talk about remote PowerShell and what it is.

Remote PowerShell is a tool that allows you to remotely manage services using the WS-Management protocol and the Windows Remote Management (WinRM) service. The WS-Management protocol is a public standard for remotely exchanging management data with any computer device that implements the protocol. The WinRM service processes WS-Management requests received over the network and uses HTTP.sys to listen on the network.

In my test scenario, I am trying to connect to my test lab (testlab.com) with remote PowerShell from my work machine (workdomain.com). The first problem I am going to come across is that my machines are in different domains, and we are not going to be able to create a trust between them. I found a great KB article that walked me through the actual technical piece.

I have listed those steps here

 

1. Start Windows PowerShell as an administrator by right-clicking the Windows PowerShell shortcut and selecting Run As Administrator.

2. The WinRM service is configured for manual startup by default. You must change the startup type to Automatic and start the service on each computer you want to work with. At the PowerShell prompt, you can verify that the WinRM service is running using the following command:
get-service winrm
The value of the Status property in the output should be “Running”.

3. To configure Windows PowerShell for remoting, type the following command:
Enable-PSRemoting -Force

In many cases, you will be able to work with remote computers in other domains. However, if the remote computer is not in a trusted domain, the remote computer might not be able to authenticate your credentials. To enable authentication, you need to add the remote computer to the list of trusted hosts for the local computer in WinRM. To do so, type:
winrm s winrm/config/client '@{TrustedHosts="RemoteComputer"}'
Here, RemoteComputer should be the name of the remote computer, such as:
winrm s winrm/config/client '@{TrustedHosts="CorpServer56"}'

 

A few problems that I came across.

  1. Even after adding the machine to the trusted hosts, you may still get errors inside PowerShell saying it is unable to connect. Make sure you are running PowerShell as an administrator.
  2. Make sure you can ping the host and telnet to the ports you are using.
  3. If you are going over HTTP, make sure the server you are connecting to has it enabled; for example, if you are going to connect to an Exchange server for remote PowerShell, make sure the IIS directory allows connections on port 80.

 

In 20 or 30 yea…

In 20 or 30 years, you’ll be able to hold in your hand as much computing knowledge as exists now in the whole city, or even the whole world.

It is such a crazy world we live in. Today I have been troubleshooting my brother’s 32 GB SD card. Think about that for a second: I am holding a 32 GB SD card that, 20 years ago, wasn’t even possible to create in such a small space. The miracles of technology.