More AI Tools I'm using

More AI Tools

I wanted to share a few (more) AI tools that have become essential parts of my daily workflow. As I mentioned in my April checkpoint and My LLM Workflow posts, I’m still heavily leveraging Claude Code, but I’ve expanded my toolkit with some useful additions. I’ve been meaning to blog more actively, but as we all know, life has a way of catching up fast.

Consider this my attempt to get back on track!

Claude Hooks

Claude Code hooks are user-defined shell commands that execute at various points in Claude Code’s lifecycle. Hooks provide deterministic control over Claude Code’s behavior, ensuring certain actions always happen rather than relying on the LLM to choose to run them.

Since I often have multiple Claude instances running on different projects, getting notified when tasks complete has become crucial for my productivity (and to call me back from weeding the yard!).

I’ve set up notification hooks using ntfy.sh running on my Tailscale network.

Here’s a sample configuration I use in my .claude/settings.json user file:

    ...
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "curl -H \"Title: Claude Code\" -d \"Task completed for $(git remote get-url origin 2>/dev/null | sed 's/.*[/:]\\([^/]*\\)\\/\\([^/]*\\)$/\\1\\/\\2/' || echo 'unknown') ($(pwd)) on branch $(git branch --show-current 2>/dev/null || echo 'no git') 🎆\" $NTFY_SERVER_URL/cc"
          }
        ]
      }
    ]

Beyond notifications, I also leverage project-specific hooks for automated linting. In my .claude/settings.local.json project files, I run linting commands automatically after file modifications:

    ...
    "PostToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path | select(endswith(\".py\"))' | grep -q . && just lint-fix"
          }
        ]
      }
    ]

Claude Code Router

Given the current economics of AI model providers, I’m always exploring cost-effective alternatives for when the bills come due.

A few months ago, I discovered Claude Code Router, a clever project that enables using non-Anthropic models with the Claude Code interface.

It works remarkably well, though I primarily use it when my Max subscription is throttled or when I can match simpler tasks to smaller, more efficient models.

Here’s my current configuration (stored at ~/.claude-code-router/config.json) using OpenRouter:

{
  "LOG": true,
  "Providers": [
    {
      "name": "openrouter",
      "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
      "api_key": "<openrouter-api-key-here>",
      "models": [
        // "anthropic/claude-sonnet-4",
        "qwen/qwen3-coder",
        "moonshotai/kimi-k2",
        "deepseek/deepseek-r1-0528",
        "google/gemini-2.5-flash-lite-preview-06-17"
      ],
      "transformer": { "use": ["openrouter"] }
    }
  ],
  "Router": {
    // "default": "openrouter,anthropic/claude-sonnet-4",
    "default": "qwen/qwen3-coder",
    "background": "openrouter,moonshotai/kimi-k2",
    "think": "openrouter,deepseek/deepseek-r1-0528",
    "longContext": "openrouter,google/gemini-2.5-flash-lite-preview-06-17"
  }
}

Additional services

OpenRouter for model flexibility

Speaking of OpenRouter, it’s become my go-to platform for experimenting with new models. Their unified interface provides access to multiple providers with competitive pricing and latency optimizations. I’d encourage you to review their privacy settings.

Deep Research as search replacement

I’ve increasingly replaced traditional Google searches with Deep Research, particularly Google’s Gemini implementation. Based on my own usage, it’s clear to me how this trend could substantially impact traditional search traffic patterns.

Things I’m doubling down on

I briefly mentioned this in last year’s post about my AI tools, but I can’t imagine working with my Obsidian notes without LLM integration anymore. The combination has transformed how I organize and interact with my knowledge base. I’m currently using mcp-obsidian, and I’ve noticed increased discussion about this approach in the Twittersphere. Highly recommended. While we’re on the topic, I’d also suggest migrating to Bases if you haven’t - the migration took a bit, but the performance improvement has been substantial.
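For reference, MCP servers like mcp-obsidian are registered in the client’s MCP configuration. The shape below is illustrative only - the exact command, args, and environment variables come from the server’s README:

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "uvx",
      "args": ["mcp-obsidian"],
      "env": { "OBSIDIAN_API_KEY": "<local-rest-api-plugin-key>" }
    }
  }
}
```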

Current experiments

I’ve been exploring Open Code, and if you haven’t tried it yet, I’d encourage giving it a spin. Their shared session implementation is particularly interesting, and I’m surprised similar approaches aren’t more prevalent amongst the various CLI tools.

Looking Forward

The tooling landscape continues to evolve, and I find myself constantly re-evaluating my workflow as new options emerge. My current plan is to stick with Claude Code while maintaining flexibility with models.


  1. What’s your current workflow(s), and how are you leveraging these tools?
  2. Are there any new tools that you’re playing with that I should check out?

I’d love to hear about your LLM toolbelt.

Hope to see y’all next time!

This post was written by yours truly; spellchecking, grammar-policing and hook completed by Claude Desktop.

My LLM Workflow

My LLM workflow

I’ve been running Claude Code on multiple GitHub issues simultaneously using git worktrees, and it’s changed my development workflow. If you’re curious about the setup, read on.

I really enjoyed reading How I 10x My Engineering With AI because Kieran articulated something many of us are grappling with: the transition from traditional development workflows to AI-assisted ones. It got me thinking about documenting my own journey through this rapid adoption of AI tooling - especially since my approach has evolved quite a bit from the tools I mentioned in my April checkpoint.

Harper’s post about the developer’s codegen monomyth perfectly captures this journey. Having chatted with a number of my peers, I’ve noticed patterns emerging - many have reduced their Cursor usage and leaned fully into ‘agent’1-driven workflows.

The most fascinating thing is that we’re all essentially vibing our own productivity, with wildly different results depending on workflow choices. Here’s how my workflow works, and why I think this approach hints at where AI-assisted development is heading.

Claude Code + Git worktrees

For the most part, I’ve settled into a workflow that heavily leverages Claude Code with git worktrees. If you’re not familiar with worktrees, they’re a git feature that lets you have multiple working directories for a single repository - basically, you can check out different branches in separate directories simultaneously without the usual branch-switching disruption.2
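To make worktrees concrete, here’s a minimal, self-contained sketch. It builds a throwaway repo in a temp directory, so it’s safe to run as-is:

```shell
# Throwaway repo so the demo doesn't touch any real project.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Check out a new branch in a separate directory without disturbing
# the current checkout -- one worktree per feature/issue.
git worktree add --quiet "$repo.wt" -b issue-42

git worktree list   # both checkouts share a single object store

# Clean up once the branch is merged or abandoned.
git worktree remove "$repo.wt"
git branch -q -D issue-42
```

From there, each worktree behaves like an independent checkout: you can run a separate Claude Code instance in each without branch-switching disruption.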

There have been a few references to them in Anthropic’s docs, but I hadn’t seen much chatter about them. A few heavyweight implementations like uzi3 and Claude Squad exist, but I wanted something lightweight that didn’t involve managing yet another binary.

My current workflow

The first lightweight implementation I came across was @jlehman_’s tweet and his subsequent gist. I really, really like this idea because it allows for the LLM to manage the worktrees. While the custom Claude commands worked, Claude would sometimes get confused about where the worktree lived, and I’d end up with changes bleeding into my primary repository. I also stumbled upon this helpful git worktree pattern field note in the Claude Code GitHub issues which got me most of the way there - I didn’t adopt the git-worktree-merge script because I had a slightly different workflow than the OP.

For a single project, my current process looks like the following:

  1. Grab the work - Retrieve all GitHub issues (filter or ID)
  2. Create isolation - Spin up a fresh worktree for each issue.
  3. Deploy the agent - Echo a prompt into a Claude Code instance with --print and --dangerously-skip-permissions; better yet, use a dev container.
  4. Break time! - Make coffee, weed the yard, etc.
  5. Review and ship - When all the issues are completed, jump back into each worktree, review the changes, and if they look good, use a few custom user commands for Claude to commit using conventional commits and file the pull request.
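Steps 1–3 above can be sketched as a small shell function. This is a hedged sketch, not my exact scripts: the function name, sibling-directory layout, and prompt-file name are illustrative choices, and it assumes `git`, the GitHub CLI (`gh`), and the `claude` binary are on PATH:

```shell
# Sketch only: run from the repository root.
process_issue() {
  issue="$1"
  prompt_file="$2"
  branch="issue-${issue}"
  dir="../${branch}"   # sibling directory, outside the main checkout

  # Step 2: create isolation -- one fresh worktree per issue.
  git worktree add --quiet "$dir" -b "$branch"

  # Step 3: deploy the agent headlessly inside the worktree.
  ( cd "$dir" &&
    claude --print --dangerously-skip-permissions \
      "$(cat "$prompt_file") Work on GitHub issue #${issue}." )
}

# Step 1: grab the work -- fan out over the open issues, one agent each.
#   gh issue list --state open --json number --jq '.[].number' |
#     while read -r n; do process_issue "$n" do-work-prompt.txt; done
```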

You might be wondering why I didn’t automate that last step - it’s because I prefer maintaining oversight of changes before shipping. In addition, if the agent does ‘go off the rails’, I at least have a starting checkpoint to begin the work.

The beauty of this approach is that it lets me spin up isolated environments for different features or experiments without conflicts, all while allowing a human-in-the-loop to observe the changes being made. All the work I want the machine to perform is captured in GitHub issues.

While I can’t claim a 10x improvement (how would you even measure that?), and while not novel, this workflow has definitely cranked up my output.

Some of the benefits include:

Currently, my only limiting factor is my token budget.

Implementation details

Note: You’ll need the GitHub CLI.

To accomplish the workflow above, I rely on a few Bash scripts (that Claude wrote for me, so PRs accepted!).

Notes

# How I run the script
$> ANTHROPIC_API_KEY=<foobar> CLAUDE_PROMPT_FILE="./do-work-prompt.txt" github-issue-processor.sh

# Example prompt (mine is a bit more complex, using thinking tokens, i.e. think, think-hard, ultrathink, etc.)
$> cat do-work-prompt.txt

1. Open GitHub issue.
2. Post a detailed plan in a comment on the issue.
3. Write robust, well-documented code.
4. Include comprehensive tests and debug logging.
5. Practice TDD.
6. Confirm that all tests pass.
7. Use commits as a form of checkpointing. Commit often.

I also export a Bash function called git-worktree-toggle to source the git-worktree-llm script:

# Source the script in the current shell rather than a subprocess (so `cd` etc. persist)
git-worktree-toggle() {
  source /path/to/git-worktree-llm "$@"
}

Tools I’m not using

While I have Cursor installed and Avante.nvim loaded in Neovim, I barely touch them beyond using them as a glorified autocomplete. When Claude Code can’t complete a task headlessly, I switch to interactive mode. If that fails, I use Claude Desktop for the final tweaks before copying back to my editor.

Model diversity

There’s growing evidence that using a consortium of models might yield better results than relying on a single one. To enable this in Claude Code, I’ve been experimenting with Claude Code Proxy, a nifty proxy that lets you route requests to Gemini or OpenAI models (via LiteLLM) while still using your Anthropic client. I still need a better way to quantifiably evaluate all of this, so the jury’s still out on whether it meaningfully improves outcomes, but it’s been interesting to see how different models complete the same coding problem. Since different language models excel at different tasks, I expect multi-model workflows to become standard.

What’s next

As the landscape is constantly changing, I’ll be iterating on this workflow (or writing another post!).

While I’ve enabled Claude Code in GitHub Actions, I want to review its SWE-bench Verified / Terminal-Bench results to better understand which tasks I should route to the automated action.

While this setup works for me in single-player mode, I’d love to see what others are doing in this space in multiplayer mode - I can only imagine what a mob programming situation might look like.

I think multi-model is where we’re headed - sending the same prompt to multiple models, capturing those changes in git branches, having a council/judge select the best, then automatically merging to main.


  1. What’s your current workflow(s), and how are you leveraging these tools?
  2. Are you doing anything interesting with git worktrees and similar isolation patterns? I was thinking about spinning these up in Morph Cloud.

I’d love to hear about your LLM workflow.

Hope to see y’all next time!

This post was written by yours truly; spellchecking, grammar-policing, title, and hook completed by AI.


  1. My current definition of agent aligns with OpenAI’s: agents are systems that independently accomplish tasks on your behalf. ↩︎

  2. Generated by Claude desktop using Claude Sonnet 4 on 6/6/2025 ↩︎

  3. https://www.skeptrune.com/posts/git-worktrees-agents-and-tmux/ via Clint ↩︎

More LLM Musings: Notes from the Frontier

My LLM notes

Since the response to my last post was encouraging, I thought I’d expand on what I’ve been thinking about since then.

The future is already here – it’s just not evenly distributed.

AI adoption curve

I still encounter hesitation from senior folks to adopt AI tools. Change is difficult - just be aware that our industry doubles every (n) years. In (n) years, half the people you interact with will have less than (n) years of experience writing software. Their only lived experience will be using these types of tools as ‘AI-native’ developers.

Don’t say you weren’t warned.

I also see ‘but the LLMs put out buggy code’ arguments - I’ve always chuckled at these because I think some of us have forgotten who’s staring back at us in the mirror. Our entire careers have been predicated on putting out buggy code. :)

Scaling AI development

I was listening to my buddy Harper on the Intelligent Machines podcast - it’s a fun podcast, go have a listen. He referenced a friend who had a GitHub repository with hundreds of PRs created by LLMs. I’ve always been a huge proponent of the idea that, given the right structures and flow, change should be cheap - if you have the financial means to spin up a few instances of Claude Code with a large budget, it’s quite amazing what you can accomplish with an agent + worktrees… which leads me to the next point.

Black box

I was chatting with another buddy, Austen (who was briefly in town), and an interesting point came up while we were discussing LLMs - is it important that we ‘care’ about LLM outputs? That is, can we treat them like black boxes, or do we need to understand what’s going on under the hood?

I argue that in the long term, we will not and should not care; it’ll be yet another level of abstraction - albeit a fuzzy one. Similar to how most of us don’t understand the optimizations LLVM is performing (nor do we care), over time, LLMs should expose a more accessible, natural-language entrypoint that democratizes software development.

In the near / medium term, we should care, as they’re a bit finicky. Keep up with SWE-bench and other benchmarks to understand what SOTA looks like in our industry. It can help you answer “should I use an LLM for this?”-type questions.

As an industry, I think we do a lot of unnecessary gatekeeping / mystifying. We’ve always embraced abstractions that make development more accessible; LLMs should be the next evolution in that journey.

Cursor

I might be in the minority, but I don’t think Cursor is very good - it’s definitely better than the autocomplete of old, but I didn’t think that was very good either.

These tools aren’t incentivized to tell you the specific use cases their product is ideal for, so we should be thoughtful about their use.

Austen mentioned the word ‘magical’ when discussing what it’s like coding with one, and I don’t disagree, but that feeling can blind us to its real utility. It definitely helps us code faster/better/more, but are we providing value to the user, or are we using it to do less work? The latter is also fine - and I’m a proponent of it - but I think we should be clear about which one it is.

A tale of two modes

Speaking with friends in the industry, we’re using the tools in primarily two ‘modes’:

  1. Assistant Mode: AI as an ultra auto-complete (e.g. Cursor, without using their agent).
  2. Agent Mode: AI as a terminal agent (e.g. Aider, Claude Code, Codex).

I’d encourage exploring both options, as I think the end-game is Agent-land - so don’t miss what’s going on on the other side of the pond.

In my own personal projects, I enable both modes: Assistant mode when I want or need more fine-grained control over the outputs, and Agent mode for any task that I think the LLM can execute on its own.

AI avalanche

There is an avalanche of AI slop coming down the content mountain - just log into LinkedIn to see what I mean. The signal-to-noise ratio has deteriorated to a point where curation fiefdoms will become ever more important.

At any point in the timeline, I think Main Street will always underestimate how the tools can be leveraged - so it’s good to have peripheral knowledge about how far the technology has come. I’ve long advocated for an accessible site that highlights SOTA examples of what the technology can do so that people can start to internalize and think about the content they’re consuming.

As a note, I don’t have a problem with the game that’s being played, I just wish people were more transparent about what’s being done with the tools.

Custom software

I have been using a lot of these tools to create custom software for my personal workflow. I sometimes wonder if this is where we’ll end up - everyone having an infinite amount of custom software tailored to one’s own unique processes and preferences.


If you’ve made it this far again… <3

Some things to think about…

  1. How do you determine when an LLM is the right tool to use for a task?
  2. What mode (assistant / agent) do you find most useful and when?

I’d love to hear about your LLM experiences.

Hope to see y’all next time!

This post was written by yours truly and spellchecked/organized/title completed by AI. I am still trying to figure out a way to be transparent about how much of what I write is being edited by a language model.

April 2025 technologies

AI checkpoint

~9 years ago, I was somewhat (in)consistent about documenting the various technologies I was interested in. Since these posts are fun to look back at and reminisce about, I thought I would start them up again.

In a different life, I was a young CS graduate student very interested in all things artificial intelligence - but that story is for another day. 1 Fast forward 25 years, and I’m working on large language model adjacent projects. Whether it’s convincing people (and companies) to leverage them, tinkering with various frameworks, reading about their underpinnings, or just ruminating with friends about how we’re all using these tools - it’s been quite the whirlwind.

I’m sure I’m not alone when I say that my biggest gripe with the technology is that while it may feel magical when we “one-shot” generate a complex piece of software, the frontier is still jagged and rife with (steep) cliffs. I still experience countless moments where I traverse deep LLM-rabbit holes that make me quite unproductive, yet I continue to blindly trudge through the muck, hoping that this iteration will deliver me one step closer to the promised land.

While various leaderboards and evals exist, at least in userland, there don’t seem to be very good quantitative approaches to all of this madness. That little hint of magic is blinding us from the reality that there are probably only a limited number of modes where leveraging a language model makes sense. After all, working too closely for too long sucks the joy out of writing software.

In a gold rush, it’s best to sell shovels. And right now, we’re all still digging.

So without further ado, here’s my April checkpoint of where I currently stand with respect to all things AI.

Evolving Toolkits

I’ve built a thing or two in many of the large Python frameworks - though most are available in other languages. Here are some of the ones I’ve played around with extensively:

For the most part, there’s quite a bit of overlap in the abstractions exposed through these frameworks. I’d suggest just picking one and going deep - it’s relatively trivial to switch between frameworks once you have a decent understanding of the high-level concepts. Since the space is constantly shifting, I’d also suggest periodically re-evaluating every few months. For example, I maintain a private llm-framework-playground to test the latest and greatest.

Tooling Ecosystem

The tooling ecosystem has absolutely exploded in the past year, with clear winners emerging. Again: sell lots of shovels during a gold rush.

Here are some of the tools that I use daily:

As a note, at some point I also had an OpenAI subscription, but with the release of the Sonnet series, I let it lapse. YMMV here, as I know peeps who get a lot of mileage out of ChatGPT.

Books

Here are books I’d recommend on the topic. I’ve probably read at least 3 or 4 more, but these were the best. Knowledge of Python and linear algebra will help you more easily digest the material.

Courses

I tend to be an experiential learner (i.e., I learn by doing). I take small projects and work through an implementation, so I haven’t taken much coursework for the technologies I’ve learned throughout the years. With that said, since most of my projects have been private, I’ve completed the following (ordered from least to most useful):

I found the Google / Kaggle course the most informative (and the deepest dive), but it might be a little overwhelming to some because it’s information-dense and compressed into a tight schedule.

My Current Observations

What’s next


If you’ve made it this far… (congrats!)

  1. What’s your experience been with these technologies?
  2. Have you found any frameworks or tools that have genuinely improved your workflow?

I’d love to hear about your LLM rabbit holes.

Hope to see y’all next time!

This post was written by yours truly and spellchecked by AI


  1. I’ll probably catch some flack for saying the following, but if you’re interested in where this is all going, I would encourage you to take a peek ‘under the hood’. The math underlying these systems isn’t very difficult, and the general computing ideas have been around for a very long time - there’s a reason there’s a proliferation of these language models, and it’s not because they’re very complex to understand; while some of the novel architectures / optimizations are interesting (and neat!), they’re definitely within the realm of comprehension. ↩︎

My computing journey

Finding Your Pete: A Story of Mentorship

Having recently fielded numerous questions about when to introduce programming to children, I’ve been reflecting on my own journey into computing - a story that highlights how sometimes the most important factor isn’t when you start, but whom you meet along the way.

Despite being a strong student in high school, I never took a single computer class, so it’s a bit of a wonder that I ended up majoring in Computer Science. I’d be remiss if I didn’t admit that the first few years were daunting - wrestling with “recursion” in Scheme while surrounded by graduates from IMSA - Illinois’ premier math and science academy - who were all seemingly born with a TI-82 in their hand was quite the challenge.

Co-workers

Everything changed during my junior year when I landed a part-time programming job in the School of Liberal Arts and Sciences’ graduate lab - special thanks to Dan Zink for taking a chance on me – a chance I absolutely didn’t deserve at the time.

It was there that I met Peter.

Given that my only programming experiences were in CS classes, I struggled at work, often contemplating whether this was right for me. Peter, on the other hand, was a computing demigod. Even though it’s been more than 25 years, I still vividly remember Pete staying with me really late one night explaining how simulated annealing and force-directed graphs worked in the Java application we were working on. He had no obligation to spend his evening teaching a struggling colleague, but he did. Walking home across the Quad late that night, I remember thinking that something had finally clicked - I understood programming in a way that had eluded me throughout the previous 2+ years.

Our friendship grew beyond that pivotal moment. For the next year or so that we worked together, we played pool, he introduced this un-adventurous island kid to shawarma, and he opened my eyes to new computing ideas like running your own Linux box (which I’ve sadly continued to torture myself with for over two decades now). When Pete graduated and moved to Austin, Texas, we stayed loosely connected, eventually reuniting at SXSW about a decade ago. He is still the same fun-loving person I remembered from college – and he’ll probably hate me for writing this.

My Career

That mentorship moment with Pete opened doors that shaped my entire career. While I might have eventually found my way without him, his influence was catalytic.

Since then, my journey has taken me through different countries and industries: from defense research to helping startups create products including a newsroom, from kickstarting a makerspace to managing a technology team / platform at a NYC investment bank, and ultimately building a small technology company where I work today. Along the way, I pursued two advanced degrees – one directly inspired by my recognition of the vast ocean between people like Pete and myself.

Finding your Pete (or being a Pete!)

The impact of mentorship can be profound and far-reaching. As I look back on my journey, I realize that these pivotal moments – these opportunities to open doors for others – are what truly shaped my technical journey.

And so, while I encourage people to teach their children the ideas underlying programming (i.e. logic, abstraction, algorithmic thinking, etc), I don’t fundamentally believe that it’s something your child has to do at such an early age. What matters more is creating an environment where learning can flourish naturally. Give them opportunities to explore, to struggle, to grow – and most importantly, to connect with others; there’s a chance that they’ll meet their Pete Taylor along the way.

You never know what possibilities you might unleash into the world.

The MCP and other AI tools

Walkin’ in an AI Wonderland

Merry Christmas!

Since I have a free morning, I thought I’d surprise the Internet with an update. I’ve been working with / deploying LLM tools for a few years now, and I wanted to get some of my thoughts documented so I can laugh at myself when I’m really old - so basically, next week at the rate AI is evolving.

At this point in the timeline, I don’t see a future where we aren’t leveraging an AI tool (or some AI-powered interface), regardless of industry. Like cryptocurrency, this genie can never be placed back into its bottle. In fact, even if we don’t end up in AGI Wonderland™ with our robot overlords, interacting with an open-ended, human-like “super-intelligence” is simply far too alluring. If you haven’t tried it, I’d whole-heartedly encourage you to do so; you’ll see exactly what I mean.

Although I sometimes spend more time cajoling these things into giving me fewer falsehoods than actually being productive, I still send them quite a few daily tasks. Here’s a small list of things that I personally use them for:

Personally, I’ve tried to stay away from content generation, as I’m still futilely trying to hold onto a modicum of thinking.

Since I still dabble in a bit of software (when the AI lets me), here’s what I typically send to the models:

If it’s a difficult, new task for a language I don’t know well (e.g. Rust), I don’t bother sending it to the models anymore because I’ve found that I normally spend more time fighting the machine than it being my complement.

Here’s my current tools list:

Daily

Libraries

Funsies

Watching

Hope you have a happy holidays, and see y’all next year!

Migrating to UV

The Great UV Migration

This might just be my own observation from my little abode on the Interwebs (so take it with a grain of salt), but it’s been a minute since I’ve been excited (and seen much excitement) about a piece of Python tooling.

A few months ago, after reading Hynek’s blog, I tweeted:


Now that they’re stable, I’ve slowly begun migrating my personal repositories.

Why, you ask?

Nice to haves

I’m still hoping the following issues get addressed as they’re pretty ingrained in my workflow.

Even with cruft and a cookiecutter template, this is still going to be quite the undertaking.

See y’all in a few years!

AI-generated Git commit messages

Embracing the future

Check out Harper’s blog about leveraging an llm to generate meaningful Git commit messages. Be like Harper.

I’m sure I’m not alone in this, but most of my Git commit messages in my private repositories look something like:

wip  # work in progress

I obviously wouldn’t recommend this strategy as my future-self is always chastising past-self. Recognizing my propensity for laziness, I thought it would be both fun and productive to enlist the help of our newly appointed AI Overlords in generating meaningful (and standardized) Git commit messages.

The idea originated from a one-liner Git alias a friend sent me, which I then forwarded to the brilliant Harp. He took the concept and ran with it, making it infinitely more robust by leveraging Simon’s powerful llm library, a tool I hadn’t had the pleasure of using before.

I highly recommend checking out Harp’s implementation and insights on his blog.

Since I love to flat out torture myself, I’ve configured the output to adhere to the Conventional Commit format. For those curious about how I wrangled the prompt monsters, feel free to take a peek at my dotfiles repository.
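The exact prompts live in my dotfiles, but the spirit of the original one-liner can be sketched as a git alias. This is a hedged sketch, not Harper’s implementation: the alias name `aic` is my own, and it assumes Simon’s `llm` CLI is installed and configured with an API key:

```ini
[alias]
    # Pipe the staged diff to `llm` and commit with the generated message.
    aic = "!git diff --cached | llm 'Write a single Conventional Commit message (type(scope): subject) for this diff. Output only the commit message.' | git commit -F -"
```

With something like this in place, `git add -p && git aic` replaces the dreaded `wip`.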

December 2022 technologies

I know, I know.

I was supposed to be doing these quarterly entries of technologies that piqued my interest, but I’ve come to realize that I’m just too busy hacking on them. I really do want to be better about blogging, and maybe one day, I’ll add some of my private Obsidian notes.

Well, we can all dream, can’t we?

So without further ado…

Technologies / APIs

Tools

Shutting down allb.us

Shutting down the studio

I’ve always secretly enjoyed retiring computing systems.

Over 10 years ago, catching public transportation in Hawaii was hard. Having just given up my car, I originally wrote allb.us to help make that transition more palatable.

I’m a little nostalgic because the site was always a reminder to me of all the things I had going on at the time; working at a small startup, helping start Honolulu’s first makerspace, and negotiating a move to NYC.

I finally decided to retire allb.us after all these years because:

All good things must come to an end.

After all, that’s the beauty of technology; something better will eventually take its place.

Hello from Tokyo!

A lot has happened in two years…

For the most part, I’ve left most social media sites aside from Twitter because it’s just too much for my feeble brain to manage, and to be honest, my life isn’t all that interesting. Haha.

With that said, here’s what I’ve been up to

August 2018 update

Lately, here’s what I’ve been focused on.

Technology

What I’m reading

November 2016 technologies

No matter how old I get, I think I’ll always be tinkering with technology.

Here’s what I’ve been looking at over the past few months:

I’m so interested in what the ML/DL folks have been up to that I’ve started studying for the GRE again. :D

History of HICapacity

I haven’t written about the history of HICapacity because I’ve always felt that there’s a disproportionate amount of weight given to people who start things. Having had several experiences starting things, I’d like to argue (for another day) that starting is actually the easy part; it’s sustaining that’s difficult and admirable.

I’ve decided to document what I can remember of its beginnings because, to be completely honest, I’m getting old and don’t remember all the details from events that occurred more than 5 years ago. In my old age, I also recognize that there were so many people who are a part of this story that by not writing it, I’m minimizing their contributions in helping shape a small portion of Hawaii’s technology community.

2011

This was the year it all started.

If I were to pinpoint the seminal event in HICapacity’s history, I’d have to say that it was Jerry Isdale’s post on TechHui. What seemed like an innocuous dinner gathering at Los Chapporos brought together 7 people on Oahu that would provide HICapacity’s early foundations. These seven included:

We don’t remember what we discussed, but we do know that Paul would go on to organize our second meeting at Murphy’s where Jeremy coined the name ‘HICapacity’ as a form of homage to the NYC Resistor.

The second most important event was the 2011 Unconferenz. We all had a feeling that there was enough community support for a Makerspace, but we didn’t fully understand how large that support would be. We’d meet with others interested in growing the collaborative community including:

This was also around the time when I met Ian Kitajima. Most people don’t know this, but Ian immediately opened Oceanit’s doors to us when nobody else downtown would. Suffice it to say, there were many, many doubters on island who told me this concept wouldn’t / couldn’t work. Ian definitely wasn’t one of them.

Lastly, I feel that the third most important event in HICapacity’s history was meeting every weekend at Petals and Beans. I still remember the advice that Jerry and Gorm gave - “to just be consistent.” When we first started, there were only a handful of us who would come down to hang out, but something interesting happened along the way - word got out and we consistently started occupying the entire coffee shop.

I just knew that we were onto something.

2012 and beyond

As much as I’d love to say that I was part of the community that took HICapacity to the next level, I was already entrenched in NYC finance. Before leaving the islands, a young fellow named Austen Ito would become a part of the core team. Austen would play an instrumental role in getting HICapacity its own physical space. He and the next group of technologists would lead the organization to frontiers that I had only ever dreamed about.

These technologists included:

One of the primary reasons I decided to write this history is to help everyone understand that HICapacity didn’t come about from the heroic effort of the original seven - nor was it solely based on my or Austen’s effort.

It was built by a community who shared the singular belief that a technology-focused, communal space would benefit Hawaii. When looking back at it, I’d do it all over again.

Thanks

Thanks to Jerry, Matt, Gorm, and Paul for helping distill what happened in HICapacity’s early days. Thanks to all the past and current leaders of HICapacity for providing leadership and helping realize an idea that sprung from a message board. Lastly, to everyone who has passed through the space - for sharing knowledge and making for a better technical community in Hawaii.

Save the best for last

In its early days, there was only one person who understood the amount of hours and work required to organize this community. Sara contributed an immense amount of behind-the-scenes work that very few people know about. She supported and believed in me when very few people would. And for that, I can’t thank her enough. Luckily, fate would have that she’s now my wife. #putaringonit

April 2016 technologies

Doh!

I know I haven’t blogged in almost 8 months, but I swear that I’m still in the technology game. As much as I love Docker, I think that Amazon has nearly nailed the serverless abstraction with Lambda.

So without further ado, these are the technologies that I’ve been staring at for the past few months:

Until next time!

July 2015 technologies

I’m still around and still playing with technology.

Here’s what I’ve been looking at over the past few months:

I’m fascinated with continuously deploying immutable infrastructures based on fast, deployable application stacks. Hopefully, with the Rise of the Unikernels, we’ll see these become a bit more secure.

Until next time!

March 2015 technologies

Since I’m currently unemployed, I’m really going to try to stick to these quarterly entries of technologies that have piqued my interest.

So without further ado, here’s what I’ve been looking at over the past few months:

As a more important note, I’ve begun to decrease my interactions with cloud services. Initially, I started with Instagram and Facebook, but by the end of this year, I’d like to reclaim my data and remove my dependency on the cloud.

Homeward Bound

After an incredible 3½ years of building, growing, and managing a closed-source platform at a NYC investment bank, I’m heading home.

After all, home is where the heart is.

October 2014 technologies

Last year, I was going to write a quarterly entry about all the technologies I’ve been playing with. Since it’s been about a year since that last post, I figured I’m due for another.

In no particular order, here’s what I’ve been looking at over the past few months:

See you in a few months (aka next year)!

On goals and execution...

Last year, I set a personal goal to commit something (anything) meaningful every day for an entire year.

Yesterday, I completed that goal.

Special thanks to this special someone who supported and reminded me if I had “checked in” for the day. <3

So here’s to setting far-reaching goals and still being able to execute on them.

Command-line Fu

As developers, we’re constantly tinkering and refactoring code to arrive at the tidiest and most maintainable piece of software. I’m still often surprised by how few of us optimize our working environments via shortcuts, aliases, and habits - especially considering the large time investment we often commit as software engineers.

Inspired by @hmason's Command-line Fu session at OrdCamp, I realized that there are a lot of snippets I’ve written and accumulated over the years that could be useful to a young developer.

So without further ado, I’ll try to post some of my more useful snippets at least once or twice a month beginning with one of my personal favs: tmux

function tm() {
    if [ -n "$1" ]; then
        tmux attach -t "$1" 2>/dev/null || tmux new -s "$1"
    else
        if ! tmux list-sessions 2>/dev/null; then
            echo "No available tmux sessions. Please create one."
        fi
    fi
}

Can you figure out what it’s doing? :)

I'm back, for a μs

One of these days, I’ll get this blogging thing down, but until then, you’ll just have to bear with my yearly updates.

The other day, someone suggested I write a quarterly entry about all the technologies I’ve been playing with. Not only should it spark discussion now, but in retrospect, it should provide some insight into what I was doing at certain points in my career.

So without further ado, in no particular order, here’s what I’ve been looking at over the past few months:

Creating a LocalTunnel on dotCloud

tl;dr

If you want to install LocalTunnel on dotCloud, use this repo: https://github.com/cyounkins/tunnel-on-dotcloud

If you participate in a lot of hackathons or just want to expose a certain port on a development box to the Interwebs™, there’s a really useful app from one of the Twilio engineers that performs all the ssh magic: http://progrium.com/localtunnel/. Installing this rubygem magically assigns an unused proxied subdomain from localtunnel.com so you can show off your wares.

While extremely useful in the one-off hackathon world, it’s a bit problematic if your app connects to a number of external services. Each time you’re assigned a different proxied subdomain from localtunnel, you’ll have to log into the aforementioned services and change the callback URLs or update a mystical DNS redirect entry - both undesirable behaviors… especially after having done this umpteen times. :)

Having stumbled upon this dotCloud blog entry, http://blog.dotcloud.com/open-your-local-webapp-to-the-web-with-dotclo, and walked through the tutorial, I found that the referenced repo didn’t actually have a working nginx.conf file to do the proxying - it looks to be commented out and missing a few other directives. In any case, thank you Interwebs, here’s a working repo: https://github.com/cyounkins/tunnel-on-dotcloud

As a note, make sure you kill any open ssh sessions in dotCloud. If you’re careless like me with a lot of tmux sessions open, you’ll often get “Warning: remote port forwarding failed for listen port 8042”, which generally translates to “Close your other open ssh sessions.”

Ignoring changes in git submodules

For those Vimmers using Pathogen to manage their runtime path, you’ll find that Pathogen creates tags files in the bundle’s doc folder.

Thanks to this Stack Overflow post, all you need is Git 1.7.2 and the following command:

for s in $(git submodule --quiet foreach 'echo $name'); do
    git config submodule.$s.ignore untracked
done
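As a quick sanity check, you can write the flag by hand in a scratch repo and read it back; the submodule name "vim-pathogen" below is just a made-up example:

```shell
# Scratch repo to show the config round-trip; "vim-pathogen" is a
# hypothetical submodule name used only for illustration.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config submodule.vim-pathogen.ignore untracked
value=$(git -C "$repo" config --get submodule.vim-pathogen.ignore)
echo "$value"   # prints: untracked
```

With the flag set, git status in the superproject stops reporting untracked files inside that submodule.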

Happy Vimming!

Avoiding the ESC key in Vim (and Readline)

It’s probably safe to say that I should’ve changed this (ugh) habit years ago, but I just never got around to it. Thanks to Stephen, I’ve finally updated my .vimrc to exit insert mode using:

inoremap jk <ESC>

Yay!

More importantly, if you tend to set vi editing mode in readline, you definitely want to change its bindings as well. I found this gem hidden deep in the Vim Tips Wiki.

All you have to do is edit your .inputrc file with the following:

set editing-mode vi
set keymap vi

$if mode=vi
    set keymap vi-command
    "ii": vi-insertion-mode
    set keymap vi-insert
    "jk": vi-movement-mode
$endif

As a note: these settings will be picked up by every readline-enabled app (IPython, etc.)

Happy Vimming! :D

Where did all my disk space go?

Update -

Thanks to @pgr0ss for the tip: you can use -k instead of --block-size!


On my cloud servers, I’m always asking myself:

Where the !@#$% did all my disk space go?

And I know somewhere, sometime down the road, future self will be thanking present self for blogging this as a reminder.

To get the top <n> offending directories on your filesystem (replace <n> with a number):

du -x -k | sort -nr | head -<n>

via http://linuxreviews.org/quicktips/chkdirsizes/
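A quick way to convince yourself the pipeline works; the directory layout and file sizes below are invented for the demo:

```shell
# Build a scratch tree with one large and one small directory, then
# show the top 3 disk users in KiB; the root total sorts first.
tmp=$(mktemp -d)
mkdir -p "$tmp/big" "$tmp/small"
dd if=/dev/zero of="$tmp/big/blob" bs=1024 count=200 2>/dev/null
dd if=/dev/zero of="$tmp/small/blob" bs=1024 count=1 2>/dev/null
top3=$(du -x -k "$tmp" | sort -nr | head -3)
echo "$top3"
rm -rf "$tmp"
```

The root directory's total always sorts first, followed by its biggest child, which is exactly the "top offenders" view you want when hunting lost disk space.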

Access to OS X pasteboard in tmux/screen

If you're having trouble accessing pbcopy/pbpaste from tmux and/or an unpatched screen, check out https://github.com/ChrisJohnsen/tmux-MacOSX-pasteboard.git

tl;dr

  1. Clone repo
  2. Run makefile
  3. Add to path (or add symlink on path)
  4. Add set-option -g default-command "reattach-to-user-namespace -l zsh" (or whatever your favorite shell is) to your .tmux.conf
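Spelled out as shell commands, the steps above look roughly like the following; the symlink target is an example, so put the binary wherever your PATH expects it:

```shell
# 1-2: clone and build (the default make target builds the binary)
git clone https://github.com/ChrisJohnsen/tmux-MacOSX-pasteboard.git
cd tmux-MacOSX-pasteboard
make

# 3: symlink onto your PATH (example location)
ln -s "$PWD/reattach-to-user-namespace" /usr/local/bin/

# 4: route tmux's default shell through the wrapper
echo 'set-option -g default-command "reattach-to-user-namespace -l zsh"' >> ~/.tmux.conf
```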

code!

Getting uWSGI + init.d playing nicely on Ubuntu 11.10

A few weeks ago, I wanted to install uWSGI on my Ubuntu 11.10 box for http://allb.us. After going through the standard aptitude/pip installs, I noticed that after running the init.d scripts, absolutely nothing would happen.

zip. nada. zilch.

No log file + no uwsgi process == a lot of sad pandas.

After searching Stack Overflow, it was quite apparent that I wasn’t the only unlucky soul to encounter this error. To debug the uwsgi init.d script, I used the trusty set -xv trick atop the script to see the omgwtfbbqs.

Here are a few things I realized:

Here’s a gist I created of the xml uwsgi configuration I used for my Django application. Hopefully it helps save someone from the hour I spent in startup script hell.

Enjoy! :D

<uwsgi>
  <socket>127.0.0.1:12345</socket>
  <pythonpath>/home/apps/allbus/current</pythonpath>
  <module>wsgi</module>
  <plugins>python</plugins>
  <processes>1</processes>
  <pidfile>/var/run/uwsgi/%n/pid</pidfile>
  <daemonize/>
  <uid>33</uid>
  <gid>33</gid>
  <enable-threads/>
  <master/>
  <harakiri>120</harakiri>
  <max-requests>5000</max-requests>
</uwsgi>

Passing in a custom port to ssh-copy-id

Thank you, #lazyweb.

After years of using ssh-copy-id to drop public keys into a remote machine’s authorized keys, I finally found a post showing how to use the script to connect to a remote machine running on a custom port.

http://it-ride.blogspot.com/2009/11/use-ssh-copy-id-on-different-port.html

Note: This would’ve been apparent if I had just cat’ed the script… but man, am I lazy. :D
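For the record, here’s what the invocation looks like; the host and port are placeholders, and if memory serves, the quoted form is the older trick from the linked post (the whole string is passed through to ssh verbatim), while newer ssh-copy-id builds accept the flag directly:

```shell
# Newer ssh-copy-id builds take the port flag directly:
ssh-copy-id -p 2222 user@example.com

# The older trick: quote everything so the script hands it to ssh as-is.
ssh-copy-id "-p 2222 user@example.com"
```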