Beyond the Blog: More AI and Hacking Content

If you read my posts here, chances are you enjoy the same things I do. So I wanted to share some things I’ve made (or contributed to) elsewhere over the last month or two, in case you’d like to check them out.

More …

Jailbreaking Humans vs Jailbreaking LLMs

“Jailbreaking” an LLM — convincing it to tell you things it’s not supposed to — is very similar to social engineering humans. This piece explores that comparison and predicts that jailbreaking will get much harder as context windows grow very long.

More …

vim + llm = 🔥

If you don’t use vi/vim, you might not find this post very practical — but maybe it’ll convince you to give it a try!

More …