Most of my experience with AI tools has happened through a keyboard. Ask a question, enter a prompt, read the response, refine it, keep going. It's a useful rhythm for certain kinds of thinking: sorting through ideas, translating something technical into something clearer, pressure-testing a thought. But it's also a filtered rhythm. By the time a thought reaches the screen, it's already been edited.

Voice changes that in ways I didn't fully expect until I tried it.

What Feels Different

The moment you start speaking to a tool instead of typing into it, the interaction becomes more immediate. Less like using software, more like working through something out loud. I was aware the whole time that it was still a machine on the other side. But that awareness mattered less than I thought it would, because the pace of the exchange kept pulling me forward.

What I noticed first was how much lower the friction was. Instead of trying to craft the right prompt before sending it, I could talk through an idea the way I'd naturally explain it to someone else. The thought didn't have to be finished before it could be useful. I could start somewhere rough and let the response help me figure out where I actually wanted to go.

For someone whose ideas tend to move quickly, that's a meaningful difference.

The second thing I noticed was where the refinement happens. With typing, most of it happens before you hit send: structuring the thought, trimming what's unnecessary, deciding what to ask. With voice, it happens in the back-and-forth. You say something, hear the response, react to it, reshape the thought, and keep moving. It's more iterative and more conversational, which suits certain kinds of thinking better than others.

What Doesn't Change

None of that lowers the bar for judgment. If anything, it shifts where the bar shows up.

The faster and more natural the interface feels, the easier it is to mistake fluency for accuracy. A voice conversation can move quickly enough that you're three exchanges in before you notice something in the response was off, or that the question you were asking wasn't quite the right one. With typing there's usually more pause built in. With voice you have to apply that pause yourself, which requires a slightly different kind of awareness.

The tool still needs the same human ingredients it needs everywhere else. You still need to know what you're actually trying to solve. You still need to recognize when an answer is technically responsive but not actually useful. You still need to decide what to do with what you get.

What's changed is how accessible the interaction feels, and that matters more than it sounds. The easier AI becomes to use, the more it fits into everyday moments: not just work at a keyboard, but quick idea capture, verbal brainstorming, the in-between moments where typing feels too slow or too deliberate. That shifts the tool from something you go to intentionally into something that starts to travel with the thinking.

What I'm Watching

Every time the interface gets easier, the tool becomes more embedded in how people work. That's mostly a good thing. But it also means the role of human judgment becomes more important rather than less, precisely because the moments of use are more casual and more frequent.

Voice doesn't feel like a gimmick. It feels like another signal that AI is becoming genuinely conversational: less a tool you operate and more a presence you think alongside. Whether that's useful or something to be thoughtful about probably depends on how deliberately you bring your own judgment to it.

For people who think better out loud than they do in a blank text box, it's worth trying. The difference in how thinking flows is real.

Want more practical approaches like this? Explore my curated library of AI tools, prompts, and workflows at resources.taneilcurrie.com
