Taylor’s Teardowns: Xcode Intelligence
I used to do these on YouTube. Walkthroughs, demos, live reactions, whole series. But there were always problems. Audio would cut out, the service I was reviewing would not cooperate on stream, something would fall apart right in the middle of a recording. That is annoying for me, but it is actually a bigger problem for the companies I am covering. They deserve real written feedback. Something they can read, search, share with their team, and reference later. Not a video that broke halfway through.
Taylor’s Teardowns is simple: I pick a product, I use it, and I tell you what works, what does not, and what I think about it.
If you are new here, I am blind, I use VoiceOver on every Apple device I own, I build iOS apps, I do web development, and I am an AI enthusiast.
What Xcode Intelligence Actually Is
Xcode Intelligence is Apple’s name for the coding assistant built directly into Xcode. It is not a plugin or a third-party extension. It ships with the editor.
There are two main pieces.
The first is predictive code completion. It runs on your device, powered by a machine learning model that Apple trained specifically for Swift and Apple’s SDKs. You type, it suggests. Not just the next variable name but full lines, complete function bodies. It knows your project and it learns your style the more you use it.
The second piece is the coding assistant. I hit Command-0 and a sidebar opens. VoiceOver lands me in it. The first time, it asks me to choose a model. I select Claude Agent. From there I type in natural language, Xcode responds. Ask it to explain code, generate something new, fix a build error, write documentation. All without leaving the editor.
Getting Started
The intelligence features already exist in the production release of Xcode. If you have Xcode 26 installed, you have access to them right now. I use the beta. On March 18th, Apple released Xcode 26.4 Release Candidate, and that is the version I have been using. The production version works, but the new refinements are in the beta.
The Coding Intelligence section of the release notes covers a fix for MCP servers getting overwritten during Codex initialization, and a fix for repeated connection dialogs when external development tools are talking to Xcode. Small things. The kind of small things that mean a team is paying attention.
Choosing a Provider
Before the coding assistant does anything useful, you have to enable a provider. Open Xcode preferences with Command-comma, then click Intelligence in the sidebar.
The current options are OpenAI and Anthropic. From OpenAI you get ChatGPT in Xcode or Codex. From Anthropic you get Claude or Claude Agent. You can also add your own provider, something locally hosted or a service you already pay for, as long as it supports the Chat Completions API.
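For reference, this is roughly the request shape a custom provider has to accept. The type names, the "local-model" identifier, and the prompt are my own placeholders, not anything Xcode defines:

```swift
import Foundation

// Minimal Chat Completions request body. The struct names and the
// "local-model" identifier are placeholders for illustration.
struct ChatMessage: Codable {
    let role: String      // "system", "user", or "assistant"
    let content: String
}

struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

let body = ChatRequest(
    model: "local-model",
    messages: [ChatMessage(role: "user", content: "Explain this Swift error.")]
)

// Encode to JSON and POST it to the provider's /v1/chat/completions
// endpoint with a Content-Type of application/json.
let payload = try! JSONEncoder().encode(body)
print(String(data: payload, encoding: .utf8)!)
```

If your locally hosted model speaks that shape, Xcode can talk to it.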
There is an important distinction here. Claude is a chat assistant. You ask it a question, it answers. Claude Agent is something else entirely. It does not just respond. It takes actions. It can build your project, run tests, search Apple’s documentation, add entitlements, capture Xcode Previews to verify what it built, and make changes across multiple files in a single pass. It connects to Xcode through the Model Context Protocol, which Xcode provides directly. You install the agent from the Intelligence settings and Xcode keeps it updated automatically.
Codex from OpenAI is the other agentic option. Same idea, different model.
I use Claude Agent, Codex, all of them. But Claude Agent is my preference. If you have a Claude.ai account, you connect it in the settings and you are done in about two minutes. You will need at least the Claude Pro plan to get started. I am a Max subscriber.
For iOS development specifically, I find this is better than using Claude on its own. Apple put a layer of training data on top of the model, so Claude inside Xcode is tuned for iOS development. It knows the SDKs, it knows Swift conventions, and it knows the patterns Apple wants you to follow. That matters.
The Walkthrough
Today I am working on Perspective Studio, an open source project. I thought I would share the experience here in real time.
I open the project in Xcode and bring up the coding assistant from the toolbar; Command-0 works too. Opening it does not move focus into the assistant, though. Not a fan. I have to navigate to it myself.
VoiceOver says “Compose menu button.” I select Claude Agent.
I VO-left and VO-right through the sidebar to get familiar with the layout. There is an icon that VoiceOver announces as “Clock button.” Not obvious. I click it and it turns out to be conversation history. Previous sessions, earlier conversations, the ability to restore your project to that state. Useful feature. But if you are a VoiceOver user hearing “Clock button” for the first time, you are guessing.
I VO-right some more. VoiceOver says “Attachements menu button.” Not attachments. Attachements. Apple misspelled it in the accessibility label. That is weird.
Then I land on the text field. VoiceOver says “Source Editor edit text Currently on line 1.” That is the prompt field. The place where you type your message to the model. But VoiceOver calls it a source editor. I wish this said “Message” or “Prompt” or anything that tells you what it is actually for. If you did not already know the layout, you would think you landed back in your code.
I type my first prompt. I ask it to drop the deployment target from macOS 26 back to the previous major release, if that is possible.
Claude comes back fast. It changed all six instances of the deployment target from macOS 26.2 to macOS 15.0 Sequoia. It even warns me that if any of my code uses macOS 26-only APIs, I will see compiler errors and need to add availability checks. That is a solid response. It did exactly what I asked and told me what to watch for.
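For anyone unfamiliar with availability checks, this is roughly what Claude is warning about. The helper function here is hypothetical; the pattern is standard Swift:

```swift
import Foundation

// Hypothetical helper: picks a code path based on the OS at runtime,
// so a project with a macOS 15.0 deployment target can still call
// macOS 26-only APIs when they happen to be present.
func windowStylePath() -> String {
    if #available(macOS 26.0, *) {
        // Safe to call macOS 26-only APIs in this branch.
        return "macOS 26 API"
    } else {
        // Fallback for macOS 15 through the releases before 26.
        return "legacy path"
    }
}

print(windowStylePath())
```

Without checks like this, the compiler flags every call into a newer-than-target API as an error.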
But here is something I am noticing. When I am working and chatting with agents in Perspective Studio, the UI freezes. Every time I send a message, the whole interface locks up until the response comes back. My guess is the send path is making the network call synchronously on the main thread and needs some async/await work. I ask Claude to check the threading.
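The fix I have in mind is the standard one: move the slow call into a Task so the send action returns immediately. A rough sketch, with hypothetical names standing in for the real code:

```swift
import Foundation

// Hypothetical sketch: `agent` stands in for the slow network call.
// `send` returns right away; the reply arrives through a callback
// instead of blocking whatever thread called it.
func send(_ prompt: String,
          agent: @escaping (String) async -> String,
          onReply: @escaping (String) -> Void) {
    Task {
        let reply = await agent(prompt)   // suspends; no thread is blocked
        onReply(reply)                    // in a real app, hop to the main actor here
    }
}
```

In an actual app you would wrap the `onReply` work in `MainActor.run` (or mark the view model `@MainActor`) so the UI update lands back on the main thread.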
Next I try something different. I have made a bunch of agents for this project. Each agent is a tool, so the AI is supposed to route my request to the right one. I ask in the chat if I can talk to a specific agent. It is not routing the way I want. I expected to pick an agent and talk to it directly, but that is not how it works. The AI decides which tool to use on its own.
The response comes back in a scroll area. VoiceOver labels it “list.” Just “list.” Sometimes if you interact with it, VoiceOver starts reading and will not stop. It just keeps going and going. One time it froze my entire computer. That is not a minor annoyance. That is a showstopper for a VoiceOver user trying to actually read what the model said.
I VO-right past the list. VoiceOver says “Responding.” Then I hear “Square button.” I have no idea what that is. No label, no hint, just “Square button.” I do not want to find out what it does by pressing it.
While it is responding, the agent builds the app on its own. Right now it says build failed. That is the thing about Claude Agent. It does not just suggest code and leave you to deal with the result. It builds. It sees the warnings, the errors, the console output, the debug messages. It reads all of that and tries again. It has access to the same build pipeline you do.
The response finishes. VoiceOver drops me back at the top. I VO-right and hit another unlabeled button. This one VoiceOver reads as “rectangle.grid.1x2 button.” That is not a label. That is an SF Symbol name. Someone forgot to add an accessibility label and VoiceOver is reading the raw asset identifier. I do not know what this button does and I am not pressing it to find out.
I want to copy the response, but I cannot grab the entire thing to my clipboard. It is too long, and the copy simply fails every time I try.
Final Thoughts
I also notice there is no dictation tool. Voice input would make me go faster.
And when the agent is making plans, there is no equivalent of the ask-questions flow Claude Code has in the terminal. When the agent needs input from me, it asks in plain text. I have to copy the question, figure out where to answer, and respond. The interaction is clunky. Claude Code handles this so much better: it asks a question and you answer it right there. In Xcode, that back and forth is not built in.
What Apple Needs to Fix
Here is something else I noticed. Apple is not keeping up with Anthropic’s releases. The Claude Agent bundled in Xcode 26.3 shipped with version 2.1.14, which runs on the Opus 4.5 model. Anthropic has already released version 2.1.32 running Opus 4.6. That is not a small gap. Newer versions of Claude Code are faster, smarter, and better at fixing their own mistakes. The version Apple is shipping is already behind. One developer figured out you can manually replace the Claude binary at ~/Library/Developer/Xcode/CodingAssistant/Agents/ with the latest version from Anthropic, and it works. But you should not have to do that. Apple controls the update cadence and right now they are not moving fast enough.
I would also love Xcode to send me a notification when the agent is fully done and ready for me to test. It already notifies me when it needs attention for terminal commands. Why not notify me when it finishes? Let me go do something else and come back when the work is ready to review.
That is enough. See you in the next teardown.

