Programming with LLMs, particularly with Claude, took a large leap late last year, so much so that many of my programming friends went from “getting help” from an AI to “pushing most work” to an AI.
However, what doesn’t work is “write me a program that does X.” You really have to think, plan, and dictate the way a senior programmer would to an intern. That AI intern these days is ridiculously good, but only if you set them up to succeed. That typically means that context, architecture, function usage, and more need to be defined. Most programmers using AI do that through a skills or context file they supply along with their detailed request.
For instance, here’s just a few of the things that one programmer tells his AI along with his request:
- Object oriented code, please
- Use properties to pass variables that don’t change a lot among methods. Use name/value pairs otherwise.
- Make methods have defaults to minimize the length of the calls.
- Classes start with capital letters. They are nouns.
- Properties start with lower case letters. They are nouns.
- Methods start with lower case letters. They are verbs.
- Use CamelCase, not underscores.
- Try to avoid methods longer than 20 or 30 lines.
- No blank lines.
- Try to keep classes to less than a thousand lines.
- Use test classes for testing and example programming.
- Validate all public method inputs.
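To make those rules concrete, here’s a small sketch of my own (not that programmer’s actual code) showing what they produce in practice, using a hypothetical `Invoice` class:

```swift
// Hypothetical example following the rules above: the class is a
// capitalized noun, properties are lower-case nouns, methods are
// lower-case verbs, camelCase throughout, defaults keep calls short,
// public method inputs are validated, and there are no blank lines.
class Invoice {
    var customerName: String
    var lineItems: [Double]
    init(customerName: String, lineItems: [Double] = []) {
        self.customerName = customerName
        self.lineItems = lineItems
    }
    func addCharge(amount: Double, taxRate: Double = 0.0) -> Bool {
        guard amount > 0, taxRate >= 0 else { return false }
        lineItems.append(amount * (1.0 + taxRate))
        return true
    }
    func computeTotal() -> Double {
        return lineItems.reduce(0.0, +)
    }
}
```

Notice how the defaults let a caller write `invoice.addCharge(amount: 19.95)` without spelling out every parameter, which is exactly what the “minimize the length of the calls” rule is after.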
In essence, you’re shaping what the AI does when it starts actually coding, and forcing its output into a form you can understand and parse.
Another programmer friend of mine supplies all of his existing code to his AI and, along with instructions like those above, adds “program in my style.”
Within Elements, I sometimes will ask my AI: “I am a website developer who is using Tailwind to control CSS. I need you to create HTML code to do X. You can use JavaScript, HTML5, and PHP as necessary, but the code you create will be used in a Flex/Grid in a responsive layout, so do not hard code CSS within your output.”
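For illustration, this is the kind of output that request should produce (my own sketch, not actual AI output): layout comes entirely from Tailwind utility classes, with nothing hard-coded.

```html
<!-- Hypothetical output: the layout is controlled by Tailwind
     utility classes, so no CSS is hard-coded and the snippet
     reflows inside whatever Flex/Grid container it lands in. -->
<div class="grid grid-cols-1 gap-4 md:grid-cols-3">
  <article class="rounded-lg p-4 shadow">
    <h2 class="text-lg font-semibold">Card title</h2>
    <p class="text-sm text-gray-600">Card body text.</p>
  </article>
</div>
```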
Since I played my way through college as 2nd and later 1st chair trumpet with the Spokane Symphony, I get your idea, and it’s probably a good one.
Xcode these days does directly integrate with Claude/OpenAI, but you have a lot of decisions you’ll want to make up front before getting the AIs involved. I’d suggest keeping your starting point simple, for instance an output grid for valves 1, 2, and 3 (pressed/not pressed), as it simplifies a lot of problems. If you get that working, you can then say “now animate the trumpet valves in two dimensions,” and when that’s working, you can then say “now take this and make it three-dimensional.”
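To make that “grid first” starting point concrete, here is a minimal sketch in Swift (my own naming and types, using the standard B♭ trumpet fingerings for the written octave C4–C5): just a map from note to valve states, with no animation involved yet.

```swift
// Minimal sketch of the "grid first" starting point: map each note
// to its valve combination (true = pressed) before worrying about
// animation. Fingerings are the standard ones for a Bb trumpet;
// the names and types here are illustrative, not a real app's API.
let fingerings: [String: (Bool, Bool, Bool)] = [
    "C4":  (false, false, false),
    "C#4": (true,  true,  true),
    "D4":  (true,  false, true),
    "D#4": (false, true,  true),
    "E4":  (true,  true,  false),
    "F4":  (true,  false, false),
    "F#4": (false, true,  false),
    "G4":  (false, false, false),
    "G#4": (false, true,  true),
    "A4":  (true,  true,  false),
    "A#4": (true,  false, false),
    "B4":  (false, true,  false),
    "C5":  (false, false, false),
]
func describe(note: String) -> String {
    guard let (v1, v2, v3) = fingerings[note] else { return "unknown note" }
    let pressed = [v1 ? "1" : nil, v2 ? "2" : nil, v3 ? "3" : nil].compactMap { $0 }
    return pressed.isEmpty ? "open" : pressed.joined(separator: "-")
}
```

Once a table like this drives a correct text grid, “animate the valves” becomes a rendering problem on top of data that’s already known to be right.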
To stay as current as possible, I’d say you should specify Swift-only code. Think through how the user interacts with the program and make that part of the request. Is the user saying “here’s a musical snippet and I need the standard fingering for it” or “here’s a musical snippet that I’m trying to get a smoother, faster fingering for”? Does the answer step through each note, or does it do it in real time with the ability to slow down the display?
Also, are you doing more than showing just the three valves? On my Bach Strad (but not on another trumpet I used) I had to use the slides on some fingerings to keep pitch.