Static analysis, dynamic analysis, and stochastic analysis
post_below
stochastic analysis
Haha... it really is useful. Tip: have the model write to a file with notes about skipped/dismissed items so it doesn't re-surface them on the next run.
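That skip-list pattern is model-agnostic and easy to sketch. Here's a minimal Python illustration; the file name `review-notes.json` and the `dismissed` key are my own assumptions, not anything Claude prescribes:

```python
import json
from pathlib import Path

NOTES = Path("review-notes.json")  # hypothetical notes file the model reads and writes

def load_dismissed() -> set:
    """Return identifiers of findings dismissed on earlier runs."""
    if NOTES.exists():
        return set(json.loads(NOTES.read_text()).get("dismissed", []))
    return set()

def dismiss(finding_id: str) -> None:
    """Record a finding so the next run skips it."""
    dismissed = load_dismissed()
    dismissed.add(finding_id)
    NOTES.write_text(json.dumps({"dismissed": sorted(dismissed)}, indent=2))

def filter_findings(findings: list) -> list:
    """Drop findings that were already dismissed in a previous run."""
    dismissed = load_dismissed()
    return [f for f in findings if f not in dismissed]
```

In practice you'd just tell the model to maintain the file itself, but the logic it should follow is the same: append on dismissal, filter on the next pass.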
davek804
I have some common patterns for instructing Claude to output results to a file. I both agree with you that it gets a little hinky without the clear-context step, and I would push back a tiny bit.
There are many useful ways of retaining the context you want while effectively ensuring the LLM dismisses what you want it to, without draining all the context.
glesica
This is an interesting idea. I have a codebase at work that I haven't really used Claude with much, partly because it's legacy code and I'm not confident Claude would handle it terribly well (there are a lot of "gotchas" and I don't have buy-in from the team, understandably, to do a wholesale refactor). But just asking it to find problems with specific areas of the code or a single commit could be an interesting way to move the ball forward incrementally.
TheD00d
I've been doing something similar. More of a quick context reminder. Helps keep the AI aware of what I'm trying to do and tailors the output to match.
post_below
It's smart not to start with a significant refactor. As you nudge the ball forward, have Claude map out and summarize architecture, features, practices and patterns you encounter, modularized and loosely organized in some way. Use those files as context for future experiments (or demonstrations). Opus can actually be really good with legacy systems if it has enough context to pattern match effectively.
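One way to wire those summary files back into later sessions is to stitch the relevant notes into a single context preamble. A small Python sketch, where the `claude-notes/` directory and one-markdown-file-per-subsystem convention are assumptions of mine:

```python
from pathlib import Path

# Hypothetical layout: one markdown note per subsystem,
# written by earlier mapping/summarizing sessions.
NOTES_DIR = Path("claude-notes")

def build_context(*topics: str) -> str:
    """Stitch the requested note files into one context preamble,
    silently skipping topics that have no note yet."""
    parts = []
    for topic in topics:
        note = NOTES_DIR / f"{topic}.md"
        if note.exists():
            parts.append(f"## {topic}\n{note.read_text().strip()}")
    return "\n\n".join(parts)
```

The point of keeping the notes modular is exactly this: you pull in only the subsystems relevant to the current experiment instead of one monolithic document that drains the context window.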