AI-related goals for software engineers

As engineering managers, we are tasked with looking after the career development of our reports. In the context of annual reviews, we are often asked to set OKRs, KPIs, or goals. Whatever the label, the context here is more or less the same.

The objective here is to share how to set SMART goals for your reports while fulfilling management's wish to adopt AI in day-to-day operations. Here are some examples, along with the reasoning behind why each is set that way.

1. Develop instruction prompts or chat agents

I think AI-assisted coding has become an established pattern. The assisted coding tools mainly used in enterprises are GitHub Copilot, Claude Code, Cursor, Windsurf, Codex, and so on. What's common among them is the ability to create custom agents or instruction prompts that cater to your team's and organization's needs. While these custom prompts are mostly still in their infancy, many users have already managed to take advantage of the feature to steer the LLM and improve their productivity when writing code.

What can be done here? Build the habit, knowing that perfect prompts and agents will never exist. Set the goals in a way that helps engineers think in terms of agentic workflows by creating AI tooling whenever they deal with recurring patterns. Examples:

  • Create X chat agents to assist in development, regardless of how useful they turn out to be.
  • Create a useful chat agent that understands X product's architecture to assist in development.
  • Improve existing instruction prompts or chat agents to reduce hallucinations when using a code assistant.
  • Demonstrate improved prompting techniques in 2 code reviews or peer demos.

You get the idea. The main goal is to create an environment that allows engineers to explore and learn by building, without expecting perfection. Who knows: out of the 100 that were created, perhaps 10% could turn out to be gems in a niche corner of a workflow.
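As a concrete starting point, here is a minimal sketch of what a repository-level instruction file could look like. This assumes a tool such as GitHub Copilot, which reads repository custom instructions from `.github/copilot-instructions.md`; other assistants have similar mechanisms (Claude Code, for example, reads a `CLAUDE.md` file). The project details below are placeholders, not a recommended template.

```markdown
<!-- .github/copilot-instructions.md (placeholder content, adapt to your team) -->
# Working on the Payments service

- The backend is a Go monorepo; new services live under `services/`.
- Always add table-driven tests alongside new handlers.
- Never log card numbers or other PCI data, even at debug level.
- Prefer our internal `pkg/httpclient` wrapper over raw `net/http` calls.
```

Even a short file like this gives the assistant context it would otherwise have to guess, and it is cheap to iterate on every time the agent gets something wrong.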

2. Explore the use of MCP (Model Context Protocol)

I believe most organizations or teams are hooked into many different kinds of SaaS services: project management, application monitoring, vulnerability scanning, and so on. I don't have to name those tools, as they are pretty common these days.

What's next? Same approach: start with the habit of using the tool, to the point where continuous improvement eventually lets us rely on the output. Much of our day-to-day work involves looking into alerts, escalated tickets, and more. All of this information can be retrieved via MCP, and by combining it with an LLM we are looking at potentially removing a lot of the manual work of switching between websites, turning it into a conversation instead.

Therefore, encourage the use of MCP to build trust by providing relevant context to existing workflows. Examples:

  • Reduce the time needed to triage incoming issues by X% with the use of AI.
  • Use an MCP tool with an LLM to reduce the time needed to perform Y work by X%.
  • Identify X use cases where MCP speeds up development effort, and share them.

Once again, we want to encourage engineers to learn how to adapt to this new world. For most of the constraints we used to worry about, MCP can bridge the gap. Who is better placed to find the problem and the solution than the person actually doing the work?
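To make this concrete, here is a minimal sketch of a custom MCP server written with the official Python SDK (the `mcp` package and its `FastMCP` helper). The `ALERT_API_URL` endpoint and the `get_open_alerts` tool are hypothetical stand-ins for whatever monitoring or ticketing SaaS your team already uses; the point is simply to expose that data as an MCP tool so the LLM can pull it into the conversation.

```python
# Hypothetical MCP server exposing monitoring alerts to an LLM client.
# Assumes the official Python SDK: pip install "mcp[cli]" httpx
import httpx
from mcp.server.fastmcp import FastMCP

# Placeholder for your real monitoring/ticketing API.
ALERT_API_URL = "https://monitoring.example.internal/api/alerts"

mcp = FastMCP("ops-context")


@mcp.tool()
def get_open_alerts(service: str) -> list[dict]:
    """Return open alerts for the given service so the LLM can help triage them."""
    resp = httpx.get(ALERT_API_URL, params={"service": service, "status": "open"})
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Runs over stdio by default, so any assistant configured with this
    # server can call get_open_alerts in the middle of a chat.
    mcp.run()
```

Once a server like this is registered in your assistant's MCP configuration, an engineer can ask "what open alerts does the payments service have?" and triage from the conversation instead of jumping between dashboards.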

3. Set targeted AI-powered improvement goals

Once your team has nailed the above, you likely have a lot of AI-powered tooling in place. The next step is to set goals that keep improving the accuracy of these tools or automate more of the steps.

There is not much to say here, besides putting the Kaizen hat back on and thinking about what can be improved next. Start by setting up measurable metrics, then think about how to improve on them gradually. Examples:

  • Reduce the number of rejected pull requests due to AI-generated errors by X%.
  • Reduce the lead and cycle time of the development cycle by X%.
  • Enhance automated testing to reduce QA or manual testing by X%.
  • Achieve a 10% increase in overall test coverage in the next quarter attributed to AI-assisted test writing.
  • Document team best practices on using AI tools in the internal knowledge base.
  • and many more.

Conclusion

We're in a very interesting era, where there could be a real shift in the way we work. While all of this may still feel new to us, genuinely useful LLMs have been around for three-plus years (since ChatGPT), and they will keep progressing and be even better a year from now.

I strongly believe we need to start adapting our work to this new technology, or we (our careers and the skills we have) will easily be replaced in the future. Since it is still relatively new, people are still very forgiving of failure. As the industry matures, expectations will gradually rise, and by then we will be prepared.