Unpacking the SaaS “AI” magic

This is for all the app developers out there.

For convenience, I'll use AI and LLM interchangeably. To be more accurate, LLMs are a subset of AI, and all the current hype is really about LLMs. So I'll mostly say LLM for the rest of the post, and AI where it makes sense.

Anyway, this discussion basically grew out of two things: my 1:1 coaching sessions with my direct reports (helping them ask the right questions) and my own passion for this trending technology. Together these shaped my world view of LLMs and the future as an app developer.

Unpacking the complexity of AI magic

Having seen more than enough open-source projects full of AI magic, I can tell you that under the hood it's mostly smartly designed system prompts. Once you understand this, you no longer need to fear or feel inferior. You already have the right tools and knowledge to build AI features or products.

I'll show more examples later.

The history

Before that, let's understand how an LLM actually works and the history of how it evolved. Keeping it simple; the OG techies already know this. We went through the old days of deep learning (think neural networks), and then used those to build more advanced models. At least to my understanding, the breakthrough came from transformer-based research such as BERT and its successors, which use text-sequence prediction to guess missing or next words.

Somehow, the "magic" worked beyond what anyone expected. Who knew? By feeding more and more examples into the training set, the AI magic just kept getting better and better.

It's just like the old days of trying to improve CPU performance: jam in as many transistors as you can to get higher speed. Until a point where shrinking transistors further hit a physical limit. Engineers then creatively went multi-core. And guess what, somehow the whole industry agreed and headed in that direction to keep improving processor performance.

In translation, it feels like history repeating itself. Jamming in more training samples to improve model intelligence has sort of reached its limit, and the next bet to get more out of it is "prompt engineering". Yup, I believe this new segment of engineering hasn't died; rather, it's important for the next evolution. We're still at this stage. For now…

Getting the AI to behave the way you want

In general, there are two approaches:

  1. Fine-tuning – re-train the LLM (or a base model) with custom data/samples.
  2. System prompts – aka agents.

For the sake of simplicity, #1 is rather technical, so let’s just focus on #2.
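To make approach #2 concrete, here's a minimal sketch (in Python) of what a "system prompt" looks like in practice. The message shape below follows the common OpenAI-style chat format; the support-agent persona and its rules are my own made-up example, not from any real product:

```python
def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Pair a crafted system prompt with the user's message.

    This is the payload shape most chat-completion APIs expect:
    the system message steers behavior, the user message is the input.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

# A hypothetical "agent": the magic is just carefully worded instructions.
SUPPORT_AGENT_PROMPT = (
    "You are a support agent for an invoicing app. "
    "Answer only questions about invoices. "
    "If the question is off-topic, reply exactly: OFF_TOPIC."
)

messages = build_messages(SUPPORT_AGENT_PROMPT, "How do I void an invoice?")
# `messages` is what you would pass to your provider's chat endpoint.
```

Notice there's no model training anywhere; the "agent" is entirely defined by the text in the system message.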

System prompts

I started to deep-dive into how other people are building their AI apps. I was really impressed by the pace at which they could get to market with all these magical AI features.

That's also where more open-source projects became available, and we can easily learn from them. In there, the one thing that caught my attention was the prompts. People are creatively using prompts to make magic.

I can do that too! It all boils down to how well you understand system design and your domain. Coupling these two, you should be able to craft step-by-step instructions through prompts, and get them to interface with the app you're building.
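One common way to make a prompt "interface" with your app is to ask the model for structured output and parse it on the app side. A hedged sketch, where the prompt wording, the task-extractor scenario, and the `parse_reply` helper are all my own illustrations (the model call itself is simulated):

```python
import json

# Step-by-step instructions baked into a system prompt, ending with a
# machine-readable output contract the app can rely on.
EXTRACTOR_PROMPT = """\
You are a task extractor.
1. Read the user's message.
2. Identify the task title and its due date (ISO 8601).
3. Respond with ONLY a JSON object: {"title": ..., "due": ...}
"""

def parse_reply(reply: str) -> dict:
    """Turn the model's JSON reply into data the app can use."""
    task = json.loads(reply)
    if not {"title", "due"} <= task.keys():
        raise ValueError("model reply missing required fields")
    return task

# Simulated model reply (no API call here), showing the app-side handoff.
reply = '{"title": "Ship v2", "due": "2025-01-31"}'
task = parse_reply(reply)
```

The system design work is in step 3: by pinning down the output format in the prompt, the rest of your app can treat the LLM like any other function that returns data.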

Untangling this mental block has helped me think clearly about the system design to use when trying to build or come up with AI ideas. And I believe it will help you too!

Here, take a look at some of the AI apps I have "starred" on my GitHub.

Even a behemoth like Anthropic powers its frontend chat system with smartly designed system prompts.

Future

Do we even need to fear that this is a temporary skill or a passing demand? Place your own bet!

Here are a few more facts, plus my personal insights. You can probably see the evolution by now. The core LLM is just a stepped-up version of its predecessor (remember the CPU story?): more compute (GPUs, and now Groq's LPU, the "language processing unit"), more training samples, and better prompts. At least for me, only the prompting part is within reach, and it's something we can actually do something about. Only by actually doing it can you get good at it.

Lesson

Expose yourself to how other people are building apps with AI. Then challenge yourself too: "If I were to build this app, how would I build it?" Only by doing enough of these mental exercises can you get better. Eventually it will be second nature to you, just like how you've gotten used to translating a system design into code without thinking too hard about it.

Practice makes perfect. No shortcuts.

By the way, all this LLM stuff is still a business after all. What better time for you to get exposed to all of this for free? Eventually, once it matures, you'll definitely need to fork out something just to experiment with it. Real talk: I was one of the early adopters who got to test and play with the OpenAI and Groq APIs without needing to put in a credit card.

My last word: be resourceful and take advantage of the infancy of this technology, while we still get to try and build things for free!