Using AI for Coding: My Journey with Cline and Large Language Models


In recent months, I embarked on a journey to transform the UI/UX of a side project—bot.eofferte.eu, a SaaS platform that automates Amazon affiliate marketing on Telegram and streamlines the Amazon Associates onboarding process.

The project’s architecture is straightforward: a Go backend powered by the labstack/echo framework, with UI rendering handled by Go’s standard html/template package. To accelerate development and improve the overall user experience, I experimented with Cline through its VSCode plugin as my primary AI coding assistant. Here’s a detailed breakdown of my experience.

Experiments with Frontend Development

As someone who primarily focuses on backend development, UI/UX has always been a challenge. My limited knowledge of modern web frameworks and general aversion to CSS made frontend work particularly daunting. However, leveraging AI tools transformed this weakness into an opportunity for rapid improvement.

The impact was immediate and substantial. I tasked the LLMs with redesigning every page of the website:

  • The landing page underwent a complete transformation
  • The management interface (where users configure their services) received significant upgrades
  • The overall design evolved from basic to professional-grade

Beyond visual improvements, the LLMs proved invaluable for generating and refining essential content like privacy policies, terms of service, and other compliance documentation.

I experimented with two leading models:

  1. Claude 3.5 Sonnet:
    • Performance: Exceptional response speed
    • Accuracy: Deep understanding of web technologies (HTML, CSS, JavaScript)
    • Effectiveness: Made intelligent framework suggestions (Font Awesome, Bootstrap) that enhanced both aesthetics and functionality
    • Limitations: Context window restrictions often interrupted complex tasks
  2. Gemini:
    • Performance: Slower processing speed compared to Sonnet
    • Advantage: Larger context window enabled handling more comprehensive instructions
    • Versatility: Better suited for tasks requiring extensive context analysis

The AI’s ability to suggest appropriate frameworks and create cohesive designs proved transformative, especially for someone with limited frontend expertise.

Prompt Engineering for Success

Working with Cline proved intuitive thanks to its ability to analyze open files and understand repository context. A prime example was the redesign of our bot management interface (bot.html).

The original design required users to complete an extensive form in a single session. To improve user experience, I decided to implement a step-by-step wizard using Enchanter.js. The integration process highlighted the importance of precise prompt engineering:

analyze bot.html - it's a Go (golang) html template.

bot.html contains both html template code and JavaScript. Both are mixed with Go template syntax.

You need to rewrite bot.html using static/enchanter.js in order to convert the form in bot.html to a guided wizard.

Do not touch any JavaScript already present in bot.html and ignore every JavaScript error.

Your <form> tag should wrap the .nav and .tab-content elements. The footer of the form must contain "Back", "Next" and "Finish" buttons with the data-enchanter attributes.

This prompt succeeded because it:

  1. Established Context: Clearly identified the technology stack and file structure
  2. Defined Scope: Provided specific implementation requirements
  3. Set Boundaries: Prevented unnecessary JavaScript modifications
  4. Specified Requirements: Detailed the exact structure needed for the wizard implementation

Backend Development Insights

The backend experience revealed a crucial distinction in AI-assisted development. The project’s backend (Go) and bot component (Python) provided different insights:

  1. Expertise Matters:
    • With strong domain knowledge, AI became a powerful accelerator
    • Code refinement and optimization suggestions were particularly valuable
    • Repetitive tasks were handled efficiently
  2. Pitfalls for Non-Experts:
    • Limited domain knowledge led to accepting suboptimal AI suggestions
    • Understanding the underlying technology proved crucial for effective AI collaboration
    • Code review skills remained essential for quality assurance

In particular, I asked ChatGPT to write a function while already knowing precisely how I would have implemented it. This is crucial: without knowing the expected result and what every line does, I might end up with a wrong implementation. The function I asked it to implement computes a simple similarity score between strings. Since the bot posts deals by searching for keywords on Amazon, the PA-API is likely to return, in the same request, multiple versions of the same product (e.g., a blue camera and a black camera of the very same model).

With a very precise prompt, which must also include a valid example, the generated function came out very similar to the one I had in mind:

I need a function that for similar strings returns me a similarity score.

The case that I need to address is, for example:

string A: "Canon EOS 2000D DSLR Camera and EF-S 18-55 mm f/3.5-5.6 IS II Lens - Black"
string B: "Canon EOS 2000D + EF-S 18-55mm III Lens - Easy-to-use DSLR Camera with a Versatile Lens, Ideal for Portraits and Landscapes"

Should return a similarity score higher, since they are referring to the same camera

Generates:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import re

def compute_similarity(string_a: str, string_b: str) -> float:
    """
    Computes a similarity score between two strings based on their content.

    Args:
        string_a (str): The first string.
        string_b (str): The second string.

    Returns:
        float: A similarity score between 0 and 1, where higher indicates more similarity.
    """
    def preprocess(text):
        """
        Preprocess the input text: remove special characters, lowercase, and tokenize.
        """
        # Remove non-alphanumeric characters and lowercase
        return re.sub(r'\W+', ' ', text).lower()

    # Preprocess the input strings
    clean_a = preprocess(string_a)
    clean_b = preprocess(string_b)

    # Use TF-IDF to vectorize the strings
    vectorizer = TfidfVectorizer().fit([clean_a, clean_b])
    vectors = vectorizer.transform([clean_a, clean_b])

    # Compute cosine similarity
    similarity = cosine_similarity(vectors[0], vectors[1])
    return float(similarity[0][0])

Generation takes just a couple of seconds, plus a minute to read the function and verify that it does what I requested. The productivity boost is huge: writing the function myself would have taken at least three or four times as long.
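In the bot, a function like this can be used to drop near-duplicate PA-API results before posting. Here is a minimal sketch of that dedup step; it uses Python's standard-library difflib as a lighter-weight stand-in for the TF-IDF function above, and the threshold value is an assumption to be tuned:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough similarity score in [0, 1] between two product titles."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def dedup_titles(titles: list[str], threshold: float = 0.5) -> list[str]:
    """Keep each title only if it is not too similar to one already kept."""
    kept: list[str] = []
    for title in titles:
        if all(similarity(title, k) < threshold for k in kept):
            kept.append(title)
    return kept
```

Swapping `similarity` for the TF-IDF version above only changes the scale of the scores, so the threshold needs re-tuning accordingly.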

Multilingual Content Generation

One of the most impressive applications was content translation. The service needed to support multiple Amazon Associates marketplaces: AU, BR, CA, EG, FR, DE, IN, IT, JP, MX, NL, PL, SG, SA, ES, SE, TR, AE, UK, and US.

The bot’s functionality includes Telegram posting and article generation (platinum plan), utilizing message templates stored in JSON format. For example, US.json contains structured messages:

"NOW_AVAILABLE_MESSAGE": "💰*{title}*💰\r\n\r\n Is now available at only 💣 *{new_price}{currency}* 💣\r\n\r\n ➡️ [Go to the offer]({url})"

A single, well-crafted prompt handled the entire translation process:

Translate - if not already translated in the target language - all the JSON files in the defaults folder.

Translate only the text in the TELEGRAM section to the target language, keeping the markdown formatting, the JSON structure, the variables, the emojis, and the line breaks.

The target language is identified by the two-letter code in the filename. For example, SE.json means Swedish, FR.json means French, etc.

Do not translate files already in the target language.

The model efficiently processed each file:

  • Identified language requirements based on filename codes
  • Preserved technical elements (markdown, variables, formatting)
  • Maintained consistency across translations
  • Skipped already-translated content

As an example, the model correctly generated the Arabic translation while preserving the formatting, variables, and emojis:

"NOW_AVAILABLE_MESSAGE": "💰*{title}*💰\r\n\r\nمتوفر الآن بسعر 💣 *{new_price}{currency}* 💣 فقط\r\n\r\n➡️ [اذهب إلى العرض]({url})"

Challenges and Considerations

While the overall experience was positive, several challenges emerged:

  • Context Management: Claude 3.5 Sonnet’s window limits required careful task segmentation
  • Performance Trade-offs: Balancing speed (Sonnet) versus capacity (Gemini)
  • Expertise Requirements: Backend development highlighted the importance of human expertise
  • Quality Assurance: Continuous review remained essential for maintaining code standards

Conclusion

AI-assisted development has fundamentally changed my approach to coding. For frontend tasks, tools like Cline with models such as Claude 3.5 Sonnet have proven invaluable, offering rapid solutions to design challenges. In backend development, these tools excel when guided by experienced developers who can effectively validate and integrate AI suggestions.

The key to success lies in understanding that AI tools are powerful amplifiers of existing skills rather than replacements for fundamental knowledge. They excel at accelerating development cycles, improving designs, and streamlining workflows, particularly in areas outside one’s core expertise.

The distinction between models that are strong at coding and general-purpose LLMs is significant. While both offer value, models that excel at development tasks, like Claude 3.5 Sonnet, provide more focused and efficient assistance in coding scenarios.

Don't want to miss the next article? Want to be kept updated?
Subscribe to the newsletter!
