AI-assisted coding has proved useful for general problems in PHP, JavaScript, SQL and CSS. Applied to Concrete CMS, however, AI-generated code presents problems. The underlying issue? A lack of Concrete-specific training data, combined with references to legacy versions, leads to frequent mistakes and hallucinated interfaces invented to fill the gaps in what is documented.
To be clear, what we are talking about here is using AI to write code, not simply as a front end to a search engine.
AI excels at straightforward programming challenges. For instance, updating code for PHP 8 compatibility is a simple, well-documented task that AI handles reliably: it can quickly generate compatibility patches and refactor deprecated functions.
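To give a flavour of what that means in practice, here is a generic sketch of common PHP 8 breakages and their fixes. None of this is from any particular Concrete CMS package; it is simply the sort of mechanical refactoring AI gets right.

```php
<?php
// Common PHP 8 compatibility fixes of the kind AI handles reliably.

$options = ['width' => 600, 'height' => 400];

// Before: each() was removed in PHP 8.0
// while (list($key, $value) = each($options)) { echo "$key=$value\n"; }
foreach ($options as $key => $value) {
    echo "$key=$value\n";
}

// Before: curly-brace string offsets were removed in PHP 8.0
// $initial = $name{0};
$name = 'Concrete';
$initial = $name[0];

// Before: passing null to strlen() is deprecated from PHP 8.1
// $length = strlen($maybeNull);
$maybeNull = null;
$length = strlen($maybeNull ?? '');

echo "$initial $length\n";
```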
One of my biggest successes with AI-assisted coding was converting my PHP-based comparison engine, originally built for server-side Form Reform handlers, into a JavaScript equivalent for front-end form field dependencies. AI handled the bulk of the translation, requiring only minor edits to tidy it up. I still had to test it, but overall it saved me days of manual rework.
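A minimal sketch of the kind of translation involved is below: a PHP-style comparison handler re-expressed in JavaScript. The function name and operator strings are hypothetical, not Form Reform's actual API.

```js
// Hypothetical comparison handler translated from PHP to JavaScript.
function compareValues(operator, left, right) {
    switch (operator) {
        case 'equals':
            // PHP's loose == coerces differently to JS ==, so values
            // are normalised to strings before comparing
            return String(left) === String(right);
        case 'greater_than':
            return parseFloat(left) > parseFloat(right);
        case 'less_than':
            return parseFloat(left) < parseFloat(right);
        case 'contains':
            return String(left).includes(String(right));
        default:
            return false;
    }
}

// e.g. show a dependent field only when another field's value matches
console.log(compareValues('contains', 'Concrete CMS', 'CMS')); // true
```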
I have also found AI a useful assistant for creating complex SQL queries. I leaned on it when adding an Attribute Cleaner to Extreme Clean. With each query, I asked the AI to provide alternatives and then to tell me which was most efficient. As an aside, AI was also useful for deciphering the cryptic MySQL error messages generated when the AI got the SQL wrong!
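To give a flavour of those exchanges, below are two equivalent formulations of the kind of orphan-hunting query an attribute cleaner needs. The table and column names are illustrative assumptions rather than Concrete's actual schema; the point is generating alternatives and asking which the optimiser will treat more kindly.

```sql
-- Alternative 1: NOT IN with a subquery. Simple to read, but can
-- perform badly (or misbehave with NULLs) on large tables.
SELECT av.avID
FROM AttributeValues av
WHERE av.avID NOT IN (SELECT cav.avID FROM CollectionAttributeValues cav);

-- Alternative 2: LEFT JOIN ... IS NULL. Usually the variant the AI
-- recommended as more efficient for finding orphaned rows.
SELECT av.avID
FROM AttributeValues av
LEFT JOIN CollectionAttributeValues cav ON cav.avID = av.avID
WHERE cav.avID IS NULL;
```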
The common factor behind all of these successes is that the AI could work from accurate language specifications, profuse documentation and abundant examples already present in the data it was trained with.
Concrete CMS, however, lacks the documentation and code examples needed to be well represented in AI training datasets. It has an enormous volume of code full of Concrete-specific structures and interfaces that an AI doesn't know about, much like developers encountering the platform for the first time. But rather than admitting it doesn't know, an AI will try to be helpful and guess to fill in the gaps.
For example, the Concrete CMS permissions code is particularly convoluted. I tried asking an AI about it, but because it had little to work with, it responded with plausible-sounding fabrications: classes and methods that simply don't exist. I have learned by bitter experience to be wary of such rabbit holes and not to waste time chasing the rabbit.
While AI offers benefits, there is a risk in trusting its responses without verification. Answers can be misleading, so they require scrutiny to separate hallucinations from valid solutions. We need to be highly specific, or to provide direct reference code, to prevent AI from filling gaps with guesswork.
A secondary gripe is that some AIs have an increasingly sycophantic personality, creepily praising expertise in various domains. I get tired of answers suffixed with hot air such as "Given your expertise in Concrete CMS, you should have no trouble deploying this solution." Flattery doesn't compensate for inaccurate responses.
For now, Concrete CMS developers need to tread carefully, using AI selectively for general programming tasks while remaining vigilant against its tendency to confidently misdirect when asked about Concrete CMS specifics.
If you would like to discuss any of these thoughts, please start or continue a thread on the Concrete CMS Forums.
Every thought is mine. I use AI like I would a human sub-editor, to catch mistakes, refine and suggest improvements. But the final decisions, edits, and creative direction remain entirely in my hands. AI is a tool, not the author.