I have recently been using AI Chatbots a lot to help me with GNU/Linux bash shell commands and in coding custom scripts (for use on my PC).
The majority of my life (before retirement) was spent as an engineer. I am definitely NOT a programmer, although on occasion I have dabbled in software, coding programs for work and also for my own private use, with languages ranging from Forth (40 years ago; I have forgotten everything), to a custom spacecraft test and operations language, to a satellite automatic procedure execution "language", to, more recently, very long bash shell commands for my home PCs.
Recently, with the help of AI chatbots, I have been converting some of my very long, complex bash shell commands into scripts (which I place in /home/oldcpu/bin). While converting a complex bash shell command into a script, I also have the AI bot add some enhancements that I was (and still am) too lazy to figure out how to implement by myself.
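For anyone curious what that conversion looks like, here is a minimal, purely illustrative sketch (not one of my actual scripts): a find|sort|head one-liner turned into a small script in ~/bin, with arguments and defaults added as the kind of "enhancement" the chatbots help with. The script name "biggest" and its defaults are my invention for this example:

```shell
#!/bin/bash
# biggest -- list the N largest files under a directory.
# Hypothetical example: save as ~/bin/biggest, then chmod +x ~/bin/biggest
# (assuming ~/bin is on your $PATH). It replaces a long one-off pipeline.
dir="${1:-.}"      # directory to search (default: current directory)
count="${2:-5}"    # how many files to show (default: 5)
# Print size<TAB>path for each file, largest first, top N only.
find "$dir" -type f -printf '%s\t%p\n' 2>/dev/null | sort -rn | head -n "$count"
```

Usage would then be something like "biggest ~/Videos 10" instead of retyping (or recalling from history) the whole pipeline each time.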
AI bots I have used for this are ChatGPT, Google Gemini, Grok, DeepSeek and claude.ai. Since I am using the cost-free access to those, sometimes, depending on the AI chatbot, the work will "time out" mid-project. At that point I often take the intermediate result and carry it over to another AI chatbot to complete the work.
Often I have to stop the chatbot from its overall script updates and have it refocus on very specific parts (with test commands), so that it can "learn" enough to then apply those small tests to the overall script completion. Over half the time the chatbot does not ask for the mini-test, but many times it does. Doing the mini-tests typically (for me) greatly speeds up the chatbot's overall work, greatly reducing the number of iterations that fail in my testing.
On more than one occasion, when an AI chatbot was struggling with syntax (the bot's output failing multiple times in my testing), after these multiple failed iterations of incredibly complex (for me) scripts, I have taken the script to another AI chatbot. It will find where the first chatbot was going wrong. I then copy the "fixed" script back to the AI chatbot that was "struggling" and get a "congratulations" in return, plus an explanation from the struggling chatbot of why it was "struggling": some "assumption" on its part that turned out to be incorrect.
Key for me to get a good output is to phrase the request to the chatbot VERY, VERY carefully, and to provide it with the best information possible for it to proceed.
Of course these chatbots are simply incredibly advanced "language models", and as "language models" they make many mistakes. Hence testing is essential - but regardless, I am amazed at what they can do, and I have benefitted in my video processing hobby via the use of such "language models".
Has anyone else encountered the same?
Any "chat" / stories to tell?