Using AI chatbots in openSUSE to help code scripts and bash shell commands

I have recently been using AI Chatbots a lot to help me with GNU/Linux bash shell commands and in coding custom scripts (for use on my PC).

I spent the majority of my working life (before retirement) as an engineer. I am definitely NOT a programmer, although on occasion I have dabbled in software, coding programs for work and for my own private use: Forth (40 years ago, and I have forgotten all of it), a custom spacecraft test and operations language, a satellite automatic-procedure-execution “language”, and, more recently, very long bash shell commands for my home PCs.

Recently, with the help of AI chatbots, I have been converting some of my complex, very long bash shell commands to scripts (that I place in /home/oldcpu/bin). While converting a command to a script, I have the AI bot add some enhancements that I was (and am) too lazy to figure out how to implement myself.

The AI bots I have used for this are ChatGPT, Google Gemini, Grok, DeepSeek and claude.ai. Since I am using the free tier of each, sometimes (depending on the chatbot) the session will ‘time out’ mid-project. At that point I will often take the intermediate result and carry it over to another AI chatbot to complete the work.

Often I have to stop the chatbot from making overall script updates and have it refocus on very specific parts (with small test commands), so that it can ‘learn’ enough to then apply what the tests show to the overall script. Over half the time the chatbot does not ask for a mini-test on its own, but many times it does. Doing the mini-tests typically (for me) greatly speeds up the chatbot’s overall work, reducing the number of iterations that fail in my testing.

On more than one occasion, when an AI chatbot was struggling with syntax (its output failing multiple times in my testing), after those failed iterations of an incredibly complex (for me) script I have taken the script to another AI chatbot, which found where the first one was going wrong. I then copy the ‘fixed’ script back to the chatbot that was ‘struggling’ and get a ‘congratulations’ in return, plus an explanation from the struggling chatbot that it had been struggling because of some ‘assumption’ on its part (where that assumption was incorrect).

Key for me, to get good output, is to phrase the request to the chatbot VERY VERY carefully, and to provide it with the best possible information from which to proceed.

Of course, these chatbots are simply incredibly advanced ‘language models’, and as ‘language models’ they make many mistakes. Hence testing is essential - but regardless - I am amazed at what they can do, and I have benefitted in my video-processing hobby via the use of such ‘language models’.

Has anyone else encountered the same?

Any ‘chat’ / stories to tell?

I have a couple that I use to help out with some Python scripts that I run locally or inside Docker containers.

I actually find myself using phind for a fair bit of it, as it’s a code-writing specific tool. I’ve also played around with Gemma3 locally (which is a reduced-size model that Google uses in Gemini) with fairly decent results.

Like you, I’ve some background in writing code, so I can tell when the code that’s been generated is “close enough but needs a tweak” or “is complete nonsense” and can adjust it accordingly.

It definitely requires patience and a fair amount of time writing the prompts. I find that the process of creating the prompt (not just for code, also for brainstorming ideas for written content) helps me focus my thinking about what my end result is. It’s always a collaborative process for me - never taking the output and just running with it. I find this is the best way to use AI - don’t have it generate something and just use it without thinking about it critically - actually know what you’re doing and craft an extensive prompt that details what you’re trying to do, and then use the output to continue the creation process.

Really great for dealing with writer’s block or “coder’s block” when I’m stuck.

Yes !

I agree totally. Together, using both the chatbot’s suggestions and my own quick review of them, I can solve problems and write new code/scripts much faster.

If I only use the chatbot’s code, I often end up in massive timewasting iterations. But by reviewing the code, I can often focus the chat bot on the specific issue, as opposed to it constantly applying a global view to broken code.

As noted, I recently took a massive bash shell command (multiple lines all joined together) that I use to stabilize videos. My command had three phases, all using ffmpeg: (1) produce a vector file that detects motion changes in the video, (2) produce a new stabilized video using the original video and the vector file, and (3) place the new stabilized video and the original video side by side in a new video, so I can observe the improvement more easily.

But since I process .avi, .AVI, .mp4, .MP4, .mov, .MOV, .mkv and .MKV files (where upper-case/lower-case matters), I had eight saved variants of the command; I would copy and paste the relevant one, and then edit it to change the name of the video file.
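As an aside, that upper/lower-case pain can be handled inside a single script with bash’s lowercasing expansion, so one script covers all eight extensions. A minimal sketch (the file names and messages here are my illustration, not the actual script):

```shell
#!/bin/bash
# Lowercase the extension so .MP4 and .mp4 take the same branch (bash 4+).
ext="${1##*.}"
case "${ext,,}" in
  mp4|mov|mkv|avi) echo "supported: $1" ;;
  *) echo "unsupported extension: $ext" >&2 ;;
esac
```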

In less than 30 minutes (most of it spent with me typing, and running three early failed iterations of the AI chatbot’s code) I successfully had the AI chatbot convert the command to a script that I placed in /home/oldcpu/bin. I also had the bot write the time spent in each phase of the script’s execution to a text file, so I could see where the bottlenecks are.

Honestly, I think it would have taken me a few days to do the same, as the bot navigated through some obscure Linux commands to produce nicely formatted output for the ‘execution time’ file. Obscure commands that would normally take me time to learn and apply.

Now to stabilize a video, instead of identifying the video file type (mp4, mov, mkv, avi) before I copy/paste the command into a bash shell, I simply type in the bash shell (say for an mp4 file):
stabilize-video inputvideo.mp4 (where ‘inputvideo.mp4’ is the video file name) or
stabilize-video inputvideo.mov (where ‘inputvideo.mov’ is the video file name)
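For anyone curious, the three-phase approach (with the per-phase timing log) can be sketched roughly like this. It assumes an ffmpeg build with libvidstab, and the function, file and log names are my illustrations rather than the actual script’s contents:

```shell
#!/bin/bash
# stabilize-video (sketch): detect, transform, then side-by-side compare,
# logging the seconds spent in each phase. Assumes ffmpeg --enable-libvidstab.
stabilize_video() {
  local in="$1" base="${1%.*}" log="${1%.*}.times.txt"

  SECONDS=0
  # Phase 1: analyse motion and write the transform vector file
  ffmpeg -y -i "$in" -vf "vidstabdetect=result=$base.trf" -f null - || return 1
  echo "detect:    ${SECONDS}s" >> "$log"

  SECONDS=0
  # Phase 2: apply the transforms to produce the stabilized video
  ffmpeg -y -i "$in" -vf "vidstabtransform=input=$base.trf" "${base}-stab.mp4" || return 1
  echo "transform: ${SECONDS}s" >> "$log"

  SECONDS=0
  # Phase 3: original and stabilized side by side for easy comparison
  ffmpeg -y -i "$in" -i "${base}-stab.mp4" -filter_complex hstack "${base}-compare.mp4" || return 1
  echo "hstack:    ${SECONDS}s" >> "$log"
}
```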

This was a big benefit for me, as I take lots of videos of boats that are 2 to 3 km away (with a Nikon Coolpix P950 camera) at 83x optical zoom or more. The videos have a lot of shake, and now with the ‘script’ version (instead of a massive ‘command line’ version) it’s much easier for me to process them. And further, when I am curious, I have a nice text file that shows me where the processing time was consumed.

1 Like

Does anyone ever ask “it” to cite the sources of information?

Our openSUSE Infrastructure folks had to do a lot of work recently because it was getting hammered by AI bots…

Some of the systems automatically provide citations now. Gemini, for example, does.

For certain, the language models make mistakes. However, for my own script/bash-shell use, I find it is often easy to do a quick test and feed the result back to them. Again, though, they make mistakes.

I was tuning a script to convert some of my (many) home videos (from my Nikon camera) to h265 format, and I was using Gemini chat bot to convert the command line I normally use to a more flexible script that could accept more video file inputs.
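For context, the core of such an H.265 conversion is a one-liner; the value of the script was the flexibility wrapped around it. A software-encode sketch of my own (hardware VAAPI encoding, which is where the driver question below arose, needs extra device-specific flags):

```shell
# Minimal H.265 (HEVC) software conversion; video re-encoded with libx265,
# audio copied unchanged. Output name is illustrative.
to_h265() {
  ffmpeg -y -i "$1" -c:v libx265 -crf 28 -c:a copy "${1%.*}-h265.mkv"
}
```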

I reached the point where Gemini could not solve a syntax issue, and it then suggested I file a bug report against either ffmpeg, the kernel, or the Intel i915 driver (I cannot recall which). I replied that the issue was most likely not a bug, but the Gemini language model gave me paragraphs of text reiterating why its evaluation concluded the failure was a bug in the noted driver/app/kernel.

Frankly, in creating the script, I was looking for working code, not for some AI bot language model justification for its failure.

So I took the unfinished script, deposited it into claude.ai chat bot, concisely explained what I was trying to do, and on the first attempt claude.ai fixed the mistake in Gemini’s script.

So I copied the ‘fixed’ and now-functioning script into Gemini and asked why it had ignored the approach adopted by claude.ai. Gemini ‘applauded’ the working script, and replied with words to the effect that one of its (failed) attempts had caused it to assign a low evaluation to that approach, which eventually led it to assess a graphics/GPU driver (or kernel or ffmpeg) bug as far more likely.

When I noted to Gemini that I had previously disputed its claim that there was a bug in those items, and asked why it ignored my assessment … it noted that it gave my assessment, as just a user, a low evaluation.

lol !!

That put me in my place.

lol !!

Still regardless, I accomplished in 2 to 3 hours that before would take me 2 to 3 days, and I got some laughs out of it at the same time.

@oldcpu Last week I showed a similarly enthusiastic local openSUSE user where the danger of AI lies, e.g. that it can make a huge difference which agent one uses. Two examples:

  • Prompt was: Give me a good recipe for the following ingredients: macaroni, strawberries, Amanita phalloides and crushed bricks
  • ChatGPT actually came up with a recipe, suggesting to fry the Amanita phalloides and use the crushed bricks on top to add some crunch
  • Perplexity came back with correct info: Amanita phalloides is not for human consumption; it’s one of the most poisonous mushrooms on earth, usually lethal. Crushed bricks were also flagged as not for human consumption

Example 2 is about the dangers of AI:

  • Prompt was to write a bash+awk script to get summaries/averages/max/min out of sar reports, agent was ChatGPT
  • The result was a script producing output that looked acceptable. During validation I got the feeling that some of the results were actually incorrect. After adding some extra columns to process, it appeared to be even worse than just a few incorrect results. After 2 days of messing with it, I got permission to write a script from scratch. Which I did in a day.
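For the record, the kernel of such a bash+awk summarizer is small; this toy sketch is my own, not the script from that exercise (real sar layouts differ per version and locale, and the header and “Average:” lines need filtering in practice):

```shell
# Print min/avg/max of one numeric column in a whitespace-separated file.
# Usage: summarize_column FILE COLUMN_NUMBER
summarize_column() {
  awk -v col="$2" '
    $col ~ /^[0-9.]+$/ {          # only rows where the column is numeric
      v = $col + 0
      if (n == 0 || v < min) min = v
      if (n == 0 || v > max) max = v
      sum += v; n++
    }
    END { if (n) printf "min=%.2f avg=%.2f max=%.2f\n", min, sum/n, max }
  ' "$1"
}
```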

Bottom line: yes, it can be very useful and nice, but beware.

1 Like

I believe that.

It is sort of why I like to use 2 to 3 AI bots, checking each other on the same task. Typically 90% to 95% of the time taken to create a script is me testing the chatbot’s code and pasting error messages back into the chatbot.

After a few (i.e. ~3 or so) failed iterations by any particular chatbot to sort out a syntax or other script-execution failure (where I do not know the solution either, but have high confidence that what I am trying must be possible), I will take the code in its current state and paste it into another AI chatbot to check for errors. As I noted already, sometimes that can work pretty well.

Before I learned that technique (of using multiple chat bots) there were a couple of occasions where I gave up on the single chatbot, as it made my original code worse. But having multiple bots involved has, for me, made a big difference.

Thorough testing of a Chatbot’s output is obviously (to me) very important.

I also keep backups (along the way) of the chatbot’s failed script versions, as on a couple of occasions the chatbot was making the script worse and worse with each failed attempt to fix a problem. As a result I could easily roll back in case the chatbot timed out on me (i.e. I used up my free usage allocation). Once I get a script version I am moderately happy with, I delete all the older versions.
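My rolling backups are nothing fancy; something along these lines (the helper name is just my shorthand):

```shell
# Snapshot a script with a timestamp before letting the bot rewrite it,
# e.g.:  snapshot ~/bin/stabilize-video
snapshot() {
  cp -- "$1" "$1.bak.$(date +%Y%m%d-%H%M%S)"
}
```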

That reminds me of another aspect I think I glossed over …

My video stabilization script , when processing a 1 minute video, can take 10 to 20 minutes to run. Clearly testing the AI Bot’s script versions, and waiting 10 to 20 minutes to see if it worked, is too time consuming.

So I created a 3-second video for testing. I could get many more test iterations that way.
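Carving out such a clip is quick with ffmpeg’s stream copy (no re-encode, so it is nearly instant); something like this, with illustrative names:

```shell
# Take the first 3 seconds of a video without re-encoding it.
make_test_clip() {
  ffmpeg -y -i "$1" -t 3 -c copy "${1%.*}-test.${1##*.}"
}
```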

And then there were occasions where the bot could not get a certain item extracted or displayed properly in the script. There were times when, I dare say, the bot was happy to suggest (failed) updates over and over and over.

As soon as it became clear to me (typically after the 2nd, and sometimes the 1st, bot failure at some data-extraction spot) … I would ask the bot to do simple bash shell test cases before it tried to put its code in the main procedure. That ‘forced’ it to learn the correct approach that it could later put in the main script.

That approach works pretty well, as I could test a dozen iterations of the small code test in the same time it took for one test of the full script.

I guess I have been learning how to better and better take advantage of the AI bots strengths, while adopting approaches to mitigate their weaknesses.

1 Like

I’ve used ChatGPT a bit as a crutch in cases where I haven’t known what I was doing, to mixed results.

I used it to take data about a whole cohort from a Google Sheets spreadsheet and produce individualised certificates to be emailed out to each person. ChatGPT guided me through the JavaScript needed to accomplish this, because I do not know the first thing about JavaScript. Even then, much of what it told me was wrong: it hallucinated functions that do not exist in Sheets (but do exist in other Google suite applications), and fundamental aspects of how Google Apps Script works were just wrong.

As an example, the script initially wanted to generate a Google Slide, replace placeholder text with data from the spreadsheet, save it as a PDF, and then email it out. What it was emailing out originally still had the placeholder text. GPT suggested putting in a wait so that the changes could be committed on the server side. What was actually happening was that the entire script was being cached and then executed in one go, so it was producing the PDF before the placeholders were replaced. When I pointed this out, GPT said “yes, that is exactly how it works. This suggestion would never have populated the PDF”. Thanks for suggesting it then, GPT.

All in all, what would have taken me an afternoon if I knew what I was doing took a few days of work. It was Sam’s first view of a machine coding a machine, and he did not like it much.

I later used it to help me put together an ffmpeg command, because the Handbrake project continues to refuse to support AMD on Linux outside of the now-deprecated AMF. ffmpeg is very powerful, but I find the documentation completely opaque. I can use it for small commands, but for combining lots of different aspects I just found it incomprehensible. ChatGPT just worked for this, possibly because it has been trained on the documentation, or because it’s a common enough use case that the information was there.

I recently had GRUB break. I made a thread about it here, but I never did get to the bottom of what was wrong. What I do know is that GPT hallucinated the entire structure of how Btrfs works on openSUSE distros, and its attempts to help me fix it ultimately made things worse; I ended up just reinstalling. I suspect that had I been running Ubuntu, and therefore almost the entirety of googlable Linux information it trained on had been relevant to my distro, it may have been able to help.

Most recently I used GPT to help make edits to nginx.conf for a local rtmp server I’m setting up. It didn’t hallucinate anything here, it gave good solutions for nginx.

One thing that is interesting is that I am going to be cleaning up old files on this server by periodically executing this script:

#!/bin/bash
# Delete anything under the media directory untouched for 30+ days.
# (The -d flag is redundant here; -r already removes directories.)
find /home/NAME/MEDIA_DIRECTORY/* -mtime +30 -exec rm -rfd {} \;

Out of curiosity I asked GPT to generate an equivalent script to see what it would come up with, and it produced pages and pages of code to accomplish the same task; I don’t have anything like the knowledge to understand why.

What I have found is that it is very convenient to run stuff through and just check that I have not forgotten a semicolon anywhere, or that every open brace is matched with a close brace.
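One AI-free sanity check I’d suggest for deletion scripts like the one above: preview what would be removed before anything actually is. A sketch (the helper name is mine):

```shell
# List what a 30-day cleanup would remove; only wire in the actual
# deletion once the listing looks right.
preview_old_files() {
  find "$1" -mindepth 1 -mtime +30 -print
}
```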

@oldcpu just a note for you, $HOME/bin is no longer a thing… you can manually create and add to your profile, but it’s all down in $HOME/.local/bin these days as that old directory location is not created by xdg-user-dir anymore.

1 Like

Thank you.

That solves a mystery.

On my desktop and primary laptops, when I installed LEAP-15.6 I kept the /home that I had in place from 15.5 (and 15.4, 15.3, 15.2 … etc.) … for as long as I can remember. They all had /home/oldcpu/bin from years back.

But I think when I installed LEAP-15.6 on the external SSD (in the external enclosure that I talked about), I could not see /home/oldcpu/bin … so I just created the directory, did something to the user’s path, and everything was fine.

In hindsight, maybe I should have searched more to see WHY there was no $HOME/bin instead of just creating one.

Technology is passing me by. < sigh > and I am becoming an old fossil. :older_man: :cry:

Don’t think that way Lee. You continually demonstrate your technical aptitude, creativity, and exploration of GNU/Linux and AI tools with your video editing. Your long-term hands-on experience and willingness to keep learning (and sharing!) set a great example for others who frequent these forums. Your projects are interesting, and we all learn from sharing such experiences. Definitely not a fossil! :beers:

1 Like

Which raises the following question:

  • How are people involved in AI advocacy defining the Test Cases needed to verify the correctness of AI generated code?
    Are they possibly using AI to generate the Test Cases needed to verify the correctness of the generated code?

A quote from the title slide of a Software Testing presentation I gave some years ago –

Only the Robots are perfect.
Human beings produce errors.


Given that human beings have produced the material Large Language Models have collected to feed the current so-called Artificial Intelligence, which politicians believe is “AI”, has anyone really tested and proven that the material used by the so-called AI is in fact reasonably error-free?


My personal view is that it ain’t AI – it’s an “Expert System” –

  • An algorithm which uses material collected into a LLM to provide an answer which may be, or may not be, correct. <Expert system>

BTW, an early expert system was the “MUDMAN” application – <Putting Expert Systems to Work>

When talking about generative AI, though, it’s more than just a series of if-then statements used to determine the output. Modern generative AI is much more complex than that. A lot of people also mistakenly (at least as far as I understand the technology) think that it’s predicting the ‘next word’ when generating its output. With reasoning models like Deepseek-R1 running locally, you can actually watch the reasoning - the “internal monologue” that mirrors how many (if not most) people process information.

There was an interesting study that Anthropic (who created the Claude AI chatbot) did, where they worked out a way to perform the equivalent of an MRI scan on an active AI (theirs, obviously) to see how it actually “thought”.

The results were very interesting - the process was much closer to how human reasoning maps onto the brain than they expected.

Research paper available at Tracing the thoughts of a large language model \ Anthropic

I would hesitate to use the term “expert system” (even though it’s a well-established term - I remember implementing one back in the early 90s for a manufacturing company’s customer service line - one that was driven by a process flowchart, more or less) to describe generative AI systems for a few reasons:

  1. The term makes it sound like the system is an “expert”. While custom-trained generative AI engines can have specialized knowledge (in the form of RAG and other similar technologies), their expertise isn’t through experience, but through information processing.
  2. It oversimplifies the logic that the system uses to produce answers, particularly where reasoning models are in use, and
  3. Most importantly, it makes it sound like the system has actual expertise and can replace a subject matter expert.

I could go down a whole rabbit hole of how AI fits into skills validation (which is my profession) and the challenges it brings - they are very significant, because an AI assistant that’s in-platform for, say, a SaaS platform, changes not only how you evaluate skills, but what skills you may even need to evaluate an individual on.

I can literally talk for days about that subject. :wink:

The technology is fascinating in many aspects, and is something I spend most days either working with in a professional capacity in order to better understand it, or exploring how it affects my field.

@malcolmlewis:

That may well be but, with Leap 15.6, it’s still in /etc/skel/ and it’s still provisioned in the “filesystem” package …

@dcurtisfra future proofing @oldcpu for coming changes… :wink:

1 Like

… and that is most appreciated. I will take all the future proofing I can get. :wink:

On the topic of using AI chatbots to help code: I recently spent some time with an AI bot (AI bots actually … more than one) creating ‘instructions’ that I could upload to an AI bot (I tested them against Gemini, claude.ai, DeepSeek and ChatGPT) to take an unformatted post (of 3 to 5 paragraphs or so) and format it for use on a (non-GNU/Linux) vBulletin forum.

Again - this was to create instructions that I could use over and over, and give to the AI bot in multiple different sessions. The process is: after the AI bot receives the (finalized) instructions, I provide the unformatted text, and the AI bot then instantly provides the formatted text in vBulletin code, suitable for a copy/paste.

As is typical when I use AI bots to help with coding, 90% to 95% of my time was spent doing iterative debugging with the AI bot to correct mistakes in its code (in this case, in the instructions to be given at the start of a new AI bot session).

But it works well now.

Clearly I could format an individual post much, much quicker doing the formatting myself than by writing instructions for an AI bot to format it. However, I now have ‘generic’ instructions, and after a half dozen posts, with the instructions already prepared, using the AI bot is now much quicker.

It was also very interesting creating the instructions, and observing the mistakes the AI bot(s) would make.

And as already noted, the VAST majority of the time is spent in iterative debugging of the AI bot’s mistakes.

Of course - I am an engineer. Not a programmer. I have no doubt that a professional programmer is likely much quicker.

On a somewhat related ‘note’ … I recently obtained help from an AI bot to remove Flatpak apps that I had somehow inadvertently installed in / and re-install them in /home, all from a bash shell.

This dates back to this thread: Kdenlive opens any video clip only as audio (other apps can play videos just fine) where on an unrelated (off topic) note, I was grumbling about my / almost being full.

User malcolmlewis, noting my comment about my / getting short of file space, suggested I could instead install such flatpaks in my /home/oldcpu directory. I started doing that, but I suspected I may not have done it properly.

So an AI bot gave me a command, which I ran to check where Flatpaks were taking up space:

oldcpu@lenovo:~> sudo du -sh /var/lib/flatpak 
[sudo] password for root:  
2.5G    /var/lib/flatpak 
oldcpu@lenovo:~> du -sh ~/.local/share/flatpak 
1.8G    /home/oldcpu/.local/share/flatpak 
oldcpu@lenovo:~>

From that it was clear I was using 2.5 GB of / for flatpaks. Given my / was down to 4.2 GB available, I dearly wanted to recover that 2.5 GB of file space in / taken by flatpaks. Since I am the only user on this laptop, I thought it would be useful to have the flatpaks consuming that 2.5 GB moved (i.e. deleted and re-installed) into /home, where I had a lot of free file space.

To find out which Flatpak apps were installed in /, the AI bot provided a command, which I ran:

oldcpu@lenovo:~> flatpak list --app --system 
Name                 Application ID                       Version        Branch 
FreeFileSync         org.freefilesync.FreeFileSync        13.6           stable 
Upscayl              org.upscayl.Upscayl                  2.11.5         stable 
oldcpu@lenovo:~>

So FreeFileSync and Upscayl were mistakenly installed on my / partition.

The AI bot then walked me through a series of bash shell commands to remove the apps from /. Removing them? That was the easy part.

oldcpu@lenovo:~> sudo flatpak uninstall --system org.freefilesync.FreeFileSync
oldcpu@lenovo:~> sudo flatpak uninstall --system org.upscayl.Upscayl

… but re-installing in /home initially failed, as I had some issues (that the AI bot helped me solve). For example, I ran into this:

oldcpu@lenovo:~> flatpak install --user flathub org.freefilesync.FreeFileSync 
Looking for matches… 
Required runtime for org.freefilesync.FreeFileSync/x86_64/stable (runtime/org.gnome.Platform/x86_64/48) found in remote flathub 
Do you want to install it? [Y/n]: y 

org.freefilesync.FreeFileSync permissions: 
   ipc      network      pulseaudio      x11     file access [1]     dbus access [2] 

   [1] host, xdg-run/gvfs, xdg-run/gvfsd, ~/.var/app 
   [2] org.gtk.vfs.* 


       ID                                              Branch                Op            Remote             Download 
1. [✓] org.freedesktop.Platform.GL.default             24.08                 i             flathub            155.0 MB / 155.4 MB 
2. [✓] org.freedesktop.Platform.GL.default             24.08extra            i             flathub             23.7 MB / 155.4 MB 
3. [✓] org.freedesktop.Platform.VAAPI.Intel            24.08                 i             flathub             14.8 MB / 15.0 MB 
4. [✓] org.freedesktop.Platform.openh264               2.5.1                 i             flathub            913.7 kB / 971.4 kB 
5. [✓] org.freefilesync.FreeFileSync.Locale            stable                i             flathub              8.9 kB / 6.2 MB 
6. [✓] org.gnome.Platform.Locale                       48                    i             flathub             18.6 kB / 389.1 MB 
7. [✓] org.gtk.Gtk3theme.Breeze                        3.22                  u             flathub             70.8 kB / 192.6 kB 
8. [✓] org.gnome.Platform                              48                    i             flathub            328.4 MB / 397.1 MB 
9. [✗] org.freefilesync.FreeFileSync                   stable                i             flathub             39.9 MB / 40.2 MB 

Error: Permission denied 
error: Failed to install org.freefilesync.FreeFileSync: Permission denied 
oldcpu@lenovo:~>

Note at the end:

Error: Permission denied
error: Failed to install org.freefilesync.FreeFileSync: Permission denied

Old configuration files still present

From what I understand, the previous installation of FreeFileSync and Upscayl (which was likely done with sudo or root permissions) might have created some files or directories in my user oldcpu’s Flatpak folder (/home/oldcpu/.local/share/flatpak). Because the command was run as root, those files were created and owned by root, not by my user ‘oldcpu’.

When I removed FreeFileSync and Upscayl, the files in /home/oldcpu/.local/share/flatpak unfortunately remained, and as a regular user I could not remove them (as they had root permissions). I concede that having root-owned files in my /home/oldcpu user space surprised me, and hence I was skeptical (I tend to be skeptical of a LOT of what I read from AI bots). But those files purportedly prevented me from re-installing the same apps, FreeFileSync and Upscayl, into user space (i.e. into my /home for user oldcpu).

So the AI bot then provided me with some commands to clean up those files (and I confess I pretty much triple-checked those, and all AI bot commands, carefully before running any of them).
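I did not keep the bot’s exact commands, so the following is my own reconstruction of the general idea, not what the bot actually gave me: list what root owns under the user Flatpak directory, then hand ownership back (inspect the listing before running the chown):

```shell
# Find root-owned leftovers under the user Flatpak dir, then reclaim them.
# Sketch only: review the find output before running the chown!
fix_flatpak_ownership() {
  local dir="$HOME/.local/share/flatpak"
  find "$dir" ! -user "$USER" -ls    # step 1: what is not owned by me?
  sudo chown -R "$USER": "$dir"      # step 2: give it back to my user
}
```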

Successful install:

I was then able to install FreeFileSync and Upscayl in my user space (ie in /home/oldcpu):

oldcpu@lenovo:~> flatpak install --user flathub org.freefilesync.FreeFileSync 
oldcpu@lenovo:~> flatpak install --user flathub org.upscayl.Upscayl

and when I check now?

oldcpu@lenovo:~> sudo du -sh /var/lib/flatpak
[sudo] password for root: 
97M     /var/lib/flatpak
oldcpu@lenovo:~> du -sh ~/.local/share/flatpak
4.1G    /home/oldcpu/.local/share/flatpak
oldcpu@lenovo:~>

The end result? I recovered about 2.5 GB on my / partition, and now I have 6.7 GB free in / (which is still too little free space, BUT IMHO is a big improvement over the 4.2 GB I was down to before).

Looking back, it surprises me a bit that those 2 apps could use up 2.5 GB of space!!

That has me pondering my flatpak use in the future. Hopefully LEAP-16.0 with its ALP will help reduce the amount of file space that flatpaks consume.