TUMBLEWEED exploring digital audio

Hopefully you found this post by searching for openSUSE in conjunction with digital audio or the relevant software titles like jack/qjackctl, fluidsynth/qsynth, and a DAW like rosegarden, qtractor, lmms, or ardour. Because it’s based on an enterprise product, openSUSE has great technical features, but sometimes it seems that the best support (packaging and documentation) is for business-oriented software, while all the creative-suite distros are ubuntu variants. Well, I’m no musician, so lack of music software wouldn’t be a deal-breaker, but I am curious, and I like to sing, so I decided to explore the possibility. This isn’t a proper how-to from the perspective of an expert, just an invitation to explore along with me. Mostly I’m looking at info written in other contexts, collecting and summarizing it in one place. Hopefully that’ll be a useful service. In this post I’ll explain how to get a “back-end” up and running and play some synthesized sounds, then explore software that runs on top of that back-end in later weekly posts.

So the first thing I discovered is that openSUSE uses a low-latency kernel by default. You can prove it to yourself by running “uname -a” in a terminal and scanning the output for the word PREEMPT. (The more I learn about openSUSE, the more I like it.) However, if you check out the config file in /boot and scan for the line CONFIG_HZ, then you’ll see it equals 250Hz, and some older tutorials say that 1000Hz is necessary for digital audio. These days, the kernel has an HR (hi-res) timer instead, but you have to configure your DAW to use it. Short version, the default kernel should be good to go – we’ll see. (By the way, don’t try editing that config file to govern the behavior of the kernel, because unlike most config files, it’s just a historical record of how the kernel was compiled.)
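If you want to check both of those things from a terminal, here’s a minimal sketch; it assumes the running kernel’s config file is named /boot/config-$(uname -r), which is where openSUSE normally puts it:

uname -a | grep PREEMPT                   # prints the line if the kernel is preemptible
grep CONFIG_HZ= /boot/config-$(uname -r)  # shows the compiled-in system timer frequency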

So the basics: ALSA is the linux sound driver and already installed; JACK is a sound server but is sometimes mistakenly called a driver. Functionally it’s a virtual patchbay, and all the digital audio workstations use it. Qjackctl is the GUI for it, so install that and it’ll pick up JACK as a dependency. MIDI is all about synthesizing music – take a string of notes and apply a soundfont to change their aural quality just like regular fonts change the visual quality of a string of letters. Without a soundfont, a midi file sounds ugly. Install qsynth, the GUI for fluidsynth, and it’ll pick up a good soundfont as a dependency. These programs take a little set-up, which is the focus of today’s post, and to test that the setup works, you’ll need a virtual keyboard like vkeybd. A different one called vmpk is more feature-rich, but it’s from the experimental repo and crashed when I tried it.

For the digital audio software itself, I’m most anxious to try out rosegarden, because it can do something unique: Record some music, and it’ll try to write music notation for what you sang! We also have Audacity, LMMS, and Qtractor in the standard repo, and the highly-regarded Ardour in an experimental repo (so install at your own risk – you’ve been warned – don’t wake the baby because she’ll cry – I mean I know you installed Tumbleweed yourself so in a worst-case scenario you could do it again, but focus on what a soul-crushing hassle that’d be).

Hopefully this weekend I’ll have a post on one of the DAWs, but this one is to lay the groundwork: Install qjackctl, qsynth, and vkeybd. Don’t open them yet, though.

Pulseaudio kind of does the same job as JACK, but it’s more focused on the experience of the consumer than the creator. Now you can just kill pulse, and that may be what the pros do on dedicated hardware, but I’m an amateur and I use my laptop for other stuff too, so I want to keep pulse and have it play nice with JACK. Since the plugin’s there by default, the way to make that happen is to append these lines to /etc/pulse/default.pa:
load-module module-jack-sink
load-module module-jack-source

[Not sure how sophisticated the audience is here for things like appending to a file; if using KDE, then open File Manager - Super User (under system in the start menu), and then you can click on a configuration file in the root system and edit it with kate. Obviously, be careful while browsing system files with super powers.]

In order to use JACK, you’ll need to add yourself to the “audio” group. With openSUSE we can do that through the GUI in YaST: User and Group Management, open your user account, click the details tab, and check the box for any group you need - “audio” in this case. Then save.
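If you’d rather do that from a terminal than from YaST, this should be equivalent (log out and back in afterward so the new group membership takes effect):

sudo usermod -aG audio "$USER"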

Now that you’re a member of the audio group, you have to define what that means by appending these lines to /etc/security/limits.conf:
@audio - rtprio 95
@audio - memlock unlimited
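For the terminal-inclined, this one-liner appends both lines (run it only once, or you’ll end up with duplicates):

printf '%s\n' '@audio - rtprio 95' '@audio - memlock unlimited' | sudo tee -a /etc/security/limits.conf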

Honestly, I don’t know why there’s not a script to do this automatically when you install JACK – anyone using JACK needs these lines. The config file will explain what these terms mean. I’ve seen recommendations to use 99 instead of 95 for the real-time priority, but I figure there’s no real competition on my machine.

Okay, fire up qjackctl and click settings. Highest priority here is the box in the middle labeled interface. It refers to your hardware, and until you specify a soundcard, you won’t hear anything. Also, if you specify the wrong soundcard (like maybe the video card which also handles sound for HDMI output), you’ll hear nothing. If it’s not obvious which is which, you can try ‘em all quicker than you can find documentation online.
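That said, listing the ALSA playback devices first can narrow things down; the card numbers and names you see here are what qjackctl’s interface box is referring to:

aplay -l                  # lists every playback device ALSA knows about
cat /proc/asound/cards    # same idea, terser output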

The sample rate is 48000 by default but needs to be 44100 for compatibility with qsynth. I’ve heard from professional musicians who say the default frames per period of 1024 is fine, others who recommend going as low as possible, and others who think something around 128 is best. So my conclusion is that it depends on your hardware, and this is one time to literally play it by ear. I’m using 128, but I’m not sure the virtual keyboard is much of a test.
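For what it’s worth, those qjackctl settings boil down to a jackd command line something like the one below; hw:0 is just a placeholder for whichever card worked for you, and the flag values mirror the GUI fields:

jackd -R -d alsa -d hw:0 -r 44100 -p 128 -n 2

-R asks for realtime scheduling (that’s what the limits.conf lines permit), -r is the sample rate, -p the frames per period, and -n the number of periods.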

Now start qsynth and click its settings. Technically, these are the settings for engine 1, and you can define more. Why you’d need to, I don’t yet know. Set the midi “driver” to jack, and then on the soundfonts tab, open whatever you find in /usr/share/sounds/sf2. Now open vkeybd, open its View menu, and check key and controls. This keyboard doesn’t have all 88 piano keys, but the “key” control lets you map its 36 keys to the low, mid, or high range of a piano. The controls menu lets you change the “channel” it plays on qsynth, which you’ll have fun with in a minute if this all goes well.
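Under the hood, qsynth is just a GUI for fluidsynth, so if you ever want to test the synth without the GUI, a rough one-liner looks like this; the soundfont filename is an assumption (use whatever .sf2 you actually find in that directory), and the -m value should match whichever MIDI driver you picked in qsynth:

fluidsynth -a jack -m alsa_seq /usr/share/sounds/sf2/FluidR3_GM.sf2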

Back to qjackctl again, but this time click connect. On the Audio tab, connect qsynth to system. On the MIDI tab, connect system to qsynth. And on the ALSA tab, connect virtual keyboard to midi through. Also on qjackctl, click start, and on qsynth, click start or restart. Both of these programs can live in the systray, and since you’ll have them both open whenever you do anything with digital audio, it might suit your workflow best to put them there.

Finally, click some keys on the virtual keyboard - hear anything? Gosh I hope so, because I don’t know any trouble-shooting steps besides trying the different hardware as the “interface” in qjackctl settings. If you do hear something, it means your back-end is ready to go, and we can explore using it with more sophisticated programs than vkeybd.

Back on qsynth, click channels, then click the line for the first channel. Try something from the list, like violin or trombone. Then press keys on vkeybd. Remember, you can just tap, or hold to sustain a note, or even hold while you move the mouse across multiple keys. So, you can make some cool sounds, but you can’t really play this like a real keyboard. Fortunately, you can pick up an entry-level MIDI-capable keyboard for less than a hundred bucks, and I found a MIDI-to-USB adapter on Amazon for ten more. Way back in high school, I played trombone (poorly), but the instrument I use best is my voice, and I’m hoping that I can simply use a microphone as input and change the notes I sing to an instrumental part, but that’s for a later post.

One last thing. On qjackctl, click patchbay. On the left click Add, choose Audio/qsynth, and on the right click Add and choose Audio/system. Then repeat for MIDI and ALSA, basically recreating the work you did in connections. Then save with a name like virtual keyboard. Next click on settings, options tab, set Activate Patchbay Persistence and choose the file you just created. Next time you start these programs, the connections are already in place. When working with more complex programs that are crash-prone, patchbay persistence will probably save a lot of time.

If anyone else out there is curious about digital audio on opensuse, suggest a topic or post your own. Otherwise, I plan to try rosegarden over the weekend, because I’ve come up with a couple of tunes. I know what they sound like, obviously, but supposedly Rosegarden can construct the notation to show me what they look like, which sounds pretty cool. ‘Til next time,

GEF

PS: Stop reading if you’re an old hand with openSUSE, but if you only found this post because of the reference to audio and know little about the distro, ignore the shallow reviews! You know, the ones that waste a paragraph on whether they like the default wallpaper, as if that’s not the easiest thing to change. Those reviews all say that openSUSE has a poor “out of the box” experience, and that’s because it doesn’t ship what it shouldn’t. That takes 3 steps to rectify, and in the midst of all the other software you’re downloading and settings you’re playing with to get your perfect experience, it’s hardly noticeable. I’ll come back to that in a moment.

The reasons to try openSUSE are its “strong technical features”, of which the strongest, in my opinion, is snapper. It takes a snapshot every half hour (and saves it for 4 hours), and it takes a snapshot every time you change the system (as by installing software) and saves 50 changes. The cleanup algorithms are key; that’s what keeps the snapshots from consuming your whole disk like an infestation of kudzu grass. (Remember to specify a cleanup algorithm if you take a manual snapshot.) Notice how it uses both time- and count-based algorithms; if you leave your computer in the bag for a month, it should still have 50 snapshots. If you hose up your computer, 49 of the 50 should be good, eh? And the bootloader lets you review the snapshots by timestamp, pick one from before you messed up, and then if it boots properly, roll back. Heck, it’s easier to do it than to explain it, and it has saved me lots of time.

Another contender for strongest feature is the build service, which tests newly released software like the gorilla tested Samsonite luggage. It’s automated but full of realistic user scenarios, hundreds of them on a variety of hardware. Thanks to the build service, Tumbleweed manages to serve up the latest software within a couple days of official release, and yet it’s more stable than most other distros’ point releases. When I refer to official versus experimental repos above, the official ones have gone through this testing; the “experimental” ones aren’t doing anything wild, they’re just normal compiled and packaged programs that haven’t been through the build service tests.

In spite of the stupid name, YaST is a darn good graphical control center. Yes, desktop environments are starting to have sophisticated settings exposure, but YaST is better for about half the things it does, and it exposes options that are normally available only in config files or obscure command line switches. Since openSUSE is desktop-agnostic and has excellent implementations of several, a common interface for critical system tasks helps the community support individual users.

Another strong technical feature is the installer, not the prettiest but the most thoughtful, especially the partitioner (from YaST). It makes good recommendations even in complex multi-boot scenarios, and the expert mode offers fine control. If you’re of a mind to try openSUSE, let me give you some advice for the install:

  1. Choose “Guided Setup” for partitioning, and then set LVM as a parameter. You may not need it now, but LVM offers flexibility down the line, because the partitions can be changed and new ones added.

  2. openSUSE will propose separate root and userspace partitions, which is really helpful if you ever need to reinstall. By default, it proposes BTRFS for root because that supports snapper, and XFS for userspace. XFS partitions can’t be shrunk, which is why the default proposal is small – you can always expand later, right? Or just change from XFS to Ext4, or use BTRFS here too in case at some point you want to start using snapper to back up your documents or music projects.

  3. openSUSE will suggest 40GB for the root partition, and remember that snapshots take half of that. Normal users may not need more, but if you’re reading this post, you’re planning to play around with all kinds of creative suites, so you’ll probably want to bump this up.

On those gripes about the “out of box” experience that shallow reviews mention: Search the internet for these three things: “opensuse community” for a 1-click installer for multimedia codecs, “opensuse sleeping beauty” for a 1-click installer for better on-screen font-rendering, and “microsoft web fonts” for fonts like Times New Roman that Microsoft licensed for public use (if you want to be able to read Word documents that use those fonts, anyway). Note that 1-click install only works with Firefox, or with Chrome(ium) using an extension. [If you want to save yourself a lot of clicks on the “1-click” install of multimedia codecs, open YaST’s software management, Configure Repositories, Add community repo Packman and save, select Option tab to Allow Vendor Change, and then on Package tab, for All Packages, Update if newer version available. All the dependencies for the multimedia codecs live on this Packman repo, and you’ll have to approve vendor changes for them individually if you don’t follow this procedure first.]

So, I’ve got lots of reasons to like openSUSE, even if it doesn’t have the reputation for digital audio, but from what I’ve seen, it’s got a kernel that can do the job, a working backend of pulseaudio, JACK, and qsynth, and a good if not wide selection of popular DAW software for me to test in the coming weeks. In the past I’ve found consistently that openSUSE is better than its rep – that could be its motto – and I suspect I’ll find the same to hold true, at least for the DAWs tested by the build service. See you then!


Great Post!
Now, here’s a suggestion… Posts to this “General Chit-Chat forum” can get lost in an ocean of varied topics…

So,
For something like this, if you’d like your experiences to become guideposts to people who follow you, then create your own Wiki!
I’ve written a quickstart for getting you started in my signature below…

Regarding your installation of Jack and how it takes a few steps to make it play nicely with Pulse Audio: if each can stand alone on its own, then we have a utility called “update-alternatives” to switch between replaceable components installed side by side. You’ll often see update-alternatives used for switching between different java installations (e.g. openjdk and Oracle Java, or different versions of java). I wrote a simple Wiki page on how to configure a simple alternatives setup to switch between gcc versions:

https://en.opensuse.org/User:Tsu2/gcc_update-alternatives

If you create your Wiki (which can be in any style you wish, even a bare micro-blog)
I wouldn’t mind if you even stole ideas from what I created. Like any other openSUSE Wiki, the code behind is easy to access and public. If you ever make a mistake or want to undo something, everything is versioned, so it only takes a click to roll back to anything before.

The main index of my Wiki
https://en.opensuse.org/User:Tsu2

Great to see you share!
If you create a Wiki describing what you just posted, I’d be glad to include a link to yours in mine.

TSU

In addition to the software I mentioned above, the experimental repos have muse and its spinoff musescore. I can install muse, but it won’t launch, and musescore seems to work fine, but if you don’t want to step outside the standard repos, then it’s also available online here: https://www.rollapp.com/app/musescore. I’ll check it out after Rosegarden.

TSU, thanks for the feedback. In my first post, I wondered about whether to give step-by-step or general instructions, and I had a very long postscript for new users about the implicit question: If I’m gonna talk about certain software on openSUSE, I have to say why I want to use openSUSE in the first place. Hypertext would make tangential topics or extra detail available without interrupting the flow of the main story. Now I promised to look at Rosegarden this weekend, but if things don’t get too crazy at work, I’ll look into setting up a wiki next week. Thanks for linking the resources. I think you’re right; digital audio is a broad enough topic to warrant a wiki, and there needs to be opensuse-specific info available. My first look into the matter would have steered me to another distro entirely, because I was looking for a real-time kernel without realizing that the default kernel had the necessary features. I want to help a community that’s helped me, and since I lack the skills to write software, maybe I can write guides to software, but we’ll see how long my energy lasts. One app a week for a few weeks I can manage, but for a complete online guide to digital audio on opensuse, I think it’d need to be more than a solo show, especially since I’m no musician!

GEF

PS: I wrote above on how to get JACK to play nice with Pulse. Now I looked at lots of websites to get going with qsynth and JACK, and at this point I couldn’t give proper attribution to all of them, but I do remember that the JACK/Pulse solution came from this very forum. Thanks, Kitman!

While I’m looking at Rosegarden, you can look at Qtractor in the meantime, via a series of recent youtube tutorials starting with this one:

https://www.youtube.com/watch?v=jpwGsNMUmQ8

This guy’s presentation style is a little rambling, but he gets there. You can download qtractor from the standard repo. I don’t know if this youtube presenter will address timers because I haven’t gotten all the way through yet: When you open qtractor, click on the View menu, Options entry, MIDI tab. Under playback, you’ve got a drop-down for Queue Timer. That’s where you choose the hi-res timer at a billion Hertz. Again, the openSUSE kernel is only compiled with a system timer of 250Hz, out of a max possible of 1000Hz, but both are so much lower than a billion that even if it were compiled with a faster system timer, it’d still make sense to use the HR timer instead.

There’s nothing like finding a place to start.

When you’re looking at some ginormous project in the beginning, it always looks insurmountable.
Even my little Wiki wasn’t built in a day or a week or a year.
I seem to remember that its first iteration was based on a bunch of text notes I’d been saving on my machine, accumulating for a few years already. At the time, I thought that those notes might benefit others, so the first draft of maybe 6 articles was literally a straight “copy and paste.” Within a few days, those were fixed up a bit and from there, pieces were added one by one when the whim hit me.

Could be the same for you.
Maybe condense your existing body of work into what could be a 45 minute presentation at a conference (Preparing talks for a conference is a good exercise in making your content fit within limits).
Then, post it.
Maybe even publicize it.
And, maybe don’t worry too much about how finished it is. I’ve re-worked several of my articles and even junked and completely revised a few articles over time.
And, who knows…
Maybe someone might happen on your Wiki and help you out, although I’ve never had anyone else touch my Wiki despite welcoming and inviting people to mark up what I write (again, if I wanted to, there is the ultimate solution of simply rolling back and erasing a contribution).

Have fun however you get started, but just don’t fall into the trap of believing you have to literally complete the entire topic before you can write about it… Instead, break your project down into digestible, bite-size pieces you can post without straining yourself each time. People might even enjoy reading about your evolving journey of discovery.

TSU

My wife’s mom was a piano teacher, and the last thing she wanted to do after work was more work, which is why none of her kids learned to play. My wife regrets that, so a couple birthdays ago I bought her a well-reviewed if entry-level keyboard. Naturally, it only kept her attention for a few weeks, so she doesn’t mind if I borrow it now, and the obstacles I faced may be similar to yours even if my gear is different.

The keyboard is effectively two devices in one: A MIDI controller, directly connected to a MIDI synthesizer. Mine has a USB port, saving me the cost of an adapter, but when I plugged it into my computer, nothing happened. Turns out, I had to break the connection to the internal synthesizer for the USB port to activate, and then I had to reboot my computer before it could see the piano. (By the way, a quick way to check is to type “aconnect -i” in terminal.)

So, this is the part where I feel like an idiot. After I connected the real keyboard to qsynth using jack, the same way I did the virtual keyboard as explained above, I started playing piano keys and no sound came out. However, a little dot on the qsynth interface turned green every time I pressed a key, so it was connected. Well, my wife’s electronic piano is designed to simulate a real piano as closely as possible, meaning that the keys are pressure-sensitive, and the harder you strike them, the louder the note. When I plugged it into my computer, I started thinking of it as a computer keyboard, depressing each key with no more pressure than I’d use on the spacebar – it actually produced a tone, but a very faint one, and striking the keys with vigor produced the sound I expected.

One test I tried was to route the output into the synthesizer. So I’m still playing the piano keys, and the output is still coming from the unit in one of its few built-in voices, but instead of going through its direct internal connection, it’s going through my computer. I could hear no difference. To me, it seemed as though I heard a sound the very moment I struck a key. If I understand correctly, that’s good evidence that the low-latency functions of the kernel are working as intended.

GEF

I imagined I’d dive into a different audio app each weekend and report here with a page’s worth of tips and tricks to get them working and review their performance on openSUSE. They’re too complex for that, so the next post here may not be for a month. Also, sometimes they need to work in combination. Rosegarden, for example, needs a mixer, and I’m using Non mixer, which is a component of Non studio (along with session manager, sequencer, and timeline, all from the standard repo).

For the specific question I wanted to answer about Rosegarden, “Can I generate musical notation from what I sing?” the answer is technically yes, but it’s useless.

First of all, I had to record my voice as an audio file (I used audacity but there are multiple options, like audio-recorder). Then I had to convert the audio file to midi. Audacity has a menu option for that, but it’s greyed out, and if there’s a linux program that’ll do this conversion, I haven’t found it yet, but there’s a website service I’ve now pinned: https://www.ofoct.com/audio-converter/convert-wav-or-mp3-ogg-aac-wma-to-midi.html.

So, armed with a midi version of myself singing do-re-mi-fa-so-la-ti-do, I imported it into rosegarden, clicked on the resulting segment, and chose the option to open in notation editor. Now to my ear, these notes sounded reasonably close to a B♭ scale, but Rosegarden gave a key signature of C♯. I could recognize 8 columns of notes, with 1 to 6 notes per column, but I couldn’t find a set of 1 note per column which together formed a scale in any key! Well, perhaps my singing sucks, but I think the reality is that it’s just really hard for a computer to transcribe vocals. I tried a chorus of a song I wrote (except that I’ve never actually “written” it - that’s the motive for this exercise). The generated notation was a complete mess. A couple of commercial music transcription programs work on linux, AnthemScore and XSC, so they’ll go on the list, but not before I learn more about digital audio in general, so that I’m equipped to make full use of the free trial periods.

If any reader has info on these two programs, or knows a way to convert audio to midi natively, please share.

How to convert an audio file to MIDI in openSUSE Linux, for transcription in Rosegarden.

Install Sonic Visualizer from standard repo. Also install vamp-aubio-plugins just to pick up the dependencies, but they don’t land where sonic visualizer can find them.

(By the way, Audacity can find them, but it can’t make a true midi file - it can make an sds file but that won’t work for transcription.)

Open Sonic Visualizer → transform menu → find, click hyperlink for vamp, search the website for aubio, and download what you find.

Create a /vamp sub-directory in your home directory, and extract the downloaded archive there. (The files have to be in $HOME/vamp, not some further sub-directory thereof.)
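From a terminal, the directory setup and extraction might look like this; the archive name below is only an example (use whatever filename you actually downloaded), and --strip-components keeps the plugin files from landing in a nested sub-directory:

mkdir -p "$HOME/vamp"
tar xf ~/Downloads/vamp-aubio-plugins-linux64.tar.gz -C "$HOME/vamp" --strip-components=1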

In Sonic Visualizer, open an audio file. Now the transform menu should have more options; choose category → notes.

After the plugin annotates the audio file, from file menu choose export annotation layer, and save as a midi.

Now open Rosegarden, open the newly minted midi file, right-click on its segment in the rosegarden interface, and choose open in notation editor.

When I tried this with a simple scale, I got 8 notes. Not quite the ones I was expecting, but way less messy than my first try with a file converted by the online tool mentioned above. I think it’s worth trying again in a truly quiet room, with a microphone better than what’s built-in to my laptop.

GEF

I wrote, “but they don’t land where sonic visualizer can find them,” and “Create a /vamp sub-directory in your home directory” and more.

But there’s an easier way. The vamp plugins (including aubio and any others you install) land in /usr/lib64/vamp. Make a symlink to that directory either in your home directory or, with superuser permission, in /usr/local/lib. These are the two places where Sonic Visualizer will look.
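In concrete terms, either of these should do it (the first assumes you’ve removed or renamed any ~/vamp directory left over from the earlier method, so the link doesn’t land inside it):

ln -s /usr/lib64/vamp "$HOME/vamp"
sudo ln -s /usr/lib64/vamp /usr/local/lib/vamp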

-GEF

PS: I gather that Sonic Visualizer is a cool tool for all kinds of analysis. For now, I’m just using it to generate midi files for transcription into musical notation, and that’s pretty much all I could accomplish in one weekend, but I hope to revisit this software later in the thread - probably much later, at this rate. Actually, by then it may be a wiki, following TSU’s advice.

I wrote in post 1:
>append these lines to /etc/pulse/default.pa:
load-module module-jack-sink
load-module module-jack-source

Seems it matters where you put them; appending to end of file won’t work. Instead place them just after the line that says “#load-module module-pipe-sink”.
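If you’d rather script that edit than hunt for the line by hand, something like this should work with GNU sed (it assumes the commented pipe-sink line appears exactly once, and it will add duplicates if you run it twice):

sudo sed -i '/#load-module module-pipe-sink/a load-module module-jack-sink\nload-module module-jack-source' /etc/pulse/default.pa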

I had to reinstall and just went through my own guide above. It worked, but this time, the volume on my main (built-in) speaker was nerfed - its base level was set quite low, and that was all that 100% on the pulse audio volume control could give me (though it could still go even lower).

The solution turned out to be running alsamixer in a terminal. The interface is ASCII art, but use F6 (if I recall) to check your potential sound cards until you find the one where the master volume is set below half, then use the up arrow to raise it. Just set it to max, then you can use the systray applet or pavucontrol to adjust it downward. -GEF

And having made the change with alsamixer, I had to run “sudo alsactl store” to make the setting persistent on the next startup. I’d edit the post above, but I didn’t figure this out quickly enough. TSU, you’re right, the info in this thread will go in a wiki when I have enough to write a full, coherent how-to.

Sorry I’ve let this thread languish. That’s partly because I’ve been trying to figure out photogrammetry as well as digital audio, and partly because I realized I needed a proper microphone. Well, I got one, in the 20-dollar range from amazon. Now I gather a pro would use a mike that plugs into a controller that in turn plugs into a USB port, but mine plugs into USB directly. In volume control, I turned down jack source to zero, and the recording volume of my USB mike was 40%. I sang a scale to Audacity, and on playback it seemed my voice was a bit louder than in life. Once processed by Sonic Visualizer and transcribed by Rosegarden, the results were FAR better than with the laptop’s built-in mike. Evidently I need to work on tempo, because I saw a mix of eighth notes, rests, and dotted quarters, and in some cases two eighth notes (not tied) for what I thought had been a continuous tone. I’d have guessed I sang in 3/4 time, but Rosegarden’s time sig was 12/8.

However, the pitch progression was clear, each note on the line or space following the previous, and only a couple of extraneous notes (way above the scale, possibly from the air conditioner). So, I may need to work on tempo, but at least I can see what note I’m singing, and if the midi transcription is that clean, I should be able to use the track made from my voice to play a synthesized trumpet or any other instrument. However, that’s either not working or not intuitive, so it’ll have to wait until I read the destructions.

GEF

Obviously, I never got very far exploring digital audio, but when I had to reinstall my system, at least I wanted to retrace my steps this far.

It didn’t work. The latest kernel isn’t low-latency, so Tumbleweed may be a poor choice for audio after all. JACK’s flaky on start and qsynth keeps crashing; not sure if that’s the reason.

However, you can still do the trick of converting song to notation. Turns out that MuseScore will do the same trick with a midi file. However, if you want to use Sonic Visualizer to extract midi from a wave file, you can’t use the Tumbleweed version as of this writing (3.2.something), though the version from Leap 15.1 works (3.0.3). Just be sure to opt out of subscribing to the repo! Musescore seems a little more certain of its choices than Rosegarden, yielding cleaner results, but still inaccurate: I sing do-re-mi-fa-so-la-ti, Sonic Visualizer and Musescore yield do-re-mi-fa-mi-re-do, or something like it. However, if I can train myself to sing purer notes, the musescore solution doesn’t require jack. (Neither does vmpk, which didn’t work with tumbleweed when I started this thread but does now.)

Thanks for the excellent posts. It’s been some time since I read something as interesting, computer-wise.

Best regards

Once more, from the top. Slight changes from the original, current for tumbleweed as of the date of this post but also tested on leap 15.1 (KDE live).

  1. openSUSE no longer has a low-latency kernel, but the JACK site says you don’t need it. With modern processors, latency should be low regardless.

  2. Edit /etc/pulse/default.pa, after line ending pipe-sink, add:
    load-module module-jack-sink
    load-module module-jack-source

  3. Edit /etc/security/limits.conf, add:
    @audio - rtprio 95
    @audio - memlock unlimited

  4. Add yourself to the audio group, and wonder why there’s no script to do this automatically when you install jack.

  5. Install pulseaudio-module-jack, qjackctl, qsynth, qtractor, and vkeybd (or instead of vkeybd, use 1-click install for vmpk). See the zypper one-liner after this list if you prefer the terminal.

OPTIONAL) Just my style here. Edit Application menu entry for qtractor, change command to qjackctl | qsynth | qtractor. Then hide separate menu entries for qjackctl and qsynth. Then, when configuring those two programs, set them to start minimized to systray (and set the tray to hide 'em too).

  1. Run qjackctl. In settings, select device (probably Generic, but just try 'em all as needed). Leave realtime checked. Set sample rate to 44100 and MIDI driver to seq. For tumbleweed, on Misc tab, un-check replace connections with graph (Leap has older version with connection view only). Also, this is where you enable systray and start minimized. Close setup.

  2. Run qsynth. In options, enable systray start. In setup, MIDI tab set driver to alsa_seq, Audio tab set driver to jack, soundfonts tab open /usr/share/sounds/sf2/ and select what you find in that directory.

  3. Run vkeybd (or vmpk). Back to qjackctl, open Connect. On Audio tab, you should see Jack Sink and Source. If not, reboot until you do. These should be cross-connected to system, and qsynth should connect to system as well. MIDI tab ignore, on ALSA tab connect vkeybd (or vmpk) to fluidsynth. Use mouse on keyboard, if you hear sound then jack and qsynth are working. (Note that volume control is the “gain” knob on qsynth interface.) Try some different soundfonts just for fun, and then no reason not to remove vkeybd from your system at this point.

  4. Run qtractor. (If you did the optional step, this will launch a second instance of qsynth, sorry). Go to View => Options => MIDI tab => Queue Timer and select HR Timer (Billion Hz), because the nanosecond clock beats the kernel’s 250Hz system timer (roughly 4ms between clock ticks). And now your system’s set up, in theory. You can get to jack’s connections from within qtractor as you add midi devices.
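By the way, if you prefer the terminal for the install step above, everything lands in one go:

    sudo zypper install pulseaudio-module-jack qjackctl qsynth qtractor vkeybd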

Tell me how it goes. -GEF
