Hybrid Graphics and vgaswitcheroo

Hello everyone,

I have a HP DV6 with an Intel/AMD hybrid graphics card.

00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Whistler [Radeon HD 6730M/6770M/7690M XT] [1002:6740]

After installing Leap 42.2 I was surprised to see that glxgears is displayed, which makes me think the radeon card is now functional.

However, I want to switch the laptop to the radeon card so I can use its GPU for applications like Second Life and blender3d. The laptop does not have a BIOS option to disable the Intel, so I experimented with vgaswitcheroo.

But following other threads on using vgaswitcheroo, the radeon card does not get used - the Intel stays on all the time.

linux-4j2s:/home/chris # cat /sys/kernel/debug/vgaswitcheroo/switch
0:DIS: :Pwr:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
linux-4j2s:/home/chris # echo DIS > /sys/kernel/debug/vgaswitcheroo/switch
linux-4j2s:/home/chris # cat /sys/kernel/debug/vgaswitcheroo/switch
0:DIS: :Pwr:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0

I have googled to no avail.

Are there any more settings I may have overlooked to get the radeon working as the default graphics card OR is it not 100% supported in 42.2 and I am wasting my time?

Thank you,
Chris.

glxgears is just a simple OpenGL utility; the Intel adapter is perfectly capable of running it. You can see which adapter is driving it a couple of ways, including:

glxgears -info
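
For example, the GL_RENDERER line that glxgears -info (or glxinfo) prints names the driver actually in use. A minimal sketch of classifying such a line (the sample string is assumed, taken from output later in this thread):

```shell
# Classify a GL_RENDERER line to see which adapter rendered it.
# Sample line as printed by Mesa for the radeon card in this thread.
renderer='GL_RENDERER   = Gallium 0.4 on AMD TURKS (DRM 2.43.0, LLVM 3.8.0)'
case "$renderer" in
  *AMD*)   echo "discrete radeon is rendering" ;;
  *Intel*) echo "integrated intel is rendering" ;;
  *)       echo "unknown renderer" ;;
esac
```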

However, I want to switch the laptop to the radeon card so I can use its GPU for applications like Second Life and blender3d. The laptop does not have a BIOS option to disable the Intel, so I experimented with vgaswitcheroo.

But following other threads on using vgaswitcheroo, the radeon card does not get used - the Intel stays on all the time.

linux-4j2s:/home/chris # cat /sys/kernel/debug/vgaswitcheroo/switch
0:DIS: :Pwr:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
linux-4j2s:/home/chris # echo DIS > /sys/kernel/debug/vgaswitcheroo/switch
linux-4j2s:/home/chris # cat /sys/kernel/debug/vgaswitcheroo/switch
0:DIS: :Pwr:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0

I have googled to no avail.

Are there any more settings I may have overlooked to get the radeon working as the default graphics card OR is it not 100% supported in 42.2 and I am wasting my time?
[Note: you would have wanted to run the above commands from the console (with no X Windows/GUI session running), since, if successful, they would have abruptly terminated your X/desktop session. There are other commands and steps you would have needed had you done it from a terminal running within your Xorg desktop session. But, in any regard, and more to the point] …

vga_switcheroo is really intended for laptops with hybrid graphics that utilise a mux. Your laptop is very likely muxless, in which case the radeon device is unlikely to be (directly) connected to the laptop’s display panel. In other words, you can only use it indirectly, for rendering.
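
For reference, each line of the vgaswitcheroo switch file quoted above breaks down as slot:class:active-marker:power-state:PCI address (DIS = discrete, IGD = integrated; the '+' marks the card driving the display). A small parsing sketch over a sample line from the output above:

```shell
# Split one line of /sys/kernel/debug/vgaswitcheroo/switch into fields.
# Field 2 is the adapter class, field 3 holds the '+' active marker,
# field 4 is the power state; the rest is the PCI address.
line='1:IGD:+:Pwr:0000:00:02.0'
class=$(echo "$line" | cut -d: -f2)      # IGD
active=$(echo "$line" | cut -d: -f3)     # +
power=$(echo "$line" | cut -d: -f4)      # Pwr
echo "$class is $power (active marker: $active)"
```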

So what you want to use is Prime. Have a read through the following for usage information:

Hello Tyler_K,

Thank you for the reply.

Simple things first :slight_smile:

I booted the laptop in runlevel 3 and played with the vgaswitcheroo commands (ON and DDIS). That resulted in no change at all when I restarted the X server.

The Prime notes were quite interesting. The xf86-video-ati driver was already installed by default and I never had the AMD Catalyst drivers installed (well no version available anyway).

Running the various xrandr commands I got -

chris@linux-4j2s:~> xrandr --listproviders
Providers: number : 3
Provider 0: id: 0x7d cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 3 outputs: 5 associated providers: 2 name:Intel
Provider 1: id: 0x55 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 6 outputs: 0 associated providers: 2 name:TURKS @ pci:0000:01:00.0
Provider 2: id: 0x55 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 6 outputs: 0 associated providers: 2 name:TURKS @ pci:0000:01:00.0

chris@linux-4j2s:~> xrandr --setprovideroffloadsink 1 0
chris@linux-4j2s:~> 

chris@linux-4j2s:~> DRI_PRIME=1 glxgears -info
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
GL_RENDERER   = Gallium 0.4 on AMD TURKS (DRM 2.43.0, LLVM 3.8.0)
GL_VERSION    = 3.0 Mesa 11.2.2
GL_VENDOR     = X.Org
(and lots of code displayed)

This tells me the AMD graphics is being used, but the frame rate is ridiculously high AND the glxgears window is all black, i.e. no running gears displayed. I then ran it under compton and got the running gears displayed. So far I am getting happy.

5860 frames in 5.0 seconds = 1171.961 FPS
5855 frames in 5.0 seconds = 1170.925 FPS


I ran blender3D (with compton). It worked but ran very sluggishly, and no “Compute Device” was found - I need to investigate that further.

chris@linux-4j2s:~> DRI_PRIME=1 blender/blender-2.78a-linux-glibc211-x86_64/./blender
Read new prefs: /home/chris/.config/blender/2.78/config/userpref.blend
found bundled python: /home/chris/blender/blender-2.78a-linux-glibc211-x86_64/2.78/python

I ran Second Life which surprisingly acknowledged the change from Intel to AMD and displayed the inworld graphics but graphics preferences like the Advanced Lighting Model could not be activated even though the Intel graphics could.

DRI_PRIME=1 Phoenix_FirestormOS-Releasex64_x86_64_5.0.1.52150/./firestorm

Bottom line - The AMD graphics is active but not performing too well - maybe it can’t with what I want to do. But this far further than I have been before with getting the AMD to work.

Thank you so very much.

That’s the package that contains the radeon xorg driver (radeon_drv.so). It’s a simplification of things, but there are 3 major components in the driver stack for you (when running under an X11/X Windows session):

  • kernel space: the radeon DRM kernel driver (radeon.ko)
  • user space:
    • the xorg driver (aka DDX), which is the aforementioned radeon_drv.so
    • the 3D driver (the OpenGL driver Mesa provides, of the gallium variety), which is r600_dri.so

Running the various xrandr commands I got -

chris@linux-4j2s:~> xrandr --listproviders
Providers: number : 3

Note that there really should only be 2 providers listed; however, there was a logic error in the code that results, in this case, in your radeon adapter being double counted. This has been amended and you should see the correct reporting behaviour in the future when you’re using v1.19 of the X server (either through an update or the next Leap release).

chris@linux-4j2s:~> xrandr --setprovideroffloadsink 1 0
chris@linux-4j2s:~> 

Yep, it executed correctly … on its own, it doesn’t return any output … you’d have to run the command with the --verbose option to see anything.

chris@linux-4j2s:~> DRI_PRIME=1 glxgears -info
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
GL_RENDERER   = Gallium 0.4 on AMD TURKS (DRM 2.43.0, LLVM 3.8.0)
GL_VERSION    = 3.0 Mesa 11.2.2
GL_VENDOR     = X.Org
(and lots of code displayed)

Looks good. The “lots of displayed code” is (as can be read) essentially a verbose dump of all the GL extensions supported/exposed by your GL driver.

This tells me the AMD graphics is being used
yeppers.

but the frame rate is ridiculously high AND the glxgears window is all black ie no running gears displayed.
Is this why you said earlier “I was surprised to see that glxgears is displayed which makes me think the radeon card is now functional.”? If so, then something else is up (and I’d think it would have to do with the desktop/WM … speaking of which):

I then ran it under compton and got the running gears displayed.
What DE are you using, and what WM were you using before?

So far I am getting happy.

5860 frames in 5.0 seconds = 1171.961 FPS
5855 frames in 5.0 seconds = 1170.925 FPS

I ran blender3D (with compton). It worked but ran very sluggishly, and no “Compute Device” was found - I need to investigate that further.

chris@linux-4j2s:~> DRI_PRIME=1 blender/blender-2.78a-linux-glibc211-x86_64/./blender
Read new prefs: /home/chris/.config/blender/2.78/config/userpref.blend
found bundled python: /home/chris/blender/blender-2.78a-linux-glibc211-x86_64/2.78/python

I ran Second Life which surprisingly acknowledged the change from Intel to AMD and displayed the inworld graphics but graphics preferences like the Advanced Lighting Model could not be activated even though the Intel graphics could.

DRI_PRIME=1 Phoenix_FirestormOS-Releasex64_x86_64_5.0.1.52150/./firestorm

Bottom line - The AMD graphics is active but not performing too well - maybe it can’t with what I want to do.

But this far further than I have been before with getting the AMD to work.

Thank you so very much.
You can run

LIBGL_DEBUG=verbose glxinfo -B 

to get a better look under the hood (so to speak).

You can prepend LIBGL_DEBUG=verbose before you start any GL app to get a bit more output and an indication that it’s running the intended driver. So, for the most basic example, you could:

DRI_PRIME=1 LIBGL_DEBUG=verbose glxgears -info
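
As a side note, both DRI_PRIME and LIBGL_DEBUG are ordinary environment variables read by Mesa’s libGL, so the VAR=value command prefix scopes them to that single invocation; a trivial demonstration:

```shell
# The prefix form sets the variable only for the child process;
# the parent shell never sees it.
DRI_PRIME=1 sh -c 'echo "child sees DRI_PRIME=$DRI_PRIME"'
echo "parent sees DRI_PRIME=${DRI_PRIME:-unset}"
```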

Hi Tyler_K

Note that there really should only be 2 providers listed, however, there was a logic error in the code that results, in this case, your radeon adapter being double counted. This has been amended and you should see the correct reporting behaviour in the future when you’re using v1.19 of the X Server (either through update or next Leap release).

I assume the doubling of the output is just a cosmetic thing that will be fixed in 7.6_1.19? I currently have 7.6_1.18.

What DE are you using, and what WM were you using before?

I currently use LXDE (Plasma 5 is ‘flakey’ on this laptop) using Openbox in all my testing that I have described here. I have a few KDE apps like Gwenview, Konsole and Dolphin installed so I guess there are bits of KDE loaded as well.

To get a better look under the hood (so to speak).

You can prepend LIBGL_DEBUG=verbose before you start any GL app to get a bit more output and an indication that it’s running the intended driver. So, for the most basic example, you could:

DRI_PRIME=1 LIBGL_DEBUG=verbose glxgears -info

That command returned the following which means it is running the right r600_dri.

chris@linux-4j2s:~> DRI_PRIME=1 LIBGL_DEBUG=verbose glxgears -info
libGL: OpenDriver: trying /usr/lib64/dri/tls/r600_dri.so
libGL: OpenDriver: trying /usr/lib64/dri/r600_dri.so
libGL: Can't open configuration file /home/chris/.drirc: No such file or directory.
libGL: Can't open configuration file /home/chris/.drirc: No such file or directory.
libGL: Using DRI2 for screen 0
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
GL_RENDERER   = Gallium 0.4 on AMD TURKS (DRM 2.43.0, LLVM 3.8.0)
GL_VERSION    = 3.0 Mesa 11.2.2
GL_VENDOR     = X.Org
GL_EXTENSIONS = GL_ARB_multisample ....

After I got things ‘working’ this morning I further checked up on blender3D only to discover that the architecture of the AMD is NOT supported. :frowning:
Oh well, at least I know why now. Time to look at building a box with an NVidia GTX900 series card (or two ;) )

BTW - do I need to rename this title as “Solved…”?

Regards,
Chris.

Yeah, from a functional perspective, it’s never impinged on anything AFAIK. See here: https://lists.freedesktop.org/archives/xorg-devel/2016-May/049709.html

Yes, in v1.19 of the X server … couldn’t tell you if the package name will be that specifically (7.6_1.19) … I’m not sure why there is a 7.6 prefix … it might be a reference to v7.6 of the X/X11/X Window System … only problem with that is that X11 is currently on release version 7.7 :\ (and v1.18.x falls under that umbrella too) … so I’m not clear about that openSUSE naming … perhaps the package spec file provides some clue … shrugs …:expressionless:

That said, I don’t know if Leap will be upgrading to that anyways (though you could always add the xorg&Mesa repo and update your system to the more recent versions yourself)

I currently use LXDE (Plasma 5 is ‘flakey’ on this laptop) using Openbox in all my testing that I have described here. I have a few KDE apps like Gwenview, Konsole and Dolphin installed so I guess there are bits of KDE loaded as well.
Well, I tested under Openbox, and it (OpenGL) appears to work fine with my config (running under a TW environment). So, offhand, I’m not certain what is up in your (Leap 42.2 environment) situation

… unless it’s due to sync between your Intel and AMD adapters (I was using AMD + AMD) … in fact, there are some nice improvements in the X server v1.19 release that touch upon this, as well as in recent kernel-side stuff. Your system, for example, would benefit from these discussed improvements:

That command returned the following which means it is running the right r600_dri.

libGL: Using DRI2 for screen 0

You could also switch to glamor acceleration and you’d get DRI3 …

After I got things ‘working’ this morning I further checked up on blender3D only to discover that the architecture of the AMD is NOT supported. :frowning:
I don’t use blender, but I’m rather surprised by that statement. Perhaps you have misunderstood something … if anything, I’d suspect that it’d be in relation to OpenCL support

Some further thoughts:

  1. What output do you have for
dmesg | grep -i dpm

What I’m getting at here is that:

we’d probably expect to see the power state of the AMD adapter reporting DynOff or DynPwr if the runtime dynamic power management is working … see: http://www.phoronix.com/scan.php?page=news_item&px=MTQ2NjI

  2. It might be beneficial if you post the contents of your Xorg log to susepaste, and then supply a link to it. Perhaps there is a clue provided in its output of something not quite right.

  3. Lastly, given you’re running slightly older incarnations of the graphics environment and driver stacks (than me), you may be able to return a favour and do some testing for me: could you do some VT switching (between your desktop session and the console (the various VTs) and then back again) and report (in the thread I’ll link to in a second) if you experience any crash of X … see this for details: https://forums.opensuse.org/showthread.php/521652-Swtching-between-X-session-(Plasma)-to-Console-amp-then-back-kills-X-session-dropping-you-to-DM-login?p=2804197#post2804197
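
Back on point 1 (the dpm/power state): a quick sketch of checking the reported state, using the switch-file format shown earlier in the thread (sample line assumed):

```shell
# With runtime PM working, the discrete card's power field should read
# DynOff (powered down) or DynPwr (dynamically powered), not plain Pwr.
line='0:DIS: :DynOff:0000:01:00.0'
state=$(echo "$line" | cut -d: -f4)
case "$state" in
  DynOff|DynPwr) echo "runtime power management active ($state)" ;;
  *)             echo "runtime power management NOT active ($state)" ;;
esac
```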

Hi Tyler_K

so I’m not clear about that openSUSE naming … perhaps the package spec file provides some clue … shrugs

Please view this - https://paste.opensuse.org/90634388

That said, I don’t know if Leap will be upgrading to that anyways (though you could always add the xorg&Mesa repo and update your system to the more recent versions yourself)

I cannot find a repo in the Community Repositories list. Where would one be, please? I did find the source code for Xorg on a mirror at http://mirror.csclub.uwaterloo.ca/x.org/X11R7.7/ but I would be very uncomfortable compiling it.

You could also switch to glamor acceleration and you’d get DRI3 …

How do I switch to glamor please?

I don’t use blender, but I’m rather surprised by that statement. Perhaps you have misunderstood something … if anything, I’d suspect that it’d be in relation to openCL support

In blender’s manual it states the following criteria, which my card does not meet -

OpenCL

OpenCL is supported for GPU rendering with AMD graphics cards. We only support graphics cards with GCN architecture (HD 7xxx and above). Not all HD 7xxx cards are GCN cards though, you can check if your card is here.

In regard to your follow up reply -

What output do you have for
dmesg | grep -i dpm

chris@linux-4j2s:~> dmesg | grep -i dpm
[    3.618727] [drm] radeon: dpm initialized
chris@linux-4j2s:~> 


we’d probably expect to see the power state of the AMD adapter reporting DynOff or DynPwr if the runtime dynamic power management is working

Please ignore that previous screenshot of mine of the vgaswitcheroo settings. I did a “stupid” thing and somehow managed to get both cards powered. It is now as it should be.

linux-4j2s:/home/chris # cat /sys/kernel/debug/vgaswitcheroo/switch 
0:DIS: :DynOff:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
linux-4j2s:/home/chris # 

  3. Lastly, given you’re running slightly older incarnations of the graphics environment and driver stacks (than me), you may be able to return a favour and do some testing for me: could you do some VT switching (between your desktop session and the console (the various VTs) and then back again) and report (in the thread I’ll link to in a second) if you experience any crash of X … see this for details: https://forums.opensuse.org/showthre…97#post2804197

Happy to return the favour. You want me to do this in Plasma5 or LXDE? I do switch from the desktop to a tty ‘semi regularly’ already. I usually use an external monitor connected via HDMI with the laptop LCD turned off. In Plasma5 sometimes the screen would go blank or I would lose the task bar at the bottom. LXDE has been ROCK SOLID! I’ll switch back to Plasma5 and use the ttys. If (when) it crashes, what logs do you want?

Regards,
Chris.

Hi Chris

AFAICS, it doesn’t really tell us anything more than what we already know. But thanks anyway. (It’s not a big deal regardless.)

I cannot find a repo in the Community Repositories list. Where would one be, please? I did find the source code for Xorg on a mirror at http://mirror.csclub.uwaterloo.ca/x.org/X11R7.7/ but I would be very uncomfortable compiling it.
No, no, don’t bother with that … you’d zypper dup from here: http://download.opensuse.org/repositories/X11:/XOrg/openSUSE_Leap_42.2/

How do I switch to glamor please?
Sorry, it’s not quite straightforward for you because of the multiple GPUs (and from different vendors) …
For radeon:

  • if you have v7.8.0 of the xf86-video-ati driver (which I don’t think Leap has natively), then it will enable DRI3 by default, provided
    that the X server version is >= 1.18.3 and glamor is enabled … when that is the case, your Xorg log should report that DRI3 is enabled … note that it may appear that there is some conflicting info returned by glamor about it using DRI2; the latter can be ignored. And you can verify that DRI3 is indeed being utilised by the output of one of the aforementioned tests listed above in the thread (glxgears, glxinfo). Glamor itself is not yet enabled by default (though it should be enabled in v7.8.1 of the radeon xorg driver). So, by default, the driver will still be using EXA, and hence only provide DRI2
  • To enable glamor in older driver versions, just add an AccelMethod option for it ("glamor") to the Device section in /etc/X11/Xorg.conf.d … (see “man radeon”)
  • To enable DRI3 in older driver versions, aside from the glamor caveat, just add a DRI option for it ("3") to the Device section in /etc/X11/Xorg.conf.d … (see “man radeon”)

Got that? lol! … Oh, but wait, there’s more: the Intel side! Leaving aside anything about its 2D accel handling and DRI3 support, you want to bring things together by having both devices listed in the Device snippet file, and then both in the Screen snippet file. Respectively, like:

Section "Device"
    Identifier  "Radeon Device"
    Driver      "radeon"
    BusID       "PCI:1:0:0"
    Option      "AccelMethod" "glamor"
    # Option "DRI" "3"
EndSection

Section "Device"
    Identifier  "Intel Device"
    Driver      "intel"
    BusID       "PCI:0:2:0"
EndSection

and

Section "Screen"
  Identifier "Default Screen"
  Device "Intel Device"
  Device "Radeon Device"
EndSection

If that works, know that you will likely get some glitching with glamor. v1.19 of the X server sees a number of glamor improvements, so I expect that to go away. Also note that when you get v7.8 of the radeon driver, there is potential you might also experience the error that I discuss in that other thread.

You may even wish to change the Intel driver to use UXA or, alternatively, not use the intel Xorg driver at all, but use the generic modesetting driver for the Intel device instead … both of which may help with your Plasma problems. See: https://forums.opensuse.org/showthread.php/521519-Screen-Freeze-on-Leap-42-2-only-option-is-reboot
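
A hypothetical Device-section fragment for that modesetting alternative (a sketch, not tested in this thread; the modesetting DDX ships with the X server itself, so no extra driver package should be needed):

```
Section "Device"
    Identifier  "Intel Device"
    Driver      "modesetting"
    BusID       "PCI:0:2:0"
EndSection
```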

In blender’s manual it states the following criteria, which my card does not meet -

“OpenCL is supported for GPU rendering with AMD graphics cards. We only support graphics cards with GCN architecture (HD 7xxx and above). Not all HD 7xxx cards are GCN cards though, you can check if your card is here.”

“Oh well, at least I know why now. Time to look at building a box with an NVidia GTX900 series card (or two).”

Ahh, yes, as expected, it was in relation to OpenCL. And so, in that regard, in terms of options, don’t overlook the obvious (a supported GCN device) … you might want to also keep an eye on, or consider, the ROCm developments too:

chris@linux-4j2s:~> dmesg | grep -i dpm
[    3.618727] [drm] radeon: dpm initialized


Please ignore that previous screenshot of mine of the vgaswitcheroo settings. I did a “stupid” thing and somehow managed to get both cards powered. It is now as it should be.

linux-4j2s:/home/chris # cat /sys/kernel/debug/vgaswitcheroo/switch 
0:DIS: :DynOff:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0

Ahh, very good; indeed, as it should be … Perhaps that “mishap” would account for the unexpectedly poor performance behaviour mentioned previously

Happy to return the favour. You want me to do this in Plasma5 or LXDE? I do switch from the desktop to a tty ‘semi regularly’ already. I usually use an external monitor connected via HDMI with the laptop LCD turned off. In Plasma5 sometimes the screen would go blank or I would lose the task bar at the bottom. LXDE has been ROCK SOLID! I’ll switch back to Plasma5 and use the ttys. If (when) it crashes, what logs do you want?
Whatever DE you prefer (the error I encounter is with the server side of things, and is agnostic about the desktop/type of user session). You don’t have to provide any log if it crashes – just have a look through the xorg log for the crashed session (which would then be “Xorg.0.log.old”, not the now-current instance … you’d likely be able to find the old one in systemd journaling anyway, if you know the time it occurred) … just be on the lookout for error messages along the lines of “failed to set mode: No space left on device” and “EnterVT failed for gpu”

I’m not expecting you to notice such, though, under or with your current environment. As I mentioned, that may change if you get adventurous and update. Thanks.

Hi Tyler_K,

Thanks for all that advice.

you’d zypper dup from here: http://download.opensuse.org/reposit…USE_Leap_42.2/

I think I will be adventurous and update the Xorg via the mentioned repo in the next few days after I have run clonezilla on the HD ;). ( side note: I did try Btrfs to play with snapper but things weren’t performing well. I reinstalled 42.2 formatting both the / and /home partitions as EXT4 - much better).

I’ll see if the update gets any better performance out of the AMD in Second Life. I think I will abandon all hope of using it for blender3D (those articles on ROCm went over my head and it’s maybe not worth pursuing, though I will reread them again. Of course the big question for me is, if I build a new box for GPU rendering, do I get bang-for-buck with AMD CPU+GPU, Intel CPU+NVidia, or for that matter AMD CPU+NVidia? Much googling in the new year :)).

If the updated Xorg is ok under LXDE I will give Plasma5 another chance and see what happens and report.

Regards,
Chris.

Hi Tyler_K,
Next few days - bah I couldn’t wait ;).

Ok after some bizarre initial version downgrade message from zypper (https://paste.opensuse.org/67350467 ) xorg server and the ati driver are up to date.

chris@linux-4j2s:~> zypper info xorg-x11-server
Loading repository data...
Reading installed packages...


Information for package xorg-x11-server:
----------------------------------------
Repository     : X11                         
Name           : xorg-x11-server             
Version        : 1.19.0-470.1                
Arch           : x86_64                      
Vendor         : obs://build.opensuse.org/X11
Installed Size : 5.2 MiB                     
Installed      : Yes                         
Status         : up-to-date                  
Summary        : X                           
Description    :                             
    This package contains the X.Org Server.

chris@linux-4j2s:~> zypper info xf86*ati
Loading repository data...
Reading installed packages...


Information for package xf86-video-ati:
---------------------------------------
Repository     : X11                                   
Name           : xf86-video-ati                        
Version        : 7.8.0-66.3                            
Arch           : x86_64                                
Vendor         : obs://build.opensuse.org/X11          
Installed Size : 545.8 KiB                             
Installed      : Yes                                   
Status         : up-to-date                            
Summary        : ATI video driver for the Xorg X server
Description    :                                       
    ati is an Xorg driver for ATI/AMD video cards.

    It autodetects whether your hardware has a Radeon, Rage 128, or Mach64
    or earlier class of chipset, and loads the radeon, r128, or mach64
    driver as appropriate.

chris@linux-4j2s:~>

Is the reason the drivers from this repo are not in Leap itself because of licensing?

The listproviders duplicate bug is gone now.

chris@linux-4j2s:~> xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x7d cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 3 outputs: 5 associated providers: 0 name:Intel
Provider 1: id: 0x55 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 6 outputs: 0 associated providers: 0 name:TURKS @ pci:0000:01:00.0
chris@linux-4j2s:~> 

After editing the /etc/X11/Xorg.conf.d/50-devices.conf file with your code, at the next reboot X failed to start and I could only get the console, from which I wiped that file again to get X working. The log from journalctl for that failed reboot is at https://paste.opensuse.org/88772687 while the log for the next reboot with the 50-…conf files wiped is at https://paste.opensuse.org/11932950 . I hope you can see some cause for the problem.

I’ll halt any further testing until you may make any sense for the X server to fail with those conf files.

Regards,
Chris.

Is the reason the drivers from this repo are not in Leap itself because of licensing?

Yes…

LOL … I saw from your dmesg output that you were mounting a Raspberry Pi … that tends to indicate to me that you like to tinker with computer stuff … so, it doesn’t surprise me in the least that you couldn’t hold out a few days from opening up the (proverbial) presents under the Christmas tree you had there. :wink:

Ok after some bizarre initial version downgrade message from zypper (https://paste.opensuse.org/67350467 ) xorg server and the ati driver are up to date.
Offhand, I haven’t a clue why you’d have received the downgrade message, but I’m sure zypper had its reasons … in any regard, yeah, you’ve got the new stuff now.

Is the reason the drivers from this repo are not in Leap itself because of licensing?
I don’t think it has anything to do with licensing (I’ve never considered or looked into that aspect – offhand, I really don’t think there is anything in there that presents a licensing issue) … what is at issue, however, is that the content deviates greatly from the stable release … you’d have to look into Leap’s update & upgrade policies. Typically the stable distro release (that’d be Leap) only updates particular packages, and does not provide upgrades after release … I know to many that upgrade and update are synonymous, but that’s more to do with the imprecision of the English language, as opposed to mincing words and specific package mgmt logic & policy.

The listproviders duplicate bug is gone now.
as expected

After editing the /etc/X11/Xorg.conf.d/50-devices.conf file with your code then at the next reboot, X failed to start and I could only get the console from which I wiped that file again to get X working. The log from journalctl for that failed reboot is at https://paste.opensuse.org/88772687 while the log for the next reboot with the 50-…conf files wiped is at https://paste.opensuse.org/11932950 . I hope you can see some cause for the problem.
Sorry, my mistake … the xorg log gets included in systemd’s journal if rootless invocation was used (in which case, it’s also logged to ~/.local/share/xorg) … That (rootless invocation of X) is, I believe, dependent upon display manager support … I know gdm does this by default, but sddm (and possibly others) still (by default) invoke the X session as root … which, consequently, means you’ll find the xorg log in the age-old location: /var/log … The age-old method also has the limitation of keeping only the current X session (Xorg.n.log) and the previous session (Xorg.n.log.old) and recycling from there (note: n in those filenames = display server session; typically 0) … with journalctl, on the other hand, you can retrieve a log from any number of boots ago without worry … one just has to keep track of which boot they’re interested in.
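
To summarise the two locations (paths as described above; which one applies depends on how the display manager starts X):

```shell
# Candidate Xorg log locations: rootless sessions log under the user's
# home, root-started sessions under /var/log (openSUSE defaults as
# described above). Check both when hunting for a failed session's log.
for f in "$HOME/.local/share/xorg/Xorg.0.log" /var/log/Xorg.0.log; do
  printf 'candidate: %s\n' "$f"
done
```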

So, to have made a short story long, we need to see the xorg log for the failed boot … I do note from your provided dmesg that there are some messages that I would not have expected to see, but best to see what’s up with regard to why the X display server is not starting first … (actually, it’s likely that it starts, but terminates for some reason, leaving you at the cli/console environment).

Hi Tyler_K,

Yes, I like to tinker. Being a power miser, a Raspberry Pi 2 running Openmediavault and a 2TB USB HDD at 6.5 watts is better than an old DualCore box with Freenas and 4 HDDs at 185 watts ;). The Freenas box only gets turned on once a month for a system-wide backup.

Right, I re-edited the 50-devices.conf file with DRI 3 and the UXA option from the other post -


Section "Device"
    Identifier  "Radeon Device"
    Driver      "radeon"
    BusID       "PCI:1:0:0"
    Option      "AccelMethod" "glamor"
    Option      "DRI" "3"
EndSection

Section "Device"
    Identifier  "Intel Device"
    Driver      "intel"
    BusID       "PCI:0:2:0"
    Option      "AccelMethod"  "uxa"
EndSection

and the 50-screen.conf file with -

Section "Screen"
  Identifier "Default Screen"
  Device "Intel Device"
  Device "Radeon Device"
EndSection

As expected, the system would only boot to the console. A copy of Xorg.0.log is here https://paste.opensuse.org/68951432 . I see the message about no screens being found, but it is “all Greek to me”. I have included the journal for that boot here https://paste.opensuse.org/71612476 .

Regards,
Chris.

Hi Chris

Very nice.

As expected the system would only boot to the console. Copy of Xorg.0.log here https://paste.opensuse.org/68951432 . I see the message about no screens being found but it is “all greek to me”.
Yep, as expected, the X server is starting but, because the configuration is wrong for your system’s topology, it’s terminating; the ultimate reason for which you’ve identified.
Goes to show ya that ya can’t just copy what any ol’ body tells ya on the internet and expect it to work! lol!

If you change the Screen file to the following, your system will likely boot properly again:

Section "Screen"
     Identifier "Default Screen"
     Device "Intel Device"
     GPUDevice "Radeon Device"
EndSection

Just as a note, with DRI3 (and provided your Intel is making use of DRI3; you can check that is the case by rendering something GL on it (the Intel adapter) and using LIBGL_DEBUG), you won’t have to invoke xrandr --setprovideroffloadsink to get the radeon device as the renderer; with DRI3, it will handle that on its own, so all you’ll need to do is pass DRI_PRIME=1

The radeon adapter will no longer be configured for use in X and will only be mentioned in passing in your xorg log. In essence, it’s now just a rendering device.

If you have outputs on your system that are attached to the radeon, they won’t be displaying anything … though, reverse prime might work (but I’m not certain when using GPUDevice).

You should be able to determine the exact topology via this big ugly command:

ls -l /sys/class/drm/card? && ls /sys/class/drm/*/status | xargs -I {} -i bash -c "echo -n {}: ; cat {}"

have included the journal for that boot here https://paste.opensuse.org/71612476 .
I didn’t look at it; no time, and likely not important to resolving the current problem.

Hi Tyler_K,

With the edited screen conf file the laptop now boots ok. Xorg.0.log https://paste.opensuse.org/38664681 .

However, it seems the Intel cannot use DRI3?

chris@linux-4j2s:~>  LIBGL_DEBUG=verbose glxgears -info
libGL: screen 0 does not appear to be DRI3 capable
libGL: pci id for fd 4: 8086:0116, driver i965
libGL: OpenDriver: trying /usr/lib64/dri/tls/i965_dri.so
libGL: OpenDriver: trying /usr/lib64/dri/i965_dri.so
libGL: Can't open configuration file /home/chris/.drirc: No such file or directory.
libGL: Using DRI2 for screen 0
libGL: Can't open configuration file /home/chris/.drirc: No such file or directory.
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
GL_RENDERER   = Mesa DRI Intel(R) Sandybridge Mobile 
GL_VERSION    = 3.0 Mesa 13.0.2
GL_VENDOR     = Intel Open Source Technology Center
GL_EXTENSIONS = GL

So I ran ‘xrandr --setprovideroffloadsink 1 0’ and got the same result. Running on the AMD made no difference to DRI3 either.

chris@linux-4j2s:~> DRI_PRIME=1 LIBGL_DEBUG=verbose glxgears -info
libGL: screen 0 does not appear to be DRI3 capable
libGL: pci id for fd 4: 1002:6740, driver r600
libGL: OpenDriver: trying /usr/lib64/dri/tls/r600_dri.so
libGL: OpenDriver: trying /usr/lib64/dri/r600_dri.so
libGL: Can't open configuration file /home/chris/.drirc: No such file or directory.
libGL: Can't open configuration file /home/chris/.drirc: No such file or directory.
libGL: Using DRI2 for screen 0
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
GL_RENDERER   = Gallium 0.4 on AMD TURKS (DRM 2.43.0 / 4.4.36-8-default, LLVM 3.8.0)
GL_VERSION    = 3.0 Mesa 13.0.2
GL_VENDOR     = X.Org
GL_EXTENSIONS = 

I ran that ‘big ugly’ command and got

chris@linux-4j2s:~> ls -l /sys/class/drm/card? && ls /sys/class/drm/*/status | xargs -I {} -i bash -c "echo -n {}: ; cat {}"
lrwxrwxrwx 1 root root 0 Dec 24 10:41 /sys/class/drm/card0 -> ../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/drm/card0
lrwxrwxrwx 1 root root 0 Dec 24 10:41 /sys/class/drm/card1 -> ../../devices/pci0000:00/0000:00:02.0/drm/card1
/sys/class/drm/card1-DP-1/status:disconnected
/sys/class/drm/card1-HDMI-A-1/status:connected
/sys/class/drm/card1-LVDS-1/status:connected
/sys/class/drm/card1-VGA-1/status:disconnected
chris@linux-4j2s:~> 


However, I have noticed a (maybe) slight improvement in the Firestorm viewer’s rendering performance in Second Life when using the AMD via DRI_PRIME=1.
Bottom line, you got my AMD to power up. But it seems this GPU’s specs just aren’t up to scratch for real heavy work (unless you can pull one last rabbit out of the hat ;) ).

Thanks very much for your help here,
Chris.

STOP PRESS
Something weird has happened. Other apps like Kodi and mpv (both media players) freeze on loading, while anything playing on YouTube drops frames significantly. I did the ‘xrandr --setprovideroffloadsink 1 0’ trick as before to no effect. I wiped the 50-device and 50-screen conf files and rebooted - no luck. I tried to find a clue or a screwed file but to no avail. So I reluctantly restored my image using the original distro v1.18 of Xorg server and things are fine again.

I appreciate your help but as I said, maybe this AMD isn’t worth the effort - though I might install v1.19 again soon and take little steps just to nut out what went wrong.

Many thanks and Merry Christmas,
Chris.

There are a couple of rabbits in the hat in regards to this point:

  • first, the intel driver defaults to DRI2, so you’d have to use a similar DRI3 option for it … I would have mentioned that earlier, but I was thinking that it already exposed DRI3, but that’s not the case, as you’ve found out.
  • second, as mentioned in that other thread about solutions to intel devices freezing with Plasma, you could always use the modesetting driver … modesetting will use DRI3

Both of those will result in DRI3 on Screen0
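For the second rabbit, a rough sketch of what the modesetting Device section could look like (the Identifier is just illustrative; the BusID matches the one used earlier in this thread):

```
Section "Device"
    Identifier  "Intel Modesetting"
    Driver      "modesetting"
    BusID       "PCI:0:2:0"
EndSection
```

The modesetting driver renders via glamor and defaults to DRI3, so no explicit DRI option should be needed — though I haven’t verified that specifically on 42.2.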

Another point maybe worth mentioning: if you didn’t want to use the GPUDevice way, you could reorder the listings of the devices in both the Device file and the Screen file … that gives you four configuration possibilities. I leave it as an exercise to see what each will result in (hint: some won’t work; another will effectively set up the device similarly to how the GPUDevice option does, albeit you’ll get the radeon card indexed in the xorg log).

I ran that ‘big ugly’ command and got
As can be seen, your radeon device has no outputs attached to it (IOW, and stating the obvious, all outputs are attached to the intel adapter) … The radeon adapter is a pure rendering device.
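Purely for illustration, a little shell sketch (the function name is made up) that reformats a connector status listing in the same shape as the one above into a per-card summary, making the "all outputs on the intel card" point easy to see:

```shell
#!/bin/sh
# summarize_connectors is a hypothetical helper: it reads lines like
#   /sys/class/drm/card1-LVDS-1/status:connected
# on stdin and prints "card1: LVDS-1 is connected".
summarize_connectors() {
  while IFS=: read -r path state; do
    conn=${path#/sys/class/drm/}   # strip the sysfs prefix
    conn=${conn%/status}           # strip the trailing /status
    echo "${conn%%-*}: ${conn#*-} is $state"
  done
}
```

You’d feed it the second half of the "big ugly" command, e.g. `ls /sys/class/drm/*/status | xargs -I {} sh -c 'echo -n {}:; cat {}' | summarize_connectors`.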

My suggestion to that is to launch them from a terminal and see what the output is, and if it may provide some clue as to what is going on.

Anyway, good luck, and Merry Christmas.

Hi Tyler_K,

first, the intel driver defaults to DRI2, so you’d have to use a similar DRI3 option for it

Ok I will redo the Devices conf and add a DRI3 option for the Intel as this -

Section "Device"
    Identifier  "Radeon Device"
    Driver      "radeon"
    BusID       "PCI:1:0:0"
    Option      "AccelMethod" "glamor"
    Option      "DRI" "3"
EndSection

Section "Device"
    Identifier  "Intel Device"
    Driver      "intel"
    BusID       "PCI:0:2:0"
    Option      "AccelMethod"  "uxa" 
    Option      "DRI" "3"
EndSection

if you didn’t want to use the AddGPUdevice way, you could reorder the listings of the devices in both the Device file and Screen file

Not exactly sure of this - why would I not want to put the GPUDevice option in the Screen conf? Also I didn’t realize that the order of listings in the two conf files matters - that could be a hazard for young players like myself :).

The radeon adapter is a pure rendering device.

So does that mean the Intel will ALWAYS drive the screens and any GPU work is off loaded to the AMD then back to the Intel to display? That makes me wonder - when I did that “stupid thing” via vgaswitcheroo and had both devices powered, I noticed my system’s temperature started to rise quickly to 95+ Celsius before I shut things down.

launch them from terminal and see what the ouptut is

I did launch from terminal. Kodi just froze, no terminal messages - needed to kill the process via htop. I am sure that the apps in question have debug switches and logs so I will revisit the apps issue after I get the above down. But first a full clonezilla backup tonight;).

Anyway thanks again,
Chris.

if you want the two adapters listed in the xorg log file in more detail. In some scenarios (not yours) you don’t want to use the adapters in such a manner … it comes down to the provider object roles your adapters can assume

Also I didn’t realize that order of listings in the two conf files matters
yeah, I don’t think it’s documented in the xorg man files either. But if you try out the four different configuration possibilities, you’d see in the xorg log that the

[    10.882] (**) |-->Screen "Default Screen" (0)
[    10.882] (**) |   |-->Monitor "<default monitor>"
[    10.882] (**) |   |-->Device "this"
[    10.882] (**) |   |-->GPUDevice "and this"

will change, as well as the more obvious point of whether the second adapter is indexed, and, if it is, how (i.e. “Radeon(0)” or “Radeon(G0)”)

So does that mean the Intel will ALWAYS drive the screens and any GPU work is off loaded to the AMD then back to the Intel to display?
No. The intel device will always be what places screen information onto any display attached to its outputs. But the intel device can still be the source renderer for that output too. And that is the intended case for these hybrid laptops → let the intel adapter render for less demanding daily tasks and needs, as you’ll have lower power consumption. But then, under more demanding situations (like an OpenGL video game), have the screen rendering done by the more powerful/capable radeon adapter, which then passes that info off to the “dumb” intel device to place upon the displays attached to it. So, in this second case, better performance, but at the expense of higher power consumption. This second case, however, does not automagically happen; it has to be configured. Speaking of which, you may be interested in: https://github.com/ChristophHaag/gpuchooser
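To spell out the configured second case: since on this setup DRI_PRIME=1 selects the radeon as the renderer, you can wrap any demanding app with a tiny helper (the function name prime_run is just my invention, not a standard tool):

```shell
#!/bin/sh
# prime_run is a hypothetical wrapper: it runs its arguments with
# DRI_PRIME=1 set, so the discrete (radeon) GPU does the rendering
# while the intel device still drives the displays.
prime_run() {
  DRI_PRIME=1 "$@"
}
```

Usage would be e.g. `prime_run glxgears -info` or `prime_run blender`, leaving everyday apps on the intel adapter.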

That makes me wonder - when I did that “stupid thing” via vgaswitcheroo and had both devices powered, I noticed my system’s temperature started to rise quickly to 95+ Celsius before I shut things down.
I couldn’t say for sure what was going on. Having both devices powered on would increase your temp, but I would suspect that it would be moderate, as opposed to a very high elevation (which I would think indicates that a load was being placed upon them).