[Macchiato] MacchiatoBin GPU progress [#freetimeproject]

Matt Spencer Matt.Spencer at arm.com
Fri May 5 18:02:53 BST 2017


Hi Ard, Leif has suggested I add you to this thread.


I am trying to get the MacchiatoBin working with a PCIe GPU.  The approach I am taking is to use a powered 16->1 lane PCIe adapter (https://www.amazon.co.uk/gp/product/B01ER2Z1GY/ref=oh_aui_detailpage_o03_s00?ie=UTF8&psc=1) and a modern NVIDIA GeForce GPU that is known to function well with Nouveau (https://www.amazon.co.uk/gp/product/B01AY7927A/ref=oh_aui_detailpage_o04_s00?ie=UTF8&psc=1).


I have the kernel booting, detecting the graphics card and trying to initialise the driver, but then I run into this issue:


[    3.048239] PCI host bridge /cpn-110-master/pcie@0x600000 ranges:
[    3.048252]    IO 0xf9000000..0xf900ffff -> 0xf9000000
[    3.048257]   MEM 0xf6000000..0xf6efffff -> 0xf6000000
[    3.048403] armada8k-pcie f2600000.pcie: PCI host bridge to bus 0000:00
[    3.048407] pci_bus 0000:00: root bus resource [bus 00-ff]
[    3.048412] pci_bus 0000:00: root bus resource [io  0x0000-0xffff] (bus address [0xf9000000-)
[    3.048415] pci_bus 0000:00: root bus resource [mem 0xf6000000-0xf6efffff]
[    3.048623] iommu: Adding device 0000:00:00.0 to group 0
[    3.074114] pci 0000:01:00.0: reg 0x24: initial BAR value 0x00000000 invalid
[    3.074772] iommu: Adding device 0000:01:00.0 to group 1
[    3.074786] vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=none,locks=none
[    3.075869] iommu: Adding device 0000:01:00.1 to group 1
[    3.084112] pci 0000:00:00.0: BAR 15: no space for [mem size 0x0c000000 64bit pref]
[    3.084116] pci 0000:00:00.0: BAR 15: failed to assign [mem size 0x0c000000 64bit pref]
[    3.084119] pci 0000:00:00.0: BAR 14: no space for [mem size 0x01800000]
[    3.084121] pci 0000:00:00.0: BAR 14: failed to assign [mem size 0x01800000]
[    3.084125] pci 0000:00:00.0: BAR 0: assigned [mem 0xf6000000-0xf60fffff]
[    3.084129] pci 0000:00:00.0: BAR 13: assigned [io  0x1000-0x1fff]
[    3.084135] pci 0000:01:00.0: BAR 1: no space for [mem size 0x08000000 64bit pref]
[    3.084137] pci 0000:01:00.0: BAR 1: failed to assign [mem size 0x08000000 64bit pref]
[    3.084140] pci 0000:01:00.0: BAR 3: no space for [mem size 0x02000000 64bit pref]
[    3.084142] pci 0000:01:00.0: BAR 3: failed to assign [mem size 0x02000000 64bit pref]
[    3.084144] pci 0000:01:00.0: BAR 0: no space for [mem size 0x01000000]
[    3.084147] pci 0000:01:00.0: BAR 0: failed to assign [mem size 0x01000000]
[    3.084150] pci 0000:01:00.0: BAR 6: no space for [mem size 0x00080000 pref]
[    3.084152] pci 0000:01:00.0: BAR 6: failed to assign [mem size 0x00080000 pref]
[    3.084154] pci 0000:01:00.1: BAR 0: no space for [mem size 0x00004000]
[    3.084157] pci 0000:01:00.1: BAR 0: failed to assign [mem size 0x00004000]
[    3.084159] pci 0000:01:00.0: BAR 5: assigned [io  0x1000-0x107f]
[    3.084182] pci 0000:00:00.0: PCI bridge to [bus 01]
[    3.084186] pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
[    3.084428] pcieport 0000:00:00.0: Signaling PME through PCIe PME interrupt
[    3.084431] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt
[    3.084432] pci 0000:01:00.1: Signaling PME through PCIe PME interrupt


For reference, PCI devices on the bus are:


root@localhost:~# lspci
00:00.0 PCI bridge: Marvell Technology Group Ltd. Device 0110
01:00.0 VGA compatible controller: NVIDIA Corporation Device 128b (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)



It looks like I don't have enough BAR memory available.  Leif tells me that the base configuration only exposes 15MB of mappable BAR space, but that you have a different firmware that could solve the problem?
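

Doing the sums on the log above seems to back that up.  Here is a quick back-of-the-envelope sketch (Python, with the sizes taken straight from the "no space" messages), not anything authoritative:

    # Host bridge MEM window from the boot log: 0xf6000000..0xf6efffff
    window = 0xf6efffff - 0xf6000000 + 1            # 0x00f00000 = 15 MiB

    # BAR sizes the card is asking for, from the "no space" messages
    gpu_bars = {
        "BAR 0 (MMIO)":          0x01000000,        # 16 MiB
        "BAR 1 (64bit pref)":    0x08000000,        # 128 MiB
        "BAR 3 (64bit pref)":    0x02000000,        # 32 MiB
        "BAR 6 (expansion ROM)": 0x00080000,        # 512 KiB
    }

    print(f"window = {window / 2**20:.1f} MiB")                  # 15.0 MiB
    print(f"needed = {sum(gpu_bars.values()) / 2**20:.1f} MiB")  # 176.5 MiB

So even before counting the bridge windows (BAR 14/15), the card wants more than ten times what the current MEM window can map, which matches the 15MB figure below.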


I am going to cross-post this to the list that Steve M set up to track Macchiato activities.


/Matt

________________________________
From: Leif Lindholm
Sent: 05 May 2017 16:20:15
To: Matt Spencer; Wookey; Steve Capper; Steve McIntyre
Subject: Re: MacchiatoBin GPU progress [#freetimeproject]

This is a firmware change.

We're currently a bit distracted trying to get to the bottom of the
random TLB failures with > 39 bits VA.

But if you don't mind adding Ard (.Biesheuvel at linaro.org) to the thread,
we can keep you posted. (And we need to be writing some stuff on the
Linaro wiki.)

/
     Leif

On 05/05/17 15:53, Matt Spencer wrote:
> I bought a low-end new card - a GeForce GT 710.  It is a 16-lane PCIe
> device, so I also picked up a 16->1 powered PCIe adapter.  The plan was
> to upgrade this to a 16->4 adapter to increase the available bandwidth
> if this experiment worked.
>
>
> https://www.amazon.co.uk/gp/product/B01ER2Z1GY/ref=oh_aui_detailpage_o03_s00?ie=UTF8&psc=1
>
> https://www.amazon.co.uk/gp/product/B01AY7927A/ref=oh_aui_detailpage_o04_s00?ie=UTF8&psc=1
>
>
> Is the change a runtime/configuration change or is it a kernel
> build-time configuration?  And where can I get more details?
>
>
> /Matt
>
> ------------------------------------------------------------------------
> *From:* Leif Lindholm
> *Sent:* 05 May 2017 15:35:22
> *To:* Matt Spencer; Wookey; Steve Capper; Steve McIntyre
> *Subject:* Re: MacchiatoBin GPU progress [#freetimeproject]
>
> Well, the default configuration only gives you 15MB of mappable BAR space.
>
> Ard has a version that gives you gigabytes, with a completely different
> setup from the Marvell kernel. We can share this.
>
> However, you seem to be doing better in one regard - most of our cards
> don't show up at all. Which specific GeForce card are you using?
>
> /
>      Leif
>
> On 05/05/17 15:22, Matt Spencer wrote:
>> Hi Guys
>>
>>
>> Getting closer (one tiny step at a time!).  The GeForce GPU is detected
>> on the PCIe bus, and the Nouveau driver is loaded - but then this happens:
>>
>>
>> [    3.093228] PCI host bridge /cpn-110-master/pcie@0x600000 ranges:
>> [    3.099371]    IO 0xf9000000..0xf900ffff -> 0xf9000000
>> [    3.104539]   MEM 0xf6000000..0xf6efffff -> 0xf6000000
>> [    3.109819] armada8k-pcie f2600000.pcie: PCI host bridge to bus 0000:00
>> [    3.116471] pci_bus 0000:00: root bus resource [bus 00-ff]
>> [    3.121989] pci_bus 0000:00: root bus resource [io  0x0000-0xffff] (bus address [0xf9000000-)
>> [    3.131514] pci_bus 0000:00: root bus resource [mem 0xf6000000-0xf6efffff]
>> [    3.138640] iommu: Adding device 0000:00:00.0 to group 0
>> [   24.152049] INFO: rcu_preempt detected stalls on CPUs/tasks:
>> [   24.157748]  1-...: (1 GPs behind) idle=941/140000000000000/0 softirq=145/146 fqs=5251
>> [   24.165784]  (detected by 2, t=5255 jiffies, g=-223, c=-224, q=13)
>> [   24.172002] Task dump for CPU 1:
>> [   24.175242] kworker/u8:0    R  running task        0     6      2 0x00000002
>> [   24.182356] Workqueue: deferwq deferred_probe_work_func
>> [   24.187612] Call trace:
>> [   24.190072] [<ffffffc000086cc4>] __switch_to+0x8c/0xa0
>> [   24.195234] [<ffffffc0000cc284>] __queue_work+0x124/0x2a0
>> [   24.200656] [<ffffffc0000ecf74>] __wake_up+0x44/0x68
>> [   24.205642] [<ffffffc0000c9be0>] call_usermodehelper_exec+0xb8/0x178
>> [   24.212021] [<ffffffc0de893330>] 0xffffffc0de893330
>>
>>
>> To my mind, it looks like the kernel stalls when mapping the IO for the
>> graphics card - but it's been a number of years since I hacked on the
>> kernel - do any of you have any ideas where to look next?
>>
>>
>> /Matt
>>
>
