
Conversation

@joshwhieb commented Nov 15, 2025

Add an initial SystemC PCIe TLM model, a TLM-based PCI root (gs_gpex), and an NVMe SSD based on the simple SSD code. There are probably a few things to resolve before merging. It currently doesn't have any functionality for synchronizing time, which is next up for me. It should be able to support multiple PCI devices: I was able to attach two NVMe SSDs, but the second one wasn't working with MSI-X, so I'll have to investigate that as I go.
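For reference, the TLM wiring follows a standard pattern: the root drives each endpoint's config/BAR space through an initiator socket, and endpoint DMA (including MSI-X writes) comes back up through a target socket. Below is a minimal sketch of that pattern only; the module, socket, and method names are placeholders, not the actual gs_gpex/nvme_ssd interfaces.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    // Hypothetical endpoint: a target socket for config/BAR accesses from the
    // root, and an initiator socket for DMA and MSI-X writes toward memory.
    struct pci_endpoint : sc_core::sc_module {
        tlm_utils::simple_target_socket<pci_endpoint>    cfg_socket;
        tlm_utils::simple_initiator_socket<pci_endpoint> dma_socket;

        SC_CTOR(pci_endpoint) : cfg_socket("cfg_socket"), dma_socket("dma_socket") {
            cfg_socket.register_b_transport(this, &pci_endpoint::b_transport);
        }
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
            // A real model decodes config/BAR offsets here.
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    // Hypothetical root: one initiator socket per downstream device, plus a
    // target socket for upstream DMA that a real model forwards to the router.
    struct pci_root : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<pci_root> downstream;
        tlm_utils::simple_target_socket<pci_root>    upstream;

        SC_CTOR(pci_root) : downstream("downstream"), upstream("upstream") {
            upstream.register_b_transport(this, &pci_root::b_transport);
        }
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    int sc_main(int, char*[]) {
        pci_root     root("root");
        pci_endpoint nvme("nvme");
        root.downstream.bind(nvme.cfg_socket);   // config/BAR path down
        nvme.dma_socket.bind(root.upstream);     // DMA/MSI path back up
        sc_core::sc_start(sc_core::SC_ZERO_TIME);
        return 0;
    }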

I also pulled in this PR, so we can probably close it for now: #30.
Closes #27.

I also noticed an intermittent crash on my rebase and managed to catch it once with the debugger: some sort of segfault in the arm_gicv3. I'm not entirely sure of the root cause so far. The issue doesn't show up when using the v9.1-v0.20.1 quic/qemu, only with v10.1-v0.2, so it seems to be contained to that QEMU version.
(screenshot: segfault backtrace)

lspci output (00:00.0 is the gs_gpex and 00:01.0 is the nvme_ssd object):

00:00.0 Host bridge: Red Hat, Inc. QEMU PCIe Host bridge
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz+ UDF+ FastB2B+ ParErr+ DEVSEL=?? >TAbort+ <TAbort+ <MAbort+ >SERR+ <PERR+ INTx+
        Expansion ROM at <unassigned> [disabled] [size=2K]

00:01.0 Non-Volatile memory controller: Samsung Electronics Co Ltd Device 2001 (prog-if 02 [NVM Express])
        Subsystem: Device 3704:8086
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 18
        Region 0: Memory at 60304000 (32-bit, non-prefetchable) [size=8K]
        Region 4: Memory at 60300000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [f8] Power Management version 2
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [bc] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
                DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
                        MaxPayload 128 bytes, MaxReadReq 128 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed unknown, Width x0, ASPM not supported
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s (strange), Width x0 (ok)
                        TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
                         EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [b0] MSI-X: Enable+ Count=17 Masked-
                Vector table: BAR=4 offset=00000000
                PBA: BAR=4 offset=00002000
        Kernel driver in use: nvme
        Kernel modules: nvme
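For context on the MSI-X path shown above (Enable+, Count=17, vector table in BAR 4): delivering a vector is just a 4-byte DMA write of the table entry's message data to its message address, which on this platform should land on the GICv3 ITS doorbell. A minimal sketch follows, assuming a hypothetical dma_write() helper in place of the real TLM write path; this is not the nvme_ssd code, and the address/data values are illustrative only.

    #include <cstdint>
    #include <cstdio>

    struct msix_entry {              // one 16-byte MSI-X vector table entry
        uint32_t addr_lo, addr_hi;   // message address
        uint32_t data;               // message data
        uint32_t vector_ctrl;        // bit 0 = per-vector mask
    };

    // Placeholder for the device's DMA path toward the root complex.
    static void dma_write(uint64_t addr, uint32_t data) {
        std::printf("MSI write 0x%08x -> 0x%llx\n", data, (unsigned long long)addr);
    }

    // Returns false when the vector is masked; a real model would then set the
    // corresponding Pending Bit Array (PBA) bit instead of dropping it.
    bool msix_notify(const msix_entry* table, unsigned vec, bool function_masked) {
        const msix_entry& e = table[vec];
        if (function_masked || (e.vector_ctrl & 1u))
            return false;
        dma_write((uint64_t(e.addr_hi) << 32) | e.addr_lo, e.data);
        return true;
    }

    int main() {
        msix_entry table[17] = {};               // Count=17, as in the dump above
        table[0] = {0x08090040u, 0u, 0x2au, 0u}; // hypothetical address/data
        msix_notify(table, 0, false);
    }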

Running a simple read against the device, with the log output:

root@localhost:~# dd if=/dev/nvme0n1 bs=512 count=1 | hexdump -C
[I] [    393.368469807 s ]SystemC                       : Root MMIO WRITE to address: 0x60305038 len: 0x4
[W] [    393.368469807 s ]qdma                          : RING SQ TAIL DOORBELL
[I] [    393.368482767 s ]qdma                          : read slba 0 nlb 32
[W] [    393.368482777 s ]qdma                          : CQ Entry Push MSIX
[I] [    393.376382443 s ]SystemC                       : Root MMIO WRITE to address: 0x6030503c len: 0x4
[W] [    393.376382443 s ]qdma                          : RING CQ TAIL DOORBELL
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
1+0 records in
1+0 records out
*
00000200
512 bytes copied, 0.0124769 s, 41.0 kB/s
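The flow behind those log lines: the host MMIO write hits the SQ tail doorbell, the model fetches the submission entry, DMAs the requested blocks into the host buffer, writes a completion entry to the CQ, and pushes the queue's MSI-X vector. Here is a reduced sketch of that loop; the structs and helpers are illustrative, not the actual qdma code, and the DMA/MSI-X steps are left as comments.

    #include <cstdint>
    #include <cstdio>

    struct nvme_cmd { uint8_t opc; uint64_t slba; uint16_t nlb; };
    struct nvme_cqe { uint16_t sq_head; uint16_t status_phase; };

    struct queue_pair {
        uint16_t sq_head = 0, sq_tail = 0;   // submission queue consumer/producer
        uint16_t cq_tail = 0;                // completion queue producer
        uint16_t cq_phase = 1;               // phase tag, toggled when the CQ wraps
    };

    // Called from the MMIO target socket when the host writes the SQ tail
    // doorbell (the "RING SQ TAIL DOORBELL" line).
    void on_sq_doorbell(queue_pair& q, uint16_t new_tail) {
        q.sq_tail = new_tail;
        while (q.sq_head != q.sq_tail) {
            nvme_cmd cmd{};                  // would be DMA-read from the SQ ring
            // For a READ (opc 0x02), DMA nlb+1 blocks from slba into the host
            // buffer described by PRP1/PRP2 ("read slba 0 nlb 32").
            nvme_cqe cqe{q.sq_head, uint16_t(q.cq_phase & 1)};
            // DMA-write cqe to the CQ ring at cq_tail, then signal the queue's
            // MSI-X vector ("CQ Entry Push MSIX").
            q.cq_tail++;
            q.sq_head++;
            std::printf("completed opc=0x%02x slba=%llu nlb=%u (phase=%u)\n",
                        unsigned(cmd.opc), (unsigned long long)cmd.slba,
                        cmd.nlb + 1u, cqe.status_phase & 1u);
        }
    }

    int main() {
        queue_pair io_q;
        on_sq_doorbell(io_q, 1);             // host rings the doorbell once
    }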

Example Lua changes I make for the aarch64 Lua file:


    gpex_0 ={
        moduletype = "gs_gpex";
        log_level=3,
        bus_master = {bind = "&router.target_socket"};
        pio_iface = { address = 0x60200000, size = 0x0000100000, bind= "&router.initiator_socket"};
        mmio_iface = { address = 0x60300000, size = 0x001fd00000, bind= "&router.initiator_socket" };
        ecam_iface = { address = 0x43B50000, size = 0x0010000000, bind= "&router.initiator_socket" };
        mmio_iface_high = { address = 0x400000000, size = 0x200000000, bind= "&router.initiator_socket" },
        --irq_out_0 = {bind = "&gic_0.spi_in_541"};
        --irq_out_1 = {bind = "&gic_0.spi_in_542"};
        --irq_out_2 = {bind = "&gic_0.spi_in_543"};
        --irq_out_3 = {bind = "&gic_0.spi_in_544"};
    };
    nvme_disk_0 = {
        moduletype = "nvme_ssd",
        dylib_path = "nvme_ssd",
        args = {"&platform.gpex_0"},
        --serial = "nvme_serial_001",
        --blob_file=top().."fw/Artifacts/nvme_disk.img",
        --max_ioqpairs = 64
    };

    gic_0 =  {
        moduletype = "arm_gicv3",
        args = {"&platform.qemu_inst"},
        dist_iface    = {address=APSS_GIC600_GICD_APSS, size= OFFSET_APSS_ALIAS0_GICR_CTLR, bind = "&router.initiator_socket"};
        redist_iface_0= {address=APSS_GIC600_GICD_APSS+OFFSET_APSS_ALIAS0_GICR_CTLR, size=0x1C0000, bind = "&router.initiator_socket"};
        num_cpus = ARM_NUM_CPUS,
        redist_region = {ARM_NUM_CPUS / NUM_REDISTS};
        num_spi=960,
        has_lpi = true,
    };

    -- ITS device - pass GIC instance directly in args
    gic_its_0 = {
        moduletype = "arm_gicv3_its",
        args = {"&platform.qemu_inst", "&gic_0"},
        
        -- ITS control registers (GITS_CTLR, GITS_BASER, etc.)
        mem = {
            address = 0x08080000,
            size = 0x20000,
            bind = "&router.initiator_socket"
        },
    };
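For reference, configuration accesses arriving through the ecam_iface window above are decoded with the standard ECAM address layout (bus in bits [27:20], device in [19:15], function in [14:12], register offset in [11:0]). A small sketch of that decode, applied to the offset within the window, is shown below; it is illustrative only, not the gs_gpex handler.

    #include <cstdint>
    #include <cstdio>

    struct ecam_addr {
        unsigned bus, dev, fn;
        unsigned reg;            // byte offset into the 4 KiB config space
    };

    // Standard ECAM decode of an offset relative to the ECAM window base.
    ecam_addr decode_ecam(uint64_t offset_into_ecam_window) {
        return {
            unsigned(offset_into_ecam_window >> 20) & 0xffu,   // bus
            unsigned(offset_into_ecam_window >> 15) & 0x1fu,   // device
            unsigned(offset_into_ecam_window >> 12) & 0x7u,    // function
            unsigned(offset_into_ecam_window)       & 0xfffu,  // register
        };
    }

    int main() {
        // A config read at ecam_iface base + 0x8000 targets 00:01.0 register 0,
        // i.e. the nvme_ssd's vendor/device ID in the lspci dump above.
        ecam_addr a = decode_ecam(0x8000);
        std::printf("%02x:%02x.%x reg 0x%03x\n", a.bus, a.dev, a.fn, a.reg);
    }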

Review thread on the CMake dependency that pulls in pcie-model:

    # TODO(Jhieb) pull request changes to make sure capabilities are DWORD aligned.
    CPMAddPackage(
        NAME pcie_model
        GIT_REPOSITORY "https://github.com/joshwhieb/pcie-model.git"

@joshwhieb commented Nov 17, 2025:

I currently have it as a fork because I need to submit a PR and potentially put in checks to make sure that capabilities are DWORD aligned. Otherwise, when the host tries to read them, it doesn't read the correct values. For some reason x86 in their demos doesn't have problems with it, but it is a challenge in the ARM simulation. I'll have to PR that against the primary repository when I get a chance.

https://github.com/Xilinx/pcie-model

Xilinx/pcie-model@020cd98
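The underlying rule: the low two bits of every capability next pointer are reserved, so a capability placed at a non-DWORD offset gets read back from the wrong location when the host walks the list. A small sketch of forcing placement onto a 4-byte boundary follows, with hypothetical helper names; this is not the actual pcie-model patch.

    #include <cassert>
    #include <cstdint>

    // Round a configuration-space offset up to the next 4-byte boundary.
    constexpr uint16_t align_dword(uint16_t off) {
        return uint16_t((off + 3u) & ~3u);
    }

    // Place a capability of 'size' bytes at or after 'next_free'; the returned
    // offset is what the previous capability's next pointer should hold.
    uint16_t place_capability(uint16_t next_free, uint16_t size, uint16_t* new_free) {
        uint16_t off = align_dword(next_free);
        assert((off & 3u) == 0);          // readers mask the pointer's low two bits
        *new_free = uint16_t(off + size);
        return off;
    }

    int main() {
        uint16_t free_off = 0xb9;         // deliberately misaligned starting point
        uint16_t next = 0;
        uint16_t cap = place_capability(free_off, 12, &next);
        assert(cap == 0xbc && next == 0xc8);  // lands on a DWORD boundary,
                                              // like the [bc] entry in the dump above
    }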

A follow-up from @joshwhieb:

Created a PR to update the DWORD alignment of capability pointers so that I can switch the Git repository to the Xilinx one:

Xilinx/pcie-model#2
