Drivers Marvell SCSI & RAID Devices



Hi there, I have a problem with a driver: I can't find which driver is for the Marvell console. Any ideas? My config: Motherboard: Asus Rampage V Extreme; CPU: Intel i7-5930K; Marvell 92XX SATA Controller 6Gb Driver for Windows 10 (x64), 1.2.0.1039-WHQL (9/5/2013 a.k.a. 6/19/2014). In Windows 10, the driver for the Marvell SATA Controller may not get installed automatically, which leads us on a wild goose chase as to where to find it.

Marvell Storage Utility (MSU) for HPE ProLiant MicroServer Gen10 Server

By downloading, you agree to the terms and conditions of the Hewlett Packard Enterprise Software License Agreement.
Note: Some software requires a valid warranty, current Hewlett Packard Enterprise support contract, or a license fee.

Type: Utility - Tools
Version: 4.1.0.2031 (15 Jun 2017)
Operating System(s):
Microsoft Windows 7 (32-bit)
Microsoft Windows 7 (64-bit)
Microsoft Windows 8.1 (64-bit)
Microsoft Windows Server 2012 R2
Microsoft Windows Server 2016
Microsoft Windows 10 (64-bit)
File name: Marvell_MSU_v4.1.0.2031.zip (55 MB)
Marvell Storage Utility for HPE ProLiant MicroServer Gen10 under Windows supports the creation of virtual disks and configuration of RAID level 0, 1, or JBOD (Free Disk).

To ensure the integrity of your download, HPE recommends verifying your results with this SHA-256 Checksum value:

147d88feba71c96c623b2acbc464179f0fe263063d72d29953616c8cd263b232  Marvell_MSU_v4.1.0.2031.zip
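
A command-line hashing tool is usually the quickest way to check this value. Purely as an illustration, the minimal C sketch below (assuming OpenSSL's EVP API is available; the file name is the one listed above) hashes the downloaded archive and compares the result against the published checksum:

    /* Minimal SHA-256 verification sketch using OpenSSL's EVP API.
     * Illustrative only; any SHA-256 tool gives the same answer. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    int main(int argc, char **argv)
    {
        const char *expected =
            "147d88feba71c96c623b2acbc464179f0fe263063d72d29953616c8cd263b232";
        const char *path = argc > 1 ? argv[1] : "Marvell_MSU_v4.1.0.2031.zip";

        FILE *f = fopen(path, "rb");
        if (!f) { perror(path); return 1; }

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

        unsigned char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);
        fclose(f);

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int mdlen = 0;
        EVP_DigestFinal_ex(ctx, md, &mdlen);
        EVP_MD_CTX_free(ctx);

        char hex[2 * EVP_MAX_MD_SIZE + 1];
        for (unsigned int i = 0; i < mdlen; i++)
            sprintf(&hex[2 * i], "%02x", md[i]);

        printf("computed: %s\n", hex);
        puts(strcmp(hex, expected) == 0 ? "checksum OK" : "checksum MISMATCH");
        return strcmp(hex, expected) != 0;
    }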

Reboot Requirement:
Reboot is not required after installation for updates to take effect and hardware stability to be maintained.

Installation:

  1. Download the zip file to the target server and unzip it.
  2. Double-click the 'Marvell_MSU_v4.1.0.2031' folder and verify that the following files are extracted:
    • MSUSetup_v4.1.0.2031.exe
    • MSU_userguide.pdf
    • Instruction.txt
  3. Double-click 'MSUSetup_v4.1.0.2031.exe' and follow the installation instructions to install MSU.
  4. After the installation finishes, double-click the 'MarvellTray' icon on the desktop; a web browser window will open.
  5. Enter your Windows account name and password in the 'Username' and 'Password' fields, then click 'Login'.
  6. For more information, click the '?' icon in the upper right corner of the browser to access the user guide.

End User License Agreements:
HPE Software License Agreement v1

Upgrade Requirement:
Optional - Users should update to this version if their system is affected by one of the documented fixes or if there is a desire to utilize any of the enhanced functionality provided by this version.

Initial release of Marvell Storage Utility (MSU) for HPE ProLiant MicroServer Gen10 Server.

Revision History

Version: 4.1.0.2032 (17 Sep 2018)

Upgrade Requirement:
Optional - Users should update to this version if their system is affected by one of the documented fixes or if there is a desire to utilize any of the enhanced functionality provided by this version.

  • An MSU memory leak issue was addressed in Marvell Storage Utility (MSU) update version 4.1.0.2032.

Version: 4.1.0.2031 (15 Jun 2017)

Upgrade Requirement:
Optional - Users should update to this version if their system is affected by one of the documented fixes or if there is a desire to utilize any of the enhanced functionality provided by this version.

Initial release of Marvell Storage Utility (MSU) for HPE ProLiant MicroServer Gen10 Server.


Legal Disclaimer: Products sold prior to the November 1, 2015 separation of Hewlett-Packard Company into Hewlett Packard Enterprise Company and HP Inc. may have older product names and model numbers that differ from current models.

Copyright (c) 2020 Marvell International Ltd.

Overview

Resource virtualization unit (RVU) on Marvell’s OcteonTX2 SOC maps HW resources from the network, crypto and other functional blocks into PCI-compatible physical and virtual functions. Each functional block again has multiple local functions (LFs) for provisioning to PCI devices. RVU supports multiple PCIe SRIOV physical functions (PFs) and virtual functions (VFs). PF0 is called the administrative / admin function (AF) and has privileges to provision RVU functional block’s LFs to each of the PF/VF.

RVU managed networking functional blocks
  • Network pool or buffer allocator (NPA)
  • Network interface controller (NIX)
  • Network parser CAM (NPC)
  • Schedule/Synchronize/Order unit (SSO)
  • Loopback interface (LBK)
RVU managed non-networking functional blocks
  • Crypto accelerator (CPT)
  • Scheduled timers unit (TIM)
  • Schedule/Synchronize/Order unit (SSO): used for both networking and non-networking use cases
Resource provisioning examples
  • A PF/VF with NIX-LF & NPA-LF resources works as a pure network device
  • A PF/VF with CPT-LF resource works as a pure crypto offload device.
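
To make the provisioning model above concrete, the sketch below shows what a resource-attach request from a PF/VF to the AF could look like. It is illustrative only: the structure and field names are hypothetical and are not the exact OcteonTX2 mailbox ABI.

    /* Hypothetical resource-attach request a PF/VF might send to the AF.
     * Not the real mailbox ABI - for illustration only. */
    #include <stdint.h>

    struct rvu_attach_req {
        uint16_t msg_id;      /* hypothetical "attach resources" message id */
        uint16_t pcifunc;     /* requesting PF/VF identifier                */
        uint8_t  npalf : 1;   /* request one NPA LF (buffer pools)          */
        uint8_t  nixlf : 1;   /* request one NIX LF (RQs/SQs/CQs)           */
        uint16_t cptlfs;      /* number of CPT LFs for crypto offload       */
        uint16_t timlfs;      /* number of TIM LFs (scheduled timers)       */
        uint16_t sso;         /* number of SSO group LFs                    */
    };

    /* A pure network device would set npalf = nixlf = 1 and leave the rest
     * zero; a pure crypto offload device would only ask for cptlfs. */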

RVU functional blocks are highly configurable as per software requirements.

Firmware sets up the following before the kernel boots:
  • Enables the required number of RVU PFs based on the number of physical links.
  • The number of VFs per PF is either static or configurable at compile time. Based on the config, firmware assigns VFs to each of the PFs.
  • Also assigns MSI-X vectors to each of the PFs and VFs.
  • These are not changed after kernel boot.

Drivers

The Linux kernel will have multiple drivers registering to different PFs and VFs of RVU. With respect to networking, there will be three flavours of drivers.

Admin Function driver

As mentioned above, RVU PF0 is called the admin function (AF); this driver supports resource provisioning and configuration of the functional blocks. It doesn't handle any I/O. It sets up a few basic things, but most of the functionality is achieved via configuration requests from PFs and VFs.

PFs/VFs communicate with the AF via a shared memory region (mailbox). Upon receiving requests, the AF does resource provisioning and other HW configuration. The AF is always attached to the host kernel, but PFs and their VFs may be used by the host kernel itself, or attached to VMs or to userspace applications like DPDK. So the AF has to handle provisioning/configuration requests sent by any device from any domain.
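
The sketch below is a self-contained toy model of that request/response pattern: both sides operate on the same memory, the requester fills in a message, notifies the other side, and then reads the response back from the same region. All names are hypothetical, and the real driver uses a BAR-mapped mailbox with interrupts rather than a direct function call.

    /* Toy model of the PF/VF <-> AF shared-memory mailbox. Illustrative only. */
    #include <stdint.h>
    #include <stdio.h>

    struct mbox_msg {
        uint16_t id;        /* request identifier                 */
        uint16_t pcifunc;   /* originating PF/VF                  */
        int32_t  rc;        /* response code, filled in by the AF */
    };

    static struct mbox_msg shared_region;   /* stands in for the mailbox memory */

    /* AF side: provision resources for the request and post a response. */
    static void af_process_mbox(struct mbox_msg *msg)
    {
        printf("AF: request id %u from pcifunc 0x%x\n",
               (unsigned)msg->id, (unsigned)msg->pcifunc);
        msg->rc = 0;        /* pretend provisioning succeeded */
    }

    int main(void)
    {
        /* PF side: write the request into the shared region ...           */
        shared_region.id = 1;           /* hypothetical "attach LFs" request */
        shared_region.pcifunc = 0x400;  /* hypothetical PF/VF id             */
        shared_region.rc = -1;

        /* ... ring the AF's doorbell (modelled here as a direct call) ...  */
        af_process_mbox(&shared_region);

        /* ... and read the response back from the same region.            */
        printf("PF: AF responded rc=%d\n", (int)shared_region.rc);
        return 0;
    }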

The AF driver also interacts with the underlying firmware to:
  • Manage physical ethernet links, i.e. CGX LMACs.
  • Retrieve information like speed, duplex, autoneg, etc.
  • Retrieve PHY EEPROM and stats.
  • Configure FEC, PAM modes.
  • etc.
From the pure networking side, the AF driver supports the following functionality:
  • Map a physical link to a RVU PF to which a netdev is registered.
  • Attach NIX and NPA block LFs to RVU PF/VF which provide buffer pools, RQs, SQs for regular networking functionality.
  • Flow control (pause frames) enable/disable/config.
  • HW PTP timestamping related config.
  • NPC parser profile config, basically how to parse pkt and what info to extract.
  • NPC extract profile config, what to extract from the pkt to match data in MCAM entries.
  • Manage NPC MCAM entries, upon request can frame and install requested packet forwarding rules.
  • Defines receive side scaling (RSS) algorithms.
  • Defines segmentation offload algorithms (eg TSO)
  • VLAN stripping, capture and insertion config.
  • SSO and TIM blocks config which provide packet scheduling support.
  • Debugfs support, to check current resource provisioning, the current status of NPA pools, NIX RQs, SQs and CQs, various stats, etc., which helps in debugging issues.
  • And many more.
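
As one concrete example, the MCAM entry management listed above amounts to installing match/action rules on behalf of a PF/VF. A much-simplified, hypothetical rule (not the real MCAM entry layout) could be modelled as:

    /* Hypothetical packet-forwarding rule the AF could install into the
     * NPC MCAM for a PF/VF. Field names are illustrative only. */
    #include <stdint.h>

    struct npc_fwd_rule {
        /* match: fields the NPC parser/extract profiles pulled from the packet */
        uint8_t  dmac[6];         /* destination MAC address to match           */
        uint16_t vlan_id;         /* optional VLAN match, 0 = don't care        */

        /* action: where a matching packet should be delivered */
        uint16_t target_pcifunc;  /* destination RVU PF/VF                      */
        uint16_t rq_index;        /* RQ on that device, overriding RSS when set */
    };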

Physical Function driver

This RVU PF handles I/O, is mapped to a physical ethernet link, and this driver registers a netdev. It supports SR-IOV. As said above, this driver communicates with the AF via a mailbox. To retrieve information about physical links, this driver talks to the AF; the AF gets that info from firmware and responds back, i.e. the PF driver cannot talk to the firmware directly.

Supports ethtool for configuring links, RSS, queue count, queue size, flow control, ntuple filters, dumping the PHY EEPROM, configuring FEC, etc.
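
Because the PF driver registers a regular netdev, generic Linux tooling works against it. As a rough illustration, the program below queries the current and maximum ring sizes via the standard SIOCETHTOOL ioctl; the interface name "eth0" is a placeholder, not something this driver guarantees.

    /* Query RX/TX ring sizes of a netdev via the generic ethtool ioctl. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        const char *ifname = "eth0";   /* placeholder interface name */
        struct ethtool_ringparam ring = { .cmd = ETHTOOL_GRINGPARAM };
        struct ifreq ifr;

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ring;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("SIOCETHTOOL");
            close(fd);
            return 1;
        }

        printf("%s: RX ring %u/%u, TX ring %u/%u (current/max)\n",
               ifname, ring.rx_pending, ring.rx_max_pending,
               ring.tx_pending, ring.tx_max_pending);
        close(fd);
        return 0;
    }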

Virtual Function driver

There are two types of VFs: VFs that share the physical link with their parent SR-IOV PF, and VFs which work in pairs using internal HW loopback channels (LBK).

Type1:
  • These VFs and their parent PF share a physical link and are used for outside communication.
  • VFs cannot communicate with the AF directly; they send a mbox message to the PF and the PF forwards it to the AF. After processing, the AF responds back to the PF and the PF forwards the reply to the VF.
  • From a functionality point of view there is no difference between PF and VF, as the same type of HW resources are attached to both. But the user can configure a few things only from the PF, as the PF is treated as the owner/admin of the link.
Type2:
  • RVU PF0, i.e. the admin function, creates these VFs and maps them to the loopback block’s channels.
  • A set of two VFs (VF0 & VF1, VF2 & VF3, and so on) works as a pair, i.e. packets sent out of VF0 will be received by VF1 and vice versa.
  • These VFs can be used by applications or virtual machines to communicate between themselves without sending traffic outside. There is no switch present in the HW, hence the support for loopback VFs.
  • These communicate directly with the AF (PF0) via mbox.

Except for the I/O channels or links used for packet reception and transmission, there is no other difference between these VF types. The AF driver takes care of the I/O channel mapping, hence the same VF driver works for both types of devices.

Basic packet flow

Ingress

  1. CGX LMAC receives packet.
  2. Forwards the packet to the NIX block.
  3. Then submitted to NPC block for parsing and then MCAM lookup to get the destination RVU device.
  4. NIX LF attached to the destination RVU device allocates a buffer from RQ mapped buffer pool of NPA block LF.
  5. RQ may be selected by RSS or by configuring MCAM rule with a RQ number.
  6. Packet is DMA’ed and driver is notified.
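
A much-simplified, hypothetical view of what the driver sees at step 6 (this is not the real NIX completion-entry layout):

    /* Hypothetical receive-completion entry. Illustrative only. */
    #include <stdint.h>

    struct rx_completion {
        uint64_t buf_addr;     /* DMA address of the buffer taken from the
                                  RQ-mapped NPA pool                      */
        uint16_t pkt_len;      /* bytes DMA'ed into that buffer           */
        uint16_t rq_index;     /* RQ chosen by RSS or by an MCAM rule     */
        uint32_t parse_flags;  /* summary of the NPC parse result         */
    };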

Egress

  1. Driver prepares a send descriptor and submits to SQ for transmission.
  2. The SQ is already configured (by AF) to transmit on a specific link/channel.
  3. The SQ descriptor ring is maintained in buffers allocated from SQ mapped pool of NPA block LF.
  4. NIX block transmits the pkt on the designated channel.
  5. NPC MCAM entries can be installed to divert pkt onto a different channel.
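
A much-simplified, hypothetical send descriptor for step 1 (this is not the real NIX SQE layout):

    /* Hypothetical send descriptor queued on an SQ. Illustrative only. */
    #include <stdint.h>

    struct tx_descriptor {
        uint64_t buf_addr;   /* DMA address of the packet data, allocated
                                from the SQ-mapped NPA pool             */
        uint16_t pkt_len;    /* total bytes to transmit                 */
        uint16_t sq_index;   /* SQ already bound by the AF to a specific
                                link/channel                            */
        uint8_t  offloads;   /* e.g. checksum/TSO flags                 */
    };

    /* After writing the descriptor, the driver rings the SQ doorbell so the
     * NIX block picks it up and transmits on the configured channel. */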