Tuesday, July 31, 2018

Network ports for Site Recovery Manager 6.1

Hi All,

Today I found a good KB article about the SRM ports that are required for a successful implementation and site pairing.

Site Recovery Manager can experience problems if the required network ports are not open.
In a Site Recovery Manager deployment, both the protected site and the recovery site must be able to resolve the vCenter Server instance by name. The correct ports must be open on both sites for uninterrupted communication.

vCenter Server and ESXi Server network port requirements for Site Recovery Manager 6.1

Site Recovery Manager requires certain ports to be open on vCenter Server, Platform Services Controller, and on ESXi Server:

| Default Port | Protocol or Description | Source | Target | Description |
| --- | --- | --- | --- | --- |
| 443 | HTTPS | Site Recovery Manager | vCenter Server | Default SSL web port. |
| 443 | HTTPS | Site Recovery Manager | Platform Services Controller | Traffic from the Site Recovery Manager Server to the local and remote Platform Services Controller. |
| 902 | TCP and UDP | Site Recovery Manager Server on the recovery site | Recovery site ESXi host | Traffic from the Site Recovery Manager Server on the recovery site to ESXi hosts when recovering or testing virtual machines with IP customization, with configured callout commands on recovered virtual machines, or that use raw device mapping (RDM). All NFC traffic for updating or patching the VMX files of virtual machines replicated with vSphere Replication uses this port. |

Site Recovery Manager Server 6.1 network ports

The Site Recovery Manager Server instances on the protected and recovery sites require certain ports to be open.

Note: Site Recovery Manager Server at the recovery site must have NFC traffic access to the target ESXi servers.

| Default Port | Protocol or Description | Source | Target | Endpoints or Consumers |
| --- | --- | --- | --- | --- |
| 443 | HTTPS | Site Recovery Manager | vCenter Server | Default SSL web port for incoming TCP traffic. |
| 443 | HTTPS | Site Recovery Manager | Platform Services Controller | Traffic from the Site Recovery Manager Server to the local and remote Platform Services Controller. |
| 902 | TCP and UDP | Site Recovery Manager Server on the recovery site | Recovery site ESXi host | Traffic from the Site Recovery Manager Server on the recovery site to ESXi hosts when recovering or testing virtual machines with IP customization, with configured callout commands on recovered virtual machines, or that use raw device mapping (RDM). All NFC traffic for updating or patching the VMX files of virtual machines replicated with vSphere Replication uses this port. |
| 1433 | TCP | Site Recovery Manager | Microsoft SQL Server | Site Recovery Manager connectivity to Microsoft SQL Server (for the Site Recovery Manager database). |
| 1521 | TCP | Site Recovery Manager | Oracle Database Server | Site Recovery Manager database connectivity to Oracle. |
| 1526 | TCP | Site Recovery Manager | Oracle Database Server | Site Recovery Manager database connectivity to Oracle. |
| 9086 | HTTPS | vSphere Web Client | Site Recovery Manager | All management traffic to the Site Recovery Manager Server goes to this port, including traffic from external API clients for task automation and the HTTPS interface for downloading the UI plug-in and icons. This port must be accessible from the vCenter Server proxy system. Used by the vSphere Web Client to download the Site Recovery Manager client plug-in. |

Network ports that must be open on Site Recovery Manager and vSphere Replication Protected and Recovery sites

Site Recovery Manager and vSphere Replication require that the protected and recovery sites can communicate.

| Port | Protocol or Description | Source | Target | Endpoints or Consumers |
| --- | --- | --- | --- | --- |
| 31031 | Initial replication traffic | ESXi host | vSphere Replication appliance on the recovery site | From the ESXi host at the protected site to the vSphere Replication appliance at the recovery site. |
| 44046 | Ongoing replication traffic | ESXi host | vSphere Replication appliance on the recovery site | From the ESXi host at the protected site to the vSphere Replication appliance at the recovery site. |
| 8043 | HTTPS | Site Recovery Manager | vSphere Replication appliance on the recovery and protected sites | Management traffic between Site Recovery Manager instances and vSphere Replication appliances. |

Note: Newly configured replications use only port 31031; existing replications continue to use port 44046 until they are reconfigured.
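
Before relying on these paths, it is worth confirming the ports are actually reachable. A minimal check from the protected-site ESXi shell, using a hypothetical appliance address of 192.0.2.10 (ESXi ships with netcat):

    # Verify the vSphere Replication appliance ports are open from the ESXi host
    nc -z 192.0.2.10 31031   # initial replication traffic
    nc -z 192.0.2.10 44046   # ongoing replication traffic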

Site pairing port requirements

| Port | Source | Target | Description |
| --- | --- | --- | --- |
| 9086 | vCenter Server | SRM server on the target site | vCenter and target SRM communication |
| 9086 | SRM server | SRM server on the target site | SRM to SRM communication |
| 443 | SRM | PSC and vCenter | SRM to vCenter communication (local and remote) |
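
A similar spot check works for the pairing ports. Run it from any shell that has the same network path as the source site (the hostnames below are placeholders):

    nc -z srm-target.example.com 9086   # vCenter/SRM to the target-site SRM server
    nc -z psc.example.com 443           # SRM to PSC and vCenter, local and remote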

Wednesday, September 24, 2014

What is a Perennial Reservation?

I had a problem with an MSCS cluster: when I rebooted one of the ESXi hosts, I found that the host stopped responding during boot / device discovery.

I then found that 32 RDMs were mapped to the VM, and these were causing the issue.

What are perennial reservations, and why are they needed? Whenever a LUN participates in an MSCS cluster, the active node owns that device through a permanent SCSI reservation. Whenever you rescan for devices on an ESXi host, or boot an ESXi host, the host tries to query every device it can see, including the devices used for MSCS. I'm not exactly sure what takes place during the query process, but because the MSCS devices already have a permanent SCSI reservation, the ESXi query to the device fails, and it keeps failing until the host decides to move on.
To solve the problem of ESXi trying to query MSCS-owned devices, a per-device flag called Is Perennially Reserved has been introduced. By default this flag is set to false. Setting it to true tells the ESXi host to essentially NOT query the device during rescans or at boot time. VMware KB 1016106 describes the problem and its resolution.
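
The fix from KB 1016106 boils down to two esxcli commands. Here naa.ID is a placeholder for the RDM's device identifier, and the flag has to be set on every ESXi host that can see the LUN:

    # Mark the MSCS RDM LUN as perennially reserved so rescans/boot skip it
    esxcli storage core device setconfig -d naa.ID --perennially-reserved=true

    # Verify: the output should include "Is Perennially Reserved: true"
    esxcli storage core device list -d naa.ID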

Read the KB from VMware, which goes into more detail.



Tuesday, November 12, 2013

VMware vSphere Storage APIs – Array Integration (VAAI)

In a virtualized environment, storage operations traditionally have been expensive from a resource perspective. Functions such as cloning and snapshots can be performed more efficiently by the storage device than by the host.

VMware vSphere® Storage APIs – Array Integration (VAAI), also referred to as hardware acceleration or hardware offload APIs, are a set of APIs that enable communication between VMware vSphere ESXi™ hosts and storage devices. The APIs define a set of "storage primitives" that enable the ESXi host to offload certain storage operations to the array, which reduces resource overhead on the ESXi hosts and can significantly improve performance for storage-intensive operations such as storage cloning, zeroing, and so on. The goal of VAAI is to help storage vendors provide hardware assistance to speed up VMware® I/O operations that are more efficiently accomplished in the storage hardware.

Without VAAI, cloning or migration of virtual machines by the vSphere VMkernel Data Mover involves software data movement: the Data Mover issues I/O to read and write blocks to and from the source and destination datastores. With VAAI, the Data Mover can use the API primitives to offload operations to the array where possible. For example, if the desired operation were to copy a virtual machine disk (VMDK) file from one datastore to another inside the same array, the array would be directed to make the copy completely inside the array. Whenever a data movement operation is invoked and the corresponding hardware offload operation is enabled, the Data Mover first attempts to use the hardware offload. If the hardware offload operation fails, the Data Mover reverts to the traditional software method of data movement.
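
As a quick sanity check, the Data Mover offloads can be inspected (and toggled) through the host's stock advanced settings from the ESXi shell; an Int Value of 1 means the offload is enabled:

    # XCOPY offload used for clone/migration, and WRITE_SAME offload used for zeroing
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit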
In nearly all cases, hardware data movement will perform significantly better than software data movement. It will consume fewer CPU cycles and less bandwidth on the storage fabric. Improvements in performance can be observed by timing operations that use the VAAI primitives and by using esxtop to track values such as CMDS/s, READS/s, WRITES/s, MBREAD/s, and MBWRTN/s of storage adapters during the operation.

In the initial VMware vSphere 4.1 implementation, three VAAI primitives were released. These primitives applied only to block (Fibre Channel, iSCSI, FCoE) storage; there were no VAAI primitives for NAS storage in this initial release. In vSphere 5.0, VAAI primitives for NAS storage and VMware vSphere Thin Provisioning were introduced.
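
Whether a particular device supports these primitives can be checked per device from the ESXi shell; a sketch, with naa.ID standing in for a real device identifier:

    # Reports ATS, Clone (XCOPY), Zero (WRITE_SAME), and Delete (UNMAP) support
    esxcli storage core device vaai status get -d naa.ID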


VAAI Block Primitives

In VMware vSphere VMFS, many operations must establish a lock on the volume when updating a resource. Because VMFS is a clustered file system, many ESXi hosts can share the volume. When one host must make an update to the VMFS metadata, a locking mechanism is required to maintain file system integrity and prevent another host from coming in and updating the same metadata. The following operations require this lock:
1. Acquire on-disk locks 
2. Upgrade an optimistic lock to an exclusive/physical lock 
3. Unlock a read-only/multiwriter lock 
4. Acquire a heartbeat 
5. Clear a heartbeat 
6. Replay a heartbeat 
7. Reclaim a heartbeat 
8. Acquire on-disk lock with dead owner
It is not essential to understand all of these operations in the context of this post; it is sufficient to know that various VMFS metadata operations require a lock.

ATS (Atomic Test and Set) is an enhanced locking mechanism designed to replace the use of SCSI reservations on VMFS volumes when doing metadata updates. A SCSI reservation locks a whole LUN and prevents other hosts from doing metadata updates of a VMFS volume while one host sharing the volume holds a lock. This can lead to various contention issues when many virtual machines are using the same datastore, and it is a limiting factor for scaling to very large VMFS volumes. ATS is a lock mechanism that needs to modify only a disk sector on the VMFS volume. When successful, it enables an ESXi host to perform a metadata update on the volume. This includes allocating space to a VMDK during provisioning, because certain characteristics must be updated in the metadata to reflect the new size of the file. The introduction of ATS addresses the contention issues with SCSI reservations and enables VMFS volumes to scale to much larger sizes.
In vSphere 4.0, VMFS3 used SCSI reservations for establishing the lock, because there was no VAAI support in that release. In vSphere 4.1, on a VAAI-enabled array, VMFS3 used ATS only for operations 1 and 2 listed previously, and only when there was no contention for disk lock acquisitions. VMFS3 reverted to using SCSI reservations if there was a multihost collision when acquiring an on-disk lock using ATS.

In the initial VAAI release, the ATS primitives had to be implemented differently on each storage array, requiring a different ATS opcode depending on the vendor. ATS is now a standard T10 SCSI command and uses opcode 0x89 (COMPARE AND WRITE).
For VMFS5 datastores formatted on a VAAI-enabled array, all the critical-section functionality from operations 1 to 8 is done using ATS. There should no longer be any SCSI reservations on VAAI-enabled VMFS5, and ATS continues to be used even if there is contention. On non-VAAI arrays, SCSI reservations continue to be used for establishing critical sections in VMFS5.
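
Two quick ways to confirm this on a host, sketched from the ESXi shell (datastore1 is a placeholder datastore name):

    # 1 = ATS locking enabled on this host
    esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

    # A VMFS5 volume created on a VAAI-capable array should report "Mode: public ATS-only"
    vmkfstools -Ph -v1 /vmfs/volumes/datastore1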