Thursday, August 15, 2013

Content Deployment


If you ever run into an issue where content deployment is stuck in the "Importing" state for a long time and does not move forward even after manually running the destination timer job, recycling the Timer service on the destination server and then re-running the content deployment timer job on the destination fixes the issue. Make sure the Timer service is recycled while no content deployment jobs are executing.
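A minimal sketch of the fix, assuming SharePoint 2010, where the Timer service is named SPTimerV4 (run on the destination server while no content deployment jobs are executing):

```shell
# Recycle the SharePoint 2010 Timer service on the destination server
net stop SPTimerV4
net start SPTimerV4
# Then re-run the content deployment timer job from Central Administration
```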

Saturday, January 19, 2013

SharePoint No Downtime based deployments

It is possible to deploy solutions to a SharePoint farm with zero downtime by using the -local switch, which deploys the files only to the local server; individual servers can then be taken out of the load-balancer rotation one at a time while the operation is performed.

stsadm -o addsolution -filename "WSPName.wsp" (does not cause an IIS app pool recycle)
stsadm -o deploysolution -allowgac -local -name "WSPName.wsp"
stsadm -o retractsolution -local -name "WSPName.wsp"
stsadm -o deletesolution -name "WSPName.wsp" (does not cause an IIS app pool recycle)
stsadm -o upgradesolution -filename "WSPName.wsp" -name "WSPName.wsp" -allowgac -local
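Putting the switches above together, a rolling deployment might look like the following sketch. The WSP name is a placeholder, and the steps for pulling a server out of rotation depend entirely on your load balancer:

```shell
# One-time, farm-wide: add the solution to the solution store
# (does not recycle app pools)
stsadm -o addsolution -filename "WSPName.wsp"

# Then, one server at a time:
#   1. Take the server out of the load-balancer rotation (HLB-specific step).
#   2. On that server, deploy only to the local machine:
stsadm -o deploysolution -allowgac -local -name "WSPName.wsp"
#   3. Put the server back into rotation and repeat on the next server.
```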

Sunday, January 13, 2013

SP2010 Web Analytics

If you are using a Hardware Load Balancer (HLB) in front of your SharePoint 2010 front-end (FE) servers to offload SSL and have Web Analytics configured on the farm, you may see incorrectly low traffic values in the site collection Web Analytics reports.
Make sure the HLB forwards the client IP address to the FE servers. You can easily verify this by checking the IIS logs to ensure actual client IPs are being logged rather than the HLB's IP address.
If you see the HLB IP address being logged, you need to find a way to have the HLB add the client IP (CIP) to each request it sends to the SharePoint FE as a custom header, and the FE needs the IIS AARHelper module to reassign the custom header back to the client IP. Once this is done, the Web Analytics reports should start showing actual traffic data.
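A quick way to run the IIS log check described above is to list the distinct client IPs recorded; if only one address (the HLB's) shows up under real user load, client IPs are not reaching the FE servers. The sample log and the c-ip field position below are illustrative, so consult the #Fields header of your own logs:

```shell
# Hypothetical IIS log excerpt; in W3C format the c-ip field position
# is defined by the #Fields header line (here it is field 4)
cat > iis_sample.log <<'EOF'
#Fields: date time cs-uri-stem c-ip
2013-01-13 10:00:01 /sites/a/default.aspx 10.1.1.50
2013-01-13 10:00:02 /sites/b/home.aspx 10.1.1.50
EOF

# Print the distinct client IPs, skipping comment/header lines
awk '!/^#/ {print $4}' iis_sample.log | sort -u
```

Here every request carries the same address, which is the symptom to look for: it suggests the logged IP is the load balancer's, not the clients'.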

The root cause of the problem is that when the HLB sends all traffic stamped with its own IP address, the Web Analytics reporting logic treats the incoming requests as spam and deletes them from the reporting database. The rule of thumb we discovered was that if there were more than 1000 requests from a specific client for any URL within a given time span, the deletion logic would kick in.
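The spam-deletion heuristic above can be modeled with a simple per-client request count. This is an illustrative sketch only, not the actual Web Analytics code; the sample data is made up, and the threshold is lowered to 2 here so the example is small (the article's observed threshold was 1000):

```shell
# Hypothetical request log: client IP followed by requested URL
cat > sample.log <<'EOF'
10.0.0.5 /sites/a/default.aspx
10.0.0.5 /sites/a/default.aspx
192.168.1.20 /sites/b/home.aspx
EOF

# Count requests per client IP and print any IP at or above the
# threshold -- these are the clients whose rows the reporting logic
# would discard as spam. With an HLB that masks client IPs, ALL
# traffic collapses onto one IP and trips this rule.
THRESHOLD=2
awk -v t="$THRESHOLD" \
    '{count[$1]++} END {for (ip in count) if (count[ip] >= t) print ip}' \
    sample.log
```

This makes the failure mode clear: the heuristic is reasonable per client, but once the load balancer rewrites every source address to its own, the entire farm's traffic looks like a single abusive client.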