Leverage Azure Policy to flag issues for review with the audit effect, and then use PowerShell scripts to identify and manage unattached disks in Azure Storage.
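As a quick illustration of the PowerShell side, here is a minimal sketch assuming the Az module and an authenticated session (Connect-AzAccount); review the output carefully before deleting anything:

```powershell
# List managed disks that are not attached to any VM
$unattached = Get-AzDisk | Where-Object { $_.ManagedBy -eq $null }

# Report them for review before taking any action
$unattached | Select-Object Name, ResourceGroupName, DiskSizeGB | Format-Table -AutoSize

# Deletion is irreversible -- uncomment only after reviewing the list
# foreach ($disk in $unattached) {
#     Remove-AzDisk -ResourceGroupName $disk.ResourceGroupName -DiskName $disk.Name -Force
# }
```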
The recent announcement that Azure Functions now has preview support for PowerShell was big news for me, and it should be for a lot of others looking to automate standard Azure deployment and operational tasks.
Resource tagging in Azure is a critical practice. In this article I will cover what tags are, the core benefits of tagging, and some general suggestions on how to approach it.
I have had a few questions recently about de-duplicating files within a SharePoint environment, so I set out to research a good solution. Based on past experience I knew that SharePoint identifies duplicates while indexing content, so I expected search to be part of the solution.
Upon starting my journey, I found a couple of forum threads where the question had been asked in the past. The first, “Good De-Dup tools for SharePoint”, linked to a blog post by Gary Lapointe that offered a PowerShell script that can list every library item in a farm. At first glance this seemed neat, but not directly helpful here.
Next I found a blog post with another handy PowerShell script, titled Finding Duplicate Documents in SharePoint using PowerShell. I found this script interesting, albeit dangerous. It iterates through all of your site collections, sites, and libraries, hashes each document, and compares the hashes to find duplicates. However, it only identifies duplicate documents within the same location. The overhead of running this script is quite high, and it gets risky with larger content stores. I would be worried about running it against an environment with hundreds of sites or large numbers of documents.
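To make that concrete, here is a minimal sketch of the hashing technique (my own illustration, not the script from that post); it assumes a SharePoint 2010 farm server and loads every file into memory to hash it, which is exactly where the overhead comes from:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$md5 = [System.Security.Cryptography.MD5]::Create()
$seen = @{}

foreach ($site in Get-SPSite -Limit All) {
    foreach ($web in $site.AllWebs) {
        # Only walk document libraries
        foreach ($list in ($web.Lists | Where-Object { $_.BaseType -eq "DocumentLibrary" })) {
            foreach ($item in $list.Items) {
                # Hash the raw file bytes -- this is the expensive part
                $hash = [BitConverter]::ToString($md5.ComputeHash($item.File.OpenBinary()))
                if ($seen.ContainsKey($hash)) {
                    "$($item.Url) duplicates $($seen[$hash])"
                } else {
                    $seen[$hash] = $item.Url
                }
            }
        }
        $web.Dispose()
    }
    $site.Dispose()
}
```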
Next I found an old MSDN thread named Find duplicate files which had two interesting answers. The first was to query the database directly (a very bad idea); the second was a response by Paul Galvin pointing to the duplicates keyword property, with a suggestion to execute a series of alpha wildcard searches combined with the duplicates keyword. While I had used the duplicates keyword before, I had never thought to use it in this context, so I set out to give it a try.
As I mentioned at the beginning, SharePoint Search does identify duplicate documents. It does this by generating a hash of each document. Unlike the PowerShell option above, where the script hashes the whole file, the search hash seems to exclude the metadata, so items with unique locations, metadata, and document names can still be identified as identical documents.
While testing, though, I quickly discovered that the duplicates property requires the full document URL. This means you would have to execute a recursive search: first get a list of items to work with, and then iterate through each of those items and execute the duplicates search with a query such as duplicates:”[full document url]”.
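Here is a hedged sketch of what a single duplicates query looks like with the server object model, assuming a SharePoint 2010 farm server with a working Search service application; the URLs are placeholders:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$site = Get-SPSite "http://intranet"   # placeholder site URL

# Build a keyword query against the site's search proxy
$query = New-Object Microsoft.Office.Server.Search.Query.KeywordQuery($site)
$query.QueryText = 'duplicates:"http://intranet/Shared Documents/report.docx"'
$query.ResultTypes = [Microsoft.Office.Server.Search.Query.ResultType]::RelevantResults

$results = $query.Execute()
$table = $results[[Microsoft.Office.Server.Search.Query.ResultType]::RelevantResults]
$table.Table.Rows | Format-Table Title, Path -AutoSize

$site.Dispose()
```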
Conceptually there are two paths forward at this point. The first is to try to obtain a list of all items from SharePoint Search. Unfortunately you cannot get a full list of everything; the best you can do is the loose title search that Paul had suggested, something like title:”a*”, which would return all items with an “a” in the title. You would then have to repeat that for every letter and number. One extra challenge is that you will repeatedly process the same items unless you are using FAST Query Language, which has the starts-with operator and lets you write something like title:starts-with(“a”). In addition, since we are only looking for documents, it is a very good idea to add isdocument:true to your query to ensure that only documents are returned. Overall this is a very inefficient process.
An alternative would be to revisit and extend Gary’s original script to execute the duplicates search for each item. The advantage here is that you would guarantee the duplicates search runs only once per item, reducing the total processing and the amount of output to parse. The other change to Gary’s script would be to what is written to the log file, since you would only write out information for items identified as duplicates.
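A rough sketch of that combined approach, reusing the KeywordQuery object from the sketch above and assuming $documentUrls already holds the full URLs produced by the inventory step:

```powershell
foreach ($url in $documentUrls) {
    $query.QueryText = "duplicates:`"$url`""
    $results = $query.Execute()
    $table = $results[[Microsoft.Office.Server.Search.Query.ResultType]::RelevantResults]

    # More than one row back means the document exists in multiple places
    if ($table.Table.Rows.Count -gt 1) {
        "$url has $($table.Table.Rows.Count - 1) duplicate(s)" | Out-File duplicates.log -Append
    }
}
```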
This past week fellow SharePoint MVP Yaroslav Pentsarskyy posted an excellent script for doing bulk updates on UserProfile properties via PowerShell. Bulk Update SharePoint 2010 User Profile Properties is a great script that makes it extremely easy to populate any new fields that are not set to synchronize.
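For context, the general pattern such a script follows with the server object model looks roughly like this (a hedged sketch; “CostCenter” and its value are placeholders, not properties from Yaroslav’s script):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$site = Get-SPSite "http://intranet"   # any site bound to the target User Profile service app
$context = Get-SPServiceContext $site
$upm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($context)

foreach ($profile in $upm) {
    # Set the property and persist the change for each profile
    $profile["CostCenter"].Value = "Unassigned"
    $profile.Commit()
}
$site.Dispose()
```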
My team has been doing a lot of client work promoting the use of User Profiles within customizations or to drive business processes. For a quick overview check out my blog post User Profiles – Driving Business Process, or sit in on my Developing Reusable Workflow Features presentation at SharePoint Saturday NY on July 30th or SharePoint Saturday The Conference 2011, August 11-13th.
This is also another great example of the value PowerShell brings to building and maintaining a high-functioning SharePoint environment.
A few years ago I wrote an article about how to enable and work with the Quota Management features in SharePoint 2007 (click here for the article), which proved to be a popular post. Quota Management is an important topic for SharePoint Governance and overall maintenance of the platform. While the Quota Management features were carried forward into SharePoint 2010, one big feature was left out when it first shipped: the “Storage space allocation” page, also known as the StoreMon.aspx page, which was available to Site Collection administrators from the Site Settings page.
New Storage Metrics
With the release of SharePoint 2010 SP1 (download here) the feature returns, in a much different and vastly improved format. Renamed “Storage Metrics”, the page is a gold mine of information: Administrators can navigate through the content locations on the site and see each item’s Total Size, % of Parent, % of Site Quota, and Last Modified Date. This makes it easy to identify where content is concentrated and to spot exceptionally large lists, libraries, folders, and documents.
There was one aspect of the 2007 version I found helpful that is no longer supported: the ability to view the number of versions of a given document right from the report. In many cases I’ve seen versioning turned on without any limits, and some popular documents might have thousands of versions. The report used to provide a way to find those exceptions so that they could be cleaned up.
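If you need that detail today, PowerShell can fill the gap; here is a minimal sketch that reports documents with excessive versions in a single site collection (the URL and the 100-version threshold are placeholders):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$site = Get-SPSite "http://intranet/sites/teamsite"   # placeholder URL

foreach ($web in $site.AllWebs) {
    foreach ($list in ($web.Lists | Where-Object { $_.EnableVersioning })) {
        foreach ($item in $list.Items) {
            if ($item.Versions.Count -gt 100) {
                "$($web.Url)/$($item.Url): $($item.Versions.Count) versions"
            }
        }
    }
    $web.Dispose()
}
$site.Dispose()
```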
From what I understand, it was removed because it proved extremely resource intensive: the information was gathered in real time, which could cause service stability issues in very large environments. Its return brings a completely revamped gathering process that relies on timer jobs, titled Storage Metrics Processing, resulting in much faster page loads and no risk of crashing the server just by viewing the report. These jobs pull data every 5 minutes, but like all timer jobs the frequency can be adjusted to better meet your needs and environment. For larger environments, it might be a good idea to reduce that frequency to avoid the extra overhead.
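A hedged sketch of adjusting the schedule; the jobs are matched by display name, which may vary slightly by build, and the hourly window is an arbitrary example:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# There is one job per web application; retarget them all to run hourly
Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Storage Metrics*" } |
    Set-SPTimerJob -Schedule "hourly between 0 and 15"
```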
As with the 2007 version, this feature is only available if quotas are enabled. In cases where quotas are not currently being used and limits are not actively managed, the safest bet is to establish a quota that cannot be met. This enables the features without the risk of triggering a warning or locking a site that exceeds the thresholds. Locking the site is the only risk with quotas; there is no risk of data loss.
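For example, something along these lines sets an intentionally unreachable quota on a single site collection (the URL and the 500 GB limit are placeholders; pick values far above any realistic usage for your environment):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Enable quota tracking without any realistic chance of a warning or lock
Set-SPSite -Identity "http://intranet/sites/teamsite" -MaxSize 500GB -WarningSize 400GB
```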
Both Farm and Site Collection Administrators should review this functionality and incorporate it into their content review and cleanup processes.
I do not often link to another blog post as part of one on my site, but this one was too good to pass up. For those of you interested in SharePoint 2010’s Social Features, here is a great diagram of how all of the components and services fit together. It is extremely valuable to understand this information when you embark on an implementation or need to troubleshoot why something is not behaving as expected.
SharePoint Solutions Team Blog