From Learn PowerShell Scripting in a Month of Lunches by Don Jones and Jeffery Hicks
Before you sit down and start coding up a function or a class, you need to do some thinking about its design. We frequently see toolmaking newcomers charge into their code, and before long they’ve made some monstrosity that’s harder to work with than it should be. In this article we’re going to lay out some of the core PowerShell tool design principles and help you stay on the path of Toolmaking Righteousness. We’ll include some concrete examples.
Tools Do One Thing
The Prime Directive for a PowerShell tool is that it does one thing. You can see this in nearly every tool (that is, command) that ships with PowerShell.
Get-Service gets services. It doesn’t stop them. It doesn’t read computer names from a text file. It doesn’t modify them. It does one thing.
This concept is the one we see newcomers violate the most. For example, you’ll see folks build a command that has a -ComputerName parameter for accepting a remote machine name, as well as a -FilePath parameter to alternately read computer names from a file. That’s Dead Wrong, because it means the tool is doing two things instead of one. A correct design that follows the paradigm is to stick with the -ComputerName parameter alone, and let it accept strings (computer names) from the pipeline. You could also feed it names from a file by using a -ComputerName (Get-Content filename.txt) parenthetical construct, or define the -ComputerName parameter to accept input by value:

Get-Content filename.txt | Get-ServerStuff

The Get-Content command reads text files; you shouldn’t duplicate that functionality in your own command without a strong reason.
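To make that concrete, here’s a minimal sketch of a pipeline-friendly parameter declaration. Get-ServerStuff is still a made-up command and the body is a stub; the ValueFromPipeline attribute is what lets piped strings bind to -ComputerName by value:

```powershell
function Get-ServerStuff {
    [CmdletBinding()]
    param(
        # Accept one or more names, either as an argument or from the pipeline
        [Parameter(Mandatory, ValueFromPipeline)]
        [string[]]$ComputerName
    )
    process {
        # The process{} block runs once per object that arrives via the pipeline
        foreach ($computer in $ComputerName) {
            Write-Verbose "Querying $computer"
            # ...query $computer here...
        }
    }
}
```

With that declaration, both Get-ServerStuff -ComputerName ONE,TWO and Get-Content filename.txt | Get-ServerStuff work, without the tool ever knowing about files.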
Let’s explore that “anti-pattern” for a moment. Here’s an example of using a completely fake command (meaning, don’t try this at home) in two different ways:
# Specify three computer names
Get-CompanyStuff -ComputerName ONE,TWO,THREE

# Specify a file containing computer names
Get-CompanyStuff -FilePath ./names.txt
This approach overcomplicates the tool, making it harder to write, harder to debug, harder to test, and harder to maintain. We’d go with this approach to provide the exact same effect in a simpler tool:
# Specify three computer names
Get-CompanyStuff -ComputerName ONE,TWO,THREE

# Specify a file containing computer names
Get-CompanyStuff -ComputerName (Get-Content ./names.txt)

# Or if you were smart in making the tool...
Get-Content ./names.txt | Get-CompanyStuff
Those patterns do a much better job of mimicking how PowerShell’s own core commands work. But let’s explore one more anti-pattern, which is the “but I have the computer names in a specially formatted file that only I know how to read.” Folks will convince themselves that this is okay:
# Specify three computer names
Get-CompanyStuff -ComputerName ONE,TWO,THREE

# Specify a file containing computer names
Get-CompanyStuff -FilePath ./names.dat
Recognize those? Yeah, it’s the same file-reading pattern that we said we don’t like. “But Get-Content can’t read my .DAT file,” the argument goes, “and so I’m not duplicating functionality.” That argument misses the point: the “tools only do one thing” pattern has little or nothing to do with duplicating functionality; it has everything to do with simplicity. We’d use these patterns instead:
# Specify three computer names
Get-CompanyStuff -ComputerName ONE,TWO,THREE

# Specify a file containing computer names
Get-CompanyStuff -ComputerName (Get-SpecialDataFormat ./names.dat)

# Or again, if you were really smart...
Get-SpecialDataFormat ./names.dat | Get-CompanyStuff
The idea here is to take that “special data format reading stuff” and put it into its own standalone tool. Each tool then becomes simpler, easier to test by itself, easier to debug and maintain, and so on. Not to overplay the hammer analogy from earlier, but if we were designing hammers, none of them would have the claw end for removing nails. That’d be a separate tool.
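To show what that standalone reader might look like, here’s a sketch. The .DAT layout is purely hypothetical (we assume one comma-separated record per line, with the computer name in the first field), so treat this as a shape, not a spec:

```powershell
function Get-SpecialDataFormat {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$Path
    )
    # Hypothetical format: one comma-separated record per line,
    # with the computer name in the first field
    Get-Content -Path $Path | ForEach-Object {
        ($_ -split ',')[0].Trim()
    }
}
```

All the knowledge about the special format lives in this one function; every other tool just consumes plain computer-name strings from it.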
Tools are Testable
Another thing to bear in mind is that – if you’re trying to make tools like a real pro – you’re going to want to create automated unit tests for your tools. From a design perspective you want to make sure you’re designing tools that are testable.
One way to do that is to focus on tightly scoped tools that do only one thing. The fewer pieces of functionality a tool introduces, the fewer things and permutations you’ll have to test. The fewer logic branches within your code, the easier it is to thoroughly test your code using automated unit tests.
For example, suppose you decide to design a tool that queries a bunch of remote computers. Within that tool, you might decide to implement a check to make sure each computer is reachable, perhaps by pinging it. That might be a bad idea. First of all, your tool is now doing two things: querying whatever you’re querying, and also pinging computers. That’s two distinct sets of functionality. The pinging part, in particular, is likely to be code you’d use in many different tools, suggesting that it should, in fact, be its own tool. Having the pinging built into the querying tool also makes testing harder, because you have to explicitly write tests to make sure the pinging part works the way it’s supposed to.
An alternate approach is to write that “Test-PCConnection” functionality as a distinct tool. If your “querying” tool is something like “Get-Whatever,” you might concoct a pattern like:
Get-Content computernames.txt | Test-PCConnection | Get-Whatever
The idea is that Test-PCConnection filters out whichever computers aren’t reachable, perhaps logging the failed ones, allowing Get-Whatever to focus on its one job of querying something. Both tools become easier to test independently, because each has a tightly scoped set of functionality.
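Here’s one way such a filtering tool might be sketched, using Test-Connection’s -Quiet switch (which returns $true or $false instead of ping result objects):

```powershell
function Test-PCConnection {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [string[]]$ComputerName
    )
    process {
        foreach ($computer in $ComputerName) {
            # -Quiet makes Test-Connection return a simple boolean
            if (Test-Connection -ComputerName $computer -Count 1 -Quiet) {
                $computer    # emit reachable names back to the pipeline
            }
            else {
                Write-Warning "$computer is unreachable"
            }
        }
    }
}
```

Because it emits plain strings, anything downstream that accepts computer names from the pipeline, like our hypothetical Get-Whatever, works unchanged.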
TIP – Having testable tools is a side effect of having tools that do only one thing. If you’re careful with your tool design and create tightly scoped tools, you get all the benefits of more-testable code for free.
You also want to avoid building functionality into your tools that is difficult to test. For example, you might decide to implement some error logging in a tool. That’s great – but if that logging goes to a SQL Server database, it’s going to be trickier to test and ensure that the logging is working as desired. Logging to a file might be easier, because a file is easier to check. Easier still is writing a separate tool that handles logging. You could then test that tool independently, and use it within your other tools. This gets back to the idea of having each tool do one thing, and one thing only, as a good design pattern.
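As a sketch of that last idea, a standalone logging tool could be as simple as the following. The name Write-GloboLog, its parameters, and the default log path are all our invention:

```powershell
function Write-GloboLog {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$Message,

        [string]$Path = "$env:TEMP\globo.log"
    )
    # Prefix each entry with a sortable timestamp; Add-Content appends to the file
    "$(Get-Date -Format s)  $Message" | Add-Content -Path $Path
}
```

Your other tools just call Write-GloboLog 'Something failed', and a unit test for the logger only has to inspect the file’s contents. If you later move logging to SQL Server, you change one tool, not all of them.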
Tools are Flexible
You want to design tools that can be used in a variety of scenarios. This often means wiring up parameters to accept pipeline input. For example, suppose you write a tool named Set-MachineStatus that changes some setting on a computer. You might specify a -ComputerName parameter to accept computer names. Will it accept one computer name, or many? Where will those computer names come from? The correct answers are “always assume there’ll be more than one, if you can” and “don’t worry about where they come from.” You want to enable, from a design perspective, a variety of approaches.
It can help to sit down and write some examples of using your command that you intend to work. These can become help file examples later, but in the design stage they can help ensure you’re designing to allow all of these. For example, you might want to support these usage patterns:
Get-Content names.txt | Set-MachineStatus
Get-ADComputer -Filter * | Select -Expand Name | Set-MachineStatus
Get-ADComputer -Filter * | Set-MachineStatus
Set-MachineStatus -ComputerName (Get-Content names.txt)
That third example is going to require some careful design, because you’re not going to be able to pipe an AD computer object to the same -ComputerName parameter that also accepts a String object from Get-Content! You may have identified a need for two parameter sets, perhaps one using -ComputerName <string> and another using -InputObject <ADComputer>, to accommodate both scenarios. Now, creating two parameter sets makes the coding, and the automated unit testing, a bit harder – and you’ll need to decide whether the tradeoff is worth it. Will that third example be used frequently enough to justify the extra coding and test development? Or is it a rare enough scenario that you can exclude it, and instead rely on the similar second example?
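If you decided the tradeoff was worth it, the two parameter sets might be declared roughly like this. The ADComputer type assumes the ActiveDirectory module is loaded, and the function body is a stub:

```powershell
function Set-MachineStatus {
    [CmdletBinding(DefaultParameterSetName = 'ByName')]
    param(
        # Plain strings: from the command line, a file, or the pipeline
        [Parameter(Mandatory, ValueFromPipeline, ParameterSetName = 'ByName')]
        [string[]]$ComputerName,

        # AD computer objects piped straight from Get-ADComputer
        [Parameter(Mandatory, ValueFromPipeline, ParameterSetName = 'ByObject')]
        [Microsoft.ActiveDirectory.Management.ADComputer[]]$InputObject
    )
    process {
        # Normalize both parameter sets down to plain name strings
        $names = if ($PSCmdlet.ParameterSetName -eq 'ByObject') {
            $InputObject | ForEach-Object Name
        }
        else {
            $ComputerName
        }
        foreach ($name in $names) {
            Write-Verbose "Setting status on $name"
            # ...do the actual work against $name here...
        }
    }
}
```

PowerShell picks the parameter set by the type of the incoming object, so both Get-Content names.txt | Set-MachineStatus and Get-ADComputer -Filter * | Set-MachineStatus bind correctly.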
The point is that every design decision you make has downstream impact on your tool’s code, its unit tests, and so on. It’s worth thinking through those decisions up front, which is why it’s called the design phase!
Tools Look Native
Finally, be careful with tool and parameter names. Tools should always adopt the standard PowerShell verb-noun pattern, and should use only the most appropriate verb from the list returned by Get-Verb. Microsoft also publishes that list online (https://msdn.microsoft.com/en-us/library/ms714428.aspx), and the online list includes explanations, along with incorrect variations to avoid, that you can use to check yourself. Don’t beat yourself up too hard over fine distinctions between approved verbs, like the difference between Get and Read. If you check out that website, you’ll realize that Get-Content should probably have been Read-Content – likely a distinction Microsoft came up with after Get-Content was already in the wild.
We also recommend you get in the habit of using a short prefix on your command’s noun. For example, if you work for Globomantics, Inc., then you might design commands named Get-GloboSystemStatus rather than Get-SystemStatus. The prefix helps prevent your command name from conflicting with those written by other people and it makes it easier to discover and identify commands and tools created for your organization.
A Note on Patterns – Don’t ever forget that the existing commands, particularly the core ones authored by the PowerShell team at Microsoft, represent their vision for how PowerShell works. Break with that vision at your own peril!
Parameter names should also follow native PowerShell patterns. Whenever you need a parameter, take a look at a bunch of native PowerShell commands and see what parameter name they use for similar purposes. For example, if you need to accept computer names, you’d use -ComputerName (notice it’s singular!) and not some variation like -MachineName. If you need a filename, it’s usually -Path on most native commands.
The Verb Quandary
One area where you can get a bit wound up is in choosing the right verb for your command name. Honestly, Microsoft probably has too many verbs to choose from, and although we’re sure someone in the company had a clear idea of the differences between them all, that hasn’t always been well communicated to the PowerShell public. For example, if you’re writing a command that retrieves information from a SQL Server database, is the command name Get-MyWhateverData, or is it Read-MyWhateverData? The company offers some guidance, stating, “The Get verb is used to retrieve a resource, such as a file. The Read verb is used to get information from a source, such as a file.” This implies that Get is used to get a file, meaning an object representing the file itself, and Read is used to retrieve the contents of the file. Except that Get-Content is a thing; they didn’t even take their own advice.
Our advice? Do what seems to be the most consistent with whatever is already in PowerShell. If you’re truly stuck, post a question in the forums at Powershell.org to get a little feedback from experienced pros.
That’s all for now.
If you’re interested in learning PowerShell Scripting and Toolmaking while you eat your lunch each day (or whenever you have an hour, here and there), check out Learn PowerShell Scripting in a Month of Lunches on liveBook.