Wednesday, 4 January 2017

Design pattern for buffering pipeline input to a PowerShell cmdlet

I occasionally encounter situations where it makes sense to support pipeline input to a cmdlet, but where the operations I wish to perform (e.g. database access) are better carried out in batches of a sensible size.

A typical way to achieve this appears to be something like the following:

function BufferExample {
<#
.SYNOPSIS
Example of filling and using an intermediate buffer.
#>
[CmdletBinding()]
param(
    [Parameter(ValueFromPipeline)]
    $InputObject
)

BEGIN {
    # Pre-size the ArrayList; its Capacity (10) doubles as the batch size.
    $Buffer = New-Object System.Collections.ArrayList(10)
    function _PROCESS {
        # Do something with a batch of items here.
        Write-Output "Element 1 of the batch is $($Buffer[1])"
        # Then empty the buffer.
        $Buffer.Clear()
    }
}

PROCESS {
    # Accumulate the buffer, then process it once it reaches capacity.
    [void]$Buffer.Add($InputObject)
    if ($Buffer.Count -eq $Buffer.Capacity) {
        _PROCESS
    }
}

END {
    # The buffer may still be partially filled, so process the remainder.
    if ($Buffer.Count -gt 0) {
        _PROCESS
    }
}
}
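
As a quick sanity check, with the capacity of 10 used above, piping 25 integers through the function should emit two full batches from PROCESS and one partial batch from END:

1..25 | BufferExample
# The first element of the batch is 1
# The first element of the batch is 11
# The first element of the batch is 21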

Is there a less "boilerplate" way to do this?

One method may be to write the function I call "_PROCESS" here so that it accepts array arguments but not pipeline input, and then make the cmdlet exposed to the user a proxy function built to buffer the input and pass the buffer on, as described in Proxy commands.
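
A minimal sketch of that split, with Invoke-BatchWork standing in for the array-accepting version of "_PROCESS" and Invoke-BufferedBatchWork standing in for the user-facing wrapper (both names are made up for illustration, and a hand-written wrapper is shown rather than a generated proxy command):

function Invoke-BatchWork {
    # Worker: takes a whole batch as an array argument, no pipeline input.
    param([object[]]$Batch)
    Write-Output "The first element of the batch is $($Batch[0])"
}

function Invoke-BufferedBatchWork {
    # Wrapper exposed to the user: buffers pipeline input and forwards full batches.
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline)]
        $InputObject,
        [int]$BatchSize = 10
    )
    BEGIN { $Buffer = New-Object System.Collections.ArrayList($BatchSize) }
    PROCESS {
        [void]$Buffer.Add($InputObject)
        if ($Buffer.Count -eq $BatchSize) {
            Invoke-BatchWork -Batch $Buffer.ToArray()
            $Buffer.Clear()
        }
    }
    END {
        if ($Buffer.Count -gt 0) { Invoke-BatchWork -Batch $Buffer.ToArray() }
    }
}

A wrapper generated with [System.Management.Automation.ProxyCommand]::Create would still need the buffering logic added by hand, so this mostly relocates the boilerplate rather than removing it.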

Alternatively, I could dot-source dynamic code in the body of the cmdlet I wish to write to support this functionality; however, this seems error-prone and potentially hard to debug and understand.
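
For illustration only, one reading of that idea keeps the shared buffering logic in a scriptblock and dot-sources it from the PROCESS block. This assumes the scriptblock and the cmdlet are defined in the same script scope, and the names $BufferAndFlush and BufferExample2 are hypothetical:

# Shared buffering logic; dot-sourcing runs it in the calling function's scope,
# so it can see that function's $Buffer, $InputObject and _PROCESS.
$BufferAndFlush = {
    [void]$Buffer.Add($InputObject)
    if ($Buffer.Count -eq $Buffer.Capacity) {
        _PROCESS
    }
}

function BufferExample2 {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline)]
        $InputObject
    )
    BEGIN {
        $Buffer = New-Object System.Collections.ArrayList(10)
        function _PROCESS {
            Write-Output "The first element of the batch is $($Buffer[0])"
            $Buffer.Clear()
        }
    }
    PROCESS { . $BufferAndFlush }
    END { if ($Buffer.Count -gt 0) { _PROCESS } }
}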
