Archive for the 'Misc' Category

Creating a PowerShell script that can install itself as module

PowerShell advanced functions are a lightweight, yet pretty powerful way to extend the set of commands available in a PowerShell session. Advanced functions look and feel almost exactly like proper cmdlets, but they are written in PowerShell and are therefore quick to develop.

By default, advanced functions are ephemeral though: If you run a script containing an advanced function, that function is going to be available for the rest of the PowerShell session — after that, it is gone. To make an advanced function available permanently — like a cmdlet — you have to wrap it in a PowerShell module and install that module.

A module consists of at least two files:

  • a script module, which is a PowerShell script containing the advanced functions. Unlike a regular PowerShell script, it has to use the file suffix .psm1.
  • a module manifest (*.psd1) that defines some metadata for the module.
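
For illustration, a minimal pair of module files matching the example used later in this post might look roughly like this (file and function names are taken from the example output below; the manifest is a sketch, not the one the script actually generates):

# JP.TemplateModule.psm1 -- the script module containing the advanced function
function Invoke-TemplateTest {
    [CmdletBinding()]
    param()

    Write-Host "This is a test function"
}

# JP.TemplateModule.psd1 -- the module manifest describing the module
@{
    RootModule        = 'JP.TemplateModule.psm1'
    ModuleVersion     = '1.0'
    FunctionsToExport = @('Invoke-TemplateTest')
}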

If you have a NuGet repository, you can publish your module there to make it easily installable across your environment. But what if you do not have a NuGet repository you could publish the module to, but still want to make it easily installable for other users?

Installing without NuGet

Installing a module entails taking the .psm1 and .psd1 files and copying them to a folder underneath [ProgramFiles]\WindowsPowerShell\Modules\ (to make the module visible globally) or [MyDocuments]\WindowsPowerShell\Modules (for the current user only). That process is easy enough, but already adds a good amount of friction:

  • Comprising multiple files, the module is not as easy to download or copy as a plain script file anymore, and you might have to package the files in a Zip archive.
  • Asking users to copy the files to the right folder is rarely an option, so you might have to create an extra installer script or package to take care of this.

As it turns out, there is a way to have your cake and eat it too — to take advantage of advanced functions and modules while still being able to distribute everything as a single script file. The idea is to create a Powershell script that, when run, installs itself permanently as a PowerShell module.

See powershell-install-as-module/Install-Template.ps1 on GitHub for an example of such a script.

The script can be run in three modes. If you dot-source it without arguments, the script will make the advanced functions available to the current session:

PS> . .\Install-Template.ps1

Name              Value
----              -----
ModuleVersion     1.0
FunctionsToExport {Invoke-TemplateTest}
RootModule        JP.TemplateModule.psm1

Advanced functions added to current session.
Use -Install to add functions permanently.

PS> Invoke-TemplateTest
This is a test function

If you run .\Install-Template.ps1 -Install CurrentUser, the script generates a .psm1 and .psd1 file off of itself and saves them to [MyDocuments]\WindowsPowerShell\Modules\JP.TemplateModule. This causes the advanced functions to permanently become available for the current user:

PS> .\Install-Template.ps1 -Install CurrentUser

Name              Value
----              -----
ModuleVersion     1.0
FunctionsToExport {Invoke-TemplateTest}
RootModule        JP.TemplateModule.psm1

Advanced functions added to current session.
Advanced functions installed to C:\Users\jpassing\Documents\WindowsPowerShell\Modules\

In a new session:

PS> Invoke-TemplateTest
This is a test function

Finally, if you run .\Install-Template.ps1 -Install LocalMachine, the script generates a .psm1 and .psd1 file off of itself and saves them to [ProgramFiles]\WindowsPowerShell\Modules\JP.TemplateModule, causing the advanced functions to become visible to everyone.
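
To give an idea of how the self-installation can work, here is a condensed, hypothetical sketch of the approach; the actual logic lives in Install-Template.ps1 on GitHub, and parameter handling as well as error checking are omitted here:

param(
    [ValidateSet('CurrentUser', 'LocalMachine')]
    [string] $Install
)

function Invoke-TemplateTest {
    [CmdletBinding()]
    param()

    Write-Host "This is a test function"
}

if ($Install) {
    # Pick the per-user or the machine-wide module location.
    $modulesFolder = if ($Install -eq 'CurrentUser') {
        Join-Path ([Environment]::GetFolderPath('MyDocuments')) 'WindowsPowerShell\Modules'
    } else {
        Join-Path $env:ProgramFiles 'WindowsPowerShell\Modules'
    }

    $moduleFolder = Join-Path $modulesFolder 'JP.TemplateModule'
    New-Item -ItemType Directory -Path $moduleFolder -Force | Out-Null

    # Copy this very script as the script module (.psm1)...
    Copy-Item $PSCommandPath (Join-Path $moduleFolder 'JP.TemplateModule.psm1')

    # ...and generate a matching manifest (.psd1) next to it.
    New-ModuleManifest `
        -Path (Join-Path $moduleFolder 'JP.TemplateModule.psd1') `
        -RootModule 'JP.TemplateModule.psm1' `
        -ModuleVersion '1.0' `
        -FunctionsToExport 'Invoke-TemplateTest'

    Write-Host "Advanced functions installed to $modulesFolder"
}

When the script is dot-sourced without -Install, the if block is skipped and the function is merely defined in the current session, which corresponds to the first mode shown above.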


Runtime Code Modification Explained, Part 1: Dealing With Memory

Runtime code modification, or self-modifying code as it is often referred to, has been used for decades — to implement JITters, to write highly optimized algorithms, or to do all kinds of other interesting stuff. Writing such code has never been really easy: it requires a solid understanding of machine code, and it is easy to screw up. What is not so well known, however, is that writing such code has actually become harder over the years, at least on the IA-32 platform: Comparing the 486 and current Core architectures, it becomes obvious that Intel, in order to allow more advanced CPU-internal optimizations, has weakened certain guarantees made by the CPU, which in turn requires the programmer to pay more attention to certain details.

Looking around on the web, there are plenty of code snippets and example projects that make use of self-modifying code. Without finger-pointing at specific resources, it is, however, safe to assume that a significant (and I mean significant!) fraction of these examples fail to address all potential problems related to runtime code modification. As I showed a while ago, even Detours, a well-done, widely recognized, and widely used library relying on runtime code modification, has its issues:

Adopting the nomenclature suggested by the Intel processor manuals, code writing data to memory with the intent of having the same processor execute this data as code is referred to as self-modifying code. On SMP machines, it is possible for one processor to write data to memory with the intent of having a different processor execute this data as code. This process is referred to as cross-modifying code. I will jointly refer to both practices as runtime code modification.

Memory Model

The easiest part of runtime code modification is dealing with the memory model. In order to implement self-modifying or cross-modifying code, a program must be able to address the regions of memory containing the code to be modified. Moreover, due to memory protection mechanisms, overwriting code may not be trivially possible.

The IA-32 architecture offers three memory models: the flat, the segmented, and the real-mode memory model. Current operating systems like Windows and Linux rely on the flat memory model, so I will ignore the other two.

Whenever the CPU fetches code, it addresses memory relative to the segment mapped by the CS segment register. In the flat memory model, the CS segment register, which refers to the current code segment, is always set up to map to linear address 0. In the same manner, the data and stack segment registers (DS, SS) are set up to refer to linear address 0.

It is worth mentioning that AMD64 has retired the use of segmentation; the segment bases for the code and data segments are therefore always treated as 0.

Given this setup, code can be accessed and modified on IA-32 as well as on AMD64 in the same manner as data. Easy-peasy.
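
As a small illustration (user-mode C on Windows, with function and variable names made up for this example), reading the first bytes of a function through an ordinary data pointer works just like reading any other memory:

#include <stdio.h>

static int Answer(void)
{
    return 42;
}

int main(void)
{
    /* In the flat memory model, code and data segments both map to linear
       address 0, so the code bytes of Answer() are reachable through a
       plain data pointer. */
    const unsigned char* p = (const unsigned char*)(void*)&Answer;

    printf("First bytes of Answer: %02x %02x %02x %02x\n",
           p[0], p[1], p[2], p[3]);
    return 0;
}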

Memory Protection

One of the features enabled by the use of paging is the ability to enforce memory protection. Each page can specify restrictions on which operations are allowed to be performed on the memory of the respective page.

In the context of runtime code modification, memory protection is of special importance as memory containing code usually does not permit write access, but rather read and execute access only. A prospective solution thus has to provide a means to either circumvent such write protection or to temporarily grant write access to the required memory areas.

As other parts of the image are write-protected as well, memory protection equally applies to approaches that modify non-code parts of the image, such as the Import Address Table. That is why the call to VirtualProtect is necessary when patching the IAT.

Programs using runtime code modification often do not restrict themselves to changing existing code but rather generate additional code. Assuming Data Execution Prevention has been enabled, it is thus vital for such approaches to work properly that any code generated is placed into memory regions that grant execute access. While user-mode implementations can rely on a feature of the RTL heap (i.e. passing HEAP_CREATE_ENABLE_EXECUTE to RtlCreateHeap) for allocating executable memory, no comparable facility exists in kernel mode — a potential instrumentation solution thus has to come up with a custom allocation strategy.
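
As a minimal user-mode sketch of the "temporarily grant write access" approach (PatchCode and its parameters are made up for illustration, and error handling is reduced to the bare minimum):

#include <windows.h>
#include <string.h>

/*
 * Overwrite patchSize bytes of code at pCode with the bytes in patch,
 * temporarily lifting the write protection of the affected page(s).
 */
BOOL PatchCode(void* pCode, const void* patch, SIZE_T patchSize)
{
    DWORD oldProtect;

    if (!VirtualProtect(pCode, patchSize, PAGE_EXECUTE_READWRITE, &oldProtect))
    {
        return FALSE;
    }

    memcpy(pCode, patch, patchSize);

    /* Restore the original protection. */
    VirtualProtect(pCode, patchSize, oldProtect, &oldProtect);

    /* Be conservative and discard any stale instructions. */
    FlushInstructionCache(GetCurrentProcess(), pCode, patchSize);

    return TRUE;
}

For newly generated code, a dedicated executable heap can be obtained in user mode via HeapCreate(HEAP_CREATE_ENABLE_EXECUTE, 0, 0); as noted above, kernel mode offers no comparable shortcut.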

Jump distances

Whenever code is being generated, odds are that there are branching instructions involved. Depending on where memory for the new code has been allocated and where the branch target falls, the offset between the branching instruction itself and the jump target may be of significant size. In such cases, the software has to make sure that the branch instruction chosen does in fact support offsets at least as large as required for the individual purpose. This sounds trivial, but it is not: Software that overwrites existing code with a branch may face severe limitations with respect to how many bytes the branch instruction may occupy — if, for example, there are less than 5 bytes of space (assuming IA-32), a near jump with a 32-bit displacement cannot be used. To get by with a 2-byte short jump, however, the newly allocated code better be near.
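
To make the size constraint concrete, here is a sketch (names made up for illustration) of emitting the 5-byte near jump (E9 rel32) while verifying that the required displacement actually fits into its 32-bit operand, a check that becomes relevant once newly allocated code may end up far away from the patch site (e.g. on AMD64):

#include <windows.h>
#include <stdint.h>
#include <string.h>

/*
 * Write a 5-byte near jump (E9 rel32) at 'from', targeting 'to'. Assumes
 * that the 5 bytes at 'from' have already been made writable (see the
 * PatchCode sketch above).
 */
BOOL WriteNearJump(uint8_t* from, const uint8_t* to)
{
    /* The displacement is relative to the end of the 5-byte instruction. */
    intptr_t displacement = (intptr_t)to - (intptr_t)(from + 5);

    if (displacement > INT32_MAX || displacement < INT32_MIN)
    {
        /* Target out of reach for a rel32 jump: allocate the new code
           closer to the patch site or use a different branching scheme. */
        return FALSE;
    }

    from[0] = 0xE9;

    {
        int32_t rel32 = (int32_t)displacement;
        memcpy(from + 1, &rel32, sizeof(rel32));
    }

    return TRUE;
}

On IA-32 itself, a rel32 displacement reaches the entire 4 GB address space; with less than 5 bytes of room, only the 2-byte short jump (EB rel8) with its reach of roughly ±127 bytes remains, which is why the target better be close by.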

Further safety concerns will be discussed in Part 2 of this series of posts.

cfix finished 2nd in ATI’s Automation Honors Awards, surpassed only by JUnit

Along with JUnit, JWebUnit, NUnit, and SimpleTest, cfix was one of the nominees for the Automated Testing Institute’s Automation Honors Award 2009 in the category Best Open Source Unit Automated Test Tool. A few days ago, the results were published and cfix finished second, surpassed only by JUnit, which finished first (no real surprise there). If you are interested, there is a video in which the results are presented.

Overview on Designing High-Performance Windows Applications

Back in 2008, the Windows Server Performance Team Blog, which I came across recently, ran a series of posts on Designing Applications for High Performance.

If you are interested in developing server-side applications for Windows, these articles are definitely worth reading.

Vote for cfix

The Automated Testing Institute has selected cfix as one of the finalists for the Automation Honors award. The winners of the award will be highlighted in a Special December Edition of the Automated Software Testing Magazine.

If you are a cfix user, be sure to vote for cfix here.

And by the way, I think The Grinder, which is a really neat web performance testing framework, also deserves being voted for…

More Context Menu Handlers for Everyday Use

Although Windows Explorer may actually not be the brightest spot of Windows, it is still, for most users, among the most often used programs. Customizing it to speed up certain tasks is thus a natural desire.

A while ago, I wrote about how to extend the context menu by new commands that allow MSI packages to be installed/uninstalled with logfiles being created. The registry entries I used were:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Msi.Package\shell\LoggedInstall]
@="&Logged Install"

[HKEY_CLASSES_ROOT\Msi.Package\shell\LoggedInstall\command]
@="msiexec.exe /l* \"%1-install.log\" /i \"%1\" %*"

[HKEY_CLASSES_ROOT\Msi.Package\shell\LoggedUninstall]
@="L&ogged Uninstall"

[HKEY_CLASSES_ROOT\Msi.Package\shell\LoggedUninstall\command]
@="msiexec.exe /l* \"%1-uninstall.log\" /x \"%1\" %*"

[HKEY_CLASSES_ROOT\Msi.Package\shell\runas\command]
@="msiexec.exe /l* \"%1.log\" /i \"%1\" %*"

While I do not use Windows Installer every day, I am a heavy user of cmd.exe command prompts. Another set of custom verbs I use on my machines therefore deals with opening command line windows. Getting Windows Explorer to open a “normal” command prompt using the context menu is not hard, and it has been demonstrated in various places. The idea becomes truly powerful, though, when the commands are specialized to open special kinds of command windows:

  • A plain command prompt
  • An elevated command prompt (using elevate.exe)
  • A WDK command prompt (WLH-chk)
  • A WDK command prompt (WLHA64-chk)
  • A Visual Studio 2005 command prompt
  • etc …

To distinguish the different types of consoles, I like to use different colors — the Visual Studio command prompt is white/green, the elevated prompt is green/blue, and so on. The following registry script puts it all together (mind the static paths):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\Open DDK Console here (WLH-chk)]
@="Open DD&K Console here (WLH-chk)"

[HKEY_CLASSES_ROOT\Directory\shell\Open DDK Console here (WLH-chk)\command]
@="C:\\Windows\\System32\\cmd.exe /k C:\\WinDDK\\6000\\bin\\setenv.bat C:\\WinDDK\\6000\\ chk WLH && cd /D %1 && color 1f"

[HKEY_CLASSES_ROOT\Directory\shell\Open DDK Console here (WLHA64-chk)]
@="Open DD&K Console here (WLHA64-chk)"

[HKEY_CLASSES_ROOT\Directory\shell\Open DDK Console here (WLHA64-chk)\command]
@="C:\\Windows\\System32\\cmd.exe /k C:\\WinDDK\\6000\\bin\\setenv.bat C:\\WinDDK\\6000\\ chk AMD64 && cd /D %1 && color 1f"

[HKEY_CLASSES_ROOT\Directory\shell\Open Default Console here]
@="Open Default Conso&le here"

[HKEY_CLASSES_ROOT\Directory\shell\Open Default Console here\command]
@="cmd.exe /K \"title %1 && cd /D %1\""

[HKEY_CLASSES_ROOT\Directory\shell\Open Elevated Console here]
@="Open Ele&vated Console here"

[HKEY_CLASSES_ROOT\Directory\shell\Open Elevated Console here\command]
@="d:\\bin\\elevate.exe /K \"title %1 && color 1a && cd /D %1\""

[HKEY_CLASSES_ROOT\Directory\shell\Open VS.Net 2005 Console here]
@="Open VS.Net 200&5 Console here"

[HKEY_CLASSES_ROOT\Directory\shell\Open VS.Net 2005 Console here\command]
@="cmd.exe /K \"cd /D %1 && \"C:\\Program Files (x86)\\Microsoft Visual Studio 8\\VC\\vcvarsall.bat\" && color 2f\""

Finally, if you perform backups to the cloud from time to time and do not want to upload unencrypted data, or occasionally need to encrypt specific files for other reasons, it may also be practical to have two GPG commands in your context menu — one to (symmetrically) encrypt, and one to decrypt:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.gpg]
@="GpgFile"

[HKEY_CLASSES_ROOT\GpgFile]

[HKEY_CLASSES_ROOT\GpgFile\shell]

[HKEY_CLASSES_ROOT\GpgFile\shell\Decrypt]
@="Decrypt"

[HKEY_CLASSES_ROOT\GpgFile\shell\Decrypt\command]
@="\"c:\\Program Files (x86)\\GNU\\GnuPG\\gpg.exe\" -d -i -o %1.plain %1"

[HKEY_CLASSES_ROOT\*\shell\GpgSymmetricEncrypt]
@="Encrypt with GPG (symmetric)"

[HKEY_CLASSES_ROOT\*\shell\GpgSymmetricEncrypt\command]
@="\"c:\\Program Files (x86)\\GNU\\GnuPG\\gpg.exe\" -c %1"

Bryan Cantrill on Real-World Concurrency

Browsing through ACM Queue’s archives, I came across the article Real-World Concurrency by Bryan Cantrill (who, by the way, is one of the inventors of DTrace) and Jeff Bonwick (Issue 5/2008). The article provides a nice summary of actual challenges and best practices for systems programming in a multithreaded/shared-memory environment. Worth reading.


About me

Johannes Passing lives in Berlin, Germany and works as a Solutions Architect at Google Cloud.

While mostly focusing on Cloud-related stuff these days, Johannes still enjoys the occasional dose of Win32, COM, and NT kernel mode development.

He is also the author of cfix, a C/C++ unit testing framework for Win32 and NT kernel mode, Visual Assert, a unit testing add-in for Visual Studio, and NTrace, a dynamic function boundary tracing toolkit for Windows NT/x86 kernel/user mode code.

Contact Johannes: jpassing (at) hotmail com

LinkedIn Profile
Xing Profile
Github Profile