Windows 7 Task Scheduler: “The user account does not have permission to run this task”

14 02 2012

I was encountering a problem with the Windows 7 Task Scheduler. I had a task configured, and it was running correctly at the specified time. But whenever I would open the Task Scheduler and try to run the task manually, it would fail with the error in this post’s title: “The user account does not have permission to run this task.”

I had no idea what this error meant. It could’ve meant that the user account that the task was configured to run under didn’t have permission to run the task — but that didn’t make sense, because the task ran fine at its scheduled time. It could’ve meant that the user account I was using didn’t have permission to run the task — but I was running as a system admin. I spent a while searching Google, and while I found people talking about the error, I couldn’t find any useful information about what it meant or how to fix it. Finally, I whipped out my old friend ProcMon, which helped me see what was happening:

The Windows 7 Task Scheduler stores tasks as individual XML files in the directory C:\Windows\System32\Tasks. This task in particular had been created by a script using the Schtasks.exe utility. The way Schtasks had configured the permission on the task file was very strange — it had granted the Administrators group all permissions except Execute.

Oddly enough, you need to have Execute permission on the task file in order to run it. This can be edited through the Windows UI, or from the command line by running:

cacls "C:\Windows\System32\Tasks\Task Name" /E /G Administrators:F

Naturally, you’ll need to replace “Task Name” with the actual name of the task, and Administrators with the user or group to grant access. (The /E switch tells cacls to edit the existing ACL instead of replacing it outright, which would wipe out everyone else’s access.)
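
Alternatively, the newer icacls utility, which supersedes cacls on Windows 7, should also do the job:

icacls "C:\Windows\System32\Tasks\Task Name" /grant Administrators:F

Unlike cacls, icacls modifies the existing ACL by default, so no extra switch is needed.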

Changing the Windows system path programmatically

11 02 2012

In our test environment, we automatically install a bunch of utilities — like Notepad++ and the Sysinternals tools — on every Windows system. As part of this, I wanted to add some directories to the Windows system path so these utilities could be easily accessed from the command line. I knew that this could be done manually through the System applet in Control Panel, but it took me a few minutes to figure out how to do it programmatically.

When you edit the system path from the Control Panel, what you’re actually doing is modifying the registry value HKLM\System\CurrentControlSet\Control\Session Manager\Environment\Path. So I wrote a very simple VBS script that takes a directory as an argument and appends it to the value in the registry. Note that writing the value directly doesn’t broadcast the WM_SETTINGCHANGE message that the Control Panel applet sends, so existing processes won’t see the change; in practice, your path won’t actually be updated until the next time you log into Windows.

' Appends a directory to the system-wide Path value in the registry.
' Caveat: RegRead expands REG_EXPAND_SZ values, so any %variables% in the
' existing path will be written back in expanded form.
PathRegKey = "HKLM\System\CurrentControlSet\Control\Session Manager\Environment\Path"

If WScript.Arguments.Count = 0 Then
    WScript.Echo "Please specify a path to add."
    WScript.Quit(1)
End If

' Read the current path, append the new directory, and write it back
Set WshShell = WScript.CreateObject("WScript.Shell")
UpdatedPath = WshShell.RegRead(PathRegKey) & ";" & WScript.Arguments(0)
WshShell.RegWrite PathRegKey, UpdatedPath, "REG_EXPAND_SZ"
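
To use the script, invoke it with cscript and pass the directory to append (addpath.vbs is just a placeholder name; use whatever you saved it as):

cscript //nologo addpath.vbs "C:\Tools"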




Allowing Unauthenticated Access to Windows Shares

1 01 2012

At my job, we have a Windows-based test environment on a standalone Active Directory domain. I wanted to allow users to access file shares within the test domain from their computers on other domains without being prompted for credentials. (Since it’s a test environment, I don’t really care about security.)

Google sent me on a wild goose chase into the Local Security Policy, but the solution was deceptively simple. It turns out that when you connect to a file share on another domain, the server tries to authenticate you with the local Guest account. The problem is that, by default, Windows (correctly) disables the Guest account. You can enable it from Computer Management (Start > Run > compmgmt.msc).
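
Alternatively, enabling the account from an elevated command prompt should work too:

net user guest /active:yes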

Next, you have to update the permissions on the share and the NTFS permissions on the underlying folder so that Guest will have access. Guest is a member of the Everyone group, so if you grant permission to Everyone, you should be good to go. If you want to set special permissions for Guest — maybe you only want to grant anonymous users read-only access — you can do that too. Just make sure to grant the permission to either the local Guest account or the local Guests group, not the domain Guest account.
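
As a sketch, granting Everyone read-only access from the command line might look something like this (the share and folder names are just placeholders):

net share TestShare=C:\Shares\Test /grant:Everyone,READ
icacls C:\Shares\Test /grant Everyone:(OI)(CI)R

The (OI)(CI) flags make the NTFS grant inherit to files and subfolders.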





WordPress, Page Caching and “Missed Schedule”

28 11 2011

When we first put Varnish in front of our WordPress installation, we noticed that post scheduling became pretty unreliable. About half the time we’d schedule a post, it would either appear much later than scheduled or never appear on the site at all, with the WordPress control panel showing the post’s status as “Missed Schedule.”

It turns out that WordPress has an, uh, interesting way of implementing post scheduling: because they don’t want to require that people have access to a proper cron daemon, WP has its own jury-rigged cron wannabe called WP-Cron, which relies on users regularly accessing WordPress PHP pages to kick off scheduled tasks at the appropriate time. The problem was that Varnish was working so damn well and serving so much content from cache that the Apache/WordPress backend wasn’t getting hit often enough for WP-Cron to work reliably. Judging by the miscellaneous kvetching that can be found about WP-Cron, this is apparently only one of several circumstances in which it can fail.

The way to fix this turned out to be forcing the WP-Cron script to run every minute using regular cron by configuring a job like this on the web server:


* * * * * lynx --dump http://mysite.com/wp-cron.php > /dev/null 2>&1

Note that you’ll want to have the job hit the WP-Cron page directly through Apache, bypassing Varnish or whatever page cache you’re using. (Or, you could just configure Varnish not to cache that page, but that would allow the public to hit your WP-Cron page and potentially cause a spike in your resource utilization, which may not be desirable.)
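
One way to bypass the cache, assuming your Apache backend listens on 127.0.0.1:8080 (adjust to your setup), is to point the cron job at Apache directly and supply the site’s Host header:

* * * * * curl -s -H "Host: mysite.com" http://127.0.0.1:8080/wp-cron.php > /dev/null 2>&1

And if you’re driving WP-Cron from real cron anyway, you can stop WordPress from also triggering it on page loads by adding define('DISABLE_WP_CRON', true); to wp-config.php.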





RunOnce for Linux

27 11 2011

On occasion, I’ve wished there was a Linux feature that enabled me to run any command once the next time the system comes up (sort of similar to Windows’ RunOnce). The last time I needed this, I put together a simple init script to provide the functionality. I use this on Debian, but it should work on any UNIX-y OS with Sys-V style init. Create a file called /etc/init.d/runonce with the following content. Don’t forget to make it executable (chmod a+x).

#! /bin/sh
### BEGIN INIT INFO
# Provides: runonce
# Required-Start:
# Required-Stop:
# Should-Start:
# Default-Start: S
# Default-Stop:
# Short-Description: RunOnce
# Description: Runs scripts in /usr/local/etc/runonce.d
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUNONCE_D=/usr/local/etc/runonce.d

. /lib/init/vars.sh
. /lib/lsb/init-functions

do_start () {
    mkdir -p "$RUNONCE_D/ran" > /dev/null 2>&1
    for file in "$RUNONCE_D"/*
    do
        # Skip anything that isn't a regular file, including the ran/
        # subdirectory and the unexpanded glob when the directory is empty.
        # (Note: [ rather than [[ -- /bin/sh on Debian is dash, which
        # doesn't support the bash-only [[ syntax.)
        if [ ! -f "$file" ]
        then
            continue
        fi
        "$file"
        mv "$file" "$RUNONCE_D/ran/"
        logger -t runonce -p local3.info "$file"
    done
}

case "$1" in
 start|"")
 do_start
 ;;
 restart|reload|force-reload)
 echo "Error: argument '$1' not supported" >&2
 exit 3
 ;;
 stop)
 # Do nothing
 ;;
 *)
 echo "Usage: runonce [start|stop]" >&2
 exit 3
 ;;
esac

Then, you’ll need to symlink this script into the directories for the appropriate runlevels, which can be done easily on Debian with the following command:


update-rc.d runonce defaults

Finally, create a directory called /usr/local/etc/runonce.d. Now, you can simply put executable scripts or symlinks to utilities on the system into that directory. They’ll be run the next time you boot up, and then moved into the subdirectory /usr/local/etc/runonce.d/ran for posterity.
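
For example, to queue up a one-time cleanup for the next boot (the script name and contents here are made up for illustration):

cat > /usr/local/etc/runonce.d/clear-app-cache <<'EOF'
#!/bin/sh
rm -rf /tmp/app-cache
EOF
chmod a+x /usr/local/etc/runonce.d/clear-app-cache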





Getting Capistrano destination hosts from Puppet

26 11 2011

If you’re using Capistrano to deploy code to web servers and Puppet to manage those servers, the DRY principle suggests that it may be a bad idea to hardcode your list of web servers in your Capistrano recipe — instead, you may want to dynamically fetch the list of web servers from Puppet prior to each deploy. Here’s one way of doing this.

Puppetmaster Configuration

First, you’ll need to turn on storeconfigs on your Puppetmaster. This allows Puppet to store all of its node information in a database (which enables us to access it for other purposes). Note that these instructions assume you have a MySQL database for your storeconfigs.
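
As a sketch, the relevant section of puppet.conf on the Puppetmaster might look something like this (these are the classic storeconfigs settings; exact options vary by Puppet version, and the credentials are placeholders):

[master]
storeconfigs = true
dbadapter = mysql
dbuser = yourdbuser
dbpassword = yourdbpass
dbserver = yourdbserver
dbname = yourdbname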

Next, create a script on your Puppetmaster called /usr/local/bin/list_nodes_by_class.rb. Fill in the database username, password, host and schema name used to access your storeconfigs in the MY_USER, MY_PASS, MY_HOST and MY_DB constants. Make sure the script is executable.

#!/usr/bin/ruby
require 'mysql'

# Credentials for the storeconfigs database
MY_USER="yourdbuser"
MY_PASS="yourdbpass"
MY_HOST="yourdbserver"
MY_DB="yourdbname"

abort "Usage: #{$0} classname" if ARGV[0].nil?

# Find every host that has the given class applied to it; the gsub strips
# single quotes from the argument so it can't break out of the query string
QUERY="select h.name from hosts h join resources r on h.id = r.host_id join resource_tags rt on r.id = rt.resource_id join puppet_tags pt on rt.puppet_tag_id = pt.id where pt.name = 'class' and r.restype = 'Class' and r.title = '#{ARGV[0].gsub("'","")}' order by h.name;"

my = Mysql.new(MY_HOST, MY_USER, MY_PASS, MY_DB)
res = my.query(QUERY)
res.each_hash { |row| puts row['name'] }
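
You can test it by passing a class name; it should print the matching hostnames, one per line (the output below is illustrative):

$ /usr/local/bin/list_nodes_by_class.rb role_webserver
web01.mynetwork.com
web02.mynetwork.com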

Capistrano Recipe Modification

Note that I’ll assume here that your Capistrano recipe uses the “web” role to determine where to deploy code, and that the Puppet class you use to designate web servers is “role_webserver” — you may need to change these.

First, add the following task and helper function to your recipe:


task :set_roles_from_puppet, :roles => :puppetmaster do
  get_nodes_by_puppet_class('role_webserver').each { |s| role :web, s }
end

def get_nodes_by_puppet_class(classname)
  hosts = []
  # Run the script on the Puppetmaster and collect each line of its output
  run "/usr/local/bin/list_nodes_by_class.rb #{classname}", :pty => true do |ch, stream, out|
    out.split("\r\n").each { |host| hosts << host }
  end
  hosts
end

Next, go to wherever you’re defining your roles. Add a role for your Puppetmaster:


role :puppetmaster,  "puppetmaster.mynetwork.com"

Finally, delete your existing definition of the “web” role and replace it with the following:

set_roles_from_puppet

Now, when you do a Capistrano deploy, the destination servers should be dynamically retrieved from the Puppet database.





Puppet and the Ghetto DNS

25 11 2011

Suppose you have a small network of Linux servers powering your web site. You’re probably going to want a way of accessing the servers by hostname from one another — for example, so your web servers can find your database servers without resorting to hardcoding IP addresses in your Apache configuration. What’s the best way to do this?

DNS

Well, if you’re already hosting your own DNS servers, you can consider a split-horizon configuration, which is supported by major DNS daemons like BIND. But if you’re using your ISP’s DNS servers, for example, setting up your own DNS for this purpose seems like overkill (and a pain). Personally, I like to run as few services on my network as possible, just as a matter of principle.

Another option is to just add A records for each of your servers to your site’s public DNS zone. But adding private (RFC1918) IP addresses to the public DNS means that anyone who can operate dig can find a complete list of your servers and their private IP addresses. Some would argue that I’m advocating for security by obscurity, but I just can’t see the upside of unnecessarily exposing information about your internal network to the public. Also, while it’ll work, exposing RFC1918 IPs via the public DNS just seems icky.

Hosts file

The “lightweight” approach to the problem of hostname to IP address resolution is to just add entries to the /etc/hosts file on each server. Of course, this is completely unmaintainable if you have more than 2 or 3 servers.

Hosts file + Puppet = Ghetto DNS

But if you’re using Puppet to automate your server infrastructure, it turns out that exported resources offer a nice solution to this problem. (Like the Capistrano trick above, exported resources require storeconfigs to be enabled on your Puppetmaster.) You can create and deploy a simple Puppet module to each machine that exports a Host entry for itself, and then assembles a /etc/hosts file based on the Host entries exported by all the machines Puppet knows about:


class hosts {
    # Standard localhost entry
    host { "localhost.localdomain":
        ip           => "127.0.0.1",
        host_aliases => [ "localhost" ],
        ensure       => present,
    }

    # Export a Host entry for this machine; $ipaddress_eth1 assumes the
    # private interface is eth1, so adjust this to match your network
    @@host { "$fqdn":
        ip           => $ipaddress_eth1,
        host_aliases => [ $hostname ],
        ensure       => present,
    }

    # Collect the Host entries exported by every node Puppet knows about
    Host <<| |>>
}
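
Then just include the class on every node (the node name below is a placeholder):

node 'web01.mynetwork.com' {
    include hosts
}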

This “Ghetto DNS” setup can be just right if manually maintaining hosts files seems impractical, but running your own DNS seems like overkill.