Unable to send files to local drive – Access denied

Having just spent several hours trying to work out why I couldn’t copy files from the machine I was accessing through Citrix Receiver, I thought I’d share.

Problem:
Copying files from a remote machine to a local drive in Citrix Receiver results in a dialog titled “Destination Folder Access Denied” with the message “You need permission to perform this action”. This is frustrating, as it was the only way I could get files back to my local machine, and it only affected me, not my colleagues. I suspect that some time ago I selected Deny when a prompt appeared and also ticked “Don’t ask again”…

Solution:
Navigate to the registry key:

HKEY_CURRENT_USER\Software\Citrix\ICA Client\Client Selective Trust\

There are several GUID-named keys. One has ‘IcaAuthorizationDecision’ appended to it, and a subkey FileSecurityPermission. Mine was this:

{93E41C6E-2091-1B9B-36BC-7CE94EDC677E}IcaAuthorizationDecision\FileSecurityPermission

Change the value of the (Default) REG_SZ to 2. Sorted. The possible values are these:
0 = No Access
1 = Read Only Access
2 = Full Access
3 = Prompt User for Access
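
For reference, the same change can be made from a command prompt with reg.exe. This is just a sketch using the GUID from my machine (yours will differ, so check the key under Client Selective Trust first):

reg add "HKCU\Software\Citrix\ICA Client\Client Selective Trust\{93E41C6E-2091-1B9B-36BC-7CE94EDC677E}IcaAuthorizationDecision\FileSecurityPermission" /ve /t REG_SZ /d 2 /f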

MS SQL 2012 Fails to install – installation never goes past installing setup files

Whilst attempting to get some software to communicate with MS SQL Server 2012, I decided to try reinstalling SQL. I removed all the SQL components in Add/Remove Programs and attempted a reinstall. The installer got part way through and then just gave up.

After some Googling, I found that the SQL installer leaves log files in %temp% (SqlSetup.log and SqlSetup_1.log).  SqlSetup_1.log had the following lines:

07/29/2013 12:44:41.033 Error: Failed to launch process
07/29/2013 12:44:41.033 Error: Failed to launch local ScenarioEngine.exe: 0x80070003

Nothing on the net seemed to have a solution (or I need to improve my Google skills). I found one post about VS2010 problems which seemed vaguely related and decided to remove all the Visual Studio components to see if that sorted things. It did.

A dialog from the installer and a more helpful error message would have been nice.

Finding out what is using IO on linux with older kernels

I’ve been wandering around the internet for hours looking for something that will show which process is doing all the IO on my system, and that will run on my CentOS 5.3 box. Its kernel just missed out on per-process IO accounting (in /proc/PID/io), so most tools (iotop, etc.) will not work.

Something like this:

iostat -x -m -d sdk 1

will show how much IO is being done on the device (in this case sdk), but not per process.

This, in CentOS, will show some nice IO stats, but again not per process:

dstat

In later releases of CentOS, this will apparently show per process IO:

dstat -s --top-io --top-bio

I finally came across this blog, and it worked! However, there is a bug in the Perl script which a user has reported via a comment. I confirmed this, for my system at least, but the comments on the blog seem to have broken. Here is a copy of the script which works for DIRTY counts:

#!/usr/bin/env perl
# This program is part of Aspersa (http://code.google.com/p/aspersa/)

=pod

=head1 NAME

iodump - Compute per-PID I/O stats for Linux when iotop/pidstat/iopp are not available.

=head1 SYNOPSIS

Prepare the system:

  dmesg -c
  /etc/init.d/klogd stop
  echo 1 > /proc/sys/vm/block_dump

Start the reporting:

  while true; do sleep 1; dmesg -c; done | perl iodump
  CTRL-C

Stop the system from dumping these messages:

  echo 0 > /proc/sys/vm/block_dump
  /etc/init.d/klogd start

=head1 AUTHOR

Baron Schwartz

=cut

use strict;
use warnings FATAL => 'all';
use English qw(-no_match_vars);
use sigtrap qw(handler finish untrapped normal-signals);

my %tasks;

my $oktorun = 1;
my $line;
while ( $oktorun && (defined ($line = <>)) ) {
   my ( $task, $pid, $activity, $where, $device );
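   # block_dump lines look like "task(PID): READ block N on device"
   # or "task(PID): dirtied inode N (name) on device".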
   ( $task, $pid, $activity, $where, $device )
      = $line =~ m/(\S+)\((\d+)\): (READ|WRITE) block (\d+) on (\S+)/;
   if ( !$task ) {
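      # Fall back to the "dirtied inode" message format.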
      ( $task, $pid, $activity, $where, $device )
         = $line =~ m/(\S+)\((\d+)\): (dirtied) inode (\d+) \(.*?\) on (\S+)/;
   }
   if ( $task ) {
      my $s = $tasks{$pid} ||= { pid => $pid, task => $task };
      ++$s->{lc $activity};
      ++$s->{activity};
      ++$s->{devices}->{$device};
   }
}

printf("%-15s %10s %10s %10s %10s %10s %s\n",
   qw(TASK PID TOTAL READ WRITE DIRTY DEVICES));
foreach my $task (
   reverse sort { $a->{activity} <=> $b->{activity} } values %tasks
) {
   printf("%-15s %10d %10d %10d %10d %10d %s\n",
      $task->{task}, $task->{pid},
      ($task->{'activity'}  || 0),
      ($task->{'read'}      || 0),
      ($task->{'write'}     || 0),
      ($task->{'dirtied'}     || 0),
      join(', ', keys %{$task->{devices}}));
}

sub finish {
   my ( $signal ) = @_;
   if ( $oktorun ) {
      print STDERR "# Caught SIG$signal.\n";
      $oktorun = 0;
   }
   else {
      print STDERR "# Exiting on SIG$signal.\n";
      exit(1);
   }
}

Save it as ‘iodump’.  The header of the script tells you how to run it.  You have to turn on kernel messages about IO:

echo 1 > /proc/sys/vm/block_dump

and then you can run it like this:

while true; do sleep 1; dmesg -c; done | perl iodump

Don’t forget to turn off the IO logs once you’ve finished:

echo 0 > /proc/sys/vm/block_dump

I like to run it in watch in a separate window (e.g. in screen):

watch -tn 1 'i=0; while (( ++i < 5 )); do sleep 1; dmesg -c; done | perl iodump'

This keeps an updated view of the output, refreshing roughly every 5 seconds (the inner loop gathers about 4 seconds of dmesg output, plus watch’s 1 second interval).

PHP function to remotely run command as root (su)

I rewrote this from somewhere and ended up not needing it, but in case I need it in the future, here it is.

The function SSHes to the remote server, issues a su, and then runs the command, returning true or false depending on the return code.

function RunRemoteAsRoot($ip, $username, $password, $rootPassword, $commandString)
{
	$connection = ssh2_connect($ip, 22);
	if (!$connection)
		return false;

	if (!ssh2_auth_password($connection, $username, $password))
		return false;

	$stream = ssh2_shell($connection, "vanilla", null, 200);
	if ($stream === false)
		return false;

	stream_set_blocking($stream, true);

	if (fputs($stream, "su -\n") === false)
	{
		fclose($stream);
		return false;
	}

	$line = "";
	$output = "";
	$returnCode = 1;
	while (($char = fgetc($stream)) !== false)
	{
		$line .= $char;
		if ($char != "\n")
		{
			if (preg_match("/Password:/", $line))
			{
				// Password prompt.
				if (fputs($stream, "{$rootPassword}\n{$commandString}\necho [end] $?\n") === false)
				{
					fclose($stream);
					return false;
				}
				$line = "";
			}
			else if (preg_match("/incorrect/", $line))
			{
				// Incorrect root password.
				fclose($stream);
				return false;
			}
		}
		else
		{
			$output .= $line;
			if (preg_match("/\[end\]\s*([0-9]+)/", $line, $matches))
			{
				// End of command detected.
				$returnCode = $matches[1];
				break;
			}
			$line = "";
		}
	}
	fclose($stream);

	return ($returnCode == 0);
}
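
For example, assuming the ssh2 PECL extension is loaded (the host, credentials and command here are just placeholders):

$ok = RunRemoteAsRoot('192.168.0.10', 'someuser', 'userPassword', 'rootPassword', 'service httpd restart');
echo $ok ? "Remote command succeeded\n" : "Remote command failed\n";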

CVS not updating

Perhaps an obvious one, but I attempted to do a cvs up on a directory, and nothing happened, even though there had been changes in the repository.  After being confused for a while, I noticed that there was a sticky tag:

   Working revision:    1.66
   Repository revision: 1.66    /.../.../...
   Expansion option:    o
   Commit Identifier:   2104f3e32632ef7
   Sticky Tag:          1.66
   Sticky Date:         (none)
   Sticky Options:      o
   Merge From:          (none)

cvs up -A sorts this out (the usage text below, from running cvs up with an invalid -H option, lists what -A does):

cvs up -H
up: invalid option -- H
Usage: cvs update [-APCdflRp] [-k kopt] [-r rev] [-D date] [-j rev]
    [-I ign] [-W spec] [files...]
        -A Reset any sticky tags/date/kopts.
        -P      Prune empty directories.
        -C      Overwrite locally modified files with clean repository copies.
        -d      Build directories, like checkout does.
        -f      Force a head revision match if tag/date not found.
        -l      Local directory only, no recursion.
        -R      Process directories recursively.
        -p      Send updates to standard output (avoids stickiness).
        -k kopt Use RCS kopt -k option on checkout. (is sticky)
        -r rev  Update using specified revision/tag (is sticky).
        -D date Set date to update from (is sticky).
        -j rev  Merge in changes made between current revision and rev.
        -I ign  More files to ignore (! to reset).
        -W spec Wrappers specification line.

The version in the repository was 1.68, but the file was stuck at 1.66 for some reason.
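
If you just want to confirm a sticky tag is the culprit for a particular file before resetting it (myfile.c here is just an example):

cvs status myfile.c | grep Sticky
cvs up -A myfile.c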

Altering creation dates in image exif data

Whilst attempting to get a Kodak digital photo frame to display the photos in the correct order, I worked out how to set the creation dates to a sequential order (don’t ask).

First, install ExifTool.

Then run something like this (copy it to a file, e.g. renameFile.sh, make it executable, and run it from within the directory of files to alter):

#!/bin/bash
a=1
for i in *; do
	touch "${i}"
	new=$(printf "%04d.jpg" ${a}) # zero-pad the new name to 4 digits
	if [ "${i}" != "${new}" ]; then
		mv "${i}" "${new}"
	fi

	# Seconds past the epoch: one minute per file, so each photo gets a later time.
	seconds=$(( a * 60 ))

	timeString=$(echo ${seconds} | awk '{printf("%s", strftime("%H:%M:%S", $1));}')
	exiftool "-FileModifyDate=2012:03:09 $timeString" \
	"-ModifyDate=2012:03:09 $timeString" \
	"-DateTimeOriginal=2012:03:09 $timeString" \
	"-CreateDate=2012:03:09 $timeString" \
	"-DateTimeDigitized=2012:03:09 $timeString" \
	"-MetadataDate=2012:03:09 $timeString" "${new}"

	let a=a+1
done

It was more complicated than it should have been, and I think there may be a bug somewhere in it.

The code just sets the time of the various dates in the exif data to a value based on the file number.  It also names the files in a sequential order.
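
To spot-check the result on one of the renamed files, exiftool can dump all the date tags (0001.jpg being the first renamed file):

exiftool -time:all -G1 0001.jpg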

Run MySQL commands through bash

I wanted to log some stats from MySQL. The following outputs a few statistics (filtered with grep) and appends them to a log file (set $logfile first):

mysql -ppass -e 'SHOW STATUS;' | grep -E '(Threads|[cC]onnections)' | column -t >> $logfile;

and this one shows the current MySQL config variables which have been loaded (useful if you want to check whether a change to the config file has taken effect):

mysql -uroot -ppass -e 'show variables'
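
To actually log these over time, I’d wrap the first command in a loop, something like this minimal sketch (log path, credentials and interval are placeholders):

#!/bin/bash
logfile=/var/log/mysql-stats.log
while true; do
	date '+%Y-%m-%d %H:%M:%S' >> "$logfile"
	mysql -uroot -ppass -e 'SHOW STATUS;' | grep -E '(Threads|[cC]onnections)' | column -t >> "$logfile"
	sleep 60
done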

Finding out what is going on with your web server

I have been trying to debug something I have running on an Apache/MySQL setup (the max connections limit is being exceeded and some processes seem to hang around). You can find out what is using port 80 with this command:

lsof -i tcp:80

And you can see what a particular process is by running this with the PID you get from the above:

ps -lf -p <PID>

I went a bit further, so the following will allow you to watch the TCP connections on port 80 and see what command is being run:

watch -d -n 1 "lsof -i tcp:80 | sort -u -k2 -n | awk '{printf \$2 \" \" \$8; if (NR != 1) {system(\"echo -ne \\\" \\\"; ps -lf -p \" \$2 \" | grep \" \$2)} else {print \" F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD\"}}' | awk '{printf \$1\"__\"substr(\$2,1,20)\"__\"\$5\"__\"\$6\"__\"\$14\"__\"\$16\"__\";for (i=17; i<=NF; i++) { printf(\"%s \", \$i)} printf(\"\n\")}' | column -t -s \"__\""

It’s a bit long, but what it is basically doing is concatenating the output of the two commands and pretty-printing the result. The whole thing is escaped and passed into watch so that it updates every second and highlights changes (the -d). It also de-duplicates the output of the lsof command (| sort -u -k2 -n) so that the same process only shows once – I was getting both the IP and the host name displayed – and only displays certain columns.

Allowing CVS to recurse into subdirectories which aren’t part of CVS

Running cvs up or cvs st from a directory will recurse one directory deep. If there are no CVS directories in those subdirectories, it gives up. To get around this, you can add a CVS directory with an Entries file pointing to the directory you want to recurse into. This is my example: CVS. Replace ‘FolderToRecurseInTo’ in the Entries file with the correct folder name and put the correct repository path in the Repository file. Then you can run cvs commands from the top-level directory and they will work on all the directories you specify in this way.

The CVS directory has to go in the directory one level below the place you run the command, pointing to the directory below that. E.g. if you run commands from ‘/directory’, place the CVS directory in ‘/directory/meh’ and point the Entries file at ‘yay’. Running cvs status in ‘/directory’ will then show the status of files in ‘/directory/meh/yay’ (along with any in ‘/directory’ itself and its other CVS-controlled subdirectories).
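
As a rough sketch, the manually created CVS directory for that example might be built like this (the Repository path depends on where ‘meh’ lives in your repository, and Root can be copied from a real checkout):

mkdir /directory/meh/CVS
echo 'D/yay////' > /directory/meh/CVS/Entries
echo 'module/meh' > /directory/meh/CVS/Repository
cp /directory/CVS/Root /directory/meh/CVS/Root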

I found this useful because I can now simply run one cvs command to update all the files in my directory structure.