
My pain, your gain.

Troubleshooting myself in the foot.

Java update causing ubuntu installer to fail

While trying to install the Oracle JDK on ubuntu 16.04 I kept getting ERROR 404: Not Found. After some searching I found that the problem was a new java release: the installer (which is essentially a wrapper around the Oracle installer) hadn't been updated to reference it.

Since I needed this as a fresh install rather than an update (and I just wanted to know why it was broken), I ended up updating some configs by hand (with the help of pointers from a Stack Overflow post).

To fix it on ubuntu x64 you need to swap out the file name, URL and checksum for the old version (8u191) with those for the new version (8u201).

cd /var/lib/dpkg/info
sed -i 's|JAVA_VERSION=8u191|JAVA_VERSION=8u201|' oracle-java8-installer.*
sed -i 's|PARTNER_URL=http://download.oracle.com/otn-pub/java/jdk/8u191-b12/2787e4a523244c269598db4e85c51e0c/|PARTNER_URL=https://download.oracle.com/otn-pub/java/jdk/8u201-b09/42970487e3af4f5aa5bca3f542482c60/|' oracle-java8-installer.*
sed -i 's|SHA256SUM_TGZ="53c29507e2405a7ffdbba627e6d64856089b094867479edc5ede4105c1da0d65"|SHA256SUM_TGZ="cb700cc0ac3ddc728a567c350881ce7e25118eaf7ca97ca9705d4580c506e370"|' oracle-java8-installer.*
sed -i 's|J_DIR=jdk1.8.0_191|J_DIR=jdk1.8.0_201|' oracle-java8-installer.*

After that, rerun apt-get update, then apt-get install oracle-java8-installer, and you are set to go.
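
Once the install finishes, a quick sanity check never hurts (the exact version string assumes the 8u201 package went in cleanly):

java -version
apt-cache policy oracle-java8-installer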

javaws on os x mountain lion

We run some dell servers that have idrac cards in them, and some of the older ones have v6 cards. Connecting to these from a mac has been a bit of a bear lately with all the java changes. After a bit of research and some time in the terminal, I found a way to connect to them.

The approach I found was to manually call javaws from the command line, specifying which version I wanted to run. Since it seems the only issue is with java > 1.6, you can just call the java 1.6 javaws. I got the idea from reading this post.

In terminal I found the javaws in my path; it was located in /usr/bin. I checked what it was symlinked to and it pointed into a 'Versions' directory that holds many different versions. I created my own symlink in /usr/bin pointing at the 1.6.0 path instead of the Current one. Once I did this, I had a 1.6.0 version of javaws in my path, aptly named javaws-1.6.0.
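
Roughly, the terminal work looked like this – treat it as a sketch, since the framework paths are from my machine and may differ on yours:

ls -l /usr/bin/javaws
ls /System/Library/Frameworks/JavaVM.framework/Versions/
sudo ln -s /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Commands/javaws /usr/bin/javaws-1.6.0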

When you are logged into the drac using safari, you can click on the launch virtual console button. This will download a .jnlp file. I didn't write a script or anything – I just found the .jnlp file in the finder via the download window in safari, moved it to the desktop and then in terminal ran javaws-1.6.0 filename.jnlp. This fired it up correctly.

For reference, these are the two versions of javaws I have linked:

/usr/bin/javaws -> /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands/javaws

/usr/bin/javaws-1.6.0 -> /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Commands/javaws

My pain in the bean, your glean.

closing comments on wordpress multisite

There are some wordpress plugins out there that close comments on blogs and let the admin turn that behavior on site-wide. I wasn't too interested in putting a plugin in place, though I figured if I could just write a code snippet to do this for me I would be happy. The code below is the result of my snippet efforts.

What this does is make a list of all the blogs (assuming your table prefix is wp_) on your multisite install. Then it checks whether each blog has been posted to in the last X months (I have 12 months in here, but you can change that to whatever suits your needs).

Once those conditions are met, the code sets two comment-related options. The first is turning on akismet auto-delete of spam. We found that many of the older blogs on our multisite install had tens of thousands of spam comments hanging around – no need for that. The second is enabling auto-closing of comments on posts older than 60 days (again, you can change that to suit your needs).

To run it, fill in the db info at the top (hostname, db, user, pw) and then just run it with the php cli. I hope this helps someone out and serves as the starting point I didn't have.

<?php
$mysqli = new mysqli("hostname", "user", "password", "database");
if ($mysqli->connect_errno) {
    echo "Failed to connect to MySQL: (" . $mysqli->connect_errno . ") " . $mysqli->connect_error . "\n";
    exit(1);
}

# set debug
$debug = 1;

# get all the blogs
$tables_res = $mysqli->query("show tables like 'wp_%_options'");

while ( $tables_row = $tables_res->fetch_row() ) {
    list($blog_wpmu, $blog_id, $blog_options) = explode("_", $tables_row[0]);
    $bloginfo_res = $mysqli->query("select * from wp_${blog_id}_options where option_name='siteurl'");
    $bloginfo_row = $bloginfo_res->fetch_assoc();
    $blog_url = $bloginfo_row['option_value'];

    # check to see if blog has been updated in the last year
    if ($updated_res = $mysqli->query("select id from wp_${blog_id}_posts where post_status = 'publish' and date_add(post_date, interval 12 month) > now() limit 1")) {
        $updated_row_cnt = $updated_res->num_rows;
        if ($updated_row_cnt == 0) {

            # check and set akismet to auto-delete spam
            $akismet_res = $mysqli->query("select * from wp_${blog_id}_options where option_name='akismet_discard_month'");
            $akismet_row_cnt = $akismet_res->num_rows;
            if ($akismet_row_cnt == 0) {
                # set auto-delete
                $mysqli->query("insert into wp_${blog_id}_options (option_name,option_value,autoload) values ('akismet_discard_month','true','yes')");
            } else {
                $akismet_row = $akismet_res->fetch_assoc();
                if ( $akismet_row['option_value'] != "true" || $akismet_row['autoload'] != "yes" ) {
                    # set auto-delete
                    $mysqli->query("update wp_${blog_id}_options set option_value='true',autoload='yes' where option_name='akismet_discard_month'");
                    if ($debug == 1) {
                        echo "akismet set but not enabled on $blog_url ($blog_id)\n";
                    }
                }
            }

            # check and set comments to auto-close
            $comments_res = $mysqli->query("select * from wp_${blog_id}_options where option_name like 'close_comments_%' order by option_id");
            $comments_row_cnt = $comments_res->num_rows;
            if ($comments_row_cnt == 0) {
                # set comments to auto-close after 60 days
                $mysqli->query("insert into wp_${blog_id}_options (option_name,option_value,autoload) values ('close_comments_days_old','60','yes')");
                $mysqli->query("insert into wp_${blog_id}_options (option_name,option_value,autoload) values ('close_comments_for_old_posts','1','yes')");
            } else {
                while ( $comments_row = $comments_res->fetch_assoc() ) {
                    if ( $comments_row['option_name'] == "close_comments_days_old" && $comments_row['option_value'] != 60 ) {
                        # set comments to auto-close in 60 days
                        $mysqli->query("update wp_${blog_id}_options set option_value='60',autoload='yes' where option_name='close_comments_days_old'");
                        if ($debug == 1) {
                            echo "days_old set to " . $comments_row['option_value'] . " on " . $blog_url . " (" . $blog_id . ")\n";
                        }
                    }
                    if ( $comments_row['option_name'] == "close_comments_for_old_posts" && $comments_row['option_value'] == 0 ) {
                        # enable auto-closing of comments on old posts
                        $mysqli->query("update wp_${blog_id}_options set option_value='1',autoload='yes' where option_name='close_comments_for_old_posts'");
                        if ($debug == 1) {
                            echo "old_posts set but not enabled on $blog_url ($blog_id)\n";
                        }
                    }
                }
            }
        }
    }
}

$mysqli->close();

?>

os x lion (10.7.4) filevault

Filevault and you

So we probably wanted to use filevault (FV) a little differently than most users would. Most users have one account on their mac and log in once at the FV login screen – then they are done with the login process. This is not what we wanted. We wanted the device to have a global unlock password which would then dump you into the OS login screen. After all, not all passwords are created equal.

We started off simply: create the accounts, start FV full disk encryption (FDE) and then authorize only the account that was being used as the global unlock for the FDE. Then we got tricky – to make sure that the global account could not be used to log into the OS, we deleted the account once it was in the FV pre-boot login screen. This lets one account unlock the FDE and then drop you into the OS login screen. The issue is that you can't add any more accounts to the machine, because the FV login credentials are automatically updated when you create a new account (not ideal, but an understandable workflow).

The solution we ended up using is a hybrid of TJ Luoma's approach and our own.

If you don't have FV FDE already enabled

  1. Add all the accounts that you think you will need, plus a global admin account that you will use as the FV login account.
  2. Log into the global admin account and enable FV. Do not enable any of the other users to unlock the disk. You will be prompted to restart.
  3. Once restarted and on the FV login window, log in with the global admin account.
  4. Once logged in, log out of the global admin account and into your normal admin account.
  5. Delete your global admin account, let the FDE process complete and you should be all set.

Your mac should now boot and prompt you at the FV login for the global admin account password and then dump you into the standard OS login screen.  Here you can log in with any of the accounts you created in step 1.

If you have FV FDE already enabled

If you have FV FDE already enabled and you want to use the login method this post describes, the next steps are for you.  If you have FV FDE already enabled, are already using this login method and want to add more user accounts, the next steps are for you too.  Again, many of these steps are similar to TJ Luoma's, so if my instructions are confusing, consider checking out his.

The first step is some account administration.

  1. Add all the accounts that you think you will need, plus a global admin account that you will use as the FV login account.
  2. Log into the global admin account and open the terminal app (type terminal into the spotlight window).
  3. This is where it gets a bit technical.  Every user account on your mac has a short name, and you will need to find out the short names for all the accounts you want to remove from the FV login screen.  If the account name is John Smith, there is probably a short name of johnsmith or jsmith.  A quick cheat to find all the usernames on your system is to type ls -1 /Users (that is a numeral one) followed by enter.  This will list all of the home directories on your mac (you can ignore the one named Shared), which normally correspond to the usernames.
  4. For every account that you want to remove from the FV login screen you will have to reset the password.  Take all the short names you gathered in step 3 and repeat steps 5-7 with each (there is a terminal sketch of these steps after this list).
  5. Type sudo -u shortname -s, substituting shortname with the actual short name
  6. Type passwd and when prompted, enter the current 'old' password and then just press enter for the new password (i.e. leave it blank).
  7. Type exit
  8. When you are done resetting all of the passwords to blank, type exit and quit terminal.
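
For the terminally inclined, steps 5-7 look roughly like this for each account (jsmith is a made-up short name):

$ sudo -u jsmith -s     # get a shell as that user
$ passwd                # enter the old password, then leave the new one blank
$ exit                  # back to the global admin's shell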

The second step is some FV administration.

  1. Go to System Preferences > Security & Privacy > FileVault and click on the Enable Users button on the bottom right.
  2. Set the password of each user to something other than blank, but do not click on Enable User
  3. When all the users’ passwords are set, hit Cancel (counter-intuitive, yes.  Done is also greyed out)

The third and last step is a bit-o-cleanup.

  1. Log out of the global admin account and into your normal admin account.
  2. Delete your global admin account

Your mac should now boot and prompt you at the FV login for the global admin account password and then dump you into the standard OS login screen.  Here you can log in with any of the accounts you created in step 1.

Hope this is helpful to someone.

fios blocks outbound smtp. gah! danger, danger – port 25

FIOS blocks outbound SMTP, and I’m fairly comfortable saying every household ISP should. However, you can use their outbound SMTP servers as a relay to get around this. I had to configure this last night with postfix and I have to say it was trivial to set up.

I ended up inserting this into my postfix main.cf:

relayhost = [outgoing.verizon.net]
smtp_connection_cache_destinations = outgoing.verizon.net
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = static:username@verizon.net:password
smtp_sasl_security_options = noanonymous
default_destination_concurrency_limit = 4
soft_bounce = yes

Thanks to Jason Haruska for the pointers.

Restart postfix, test it (man postfix | mail -s "some light reading for you" root), requeue all the borked messages (postsuper -r ALL) and you are on your way.
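
Spelled out, that sequence looks roughly like this (postfix reload is enough to pick up main.cf changes; mailq at the end just lets you watch the queue drain):

# postfix reload              # or a full restart, per above
# man postfix | mail -s "some light reading for you" root
# postsuper -r ALL
# mailq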

I mentioned this to some of my geeky counterparts and they looked at me and said “Oh, yeah, that rocks, I did that a ways back with Exim.”  It seems it's even easier with exim: you just need to add your username and password to /etc/exim4/passwd.client.  For full instructions on how to do this, check out the gmail/exim page.
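
For reference, the exim side boils down to one line in /etc/exim4/passwd.client in the form target.mail.server:login:password – something like the line below (the hostname and credentials are placeholders; use whatever your ISP and account actually are):

outgoing.verizon.net:username@verizon.net:password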

My pain, your sent mail (I lie, this was not so much of a pain, rather fun).

all temps are not the same

I had an interesting problem – we try to build our VMs as lean as possible, so occasionally we will have machines that don't have much disk or RAM. When RAM is minimal, our /tmp partition, which is a RAMFS device, gets small.

Why does this matter? Well, lots of the processes that run like to use tmp space for, well, tmp space. If that tmp space fills up, the process that was running usually fails.
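
A quick way to see how tight /tmp actually is before kicking off a big job (sort -h needs a reasonably recent coreutils):

# df -h /tmp
# du -sh /tmp/* 2>/dev/null | sort -h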

Have you ever seen an error like this? This was after doing an apt-get dist-upgrade on one of our smaller VMs.

tar: ./lib/foo/bar/file.bin: Cannot write: No space left on device
tar: Skipping to next header
tar: Error exit delayed from previous errors
dpkg-deb: subprocess tar returned error exit status 2
debsums: can't unpack /var/cache/apt/archives/foo_i386.deb
E: Problem executing scripts DPkg::Post-Invoke 'if [ -x /usr/bin/debsums ]; then /usr/bin/debsums --generate=nocheck -sp /var/cache/apt/archives; fi'
E: Sub-process returned an error code

apt-get does not like to run out of space, and the /tmp partition is pretty small on this machine – smaller than the space this package needed to unpack. This is an easy fix, however. First you have to completely remove the package; more than likely it's got something missing or corrupted. You can do this easily by entering the following on a console (substituting “foo” for whatever package gave you the error):

# dpkg --purge foo

Once you have the package removed, just run apt-get with “env TMPDIR=/var/tmp” prepended to it. The tmp dir does not have to be /var/tmp, it can be any directory that the user you are running as has write access to.

# env TMPDIR=/var/tmp apt-get install foo

As a slight aside, we sometimes also get stuck with errors like these.

dpkg: error processing linux-image-1.2.3-4-server (--purge):
cannot remove `/boot/System.map-1.2.3-4-server': Read-only file system

This one is easy to fix and we've been doing it for a while. On a console, before you rerun the command that gave you this error, you need to remount the partition (in this case /boot) read-write.

# mount -oremount,rw /boot

autofs annoyances with ubuntu lucid (10.04)

Like a lot of admins that run ubuntu, we decided to update many of our machines to ubuntu's next LTS release, lucid lynx, aka ubuntu 10.04.  We don't run a huge shop here – we have under 100 machines, a significant percentage of which are VMs – but repeatedly fixing bugs still annoys me.  One of the bugs present in lucid is particularly annoying because it affects how autofs starts at boot.  Services have dependencies, and it's complicated to sort them out – I get that – but come on ubuntu, dependencies are not a new development and sorting them out should be easy enough for a bunch of smart developers.

The specifics are this:

1. Lucid switched to upstart.  To put it succinctly: “upstart is a replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.”

2. Upstart does not like autofs.

Not to rant too much, but if you are going to replace init, please do it with something that does not force every person installing various packages to resort to the hacks I am about to point out.

Thankfully, there are a bunch of smart, technical people that run ubuntu systems.  On top of that, when things go wrong they complain and post bug reports.  After some quick searching I thought I had fixed the bug.  That was until upstart was updated a couple of weeks back and the boot problems started again with autofs.

The solution is similar to the one I originally implemented, per the suggestion of comment #15, but it works past the update that had broken autofs again.  In the /etc/init directory edit the autofs.conf file and add the following stanza directly after the pre-start script line.

# wait (up to roughly ten tries) for statd to come up before autofs starts
i=0
statd_status=`status statd | cut -d, -f1`
while [ "$statd_status" != "statd start/running" ]; do
    sleep 5
    start statd
    i=$((i+1))
    statd_status=`status statd | cut -d, -f1`
    if [ $i -gt 10 ]; then
        echo "statd startup failed"
        break
    fi
done

Once this is in you should be able to (re)start autofs.  Next time the machine is rebooted, autofs will start automatically.  Essentially this is the same hack as the one in the aforementioned comment, except that the while loop makes the script wait until it sees that statd has successfully started.
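
After a reboot you can sanity-check things with upstart's status command – assuming your autofs job file is /etc/init/autofs.conf as described above, both jobs should report start/running:

# status statd
# status autofs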

Ok, back to finding more annoying things.

WordPress author list with gravatars – but for WPMU

So this seemed really easy, but apparently it was not, or I guess I would not be writing about it here.  Ok, here is the deal, I wanted a full list of all the authors on one particular blog.  I wanted them all on one page and listed out with links to their blog posts.  I wanted them all to have images next to their names. The standard way to do this seems to be to list out all the user names on your blog, since hey, no harm here, we are only running one blog.

Enter WPMU.  We are not running just one blog, we are running hundreds and have thousands of users.  Listing out all the users via a SQL query would be a huge list, not to mention not at all representative of who is an author on this particular blog.  I did some looking around on the wordpress codex and found a couple of functions that I thought could be helpful.  The first one that came up was wp_list_authors.  This function just lists out all the authors for a particular blog – particularly helpful for WPMU sites.

Now the problem with wp_list_authors is that it just outputs the list of authors as a chunk of links, so you have to chop it up somehow since it's not in the loop – yeah, we are doing all this outside of the loop.  The second issue is that this is all it puts out – links to the author archive page.  No ids, no emails, nothing – it's not like the_author_meta which gives you all kinds of nice stuff.

Ok, but at least we have something we can hack up, so I started in on it and this is what I came up with.

<?php
# explode the comma-separated author links that wp_list_authors() hands back
$allAuthorNames = explode(',', wp_list_authors('style=0&show_fullname=1&hide_empty=0&echo=0'));
foreach ( $allAuthorNames as $oneAuthorName ) { ?>
  <li>
  <?php
  # try to dig the login name out of the generated author link
  $oneAuthorArray = explode(" ", $oneAuthorName);
  if (count($oneAuthorArray) > 1) {
    $oneAuthorLink = explode("/", $oneAuthorArray[2]);
    end($oneAuthorLink);
    $userData = get_userdatabylogin(prev($oneAuthorLink));
  } else {
    # nothing to look up; fake a minimal object so get_avatar() falls back to the default image
    $userData = (object) array("user_email" => "");
  }
  echo get_avatar($userData->user_email, '96', '');
  ?>
  <?php echo ($oneAuthorName); ?>
  </li>
<?php } ?>

Pardon my PHP, it sucks, but in any case it gets it done here at least.

Notice the nice function that gets it done? Oddly, there is not much documentation for the get_userdatabylogin function, but it's a nice one.  Tie together wp_list_authors with get_userdatabylogin and you can get even more info than you can from the_author_meta.

Now this code is by no means the finished product, but it does work and it is a nice way to get a full list of everything that the author has in their profile in the DB. At the moment I just used it to get the email address of the author I was iterating over, but the function dumps out the entire user DB row object. A bit dangerous I suspect, but useful.

Happy coding,  hope this saves you a bit of time.

SLAPd

SLAPd: if you need a daemon to do it for you, you're doing it too often.

Hmm.  No.  Wait, it's for LDAP?  Damn.

Changing your password

A lot of times people think changing passwords is such a pain. I always look at it in terms of security – how many chances has possible malfeasance had in the time that you have been using your password? It's also nice to look at it as a review of where your password is stored: change your password and everything that has it cached breaks. It's an A-ha! moment; too few of those in our daily lives. To make a game of it, think of it as a learning activity – how long does it take you to remember it without having to read it back, how long until you are not looking at the keyboard, how long until you no longer have to think about your password because your muscles have retained it in memory? Compare to the last time – are you getting better or worse?

With that in mind, here are instructions for changing a password that probably has not been changed in a while.

Changing your ssh key password with ssh-keygen

The -p option requests changing the passphrase of a private key file instead of creating a new private key. The program will prompt for the file containing the private key, for the old passphrase, and twice for the new passphrase. Use the -f {filename} option to specify the key file. For example, change directory to .ssh:

$ cd .ssh

To change your ssh-key passphrase, enter:

$ ssh-keygen -f id_{rsa or dsa} -p
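
For example, for an RSA key (run from inside .ssh as above, or give the full path to the key file):

$ ssh-keygen -p -f id_rsa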


