What can I do to safely repair my yum packages after a bad update?

I'm running Amazon Linux on an EC2 micro instance. Recently I ran sudo yum update --security in the hope that it would patch Heartbleed. Unfortunately the instance ran out of memory during the update and some packages were not patched successfully. I tried to fix this by rebooting, then running sudo yum clean followed by sudo yum update, as shown in the pastebin below, but the dependency errors remain.

How can I fix this without breaking anything further?

Here's a snip from the yum output:

Error: initscripts conflicts with util-linux-ng-2.17.2-13.17.amzn1.i686
Error: initscripts conflicts with util-linux-ng-2.17.2-13.17.amzn1.x86_64
Error: Package: glibc-devel-2.12-1.107.43.amzn1.x86_64 (@amzn-main)
           Requires: glibc-headers = 2.12-1.107.43.amzn1
           Removing: glibc-headers-2.12-1.107.43.amzn1.x86_64 (@amzn-main)
               glibc-headers = 2.12-1.107.43.amzn1
           Updated By: glibc-headers-2.17-36.81.amzn1.x86_64 (amzn-updates)
               glibc-headers = 2.17-36.81.amzn1
           Available: glibc-headers-2.17-36.80.amzn1.x86_64 (amzn-main)
               glibc-headers = 2.17-36.80.amzn1

Here's the full console log: http://sebsauvage.net/paste/?e0f7235450f97bae#qq6QKe/Co+jR2T4FXfGo4w2H8aw7xZkE4z+iZXdMpQ8=


Solution 1:

Reinstall Failed RPMs

I've seen this happen when something fails partway through an RPM transaction. The RPM database can fall out of sync with the system, so that what is actually installed on disk and what RPM thinks is installed no longer match.
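
Before hand-picking packages, it can be worth letting yum try to finish the interrupted transaction on its own. A minimal sketch, assuming yum-utils is available in the Amazon Linux repositories (take the AMI backup from the tip below first):

# Install the yum-utils helper scripts
sudo yum install -y yum-utils

# Try to complete (or roll back) the transaction that was interrupted
sudo yum-complete-transaction

# List any duplicate package versions the failed update left behind
sudo package-cleanup --dupes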

TIP: Before doing any of this, create an AMI image so you can easily recover if things go completely wrong.
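
If you have the AWS CLI configured, creating that image looks something like this (the instance ID is a placeholder for your own):

# Snapshot the instance as an AMI before touching anything;
# --no-reboot avoids a restart at the cost of filesystem consistency
aws ec2 create-image --instance-id i-12345678 \
    --name "pre-yum-repair-backup" --no-reboot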

You can use rpm -qa --last to get a list of installed RPMs sorted by install time, with the most recently installed first.
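
For example, to see the twenty most recent:

# Installed packages sorted by install time, newest first
rpm -qa --last | head -n 20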

Then rebuild the RPM database with rpm --rebuilddb.
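
It is cheap insurance to copy the database aside first, assuming the default location of /var/lib/rpm:

# Keep a copy of the RPM database in case the rebuild makes things worse
sudo cp -a /var/lib/rpm /var/lib/rpm.backup

# Rebuild the database indexes from the installed package headers
sudo rpm --rebuilddb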

You can then use yum reinstall to reinstall any packages that were part of the failed transaction.

This should also pick up any dependency issues and try to correct them.
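
For example, using the package names from the error output above (substitute whatever your failed transaction actually touched):

# Reinstall the packages involved in the failed update
sudo yum reinstall initscripts util-linux-ng glibc-headers glibc-devel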

In some cases, I've had to resolve the conflicts manually by downloading the RPM (e.g. with yumdownloader from yum-utils) and using rpm to do the installation.

If you must fall back to manual installation with rpm, keep detailed notes of each step, especially when glibc is involved, since nearly every binary on the system depends on it.
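
A sketch of that manual path, again using glibc-headers from the output above as the example package:

# Fetch the package into the current directory without installing it
yumdownloader glibc-headers

# Install or upgrade it in place; -v and -h print verbose progress
sudo rpm -Uvh glibc-headers-*.rpm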

Recommendation

I highly recommend architecting your AWS deployments so that you can simply spin up a new EC2 instance rather than worry about problems like this. If you use dedicated EBS volumes for your data and store your configuration files elsewhere, you can often spin up a new instance and be back in operation faster than you could debug an RPM problem like this. When we have EC2 issues like this, we usually deploy a new instance from our custom AMI, remap the IPs, and move on. If needed, we can then do root-cause analysis on the failed or corrupted system without impacting production operations.
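
With the AWS CLI, that recovery path looks roughly like this (all IDs and addresses are placeholders):

# Launch a replacement instance from your custom AMI
aws ec2 run-instances --image-id ami-12345678 \
    --instance-type t1.micro --key-name my-key

# Move the Elastic IP from the broken instance to the new one
# (for a VPC instance, use --allocation-id instead of --public-ip)
aws ec2 associate-address --instance-id i-87654321 --public-ip 203.0.113.10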