This is my tech blog. I'll document all kinds of computer related activities, which may be useful for myself in the future or even useful for others. The blog entries may refer to specific topics, which are documented in more detail here.
With the intention to grow my Linux MD RAID set, I messed up big time. I had a lot of fun fixing it though!
Sometime around 2008 I acquired 3 cheap “Samsung SpinPoint HD160JJ” disks of 160 GB. I used them to build a nice RAID5 set with an impressive net capacity of 320 GB in my PC. Because RAID5 isn't the fastest storage solution on earth, I added a bcache based SSD cache on top of the RAID set.
Whenever I do these “disk-replace-or-ssd-add” actions, I always do it in an enjoyable fashion (according to myself anyway): I create a disk image backup (dd) of the filesystem to an external USB disk, and I restore this afterwards when all the “system-or-disk-replace-or-ssd-add” work is done. This is because I am a firm believer that Linux does NOT require clean installs. That is common practice in Windows indeed, but this is not Windows!
So when I upgraded from ext3 to ext4 in the distant past, I did so by using the right tune2fs command and some fsck's. That was all there was to it.
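For reference, that migration really is just a few commands. A minimal sketch, demonstrated here on a throwaway image file so it can be tried safely; on a real system you'd point the same commands at the unmounted device instead:

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # e2fsprogs tools often live here

# Scratch image standing in for the existing ext3 filesystem:
img=$(mktemp)
truncate -s 16M "$img"
mkfs.ext3 -q -F "$img"

# Switch on the ext4 on-disk features:
tune2fs -O extents,uninit_bg,dir_index "$img"

# The mandatory fsck pass afterwards; exit code 1 just means "errors fixed":
e2fsck -fp "$img" || true
```

Existing files stay in the old format; only newly written files use extents, which is exactly why this works as an in-place upgrade.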
And upgrading my Fedora version? Just that: I upgrade Fedora. I never have to do a reinstall. In general rpm does a great job replacing software and cleaning up.
Last summer I had the impression that one of the Samsung disks suffered from old age. There were concerning SATA errors in my syslog, but after some resets the disk could always be resuscitated, after which it pulled through. Of course RAID5 can handle a deceased disk, but I felt it was time to anticipate what was about to happen. I bought 3 “Seagate BarraCuda ST1000DM010” disks of 1 TB each. They were reasonably priced and over 6 times the size of my old Samsung disks. Yay!
And of course not a single hair on my head considered doing a clean reinstall! This wasn't Windows! No, it was much easier: I replaced each Samsung disk with a Seagate disk, one at a time.
I just repeated this 3 times, after which all disks had been replaced. The RAID5 set kept working fine the whole time, and even the bcache config on top kept working!
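The per-disk swap boils down to a handful of mdadm commands. A sketch, assuming the array is /dev/md0 and its members are the second partition of each disk; the device names here are illustrative, not the actual ones:

```shell
# Copy the old disk's partition layout to the new disk (MBR tables):
sfdisk -d /dev/sda | sfdisk /dev/sdd      # old disk sda -> new disk sdd

# Swap the old member out and the new one in:
mdadm /dev/md0 --fail   /dev/sda2         # mark the old member as failed
mdadm /dev/md0 --remove /dev/sda2         # take it out of the array
mdadm /dev/md0 --add    /dev/sdd2         # add the new member, resync starts

# Wait for the rebuild to finish before touching the next disk:
cat /proc/mdstat
```

The array stays degraded but online during each resync, which is why everything on top of it keeps working.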
It suddenly occurred to me yesterday that my filesystem was kind of small. Why? Well, the original partition tables of the 160 GB Samsung disks were still in use, so I was only using 160 GB of each 1 TB disk. The solution was simple: just resize the RAID partition (the second and last partition) on each disk. So that's what I did using fdisk. And then I did a reboot…
During startup it all went horribly wrong. No RAID5 set was identified at all! The “pieces” were not identified as RAID partitions, but one “piece” was identified as the bcache config that was on top of the RAID set, and according to the Linux kernel that “piece” was perfectly usable as a bcache device?!?
And so Linux automagically did! According to the kernel the bcache config was perfectly operational. And the LUKS partition on top of that was as well. And to my surprise (after entering the LUKS password) the LVM setup on top of that could be identified too!
Unfortunately the Linux kernel wasn't able to appreciate the root LV. With Linux having done most of the work, I was supposed to take care of that myself. Only the final step.
The root LV wasn't healthy indeed. The filesystem was partly understandable, but much of it was beyond recognition. Even fsck didn't know what to do with it.
It occurred to me that it made some sense: originally the root LV was on a RAID5 set comprised of 3 partitions, and now there was only one partition left. This single partition couldn't possibly contain a complete LV. I just had to get the complete RAID set back again.
Now what had happened to my RAID set? Would it be… It shouldn't be… ? Yes. But it was! From the documentation:
“Though it used to be the default format of raid superblock during array creation on most distributions until 2009, the older version-0.90 superblock format has several limitations.” “The newer and well-supported version-1 superblock format is more-expansion friendly than the previous format. It is the default as of v3.1.1. More specifically, --metadata=1.2 is used as of v3.1.2.”
So what was going on with these superblock versions?
| Version | Superblock Position on Device       |
| 0.9     | At the end of the device            |
| 1.0     | At the end of the device            |
| 1.1     | At the beginning of the device      |
| 1.2     | 4K from the beginning of the device |
And this about says it all: “Putting the superblock at the end of the device is dangerous if you have any kind of auto-mounting/auto-detection/auto-activation of the raid contents; in some circumstances (in the case of blkid: if the superblock is damaged) the raid components could be detected as a valid filesystem (or other format) which may contain outdated data. This will desynchronise the array and compromise the data. ”
So…. I got some clear proof that my RAID5 set is old, because the RAID superblock was at the end of the RAID partitions. And when I resized the partitions, the system was no longer able to identify them as RAID partitions and identified them based on what was at the beginning of each partition!
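You can check which superblock format an existing md member uses with mdadm; the device name here is illustrative:

```shell
# Print the metadata version of an md member; old arrays report 0.90,
# modern ones 1.2.
mdadm --examine /dev/sda2 | grep -i 'version'
```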
That's what you get, when you never do a clean reinstall. Such a petty.
How to proceed? En clean reinstall? Of course not!
A agree: I could do a clean reinstall. But I have my principles! I was sure this could be solved, but I had no idea what the exact size of the disk partitions was before. That complicated stuff somewhat indeed.
So I made a small shell script to find the RAID superblock signature on the partitions. Once the script identified the superblock, I resized the partitions to the (apparent) original size.
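I no longer have the exact script, but the idea is simple enough to sketch. The v0.90 superblock starts with the magic number 0xa92b4efc (stored little-endian, so the bytes fc 4e 2b a9 on disk) and lives on a 64 KiB boundary near the end of the member. A reconstruction along those lines; the function name and details are mine, not the original script's:

```shell
# Scan a device (or image file) backwards in 64 KiB-aligned steps for the
# md v0.90 superblock magic 0xa92b4efc (on disk: fc 4e 2b a9).
# Prints the byte offset of the first (highest) match.
find_md_magic() {
  local dev=$1 size blk
  size=$(stat -c %s "$dev")   # for a real block device use: blockdev --getsize64
  for ((blk = (size / 65536) * 65536; blk >= 0; blk -= 65536)); do
    # Read 4 bytes at this offset and compare them to the magic:
    if [ "$(dd if="$dev" bs=1 skip="$blk" count=4 2>/dev/null \
            | od -An -tx1 | tr -d ' ')" = "fc4e2ba9" ]; then
      echo "$blk"
      return 0
    fi
  done
  return 1
}
```

Since the v0.90 superblock sits in the final 64-128 KiB of the member, the offset it prints tells you (roughly) where the partition used to end.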
Finally I did a reboot, after which the RAID should be back again!
Yes, indeed! During boot Linux properly identified the RAID set again! And the LUKS partition still worked too! And even all LV's were back!
So far the good news. Now for the bad news:
Apparently there were problems with the ext4 root FS. So I ran fsck, and fsck said something I had never experienced before: “Aborted”. ???? Was it that bad?
Yes, it was that bad. I had bcache running in writeback, which delivers excellent write performance - even on a RAID5 set. Unfortunately this turned against me in this case.
Although the RAID set was not identified, bcache was, and so the Linux kernel had bcache just flush whatever was on the SSD. That is business as usual, but it was a bad idea to flush data from the SSD to a single partition instead of the complete RAID5 set. Because now the flushing happened in all the wrong places!
Oh BTW, in hindsight the fsck wasn't a good idea either - I mean the fsck on the partition instead of the whole RAID set. The fsck may have additionally messed up the filesystem.
Always nice to understand what happened!
And now? A clean reinstall in the end? No way! Of course not!
There was still a lot of data OK on the filesystem, wasn't there?
And then, fingers crossed: a reboot!
Damn, Yes! The system started! Not in runlevel 5. Not in runlevel 3. But it did start in runlevel 1! And that counts as starting, doesn't it?
Once in runlevel 1 I did an “rpm --rebuilddb” followed by an “rpm -qVa” to verify the correctness of files on the root FS. I interrupted the command, because many, many files were reported as being incorrect.
So give up in the end? A reinstall anyway? Of course not!
First of all: rpm was still running, and that meant we could fix a lot. Furthermore, dnf was still working too.
I used “dnf reinstall” to reinstall all installed rpm's. Apparently the number of rpm's had grown to ~5800 through the years. Reinstalling all of them took quite a while, so I left the system on its own for a while.
After a few hours I got back to my PC and noticed that the dnf reinstall was finished. So all that was left to do: reboot! And lo and behold: it started just fine in runlevel 5 again!
I'm absolutely sure that all important data was still intact: there never was any on my PC! And that's what made it fun to stubbornly try to fix it all.
Although… no important data? I hope that all my Steam games still work!
SD cards prove to be vulnerable media when used as Raspberry Pi storage: whenever there's a power failure (e.g. when I remove the power adapter from the outlet) there's a considerable risk of running into filesystem corruption (I think this is especially the case when I remove the power during write access). Today the SD card even showed to be broken beyond repair.
So today I decided to get F2FS (Flash Friendly File System) running on my Pi. It took quite some effort to make it happen.
After that all went really smooth, though a few things should be noted.
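The core of putting F2FS on the SD card looks roughly like this. This is a generic sketch, not the exact steps I took; the device name, mount points and source path are all illustrative:

```shell
# Requires f2fs-tools and a kernel built with CONFIG_F2FS_FS.
mkfs.f2fs /dev/mmcblk0p2              # format the Pi's root partition as F2FS
mount -t f2fs /dev/mmcblk0p2 /mnt     # mount the fresh filesystem
rsync -axH /old-root/ /mnt/           # copy the old root filesystem over
# Finally, update /etc/fstab on the new root to mount / as f2fs, and reboot.
```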
After coming home from our skiing holiday, I was fed up with the nine-hour drive without internet, because data roaming is too expensive. So I ordered a Huawei E355 MiFi (mobile WiFi) which allows us to use a German or Swiss SIM for cheap internet connectivity during the drive! Now I only have to buy a German SIM (like Aldi Talk) for the drive and a Swiss SIM (like Swisscom Natel) for the chalet to actually be online.
Of course I could just replace the SIM of my phone with a German or Swiss one to achieve this, but that would imply that I'm no longer reachable on my own phone number. By means of the MiFi both my wife and I are online, while being fully reachable as well.
To test the MiFi I bought a 10 Euro *Bliep SIM. After setting up the right APN, all works like a charm! The next step is to buy a German SIM right across the border (we live nearby) and we're ready for the next long trip to Switzerland.
Of course I'm aware of alternatives like Droam (www.droam.nl), but … well … I'd like to have my own MiFi. And I think in the end it will be cheaper.
Today I finally finished my 'AirPrint gateway for Apple iOS devices' page. It wasn't all that hard, I just had to flash a fresh SD card and reinstall AirPrint step by step. Now the 'Mail gateway' page is left to document.
Since I stopped using X on the C60M-I 'server', the awkward “BUG: soft lockup - CPU#… stuck for …s” messages have gone and the machine is stable again. I'm also running a Windows 7 VM in KVM on this machine (yeah, I know, it's not the most powerful CPU), so sheer load is not the cause of the instability.
To my surprise my X hung during Firefox usage. First some video corruption, some kind of recovery and then a hang. I found messages in syslog stating something like 'BUG: soft lockup - CPU#2 stuck for 22s! [Xorg:10022]', and in the end I was forced to do a reboot. After that, none of the messages were in the syslog; seems like data loss.
The motherboard is an Asus C60M-I, and I'm running Fedora 17 x86_64. The PC normally isn't used via X since it's only meant for NAS-like purposes and firewalling. Could it be an ATI video driver issue?
Google shows me lots of “BUG: soft lockup - CPU#… stuck for …s” problems, but no clear answer. So I'll stop using X on this PC, see if the problems stay away.
Today I installed a Raspberry Pi at the office to handle all kinds of network related activities.
Having a router that routes traffic between the internet and all my PC's, phones, etc. allows me to do great traffic shaping by using the HTB qdisc in both directions. However, if your router is a PC itself, it's impossible to use HTB to do traffic shaping on its own inbound network traffic. I came up with a funny solution: use virtual Ethernet devices to introduce an additional bridge 'in front of' your PC.
More info can be found here
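The core trick, sketched with illustrative interface names (eth0 being the PC's physical NIC):

```shell
# Build a bridge 'in front of' the PC using a veth pair. Inbound traffic then
# flows eth0 -> br0 -> veth0 -> veth1 -> PC, so shaping the *egress* of veth0
# effectively shapes the PC's *ingress* with HTB.
ip link add veth0 type veth peer name veth1
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 veth0
ip link set br0 up; ip link set veth0 up; ip link set veth1 up

# The PC's IP configuration moves from eth0 to veth1. Then shape inbound:
tc qdisc add dev veth0 root handle 1: htb default 10
tc class add dev veth0 parent 1: classid 1:10 htb rate 8mbit  # example rate
```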
I had this idea of doing traffic shaping by adding a single-NIC router (e.g. a Raspberry Pi) to an existing network.
After struggling for quite a while with the fact that the return packets arrived at the router but got lost, it became clear that disabling iptables on bridging was the solution. See also https://bugzilla.redhat.com/show_bug.cgi?id=887652.
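The fix amounts to telling the kernel not to pass bridged frames through the iptables chains (these sysctls are available when the bridge netfilter functionality is present):

```shell
# Stop bridged frames from traversing the iptables/ip6tables chains:
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
# Persist by putting the same keys in /etc/sysctl.conf (or /etc/sysctl.d/).
```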
Today I started a tech blog. It's a kind of experiment, I'm not sure if it'll really help me, but I expect it will. Just because I'm familiar with it, I'm using dokuwiki as a tool to serve my blog. It's not what wiki's are meant for, so I'll find out if this works.