
Pages tagged: ZFS

Debian upgrade - Buster to Bullseye

The time came to upgrade from Debian Buster to Bullseye.

The new stable Debian came out this summer. I was in no particular hurry to upgrade, but it was on my TODO list.

I upgraded the laptop first, where there were no problems, but there is not much configured on it anyway, since I rarely use it.

My desktop was a complex mix of:

  • Buster (main)
  • Buster-Backports
  • Bullseye
  • Testing
  • Unstable
  • Custom repositories

It serves 3 seats with three video cards, each with several monitors, its own keyboard and mouse, sound cards, printers and more.

There are active Docker and LXC containers. On top of that there are VirtualBox, systemd-nspawn, snapd... Over the years all sorts of network configurations have been tested on it for one thing or another: bridges, firewalls, proxies, load balancers, web and file servers, file systems...

The mixed system was held together by a complex combination of apt_preferences, holds and sources.list entries.
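A setup like this is usually held together with pin priorities. A minimal sketch of such an /etc/apt/preferences.d/ fragment (the exact priorities and filename here are illustrative, not my actual config):

```
# /etc/apt/preferences.d/mixed-releases  (sketch)
# Prefer buster; pull from backports and bullseye only when asked to.
Package: *
Pin: release n=buster
Pin-Priority: 900

Package: *
Pin: release n=buster-backports
Pin-Priority: 400

Package: *
Pin: release n=bullseye
Pin-Priority: 300
```

Anything below priority 500 is never upgraded to automatically, which is what keeps the extra releases from taking over the base system.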

I would say the upgrade went very smoothly, and the things I expected trouble from (the binary NVidia drivers, root on ZFS, KDE, multi-seat) all went fine.

The only non-obvious problem that came up (at this stage at least) was with akonadiserver. It kept crashing even after I completely wiped its database and configuration. The cause turned out to be a newly added AppArmor policy which assumed its files would be on the same partition as the HOME directory, while I had configured a separate per-user VOLATILE dataset for caches and the like, so they would not bloat my ZFS snapshots.

So the policy in question denied it access to the files it needed, and it crashed on every start attempt.

The quick fix was:

aa-complain /etc/apparmor.d/usr.bin.akonadiserver

Proper planning made reverting the configuration easy with a single zfs rollback to the snapshot I took before the upgrade.
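The safety net can be sketched like this. The dataset name rpool/ROOT/debian is a hypothetical example, and the commands are printed as a dry run rather than executed:

```shell
# Sketch, assuming the root filesystem lives on a hypothetical
# dataset named rpool/ROOT/debian.
SNAP="rpool/ROOT/debian@pre-bullseye"
# Printed as a dry run; drop the echo to actually run them (needs root).
echo "zfs snapshot -r $SNAP"   # recursive snapshot before the upgrade
echo "zfs rollback -r $SNAP"   # revert everything if the upgrade goes wrong
```

Rolling back discards everything written after the snapshot, which is exactly what you want when an upgrade has to be undone wholesale.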

The upgrade took a few hours, maybe 5-6, most of which were spent because I insisted on following closely what was going on and tried not to skip steps that could cost me many times more lost time later.

I also went through and cleaned up various old packages and configurations.


Passing boot parameters to ScaleWay's baremetal C1 instance Linux kernel


Short story

Add tags like these to your server:

KEXEC_KERNEL=http://mirror.scaleway.com/kernel/armv7l-mainline-lts-4.9-4.9.93-rev1/vmlinuz
KEXEC_INITRD=http://mirror.scaleway.com/initrd/uInitrd-Linux-armv7l-v3.14.6
KEXEC_APPEND=vmalloc=512M

Longer story

ScaleWay's "BareMetal" C1 instance is a cheap cloud infrastructure instance at EUR 3 / month. It has:

  • 4x 32-bit armv7l cores
  • 2 GB RAM
  • 50 GB network attached storage
  • 1 public IP included in the price

ScaleWay offers two lines of servers:

  • BareMetal
  • VirtualMachines (KVM based)

One important difference between the two is that:

  • A VM can only be booted with as much storage as is included in its offer
  • Bare metal instances support attaching up to 15 additional 150 GB network block device drives (charged EUR 1/month per 50 GB)

Another important difference is that currently in the ScaleWay infrastructure, counter-intuitively:

  • Only VMs can run custom kernels
  • Bare metal servers come with pre-built kernels, and ScaleWay does not officially support changing these kernels. You can't even run the official kernel that comes with the chosen Linux distro.

Thus a problem arises when you need to change something.

My case was that I wanted to use ZFS, and it is not included in the official Linux kernel; it is instead built as a module. On standard Debian this is done easily by installing the zfs-dkms package.
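On stock Debian that boils down to something like the following, shown here as a dry run since installing needs root:

```shell
# zfs-dkms builds the module against the running kernel, so the
# matching kernel headers have to be installed alongside it.
PKGS="linux-headers-$(uname -r) zfs-dkms zfsutils-linux"
echo "apt install $PKGS"   # drop the echo to actually install (needs root)
```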

It is possible to build the module for the C1 instance kernel by preparing the build environment as described here:

The problem was that ZFS on 32-bit Linux:

  • "May encounter stability problems"
  • "May bump up against the virtual memory limit"

which is officially stated here:

I'm still yet to see the former, but I hit the latter quite fast, and as recommended I had to add the vmalloc=512M boot parameter.
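You can see the size of the kernel's vmalloc arena, and thus whether the limit is in play, straight from /proc:

```shell
# VmallocTotal shows the size of the vmalloc address space; on a
# 32-bit kernel it is small by default, which is what ZFS bumps into.
grep -i vmalloc /proc/meminfo
```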

Unfortunately ScaleWay does not support passing parameters to their kernels. They do, however, support KEXEC via the KEXEC_KERNEL and KEXEC_INITRD params as documented here:

and they support parameters to the KEXEC-ed kernel via the KEXEC_APPEND param.

I just needed to boot the same kernel and pass the parameter, so first I had to find where the current kernel and initrd are. This is done by installing "scaleway-cli":

I just grabbed the pre-built amd64 deb packages and then used the "scw" command to get info about the instance:

# list servers
$ scw ps 

# Show instance details 
$ scw inspect SERVER_ID

"bootscript": {
    "bootcmdargs": "LINUX_COMMON scaleway boot=local nbd.max_part=16",
    "initrd": "initrd/uInitrd-Linux-armv7l-v3.14.6",
    "kernel": "kernel/armv7l-mainline-lts-4.9-4.9.93-rev1",
    "dtb": "dtb/c1-armv7l-mainline-lts-4.9-4.9.93-rev1",
    ...

If you inspect a VM instance you will see that the kernel and initrd are referred to by IP:

"bootscript": {
    "bootcmdargs": "LINUX_COMMON scaleway boot=local nbd.max_part=16",
    "initrd": "http://169.254.42.24/initrd/initrd-Linux-x86_64-v3.14.6.gz",
    "kernel": "http://169.254.42.24/kernel/x86_64-mainline-lts-4.4-4.4.127-rev1/vmlinuz-4.4.127"

A Google search showed me that the kernel and the initrd were available at:

I had a problem when trying to use the image referred to in the params above:

# DO NOT USE THIS ONE
KEXEC_INITRD=http://mirror.scaleway.com/initrd/uInitrd-Linux-armv7l-v3.14.6

and I wasted a couple of hours until I realized that this image was in a different format, not usable for KEXEC_INITRD. Then I changed it to:

KEXEC_INITRD=http://mirror.scaleway.com/initrd/initrd-Linux-armv7l-v3.14.6.gz

and this time it worked fine.

The kernel can be found via at least two different URLs:

KEXEC_KERNEL=http://mirror.scaleway.com/kernel/armv7l-mainline-lts-4.9-4.9.93-rev1/vmlinuz
             http://mirror.scaleway.com/kernel/armv7l/4.9.93-mainline-rev1/vmlinuz

And after the successful boot I just had to add:

KEXEC_APPEND=vmalloc=512M

And my ZFS module was no longer complaining about lack of virtual memory.
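A quick way to confirm the KEXEC-ed kernel actually received the parameter:

```shell
# /proc/cmdline shows the parameters the running kernel was booted with.
grep -o 'vmalloc=[^ ]*' /proc/cmdline || echo "vmalloc not set"
```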

Let me add a few articles that were helpful:

I wasted about a day investigating this stuff. If you found it helpful and think I might have saved you a couple of hours, you can send me a small donation via PayPal: krustev-paypal@krustev.net


Posted in dir: /articles/
Tags: BareMetal Debian Linux ScaleWay ZFS
