Broken LVM results in read_urandom error

I was doing something in a chroot, and unfortunately I broke the host. Now I cannot manage the volume groups:

pvs
  read_urandom: /dev/urandom: open failed: No such file or directory

The same error appears with other LVM commands. Trying to reconfigure:

# dpkg-reconfigure linux-image-4.19.0-16-amd64

/etc/kernel/postinst.d/dkms:
dkms: running auto installation service for kernel 4.19.0-16-amd64:/usr/sbin/dkms: line 3345: /dev/fd/62: No such file or directory
.
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-4.19.0-16-amd64
cryptsetup: ERROR: Couldn't resolve device 
    /dev/mapper/rootvg-root--server--alpha--host
cryptsetup: WARNING: Couldn't determine root device
cryptsetup: ERROR: Couldn't resolve device /dev/dm-1 (deleted)
cryptsetup: ERROR: Couldn't resolve device 
    UUID=e9ef352b-a648-4499-ade2-54235f40a3df
W: Couldn't identify type of root file system for fsck hook
I: The initramfs will attempt to resume from /dev/dm-1 (deleted)
I: Set the RESUME variable to override this.
/etc/kernel/postinst.d/zz-update-grub:
/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/mapper/rootvg-root--server--alpha--host'.
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 1

Trying to update the initramfs:

# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.19.0-16-amd64
cryptsetup: ERROR: Couldn't resolve device 
    /dev/mapper/rootvg-root--server--alpha--host
cryptsetup: WARNING: Couldn't determine root device
cryptsetup: ERROR: Couldn't resolve device /dev/dm-1 (deleted)
cryptsetup: ERROR: Couldn't resolve device 
    UUID=e9ef352b-a648-4499-ade2-54235f40a3df
W: Couldn't identify type of root file system for fsck hook
I: The initramfs will attempt to resume from /dev/dm-1 (deleted)
I: Set the RESUME variable to override this.

Partition layout:

# lsblk
NAME                                       MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
nvme1n1                                    259:0    0  1.8T  0 disk  
├─nvme1n1p1                                259:2    0    2M  0 part  
│ └─md1                                      9:1    0    2M  0 raid1 
├─nvme1n1p2                                259:3    0  510M  0 part  
│ └─md2                                      9:2    0  509M  0 raid1 /boot
└─nvme1n1p3                                259:4    0  1.8T  0 part  
  └─md3                                      9:3    0  1.8T  0 raid1 
    └─croot                                253:0    0  1.8T  0 crypt 
      ├─rootvg-swap--server--alpha--host 253:1    0    8G  0 lvm   
      ├─rootvg-root--server--alpha--host 253:2    0  1.5T  0 lvm   /
      ├─rootvg-root--vm1                   253:3    0  100G  0 lvm   
      ├─rootvg-root--vm2                   253:4    0   20G  0 lvm   
      ├─rootvg-root--vm3                   253:5    0   40G  0 lvm   
      └─rootvg-root--vm4                   253:6    0  100G  0 lvm   
nvme0n1                                    259:1    0  1.8T  0 disk  
├─nvme0n1p1                                259:5    0    2M  0 part  
│ └─md1                                      9:1    0    2M  0 raid1 
├─nvme0n1p2                                259:6    0  510M  0 part  
│ └─md2                                      9:2    0  509M  0 raid1 /boot
└─nvme0n1p3                                259:7    0  1.8T  0 part  
  └─md3                                      9:3    0  1.8T  0 raid1 
    └─croot                                253:0    0  1.8T  0 crypt 
      ├─rootvg-swap--server--alpha--host 253:1    0    8G  0 lvm   
      ├─rootvg-root--server--alpha--host 253:2    0  1.5T  0 lvm   /
      ├─rootvg-root--vm1                   253:3    0  100G  0 lvm   
      ├─rootvg-root--vm2                   253:4    0   20G  0 lvm   
      ├─rootvg-root--vm3                   253:5    0   40G  0 lvm   
      └─rootvg-root--vm4                   253:6    0  100G  0 lvm  

The problem started while I was chrooted into rootvg-root--vm4, maybe because of update-initramfs or because I removed files under /mnt that affected running processes on the host. Now I cannot even delete that volume, and I'm afraid of losing the server if it reboots...

I'm waiting for your support, thanks in advance.

Edit:

More actions. Trying to restart a VM:

$ sudo virsh start vm2

error: Failed to start domain vm2
error: internal error: Failed to probe QEMU binary with QMP: Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: failed to initialize KVM: No such file or directory
qemu-system-x86_64: Back to tcg accelerator

I also lost regular SSH access:

$ ssh user@ip
PTY allocation request failed on channel 0

...but I managed to log in using:

$ ssh user@ip "/bin/bash -i" 

bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
user@server-alpha-host:~$


Back up any important data. If this host is important to you, it must be possible to rebuild and recover it.
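
For example, something along these lines can copy the whole root filesystem to another machine while the host is still up. The backup host and destination path are placeholders, so adjust them to your environment; note that /boot lives on a separate filesystem here and would need its own copy:

# rsync the root filesystem elsewhere; -aAXH keeps permissions, ACLs, xattrs and hard links,
# --one-file-system stays on / and skips /proc, /sys, /dev and the other separate mounts
rsync -aAXHv --one-file-system / backupuser@backuphost:/backups/server-alpha-host/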

Determine whether any block devices were mounted where and when the deletion happened, and whether you care about the data on them. Possibly not, but just because you sent it SIGINT doesn't mean the deletion stopped at device nodes.
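
If it is no longer obvious what was mounted where, the current state is at least a starting point. Both commands below are standard util-linux tools and assume /proc is still usable:

findmnt --real                 # everything currently mounted from real block devices
lsblk -o NAME,TYPE,MOUNTPOINT  # cross-check against the device tree pasted above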

Missing /dev/urandom, /dev/kvm, block device nodes, /proc, and other devices will break many things. Reboot to get them back: devtmpfs and udev normally recreate the device nodes on every boot. The reboot also ensures that every misbehaving program is restarted, and it has to happen eventually.
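
If the host has to limp along before that reboot can be scheduled, a rough stopgap is possible. This is only a sketch: it assumes /dev was emptied but is still writable, and that the relevant kernel modules are already loaded:

mknod -m 666 /dev/urandom c 1 9     # character device 1,9 is urandom
mknod -m 666 /dev/random  c 1 8     # 1,8 is random
mknod -m 660 /dev/kvm     c 10 232  # 10,232 is kvm; chgrp it to the kvm group afterwards
dmsetup mknodes                     # recreate missing /dev/mapper nodes for the LVs
mount -t devtmpfs devtmpfs /dev     # alternative, if devtmpfs is no longer mounted on /dev at all

Treat all of that as a band-aid: only the reboot actually restores a sane device tree and restarts everything that is still holding deleted devices like /dev/dm-1 open.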