Wednesday, September 29, 2021

Recent Questions - Unix & Linux Stack Exchange


Change LUKS2 password with TPM2 as key

Posted: 29 Sep 2021 10:10 AM PDT

I accidentally changed the password of my LUKS2 partition to something I can't recover, but the partition can still be decrypted with my laptop's TPM2. The problem is that I cannot use the TPM2 as a key for cryptsetup. Can someone point me in the right direction to change the password with the TPM as the key?
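One approach worth sketching (an assumption on my part, not a confirmed fix): while the TPM2-unlocked mapping is open, the volume key can be dumped from the device-mapper table and passed to luksAddKey in place of the lost passphrase. Device and mapping names below are placeholders.

# Hedged sketch: run while the TPM2-unlocked mapping (here "cryptroot") is open.
# /dev/nvme0n1p3 is a placeholder for the LUKS2 partition.
# Note: if cryptsetup put the volume key into the kernel keyring, the table shows
# a key reference instead of the hex key and this approach will not work as-is.
dmsetup table --showkeys cryptroot | awk '{ print $5 }' | xxd -r -p > /tmp/volume-key.bin
cryptsetup luksAddKey --master-key-file /tmp/volume-key.bin /dev/nvme0n1p3
shred -u /tmp/volume-key.bin   # remove the key material afterwards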

During installation of Pop!_OS 21.04, when I choose "clean install" and "custom", my drive isn't there. What should I do?

Posted: 29 Sep 2021 10:20 AM PDT

I've tried pretty much everything I can think of, so now I don't know what to do. Maybe I did something wrong in the process? It's my first time using Linux; I used a flash drive to boot Pop!_OS. Sorry if my English is bad.

Why does Linux not respond to ICMP requests over VXLAN?

Posted: 29 Sep 2021 10:18 AM PDT

I ran the following commands on each of the two machines. When I run the ping command on host B and use the tcpdump command on host A, I successfully capture the ICMP request. Why is the host not responding to the requests? How can I fix it? I've been struggling with this problem for a day now. Thank you very much for your help!


HostB -> HostA

[hostB]# ping 10.244.1.0
PING 10.244.1.0 (10.244.1.0) 56(84) bytes of data.

[hostA]# tcpdump -nvei vxlan
tcpdump: listening on vxlan, link-type EN10MB (Ethernet), capture size 262144 bytes
00:18:52.610590 c2:86:3c:fc:ed:9e > 16:89:e7:3a:2e:f7, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 13199, offset 0, flags [DF], proto ICMP (1), length 84)
    10.244.2.0 > 10.244.1.0: ICMP echo request, id 5181, seq 11, length 64

HostA -> HostB

[hostA]# ping 10.244.2.0

[HostB]# tcpdump -nevi vxlan
tcpdump: listening on vxlan, link-type EN10MB (Ethernet), capture size 262144 bytes
00:32:21.828135 16:89:e7:3a:2e:f7 > c2:86:3c:fc:ed:9e, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 57470, offset 0, flags [DF], proto ICMP (1), length 84)
    10.244.1.0 > 10.244.2.0: ICMP echo request, id 5300, seq 1, length 64

subnet=$1
ip netns add n1
ip netns add n2
# Init bridge
ip link add br0 type bridge
ip addr add 10.244.$subnet.1/24 dev br0
ip link set br0 up
# Init netns v1
ip link add v1 type veth peer name b1
ip link set v1 netns n1
ip netns exec n1 ip addr add 10.244.$subnet.2/24 dev v1
ip netns exec n1 ip link set lo up
ip netns exec n1 ip link set v1 up
ip link set b1 up
# Init netns v2
ip link add v2 type veth peer name b2
ip link set v2 netns n2
ip netns exec n2 ip addr add 10.244.$subnet.3/24 dev v2
ip netns exec n2 ip link set lo up
ip netns exec n2 ip link set v2 up
ip link set b2 up
# Binding Bridge
ip link set b1 master br0
ip link set b2 master br0
# Add vxlan
ip link add vxlan type vxlan id 1 dstport 4789 dev eth0 nolearning proxy
ip addr add 10.244.$subnet.0/32 dev vxlan
ip link set vxlan up
ip link set vxlan master br0

# Add the following (route, arp, fdb) for each of the two machines
# ip route add 10.244.2.0/24 via 10.244.2.0 dev vxlan onlink
# ip neigh add 10.244.2.0 lladdr c2:86:3c:fc:ed:9e dev vxlan
# bridge fdb append c2:86:3c:fc:ed:9e dev vxlan dst 11x.40.167.227

# ip route add 10.244.1.0/24 via 10.244.1.0 dev vxlan onlink
# ip neigh add 10.244.1.0 lladdr 16:89:e7:3a:2e:f7 dev vxlan
# bridge fdb append 16:89:e7:3a:2e:f7 dev vxlan dst 15x.75.71.186

[HostA]# sudo iptables -L -nv
Chain INPUT (policy ACCEPT 52 packets, 3764 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 43 packets, 6446 bytes)
 pkts bytes target     prot opt in     out     source               destination

[HostB]# sudo iptables -L -nv
Chain INPUT (policy ACCEPT 119 packets, 8184 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 34 packets, 45258 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 113 packets, 15056 bytes)
 pkts bytes target     prot opt in     out     source               destination
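Not part of the original question, but a hedged diagnostic sketch: with this kind of asymmetric VXLAN/bridge setup, reverse-path filtering is a common reason received ICMP requests are silently dropped, so it may be worth checking (interface names are assumed to match the script above):

# Check reverse-path filtering; a value of 1 (strict) often drops such packets
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.vxlan.rp_filter net.ipv4.conf.br0.rp_filter
# Temporarily relax it to test
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.vxlan.rp_filter=0
sysctl -w net.ipv4.conf.br0.rp_filter=0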

grub2 and duplicated drives issue

Posted: 29 Sep 2021 09:28 AM PDT

My machine is a UEFI-enabled Ubuntu 20.04 system with three partitions:

/dev/nvme0n1p1 boot (grub2+initrd+kernel)
/dev/nvme0n1p2 OS
/dev/nvme0n1p3 home

From time to time I need to attach a secondary USB HDD, which is an older replica of my integrated main NVMe drive, and reboot the PC.

The problem is that all partition names, UUIDs, etc. are identical on both drives. When the UEFI firmware boots GRUB2 from the integrated main NVMe drive, it marks that drive as hd1 and the USB HDD as hd0, so the initrd and kernel are booted from the USB HDD instead of the NVMe drive, which has the latest initrd and kernel. This is the line in my grub.cfg that causes the issue:

insmod efi_uga
insmod efi_gop
insmod gzio
insmod ext2
insmod search_label
insmod search_part_label
search --no-floppy --set root --part-label some_boot_label --hint-efi=hd0,gpt1

Do you know of a dynamic way to identify (search for) nvme0n1p1 and use it instead of the static hd0,gpt1?

device.map will not work, as it is a static file (hostdisk//dev/nvme0n1,gpt1) and reordering occurs when the USB drive is inserted. My only guess is to disable the *hci.mod modules, which load the USB devices, but I'm not sure if this is a good idea.
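For illustration only (a hedged sketch, not a verified fix): GRUB's scripting allows probing for a marker file that exists only on the NVMe boot partition, which would avoid the static hd0,gpt1 hint. The marker file name is hypothetical.

# Hedged grub.cfg sketch: /nvme_marker is a hypothetical file created only on nvme0n1p1
if [ -e (hd0,gpt1)/nvme_marker ]; then
    set root=(hd0,gpt1)
elif [ -e (hd1,gpt1)/nvme_marker ]; then
    set root=(hd1,gpt1)
fi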

Your help is very welcome

How to extract a few tags from an XML file using zgrep or sed

Posted: 29 Sep 2021 09:55 AM PDT

I have a big file, around 5 GB, compressed with .gz. Inside that file there are a few XML files containing values that I want to search for and extract, if those values are present.

For example, I want to extract the tags that contain the name NOOSS, along with the sub-content of those tags such as <pmJobId>, <requestedJobState>, <reportingPeriod> and <jobPriority>, from the .gz file:

<Pm xmlns="urnCmwPm">
    <pmId>1</pmId>
    <PmJob>
        <pmJobId>NOOSSCONTROLExample</pmJobId>
        <requestedJobState>ACTIVE</requestedJobState>
        <reportingPeriod>FIVE_MIN</reportingPeriod>
        <jobType>MEASUREMENTJOB</jobType>
        <jobPriority>HIGH</jobPriority>
        <granularityPeriod>FIVE_MIN</granularityPeriod>
        <jobGroup>Sla</jobGroup>
        <reportContentGeneration>CHANGED_ONLY</reportContentGeneration>
        <MeasurementReader>
            <measurementReaderId>mr_2</measurementReaderId>
            <measurementSpecification struct="MeasurementSpecification">
                <measurementTypeRef>Anything</measurementTypeRef>
            </measurementSpecification>
            <thresholdRateOfVariation>PER_SECOND</thresholdRateOfVariation>
        </MeasurementReader>
        <MeasurementReader>
            <measurementReaderId>mr_1</measurementReaderId>
            <measurementSpecification struct="MeasurementSpecification">
                <measurementTypeRef>ManagedElement=1,SystemFunctions=1,Pm=1,PmGroup=OSProcessingLogicalUnit,MeasurementType=CPULoad.Total</measurementTypeRef>
            </measurementSpecification>
            <thresholdRateOfVariation>PER_SECOND</thresholdRateOfVariation>
        </MeasurementReader>
    </PmJob>
</Pm>

I was using cat *gz | zgrep -a "PmJobId", but the output only shows the <pmJobId> value and not the rest of the information or tags.
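A hedged awk sketch of what seems to be intended here, assuming the decompressed XML has one tag per line and that the blocks of interest are delimited by <PmJob>...</PmJob> (names taken from the sample above):

zcat *.gz | awk '
    /<PmJob>/   { buf = ""; keep = 0 }           # start buffering a new PmJob block
                { buf = buf $0 ORS }             # collect every line of the block
    /NOOSS/     { keep = 1 }                     # remember blocks mentioning NOOSS
    /<\/PmJob>/ { if (keep) printf "%s", buf }   # print only the matching blocks
'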

Please help; I'm a noob at this.

I'm using CentOS / Red Hat Linux.

Thanks

Obtaining balloon memory statistics within a Linux vm (kvm)?

Posted: 29 Sep 2021 09:05 AM PDT

Does anyone know how to obtain balloon memory statistics within a vm? I've scoured google, stack overflow, twitter, and the like.

I'm attempting to set up a monitor to pull the metric, but I'm at a loss as to where the metric is located. I would assume there is a metric somewhere...

How do I recursively run "chgrp" without changing the group if it matches a specific group?

Posted: 29 Sep 2021 08:50 AM PDT

I just copied all the files/subdirectories in my home directory to another user's home directory.

Then I did a recursive chown on his home directory, so that he became the owner of all his files/subdirectories.

The last thing I need to do is a recursive chgrp on his home directory, so that his username will be the group for all his files/subdirectories, instead of my username.

The issue is that there are a couple of subdirectories whose group is "docker". Inside these subdirectories, there are some files/directories whose group is my username, and some other files/directories whose group is "docker".

How do I recursively run chgrp on his home directory so that every single file/subdirectory whose group is my username gets changed to his username, but every single file/subdirectory whose group is "docker" stays "docker"?
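For what it's worth, a minimal sketch of how this can be expressed with find (usernames are placeholders: "olduser" for my account, "newuser" for his). Only files whose group is currently olduser are touched, so anything in the docker group is left alone.

# Hedged sketch: change the group only where it currently matches "olduser"
find /home/newuser -group olduser -exec chgrp newuser {} +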

Redirect an application's sound to a VLC stream in Linux

Posted: 29 Sep 2021 07:38 AM PDT

I'd like to stream the sound from my computer (ideally from a specific app such as Rhythmbox, Firefox or Spotify) through a VLC stream.

Do you know how this can be achieved? I need this to create some sort of home-made radio where we could all listen to the same music at the same time when on the same network.

VLC alone does not seem to help, as it needs a file or an input device. Perhaps there is a way to pipe audio streams into VLC? Or to expose the audio as a virtual file that VLC could use (everything is a file)?

My config: latest Ubuntu or Manjaro, both using pulseaudio.
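A rough, hedged sketch of one possible pipeline (the PulseAudio source name and the VLC output chain are placeholders, and option names may vary between VLC versions): capture the monitor of an output with parec and feed the raw PCM to VLC for HTTP streaming.

# List sources; monitors of sinks end in ".monitor"
pactl list short sources

# Hypothetical monitor name; adjust to the output device actually in use
MONITOR="alsa_output.pci-0000_00_1f.3.analog-stereo.monitor"

parec -d "$MONITOR" | \
  cvlc --demux=rawaud --rawaud-channels=2 --rawaud-samplerate=44100 - \
       --sout '#transcode{acodec=mp3,ab=128}:standard{access=http,mux=mp3,dst=:8080/stream.mp3}'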

Thanks in advance!

Group and count file names following a pattern

Posted: 29 Sep 2021 08:43 AM PDT

I have a large number of files in a folder with a specific naming system. It looks somewhat like this:

my_file_A_a.txt
my_file_A_d.txt
my_file_A_f.txt
my_file_A_t.txt
my_file_B_r.txt
my_file_B_x.txt
my_file_C_f.txt
my_file_D_f.txt
my_file_D_g.txt
my_file_E_r.txt

I would like a command line, or a series of commands (can use temp files, I have write access), that would return something like:

A: 4
B: 2
C: 1
D: 2
E: 1

It could be done with a lot of ls -1 *A* | wc -l commands, but it would take a long time as there are a few hundred "groups" to count.

Also, each group name is unique. There is an A group, a B group, but no AB group.
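A minimal sketch of one way to do this in a single pipeline, assuming names always follow the my_file_<GROUP>_<suffix>.txt pattern shown above:

# Strip the prefix and the trailing "_<suffix>.txt", then count each group
ls my_file_*_*.txt \
  | sed 's/^my_file_\(.*\)_[^_]*\.txt$/\1/' \
  | sort | uniq -c \
  | awk '{ print $2 ": " $1 }'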

rsync failed to set permissions for a local copy ("Function not implemented")

Posted: 29 Sep 2021 08:48 AM PDT

There are lots of similar questions out there, but none seems to address my problem: every time, the culprit is a legitimate permission issue, or an incompatible filesystem, none of which makes any sense here.

I'm transferring a file locally, on an ext4 filesystem, using rsync. A minimal example is:

cd /tmp
touch blah
mkdir test
rsync -rltDvp blah test

which returns the error:

rsync: [receiver] failed to set permissions on "/tmp/test/.blah.Gyvvbw": Function not implemented (38)  

and the files have different permissions:

-rw-r--r-- 1 ted ted 0 Sep 29 15:49 blah
-rw------- 1 ted ted 0 Sep 29 15:49 test/blah

I'm running rsync as user ted and the filesystem is ext4, so it should support permissions just fine. Here is the corresponding line from df -Th:

Filesystem                  Type  Size  Used Avail Use% Mounted on
/dev/mapper/c--3px--vg-root ext4  936G  395G  494G  45% /

I'm running rsync 3.2.3 protocol version 31 on Debian Sid, kernel 5.10.0-6-amd64.

Edit: well, I'll be damned. apt-get update && apt-get upgrade, which apparently upgraded rsync (to version 3.2.3-8), fixed the problem.

Can't wakeup Raspberry Pi if unused for more than a day

Posted: 29 Sep 2021 07:07 AM PDT

Folks at the Raspberry Pi board forwarded me over here.

I'm using a Raspberry Pi to run OnionShare and ProtonVPN on a CanaKit Raspberry Pi 4. I check it once a day. Moving the mouse wakes it up. But if I miss a day, moving the mouse does not wake it up. Likewise for the keyboard. It doesn't matter which USB ports they are plugged into. Trying to access it via Bluetooth also doesn't help. So I have to unplug it and boot up again.

Any idea how I can avoid having to unplug it, so I can just keep waking it up by moving the mouse?

Thanks!

How to find two different files with the find command

Posted: 29 Sep 2021 07:45 AM PDT

I'm stuck at a point where my script should find two different files. One of them has a timestamp like D210929, the other one like 20210929. I have these two files:

HGIS4C.IOPZ.IP4.CCCP.D210929.S004596.IO99999.19992.1111.CCCP.IP9999
HGS4C.SCS.CCA1.TSILocationContactData20210929.zip

My question is: how can I find these two files with the find command? E.g.:

find . -name "TBSI4C.[SCS]*.[D]${DATE}" | grep -c TBSI4C 2>/dev/null  
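As an illustration (a hedged sketch, with $DATE assumed to hold a value like 20210929), both name patterns can be combined in one find call:

DATE=20210929
SHORT="D${DATE#20}"   # e.g. D210929, matching the first file's timestamp style
find . -maxdepth 1 -type f \( -name "*.${SHORT}.*" -o -name "*${DATE}*" \)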

Bash script help to check that an NFS mount exists [rsnapshot]

Posted: 29 Sep 2021 08:26 AM PDT

I have two Linux servers; server two is a backup for server one, and a directory from server two is NFS-mounted on server one.

I use rsnapshot on server one to copy from /data/ to the nfs mounted folder /bkup from server two.

The problem is that if the NFS /bkup mount isn't there, rsnapshot will copy /data (20 TB) onto the root partition (1 TB).

Instead of cron'ing my one call to launch rsnapshot, I would like to call a backup script that first checks everything before calling rsnapshot, to prevent that scenario. I don't think rsnapshot's no_create_root is relevant because the /bkup folder will always exist. Can the following happen in a bash script? I'm hoping someone fluent in Bash can type it up in two minutes; my Bash writing is horrible.

if ( showmount -e server_two responds with "/bkup server_two" )
{
    if ( check if /bkup is nfs mounted == true )
    {
        /usr/bin/rsnapshot daily
    }
    else
    {
        mount /bkup
        if ( check if /bkup is nfs mounted === true )
        {
            /usr/bin/rsnapshot daily
        }
    }
}

Right now I see the following when my NFS /bkup mount is good on server_one:

mount | grep bkup

server_two:/bkup on /bkup type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.2,local_lock=none,addr=192.168.1.1)

df -h | grep bkup

server_two:/bkup   15T  3.0T   12T  21% /bkup

showmount -e server_two

Export list for server_two:
/bkup server_one
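A hedged Bash sketch of the pseudocode above (the server name and export path are placeholders; mountpoint(1) from util-linux does the "is it mounted" check):

#!/bin/bash
# Only run rsnapshot if the /bkup export is visible and actually mounted
if showmount -e server_two | grep -q '^/bkup '; then
    if ! mountpoint -q /bkup; then
        mount /bkup
    fi
    if mountpoint -q /bkup; then
        /usr/bin/rsnapshot daily
    else
        echo "/bkup is not mounted, skipping backup" >&2
    fi
fi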

Can I use an ext4 home partition with a Btrfs file system?

Posted: 29 Sep 2021 07:43 AM PDT

Right now, I have a somewhat complex partition table:

p1 - fat32 EFI System Partition
p2 - Microsoft Reserved Partition // W10
p3 - NTFS OS  // W10
p6 - Ext4 home partition  // Currently used by Ubuntu
unallocated space // Want to use for Fedora root
p7 - Ext4 Root Partition // Currently used by Ubuntu
p5 - Swap Partition
p4 - NTFS Recovery  // W10

Now I want to install Fedora in a triple-boot setup (I will shortly remove Ubuntu). I used the default Btrfs partitioning and created a root partition, and I want to share the same home partition between Fedora and Ubuntu. But when I set the ext4 home partition at the /home mount point, the Anaconda installer just uses the Btrfs root partition I created earlier instead of the home partition.

So, does the Btrfs file system require all partitions to be of Btrfs type?
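For reference, a hedged sketch of what the resulting /etc/fstab line could look like if the existing ext4 partition (p6 above) is kept as /home alongside a Btrfs root (the UUID is a placeholder):

# ext4 home partition mounted under a Btrfs root
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2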

using "grep", to match a list of IDs in a file to match with another file

Posted: 29 Sep 2021 07:30 AM PDT

I have been using various formats suggested here on the forum like this one:

grep -f file1.txt file2.txt > ouput.txt  

file1.txt contains a list of IDs in one column e.g.:

15002345234
15001234214

file2.txt contains tab delimited columns, one column including the IDs, and other columns containing other information.

1500349850 1 3 father  

I have tried shell loops, awk and sed commands suggested in other posts. But essentially I only get results for one ID:

150982309750 1 2 2  4  

It is always the same one as well, whereas there should be many results in the output, e.g.:

150982309750 1 2 2  4
150563524856 1 3 2  2
150864364612 2 1 2  2

Any ideas what I am doing wrong?
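A hedged awk sketch of the exact-match variant of this (assuming the ID is the first whitespace-separated column of file2.txt), which avoids the partial-match pitfalls of plain grep -f:

# Collect the IDs from file1, then print file2 lines whose first field is one of them
awk 'NR == FNR { ids[$1]; next } $1 in ids' file1.txt file2.txt > output.txt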

Rsyslog - Change Default Log Directory (/var/log) for multiple clients

Posted: 29 Sep 2021 08:22 AM PDT

I have 2 Clients connected to my rsyslog server. I want to change the default log directory for each client. So client A writes to /var/log/ClientA and client B writes to /var/log/clientB.

I am looking forward to your help, as I can't get it implemented that way.
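Not a verified configuration, but a hedged sketch of the usual dynamic-file approach in rsyslog (the template name, drop-in file name and log file layout are placeholders):

# /etc/rsyslog.d/10-remote-clients.conf (hypothetical file name)
template(name="PerHostLog" type="string" string="/var/log/%HOSTNAME%/syslog.log")

if $fromhost-ip != '127.0.0.1' then {
    action(type="omfile" dynaFile="PerHostLog")
    stop
}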

Regards

Is there a way to send all shell script output to both the terminal and a logfile, *plus* any text entered by the user?

Posted: 29 Sep 2021 07:02 AM PDT

I want to send output of a shell script, including user-entered text, to the terminal and a logfile.

I thought some combination of tee and exec might do it, but I've had no luck so far. I know tee by itself can echo and capture what the user enters in the terminal:

$ tee logfile
Hello  (I entered this at runtime)
Hello  (I entered this at runtime)
^C

$ cat logfile
Hello  (I entered this at runtime)

But I need to see (on both terminal and in the logfile) what the user enters in response to commands invoked within the shell script.

tee doesn't seem to be able to do that consistently.

For example:

$ read message 2>&1 | tee logfile
Hello  (I entered this at runtime)

$ cat logfile

Nothing was captured there. I expected to see Hello (I entered this at runtime) in the file just like before.

I also tried combining tee with exec in the shell script like so:

$ cat test.bash
#!/bin/bash
# Note: in this simplified version of this file, I'm not looking at $1, $2, or anything else passed in, but will need to eventually

rm -f logfile
exec &> >(tee -a logfile)
echo "Say \"Hello\"" 2>&1
read -p "> " 2>&1

Unfortunately, adding exec did not help:

$ ./test.bash
Say "Hello"
> Hello  (I entered this at runtime)

$ cat logfile
Say "Hello"
>

As you can see, it captured the output of the echo command and the read command, but not what I entered into the terminal in response to the read command.

Is there a way to do it?

I know the script command ("make typescript of terminal session") can capture everything on the screen and put it in a logfile. But the script command can't be invoked in a useful way from within a shell script. (Can it?)

script needs to be invoked first, and then the user has to invoke the desired shell script. But I want the user to only have to invoke one command, with its parameters, and then have the command take care of running everything else and logging everything.
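For what it's worth, a hedged sketch of how a script can re-exec itself under script(1), so the user still runs a single command while the whole session (including typed input) lands in a logfile; the environment variable name is arbitrary:

#!/bin/bash
# If not already running under script(1), restart this script inside it
if [ -z "$INSIDE_SCRIPT" ]; then
    export INSIDE_SCRIPT=1
    exec script -q -c "$0 $*" logfile
fi

echo "Say \"Hello\""
read -p "> " message
echo "You typed: $message"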

Then there's all that "extra" information (e.g. color codes, backspaces) script captures that makes it hard to read the resulting logfile in an arbitrary text editor.

I just want to see the "human-readable" characters in the logfile. And I don't want to see if the user corrected a spelling error. I just want to see that they had "Hello" on the screen when they finished editing and hit Enter. Although I suppose the extra information could be stripped out after capture.

Question on using sed, filtering data

Posted: 29 Sep 2021 09:37 AM PDT

Here is a sample text file:

store: xxx
Delete: xxx
Expires: Sat, 30 Oct 02021 13:01:57 +0100
store: xxx
Delete: xxx
Expires: Sat, 30 Oct 02021 13:01:57 +0100
store: abc
store: sdf
Expires: Sat, 30 Oct 02021 13:01:57 +0100
  • I want all three fields (store, Delete, Expires) in a CSV format.
  • If there is no Delete or Expires line/string, it should show as a null/empty field, still separated with a comma
  • The date field should be trimmed to only DD Mon YYYY, e.g. 30 Oct 2021

So far, with some help, we have the command below, but it does not work as expected.

Any help would be much appreciated.

cat list.txt | grep -E "Expires|Delete|Store" | awk '{ printf "%s\n", $2 }' | tr  '\n' ',' | sed 's/,,/\n/' | sed '$ s/.$//'  
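A hedged awk alternative that tries to follow the three requirements directly (assuming each record starts at a "store:" line, as in the sample):

awk '
    /^store:/   { if (s != "") print s "," d "," e; s = $2; d = ""; e = "" }
    /^Delete:/  { d = $2 }
    /^Expires:/ { e = $3 " " $4 " " substr($5, 2) }   # "30 Oct 02021" -> "30 Oct 2021"
    END         { if (s != "") print s "," d "," e }
' list.txt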

How to script n curl POSTs from n lines of a .txt file and save somewhere else only the lines that generated a 200 OK response

Posted: 29 Sep 2021 10:12 AM PDT

Given this cURL POST request

curl -i -s -k -X $'POST' \
    -H $'Host: api.host.it' -H $'Content-Length: 205' -H $'Sec-Ch-Ua: \"Chromium\";v=\"93\", \" Not;A Brand\";v=\"99\"' -H $'Messageid: 9d6dd58d2df24d0aa410245a' -H $'Sessionid: ada9e560ed204e85a25e5475' -H $'Devicetype: ANDROID' -H $'Interactiondate-Date: 2021-09-27' -H $'Interactiondate-Time: 20:32:37.758' -H $'Sec-Ch-Ua-Mobile: ?0' -H $'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36' -H $'Content-Type: application/json;charset=UTF-8' -H $'Accept: application/json' -H $'Sourcesystem: WEB' -H $'Businessid: bbc0a98dc23a4a84968c42e4' -H $'Channel: HOSTWEBCO' -H $'Transactionid: 3F941666A8414D3C874AC77B' -H $'Sec-Ch-Ua-Platform: \"Linux\"' -H $'Origin: https://www.host.com' -H $'Sec-Fetch-Site: same-site' -H $'Sec-Fetch-Mode: cors' -H $'Sec-Fetch-Dest: empty' -H $'Referer: https://www.host.com/' -H $'Accept-Encoding: gzip, deflate' -H $'Accept-Language: en-GB,en-US;q=0.9,en;q=0.8' -H $'Connection: close' \
    --data-binary $'{\"mount\":25,\"Method\":\"SA\",\"redirectUrlKo\":\"https://www.host.com/scarica?esito=KO\",\"redirectUrlOk\":\"https://www.host.com/scarica?esito=OK\",\"toMsisdn\":\"PARAMETER\",\"txReqDescription\":\"scarica Online\"}' \
    $'https://api.host.com/api/charge/public/init'

I need to create a Bash or Python script, or an xargs invocation, that executes the cURL command for every line inside the file numbers.txt, taking the data on each line as input for the PARAMETER field shown inside the cURL --data-binary option, and, after execution, saves only the lines that returned server code 200 OK to another file, output.txt.

I know that cURL accepts file input using --data @file.txt, but I have other fields before it and it won't work.
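A hedged Bash sketch of the loop (headers trimmed for brevity and the JSON body reduced to the field that changes; the URL is taken from the request above):

#!/bin/bash
# For each number, POST it as toMsisdn and keep it only if the server answered 200
while IFS= read -r number; do
    code=$(curl -s -k -o /dev/null -w '%{http_code}' -X POST \
        -H 'Content-Type: application/json;charset=UTF-8' \
        --data-binary "{\"toMsisdn\":\"${number}\"}" \
        'https://api.host.com/api/charge/public/init')
    [ "$code" = "200" ] && printf '%s\n' "$number" >> output.txt
done < numbers.txt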

When mounting an .img via fstab, why does it show a duplicate in the file manager (Ubuntu MATE 20.04.3)?

Posted: 29 Sep 2021 09:14 AM PDT

I have followed these procedures to mount my .img file via /etc/fstab (on Ubuntu MATE 20.04 x64).

Create .img file:

dd if=/dev/zero of=filename.img bs=1024 count=2M
sudo mkfs.ext4 filename.img

Note: this can also be done with GParted.

The problem:

Mount /etc/fstab in /mount/point:

/home/user/filename.img /home/user/vdisk ext4 defaults 0 0
# or
/home/user/filename.img /home/user/vdisk ext4 loop 0 0
# or
/home/user/filename.img /home/user/vdisk auto loop 0 0

But it always shows two devices: vdisk (mounted) and loop (not mounted) (see image).

[image]

If I try to click on this other drive showing as unmounted, I get the following message:

[image]

Why doesn't it just show the filename.img image mounted in the vdisk folder?

I would like help fixing the fstab line so that only one device appears when mounting the .img, not two.

Update:

If I run either of the following commands:

sudo mount -a
# or
sudo mount /home/user/vdisk

The same thing that I describe in my post appears.

My fstab (I have altered the UUID for security reasons):

# / was on /dev/sda2 during installation
UUID=9f92d1aa-458d-441a-b349-abcdefghijkl /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda1 during installation
UUID=F798-ABCD  /boot/efi       vfat    umask=0077      0       1
/swapfile          none            swap    sw              0       0
/home/user/filename.img /home/user/vdisk ext4 defaults 0 0

List:

sudo losetup --list | grep filename.img
/dev/loop8     0      0    1  0 /home/user/filename.img   0     512

Important:

But if I remove the /etc/fstab line, delete /dev/loop8 and mount the .img image manually (with the following commands), the described error does not appear:

sudo mount -o loop /home/user/filename.img /home/user/disk
# or
sudo mount -t ext4 -o loop /home/user/filename.img /home/user/disk

Workaround:

  1. Manually:

Mount the .img manually to an available /dev/loopXX:

losetup -f
/dev/loop8
sudo losetup -P /dev/loop8 filename.img
sudo losetup -l
/dev/loop8         0      0         0  0 /home/user/filename.img                            0     512

edit /etc/fstab and put the line:

# /path/to/loop/device       /path/to/mount/point       auto       loop       0 0
# example:
/dev/loop8 /home/user/disk ext4      defaults      0 0

and:

sudo mount -a

Note: this method is not permanent

  2. Bash script:
#!/bin/bash
mount -o loop /home/user/filename.img /home/user/disk

# sudo crontab -e
@reboot ./mount-img.sh
  3. With bindfs:
sudo mkdir /mnt/disk
# edit fstab and add the line:
/home/user/filename.img /mnt/disk ext4    defaults  0   0
sudo mount -a
sudo -u user bindfs -n /mnt/disk /home/user/disk

Summary:

  • There is no error
  • The image is mounted (manually and with fstab)

About mount:

When mounting the .img via fstab, it appears duplicated (one entry is mounted and the other is not). This does not happen when mounting the .img manually, or when putting /dev/loopXX in fstab.

Update New:

This appears to be a bug in Ubuntu Mate 20.04.3. In Ubuntu version 20.04.3 this problem is not present.

testing file managers:

affects:

  • caja
  • nemo
  • thunar

does not affect:

  • dolphin
  • nautilus

[image]

This has been reported on the Ubuntu MATE Launchpad, but Launchpad can take years to fix it. So if anyone knows the solution to this bug, thank you.

Are RC folders obsolete on Ubuntu?

Posted: 29 Sep 2021 08:40 AM PDT

I am learning Linux, using Ubuntu. I wanted to remove network management from one of the run levels. I had done this correctly before, but now, no matter how hard I try, I cannot remove a script from the desired run levels.

[image]

The rc3 folder is empty, so how can I work on run level 3?

nftables rule: No such file or directory error

Posted: 29 Sep 2021 09:53 AM PDT

I am trying to apply the nftables rule below, which I adopted from this guide:

nft add rule filter INPUT tcp flags != syn counter drop  

somehow this is ending up with:

Error: Could not process rule: No such file or directory

Can anyone spot what exactly I might be missing in this rule?
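One hedged guess, shown as a sketch: nft only adds rules to tables and chains that already exist (and names are case-sensitive), so the filter table and INPUT chain may need to be created first:

# Create the table and a base chain hooked into input, then add the rule
nft add table ip filter
nft add chain ip filter INPUT '{ type filter hook input priority 0 ; policy accept ; }'
nft add rule ip filter INPUT tcp flags != syn counter drop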

Grab ID of OS from /etc/os-release

Posted: 29 Sep 2021 07:28 AM PDT

When I cat /etc/os-release I get the following:

PRETTY_NAME="Kali GNU/Linux Rolling"
NAME="Kali GNU/Linux"
ID=kali
VERSION="2018.1"
VERSION_ID="2018.1"
ID_LIKE=debian
ANSI_COLOR="1;31"
HOME_URL="http://www.kali.org/"
SUPPORT_URL="http://forums.kali.org/"
BUG_REPORT_URL="http://bugs.kali.org/"

How would I grab kali from ID= in bash? How would I grab 2018.1 from VERSION= in bash?
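Since os-release is a shell-compatible key=value file, a minimal sketch is simply to source it and use the variables directly:

# Read the fields into the current shell (or a script) and use them
. /etc/os-release
echo "$ID"        # kali
echo "$VERSION"   # 2018.1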

`cryptsetup luksOpen <device> <name>` fails to set up the specified name mapping

Posted: 29 Sep 2021 09:07 AM PDT

HardenedArray has a helpful archlinux-installation guide at Efficient Encrypted UEFI-Booting Arch Installation.

However, I encountered difficulty early in the installation process -- specifically, at the point of opening my LUKS partition.

The command cryptsetup -c aes-xts-plain64 -h sha512 -s 512 --use-random luksFormat /dev/sda3 completes without error, but after I enter the command cryptsetup luksOpen /dev/sda3 tsundoku, /dev/mapper/tsundoku does not become available.

ls /dev/mapper lists /dev/mapper/control alone, and not also /dev/mapper/tsundoku as I would expect.

The following error message appears upon cryptsetup luksOpen /dev/sda3 tsundoku --verbose --debug:

"Trying to read ... LUKS2 header at offset .... LUKS header read failed (-22). Command failed with code -1 (wrong or missing parameters)."

Could anyone offer any hints as to the cause of this error? My attempts at online research so far haven't been fruitful.

Thanks much

--- EDIT ---

I've asked this question for help to achieve any of three goals: (1) to install arch-linux (in any manner) on a 6ish-year-old x86-64 Intel Core i5 2.50GHz ASUS; (2) more specifically, to install arch-linux securely with an encrypted partition; (3) to learn why, despite my expectations, cryptsetup luksOpen /dev/sda3 tsundoku does not create a tsundoku mapping entry in the path /dev/mapper.

I'm a newcomer to arch-linux, so although I'd prefer installing the OS with encryption, I'd settle for installing it in any way.

I haven't had much luck following the installation instructions in the official arch wiki in the past, so upon seeing HardenedArray's clearly delineated installation guide, I thought I'd give it a go -- worst case scenario being that I might encounter a problem like the one described above, whereby I might learn something new.

As for the issue, here are some more details:

As per HardenedArray's guide, I run gdisk /dev/sda and create the following partitions:

  • /dev/sda1, default, 100M, EF00
  • /dev/sda2, default, 250M, 8300
  • /dev/sda3, default, default, 8300

Then I do the following:

mkfs.vfat -F 32 /dev/sda1

mkfs.ext2 /dev/sda2

At this point, I attempt to initialize a LUKS partition and set up a mapping.

> cryptsetup --verbose -c aes-xts-plain64 -h sha512 -s 512 --use-random luksFormat /dev/sda3

Command successful

> cryptsetup -v isLuks /dev/sda3

Command successful

> ls /dev/mapper

control

> cryptsetup luksOpen /dev/sda3 tsundoku --verbose --debug

cryptsetup 2.0.0. processing "cryptsetup luksOpen /dev/sda3 tsundoku --verbose --debug"
Running command open.
Locking memory.
...
Trying to load any crypt type from device /dev/sda3.
Crypto backend ... initialized ...
Detected kernel Linux 4.14.9-1-ARCH x86_64.
...
Reading LUKS header of size 1024 from device /dev/sda3.
...
Activating volume tsundoku using token -1.
STDIN descriptor passphrase entry requested.
Activating volume tsundoku [keyslot -1] using passphrase.
...
Detected dm-ioctl version 4.37.0.
Device-mapper backend running with UDEV support enabled.
dm status tsundoku [ opencount flush ] [...] (...)
Trying to open key slot 0 [ACTIVE_LAST].
Reading key slot 0 area.
Using userspace crypto wrapper to access keyslot area.
Trying to open key slot 1 [INACTIVE].
# key slots 2-7 are also [INACTIVE]
Releasing crypt device /dev/sda3 context.
Releasing device-mapper backend.
Unlocking memory.
Command failed with code -2 (no permission or bad passphrase).

> ls /dev/mapper

control

> cryptsetup luksDump /dev/sda3

LUKS header information for /dev/sda3
Version:        1
Cipher name:    aes
Cipher mode:    xts-plain64
Hash spec:      sha512
...
UUID:           56d8...
Key Slot 0: ENABLED
...
Key Slot 1: DISABLED
# Key Slots 2-7 are also DISABLED

Are the steps I've listed above inaccurate in any way? Perhaps there were alternatives I should have taken instead or intervening actions that I missed?

If not, is the command cryptsetup luksOpen /dev/sd{a} {volume} supposed to create a volume mapping in the path /dev/mapper?

If so, do the details I've added above allow anyone to ascertain why the path /dev/mapper/tsundoku does not appear on my machine? And if not, is there any additional information that I could add to make the problem clearer?

Thanks much.

Unable to add printer: Unauthorized when adding a printer using the CUPS web interface

Posted: 29 Sep 2021 08:06 AM PDT

I have set up a CUPS server with the web interface. Sadly, I'm unable to add a printer when following these steps:

  • Browser (REMOTE_SERVER_IP:631)
  • Administration tab
  • Local Printers
  • HP Printer (HPLIP)
  • Connection
  • Add Printer (name and all the good stuff)
  • Select model
  • Select Driver
  • Error

[image]

At this point I get the message: Unable to add printer: Unauthorized. My configuration file looks like this:

# Disable cups internal logging - use logrotate instead
MaxLogSize 0

# Log general information in error_log - change "warn" to "debug"
# for troubleshooting...
LogLevel warn
#PageLogFormat

Listen /run/cups/cups.sock
Listen 0.0.0.0:631
Port 631

BrowseAddress *.*.*.*:631
BrowseAllow all

# Show shared printers on the local network.
Browsing On
BrowseLocalProtocols all

# Default authentication type, when authentication is required...
DefaultAuthType None

# Web interface setting...
WebInterface Yes

# Restrict access to the server...
<Location />
  Order allow,deny
  Allow All
</Location>

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
  Allow All
</Location>

# Restrict access to configuration files...
<Location /admin/conf>
  Order allow,deny
  Allow All
</Location>

# Restrict access to log files...
<Location /admin/log>
  Order allow,deny
  Allow All
</Location>

I'm using the following Dockerfile to build and start the whole thing. I also provide a new user inside the image.

What am I missing?

How to print the file using awk

Posted: 29 Sep 2021 09:14 AM PDT

INPUT (tab delimited)

HTR12   AT1G01370       Chr1    143564  145684  +
SDG42   AT1G01920       Chr1    316128  319650  +
SDG5    AT1G02580       Chr1    544783  549202  +

OUTPUT (tab delimited)

Chr1    143564  145684  HTR12   AT1G01370       +
Chr1    316128  319650  SDG42   AT1G01920       +
Chr1    544783  549202  SDG5    AT1G02580       +

my solution is

awk -v OFS="\t" '{print $3,$4,$5,$2,$1,$6}' input > output  

But this doesn't seem like a good solution.

Understanding Bash's Read-a-File Command Substitution

Posted: 29 Sep 2021 09:40 AM PDT

I am trying to understand how exactly Bash treats the following line:

$(< "$FILE")  

According to the Bash man page, this is equivalent to:

$(cat "$FILE")  

and I can follow the line of reasoning for this second line. Bash performs variable expansion on $FILE, enters command substitution, passes the value of $FILE to cat, cat outputs the contents of $FILE to standard output, command substitution finishes by replacing the entire line with the standard output resulting from the command inside, and Bash attempts to execute it like a simple command.

However, for the first line I mentioned above, I understand it as: Bash performs variable substitution on $FILE, Bash opens $FILE for reading on standard input, somehow standard input is copied to standard output, command substitution finishes, and Bash attempts to execute the resulting standard output.

Can someone please explain to me how the contents of $FILE go from stdin to stdout?
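A small demonstration of the equivalence (a sketch using a throwaway file):

printf 'hello world\n' > /tmp/demo.txt
FILE=/tmp/demo.txt

a=$(cat "$FILE")   # an external cat writes the contents to stdout
b=$(< "$FILE")     # bash itself reads the file; no command is run
[ "$a" = "$b" ] && echo "identical output: $a"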

How to disable the automatic mute after booting in gnome?

Posted: 29 Sep 2021 07:06 AM PDT

Whenever I start up, GNOME mutes the volume automatically. How can I make it remember the volume I last set before shutting down?
