Friday, September 10, 2021

Recent Questions - Unix & Linux Stack Exchange


Sed not working with / in the content itself [duplicate]

Posted: 10 Sep 2021 10:06 AM PDT

I need some help changing this config file via sed; for some reason it's not working.

I think it's because I have a / in the content.

CONFIG="# 100.25.255.255/8 10.0.0.1 fd00:00:00::1"

CURRENT_IP_RANGE="100.25.255.255/8"
FINAL_IP_RANGE="10.0.0.0/8"

sed "1s/${CURRENT_IP_RANGE}/${FINAL_IP_RANGE}/" ${CONFIG}
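
One likely cause is that the / inside the variables terminates the s command early; sed accepts any delimiter character, so picking one that cannot appear in the data avoids the problem. A sketch (using stdin, since CONFIG above holds the config text rather than a file name, which is itself a second problem with the command):

```shell
CURRENT_IP_RANGE="100.25.255.255/8"
FINAL_IP_RANGE="10.0.0.0/8"

# Any character after "s" can delimit the expression; | avoids clashing
# with the / in the CIDR ranges. sed reads a file name or stdin, not text.
printf '%s\n' "# 100.25.255.255/8 10.0.0.1 fd00:00:00::1" |
  sed "1s|${CURRENT_IP_RANGE}|${FINAL_IP_RANGE}|"
# → # 10.0.0.0/8 10.0.0.1 fd00:00:00::1
```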

Repair Grub Bootloader for Windows-Only Boot (After Removing Linux from Dual-Boot)

Posted: 10 Sep 2021 09:57 AM PDT

I had a dual-boot (Ubuntu+Win10) configuration on my laptop (single disk). Due to space limitations, I had to delete the Ubuntu partitions (root, home, swap in an extended partition) and extend the big NTFS partition to the full disk. Unfortunately, the Grub configuration was stored in the Ubuntu partition, so I was not able to boot any more; I only got to the grub rescue CLI.

During repair attempts using a Ubuntu 20.04 live USB stick, I got to the point where I now have sda1 (NTFS 500MB), sda2 (Windows 10, 250GB), sda3 (Windows recovery) and sda4 (300MB ext4 designated as /boot partition for Grub). With the latter I try to repair the existing configuration. I installed grub using grub-install --root-directory /mnt/sda4 /dev/sda from this post.

I am able to boot into the Grub CLI (2.04). With the commands from this post, I can boot into Win10 again:

insmod chain
insmod ntfs
set root=(hd0,msdos1)
chainloader +1
boot

But at this point I am stuck. I do not see how to install the Grub menu permanently again. Commands like update-grub or grub2-mkconfig cannot usefully be run from the live stick. I tried chroot, but was not successful. All manuals that I found assume that a Linux OS is still installed, which seems to be the "basis" of chroot. I want to end up with a running Windows-only configuration. Right now it would be OK if that is with a Grub bootloader. It would be better to use the Windows bootloader, but I do not seem to be able to start recovery mode despite hitting F8 on Windows boot, and I have no Windows recovery media either.

So does someone have a clue on how to permanently add the Windows boot entry to Grub in this scenario? A quick workaround which does not require me to type the five commands on every boot would also be fine. Thanks :-)
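
The five interactive commands above can, in principle, be made permanent by writing a static grub.cfg onto the sda4 /boot partition instead of generating one with update-grub. A sketch only (the path and partition numbers are taken from the question; untested against this exact setup):

```
# /boot/grub/grub.cfg on sda4 (e.g. mounted at /mnt/sda4 during repair)
set default=0
set timeout=5

menuentry "Windows 10" {
    insmod chain
    insmod ntfs
    set root=(hd0,msdos1)
    chainloader +1
}
```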

TCPDUMP w/SNAT Configuration Shows Private IP on ICMP Reply

Posted: 10 Sep 2021 10:36 AM PDT

Problem: TCPDUMP icmp reply inexplicably has the private address. I would expect it to have the public address.

[router.box(1.2.3.4)]$ tcpdump -n -i br1 icmp
10:42:21.689215 IP 1.2.3.4 > 8.8.8.8: ICMP echo request, id 2935, seq 1, length 64
10:42:21.696828 IP 8.8.8.8 > 10.0.0.1: ICMP echo reply, id 2935, seq 1, length 64

I have configured my Linux box to perform SNAT on packets leaving bridge interface br1:

[router.box(1.2.3.4)]$ iptables -t nat -L -n -v
Chain POSTROUTING (policy ACCEPT 75970 packets, 4560K bytes)
 pkts bytes target     prot opt in     out     source               destination
   62  3816 SNAT       all  --  *      br1     0.0.0.0/0            0.0.0.0/0            to:1.2.3.4

The outgoing ICMP packet correctly has its source address changed from 10.0.0.1 to 1.2.3.4, but the ICMP reply packet shows as already translated back to the private address (10.0.0.1):

[local.box(10.0.0.1)]$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=112 time=8.06 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=112 time=7.97 ms

[router.box(1.2.3.4)]$ tcpdump -n -i br1 icmp
10:42:21.689215 IP 1.2.3.4 > 8.8.8.8: ICMP echo request, id 2935, seq 1, length 64
10:42:21.696828 IP 8.8.8.8 > 10.0.0.1: ICMP echo reply, id 2935, seq 1, length 64

My network configurations are as follows:

[router.box(1.2.3.4)]$ ip a
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
36: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 1.2.3.4/26 ...
36: br3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.0.0.2/24 ...

[router.box(1.2.3.4)]$ ip route
default via <gateway address> dev br1
10.0.0.1/24 dev br3 proto kernel scope link src 10.0.0.2

[local.box(10.0.0.1)]$ ip a
36: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.0.0.1/24 ...

[local.box(10.0.0.1)]$ ip route
default via 10.0.0.2 dev eno1

Am I misunderstanding where tcpdump collects its packets? Does it capture after the address has already been translated back to the private source address?

EDIT:
It looks like tcpdump on the physical interface (eno2) produces the expected result:

[router.box(1.2.3.4)]$ sudo tcpdump -n -i eno2 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes
13:33:49.331086 IP 1.2.3.4 > 8.8.8.8: ICMP echo request, id 3011, seq 1, length 64
13:33:49.338641 IP 8.8.8.8 > 1.2.3.4: ICMP echo reply, id 3011, seq 1, length 64

So does SNAT get applied after it enters the physical interface (eno2) and before the bridge interface (br1)?

Add application menu entries in Fedora 34

Posted: 10 Sep 2021 09:32 AM PDT

After upgrading to Fedora 34, the Activities menu disappeared and a new Applications menu has taken its place in the upper left corner. There is no entry in the menu for the basic GNOME terminal (or any terminal of any kind). So to start a terminal, the only way is to browse in the Files app to /usr/bin/ and start the terminal from its binary file. Since there is no longer an "add to favorites" feature, I can't add anything to this new-and-incomplete menu. How can I conveniently start a terminal, or any other program that has been disregarded during construction of the Applications menu?

fedora 34 desktop

How can I fix LVM PV size after a botched encrypted partition shrinking

Posted: 10 Sep 2021 09:25 AM PDT

I apparently messed up today.

I had to resize an encrypted root partition to make room for a Windows dual boot. I followed instructions from the Arch wiki since they seemed to match my needs, even though I am using Debian. At some point I had to use pvmove because, after shrinking the root partition, the free space was between my root and swap partitions. I thought it all went well, but I apparently messed up my sector/bytes/stuff calculations at some point. Right now the machine is booted from a live Debian USB key, and this is the output of what I think are the relevant shell commands.

user@debian:~$ sudo lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0           7:0    0   2.3G  1 loop  /usr/lib/live/mount/rootfs/filesystem.s
sda             8:0    0   3.6T  0 disk
sdb             8:16   1 114.6G  0 disk
sdc             8:32   1  28.9G  0 disk
├─sdc1          8:33   1   2.5G  0 part  /usr/lib/live/mount/medium
└─sdc2          8:34   1   2.6M  0 part
nvme0n1       259:0    0   3.6T  0 disk
├─nvme0n1p1   259:1    0   512M  0 part
├─nvme0n1p2   259:2    0   488M  0 part
└─nvme0n1p3   259:3    0   3.5T  0 part
  └─cryptdisk 253:0    0   3.5T  0 crypt  # this is where the "fun" happens

So, I managed to free 100G for windows, looks good so far. But...

user@debian:~$ sudo cryptsetup luksOpen /dev/nvme0n1p3 cryptdisk
Enter passphrase for /dev/nvme0n1p3:
user@debian:~$ sudo vgchange -a y licorne-vg
  WARNING: Device /dev/mapper/cryptdisk has size of 7602233344 sectors which is smaller than corresponding PV size of 7602235392 sectors. Was device resized?
  WARNING: One or more devices used as PVs in VG licorne-vg have changed sizes.
  device-mapper: reload ioctl on  (253:2) failed: Invalid argument
  1 logical volume(s) in volume group "licorne-vg" now active
user@debian:~$ sudo lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0                    7:0    0   2.3G  1 loop  /usr/lib/live/mount/rootfs/filesystem.squashfs
sda                      8:0    0   3.6T  0 disk
sdb                      8:16   1 114.6G  0 disk
sdc                      8:32   1  28.9G  0 disk
├─sdc1                   8:33   1   2.5G  0 part  /usr/lib/live/mount/medium
└─sdc2                   8:34   1   2.6M  0 part
nvme0n1                259:0    0   3.6T  0 disk
├─nvme0n1p1            259:1    0   512M  0 part
├─nvme0n1p2            259:2    0   488M  0 part
└─nvme0n1p3            259:3    0   3.5T  0 part
  └─cryptdisk          253:0    0   3.5T  0 crypt
    └─licorne--vg-root 253:1    0   3.5T  0 lvm

Panic intensifies... 253:2 was my encrypted swap partition which was part of this cryptdisk.

user@debian:~$ sudo pvdisplay /dev/mapper/cryptdisk
  WARNING: Device /dev/mapper/cryptdisk has size of 7602233344 sectors which is smaller than corresponding PV size of 7602235392 sectors. Was device resized?
  WARNING: One or more devices used as PVs in VG licorne-vg have changed sizes.
  --- Physical volume ---
  PV Name               /dev/mapper/cryptdisk
  VG Name               licorne-vg
  PV Size               3.54 TiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              928007
  Free PE               0
  Allocated PE          928007
  PV UUID               x5fLwB-qnhM-qc4x-y28f-FdDM-pFGI-9I6SYh

user@debian:~$ sudo lvs
  WARNING: Device /dev/mapper/cryptdisk has size of 7602233344 sectors which is smaller than corresponding PV size of 7602235392 sectors. Was device resized?
  WARNING: One or more devices used as PVs in VG licorne-vg have changed sizes.
  LV     VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   licorne-vg -wi-a-----  <3.54t
  swap_1 licorne-vg -wi------- 976.00m

user@debian:~$ sudo dmesg | grep device-mapper
[   99.652244] device-mapper: uevent: version 1.0.3
[   99.652317] device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised: dm-devel@redhat.com
[  100.537014] device-mapper: table: 253:2: dm-0 too small for target: start=7600236544, len=1998848, dev_size=7602233344
[  100.537016] device-mapper: core: Cannot calculate initial queue limits
[  100.537027] device-mapper: ioctl: unable to set up device queue for new table.
[ 1451.395603] device-mapper: table: 253:2: dm-0 too small for target: start=7600236544, len=1998848, dev_size=7602233344
[ 1451.395605] device-mapper: core: Cannot calculate initial queue limits
[ 1451.395956] device-mapper: ioctl: unable to set up device queue for new table.

Is this LVM/LUKS setup in a recoverable state? I think licorne--vg-root is intact and only the swap partition suffered, which would be OK, right? What steps should I follow here to fix things? Thanks for your help.
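
For orientation, the usual shape of a fix in this state (a sketch only; these commands are destructive, so double-check against the LVM man pages and your backups first) is to drop the swap LV that no longer fits inside the shrunken device, then shrink the PV metadata to the size the warning reports:

```
# Remove the swap LV whose extents fall past the end of the shrunken device
# (swap is easy to recreate; licorne-vg/root is left untouched):
lvremove licorne-vg/swap_1

# Shrink the PV metadata to the actual device size from the warning
# (7602233344 sectors on /dev/mapper/cryptdisk):
pvresize --setphysicalvolumesize 7602233344s /dev/mapper/cryptdisk
```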

Using tee in apache error logs to write to local and also syslog but date function is not working

Posted: 10 Sep 2021 09:19 AM PDT

In Apache httpd.conf.

works:

ErrorLog "|/var/apache/bin/rotatelogs -f /usr/HTTPLogs/apache/errors.%Y.%m.%d 86400"

output:

-rw-r----- 1 root system 48919 Sep 10 12:08 errors.2021.09.10

Now trying to write error logs to a local folder and also to syslog:

ErrorLog "|tee /var/apache/bin/rotatelogs -f /usr/HTTPLogs/apache/errors.%Y.%m.%d 86400 | /usr/bin/logger -thttpd -plocal6.err"

output:

-rw-r----- 1 root system 15941 Sep 10 12:32 errors.%Y.%m.%d

Is there any way to get the date placeholders to work together with tee?
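
Note that tee in the line above treats the rotatelogs path and every argument after it as a plain file name to write to, which is why a literal errors.%Y.%m.%d file appears. One possible restructure (a sketch, untested, assuming bash is available for process substitution) keeps rotatelogs as the consumer so its % placeholders are still expanded:

```
ErrorLog "|/bin/bash -c 'tee >(/usr/bin/logger -thttpd -plocal6.err) | /var/apache/bin/rotatelogs -f /usr/HTTPLogs/apache/errors.%Y.%m.%d 86400'"
```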

Why can't I remove the Java version?

Posted: 10 Sep 2021 08:48 AM PDT

Removing the openjdk-17-jre and openjdk-17-jdk packages:

sudo apt remove openjdk-17-jre openjdk-17-jdk
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following package was automatically installed and is no longer required:
  openjdk-17-jdk-headless
Use 'sudo apt autoremove' to remove it.
The following packages will be REMOVED:
  openjdk-17-jdk openjdk-17-jre
0 upgraded, 0 newly installed, 2 to remove and 1 not upgraded.
After this operation, 9,250 kB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 295782 files and directories currently installed.)
Removing openjdk-17-jdk:amd64 (17~19-1) ...
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode
Removing openjdk-17-jre:amd64 (17~19-1) ...
Processing triggers for hicolor-icon-theme (0.17-2) ...
debian@debian:~/Downloads$ java -version
openjdk version "17-ea" 2021-09-14
OpenJDK Runtime Environment (build 17-ea+19-Debian-1)
OpenJDK 64-Bit Server VM (build 17-ea+19-Debian-1, mixed mode, sharing)
debian@debian:~/Downloads$ sudo apt autoremove
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be REMOVED:
  openjdk-17-jdk-headless
0 upgraded, 0 newly installed, 1 to remove and 1 not upgraded.
After this operation, 248 MB disk space will be freed.
Do you want to continue? [Y/n] y

Reboot and log in again.

java -version
openjdk version "17-ea" 2021-09-14
OpenJDK Runtime Environment (build 17-ea+19-Debian-1)
OpenJDK 64-Bit Server VM (build 17-ea+19-Debian-1, mixed mode, sharing)

Why can't I remove this Java version?

Low efficiency while using dockerized OpenMCU-ru image

Posted: 10 Sep 2021 08:40 AM PDT

In my work we have a videoconference system with a lot of SIP videophones. We use a VM (running on VMware on an HP ProLiant server, on which other VMs are also running) which runs OpenMCU-ru on CentOS 6 (the same VM which OpenMCU-ru's devs posted back in the day on their official webpage).

I wanted to upgrade the old CentOS 6 VM to something newer, because it behaves badly while the directors use the service. So I tried, and tried, and tried... and so far I could not compile it because of incompatibilities with old libraries.

So I decided to use a Docker image (which runs Ubuntu 14.04 and OpenMCU 4.1.6): I used an old HP server (E5520, 8 cores @2.27GHz, 14GB RAM, 1Gb Ethernet card) as the host system, installed Debian 10.10 and Docker, downloaded the image and ran it:

docker run --network="host" -d kap0ng/openmcu_core  

I set up the same configuration on the new server as on the old one (Video codec = H.264{sw}, Audio codec = G.711-ALaw-64k{sw}... exactly the same settings as on the old CentOS 6 VM).

But every time I run a videoconference on the new system, the image is super laggy and the container seems to consume a lot of CPU:

docker stats  

And sometimes it gets above 500%. I guess that is a normal value; as I understand it, 100% in Docker means the container is using 100% of one of the host's cores.

So my question is:

  1. Am I configuring the OpenMCU-ru SIP server wrong?
  2. Am I somehow limiting the efficiency of the container?
  3. How do I make the videoconference smoother, with fewer frames lost per second?
  4. Could the problem be related to the host being Debian 10.10 while the container runs an Ubuntu 14.04 base?

sed : Replace the string in one variable with another in a file [duplicate]

Posted: 10 Sep 2021 08:34 AM PDT

I need to replace STUDENT (if found in the file FILENAME) with REPLACE. I've been searching for a couple of hours now, so it's time for a question. What is the correct syntax of sed to do it? My best attempt:

sed 's/'$STUDENT'/'$REPLACE'/' $FILENAME  
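
The attempt above works in simple cases, but splicing unquoted variables into the script breaks as soon as they contain spaces or shell metacharacters. A sketch of the usually recommended quoting (the sample input is illustrative):

```shell
STUDENT="John"
REPLACE="Jane"

# Double quotes let the shell expand the variables while keeping the sed
# expression a single word; quote "$FILENAME" the same way in the real command.
printf '%s\n' "Hello John" | sed "s/${STUDENT}/${REPLACE}/"
# → Hello Jane
```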

How to change the order in which the MATE file manager `caja` lists devices before bookmarks

Posted: 10 Sep 2021 08:02 AM PDT

Caja shows some directories in the left pane, then my devices, the bookmarks and finally the network. I know how to restrict the number of directories shown to the necessary ones (see this instruction), but I cannot get the bookmarks listed before the devices. I have too many devices (created with the logical volume manager), and having the bookmarks listed after them requires scrolling down. Is there a way to change the order and have the bookmarks listed before the devices?

How to execute a script after ppp connection establishes?

Posted: 10 Sep 2021 07:59 AM PDT

I would like to execute a DDNS script every time pppoe has started or renewed its connection. I have tried:

auto dsl-provider
iface dsl-provider inet ppp
    pre-up /bin/ip link set enp1s0 up
    provider dsl-provider
#   up docker restart my-ddns # it auto-detects whether changing the DNS record is needed

If the final line is uncommented, it (as well as the entire networking.service) fails at boot, because dockerd has not started yet.

What is the proper way to trigger such a script? For now I can only set up a cron job to do it.
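
On Debian-style systems, pppd runs the executables in /etc/ppp/ip-up.d/ after a link comes up, which keeps the hook out of networking.service entirely. A sketch (the systemd guard against dockerd not being up yet is an assumption):

```shell
#!/bin/sh
# /etc/ppp/ip-up.d/99-ddns -- runs each time a ppp link comes up (must be executable)

# Only poke the container if dockerd is actually running yet (e.g. at early boot
# the link may come up before docker does; cron or a docker restart policy can
# cover that window).
if systemctl is-active --quiet docker; then
    docker restart my-ddns
fi
```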

make results in 'No idlpp IDL pre-processor found!' error

Posted: 10 Sep 2021 09:02 AM PDT

I am trying to build a private project using make, but I keep getting the error:

CMake Error at cmake/config_opensplice.cmake:8 (message):
  No idlpp IDL pre-processor found!
Call Stack (most recent call first):
  CMakeLists.txt:92 (config_opensplice)

I have browsed online but cannot seem to find the reason for this error. Strangely enough, the make command works when I use sudo. (Ubuntu 20.04)

Configure first day of the week for xfce calendar in centos 7

Posted: 10 Sep 2021 07:33 AM PDT

I have CentOS 7 with Xfce.

I cannot find where the necessary configuration file is located to make Monday the first day of the week. xfce calendar

Feasibility of storing keys unencrypted on RAM under certain assumptions

Posted: 10 Sep 2021 08:44 AM PDT

We have IoT gateways that run Linux 5.4.31 kernel. These gateways need to manage thousands of devices which are mobile and each has a unique encryption key. The idea is to fetch the key of a device from a server (via secure channel) when the device enters the range, use it for decryption as long as the device is in the range and delete the key from memory when it leaves. Decryption must be done on the gateway since we have to do specific actions depending on the received data.

We want to store the keys in RAM unencrypted because we don't want the overhead of decrypting the keys each time we access them. We have the following assumptions:

  1. Physical access to the gateways is not possible.
  2. The service is running under a non-root user.
  3. An attacker might gain access to the gateway as a non-root user (that is different from the service user if that matters).
  4. An attacker might pull off a buffer overflow attack.

What are the options for an attacker to access the encryption keys in RAM under these assumptions, if

  1. We store the keys in statically allocated memory (via the static keyword in C, not on the stack), or
  2. We store the keys in dynamically allocated memory (it would probably be a one-time allocation since we have limited resources)?

Also, what restrictions should we put on a non-root user (e.g. no access to swap, core dumps, package installation, gdb, etc.) to prevent access to the process's RAM?

Note: if the attacker has root access, they can access all the keys using the private key that is used to access the server anyway, so we do not consider this case for this question.
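
On the restrictions question, a few of the usual knobs can be sketched in shell (the values are illustrative, not a complete hardening list):

```shell
# Disable core dumps for anything started from this shell/service, so a crash
# cannot write the key material to disk:
ulimit -c 0
ulimit -c    # prints 0

# Restrict ptrace so another non-root user cannot attach gdb/strace to the
# service (requires root and the Yama LSM; shown commented for reference):
#   sysctl -w kernel.yama.ptrace_scope=1

# Inside the service itself, mlock(2) the buffer holding the keys so those
# pages can never be written to swap.
```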

copy files from source to destination with uuid renaming

Posted: 10 Sep 2021 09:43 AM PDT

I have some files in a recursive folder structure, which I would like to copy to destination and give them a uuid name instead of the original name (because the names collide).

Normally I would do something like this:

SOURCE_DIR="/some/*/dir/struct/*/img_files" &&\
DEST_DIR="/dest/folder/0" &&\
rm -rf $DEST_DIR &&\
mkdir -p $DEST_DIR &&\
find $SOURCE_DIR \( -name "*.jpg" \) -exec cp {} $DEST_DIR \;

but because the names collide, I am unable to do this. Thus, I would like to assign a uuid name to each file which is being copied. To this end, I have tried this:

SOURCE_DIR="/some/*/dir/struct/*/img_files" &&\
DEST_DIR="/dest/folder/0" &&\
rm -rf $DEST_DIR &&\
mkdir -p $DEST_DIR &&\
find $SOURCE_DIR \( -name "*.jpg" \) -exec cp {} $DEST_DIR/"$(uuidgen).jpg" \;

but I get only one copied file :( rather than a bunch of files from the folders.

How to delete an indelible file?

Posted: 10 Sep 2021 09:20 AM PDT

I was copying a file via ssh and the connection dropped. After that, I am left with a zero-sized file which cannot be deleted. ls and mc see it; rm and rm -f on it succeed but do nothing to the file; rm -vf even says "removed", but the zero-sized file stays.

Remote host is CentOS 6.8.

When I edit the file, it keeps the changes, but when I try to delete it, it returns to zero size and still stays. Trying chattr -i gives Inappropriate ioctl for device while reading flags. I have no root access to the server, so I cannot run fsck or the like. No process I can see via ps is using it. Is there any way to delete it?

Merging two files based on the first column depending on specific pattern's location in file 1

Posted: 10 Sep 2021 09:13 AM PDT

I have the following files:

File 1 (around 7000 lines):

1010089 1402 6814 5543
1010121 6948 1402 2344
1305789 7589 7890 1402

File 2 (around 300k lines):

1010089 26 48 33
1010121 21 62 49

I would like to merge the two files based on the first column, depending on 1402's location in file 1. For example, if 1402 is in the second column, I want to print the first column of file 1, the second column of file 1 and the second column of file 2. If 1402 is in the third column, I want to print the first column of file 1, the third column of file 1 and the third column of file 2.

1402 can occur in any column, not only the second or the third. However, it does not occur more than once per line. If $1 of file 1 does not appear in file 2, I want to print $1, 1402 and unknown.

Desired output:

1010089 1402 26
1010121 1402 62
1305789 1402 unknown

I use the following script to merge the two files:

awk 'FNR==NR{arr[$1]=$2;next} ($1 in arr){print $0,arr[$1]}' file2 file1  
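
The script above only carries over column 2; a two-pass awk sketch that implements the full rule (the /tmp paths are illustrative, recreating the sample files so the demo is self-contained):

```shell
# Recreate the sample inputs from the question:
cat > /tmp/file1 <<'EOF'
1010089 1402 6814 5543
1010121 6948 1402 2344
1305789 7589 7890 1402
EOF
cat > /tmp/file2 <<'EOF'
1010089 26 48 33
1010121 21 62 49
EOF

# Pass 1 stores every column of file2 keyed by (id, column index);
# pass 2 finds which column of file1 holds 1402 and prints the matching
# file2 column, or "unknown" when the id is absent from file2.
awk '
FNR==NR { for (i = 2; i <= NF; i++) col[$1,i] = $i; seen[$1]; next }
{
    for (i = 2; i <= NF; i++)
        if ($i == 1402) {
            v = ($1 in seen) ? col[$1,i] : "unknown"
            print $1, 1402, v
            break
        }
}' /tmp/file2 /tmp/file1
# → 1010089 1402 26
#   1010121 1402 62
#   1305789 1402 unknown
```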

Is there a kernel option to allow a userspace app to discover custom USB devices on the host?

Posted: 10 Sep 2021 08:14 AM PDT

Consider a minimalist, buildroot-based Linux image on some 1-PCB computer with a USB host port.

Then there is another small computer: a Raspberry Pi Compute Module "CM3"; the only connection between the two is USB. When the CM3 has Linux running on it, it acts as an Ethernet gadget for communication.

But when the CM3 needs to be flashed, an input pin on it, toggled by the buildroot machine, tells it "boot mode", and it will become a custom USB device "BCM2710 Boot". The Raspi folks then offer a userspace program, usbboot aka rpiboot, to find the device and upload a small image turning it into a mass storage device.

When I try this, plugging the CM3 USB into a "normal" RaspberryPi's host USB port, I see this with dmesg:

[16689.527482] usb 1-1.3: new high-speed USB device number 3 using xhci_hcd
[16689.657906] usb 1-1.3: config index 0 descriptor too short (expected 55, got 32)
[16689.658302] usb 1-1.3: New USB device found, idVendor=0a5c, idProduct=2764, bcdDevice= 0.00
[16689.658319] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[16689.658335] usb 1-1.3: Product: BCM2710 Boot
[16689.658350] usb 1-1.3: Manufacturer: Broadcom

rpiboot does then find the device and sends over that special image mentioned above. So that works.

But when I plug it into my buildroot machine instead, I only see this:

[  597.725309] usb 3-1: new full-speed USB device number 4 using xhci-hcd
[  601.965304] usb 3-1: new high-speed USB device number 5 using xhci-hcd

So it appears to see that some new USB thing is there, but won't go further, and rpiboot waits forever instead of finding the device like on the Raspi host. So it seems there are one or more Linux kernel config options not enabled in my buildroot image which prevent this from working. I got this idea because initially all the Ethernet gadget stuff also did not work, and I had to go and enable CDCETHER and some other options - but I have no idea what to look for with this non-standard device. There are a lot of not-enabled CONFIG_USB_* options that seem to pertain to specific devices, but nothing like "custom" or such.

What is/are the corresponding option(s)?


Added: a view of sorted and line-matched (where the options are the same) excerpts of both kernel configurations. I removed everything that is set the same on both sides, drivers for specific devices, and options that have nothing to do with USB, to shrink the listing by a fair amount. I went through short descriptions of these kernel options on https://cateee.net/ and so far saw nothing that seemed to fit the bill - but I may have overlooked or misinterpreted something...

configRaspi4-Raspbian                           configBuildrootDevice
--------------------------------------------------------------------------------
CONFIG_HISI_HIKEY_USB is not set
CONFIG_MEDIA_USB_SUPPORT=y
CONFIG_NOP_USB_XCEIV=y                          CONFIG_NOP_USB_XCEIV is not set
CONFIG_USB_ACM=m                                CONFIG_USB_ACM=y
CONFIG_USB_ADUTUX=m                             CONFIG_USB_ADUTUX is not set
CONFIG_USB_AIRSPY is not set
CONFIG_USB_AN2720=y
CONFIG_USB_ARMLINUX=y
CONFIG_USB_ATM=m
CONFIG_USB_BELKIN=y
CONFIG_USB_CDC_COMPOSITE=m                      CONFIG_USB_CDC_COMPOSITE is not set
CONFIG_USB_CONFIGFS_ACM=y                       CONFIG_USB_CONFIGFS_ACM is not set
CONFIG_USB_CONFIGFS_ECM_SUBSET=y
CONFIG_USB_CONFIGFS_ECM=y                       CONFIG_USB_CONFIGFS_ECM is not set
                                                CONFIG_USB_CONFIGFS_ECM_SUBSET is not set
CONFIG_USB_CONFIGFS_F_FS=y                      CONFIG_USB_CONFIGFS_F_FS is not set
CONFIG_USB_CONFIGFS_F_HID=y                     CONFIG_USB_CONFIGFS_F_HID is not set
CONFIG_USB_CONFIGFS_F_LB_SS=y                   CONFIG_USB_CONFIGFS_F_LB_SS is not set
CONFIG_USB_CONFIGFS_F_UAC2=y
CONFIG_USB_CONFIGFS_NCM=y                       CONFIG_USB_CONFIGFS_NCM is not set
CONFIG_USB_CONFIGFS_OBEX=y                      CONFIG_USB_CONFIGFS_OBEX is not set
CONFIG_USB_CONFIGFS_RNDIS=y                     CONFIG_USB_CONFIGFS_RNDIS is not set
CONFIG_USB_CONFIGFS_SERIAL=y                    CONFIG_USB_CONFIGFS_SERIAL is not set
CONFIG_USB_CONFIGFS=m                           CONFIG_USB_CONFIGFS=y
CONFIG_USB_DEFAULT_PERSIST=y                    CONFIG_USB_DEFAULT_PERSIST is not set
CONFIG_USB_DWC2_DEBUG is not set
CONFIG_USB_DWC2_DUAL_ROLE=y
CONFIG_USB_DWC2_HOST is not set
CONFIG_USB_DWC2_PCI is not set
CONFIG_USB_DWC2_PERIPHERAL is not set
CONFIG_USB_DWC2_TRACK_MISSED_SOFS is not set
CONFIG_USB_DWC2=m                               CONFIG_USB_DWC2 is not set
                                                CONFIG_USB_DWC3_DUAL_ROLE=y
                                                CONFIG_USB_DWC3_GADGET is not set
                                                CONFIG_USB_DWC3_HAPS=y
                                                CONFIG_USB_DWC3_HOST is not set
                                                CONFIG_USB_DWC3_OF_SIMPLE=y
                                                CONFIG_USB_DWC3_OTG is not set
CONFIG_USB_DWC3 is not set                      CONFIG_USB_DWC3=y
CONFIG_USB_DWCOTG=y
CONFIG_USB_ETH_EEM is not set                   CONFIG_USB_ETH_EEM=y
CONFIG_USB_ETH=m                                CONFIG_USB_ETH=y
CONFIG_USB_EZUSB_FX2=m                          CONFIG_USB_EZUSB_FX2 is not set
CONFIG_USB_F_ACM=m
CONFIG_USB_F_ECM=m                              CONFIG_USB_F_ECM=y
CONFIG_USB_F_EEM=m                              CONFIG_USB_F_EEM=y
CONFIG_USB_F_FS=m
CONFIG_USB_F_HID=m
CONFIG_USB_F_MASS_STORAGE=m                     CONFIG_USB_F_MASS_STORAGE=y
CONFIG_USB_F_NCM=m
CONFIG_USB_F_OBEX=m
CONFIG_USB_F_RNDIS=m                            CONFIG_USB_F_RNDIS=y
CONFIG_USB_F_SERIAL=m
CONFIG_USB_F_SS_LB=m
CONFIG_USB_F_SUBSET=m                           CONFIG_USB_F_SUBSET=y
CONFIG_USB_F_UAC2=m
CONFIG_USB_FEW_INIT_RETRIES is not set
CONFIG_USB_G_ACM_MS=m                           CONFIG_USB_G_ACM_MS is not set
CONFIG_USB_G_HID=m                              CONFIG_USB_G_HID is not set
CONFIG_USB_G_MULTI_CDC is not set
CONFIG_USB_G_MULTI_RNDIS=y
CONFIG_USB_G_MULTI=m                            CONFIG_USB_G_MULTI is not set
CONFIG_USB_G_SERIAL=m                           CONFIG_USB_G_SERIAL is not set
CONFIG_USB_GADGETFS=m                           CONFIG_USB_GADGETFS is not set
CONFIG_USB_GSPCA=m
CONFIG_USB_HIDDEV=y                             CONFIG_USB_HIDDEV is not set
CONFIG_USB_HSO=m
CONFIG_USB_LD=m                                 CONFIG_USB_LD is not set
CONFIG_USB_LIBCOMPOSITE=m                       CONFIG_USB_LIBCOMPOSITE=y
CONFIG_USB_MASS_STORAGE=m                       CONFIG_USB_MASS_STORAGE is not set
CONFIG_USB_MON=m                                CONFIG_USB_MON is not set
CONFIG_USB_NET_CDC_EEM=m                        CONFIG_USB_NET_CDC_EEM=y
CONFIG_USB_NET_CDC_MBIM=m                       CONFIG_USB_NET_CDC_MBIM is not set
CONFIG_USB_NET_CDC_NCM=m                        CONFIG_USB_NET_CDC_NCM is not set
CONFIG_USB_NET_CDC_SUBSET_ENABLE=m
CONFIG_USB_NET_CDC_SUBSET=m                     CONFIG_USB_NET_CDC_SUBSET is not set
CONFIG_USB_NET_CDCETHER=m                       CONFIG_USB_NET_CDCETHER=y
CONFIG_USB_NET_RNDIS_HOST=m                     CONFIG_USB_NET_RNDIS_HOST is not set
CONFIG_USB_NET_RNDIS_WLAN=m
                                                CONFIG_USB_OTG_BLACKLIST_HUB is not set
                                                CONFIG_USB_OTG_FSM=y
                                                CONFIG_USB_OTG_WHITELIST is not set
CONFIG_USB_OTG is not set                       CONFIG_USB_OTG=y
CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set
CONFIG_USB_OTG_PRODUCTLIST is not set
CONFIG_USB_PWC=m
CONFIG_USB_RAW_GADGET is not set
CONFIG_USB_ROLE_SWITCH=m                        CONFIG_USB_ROLE_SWITCH is not set
CONFIG_USB_SERIAL=m                             CONFIG_USB_SERIAL is not set
CONFIG_USB_SEVSEG=m                             CONFIG_USB_SEVSEG is not set
CONFIG_USB_TEST=m                               CONFIG_USB_TEST is not set
CONFIG_USB_TMC=m                                CONFIG_USB_TMC is not set
CONFIG_USB_U_ETHER=m                            CONFIG_USB_U_ETHER=y
CONFIG_USB_U_SERIAL=m
                                                CONFIG_USB_WUSB_CBAF is not set
CONFIG_USB_ZERO=m                               CONFIG_USB_ZERO is not set

Ubuntu 18.04 freezes on log in screen locally while working with RDC

Posted: 10 Sep 2021 07:32 AM PDT

Some time ago I installed xrdp (0.9.5) on my Dell tower (Ubuntu 18.04) in order to be able to work on it from home from my laptop (Windows 10 Pro, version 2004, OS build 19041.1165).

On its own, everything worked fine: I could use RDC from my Windows laptop to log in and work on the Dell tower. However, now that I am back at the office, I have discovered that when I try to work locally on the Dell tower, Ubuntu freezes on the login screen. Specifically, I can see the mouse pointer, but neither the mouse nor the keyboard seems to work.

I have tried logging off from RDC before connecting locally, and I have tried turning the tower off from RDC and turning it back on locally. Neither fix has worked.

802.1X wireless connection using iwctl

Posted: 10 Sep 2021 10:26 AM PDT

I am running Arch Linux and am using iwctl to connect to Wi-Fi. When I try to connect to a network that uses 802.1X security, iwctl fails with the error message "Not configured". How do I configure iwd to work with this network?

Edit: The network also requires a username to log in, not just a password.
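For illustration, here is a sketch of the provisioning file iwd reads for 802.1X networks. The SSID in the file name, the choice of PEAP with MSCHAPv2, and all credentials below are placeholders that must be replaced with the network's actual parameters:

```
# /var/lib/iwd/<SSID>.8021x  (placeholder values throughout)
[Security]
EAP-Method=PEAP
EAP-Identity=anonymous@example.org
EAP-PEAP-Phase2-Method=MSCHAPV2
EAP-PEAP-Phase2-Identity=myusername
EAP-PEAP-Phase2-Password=mypassword
```

With a file like this in place, `iwctl station <device> connect <SSID>` should no longer report "Not configured". The iwd.network(5) man page documents the supported EAP settings.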

sshpass with ssh -J jump host

Posted: 10 Sep 2021 09:05 AM PDT

I have a script with a couple of ssh commands that use a jump host. Each invocation needs both the jump and the target server passwords, so I tried to supply them with sshpass; sadly, "nesting" sshpass does not seem to do the trick.

sshpass -p "JumpPass" sshpass -p "ServerPass" ssh -J user@jump admin@server  

Can we "nest" many sshpass or is there a specific option for providing different passwords ?

find string and print first and last characters of line

Posted: 10 Sep 2021 09:52 AM PDT

I have files with hundreds of lines of varying length. I want to find each line containing the string "New" and print its first 7 characters and the character 10 positions from the end of the line.

For example, cat file1.txt

1234567 New line with irrelevant info x end line
2345678 irrelevant line
3456789 New line with different irrelevant info y end line
4567890 irrelevant line
5678901 New line with yet more irrelevant info z end line

And my output would be:

1234567 x
3456789 y
5678901 z
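A sketch of one way to do this with awk, using the sample file above (the character 10 positions from the end is `substr($0, length($0) - 9, 1)`):

```shell
# Recreate the sample file from the question
cat > file1.txt <<'EOF'
1234567 New line with irrelevant info x end line
2345678 irrelevant line
3456789 New line with different irrelevant info y end line
4567890 irrelevant line
5678901 New line with yet more irrelevant info z end line
EOF

# For lines containing "New": print the first 7 characters, then the
# character 10 positions from the end of the line
awk '/New/ { print substr($0, 1, 7), substr($0, length($0) - 9, 1) }' file1.txt
```

On the sample file this prints `1234567 x`, `3456789 y`, and `5678901 z`, matching the desired output.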

HASP key does not work over ssh

Posted: 10 Sep 2021 08:43 AM PDT

I am using a software product that uses a HASP USB dongle. The software runs on a Linux box, and I would like to run it remotely via ssh (it is a command-line software tool). When I am physically at the workstation, I can run the tool. When I log in via ssh, it says it cannot find the license key.

My other team members use the software tool (separate installations) and are able to use it remotely. I have just installed it and can't seem to use it remotely. There isn't much documentation on this HASP dongle, and I am not sure if there is some super-tight restriction; some debug feedback would be great.

Note that I am not trying to do anything out of the ordinary. I rebooted the machine and the behavior was still the same. I am wondering whether the HASP udev rules do not permit network users?

Any thoughts or guidance would be appreciated.

What is the /etc/subuid file? [closed]

Posted: 10 Sep 2021 10:52 AM PDT

Following the Docker instructions, I ran the Docker daemon with dockerd --userns-remap=default, which added this line to the /etc/subuid file:

dockremap:165536:65536  

I don't understand what it means; please explain.
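For illustration, assuming the conventional `name:start:count` reading of a subuid entry (so this line would grant dockremap the host UID range 165536-231071, with container UID 0 backed by host UID 165536):

```shell
# Parse an /etc/subuid entry of the assumed form name:start:count
entry="dockremap:165536:65536"
name=${entry%%:*}
start=${entry#*:}; start=${start%%:*}
count=${entry##*:}

# Under user-namespace remapping, container UID n maps to host UID start+n
echo "container UID 0 -> host UID $start"
echo "container UID $((count - 1)) -> host UID $((start + count - 1))"
```

So a process that is root (UID 0) inside the remapped container runs as the unprivileged host UID 165536.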

Firewalld: How to whitelist just two IP-addresses, not on the same subnet

Posted: 10 Sep 2021 09:05 AM PDT

I'm running firewalld on a VPS / web server.

The public zone is active and the default (and I do not want to change that). How do I allow only these two external IP addresses to access the VPS (i.e. all of the services I have defined in the public zone)?

IP1: 11.22.33.44/24
IP2: 55.66.77.88/24

These are fake IP addresses and notice that they are intentionally not on the same subnet.

I think I understand why the following doesn't work (it locks out one IP or the other).

user$ sudo firewall-cmd --zone=public --permanent --add-source=11.22.33.44/24
user$ sudo firewall-cmd --zone=public --permanent --add-source=55.66.77.88/24

user$ sudo firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="11.22.33.44/24" invert="True" drop'
user$ sudo firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="55.66.77.88/24" invert="True" drop'
user$ sudo firewall-cmd --reload

What do I need to modify for this to work (so it doesn't lock out one IP or the other or both)?

Thank you! :)

EDIT: I also tried a /32 netmask in all four commands above. Sadly, it did not help. Still looking for a solution.

I think the logic should be something like: if the source is IP1 or IP2, accept it and stop processing the chain; otherwise continue to the next rule, which would be DROP. Something like that.

EDIT2: Posting the output of sudo firewall-cmd --list-all-zones below. Note that I removed all the rules mentioned above since they weren't working, so the output below is back to square one.

user$ sudo firewall-cmd --list-all-zones
block
  target: %%REJECT%%
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

dmz
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

drop
  target: DROP
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

external
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

home
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

internal
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: venet0:0 venet0
  sources:
  services: ssh-vps http https
  ports: 8080/tcp 8080/udp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks: echo-reply echo-request timestamp-reply timestamp-request
  rich rules:

trusted
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

work
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
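One approach that matches the "accept these two, drop everyone else" logic (a sketch, untested on this setup): put the two addresses in their own zone that accepts the needed services, and set the public zone's target to DROP. In firewalld, source-bound zones are matched before interface-bound zones, so the two addresses land in the new zone while everyone else falls through to public. The zone name `allowlist` is an assumption; the services are taken from the public zone listed above:

```shell
sudo firewall-cmd --permanent --new-zone=allowlist
sudo firewall-cmd --permanent --zone=allowlist --add-source=11.22.33.44/32
sudo firewall-cmd --permanent --zone=allowlist --add-source=55.66.77.88/32
sudo firewall-cmd --permanent --zone=allowlist --add-service=ssh-vps
sudo firewall-cmd --permanent --zone=allowlist --add-service=http
sudo firewall-cmd --permanent --zone=allowlist --add-service=https

# Everyone else still matches the interface-bound public zone; DROP
# refuses them without touching public's service list
sudo firewall-cmd --permanent --zone=public --set-target=DROP
sudo firewall-cmd --reload
```

Be careful: once public's target is DROP, ssh access from any address other than the two listed is cut off, so test from one of the allowed IPs.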

btrfs found uncorrectable disk errors; how can I find which files they are in?

Posted: 10 Sep 2021 07:33 AM PDT

I ran btrfs scrub and got this:

scrub status for 57cf76da-ea78-43d3-94d3-0976308bb4cc
    scrub started at Wed Mar 15 10:30:16 2017 and finished after 00:16:39
    total bytes scrubbed: 390.45GiB with 28 errors
    error details: csum=28
    corrected errors: 0, uncorrectable errors: 28, unverified errors: 0

OK, I have good backups, and I would like to know which files these 28 errors are in so I can restore just those from backup. That would save me a lot of time compared with wiping and restoring the whole disk.
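For what it's worth, scrub reports each checksum failure to the kernel log, and on reasonably recent kernels those messages include the path of the affected file. Something like the following may list them (the exact message wording varies by kernel version, so the grep pattern is an assumption):

```shell
# Scrub's per-file checksum complaints end up in the kernel log
sudo dmesg | grep -i 'BTRFS.*checksum error'

# On a systemd machine, the persistent journal also survives reboots
sudo journalctl -k | grep -i 'BTRFS.*checksum error'
```

If the log has rotated since the scrub, re-running the scrub will generate the messages again.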

su cannot open session error when starting Oracle XE database

Posted: 10 Sep 2021 10:00 AM PDT

I have a RHEL 7.2 server with Oracle 11g Express Edition (11.2.0) installed. The Oracle installation created a file named "oracle-xe" in /etc/init.d. This is a bash script that can be used to start and stop the listener and database manually. When I'm logged on to the server, I can run the following:

dzdo /etc/init.d/oracle-xe start  

and the Oracle listener and database start without issue. I can log on using sqlplus and execute commands. I'm trying to use chkconfig so that oracle-xe runs automatically at system start and I do not have to start the listener and database manually every time the server is rebooted. The oracle-xe script itself is lengthy, but the meat of it is the following:

#!/bin/bash
# chkconfig: 2345 80 05

# Source function library
if [ -f /lib/lsb/init-functions ]
then
    . /lib/lsb/init-functions
elif [ -f /etc/init.d/functions ]
then
    . /etc/init.d/functions
fi

SU=/bin/su
ORACLE_OWNER=oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
LSNR=$ORACLE_HOME/bin/lsnrctl
SQLPLUS=$ORACLE_HOME/bin/sqlplus
STARTUP_LOG=/home/tsm/log/oracle-xe.log

echo $(date) >> $STARTUP_LOG

$SU -s /bin/bash $ORACLE_OWNER -c "$LSNR start" >> $STARTUP_LOG 2>&1
$SU -s /bin/bash $ORACLE_OWNER -c "$SQLPLUS -s /nolog @$ORACLE_HOME/config/scripts/startdb.sql" >> $STARTUP_LOG 2>&1

I added the $STARTUP_LOG code and the >> output redirections so that I could sort out what was happening. I added the script to chkconfig with the following:

cd /etc/init.d
dzdo chmod 750 oracle-xe
dzdo chkconfig --add oracle-xe
dzdo chkconfig oracle-xe on

The following command yields the given (shortened) output:

dzdo chkconfig --list

oracle-xe       0:off   1:off   2:on    3:on    4:on    5:on    6:off

I reboot the server, and it generates a log file at /home/tsm/log/oracle-xe.log with the following output:

Fri Jan 13 15:03:58 CST 2017
su: cannot open session: Permission denied
su: cannot open session: Permission denied

and as you might guess, as a result of this su failure, neither the listener nor the database engine has started. Since I see the reboot date/time in the log file, I know for sure that the script is being executed at boot. It seems to be a permissions issue: whatever account executes init scripts at startup apparently cannot su as $ORACLE_OWNER, yet I, as a lowly admin, can do this just fine from the command prompt. It was my understanding that init scripts run as root, so this su command should work without a problem. I've been searching and trying various things for the better part of a day, and have pulled out what little remains of my hair.

The server itself uses DirectAuthorize to grant access permissions, which is why I end up using dzdo instead of sudo. Could this have something to do with it?
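In case a workaround helps: on RHEL 7, runuser is designed for running a command as another user from boot scripts, using a PAM stack (/etc/pam.d/runuser) that does not open a full login session. Replacing the su calls with something like the following (untested here) may sidestep the session error:

```shell
# Same effect as the $SU lines in the script, minus the PAM login session
runuser -s /bin/bash oracle -c "$LSNR start" >> $STARTUP_LOG 2>&1
```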

How to find gnome-terminal currently used profile with cmd line?

Posted: 10 Sep 2021 08:04 AM PDT

I'm using Ubuntu 16.04, and I want to be able to tell which profile is used by a given gnome-terminal window. Just the name would be enough.

It's trivial to find in the GUI: just right-click in the terminal window, and the profile in use is indicated under "Profiles". You can also go to Edit -> Profile Preferences -> Profile Name.

I would like to access that information from the command line, but can't find how.
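Not a per-window answer, but as a starting point: gnome-terminal stores its profiles in dconf, so the profile UUIDs and their human-readable names can be listed from a shell. A sketch, assuming profiles live under the usual legacy dconf path:

```shell
# Print each profile's UUID together with its visible-name
for p in $(dconf list /org/gnome/terminal/legacy/profiles:/ | grep '^:'); do
    echo "$p $(dconf read /org/gnome/terminal/legacy/profiles:/${p}visible-name)"
done
```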

How to view full bounce message in mutt?

Posted: 10 Sep 2021 09:54 AM PDT

I got an error when sending email with mutt; some red text shows at the bottom of the terminal. How can I view the full error message? Is there a shortcut / macro that I could use or define?
