Friday, May 6, 2022

Recent Questions - Unix & Linux Stack Exchange

Check account status as a Samba domain member

Posted: 06 May 2022 10:59 AM PDT

I have a Rocky Linux 8.5 machine running as an Active Directory domain member. I've added it to the domain with the net ads join command, and I can run several commands to list domain objects as expected. The smb, nmb and winbind services are all active. The Samba version is 4.14.5.

I was hoping to find a way to check whether a user account is enabled or disabled, in the same fashion as the following command on Windows:

net user username /domain

In its output, the "Account active" property shows the status.

How could I get that information on this Linux machine?

Thanks!
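
For reference, one approach that could work from a domain member is to query the account's userAccountControl attribute over LDAP with Samba's own net tool and check the ACCOUNTDISABLE bit (0x2). This is a sketch only; username and Administrator below are placeholders:

# Query AD for the account's userAccountControl flags (domain credentials required).
# If bit 0x2 (ACCOUNTDISABLE) is set, the account is disabled.
net ads search '(sAMAccountName=username)' userAccountControl -U Administrator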

Copy folders/files from a custom initramfs to the real root when switch_root is done, and run one of the copied files (best set up as a systemd service)

Posted: 06 May 2022 10:34 AM PDT

In grub.cfg, besides the default initramfs, I'm using a custom initramfs:

menuentry "Custom Install Ubuntu Server" {          set gfxpayload=keep          linux   vmlinuz autoinstall  url=http://../some_file.iso --          initrd  custom_init.gz initrd  }  

In custom_init.gz I have some files that I want to transfer to the real root when the switch_root is done.

I know that I can do the copy itself while the custom initramfs is still running.

But besides that, I want to run one of the copied files on the real root; I'm thinking the best option is setting up a systemd service. How can I do that?
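
As a sketch of the systemd side (unit name, script path and target are assumptions, not taken from the question), a oneshot service copied onto the real root could look like this:

# /etc/systemd/system/run-copied-script.service (hypothetical name and path)
[Unit]
Description=Run a script copied from the custom initramfs
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/copied-script.sh

[Install]
WantedBy=multi-user.target

Enabling it (systemctl enable run-copied-script.service) from the initramfs can be done by creating the corresponding symlink under /sysroot/etc/systemd/system/multi-user.target.wants/ before switch_root runs.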

Help with GPU PCI passthrough

Posted: 06 May 2022 10:23 AM PDT

I have a problem with GPU PCI passthrough. My system runs Proxmox 7 and I want to pass GPU2 through to a Win10 VM; IOMMU and AMD-V are enabled in the UEFI. My system specs:

CPU: AMD FX 8300
MB: ASRock 990FX Extreme3
RAM: 32 GB DDR3
GPU1: Nvidia 210 (for the host)
GPU2: Radeon HD6870 PowerColor

Because the GPU has 2x DVI, HDMI and DP, does it matter where I plug in the VGA dummy plug?

These are the details of the Win10 VM: machine type i440fx 6.1, BIOS SeaBIOS, VirtIO SCSI controller. I added the Radeon HD6870 GPU as a PCI device with these functions enabled: All functions, Primary GPU, ROM-Bar. On the Proxmox host I added these attributes to the GRUB config: "quiet iommu=pt amd_iommu=on". I also created a VGA dummy plug for the Radeon. The problem with the GPU passthrough is that the Win10 VM detects the GPU and the drivers install without any issues, but I can't open the ATI Catalyst Control Panel, and when trying to start any benchmark like FurMark the error "load library failed 87" appears. I also tried renaming the atig6pxx.dll file in the System32 folder of the Win10 VM, without any success. The same problems appear even without the VGA dummy plug, if that matters. I also tried the Rainway software, but the program only detects the CPU.

Can someone help me with this problem? Thanks in advance.
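
As a generic first check (not specific to this board), it can help to confirm how the host grouped the card, since passthrough problems often trace back to the IOMMU group layout; a sketch to run on the Proxmox host:

# List every PCI device together with its IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$d")")")
    echo "IOMMU group $group: $(lspci -nns "$(basename "$d")")"
done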

Can't rebase Fedora 35 to 36

Posted: 06 May 2022 10:15 AM PDT

I am using Fedora 35 right now and would like to switch to version 36 but can't do that:

rpm-ostree rebase fedora:fedora/36/x86_64/silverblue
⠤ Receiving objects; 99% (366/367) 10,2 MB/s 214,5 MB
Receiving objects; 99% (366/367) 10,2 MB/s 214,5 MB... done
Checking out tree f470509... done
Enabled rpm-md repositories: fedora-cisco-openh264 updates fedora rpmfusion-free-updates rpmfusion-free rpmfusion-nonfree-updates rpmfusion-nonfree copr:copr.fedorainfracloud.org:lukenukem:asus-linux updates-archive
Updating metadata for 'updates'... done
Updating metadata for 'rpmfusion-free-updates'... done
Updating metadata for 'rpmfusion-nonfree-updates'... done
Updating metadata for 'copr:copr.fedorainfracloud.org:lukenukem:asus-linux'... done
Updating metadata for 'updates-archive'... done
Importing rpm-md... done
rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2022-04-07T16:52:38Z solvables: 4
rpm-md repo 'updates'; generated: 2022-02-08T18:40:57Z solvables: 0
rpm-md repo 'fedora' (cached); generated: 2022-05-01T10:06:39Z solvables: 67991
rpm-md repo 'rpmfusion-free-updates'; generated: 2022-02-17T15:50:15Z solvables: 0
rpm-md repo 'rpmfusion-free' (cached); generated: 2022-04-29T12:28:10Z solvables: 506
rpm-md repo 'rpmfusion-nonfree-updates'; generated: 2022-02-17T15:50:29Z solvables: 0
rpm-md repo 'rpmfusion-nonfree' (cached); generated: 2022-04-29T12:52:47Z solvables: 225
rpm-md repo 'copr:copr.fedorainfracloud.org:lukenukem:asus-linux'; generated: 2022-02-10T06:11:29Z solvables: 4
rpm-md repo 'updates-archive'; generated: 2022-02-11T15:18:19Z solvables: 0
Resolving dependencies... done
error: Could not depsolve transaction; 3 problems detected:
 Problem 1: package kernel-devel-matched-5.17.3-302.fc36.x86_64 requires kernel-core = 5.17.3-302.fc36, but none of the providers can be installed
  - package akmods-0.5.7-7.fc36.noarch requires (kernel-devel-matched if kernel-core), but none of the providers can be installed
  - cannot install both kernel-core-5.17.3-302.fc36.x86_64 and kernel-core-5.17.5-300.fc36.x86_64
  - package akmod-nvidia-3:510.60.02-1.fc36.x86_64 requires akmods, but none of the providers can be installed
  - conflicting requests
 Problem 2: package kmod-nvidia-3:510.60.02-1.fc36.x86_64 requires akmod-nvidia = 3:510.60.02-1.fc36, but none of the providers can be installed
  - package xorg-x11-drv-nvidia-cuda-3:510.60.02-1.fc36.x86_64 requires nvidia-kmod >= 3:510.60.02, but none of the providers can be installed
  - package akmod-nvidia-3:510.60.02-1.fc36.x86_64 requires akmods, but none of the providers can be installed
  - package akmods-0.5.7-7.fc36.noarch requires (kernel-devel-matched if kernel-core), but none of the providers can be installed
  - package kernel-devel-matched-5.17.3-302.fc36.x86_64 requires kernel-core = 5.17.3-302.fc36, but none of the providers can be installed
  - cannot install both kernel-core-5.17.3-302.fc36.x86_64 and kernel-core-5.17.5-300.fc36.x86_64
  - package kernel-5.17.5-300.fc36.x86_64 requires kernel-core-uname-r = 5.17.5-300.fc36.x86_64, but none of the providers can be installed
  - conflicting requests
 Problem 3: package kmod-nvidia-3:510.60.02-1.fc36.x86_64 requires akmod-nvidia = 3:510.60.02-1.fc36, but none of the providers can be installed
  - package xorg-x11-drv-nvidia-3:510.60.02-1.fc36.x86_64 requires nvidia-kmod >= 3:510.60.02, but none of the providers can be installed
  - package akmod-nvidia-3:510.60.02-1.fc36.x86_64 requires akmods, but none of the providers can be installed
  - package akmods-0.5.7-7.fc36.noarch requires (kernel-devel-matched if kernel-core), but none of the providers can be installed
  - package kernel-devel-matched-5.17.3-302.fc36.x86_64 requires kernel-core = 5.17.3-302.fc36, but none of the providers can be installed
  - cannot install both kernel-core-5.17.3-302.fc36.x86_64 and kernel-core-5.17.5-300.fc36.x86_64
  - package kernel-modules-5.17.5-300.fc36.x86_64 requires kernel-uname-r = 5.17.5-300.fc36.x86_64, but none of the providers can be installed
  - package xorg-x11-drv-nvidia-power-3:510.60.02-1.fc36.x86_64 requires xorg-x11-drv-nvidia(x86-64) = 3:510.60.02, but none of the providers can be installed
  - conflicting requests

Not sure what is happening here to be honest.

Additional info:

uname -r
5.17.5-200.fc35.x86_64
rpm-ostree status
State: idle
Deployments:
● fedora:fedora/35/x86_64/silverblue
                  Version: 35.20220505.0 (2022-05-05T06:22:10Z)
               BaseCommit: 955f0c8da93ec5c97ba5f5999f9061ec9a0c7fe41c14cb5ab6f64fadcca0c511
             GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
      RemovedBasePackages: firefox 100.0-2.fc35
          LayeredPackages: akmod-nvidia asusctl docker-compose podman-docker rpmfusion-free-release rpmfusion-nonfree-release tlp tlp-rdw xorg-x11-drv-nvidia-cuda xorg-x11-drv-nvidia-power

  fedora:fedora/35/x86_64/silverblue
                  Version: 35.20220503.0 (2022-05-03T14:12:49Z)
               BaseCommit: aef35d7e7acf22ce7251f954679c73047cd889371d91dd21b228ce7ac0688b27
             GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
          LayeredPackages: akmod-nvidia asusctl docker-compose podman-docker rpmfusion-free-release rpmfusion-nonfree-release tlp tlp-rdw xorg-x11-drv-nvidia-cuda xorg-x11-drv-nvidia-power

  fedora:fedora/35/x86_64/silverblue
                  Version: 35.20220503.0 (2022-05-03T14:12:49Z)
               BaseCommit: aef35d7e7acf22ce7251f954679c73047cd889371d91dd21b228ce7ac0688b27
             GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
          LayeredPackages: akmod-nvidia asusctl rpmfusion-free-release rpmfusion-nonfree-release tlp tlp-rdw xorg-x11-drv-nvidia-cuda xorg-x11-drv-nvidia-power
                   Pinned: yes

What can I do to update to the newer version?
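
One low-risk thing to try before digging deeper (a sketch, not a guaranteed fix): several repos above report "solvables: 0", which can indicate stale metadata, so forcing a metadata refresh and retrying the rebase may change the result:

# Re-download rpm-md metadata for the enabled repos, then retry
rpm-ostree refresh-md
rpm-ostree rebase fedora:fedora/36/x86_64/silverblue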

Can't install Waydroid (Depends lxc but it is not installable)

Posted: 06 May 2022 10:11 AM PDT

Sorry if this question seems stupid or poorly worded; I'm quite new to Linux and Stack Exchange forums in general. I'm trying to install Waydroid on my Pop!_OS 21.10 system and this is the terminal output:

niko@niko-pc:~$ sudo apt-get install waydroid
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 waydroid : Depends: lxc but it is not installable
E: Unable to correct problems, you have held broken packages.

I have tried following some online guides that say to run commands like

sudo dpkg --configure -a
sudo apt-get install -f
sudo apt-get clean && sudo apt-get update -y

but it still results in the same error as before. Does anybody have a solution?
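
One low-risk way to see why apt considers lxc "not installable" is to ask which repositories (if any) currently provide it; a sketch:

# Show candidate versions and source repositories for the packages involved
apt-cache policy lxc waydroid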

Is cdimage.debian.org running UNIX?

Posted: 06 May 2022 09:55 AM PDT

I recently visited the cdimage.debian.org website and typed an incorrect URL. I obviously got a 404 error code; at the bottom of the Apache2 error page I saw this message:

Apache/2.4.51 (Unix) Server at cdimage.debian.org Port 443

Apparently, the server is running on Unix; is that right? Is "real" UNIX (not a derivative Unix-like OS such as Minix, *BSD or GNU/Linux) still used in 2022? Are Debian's servers really running "pure" UNIX?

EDIT: the full page looked like this:

Not Found

The requested URL was not found on this server.


Apache/2.4.51 (Unix) Server at cdimage.debian.org Port 443

ISPConfig3 UI does not show some modules, though they are enabled

Posted: 06 May 2022 09:46 AM PDT

My ISPConfig3 installation (currently the latest version, 3.2.8p1, running on Debian 10) has been behaving strangely for some time. Some vital modules like 'Sites', 'Email', etc. are not displayed in the user interface. This prevents me from using ISPConfig for much of the work it is intended for.

When I check the list of enabled modules via System -> CPUsers, I notice that they are enabled. I disabled some of the modules (including the ones missing from the UI) and checked the UI; indeed, the relevant modules were removed from the UI. Then, when I re-enabled them, some of the modules came back, except the ones that were already missing from the UI. I have checked the relevant database table/column (sys_user.modules) and I can see that the enabled modules are there.

Thus, somehow the ISPConfig3 logic which displays the modules based on database settings does not work properly, probably hindered by a specific thing in my environment. So far, I have been unable to pinpoint it.

I wonder if anybody else has had this or a similar problem with ISPConfig3? What could the cause be? Appreciate any help.

Best Regards,

What is the official 'man' location in the web?

Posted: 06 May 2022 09:56 AM PDT

I had the following experience:

The answer there is totally valid: the man pages shipped with the OS itself are not always up to date, so a web source is sometimes necessary. But the reason for this question is that there are several different places on the web hosting man pages, for example for the ps command:

I assumed the information would be the same, but it is not: the former shows the STIME column/header, while the latter does not. If my memory does not fail me, I remember at least two other sites hosting man pages.

Note: I do not intend any disrespect to the authors of any man page site; I respect their efforts. But I have this doubt and concern, because using documentation that is neither correct nor up to date can cause trouble.

Question

  • What is the official location of the man pages on the web?

How do I fix no mic input in pipewire?

Posted: 06 May 2022 09:19 AM PDT

I have bought myself a Sound BlasterX G6 and a pair of Beyerdynamic MMX 300, but I'm struggling to get the microphone to register any sound. I'm using Fedora 35, so alsamixer does not work anymore either.

Is there a way to edit the settings, either in files or via a CLI/GUI?

Below is a screenshot of the sound controls in GNOME Settings. [screenshot: GNOME Settings]
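
As a starting point (assuming WirePlumber is the session manager, which is the default on Fedora 35), PipeWire sources can be listed and adjusted from the CLI with wpctl; a sketch:

# List PipeWire sinks/sources and their volumes
wpctl status
# Unmute the default capture source and raise its volume (specific IDs can also be taken from the status output)
wpctl set-mute @DEFAULT_AUDIO_SOURCE@ 0
wpctl set-volume @DEFAULT_AUDIO_SOURCE@ 1.0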

BTRFS quota for a user vs. dedup/compression?

Posted: 06 May 2022 09:14 AM PDT

I want the user to be able to use 1 GByte on a BTRFS mountpoint. If the user has 1 GByte on, for example, a FAT32 filesystem and copies that 1 GByte to the BTRFS volume, then that should count as 1 GByte.

But in the background I have deduplication and compression enabled on BTRFS! So I am cheating a little bit.

The question: is this possible? Can I set a quota on BTRFS that only allows 1 GByte of non-deduplicated, non-compressed data?
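
For reference, the usual building blocks are btrfs qgroups on a per-user subvolume; whether the limit counts compressed/deduplicated extents or the logical size the user copied depends on how qgroup accounting tracks on-disk extents, so this is only a sketch of the mechanism, not a confirmation that it gives exactly the semantics asked for:

# Enable quota tracking and cap a per-user subvolume at 1 GiB (paths are examples)
btrfs quota enable /mnt/data
btrfs subvolume create /mnt/data/userhome
btrfs qgroup limit 1G /mnt/data/userhome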

ps command: how to know all the headers with their respective descriptions?

Posted: 06 May 2022 09:29 AM PDT

Through the following valuable tutorial:

If the ps -ef command is executed then the output has the following header:

UID        PID  PPID  C STIME TTY          TIME CMD
...
...

In the same tutorial exists an explanation of the STIME column/header.

But in man ps, in the STANDARD FORMAT SPECIFIERS section - and even searching within the man page with the /STIME search term - the STIME term/column/header does not appear.

Note: I assume the same would happen for other columns/headers, depending on the option(s) applied to the ps command. So ...

Question

  • How is one expected to find out all the headers with their respective descriptions?

Linux Distribution

This scenario happens on Ubuntu Server 18.04 and 20.04.
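
For what it's worth, one generic way to cross-reference a header with its format specifier is to ask ps to print a suspected specifier explicitly and then search the man page for that specifier rather than for the header text; a sketch:

# The 'stime' format specifier produces the STIME header seen in ps -ef output
ps -eo pid,comm,stime | head -n 3
# Then look the specifier up in the STANDARD FORMAT SPECIFIERS section of the man page
man ps | grep -iA1 'stime'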

LVExtend can't find space on physical volume

Posted: 06 May 2022 08:28 AM PDT

I've been struggling for a while now to create more space in my VM, and I believe it's going wrong at the first step. I've increased the size of my physical volume, sda3:

$ sudo lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0  61.9M  1 loop /snap/core20/1405
loop1                       7:1    0  61.9M  1 loop /snap/core20/1434
loop2                       7:2    0  67.2M  1 loop /snap/lxd/21835
loop3                       7:3    0  67.8M  1 loop /snap/lxd/22753
loop4                       7:4    0  43.6M  1 loop /snap/snapd/14978
loop5                       7:5    0  44.7M  1 loop /snap/snapd/15534
sda                         8:0    0   500G  0 disk
├─sda1                      8:1    0     1M  0 part
├─sda2                      8:2    0   1.5G  0 part /boot
└─sda3                      8:3    0 498.5G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0  48.5G  0 lvm  /

As you can see, sda3 is roughly 500 GB, but when I run lvextend, it seems to disagree:

$ sudo lvextend -L +100G /dev/mapper/ubuntu--vg-ubuntu--lv
  Insufficient free space: 25600 extents needed, but only 0 available

I already ran it once before and it only found a little space:

$ sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from <24.64 GiB (6307 extents) to <48.50 GiB (12415 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.

What am I doing wrong?

Edit: output of pvs:

$ sudo pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/sda3  ubuntu-vg lvm2 a--  <48.50g    0
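
The pvs output suggests the physical volume itself is still about 48.5 GiB even though the sda3 partition is about 498.5 GiB, so the step that is usually missing here is growing the PV to fill the enlarged partition before extending the LV; a sketch (verify against your own layout first):

# Grow the PV metadata to match the enlarged sda3 partition
sudo pvresize /dev/sda3
# Then retry the extend and grow the filesystem in one go (-r resizes the FS as well)
sudo lvextend -l +100%FREE -r /dev/mapper/ubuntu--vg-ubuntu--lv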

Serve files over SSL & HTTPS using a Docker container

Posted: 06 May 2022 08:14 AM PDT

docker-compose.yaml

version: '3'
services:

  #PHP Service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: digitalocean.com/php
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network

  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "8234:80"
      - "453:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
      - ./nginx/ssl:/etc/nginx/ssl
    networks:
      - app-network

  #MySQL Service
  db:
    image: mysql:5.7.22
    container_name: db
    restart: unless-stopped
    tty: true
    ports:
      - "3316:3306"
    environment:
      MYSQL_DATABASE: laravel
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network

#Docker Networks
networks:
  app-network:
    driver: bridge

#Volumes
volumes:
  dbdata:
    driver: local

nginx/conf.d/app.conf

server {
    listen 80;
    listen                  *:443 ssl;
    ssl_certificate             /etc/nginx/ssl/wildcard.crt;
    ssl_certificate_key         /etc/nginx/ssl/wildcard.key;
    index index.php index.html;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}

Dockerfile

FROM php:7.4-fpm

# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/

# Set working directory
WORKDIR /var/www

# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    libzip-dev \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl

# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Install extensions
RUN docker-php-ext-install pdo_mysql zip exif pcntl
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install gd

# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www

# Copy existing application directory contents
COPY . /var/www

# Copy existing application directory permissions
COPY --chown=www:www . /var/www

# Change current user to www
USER www

# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]

The file structure looks like this:

[screenshot: project file structure]

I'm able to serve the files over HTTP like this: http://127.0.0.1:8234/

[screenshot: site served over HTTP]

But it fails to serve them over HTTPS.

[screenshot: HTTPS request failing]

Any ideas?
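
One quick way to narrow this down is to test whether nginx answers TLS at all on the published port (453 on the host, mapped to 443 in docker-compose.yaml); a sketch:

# Test the TLS handshake and request against the published HTTPS port
curl -vk https://127.0.0.1:453/
# Or inspect the certificate exchange directly
openssl s_client -connect 127.0.0.1:453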

"Could not find MDB Tools, which are required to build gmdb2" on Arch Linux

Posted: 06 May 2022 09:00 AM PDT

I'm trying to install gmdb2 from the AUR using yay on Arch Linux, and I've already installed mdbtools:

yay -S gmdb2  

I'm encountering the following error:

configure: error: Could not find MDB Tools, which are required to build gmdb2.  

As mentioned, mdbtools is already installed. Is there any solution or workaround for this?
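
Since the configure step typically locates MDB Tools through pkg-config, one hedged check is whether the installed mdbtools package actually ships the .pc files the build is looking for (the libmdb name below is an assumption):

# Does pkg-config see the MDB Tools development files?
pkg-config --exists libmdb && echo found || echo missing
# Which .pc files (if any) does the Arch mdbtools package install?
pacman -Ql mdbtools | grep -E '\.pc$'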

How to parse a string containing multiple hyphens and/or whitespaces for line-by-line processing with grep etc?

Posted: 06 May 2022 09:53 AM PDT

I'm working with auditd rules on RHEL 7 and 8. Considering these example files...

file2.txt:

-a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete
-a always,exit -F arch=b32 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lchown,fchown,chown,fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chown,fchown,lchown,fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S setxattr,lsetxattr,fsetxattr,removexattr,lremovexattr,fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S setxattr,lsetxattr,fsetxattr,removexattr,lremovexattr,fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-w /etc/sudoers -p wa -k actions
-w /etc/sudoers.d/ -p wa -k actions

file1.txt:

-a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete
-a always,exit -F arch=b32 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod

I'm trying to parse these files programmatically with bash such that file2.txt is checked to see if it contains any of the lines in file1.txt; if it does, those lines should be deleted from file2.txt. I do not want to modify file1.txt in this process.

Desired output:

file2.txt:

-a always,exit -F arch=b32 -S lchown,fchown,chown,fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chown,fchown,lchown,fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S setxattr,lsetxattr,fsetxattr,removexattr,lremovexattr,fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S setxattr,lsetxattr,fsetxattr,removexattr,lremovexattr,fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-w /etc/sudoers -p wa -k actions
-w /etc/sudoers.d/ -p wa -k actions

file1.txt (unchanged):

-a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete
-a always,exit -F arch=b32 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod

I've tried a few different approaches, but this is probably the closest I've gotten (excuse minor syntactical errors, as these are transposed by hand).

# Write deltas to a temporary file
grep -f file2.txt file1.txt >> temp_file.txt

# For each line in the temporary delta file, delete that line from file2.txt
for i in $(cat temp_file.txt); do
  sed -i /"$i"/d file2.txt
done;

This gets the deltas into a temp file, but then the deletion doesn't work; sed reports errors like:

sed -e expression #1: expected newer version of sed
sed -e expression #1: unknown command 'u'

Double-dash escaping seems to make no difference, e.g.:

sed -i -- /"$i"/d foo.txt  

For the heck of it, I've also tried unquoted:

sed -i /$i/d foo.txt  

I feel like I'm probably missing something simple, but I've bashed my head against this for a few hours and I haven't unraveled it. Any idea what I'm doing wrong?
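
For comparison, a sketch of a whole-line, fixed-string approach that sidesteps the regex and quoting issues entirely: with -F grep treats the patterns as literal strings, -x requires whole-line matches, and -v inverts the match, so this keeps only the lines of file2.txt that do not appear verbatim in file1.txt:

# Drop from file2.txt every line that occurs verbatim in file1.txt
grep -vxFf file1.txt file2.txt > file2.tmp && mv file2.tmp file2.txt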

Debian 11, the resolv.conf does not save nameserver settings permanently

Posted: 06 May 2022 09:10 AM PDT

On Debian 11, I'm facing an issue where the internet connection is not available and ping can't resolve domain names. The /etc/resolv.conf file is constantly overwritten by NetworkManager, and after a reboot it contains:

# Generated by NetworkManager
nameserver ::1

I edited the resolv.conf file, changing the entry to:

# Generated by NetworkManager
nameserver 8.8.8.8

But these changes are not persistent and disappear right after a reboot. I have not had this problem in the past; I believe it appeared after using a VPN and the Tor Browser. How can I solve this issue?

EDIT: I installed resolvconf tool, rebooted, but no changes:

~$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "resolvectl status" to see details about the actual nameservers.

nameserver ::1

~$ resolvectl status
Failed to get global data: Unit dbus-org.freedesktop.resolve1.service not found.

Edit 2: there are two active network connections; one is the router, and the second, it seems, is from the VPN service:

~$ nmcli c show TRENDnet752 | grep -i -e name_servers -e dns
connection.mdns:                        -1 (default)
ipv4.dns:                               8.8.8.8
ipv4.dns-search:                        --
ipv4.dns-options:                       --
ipv4.dns-priority:                      0
ipv4.ignore-auto-dns:                   yes
ipv6.dns:                               --
ipv6.dns-search:                        --
ipv6.dns-options:                       --
ipv6.dns-priority:                      0
ipv6.ignore-auto-dns:                   no
IP4.DNS[1]:                             8.8.8.8
~$ nmcli c show pvpn-ipv6leak-protection | grep -i -e name_servers -e dns
connection.mdns:                        -1 (default)
ipv4.dns:                               --
ipv4.dns-search:                        --
ipv4.dns-options:                       --
ipv4.dns-priority:                      0
ipv4.ignore-auto-dns:                   no
ipv6.dns:                               ::1
ipv6.dns-search:                        --
ipv6.dns-options:                       --
ipv6.dns-priority:                      -1400
ipv6.ignore-auto-dns:                   yes
IP6.DNS[1]:                             ::1
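
For reference, a negative ipv6.dns-priority (like the -1400 on the VPN's leak-protection profile) normally makes that connection's DNS servers win over everything else, which would explain the ::1 entry. A generic, hedged way to pin DNS through NetworkManager itself, instead of hand-editing /etc/resolv.conf, is to set it on the connection profile (the connection name is taken from the output above):

# Pin DNS on the router connection and ignore DHCP-provided servers
nmcli connection modify TRENDnet752 ipv4.dns 8.8.8.8 ipv4.ignore-auto-dns yes
nmcli connection up TRENDnet752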

How can I permanently change a network device name in CentOS8?

Posted: 06 May 2022 08:24 AM PDT

I want to permanently change the device name of a network interface using a portable script.

The interface gets the ugly name of enp02fghjkl1. I want to give it the pretty name netface1. I have tried using ip link and nmcli and many other things to change the device name permanently; I can change it, but I cannot get it to persist through a reboot.

Here is the command set that works to temporarily change it:

sudo ip link set enp02fghjkl1 down
sudo ip link set enp02fghjkl1 name netface1
sudo nmcli connection modify Wired\ connection\ 1 con-name netface1
sudo nmcli device connect netface1
sudo nmcli con up netface1
sudo nmcli con reload

However, after a reboot, the command nmcli c shows netface1, but the command nmcli d shows enp02fghjkl1.

Note: I have also created a network-script called ifcfg-netface1 and placed the uuid in it, but it just gets ignored after a reboot.
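
For reference, the rename mechanism that usually survives reboots on systemd-based systems such as CentOS 8 is a systemd .link file matched on the NIC's MAC address (the address below is a placeholder); after creating it, the initramfs may also need rebuilding (e.g. dracut -f) so the rename is applied early in boot:

# /etc/systemd/network/10-netface1.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=netface1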

Unable to login to SSH remotely

Posted: 06 May 2022 09:37 AM PDT

When on the server, I can log in to localhost using:

ssh localhost
username@localhost's password:
Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.11.0-40-generic x86_64)

However, when I try to log in remotely I get this:

ssh 1.2.3.4 -l username
username@1.2.3.4's password:
Permission denied, please try again.

The password is absolutely correct.

Edit: I added verbosity to the client login, and this is the output after entering the password:

debug3: send packet: type 50
debug2: we sent a password packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,password
Permission denied, please try again.
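
When the client-side debug output only shows the generic rejection, the server's own logs are usually where the real reason appears (PAM denial, sshd_config restrictions, and so on); a sketch, assuming Ubuntu's default log paths:

# On the server: watch authentication messages while retrying the login
sudo tail -f /var/log/auth.log
# Or run a second sshd in debug mode on a spare port and connect to that
sudo /usr/sbin/sshd -d -p 2222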

tcpdump in Kali Linux VM does not capture scp between two devices

Posted: 06 May 2022 08:33 AM PDT

Situation: I'm learning how to develop tests for a hardened server my company is developing for a client. The test configuration will consist of the test target (the server we're developing) and an external test laptop, which I'll use to run some test scripts.

The test laptop and the server will both be running Ubuntu 22.04 LTS. However, I'm posting here because this setup also involves Kali Linux. Apologies if this isn't the correct site to post to.

Using Kali Linux in a VirtualBox VM on the test laptop, I need to analyze packets transferred between the test laptop and the target server. I've been using Learning Kali Linux (an O'Reilly textbook) and this resource to build a simple environment in which I can accomplish this, but have reached a dead end.

Process:

When I configure the network of the Kali Linux VM to be NAT, I can successfully ping the IP addresses of the target server and the host laptop from the Kali Linux VM.

I can also successfully ping the host laptop from the target server, and vice-versa.

But, when I execute the tcpdump port 22 command in the VM, then perform an scp from the target server to the host laptop, I see no packet data in the Kali Linux shell.

Similarly, if I run tcpdump -i any in Kali Linux, I do get a stream, but it's a very slow trickle of eth0 data (the wired connection between the host and target server). Conversely, when I execute the same command on the host laptop outside of the VM I get a healthy flow of info.

Then, when I configure the network of the Kali Linux VM to be a Host-Only Adapter, all of the devices can ping each other (makes sense to me because they share a network now).

When I execute tcpdump port 22 and attempt to scp from server to host, I get packet data this time. Great! Except, the scp fails to complete. I don't get an error message in the Bash window of the server where I executed scp though—it appears to get 'hung-up' and never completes. I have to kill the process.

Even after I shut down the Kali Linux VM this remains true: scp fails to complete. Interestingly, when I run ifconfig on the test laptop I can see that the Host-Only Adapter VirtualBox created still persists. If I reboot the host, the network disappears, and I can once again successfully scp from the target server to the host. I now suspect this is a networking issue that I fail to grasp.

Can anyone offer some help with this problem?

PS: I can execute tcpdump on the host laptop (outside the VM) and see the data stream. Worst comes to worst, I'll use that method. This is really a question of curiosity on the topic of networking between devices and how to monitor that networking using Kali Linux. I didn't choose to open Pandora's box, yet here I am.
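
As a general note on this kind of setup: in NAT mode a VirtualBox guest normally only sees its own traffic, and even on a host-only or bridged adapter the guest cannot capture frames addressed to other machines unless promiscuous mode is allowed on the adapter. A sketch of the host-side setting (the VM name and adapter number are placeholders, and the VM must be powered off):

# Allow promiscuous mode on the VM's first network adapter
VBoxManage modifyvm "kali-vm" --nicpromisc1 allow-all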

Booting updated system with encrypted root partion fails

Posted: 06 May 2022 09:30 AM PDT

I've just updated my Debian system to Bullseye and, using the 5.10.0-13-amd64 kernel, I can no longer access my encrypted root disk.

I can still boot using the previous 5.4.0-0.bpo.2-amd64 kernel.

The error reported is "error allocating crypto tfm" and "Dependency failed for local encrypted volumes".

It does not say what the dependency is. One suspects a crypto module is missing but examining the initrd contents it seems it's all there.

The LUKS header looks like this:

LUKS header information
Version:        2
Epoch:          4
Metadata area:  16384 [bytes]
Keyslots area:  16744448 [bytes]
UUID:           27d09a50-bc61-473e-8c45-12295f45a319
Label:          (no label)
Subsystem:      (no subsystem)
Flags:          (no flags)

Data segments:
  0: crypt
        offset: 16777216 [bytes]
        length: (whole device)
        cipher: aes-xts-plain64
        sector: 512 [bytes]

Keyslots:
  0: luks2
        Key:        512 bits
        Priority:   normal
        Cipher:     aes-xts-plain64
        Cipher key: 512 bits
        PBKDF:      argon2i
        Time cost:  4
        Memory:     735048
        Threads:    4
...

The crypto modules in the initrd (built using dracut and including cbc.ko which I forced in) are:

kernel/arch/x86/crypto
kernel/arch/x86/crypto/aesni-intel.ko
kernel/arch/x86/crypto/crc32c-intel.ko
kernel/arch/x86/crypto/crct10dif-pclmul.ko
kernel/arch/x86/crypto/glue_helper.ko
kernel/arch/x86/crypto/sha256-ssse3.ko
kernel/crypto
kernel/crypto/aes_generic.ko
kernel/crypto/af_alg.ko
kernel/crypto/algif_hash.ko
kernel/crypto/algif_skcipher.ko
kernel/crypto/ansi_cprng.ko
kernel/crypto/async_tx
kernel/crypto/async_tx/async_memcpy.ko
kernel/crypto/async_tx/async_pq.ko
kernel/crypto/async_tx/async_raid6_recov.ko
kernel/crypto/async_tx/async_tx.ko
kernel/crypto/async_tx/async_xor.ko
kernel/crypto/authenc.ko
kernel/crypto/cbc.ko
kernel/crypto/cmac.ko
kernel/crypto/crc32c_generic.ko
kernel/crypto/crct10dif_common.ko
kernel/crypto/crct10dif_generic.ko
kernel/crypto/cryptd.ko
kernel/crypto/crypto_simd.ko
kernel/crypto/drbg.ko
kernel/crypto/ecc.ko
kernel/crypto/ecdh_generic.ko
kernel/crypto/essiv.ko
kernel/crypto/xor.ko
kernel/crypto/xts.ko
kernel/drivers/crypto
kernel/drivers/crypto/ccp
kernel/drivers/crypto/ccp/ccp.ko
kernel/drivers/crypto/padlock-aes.ko
kernel/lib/crypto
kernel/lib/crypto/libaes.ko
x86_64-linux-gnu/libcrypto.so -> libcrypto.so.1.1
x86_64-linux-gnu/libcrypto.so.1.1

I'm not sure what's changed; if somebody could help that would be much appreciated.
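
One way to narrow this down is to compare what the working and failing initramfs images actually contain, since "error allocating crypto tfm" usually points at a cipher or dm-crypt module the kernel cannot load; a sketch (adjust the image paths to what is in /boot):

# Compare crypto-related contents of the working and failing initramfs images
lsinitrd /boot/initrd.img-5.4.0-0.bpo.2-amd64 | grep -E 'crypt|aes|xts' | sort > old.txt
lsinitrd /boot/initrd.img-5.10.0-13-amd64     | grep -E 'crypt|aes|xts' | sort > new.txt
diff old.txt new.txt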

how to get the deb package full path in the pre-install script

Posted: 06 May 2022 11:08 AM PDT

I have built a deb package with a pre-install script. I want to automatically back up the deb package after it is installed, so I want to get the full path of the package file in the pre/post-install script. Are there any solutions?

Why are sessions breaking when starting from standby on Debian with Wayland? It shows a black screen with "the screen locker is broken"

Posted: 06 May 2022 10:40 AM PDT

Since I switched to Wayland on Debian 11/KDE, waking the computer from standby often shows a black screen with a notice that one should use Ctrl+Alt+F2 and loginctl unlock-session {id} (this text disappears before one can fully read it). Doing so doesn't help (and pressing Ctrl+Alt+F7 shows a frozen login screen), but running pkill -KILL -u {username} does. Edit: now running loginctl unlock-session {id} as described in the error message and switching back works too.

I'd like to find out why this occurs and prevent it. Which things (logs etc) should I check or test to do so?

I'm aware that Wayland support isn't yet stable in KDE. This question is about how one could do basic troubleshooting, like checking specific logs or a set of common problems.

The message that's shown shortly is:

The screen locker is broken and unlocking is not possible anymore.
In order to unlock switch to a virtual terminal (e.g. Ctrl+Alt+F2),
log in and execute the command:

loginctl unlock-session {id}

Afterwards switch back to the running session (Ctrl+Alt+F8).

The journald log has:

org.kde.ActivityManager qt.qpa.wayland: Creating a fake screen in order for Qt not to crash
dbus-daemon [session uid=1000 pid=2859675] Activating service name='org.kde.ksystemstats' requested by ':1.71' (uid=1000 pid=2867777 comm="/usr/bin/plasmashell ")
dbus-daemon [session uid=1000 pid=2859675] Successfully activated service 'org.kde.ksystemstats'
dbus-daemon [session uid=1000 pid=2859675] Activating service name='org.kde.KSplash' requested by ':1.71' (uid=1000 pid=2867777 comm="/usr/bin/plasmashell ")
dbus-daemon [session uid=1000 pid=2859675] Activating service name='org.kde.kdeconnect' requested by ':1.71' (uid=1000 pid=2867777 comm="/usr/bin/plasmashell ")

(KDE's Kdeconnect and Ksystemstats getting started automatically is another problem I'm having - details in the bug reports here. I don't think that this is causing the problem.)

syslog has these entries from after starting:

kernel: Filesystems sync: 0.0.. seconds
kernel: Freezing user space processes ... (elapsed 0.0.. seconds) done.
kernel: OOM killer disabled.
kernel: Freezing remaining freezable tasks ... (elapsed 0.0.. seconds) done.
kernel: printk: Suspending console(s) (use no_console_suspend to debug)
kernel: serial ...: disabled
kernel: parport_pc ...: disabled
kernel: ...: EEE TX LPI TIMER: ...
kernel: sd ..:0:0:0: [sd..] Stopping disk
...
kernel: pci 0000:..:00.0: BAR ..: no space for [mem size ...]
kernel: pci 0000:..:00.0: BAR ..: failed to assign [mem size ...]
...

xlsclients shows the hostname of the computer (as well as Firefox).

Edit: I created an issue here (another one about troubleshooting/logs is linked there).

Edit: after running that command and pressing Ctrl+Alt+F(8, for example), a qemu user is shown again in the login screen, even though I reran this as a workaround to at least hide that user.

Edit: After pressing Ctrl+Alt+F(8, for example) to show the login screen after logging the user out, it briefly shows the error Failed to start LSB: web-based administration interface for Unix systems before loading the login screen. Moreover, recently the VeraCrypt-mounted drive no longer shows as mounted in the VeraCrypt GUI (but it's still mounted). No veracrypt process is running.

Another thing that happens with these crashes is that the bash history file shrinks to only around 20 kB; everything else gets removed (which means I restore the history from backups every time). This is currently probably the most annoying part of it: the bash history is lost every time.
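
As a generic starting point for the troubleshooting question itself (these are ordinary journal queries, not KDE-specific advice), it can help to see what the compositor and screen locker logged around the resume and whether anything crashed:

# Messages from the current boot mentioning the locker/compositor
journalctl -b 0 --since "30 min ago" | grep -iE 'kscreenlocker|kwin_wayland|ksplash'
# Crashes collected by systemd-coredump, if any
coredumpctl list kscreenlocker_greet kwin_wayland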

zsh completion for custom script

Posted: 06 May 2022 08:53 AM PDT

I have zsh completion for my custom script. It takes 3 optional arguments, --insert, --edit and --rm, and it completes files from a given path:

#compdef pass

_pass() {
  local -a args
  args+=(
    '--insert[Create a new password entry]'
    '--edit[Edit a password entry]'
    '--rm[Delete a password entry]'
  )

  _arguments $args '1: :->directory'

  case $state in
  directory)
    _path_files -W $HOME/passwords -g '*(/)' -S /
    _path_files -W $HOME/passwords -g '*.gpg(:r)' -S ' '
    ;;
  esac
}

I need to add another option, -P, that will also be offered for completion (when I type - and TAB) but does not offer path completion. This option should only take a string, so it should not match a path, and it should also not offer the other options once -P has been specified.

How can I add this new option to my completion script?

UPDATE:

The completion does not work for option -P, i.e. when I do:

pass -P <TAB>  

it completes nothing because option -P needs a string. This is good. But, when I do

pass -P foo <TAB>   

it also completes nothing. But it should complete directories in the current path. How can I do that?
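
A sketch of the kind of _arguments spec that usually expresses "takes a free-form value, complete nothing for it, and exclude the other options" (the bracketed description is made up; adjust to taste):

# Hypothetical addition to the args array: -P consumes the next word as its argument,
# the empty action after ':string:' offers no completions for that word, and the
# exclusion list stops --insert/--edit/--rm from being offered once -P is present.
args+=('(--insert --edit --rm)-P[Pass a plain string]:string:')

Because the spec consumes -P's argument, the word after it (foo in pass -P foo <TAB>) no longer occupies the '1: :->directory' position, so directory completion should come back for the next word.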

StrictModes enable or disable in sshd_config in Linux

Posted: 06 May 2022 09:07 AM PDT

I have a question for you; I need to solve this problem.

I have a folder /keys with chmod 755 on it and an ACL set with setfacl -d -m u:myadmin:rwx /keys.

In the /keys folder there are other folders such as user1, user2, user3.

In each user1, user2, user3 folder there is a specific public key for SSH access.

And now there is my problem:

If StrictModes is enabled in sshd_config, user1 receives the error message "Bad ownership or modes for directory" during the SSH connection, because on the user1 folder both root and myadmin have rwx rights. When I remove myadmin's rights, everything works. But I need myadmin in order to manage the public keys in these folders. So I tried disabling StrictModes in sshd_config and everything works. However, I think disabling StrictModes is not a good idea for security. All users and admins are chrooted to specific folders on the system.

What do you think? Is there any other solution for this problem, or can StrictModes be disabled safely to solve it?


RE: All users are chrooted in the directory and can't log in via PuTTY or the console.

They can connect only via SSH. After login they see their folders with specific rights and can't move outside the chrooted directory.

However, each user needs access to a specific folder in a different place, outside of that user's chroot. The user doesn't know about the outside folder; it's only for the server to write some files into. That's all.

With setfacl -m u:user:rwx and StrictModes on, the user can't log in via SSH. If I turn off StrictModes, the user can connect via SSH.

I know StrictModes specifies whether sshd should check the file modes and ownership of the user's files and home directory before accepting a login.

Or is there another solution? Thanks
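
One commonly suggested alternative that keeps StrictModes enabled is to point sshd at root-owned key files outside the users' directories, so the myadmin ACL never has to sit on anything StrictModes inspects; a sketch of the sshd_config side (the layout is an assumption, and the key files would then be managed as root or via sudo):

# /etc/ssh/sshd_config (sketch)
# Key files owned by root, directories mode 755, files mode 644
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
StrictModes yes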

How to disable CUPS service on reboot with systemd?

Posted: 06 May 2022 09:56 AM PDT

I often connect to a network which has a lot of printers. When printer discovery is ongoing, a lot of distracting messages pop up in GNOME. I use a printer only rarely, so I would prefer to keep CUPS disabled most of the time. Stopping CUPS works and eliminates the annoying notifications:

systemctl stop cups  

I would like to disable it on boot. Surprisingly, after disabling

systemctl disable cups  

CUPS still runs after reboot. The status command

systemctl status cups  

produces

● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; disabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/cups.service.d
   Active: active (running) since Tue 2018-11-06 02:35:50 PST; 11s ago

I expected that disabling a service would prevent it from running after a reboot. Does the activation happen because of the preset? I tried to preset the "disabled" state with --preset-mode, but it did not work.

My OS is Debian Stretch.

systemctl --version
systemd 232
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN
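
For what it's worth, CUPS usually ships socket- and path-activation units alongside cups.service, and either of those can pull the scheduler back in even when the service itself is disabled; a hedged check and fix:

# See which cups-related units exist and how they are enabled
systemctl list-unit-files 'cups*'
# Disable the service together with its activation triggers
sudo systemctl disable --now cups.service cups.socket cups.path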

Replacing .* in vi

Posted: 06 May 2022 09:33 AM PDT

I need to replace all occurrences of "period asterisk" as it is shown here:

blah blah .*:.*:.* blah blah  

with:

[0-9][0-9]:[0-9][0-9]:[0-9][0-9]  

so that the end result looks like this:

blah blah [0-9][0-9]:[0-9][0-9]:[0-9][0-9] blah blah  

I tried different variations of the following but it didn't work:

%s_ .*:.*:.* _ [0-9][0-9]:[0-9][0-9]:[0-9][0-9] _g  
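
A sketch of how this substitution is often written: in the search half of :s the . and * are metacharacters and need escaping to be matched literally, while the replacement half treats [0-9] and friends as plain text, so it can stay as-is:

:%s_ \.\*:\.\*:\.\* _ [0-9][0-9]:[0-9][0-9]:[0-9][0-9] _g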

How to install PostgreSQL 9.3 in FreeBSD jail?

Posted: 06 May 2022 10:08 AM PDT

I configured virtual NICs using pf, and a FreeBSD jail using qjail create pgsql-jail 192.168.0.3.

When I tried to install PostgreSQL 9.3 using the ports collection, it showed a strange message at first.

pgsql-jail /usr/ports/databases/postgresql93-server >make install
===> Building/installing dialog4ports as it is required for the config dialog
===>  Cleaning for dialog4ports-0.1.5_1
===> Skipping 'config' as NO_DIALOG is defined
====> You must select one and only one option from the KRB5 single
*** [check-config] Error code 1

Stop in /basejail/usr/ports/ports-mgmt/dialog4ports.
*** [install] Error code 1

Stop in /basejail/usr/ports/ports-mgmt/dialog4ports.
===> Options unchanged
=> postgresql-9.3.0.tar.bz2 doesn't seem to exist in /var/ports/distfiles/postgresql.
=> Attempting to fetch ftp://ftp.se.postgresql.org/pub/databases/relational/postgresql/source/v9.3.0/postgresql-9.3.0.tar.bz2
postgresql-9.3.0.tar.bz2                        1% of   16 MB   71 kBps

Anyway, the installation continued, so I waited. I chose the default options in all option dialogs. And at the end of the process, I saw it finally fail with this message:

====> Compressing man pages
===>  Building package for pkgconf-0.9.3
Creating package /basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz
Registering depends:.
Registering conflicts: pkg-config-*.
Creating bzip'd tar ball in '/basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz'
tar: Failed to open '/basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz'
pkg_create: make_dist: tar command failed with code 256
*** [do-package] Error code 1

Stop in /basejail/usr/ports/devel/pkgconf.
*** [build-depends] Error code 1

Stop in /basejail/usr/ports/textproc/libxml2.
*** [install] Error code 1

Stop in /basejail/usr/ports/textproc/libxml2.
*** [lib-depends] Error code 1

Stop in /basejail/usr/ports/databases/postgresql93-server.
*** [install] Error code 1

Stop in /basejail/usr/ports/databases/postgresql93-server.

I have no idea why this fails. The errors at the beginning suggest something is wrong with dialog4ports, and the errors at the end suggest the installer cannot write to the ports tree. AFAIK, the ports files are shared read-only from the host system.

What's wrong with my jail? How can I install PostgreSQL 9.3 in my jail?
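
Since every failure involves writing build or package files under the read-only /basejail ports tree, one commonly used arrangement for qjail-style shared ports is to redirect the writable parts of the ports infrastructure in the jail's /etc/make.conf (a sketch; the /var/ports prefix matches the distfiles path already shown in the output):

# /etc/make.conf inside the jail (sketch)
WRKDIRPREFIX=/var/ports         # build work directories
DISTDIR=/var/ports/distfiles    # downloaded distfiles
PACKAGES=/var/ports/packages    # generated packages (.tbz)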

Reload of tmux config not unbinding keys (bind-key is cumulative)

Posted: 06 May 2022 09:33 AM PDT

I've been experimenting with different tmux keybinding settings and I've noticed the following:

If I reload my tmux config (from within tmux) the keybindings I once had loaded will remain loaded. The only way (I know of) to clean this up is to quit all tmux sessions and restart. So it looks like tmux remembers all previously loaded keybindings and will only remove them on a fresh start or by explicitly unbinding them.

To recreate this:

  • open a terminal (A)
  • start tmux
  • check whether the keybinding shows a clock (press PREFIX C-t)
  • press PREFIX ? to see the keybinding in the list
  • edit ~/.tmux.conf
  • add a keybinding (bind C-t display "Keybinding C-t")
  • reload tmux config (PREFIX : source-file ~/.tmux.conf)
  • check whether the keybinding works (press PREFIX C-t)
  • press PREFIX ? to see the new keybinding in the list
  • edit ~/.tmux.conf again
  • remove the keybinding (so remove bind C-t display "Keybinding C-t")
  • reload tmux config (PREFIX : source-file ~/.tmux.conf)
  • check whether the keybinding works (press PREFIX C-t), it still displays "Keybinding C-t"
  • press PREFIX ? to see that the new keybinding is still in the list
  • exit tmux
  • enter tmux
  • check whether the original keybinding works again (press PREFIX C-t), it should now display a clock again
  • press PREFIX ? to see that the new keybinding has been removed from the list

My question: is there a way to instruct tmux to "forget" all loaded configs and then load .tmux.conf?
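
A sketch of the usual workaround: make the config itself start from a clean slate, so a reload cannot leave stale bindings behind. Note that unbind-key -a also removes tmux's built-in default bindings, so anything still wanted has to be re-declared afterwards:

# At the top of ~/.tmux.conf: drop every existing binding before defining your own.
unbind-key -a
bind C-t display "Keybinding C-t"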

There must be a better way to replace single newlines only?

Posted: 06 May 2022 09:05 AM PDT

I am in the habit of writing one line per sentence because I typically compile things to LaTeX, or am writing in some other format where line breaks get ignored. I use a blank line to indicate the start of a new paragraph.

Now, I have a file written in this style which I'd like to just send as plain text. I want to remove all the single linebreaks but leave the double linebreaks intact. This is what I've done:

sed 's/^$/NEWLINE/' file.txt | awk '{printf "%s ",$0}' | sed 's/NEWLINE/\n\n/g' > linebreakfile.txt  

This replaces empty lines with some text I am confident doesn't appear in the file: NEWLINE and then it gets rid of all the line breaks with awk (I found that trick on some website) and then it replaces the NEWLINEs with the requisite two linebreaks.

This seems like a long-winded way to do a pretty simple thing. Is there a simpler way? Also, if there were a way to replace multiple spaces (which sometimes creep in for some reason) with single spaces, that would be good too.

I use emacs, so if there's some emacs specific trick that's good, but I'd rather see a pure sed or pure awk version.
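
For comparison, a sketch using awk's paragraph mode: with an empty RS each blank-line-separated block becomes one record, and reassigning $1 rebuilds the record with single spaces between fields, which joins the lines of a paragraph and squeezes repeated spaces at the same time:

# Join each paragraph onto one line, keep blank lines between paragraphs,
# and collapse runs of whitespace to single spaces.
awk 'BEGIN{RS=""; ORS="\n\n"} {$1=$1; print}' file.txt > linebreakfile.txt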
