Tuesday, May 24, 2022

Recent Questions - Unix & Linux Stack Exchange

Why am I getting connection refused with IP address even though it works with localhost and firewall is open?

Posted: 24 May 2022 11:49 AM PDT

When I run sudo ufw status on an Ubuntu box, I get the following output:

sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
30303                      ALLOW       Anywhere
9000                       ALLOW       Anywhere
3000                       ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
8008/tcp                   ALLOW       Anywhere
8008                       ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
30303 (v6)                 ALLOW       Anywhere (v6)
9000 (v6)                  ALLOW       Anywhere (v6)
3000 (v6)                  ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
8008/tcp (v6)              ALLOW       Anywhere (v6)
8008 (v6)                  ALLOW       Anywhere (v6)

When I access a service running on port 8008 from within the box using localhost it works. That is, the following works:

curl --head http://localhost:8008/metrics  

But if I use the IP address of the box instead, it does not work. That is:

$ curl --head http://<public-ip>:8008/metrics
curl: (7) Failed to connect to <public-ip> port 8008: Connection refused

If I also try accessing it from a browser, the connection is still refused.

What could be going on here? The output of sudo ufw status shows that the port is open and accessible, but it is still not working.
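A quick way to narrow this down is to check which address the service is actually listening on; this is only a diagnostic sketch (the port comes from the question):

sudo ss -ltnp | grep ':8008'

If the local address shown is 127.0.0.1:8008 rather than 0.0.0.0:8008 (or [::]:8008), the service is bound to localhost only, and no firewall rule will make it reachable on the public IP.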

Settings window not visible in Gnome 3 with Wayland

Posted: 24 May 2022 11:40 AM PDT

I'm running Gnome 3 on Debian testing. So far two applications are affected: totem ("Videos") and Settings. When I try to open one or the other, the icon appears in the status bar below, but no window appears on the screen. Things I tried so far:

  • Searched online, "gnome 3 window invisible", "gnome 3 find my window" (in case it was somewhere outside the visible area), "gnome 3 settings window invisible", nothing helped so far; most answers were old, from 2015 or 2017.
  • Followed https://help.gnome.org/users/gnome-help/stable/shell-windows-lost.html.en – the window icon does not appear when I press Meta+TAB. It only appears on the status bar.
  • Deleted ~/.config/dconf/user and ~/.config/gnome-session and restarted the computer.

I finally found a workaround: to log out and log in again, using the "Gnome 3 Xorg" profile. I can reliably reproduce the problem: if I log in using Wayland, the Settings window does not appear. If I log in using Xorg, the window is there.

I can keep using the workaround of course, but since I have a reproducible case, how would I go about further diagnosing this?
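One low-risk way to gather more data, offered only as a sketch, is to launch the affected application from a terminal inside the Wayland session and watch for errors, and to check the session journal around the time of the attempt:

gnome-control-center
journalctl --user -b | tail -n 100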

What is the use case difference between GRE and GRETAP?

Posted: 24 May 2022 11:22 AM PDT

What is the use-case difference between GRE and GRETAP? I understand that GRETAP is a layer 2 ("Ethernet") tunnel.

But when should I use GRE and when should I use GRETAP? Can you give a specific example of each use case?

Thanks.
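For context, the practical difference is that GRE carries IP packets (layer 3, a routed point-to-point link), while GRETAP carries whole Ethernet frames (layer 2), so a GRETAP interface can be put into a bridge to stretch a LAN segment between sites. A rough sketch with iproute2 (addresses, interface and bridge names are placeholders):

# layer 3: routed IP-in-GRE tunnel
ip tunnel add gre1 mode gre local 192.0.2.1 remote 198.51.100.1
ip addr add 10.0.0.1/30 dev gre1
ip link set gre1 up

# layer 2: GRETAP interface that can be enslaved to a bridge (br0 assumed to exist)
ip link add gretap1 type gretap local 192.0.2.1 remote 198.51.100.1
ip link set gretap1 up
ip link set gretap1 master br0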

Reading from a Parallel Port Tape Drive using paride

Posted: 24 May 2022 10:42 AM PDT

I have some old QIC-80/DC-2120 tapes in storage that I wanted to pull the data off of. Although the original Colorado 250MB drives can be found on eBay, I didn't have any machines with a floppy controller (needed for those drives), so I purchased an AIWA TD-P250 parallel port tape drive off eBay along with a PCIe parallel port adapter.

AIWA TD-P250 Tape Drive

In order to get the correct paride kernel modules, I ended up using the Liquorix kernel with Debian Bullseye.

I can see my PCI-E parallel port card being detected:

[   14.965405] parport_serial 0000:01:00.0: enabling device (0000 -> 0003)
[   14.965539] parport0: PC-style at 0xf100, irq 97 [PCSPP,TRISTATE]
[   15.053879] ppdev: user-space parallel port driver

I can modprobe the individual paride protocol drivers and have them assigned a number:

[642412.131083] paride: epat registered as protocol 0
[642421.621593] paride: bpck registered as protocol 1
[642425.580159] paride: friq registered as protocol 2
[644696.035287] paride: comm registered as protocol 3
...
...

Unfortunately, none of the protocols seem to work. For example, when I run modprobe pt drive0=0xf100,1, I get the following:

[  369.920354] pt: pt version 1.04, major 96
[  369.924266] pt0: Sharing parport0 at 0xf100
[  370.225661] pt0: bpck 1.02, backpack          unit 0
[  370.225663]  at 0xf100, mode 1 (8-bit), delay 4
[  376.508496] pt: No ATAPI tape drive detected

The tape drive didn't come with the original driver disk, and I can't seem to find a copy online (the filenames on the disk can sometimes indicate the correct protocol to use). I've opened up the unit, but I can't see anything indicating what the IDE-to-parallel-port controller is.

AIWA TD-P250 with top off

Close up of IDE board

Wide shot of entire drive

Is there any way to get this tape drive working in Linux?
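One thing that may help, offered only as a sketch, is to try the pt driver against each registered protocol in turn and watch dmesg for anything other than the "No ATAPI tape drive detected" message (the port address is the one from the question, and the protocol numbers follow whatever order the modules were registered in):

for proto in 0 1 2 3; do
    modprobe -r pt 2>/dev/null
    modprobe pt drive0=0xf100,$proto
    dmesg | tail -n 5
done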

Run a tmux new session with cron, then run a command

Posted: 24 May 2022 10:50 AM PDT

I have a cron job that auto-starts a service inside tmux if it detects that it's not running. The rest of my bash script works, but if the tmux session doesn't exist, it throws an error, which is why I added the "tmux new ENTER" line below. But it still doesn't start a tmux session. If I manually start the tmux session first, the code works and it executes the send-keys command.

I'm trying to see why the tmux new session doesn't start on cron. Any ideas?

/usr/bin/pkill -9 java
/usr/bin/tmux new ENTER
sleep 3
/usr/bin/tmux send-keys -t 0 "cd /home/xxx/bbb;./run.sh" ENTER
echo "$(date) ${1} RESTARTED NODE"
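For what it's worth, tmux new without -d tries to attach to a terminal, which cron does not provide; a detached session is normally created and targeted by name instead. This is only a sketch (the session name is an assumption):

/usr/bin/tmux new-session -d -s node
/usr/bin/tmux send-keys -t node "cd /home/xxx/bbb;./run.sh" ENTER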

Creating a tar.gz archive of multiple directories of different locations -- "tar: Cowardly refusing to create an empty archive"

Posted: 24 May 2022 11:59 AM PDT

I'm trying to create an archive:

$ cd /tmp
$ tar -czf test1.tar.gz -C ~/Downloads/dir1 -C ~/Documents/dir2 -C ~/dir3/dir4/dir5

... which is not supposed to preserve the full paths of the directories inside it, hence -C.

Result

tar: Cowardly refusing to create an empty archive
Try 'tar --help' or 'tar --usage' for more information.

Why? How to fix it?
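For background, -C only changes tar's working directory for the arguments that follow it; it does not itself name anything to archive, so the command above ends up with no members at all. A sketch of the usual pattern, splitting each path into a parent (after -C) and a member name (the directory names are taken from the question):

tar -czf /tmp/test1.tar.gz \
    -C ~/Downloads dir1 \
    -C ~/Documents dir2 \
    -C ~/dir3/dir4 dir5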

zsh function with fzf selection requires Enter

Posted: 24 May 2022 12:04 PM PDT

I wrote a simple zsh function which allows me to select from the dirs-stack via fzf.

My .zshrc looks like

DIRSTACKSIZE='99'

setopt PUSHD_IGNORE_DUPS

# change to directory from the dirs stack
fzf-change-dirstack () {
    cd "$(dirs -lv | cut -f2 | fzf )"
}

zle -N fzf-change-dirstack
bindkey '^[p' fzf-change-dirstack   # shortcut ALT+P

It works fine, although some improvements still have to be made. The only thing that is very annoying for me is that when I use the keybinding I have to press Enter twice to change to the directory.

How can I modify the script so it changes directory immediately, without having to press Enter twice?
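One tweak that is often suggested for widgets that change directory is to redraw the prompt from inside the widget itself; this is an untested sketch of that idea:

fzf-change-dirstack () {
    local dir
    dir=$(dirs -lv | cut -f2 | fzf) || return
    cd -- "$dir"
    zle reset-prompt    # redraw the prompt so the new directory shows up without an extra Enter
}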

Library Permission Error With Python3

Posted: 24 May 2022 10:03 AM PDT

I have RHEL 8 with Python3 installed. I am using an EC2 instance with a hardened RHEL image.

When trying to run Python from the CLI or use any apps that use Python I get the following error:

python3: error while loading shared libraries: libz.so.1: cannot open shared object file: Permission denied

If I run the AWS CLI I get the same error just with aws: rather than python3.

I have tried making sure that zlib is installed and that it is in my path.

I can use python as root. This does not solve the problem as I cannot run the AWS CLI as root.

I would appreciate any help in figuring out how to run Python as the default Amazon user rather than as root.
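A diagnostic sketch that may help pin this down (the library path is the usual RHEL 8 location and is an assumption; hardened images sometimes also run fapolicyd, which can block library loads):

ldd "$(command -v python3)" | grep libz
namei -l /usr/lib64/libz.so.1      # permissions of every component along the path
ls -lZ /usr/lib64/libz.so.1        # file mode plus SELinux context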

Thanks!

How to tell SELinux to allow a python script to do "everything"

Posted: 24 May 2022 10:41 AM PDT

I'm new to SELinux and it is giving me a headache. I have a Python service that runs a Python script in my home directory (my_script.py). I've been running the service, seeing which aspect of it SELinux is blocking, and adding a new SELinux module each time using the suggested commands:

allow this access for now by executing:
# ausearch -c 'my_script.py' --raw | audit2allow -M my_script
# semodule -X 300 -i my_script.pp

However, each time I add a new module it keeps blocking another aspect of my script (reading files, writing files, then reading a socket, etc.). I believe I have about 10 modules now, and I'm having trouble keeping track of all of them. I'm also worried that down the road my script might do something that SELinux doesn't like but that didn't come up during testing. Is there a way to tell SELinux, please let my_script.py do whatever it wants (read, write, rename, etc.)? I am about to just disable SELinux, but I really would rather not. Thanks!
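One option short of disabling SELinux entirely is to mark only the domain the script runs under as permissive, so everything it does is logged but allowed. This is a sketch; my_script_t is a placeholder for whatever type ps -eZ shows for the running service:

ps -eZ | grep my_script            # find the domain (type) the service runs as
sudo semanage permissive -a my_script_t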

Use only one of the batteries while charging

Posted: 24 May 2022 09:37 AM PDT

My laptop has 2 batteries. Is there a way to prevent battery X from being used while charging? I mean that when I use the laptop on charge, the OS should use battery Y, not battery X.

My batteries are Sony 45N1111 and LGC 45N1735.

I use Debian sid.

"apt install <name>.deb" not correctly installing Nvidia driver in the deb package

Posted: 24 May 2022 10:57 AM PDT

EDIT (solution): The problem was that I thought the package would install the driver, but that was not the case (more about it in the accepted answer). When installing the Nvidia driver I suggest going with the .run file from the archive here, if you need a specific version; that was my case, so I could not just autoinstall. The .deb package was not working for me.

I have Debian 10 (Buster) and I'm trying to install Nvidia driver 460.91. I downloaded it and tried to install it as root by executing:

apt install ./<name>.deb  

What I get is: https://i.imgur.com/FeErHh5.png

and the driver is not installed.

Why is the Nvidia driver not installed when the output claims that the package is? Also, the first run was extremely fast, so it could not possibly have installed the driver.

I tried among other things running

sudo dpkg -i <name>.deb
sudo apt install -f

but got the same result. For some reason it seems that the package is installed, yet the driver is not.
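Some checks that may clarify what actually got installed (a diagnostic sketch; <name> is the same placeholder used above):

dpkg -l | grep -i nvidia     # which Nvidia packages dpkg thinks are installed
dpkg -L <name>               # which files the package actually ships
lsmod | grep nvidia          # whether any Nvidia kernel module is loaded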

How to list files with only name and size

Posted: 24 May 2022 09:16 AM PDT

I would like to list the content of a directory, 1 line per entry, with only the files names and the files sizes.

ls -l shows too much information.

ls -1 -s doesn't show a file's size but its allocation (--block-size=1 doesn't change that)

I cannot find a command line argument that makes ls do what I want... is there one?

If not, what would be a good, short and robust solution to make that kind of listing?
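If ls itself won't do it, one short and fairly robust alternative is GNU stat or find (a sketch; sizes are in bytes):

stat -c '%n %s' -- *                           # name and size, one entry per line
find . -maxdepth 1 -type f -printf '%f %s\n'   # same idea, regular files only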

Rename files to order them using perl substitution

Posted: 24 May 2022 10:15 AM PDT

I have a set of files like the following

fine_0.vtu
fine_10.vtu
fine_4032.vtu
...

I want to add padding 0s to be able to order them and render them

I am trying this command

rename -n 's/fine_(\d+).vtu/sprintf("%05d", $1)/e' fine*.vtu  

but it's not showing anything. How should I modify the command to preview the new names and then apply them?
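As a side note, the substitution as written replaces the entire match with just the padded number, dropping the fine_ prefix and the .vtu suffix. A sketch that keeps them, assuming the Perl-based rename (sometimes packaged as file-rename or prename; rename --version tells you which variant is installed):

rename -n 's/^fine_(\d+)\.vtu$/sprintf("fine_%05d.vtu", $1)/e' fine_*.vtu   # preview
rename    's/^fine_(\d+)\.vtu$/sprintf("fine_%05d.vtu", $1)/e' fine_*.vtu   # apply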

Extending the available space in a mirrored ZFS pool with zpool add?

Posted: 24 May 2022 11:44 AM PDT

I have a mirror pool with two devices (sda, sdb) on my Debian system.

Now I have inserted two additional devices (sdc, sdd) so that I can double the available space in the /mnt/data/ directory.

Is it done by just sudo zpool add backup-pool mirror sdc sdd?

I'm a bit scared so I want to make sure. Sorry for the rookie question.
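Since the worry is about getting it right the first time, it may help that zpool add accepts -n for a dry run; a sketch using the pool and device names from the question:

sudo zpool add -n backup-pool mirror sdc sdd   # print the resulting layout without changing anything
sudo zpool add backup-pool mirror sdc sdd      # actually add the second mirror vdev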

Here are the details of my pool:

sudo zfs list

NAME               USED  AVAIL     REFER  MOUNTPOINT
backup-pool       1.47T  1.17T       96K  /backup-pool
backup-pool/data  1.47T  1.17T     1.47T  /mnt/data

sudo zpool list

NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backup-pool  2.72T  1.47T  1.25T        -         -    11%    53%  1.00x    ONLINE  -

sudo fdisk -l

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFAX-68J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFAX-68J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFAX-68J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ABB57994-974B-734A-A2A9-2BA616368A52

Device          Start        End    Sectors  Size Type
/dev/sdb1        2048 5860515839 5860513792  2.7T Solaris /usr & Apple ZFS
/dev/sdb9  5860515840 5860532223      16384    8M Solaris reserved 1

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFAX-68J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: DCDDA5ED-CB54-C042-9AF6-076F07F44E96

Device          Start        End    Sectors  Size Type
/dev/sda1        2048 5860515839 5860513792  2.7T Solaris /usr & Apple ZFS
/dev/sda9  5860515840 5860532223      16384    8M Solaris reserved 1

Thanks!

Permissions appear correct, still can't write to directory

Posted: 24 May 2022 11:23 AM PDT

I have two users with these permissions (id):

uid=113(sonarr) gid=1001(sonarr) groups=1001(sonarr),1000(master),1002(qbtuser)
uid=1001(qbtuser) gid=1001(sonarr) groups=1001(sonarr),1000(master),1002(qbtuser)

I have a folder with these permissions.

drwxrwxr-x  6 master master  6 May 24 11:32 ./
drwxrwxr-x  5 master master  5 May  6  2021 ../
drwxrwxr-x 10 master master 10 Oct 20  2020 TV_Main

I've already rebooted the system. When I log in as qbtuser, I can touch a file or make changes with no issue. Obviously logging with master also has no issues.

But if I log in as sonarr, I get permission denied for the folder. But they have the same groups. What am I not understanding?

.....

Edit for better clarification.

Yes, the entire tree has r-x permissions for ugo. I even tried changing the whole tree to g+rwx and still no luck.

It is an NFS share, version 4. The server has the same mappings for everything except uid 113. Maybe that's my problem? But I thought that if the group was included it should work. I will explore this more.

To clarify permission denied, I can cd into the directories fine, or run ls fine. But if I try to touch file.txt or mkdir temp I get

mkdir: cannot create directory 'temp': Permission denied

If I make files with master/qbtuser, I cannot edit, echo foo >> file, rename, delete, or do anything else. Still the same permission denied message.
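A diagnostic sketch that may help, given that NFS with AUTH_SYS compares numeric UIDs/GIDs rather than names (run the first command on the client and the others on the server):

id sonarr                 # the numeric uid/gid the client sends
getent passwd 113         # does uid 113 exist on the server at all?
getent group 1000 1001    # do the shared groups have the same members there?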

Mounting Network Share on Debian for Plex

Posted: 24 May 2022 11:26 AM PDT

I'm working through setting up a media server running Debian 1.2.0 on a VM with ESXi. I've installed Debian and also installed Plex. The media is on a Netgear ReadyNAS102 and needs to be accessed through the NAS.

My Plex folder is currently located at NetgearNAS>Plex
Name of my NAS is ManiaNAS
NAS is located at 192.168.0.101 (static ip)
cifs-utils is version (2:6.11-3.1) - latest

Now, while I can navigate in Debian to my Plex folder and see its contents, I cannot get Plex to see the same folder. After digging deeper into this, I understood that Plex cannot access network folders and that the way around this is to mount the network folder within the local file system. I followed the advice at the link below to a T, but it hasn't helped:

https://askubuntu.com/questions/345087/how-do-i-add-a-network-drive-to-plex

I opened up fstab and updated it to include a line as follows:
//ManiaNAS/Plex /media/Plex cifs guest 0 0

Now when I go back to the Terminal and try sudo mount -a, I get this error:
Could not resolve address for [name of NAS]: Unknown error

This is where I am stuck. I thought of trying something else when I noticed the path I got while hovering over my Plex folder in Files. The path was [smb://ManiaNAS.local/plex/]. So I entered this instead into fstab and got this error:
Mounting cifs url not implemented yet. Attempt to mount smb://manianas.local/Plex/

I tried mounting with write permission as well but just got an error that said:
Parse error at line 16 (which is where the fstab entry is).

My issues/questions:

1) Can I use //ManiaNAS/Plex in fstab or should I use //192.168.0.101/Plex? I now know I should use //192.168.0.101/Plex.

2) I can navigate to the web interface of my Netgear ReadyNAS when I go to 192.168.0.101, but I cannot navigate to the Plex folder directly by entering 192.168.0.101/Plex -- not sure what to do here. Not an issue, as I can access the Plex folder by typing in //192.168.0.101/Plex.

3) What should my fstab entry be?

In Terminal, I tried this:
mount 192.168.0.101/Plex /media/Plex
That gave me this error: mount: /media/Plex: must be superuser to use mount.

Trying with sudo (sudo mount -t cifs //192.168.0.101/Plex /media/Plex), I was asked for this: Password for root@//192.168.0.101/Plex: and I entered the admin password for the NAS.
I then got this error: mount error(13): Permission denied. Refer to the mount.cifs(8) manual page (e.g. man mount .cifs) and kernel log messages (dmesg).

I have ReadyNAS OS 6+ and according to Netgear (https://kb.netgear.com/30068/ReadyNAS-OS-6-SSH-access-support-and-configuration-guides) the root password is the same as the admin password. I also checked the ReadyNAS users page and there is only an admin user.

Netgear suggested trying 'password' as the root password but that returned this message from sudo: Sorry, try again. This led me to believe that I am not entering an incorrect password.

I googled the earlier error that I got mount error(13): Permission denied. Refer to the mount.cifs(8) manual page (e.g. man mount .cifs) and kernel log messages (dmesg) and came across this page link and tried it out. The site said to use this (modified for my use case) but it did not work:

sudo mount -t cifs //192.168.0.101/Plex /mount/Plex/ -o vers=3.0,username=<username>,password=<password>,dir_mode=0777,file_mode=0777,serverino,sec=ntlmssp   

I then removed items that I didn't think had anything to do with the mount command and ended up at this which seemed to work!

sudo mount -t cifs //192.168.0.101/Plex /mount/Plex/ -o,username=<admin username>,password=<admin password>,dir_mode=0777,file_mode=0777,sec=ntlmssp   

I see the mounted folder!

Wondering 2 things:

  • What does -o mean? Understood: it just means options.
  • Is there a better way to do this without exposing my admin password? Yes and no, from what I can tell. Yes, you can have the login credentials referred to from a different file, but no in the sense that the different file still stores a plain-text password. Granted, one would need admin access to view and modify that file, so there is a level of risk involved. I decided to create a separate user on my NAS; that user will only have access to the Plex folder.
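For reference, the usual mount.cifs arrangement is a root-readable credentials file referenced from fstab; this is only a sketch (the username is a placeholder, the share and mount point are the ones from the question):

# /root/.smbcredentials   (chmod 600 /root/.smbcredentials)
username=plexuser
password=secret

# /etc/fstab
//192.168.0.101/Plex  /media/Plex  cifs  credentials=/root/.smbcredentials,vers=3.0,dir_mode=0777,file_mode=0777  0  0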

This has all been a huge learning exercise for me and I'm very appreciative of the guidance I've been receiving!

I'm gonna keep this link for reference once I figure out mounting: Debian server, auto-mount Samba share

Why is k3s still seeing swap on Debian Bullseye?

Posted: 24 May 2022 10:23 AM PDT

I've installed k3s on Debian Bullseye (on M1 Pro through qemu/UTM).

k3s recommends disabling swap. After reading the answers to the following questions:

I have:

  • Disabled the systemd swap units: sudo systemctl mask "dev-*.swap"
  • Removed the swap partition from /etc/fstab.
  • Deleted the swap partition and extended the main partition to regain the space.
  • Set the swappiness to 0 in /etc/sysctl.conf.

Now I have:

root@debian:~# systemctl --type swap --all
UNIT LOAD ACTIVE SUB DESCRIPTION
0 loaded units listed.

root@debian:~# sysctl vm.swappiness
vm.swappiness = 0

root@debian:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom
vda    254:0    0   10G  0 disk
├─vda1 254:1    0  512M  0 part /boot/efi
└─vda2 254:2    0  9.5G  0 part /

root@debian:~# free
               total        used        free      shared  buff/cache   available
Mem:         1000692      705588       34164        1704      260940      221484
Swap:              0           0           0

root@debian:~# swapon -s
root@debian:~#

But when I run k3s check-config, I still have:

- swap: should be disabled  

What should I do in order to fully disable the swap in the eyes of k3s?

Can I disable one XHCI device from being able to ACPI wakeup the machine?

Posted: 24 May 2022 10:26 AM PDT

Rather than disabling all XHCI ACPI wakeup calls, is it possible to disable wakeup calls from just one device? Say my integrated Syntek camera, on bus 3, device 3. Can I prevent just that device from waking up my machine?


This is a follow up to "What is XHCI ACPI?"
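A per-device knob that is often used for this lives in sysfs; the 3-3 path below is only a guess at how a device on bus 3 shows up (lsusb -t would confirm the actual devpath):

cat /sys/bus/usb/devices/3-3/power/wakeup                       # current setting for that one device
echo disabled | sudo tee /sys/bus/usb/devices/3-3/power/wakeup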

What is XHCI ACPI?

Posted: 24 May 2022 10:27 AM PDT

I've been having a bear of a time getting this new Lenovo Thinkpad X1 Carbon Gen 9 to suspend. I think these are the lines that indicate the cause of my problem:

systemd-sleep[682835]: System returned from sleep state.
bluetoothd[829]: Controller resume with wake event 0x1
kernel: usb 3-3: new full-speed USB device number 120 using xhci_hcd
kernel: PM: suspend exit

After seeing this I wanted to disable XHCI ACPI, because I've seen this suggested on the forums. I did this:

❯ acpitool -e | grep XHCI
  7. XHCI  S3 *enabled   pci:0000:00:14.0ed  pci:0000:00:14.0

And then I disabled 7 with sudo acpitool -W7. Now it shows *disabled and my laptop suspends. What does XHCI ACPI wake do? Is this needed?
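For context, XHCI here is the ACPI wakeup entry for the USB 3 host controller (pci 0000:00:14.0), i.e. it allows USB activity to wake the machine from S3; disabling it means USB devices can no longer trigger a wakeup. The same toggle can be inspected and flipped through procfs (a sketch; the change is not persistent across reboots):

grep XHCI /proc/acpi/wakeup
echo XHCI | sudo tee /proc/acpi/wakeup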

Max number of devices this xHCI host supports is 32

Posted: 24 May 2022 10:27 AM PDT

I have a laptop (an E5470) and it only has 1 USB controller. My use case is adding USB external drives. As of now, I am able to support more than 32 devices on a single controller.

/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/10p, 5000M
    |__ Port 1: Dev 8, If 0, Class=Hub, Driver=hub/4p, 5000M
        |__ Port 2: Dev 10, If 0, Class=Hub, Driver=hub/4p, 5000M
            |__ Port 4: Dev 26, If 0, Class=Hub, Driver=hub/4p, 5000M
                |__ Port 3: Dev 35, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 4: Dev 39, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
            |__ Port 2: Dev 15, If 0, Class=Hub, Driver=hub/4p, 5000M
                |__ Port 4: Dev 36, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 2: Dev 23, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 3: Dev 29, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
            |__ Port 3: Dev 20, If 0, Class=Hub, Driver=hub/4p, 5000M
                |__ Port 3: Dev 38, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 1: Dev 28, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 4: Dev 41, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 2: Dev 34, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
            |__ Port 1: Dev 12, If 0, Class=Hub, Driver=hub/4p, 5000M
                |__ Port 1: Dev 17, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 4: Dev 33, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 2: Dev 22, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 3: Dev 27, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
        |__ Port 1: Dev 9, If 0, Class=Hub, Driver=hub/4p, 5000M
            |__ Port 1: Dev 11, If 0, Class=Hub, Driver=hub/4p, 5000M
                |__ Port 4: Dev 32, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 2: Dev 19, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 3: Dev 25, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 1: Dev 14, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
            |__ Port 4: Dev 21, If 0, Class=Hub, Driver=hub/4p, 5000M
                |__ Port 4: Dev 42, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 2: Dev 37, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 3: Dev 40, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 1: Dev 30, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
            |__ Port 2: Dev 13, If 0, Class=Hub, Driver=hub/4p, 5000M
                |__ Port 3: Dev 24, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 4: Dev 31, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
                |__ Port 2: Dev 18, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
            |__ Port 3: Dev 16, If 0, Class=Hub, Driver=hub/4p, 5000M
    |__ Port 3: Dev 50, If 0, Class=Hub, Driver=hub/4p, 5000M
        |__ Port 1: Dev 51, If 0, Class=Vendor Specific Class, Driver=r8152, 5000M
        |__ Port 2: Dev 52, If 0, Class=Vendor Specific Class, Driver=r8152, 5000M
    |__ Port 4: Dev 53, If 0, Class=Mass Storage, Driver=uas, 5000M

Now, when I added a new USB controller (by replacing my wifi card with an M.2 to mini-PCIe adapter and then adding my own mini-PCIe card), it was assigned Bus 4. I get the following (I'm able to add some devices):

/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 5000M
    |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/4p, 5000M
        |__ Port 3: Dev 6, If 0, Class=Hub, Driver=hub/4p, 5000M
            |__ Port 4: Dev 19, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 2: Dev 13, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 3: Dev 17, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 1: Dev 10, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
        |__ Port 1: Dev 3, If 0, Class=Hub, Driver=hub/4p, 5000M
            |__ Port 2: Dev 8, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 3: Dev 12, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 1: Dev 5, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 4: Dev 16, If 0, Class=Mass Storage, Driver=uas, 5000M
        |__ Port 4: Dev 9, If 0, Class=Hub, Driver=hub/4p, 5000M
            |__ Port 4: Dev 20, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 3: Dev 18, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 1: Dev 15, If 0, Class=Mass Storage, Driver=uas, 5000M
        |__ Port 2: Dev 4, If 0, Class=Hub, Driver=hub/4p, 5000M
            |__ Port 3: Dev 14, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
            |__ Port 1: Dev 7, If 0, Class=Mass Storage, Driver=uas, 5000M
            |__ Port 2: Dev 11, If 0, Class=Mass Storage, Driver=uas, 5000M
    |__ Port 2: Dev 21, If 0, Class=Hub, Driver=hub/4p, 5000M
        |__ Port 1: Dev 22, If 0, Class=Hub, Driver=hub/4p, 5000M
            |__ Port 4: Dev 24, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
        |__ Port 2: Dev 23, If 0, Class=Hub, Driver=hub/4p, 5000M

Now, when I attempt to add more USB devices to Bus 4, I get the error. I don't understand: my Bus 2 USB controller has more devices than my Bus 4, yet Bus 4 complains that it can't add more devices? How come?

I'm expecting Bus 4 to hold the same number of devices as Bus 2, but it is erroring out:

[  733.095066] xhci_hcd 0000:01:00.0: Error while assigning device slot ID
[  733.095081] xhci_hcd 0000:01:00.0: Max number of devices this xHCI host supports is 32.
[  733.095092] usb 4-2.1-port3: couldn't allocate usb_device

How do I find out why Unix is not allowing me to add more devices to a controller? lsusb doesn't give more information, such as what the limit of a particular controller is.

Fail to install tcpdump package in UBI8 (Red Hat Universal Base Image)

Posted: 24 May 2022 09:51 AM PDT

I am building a Docker image based on UBI8 (Red Hat Universal Base Image). The Dockerfile looks like:

FROM registry.access.redhat.com/ubi8/ubi-minimal

RUN microdnf install sudo zip tar bash procps openssl iptables net-tools tcpdump && microdnf update; microdnf clean all

ENTRYPOINT [ "/usr/sbin/tcpdump" ]

But it fails to install the tcpdump package:

Downloading metadata...
error: No package matches 'tcpdump'

(process:57): librhsm-WARNING **: 22:03:51.398: Found 0 entitlement certificates

(process:57): librhsm-WARNING **: 22:03:51.400: Found 0 entitlement certificates

(process:57): libdnf-WARNING **: 22:03:51.400: Loading "/etc/dnf/dnf.conf": IniParser: Can't open file

How do I fix this in order to install the tcpdump package in UBI? Thanks.

Which users are necessary on Unix/Linux?

Posted: 24 May 2022 10:19 AM PDT

I want to know which users are necessary for a Unix/Linux system. I found a doc which told me that there were three necessary users: root, bin, and daemon.

For the user bin and the user daemon, I still can't understand what they are used for. Here is how the doc described them:

Notes: The bin User ID/Group ID is included for compatibility with legacy applications. New applications should no longer use the bin User ID/Group ID.
The daemon User ID/Group ID was used as an unprivileged User ID/Group ID for daemons to execute under in order to limit their access to the system. Generally daemons should now run under individual User ID/Group IDs in order to further partition daemons from one another.
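As a side note, it is easy to see which low-UID system accounts a given installation actually ships with; a sketch using the common convention that system accounts sit below UID 1000:

awk -F: '$3 < 1000 { printf "%-12s uid=%s\n", $1, $3 }' /etc/passwd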

How to use memmap with U-Boot?

Posted: 24 May 2022 10:07 AM PDT

I would like to reserve the first 2 GB of RAM, because my hardware writes to this region of RAM and I need the kernel not to touch this part of memory.

I read that to use this option I need to pass the memmap parameter from the bootloader, and the bootloader I use is U-Boot, because I'm developing a kernel driver on a Yocto-based OS.

I read this example of how to use memmap:

memmap=nn[KMG]$ss[KMG]
    [KNL,ACPI] Mark specific memory as reserved.
    Region of memory to be reserved is from ss to ss+nn.
    Example: Exclude memory from 0x18690000-0x1869ffff
             memmap=64K$0x18690000
             or
             memmap=0x10000$0x18690000
    Some bootloaders may need an escape character before '$',
    like Grub2, otherwise '$' and the following number
    will be eaten.

And I don't know how to use it in this case. Thank you.

EDIT: New question

I set this option in U-Boot, using memmap=2G$0x00000000 and memmap=7fffffff$0x00000000; no exception is returned, so I guess I wrote it correctly, but in cat /proc/iomem I do not see anything that tells me this memory is reserved.

Would I need to modify the .dtb?
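One quick sanity check (a sketch) is to confirm the parameter survived the bootloader at all, since, as the excerpt above notes, some bootloader shells eat the '$' and the number after it before the kernel ever sees them:

cat /proc/cmdline              # the memmap=... option should appear here exactly as intended
grep -i reserved /proc/iomem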

Linux - Hades Canyon Intel Nuc 8th Generation

Posted: 24 May 2022 11:02 AM PDT

I have recently acquired a Hades Canyon (Intel Nuc8i7hvk) and I'm trying to put Linux on it.

I have tried Ubuntu 18.04 and Debian 9.4. With Ubuntu 18.04 I make it to the GRUB options page, and then the system hangs on a black screen.

With Debian 9.4 I make it to the GRUB options page with the initial screen, and then the system hangs on a black screen once I select any option.

In both cases I have set the nomodeset option, but I haven't been successful.

I have not yet succeeded with the following options:
  Bios 037 - nomodeset (in the GRUB entry options)
  Bios 040 - nomodeset (in the GRUB entry options)

Has anyone been successful in installing Linux on this machine? I was thinking it might be a lack of GPU drivers bundled in the distro. Would it be possible to "attach" the AMDGPU drivers to the distro?

I've read elsewhere that I need at least 4.15 Linux kernel which should be available on the Ubuntu 18.04 release.

Firewalld: How to whitelist just two IP-addresses, not on the same subnet

Posted: 24 May 2022 12:01 PM PDT

I'm running firewalld on a VPS / webserver.

The public zone is active and the default (and I do not want to change that). How do I allow only these two external IP addresses to access the VPS (i.e. all of the services I have defined in the public zone)?

IP1:  11.22.33.44/24
IP2:  55.66.77.88/24

These are fake IP addresses and notice that they are intentionally not on the same subnet.

I think I understand why the following doesn't work (it locks out one or the other IP).

user$ sudo firewall-cmd --zone=public --permanent --add-source=11.22.33.44/24
user$ sudo firewall-cmd --zone=public --permanent --add-source=55.66.77.88/24

user$ sudo firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="11.22.33.44/24" invert="True" drop'
user$ sudo firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="55.66.77.88/24" invert="True" drop'
user$ sudo firewall-cmd --reload

What do I need to modify for this to work (so it doesn't lock out one IP or the other or both)?

Thank you! :)

EDIT: I also tried a /32 bit mask for all four commands above. Sadly it did not help. Still looking for a solution.

I think the logic might sound something like: if IP1 or IP2, allow it and stop processing the chain; else continue processing the chain, where the very next rule would be to DROP. Something like that.
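For reference, the construct that usually expresses that logic in firewalld is a dedicated source-bound zone, since source matches take priority over the interface-bound zone. The sketch below uses the fake addresses from the question and an assumed zone name, and would need to be adapted to the real services:

sudo firewall-cmd --permanent --new-zone=allowlist
sudo firewall-cmd --permanent --zone=allowlist --add-source=11.22.33.44/32
sudo firewall-cmd --permanent --zone=allowlist --add-source=55.66.77.88/32
sudo firewall-cmd --permanent --zone=allowlist --add-service=ssh-vps    # repeat for http, https, the 8080 ports, etc.
sudo firewall-cmd --permanent --zone=public --set-target=DROP           # note: services still listed in public stay reachable by everyone
sudo firewall-cmd --reload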

EDIT2: Posting the output of sudo firewall-cmd --list-all-zones below. Note that I removed all the rules mentioned above since they weren't working. So the below is back to square one.

user$ sudo firewall-cmd --list-all-zones
block
  target: %%REJECT%%
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

dmz
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

drop
  target: DROP
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

external
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

home
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

internal
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: venet0:0 venet0
  sources:
  services: ssh-vps http https
  ports: 8080/tcp 8080/udp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks: echo-reply echo-request timestamp-reply timestamp-request
  rich rules:

trusted
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

work
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Fail2Ban fails to start on CentOS 7

Posted: 24 May 2022 09:01 AM PDT

I'm running CentOS 7, all fully updated, and am trying to get Fail2Ban to work, but I'm running into problems.

Specifically, I'm trying to block brute force SSH attacks. I'm pretty sure I've set up everything right – enabled the sshd jail in jail.local, using firewallcmd-ipset as the ban action, definitely using Firewalld, not using SELinux.

But when I start Fail2Ban, here's what's in /var/log/fail2ban.log:

2017-06-21 06:11:44,186 fail2ban.server         [3357]: INFO    Changed logging target to /var/log/fail2ban.log for Fail2ban v0.9.6
2017-06-21 06:11:44,186 fail2ban.database       [3357]: INFO    Connected to fail2ban persistent database '/var/lib/fail2ban/fail2ban.sqlite3'
2017-06-21 06:11:44,188 fail2ban.jail           [3357]: INFO    Creating new jail 'sshd'
2017-06-21 06:11:44,206 fail2ban.jail           [3357]: INFO    Jail 'sshd' uses systemd {}
2017-06-21 06:11:44,230 fail2ban.jail           [3357]: INFO    Initiated 'systemd' backend
2017-06-21 06:11:44,232 fail2ban.filter         [3357]: INFO    Set maxRetry = 3
2017-06-21 06:11:44,232 fail2ban.filter         [3357]: INFO    Set jail log file encoding to UTF-8
2017-06-21 06:11:44,233 fail2ban.actions        [3357]: INFO    Set banTime = 86400
2017-06-21 06:11:44,233 fail2ban.filter         [3357]: INFO    Set findtime = 3600
2017-06-21 06:11:44,234 fail2ban.filter         [3357]: INFO    Set maxlines = 10
2017-06-21 06:11:44,320 fail2ban.filtersystemd  [3357]: INFO    Added journal match for: '_SYSTEMD_UNIT=sshd.service + _COMM=sshd'
2017-06-21 06:11:44,335 fail2ban.jail           [3357]: INFO    Jail 'sshd' started
2017-06-21 06:11:44,864 fail2ban.action         [3357]: ERROR   ipset create fail2ban-sshd hash:ip timeout 86400
firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p all -m multiport --dports 44 -m set --match-set fail2ban-sshd src -j REJECT --reject-with icmp-port-unreachable -- stdout: ''
2017-06-21 06:11:44,865 fail2ban.action         [3357]: ERROR   ipset create fail2ban-sshd hash:ip timeout 86400
firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p all -m multiport --dports 44 -m set --match-set fail2ban-sshd src -j REJECT --reject-with icmp-port-unreachable -- stderr: '\x1b[91mError: COMMAND_FAILED\x1b[00m\n'
2017-06-21 06:11:44,865 fail2ban.action         [3357]: ERROR   ipset create fail2ban-sshd hash:ip timeout 86400
firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p all -m multiport --dports 44 -m set --match-set fail2ban-sshd src -j REJECT --reject-with icmp-port-unreachable -- returned 13
2017-06-21 06:11:44,865 fail2ban.actions        [3357]: ERROR   Failed to start jail 'sshd' action 'firewallcmd-ipset': Error starting action

As you'll note, everything runs smoothly until firewall-cmd is tried. The commands it's trying to run are:

ipset create fail2ban-sshd hash:ip timeout 86400

followed by

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p all -m multiport --dports 44 -m set --match-set fail2ban-sshd src -j REJECT --reject-with icmp-port-unreachable

If I try to run those myself, the ipset command works fine, but the firewall-cmd one returns with Error: COMMAND_FAILED. So, I'm guessing it's a problem with the command that Fail2Ban is trying to send to firewall-cmd – but I don't know enough about Firewalld to fix it.
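A small diagnostic sketch that may reveal why firewalld rejects the rule is to look at what the daemon itself logged at the moment the command failed:

sudo journalctl -u firewalld --since "10 minutes ago"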

(Oh, SSH is on port 44 because I've found that it massively reduces drive-by attacks, so let's not get into the pros and cons of that!

Also, systemctl status fail2ban shows everything to be running smoothly, no problems reported there. I only noticed this when I logged in and saw that there'd been a bunch of failed login attempts, which is rare what with the port change and all.

Finally, uname -r returns 3.10.0-229.14.1.el7.centos.plus.x86_64 so I'm fairly sure it's not the OpenVZ problem which I've seen as a cause of this elsewhere.)

Which distributions have $HOME/.local/bin in $PATH?

Posted: 24 May 2022 11:53 AM PDT

For example, in Ubuntu, there is always a .local directory in the home directory and .profile includes this line:

PATH="$HOME/bin:$HOME/.local/bin:$PATH"  

$HOME/.local/bin does not exist by default, but if it is created it's already in $PATH and executables within can be found.
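On distributions where this is not wired up out of the box, an equivalent stanza can be added to ~/.profile by hand; this is only a sketch of the usual conditional form:

# add the private ~/.local/bin to PATH if the directory exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi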

This is not exactly mentioned in the XDG directory specification but seems derived from it.

What I wonder is if this is common enough that it could be usually assumed to exist in the most common end user distributions. Is it, for instance, in all of the Debian derivatives, or at least the Ubuntu ones? How about the Red Hat/Fedora/CentOS ecosystem? And so on with Arch, SUSE, and what people are using nowadays.

To be extra clear, this is only for $HOME/.local/bin, not $HOME/bin.

Out of curiosity, feel free to include BSDs, OS/X and others if you have the information. :)

What did the sticky bit originally do when applied to files?

Posted: 24 May 2022 10:32 AM PDT

In various places one can see the "sticky bit" accused of nowadays being a complete misnomer, as its functionality nowadays is to affect the write permissions on directories and act as a restricted deletion flag.

In an AskUbuntu answer the answerer wrote that "a sticky bit usually applies to directories". I observed that indeed modern systems seem in practice to never apply it to files, but that a long time ago the usual case was for it to apply to (executable program image) files rather than to directories. (When it comes to the paucity of modern usage on files, there's a related question at Is the sticky bit not used in current file systems .)

This prompted the question:

What did a sticky bit applied to an executable do? Was it like setuid then?

Note the past tense. This is not How does the sticky bit work? now. It's how it used to work then.

How to change the Xorg gamma/brightness?

Posted: 24 May 2022 09:55 AM PDT

I'm trying to play a game (Deus Ex) for which I have to modify the brightness, since it is very dark in my environment. The game has a "Brightness" setting, but lately it doesn't work. I tried to figure out how to change it and found out that xgamma has a similar effect with xgamma -gamma 5. But whenever I change it, the setting reverts back after almost a second (so yeah, my screen lights up and then shuts down again). How can I either make the xgamma setting permanent (or persistent), or do I have to use another tool?

My system is a desktop.

Seemingly xrandr --output DVI-0 --brightness 2 does the same, but it still reverts back to 0 whenever I apply the settings.

Each time I try to change it, the following output fills the Xorg.0.log file:

[ 14768.313] (II) RADEON(0): EDID vendor "HWP", prod id 9798
[ 14768.313] (II) RADEON(0): Using hsync ranges from config file
[ 14768.313] (II) RADEON(0): Using vrefresh ranges from config file
[ 14768.313] (II) RADEON(0): Printing DDC gathered Modelines:
[ 14768.313] (II) RADEON(0): Modeline "1024x768"x0.0   65.00  1024 1048 1184 1344  768 771 777 806 -hsync -vsync (48.4 kHz eP)
[ 14768.313] (II) RADEON(0): Modeline "800x600"x0.0   40.00  800 840 968 1056  600 601 605 628 +hsync +vsync (37.9 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "640x480"x0.0   31.50  640 656 720 840  480 481 484 500 -hsync -vsync (37.5 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "640x480"x0.0   31.50  640 664 704 832  480 489 492 520 -hsync -vsync (37.9 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "640x480"x0.0   25.18  640 656 752 800  480 490 492 525 -hsync -vsync (31.5 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "720x400"x0.0   28.32  720 738 846 900  400 412 414 449 -hsync +vsync (31.5 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "1024x768"x0.0   78.75  1024 1040 1136 1312  768 769 772 800 +hsync +vsync (60.0 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "1024x768"x0.0   75.00  1024 1048 1184 1328  768 771 777 806 -hsync -vsync (56.5 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "832x624"x0.0   57.28  832 864 928 1152  624 625 628 667 -hsync -vsync (49.7 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "800x600"x0.0   49.50  800 816 896 1056  600 601 604 625 +hsync +vsync (46.9 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "800x600"x0.0   50.00  800 856 976 1040  600 637 643 666 +hsync +vsync (48.1 kHz e)

So, apparently my monitor gets redetected each time.

Where should a local executable be placed?

Posted: 24 May 2022 11:16 AM PDT

I have an executable for the perforce version control client (p4). I can't place it in /opt/local because I don't have root privileges. Is there a standard location where it needs to be placed under $HOME?

Does the File System Hierarchy have a convention that says that local executables/binaries need to be placed in $HOME/bin?

I couldn't find such a convention mentioned on the Wikipedia article for the FHS.

Also, if there indeed is a convention, would I have to explicitly include the path to the $HOME/bin directory or whatever the location of the bin directory is?
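For illustration, a common arrangement (a sketch; both the directory choice and the source path of the binary are assumptions) is to put the executable in a per-user bin directory and make sure the shell startup file puts that directory on PATH:

mkdir -p "$HOME/.local/bin"
cp ~/Downloads/p4 "$HOME/.local/bin/"
chmod +x "$HOME/.local/bin/p4"

# in ~/.profile or ~/.bashrc, if the distribution does not already do this
PATH="$HOME/.local/bin:$PATH"
export PATH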
