Sunday, June 13, 2021

Recent Questions - Unix & Linux Stack Exchange



How is "fifty million man-years" development effort inside <The Art of Unix Programming>

Posted: 13 Jun 2021 04:59 AM PDT

I was reading the book The Art of Unix Programming.

There is one line claiming:

The three and a half decades between 1969 and 2003 is a long time. Going by the historical trend curve in number of Unix sites during that period, probably somewhere upwards of fifty million man-years have been plowed into Unix development worldwide.

That works out to roughly 1.47 million man-years per year over that period, which would mean an average of about 1.47 million developers working on Unix systems each year.

Personally, I find that number a bit hard to believe. Or am I interpreting it the wrong way?
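
For reference, a quick sanity check of the arithmetic behind that reading (just the division, not an endorsement of the figure):

# 50 million man-years spread over 1969-2003 (about 34 years)
echo "50000000 / (2003 - 1969)" | bc    # prints 1470588, i.e. roughly 1.47 million man-years per year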

How to recover data after "mdadm --zero-superblock"

Posted: 13 Jun 2021 05:16 AM PDT

I wanted to switch from CentOS to OpenMediaVault. I had a RAID1 with mdadm in place and wanted to split it back into two separate disks. The guide (similar to the ArchWiki) told me to do the following:

umount -l /mnt/nas
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdd1

Then I installed OMV5, and now I cannot mount the disk because the superblock is gone (which I thought would not be a problem).

I will provide more information if needed. Thanks in advance :)

fdisk:

fdisk -l /dev/sdd
Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FB8D182E-6744-4C9F-8E08-B3038729CA6D

Device     Start        End    Sectors  Size Type
/dev/sdd1   2048 5860524976 5860522929  2.7T Linux RAID

blkid:

/dev/sdd1: LABEL="NAS" UUID="2f0e4622-29fa-41e5-8cd5-bc1d8d5e98e0" TYPE="ext4" PARTLABEL="primary" PARTUUID="01294e69-ee07-49ea-9d04-46b379d9c4c4"  

mount:

mount /dev/sdd1 /test
mount: /test: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error.
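
Not an answer, but a hedged sketch of the kind of read-only diagnostics often tried in this situation, since --zero-superblock wipes only the md metadata and the ext4 superblock may survive or have backups (the backup-superblock offset below is only an example):

dumpe2fs -h /dev/sdd1            # try to read the ext4 superblock header (read-only)
mke2fs -n /dev/sdd1              # -n: create nothing, only report where backup superblocks would live
e2fsck -n -b 32768 /dev/sdd1     # -n: read-only check against a backup superblock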

Disagreement between glib and gcc after gcc downgrade

Posted: 13 Jun 2021 04:34 AM PDT

I updated my distro a week ago: Linux *** 5.10.41-1-MANJARO x86_64 GNU/Linux

But I need to work with gcc 10.2.0 and its matching gcc-libs 10.2.0.

I downgraded gcc and glib according to this guide

There is no problem with the downgrade itself, but other things like Firefox and Chromium broke, giving the following error:

firefox
/usr/lib/firefox/firefox: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib/firefox/firefox)

How can I downgrade gcc and its companion libraries, which I think means glib?

I don't know how to deal with glib:

sudo downgrade glib-
glib-compile-resources  glib-genmarshal         glib-mkenums
glib-compile-schemas    glib-gettextize

Should I downgrade one of these glib components?
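
As a hedged aside for anyone hitting the same error: the library named in the message is libstdc++.so.6, which on Arch/Manjaro is shipped by gcc-libs rather than by glib, and the symbol versions it exports can be inspected directly:

strings /usr/lib/libstdc++.so.6 | grep GLIBCXX    # list the GLIBCXX versions the installed library provides
pacman -Qo /usr/lib/libstdc++.so.6                # show which package owns the file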

Why can qBittorrent and Deluge download this torrent but Transmission cannot?

Posted: 13 Jun 2021 04:21 AM PDT

I found that both qBittorrent and Deluge can download the sample torrent which I have uploaded to Dropbox.

sample torrent to check

It can't be downloaded with Transmission. Why? Please try to download it with Transmission; maybe there is some wrong setting in my Transmission?

different ways of disabling password logins on FreeBSD

Posted: 13 Jun 2021 04:00 AM PDT

What is the difference between:

pw lock <user>

and

pw mod user <user> -w no

They both accomplish the same thing: disabling password-based logins, but why would I pick one way over the other?
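
A hedged way to compare the two, assuming root on FreeBSD and a hypothetical user alice, is to look at the password hash field in /etc/master.passwd before and after each command:

grep '^alice:' /etc/master.passwd      # baseline entry
pw lock alice
grep '^alice:' /etc/master.passwd      # expectation: the existing hash is kept but marked locked
pw unlock alice
pw usermod alice -w no                 # same operation as "pw mod user alice -w no"
grep '^alice:' /etc/master.passwd      # expectation: the password field itself is replaced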

How can I disable (and later re-enable) one of my NVIDIA GPUs?

Posted: 13 Jun 2021 05:19 AM PDT

I'm working on a system with multiple NVIDIA GPUs. I would like to disable (or make disappear) one of my GPUs, but not the others, without rebooting, and in a way that lets me re-enable it later.

Is this possible?

Notes:

  • Assume I have root (though a non-root solution for users which have permissions for the device files is even better).
  • In case it matters, the distribution is either SLES 12 or SLES 15, and - don't ask me why :-(
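
One approach that is sometimes used, shown here only as a hedged sketch (it assumes nothing is holding the device and that its PCI address is known; the address below is hypothetical), is to detach the card from the PCI bus via sysfs and rescan later:

lspci | grep -i nvidia                              # find the GPU's PCI address, e.g. 03:00.0
echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove   # detach that device (root required)
echo 1 > /sys/bus/pci/rescan                        # later: rescan the bus to bring it back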

bash multiline string variable assignment failing

Posted: 13 Jun 2021 03:51 AM PDT

In bash, why does this work fine:

$ cat test1.sh
#!/bin/bash
echo "some text" \
"some more text"
$ ./test1.sh
some text some more text

but this fails

$ cat test2.sh
#!/bin/bash
text="some text" \
"some more text"
echo $text
$ ./test2.sh
./test2.sh: line 3: some more text: command not found

I was expecting both test1.sh and test2.sh to do the same thing.
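
For comparison, a minimal sketch of how a value built from several source lines is usually assigned (concatenation or +=, rather than a line continuation between two separate words):

#!/bin/bash
text="some text"
text+=" some more text"    # += appends to the variable
echo "$text"               # -> some text some more text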

Applying simple string mappings on JSON files

Posted: 13 Jun 2021 02:36 AM PDT

Somehow I think there must be a one-liner to apply a simple mapping on the command line. In this case the keys in the JSON will (as usual) provide context, ensuring that we don't foolishly replace strings that shouldn't be replaced.

Suppose we are given a library catalog in a JSON file using the Dewey Decimal Classification

[
  {
    "Title": "Design Pattern",
    "Call Number": "005.12 DES"
  },
  {
    "Title": "Intro to C++",
    "Call Number": "005.133 C STR"
  }
]

as well as a mapping between Dewey and the Library of Congress call numbers

[
  {
    "Dewey": "005.12 DES",
    "Congress": "QA76.64 .D47 1995X"
  },
  {
    "Dewey": "005.133 C STR",
    "Congress": "QA76.73.C153 S77 2013"
  }
]

and want to produce the output file:

[
  {
    "Title": "Design Pattern",
    "Call Number": "QA76.64 .D47 1995X"
  },
  {
    "Title": "Intro to C++",
    "Call Number": "QA76.73.C153 S77 2013"
  }
]

Does this still fit within the one-line set of transformations that jq will handle?
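
A hedged sketch of one way this could be phrased in jq, assuming the catalog lives in catalog.json and the mapping in mapping.json (both filenames are assumptions); the mapping is turned into a lookup object and then applied to each catalog entry:

jq --slurpfile map mapping.json '
  ($map[0] | map({key: .Dewey, value: .Congress}) | from_entries) as $m
  | map(."Call Number" = ($m[."Call Number"] // ."Call Number"))
' catalog.json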

What is more efficient or recommended for reading output of a command into variables in Bash?

Posted: 13 Jun 2021 03:32 AM PDT

If you want to read the single line output of a system command into Bash shell variables, you have at least two options, as in the examples below:

  1. IFS=: read user x1 uid gid x2 home shell <<<$(grep :root: /etc/passwd | head -n1)

and

  2. IFS=: read user x1 uid gid x2 home shell < <(grep :root: /etc/passwd | head -n1)

Is there any difference between these two? What is more efficient or recommended?

How to assign a remote public ip (via wireguard) to lxc container

Posted: 13 Jun 2021 02:04 AM PDT

What I have:

lxc host

# ifconfig
wg1: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
    inet 192.168.7.2  netmask 255.255.255.0  destination 192.168.7.2
    ...

enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 192.168.1.110  netmask 255.255.255.0  broadcast 192.168.1.255
    ...

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 10.7.56.1  netmask 255.255.255.0  broadcast 0.0.0.0
    ...

/etc/wireguard/wg1.conf

[Interface]
PrivateKey = my_private_key
Address = 192.168.7.2/24

[Peer]
PublicKey = my_public_key
AllowedIPs = 0.0.0.0/0
Endpoint = my_remote_server_ipv4:51194
PersistentKeepalive = 15

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 enp2s0
10.7.56.0       0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp2s0
192.168.1.1     0.0.0.0         255.255.255.255 UH    100    0        0 enp2s0
192.168.7.0     0.0.0.0         255.255.255.0   U     0      0        0 wg1

lxc container

# curl ifconfig.me
my_remote_server_ipv4

Everything is ok, but:

# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 10.7.56.100  netmask 255.255.255.0  broadcast 10.7.56.255
    ...

What I want:

# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet my_remote_server_ipv4  netmask 255.255.255.0  broadcast ...
    ...

I want to do this without changing anything in the container.

Update: I want to assign a public IP (owned by my remote server) to my home LXC container. It looks like that's impossible...

Ensuring EXT4/BTRFS/other journaling filesystems guarantee backups even during power outages

Posted: 13 Jun 2021 02:43 AM PDT

I'm using rsync to make periodic backups of my data onto an external hard disk formatted as EXT4. I'm using the "hard link option", so files that haven't changed since the previous backup are just hard linked instead of fully copied, which reduces disk usage.

To ensure that a power cut, a system hang, or another problem during a backup doesn't cause trouble, I first write the backup into a temporary folder, run sync to flush the disk cache, rename the temporary folder to its final name, and run another sync. This way I can guarantee that all the data is on the disk before committing the backup.
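
For reference, a minimal sketch of that workflow with hypothetical paths, assuming the "hard link option" refers to rsync's --link-dest:

rsync -a --link-dest=/mnt/backup/latest /data/ /mnt/backup/in-progress/
sync                                            # flush caches before committing
mv /mnt/backup/in-progress /mnt/backup/2021-06-13
ln -sfn /mnt/backup/2021-06-13 /mnt/backup/latest
sync                                            # flush the rename itself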

The problem is that now I want to use a NAS, but I can't issue a sync command to flush the remote cache, neither over SFTP nor over NFS, which means that a problem in the middle of a backup can leave it in an intermediate state (such as a file containing only part of its data).

My question is: would setting my NAS to "data=journal" mode guarantee that there is no incorrect data after a power loss, as long as I first write the backup into a temporary folder and only then rename it to its final name? Do other filesystems (like BTRFS or ReiserFS) have an equivalent mode, and is it enabled the same way as on EXT4? And, of course, which NAS would you recommend that allows enabling that mode?

protection for duplicate sed replacements

Posted: 13 Jun 2021 01:58 AM PDT

This is more of an architectural question:

I have a script that does some in-file sed replacements/additions like:

sed -i 's/MY_VAR = 1000/MY_VAR = 1000\nMY_VAR2 = 500/' file.txt  

which works fine, but this is part of a fairly large script, and someone may either ^C the first run and re-run it, or just re-run it N times back to back, which will result in something like

MY_VAR = 1000
MY_VAR2 = 500
MY_VAR2 = 500
MY_VAR2 = 500
MY_VAR2 = 500
...

which might not be expected. So my question is: what's the best way to avoid this? I came up with something like:

if [ ! -f file.txt~copy ]
then
    cp file.txt file.txt~copy
    sed -i 's/MY_VAR = 1000/MY_VAR = 1000\nMY_VAR2 = 500/' file.txt
fi

which should work fine, but I was wondering if there's a better/recommended way to go about it. The above could obviously be problematic if your files are significantly large, in which case you may just want to touch a marker file for protection instead.
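
One hedged alternative is to make the edit itself idempotent by checking for the inserted line before running sed, instead of relying on a marker copy of the file:

if ! grep -q '^MY_VAR2 = 500$' file.txt; then
    sed -i 's/MY_VAR = 1000/MY_VAR = 1000\nMY_VAR2 = 500/' file.txt
fi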

Flattening JSON lines arrays with JQ

Posted: 13 Jun 2021 05:20 AM PDT

I can write

> echo '{"a": "arbiter", "b": "brisk"}{"a": "astound", "b": "bistro"}' | jq '.a, .b'
"arbiter"
"brisk"
"astound"
"bistro"

but if I do

> echo '{"a": "arbiter", "b": "brisk", "c": ["cloak", "conceal"]} {"a": "astound", "b": "bistro", "c": ["confer", "consider"]}' | jq '.a, .b, .c'  

I get

"arbiter"  "brisk"  [      "cloak",      "conceal"  ]  "astound"  "bistro"  [      "confer",      "consider"  ]  

How do I flatten the c arrays to get instead

"arbiter"  "brisk"  "cloak",  "conceal"  "astound"  "bistro"  "confer",  "consider"  

Update

Since null safety is quite fashionable in several modern languages (and justifiably so), it is perhaps fitting to suppose that the question as asked above was incomplete. It's necessary to know how to handle the absence of a value.

If one of the values is null,

> echo '{"b": "brisk"}{"a": "astound", "b": "bistro"}' | jq '.a, .b'  

we get a null in the output

null
"brisk"
"astound"
"bistro"

That may well be what we want. We could add a second step in the pipeline (watching out not to exclude "null"), but it's cleaner if jq itself excludes nulls. Just writing select(.a != null) does the trick, but introduces a {} level. What is the right way to discard nulls from within jq?
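
A hedged sketch of both pieces: iterating the array with .c[] flattens it (though without the trailing commas shown in the desired output above), and a select inside jq drops the missing values:

echo '{"a": "arbiter", "b": "brisk", "c": ["cloak", "conceal"]}' | jq '.a, .b, .c[]'
echo '{"b": "brisk"}{"a": "astound", "b": "bistro"}' | jq '(.a, .b) | select(. != null)'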

Have I put ZFS in a dangerous state by building against the wrong kernel headers?

Posted: 13 Jun 2021 03:53 AM PDT

I reverted to an older kernel after discovering that OpenZFS does not currently have DKMS packages for the 5.12 kernel that the latest Fedora 33 yum update installed.

I used Koji with the search term Kernel to download and install the necessary dependencies to revert to the latest 5.11 kernel, rebooted into it, and rebuilt ZFS without issue.

But while removing packages from the newer kernel, I found that the kernel-headers package for 5.12.9-200 was still installed (kernel-headers-5.12.9-200.fc33.x86_64 specifically).

I then realized that the packages listed by Koji for the packages I'd selected for 5.11 did not include kernel-headers and I had neglected to revert this before rebuilding ZFS.

Everything appears to be running correctly but have I inadvertently put my system or ZFS into a dangerously undefined state as a result?

The packages annobin, boost-devel, perl-ExtUtils-CBuilder, and zfs all depend on kernel headers and I am most worried about zfs.

I plan to install the correct kernel headers and rebuild these packages but wanted to ask here first for advice.

Also, why is kernel-headers not listed on the Koji page for the selected kernel? Because headers do not always track with the actual kernel release, I had to manually locate the last release for 5.11 which was 5.11.20-200. I would have preferred if this information was linked directly by Koji with the other kernel packages as tracking it manually is subject to error.

SSH Tunnel with automatic reconnect and password auth in Docker container

Posted: 13 Jun 2021 03:50 AM PDT

I want to forward a SOCKS5 proxy using SSH with password authentication inside a Docker container. Yes, I know that SSH keys would be better, but since it's not my own server, I'm not able to use keys; they only offer user/password authentication. autossh seems to be the right tool for this job, so I used it with sshpass in my entrypoint shell script:

sshpass -P "assphrase" -p "${PASSWORD}" autossh -M0 -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -oStrictHostKeyChecking=no -oUserKnownHostsFile=custom_known_hosts -4 -N -D *:5080 ${USER}@${HOST}  

Both packages are installed in the Dockerfile

FROM debian:stretch-slim
RUN apt-get update \
  && apt-get upgrade -y
RUN apt-get install -y ssh wget sshpass autossh
RUN wget https://www.provider.com/custom_known_hosts
COPY run.sh .
ENTRYPOINT "./run.sh"

This establishes the SSH tunnel for the SOCKS5 proxy. But after the internet connection is lost, the authentication fails:

socks5_forward_1  | Warning: Permanently added 'server.provider.com' (ECDSA) to the list of known hosts.
socks5_forward_1  | SSHPASS searching for password prompt using match "assword"
socks5_forward_1  | SSHPASS read: myUser@server.provider.com's password:
socks5_forward_1  | SSHPASS detected prompt. Sending password.
socks5_forward_1  | SSHPASS read:
socks5_forward_1  |
socks5_forward_1  | packet_write_wait: Connection to 1.2.3.4 port 22: Broken pipe
socks5_forward_1  | SSHPASS read: myUser@server.provider.com's password:
socks5_forward_1  | SSHPASS detected prompt, again. Wrong password. Terminating.
socks5_forward_1  | Permission denied, please try again.
socks5_forward_1  | Permission denied, please try again.
socks5_forward_1  | Received disconnect from 1.2.3.4 port 22:2: Too many authentication failures
socks5_forward_1  | Authentication failed.
socks5_forward_1  | Permission denied, please try again.
socks5_forward_1  | Permission denied, please try again.
socks5_forward_1  | Received disconnect from 1.2.3.4 port 22:2: Too many authentication failures
socks5_forward_1  | Authentication failed.

I also tried

sshpass -v -p "${PASSWORD}" autossh -M0 -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -oStrictHostKeyChecking=no -oUserKnownHostsFile=custom_known_hosts -4 -N -D *:5080 ${USER}@${HOST}  

and built a loop myself, since I thought that sshpass wouldn't work properly after autossh tries to reconnect:

while true; do command sshpass -P "assphrase" -p "${PASSWORD}" ssh -M0 -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -oStrictHostKeyChecking=no -oUserKnownHostsFile=custom_known_hosts -4 -N -D *:5080 ${USER}@${HOST}; [ $? -eq 0 ] && break || sleep 5; done  

Neither approach works. Since my ISP forces a reconnect every 24 hours, it's annoying to restart the container by hand every day. I haven't figured out yet why the loop in the last approach can't handle the reconnect properly.

unable to access Manjaro (kernel file not found)

Posted: 13 Jun 2021 01:56 AM PDT

I have Prime OS and Windows 10 alongside Manjaro Linux. I was using Prime OS for nearly 2 hours. Then I turned my laptop off for 30-60 minutes. Then I turned it on and used Windows 10 for 1 hour. After working in Windows 10, I wanted to work in Manjaro, so I shut the laptop down. Unfortunately, it was taking too long to shut down, so I turned it off with the power button. Windows 10 then started booting automatically (without showing GRUB), so I turned the laptop off with the power button again. When I turned the laptop on and chose Manjaro, I saw the following screen:

error: file `/boot/vmlinuz-5.10-x86_64' not found.
error: you need to load the kernel first.

Press any key to continue..._

I think I can't access Manjaro's files from Windows or Prime OS (I am not sure). How can I boot Manjaro?

Here's my grub info:

I have found the following solutions:

  1. https://forum.manjaro.org/t/boot-file-not-found/60059
  2. https://forum.manjaro.org/t/file-boot-vmlinuz-5-4-x86-64-not-found/18488
  3. https://forum.manjaro.org/t/manjaro-unable-to-boot-error-file-boot-vmlinuz-5-7-x86-64-not-found/18445/3

Currently, I don't have a bootable USB. So how can I deal with this from GRUB? I can also get to the GRUB command line.

I described above what I did before starting Manjaro. After searching a little, I remembered that I had interrupted a system update. I had run the following command:

sudo pacman -Syyu  

Then I pressed Ctrl+C. Since then, I have been facing this problem.


Currently I am thinking of somehow searching for vmlinuz in the Manjaro partition from the GRUB command line.


I tried search, locate, and find. None of them worked; it turned out the search command is for something else.
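
A hedged sketch of poking around from the GRUB prompt; the partition name below is only a guess, and ls with no arguments shows what actually exists:

grub> ls                        # list detected drives and partitions
grub> ls (hd0,gpt2)/            # inspect one partition's root directory
grub> ls (hd0,gpt2)/boot        # look for vmlinuz-* and initramfs-* images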

USB Created with dd will not boot

Posted: 13 Jun 2021 01:06 AM PDT

I downloaded the newest RHEL 8 ISO (9 GB) and created a bootable USB with a dd command in CentOS 7.9. But it would not boot at system startup. I tried it with 2 different USBs but to no avail.

Secondly, I tried the Rufus software on Windows. It said that there is some kind of "lock" on the ISO image from the creators, so it (Rufus) also used the dd option to create the USB. The result is the same; it will not boot to start the installation process.

The command dd if=rhel-8.4-x86_64-dvd of=/dev/sdd1 was used.
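
As a hedged aside: hybrid ISO images are normally written to the whole device rather than to a partition, so a typical invocation looks more like the sketch below (the image and device names are only examples; double-check the device before running):

dd if=rhel-8.4-x86_64-dvd.iso of=/dev/sdd bs=4M status=progress conv=fsync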

ssh with separate stdin, stdout, stderr AND tty

Posted: 13 Jun 2021 03:50 AM PDT

Problem

Consider a command like this:

<binary_input ssh user@server 'sudo tool' >binary_output 2>error.log  

where tool is arbitrary and ssh is a wrapper or some ssh-like contraption that allows the above to work. With regular ssh it doesn't work.

I used sudo here but it's just an example of a command that requires tty. I'd like a general solution, not specific to sudo.


Research: the cause

With regular ssh it doesn't work because:

  • sudo needs tty to ask for password (or to work at all), so I need ssh -t; actually in this case I need ssh -tt.
  • On the other hand, ssh -tt will make sudo read the password from binary_input. I want to provide the password via my local tty. Even if sudo is configured to work without a password, or if I inject the password into the binary_input, ssh -tt will make sudo and tool read from the remote tty and write output, errors, and prompts to the remote tty. Not only will I be unable to tell the output and the errors/prompts apart locally; all the streams will be processed by the remote tty, and this will mangle the binary data.

Research: comparison to commands that work

  • This local command is the reference point. Let's assume it successfully processes some binary data:

    <binary_input tool >binary_output  
  • If I need to run tool on a server, I can do this. Even if ssh asks for my password, this will work:

    <binary_input ssh user@server tool >binary_output  

    In this case ssh is transparent for binary data.

  • Similarly local sudo can be transparent. The following command won't mangle the data even if sudo asks for my password:

    <binary_input sudo tool >binary_output  
  • But running tool on the server with sudo is troublesome:

    <binary_input ssh user@server 'sudo tool' >binary_output  

    In this configuration ssh and sudo together cannot be transparent in general. Finding a way to make them transparent is the gist of this question.


Research: similar questions

I have found a few similar questions:

  • Use sudo with ssh command and capturing stdout

    This question cares about stdout only. The existing answer (from the author of the question) advises sudo -S which consumes stdin. I don't really want to alter my binary_input. And I would appreciate a solution not specific to sudo.

  • stderr over ssh -t

    This concentrates on passing Ctrl+c and the background is GNU parallel. A workaround that only makes Ctrl+c work without a remote tty is not enough for me.

  • SSH: Provide additional "pipe" fds in addition to stdin, stdout, stderr

    This is a good start (especially this answer, I think). However here I want to emphasize the need for tty. I want a solution that automates things and allows me to use remote sudo (or whatever) as if it was local.


My explicit question

In the following command:

<binary_input ssh user@server 'requires-tty' >binary_output 2>error.log  

requires-tty is a placeholder for code that requires a tty but processes binary data from its stdin to its stdout. It seems I need ssh -tt, otherwise requires-tty will not work; and at the same time I mustn't use ssh -tt, otherwise the binary data will be mangled. How to solve this problem in a convenient way?

requires-tty can be sudo … but I don't want a solution specific to sudo.

I imagine the ideal(?) solution will be a script/tool that replaces ssh in the above invocation and just works. It should(?) connect the remote stdin, stdout and stderr each to its local counterpart, and the remote tty to the local tty.

If it's possible, I prefer a client-side solution that does not require any server-side companion program.

Can I install another Linux distribution to an extra HDD without rebooting?

Posted: 13 Jun 2021 04:38 AM PDT

I have a computer with a Linux distribution installed on partitions in drive /dev/sda. I also have another physical drive, /dev/sdb.

I want to install Linux to the second physical drive - to later run either on the same computer or another one. I know the planned hardware configuration of the target machine, and I have an installer for my new Linux distribution (say on a third drive, /dev/sdc, or in an ISO I can mount etc.)

Can I perform the installation without rebooting? That is, other than in the usual way of booting from an installation medium?

If this question is too general, then - can I do so with Debian Buster/Devuan Beowulf?

Note: You may make any reasonable assumption about the system, but please state it explicitly.
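
On Debian/Devuan, a hedged sketch of one way to install to the second drive from the running system (without booting an installer) is debootstrap; the device name, mount point, and suite below are assumptions:

mkfs.ext4 /dev/sdb1                                           # format the target partition
mount /dev/sdb1 /mnt/target
debootstrap buster /mnt/target http://deb.debian.org/debian
# a kernel and a bootloader still have to be installed from a chroot into /mnt/target afterwards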

Virtualbox shared folders with symlinks

Posted: 13 Jun 2021 01:58 AM PDT

I want to configure 1 shared folder /home/host/shared and put symlinks to other shared items in that shared directory to make them accessible by the guest machine, like /home/host/shared/d1 --> /home/host/a_linked_dir. This is so that I can modify the files & directories as I use the VM without having to change the config in Virtualbox, and so that multiple VMs can easily be configured to use a single shared folder. However, putting a symlink /home/guest/shared/link --> /home/host/shared/f1 doesn't allow the guest machine to access the linked directory and instead just points to a target non-existent on the guest machine.

Is there a way to use symlinks between the host & guest VMs in Virtualbox?

Trouble selecting "Fully Preemptible Kernel (Real-Time)" when configuring/compiling from source

Posted: 13 Jun 2021 02:00 AM PDT

I am trying to compile the 5.4 kernel with the latest stable PREEMPT_RT patch (5.4.28-rt19) but for some reason can't select the Fully Preemptible Kernel (RT) option inside make nconfig/menuconfig.

I've compiled the 4.19 rt patch before, and it was as simple as copying the current config (/boot/config-4.18-xxx) to the new .config, and the option would show. Now I only see:

No Forced Preemption (Server)
Voluntary Kernel Preemption (Desktop)
Preemptible Kernel (Low-Latency Desktop)

And if I press F4 to "ShowAll", I do see the option:

XXX Fully Preemptible Kernel (Real-Time)   

But cannot select it. I've tried manually setting it in .config with various PREEMPT options like:

CONFIG_PREEMPT=y
CONFIG_PREEMPT_RT_BASE=y
CONFIG_PREEMPT_RT_FULL=y

But it never shows. I just went ahead and compiled with CONFIG_PREEMPT_RT_FULL=y (which gets overwritten when saving from make nconfig), but it seems it's still not the fully preemptible kernel that gets installed.

With 4.19, uname -a would show something like:

Linux 4.19.106-rt45 #2 SMP PREEMPT RT <date>

or something like that, but now it will just say:

Linux 5.4.28-rt19 #2 <date>

Anyone know what I'm missing here?

OS: CentOS 8.1.1911

Kernel: 4.18.0-147.8.1 -> 5.4.28-rt19
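
A hedged way to double-check which preemption model the running kernel was actually built with, assuming the build config was installed alongside the kernel:

uname -v                                  # the build string includes PREEMPT / PREEMPT_RT when enabled
grep PREEMPT /boot/config-$(uname -r)     # or zcat /proc/config.gz if that option was built in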

When can a "cd" command fail in a shell script and what can I do to remedy it?

Posted: 13 Jun 2021 05:06 AM PDT

I have a shell script that failed to finish last week; a cd command failed, and the script exits when that happens.

The script is a bash shell script for configuring new Debian installs. Here is the full script: debianConfigAswome.sh. The script is run as root so it has full access to the file-system.

Can you please list all the reasons a script would not be able to successfully execute a cd command and what to do to avoid the error?
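
Not a list of causes, but for reference the usual defensive sketch is to make the failure explicit so the script never keeps running in the wrong directory (the path is hypothetical):

cd /some/dir || { echo "cd /some/dir failed (missing directory, permissions, or a broken mount?)" >&2; exit 1; }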

Can I use Clonezilla to backup and restore a bootable USB volume?

Posted: 13 Jun 2021 05:02 AM PDT


I have this external USB HDD that I've installed Lubuntu 16.04 and Windows 10 on as a dual boot. They are portable installations that I can take around with me and use on a variety of computers and have all my apps and configurations with me. It's a real installation of Lubuntu, not a persistent live USB. Windows 10 was done through WinToUSB.

Can I use Clonezilla to make a backup of this drive? And if I then restore that image to another drive, will that second drive also be bootable in the same way?

What to do if the owner of /usr/bin/* changes to a non-root?

Posted: 13 Jun 2021 04:00 AM PDT

It would be the right thing to say that I messed up!

Accidentally, I changed the owner of all files in /usr/bin to 'dev' from 'root'. Now, sudo does not work! If I use sudo with any command, I get -

sudo: effective uid is not 0, is sudo installed setuid root?

I cannot use the chown command to change the owner back to 'root'. This is a major setback!

Because this is a virtual machine, I cannot access the recovery console. In fact, even the reboot command requires 'root' access.

Experts, please help me in getting control of the OS without having to re-image.

Thanks!

P.S - Possibly a duplicate but reposting as his solution was to start afresh.

More info -

su - root always says incorrect password. Unfortunately, the owner of su is also 'dev'.

I am able to create a new user using the GUI; it accepted the root password. How do I grant the new user root access without using visudo?

Apache - Allow access for folders starting with /

Posted: 13 Jun 2021 02:07 AM PDT

I'm running Apache in a Linux environment. I have to serve files whose directory paths contain "/." in them. By default, Apache won't serve such paths. To remove the constraint, I've included the following entry in the httpd conf:

<DirectoryMatch "^.|/.">
    Order allow,deny
    Allow from all
</DirectoryMatch>

But this opens up all hidden directories under the Apache root. I'm trying to use a directory pattern so that it will only allow files under a specific directory. For example, the directory path always starts with /content/, and the occurrence of /. can be anywhere after that. For example:

/content/url/test/.NET/sample/abc.html
/content/xyz/.BETA/sample/test.html

As you can see, I'm trying to create a pattern where the rule would allow directories starting with "/content" that have "/." anywhere in their path. I'm just wondering if it's possible to have a rule like this.
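
A hedged sketch of what such a narrowed rule might look like, keeping the question's Apache 2.2-style directives and assuming the regex is matched against the full filesystem path (the document root below is hypothetical):

<DirectoryMatch "/var/www/html/content/.*/\.">
    Order allow,deny
    Allow from all
</DirectoryMatch>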

How to stop the find command after first match?

Posted: 13 Jun 2021 01:26 AM PDT

Is there a way to force the find command to stop right after finding the first match?
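
With GNU find, a hedged sketch is to pair the action with -quit, which ends the traversal after the first match has been handled:

find /path -name 'target' -print -quit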
