Sunday, August 15, 2021

Recent Questions - Unix & Linux Stack Exchange

Why does every directory have "." and ".." dots?

Posted: 15 Aug 2021 09:55 AM PDT

A historical question. I tried searching for the answer to this, but no luck.

Every directory contains "." and ".." entries; even the root (/) contains "..". But why? Neither seems necessary from my admittedly limited perspective.

  1. To run a script in the current directory, we use "./script.sh". But if "." didn't exist, then we could just use "bash script.sh".

  2. To change to the parent directory, we use "cd ..". But if ".." didn't exist, then I imagine a command could exist like "cd --parent 1" (go up one parent directory).

So rather than mandatory, the dot and dotdot seem to be more of a shorthand.

As downsides, they prevent the creation of any file named "." or "..". Also, they can make listing/manipulating files that start with a dot more difficult/error prone.

rm -r .* # delete current and parent directory?  

To be clear, I'm not asking for a change. I'm just curious how we ended up here?
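As an aside, you can see that "." and ".." are not shell conveniences but actual entries stored in each directory; a quick sketch (assumes GNU coreutils for stat -c):

```shell
# "." and ".." are real directory entries, not shell magic:
mkdir -p demo/sub
ls -a demo/sub               # shows "." and ".." even in an empty directory
stat -c %i demo              # inode number of demo ...
stat -c %i demo/sub/..       # ... is the same inode, reached via ".."
```

The two stat calls print the same inode number, because ".." in demo/sub is a hard link back to demo itself.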

CPU speeds on linux

Posted: 15 Aug 2021 09:31 AM PDT

I am a new Linux user and have just recently installed Mint.

In my BIOS I have my CPU set to a constant speed of 4.1 GHz. In Windows it shows as 4.1 GHz, but in Linux it shows up as just 3600 MHz (3.6 GHz). I have a 9700K, so basically whenever I boot into Linux my CPU speed is set back to default. Is there any fix to get my CPU to run at 4.1 GHz in Linux? Thanks
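For reference, these are common ways to check what clock and governor the kernel actually sees (a sketch; the cpufreq paths exist on typical Linux systems but values are machine-dependent):

```shell
# Inspect the CPU clock as Linux sees it:
grep "cpu MHz" /proc/cpuinfo || true    # current per-core frequency
lscpu | grep -i "mhz" || true           # current/min/max frequency
# The cpufreq governor often explains "stuck at base clock" behaviour:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null || true
```

A "powersave" or "ondemand" governor will report frequencies well below the configured maximum while the machine is idle.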

Starting ffplay X-window without a window manager

Posted: 15 Aug 2021 09:17 AM PDT

I want what is essentially a kiosk, to display a video stream using ffplay, without a whole Desktop environment.

I started with minimal CentOS7, and installed xterm and X11, per this simple guide: https://linuxconfig.org/how-to-run-x-applications-without-a-desktop-or-a-wm

I've created a .xinitrc file:

#!/bin/bash
exec firefox

and when I execute startx, it opens an X11 window containing Firefox.

But, when I replace the .xinitrc file with this one:

#!/bin/bash
exec ffplay udp://192.168.0.237:5444

I just get a blank screen, in spite of knowing that the ffplay command is right. I can tell from ps -ef | grep ffplay that ffplay is indeed executing; I think it is just sending the output video someplace different from where Firefox did. I don't know how to tell where that is, nor how to force it to go to localhost:0.0.

Note the CentOS7 box is a Hyper-V VM, which I am accessing via a Hyper-V console.

Are Linux commands included with or part of the shell?

Posted: 15 Aug 2021 09:53 AM PDT

I am trying to figure out the different components of Linux and how they work together, and I have a terminology-related question. The terminal runs the shell, which is usually Bash. One can also run Linux commands (e.g. ls, mkdir and cp) in the terminal. But then I learned that not all Linux commands are part of Bash (or the shell). Does that mean that the terminal runs more than just the shell?
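As a sketch of the distinction, the shell's own type builtin reports whether a name is built into the shell or is a separate program on disk:

```shell
# Ask the shell where each command comes from:
type cd         # "cd is a shell builtin" -- part of bash itself
type ls         # points at a file such as /bin/ls -- an external program
command -v ls   # prints the external program's path
```

So the terminal still only runs the shell; the shell in turn launches external programs like ls and cp, which are typically shipped by a separate package (coreutils on most distributions).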

Upgrade RHEL 7.6 to RHEL 8.4 (offline)

Posted: 15 Aug 2021 09:17 AM PDT

We are trying to upgrade our RHEL 7.6 server to RHEL 8.4 with

 leapp preupgrade --no-rhsm --enablerepo BaseOS --enablerepo AppStream  

but in the end we get the following errors:

============================================================
                     UPGRADE INHIBITED
============================================================

Upgrade has been inhibited due to the following problems:

    1. Inhibitor: The installed OS version is not supported for the in-place upgrade to RHEL 8
    2. Inhibitor: Detected loaded kernel drivers which have been removed in RHEL 8. Upgrade cannot proceed.
    3. Inhibitor: Missing required answers in the answer file

Consult the pre-upgrade report for details and possible remediation.

============================================================
                     UPGRADE INHIBITED
============================================================

Any idea how to continue from this stage?

note:

under /etc/leapp/files, we have set up the following files:

ls -ltr /etc/leapp/files
total 3100
-rw-r--r-- 1 root root   47708 Aug 15 12:55 unsupported_pci_ids.json
-rw-r--r-- 1 root root   20711 Aug 15 12:55 unsupported_driver_names.json
-rw-r--r-- 1 root root 3057300 Aug 15 12:55 pes-events.json
-rw-r--r-- 1 root root   39703 Aug 15 12:55 repomap.csv

more /var/log/leapp/leapp-report.txt
Risk Factor: high (inhibitor)
Title: The installed OS version is not supported for the in-place upgrade to RHEL 8
Summary: The supported OS releases for the upgrade process:
 RHEL-ALT 7.6
 RHEL-SAPHANA 7.7
 RHEL 7.9

Problem with running a script as a startup program

Posted: 15 Aug 2021 08:00 AM PDT

I wrote a script and added it to /usr/bin/ with execute permission. It works exactly as expected when I run it in a terminal. The script updates the TeX system regularly, adds a time stamp to this process, and displays the output in a text editor. In the following code, tlmgr is the program for managing/updating/installing TeX-related packages and programs; tlmgr-update is the name of the output file, and gedit is a text editor.

#!/bin/sh
tlmgr update --all > tlmgr-update
date >> tlmgr-update
gedit tlmgr-update

Now this script works absolutely fine when called by its name in the terminal. A sample output would be something like this:

tlmgr: package repository https://mirrors.concertpass.com/tex-archive/systems/texlive/tlnet (verified)
tlmgr: saving backups to /usr/local/texlive/2021/tlpkg/backups
[1/8, ??:??/??:??] update: beebe [863k] (59956 -> 60238) ... done
[2/8, 00:11/02:36] update: easybook [624k] (60221 -> 60243) ... done
[3/8, 00:19/02:36] update: hvlogos [77k] (60126 -> 60236) ... done
[4/8, 00:24/03:08] update: media9 [7224k] (60110 -> 60244) ... done
[5/8, 01:16/01:45] update: tex4ht [2202k] (60231 -> 60245) ... done
[6/8, 01:32/01:42] update: texlive-scripts [496k] (60219 -> 60238) ... done
[7/8, 01:37/01:43] update: tikzbricks [253k] (60211 -> 60234) ... done
[8/8, 01:43/01:47] update: xindex [502k] (59875 -> 60242) ... done
running mktexlsr ...
done running mktexlsr.
running mtxrun --generate ...
done running mtxrun --generate.
running updmap-sys ...
done running updmap-sys.
tlmgr: package log updated: /usr/local/texlive/2021/texmf-var/web2c/tlmgr.log
tlmgr: command log updated: /usr/local/texlive/2021/texmf-var/web2c/tlmgr-commands.log
Sun 15 Aug 2021 08:24:37 PM IST

but when I add it as a startup command on my MX Linux computer, it just adds the timestamp to a file named tlmgr-update and opens it; tlmgr is not run. Is this a bug in tlmgr, or in my distro? My system specifications are as follows.

cat /etc/*-release
NAME="MX"
VERSION="19 (patito feo)"
ID="mx"
VERSION_ID="19"
PRETTY_NAME="MX 19 (patito feo)"
ANSI_COLOR="0;34"
HOME_URL="https://mxlinux.org"
BUG_REPORT_URL="https://mxlinux.org"
PRETTY_NAME="MX 19.4 patito feo"
DISTRIB_ID=MX
DISTRIB_RELEASE=19.4
DISTRIB_CODENAME="patito feo"
DISTRIB_DESCRIPTION="MX 19.4 patito feo"
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Tools for batching N commands over L scripts (for N≫L)?

Posted: 15 Aug 2021 08:53 AM PDT

Let's say that I have access to a high-performance Linux cluster equipped with a scheduler (e.g. LSF, Slurm, etc.) that will allow me to have up to M jobs either running or pending at any one time, of which at most L < M can be running concurrently.

Now, suppose that I want to run N independent commands as quickly as possible.

If N ≤ M, I can just submit each command as a separate job to the scheduler, and be done with it.

But what if N > M? Or N ≫ M even?


The N ≫ M scenario occurs extremely often in my line of work, so often in fact that a hope to find tools to facilitate dealing with it would not be unreasonable1.

One very general and straightforward way to get around the scheduler-imposed limits is to split the N independent commands into L separate one-time "batching" scripts, and submit the latter to the scheduler, as L separate jobs2.

Granted, creating such one-time batching scripts is a dull, somewhat annoying chore, but someone who is handy with their shell, or with a scripting language like Python, Perl, etc., can easily take care of it, and even home-roll their own hacks to automate it.
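For what it's worth, a minimal home-rolled version of that batching step can be sketched with GNU split's round-robin mode (the file names commands.txt and batch_* here are made up for illustration):

```shell
# Round-robin N commands (one per line in commands.txt) into L batch scripts.
L=3
printf 'echo job%d\n' 1 2 3 4 5 6 7 > commands.txt   # stand-in for N commands
split -n r/"$L" commands.txt batch_                  # GNU coreutils only
for f in batch_a?; do
    { echo '#!/bin/sh'; cat "$f"; } > "$f.sh"        # turn each chunk into a script
    rm "$f"
    chmod +x "$f.sh"
done
ls batch_*.sh    # these L scripts are what you would submit as L jobs
```

Round-robin distribution (r/L rather than fixed-size chunks) keeps the scripts close to the same length even when N is not a multiple of L.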

My question is, however, are there publicly (and freely) available tools in the Unix ecosystem that can be used even by those with less programming skill to automate the chore of generating L such batching scripts, given as input a list of N independent commands?


1Actually, the scenario occurs so often that I am surprised that schedulers do not already have built-in support for it. At least the schedulers that I am most familiar with (Slurm and LSF) do not have any such support, as far as I can tell. Please correct me if I missed something.

2 More generally, one could batch the N commands into k batching scripts, as long as k ≤ M, but, in my experience, choosing k = L is the most straightforward way to achieve a maximal, or near-maximal, throughput under these constraints. The reasons for this are not too difficult to see, but an adequate discussion of the matter would require more time than I want to take up here.

Having two separate grub configs

Posted: 15 Aug 2021 07:28 AM PDT

So I have two disks: one SSD, which is my main system, and a backup on a USB stick. I want my stick to be bootable: I've changed the fstab and I installed GRUB. But the GRUB config on the USB always points to the SSD, which prevents it from booting without the SSD. I edited /etc/grub.d/40_custom to point to the Linux image on the USB, and the USB now boots alone (I hope), but I find that workaround ugly. How should I configure GRUB to make both installs independent?

How to insert code before matched multi-line of code with sed?

Posted: 15 Aug 2021 07:13 AM PDT

I want to add this code

$cfg['Servers'][$i]['hide_db'] = '^(mysql|information_schema|performance_schema|phpmyadmin)$';  

into phpMyAdmin's config.inc.php file before the line

/**
 * End of servers configuration

Expect result:

$cfg['Servers'][$i]['hide_db'] = '^(mysql|information_schema|performance_schema|phpmyadmin)$';

/**
 * End of servers configuration
 */

Here is sample of config.inc.php file ( https://github.com/DaoCloud/phpmyadmin/blob/master/src/config.inc.php )

My current sed code is

PHPMATARGETDIR="/var/www/phpmyadmin"

sudo sed -i "s/\(\/\*\*\)/ #my code before;\n\1/" ${PHPMATARGETDIR}/config.inc.php

but it is not working; it just prepends the text to every comment-block opener.

If I use this code, it does not work at all.

sudo sed -i "s/\(\/\*\*\n\s*\* End of servers configuration\)/ #my code before;\n\1/" ${PHPMATARGETDIR}/config.inc.php    
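Since sed handles multi-line context awkwardly, here is a hedged awk sketch of the same idea: hold the previous line in a variable, and emit the new config line just before a "/**" that is immediately followed by the "End of servers configuration" line (a tiny stand-in file is created first, since the real config.inc.php is much longer):

```shell
# Create a tiny stand-in for config.inc.php (illustration only):
cat > config.inc.php <<'EOF'
<?php
$cfg['blowfish_secret'] = '';
/**
 * End of servers configuration
 */
EOF

NEW="\$cfg['Servers'][\$i]['hide_db'] = '^(mysql|information_schema|performance_schema|phpmyadmin)\$';"

# Buffer one line of lookbehind: when the current line is the "End of
# servers configuration" marker and the previous line opened the comment,
# print the new setting before printing that opener.
awk -v ins="$NEW" '
    { if (/\* End of servers configuration/ && prev ~ /^\/\*\*/) print ins
      if (NR > 1) print prev
      prev = $0 }
    END { print prev }
' config.inc.php > config.inc.php.new && mv config.inc.php.new config.inc.php

cat config.inc.php
```

The hide_db line ends up just above the closing comment block, matching the expected result in the question.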

Capture picture after unlocking screen / logging in i3

Posted: 15 Aug 2021 07:05 AM PDT

Running Manjaro with i3wm, and my .i3/config has the following related to locking the screen:

# Lock screen
exec --no-startup-id xss-lock -- ~/.i3/lock.sh
bindsym $mod+Ctrl+l exec --no-startup-id i3exit lock
bindsym $mod+9 exec --no-startup-id blurlock

The script lock.sh is:

#!/bin/sh
set -e
xset s off dpms 0 10 0
i3lock --color=4c7899 --ignore-empty-password --show-failed-attempts --nofork
xset s off -dpms

Similar to this post, I want to have a picture taken via a script every time the screen is unlocked. I've written a script that captures a picture from the local webcam, and it works perfectly fine. How do I alter the above setup to have the script run after unlocking my screen?

I hope this can be done at the level of the .i3/config rather than messing with pam.d files like the answer in the linked post.
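One relevant detail of the setup above: lock.sh runs i3lock with --nofork, which blocks until the screen is unlocked, so any command placed after it in that script runs exactly at unlock time. A sketch of lock.sh along those lines (capture.sh is a hypothetical stand-in for the webcam script):

```shell
#!/bin/sh
# Sketch only: --nofork makes i3lock block until unlock, so the line
# after it runs right after the screen is unlocked.
set -e
xset s off dpms 0 10 0
i3lock --color=4c7899 --ignore-empty-password --show-failed-attempts --nofork
~/.i3/capture.sh   # hypothetical webcam-capture script; runs on unlock
xset s off -dpms
```

Note that with set -e, a failing capture script would skip the final xset line; append || true to the capture call if that matters.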

Bluetooth not working, cannot turn on

Posted: 15 Aug 2021 08:23 AM PDT

I have a Bluetooth adapter that worked fine on Windows 10, but I cannot enable it on freshly installed Pop!_OS. It's just always off.

Model https://ks-is.com/adaptery-i-perehodniki/usb-bluetooth-5-0-adapter-ks-is-ks-457

systemctl status bluetooth shows active status

lsusb results

Bus 003 Device 005: ID 0bda:8771 Realtek Semiconductor Corp. Bluetooth Radio

❯ rfkill
ID TYPE      DEVICE SOFT      HARD
 4 bluetooth hci0   unblocked unblocked

❯ hcitool dev
Devices:

❯ hciconfig -a
hci0:   Type: Primary  Bus: USB
        BD Address: 00:00:00:00:00:00  ACL MTU: 0:0  SCO MTU: 0:0
        DOWN
        RX bytes:21 acl:0 sco:0 events:2 errors:0
        TX bytes:6 acl:0 sco:0 commands:2 errors:0
        Features: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
        Packet type: DM1 DH1 HV1
        Link policy:
        Link mode: SLAVE ACCEPT

What's written in the official docs about Linux:

KS-is KS-457 Bluetooth 5.0 USB Adapter

Requirements to install and use this model under Linux

It is identified by lsusb as 0bda:8771 Realtek Semiconductor Corp.

Linux support

a. The adapter is supported by btrtl (CONFIG_BT_RTL) starting with Linux 5.8. Firmware is required for the driver; it is available in the linux-firmware package starting April 2020.

b. You will need to upgrade your kernel to version 5.8+ if you have an older kernel and want to use this adapter.

c. The recommended version is Linux 5.8.1

AUR package: https://aur.archlinux.org/packages/rtl8761b-fw/

Is there any way to see my REAL DNS server?

Posted: 15 Aug 2021 09:54 AM PDT

Is there any way to see my REAL DNS server?

I'm not referring to 127.0.0.53 or the router (192.168.0.1), but to the real external server, from bash.

I'm talking about the ISP DNS, or VPN DNS servers...
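For reference, 127.0.0.53 is just systemd-resolved's local stub; on systems using systemd-resolved, the upstream servers the stub actually forwards to are visible (a sketch; these paths and commands only exist where systemd-resolved is in use):

```shell
# The upstream resolvers behind the 127.0.0.53 stub:
command -v resolvectl >/dev/null && resolvectl status | grep -i "DNS Server"
# systemd-resolved also writes the real upstream list here:
cat /run/systemd/resolve/resolv.conf 2>/dev/null || true
```

If the machine points at the router instead, the router's own upstream (ISP or VPN) servers are only visible in the router's admin interface, not from the client.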

Port forwarding does not work using different gateway

Posted: 15 Aug 2021 06:24 AM PDT

Let me try to explain my home network setup:

        ┌────────────────────┐
        │      Internet      │
        │ Public IP: 1.2.3.4 │
        └──────────┬─────────┘
                   │
┌──────────────────┴─────────────────┐
│             ISP Modem              │
│ Forwarding everything to AP Router │
│            192.168.1.1             │
└──────────────────┬─────────────────┘
                   │
 ┌─────────────────┴───────────────┐
 │            AP Router            │
 │        DHCP happens here        │
 │ Forward 1122 to 192.168.10.2:22 ├─────────────┐
 │          192.168.10.1           │             │
 └─────────────────┬───────────────┘             │
                   │                             │
                   │                     ┌───────┴───────┐
                   │                     │ PiHole + VPN  │
                   │                     │ 192.168.10.50 │
                   │                     └───────────────┘
                   │                             ▲
                   │                             │
┌──────────────────┴────────────────────┐        │
│                Desktop                │        │ Default routing
│             192.168.10.2              │        │
│    Default gateway: 192.168.10.50     ├────────┘
│          DNS: 192.168.10.50           │
└───────────────────────────────────────┘

If the desktop uses 192.168.10.1 as the default gateway, doing, for example, SSH to 1.2.3.4:1122 works, I can SSH to the desktop. But I want the computer to use 192.168.10.50 as the default gateway. In that case, any port forwarding does not work.

After doing a little bit of research, it seems this can be done with iptables / policy-based routing, but I know nothing about that. What's the simplest way to do it?
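For context, the usual shape of the policy-routing fix is to mark connections that arrived via the router and route their replies back through the router instead of the VPN gateway. A rough, untested sketch with a made-up interface name (eth0); the real rules depend on your setup:

```shell
# Rough sketch only (addresses from the diagram, eth0 is hypothetical):
# 1. Mark new inbound SSH connections arriving on the LAN interface.
sudo iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 22 \
     -m conntrack --ctstate NEW -j CONNMARK --set-mark 1
# 2. Copy the connection mark onto outgoing reply packets.
sudo iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
# 3. Route marked packets via the router instead of the VPN gateway.
sudo ip rule add fwmark 1 table 100
sudo ip route add default via 192.168.10.1 dev eth0 table 100
```

The idea is that only reply traffic for router-forwarded connections takes the alternate table; everything else still defaults to 192.168.10.50.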

Edit: I'm using Pop-OS which is Ubuntu-based.

automatically connect to vpn on system startup using systemd

Posted: 15 Aug 2021 09:35 AM PDT

I want to auto-start the VPN on system startup and reconnect every time the network comes back up (after getting disconnected for any reason).

The command to connect is protonvpn-cli connect -f

The command to disconnect is protonvpn-cli disconnect

The auto_vpn.service file, placed in /etc/systemd/system, looks like:

[Unit]
Description=Connect to Proton-VPN
After=network-online.target
Wants=network-online.target
BindsTo=network.service

[Service]
Type=forking
ExecStart=protonvpn-cli connect -f
ExecStop=protonvpn-cli disconnect
Restart=on-failure
RestartSec=30
StartLimitInterval=350
StartLimitBurst=10

[Install]
WantedBy=multi-user.target

When I run sudo systemctl start auto_vpn.service, it says:

Job for auto_vpn.service failed because the control process exited with error code.
See "systemctl status auto_vpn.service" and "journalctl -xe" for details.

systemctl status auto_vpn.service command output is:

● auto_vpn.service - Connect to Proton-VPN
     Loaded: loaded (/etc/systemd/system/auto_vpn.service; disabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Tue 2021-08-10 16:37:41 +06; 7s ago
    Process: 646859 ExecStart=/usr/bin/protonvpn-cli connect -f (code=exited, status=1/FAILURE)

journalctl -xe command output is:

Aug 10 16:39:43 i5-8600k protonvpn-cli[647440]: protonvpn_nm_lib.exceptions.KeyringError: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
Aug 10 16:39:43 i5-8600k systemd[1]: auto_vpn.service: Control process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit auto_vpn.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Aug 10 16:39:43 i5-8600k systemd[1]: auto_vpn.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit auto_vpn.service has entered the 'failed' state with result 'exit-code'.
Aug 10 16:39:43 i5-8600k systemd[1]: Failed to start Connect to Proton-VPN.
-- Subject: A start job for unit auto_vpn.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit auto_vpn.service has finished with a failure.
--
-- The job identifier is 27363 and the job result is failed.
Aug 10 16:39:47 i5-8600k sudo[647491]:   blueray : TTY=pts/0 ; PWD=/home/blueray ; USER=root ; COMMAND=/usr/bin/journalctl -xe
Aug 10 16:39:47 i5-8600k sudo[647491]: pam_unix(sudo:session): session opened for user root by (uid=0)

I am also not sure whether auto_vpn.service is configured properly to serve my purpose.

How does VA to PA translation in a 4-level page table take just 4 memory accesses?

Posted: 15 Aug 2021 08:34 AM PDT

I am learning page table management, and I learned that VA-to-PA translation takes 4 memory accesses in a 4-level page table (considering a TLB miss and a miss in the page walk cache).

But Linux uses the follow_page function for the PTW, and this function internally calls follow_page_mask, which in turn makes calls to p4d_offset, pud_offset, pgd_offset and so on.

So, my question is: when, for example, pud_offset is called, it returns the virtual address of a PMD directory (I guess), and to get the physical address of the PMD directory there is again a need to perform a PTW.

So how can the number of memory accesses for address translation be just 4? Isn't it more than 4?

How can you determine which EFI System Partition was used to boot a Linux System?

Posted: 15 Aug 2021 07:19 AM PDT

If you have a system with multiple disks and multiple EFI System Partitions how can you determine which one was used to boot the Linux system once the system is booted if they both end up booting the same kernel and root partition?
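Not a full answer, but as a sketch: on an EFI-booted system the firmware records which boot entry it used, and that can be matched against the partitions (these commands need EFI variables to be available, so they are guarded here):

```shell
# BootCurrent names the entry the firmware actually used this boot,
# and the -v output includes the partition GUID of its ESP:
efibootmgr -v 2>/dev/null | grep -A1 BootCurrent || true
# systemd-boot systems also report the ESP in use:
bootctl status 2>/dev/null | grep -i -A2 "boot loader" || true
# Match the partition GUID from the entry against your disks:
lsblk -o NAME,PARTUUID,MOUNTPOINT
```

Comparing the PARTUUID from the BootCurrent entry with the lsblk output identifies which of the two ESPs was used.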

Environment variable expansion inside $(command substitution)

Posted: 15 Aug 2021 09:55 AM PDT

I'm running Bash 5.1.4 on Debian.

I'm writing a post-installation script to copy configuration and other files to locations in my home directory. I add the intended destination to each file at the beginning with a prefix; for example: # DEST: $HOME/.config/mousepad/Thunar (of course, in the script the file name will be substituted by a variable, and the hash symbol by the appropriate comment character; this line appears within the first 10 lines, not necessarily at the first, so I don't mess with shebangs).

To get these locations I'm using this command: head Thunar.acs | egrep "DEST:" | awk '{print $3}', which returns literally $HOME/.config/Thunar; I'd like it to expand $HOME. What I mean is, when I try ls $(head Thunar.acs | egrep "DEST:" | awk '{print $2}') I get the error ls: cannot access '$HOME/.config/Thunar/': No such file or directory. I read this question and tried all of the combinations of double quotes in the selected answer, but I still got the error. How can I solve this?

Enclosing the variable name in braces doesn't work either.

Thanks!
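For what it's worth, the literal $HOME survives because the shell expands variables before a command runs, never on a command's output. One POSIX-safe workaround, using the value from the question, is to substitute the variable's value yourself instead of reaching for eval:

```shell
dest='$HOME/.config/Thunar'    # literal text, as extracted from the file
echo "$dest"                   # prints: $HOME/.config/Thunar
# Strip the literal "$HOME" prefix and prepend the real value:
expanded="$HOME${dest#\$HOME}"
echo "$expanded"               # now an actual path under your home directory
```

This avoids eval, which would be risky if the files being copied are not fully trusted.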

Debian kernel - why do I need the firmware file if the driver is compiled in the kernel?

Posted: 15 Aug 2021 06:21 AM PDT

I am using this usb wifi device on Debian running on my DE10-Nano board.

Looking at the product details, it seems like this uses the RT5370 chipset which is included in the RT2800USB driver. I have enabled this in the kernel as shown in the screenshot below:

[screenshot: kernel configuration showing the RT2800USB driver enabled]

However, the wifi device doesn't work unless I install the firmware also with the following command:

sudo apt install firmware-ralink  

My question is - what does the firmware have to do with the driver? Shouldn't the wifi device already have the necessary firmware? What exactly is going on here?

I'm new to kernel drivers and devices so trying to understand the magic going on here. My understanding is that to use a device, I just need to make sure the relevant driver is either compiled into the kernel or available as a module that you can load in later.

Here is the dmesg output when I run ifup wlan0. The firmware file rt2870.bin is provided by the package firmware-ralink.

[   78.302351] ieee80211 phy0: rt2x00lib_request_firmware: Info - Loading firmware file 'rt2870.bin'
[   78.311413] ieee80211 phy0: rt2x00lib_request_firmware: Info - Firmware detected - version: 0.36
[   80.175252] wlan0: authenticate with 30:23:03:41:73:67
[   80.206023] wlan0: send auth to 30:23:03:41:73:67 (try 1/3)
[   80.220665] wlan0: authenticated
[   80.232966] wlan0: associate with 30:23:03:41:73:67 (try 1/3)
[   80.257518] wlan0: RX AssocResp from 30:23:03:41:73:67 (capab=0x411 status=0 aid=5)
[   80.270065] wlan0: associated
[   80.503705] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready

Edge case - detecting input on STDIN in perl

Posted: 15 Aug 2021 08:18 AM PDT

I don't know quite how to ask this question and I'm not even sure this is the place to ask it. It seems rather complex and I don't have a full understanding of what is going on. Frankly, that's why I'm posting - to get some help wrapping my head around this. My end goal is to learn, not to solve my overall problem. I want to understand when I can expect to encounter the situation I'm about to describe and why it happens.

I have a perl module which I've been developing. One of the things it does is it detects whether there is input on standard in (whether that's via a pipe or via a redirect (i.e. <)).

To catch redirects, I employ a few different checks for various cases. One of them is looking for 0r file descriptors in lsof output. It works fairly well and I use my module in a lot of scripts without issue, but I have one use-case where my script thinks it's getting input on STDIN when it is not - and it has to do with what I'm getting in the lsof output. Here are the conditions I have narrowed the case down to, but these are not all the requirements - I'm missing something. Regardless, these conditions seem to be required; take my intuition with a hefty grain of salt, because I really don't know how to make it happen in a toy example - I have tried - which is why I know I'm missing something:

  1. When I run a perl script from within a perl script via backticks, (the inner script is the one that thinks it has been intentionally fed input on STDIN when it has not - though I should point out that I don't know whether it's the parent or child that actually opened that handle)
  2. An input file is supplied to the inner script call that resides in a subdirectory

The file with the 0r file descriptor that lsof is reporting is:

/Library/Perl/5.18/AppendToPath  

This file does not show up in the lsof output under other conditions. And if I do eof(STDIN) before and after the lsof call, the result is 1 each time. -t STDIN is undefined. fileno(STDIN) is 0.

I read about this file here and if I cat it, it has:

>cat /Library/Perl/5.18/AppendToPath
/System/Library/Perl/Extras/5.18

It appears this is a macOS-perl-specific file meant to append to the @INC perl path, but I don't know if other OS's provide analogous mechanisms.

I'd like to know more about when that file is present/opened and when it's closed. Can I close it? It seems like the file content has already been read in by the interpreter maybe - so why is it hanging around in my script as an open file handle? Why is it on STDIN? What happens in this case when I actually redirect a file in myself? Is the child process somehow inheriting it from the parent under some circumstance I'm unaware of?

UPDATE: I figured out a third (possibly final) requirement needed to make that AppendToPath file handle be open on STDIN during script execution of the child script. It turns out I had a line of code at the top of the parent script (probably added to try and solve a similar problem when I knew even less than I know now about detecting input on STDIN) that was closing STDIN. I commented out that close and everything started working without any need to exclude that weird file (i.e. that file: /Library/Perl/5.18/AppendToPath no longer shows as open on STDIN in lsof). This was the code I commented out:

close(STDIN) if(defined(fileno(STDIN)) && fileno(STDIN) ne '' &&
                fileno(STDIN) > -1);

It had a comment above it that read:

#Prevent the passing of active standard in handles to the calls to the script
#being tested by closing STDIN.

So I was probably learning about standard input detection at the time I wrote that, years ago. My module probably ended up using -t STDIN and -f STDIN, etc., but I'd switched those out to work around a problem like this one using lsof so I could see better what was going on. So the current module (using either lsof, or my new/reverted streamlined version using -t/-f/-p) works just fine (as intended) when I don't close STDIN in the parent.

However, I would still like to understand why that file is on STDIN in a child process when the parent closes STDIN...

If using while read loops for text processing in bash is bad...what should I do, then?

Posted: 15 Aug 2021 09:42 AM PDT

I guess this may be a naive question, but I can't get my head around it, so I felt like asking... I was searching for a solution to a problem when I found this very interesting post about why using [while|for] loops in bash is considered bad practice. There is a very good explanation in the post (see the chosen answer), but I can't find anything that solves the issues that are discussed.

I searched extensively: I googled (or duckduckgo-ed) how to read a file in bash, and all the results I am getting point towards a solution that, according to the above-mentioned post, is absolutely non-bash style and something that should be avoided. In particular, we have this:

while read line; do
  echo $line | cut -c3
done

and this:

for line in `cat file`; do
  foo=`echo $line | awk '{print $2}'`
  echo whatever $foo
done

that are indicated as very bad examples of shell scripting. At this point I am wondering, and this is the actual question: if the posted while loops should be avoided because they are bad practice and whatever...what am I supposed to do, instead?
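For concreteness, the usual advice behind that post is to let the text tool iterate instead of the shell; both loops above collapse into single commands (the sample file here is made up for illustration):

```shell
printf 'aaXbb second\nccYdd fourth\n' > file   # made-up sample input
# Instead of: while read line; do echo $line | cut -c3; done < file
cut -c3 < file                      # prints the 3rd character of each line
# Instead of: for line in `cat file`; do foo=`echo $line | awk ...`; ...; done
awk '{print "whatever", $2}' file   # prints "whatever" plus each 2nd field
```

The tools read the whole file themselves, which is both faster and immune to the word-splitting and globbing pitfalls of the loop versions.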

EDIT: I see that I am already getting comments/questions addressing the exact issue with the while loop, so I would like to widen the question a bit. Basically, what I understand is that I need to dig deeper into bash commands, and that is the real thing I should do. But, when one searches around, it looks like people are, in general, using and teaching bash in an improper way (as per my googling).

Can you make a bash script's option arguments be optional?

Posted: 15 Aug 2021 09:11 AM PDT

I would like either of these inputs to work. That is, the -n option itself is optional – I already know how to do that – but it then may have an optional parameter on top. If no parameter is given, a fallback value will be applied.

command -n 100  command -n  

I can only make the former input type work or the latter, but not both.

HAS_NICE_THINGS=0
NICE_THINGS=50       # default value.

while getopts n: option; do
#while getopts n option; do     # NICE_THINGS would always be that default value.
#while getopts nn: option; do   # same.
    case "${option}" in
    n)
        HAS_NICE_THINGS=1
        if [[ ! -z "${OPTARG}" ]] && (( "${OPTARG}" > 0 )) && (( "${OPTARG}" <= 100 )); then
            NICE_THINGS=${OPTARG}
        fi;;
    esac
done

# error message:
# option requires an argument -- n

I'm not entirely sure yet if I would need a boolean for my script, but so far, just in case, I am logging one (HAS_NICE_THINGS).

The end goal I had in mind was to set the JPG quality when eventually saving an image. Though, I can imagine this construct being useful elsewhere as well.

I'm using Ubuntu 18.04.5 and GNU bash, version 4.4.20(1)-release (x86_64-pc-linux-gnu).
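As a point of reference, getopts itself has no optional-argument syntax, but a common bash workaround is to declare n without a colon and peek at the next word yourself; a sketch using the question's variable names (set -- simulates calling the script as command -n 100):

```shell
set -- -n 100                       # simulate: command -n 100
HAS_NICE_THINGS=0
NICE_THINGS=50                      # fallback value
while getopts ":n" option; do
    case $option in
    n)
        HAS_NICE_THINGS=1
        next=${!OPTIND}             # the word after -n, if any (bash indirection)
        if [[ $next =~ ^[0-9]+$ ]] && (( next > 0 && next <= 100 )); then
            NICE_THINGS=$next
            OPTIND=$((OPTIND + 1))  # consume it so getopts skips it
        fi
        ;;
    esac
done
echo "$HAS_NICE_THINGS $NICE_THINGS"   # prints: 1 100
```

With set -- -n (no value) the same code prints 1 50, i.e. the flag is seen and the fallback applies, which is exactly the "optional option-argument" behaviour asked for.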

key signing: can't see new signatures

Posted: 15 Aug 2021 08:22 AM PDT

I'm getting a few friends to sign my key. Each time they've signed my key, if they send the signed key to a key server, when I try to get the signatures with gpg --refresh-keys --keyserver some.keyserver, my key is unchanged; I don't see their signatures. The same thing happens if I use gpg --recv-keys. They've tried three different servers. However, if they email me my key, or I look up my key on the keyserver's web interface and copy the text, then when I import it I see their signatures on my key. Does anyone have an idea as to why this might be happening or what I'm doing wrong?

How to start a systemd service based on ExecStartPre execution result

Posted: 15 Aug 2021 07:02 AM PDT

I have a daemon which is started using a systemd service file during the boot-up flow. I want to start the daemon based on the execution result of a script. The script is included in the service file under the ExecStartPre option.

Based on the execution result of the script, I have to handle the service as mentioned below

  1. If the script returns 0, start the service and proceed with bootup
  2. If 1 is returned, stop the service, don't proceed with bootup
  3. If 2 is returned, don't start the service but proceed with bootup

I would like to know whether my scenario is valid. If yes, how do I achieve this?

Thanks in Advance.
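For context, systemd treats any nonzero ExecStartPre= exit as "abort this unit's start", and a failed unit by itself does not halt boot; cleanly skipping a unit is normally expressed with a Condition*= directive in the unit file rather than an exit code. A hypothetical wrapper function that makes the mapping of the three codes explicit:

```shell
# Hypothetical sketch: translate the check script's three exit codes into
# the two outcomes ExecStartPre= can express (0 = go on, nonzero = abort).
translate_precheck() {
    case $1 in
        0) return 0 ;;  # start the daemon; boot proceeds
        1) return 1 ;;  # abort the start; halting boot additionally requires
                        # the unit to be a hard dependency of the boot target
        2) return 1 ;;  # abort the start too; boot proceeds regardless, since
                        # one failed unit does not stop the boot on its own
        *) return 1 ;;
    esac
}
translate_precheck 2; echo "exit mapped to $?"   # prints: exit mapped to 1
```

So cases 1 and 3 are achievable, but "stop the whole bootup" needs dependency wiring (e.g. being required by a target) on top of the exit code.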

(Buildroot) "silentoldconfig" error on compile

Posted: 15 Aug 2021 09:07 AM PDT

I just moved an old buildroot folder from an old VM to a newer one to consolidate. I thought that simply moving the folder, along with any dependent folders, and making the appropriate path/name changes would be all that is required to get it up and running in the new VM.

Unfortunately, this appears to not be the case as I am greeted with the following error upon attempting to build in this new VM:

#
# configuration written to /home/mirion/mirion/buildroot-2013.05/.config
#
/usr/bin/make -j5  HOSTCC="/usr/bin/gcc" HOSTCXX="/usr/bin/g++" silentoldconfig
make[1]: Entering directory '/home/mirion/mirion/buildroot-2013.05'
BR2_DEFCONFIG='' KCONFIG_AUTOCONFIG=/home/mirion/mirion/buildroot-2013.05/output/build/buildroot-config/auto.conf KCONFIG_AUTOHEADER=/home/mirion/mirion/buildroot-2013.05/output/build/buildroot-config/autoconf.h KCONFIG_TRISTATE=/home/mirion/mirion/buildroot-2013.05/output/build/buildroot-config/tristate.config BUILDROOT_CONFIG=/home/mirion/mirion/buildroot-2013.05/.config /home/mirion/mirion/buildroot-2013.05/output/build/buildroot-config/conf --silentoldconfig Config.in

*** Error during update of the configuration.

Makefile:692: recipe for target 'silentoldconfig' failed
make[1]: *** [silentoldconfig] Error 1
make[1]: Leaving directory '/home/mirion/mirion/buildroot-2013.05'
Makefile:396: recipe for target '/home/mirion/mirion/buildroot-2013.05/output/build/buildroot-config/auto.conf' failed
make: *** [/home/mirion/mirion/buildroot-2013.05/output/build/buildroot-config/auto.conf] Error 2
mv: cannot stat 'output/images/rootfs.ubi': No such file or directory

Does anyone have ideas as to what I can do to resolve this?

I did some poking around on Google but could not find anything conclusive.

EDIT: The original VM was running Lubuntu 12.04; the new VM is running Ubuntu 17.10.

Thanks.

Manual Duplex printing

Posted: 15 Aug 2021 07:53 AM PDT

I have a printer (Samsung M2022W) which doesn't support duplex printing.

However, I would like to print on both sides manually (that is, print the even pages, reinsert those pages into the printer, and then print the odd pages). The problem is that I don't have a "manual duplex" option on my Debian system, and there isn't even an "odd/even pages only" option.

How can I simply print on both sides manually on *nix?
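One possible recipe uses CUPS' standard page-set and outputorder job options with lp. The printer queue name and the file are placeholders, and the echo makes this a dry run that only prints the commands; drop the echo to actually submit the jobs.

```shell
# Sketch of manual duplex via CUPS job options (hypothetical queue/file names).
PRINTER=Samsung_M2022W
FILE=document.pdf

# Print the even pages first (dry run: echo shows the command):
echo lp -d "$PRINTER" -o page-set=even "$FILE"
# ...flip and reinsert the printed stack, then print the odd pages:
echo lp -d "$PRINTER" -o page-set=odd "$FILE"
```

Whether one of the two passes also needs -o outputorder=reverse depends on how your printer stacks its output, so it is worth experimenting with a two-sheet test document first.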

Missing separate debuginfos

Posted: 15 Aug 2021 07:26 AM PDT

I'm trying to debug a program with GDB on a Fedora machine. It produces this message each time I run it:

Missing separate debuginfos, use: debuginfo-install glibc-2.18-12.fc20.x86_64 libgcc-4.8.3-1.fc20.x86_64 libstdc++-4.8.3-1.fc20.x86_64  

My questions:

  1. Should these packages be included with GDB by default?
  2. What is the function of each of these packages?
  3. Should these packages be installed for GDB in real production environments?
  4. Is it OK if I don't install them? What would the effect be?

Can't seem to connect to my Debian MySQL server?

Posted: 15 Aug 2021 08:02 AM PDT

So I have a simple PHP script in which I attempted something along these lines:

$db = mysqli_connect("localhost", "root", "PASSWORD HERE", "database name");
mysqli_query($db, "SELECT STATEMENT HERE") or die(mysqli_error($db));

to try something out, but it won't make a connection to the database. The script itself works fine (I tried it on a home server), but it won't work on the VPS, so I suspect the MySQL server setup is the problem.

I'm using Debian 7. I used

apt-get install apache2
apt-get install mysql-server mysql-client
apt-get install php5
apt-get install phpmyadmin

and a number of php-* packages (various modules).

I tried:

mysql -u root -p  

and then entered the password, and it worked. So what could be wrong?

Wipe last 1MB of a Hard drive

Posted: 15 Aug 2021 09:26 AM PDT

Is there an easy command that I can use to zero out the last 1MB of a hard drive?

For the start of the drive I would use dd if=/dev/zero of=/dev/sdx bs=1M count=1. The seek option for dd looks promising, but does anyone have an easy way to determine exactly how far I should seek?

I have a hardware RAID appliance that stores some of its RAID configuration at the end of each drive. I need the appliance to see the drives as unconfigured, so I want to remove the RAID configuration without spending the time on a full wipe. I have a dozen 2 TB drives, and a full erase of all of them would take a long time.
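One way to compute the seek value is from the device size in bytes. The sketch below uses a scratch file as a stand-in for the disk so it can be run safely; on the real device you would set the target to /dev/sdX and get the size with blockdev --getsize64 instead of stat. Everything here is an illustration, not the poster's actual setup.

```shell
# Create a fake 4 MiB "drive" to practice on (assumes GNU coreutils).
IMG=disk.img
dd if=/dev/urandom of="$IMG" bs=1M count=4 status=none

SIZE=$(stat -c %s "$IMG")        # size in bytes; for a block device: blockdev --getsize64 /dev/sdX
SEEK=$(( SIZE / 1048576 - 1 ))   # number of 1 MiB blocks to skip, i.e. all but the last
dd if=/dev/zero of="$IMG" bs=1M seek="$SEEK" count=1 conv=notrunc status=none

# Verify: count the non-zero bytes in the last 1 MiB (should be 0)
tail -c 1048576 "$IMG" | tr -d '\0' | wc -c
```

conv=notrunc matters for the scratch file (so dd doesn't truncate it); on a real block device it is unnecessary but harmless. If the device size is not an exact multiple of 1 MiB, using bs=4096 (or bs=512) with a correspondingly larger seek avoids leaving a partial tail untouched.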

How big is the pipe buffer?

Posted: 15 Aug 2021 07:25 AM PDT

In a comment on the question I'm confused as to why "| true" in a makefile has the same effect as "|| true", user cjm wrote:

Another reason to avoid | true is that if the command produced enough output to fill up the pipe buffer, it would block waiting for true to read it.

Do we have some way of finding out what the size of the pipe buffer is?
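One empirical way to check is to fill a pipe with non-blocking writes and see how much fits before the kernel refuses more. This sketch is Linux-specific and assumes GNU dd (for oflag=nonblock); opening the FIFO read/write means no separate reader process is needed.

```shell
# Measure pipe capacity: write non-blocking until the buffer is full;
# dd's summary line then reports how many bytes fit.
mkfifo pipetest
exec 3<>pipetest                 # open both ends so neither open() nor write() blocks on a reader
dd if=/dev/zero of=/dev/fd/3 bs=1k oflag=nonblock 2>&1 | grep -o '[0-9]* bytes'
exec 3>&-
rm pipetest
```

On a default Linux kernel this typically reports 65536 bytes (64 KiB), the pipe capacity since 2.6.11; it can be queried or changed per pipe with fcntl F_GETPIPE_SZ/F_SETPIPE_SZ. Note that PIPE_BUF (the atomic-write limit, what ulimit -p reports in 512-byte units) is a different, smaller number: 4 KiB on Linux.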
