Thursday, July 29, 2021

Recent Questions - Unix & Linux Stack Exchange



systemd NOT directing standard output to a file

Posted: 29 Jul 2021 10:23 AM PDT

Our service-definition, running under systemd --user, begins with:

    [Service]
    StandardOutput=append:logs/foo.log
    StandardError=fd:stdout

The service starts without obvious errors, and the actual program runs OK too. However, nothing appears in the specified log file, and when I list the files opened by the process (lsof -p PID), I see that file descriptors 1 and 2 are both sockets instead of referring to the file.

I tried specifying an absolute path to the log file too; it made no difference.

We're on RHEL 7, where systemd is version 219-78. Is this a known problem, perhaps? Is there a solution or workaround?
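For reference, `StandardOutput=append:` was only added in systemd v240, and the named `fd:` form is likewise newer than 219, so a v219 systemd falls back to its default socket-backed output, which would explain the sockets on file descriptors 1 and 2. A hedged sketch of a workaround for older systemd, doing the redirection inside the command itself (the program path and log path below are placeholders, not from the question):

```ini
# Sketch for systemd 219, where append:/fd: syntax is unavailable:
# let a shell perform the redirection instead.
[Service]
ExecStart=
ExecStart=/bin/sh -c 'exec /path/to/program >>/path/to/logs/foo.log 2>&1'
```

The `exec` keeps the shell from lingering as an extra parent process, so systemd still tracks the real program as the main PID.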

how to perform a silent install of bandwidthD in ubuntu 20.04

Posted: 29 Jul 2021 09:50 AM PDT

How do I perform a silent install of bandwidthd, avoiding the interactive configuration windows, and set the IP and interfaces to monitor from the command line (Ubuntu 20.04)?

sudo apt-get install bandwidthd # with what parameters  

thanks
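The usual Debian/Ubuntu mechanism for this is debconf preseeding, sketched below. The template names and values here are assumptions; verify the real ones with `debconf-show bandwidthd` on a machine where the package was installed interactively.

```shell
# Preseed bandwidthd's debconf answers (names/values are assumptions),
# then install without the interactive frontend.
echo 'bandwidthd bandwidthd/interface select eth0'           | sudo debconf-set-selections
echo 'bandwidthd bandwidthd/subnets string 192.168.0.0/24'   | sudo debconf-set-selections
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y bandwidthd
```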

Stuck in windows troubleshoot screen after installing Zorin OS 16

Posted: 29 Jul 2021 09:24 AM PDT

I recently installed Zorin OS by downloading the ISO and using Rufus to put it on a flash drive. I then tried out the OS from the flash drive without actually installing it on my laptop.

The OS worked well, so I decided to install it on my laptop. It said that I had to disable RST, so I changed the SATA mode to AHCI. I also chose the option to remove my previous OS (Windows 10) and replace it with Zorin.

After the installation, I had to restart my computer and remove the flash drive.

Instead of starting Zorin, it kept opening the Windows troubleshoot screen. In fact, it opens the Windows troubleshoot screen so fast that I doubt it even attempted to start Zorin.

I tried repeatedly pressing F12 during the startup screen and chose to boot using the "ubuntu" option. It still led me to the Windows troubleshoot screen.

Does this mean that I still have Windows 10 on my computer? And how do I make my laptop start Zorin instead? Sorry, I'm still new to Linux, so I don't know what I don't know.

Oh, and I should also mention that I didn't encounter any Blue Screen of Death; it just opens the Windows troubleshoot screen right after the startup screen with the manufacturer's icon.

Windows troubleshoot screen

Custom run level

Posted: 29 Jul 2021 09:10 AM PDT

I recall creating a custom run (init) level in the past. Has anyone else come across that? I want a run level that is invoked under specific conditions. I've searched through the current documentation, but I can't remember for the life of me how we did it previously.
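On systemd-based distributions, the modern equivalent of a custom runlevel is a custom target. A minimal sketch (the unit name and dependencies are examples, not taken from the question):

```ini
# /etc/systemd/system/maintenance.target  (example name)
[Unit]
Description=Custom maintenance run level
Requires=multi-user.target
After=multi-user.target
AllowIsolate=yes
```

It can then be entered at runtime with `systemctl isolate maintenance.target`, or selected at boot by appending `systemd.unit=maintenance.target` to the kernel command line; services are attached to it with `WantedBy=maintenance.target` in their `[Install]` sections.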

Wifi stopped working on debian 10 buster

Posted: 29 Jul 2021 10:17 AM PDT

I was using an Atheros AR9271 USB dongle and it worked fine for years, but since a couple of days ago I can't connect to my network anymore. To be precise, I can select my network, but it fails to connect. The same problem of the association process taking too long also happens with a different (Realtek) dongle.

Output of NetworkManager (with the 2nd dongle):

    Jul 29 17:51:37 debian NetworkManager[8639]: <debug> [1627573897.3458] sup-iface[0x55c1415a48d0,wlan0]: assoc[0x55c14160a0c0]: starting association...
    Jul 29 17:51:37 debian NetworkManager[8639]: <debug> [1627573897.3459] device[0x55c141623250] (wlan0): activation-stage: complete activate_stage2_device_config,v4 (id 1547)
    Jul 29 17:51:37 debian NetworkManager[8639]: <debug> [1627573897.3517] sup-iface[0x55c1415a48d0,wlan0]: assoc[0x55c14160a0c0]: association request successful
    Jul 29 17:51:43 debian NetworkManager[8639]: <debug> [1627573903.8827] device[0x55c141623250] (wlan0): wifi-scan: scan-done callback: successful
    Jul 29 17:51:43 debian NetworkManager[8639]: <debug> [1627573903.8828] device[0x55c141623250] (wlan0): remove_pending_action (1): 'wifi-scan'
    Jul 29 17:51:43 debian NetworkManager[8639]: <debug> [1627573903.8829] device[0x55c141623250] (wlan0): wifi-scan: scanning-state: idle
    Jul 29 17:51:44 debian NetworkManager[8639]: <info>  [1627573904.3632] device (wlan0): supplicant interface state: disconnected -> scanning
    Jul 29 17:51:44 debian NetworkManager[8639]: <debug> [1627573904.3632] device[0x55c141623250] (wlan0): wifi-scan: scanning-state: scanning
    Jul 29 17:51:49 debian NetworkManager[8639]: <debug> [1627573909.0312] device[0x55c141623250] (wlan0): wifi-scan: scan-done callback: successful
    Jul 29 17:51:49 debian NetworkManager[8639]: <info>  [1627573909.0314] device (wlan0): supplicant interface state: scanning -> associating
    Jul 29 17:51:49 debian NetworkManager[8639]: <debug> [1627573909.0314] device[0x55c141623250] (wlan0): wifi-scan: scanning-state: idle
    Jul 29 17:51:55 debian NetworkManager[8639]: <info>  [1627573915.1421] device (wlan0): supplicant interface state: associating -> disconnected
    Jul 29 17:52:02 debian NetworkManager[8639]: <warn>  [1627573922.5147] device (wlan0): Activation: (wifi) association took too long
    Jul 29 17:52:02 debian NetworkManager[8639]: <info>  [1627573922.5147] device (wlan0): state change: config -> need-auth (reason 'none', sys-iface-state: 'managed')

It's hard to say whether an upgrade affected NetworkManager, but this is the complete list of packages upgraded in the period when things broke:

    2021-07-23 18:47:17 upgrade libudev1:i386 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:17 upgrade libudev1:amd64 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:18 upgrade udev:amd64 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:18 upgrade libnss-myhostname:amd64 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:18 upgrade libpam-systemd:amd64 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:18 upgrade libnss-systemd:amd64 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:18 upgrade systemd:amd64 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:19 upgrade libsystemd0:amd64 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:19 upgrade libsystemd0:i386 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:21 upgrade systemd-sysv:amd64 241-7~deb10u7 241-7~deb10u8
    2021-07-23 18:47:21 upgrade virtualbox-6.1:amd64 6.1.22-144080~Debian~buster 6.1.24-145767~Debian~buster
    2021-07-23 18:48:10 upgrade containerd.io:amd64 1.4.6-1 1.4.8-1
    2021-07-23 18:48:13 upgrade linux-compiler-gcc-8-x86:amd64 4.19.181-1 4.19.194-3
    2021-07-23 18:48:13 upgrade linux-kbuild-4.19:amd64 4.19.181-1 4.19.194-3
    2021-07-23 18:48:13 upgrade linux-libc-dev:amd64 4.19.181-1 4.19.194-3
    2021-07-23 18:54:13 upgrade kmod:amd64 26-1 26-1
    2021-07-23 19:07:28 upgrade firmware-amd-graphics:all 20190114-2 20210315-2~bpo10+1
    2021-07-23 19:07:29 upgrade firmware-linux:all 20190114-2 20210315-2~bpo10+1
    2021-07-23 19:07:29 upgrade firmware-linux-nonfree:all 20190114-2 20210315-2~bpo10+1
    2021-07-23 19:07:29 upgrade firmware-misc-nonfree:all 20190114-2 20210315-2~bpo10+1
    2021-07-25 22:34:44 upgrade krb5-locales:all 1.17-3+deb10u1 1.17-3+deb10u2
    2021-07-25 22:34:44 upgrade libgssapi-krb5-2:amd64 1.17-3+deb10u1 1.17-3+deb10u2
    2021-07-25 22:34:45 upgrade libgssapi-krb5-2:i386 1.17-3+deb10u1 1.17-3+deb10u2
    2021-07-25 22:34:45 upgrade libkrb5-3:i386 1.17-3+deb10u1 1.17-3+deb10u2
    2021-07-25 22:34:45 upgrade libkrb5-3:amd64 1.17-3+deb10u1 1.17-3+deb10u2
    2021-07-25 22:34:45 upgrade libkrb5support0:amd64 1.17-3+deb10u1 1.17-3+deb10u2
    2021-07-25 22:34:45 upgrade libkrb5support0:i386 1.17-3+deb10u1 1.17-3+deb10u2
    2021-07-25 22:34:45 upgrade libk5crypto3:i386 1.17-3+deb10u1 1.17-3+deb10u2
    2021-07-25 22:34:45 upgrade libk5crypto3:amd64 1.17-3+deb10u1 1.17-3+deb10u2

Regarding the WiFi router: it is one supplied by my ISP, but it is a temporary solution working over 4G (Huawei; it looks similar to a phone's WiFi hotspot). I don't think they update these remotely.

Linux: How do I get the device name for the 3rd partition of a given block device

Posted: 29 Jul 2021 09:17 AM PDT

How do I (reliably) get a partition device name, knowing the block device and the partition number?

For example:

    _get_part_dev_from_disk_dev /dev/ccis0 3 => /dev/ccis0p3  # notice the p

    _get_part_dev_from_disk_dev /dev/sde 2 => /dev/sde2

My current method is the following, but it is buggy: it assumes the minor number is the partition number, which is completely wrong for everything except the 1st disk. The minor number can be computed, but that has many limitations; one is that beyond a maximum, the kernel allocates device numbers dynamically.

    _get_part_dev_from_disk_dev() {
        if test -b "$1"
        then
            DEV_MAJOR=$(printf "%d" "0x$(stat -c '%t' $1)")
        else
            shellout "[$1] is no a block device"
        fi

        test -n "${2//[0-9]/}" && shellout "[$2] is not a partition number"

        if test ! -r /sys/dev/block/$DEV_MAJOR:$2/uevent
        then
            logerror "Can't read /sys/dev/block/$DEV_MAJOR:$2/uevent"
            shellout "Can't gather $1 partition $2 informations"
        fi

        . /sys/dev/block/$DEV_MAJOR:$2/uevent
        test "$DEVTYPE" != "partition" && shellout "/sys/dev/block/$DEV_MAJOR:$2 TYPE=$DEVTYPE is not a partition."

        # echo $(udevadm info --query=name --path=/sys/dev/block/$DEV_MAJOR:$2)
        echo "/dev/$DEVNAME"
    }

This is part of a script that creates partitions knowing the block device name and the partition number. Afterwards I also need to create a filesystem, but I can't assume that the partition device name is the block device name followed by the partition number; sometimes there is a letter p in between (driver dependent).

Maybe udev knows this?
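For what it's worth, the kernel's own naming convention inserts the `p` exactly when the disk name ends in a digit (nvme0n1p3, mmcblk0p1, and so on). A sketch of a helper relying on that convention, using the examples from the question:

```shell
# Derive a partition device name from a disk device name using the kernel
# convention: a "p" separator is inserted when the disk name ends in a digit.
part_dev_from_disk() {
    local disk=$1 part=$2
    case ${disk##*/} in
        *[0-9]) printf '%sp%s\n' "$disk" "$part" ;;  # e.g. /dev/ccis0 3 -> /dev/ccis0p3
        *)      printf '%s%s\n' "$disk" "$part" ;;   # e.g. /dev/sde 2 -> /dev/sde2
    esac
}

part_dev_from_disk /dev/ccis0 3   # -> /dev/ccis0p3
part_dev_from_disk /dev/sde 2     # -> /dev/sde2
```

On a live system, a more authoritative check is to list the actual child devices from sysfs, e.g. `lsblk -nro NAME /dev/sde`, rather than compute the name.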

RHEL 7 doing init 5 isolate graphical.target no graphical login screen, and nvidia

Posted: 29 Jul 2021 08:41 AM PDT

We're using RHEL 7.9 x86-64 on an HP server that has a small NVIDIA GPGPU, a Tesla T4 if I remember correctly. I do know this GPGPU is not a graphics card; it has no monitor outputs.

We currently have systemctl set-default multi-user.target so the system boots to run level 3 with a text console.

We installed NVIDIA-Linux-x86_64-470.57.02.run from runlevel 3; this seemed to be successful, as we can run nvidia-smi and it shows normal output.

However, when doing init 5 or systemctl isolate graphical.target, we remain in a single text window at the console, and running runlevel reports N 3 5.

My question is: since RHEL 7 no longer uses /etc/X11/xorg.conf and that file is no longer there, how do console graphics and the like work? Which files live where, and how does all this graphics configuration happen now?

I don't believe this console graphics problem is entirely related to NVIDIA, as I have somehow created the same problem on other servers that have no NVIDIA card or driver installed. And we have everything working fine on numerous other servers with NVIDIA GPGPUs and/or NVIDIA graphics cards.

What does one do when doing systemctl isolate graphical.target does not result in a graphical console?

This is on a 24" 1920x1200 monitor connected to the blue VGA port on the server motherboard. When it works, I can log in graphically and have terminal windows as well as mouse and copy/paste capability; having no windows and no mouse capability at the console is a show stopper, and I am hoping to find a way to fix this without having to reinstall RHEL 7 from DVD.

After installing RHEL 7.9 from DVD everything is initially fine; then we somehow travel down some road we are not aware of and nuke the graphical console.

Get all css style, link, js and script from html file using ShellScript

Posted: 29 Jul 2021 08:53 AM PDT

I want to get all JS <script> tags, both inline ones with data inside the tag and external ones such as <script src="path/to/js"></script> or <script src="http(s)://example.com/to.js"></script>. The same goes for <style> and <link href="path/to/css"> or <link href="http(s)://example.com/to.css">.

I can run different commands to get the script blocks and the script links, and likewise for style and link.

I have tried the following, which gets the script tag details as well as the script link:

sed -n 's/.*\(<script>.*<\/script>\).*/\1/p' path/to/file.html  

But this will return everything from the start of the <script> tag to the end of the </script>; if any other content is present in between, as in

    <script>
    var a = "hello";
    </script>

    Hello I'm here

    <script src="https://example.com/assets/some.js"></script>
    <script src="path/to/1/js"></script>
    <script src="path/to/2/js"></script>

my command will return all of it when the HTML file is minified (in a single line), whereas I need the outcome to be:

    <script>
    var a = "hello";
    </script>
    <script src="https://example.com/assets/some.js"></script>
    <script src="path/to/1/js"></script>
    <script src="path/to/2/js"></script>

Similarly for <style></style> and <link> (stylesheet).

For that case, I found that the command below returns only the CSS link from the link tag:

sed -n 's/.*href="\([^"]*\).*/\1/p' path/to/file.html  

it returns

path/to/file/1.css  


Any other solution would be highly appreciated (e.g. awk, Python, or anything else).

Adding a column with random values to end of CSV

Posted: 29 Jul 2021 08:00 AM PDT

I have a CSV with a list of users, and would like to add a column with a one-time-use randomly generated password, unique to each user.

My script works... but then it just keeps adding rows indefinitely. If I move the code that sets the variable out of the loop, the script terminates just fine, but then every user gets the same password.

How do I get this to terminate on the last row?

    #!/bin/bash
    # add column to csv
    ORIG_FILE="new-users2.csv"
    NEW_FILE="Output.csv"
    { echo `head -1 $ORIG_FILE`",One Time Password" ; tail -n +2 $ORIG_FILE | \
      while read x ; OneTimePass=$(openssl rand -base64 14 | head -c 6) ; do echo "$x,$OneTimePass" ; done ; } > $NEW_FILE
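The loop never terminates because the `while` condition list ends with the `OneTimePass=` assignment, whose status is that of the command substitution, which always succeeds; only `read` should decide the loop. A sketch of the corrected structure, wrapped in a function for illustration (the file names follow the question):

```shell
# Append a random one-time-password column to a CSV.
# The loop condition is `read` alone; the password is generated per row
# inside the body, so each user gets a distinct value.
add_otp_column() {  # usage: add_otp_column input.csv output.csv
    local in=$1 out=$2
    {
        echo "$(head -1 "$in"),One Time Password"
        tail -n +2 "$in" | while IFS= read -r x; do
            OneTimePass=$(openssl rand -base64 14 | head -c 6)
            echo "$x,$OneTimePass"
        done
    } > "$out"
}

add_otp_column new-users2.csv Output.csv
```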

Instructions for combining some partitions using gparted, and some other questions

Posted: 29 Jul 2021 09:58 AM PDT

(First-time Stack Exchange Linux user here.)

I have been using Linux Mint for quite a while now, and originally partitioned only a little space for it (I originally intended just to play around with it). Because of this, I have now used up all my storage.

I have another 80 or so GB to add to it using GParted, but I am inexperienced with the program and don't want to break anything. My disk allocation looks like this:

partition img

As can be seen, Linux Mint is on /dev/nvme0n1p5, with 70 GB unallocated to its left. I would like to combine these.

In addition to this, Windows is on /dev/nvme0n1p3, which I would like to remove. Is it safe to just right-click it in GParted and delete it? (Everything is already backed up on a flash drive.)

Would the combination process for the old Windows partition (if I delete it) be the same as for the unallocated 74.81 GB?

Update: I have taken the initiative and uninstalled Windows completely. The partition table in GParted now looks like this:

New Partition Table

Matching a valid version number inside case statement

Posted: 29 Jul 2021 07:58 AM PDT

I want to match a version number inside a case statement. The version number could look like this

    1.12.0.32
    1.12.0.32.1
    2.10.0.30.1.2

and is stored in a shell variable: version.

From what I found searching the internet, it is a little tricky to match a regex inside a case statement. Does anyone have an idea of how it can be done inside the case statement, or could help with it?

I tried the following, but it didn't work:

    case "$version" in
        "([0-9]+\.*)+")
            echo "Correct"
            ;;
        *)
            echo "Not Correct"
            ;;
    esac
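The underlying issue is that case patterns are shell globs, not regular expressions, so a quoted regex never matches. Two bash-specific sketches: a real regex match with `[[ =~ ]]`, or extglob patterns that keep the case structure (note `shopt -s extglob` must run before the pattern is parsed):

```shell
# Variant 1: regex test (bash)
is_version() {
    [[ $1 =~ ^[0-9]+(\.[0-9]+)+$ ]]
}

# Variant 2: extglob pattern inside case (bash);
# the option must be enabled before the function is parsed.
shopt -s extglob

check_version() {
    case $1 in
        +([0-9])+(.+([0-9]))) echo "Correct" ;;
        *)                    echo "Not Correct" ;;
    esac
}
```

Both accept dotted numeric strings such as 1.12.0.32 or 2.10.0.30.1.2 and reject anything containing non-digits or malformed dots.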

CentOS: resolve package conflicts

Posted: 29 Jul 2021 08:52 AM PDT

I am trying to install Kubernetes and Docker on CentOS 8, but I have package conflicts. How do I fix them?

    [root@master-node ~]# yum install kubeadm docker -y
    Last metadata expiration check: 3:22:18 ago on Tue 27 Jul 2021 18:20:38 EDT.
    Package kubeadm-1.21.3-0.x86_64 is already installed.
    Error:
     Problem: problem with installed package docker-ce-cli-1:20.10.7-3.el8.x86_64
      - package docker-ce-cli-1:20.10.7-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:19.03.13-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:19.03.14-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:19.03.15-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:20.10.0-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:20.10.1-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:20.10.2-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:20.10.3-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:20.10.4-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:20.10.5-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - package docker-ce-cli-1:20.10.6-3.el8.x86_64 conflicts with docker provided by podman-docker-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.noarch
      - conflicting requests
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Extract text starting at specific category header to next category header from a text file

Posted: 29 Jul 2021 07:07 AM PDT

I have a TOML file in the following format (categories may have any name, the sequential numbering is just an example and not guaranteed):

    [CATEGORY_1]
    A=1
    B=2

    [CATEGORY_2]
    C=3
    D=4

    E=5

    ...

    [CATEGORY_N]
    Z=26

What I want to achieve is to retrieve the text inside a given category.

So, if I specify, let's say, [CATEGORY_1] I want it to give me the output:

    A=1
    B=2

I tried to achieve this with grep, using the -z flag so that the input is treated as NUL-separated (letting the pattern match across newlines), together with this regular expression:

    (^\[.*])           # Match the category
    ((.*\n*)+?         # Match the category content in a non-greedy way
     (?=\[|$))         # Lookahead to the start of another category or end of line

It didn't work unless I removed the ^ at the beginning of the expression. However, if I do that, it misinterprets loose pairs of brackets as a category.

Is there a way to do this correctly? If not with grep, then with another tool such as sed or awk.
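A sketch with awk (section header and file name passed as arguments): comparing the header against the whole line avoids the loose-brackets problem that broke the grep attempt, and any later line starting with `[` ends the section.

```shell
extract_section() {  # usage: extract_section '[CATEGORY_1]' file.toml
    awk -v sec="$1" '
        $0 == sec   { found = 1; next }  # exact match on the header line
        /^\[/       { found = 0 }        # any subsequent header ends the section
        found && NF { print }            # print non-empty body lines
    ' "$2"
}
```

For the sample file above, `extract_section '[CATEGORY_1]' file.toml` prints the two `A=1` / `B=2` lines and nothing else.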

Remove window from Alt-Tab-Menu

Posted: 29 Jul 2021 07:51 AM PDT

I'm using pop_os with Wayland and I was wondering whether there was a way to remove certain processes from the Alt-Tab menu.

In my case, a certain gjs process named @!0,28;BDH appears in my Alt-Tab menu, and I would like to get rid of it.

scp / ssh: deleting a big file (1 TB+) WHILE it is being transferred

Posted: 29 Jul 2021 10:47 AM PDT

EDIT — To clarify/summarize, the scenario is the following:

- Context: a large file (1 TB+) sits on server A; there is virtually no disk space left on A; disk utilization on A keeps growing rapidly, that growth cannot be stopped, and there is no practical way to add more storage without interrupting production processes.

- Goal: move the "huge file" from A to another machine B, deleting already-transferred parts of the file from A's disks while the file is being transferred (the transfer could take a while given the file size, but disk utilization keeps growing ruthlessly, so we can't just wait for the transfer to finish).

Original request:

Is there a standard solution for deleting big files (think 1 TB+) as they are being transferred via rsync/scp?

The solutions that I've found require extra disk space to first split the file into pieces. However, what if there is virtually no disk space left for these operations?

In the scp/rsync man pages, I only found switches that delete files after they've been fully transferred.

PS: Please note that I'm primarily looking for a mature standard solution, not a bash script / hack. I think it shouldn't be very difficult to come up with something using tools like truncate. However, if there's no standard solution and someone has an elegant bash script (or similar), I'd still be curious to see it.
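Since the question says a script would still be of interest, here is a sketch of the truncate-based approach alluded to above: send the file in chunks starting from the end, truncating the source after each chunk so its space is freed as the transfer proceeds. Chunks are written to a directory here for illustration; in practice each `tail` would be piped to something like `ssh B 'cat > part_N'`, and the parts must be reassembled in reverse order on B. This destroys the source as it goes, so a failed transfer loses data; it is emphatically a hack, not the mature solution asked about. GNU coreutils (`stat -c`, `truncate`) are assumed.

```shell
split_from_end() {  # usage: split_from_end FILE CHUNK_BYTES OUTDIR
    local src=$1 chunk=$2 out=$3 i=0 size start
    size=$(stat -c %s "$src")
    while [ "$size" -gt 0 ]; do
        start=$(( size > chunk ? size - chunk : 0 ))
        tail -c +"$(( start + 1 ))" "$src" > "$out/part_$i"  # last remaining chunk
        truncate -s "$start" "$src"                          # free the space just sent
        size=$start
        i=$(( i + 1 ))
    done
}
```

Concatenating part_N .. part_0 (highest index first) reproduces the original byte stream.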

null characters at the beginning of an ASCII log file

Posted: 29 Jul 2021 08:34 AM PDT

We have a Java application that uses log4j2 to generate a log file, a script to stop the process, and another script to restart it; there is a 5-minute pause between the stop and the restart at midnight daily. In the startup script, we use mv to rename the log file, with a timestamp as the extension. The issue is that one of the log files contains null characters (a few MB) at its beginning, and the log file becomes binary. A few observations to provide more context:

  1. The same startup script is used on other hosts with an identical version of the Java application; they don't have this issue at all.
  2. It occurs occasionally, i.e. one week all 5 log files get corrupted, and another week the log files are fine.
  3. It cannot be reproduced on a similar developer Linux host; it happens only on the production Linux host.
  4. The log file is typically about 4-6 GB per day.
  5. The application is stopped, paused for 5 minutes, and started by the scripts at midnight daily.
  6. I used hexdump to peek into the content of the binary log file: it has a few MB of null characters at the beginning, followed by the normal, typical ASCII content.

Any suggestions would be appreciated. Thanks!
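For context, one mechanism that produces exactly this pattern (offered as an assumption about the cause, not a diagnosis): a writer whose descriptor was opened without O_APPEND keeps its own file offset, so if the file is truncated or replaced underneath it, the next write lands at the old offset and the kernel backfills the gap with NUL bytes. A small demonstration:

```shell
exec 3> demo.log                                   # plain open; fd 3 tracks its own offset
printf 'earlier log output before rotation\n' >&3  # advances the offset
: > demo.log                                       # truncate behind the writer's back
printf 'next line\n' >&3                           # written at the old offset -> NUL hole
od -c demo.log | head -n 2                         # the leading bytes are \0
exec 3>&-
```

If the application reopened its log with O_APPEND (or were restarted cleanly before the rename), no hole could form, which may be a useful angle to investigate.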

SFTP - check and copy recent files to local then archive within sftp

Posted: 29 Jul 2021 08:29 AM PDT

We have an application integration layer over SFTP, where file feeds are placed on the SFTP server at random intervals (roughly every 10 or 30 minutes). We need to copy the files from the SFTP path to a local directory and, once the copy completes, move the copied files to an Archive directory on the SFTP server.

We currently use an expect script run from a cron job to copy the files, but the script below is not sufficient for the actual need described above. Please help me enhance this script or suggest other options; I'm struggling with minimal scripting knowledge.

    #!/usr/bin/expect
    spawn sftp test_user@sftpserver.com
    expect "password:"
    send "12345\n"
    expect "sftp>"
    send "cd /incoming\n"
    expect "sftp>"
    send "mget -a *.xml /path_to_local_server\n"
    expect "sftp>"
    send "exit\n"
    interact

Also, will the expect-based SFTP connection time out?
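A sketch of a non-interactive alternative to expect: key-based authentication plus an sftp batch file. The host and paths are the ones from the question; the identity file is an assumption. With `-b`, sftp aborts on the first failing command and exits non-zero, which is easier to script around than expect. Note that sftp's `rename` takes no wildcards, so the archive step needs per-file rename commands, generated e.g. from a prior listing of /incoming.

```shell
cat > /tmp/sftp_batch <<'EOF'
cd /incoming
get *.xml /path_to_local_server
EOF
sftp -b /tmp/sftp_batch -i ~/.ssh/id_rsa test_user@sftpserver.com
```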

Can named pipes/FIFOs be used in a "cyclic" manner together with `tee`?

Posted: 29 Jul 2021 08:01 AM PDT

Why does the last line in this script get stuck?

    #!/usr/bin/env bash

    trap 'rm -f numbers' EXIT

    mkfifo numbers

    decrement() {
        while read -r number; do
            echo "debug: $number" >&2

            if (( number )); then
                echo $(( --number ))
            else
                break
            fi
        done
    }

    echo 10 > numbers &

    # Works: prints the debug line
    decrement < numbers >> numbers

    # Works: prints an infinite stream of 10's
    cat numbers | tee numbers

    # Fails: prints "debug: 10" and then gets stuck
    cat numbers | decrement | tee numbers

Below is the question as I originally wrote it, but it contains a lot of unnecessary detail. I'm keeping it just in case someone is curious to know how I bumped into this. Here goes:


Is it possible to use named pipes/fifos in a cyclic way? Something like this:

line → fifo ←───────┐           │          │           ↓          ↑           │          │         curl ─────→ tee → stdout  

Here's the problem I had to solve. I wanted to write a Bash utility to fetch all the tags of a Docker image using the Docker Hub API. The basic request is this:

    declare -r repo=library%2Fubuntu # %2F is a URL-encoded forward slash
    curl "https://hub.docker.com/v2/repositories/$repo/tags/?page=1&page_size=100"

You'll notice that the response includes a link to the next page in case the total count of image tags is greater than the number of items requested per page (which has an upper limit of 100). Additionally, the next field is set to null when on the last page.

    {
      "count": 447,
      "next": "https://hub.docker.com/v2/repositories/library%2Fubuntu/tags/?page=2&page_size=1",
      "previous": null,
      "results": []
    }

The problem looked recursive to me, which is how I attempted to solve it, and I managed in the end by piping into a recursive call:

    url-encode() {
        # A lazy trick to URL-encode strings using `jq`.
        printf '"%s"' "$1" | jq --raw-output '@uri'
    }

    fetch() {
        # The first line fed in to `fetch` is the URL we have to fetch
        read -r next_url

        # The rest of the stdin are the tag names we need to send to stdout
        cat

        # BASE CASE
        #
        # A `null` next link means we've just seen the last page, so we can return.
        #
        if [[ "$next_url" == "null" ]]; then return; fi

        # RECURSIVE CASE
        #
        #   1. Fetch the URL
        #   2. Extract the next link and the image tags using `jq`
        #   3. Pipe the result into a recursive call
        #
        echo "Fetching URL: $next_url" >&2
        curl --location --silent --show-error --fail "$next_url" \
            | jq --raw-output '.next, .results[].name' \
            | fetch
    }

    # We need a way to start off the recursive chain, which we do by sending
    # a single line to `fetch` containing the URL of the first page we want
    # to fetch.
    first() {
        local -r repo=$(url-encode "$1")
        echo "https://hub.docker.com/v2/repositories/$repo/tags/?page=1&page_size=100"
    }

    declare -r repo=$1

    first "$repo" | fetch

Maybe this isn't ideal, and I'm happy to receive suggestions for improving it, but for the purposes of this question I'm interested in whether the problem can be solved using FIFOs. FIFOs are probably not the best tool for the job, but I've only recently found out about them, so my mind tries to apply them even when they might not be ideal. In any case, here's what I tried, but failed, to do when approaching the problem from a FIFO perspective.

In short, I've tried to reproduce the diagram posted at the beginning of the question:

first URL → fifo ←───────┐                │          │                ↓          ↑                │          │              curl ─────→ tee → stdout  
    mkfifo urls

    # Remove FIFO on script exit.
    trap 'rm -f urls' EXIT

    fetch() {
        local url=$1

        # For each line we read from the FIFO, parse it as JSON and extract the
        # `next` field. If it's not null, we pass it to `curl` via `xargs`.
        #
        # The response is both sent to the `urls` FIFO and piped to another `jq`
        # call where we keep just what we're interested in — the tag names.
        #
        cat urls \
            | jq --raw-output '.next | select(. != null)' \
            | xargs curl --silent \
            | tee urls \
            | jq --raw-output '.results[].name' &

        # The pipeline above is successful in reading the first URL if we take
        # out the `tee urls` component of the pipeline. However, the pipeline
        # gets stuck if the `tee` component is present.

        # Start off the process of fetching by pushing a first URL to the FIFO.
        cat <<JSON > urls &
    {"next": "$url"}
    JSON

        # Both previous commands were started off asynchronously (hoping that
        # this will achieve the necessary concurrency on the `urls` FIFO), so
        # we need to wait on both of them to finish before returning.
        wait
    }

    fetch 'https://hub.docker.com/v2/repositories/library%2Fubuntu/tags/?page=1&page_size=1'

Finally, here are my questions (and thank you for reading up until this point):

  1. Why doesn't the above work?
  2. How can the script be changed so that it works?

Thanks! And let me know if I should provide further details.

Hostapd active, but no WIFI signal with ath9k

Posted: 29 Jul 2021 07:13 AM PDT

I'm trying to set up a WiFi access point with hostapd, which seems to be running fine, but I can't see the hotspot on another device. The WiFi card is a DNXA-116 with an AR9382 chipset. When running the same hostapd configuration with an Intel AC 3160 chip, everything works as it should; the difference is the driver. I'm running Linux kernel 5.0.7 on a qmx6 board. With the Intel chip, after installing the driver, I basically followed this guide and it worked:

https://developer.toradex.com/knowledge-base/wi-fi-access-point-mode

With the Atheros chip, I've tried many more configurations in /etc/hostapd.conf with no luck yet.

Below is the relevant terminal output.

    $ lspci -v
    03:00.0 Network controller: Qualcomm Atheros AR93xx Wireless Network Adapter (rev 01)
            Subsystem: Qualcomm Atheros AR93xx Wireless Network Adapter
            Flags: bus master, fast devsel, latency 0, IRQ 318
            Memory at 01100000 (64-bit, non-prefetchable) [size=128K]
            [virtual] Expansion ROM at 01120000 [disabled] [size=64K]
            Capabilities: <access denied>
            Kernel driver in use: ath9k
            Kernel modules: ath9k

    $ dmesg | grep ath9k
    ath9k 0000:03:00.0: enabling device (0140 -> 0142)
    ath9k 0000:03:00.0 wlp3s0: renamed from wlan0

    $ systemctl status hostapd -l
    ● hostapd.service - Hostapd IEEE 802.11 AP, IEEE 802.1X/WPA/WPA2/EAP/RADIUS Authenticator
       Loaded: loaded (/lib/systemd/system/hostapd.service; enabled; vendor preset: enabled)
       Active: active (running) since Tue 2021-07-20 13:48:02 CEST; 15s ago
      Process: 1024 ExecStart=/usr/sbin/hostapd /etc/hostapd.conf -P /run/hostapd.pid -B (code=exited, status=0/SUCCESS)
     Main PID: 1025 (hostapd)
       CGroup: /system.slice/hostapd.service
               └─1025 /usr/sbin/hostapd /etc/hostapd.conf -P /run/hostapd.pid -B

    Jul 20 13:48:02 asdf systemd[1]: Starting Hostapd IEEE 802.11 AP, IEEE 802.1X/WPA/WPA2/EAP/RADIUS Authenticator...
    Jul 20 13:48:02 asdf hostapd[1024]: Configuration file: /etc/hostapd.conf
    Jul 20 13:48:02 asdf hostapd[1024]: Using interface wlp3s0 with hwaddr x:x:x:x:x:x and ssid "TEST_WIFI"
    Jul 20 13:48:02 asdf hostapd[1024]: wlp3s0: interface state UNINITIALIZED->ENABLED
    Jul 20 13:48:02 asdf hostapd[1024]: wlp3s0: AP-ENABLED
    Jul 20 13:48:02 asdf systemd[1]: Started Hostapd IEEE 802.11 AP, IEEE 802.1X/WPA/WPA2/EAP/RADIUS Authenticator.

    $ iw dev
    phy#0
            Interface wlp3s0
                    ifindex 5
                    wdev 0x1
                    addr x:x:x:x:x:x
                    ssid TEST_WIFI
                    type AP
                    channel 1 (2412 MHz), width: 20 MHz (no HT), center1: 2412 MHz
                    txpower 15.00 dBm

    $ lspci -nn | grep -i network
    03:00.0 Network controller [0280]: Qualcomm Atheros AR93xx Wireless Network Adapter [168c:0030] (rev 01)

    $ ifconfig
    wlp3s0    Link encap:Ethernet  HWaddr x:x:x:x:x:x
              inet addr:192.168.8.1  Bcast:192.168.8.255  Mask:255.255.255.0
              inet6 addr: x::x:x:x:x/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:222 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:42188 (41.1 KiB)

The contents of the configuration files are basic, as in the guide linked above. If someone has insight to share on what else to look for, that would be awesome.

UPDATE

The AP problem was solved by using legacy PCIe interrupts instead of MSI: https://community.toradex.com/t/pcie2usb-card-on-apalis-imx6/5357#answer-6285, but other problems arise.

Passing pci=nomsi on the kernel command line enables legacy interrupt mode globally. This makes the AR9382 chip work for setting up an AP, but breaks other parts of the system. The question now is: how can I set only device 03:00.0 to legacy interrupt mode?

echo 0 > /sys/bus/pci/devices/$bridge/msi_bus, from section 4.5.2 of https://www.kernel.org/doc/html/v5.7/PCI/msi-howto.html, doesn't work.

Tmux sessions get killed on ssh logout

Posted: 29 Jul 2021 09:28 AM PDT

I am using tmux on a remote machine that I access over ssh. For some reason, the tmux sessions do not persist between consecutive ssh login sessions. I do not have this issue while logging into this other remote machine that I have access to. This is essentially the same issue as described in this question.

However, the machine that I use already uses ssh.service rather than ssh.socket, so the accepted answer does not work for me.

Exact steps taken:

  1. Login to the machine via ssh
  2. Start a tmux session, do some work.
  3. Detach session, and possibly attach again (this works).
  4. Log out with Ctrl+D.
  5. Login again and try tmux a or tmux ls (doesn't work).
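A common cause of this on systemd machines is logind killing the session's processes on logout. Whether that applies here is an assumption, but the two usual knobs are worth checking; a config sketch:

```shell
# /etc/systemd/logind.conf -- keep user processes (tmux included) alive
# after the last session closes:
#   KillUserProcesses=no
# then: systemctl restart systemd-logind

# Or, per user, without touching the global config:
# loginctl enable-linger "$USER"
```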

How to make nc listen for remote connections

Posted: 29 Jul 2021 09:22 AM PDT

I'm trying to listen for a remote connection but nc keeps giving me this error:

Error: Couldn't setup listening socket (err=-3)  

This is the command I used:

nc -lvnp port_number -s my_public_ip  
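A sketch of the likely situation, assuming err=-3 comes from a failed bind(): the address given to -s must actually be configured on a local interface, and a public IP behind NAT is not, so the bind fails. The port number below is an arbitrary example:

```shell
# Check which addresses are actually local:
# ip -o addr show

# Listen on all interfaces instead of naming the (non-local) public IP:
# nc -lvnp 4444
# then forward the chosen port from the router's public IP to this machine.
```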

How to properly set Environment variables (golang on Manjaro)

Posted: 29 Jul 2021 09:41 AM PDT

I have a problem with my Go installation. I'm using Manjaro, and I think the problem is related to that, as Manjaro seems to handle the Go environment differently than suggested for Ubuntu and Windows.

I have go installed and can run code as expected:

$ go run gitlab.com/gitlabtest
Hello, GitLab!

Then I check whether any environment variables are set:

$ echo $GOROOT

$ echo $GOPATH

$ echo $GOBIN

$

So it seems there are none, which is odd. Why could I run my test program at all? I check for environment variables another way:

$ go env GOROOT
/usr/lib/go
$ go env GOPATH
/home/bluebrown/go
$ go env GOBIN

$

That is interesting. Go itself seems to have some knowledge of the environment variables. That's probably why I can run Go code from anywhere, effectively targeting $GOPATH. There is just one problem: $GOBIN seems to be unset.

OK, so I guess that means I have to edit my ~/.bash_profile:

#
# ~/.bash_profile
#
[[ -f ~/.bashrc ]] && . ~/.bashrc

export GOROOT=/usr/lib/go
export GOPATH=$HOME/go
export GOBIN=$HOME/go/bin

Finally I get the result I want, it seems:

$ source .bash_profile
$ echo $GOROOT && echo $GOPATH && echo $GOBIN
/usr/lib/go
/home/bluebrown/go
/home/bluebrown/go/bin
$
$ go env GOROOT && go env GOPATH && go env GOBIN
/usr/lib/go
/home/bluebrown/go
/home/bluebrown/go/bin

That's great; now I can use packages from $GOBIN like glide or govendor, right?

[~]$ go get -u -v github.com/kardianos/govendor
github.com/kardianos/govendor (download)
[~]$ cd $GOPATH/src
[src]$ mkdir testdir
[src]$ cd testdir
[testdir]$ govendor init
bash: govendor: command not found

Well, maybe not. So I try glide:

$ curl https://glide.sh/get | sh
...
which: no glide in
(/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:
 /usr/lib/jvm/default/bin:/usr/bin/site_perl:
 /usr/bin/vendor_perl:/usr/bin/core_perl:/usr/local/go/bin)

glide not found. Did you add $GOBIN to your $PATH?
Fail to install glide

So it turns out that no matter what package I put in $GOBIN, it can't be found. And when I restart the machine, everything is reset again for some reason :(

At this point I don't know what to do anymore.
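Going by the installer's own hint above ("Did you add $GOBIN to your $PATH?"), the missing piece is likely just PATH: $GOBIN points at a directory, but nothing puts that directory on the search path. A sketch for ~/.bash_profile (note that graphical terminals often start non-login shells that read only ~/.bashrc, which would also explain the reset after a restart):

```shell
export GOBIN="$HOME/go/bin"
# Append $GOBIN to PATH so binaries installed there are found:
export PATH="$PATH:$GOBIN"
# Quick sanity check that the directory is now on PATH:
case ":$PATH:" in *":$GOBIN:"*) echo "GOBIN is on PATH" ;; esac
```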

How to replace Ubuntu with Arch on a dual boot?

Posted: 29 Jul 2021 10:06 AM PDT

I want to replace Ubuntu Linux on my current setup. Currently I dual boot Ubuntu MATE and Windows 10, and I would like to replace the Ubuntu installation with Arch.

But before I do so, I was wondering what preparation I should do before erasing /dev/sdc and reinstalling. I use GRUB 2 to choose which operating system to start.

$ dpkg --list | grep grub
ii  grub-common               2.02~beta2-36ubuntu3.17          amd64  GRand Unified Bootloader (common files)
ii  grub-efi-amd64            2.02~beta2-36ubuntu3.17          amd64  GRand Unified Bootloader, version 2 (EFI-AMD64 version)
ii  grub-efi-amd64-bin        2.02~beta2-36ubuntu3.17          amd64  GRand Unified Bootloader, version 2 (EFI-AMD64 binaries)
ii  grub-efi-amd64-signed     1.66.17+2.02~beta2-36ubuntu3.17  amd64  GRand Unified Bootloader, version 2 (EFI-AMD64 version, signed)
ii  grub2-common              2.02~beta2-36ubuntu3.17          amd64  GRand Unified Bootloader (common files for version 2)
ii  grub2-themes-ubuntu-mate  0.3.7                            all    GRand Unified Bootloader, version 2 (ubuntu-mate theme)

So should the settings be saved? If yes, how is this accomplished? Further down you can see that I have three hard drives:

$ parted --list
Model: ATA Samsung SSD 840 (scsi)
Disk /dev/sda: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name                          Flags
 1      1049kB  106MB  105MB  fat32        EFI system partition          boot, esp
 2      106MB   240MB  134MB               Microsoft reserved partition  msftres
 3      240MB   250GB  249GB  ntfs         Basic data partition          msftdata
 4      250GB   250GB  472MB  ntfs                                       hidden, diag

Model: ATA ST1000LM024 HN-M (scsi)
Disk /dev/sdb: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1000GB  1000GB  primary  ntfs

Model: ATA KINGSTON SV300S3 (scsi)
Disk /dev/sdc: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size    File system     Name                  Flags
 1      1049kB  538MB  537MB   fat32           EFI System Partition  boot, esp
 2      538MB   103GB  102GB   ext4
 3      103GB   120GB  17,1GB  linux-swap(v1)

How do I create a new empty file in a bash script?

Posted: 29 Jul 2021 07:48 AM PDT

I'm running some third-party Perl script written such that it requires an output file for the output flag, -o.

Unfortunately, the script appears to require an actual file; that is, users must create an empty 0-byte file filename.txt and then pass this empty file on the script command line:

perl script1.pl -o filename.txt  

Question: how would I create an empty file within a bash script? If one simply runs perl script1.pl -o filename.txt without creating the file first, the script gives an error that the file doesn't exist.
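A sketch of the usual options, either of which yields the 0-byte file the script expects:

```shell
touch filename.txt   # creates the file if missing (or updates its timestamp)
: > filename.txt     # also truncates an existing file to 0 bytes
```

After either line, perl script1.pl -o filename.txt should find its output file.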

How to extract unique values from column #x with its corresponding values of column #y?

Posted: 29 Jul 2021 09:04 AM PDT

I have a comma-separated (FS=',') CSV file named filename with n columns. I need to extract each unique value from column #1 together with its corresponding value in column #10. Column #10 is a date, and it is always the same for a given column #1 value regardless of the other columns.

Content of file filename:

colm.#1 colm.#2 colm.#3 colm.#4 colm.#5 colm.#6 colm.#7 colm.#8 colm.#9 colm.#10 colm.#11
a   231 412 30.84873962 3   1   1   2013    5/28/2013   6/6/2006    299
c   12  41  66.80690765 3   1   1   2014    5/25/2014   4/4/2004    351
d   35  6   25.91622925 3   1   2   2013    6/27/2013   3/3/2003    303
d   352 55  33.91288757 3   1   2   2014    6/26/2014   3/3/2003    355
a   86  3   30.58783722 3   1   3   2013    7/24/2013   6/6/2006    307
c   15  3242    26.6435585  3   1   3   2014    7/24/2014   4/4/2004    359
e   67  1   22.95526123 3   1   4   2013    8/21/2013   5/5/2005    311
a   464 64  4.804824352 3   1   4   2014    8/20/2014   6/6/2006    363
b   66  42  29.42435265 3   1   5   2014    9/18/2014   7/7/2007    367
m   24  2   66.10663319 3   1   6   2014    10/13/2014  9/9/2009    371

I tried the following command, but it only extracts column #1, and I do not know how to get the corresponding value from column #10:

cut -d',' -f1 filename |uniq  

The expected output would be:

a   6/6/2006
b   7/7/2007
c   4/4/2004
d   3/3/2003
e   5/5/2005
m   9/9/2009
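A sketch with awk, assuming whitespace-separated fields as in the sample above (add -F',' if the real file is comma-separated): it skips the header, keeps the first-seen column-10 value per unique column-1 key, and sorts to get the order shown.

```shell
awk 'NR > 1 && !seen[$1]++ { print $1, $10 }' filename | sort
```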

Remove specific word in variable

Posted: 29 Jul 2021 10:55 AM PDT

In a bash script, how can I remove a word from a string, where the word is stored in a variable?

FOO="CATS DOGS FISH MICE"
WORDTOREMOVE="MICE"
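A bash sketch using parameter expansion. The unquoted echo round-trip is a cheap way to squeeze the leftover whitespace: it relies on word splitting, so it also collapses any doubled spaces.

```shell
FOO="CATS DOGS FISH MICE"
WORDTOREMOVE="MICE"
FOO=${FOO//"$WORDTOREMOVE"/}   # delete every occurrence of the word
FOO=$(echo $FOO)               # unquoted on purpose: normalizes spacing
echo "$FOO"                    # CATS DOGS FISH
```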

HPC ssh "connection closed by remote host"

Posted: 29 Jul 2021 07:08 AM PDT

My HPC installed LSF job scheduler.

I log on to the login node (I use Xshell) and submit an interactive job with:

bsub -Is csh  

This puts me on one of the HPC nodes, for example the c01 node.

Then I want to enter another node, for example c02, so I use

ssh c02  

I successfully enter the c02 node. But after several minutes, the connection is closed with the message:

Connection to c02 closed by remote host.
Connection to c02 closed.

So how do I maintain this connection?

The following output is generated when using ssh -vvv c02:

debug3: Wrote 64 bytes for a total of 2925
debug1: channel 0: free: client-session, nchannels 1
debug3: channel 0: status: The following connections are open:
  #0 client-session (t4 r0 i0/0 o0/0 fd 4/5 cfd -1)

debug3: channel 0: close_fds r 4 w 5 e 6 c -1
Connection to c02 closed by remote host.
Connection to c02 closed.
Transferred: sent 2744, received 2384 bytes, in 158.3 seconds
Bytes per second: sent 17.3, received 15.1
debug1: Exit status -1
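If the drop is an idle timeout rather than the scheduler reclaiming the node, client-side keepalives are the usual knob. A sketch for ~/.ssh/config on the node the ssh command is run from (the Host pattern c* is an assumption matching the node names above):

```shell
# ~/.ssh/config
# Host c*
#     ServerAliveInterval 60   # send a keepalive after 60 s of silence
#     ServerAliveCountMax 3    # give up after 3 unanswered keepalives
```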

what is the difference between "ssh user@ip" and "ssh ip user"?

Posted: 29 Jul 2021 09:47 AM PDT

I am facing a problem with one of the above.

Only ssh user@ip works, not the other way round. I get the following error when I run ssh ip root:

sh: root: not found  

Because of this, I GUESS, one of the applications, which uses the problematic syntax, is not able to log in.

How can I send stdout to multiple commands?

Posted: 29 Jul 2021 09:30 AM PDT

There are some commands which filter or act on input, and then pass it along as output, I think usually to stdout - but some commands will just take the stdin and do whatever they do with it, and output nothing.

I'm most familiar with OS X, so the two that come to mind immediately are pbcopy and pbpaste, which are means of accessing the system clipboard.

Anyhow, I know that if I want stdout to go to both the terminal and a file, I can use the tee command. And I know a little about xargs, but I don't think that's what I'm looking for.

I want to know how I can split stdout to go between two (or more) commands. For example:

cat file.txt | stdout-split -c1 pbcopy -c2 grep -i errors  

There is probably a better example than that one, but I really am interested in knowing how I can send stdout to a command that does not relay it, while keeping stdout from being "muted". I'm not asking how to cat a file, grep part of it, and copy it to the clipboard; the specific commands are not that important.

Also, I'm not asking how to send this to a file and stdout. This may be a "duplicate" question (sorry), but the similar questions I found were all asking how to split between stdout and a file, and the answer to those seemed to be tee, which I don't think will work for me.

Finally, you may ask "why not just make pbcopy the last thing in the pipe chain?" and my response is 1) what if I want to use it and still see the output in the console? 2) what if I want to use two commands which do not output stdout after they process the input?

Oh, and one more thing - I realize I could use tee and a named pipe (mkfifo) but I was hoping for a way this could be done inline, concisely, without a prior setup :)
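For what it's worth, bash and zsh can do this inline with process substitution: tee writes to each >(...) command as if it were a file, while the pipe continues downstream, with no mkfifo setup. pbcopy here is the clipboard example from the question, so the concrete consumers are placeholders:

```shell
cat file.txt | tee >(pbcopy) | grep -i errors
# More consumers: just add more substitutions:
# cat file.txt | tee >(pbcopy) >(wc -l > count.txt) | grep -i errors
```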
