Monday, October 4, 2021

Recent Questions - Unix & Linux Stack Exchange



Unrar specific .rar file from multipart list without automatically iterating through the list

Posted: 04 Oct 2021 10:47 AM PDT

I was wondering, since executing unrar -x file.part01.rar 123.zip destinationPath would automatically iterate through the list until the file 123.zip is found, is there a way to stop that auto search for multivolume archives?

For example, I have a set of 100 parts of a multipart volume; it would take some time to find the file if it were located at part51. Instead, I would like to spawn two processes, one that starts at part1 and another that starts at part50. But if I execute unrar -x file.part50.rar 123.zip destinationPath, it skips part50 and starts over from the top.

As a second option, if possible: is there a way to execute unrar -x file.part50.rar 123.zip destinationPath and, if the file is not found, exit/stop the process and start a new execution, unrar -x file.part51.rar 123.zip destinationPath?

Any suggestions or advice would be very much appreciated. Thank you.
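A hedged sketch of the second approach: wrap unrar in a loop that tries one part at a time and stops at the first part that yields the file. The file names and two-digit part numbering follow the question's examples; whether unrar can meaningfully begin extraction mid-volume depends on how the archive was created, so treat this as a starting point only.

```shell
# Try extraction from each part in a range, stopping at the first success.
extract_from() {   # extract_from <first_part> <last_part>
  part=$1
  while [ "$part" -le "$2" ]; do
    # unrar exits non-zero when 123.zip is not extracted from this part
    if unrar x "file.part$(printf '%02d' "$part").rar" 123.zip destinationPath; then
      return 0   # found and extracted: stop here
    fi
    part=$((part + 1))
  done
  return 1       # not found in this range
}
# e.g. run the two halves as the two processes described above:
# extract_from 1 49 & extract_from 50 100 & wait
```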

Why 'let' exits with code 1 when calculation result equals to 0?

Posted: 04 Oct 2021 10:18 AM PDT

I came across this question, whose author had dealt with a problem caused by let x=1-1 exiting with code 1.

According to bash manual:

If the last expression evaluates to 0, let returns 1; otherwise 0 is returned. (p. 56-57)

I'm not so familiar with bash nuances, so my question is: "What is the reason for this behaviour?" Maybe it's because 0 is interpreted as 'false'? It's a bit strange for a bash beginner like me that 0 as the result of an arithmetic expression leads to an error exit code...
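That is exactly the reason: let follows the arithmetic convention that 0 is false, and its exit status reflects the truth value of the expression, not whether the evaluation succeeded. A quick demonstration (bash assumed):

```shell
# The || / && guards also show the usual workaround under `set -e`.
let x=1-1 && echo "true" || echo "exit status $?"   # exit status 1: result was 0
let y=1+1 && echo "exit status $?"                  # exit status 0: result was 2
```

If a zero result must not count as a failure, `x=$((1-1))` assigns without this exit-status behavior.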

How to get xinput device ID of touchscreen without using manufacturer string?

Posted: 04 Oct 2021 10:53 AM PDT

Chromium only shows proper touchscreen behavior when the supplied ID argument is 7 in the case of the below devices. Note that the xinput IDs are subject to change during a reboot, so I can't just use 7 all the time.

I understand I can whitelist manufacturer strings but I would like something reliable that always works and doesn't depend on manufacturer strings staying the same.

DEVICE 1:

Touchscreen Hardware on device 1 (reported by xinput):

⎡ Virtual core pointer                  id=2  [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer       id=4  [slave  pointer  (2)]
⎜   ↳ ADS7846 Touchscreen              id=9  [slave  pointer  (2)]
⎜   ↳ NHD Newhaven Display             id=7  [slave  pointer  (2)]
⎣ Virtual core keyboard                id=3  [master keyboard (2)]
    ↳ Virtual core XTEST keyboard      id=5  [slave  keyboard (3)]
    ↳ 30370000.snvs:snvs-powerkey      id=6  [slave  keyboard (3)]
    ↳ Dell Dell USB Entry Keyboard     id=8  [slave  keyboard (3)]
/dev/input/touchscreen0 -> /dev/input/event1  
udevadm info --query=property --name=/dev/input/event1
DEVLINKS=/dev/input/by-id/usb-NHD_Newhaven_Display-event-if00 /dev/input/by-path/platform-ci_hdrc.0-usb-0:1:1.0-event /dev/input/touchscreen0
DEVNAME=/dev/input/event1
DEVPATH=/devices/soc0/soc/30800000.aips-bus/30b10000.usb/ci_hdrc.0/usb1/1-1/1-1:1.0/0003:0461:0022.005C/input/input93/event1
ID_BUS=usb
ID_INPUT=1
ID_INPUT_TOUCHSCREEN=1
ID_MODEL=Newhaven_Display
ID_MODEL_ENC=Newhaven\x20Display\x20
ID_MODEL_ID=0022
ID_PATH=platform-ci_hdrc.0-usb-0:1:1.0
ID_PATH_TAG=platform-ci_hdrc_0-usb-0_1_1_0
ID_REVISION=0100
ID_SERIAL=NHD_Newhaven_Display
ID_TYPE=hid
ID_USB_DRIVER=usbhid
ID_USB_INTERFACES=:030102:
ID_USB_INTERFACE_NUM=00
ID_VENDOR=NHD
ID_VENDOR_ENC=NHD
ID_VENDOR_ID=0461
LIBINPUT_CALIBRATION_MATRIX=1.066870 -0.005907 -0.026620 0.007245 -1.136364 1.046200
LIBINPUT_DEVICE_GROUP=3/461/22:usb-ci_hdrc.0-1
MAJOR=13
MINOR=65
SUBSYSTEM=input
USEC_INITIALIZED=514876494495

DEVICE 2:

Touchscreen Hardware on device 2 (reported by xinput):

⎡ Virtual core pointer                                          id=2  [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                               id=4  [slave  pointer  (2)]
⎜   ↳ Silicon Integrated System Co. SiS HID Touch Controller Mouse        id=8  [slave  pointer  (2)]
⎜   ↳ Silicon Integrated System Co. SiS HID Touch Controller Touchscreen  id=7  [slave  pointer  (2)]
⎜   ↳ ADS7846 Touchscreen                                      id=9  [slave  pointer  (2)]
⎣ Virtual core keyboard                                         id=3  [master keyboard (2)]
    ↳ Virtual core XTEST keyboard                              id=5  [slave  keyboard (3)]
    ↳ 30370000.snvs:snvs-powerkey                              id=6  [slave  keyboard (3)]
/dev/input/touchscreen0 -> /dev/input/event3 (event1 seems like the correct touchscreen though,  based on the udevadm output)  
udevadm info --query=property --name=/dev/input/event1
DEVLINKS=/dev/input/by-path/platform-ci_hdrc.0-usb-0:1:1.0-event /dev/input/touchscreen0 /dev/input/by-id/usb-Silicon_Integrated_System_Co._SiS_HID_Touch_Controller-event-if00
DEVNAME=/dev/input/event1
DEVPATH=/devices/soc0/soc/30800000.aips-bus/30b10000.usb/ci_hdrc.0/usb1/1-1/1-1:1.0/0003:04E7:1080.0001/input/input1/event1
ID_BUS=usb
ID_INPUT=1
ID_INPUT_HEIGHT_MM=136
ID_INPUT_TOUCHSCREEN=1
ID_INPUT_WIDTH_MM=215
ID_MODEL=SiS_HID_Touch_Controller
ID_MODEL_ENC=SiS\x20HID\x20Touch\x20Controller
ID_MODEL_ID=1080
ID_PATH=platform-ci_hdrc.0-usb-0:1:1.0
ID_PATH_TAG=platform-ci_hdrc_0-usb-0_1_1_0
ID_REVISION=0100
ID_SERIAL=Silicon_Integrated_System_Co._SiS_HID_Touch_Controller
ID_TYPE=hid
ID_USB_DRIVER=usbhid
ID_USB_INTERFACES=:030000:
ID_USB_INTERFACE_NUM=00
ID_VENDOR=Silicon_Integrated_System_Co.
ID_VENDOR_ENC=Silicon\x20Integrated\x20System\x20Co.
ID_VENDOR_ID=04e7
LIBINPUT_CALIBRATION_MATRIX=1.066870 -0.005907 -0.026620 0.007245 -1.136364 1.046200
LIBINPUT_DEVICE_GROUP=3/4e7/1080:usb-ci_hdrc.0-1
MAJOR=13
MINOR=65
SUBSYSTEM=input
USEC_INITIALIZED=8612683
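One reboot-stable approach, sketched below: resolve a stable device node (the /dev/input/touchscreen0 symlink from the question, or whichever event node udev tags with ID_INPUT_TOUCHSCREEN=1) and then pick the xinput id whose "Device Node" property points at it. xinput list-props does report a Device Node line; the symlink name is taken from the question and may differ per system.

```shell
# Map an event device node to its xinput id via the Device Node property.
touchscreen_xinput_id() {   # touchscreen_xinput_id /dev/input/event1
  for id in $(xinput list --id-only); do
    if xinput list-props "$id" 2>/dev/null | grep -q "Device Node.*\"$1\""; then
      echo "$id"
      return 0
    fi
  done
  return 1
}
# Resolve the stable symlink first, then look the node up, e.g.:
# touchscreen_xinput_id "$(readlink -f /dev/input/touchscreen0)"
```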

find larger files under /tmp ( owned by oracle user, size larger than 1M) and nullify

Posted: 04 Oct 2021 10:11 AM PDT

My requirement is to find files under /tmp owned by the oracle user and larger than 1M, and then nullify those files. Can someone please help me with this?
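A hedged sketch (run the find without -exec first to review what would be matched): -user selects files owned by oracle, -size +1M those larger than 1 MiB, and truncate -s 0 empties each file in place without changing ownership or permissions.

```shell
# Empty (nullify) all regular files under a directory that belong to a
# given user and exceed 1 MiB.
nullify_large() {   # nullify_large <dir> <owner>
  find "$1" -type f -user "$2" -size +1M -exec truncate -s 0 {} +
}
# e.g.: nullify_large /tmp oracle
```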

run systemctl --user commands as root

Posted: 04 Oct 2021 09:25 AM PDT

I need root to be able to manage systemctl --user units. Right now I have user1 set up with systemd user units. If the user logs in directly via terminal, GUI, or ssh, they are able to run all systemctl --user commands. While the user is still logged in, I can run the following as root and perform all systemctl --user commands as that user with no problem:

su - user1 -c "systemctl --user status myunit.service"  

However if the user logs off, then no one can run systemctl --user commands as that user, not even root. I will continue to get

Failed to connect to bus: No such file or directory  

Even if I run su - user1 as root, that is not good enough; I will get the same error. The user literally needs to log in before anyone can manage that user's units.

Apparently this is a known "issue" (quotes because the system is running as designed). Note: I tried setting the XDG_RUNTIME_DIR env var in user1's bashrc; it does not help.

This user seems to have found a workaround, but it does not work; it looks like the developers did not approve of his idea: Inspect unit status for user-units with systemctl as root

The only workaround I have found is literally to ssh into the user account and run commands like this, after authenticating with public keys:

ssh user1@localhost -f 'systemctl --user status myunit.service'  

I am looking for a workaround that does not require an SSH connection. I need root to be able to manage a systemd user unit while that user is not logged in. How can I accomplish this?
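Two mechanisms worth trying, both hedged on the systemd version in use: lingering keeps user1's user manager (and its D-Bus socket) running when the user is logged out, which removes the "Failed to connect to bus" error, and sufficiently recent systemctl can address another user's manager directly.

```
# Keep user1's systemd --user instance alive across logouts:
sudo loginctl enable-linger user1

# On reasonably new systemd, root can target a user manager directly:
sudo systemctl --user --machine=user1@ status myunit.service

# On older versions, machinectl can open a proper login session instead:
sudo machinectl shell user1@ /usr/bin/systemctl --user status myunit.service
```

Whether --machine=user1@ reaches the user instance depends on the systemd release; if it is not supported, enable-linger plus machinectl shell is the fallback.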

Display the sum of the number in csv file

Posted: 04 Oct 2021 09:16 AM PDT

Write a shell script to read a student record csv file and display the sum of the number in third column.

Details: I will pass the input file path, which is a csv file containing the data below:

Name   Math Marks  English Marks  Total Marks
Vivek      95           90
Ajay       92           82
Vinay      84           89

Read the data from the file and update the sum of the values in the fourth column.
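Despite the .csv name, the sample is whitespace-separated, so a short awk sketch works (add -F',' for a genuinely comma-separated file; the single-word header below is a simplification of the question's multi-word column names): skip the header row and sum column 3.

```shell
# Sample data, simplified from the question:
cat > students.txt <<'EOF'
Name Math English Total
Vivek 95 90
Ajay 92 82
Vinay 84 89
EOF
# Skip the header (NR > 1), accumulate field 3, print the total:
awk 'NR > 1 { sum += $3 } END { print sum }' students.txt   # 261
```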

Is there a tool that can read and write to disk at constant throughput?

Posted: 04 Oct 2021 09:07 AM PDT

I'm looking for a tool that can read and write data at a constant target throughput, say 3Mb/s, rather than pushing the I/O system to its limits. I then intend to monitor various metrics whilst this fairly constant I/O activity is happening. I've looked at tools like stress and fio but it seems like they're more geared towards maximum throughput. Any suggestions for tools that can do something like this would be greatly appreciated. Thanks.
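For what it's worth, fio is not only for maximum-throughput testing: its rate option caps bandwidth at a target value. A sketch (file name, size, and duration are arbitrary choices, not requirements):

```
# Mixed read/write at ~3 MB/s in each direction for 60 s against a scratch file.
fio --name=steady --filename=/tmp/fio-steady.dat --size=256m \
    --rw=readwrite --rate=3m,3m --runtime=60 --time_based
```

For a read-only workload, piping through pv with a rate limit (e.g. `pv -L 3m < bigfile > /dev/null`) is a lighter-weight alternative.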

Drive formatted through USB, plugged in through SATA but no filesystems recognized?

Posted: 04 Oct 2021 08:55 AM PDT

I need to change a drive in my home NAS, so I bought a new WD 2TB drive. Using a WD My Book 111D USB enclosure, I partitioned and formatted it and copied over the data from the old drive. Then I swapped drives in my NAS, which has a SATA connection, but now the drive appears unformatted. See below.

I can't understand what's happening, when I plug it back through USB everything is back to normal.

Does partitioning through USB differ somehow? It appears as if the partition table "stays in the USB enclosure", which doesn't make sense to me. Is there something to try? Do I have to start the process again in the NAS enclosure?

My NAS is a Kobol Helios64 (Debian Linux 5.10.35-rockchip64). The disk appears as /dev/sda. When connected through SATA:

mike@helios64:~$ lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda            8:0    0   1,8T  0 disk
sdb            8:16   0 931,5G  0 disk
└─md0          9:0    0   2,7T  0 raid5 /srv/dev-disk-by-label-Backup
sdc            8:32   0 931,5G  0 disk
└─md0          9:0    0   2,7T  0 raid5 /srv/dev-disk-by-label-Backup
sdd            8:48   0 931,5G  0 disk
└─md0          9:0    0   2,7T  0 raid5 /srv/dev-disk-by-label-Backup
sde            8:64   0 931,5G  0 disk
└─md0          9:0    0   2,7T  0 raid5 /srv/dev-disk-by-label-Backup
mmcblk2      179:0    0  14,6G  0 disk
└─mmcblk2p1  179:1    0  14,4G  0 part  /
mmcblk2boot0 179:32   0     4M  1 disk
mmcblk2boot1 179:64   0     4M  1 disk

When connected through USB:

mike@helios64:~$ lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda            8:0    0   1,8T  0 disk
├─sda1         8:1    0   1,2T  0 part
└─sda2         8:2    0   613G  0 part
sdb            8:16   0 931,5G  0 disk
└─md0          9:0    0   2,7T  0 raid5 /srv/dev-disk-by-label-Backup
sdc            8:32   0 931,5G  0 disk
└─md0          9:0    0   2,7T  0 raid5 /srv/dev-disk-by-label-Backup
sdd            8:48   0 931,5G  0 disk
└─md0          9:0    0   2,7T  0 raid5 /srv/dev-disk-by-label-Backup
sde            8:64   0 931,5G  0 disk
└─md0          9:0    0   2,7T  0 raid5 /srv/dev-disk-by-label-Backup
sr0           11:0    1    30M  0 rom
mmcblk2      179:0    0  14,6G  0 disk
└─mmcblk2p1  179:1    0  14,4G  0 part  /
mmcblk2boot0 179:32   0     4M  1 disk
mmcblk2boot1 179:64   0     4M  1 disk

How to determine whether a process would run out-of-memory?

Posted: 04 Oct 2021 08:56 AM PDT

A testsuite has a couple of tests that consume quite a lot of memory (11GB or so). This is typically not a problem when running those tests on my developer machine, but in CI the testsuite is oftentimes executed on machines that don't have that much RAM available.

When the CI-host lacks enough memory, my testsuite is stopped by the OOMkiller, thus failing.

Rather than failing the entire test-suite, I would like to limit the tests that are run based on the characteristics of the host machine.

The idea is to use valgrind's massif tool to get an approximation of how much memory will be consumed by a given test, then check how much memory is available on the host, and skip those tests that would exceed this memory.

However, I have the gut feeling that finding out "how much memory is available" is a non-trivial task.

  • I cannot control on which runners the CI is going to be executed (and there might be different ones with different amounts of available RAM)
  • On the runners, I am not root, and thus have little/no control over OOMkiller itself
  • The runners are most likely just docker containers running on some random cloud provider (Who is in control of OOMkiller in this case? the container? the host?)

Just parsing /proc/meminfo seems a bit too naive

  • does that give me the memory available within the container?
  • MemAvailable seems like a good choice, but even on my 32GB-RAM desktop this currently shows only 10GB of available memory and I can still run a process that consumes 11GB of memory (afaict MemAvailable ignores available swapspace)

Obviously the system is dynamic, and while there might be enough available memory when I start my test, other processes might consume memory during the test's runtime, so I could still run out-of-memory

So: How should I tackle this problem? Is there a way to query the OOMkiller for my allowance of memory?

nftables map `port` to `ip:port` for DNAT

Posted: 04 Oct 2021 08:35 AM PDT

Is it possible to have an nftables map which maps port to ipv4_addr:port, where port and ipv4_addr:port have different TCP port numbers? For example, I want to dnat all incoming packets on port 80 to a container running a web server on port 8080 (purely with nftables). This is possible using two maps and two map lookups, as below:

table ip dnatTable {
  chain dnatChain {
    type nat hook prerouting priority dstnat; policy accept;
    dnat to tcp dport map { 80 : 172.17.0.3 }:tcp dport map { 80 : 8080 }
  }
}

However, I was wondering if it is possible with only a single map lookup?

Thanks
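On sufficiently recent nftables, map values can be concatenations, so the address and the translated port can come from a single lookup; a sketch (check that your nft version accepts this syntax before relying on it):

```
table ip dnatTable {
  chain dnatChain {
    type nat hook prerouting priority dstnat; policy accept;
    dnat to tcp dport map { 80 : 172.17.0.3 . 8080 }
  }
}
```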

Can ZFS snapshot names contain newlines? If yes, how to parse the output of zfs list -t snapshot?

Posted: 04 Oct 2021 09:23 AM PDT

The following question relates to ZFS on Linux (ZoL) / OpenZFS, as it is provided e.g. in Debian Buster and Debian Bullseye.

As the title says, I'd like to know whether ZFS snapshot names theoretically can contain newlines (as normal filenames can do), and if yes, how I can work safely through a list of snapshot names in a script.

I have tried to create snapshots with such names, but zfs (fortunately) refused to do that. However, I'd like to be sure, and therefore, I'm asking here.

A bit of background: On this site and elsewhere, there is often the question of how to parse the output of ls to work through a list of file names name by name. The answer mostly is: don't do this, because there can be unexpected side effects, for example if the file names contain newlines; instead, use bash's globbing. I have understood that (in fact, I have always done it that way).

However, when it comes to ZFS snapshot names, there is no globbing. For example, on my box, when I issue something like zfs list -H -r -o name -t snapshot rpool/vm-garak, I get a list of snapshot names having the entries separated by a newline:

root@cerberus ~/scripts # zfs list -H -r -o name -t snapshot rpool/vm-garak
rpool/vm-garak@Q-2021-10-03-12-09-01
rpool/vm-garak@T-2021-10-03-12-14-01
rpool/vm-garak@T-2021-10-03-12-19-01
rpool/vm-garak@Q-2021-10-03-12-24-01
rpool/vm-garak@T-2021-10-03-12-29-01
rpool/vm-garak@T-2021-10-03-12-34-01
rpool/vm-garak@Q-2021-10-03-12-39-01
rpool/vm-garak@T-2021-10-03-12-44-01
rpool/vm-garak@T-2021-10-03-12-49-01
rpool/vm-garak@H-2021-10-03-12-54-01

I have some scripts which work through this list name by name; that is, line by line, relying on the fact that the newline character reliably indicates a new snapshot name.

As long as I have the snapshot creation under my control, this is safe, because I can avoid unreasonable snapshot names. But the snapshots are created by somebody else, so what if there is a newline in the name? As mentioned above, I had no success with creating such snapshot names, but I am surely not aware of all weird methods which could produce them.

A final note: I am aware that I eventually could get away with globbing as long as it concerns normal dataset (file system) snapshots, because ZFS puts them into a hidden directory and makes them accessible as normal directories / files. However, in my case, the snapshots are snapshots from ZVOLs, which ZFS does not make accessible that way.
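For what it's worth, zfs(8) restricts dataset and snapshot component names to alphanumeric characters plus underscore, hyphen, colon, and period, so a newline cannot appear in a snapshot name, and reading the -H output line by line is safe:

```shell
# Safe line-by-line iteration over snapshot names: since newlines cannot
# occur in ZFS names, each line of -H output is exactly one name.
list_snapshots() {
  zfs list -H -r -o name -t snapshot "$1"
}
list_snapshots rpool/vm-garak 2>/dev/null |
while IFS= read -r snap; do
  printf 'snapshot: %s\n' "$snap"
done
```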

How can the grep faster than the find and locate command?

Posted: 04 Oct 2021 08:32 AM PDT

I know the locate command is faster than the find command because it searches through a database (mlocate.db). If I create my own database, I can find the file I want much faster with the grep command.

I use the sudo find / > database.txt command to create my own database. This gives us a file that looks like:

/
/vmlinuz.old
/var
/var/mail
/var/spool
/var/spool/rsyslog
/var/spool/mail
/var/spool/cups
/var/spool/cups/tmp
/var/spool/cron

I'm searching for the same file three different ways.

$ time find / -name blah
0.59user 0.67system 0:01.71elapsed 71%CPU
$ time locate blah
0.26user 0.00system 0:00.30elapsed 83%CPU
$ time grep blah database.txt
0.04user 0.02system 0:00.10elapsed 61%CPU

As you can see, our homegrown locate using grep is actually way faster! Our homegrown database takes about 3x as much space as locate's database (45MB instead of 15MB), so that's probably part of why.

This kind of makes me wonder if our database format which doesn't use any clever compression tricks might actually be a better format if you're not worried about the extra space it takes up. But I don't really understand yet why locate is so much slower.

My current theory is that grep is better optimized than locate and that it can do smarter stuff.

I came across this blog post while researching and couldn't find a definitive answer.

Difference between if [ ... and test ... statement in bash

Posted: 04 Oct 2021 08:35 AM PDT

Consider the following:

echo "hello" > file.txt
is_match1 () {
  local m
  m=$(cat "file.txt" | grep -F "$1")
  if [ -z "$m" ]; then
    return 1
  fi
}
is_match2 () {
  local m
  m=$(cat "file.txt" | grep -F "$1")
  test -z "$m" && return 1
}
is_match1 "hello"
echo "$?"
0
is_match2 "hello"
echo "$?"
1

Why does is_match2 return 1?
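Both forms test the same condition; they differ only in the function's final exit status, because a function returns the status of its last command. In is_match1 the last command is the if compound, and an if whose condition is false and which runs no branch itself exits 0. In is_match2 the last command is `test -z "$m" && return 1`: when a match was found, test fails, && short-circuits, and that failure (status 1) becomes the return value. A stripped-down demonstration:

```shell
f1() { if [ -z "hello" ]; then return 1; fi; }   # condition false -> if exits 0
f2() { test -z "hello" && return 1; }            # test fails -> status 1

f1 && echo "f1 succeeded"        # prints: f1 succeeded
f2 || echo "f2 failed with $?"   # prints: f2 failed with 1
```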

Replace new lines at end of lines starting with pattern

Posted: 04 Oct 2021 08:21 AM PDT

I have a file on my Ubuntu machine where I've marked the start of some lines using '@':

@A
bbb
@B
bbb
@D
ccc

I want to remove the new lines at the end of lines starting '@' so the above file becomes

@Abbb
@Bbbb
@Dccc

I can match the lines starting @ using sed:

sed /^@/ ...  

but I'm lost trying to find a way to remove the newline character at the end of the string. There must be a way of doing this easily using sed, without having to open the file in a text editor and remove them myself.
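With GNU sed this is one pass: on every line matching ^@, append the next line to the pattern space (N) and delete the embedded newline. A sketch with the sample data inline:

```shell
printf '@A\nbbb\n@B\nbbb\n@D\nccc\n' > marked.txt
# N pulls the following line into the pattern space; s/\n// joins them.
sed '/^@/{N;s/\n//;}' marked.txt
# @Abbb
# @Bbbb
# @Dccc
```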

To update only certain Arch packages

Posted: 04 Oct 2021 09:48 AM PDT

How do we update only certain packages using pacman?

E.g. how do we update only packages matching (in regex syntax) py.+, as in:

$ sudo pacman -S 'py.+'
error: target not found: py.+

That does not work. Please help out; thanks in advance.
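pacman's -S takes literal package names, not regexes, so the pattern has to be expanded first. A sketch of the mechanics, with a loud caveat: Arch officially supports only full system upgrades, and partial upgrades like this can break dependencies.

```
# List installed packages (-Qq), keep those matching the pattern,
# and upgrade just them (--needed skips already-current ones).
pacman -Qq | grep -E '^py' | xargs -r sudo pacman -S --needed
```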

How to print only the column value between two matched columns

Posted: 04 Oct 2021 10:33 AM PDT

I have a file /tmp/ggloc.log which contains the following data:

$ cat /tmp/ggloc.log
oracle    12061      1  1 Sep08 ?        10:44:07 ./mgr PARAMFILE /oracle/gg/dirprm/mgr.prm REPORTFILE /oracle/gg/dirrpt/MGR.rpt PROCESSID MGR USESUBDIRS
oracle    75841  75810  0 13:55 ?        00:00:00 grep -i mgr
postfix  103283 103268  0 Feb24 ?        00:02:18 qmgr -l -t unix -u
oracle   185935      1  0 Sep08 ?        00:14:14 ./mgr PARAMFILE /oracle/GG_123012/GG_HOME/dirprm/mgr.prm REPORTFILE /oracle/GG_123012/GG_HOME/dirrpt/MGR.rpt PROCESSID MGR

So from above file , I want below output

/oracle/gg
/oracle/GG_123012/GG_HOME

I have tried as below

k=$(cat /tmp/ggloc.log)
echo "$k" | sed 's/.*PARAMFILE \(.*\) REPORTFILE.*/\1/' | awk -F "/dirprm" '{print $1}'

and I am getting below output

/oracle/gg
oracle    75841  75810  0 13:55 ?        00:00:00 grep -i mgr
postfix  103283 103268  0 Feb24 ?        00:02:18 qmgr -l -t unix -u
/oracle/GG_123012/GG_HOME

So how do I get only

/oracle/gg
/oracle/GG_123012/GG_HOME

Need your inputs
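The noise comes from sed passing lines without PARAMFILE through unchanged. A single awk pass avoids that: scan the fields, and only when a PARAMFILE field exists print the following field with the /dirprm suffix stripped. Sketch with the sample data inline:

```shell
cat > ggloc.log <<'EOF'
oracle    12061      1  1 Sep08 ?  10:44:07 ./mgr PARAMFILE /oracle/gg/dirprm/mgr.prm REPORTFILE /oracle/gg/dirrpt/MGR.rpt PROCESSID MGR USESUBDIRS
oracle    75841  75810  0 13:55 ?  00:00:00 grep -i mgr
postfix  103283 103268  0 Feb24 ?  00:02:18 qmgr -l -t unix -u
oracle   185935      1  0 Sep08 ?  00:14:14 ./mgr PARAMFILE /oracle/GG_123012/GG_HOME/dirprm/mgr.prm REPORTFILE /oracle/GG_123012/GG_HOME/dirrpt/MGR.rpt PROCESSID MGR
EOF
# Lines with no PARAMFILE field print nothing, so the noise drops out.
awk '{ for (i = 1; i < NF; i++)
         if ($i == "PARAMFILE") { p = $(i + 1); sub(/\/dirprm.*/, "", p); print p } }' ggloc.log
# /oracle/gg
# /oracle/GG_123012/GG_HOME
```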

How to write a range of /16 IPs in a single expression?

Posted: 04 Oct 2021 09:30 AM PDT

I'd like to ban this range of Chinese IPs in nginx:

 '223.64.0.0 - 223.117.255.255'  

I know how to ban each of /16 range like:

deny 223.64.0.0/16;  

But it would take many lines to cover the whole 223.64 - 223.117 range, so I'm wondering if there is a shorthand notation to do it in one line?
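Not in one line, because CIDR blocks must be power-of-two sized and aligned to their size, but the 54 /16s from .64 to .117 aggregate into four prefixes: a /11 covers .64-.95 (32 blocks), a /12 covers .96-.111 (16), a /14 covers .112-.115 (4), and a /15 covers .116-.117 (2); 32+16+4+2 = 54. In nginx:

```
deny 223.64.0.0/11;
deny 223.96.0.0/12;
deny 223.112.0.0/14;
deny 223.116.0.0/15;
```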

How to grep numbers from line matched with a pattern

Posted: 04 Oct 2021 09:39 AM PDT

I want to extract some information from a log file which reads:

...
Running ep. 0
...
Initial position for this ep is 7.338690864048985,28.51815509409351,11.795143979909135
...
Running ep. 1
...
Initial position for this ep is 10.599326804010953,7.514871863851674,14.843070346933654
...

Now I have a bash code that can extract some data from it as

cat screen2.dat | grep -oP 'Running ep. \K([0-9]+)|(?<=for this ep is )[+-]?[0-9]+([.][0-9]+)?' | paste -d' ' - -

but the output is only the number after "Running ep." and the first number after "Initial position for this ep is ":

0 7.338690864048985
1 10.599326804010953
.
.
.

I am expecting something like

0 7.338690864048985 28.51815509409351 11.795143979909135
1 10.599326804010953 7.514871863851674 14.843070346933654
.
.
.
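The pattern [+-]?[0-9]+([.][0-9]+)? stops at the first comma, which is why only the first coordinate survives. One fix, still with GNU grep's -P: capture the whole remainder of the position line and turn the commas into spaces before pasting:

```shell
cat > screen2.dat <<'EOF'
Running ep. 0
Initial position for this ep is 7.338690864048985,28.51815509409351,11.795143979909135
Running ep. 1
Initial position for this ep is 10.599326804010953,7.514871863851674,14.843070346933654
EOF
# .* after the lookbehind grabs all three coordinates; tr splits them.
grep -oP 'Running ep\. \K[0-9]+|(?<=for this ep is ).*' screen2.dat |
  tr ',' ' ' | paste -d' ' - -
# 0 7.338690864048985 28.51815509409351 11.795143979909135
# 1 10.599326804010953 7.514871863851674 14.843070346933654
```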

Which of the multiple IPv6 addresses is used as source address and how is the decision made?

Posted: 04 Oct 2021 09:14 AM PDT

I have a setup with Computer A being directly tied to the router via a LAN cable and Computer B being connected via wifi.

This is the output of Computer A's ip addr command:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 70:85:c2:cc:c2:4d brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.130/24 brd 192.168.0.255 scope global dynamic noprefixroute enp8s0
       valid_lft 568337sec preferred_lft 568337sec
    inet6 2a02:8109:9cc0:3090:a99a:ec1e:b598:facd/128 scope global dynamic noprefixroute
       valid_lft 568305sec preferred_lft 568305sec
    inet6 2a02:8109:9cc0:3090:7b53:da6c:4e19:580c/64 scope global dynamic noprefixroute
       valid_lft 86399sec preferred_lft 43199sec
    inet6 fe80::3c15:8b76:5ba7:4f87/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Using the ip route get command, it seems to me that all packets I send from the computer have src set to 2a02:8109:9cc0:3090:a99a:ec1e:b598:facd (the /128 address). I have multiple questions to understand this output.

  1. Why is this address chosen as src?
  2. Why do I have two scope global addresses in the first place?
  3. Why does one of the IPv6 addresses say the prefix length is /128, although the network part is /64? (The router clearly only fixes the first 64 bits.)
  4. Finally, why can't I connect to the last address (scope link) from Computer B, even though they are on the same network? (Would it work if both were connected to the same switch via LAN, or both were connected via wifi?)

How to increment a date from a to b?

Posted: 04 Oct 2021 09:19 AM PDT

I have 2 dates, start=20190903 & end=20210912, and want to increment from start until it reaches end, with an increment of 13 days.

I have the following code, but it goes past the end.

#! /usr/bin/env bash

start="20190903"
end="20210912"

startdate="$(date -d ${start} +'%Y-%m-%d')"
enddate="$(date -d ${end} +'%Y-%m-%d')"
echo ${startdate}
echo ${enddate}

while [ "${startdate}" < "${enddate}" ]; do
    echo ${startdate}
    startdate="$( date -d "${startdate} + 13 days" +'%Y-%m-%d')"
done
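The core bug: inside [ ], < is a redirection, not a string comparison (use [[ ... < ... ]] or compare numbers). A corrected sketch using GNU date, comparing epoch seconds so the loop cannot overshoot:

```shell
# Print dates from start to end inclusive, stepping N days.
date_range() {   # date_range <start YYYYMMDD> <end YYYYMMDD> <step-days>
  local d end_s
  d=$(date -d "$1" +%Y-%m-%d)
  end_s=$(date -d "$2" +%s)
  while [ "$(date -d "$d" +%s)" -le "$end_s" ]; do
      echo "$d"
      d=$(date -d "$d + $3 days" +%Y-%m-%d)
  done
}
date_range 20190903 20210912 13   # 2019-09-03, 2019-09-16, ...
```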

Generate templates in yaml from a CSV file

Posted: 04 Oct 2021 09:34 AM PDT

I'm trying to create yaml files from a template, using my variables. My yaml template looks like this:

number: {{NUMBER}}
name: {{NAME}}
region: {{REGION}}
storenum: {{STORENUM}}
clients: {{CLIENTS}}
tags: {{TAGS}}

storename: {{STORENAME}}
employee: {{EMPLOYEE}}
products: {{PRODUCTS}}

But my variables are in a CSV file; the structure is as follows:

Number - Name - Region - Storenum
StoreX - StoreX - New York - 30

I now have a little script to create a file from the template with the variable parameters, invoked like script.sh template.yml -f variables.txt. And my result looks like this:

number: 37579922
name: Store1
region: New York
storenum: 32
clients: 100
tags: stores

storename: Store newyork
employee: 10
products: 200

But I can only do one at a time. Is there any way to read the CSV parameters and feed them to the program, generating for example Template1, Template2, ... from the CSV parameters? Any help is appreciated.

#!/bin/bash
readonly PROGNAME=$(basename $0)

config_file="<none>"
print_only="false"
silent="false"

usage="${PROGNAME} [-h] [-d] [-f] [-s] --
where:
    -h, --help
        Show this help text
    -p, --print
        Don't do anything, just print the result of the variable expansion(s)
    -f, --file
        Specify a file to read variables from
    -s, --silent
        Don't print warning messages (for example if no variables are found)

examples:
    VAR1=Something VAR2=1.2.3 ${PROGNAME} test.txt
    ${PROGNAME} test.txt -f my-variables.txt
    ${PROGNAME} test.txt -f my-variables.txt > new-test.txt"

if [ $# -eq 0 ]; then
    echo "$usage"
    exit 1
fi

if [[ ! -f "${1}" ]]; then
    echo "You need to specify a template file" >&2
    echo "$usage"
    exit 1
fi

template="${1}"

if [ "$#" -ne 0 ]; then
    while [ "$#" -gt 0 ]
    do
        case "$1" in
        -h|--help)
            echo "$usage"
            exit 0
            ;;
        -p|--print)
            print_only="true"
            ;;
        -f|--file)
            config_file="$2"
            ;;
        -s|--silent)
            silent="true"
            ;;
        --)
            break
            ;;
        -*)
            echo "Invalid option '$1'. Use --help to see the valid options" >&2
            exit 1
            ;;
        # an option argument, continue
        *)  ;;
        esac
        shift
    done
fi

vars=$(grep -oE '\{\{[A-Za-z0-9_]+\}\}' "${template}" | sort | uniq | sed -e 's/^{{//' -e 's/}}$//')

if [[ -z "$vars" ]]; then
    if [ "$silent" == "false" ]; then
        echo "Warning: No variable was found in ${template}, syntax is {{VAR}}" >&2
    fi
fi

# Load variables from file if needed
if [ "${config_file}" != "<none>" ]; then
    if [[ ! -f "${config_file}" ]]; then
        echo "The file ${config_file} does not exists" >&2
        echo "$usage"
        exit 1
    fi

    source "${config_file}"
fi

var_value() {
    eval echo \$$1
}

replaces=""

# Reads default values defined as {{VAR=value}} and delete those lines
# There are evaluated, so you can do {{PATH=$HOME}} or {{PATH=`pwd`}}
# You can even reference variables defined in the template before
defaults=$(grep -oE '^\{\{[A-Za-z0-9_]+=.+\}\}' "${template}" | sed -e 's/^{{//' -e 's/}}$//')

for default in $defaults; do
    var=$(echo "$default" | grep -oE "^[A-Za-z0-9_]+")
    current=`var_value $var`

    # Replace only if var is not set
    if [[ -z "$current" ]]; then
        eval $default
    fi

    # remove define line
    replaces="-e '/^{{$var=/d' $replaces"
    vars="$vars
$current"
done

vars=$(echo $vars | sort | uniq)

if [[ "$print_only" == "true" ]]; then
    for var in $vars; do
        value=`var_value $var`
        echo "$var = $value"
    done
    exit 0
fi

# Replace all {{VAR}} by $VAR value
for var in $vars; do
    value=$(var_value $var | sed -e "s;\&;\\\&;g" -e "s;\ ;\\\ ;g") # '&' and <space> is escaped
    if [[ -z "$value" ]]; then
        if [ $silent == "false" ]; then
            echo "Warning: $var is not defined and no default is set, replacing by empty" >&2
        fi
    fi

    # Escape slashes
    value=$(echo "$value" | sed 's/\//\\\//g');
    replaces="-e 's/{{$var}}/${value}/g' $replaces"
done

escaped_template_path=$(echo $template | sed 's/ /\\ /g')
eval sed $replaces "$escaped_template_path"
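One way, sketched as a self-contained alternative: do the substitution per CSV row directly with sed, writing one output file per row. The column names/order and output file naming are assumptions, and values containing / or & would need the same extra escaping the script above performs.

```shell
# Render one output file per CSV data row by replacing {{VAR}} tokens.
render_rows() {   # render_rows <template> <csv>
  tail -n +2 "$2" | while IFS=, read -r number name region storenum; do
    sed -e "s/{{NUMBER}}/$number/g" -e "s/{{NAME}}/$name/g" \
        -e "s/{{REGION}}/$region/g" -e "s/{{STORENUM}}/$storenum/g" \
        "$1" > "store-$name.yml"
  done
}
# e.g.: render_rows template.yml variables.csv
```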

I can't install pip and other essentials

Posted: 04 Oct 2021 10:40 AM PDT

When I try to install pip using apt with the command sudo apt install python-pip, it replies:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python-pip

It also happens with pip3.

┌──(aja㉿aja)-[~/Desktop/minecraft java]
└─$ sudo apt install python3-pip
[sudo] password for aja:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package python3-pip is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'python3-pip' has no installation candidate

I am using Kali Linux.

How do I fix this?

How to set up passwordless authentication in a cluster where the users' /home directory from the headnode is mounted on every machine's /home

Posted: 04 Oct 2021 09:01 AM PDT

First of all thank you in advance for your help.

I hope the title makes sense. Basically, the users' home directories on the headnode (e.g. headnode:/home/eric) are NFS-shared and mounted on all the machines' /home directories (e.g. node01:/home/eric). I am trying to set up passwordless SSH connections between all the users on the headnode and all the machines in the cluster. This is what I have done so far, but I don't seem to be able to make it work.

I am running CentOS 7 on the headnode and all the machines in the cluster.

I mounted the headnode's /home on every machine's /home in the cluster. On the headnode, /etc/exports looks like this:

/home    *(rw,sync,no_root_squash,no_all_squash)  

On the headnode for user eric I generated the RSA key.

eric@headnode $: ssh-keygen -t rsa   

With no passphrase.

Then I added the public key to the list of keys allowed to log in to eric's account.

cat id_rsa.pub >> authorized_keys  

I also created a "config" file in /home/eric/.ssh with the following lines.

StrictHostKeyChecking no
UserKnownHostsFile /dev/null

I also edited /etc/ssh/ssh_config to reflect

StrictHostKeyChecking no  

I made sure that the /home/eric/id_rsa.pub key and /home/eric/authorized_keys on the headnode are the same as the /home/eric/id_rsa.pub key and /home/eric/authorized_keys on the machines in the cluster. Which they are, since /home/eric on the headnode is mounted on /home/eric on all machines in the cluster.

I also made sure that the permissions on /home/eric/.ssh on the headnode and the machines in the cluster, and on the files inside .ssh, were appropriate:

~/.ssh/                 700
~/.ssh/authorized_keys  600
~/.ssh/config           600
~/.ssh/id_rsa           600
~/.ssh/id_rsa.pub       644

After all these steps I still cannot establish a password-less ssh connection between the headnode and the machines in the cluster.

Here is the verbose log when I ssh from the headnode to the node in the cluster.

OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
debug1: Reading configuration data /home/eric/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 58: Applying options for *
debug1: Connecting to tq3 [10.112.0.14] port 22.
debug1: Connection established.
debug1: identity file /home/eric/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /home/eric/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/eric/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/eric/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/eric/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/eric/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/eric/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/eric/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
debug1: Authenticating to tq3:22 as 'eric'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: curve25519-sha256 need=64 dh_need=64
debug1: kex: curve25519-sha256 need=64 dh_need=64
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:M8Z5sDopU8J8sEkr9dkAwnIUbhcnLSKZjLfn5RykKA0
Warning: Permanently added 'tq3,10.112.0.14' (ECDSA) to the list of known hosts.
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available (default cache: KEYRING:persistent:1000)

debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available (default cache: KEYRING:persistent:1000)

debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/eric/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Trying private key: /home/eric/.ssh/id_dsa
debug1: Trying private key: /home/eric/.ssh/id_ecdsa
debug1: Trying private key: /home/eric/.ssh/id_ed25519
debug1: Next authentication method: password

Did I miss or misconfigure something?

Thank you all for your help.

Eric
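From the log, the client offers the RSA key (debug1: Offering RSA public key: /home/eric/.ssh/id_rsa) but the server does not accept it and falls through to password authentication. That pattern most often comes from permissions on the target node's home directory or key files. A hedged first check, assuming standard OpenSSH defaults:

```shell
# Run on the target node (tq3). sshd silently ignores authorized_keys when
# the home directory, ~/.ssh, or the file itself is group/world-writable.
mkdir -p "$HOME/.ssh"                   # no-op if it already exists
touch "$HOME/.ssh/authorized_keys"
chmod go-w "$HOME"                      # home must not be writable by others
chmod 700 "$HOME/.ssh"                  # only the owner may enter ~/.ssh
chmod 600 "$HOME/.ssh/authorized_keys"  # only the owner may read/write it
ls -ld "$HOME" "$HOME/.ssh" "$HOME/.ssh/authorized_keys"
```

If the permissions are already correct, the sshd log on the node (journalctl -u sshd, or /var/log/secure on CentOS) usually states exactly why the key was refused.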

Problem putting TP-Link TL-WN722N v2 into monitor mode (Kali Linux)

Posted: 04 Oct 2021 10:07 AM PDT

I cannot put TP-Link TL-WN722N v2 in monitor mode. I've tried this:

sudo apt update
sudo apt install bc
sudo rmmod r8188eu.ko
git clone https://github.com/aircrack-ng/rtl8188eus
cd rtl8188eus
sudo -i
echo "blacklist r8188eu.ko" > "/etc/modprobe.d/realtek.conf"
exit
make
sudo make install
sudo modprobe 8188eu

but when running make I got this error:

root@kali:/home/hizzly/Documents/tmp/rtl8188eu# make
make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/5.6.0-kali2-amd64/build M=/home/hizzly/Documents/tmp/rtl8188eu  modules
make[1]: *** /lib/modules/5.6.0-kali2-amd64/build: No such file or directory.  Stop.
make: *** [Makefile:155: modules] Error 2

How should I fix this issue? I've tried almost everything from the forums, but nothing has worked. I'd be very glad if someone could help.
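For what it's worth, the first failure (/lib/modules/5.6.0-kali2-amd64/build: No such file or directory) usually just means the kernel headers for the running kernel are not installed, so there is no tree to build the module against. A sketch of the usual check and fix on Kali/Debian, assuming the header package follows the standard linux-headers-$(uname -r) naming:

```shell
# Kernel-module builds need the headers for the *running* kernel.
# If the build tree is missing, install them first (Kali/Debian):
#   sudo apt update
#   sudo apt install -y linux-headers-"$(uname -r)"
# Check whether the build tree make is looking for actually exists:
if [ -e "/lib/modules/$(uname -r)/build" ]; then
    echo "headers present for kernel $(uname -r)"
else
    echo "headers missing for kernel $(uname -r)"
fi
```

Only after that directory exists is it worth re-running make in the driver tree.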


@alecxs I used these commands:

[Download and Installation]
01] # apt update && apt upgrade
02] # apt install -y bc linux-headers-amd64
03] # git clone https://github.com/kimocoder/rtl8188eus
    (or direct download from: https://github.com/kimocoder/rtl8188e...)
04] # cd rtl8188eus
05] # cp realtek_blacklist.conf /etc/modprobe.d
06] # make

This time the build started, but I got a different kind of error:

/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:2204:10: error: implicit declaration of function 'get_ds'; did you mean 'get_da'? [-Werror=implicit-function-declaration]
 2204 |   set_fs(get_ds());
      |          ^~~~~~
      |          get_da
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:2204:10: error: incompatible type for argument 1 of 'set_fs'
 2204 |   set_fs(get_ds());
      |          ^~~~~~~~
      |          |
      |          int
In file included from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/uaccess.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/task.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/signal.h:9,
                 from /home/hizzly/temp/rtl8188eus/include/osdep_service.h:47,
                 from /home/hizzly/temp/rtl8188eus/include/drv_types.h:27,
                 from /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:19:
/usr/src/linux-headers-5.6.0-kali2-common/arch/x86/include/asm/uaccess.h:29:40: note: expected 'mm_segment_t' {aka 'struct <anonymous>'} but argument is of type 'int'
   29 | static inline void set_fs(mm_segment_t fs)
      |                           ~~~~~~~~~~~~~^~
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c: In function 'retriveFromFile':
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:2242:11: error: incompatible type for argument 1 of 'set_fs'
 2242 |    set_fs(get_ds());
      |           ^~~~~~~~
      |           |
      |           int
In file included from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/uaccess.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/task.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/signal.h:9,
                 from /home/hizzly/temp/rtl8188eus/include/osdep_service.h:47,
                 from /home/hizzly/temp/rtl8188eus/include/drv_types.h:27,
                 from /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:19:
/usr/src/linux-headers-5.6.0-kali2-common/arch/x86/include/asm/uaccess.h:29:40: note: expected 'mm_segment_t' {aka 'struct <anonymous>'} but argument is of type 'int'
   29 | static inline void set_fs(mm_segment_t fs)
      |                           ~~~~~~~~~~~~~^~
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c: In function 'storeToFile':
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:2277:11: error: incompatible type for argument 1 of 'set_fs'
 2277 |    set_fs(get_ds());
      |           ^~~~~~~~
      |           |
      |           int
In file included from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/uaccess.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/task.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/signal.h:9,
                 from /home/hizzly/temp/rtl8188eus/include/osdep_service.h:47,
                 from /home/hizzly/temp/rtl8188eus/include/drv_types.h:27,
                 from /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:19:
/usr/src/linux-headers-5.6.0-kali2-common/arch/x86/include/asm/uaccess.h:29:40: note: expected 'mm_segment_t' {aka 'struct <anonymous>'} but argument is of type 'int'
   29 | static inline void set_fs(mm_segment_t fs)
      |                           ~~~~~~~~~~~~~^~
cc1: some warnings being treated as errors
make[3]: *** [/usr/src/linux-headers-5.6.0-kali2-common/scripts/Makefile.build:273: /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.o] Error 1
make[2]: *** [/usr/src/linux-headers-5.6.0-kali2-common/Makefile:1704: /home/hizzly/temp/rtl8188eus] Error 2
make[1]: *** [/usr/src/linux-headers-5.6.0-kali2-common/Makefile:180: sub-make] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-5.6.0-kali2-amd64'
make: *** [Makefile:2286: modules] Error 2
root@kali:/home/hizzly/temp/rtl8188eus# make install
install -p -m 644 8188eu.ko  /lib/modules/5.6.0-kali2-amd64/kernel/drivers/net/wireless/
install: cannot stat '8188eu.ko': No such file or directory
make: *** [Makefile:2292: install] Error 1

After running CFLAGS="$CFLAGS -Wno-error" make, the same errors came back:

root@kali:/home/hizzly/temp/rtl8188eus# CFLAGS="$CFLAGS -Wno-error" make
make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/5.6.0-kali2-amd64/build M=/home/hizzly/temp/rtl8188eus  modules
make[1]: Entering directory '/usr/src/linux-headers-5.6.0-kali2-amd64'
  CC [M]  /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.o
In file included from /home/hizzly/temp/rtl8188eus/include/drv_types.h:30,
                 from /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:19:
/home/hizzly/temp/rtl8188eus/include/wifi.h:970: warning: "IEEE80211_MAX_AMPDU_BUF" redefined
  970 | #define IEEE80211_MAX_AMPDU_BUF 0x40
      |
In file included from /home/hizzly/temp/rtl8188eus/include/osdep_service_linux.h:83,
                 from /home/hizzly/temp/rtl8188eus/include/osdep_service.h:50,
                 from /home/hizzly/temp/rtl8188eus/include/drv_types.h:27,
                 from /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:19:
/usr/src/linux-headers-5.6.0-kali2-common/include/linux/ieee80211.h:1460: note: this is the location of the previous definition
 1460 | #define IEEE80211_MAX_AMPDU_BUF  0x100
      |
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c: In function 'isFileReadable':
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:2204:10: error: implicit declaration of function 'get_ds'; did you mean 'get_da'? [-Werror=implicit-function-declaration]
 2204 |   set_fs(get_ds());
      |          ^~~~~~
      |          get_da
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:2204:10: error: incompatible type for argument 1 of 'set_fs'
 2204 |   set_fs(get_ds());
      |          ^~~~~~~~
      |          |
      |          int
In file included from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/uaccess.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/task.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/signal.h:9,
                 from /home/hizzly/temp/rtl8188eus/include/osdep_service.h:47,
                 from /home/hizzly/temp/rtl8188eus/include/drv_types.h:27,
                 from /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:19:
/usr/src/linux-headers-5.6.0-kali2-common/arch/x86/include/asm/uaccess.h:29:40: note: expected 'mm_segment_t' {aka 'struct <anonymous>'} but argument is of type 'int'
   29 | static inline void set_fs(mm_segment_t fs)
      |                           ~~~~~~~~~~~~~^~
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c: In function 'retriveFromFile':
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:2242:11: error: incompatible type for argument 1 of 'set_fs'
 2242 |    set_fs(get_ds());
      |           ^~~~~~~~
      |           |
      |           int
In file included from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/uaccess.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/task.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/signal.h:9,
                 from /home/hizzly/temp/rtl8188eus/include/osdep_service.h:47,
                 from /home/hizzly/temp/rtl8188eus/include/drv_types.h:27,
                 from /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:19:
/usr/src/linux-headers-5.6.0-kali2-common/arch/x86/include/asm/uaccess.h:29:40: note: expected 'mm_segment_t' {aka 'struct <anonymous>'} but argument is of type 'int'
   29 | static inline void set_fs(mm_segment_t fs)
      |                           ~~~~~~~~~~~~~^~
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c: In function 'storeToFile':
/home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:2277:11: error: incompatible type for argument 1 of 'set_fs'
 2277 |    set_fs(get_ds());
      |           ^~~~~~~~
      |           |
      |           int
In file included from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/uaccess.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/task.h:11,
                 from /usr/src/linux-headers-5.6.0-kali2-common/include/linux/sched/signal.h:9,
                 from /home/hizzly/temp/rtl8188eus/include/osdep_service.h:47,
                 from /home/hizzly/temp/rtl8188eus/include/drv_types.h:27,
                 from /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.c:19:
/usr/src/linux-headers-5.6.0-kali2-common/arch/x86/include/asm/uaccess.h:29:40: note: expected 'mm_segment_t' {aka 'struct <anonymous>'} but argument is of type 'int'
   29 | static inline void set_fs(mm_segment_t fs)
      |                           ~~~~~~~~~~~~~^~
cc1: some warnings being treated as errors
make[3]: *** [/usr/src/linux-headers-5.6.0-kali2-common/scripts/Makefile.build:273: /home/hizzly/temp/rtl8188eus/os_dep/osdep_service.o] Error 1
make[2]: *** [/usr/src/linux-headers-5.6.0-kali2-common/Makefile:1704: /home/hizzly/temp/rtl8188eus] Error 2
make[1]: *** [/usr/src/linux-headers-5.6.0-kali2-common/Makefile:180: sub-make] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-5.6.0-kali2-amd64'
make: *** [Makefile:2286: modules] Error 2

Stop systemd user services as root user

Posted: 04 Oct 2021 08:54 AM PDT

I have a systemd template unit A@.service that every user on the server can start with systemctl --user start A@1.service.

As root I would like to stop that service for all users when we do maintenance.

I haven't found a way in the systemd man pages, not even via Conflicts=. Is there a way to stop user services as root?
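One approach, sketched below under the assumption of a reasonably recent systemd (the --machine=USER@ syntax for reaching another user's service manager needs roughly v248 or later); A@1.service is the instance name from the question:

```shell
# As root: enumerate logged-in users with loginctl and ask each user's own
# service manager to stop the instance. "|| true" keeps the loop going for
# users who never started it.
for user in $(loginctl list-users --no-legend | awk '{print $2}'); do
    systemctl --machine="${user}@" --user stop 'A@1.service' || true
done
```

A blunter alternative for maintenance windows is stopping each per-user manager outright with systemctl stop user@UID.service, which takes down all of that user's units, not just A@1.service.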

What do the "buff/cache" and "avail mem" fields in top mean?

Posted: 04 Oct 2021 08:16 AM PDT

Within the output of top, there are two fields, marked "buff/cache" and "avail Mem" in the memory and swap usage lines:

(screenshot of top output omitted)

What do these two fields mean?

I've tried Googling them, but the results only bring up generic articles on top, and they don't explain what these fields signify.
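In short (a hedged summary of the procps-ng top man page): "buff/cache" is memory the kernel is currently using for block-device buffers, the page cache, and reclaimable slab, while "avail Mem" is the kernel's MemAvailable estimate of how much memory new applications could use without pushing the system into swap. Both are derived from /proc/meminfo, which you can inspect directly:

```shell
# top's "buff/cache" is derived from Buffers + Cached + SReclaimable,
# and "avail Mem" from MemAvailable (all reported in kB in /proc/meminfo).
grep -E '^(MemTotal|MemAvailable|Buffers|Cached|SReclaimable):' /proc/meminfo
```

Because most of buff/cache can be dropped on demand, "avail Mem" is usually much larger than "free", which is why low "free" alone is not a sign of memory pressure.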

/sbin/ldconfig.real: /usr/local/lib is not a known library type

Posted: 04 Oct 2021 10:03 AM PDT

I was following the instructions on this site to install tesseract: https://github.com/tesseract-ocr/tesseract/wiki/Compiling

git clone https://github.com/tesseract-ocr/tesseract.git
cd tesseract
./autogen.sh
./configure
make
sudo make install
sudo ldconfig

But there is a problem with the last command, and I got these error messages when I ran ldconfig:

/sbin/ldconfig.real: /usr/local/lib is not a known library type
/sbin/ldconfig.real: /usr/local/lib/pkgconfig is not a known library type

What does that error mean, and how can I fix it?

This is the content of /etc/ld.so.conf.d/libc.conf :

# libc default configuration
/usr/local/lib

How can I create a virtual serial port that relays data over ssh?

Posted: 04 Oct 2021 09:01 AM PDT

I have a serial port on a remote linux box which I can access over ssh.

I would like to create a file (not a real file, maybe a device file or unix domain socket?) which when written to writes to a remote serial port over ssh, and the reverse for reads.

I think it would be sufficient to have a command that creates a file, makes the command's STDIN readable through that file, and turns writes to the file into data on the command's STDOUT stream. Then I could use it like this:

ssh user@host "cat /dev/ttyREAL" | <some_command> /dev/ttyFAKE | ssh user@host "tee /dev/ttyREAL"  

Is there such a command, or am I going about it the wrong way?
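One way to get there (a sketch, not the only route) is socat, which implements exactly this kind of relay: a local pseudo-terminal whose reads and writes are piped through a command. Here user@host, /dev/ttyREAL, and the ttyFAKE link name are taken from the question, and socat must be installed on both ends:

```shell
# Create a PTY symlinked at ~/ttyFAKE; socat shuttles its data to/from an
# ssh session running a second socat attached to the remote serial port.
# raw,echo=0 keeps either side from mangling binary data. The link lives
# under $HOME because creating /dev/ttyFAKE would need root. "|| true"
# only tolerates a clean exit when the ssh link drops.
socat PTY,link="$HOME/ttyFAKE",raw,echo=0 \
    "EXEC:ssh user@host socat - /dev/ttyREAL\,raw\,echo=0" || true
```

Any serial-aware program can then open the link like a real port, e.g. screen "$HOME/ttyFAKE" 115200. Note that baud rate and other line settings must be configured on the remote side (stty on /dev/ttyREAL), since the local PTY has no physical line.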

The best way to expand glob pattern?

Posted: 04 Oct 2021 08:31 AM PDT

I need to expand a glob pattern (like ../smth*/*, or /etc/cron*/) into a list of files, programmatically. What would be the best way to do it?
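A portable baseline (sketch): let the shell itself expand the pattern and capture the result in the positional parameters; in bash, an array plus nullglob is the more comfortable variant.

```shell
# POSIX sh: the glob expands when used unquoted as an argument to "set --",
# after which "$@" holds one matched path per parameter.
set -- /etc/cron*/
for path in "$@"; do
    printf '%s\n' "$path"
done

# bash alternative:
#   shopt -s nullglob      # unmatched patterns expand to nothing
#   files=( ../smth*/* )   # array of matched paths
```

One caveat: if nothing matches, plain sh leaves the pattern itself in "$@", so check with [ -e "$1" ] before trusting the list; bash's nullglob avoids this.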
