Saturday, November 27, 2021

Recent Questions - Unix & Linux Stack Exchange



Am I using Cron / Crontab Correctly in Pop!_OS / Ubuntu?

Posted: 27 Nov 2021 05:51 PM PST

I'm attempting to use dwall (Dynamic Wallpaper) with cron to schedule an hourly wallpaper change on Pop!_OS 21.04, but for whatever reason cron.service isn't executing my crontab file (comment lines excluded):

0 * * * * env PATH=/home/m/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin DISPLAY=:1 DESKTOP_SESSION=pop DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus" /usr/bin/dwall -p -s beach  

When I check to see if cron.service is running (systemctl status cron.service), this is the output I get:

cron.service - Regular background program processing daemon
     Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2021-11-27 17:34:26 EST; 3h 6min ago
       Docs: man:cron(8)
   Main PID: 980 (cron)
      Tasks: 1 (limit: 38305)
     Memory: 17.4M
     CGroup: /system.slice/cron.service
             └─980 /usr/sbin/cron -f -P

I confirmed that dwall is working as expected in a terminal, using either /usr/bin/dwall -p -s beach or dwall -p -s beach.
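For what it's worth, a minimal way to separate "cron isn't firing" from "the job's environment is wrong" (a debugging sketch, not part of the original post; the log path is arbitrary) is a one-line test entry:

* * * * * /usr/bin/date >> /tmp/cron-test.log 2>&1

If /tmp/cron-test.log grows every minute, cron.service is executing the crontab, and the problem is more likely in the dwall job's environment than in cron itself.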

When storing text on a USB drive, how do I make cat not hang?

Posted: 27 Nov 2021 05:53 PM PST

Warning: I used these commands on a drive that had nothing on it (/dev/sdb). Do not attempt this on a drive with anything important on it.

I was experimenting some, and I discovered that the following works:

$ printf 'hi\n' | sudo tee /dev/sdb
hi
$ sudo head -n 1 /dev/sdb
hi
$

Neat. Here's where I'm confused. I tried it again with cat (the first command is the same; I replaced the second one with sudo cat /dev/sdb). It printed hi, followed by a newline, and hung. Ctrl + C didn't stop it. Bummer.

I reasoned that perhaps cat wanted a null (\0) character at the end. So I tried again (printf 'hi\n\0' | sudo tee /dev/sdb), and head worked as before, but cat still hung. How can I get cat to not hang when writing directly to a USB drive? I'm not asking if this is a good idea (it isn't). I'm well aware I could just format the drive and use a text file, but I'm curious why this didn't work as expected.

I'm using Debian 11, with a 2 GB flash drive (/dev/sdb).
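A note that may frame the behavior: on a raw block device, cat has no notion of "end of data"; it keeps reading until the end of the entire 2 GB device, which merely looks like a hang. A hedged sketch of reading back a fixed number of bytes instead:

printf 'hi\n' | sudo tee /dev/sdb > /dev/null
sudo head -c 3 /dev/sdb    # read exactly 3 bytes ("hi" plus newline), then exit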

Pop!_OS GRUB can't find Windows

Posted: 27 Nov 2021 05:12 PM PST

I know this question has been answered before... but I'm so new that I don't even know how to do it :( After installing Linux and giving it a try, I found that I couldn't boot back to Windows and was stuck in Linux... which isn't really bad, but I still need Windows. I tried os-prober and boot-repair.

lsblk output:

NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda             8:0    0 119.2G  0 disk
├─sda1          8:1    0   499M  0 part
├─sda2          8:2    0    16M  0 part
├─sda3          8:3    0 118.2G  0 part  /media/trollcraft1002/6EF006E2F006AFFF
├─sda4          8:4    0   565M  0 part
└─sda5          8:5    0     1M  0 part
sdb             8:16   0 465.8G  0 disk
├─sdb1          8:17   0 461.8G  0 part  /
└─sdb2          8:18   0     4G  0 part
  └─cryptswap 253:0    0     4G  0 crypt [SWAP]
sdc             8:32   0 931.5G  0 disk
└─sdc1          8:33   0 927.3G  0 part

sda3 is the Windows partition; sdb is the Linux disk.

OS: Pop!_OS 21.04
Windows 10: 20H2

sudo parted -l

Model: ATA SAMSUNG SSD 830 (scsi)
Disk /dev/sda: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size    File system  Name                          Flags
 1      1049kB  524MB  523MB   ntfs         Basic data partition          hidden, diag
 2      524MB   541MB  16.8MB               Microsoft reserved partition  msftres
 3      541MB   127GB  127GB   ntfs         Basic data partition          msftdata
 4      127GB   128GB  592MB   ntfs                                       hidden, diag
 5      128GB   128GB  1049kB                                             bios_grub

Model: ATA Hitachi HDS72105 (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End    Size    Type     File system     Flags
 1      2097kB  496GB  496GB   primary  ext4
 2      496GB   500GB  4295MB  primary  linux-swap(v1)

Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sdc: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name                  Flags
 1      16.8MB  996GB  996GB  ntfs         Basic data partition  msftdata

Model: Linux device-mapper (crypt) (dm)
Disk /dev/mapper/cryptswap: 4294MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  4294MB  4294MB  linux-swap(v1)
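Not part of the original post, but the usual detection sequence on Debian/Ubuntu-family systems (a sketch assuming a GRUB-based install, which the bios_grub flag on sda5 suggests) is:

sudo os-prober      # should list the Windows boot loader found on /dev/sda
sudo update-grub    # regenerates grub.cfg with any detected entries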

Environment Variables defined in /etc/environment only show for root, not home user

Posted: 27 Nov 2021 05:06 PM PST

I'm fairly new to Linux and am setting up a server running Ubuntu 20.04 LTS. I've defined a few environment variables in /etc/environment for use in my docker-compose.yml file. I added the variables to the /etc/environment file, formatted as shown below.

PUID=XXXX
PGID=XXX
TZ="America/REGION"
USERDIR="/home/USER_NAME"

However, it seems that only the root user, and not the home user, can see these variables. When I use printenv as the home user, these variables don't show up; they do show up using sudo or as the root user. I thought that an environment variable defined in /etc/environment should be accessible to all users, but it seems that this isn't the case or that something is going wrong. This causes problems when trying to access these environment variables in my docker-compose.yml file, as on startup it can't access them. I'm not sure how to get these variables to show up for users other than root. I'd appreciate any help!
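One background detail that may be relevant: /etc/environment is parsed by pam_env(8) at login time, so only sessions started after the file was edited see the new variables. A hedged sanity check (USER_NAME is a placeholder):

# start a brand-new login session for the user and query one of the variables
su - USER_NAME -c 'printenv PUID'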

Expose local dev server externally with SSH Tunneling / Port Forwarding through Raspberry Pi

Posted: 27 Nov 2021 04:51 PM PST

I'm trying to expose my local dev server externally so that my web app can be tested by users who are not on my local network. Users who test the web app must run the apps with localhost:3000 and localhost:8080 in their browsers, as those are whitelisted domains. I thought this would be a perfect application for my Raspberry Pi to act as a proxy using SSH tunneling. This is my setup:

[setup diagram]

Local port forwarding (i.e. ssh -L) is working well, but I can't get remote port forwarding (i.e. ssh -R) to work. I'm not able to load localhost:3000 or localhost:8080 from either the Client or the Raspberry Pi. I even enabled GatewayPorts yes on my Raspberry Pi as instructed here: https://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html. The fact that those URLs won't load on my Raspberry Pi implies to me that this problem is between the Raspberry Pi and my Dev Server. How can I get this proxy working?
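For reference, a remote-forward invocation of the kind described might look like this (hostnames are placeholders; the explicit bind address is an assumption), run from the dev server:

# publish port 3000 on the Pi, forwarding connections back to this machine
ssh -R 0.0.0.0:3000:localhost:3000 pi@raspberrypi

sshd on the Pi only honors a non-loopback bind address like this when GatewayPorts is set to yes or clientspecified.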

Pop!_OS 21.04 is randomly crashing every time

Posted: 27 Nov 2021 03:53 PM PST

Every time I start my computer and do stuff, my Pop!_OS just freezes at some random point.

FreeBSD serial port lock files

Posted: 27 Nov 2021 03:05 PM PST

Does anyone know when in FreeBSD's boot process /dev/ttyu*.init and /dev/ttyu*.lock files are created?

In other words, are they created by the OS within its boot process, or are they created when I try to configure and then open a /dev/ttyu* serial port?

Note: I am using four serial ports on my machines (Nexcom 6100 and Nexcom 6210) as an instrumentation interface, not a terminal interface.

FreeBSD 13.0-RELEASE-p4 amd64

Thanks!

Can I force Conda to use system packages when available?

Posted: 27 Nov 2021 03:02 PM PST

I am looking to tidy up the build and runtime environment requirements for a scientific software package that has been worked on by grad students over the last 10 years and now has a conda environment from hell. As part of overhauling the environment, I am interested in whether I can force conda to check for system packages first and use those, so that I can rely on the system repositories and package manager to keep up with all the routine stuff, instead of having conda download duplicates of everything and having to keep up with those updates as well.

Adding a VLAN-transparent bridge to Linux (Debian)

Posted: 27 Nov 2021 03:21 PM PST

On my current Debian system I use VLAN tagging and I create bridges br-wan and br-lan that I use for LXC containers:

auto lo br-lan br-wan

iface lo inet loopback

iface br-wan inet manual
    bridge_ports eth0.3
    bridge_maxwait 0

iface br-lan inet static
    address 192.168.200.10
    netmask 255.255.255.0
    gateway 192.168.200.1
    bridge_ports eth0.2
    bridge_maxwait 0

iface eth0.1 inet static
    address 10.7.1.10
    netmask 255.255.255.0

Now I would like to add a KVM guest that gets transparent access to eth0 and in turn uses VLAN tagging internally (for example, to create a virtualized router routing between eth0.2 and eth0.3). To test, I did the following:

brctl addbr br-master
brctl addif br-master eth0

And then I created a KVM guest with --network=bridge=br-master,model=virtio, which creates an interface vnet0 that is also added as a bridge port to br-master. Inside the guest, I added the VLAN interfaces. On the host, I see tagged packets on vnet0 (tcpdump -i vnet0 -e vlan) and on br-master (tcpdump -i br-master -e vlan). However, the traffic never reaches eth0 on the host.

I am sure the story is not that simple. How can I add such a VLAN-transparent "master bridge" to my system, without disrupting my old config?
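For comparison, a persistent version of the test bridge in the same /etc/network/interfaces style as the config above (a sketch, not a verified fix) would be:

auto br-master
iface br-master inet manual
    bridge_ports eth0
    bridge_maxwait 0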

Calcurse can't sync with Nextcloud

Posted: 27 Nov 2021 01:31 PM PST

Hello, I've recently set up a Nextcloud instance with a calendar and wanted to sync my calcurse with it.

This is my calcurse-caldav config (password, username and hostname obviously changed):

[General]
Binary = calcurse
Hostname = nextcloud.example.com/nextcloud
Path = remote.php/dav/
AuthMethod = basic
HTTPS = Yes
DryRun = No
Verbose = Yes

[Auth]
Username = BestUsername
Password = SuperSecretPassword

I get the following error though:

╰─$ calcurse-caldav --init keep-remote
Connecting to nextcloud.example.com/nextcloud...
Removing all local calcurse objects...
error: The server at nextcloud.example.com/nextcloud replied with HTTP status
error: code 415 (Unsupported Media Type) while trying to access
error: https://nextcloud.example.com/nextcloud/remote.php/dav/.

I strongly suspect that there is something wrong with my config, but I can't understand what, as this config worked perfectly with Calibre.

Restore removed files from LUKS-encrypted disk

Posted: 27 Nov 2021 01:48 PM PST

The problem

I erroneously removed several files from my /home/username with rm. I realized the mistake as soon as I hit Enter, but the damage was done. I immediately created a full disk image with sudo dd if=/dev/sda of=/media/username/external_drive/image.iso, copied it to another PC, and prepared to follow a very long path towards data recovery. And then I realized I had no idea where to start.

What I did

I read some guides online and eventually extundelete /dev/partition_to_recover_from --restore-directory /path/to/restore came up as the most promising solution, so I tried it.

The first problem I encountered was that I had encrypted my drive with LUKS (during OS install) and had to decrypt it. After some more research, I prepared the partition with the following commands (here I have replaced the real volume group name, <my_pc_name>-vg, with pc-vg).

$ sudo kpartx -a -v image.iso # map the disk image partitions
add map loop0p1 (254:0): 0 997376 linear 7:0 2048
add map loop0p2 (254:1): 0 2 linear 7:0 1001470
add map loop0p5 (254:2): 0 975769600 linear 7:0 1001472

$ sudo cryptsetup luksOpen /dev/mapper/loop0p5 img # unlock the partition with my data
Enter passphrase for /dev/mapper/loop0p5:

$ lsblk
NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0                   7:0    0 465,8G  0 loop
├─loop0p1             254:0    0   487M  0 part
├─loop0p2             254:1    0     1K  0 part
└─loop0p5             254:2    0 465,3G  0 part
  └─img               254:3    0 465,3G  0 crypt
    ├─pc--vg-root   254:4    0 464,3G  0 lvm
    └─pc--vg-swap_1 254:5    0   980M  0 lvm
[...omitting other lsblk output...]

$ sudo vgchange -a y pc-vg
  2 logical volume(s) in volume group "pc-vg" now active

and then tried to recover with

$ sudo extundelete /dev/mapper/pc--vg-root --restore-directory /home/username/path/to/restore
NOTICE: Extended attributes are not restored.
WARNING: EXT3_FEATURE_INCOMPAT_RECOVER is set.
The partition should be unmounted to undelete any files without further data loss.
If the partition is not currently mounted, this message indicates
it was improperly unmounted, and you should run fsck before continuing.
If you decide to continue, extundelete may overwrite some of the deleted
files and make recovering those files impossible.  You should unmount the
file system and check it with fsck before using extundelete.
Would you like to continue? (y/n)

However, the partition was not mounted, and df confirmed that. Also, sudo fsck -N only wanted to operate on /dev/sdaX. Being in doubt, I rebooted the system and repeated the above steps. I received exactly the same output, and considering that I was working on a copy of the original disk image (so I had a backup to use in case of data loss), this time I answered y. The result was:

$ sudo extundelete /dev/mapper/pc--vg-root --restore-directory /home/username/path/to/restore
NOTICE: Extended attributes are not restored.
WARNING: EXT3_FEATURE_INCOMPAT_RECOVER is set.
The partition should be unmounted to undelete any files without further data loss.
If the partition is not currently mounted, this message indicates
it was improperly unmounted, and you should run fsck before continuing.
If you decide to continue, extundelete may overwrite some of the deleted
files and make recovering those files impossible.  You should unmount the
file system and check it with fsck before using extundelete.
Would you like to continue? (y/n) y
Loading filesystem metadata ... extundelete: Extended attribute has an invalid value length when trying to examine filesystem

I did do other research, but I couldn't understand what that means.

The questions

I'll try to avoid the XY problem.

Is the method I used to try to recover my data correct? If so, what is extundelete complaining about, and how can I resolve it? If not, how can I (try to) restore my data from the LUKS-encrypted disk in Debian?

If any additional info is required, please ask for it.

P. S.: «Restore from your recent backup you obviously have» is the correct answer, I know =).

I do have a full backup of my home taken a couple of days before the data loss (not such a n00b), but I lost the product of more than twenty hours of work and I would like to have it back.

Failed to bind the blox,neo-6m driver to UART1 on BeagleBone Black Wireless

Posted: 27 Nov 2021 02:42 PM PST

I'm trying to bind the GNSS driver (u-blox NEO-6M) to my sensor mapped on UART1.

Source of the driver (kernel 5.10.23): https://elixir.bootlin.com/linux/v5.10.23/source/drivers/gnss

Source of the DTS I'm using: https://elixir.bootlin.com/linux/v5.10.23/source/arch/arm/boot/dts/am335x-boneblack-wireless.dts

I patched the device tree, adding:

uart1_pins: pinmux_uart1_pins {
    pinctrl-single,pins = < 0x180 0x30 0x00 0x184 0x00 0x00 >;
};

and

&uart1 {
    status = "okay";
    pinctrl-names = "default";
    pinctrl-0 = <&uart1_pins>;

    gps@0 {
        compatible = "blox,neo-6m";
        vcc-supply = <0>;
        current-speed = <115200>;
    };
};

But after booting, I don't see any GNSS device under /dev.

Thanks

elif not working

Posted: 27 Nov 2021 02:31 PM PST

I am running a Cardano node which includes an automint solution. I'm rewriting a script and I got it working, but there is a problem with one particular part of it. This elif should check whether there is an incoming transaction of 8000000 and whether there is a metadata file in each of the folders metadata3-1 and metadata3. If the transaction of 8000000 is OK and there is a file in both folders, it should go to the "then" part of the script; if there is no file in one of the folders, or the transaction is higher or lower, it should go to the "else" part. For some reason it's not doing that. Can somebody help me with that?

elif [ ${utxo_balance} == 8000000 ] || [ $(ls "metadata3-1/" | wc -l) >= 1 ] || [ $(ls "metadata3/" | wc -l) >= 1 ] ;
then
    ###1 NFT version 1+2###
    echo "Sending NFT..." >> $log
    numberCompleted=$(( numberCompleted+1 ))
    POLICYID=$(cardano-cli transaction policyid --script-file $scriptPath)
    metadata_file=$(ls metadata3/ | sort -R | tail -1)
    metadata_file2=$(ls metadata3-1/ | sort -R | tail -1)
    name=$(echo ${metadata_file} | awk '{ print substr( $0, 1, length($0)-5 ) }')
    name2=$(echo ${metadata_file2} | awk '{ print substr( $0, 1, length($0)-5 ) }')
    head -n -3 ./metadata3/${metadata_file} > ./metadata3/metatemp.json ; mv ./metadata3/metatemp.json ./metadata3/${metadata_file}
    tail -n +4 ./metadata3-1/${metadata_file2} > ./metadata3-1/metatemp2.json ; mv ./metadata3-1/metatemp2.json ./metadata3-1/${metadata_file2}
    cat ./tempfiles/policy.json ./metadata3/${metadata_file} ./tempfiles/komma.json ./metadata3-1/${metadata_file2} > ./tempfiles/meta.json
    amountToSendUser=2200000
    amountToDonate=2800000
    amountToSendProfit=3000000
    currentSlot=$(cardano-cli query tip --mainnet | jq -r '.slot')
    cardano-cli transaction build-raw \
        --fee 0 \
        ${tx_in} \
        --tx-out ${in_addr}+${amountToSendUser}+"1 $POLICYID.${name}"+"1 $POLICYID.${name2}" \
        --tx-out ${profitAddr}+${amountToSendProfit} \
        --tx-out ${donationAddr}+${amountToDonate} \
        --mint="1 $POLICYID.${name}"+"1 $POLICYID.${name2}" \
        --minting-script-file $scriptPath \
        --metadata-json-file ./tempfiles/meta.json \
        --invalid-hereafter $(( ${currentSlot} + 10000)) \
        --out-file tx3.tmp >> $log
    fee=$(cardano-cli transaction calculate-min-fee \
        --tx-body-file tx3.tmp \
        --tx-in-count 1 \
        --tx-out-count 3 \
        --mainnet \
        --witness-count 2 \
        --byron-witness-count 0 \
        --protocol-params-file protocol3.json | awk '{ print $1 }') >> $log
    fee=${fee%" Lovelace"}
    amountToSendUser=$((${amountToSendUser} - ${fee}))
    cardano-cli transaction build-raw \
        --fee ${fee} \
        ${tx_in} \
        --tx-out ${in_addr}+${amountToSendUser}+"1 $POLICYID.${name}"+"1 $POLICYID.${name2}" \
        --tx-out ${profitAddr}+${amountToSendProfit} \
        --tx-out ${donationAddr}+${amountToDonate} \
        --mint="1 $POLICYID.${name}"+"1 $POLICYID.${name2}" \
        --minting-script-file $scriptPath \
        --metadata-json-file ./tempfiles/meta.json \
        --invalid-hereafter $(( ${currentSlot} + 10000)) \
        --out-file tx3.raw >> $log
    cardano-cli transaction sign \
        --signing-key-file $paymentSignKeyPath \
        --signing-key-file $policySignKeyPath \
        --tx-body-file tx3.raw \
        --out-file tx3.signed \
        --mainnet >> $log
    cardano-cli transaction submit --tx-file tx3.signed --mainnet >> $log
    rm ./metadata3/${metadata_file}
    rm ./metadata3-1/${metadata_file2}
else
    echo ${utxo_balance} >> $log
    echo "Refund Initiated..." >> $log
    currentSlot=$(cardano-cli query tip --mainnet | jq -r '.slot')
    cardano-cli transaction build-raw \
        --fee 0 \
        ${tx_in} \
        --tx-out ${in_addr}+${utxo_balance} \
        --invalid-hereafter $(( ${currentSlot} + 1000)) \
        --out-file tx3.tmp >> $log
    fee=$(cardano-cli transaction calculate-min-fee \
        --tx-body-file tx3.tmp \
        --tx-in-count 1 \
        --tx-out-count 1 \
        --mainnet \
        --witness-count 1 \
        --byron-witness-count 0 \
        --protocol-params-file protocol3.json | awk '{ print $1 }') >> $log
    fee=${fee%" Lovelace"}
    amountToSendUser=$(( ${utxo_balance}-${fee} ))
    echo ${amountToSendUser} >> $log
    cardano-cli transaction build-raw \
        --fee ${fee} \
        ${tx_in} \
        --tx-out ${in_addr}+${amountToSendUser} \
        --invalid-hereafter $(( ${currentSlot} + 1000)) \
        --out-file tx3.raw >> $log
    cardano-cli transaction sign \
        --signing-key-file $paymentSignKeyPath \
        --tx-body-file tx3.raw \
        --out-file tx3.signed \
        --mainnet >> $log
    cardano-cli transaction submit --tx-file tx3.signed --mainnet >> $log
fi
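As background on the condition itself: inside [ ], == compares strings and >= is not a test operator at all (the > is taken as a redirection), and the prose above describes an AND of three conditions rather than an OR. A hedged sketch of the condition as described (paths as in the post, not necessarily the whole fix):

elif [ "${utxo_balance}" -eq 8000000 ] \
     && [ "$(ls metadata3-1/ | wc -l)" -ge 1 ] \
     && [ "$(ls metadata3/ | wc -l)" -ge 1 ]; then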

Where can I find the source code of Void Linux?

Posted: 27 Nov 2021 12:08 PM PST

Why do I need to know where the source code is?

I found a bug in the config file of the service named wpa_supplicant that shows up when the root user's shell is something other than bash, such as sh or zsh:

$ cat /etc/passwd
root:x:0:0:root:/root:/bin/zsh

The error is in this file: /etc/sv/wpa_supplicant/auto

  • Because in bash, we can search two directories like this:

    for f in /etc/wpa_supplicant/wpa_supplicant-*.conf /etc/wpa_supplicant-*.conf ; do
        #....
    done

but we can't do it in another shell

  • My PR would change this for-loop or add a bash shebang,

but I can't find the Void Linux source on GitHub.

SCHED_BATCH, SCHED_IDLE and SCHED_OTHER controversy

Posted: 27 Nov 2021 12:38 PM PST

There are several scheduling options in Linux, which can be set for a process with the help of the chrt command-line tool. And I can't seem to grasp one of them...

SCHED_FIFO and SCHED_RR are basically real-time scheduling policies with slight differences; mostly no questions here, they always run before any other processes.

SCHED_OTHER is the default policy, relying on nice values of per process to assign priorities.

SCHED_DEADLINE - I haven't completely understood this one; it seems to be actually closer to SCHED_FIFO, but with set execution timers, and it may actually preempt SCHED_FIFO/SCHED_RR tasks when those timers are reached.

SCHED_IDLE - basically "schedule only if the CPU would otherwise be idling", no questions here.

And now there is SCHED_BATCH, which I found to be quite controversial. On one hand, the original implementation notes back from 2002 say it's basically the same as SCHED_IDLE for user code and SCHED_OTHER for kernel code (to avoid kernel resource lock-up due to a process never being scheduled). The man pages, though, say it's basically the same as SCHED_OTHER, but with a small penalty because the process is considered CPU-bound.

Now, these two SCHED_BATCH explanations are not consistent with each other. Did SCHED_BATCH get reimplemented somewhere along the way, or did someone misunderstand SCHED_BATCH when writing the man page? Which of the explanations is the true one?

No internet access on Arch fresh install

Posted: 27 Nov 2021 01:16 PM PST

I just freshly installed Arch on an Asus ROG laptop and successfully got everything set up. I was able to connect to the internet even after rebooting out of the live environment, and I installed a few packages (git, vim, etc.). I then went to install xfce and found that I was no longer connected to the WiFi (this laptop does not have an Ethernet port). I am now completely unable to connect to the internet, as everything results in a Name or Service unknown error. I have restarted dhcpcd and resolvd and rebooted several times, to no effect. The interface still shows up in ip link and lspci -k as well as in iwctl. I am at a loss as to what else to do, and any help would be appreciated.

running bash script in systemd

Posted: 27 Nov 2021 01:31 PM PST

I'm trying to start a service which runs this script, which just passes flags to a binary file.

This is the script:

./geth_linux --config ./config.toml --datadir ./mainnet --cache 100000 \
             --rpc.allow-unprotected-txs --txlookuplimit 0 --http --ws \
             --maxpeers 100 --syncmode=snap --snapshot=false --diffsync

When I run this script in terminal from the binary's directory /home/bsc, it runs as I want it to.

When I start the service though, I get this error:

(code=exited, status=203/EXEC)  

Here is my .service file:

[Unit]
Description=BSC Full Node

[Service]
User=bsc
Type=simple
WorkingDirectory=/home/bsc
ExecStart=/home/bsc/start.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target

I have made this script executable with:

chmod +x /home/bsc/start.sh  

Also, I have run:

chown -R bsc.bsc /home/bsc/*

I also tried running ExecStart=/bin/bash /home/bsc/start.sh, and adding #!/bin/bash as the top line of the script, but that failed too.

I'm stumped.

Ubuntu 20.04.3 LTS

systemd error log

Nov 24 04:25:14 rezonautik systemd[1]: Started BSC Full Node.
Nov 24 04:25:14 rezonautik bash[26939]: /home/bsc/start.sh: line 1: geth: command not found
Nov 24 04:25:14 rezonautik systemd[1]: bsc.service: Main process exited, code=exited, status=127/n/a
Nov 24 04:25:14 rezonautik systemd[1]: bsc.service: Failed with result 'exit-code'.
Nov 24 04:25:19 rezonautik systemd[1]: bsc.service: Scheduled restart job, restart counter is at 1.
Nov 24 04:25:19 rezonautik systemd[1]: Stopped BSC Full Node.
Nov 24 04:25:19 rezonautik systemd[1]: Started BSC Full Node.
Nov 24 04:25:19 rezonautik bash[26981]: /home/bsc/start.sh: line 1: geth: command not found
Nov 24 04:25:19 rezonautik systemd[1]: bsc.service: Main process exited, code=exited, status=127/n/a
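Two observations that may help interpret these logs, plus a hedged sketch. status=203/EXEC means systemd could not execute the file at all (commonly a missing shebang or execute bit), and the later geth: command not found shows the script resolving commands differently under systemd than in an interactive shell in /home/bsc. A version of start.sh written with a shebang and absolute paths (file names assumed from the post):

#!/bin/bash
# resolve everything via absolute paths so nothing depends on $PATH or $PWD
cd /home/bsc || exit 1
exec /home/bsc/geth_linux --config /home/bsc/config.toml --datadir /home/bsc/mainnet \
     --cache 100000 --rpc.allow-unprotected-txs --txlookuplimit 0 --http --ws \
     --maxpeers 100 --syncmode=snap --snapshot=false --diffsync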

How to delete BILLIONS of files efficiently on Linux?

Posted: 27 Nov 2021 05:12 PM PST

A storage system had collected 1.7 billion files from revision backups over a couple of years and was getting a bit full.

So I started to delete all files older than five years, which I assume is around 1.7 billion (!!!) files holding around 90 TB of data - I have to estimate, because even a mere find or du would take weeks, maybe months. The backend (mdraid, ext4) itself isn't too important, because I would like to change it anyway.

I let rm delete files for one day and only got rid of around 0.1% of all files. I estimate that deleting everything that way would take one to two years, and would most likely kill some drives in the process. Not that I worry too much; it is a hot-swap RAID.

I have been using ionice -c3 to make sure files are only deleted while the drives are not busy, to avoid disk thrashing, as the drives are usually under heavy load for 1-2 hours per day. On a rather funny side note, the first few times I tried to run rm, the millions of hard links drove its memory usage to around 100 GB and then it core dumped. So I split the operation into smaller parts; this works fine if I only delete single subdirectories, but memory still often spikes at 20-30 GB.

My two questions:

  1. how do I delete the old files on this system in a way that doesn't take years?

e.g. I thought about manually editing the inode structures so that the files are gone but the space isn't given back, and then letting fsck repair the file system.

Other mad ideas are welcome; I can always get back by making an LVM snapshot.

  2. what setups are there to avoid the same problem in the future? E.g. using a different file system or tool chain, or putting metadata (inodes, allocation tables, etc.) on SSD - the data itself needs to stay on HDD for several reasons.

If no one comes up with a better idea, I will just massively reduce the number of revisions created and/or tar/xz everything older than one month to an external USB drive. That would be uncool, because the users actually enjoy being able to access old stuff from the revisions.
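As one data point for question 1: letting find unlink entries while it walks the tree avoids building the enormous in-memory file lists that made rm core dump (a sketch; the path and age threshold are placeholders):

# delete regular files older than ~5 years (1825 days), staying on one filesystem
find /data/revisions -xdev -type f -mtime +1825 -delete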

Capture output of systemd-run --scope to journal

Posted: 27 Nov 2021 05:42 PM PST

For improved traceability, accountability, etc., I want certain applications to be supervised by systemd and have their output collected in the journal. I know that this all happens automatically when I use systemd-run's service mode, but due to dependence on the calling environment, this is sometimes not possible.

I have tried using systemd-run and systemd-cat together, but have found that this sadly fails to capture the unit metadata. I know I could theoretically dump the environment variables to a file with env before starting the process and then pass that as EnvironmentFile= to systemd-run, but that seems like a hack. It also fails if the execution environment holds needed data other than variables, such as file descriptors, although that's a relatively rare requirement, as shell pipes, the most common type of shared file descriptor, can usually be replaced with temporary named pipes or files.

I'm not unwilling to use the hacky approach; I'm just wondering if there is a "known good" alternative to it.

EDIT: Since the comments asked, the metadata fields I'm looking for are _SYSTEMD_UNIT, _SYSTEMD_SLICE, etc., as well as _AUDIT_LOGINUID, _CAP_EFFECTIVE, and other such metadata that is usually collected for supervised processes. I found out this is possible for scope units when I was looking at podman/conmon container logs, which have full metadata, despite being manually created using sd_journal_sendv from within a scope unit. I also checked the source code of logger(1) and it uses the same function, so I'm assuming there's something about the unit setup of podman/conmon that enables journald to establish the correlation that isn't there with systemd-run.

Network Manager Access Point WPS Pin Disable

Posted: 27 Nov 2021 02:12 PM PST

We are opening an Access Point with network manager (on an RPi4 with Raspbian) and everything is working fine, except that when connecting from Windows 10 we are asked for a PIN first (probably a WPS PIN) instead of being shown the passphrase field directly. This is not observed on Mac and Linux.

Our wifi-ap configuration:

sudo nmcli c add con-name wifi-ap type wifi ssid test ifname wlan0 save yes autoconnect yes 802-11-wireless.mode ap 802-11-wireless.band bg ipv4.method shared wifi-sec.key-mgmt wpa-psk wifi-sec.psk "test1234"  

We already tried multiple configurations from the provided page: https://developer.gnome.org/NetworkManager/stable/settings-802-11-wireless-security.html like:

  • wps-method 1
  • proto rsn
  • pairwise ccmp

But nothing really helped. It would be fantastic to get your support here.

Thanks

Does lubuntu have an anti-blue light feature?

Posted: 27 Nov 2021 01:50 PM PST

Does lubuntu have an anti-blue light feature?

Linux shell script to check if another user has unread mail

Posted: 27 Nov 2021 01:05 PM PST

I'm looking to create a shell script that will accept user IDs as arguments and check whether each user has unread mail in /var/spool/mail.

How would I even go about checking a user's mail status? Is /var/spool/mail only unread mail? If so, then I'm assuming I would just check for users with files of size greater than 0.
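A minimal sketch along those lines (with the caveat that /var/spool/mail/<user> holds read and unread messages alike, so a non-zero size is only a rough signal):

#!/bin/sh
# report, for each userid given as an argument, whether its mail spool is non-empty
for user in "$@"; do
    spool="/var/spool/mail/$user"
    if [ -s "$spool" ]; then          # -s: file exists and has size > 0
        echo "$user: $spool is non-empty"
    else
        echo "$user: no mail"
    fi
done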

Unable to use Valgrind on executable file for ARM-Linux

Posted: 27 Nov 2021 02:00 PM PST

I'm having a problem using Valgrind on an "arm-linux" executable file.

I downloaded Valgrind from the main page here: http://valgrind.org/downloads/ and ran the following commands to install it:

sudo ./configure
sudo make
sudo make install

On Ubuntu, I tried it with an executable HelloWorld file compiled from "HelloWorld.c" using Linux GCC. Valgrind runs without any problem.

Then I tried it on an executable file compiled using a cross-compiler for ARM-Linux (this executable is built to run on an embedded device), and I got this error:

valgrind: failed to start tool 'memcheck' for platform 'arm-linux': No such file or directory  

I have dug around on Google and couldn't find much information. I tried:

export VALGRIND_LIB=/usr/local/lib/Valgrind  

It still shows the same error. What am I doing wrong?

How to preview my GRUB menu without rebooting?

Posted: 27 Nov 2021 03:02 PM PST

I want to change my default boot OS in GRUB2. But the only way I know of to see the order of the OS entries in the GRUB menu is to do a reboot and look at the menu as it is displayed.

In grub.cfg there are many more menuentry lines than actual choices in the GRUB menu, so I can't identify in that file the one I want.

Is there any place where the actually displayed menu is stored so that I can see it without having to reboot?
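One commonly used trick (hedged; the path is the typical GRUB 2 layout): the displayed top-level choices correspond to the menuentry and submenu lines that start at column one of grub.cfg, in file order, so they can be listed without rebooting:

grep -E "^(menuentry|submenu) " /boot/grub/grub.cfg | cut -d "'" -f 2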

Missing openvpn routes with network manager

Posted: 27 Nov 2021 03:06 PM PST

I'm trying to set up a VPN using network manager. When I run openvpn manually with my config, it works fine (sudo openvpn --config MyVPN.ovpn). When I then print the routes (sudo route -n), it gives me:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.254   0.0.0.0         UG    600    0        0 wlp3s0
10.c.d.0        172.a.b.1       255.255.255.0   UG    0      0        0 tun0
10.e.f.0        172.a.b.1       255.255.255.0   UG    0      0        0 tun0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlp3s0
172.a.b.0       0.0.0.0         255.255.255.0   U     0      0        0 tun0
192.168.1.0     0.0.0.0         255.255.255.0   U     600    0        0 wlp3s0

When I use network manager (with the openvpn plugin), I am missing the 10.* gateway routes.

I checked Use this connection only for resources on its network for both ipv4 and 6 (I don't want all my traffic to go through that VPN).

Why am I missing routes with network manager?

I'm using Linux Mint 18.2, network manager 1.2.6-0ubuntu0.16.04.1 and openvpn 2.3.10-1ubuntu2.1

Here's my tail -f /var/log/syslog when I'm using NM:

Sep  6 12:32:05 MyMint NetworkManager[867]: <info>  [1504693925.1089] audit: op="connection-activate" uuid="d4e40650-bc76-4139-a92f-ab51276287e2" name="MyVPN" pid=15515 uid=1000 result="success"
Sep  6 12:32:05 MyMint NetworkManager[867]: <info>  [1504693925.1171] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",0]: Started the VPN service, PID 31326
Sep  6 12:32:05 MyMint NetworkManager[867]: <info>  [1504693925.1314] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",0]: Saw the service appear; activating connection
Sep  6 12:32:12 MyMint NetworkManager[867]: <info>  [1504693932.3783] keyfile: update /etc/NetworkManager/system-connections/MyVPN (d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN")
Sep  6 12:32:12 MyMint NetworkManager[867]: nm-openvpn-Message: openvpn[31341] started
Sep  6 12:32:12 MyMint NetworkManager[867]: <info>  [1504693932.3865] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",0]: VPN plugin: state changed: starting (3)
Sep  6 12:32:12 MyMint NetworkManager[867]: <info>  [1504693932.3866] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",0]: VPN connection: (ConnectInteractive) reply received
Sep  6 12:32:12 MyMint nm-openvpn[31341]: OpenVPN 2.3.10 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Jun 22 2017
Sep  6 12:32:12 MyMint nm-openvpn[31341]: library versions: OpenSSL 1.0.2g  1 Mar 2016, LZO 2.08
Sep  6 12:32:12 MyMint nm-openvpn[31341]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Sep  6 12:32:12 MyMint nm-openvpn[31341]: WARNING: file '/home/laurian/MyVPN/MyVPN.key' is group or others accessible
Sep  6 12:32:12 MyMint nm-openvpn[31341]: NOTE: chroot will be delayed because of --client, --pull, or --up-delay
Sep  6 12:32:12 MyMint nm-openvpn[31341]: NOTE: UID/GID downgrade will be delayed because of --client, --pull, or --up-delay
Sep  6 12:32:12 MyMint nm-openvpn[31341]: UDPv4 link local: [undef]
Sep  6 12:32:12 MyMint nm-openvpn[31341]: UDPv4 link remote: [AF_INET]170.75.241.82:1194
Sep  6 12:32:14 MyMint nm-openvpn[31341]: [MyVPN] Peer Connection Initiated with [AF_INET]170.75.241.82:1194
Sep  6 12:32:16 MyMint nm-openvpn[31341]: TUN/TAP device tun0 opened
Sep  6 12:32:16 MyMint nm-openvpn[31341]: /usr/lib/NetworkManager/nm-openvpn-service-openvpn-helper --bus-name org.freedesktop.NetworkManager.openvpn.Connection_8 --tun -- tun0 1500 1558 172.a.b.4 255.255.255.0 init
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7477] manager: (tun0): new Tun device (/org/freedesktop/NetworkManager/Devices/8)
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7562] devices added (path: /sys/devices/virtual/net/tun0, iface: tun0)
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7562] device added (path: /sys/devices/virtual/net/tun0, iface: tun0): no ifupdown configuration found.
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7796] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",0]: VPN connection: (IP Config Get) reply received.
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7852] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: VPN connection: (IP4 Config Get) reply received
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7861] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data: VPN Gateway: 170.x.y.z
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7862] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data: Tunnel Device: "tun0"
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7862] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data: IPv4 configuration:
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7862] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   Internal Gateway: 172.a.b.1
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7862] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   Internal Address: 172.a.b.4
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7862] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   Internal Prefix: 24
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7862] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   Internal Point-to-Point Address: 172.a.b.4
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7862] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   Maximum Segment Size (MSS): 0
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7863] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   Static Route: 10.c.d.0/24   Next Hop: 172.a.b.1
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7863] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   Static Route: 10.e.f.0/24   Next Hop: 172.a.b.1
Sep  6 12:32:16 MyMint nm-openvpn[31341]: chroot to '/var/lib/openvpn/chroot' and cd to '/' succeeded
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7863] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   Forbid Default Route: yes
Sep  6 12:32:16 MyMint nm-openvpn[31341]: GID set to nm-openvpn
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7863] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data:   DNS Domain: '(none)'
Sep  6 12:32:16 MyMint nm-openvpn[31341]: UID set to nm-openvpn
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7863] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: Data: No IPv6 configuration
Sep  6 12:32:16 MyMint nm-openvpn[31341]: Initialization Sequence Completed
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7864] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: VPN plugin: state changed: started (4)
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7890] vpn-connection[0x1ba5460,d4e40650-bc76-4139-a92f-ab51276287e2,"MyVPN",9:(tun0)]: VPN connection: (IP Config Get) complete
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.7893] device (tun0): state change: unmanaged -> unavailable (reason 'connection-assumed') [10 20 41]
Sep  6 12:32:16 MyMint dbus[823]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8035] keyfile: add connection in-memory (6cc36f83-a713-494f-a153-8c0ef8482c23,"tun0")
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8041] device (tun0): state change: unavailable -> disconnected (reason 'connection-assumed') [20 30 41]
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8061] device (tun0): Activation: starting connection 'tun0' (6cc36f83-a713-494f-a153-8c0ef8482c23)
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8070] device (tun0): state change: disconnected -> prepare (reason 'none') [30 40 0]
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8075] device (tun0): state change: prepare -> config (reason 'none') [40 50 0]
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8078] device (tun0): state change: config -> ip-config (reason 'none') [50 70 0]
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8081] device (tun0): state change: ip-config -> ip-check (reason 'none') [70 80 0]
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8088] device (tun0): state change: ip-check -> secondaries (reason 'none') [80 90 0]
Sep  6 12:32:16 MyMint systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8132] device (tun0): state change: secondaries -> activated (reason 'none') [90 100 0]
Sep  6 12:32:16 MyMint NetworkManager[867]: <info>  [1504693936.8238] device (tun0): Activation: successful, device activated.
Sep  6 12:32:16 MyMint dbus[823]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Sep  6 12:32:16 MyMint systemd[1]: Started Network Manager Script Dispatcher Service.
Sep  6 12:32:16 MyMint nm-dispatcher: req:1 'vpn-up' [tun0]: new request (1 scripts)
Sep  6 12:32:16 MyMint nm-dispatcher: req:1 'vpn-up' [tun0]: start running ordered scripts...
Sep  6 12:32:16 MyMint nm-dispatcher: req:2 'up' [tun0]: new request (1 scripts)
Sep  6 12:32:16 MyMint nm-dispatcher: req:2 'up' [tun0]: start running ordered scripts...
Sep  6 12:32:16 MyMint ntpdate[31411]: the NTP socket is in use, exiting
Sep  6 12:32:17 MyMint ntpdate[31471]: the NTP socket is in use, exiting
Sep  6 12:32:17 MyMint ntpdate[31530]: the NTP socket is in use, exiting
Sep  6 12:32:18 MyMint ntpd[1364]: Listen normally on 28 tun0 172.a.b.4:123
Sep  6 12:32:18 MyMint ntpd[1364]: Listen normally on 29 tun0 [fe80::a1e0:e276:5803:2ce5%9]:123
Sep  6 12:32:18 MyMint ntpd[1364]: new interface(s) found: waking up resolver

Trigger a script in another server

Posted: 27 Nov 2021 05:03 PM PST

I have a script which in turn triggers 4 more scripts on another server, sequentially. My script waits until the first script completes on the target server and then triggers the second one. Below is the code:

SCB_CMD=/sc/db2home/scbinst/bin/reload_scrpt1.sh
SCB_LOG=/sc/db2home/scbinst/log/reload_scrpt1.log

echo `date` "Executing $SCB_HOST:$SCB_CMD ..."

ssh $SCB_HOST "$SCB_CMD | tee $SCB_LOG"

RC=$?

#---------------------------------------------------------------------------
# -- Check for errors
#---------------------------------------------------------------------------
if [ $RC -ne 0 ]
then
  echo `date` "!error occurred executing SCB load script1!"
  exit 99
fi
#---------------------------------------------------------------------------
SCB_CMD=/sc/db2home/scbinst/bin/reload_scrpt2.sh
SCB_LOG=/sc/db2home/scbinst/log/reload_scrpt2.log

#---------------------------------------------------------------------------
# -- Execute the remote load script
#---------------------------------------------------------------------------

echo `date` "Executing $SCB_HOST:$SCB_CMD ..."

ssh $SCB_HOST "$SCB_CMD | tee $SCB_LOG"
--------------------------------------------

Is there a way to trigger all four of these scripts in parallel on the target server?
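A hedged sketch of one way to do that: start each ssh in the background and wait for all of them (variable values follow the post; per-script error handling would still need wait "$pid" on each job):

for n in 1 2 3 4; do
    SCB_CMD=/sc/db2home/scbinst/bin/reload_scrpt${n}.sh
    SCB_LOG=/sc/db2home/scbinst/log/reload_scrpt${n}.log
    ssh "$SCB_HOST" "$SCB_CMD | tee $SCB_LOG" &
done
wait    # blocks until every background ssh has finished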

File /etc/resolv.conf deleted on every reboot, why or what?

Posted: 27 Nov 2021 04:26 PM PST

I am having an issue where DHCP (I thought, from reading other similar topics) is clearing the /etc/resolv.conf file on each boot. I am not sure how to deal with this, since the posts I have found (1, 2 and some others) are for Debian-based distros or others, but not Fedora.

This is the content of ifcfg-enp0s31f6, so it is definitely DHCP:

cat /etc/sysconfig/network-scripts/ifcfg-enp0s31f6
HWADDR=C8:5B:76:1A:8E:55
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s31f6
UUID=0af812a3-ac8e-32a0-887d-10884872d6c7
ONBOOT=yes
IPV6_PEERDNS=no
IPV6_PEERROUTES=no
BOOTPROTO=dhcp
PEERDNS=yes
PEERROUTES=yes

On the other hand, I don't know if Network Manager is doing something else around this.

Update: Content of NetworkManager.conf (I have removed the comments since they are useless):

$ cat /etc/NetworkManager/NetworkManager.conf
[main]
#plugins=ifcfg-rh,ibft
dns=none

[logging]
#domains=ALL

Can I get some help with this? It's annoying to have to set the file up again and again on every reboot.

UPDATE 2

After a month I'm still having the same issue, where the file gets deleted by "something".

Here are the steps I followed in order to run a fresh test:

  • Reboot the PC
  • After the PC restarts, open a terminal and try to ping Google's servers, of course without success:

    $ ping google.com
    ping: google.com: Name or service not known
  • Check the network configuration, where all seems to be fine:

    $ cat /etc/sysconfig/network-scripts/ifcfg-enp0s31f6
    NAME=enp0s31f6
    ONBOOT=yes
    HWADDR=C8:5B:76:1A:8E:55
    MACADDR=C8:5B:76:1A:8E:55
    UUID=0af812a3-ac8e-32a0-887d-10884872d6c7
    BOOTPROTO=static
    PEERDNS=no
    DNS1=8.8.8.8
    DNS2=8.8.4.4
    DNS3=192.168.1.10
    NM_CONTROLLED=yes
    IPADDR=192.168.1.66
    NETMASK=255.255.255.0
    BROADCAST=192.168.1.255
    GATEWAY=192.168.1.1
    TYPE=Ethernet
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=no
  • Restart the network service:

    $ sudo service network restart
    [sudo] password for <current_user>:
    Restarting network (via systemctl):                        [  OK  ]
  • Try to ping Google servers again, with no success:

    $ ping google.com
    ping: google.com: Name or service not known
  • Check for file /etc/resolv.conf:

    $ cat /etc/resolv.conf
    cat: /etc/resolv.conf: No such file or directory
  • The file doesn't exist anymore - and this is the problem: something is deleting it on every reboot.

  • Create the file and add the DNS entries:

    $ sudo nano /etc/resolv.conf   
  • Ping Google servers this time with success:

    $ ping google.com
    PING google.com (216.58.192.110) 56(84) bytes of data.
    64 bytes from mia07s35-in-f110.1e100.net (216.58.192.110): icmp_seq=1 ttl=57 time=3.87 ms

Any ideas on what could be happening here?
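Not a root-cause fix, but a common stopgap while hunting the culprit is to make the file immutable so nothing can delete or rewrite it at boot:

sudo chattr +i /etc/resolv.conf    # undo later with: sudo chattr -i /etc/resolv.conf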

behavior of ServerAliveInterval with ssh connection

Posted: 27 Nov 2021 12:06 PM PST

Using ssh, I log in to another system and execute scripts there that create new machines and do some setup. This takes around 7-8 hours. What happens is that the ssh connection keeps dropping, and I always get a timeout with unsuccessful execution of the script.

So now I am using this option with the ssh connection:

ssh -o ServerAliveInterval=60 user@host ....  

This ssh is spawned multiple times. The problem is that after a few ssh connections, I am getting the error:

too many logins of user, and subsequent ssh connections get closed just after a successful login.

So is it the behavior of ServerAliveInterval to keep the ssh user's login session on the remote machine alive even after the ssh work is over, and is that why my further logins are disconnected?

Can I watch the progress of a `sync` operation?

Posted: 27 Nov 2021 03:03 PM PST

I've copied a large file to a USB disk mounted on a Linux system with async. This returns to a command prompt relatively quickly, but when I type sync, of course, it all has to go to disk, and that takes a long time.

I understand that it's going to be slow, but is there somewhere where I can watch a counter go down to zero? Watching buffers in top doesn't help.
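There is a counter of sorts: the kernel reports the amount of dirty (not-yet-written) page cache in /proc/meminfo, and watching it drain toward zero tracks the sync (a sketch using standard tools):

watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'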
