Monday, May 31, 2021

Recent Questions - Unix & Linux Stack Exchange



Trapped in log in loop after enabling auto-login

Posted: 31 May 2021 09:58 AM PDT

I am running Linux Mint 20.1 Ulyssa with kernel 5.4.0-72-generic x86_64.

I enabled automatic logins in the settings menu. Since then, I cannot log in at all: the log-in screen resets every 5 seconds. I tried to bypass this with Ctrl+Alt+F1 and log in through a virtual terminal, but the log-in screen keeps resetting faster than I can finish typing my username and password.

I am able to boot into recovery mode and open a root shell, but I don't know how to go about disabling auto-logins from the command line.
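For reference, Mint 20.1 uses LightDM by default, and auto-login is normally a pair of `autologin-user` lines in its config. A sketch of disabling it from the recovery root shell (the exact file may vary, so also check `/etc/lightdm/lightdm.conf.d/*.conf`):

```shell
disable_autologin() {
    # comment out any autologin-user / autologin-user-timeout lines
    sed -i 's/^\(autologin-user\)/#\1/' "$1"
}
# From the recovery root shell (path is the usual Mint/LightDM location):
#   disable_autologin /etc/lightdm/lightdm.conf
```

After that, a normal reboot should bring back a working greeter.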

Debian 10.9 fresh install: network configuration

Posted: 31 May 2021 09:57 AM PDT

I have installed Debian 10.9 on a Dell PowerEdge R710 but was not able to configure the network. The server has 4 physical Ethernet ports, all Broadcom BCM5709C NetXtreme II GigE (Client NDIS VDB) #38-41. Server firmware: iDRAC6 firmware revision 1.96.01, primary backplane firmware revision 1.07. Thank you.
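For reference, a minimal static stanza in Debian's /etc/network/interfaces looks like the following (interface name, addresses, and gateway here are placeholders; check the actual NIC names with `ip link` first):

```
auto eno1
iface eno1 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
```

Then `ifup eno1` (or a reboot) should bring the interface up; for DHCP instead, `iface eno1 inet dhcp` with no address lines.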

why does "gnome-keyring-daemon -r -d --unlock <<<wrongPassword" change the password of my keyring? And what's the new password?

Posted: 31 May 2021 09:26 AM PDT

After reading this, I tried to unlock the GNOME keyring daemon from the command line:

$ gnome-keyring-daemon --replace --daemonize --unlock <<<goodpassword
** Message: Replacing daemon, using directory: /run/user/1000/keyring
GNOME_KEYRING_CONTROL=/run/user/1000/keyring
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh

The previous command didn't do anything: when I opened GNOME Passwords and Keys (formerly Seahorse), the keyring was still locked.

I tried again, and I misspelled the password:

$ gnome-keyring-daemon --replace --daemonize --unlock <<<wrongPassword
** Message: Replacing daemon, using directory: /run/user/1000/keyring
GNOME_KEYRING_CONTROL=/run/user/1000/keyring
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh

Of course the keyring remained locked. But now I am unable to unlock it at all, even through the graphical interface (Passwords and Keys, formerly Seahorse). What's more, even the "new" password (wrongPassword) isn't working.

So my questions:

  1. Why does

$ gnome-keyring-daemon --replace --daemonize --unlock<<<wrongPassword

change my keyring's password?

  2. And what is the new password after having typed this command? It's not "wrongPassword".

Any help would be appreciated. I restored an old backup of my keyring, but I'd like to find a way to get my recent keyring back.

GNOME problem reporting keeps crashing

Posted: 31 May 2021 09:08 AM PDT

I am experiencing issues with the GNOME problem-reporting software (gnome-abrt). I open it because I want to report the errors I get (I get a lot of crashes in software and system crashes), but libreport itself crashes and doesn't let me report the issues.

At first I thought it was the custom theme I installed that was causing this:

~$ gnome-abrt
(gnome-abrt:12043): Gtk-WARNING **: 17:29:39.302: Theme parsing error: main-dark.css:2470:0: Expected a valid selector
(org.freedesktop.GnomeAbrt:12068): Gtk-WARNING **: 17:29:42.849: Theme parsing error: main-dark.css:2470:0: Expected a valid selector
free(): double free detected in tcache 2

But switching back to the default theme causes the same error:

free(): double free detected in tcache 2  

Any help is greatly appreciated.

Unable to Open VMware after Installation on Kali Linux

Posted: 31 May 2021 09:56 AM PDT

So, I was trying to install VMware 16.1.2 on Kali Linux (Linux kali 5.9.0-kali1-amd64 #1 SMP Debian 5.9.1-1kali2 (2020-10-29) x86_64 GNU/Linux). It installed successfully, but when opening it I get the error shown below. I did a bit of Google research on this topic but didn't get a proper hint, so I'd appreciate it if anyone could clarify.

[screenshot of the error]

Execute program in current shell within shell script

Posted: 31 May 2021 09:55 AM PDT

I made a little shell script, that parses the .ssh config and allows me to pick an entry with fzf, and then connects to that host:

#!/bin/bash

set -o nounset -o errexit -o pipefail

list_remote_hosts()
{
    choice="$(cat $HOME/.ssh/config | awk -v RS= -v FS=\\n -v IGNORECASE=1 '
        {
            ip = ""
            alias = ""
            id_file = ""
            username = ""
            port = ""

            for (j = 1; j <= NF; ++j) {
                split($j, tmp, " ")
                if (tmp[1] == "Host") { alias = tmp[2] }
                if (tmp[1] == "Hostname") { ip = tmp[2] }
                if (tmp[1] == "IdentityFile") { id_file = tmp[2] }
                if (tmp[1] == "User") { username = tmp[2] }
                if (tmp[1] == "Port") { port = tmp[2] }
            }

            if (ip || alias && alias != "*") {
                if (port == "") {
                    port = "22"
                }

                print "ssh " username "@" ip " -i " id_file " -p " port
            }
        }
    ' | fzf)"

    "$($choice)"
}

list_remote_hosts

That works, but I am having problems handing control over to the current shell. When connected, the script freezes (because ssh is started in a subshell, I imagine). Once I type e.g. exit and the ssh command terminates, I can see the output.

I want the ssh command to take over the current shell when the script is run, so that I get the same behavior as running the ssh command directly from my terminal.

I tried all sorts of things, like appending && zsh or using eval or exec, but none of these worked. How can I do this?
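For reference, a sketch of the usual fix: `"$($choice)"` first runs the command in a command substitution (capturing its output) and then tries to execute that output as a command, which is why ssh never gets the terminal. Running the selected command with `eval` in the current shell instead hands the tty straight to ssh:

```shell
# inside list_remote_hosts, instead of:  "$($choice)"
choice="echo connected"   # stand-in for the fzf selection
eval "$choice"            # runs the command in this shell, attached to the tty
```

With the real ssh command in `$choice`, this behaves like typing the command at the prompt.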

Linux readahead when concurrent reads are done on the same file

Posted: 31 May 2021 08:04 AM PDT

Linux performs readahead (the size is set in /sys/block/<device>/queue/read_ahead_kb) when a file is read sequentially.

OS of interest: Red Hat Linux. File systems of interest: xfs, ext4.

What are the criteria for deciding that reads are sequential? Consider multiple concurrent reads on the same file using pread (https://man7.org/linux/man-pages/man2/pwrite.2.html), with the same or different FDs.

e.g.

Same FD: reads at positions 10-20-30-78-89 (out-of-sequence reads)-40-50-60-70-23-34 (out-of-sequence reads)-80-90-100...

Could the subtle out-of-sequence reads above prevent readahead in this case?

If so, would using two different FDs solve the issue (i.e. a separate FD used for the reads at 78-89-23-34), so that readahead happens as usual for the 10-20-30-40-50 reads?

systemctl suspend not locking Kali Linux

Posted: 31 May 2021 07:12 AM PDT

I used to use systemctl suspend to lock my Ubuntu 20.04 machine whenever I had to go out. Now that I've switched to Kali Linux, the same systemctl suspend command no longer locks the screen. Rather, it just turns off the screen, and the session can easily be resumed by moving the mouse or pressing any key on the keyboard. I want Kali Linux to actually suspend and lock whenever I type systemctl suspend, rather than just turning off the screen. How can I get this fixed?
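For reference, one commonly used approach (a sketch; the unit name is an assumption, and whether `loginctl lock-sessions` actually locks depends on the desktop environment honoring the lock signal) is to hook locking to sleep.target so every suspend locks first, e.g. in /etc/systemd/system/lock-before-suspend.service:

```
[Unit]
Description=Lock all sessions before suspending
Before=sleep.target

[Service]
Type=oneshot
ExecStart=/usr/bin/loginctl lock-sessions

[Install]
WantedBy=sleep.target
```

Enable it once with `systemctl enable lock-before-suspend.service`; a simpler per-invocation alternative is aliasing the suspend command to `loginctl lock-session && systemctl suspend`.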

awk: extract % of the top lines from data file

Posted: 31 May 2021 07:19 AM PDT

I'm dealing with the post-processing of a multi-column CSV file containing many (10000+) lines:

ID(Prot), ID(lig), ID(cluster), dG(rescored), dG(before), POP(before)
9000, lig662, 1, 0.421573, -7.8400, 153
10V2, lig807, 1, 0.42692, -8.0300, 149
3000, lig158, 1, 0.427342, -8.1900, 147
3001, lig158, 1, 0.427342, -8.1900, 147
10V2, lig342, 1, 0.432943, -9.4200, 137
10V1, lig807, 1, 0.434338, -8.0300, 147
4000, lig236, 1, 0.440377, -7.3200, 156
10V1, lig342, 1, 0.441205, -9.4200, 135
4000, lig497, 1, 0.442088, -7.7900, 148
9000, lig28, 1, 0.442239, -7.5200, 152
3001, lig296, 1, 0.444512, -7.8900, 146
10V2, lig166, 1, 0.447681, -7.1500, 157
....
4000, lig612, 1, 0.452904, -7.0200, 158
9000, lig123, 1, 0.461601, -6.8000, 160
10V1, lig166, 1, 0.463963, -7.1500, 152
10V1, lig369, 1, 0.465029, -7.3600, 148

I am using the following AWK code, integrated into a bash function, which takes the top 1% of lines from the CSV and saves them as a new CSV (thus containing a reduced number of lines):

take_top44 () {
    # Take the top lines from the initial CSV
    awk -v lines="$(wc -l < original.csv)" '
    BEGIN{
        top=int(lines/100)
    }
    FNR>(top){exit}
    1
    ' original.csv >> csv_with_top_lines.csv
}

How could I modify my awk code to apply a more selective filter on original.csv? For example, filtering the data based on the value (a float) of the 4th column (dG(rescored)): I need to use the lowest value (which is always on the second line, minFourth = 0.421573) as the reference and save all the lines (from the same CSV) matching the condition $4 >= (0.2 * minFourth).
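A sketch of such a filter (assuming, as stated, that the minimum of column 4 is on line 2; note that with $4 >= 0.2 * min almost every line passes, so the intended threshold may need a second look):

```shell
filter_by_dg() {
    awk -F', *' '
        NR == 1 { print; next }     # keep the header
        NR == 2 { min = $4 }        # lowest dG(rescored), per the sort order
        $4 + 0 >= 0.2 * min         # print lines matching the condition
    ' "$1"
}
# filter_by_dg original.csv > csv_with_filtered_lines.csv
```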

Beaglebone Black static IP retention

Posted: 31 May 2021 09:11 AM PDT

I am using Beaglebone Black devices as energy-monitoring devices in my IoT projects. The application reads data over USB (Modbus RTU) and sends it to a remote cloud via MQTT. There are around 15-20 such Beaglebone Black devices. To give them internet access, the plant IT manager assigned me 15-20 static IP addresses, which I have set in /etc/network/interfaces. But sometimes internet connectivity stops working; when I debugged it, I found that the Beaglebone Black had acquired a dynamic IP address.

There are separate ranges for static and dynamic IP addresses in the plant. If I reboot the Beaglebone Black, it picks up the static IP address again and the system works normally.

This happens on random devices with random IP addresses. As of now, there is no option of moving them permanently to the dynamic IP range. Please help me resolve this issue. I am attaching screenshots of the IP configured in /etc/network/interfaces and the IP address actually received.

for example, the static IP address set in the device,

[screenshot of /etc/network/interfaces]

IP: 10.12.4.152

netmask: 255.255.254.0

gateway: 10.12.4.1

IP Address received (checked using ifconfig command)

IP: 10.12.4.207

netmask: 255.255.254.0

I have attached two separate images of the configuration on the Beaglebone Black: one of the /etc/network/interfaces file and the other of the ifconfig output.

Limiting ssh connections to specific devices

Posted: 31 May 2021 10:03 AM PDT

I have an Ubuntu system at home; I enabled ssh and set up port forwarding to my machine for ssh connections. So far so good.

Now I can access it from everywhere. How do I configure my machine to refuse any connection that is not from my laptop or my phone?

Should it be done from the router or from the machine's firewall? How do I do it?

grub-mkconfig adding entry for other Linux system but ignoring its Grub config

Posted: 31 May 2021 07:56 AM PDT

I've got a machine with two Linux systems and Windows. When I run grub-mkconfig from my Ubuntu Mate system, it identifies and creates entries for itself, the second Linux system, and Windows.

Mystifyingly, when I look at /boot/grub/grub.cfg on the Ubuntu system, the entry for the other Linux system isn't there! However, it still appears on the Grub menu.

Furthermore, I've set up some custom kernel parameters in the /etc/default/grub file on the other system, but they don't propagate to the Grub config file.

What have I misunderstood or messed up?

Calculating Hourly Averages for multiple data columns

Posted: 31 May 2021 08:35 AM PDT

Good day, I would like to calculate hourly averages for the following sample data:

Timestamp,data1,data2
2018 07 16 13:00:00,23,451
2018 07 16 13:10:00,26,452
2018 07 16 13:20:00,24,453
2018 07 16 13:30:00,23,454
2018 07 16 13:50:00,28,455
2018 07 16 14:20:00,20,456
2018 07 16 14:40:00,12,457
2018 07 16 14:50:00,22,458
2018 07 16 15:10:00,234,459
2018 07 16 17:50:00,23,845
2018 07 16 18:10:00,239,453
2018 07 17 10:10:00,29,452
2018 07 18 13:20:00,49,451
2018 07 19 13:30:00,28,456

desired output:

Date,Hour,Ave_data1,Ave_data2
2018 07 16,13,24.8,453
2018 07 16,14,18,457
2018 07 16,15,234,459
2018 07 16,17,23,845
2018 07 16,18,239,453
2018 07 17,10,29,452
2018 07 18,13,49,451
2018 07 19,13,28,456

Please note that the data goes on for days (100000+ records) and the number of data columns varies; sometimes there are more than 2 columns (i.e. data1,data2,...,dataX). So I would like the script to be able to do the calculations even when there are more columns. Your help will be highly appreciated.

PS: Before posting this, I checked old posts and they don't really address my problem.
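For what it's worth, this can be sketched as a single awk pass that works for any number of data columns (it assumes the timestamp format shown; groups are emitted in order of first appearance):

```shell
hourly_avg() {
    awk -F, '
        NR == 1 { next }                      # skip the header
        {
            split($1, t, /[ :]/)              # "2018 07 16 13:00:00" -> date parts + hour
            key = t[1] " " t[2] " " t[3] "," t[4]
            if (!(key in n)) order[++k] = key # remember first-appearance order
            n[key]++
            for (i = 2; i <= NF; i++) sum[key, i] += $i
            if (NF > maxf) maxf = NF
        }
        END {
            for (j = 1; j <= k; j++) {
                key = order[j]; line = key
                for (i = 2; i <= maxf; i++)
                    line = line "," sum[key, i] / n[key]
                print line
            }
        }
    ' "$1"
}
# hourly_avg data.csv
```

A header line like the desired "Date,Hour,Ave_data1,..." could be printed in the END block once the column count is known.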

how to list bluetooth connections per controller

Posted: 31 May 2021 07:18 AM PDT

I am on Fedora 34. I wonder how I can list all connected and/or known bluetooth devices in relation to which controller is aware of each device. I know about bluetoothctl devices, but I do not see how I can list connections for a specific controller.

What system call creates the parent process?

Posted: 31 May 2021 06:52 AM PDT

My understanding is that fork is the system call that creates a new process by cloning the parent process. But what creates the parent process? If using a C library to create multiple processes, what was the system call that created the first process, for example when running ./main.o?
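For context: PID 1 (init/systemd) is started directly by the kernel at boot, not by fork(); every later process descends from it. When you run ./main.o from a shell, the shell fork()s (clone() on Linux) and the child then execve()s the program. A quick illustration:

```shell
# The shell is the parent; the command it launches is a fork()ed child
# that execve()s /bin/sh. $PPID in the child is the parent's PID.
echo "shell PID: $$"
sh -c 'echo "child PID: $$  parent PID: $PPID"'
```

Tracing `strace -f -e trace=clone,execve sh -c ./main.o` shows the same pair of system calls.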

Extract interface name, hardware address and IP address from ifconfig (must use ifconfig, not ip commands)

Posted: 31 May 2021 07:23 AM PDT

Input 1

eno16780032: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 192.168.0.1 netmask 255.255.255.255  broadcast 192.168.0.254
    ether 00:50:56:00:00:00  txqueuelen 1000  (Ethernet)
eno33559296: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 192.168.0.2  netmask 255.255.255.255  broadcast 192.168.0.254
    ether 00:50:56:00:00:01  txqueuelen 1000  (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
    inet 127.0.0.1  netmask 255.0.0.0
    loop  txqueuelen 0  (Local Loopback)

Input 2

bond0   Link encap:Ethernet  HWaddr 00:50:56:00:00:00
        inet addr:192.168.0.1   Bcast:192.168.0.254  Mask:255.255.254.255
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
bond0:0 Link encap:Ethernet  HWaddr 00:50:56:00:00:00
        inet addr:192.168.0.1  Bcast:192.168.0.254  Mask:255.255.254.255
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
eth0    Link encap:Ethernet  HWaddr 00:50:56:00:00:00
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
eth1    Link encap:Ethernet  HWaddr 00:50:56:00:00:00
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0
        UP LOOPBACK RUNNING  MTU:65536  Metric:1

I want output like the following (basically extracting interface, IP and hardware address):

Output1

eno16780032 192.168.0.1 00:50:56:00:00:00
eno33559296 192.168.0.2 00:50:56:00:00:01
lo          127.0.0.1

Output2

bond0   192.168.0.1 00:50:56:00:00:00
bond0:0 192.168.0.2 00:50:56:00:00:00
eth0                00:50:56:00:00:00 ===> No IP since it's under bonding
eth1                00:50:56:00:00:00 ===> No IP since it's under bonding
lo      127.0.0.1

I've tried with awk (awk '/flags|Link/{a=$1;hw=$NF;next;} /inet /{ip=$2;print a,ip,hw}'), but since not every matching pattern is present on each line, I am unable to get the desired output.

So I'm thinking of adding an empty "inet addr:" matching line to the input 2 file for the interfaces that are part of the bond; then things would be fine.

  1. Can you please help either to add an empty "inet addr:" line after the interface lines

     eth0    Link encap:Ethernet  HWaddr 00:50:56:00:00:00
             UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
             inet addr:   <==== Insert empty line

or

  2. Or to get the desired output as mentioned above.
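A sketch that handles both output formats without inserting placeholder lines: reset the per-interface state whenever a line starts in column 1 (a new stanza), and print the previous interface at that point, so missing fields simply stay blank:

```shell
parse_ifconfig() {
    awk '
        /^[^ \t]/ {                       # a new interface stanza starts in column 1
            if (name != "") print name, ip, hw
            name = $1; sub(/:$/, "", name); ip = ""; hw = ""
            for (i = 1; i <= NF; i++)     # old format: HWaddr on the same line
                if ($i == "HWaddr") hw = $(i + 1)
        }
        $1 == "ether" { hw = $2 }                                        # new format
        $1 == "inet" && $2 ~ /^addr:/ { sub(/^addr:/, "", $2); ip = $2; next }
        $1 == "inet" && $2 !~ /:/     { ip = $2 }                        # skips inet6
        END { if (name != "") print name, ip, hw }
    '
}
# ifconfig | parse_ifconfig
```

Column alignment and the "No IP since its under bonding" annotation would need an extra printf/formatting pass if required.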

How to mark directories in the output of the `find` command?

Posted: 31 May 2021 06:34 AM PDT

For example, I am looking for files and directories in some directory:

ubuntu@example:/etc/letsencrypt$ sudo find . -name example.com*
./archive/example.com
./renewal/example.com.conf
./live/example.com
ubuntu@example:/etc/letsencrypt$

How can I mark that ./archive/example.com and ./live/example.com are directories in the output above?
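With GNU find this can be done directly via -printf, appending a slash to directory entries (a sketch; -printf is GNU-specific, and on other systems `find ... -exec ls -dF {} +` gives a similar marker):

```shell
# directories get a trailing "/", everything else is printed as-is
find . -name 'example.com*' \( -type d -printf '%p/\n' -o -printf '%p\n' \)
```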

How to calculate median for multiple split files generated from one big file

Posted: 31 May 2021 09:55 AM PDT

I want to calculate the median for my mouse data set (file name = test). This data set is very big, so I split it into multiple files (n=5) with this command:

 split -l$((`wc -l < test`/5)) test test.split -da 4  

After this step, now I have 5 files test.split0000, test.split0001, test.split0002, test.split0003, test.split0004.

I use the following script for calculating the median

#!/usr/bin/R

data <- read.table("Input_file", row.names=1, header=T)

M <- apply(data, 1, median)

write.table(M, "Final_median_mousegene", quote=FALSE, sep="\t", row.names=TRUE)

q()

But now I have multiple files, so I want to run a single script that works on all the split files together.

Thank you

Update variable content

Posted: 31 May 2021 07:03 AM PDT

I'm editing a network mount-point script. When the share is mounted, the variable contains information, but when it is unmounted the variable appears empty, and if the script then asks to mount again, the content of the variable does not update.

VARIABLE=$(df -h | awk '{print $1}'| grep //user@IP/path/user)

When unmounted, echo $VARIABLE is empty. If I mount with the command:

open smb://user:passwd@IP/path/user  

and type df -h | awk '{print $1}' | grep //user@IP/path/user, I see //user@IP/path/user, but if I then run the check command echo $VARIABLE it still appears empty.

Can someone help me?
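For reference: a shell variable stores the pipeline's output once, at assignment time; it never re-runs the pipeline. Wrapping the check in a function re-executes it on every call (a sketch, using the share path from the question):

```shell
# re-runs the df pipeline each time it is called, so the result is current
is_mounted() {
    df -h | awk '{print $1}' | grep -q '//user@IP/path/user'
}

if is_mounted; then
    echo "share is mounted"
else
    echo "share is not mounted"
fi
```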

Why is this simple bash script destroying my computer?

Posted: 31 May 2021 08:06 AM PDT

Something weird is happening with a seemingly inoffensive script I have. I need to copy a series of files to some locations in the system and I have the following script to do so.

#!/bin/bash

# Get all the files from the file
LINES=$(cat Release-Nodejs/dependencies.txt)

# Copy each file to its location as indicated in the file
for LINE in ${LINES}
do
    LIBRARY=$(basename ${LINE})
    LIBRARY=Release-Nodejs/${LIBRARY}
    LIB_PATH=$(dirname ${LINE})
    echo -e "Copying \e[38;5;10m${LIBRARY}\e[0m to \e[38;5;11m${LIB_PATH}\e[0m"
    cp ${LIBRARY} ${LIB_PATH}
done

The script is getting the files and locations from the dependencies.txt file whose contents are:

/usr/lib/x86_64-linux-gnu/libnode.so.72
/lib/x86_64-linux-gnu/libgcc_s.so.1
/lib/x86_64-linux-gnu/libpthread.so.0
/lib/x86_64-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/libz.so.1
/usr/lib/x86_64-linux-gnu/libbrotlidec.so.1
/usr/lib/x86_64-linux-gnu/libbrotlienc.so.1
/usr/lib/x86_64-linux-gnu/libcares.so.2
/usr/lib/x86_64-linux-gnu/libnghttp2.so.14
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
/usr/lib/x86_64-linux-gnu/libssl.so.1.1
/usr/lib/x86_64-linux-gnu/libicui18n.so.67
/usr/lib/x86_64-linux-gnu/libicuuc.so.67
/lib/x86_64-linux-gnu/libdl.so.2
/usr/lib/x86_64-linux-gnu/libstdc++.so.6
/lib/x86_64-linux-gnu/libm.so.6
/usr/lib/x86_64-linux-gnu/libbrotlicommon.so.1
/usr/lib/x86_64-linux-gnu/libicudata.so.67

If I comment out the cp ${LIBRARY} ${LIB_PATH} line I get:

[script output]

So I know I'm getting the filenames and paths correctly. It is when I uncomment the cp ${LIBRARY} ${LIB_PATH} line and run the script with sudo that it destroys my system (by the way, this is harmless because I'm testing it on a VM). When doing this the screen just goes black and I have to force-close the VM window. Then when I try to run the VM again I get this:

[screenshot of the dead system]

And I have to completely reinstall Ubuntu.

I wonder why this is happening since I can manually execute the cp on the command line for each file and nothing bad happens, the files just get copied to their destinations.


EDIT:

As pointed out in one of the comments and in the XY Problem, the problem I'm trying to solve is that I'm creating a native Node.js module on my machine, which has node v12.18.1, that shall be used on a machine with node v10.19.0, and I absolutely can't update the node version on the target machine or install other packages that include the dependencies.

When I execute ldd mymodule.node I get:

linux-vdso.so.1 (0x00007ffe878c6000)
libnode.so.72 => /usr/lib/x86_64-linux-gnu/libnode.so.72 (0x00007f9bd34fb000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9bd34e0000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9bd34be000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9bd32d4000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9bd5b1e000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f9bd32b7000)
libbrotlidec.so.1 => /usr/lib/x86_64-linux-gnu/libbrotlidec.so.1 (0x00007f9bd32a9000)
libbrotlienc.so.1 => /usr/lib/x86_64-linux-gnu/libbrotlienc.so.1 (0x00007f9bd3215000)
libcares.so.2 => /usr/lib/x86_64-linux-gnu/libcares.so.2 (0x00007f9bd31fe000)
libnghttp2.so.14 => /usr/lib/x86_64-linux-gnu/libnghttp2.so.14 (0x00007f9bd31d2000)
libcrypto.so.1.1 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 (0x00007f9bd2ef5000)
libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (0x00007f9bd2e61000)
libicui18n.so.67 => /usr/lib/x86_64-linux-gnu/libicui18n.so.67 (0x00007f9bd2b4f000)
libicuuc.so.67 => /usr/lib/x86_64-linux-gnu/libicuuc.so.67 (0x00007f9bd2961000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9bd295b000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f9bd277a000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f9bd262b000)
libbrotlicommon.so.1 => /usr/lib/x86_64-linux-gnu/libbrotlicommon.so.1 (0x00007f9bd2608000)
libicudata.so.67 => /usr/lib/x86_64-linux-gnu/libicudata.so.67 (0x00007f9bd0aed000)

And that is why I'm trying to do what I'm trying to do in this question. So the real question here is; How can I include these dependencies along with the .node module so I can deploy this without having to update anything on the target?

I would prefer to link all these into the module and have only one .node file that includes everything and doesn't depend on system libraries, but I don't think that is possible, or is it?
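One deployment approach worth sketching (fully static linking is generally not an option here, since libc and the node runtime must stay dynamic): ship the libraries in a directory next to the module and point the loader at them, rather than overwriting system libraries. Paths and names below are illustrative:

```shell
# run a command with the bundled libraries preferred over the system ones
run_with_bundled_libs() {
    libdir=$1; shift
    LD_LIBRARY_PATH="$libdir${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" "$@"
}
# usage on the target machine:
#   run_with_bundled_libs ./Release-Nodejs node app.js
```

An alternative with the same effect but no wrapper is `patchelf --set-rpath '$ORIGIN' mymodule.node`. Note that substituting a different-version libc this way can itself break the process, so bundling libc is best avoided.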

How to run a systemd service as a dedicated user for rtorrent?

Posted: 31 May 2021 10:01 AM PDT

I am trying to get rtorrent to run as a systemd service, but the service wouldn't start. Here's the config file and any log I can get. Ask for more info if you need to. I am running:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal
$ systemctl status rtorrent
● rtorrent.service - rTorrent
     Loaded: loaded (/etc/systemd/system/rtorrent.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Thu 2021-05-27 08:52:43 EEST; 5min ago
    Process: 20199 ExecStart=/usr/bin/tmux new-session -d -P -s rt -n rtorrent /usr/bin/rtorrent (code=exited, status=0/SUCCESS)
    Process: 20205 ExecStop=/usr/bin/tmux send-keys -t rt:rtorrent C-q (code=exited, status=1/FAILURE)
   Main PID: 20201 (code=exited, status=0/SUCCESS)

May 27 08:52:43 $MACHINE systemd[1]: Starting rTorrent...
May 27 08:52:43 $MACHINE tmux[20199]: rt:
May 27 08:52:43 $MACHINE systemd[1]: Started rTorrent.
May 27 08:52:43 $MACHINE tmux[20205]: no server running on /tmp/tmux-110/default
May 27 08:52:43 $MACHINE systemd[1]: rtorrent.service: Control process exited, code=exited, status=1/FAILURE
May 27 08:52:43 $MACHINE systemd[1]: rtorrent.service: Failed with result 'exit-code'.

The config file:

[Unit]
Description=rTorrent
Requires=network.target local-fs.target

[Service]
Type=forking
KillMode=none
User=rt
Group=adm
ExecStart=/usr/bin/tmux new-session -d -P -s rt -n rtorrent /usr/bin/rtorrent
ExecStop=/usr/bin/tmux send-keys -t rt:rtorrent C-q
WorkingDirectory=/tmp/tmux-110/

[Install]
WantedBy=multi-user.target

Some more logs:

$ journalctl -u rtorrent
May 27 08:52:43 $MACHINE systemd[1]: Starting rTorrent...
May 27 08:52:43 $MACHINE tmux[20199]: rt:
May 27 08:52:43 $MACHINE systemd[1]: Started rTorrent.
May 27 08:52:43 $MACHINE tmux[20205]: no server running on /tmp/tmux-110/default
May 27 08:52:43 $MACHINE systemd[1]: rtorrent.service: Control process exited, code=exited, status=1/FAILURE
May 27 08:52:43 $MACHINE systemd[1]: rtorrent.service: Failed with result 'exit-code'.

So far I have added the user rt to the adm group, but I can't figure out why tmux can't be started as rt. I also authorized the rt user to launch services via the enable-linger option: loginctl enable-linger rt. I first added the rt user with: sudo adduser --system --gecos "rTorrent Client" --disabled-password --group --home /home/rt rt. How do I make rtorrent run as a systemd service with tmux as a dedicated user? Or is there any other way to run it as a service with systemd? Any help is really appreciated.

UPDATE: So, just to get a fresh start, I have created a new user named rtorrent with: sudo adduser --system --gecos "rTorrent System Client" --disabled-password --group --home /home/rtorrent rtorrent and changed the /etc/systemd/system/rtorrent.service file to this (also adding system.daemon = true in /home/rtorrent/.rtorrent.rc, because of this post):

[Unit]
Description=rTorrent System Daemon
After=network.target

[Service]
Type=simple
User=rtorrent
Group=rtorrent

ExecStartPre=-/bin/rm -f /home/rtorrent/.session/rtorrent.lock
ExecStart=/usr/bin/rtorrent -o import=/home/rtorrent/.rtorrent.rc
Restart=on-failure
RestartSec=3

[Install]
WantedBy=multi-user.target

But after all this I get the following error:

$ systemctl status rtorrent
● rtorrent.service - rTorrent System Daemon
     Loaded: loaded (/etc/systemd/system/rtorrent.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Thu 2021-05-27 10:12:26 EEST; 2s ago
    Process: 22855 ExecStartPre=/bin/rm -f /home/rtorrent/.session/rtorrent.lock (code=exited, status=0/SUCCESS)
    Process: 22856 ExecStart=/usr/bin/rtorrent -o import=/home/rtorrent/.rtorrent.rc (code=exited, status=255/EXCEPTION)
   Main PID: 22856 (code=exited, status=255/EXCEPTION)

Why is this happening? What am I doing wrong?

UPDATE 2: One more thing: this post suggests not dropping any files in /etc/systemd/system/, but instead dropping them in /usr/local/lib/systemd/system, which on Debian-based systems is /lib/systemd/system. Therefore, I moved the unit file there, and when enabling it, a symlink was automatically created in /etc/systemd/system/. But still, I get this error:

$ sudo systemctl status rtorrent
● rtorrent.service - rTorrent System Daemon
     Loaded: loaded (/lib/systemd/system/rtorrent.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Thu 2021-05-27 10:39:14 EEST; 924ms ago
    Process: 24530 ExecStartPre=/bin/rm -f /home/rtorrent/.session/rtorrent.lock (code=exited, status=0/SUCCESS)
    Process: 24531 ExecStart=/usr/bin/rtorrent -o import=/home/rtorrent/.rtorrent.rc (code=exited, status=255/EXCEPTION)
   Main PID: 24531 (code=exited, status=255/EXCEPTION)

What would be a regex to capture DHCP host registration records?

Posted: 31 May 2021 07:56 AM PDT

I need a regex to capture DHCP host registration records.

I need to parse through a dhcpd.conf file for all host reservations, and if possible capture them to a file or Bash array. So if host reservations are defined as follows,

    host Service-Ethernet {
        hardware ethernet 11:11:11:11:11:11;
        fixed-address 192.168.0.3;
        option host-name "service";
    }

    host Service-Wifi {
        hardware ethernet 22:22:22:22:22:22;
        fixed-address 192.168.0.4;
    }

    host Test {
        hardware ethernet 33:33:33:33:33:33;
        fixed-address 192.168.0.5
        option host-name "test";
    }

Output to file or Bash array...

11:11:11:11:11:11, 192.168.0.3, service
22:22:22:22:22:22, 192.168.0.4,
, 192.168.0.5, test

If one of the three parameters is missing, leave it blank.

Even if the expression has to be applied line by line, that is still acceptable. I can wrap the expression via a Bash script that reads the configuration file line by line, of course.
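Rather than one regex, a small per-block awk state machine may be simpler (a sketch; it assumes the one-directive-per-line layout shown, resetting the three fields at each `host` line so missing values come out blank):

```shell
parse_hosts() {
    awk '
        /^[[:space:]]*host[[:space:]]/ { mac = ""; ip = ""; hn = "" }
        /hardware ethernet/ { mac = $3; sub(/;$/, "", mac) }
        /fixed-address/     { ip  = $2; sub(/;$/, "", ip)  }
        /option host-name/  { hn  = $3; gsub(/[";]/, "", hn) }
        /^[[:space:]]*}/    { print mac ", " ip ", " hn }
    '
}
# parse_hosts < /etc/dhcp/dhcpd.conf
# or straight into a Bash array:
#   mapfile -t RESERVATIONS < <(parse_hosts < /etc/dhcp/dhcpd.conf)
```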

How to create a mkfs.ext4 file system on an SD card that can be written to by anyone?

Posted: 31 May 2021 07:09 AM PDT

I'm using up to date Arch Linux 5.12.5.

SD cards from time to time become corrupted, and if not bricked they have to be reset/reformatted.

I do this as follows:

# 1. unmount the card / make sure it's unmounted
umount /dev/mmcblk0
umount /dev/mmcblk0p1

# 2. wipe the card. After this the card cannot be mounted because
#    there is no partition. There's nothing on it at all.
echo password | sudo -S dd bs=4M if=/dev/zero of=/dev/mmcblk0 oflag=sync

# 3. create a GPT partition table
#    the "-s" defaults the go-ahead answer to "yes" so that
#    no user input is necessary; rather confusingly the
#    command is 'mklabel' for creating a partition table!
sudo parted -s /dev/mmcblk0 mklabel gpt

# 4. create an ext4 file system
#    HAVING THE "-E root_owner=$UID:$GID" IS ESSENTIAL,
#    OTHERWISE THE PARTITION CAN ONLY BE WRITTEN TO AS ROOT
sudo mkfs.ext4 -F -O ^64bit -E root_owner=$UID:$GID -L 'SD_CARD' '/dev/mmcblk0'

If I use the line below, i.e. omit setting the UID:GID as above, then the file system is owned by root and the SD card cannot be written to by anyone other than root:

sudo mkfs.ext4 -F -O ^64bit -L 'SD_CARD' '/dev/mmcblk0'

When I use the line below, which sets the UID:GID to my UID:GID, then the file system is owned by me and the SD card cannot be written to by anyone other than me:

sudo mkfs.ext4 -F -O ^64bit -E root_owner=$UID:$GID -L 'SD_CARD' '/dev/mmcblk0'  

How do I set the UID:GID so that the SD card file system can be written to by anyone?
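One alternative worth sketching: root_owner only sets the owner of the filesystem's root directory, so instead of baking in one UID, keep root ownership and, after the first mount, open up the top-level directory the way /tmp is (the mountpoint below is illustrative):

```shell
make_world_writable() {
    # rwx for all users, plus the sticky bit so users can only
    # delete their own files (mode 1777, like /tmp)
    chmod 1777 "$1"
}
# after mounting the card:
#   sudo mount /dev/mmcblk0 /mnt/sd && sudo make_world_writable /mnt/sd
```

The mode is stored in the ext4 root inode, so it persists across unmounts and remounts on any machine.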

How to exit from videotest in the grub?

Posted: 31 May 2021 08:01 AM PDT

I am checking different video resolutions in the GRUB menu using the videotest and vbetest programs, and I can't go back to the GRUB command line after using these programs. The system doesn't respond, as if hanging. Only VirtualBox's "power off the machine" helps.

Question: how do I exit from this mode? Maybe I am using these programs the wrong way?

My actions:

  1. Enter the GRUB menu while booting, then go to command mode.
  2. Run videotest 800x600

  3. Look at the new resolution example

  4. Then, after pressing any key, the system stops responding and I can't go back to the GRUB menu. I get this screen:

    [screenshot]

OpenLDAP: rfc2307bis instead of nis schema

Posted: 31 May 2021 07:02 AM PDT

I'm looking for a way to create an empty LDAP directory with the rfc2307bis schema. On Debian, when installing slapd or reconfiguring it with dpkg-reconfigure, the nis schema is used by default. How do I remove it or replace it with rfc2307bis, either at initialization or afterwards?

Unknown runlevel on Ubuntu 14.04, services not starting on boot

Posted: 31 May 2021 10:04 AM PDT

I rent a VPS from a VPS company and run an Ubuntu 14.04 web server there. Recently it was suspended by my provider for a while. After the suspension period (1-2 days), the VPS boots but cannot reach any runlevel.

root@vps:/# runlevel
unknown

This, in turn, means no Upstart services are starting on boot, as the "useful stuff" requires runlevel [2345].

I can start individual services manually with initctl, unless they have dependencies which the boot did not start automatically.

I cannot find anything useful/understandable from logs. Please do ask if you want specific log entries and I can try to find them.

The server is (was) running PHP7, Nginx, MySQL, Redis, Minecraft Server, Mumble Server. The server was operating fine (and survived multiple reboots) before the suspension period.

Here is my initctl list after a fresh reboot: http://pastebin.com/fcfcnxBU. Please do ask for specific details as I'm not entirely sure where to look for them (e.g. log files, debug artifacts, files and directories, etc.).

EDIT: some progress via tinkering:

It seems the filesystem and/or network stack is not started correctly when booting. When I do the following:

$ ifup --all
$ initctl emit static-network-up
$ initctl emit filesystem
... Ctrl-C to exit loop
$ initctl emit local-filesystems

Then I get

$ runlevel
N 2

And my server services (at least most of them) are running normally.

I'll check if there is a single command of these that makes the boot init sequence continue normally.

EDIT2:

  • ifup --all brings up a venet0:0 which is tied to the VPS' public static IP.

  • emit static-network-up does nothing.

  • emit filesystem + Ctrl-C starts

    • rsyslog
    • ssh
    • minecraft-server
    • cron
    • xinetd
    • console
    • tty2
    • upstart-file-bridge
    • mysql

    and stops

    • plymouth
    • plymouth-upstart-bridge
  • emit local-filesystems starts

    • avahi-daemon
    • systemd-logind
    • mountall.sh
    • dbus
    • networking

    and something called network-interface-security (network-interface/lo) start/running disappears.

Bash: pipe 'find' output into 'readarray'

Posted: 31 May 2021 09:16 AM PDT

I'm trying to search for files using find, and put those files into a Bash array so that I can do other operations on them (e.g. ls or grep them). But I can't figure out why readarray isn't reading the find output as it's piped into it.

Say I have two files in the current directory, file1.txt and file2.txt. So the find output is as follows:

$ find . -name "file*"
./file1.txt
./file2.txt

So I want to pipe that into an array whose two elements are the strings "./file1.txt" and "./file2.txt" (without quotes, obviously).

I've tried this, among a few other things:

$ declare -a FILES
$ find . -name "file*" | readarray FILES
$ echo "${FILES[@]}"; echo "${#FILES[@]}"

0

As you can see from the echo output, my array is empty.

So what exactly am I doing wrong here? Why is readarray not reading find's output as its standard input and putting those strings into the array?
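The usual explanation for this behavior is that each side of a bash pipeline runs in its own subshell, so the array readarray fills disappears when the pipeline finishes; process substitution keeps readarray running in the current shell. A minimal sketch of both forms (the /tmp/readarray-demo path and demo files are made up for this example):

```shell
#!/bin/bash
# Sketch: why piping into readarray leaves the array empty, and a
# form that works. Demo files are created under /tmp (hypothetical).
demo=/tmp/readarray-demo
mkdir -p "$demo" && cd "$demo"
touch file1.txt file2.txt

# Pipe form: readarray runs in a subshell, so FILES is lost afterwards
find . -name "file*" | readarray FILES
echo "${#FILES[@]}"

# Process-substitution form: readarray runs in the current shell
readarray -t FILES < <(find . -name "file*" | sort)
echo "${#FILES[@]}"
```

The -t flag strips the trailing newline from each element, which is usually what you want when the array elements are file paths.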

CentOS 7 Python issue: “-bash: python: command not found”

Posted: 31 May 2021 09:03 AM PDT

I asked this on Stack Overflow, but just realized it might be better suited here. If I need to delete it over there or should not have posted here, please let me know. I'm still new to this site. Thanks in advance!

I'm using CentOS 7 and was trying to install Python 3.4 alongside the default Python 2.6 (2.7?) install. I was attempting to add an alias to my bashrc file to make Python 3.4 the default in the shell. It did not work, so I commented out the change and re-sourced bashrc, and now the system acts as if it can no longer find Python, default or otherwise.

Just typing "python" returns:

-bash: python: command not found   

which python gives:

/usr/bin/which: no python in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/myusername/.local/bin:/home/myusername/bin)   

However there is a python install in both /usr/bin and /usr/sbin.

alternatives --list | grep -i python yields:

    python  auto  /usr/bin/python3.4  

command -v python returns nothing.

type -a python gives:

 -bash: type: python: not found  

declare -p PATH outputs

declare -x PATH="/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/myusername/.local/bin:/home/myusername/bin"

I am not entirely sure where to go from here, and any help would be much appreciated.

I do seem to have /usr/bin/python:

$ ls -l /usr/bin/python
lrwxrwxrwx 1 root root 24 Jun 25 15:39 /usr/bin/python -> /etc/alternatives/python

but:

$ ls -l $(readlink -f /usr/bin/python) gives:

ls: cannot access /usr/bin/python3.4: No such file or directory  

I do not know if this is relevant, but /etc/alternatives/python was shown in pink in the terminal.
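That pink/red coloring is typically how ls highlights a broken symlink, which fits the readlink error above: the link chain exists but its final target does not. A sketch of how such a dangling chain behaves (the /tmp/alt-demo paths are made up for illustration):

```shell
#!/bin/sh
# Sketch (hypothetical paths): a two-step symlink chain whose final
# target is missing, like /usr/bin/python -> /etc/alternatives/python
# -> /usr/bin/python3.4 appears to be here.
mkdir -p /tmp/alt-demo/etc
ln -sf /tmp/alt-demo/etc/python3.4 /tmp/alt-demo/etc/alternatives-python
ln -sf /tmp/alt-demo/etc/alternatives-python /tmp/alt-demo/python

ls -l /tmp/alt-demo/python           # the link itself looks fine
readlink -f /tmp/alt-demo/python     # resolves to the missing target
test -e /tmp/alt-demo/python || echo "dangling: final target missing"
```

Note that test -e follows symlinks, so it reports failure for a dangling link even though the link file itself exists.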

Edit 2:

ls -l /usr/local/bin/ prints:

-rwxr-xr-x 1 root root   101 Sep  4  2014 2to3-3.4
-rwxr-xr-x 1 root root   241 Sep  4  2014 easy_install-3.4
-rwxr-xr-x 1 root root    99 Sep  4  2014 idle3.4
-rwxr-xr-x 1 root root   213 Sep  4  2014 pip3.4
-rwxr-xr-x 1 root root    84 Sep  4  2014 pydoc3.4
-rwxr-xr-x 2 root root 17544 Sep  4  2014 python3.4
-rwxr-xr-x 2 root root 17544 Sep  4  2014 python3.4m
-rwxr-xr-x 1 root root  3066 Sep  4  2014 python3.4m-config
-rwxr-xr-x 1 root root   236 Sep  4  2014 pyvenv-3.4

So perhaps a linking error still?

Edit 3:

This is the series of commands which I used to install python 3.

yum install scl-utils
sudo yum install scl-utils
sudo wget https://www.softwarecollections.org/en/scls/rhscl/python33/epel-7-x86_64/download/rhscl-python33-epel-7-x86_64.noarch.rpm
sudo yum install rhscl-python33-*.noarch.rpm

Difference between sdX and vdX

Posted: 31 May 2021 06:20 AM PDT

When I use Ubuntu and CentOS, I see /dev/sda and /dev/vda, and I can't understand the difference between the two.

Where are all the possibilities for storing a log file?

Posted: 31 May 2021 07:07 AM PDT

I'm writing a program and would like it to store a log file. The problem is that the program really shouldn't be run as root.

So if I want to uphold the conventions for where files are placed, where could I keep the log file, if not in /var/log, such that a normal user has permission to write it?

Edit: I'm using Arch Linux.
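One convention worth noting here: the XDG Base Directory specification designates ~/.local/state (overridable via $XDG_STATE_HOME) for per-user state data such as logs and history, and that directory is writable without root. A sketch, where the "myprog" name is made up:

```shell
#!/bin/sh
# Sketch: writing a per-user log under the XDG state directory
# ($XDG_STATE_HOME, defaulting to ~/.local/state). "myprog" is a
# hypothetical program name.
log_dir="${XDG_STATE_HOME:-$HOME/.local/state}/myprog"
mkdir -p "$log_dir"
printf '%s program started\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$log_dir/myprog.log"
cat "$log_dir/myprog.log"
```

A program that also needs to run as a system service would typically switch to /var/log (or the journal) only in that mode, keeping the per-user path for unprivileged runs.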
