Thursday, August 5, 2021

Recent Questions - Unix & Linux Stack Exchange



Vim-like terminal email client?

Posted: 05 Aug 2021 10:23 AM PDT

I'm looking for a Vim-like email client (TUI or GUI).

For example:

  • 5dd should delete the next 5 email messages starting from the cursor.
  • /bar should search for "bar" in email titles and messages.
  • ' should go to bookmarked messages.
  • m should bookmark messages.
  • : should enter command mode.

If those keybindings aren't predefined, there should be a way to define custom keybindings to do the job. It should also be possible to define an external editor for composing emails.

Does anyone know of some software that satisfies that?

(For what it's worth, I've tried neomutt, but I was unable to do some of these things, such as 5dd or '. I don't know whether they are actually possible there, so I decided to ask here in any case.)

Cannot install jq version 1.6 in Docker

Posted: 05 Aug 2021 10:07 AM PDT

Our Dockerfile uses FROM python:3.7-slim-buster as the base image. One of the lines in our Dockerfile is RUN apt-get install jq -y. When we exec into a running container built from this image, we get:

jq --version
jq-1.5-1-a5b5cbe

Per https://stedolan.github.io/jq/, version 1.6 was released in 2018, and we'd like to use 1.6 in our app. When we try:

  • RUN apt-get install jq=1.6.0 -y fails with E: Version '1.6.0' for 'jq' was not found
  • RUN apt-get install jq=1.6 -y fails with E: Version '1.6' for 'jq' was not found

When I run jq --version locally on my Mac, I receive jq-1.6. How can we get version 1.6 for our docker image?
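A possible workaround (a sketch, not from the question): Debian buster's repositories only carry jq 1.5, so apt-get cannot install 1.6 from them; instead, the static 1.6 binary can be fetched from the upstream GitHub releases page (release URL assumed from the stedolan/jq releases and worth verifying):

```dockerfile
FROM python:3.7-slim-buster

# buster's apt repos only ship jq 1.5; fetch the static 1.6 binary instead
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -fsSL -o /usr/local/bin/jq \
      https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 \
 && chmod +x /usr/local/bin/jq \
 && jq --version   # should report jq-1.6
```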

systemd-resolved+VPN: 2nd DNS server ignored (L2TP)

Posted: 05 Aug 2021 09:54 AM PDT

I'm connecting to a corporate VPN via network-manager-l2tp with a pre-shared key and username+password. I automatically get a correct DNS server IP, which resolves the company's hostnames correctly.

However, public internet names aren't resolved (I tested with www.google.com throughout). Put differently: I can't get systemd-resolved to resolve via two DNS servers at the same time (1.1.1.1 and the corporate DNS). It's strictly one or the other, and I've tried a lot of different configs...

Question: How do I configure systemd-resolved to use both a corporate VPN's DNS and the regular DNS servers at the same time?

I don't care whether it's 'conditional forwarding' based on domain or falling back to the 2nd DNS after the 1st fails; I couldn't get either approach to work. My guess is this has something to do with L2TP, but I can't find any solutions that apply to my case.

I use: NetworkManager 1.30.0, systemd-resolved (systemd 247.3) and openresolv (instead of old resolvconf) on Pop OS. Both services are up and running.

resolv.conf -> /run/systemd/resolve/stub-resolv.conf

# This file is managed by man:systemd-resolved(8). Do not edit.
[...]
nameserver 127.0.0.53
options edns0 trust-ad
search fritz.box

/etc/systemd/resolved.conf

[Resolve]
FallbackDNS=1.1.1.1 corp.ip.add.ress

resolvectl status output after connecting to VPN

Global
            Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
     resolv.conf mode: stub
 Fallback DNS Servers: 1.1.1.1 corp.ip.add.ress

Link 2 (enp6s0)
    Current Scopes: DNS
         Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 192.168.178.1
       DNS Servers: 192.168.178.1
        DNS Domain: fritz.box

Link 3 (ip_vti0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 23 (ppp0)
    Current Scopes: DNS
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: corp.ip.add.ress
       DNS Servers: 1.1.1.1 corp.ip.add.ress

I've tried a lot of different things, but what you see above is a good starting point to come up with a robust, final solution.
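For reference, the split-DNS behaviour described here can in principle be expressed per link with resolvectl; a sketch, assuming the link names from the output above and a hypothetical internal domain corp.example.com:

```
# send only corp-domain queries to the VPN DNS, everything else to the LAN DNS
resolvectl dns ppp0 corp.ip.add.ress
resolvectl domain ppp0 '~corp.example.com'   # '~' marks a routing-only domain
resolvectl default-route ppp0 false
```

Changes made this way do not persist across reconnects, so they mainly serve to test whether a configuration works at all before baking it into NetworkManager.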

Git - how to add/link subfolders into one git-repository directory

Posted: 05 Aug 2021 10:25 AM PDT

Assuming I have a file structure like this:

├── Project-1/
│   ├── files/
│   └── special-files/
├── Project-2/
│   ├── files/
│   └── special-files/
└── Project-3/
    ├── files/
    └── special-files/

Now I want to create a Git repository that includes all the special-files folders. If these were regular files, I could create hardlinks (ln ./Project-1/special-files ./Git-Project/special-files-1, and so on), so I would get:

Git-Project/
├── .git
├── .gitignore
├── special-files-1/
├── special-files-2/
└── special-files-3/

However, hardlinks do not work for directories, and symlinks are not followed by Git. Is there a way to collect/link these folders into a Git repository directory?

wlan0 down and ip link set wlan0 up doesn't work

Posted: 05 Aug 2021 09:46 AM PDT

I'm running Kali Linux with an Intel AX210 wireless card. I installed the -59 ucode firmware. I'm getting the following dmesg output.

I saw a post about deleting the file iwlwifi-ty-a0-gf-a0.pnvm in /lib/firmware, but I don't see that file in my /lib/firmware directory. Could it be somewhere else, or is that an old fix?

dmesg | grep iwlwifi

[ 3.956693] iwlwifi 0000:04:00.0: enabling device (0000 -> 0002)

[ 3.968428] iwlwifi 0000:04:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-59.ucode

[ 3.968437] iwlwifi 0000:04:00.0: api flags index 2 larger than supported by driver

[ 3.968452] iwlwifi 0000:04:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 93.8.63.28

[ 3.968672] iwlwifi 0000:04:00.0: loaded firmware version 59.601f3a66.0 ty-a0-gf-a0-59.ucode op_mode iwlmvm

[ 3.968682] iwlwifi 0000:04:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)

[ 4.102730] iwlwifi 0000:04:00.0: Detected Intel(R) Wi-Fi 6 AX210 160MHz, REV=0x420

[ 4.258916] iwlwifi 0000:04:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0.pnvm (-2)

[ 4.328799] iwlwifi 0000:04:00.0: base HW address: a4:6b:b6:3d:61:fc

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 8c:04:ba:99:7c:57 brd ff:ff:ff:ff:ff:ff inet 192.168.1.218/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0 valid_lft 82392sec preferred_lft 82392sec inet6 fe80::8e04:baff:fe99:7c57/64 scope link noprefixroute valid_lft forever preferred_lft forever

3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 72:4b:92:74:8c:bc brd ff:ff:ff:ff:ff:ff permaddr a4:6b:b6:3d:61:fc

I run

ip link set wlan0 up

but it still shows as DOWN.

Any thoughts would be much appreciated. Thank you.

GAWK Syntax error

Posted: 05 Aug 2021 09:44 AM PDT

I am getting an error while using the script below on Linux:

FILENAME_DIR=/u01/EDQ_122140/user_projects/domains/base_domain/config/fmwconfig/edq/oedq.local.home/landingarea/WC/world-check.xml.gz
FILENAME_DIR=/u01/EDQ_122140/user_projects/domains/base_domain/config/fmwconfig/edq/oedq.local.home/landingarea/WC/world-check-keywords.xml
FILENAME_DIR=/u01/EDQ_122140/user_projects/domains/base_domain/config/fmwconfig/edq/oedq.local.home/landingarea/WC/world-check-native-character-names.xml.gz

gawk '/<record |<id_numbers>|<id|<\/id_numbers>|<names>|first_name|<last_name><aliases>|<country>/ { print } ' <<< ${FILENAME_DIR}${FILENAME} > ${FILENAME_DIR}${FILENAME}TEMP
gawk '/<record/ {++a: fn=sprintf("'${FILENAME_DIR}'record_%02d.vcf", a): print "Writing: ", fn } { print $0 >> fn; } ' <<< ${FILENAME_DIR}world_check_list_pepTEMP > $logfile

for record_file in `ls ${FILENAME_DIR}record*`
do
   cat $record_file | sed -e :a -e '$!N;s/\n */ /:ta' -e 'P;D' | sed 's/<id_numbers>.*<\/id_numbers>//' | sed 's/.*<names>\(.*\).*<country>\(.*\)/\1~\2/' | sed 's/<[^<]*>//g' > ${record_file}_records
   cat $record_file | sed -e :a -e '$!N;s/\n */ /:ta' -e 'P;D' | sed 's/.*<id_numbers>\(.*\)<\/id_numbers>.*/\1~TT/' | sed 's/<[^<]*>//g' > ${record_file}_nrc_id
   rm $record_file
done
cat ${FILENAME_DIR}*records > ${FILENAME_DIR}final_file.dat
cat ${FILENAME_DIR}*nrc_id > ${FILENAME_DIR}final_file_nrc.dat
rm ${FILENAME_DIR}*records
rm ${FILENAME_DIR}*nrc_id

Error :

$ ./ABC.sh
: not found_Format.sh: line 4:
gawk: cmd. line:1: /<record/ {++a: fn=sprintf("/u01/EDQ_122140/user_projects/domains/base_domain/config/fmwconfig/edq/oedq.local.home/landingarea/WC/world-check-native-record_%02d.vcf", a): print "Writing: ", fn } { print $0 >> fn; }
gawk: cmd. line:1:               ^ syntax error
gawk: cmd. line:1: /<record/ {++a: fn=sprintf("/u01/EDQ_122140/user_projects/domains/base_domain/config/fmwconfig/edq/oedq.local.home/landingarea/WC/world-check-native-record_%02d.vcf", a): print "Writing: ", fn } { print $0 >> fn; }
gawk: cmd. line:1:                                                                                ^ syntax error
: not found_Format.sh: line 7:
./Watchlist_Format.sh: line 7: syntax error at line 19: `for' unmatched

Moving files from folder with carriage return in folder name

Posted: 05 Aug 2021 10:36 AM PDT

I provided my admin with a shell script to rename a few folders, but for some reason those folders now contain carriage returns in their names (the script worked fine in UAT, and I'm not quite sure what the difference between the two environments is). My application creates a folder if it can't find it, so now I have two folders containing files that need to be merged.

So if I have folders: testfolder\r and testfolder, how would I correctly write the following command to move all files from the "CR" folder into the correct folder, preserving the contents of the correct folder in the event of any filename collisions?

mv testfolder\r/* testfolder/   
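One way to sketch this in a POSIX shell, assuming the folder names are exactly testfolder and testfolder followed by a single carriage return: generate the CR with printf, use mv -n so files already present in the correct folder are never overwritten, then try to remove the leftover directory:

```shell
cr=$(printf '\r')                       # a literal carriage-return character
mv -n "testfolder${cr}"/* testfolder/   # -n: never overwrite existing files
rmdir "testfolder${cr}" 2>/dev/null || echo "folder not empty: some names collided"
```

With -n, colliding files stay behind in the CR folder, so a failing rmdir tells you collisions occurred and lets you inspect them by hand.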

Arch Linux: Cura throws: ValueError: PyCapsule_GetPointer called with incorrect name

Posted: 05 Aug 2021 09:21 AM PDT

When I type cura in my terminal, I get this error and the program does not start:

john@arch-thinkpad ~> cura
/usr/lib/python3.9/site-packages/UM/PluginRegistry.py:4: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
/usr/lib/python3.9/site-packages/UM/Settings/InstanceContainer.py:53: DeprecationWarning: invalid escape sequence \d
  version_regex = re.compile("\nversion ?= ?(\d+)")
/usr/lib/python3.9/site-packages/UM/Settings/InstanceContainer.py:55: DeprecationWarning: invalid escape sequence \w
  type_regex = re.compile("\ntype ?= ?(\w+)")
/usr/lib/python3.9/site-packages/UM/VersionUpgradeManager.py:98: DeprecationWarning: invalid escape sequence \.
  ".*\.lock",       # Don't upgrade the configuration file lock. It's not persistent.
Error in sys.excepthook:
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/usr/lib/python3.9/site-packages/cura/CuraApplication.py", line 57, in <module>
    from cura.Arranging.ArrangeObjectsJob import ArrangeObjectsJob
  File "/usr/lib/python3.9/site-packages/cura/Arranging/ArrangeObjectsJob.py", line 11, in <module>
    from cura.Arranging.Nest2DArrange import arrange
  File "/usr/lib/python3.9/site-packages/cura/Arranging/Nest2DArrange.py", line 5, in <module>
    from pynest2d import Point, Box, Item, NfpConfig, nest
ValueError: PyCapsule_GetPointer called with incorrect name

Original exception was:
Traceback (most recent call last):
  File "/usr/bin/cura", line 187, in <module>
    from cura.CuraApplication import CuraApplication
  File "/usr/lib/python3.9/site-packages/cura/CuraApplication.py", line 57, in <module>
    from cura.Arranging.ArrangeObjectsJob import ArrangeObjectsJob
  File "/usr/lib/python3.9/site-packages/cura/Arranging/ArrangeObjectsJob.py", line 11, in <module>
    from cura.Arranging.Nest2DArrange import arrange
  File "/usr/lib/python3.9/site-packages/cura/Arranging/Nest2DArrange.py", line 5, in <module>
    from pynest2d import Point, Box, Item, NfpConfig, nest
ValueError: PyCapsule_GetPointer called with incorrect name

What can I do to fix this? I already tried updating all pip packages, but the error remains.

Thank you for your help.

Source additional files on login (~/.bash_profile? ~/.bashrc? /etc/profile? /etc/bashrc?)

Posted: 05 Aug 2021 09:17 AM PDT

I have a few special aliases and such set up on a few servers (CentOS 7, bash shell).

Some of them are server-specific (e.g. which IP points to the internet, the server's hostname, etc.), while others are relevant to all of them (command aliases and such).

I've set each server's unique aliases in the ~/.bash_profile file, which sources another file, distributed via Git, with all the non-unique environment variables. That way, whenever I add an alias or variable I'd like all servers to have, I add it to that file and push it via Git.

However, I've come to understand that on some (seemingly rare) occasions, the non-unique aliases and variables are only available after sourcing ~/.bash_profile manually after logging in. Most of the time it works fine as is: I log in and everything is set, while at other times another source is required.

I've tried sourcing the non-uniques file from different locations: a script in /etc/profile.d/, /etc/bashrc, /etc/profile, and ~/.bashrc, which resulted in a loop that hangs login.

The bash_profile looks something like this:

# .bash_profile
source "/etc/non_uniques_file"

#Unique to this server
alias servername=[name of server]...

And the non-uniques file looks something like this:

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export HISTTIMEFORMAT="%d/%m/%y %T "
export HISTSIZE=2000
export HISTFILESIZE=20000...

My question is - where would be the correct place to source the uniques file from, to have it set system-wide and under all circumstances?

What could be the reason these variables are set in some logins, but not on others, despite the same login method?
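For reference, a conventional layout that avoids both the sourcing loop and the "sometimes not sourced" symptom is to put everything into ~/.bashrc (read by interactive non-login shells) and have ~/.bash_profile only delegate to it; a sketch with the file names from the question (/etc/non_uniques_file is the shared file, which must then no longer source ~/.bashrc itself, or the loop returns):

```
# ~/.bash_profile -- read by login shells only; delegate to ~/.bashrc
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# ~/.bashrc -- read by interactive non-login shells; source the shared file here
if [ -f /etc/non_uniques_file ]; then
    . /etc/non_uniques_file
fi
alias servername='...'   # server-specific items stay in this file
```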

Thanks!

xorg not recognizing monitor but wayland does

Posted: 05 Aug 2021 08:57 AM PDT

So I have two monitors: my main one (hooked up to my graphics card) and my secondary one (hooked up to the motherboard's HDMI). I switched from Arch Linux to Debian and set everything up, then I ran arandr to set up the displays, but my secondary monitor wasn't detected. During the lock screen (gdm) the monitor works, and after doing some digging, I found gdm was starting with Wayland and then logging into awesomewm, meaning Xorg isn't detecting my secondary monitor.

Output of xrandr -q

Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 16384 x 16384
DisplayPort-0 disconnected primary (normal left inverted right x axis y axis)
DisplayPort-1 disconnected (normal left inverted right x axis y axis)
HDMI-A-0 disconnected (normal left inverted right x axis y axis)
HDMI-A-1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 510mm x 287mm
   1920x1080     60.00*+  60.00    50.00    50.00    59.94
   1680x1050     59.88
   1280x1024     60.02
   1440x900      59.90
   1360x768      60.02
   1280x800      60.00
   1280x720      60.00    50.00    59.94
   1024x768      60.00
   800x600       60.32
   720x576       50.00
   720x480       60.00    59.94
   640x480       60.00    59.94
DVI-D-0 disconnected (normal left inverted right x axis y axis)

Get an accelerated X11 driver for "XGI Z7" GPU under Alpine Linux v3.14?

Posted: 05 Aug 2021 09:03 AM PDT

I am trying to get a modern Linux up and running on an elderly PC with a Vortex86DX CPU (i586) and a built-in GPU which is reported in dmesg as

[   21.246156] Console: switching to colour frame buffer device 100x37
[   21.256977] sisfb: 2D acceleration is enabled, y-panning enabled (auto-max)
[   21.257003] fb0: XGI Z7 frame buffer device version 1.8.9
[   21.257017] sisfb: Copyright (C) 2001-2005 Thomas Winischhofer

To my understanding this GPU was supported by the SIS module, but when trying to run startx the XGI driver is attempted, fails, and then the fbdev driver is used:

[  2994.516] (==) Matched xgi as autoconfigured driver 0
[  2994.516] (==) Matched modesetting as autoconfigured driver 1
[  2994.516] (==) Matched fbdev as autoconfigured driver 2
[  2994.516] (==) Matched vesa as autoconfigured driver 3
[  2994.516] (==) Assigned the driver to the xf86ConfigLayout
[  2994.517] (II) LoadModule: "xgi"
[  2994.532] (WW) Warning, couldn't open module xgi
[  2994.532] (EE) Failed to load module "xgi" (module does not exist, 0)

The fbdev driver does its job but is rather slow. The VideoDriverFAQ at https://wiki.freedesktop.org/xorg/VideoDriverFAQ/ mentions that the sis driver should be used, but it is clearly not autodetected. The sis module is available in a package and is installed on the system.

How should I approach this?
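When a driver is installed but not autodetected, X can be pointed at it explicitly with a minimal configuration snippet; a sketch (file name arbitrary, and whether the sis driver actually binds this XGI Z7 chip is an assumption to verify in the Xorg log):

```
# /etc/X11/xorg.conf.d/20-sis.conf
Section "Device"
    Identifier "XGI Z7"
    Driver     "sis"
EndSection
```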

Managing website with a git server

Posted: 05 Aug 2021 08:55 AM PDT

I have a website up and running with nginx, and I have some content in my website that I want to manage with git so that others can collaborate.

Let us say that I have a user named alice and that the root of the website is located somewhere within alice's home directory, as in /home/alice/website/index.html.

Let us also assume that I have set up a git server via gitolite for access-control, the usage of which requires setting up a dummy user that we will call git. There are good reasons for this user being a separate one from alice.

I want alice's website to be able to show the content of some git repository web_repo.git, the origin of which is to be found in /home/git/web_repo.git, as the documentation for gitolite requires.

My first idea is to run a post-receive hook in the repository to sync it with some subdirectory of /home/alice/website, but that requires the user git to copy files into another user's (alice's) home directory, which ends in file permission errors.

There are certain constraints in my build that require the website to be hosted within alice's home directory, and I can't think of a secure and elegant solution to this problem.

I have just started managing this small server and I would kindly appreciate any insights.
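As a sketch of the post-receive idea (paths from the question; the target subdirectory is an assumption), the hook can check the repository out directly into a work tree instead of copying files, with the permission side handled separately, for example by putting git and alice in a shared group or granting an ACL with setfacl:

```shell
#!/bin/sh
# hooks/post-receive inside /home/git/web_repo.git (sketch)
# check the pushed content out into a directory under alice's web root
GIT_WORK_TREE=/home/alice/website/content \
    git --git-dir=/home/git/web_repo.git checkout -f
```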

Is it possible to restrict access with PAM based on groups?

Posted: 05 Aug 2021 08:39 AM PDT

I was wondering if it's possible to create a rule in /etc/security/time.conf that restricts logins not just by username but by the group users belong to.

Changing the case of lines in ed

Posted: 05 Aug 2021 08:13 AM PDT

Can I employ the tr command to change the case of a line or range of lines while working in ed? Or is there another way?
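ed has no built-in case conversion, but GNU ed can hand lines to an external command: w !cmd writes the addressed lines to the command's standard input, and r !cmd reads a command's output back into the buffer. A sketch that upcases line 2 via tr (the temp file name is arbitrary):

```shell
printf 'alpha\nbravo\ncharlie\n' > demo.txt
ed -s demo.txt <<'EOF'
2w !tr a-z A-Z > upper.tmp
2d
1r !cat upper.tmp
w
q
EOF
cat demo.txt   # line 2 should now read BRAVO
```

The same pattern works for any line range by adjusting the addresses on the w, d, and r commands.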

How can I get rid of permission denied error when executing man to read the manpages?

Posted: 05 Aug 2021 10:11 AM PDT

I'm getting a permission denied error when trying to access the manpage of any executable. For example, here's one such command:

% man cat

Here's one of the errors returned:

zsh: permission denied: man

The following is another error returned:

env: 'man': Permission denied

I haven't been able to find out the root cause of these errors.

Here are the outputs requested:

% type man

man is a shell function from /home/procer/.xdg/config/antigen/bundles/robbyrussell/oh-my-zsh/plugins/colored-man-pages/colored-man-pages.plugin.zsh

% ls -l $(command -v man)

ls: cannot access 'man': No such file or directory

Access shell error strings associated with getopt in a bash function

Posted: 05 Aug 2021 08:25 AM PDT

Sometimes I get an error from bash when using functions.

bash: unrecognized option '--binglybong'  

Is it possible to access the error string in a bash function when it is called by the shell?

I am particularly interested in dealing with parsing errors associated with getopt when using the command

opts=$(getopt -o "$shortopts" -l "$longopts" -n "${0##*/}" -- "$@")  

network: download with one device, upload with other device

Posted: 05 Aug 2021 09:43 AM PDT

Situation: I have an ethernet connection that is fast in upload, and I have a wifi connection that is fast in download. In other words: ethernet download is slower than wifi download. Both connections get me to the same gateway/IP address.

I run Fedora 34.

Can I define a download route via one device (wifi) and an upload route via another device (eth)?

Caveat: I searched for a bit, and I guess I miss some language to formulate the question precisely. I invite comments that help me revise the question.

How to replace a string using a math operation [closed]

Posted: 05 Aug 2021 09:52 AM PDT

I have a gcode file in which I need to subtract 101.54 from every Z value. The format is [Z]:[0-9]*.[0-9]*

How can I do this operation with sed, or with another Linux tool?

Like: Z:101.6 would end up being changed to Z:0.06.
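sed cannot do arithmetic, but awk can; a sketch that rewrites the first Z:<number> on each line (the 101.54 offset is hard-coded, and the file names are placeholders):

```shell
awk '{
  if (match($0, /Z:[0-9]+\.?[0-9]*/)) {
    # take the numeric part after "Z:", shift it, and splice it back in
    z = substr($0, RSTART + 2, RLENGTH - 2) - 101.54
    $0 = substr($0, 1, RSTART + 1) z substr($0, RSTART + RLENGTH)
  }
  print
}' input.gcode > output.gcode
```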

Thanks in advance

cd /var/lib/NetworkManger

Posted: 05 Aug 2021 10:34 AM PDT

I'm trying to get access to my network manager using: cd /var/lib/NetworkManger

but it keeps saying "Access denied". Any help? (I'm using Ubuntu, btw.)

Substitute all multi-line and single-line matches in sed

Posted: 05 Aug 2021 08:44 AM PDT

I would like to replace the same pattern multiple times (/g). The pattern can appear in a single line or span several lines (2+).

So, for instance, I'd like to change

EXPECT_EQ(50, var1);
EXPECT_EQ(10,
  var2);
EXPECT_EQ(20,
 an_expression
 _that_takes
 _multiple_lines);

to something like

EXPECT_EQ(var1, 50);
EXPECT_EQ(var2,
  10);

EXPECT_EQ(an_expression
 _that_takes
 _multiple_lines,
 20);

The issue is that

sed -E 's/EXPECT_EQ\(([0-9]+), (.*)\);/EXPECT_EQ(\2, \1);/g' file.txt  

will only update the first line in the example since the second match I want has 2 lines.

Is there a way to substitute all matches regardless of how many lines they span? Something like https://regex101.com/r/QmHCyo/1, but with sed.

Whitespaces or new lines in the result string are of no concern. They can have any format since they are automatically fixed later.

Centos/RHEL 8 systemd service not able to reference script from custom location

Posted: 05 Aug 2021 09:35 AM PDT

I am trying to create a systemd service for a simple script in a custom location (other than /usr/local/bin, etc.).

The script below is at /home/vagrant/temp/test.sh:

#!/bin/sh

MAX=500
i=0;
while true
do
  i=$((i+1));
  sleep 2
  echo "$i = $(date)"
  if [ $i == $MAX ]; then
    exit 0;
  fi;
done;

I created a simple service named usr-print.service under /etc/systemd/system/. The content of the file is:

[Unit]
Description=Simple print service
After=network.target

[Service]
Type=simple
Restart=always
StandardOutput=journal
StandardError=journal
ExecStart=/home/vagrant/temp/test.sh

[Install]
WantedBy=multi-user.target

When I start the service using systemctl daemon-reload; systemctl start usr-print.service, I get the failure below.

● usr-print.service - Simple print service
   Loaded: loaded (/etc/systemd/system/usr-print.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2021-08-02 06:10:39 UTC; 7s ago
  Process: 1504 ExecStart=/home/vagrant/temp/test.sh (code=exited, status=203/EXEC)
 Main PID: 1504 (code=exited, status=203/EXEC)

Aug 02 06:10:39 localhost.localdomain systemd[1]: usr-print.service: Main process exited, code=exited, status=203/EXEC
Aug 02 06:10:39 localhost.localdomain systemd[1]: usr-print.service: Failed with result 'exit-code'.
Aug 02 06:10:39 localhost.localdomain systemd[1]: usr-print.service: Service RestartSec=100ms expired, scheduling restart.
Aug 02 06:10:39 localhost.localdomain systemd[1]: usr-print.service: Scheduled restart job, restart counter is at 5.
Aug 02 06:10:39 localhost.localdomain systemd[1]: Stopped Simple print service.
Aug 02 06:10:39 localhost.localdomain systemd[1]: usr-print.service: Start request repeated too quickly.
Aug 02 06:10:39 localhost.localdomain systemd[1]: usr-print.service: Failed with result 'exit-code'.
Aug 02 06:10:39 localhost.localdomain systemd[1]: Failed to start Simple print service.

But if I move the script and update the usr-print.service file with ExecStart=/usr/local/bin/test.sh the service starts as expected.

Is there a way to use the /home/vagrant/temp/test.sh path in the service file?

Centos 7 - ExecStart=/home/vagrant/temp/test.sh in service file works (shell is running)

Centos/RHEL 8 - ExecStart=/home/vagrant/temp/test.sh in service file DOES NOT work.

Update:

It looks like SELinux is not allowing systemd to execute that script, even though it has execute permissions.

SELinux is preventing /usr/lib/systemd/systemd from execute access on the file /home/vagrant/temp/test.sh.

*****  Plugin catchall (100. confidence) suggests  **************************

If you believe that systemd should be allowed execute access on the user-print-service file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# ausearch -c '(-service)' --raw | audit2allow -M my-service
# semodule -X 300 -i my-service.pp

I noticed a related ticket in Red Hat's tracker: https://bugzilla.redhat.com/show_bug.cgi?id=1832231
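Beyond the audit2allow route quoted above, another commonly used approach (a sketch; it requires the semanage tool, and whether bin_t is the appropriate type here is an assumption) is to relabel the script's directory so systemd is allowed to execute from it:

```
semanage fcontext -a -t bin_t '/home/vagrant/temp(/.*)?'
restorecon -Rv /home/vagrant/temp
```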

Debian 10-Gnome-NonFree Full Install Onto USB Impeded By Installer Not Seeing USB Drive

Posted: 05 Aug 2021 08:21 AM PDT

I'm an Ubuntu user who is exploring Debian and want to install Debian 10 onto a USB drive. I downloaded an ISO that has Debian 10, Gnome and non-free software for drivers, etc from here https://cdimage.debian.org/images/unofficial/non-free/images-including-firmware/10.10.0-live+nonfree/amd64/iso-hybrid/

I had to change the downloaded file's extension from .iso to .img so that the Ubuntu Startup Disk Creator software would see it. After making the live USB installer, I booted from it and chose the default option, i.e. run the Debian 10 live installation. After the live installer finished booting, I clicked the button to install Debian 10. This Calamares process began a series of steps leading to a choice of disk to install on - see screenshot below.

Installation Disk Offered

All the installer "sees" are the 2 SDD drives on my machine: sda (my main drive) and sdd (my backup drive). It ignores the 2 USB drives, one for the live Debian installer, the other for holding the full installation. In a way, the existence of drives sdb and sdc is implied by the sda and sdd designations - but not displayed as installation options.

lsblk output is shown below.

lsblk output screenshot

As what I'm trying to do is a common practice among people exploring a new distro - as well as those wanting to permanently configure their live installation disk - I find it odd that I am not facilitated by the Debian 10 installer.

For good measure, I also tried the graphical and non-graphical installer options from the Debian boot menu. But the problem here is that this only seems to look in the CD drive for an installation ISO . . . No option to seek a USB drive as a location for the Debian installer exists - or at least is "seen" by the program.

Debian Install Menu

The installer menu is just like something 15 years ago - it all seems based on a user presenting a CD system image and no option for a USB image exists . . . Funny if not so inconvenient.

Am I missing something here ? Or does Debian only want people to have full installs on a SSD drive ? (I would think this narrow-mindedness most untypical of Debian.)

On YouTube and suchlike I see lots of installs of Debian but they nearly always use KDE as their desktop. I wonder if my choosing Gnome is off the beaten track as far as serious testing goes ? I used to like KDE in the old days on Red Hat but today I find Gnome easier to follow visually and more explicit in its functions.

how to perform a silent install of bandwidthD in ubuntu 20.04

Posted: 05 Aug 2021 09:46 AM PDT

How can I perform a silent install of bandwidthd, avoiding the interactive dialogs, and set the IP range and interfaces to monitor from the command line (on Ubuntu 20.04)?

sudo apt-get install bandwidthd # with what parameters  

Important:

There is no real help for bandwidthd. The only help output is:

bandwidthd --help

Usage: bandwidthd [OPTION]

Options:
    -D          Do not fork to background
    -l          List detected devices
    -c filename Alternate configuration file
    --help      Show this help

Thanks.

Update:

I found a workaround, and at @muru's suggestion I post it as an answer. If there is a better answer, feel free to post it and I will select it as the best answer.

What are some current transcription or dictation software packages for Linux?

Posted: 05 Aug 2021 09:12 AM PDT

The Mozilla deepspeech project is interesting, but perhaps not sufficiently sophisticated. My results, at least, were underwhelming.

Online transcription or dictation services are fine, but an offline software package would be preferred.

Is this just not that common on Linux and with open source software? Looking to get transcriptions from mp3 files.

Would prefer not to upload files or use an API which uses a similar such service.

OpenLDAP TLS error: TLS negotiation failure

Posted: 05 Aug 2021 09:05 AM PDT

I'm trying to set up OpenLDAP on Kubernetes via the helm chart.

It deploys correctly and I am able to access the server over port 389 (unencrypted) both locally from within the container and from other containers like phpldapadmin, in the cluster (via URL: openldap.ldap.svc.cluster.local).

I am not able to access it using tls however. From within the container, if I run this command: ldapsearch -x -ZZ, I get this in the logs:

5e2e6f05 conn=1035 fd=15 ACCEPT from IP=127.0.0.1:44820 (IP=0.0.0.0:389)
5e2e6f05 conn=1035 op=0 EXT oid=1.3.6.1.4.1.1466.20037
5e2e6f05 conn=1035 op=0 STARTTLS
5e2e6f05 conn=1035 op=0 RESULT oid= err=0 text=
TLS: can't accept: (unknown error code).
5e2e6f05 conn=1035 fd=15 closed (TLS negotiation failure)

Again, locally from within the openLDAP container itself, if I try ldapsearch -x -H ldaps://localhost -b "dc=domain,dc=com" I get:

5e2e6a87 conn=1138 fd=15 ACCEPT from IP=127.0.0.1:45638 (IP=0.0.0.0:636)
TLS: can't accept: (unknown error code).
5e2e6a87 conn=1138 fd=15 closed (TLS negotiation failure)

I don't know what to check next to debug this. One issue I can see is that the Docker container should be run with the --hostname parameter, but I don't think the helm chart does this, and I don't know if I need to set the hostname. Maybe it doesn't like that I am connecting via the hostname localhost instead of ldap.domain.com, which is the domain name of the certificate. If that is the case, I'm still not sure how to set the hostname to ldap, assuming that is what I need to do.

Environment:

I installed this chart: https://github.com/helm/charts/tree/master/stable/openldap

which is based on this docker image: https://github.com/osixia/docker-openldap

and set the following parameters for the helm chart:

existingSecret: openldap-admin-pass
tls.enabled: true
tls.secret: ldap-tls
persistence.enabled: true
persistence.accessMode: ReadWriteMany
persistence.existingClaim: openldap-vol

I also changed the config map so that LDAP_DOMAIN = domain.com

The certificate itself is generated by cert-manager from let's encrypt. The domain name of the certificate is ldap.domain.com. It was signed using DNS validation from cloudflare and is a valid certificate.

The server startup logs do not show any errors; it appears the TLS configuration and certificates are imported correctly: https://pastebin.com/raw/q9iEZCGN

Would appreciate any help. Thanks.

Write a shell script to analyze a log file

Posted: 05 Aug 2021 09:51 AM PDT

The log file is as below:

    Source=Mobile
    IP=189.23.45.01
    STATUS=SUCCESS
    TIME=10 sec

    Source=Desktop
    IP=189.23.34.23
    STATUS=FAIL
    TIME=101 sec

    Source=Mobile
    IP=189.23.34.23
    STATUS=FAIL
    TIME=29 sec

The file keeps going on like this.

Questions:

  1. Find the IPs where STATUS is FAIL.
  2. Find the average time taken by all requests where STATUS is "SUCCESS".
  3. List how many logins were made via Mobile, and how much time they took in total.
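A minimal awk sketch answering the three questions, assuming the four-line Key=Value record layout shown above and a log file named access.log (a hypothetical filename):

```shell
# Splits each line on "=" and treats the TIME line as the end of a record.
# access.log is a placeholder for the real log file name.
awk -F'=' '
  $1 == "Source" { src = $2 }
  $1 == "IP"     { ip  = $2 }
  $1 == "STATUS" { st  = $2 }
  $1 == "TIME"   {
      split($2, a, " "); t = a[1]                       # "10 sec" -> 10
      if (st  == "FAIL")    fail_ips[ip] = 1            # Q1: collect FAIL IPs
      if (st  == "SUCCESS") { s_sum += t; s_n++ }       # Q2: sum SUCCESS times
      if (src == "Mobile")  { m_n++;  m_sum += t }      # Q3: count Mobile logins
  }
  END {
      for (i in fail_ips) print "FAIL IP:", i
      if (s_n) printf "Avg SUCCESS time: %.1f sec\n", s_sum / s_n
      printf "Mobile logins: %d, total time: %d sec\n", m_n, m_sum
  }' access.log
```

For the three sample records above this prints FAIL IP: 189.23.34.23, Avg SUCCESS time: 10.0 sec, and Mobile logins: 2, total time: 39 sec.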

Add Prime OS (Android_x86_x64) to grub menu

Posted: 05 Aug 2021 10:08 AM PDT

I have a Debian/Windows dual boot and tried to install Prime OS as well. While installing it I skipped its GRUB, because I already have Debian's GRUB, but after installation I can't find it in the GRUB menu.

Relevant output of fdisk -l:

    /dev/sda2 112687104 133169151 20482048 9.8G 83 Linux

I tried installing grub-customizer and adding an entry with this code:

    set root='(hd0,2)'
    search --no-floppy --fs-uuid --set=root e5d445e4-f59f-5158-b9c7-465f7009bc23
    linux android/kernel root=UUID=e5d445e4-f59f-5158-b9c7-465f7009bc23 quiet androidboot.hardware=generic_x86 SRC=/android acpi_sleep=s3_bios,s3_mode
    initrd android/initrd.img

The entry was added successfully, but when I select it, it shows:

    android/kernel not found
    file located at partition `PrimeOS/android/kernel

[Screenshot of partition details]
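The "not found" error often comes down to the kernel path: GRUB resolves a path without a leading slash unpredictably, and the error message hints the files may live under a PrimeOS/ directory. A sketch of the menuentry with absolute paths, assuming the files really are at /android on (hd0,2); if they are under /PrimeOS instead, the paths would be /PrimeOS/android/kernel and /PrimeOS/android/initrd.img:

```
menuentry "Prime OS" {
    set root='(hd0,2)'
    search --no-floppy --fs-uuid --set=root e5d445e4-f59f-5158-b9c7-465f7009bc23
    # absolute paths, relative to the root of (hd0,2)
    linux /android/kernel root=UUID=e5d445e4-f59f-5158-b9c7-465f7009bc23 quiet androidboot.hardware=generic_x86 SRC=/android acpi_sleep=s3_bios,s3_mode
    initrd /android/initrd.img
}
```

From the GRUB command line, ls (hd0,2)/ would confirm which directory actually holds the kernel before committing the entry to /etc/grub.d/40_custom.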

yum doesn't pick up YUM0 environment variable

Posted: 05 Aug 2021 10:28 AM PDT

I have an environment variable set in my (docker) centos environment:

    [arman@7b33ffd8619e ~]$ echo $YUM0
    yumrepo.myhost.com

Note that this also works when I prepend the echo command with sudo.

As per the centos documentation;

$YUM0-9 This is replaced with the value of the shell environment variable of the same name. If the shell environment variable does not exist, then the configuration file variable will not be replaced.

However, when I try to install anything with yum from my container I get an error message clearly indicating that the environment variable is not being picked up by yum:

    [arman@7b33ffd8619e ~]$ sudo yum install less
    Loaded plugins: fastestmirror
    http://$YUM0/x86_64/centos/7.2.1511/base/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: $YUM0; Name or service not known"
    Trying other mirror.


     One of the configured repositories failed (centos),
     and yum doesn't have enough cached data to continue. At this point the only
     safe thing yum can do is fail. There are a few ways to work "fix" this:

         1. Contact the upstream for the repository and get them to fix the problem.

         2. Reconfigure the baseurl/etc. for the repository, to point to a working
            upstream. This is most often useful if you are using a newer
            distribution release than is supported by the repository (and the
            packages for the previous distribution release still work).

         3. Disable the repository, so yum won't use it by default. Yum will then
            just ignore the repository until you permanently enable it again or use
            --enablerepo for temporary usage:

                yum-config-manager --disable centos

         4. Configure the failing repository to be skipped, if it is unavailable.
            Note that yum will try to contact the repo. when it runs most commands,
            so will have to try and fail each time (and thus. yum will be be much
            slower). If it is a very temporary problem though, this is often a nice
            compromise:

                yum-config-manager --save --setopt=centos.skip_if_unavailable=true

    failure: repodata/repomd.xml from centos: [Errno 256] No more mirrors to try.
    http://$YUM0/x86_64/centos/7.2.1511/base/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: $YUM0; Name or service not known"

I am running the following versions:

    [arman.schwarz@7b33ffd8619e ~]$ cat /etc/centos-release
    CentOS Linux release 7.2.1511 (Core)
    [arman.schwarz@7b33ffd8619e ~]$ yum --version
    3.4.3

Manually modifying my .repo files in /etc/yum.repos.d to use the repository name rather than relying on the environment variable causes yum to install without issue, proving that this is not an issue with the repo itself.

Running export YUM0=yumrepo.myhost.com has no effect.

How can I make the YUM0 environment variable available to yum?
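One thing worth ruling out (an assumption on my part, not something shown in the question): yum is being run under sudo, and sudo's default env_reset policy scrubs most environment variables before the command starts, so a YUM0 exported in the calling shell never reaches yum. The effect can be mimicked with env -i, which likewise launches a child with a scrubbed environment:

```shell
# env -i starts the child with an empty environment, roughly what
# sudo's env_reset does to unlisted variables like YUM0
export YUM0=yumrepo.myhost.com
sh -c 'echo "inherited: ${YUM0:-<unset>}"'        # normal child sees the value
env -i sh -c 'echo "scrubbed: ${YUM0:-<unset>}"'  # scrubbed child does not
```

If that is the cause, adding YUM0 to env_keep in sudoers, or passing the variable explicitly (sudo YUM0=yumrepo.myhost.com yum install less, subject to the setenv policy), should make it visible to yum.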

Questions about minor page fault

Posted: 05 Aug 2021 09:17 AM PDT

From Stephen's reply and comment at https://unix.stackexchange.com/a/289446/674:

in some cases, data which has been marked for paging out but hasn't yet been paged out.

paging out would indeed involve accessing the disk, but that's another operation which doesn't have anything to do with the page fault: there's a marking operation, a separate paging out operation (which hasn't happened yet), and the page fault which causes the memory to be retrieved (so the page-out probably won't happen at all). Even in (3), servicing the page fault doesn't involve touching the disk (the data isn't there yet), so it's a minor page fault.

  1. What does "the page fault which causes the memory to be retrieved (so the page-out probably won't happen at all)" mean?

    Why won't the page-out probably happen at all?

  2. Why does "servicing the page fault" not "involve touching the disk"?

    Is the reason that the data "which has been marked for paging out but hasn't yet been paged out" needs no disk access simply that it hasn't reached the disk yet, and is still in memory?

Thanks.

Differences between keyword, reserved word, and builtin?

Posted: 05 Aug 2021 10:13 AM PDT

From Make bash use external `time` command rather than shell built-in, Stéphane Chazelas wrote:

There is no time bash builtin. time is a keyword so you can do for instance time { foo; bar; }

We can verify it:

    $ type -a time
    time is a shell keyword
    time is /usr/bin/time

It doesn't show that time can be a builtin command.

  1. What is the definition of a "keyword"?
  2. Is "keyword" the same concept as "reserved word" in the Bash Reference Manual?

    reserved word

    A word that has a special meaning to the shell. Most reserved words introduce shell flow control constructs, such as for and while.

  3. Is a keyword necessarily not a command (or not a builtin command)?

    As a keyword, is time not a command (or not a builtin command)?

    Based on the definitions of keyword and of builtin, why is time not a builtin but a keyword?

  4. Why does "time is a keyword" mean that "you can do for instance time { foo; bar; }"?
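A brief sketch of the practical difference the last question is asking about: because time is recognized at parse time as a keyword, it can precede a compound command or an entire pipeline, constructs that an ordinary command (builtin or external) could never receive as arguments:

```shell
# "time" as a keyword times whole shell constructs, not just one command:
time { sleep 0.1; sleep 0.1; }   # times both sleeps together
time sleep 0.1 | sleep 0.2       # times the entire pipeline

# Quoting or escaping the word defeats keyword recognition; bash then
# falls back to command lookup and runs /usr/bin/time (if installed),
# which only accepts a simple command:
#   \time ls
```

By contrast, /usr/bin/time sees only its argument list, so \time { foo; bar; } would be a syntax-level failure: the braces are never parsed as a group for it.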
