Sunday, June 6, 2021

Recent Questions - Unix & Linux Stack Exchange


Why can I not access Google services on either the Debian 64-bit or Debian 32-bit Linux operating system

Posted: 06 Jun 2021 10:37 AM PDT

The problems described in this question concern the Debian 64-bit (and 32-bit) versions of Linux running on a Raspberry Pi 3 and 4 B+. The same services can be accessed from Internet Explorer on other machines, as explained below:

I have tried to access Google Chrome services from Debian 32-bit and Debian 64-bit and was unsuccessful in all my attempts, but when I try to access the same services from Internet Explorer I can access all of the reported services.

Problem with Sync

[screenshot]

When clicking on the person icon in the browser you get the following login popup:

[screenshot]

In normal circumstances the "Turn on Sync" (TOS) button is blue, but in my case it is dark grey. When I click on the TOS button it asks me to sign in; when I enter my Google username and password it signs me in to the browser, but it does not restore my previously saved bookmarks, passwords, usernames, etc.

After signing in to the browser, when I click on the applications icon and select Drive to access Google Drive, it presents another page to enter my username and password; after doing this step it opens Google Drive in the browser, but

[screenshot]

within a couple of seconds the following popup appears:

[screenshot]

Now if you click on the "Sign back in" link, a new browser window appears and asks you to enter your login details again.

username window:

[screenshot]

Password:

[screenshot]

After completing this task the authentication browser tab closes itself and you automatically return to your Google Drive page, but after a couple of seconds, before you can click on any of your saved files, the same popup message appears on your screen again:

[screenshot]

When you click on the authentication link again it takes you back to your Google Drive page, and after a couple of seconds the same popup appears again and again; i.e. I am unable to do anything in my Chrome browser.

I have also tried to access Google Chrome services from Ubuntu, Debian 32-bit, and Debian 64-bit, but got nowhere; I was unsuccessful in all attempts.

Can someone please suggest what I should do to recover my information?

Global Variables across scripts

Posted: 06 Jun 2021 10:18 AM PDT

Why is, let's say, HOME recognized by all my scripts but my variable DMENU isn't? I export it in my bspwmrc file, which is executed at startup. I also had it in my zshrc.

Why do this?

DMENU="-h 27 -z 940 -y 4 -x 210 -i"

I want to have this variable in my scripts so if later I want to change something I don't have to manually change all my scripts.

Could it be that the shebang #!/bin/sh points to dash? How do I set a global variable then?
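
(For what it is worth, the difference between HOME and a custom variable is usually just export and where it happens: a variable is visible to a script only if an ancestor of the process running the script exported it before that process started. A minimal sketch; the file paths and the dmenu_run call are illustrative, not part of the question:)

# ~/.profile or the top of bspwmrc -- must run before anything that launches your scripts
export DMENU="-h 27 -z 940 -y 4 -x 210 -i"

# later, in any script -- dash (#!/bin/sh) sees exported variables just like bash does
dmenu_run $DMENU    # left unquoted on purpose so the option string splits into separate words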

systemd: How to list or edit a masked service unit file definition?

Posted: 06 Jun 2021 10:06 AM PDT

Normally one can see the definition of a foo.service unit file in the following ways:

systemctl cat foo.service  

Or, alternatively:

systemctl edit --full foo.service  

In the examples above, you would be able to see the unit file definition, even if you have no actual intention of editing it.

However, say that unit file is masked:

systemctl mask foo.service  

Question: Is it possible to somehow use systemd to see the definition of the unit file even when it is masked?

The two examples below fail:

$ sudo systemctl edit --full foo.service
Cannot edit foo.service: unit is masked.
$ sudo systemctl cat foo.service
# Unit foo.service is masked.
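
(For context: masking works by replacing the unit with a symlink to /dev/null under /etc/systemd/system, which is why systemctl cat has nothing left to show. A quick illustration, output abbreviated:)

$ ls -l /etc/systemd/system/foo.service
lrwxrwxrwx 1 root root 9 ... /etc/systemd/system/foo.service -> /dev/null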

Note: Opening the original location of the file with your $EDITOR doesn't count. I am NOT looking for an answer like $ cat /usr/lib/systemd/system/foo.service. It defeats the whole point of the systemctl cat or systemctl edit abstractions. I am looking for something like $ systemctl cat --masked foo, using a systemd-specific tool to read the preset unit file definition.

This information probably doesn't matter, but in case it does: I am using systemd 248.3 on Arch Linux.

Multi-WAN CentOS router - Xfinity connection is Xfinicky. All other WAN connections work fine

Posted: 06 Jun 2021 09:42 AM PDT

This problem has been driving me crazy for days. I'm working on a prototype for a mini router/VPN client device to deploy to our employees' homes. Currently, due to family circumstances, I'm actually working remotely from a different state and figured this is a good time to do this.

This router will have at least one connection to the end user's (in this case my) internet hardline as well as a connection via 4G/5G cellular for backup.

The Problem: I have 3 WAN interfaces (see below). Two of them work flawlessly. The Xfinity router connection will sometimes just not accept any traffic from my device. I can always ping the Xfinity router itself. Sometimes restarting the interface (e.g. ifdown ethwan0 && ifup ethwan0) will get it working -- other times not. Once it starts working (i.e. transiting traffic to the public internet) it will work fine for an indefinite amount of time.

I can fail back and forth (using a custom script - below) between the other two WAN connections. Once I fail over to the Xfinity connection mostly no traffic gets through. (Neither via my router's SNAT nor a direct connection from my router.)

I have one of those whitelabel fanless mini computers with 4 eth ports and 1 wifi card. It's running CentOS 7 [please spare me the lecture :)]. Ports are as follows (as renamed in udev/rules.d):

ethwan0 - Xfinity cable modem/router (the Xfinity device is not in bridge mode - NAT'd)

ethwan1 - T-Mobile gateway device (also NAT'd)

ethint - Internal connection for user's network with access to systems on other side of VPN

ethgst - Guest connection for user's family or whatever. Only access to internet.

ethwifi - Verizon hot spot (overkill but I'm testing).

Note that I have NetworkManager and firewalld disabled and I'm using iptables-services (I have a bunch of other centralized routers using iptables scripts so this is for consistency). selinux is disabled.

ifcfg-ethwan0:

TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=no
NAME=ethwan0
DEVICE=ethwan0
ONBOOT=yes
IPADDR=192.168.242.12
PREFIX=24

ifcfg-ethwan1:

TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=no
NAME=ethwan1
DEVICE=ethwan1
ONBOOT=yes
IPADDR=192.168.38.1
PREFIX=24

ifcfg-ethwifi:

MODE=Managed
KEY_MGMT=WPA-PSK
TYPE=Wireless
BOOTPROTO=static
DEFROUTE=no
NAME=ethwifi
DEVICE=ethwifi
ONBOOT=yes
IPADDR=192.168.97.245
PREFIX=24

Note: I'm leaving out the wpa_supplicant config (& etc.) because this connection works OK and is not the issue.

iptables config:

*nat
-A POSTROUTING -m state --state RELATED,ESTABLISHED -j ACCEPT
-A POSTROUTING -s 10.38.168.0/24 -d 192.168.0.0/16 -j ACCEPT
-A POSTROUTING -s 10.38.168.0/24 -d 10.0.0.0/8 -j ACCEPT
-A POSTROUTING -s 10.38.168.0/24 -m state --state NEW -j SNAT --to-source 192.168.242.12
COMMIT
*filter
:OUTPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -p icmp -j ACCEPT
-A FORWARD -s 10.38.100.0/22 -d 10.38.168.0/24 -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m state --state NEW -s 10.38.168.0/24 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
:INPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -s 10.38.168.0/24 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

failover-conn.bsh [interface]:

#!/bin/bash

# Arg $1 is the interface name to fail over to (e.g. ethwan0)

declare -A snat_src
declare -A gateway
declare -A cidr

snat_src[ethwan0]="192.168.242.12"
gateway[ethwan0]="192.168.242.1"

snat_src[ethwan1]="192.168.38.1"
gateway[ethwan1]="192.168.38.11"

snat_src[ethwifi]="192.168.97.245"
gateway[ethwifi]="192.168.97.1"

# Should use awk here but this does work.
snat_line_num=`iptables -t nat -nL --line-numbers |grep SNAT |grep "10\.38\.168\.0\/24" |grep -oP "^[0-9]+"`

# This looks foolish but it's in case a malfunction caused more than one rule to be put in place.. it's happened to me before and this overkill can't hurt.
for whatever in 1 2 3; do

        ip route del default

        iptables -t nat -D POSTROUTING -s 10.38.168.0/24 -m state --state NEW -j SNAT --to-source ${snat_src[ethwan0]}
        iptables -t nat -D POSTROUTING -s 10.38.168.0/24 -m state --state NEW -j SNAT --to-source ${snat_src[ethwan1]}
        iptables -t nat -D POSTROUTING -s 10.38.168.0/24 -m state --state NEW -j SNAT --to-source ${snat_src[ethwifi]}

done

ip route add default via ${gateway[$1]} dev $1

iptables -t nat -I POSTROUTING ${snat_line_num} -s 10.38.168.0/24 -m state --state NEW -j SNAT --to-source ${snat_src[$1]}

conntrack -D

# I thought flushing the ARP cache might cause a re-announce to the Xfinity modem and make it play nice.  I tested with/without this and same result.
ip -s -s neigh flush all

# Without the sleep the VPNs timeout a couple of times before connecting anyway.
sleep 10

systemctl restart openvpn@client-REDACTED0
systemctl restart openvpn@client-REDACTED1

Here are a series of commands/results that illustrate the problem:

./failover-conn.bsh ethwan0

# ping from router or SNAT'd connection to 8.8.8.8 may return one pong then unlimited timeouts,
# no responses at all, or it will be fully functional.  VPNs may or may not connect after some time;
# EVEN IF PINGS ARE FAILING CONSTANTLY..???

ifdown ethwan0 && ifup ethwan0

# pings may or may not get responses or timeout depending on the phase of the moon...?

./failover-conn.bsh ethwan1

# Everything works flawlessly.  Can ping from router, from SNAT'd clients, and VPNs connect.
# All traffic transits correctly.

./failover-conn.bsh ethwifi

# Everything works flawlessly.  Can ping from router, from SNAT'd clients, and VPNs connect.
# All traffic transits correctly.

./failover-conn.bsh ethwan0

# Same as the first time.  May or may not work.
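
(Side note on the ARP idea in the script: ip neigh flush only clears the router's own cache; it does not announce the router's MAC to the Xfinity gateway. A gratuitous ARP would do that. A sketch, assuming arping from iputils is installed; the interface and address are taken from the configs above:)

arping -c 3 -U -I ethwan0 192.168.242.12    # announce our own WAN address to the Xfinity gateway's LAN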

I think I've narrowed the problem down to the Xfinity router. Here's what I tried:

  • Completely turned off firewall / IDS (I thought maybe the IDS would be annoyed by the double NAT).
  • Added my router as a "reserved device"
  • Verified that all blocking/parental features were off.
  • Connected another laptop directly to the Xfinity device's wifi and left a ping open to ensure that the actual device/internet connection was up while I was testing my router. It was up.
  • Set my router's IP as the DMZ host (long shot but ????? profit)
  • Connecting a spare laptop via hardline to the Xfinity gateway and testing/pinging. That laptop always worked fine when my router was failing.

On my router I tried:

  • Disabling VPNs
  • Disabling iptables
  • Disabling all other interfaces
  • Changing ifcfg-ethwan0 to have DEFROUTE=yes and GATEWAY=192.168.242.1 so that this was just like any "normal" computer
  • Swapping ethwan0 and ethwan1 both physically and in config. Regardless of the port it was connected to, the Xfinity device was unreliable and the T-Mobile gateway worked just fine.
  • Swapping ethernet cables all over the place.

I am so perplexed by this. Every other device I connect to this freakin' Xfinity device works great. It just has a problem with this mini computer. But as I said, the mini computer works great with the T-Mo router, as well as with all the clients on ethint.

I'm out of troubleshooting steps and I'm hoping that one of you had a similar problem and found a solution.

Thanks in advance! -Scott

Is it safe to install packages with multiple instances of emerge running at the same time?

Posted: 06 Jun 2021 09:56 AM PDT

I'm currently installing Gentoo by following the Handbook. I ran emerge -uDN @world after changing USE flags and it's taking hours, but I would like to continue with the next step. Is it fine to emerge the kernel and other packages I'll need in a separate tty without waiting for it to finish?

Pacman prevents you from running multiple instances by checking for pacman.lock, but emerge doesn't seem to do the same.
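
(If you want a manual guard before starting a second instance in another tty, a trivial check is possible; this is only a sketch and says nothing about portage's own internal locking. pgrep -f matches against the full command line, so it may also show unrelated processes containing "emerge":)

pgrep -af emerge || echo "no emerge process found"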

bash: run command from script, but affect the shell where it's called from

Posted: 06 Jun 2021 08:38 AM PDT

There is a script that should change the working directory of the shell it was run from. I used this code:

#!/bin/bash
TTY=$(tty)
echo -e "cd /tmp/ \r" > $TTY
exit

But that does not work. Is there a way to achieve this at all?
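
(For reference: a script runs in a child process, so it can never change its parent shell's working directory; writing into the parent's tty as above only prints the command. Sourcing the file runs it in the current shell instead. A minimal sketch; the file name is arbitrary:)

# cdtmp.sh
cd /tmp || return

# in the interactive shell:
. ./cdtmp.sh      # or: source ./cdtmp.sh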

dpkg -p is not working for some packages (like vim)

Posted: 06 Jun 2021 08:16 AM PDT

I am studying for the LPIC exam and one of the tasks in the lab is to find details about a package with dpkg -p, specifically about vim with dpkg -p vim. In the solution, the command produces information about the package, but my output is:

root@home:~# dpkg -p vim
dpkg-query: package 'vim' is not available
Use dpkg --info (= dpkg-deb --info) to examine archive files,
and dpkg --contents (= dpkg-deb --contents) to list their contents.

Vim is installed, I use it often, and it can also be found on the system:

root@home:~# type vim
vim is hashed (/usr/bin/vim)
root@home:~# which vim
/usr/bin/vim

I thought dpkg -p was somehow broken, but it works with other packages (I have tried several and they all worked fine):

root@home:~# dpkg -p eject
Package: eject
Priority: important
Section: utils
Installed-Size: 160
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
...

My system (VPS) info is

Ubuntu 18.04.5 LTS
Linux 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Why can I not reproduce the solution from the course? Thank you!
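
(For comparison: dpkg -p/--print-avail reads the "available" database rather than the list of installed packages, which is why it can miss an installed vim, while dpkg -s queries the installed-package status. Both of the following are standard commands:)

dpkg -s vim           # status and details of the installed package
apt-cache show vim    # package metadata from the APT cache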

Installing Windows software on Linux

Posted: 06 Jun 2021 09:11 AM PDT

I want to install a Windows program on Ubuntu 20.04 without using a VM.

Specifically, I want to install the Windows 3D Builder.

Is there any way of doing it?

Change key bindings in tmux copy mode

Posted: 06 Jun 2021 08:18 AM PDT

I want to change the key binding in tmux copy mode. This is my tmux config:

set-window-option -g mode-keys vi  

So I use the vi keybindings for copy mode. But since I use the colemak keyboard layout which has the keys n,e,i,o instead of j,k,l,o I want to bind the following:

bind n down
bind e up
bind h left
bind i right

I know how binding keys works, but I don't know the key commands for down, up, left, and right.

Thanks
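
(For reference: in tmux 2.4 and later, copy-mode keys live in the copy-mode-vi key table and are bound with send-keys -X; a sketch of the bindings described above:)

bind-key -T copy-mode-vi n send-keys -X cursor-down
bind-key -T copy-mode-vi e send-keys -X cursor-up
bind-key -T copy-mode-vi h send-keys -X cursor-left
bind-key -T copy-mode-vi i send-keys -X cursor-right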

Using qgrep and -e doesn't give what I want

Posted: 06 Jun 2021 07:37 AM PDT

I'm using a Windows version of grep (qgrep) with the -e argument, and I'm not getting what I want. I have a router log file that I'm trying to process; specifically I have:

SRC=18.x.x.x or SRC=18y.x.x.x

The log file has many different SRC=a.b.c.d, but I'm just focusing here where the source IP starts with "18".

The output I want is 18.x.x.x (i.e. just the "18." IPs).

My qgrep command is: qgrep -e "SRC=18." source.file destination.file

I am using the quotes exactly like I have it on that line.

But qgrep is ignoring the "." and is giving me 18 and 18x in the output.

Is grep or qgrep with the -e option supposed to ignore the "." in my argument?

EDIT: sorry, my qgrep command included "SRC=" but I didn't originally have that in my question.
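
(For reference: in grep's regular expressions the dot is a metacharacter that matches any single character, which is why "18x" also matches; escaping it makes it a literal dot. With GNU grep that would look like the following; presumably qgrep accepts the same escaping:)

grep -e "SRC=18\." source.file > destination.file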

GRUB_CMDLINE_LINUX_DEFAULT does not work in Debian 10 in QEMU

Posted: 06 Jun 2021 07:17 AM PDT

I installed QEMU in Linux Mint and installed Debian 10 in QEMU; during installation I unchecked the graphical desktop. I run it with qemu-system-x86_64 -m 1024 -cdrom debian-10.9.0-amd64-netinst.iso -enable-kvm -drive file=vm1.qcow2,media=disk,if=virtio -nic user,model=virtio,ipv6=off,hostfwd=tcp::8080-:80 -daemonize, and the Debian console then appears in a window. The window is too big, and if I "zoom out" the font is too small. I searched Google for "debian 10 framebuffer resolution" (without quotes), and in the third result I saw the video=1024x768x32 kernel parameter (the first Google result did not work, maybe because of the update-grub problem I am asking about now). I then tried adding video=800x600 from GRUB itself (pressing e during boot) and it worked; I tried the same in GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, ran update-grub, rebooted, and it did not work.

I then searched for "GRUB_CMDLINE_LINUX proc cmdline"; the first result is https://stackoverflow.com/questions/48199261/proc-cmdline-does-not-updated-with-update-grub and it links to https://askubuntu.com/questions/898640/how-to-update-grub-after-installing-ubuntu-14-04-through-maas-with-global-kernel . There I see some answers that look strange to me, and I have not even tried them yet:

I discovered that if you're in a KVM doing reboot in cmd line would not work. You need to ask the reboot from outside. In my case I had to use proxmox to call a reboot on the VM.

The reason why the grub config is not updated is because the sudo update-grub command is outputting the changed file to stdout. You need to update the file in /boot with the -o flag. sudo update-grub -o /boot/grub/grub.cfg

Why does update-grub not work in this (my) case?

After writing the text above, I tried to edit the GRUB config, update GRUB, and check grub.cfg. I ran less /boot/grub/grub.cfg and searched for "800" (with "/"), and it is not found. Then I tried update-grub -o /boot/grub/grub.cfg (while logged in as root), looked again with less, and it is still not found. So it looks like the file is not updated, and the -o option does not help. I also looked for a way to try the second strange solution, but I do not see a reboot menu in the QEMU window.

Update: the question is invalid; I found my mistake. I first logged in as my user and then ran su to run commands, because sudo is not installed. But plain su does not set up the environment properly; I should have used su - root. With plain su, I could not run reboot and update-grub. I had also tried to reinstall GRUB with apt install grub and apt install --reinstall grub, and it seems they installed grub-legacy, for which /etc/default/grub does not apply. Now I have reinstalled it properly and it works.

Unix command sed replace in a variable

Posted: 06 Jun 2021 07:24 AM PDT

I want to remove brackets [] and quotes around each value and add quotes at start and end.

X= ['8922','9292','3220']

I want the output as below.

Y = '8922,9292,3220'

How can I do this? Please suggest.
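
(One possible approach, assuming the value always has exactly this shape, is to strip the brackets and quotes with tr and re-add a single pair of quotes:)

X="['8922','9292','3220']"
Y="'$(printf '%s' "$X" | tr -d "[]'")'"
echo "$Y"    # '8922,9292,3220'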

"ping 10.26.14.16" is successful. "nmap -p 5016 10.26.14.16" is unsuccessful. Why?

Posted: 06 Jun 2021 09:28 AM PDT

192.168.1.*/27
      :
      |
      +---  FW
      |
      +---  [192.168.1.133]  A
      |
      +---  [192.168.1.140]  B
      |
      :

A = linux machine as a router [192.168.1.133/27]
B = linux machine [192.168.1.140/27]

A and B are in the same network 192.168.1.128/27. A doesn't have any additional interfaces.

Note:

  1. From B, ping 10.26.14.26 is successful, but nmap -p 5016 10.26.14.26 says

    1 Host is UP but Port/TCP=5016/tcp filtered, service=unknown.  
  2. A is configured as a linux router.

    net.ipv4.ip_forward = 1  

My Requirement: nmap -p 5016 10.26.14.26 should be successful.

I want to bypass the FW policy for B. B is in the same LAN as A. The FW has a policy for A.
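
(A hedged diagnostic idea: since "filtered" means nmap's probes are being dropped somewhere along the path, a TCP traceroute to that port can show where they stop; the Linux traceroute supports this and usually needs root:)

sudo traceroute -T -p 5016 10.26.14.26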

Exclude similar file names from an array using shell script

Posted: 06 Jun 2021 08:57 AM PDT

I have an array of files, for example file=[a.yml, a.json, b.yml, b.json]. I'm iterating over it using a for loop. I need to exclude the .json files from being executed when the array has both .yml (or .yaml) and .json for the same name, but if I have only .json in the array (for example [a.json, b.json]), it needs to pass through the loop. Is that possible with a shell script?

Basically I'm trying to compare the strings in an array and exclude the duplicates dynamically.

Is this possible with shell?

filename=$(git show --pretty="format:" --name-only $CODEBUILD_RESOLVED_SOURCE_VERSION)
echo "$filename"
mkdir report || echo "dir report exists"
for file in ${filename}; do
    echo ${file}
    ext=${file##*.}
    if [ $ext == "yaml" ] || [ $ext == "yml" ] || [ $ext == "json" ]; then
        if [ ${file} != "buildspec.yml" ] && [ ${file} != "stackupdatebuildspec.yml" ] && [ ${file} != "specs.json" ]; then
            stack=$(echo ${file} | cut -d "." -f 1)
            stackName="${stack//[\/]/-}"
            echo ${stackName}
            howmany() { echo $#; }
            numOfFilesValidated=$(howmany $listOfFilesToScan)
            echo "=========================================== Syntax validation started =============================================================="
            cfSyntaxLogFile="cf-syntax-validation-output"
            numOfFailures=0
            numOfValidatedFiles=0
            for file_to_scan in $listOfFilesToScan; do
                if [[ $(cfn-lint -t "$file_to_scan" --parameter-values-path "${stack}.json" --append-rules ./append_rules --override-spec ./over_ride_spec/spec.json |& tee -a $cfSyntaxLogFile) == "" ]]; then
                    echo "INFO: Syntax validation of template $file: SUCCESS"
                    ((numOfValidatedFiles++))
                else
                    echo "ERROR: Syntax validation of template $file: FAILURE"
                    ((numOfFailures++))
                fi
            done
        fi
    fi
done
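
(A minimal sketch of the exclusion itself, separate from the validation logic above: skip a .json entry whenever the same basename also appears in the list with a .yml or .yaml extension. $filename is the newline-separated list from git show, as in the snippet above:)

for file in $filename; do
    base=${file%.*}
    ext=${file##*.}
    if [ "$ext" = "json" ] && printf '%s\n' $filename | grep -qxF -e "${base}.yml" -e "${base}.yaml"; then
        continue    # a .yml/.yaml counterpart exists, so skip the .json
    fi
    echo "processing $file"
done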

Bash while condition: Is my test correct? Infinite loop ensues

Posted: 06 Jun 2021 10:02 AM PDT

As I have already explained in my other thread, While statement: Cannot get compound of multiple conditions to work, I am trying to parse the input arguments in a recursive-descent fashion.

Sadly, although the participants in that thread were far more knowledgeable than me and gave good advice, the problem still persists.

I also poked around further in the console myself, to no avail.

I have the following two while statements in my script, which are problematic because the endOfInput function obviously has no influence on ending the while loop.

I would poke around and sometimes it wouldn't even enter the loop.

So the outer while-statement is the following:

while [ $current_token_index -le $ARGC ] && [[ "$current_token" =~ ^[a-zA-Z0-9_-]+$ ]] || ! [[ "$current_token" =~ ^[0-9]{1,2}(\.[0-9]{,2})*$ ]]; do  

Also, again, here is the inner while-statement

while [[ "$current_token_index" -le "$ARGC" ]] && [[ "$current_token" =~ ^[0-9]{1,2}(\.[0-9]{,2})*$ ]]; do  

Also the helper methods:

isWord? () {
    local pattern="^[a-zA-Z0-9_-]+$"
    if [[ $1 =~ $pattern ]]; then
        echo true
    else
        echo false
    fi
}

isVersionNumber? () {
    local pattern="^[0-9]{1,2}(\.[0-9]{,2})*$"
    if [[ $1 =~ $pattern ]]; then
        echo true
    else
        echo false
    fi
}

EO_ARGS=false

function endOfInput? {
    if [ $current_token_index -ge $ARGC ]; then
        EO_ARGS=true
    fi
    echo $EO_ARGS
}

Additionally, here is the eat! function that gets us the next token from a copy of the positional parameters:

eat! () {
        current_token=${ARGV[$current_token_index]}
        ((current_token_index += 1))

        current_char=${current_token:0:1}
}

And above that I declare

# as set by eat!()
current_token=""
current_char=""

current_token_index=0
curent_char_index=0

My question concerns the possibility that the truth value of the endOfInput function (originally endOfInput?, but I removed the ? after learning that in bash it has a wildcard meaning that can lead to problems; in Ruby you can use most special characters in identifiers with no problem, but bash obviously has many caveats if you come from other programming languages) is not being evaluated correctly. There are multiple possible definitions of the function, different testing and comparison syntaxes, and the while statement's own requirements for deciding whether a condition holds; these three things together are responsible for the formulation of the condition.

The problem with the while-loop heads posted above is that they either aren't entered or don't stop, depending on how I test endOfInput or how I group it.

If I replace ! endOfInput && ... with just ! false && ..., for instance, the while loop is entered; in the other case it is not.

This is why I conjecture that there must be a problem with the function: it is not correctly evaluated. The problem may be 1) the definition of endOfInput, or 2) how it gets tested, that is:

  • 2.1 with what test (things like -eq, string comparison like =, arithmetic operators like ==)

  • 2.2 what is tested. I.e.

    • 2.2.1 A string "true"/"false"
    • 2.2.2 true and false as literals
    • 2.2.3 0 for true and 1 for false
  • 3 How is this value correctly returned?

    • 3.1 by return
    • 3.2 by exit
    • 3.3 by echo
  • 4 by what testing construct is the returned value compared to something else?

    • 4.1 none, just executing the function
    • 4.2 brackets ([ or [[ command)
    • 4.3 arithmetic parentheses ((...))
    • 4.4 arithmetic expansion $((...))
    • 4.5 one or multiple of those combined

So please take a look at the definition of endOfInput and how it is used in the head of the while statement. What could be the problem; why is there an infinite loop? I mean, the other two functions like isWord? do work.

How should the definition of endOfInput look, and how should the testing and combination of it in the while statement look, so that it is correctly evaluated?
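
(For comparison, the usual bash idiom is to let a predicate's exit status carry the truth value instead of echoing the string "true" or "false"; the function can then be used directly in the while condition. A minimal sketch using the variables from the question:)

end_of_input () {
    [ "$current_token_index" -ge "$ARGC" ]    # exit status 0 means "true", non-zero means "false"
}

while ! end_of_input && [[ "$current_token" =~ ^[a-zA-Z0-9_-]+$ ]]; do
    eat!
done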

Edit: ilkkachu wanted me to post a "minimal, complete and verifiable example"

I hope the following is sufficient.

First I call get_installed_gems_with_versions to get all installed gems and their versions in an associative array. Now I have the global associative array installedGems.

Then I want to parse the option --gems by calling parse_gems_with_versions, which in turn calls parseGemVersions in order to parse a selection of the installed gems and their version into $chosenGemVersions

I have left out the code of some tests that are relevant to the parsing part but not to the current problem of the non-working while loop.

So here is the code

get_installed_gems_with_versions () {

    unset installedGems
    declare -Ag installedGems

    local last_key=""
    local values=""

    while read -r line; do

        line=${line##*/}

        KEY="${line%-*}"

        VALUE="${line##*-}"

        if [ -z ${installedGems[$KEY]+x} ]; then

            echo "key $KEY doesn't yet exist."
            echo "append value $VALUE to key $KEY"

            installedGems[$KEY]="$VALUE"
            continue

        else
            echo "key already exists"
            echo "append value $VALUE to $KEY if not already exists"
            installedGems[$KEY]="${installedGems[$KEY]} $VALUE"
            echo "result: ${installedGems[$KEY]}"
        fi

    done < <(find $directory -maxdepth 1 -type d -regextype posix-extended -regex "^${directory}\/[a-zA-Z0-9]+([-_]?[a-zA-Z0-9]+)*-[0-9]{1,3}(.[0-9]{1,3}){,3}\$")

}

parseGemVersions () {

    local version_list

    declare -Ag chosenGemVersions

    while [[ "$current_token_index" -le "$ARGC" ]] && [[ "$current_token" =~ ^[0-9]{1,2}(\.[0-9]{,2})*$ ]]; do

        if versionOfGemInstalled? $gem $current_token; then

            if [ -z ${chosenGemVersions[$gem]+x} ]; then

                chosenGemVersions[$gem]=$current_token
                # continue
            else
                chosenGemVersions[$gem]="${chosenGemVersions[$gem]} $current_token"
            fi

        else
            parsing_error! "While parsing function $FUNCNAME, current version number $current_token is not installed!"
        fi

        echo "result: ${chosenGemVersions[$gem]}"

        eat!

    done

}

parse_gems_with_versions () {
    # option --gems
    # --gems gem-name-1 1.1.1 1.2.1 1.2.5 gem-name-2 latest gem-name-3 ...

    unset gem
    unset chosenGemVersions

    gem=""
    declare -Ag chosenGemVersions

    while [ $current_token_index -le $ARGC ] && [[ "$current_token" =~ ^[a-zA-Z0-9_-]+$ ]] || ! [[ "$current_token" =~ ^[0-9]{1,2}(\.[0-9]{,2})*$ ]]; do

        if isWord? $current_token && [ ! "$current_token" = "latest" ]; then

            if isWord? $current_token; then

                if gemInstalled? $current_token; then
                    # We can conjecture that the word token is in fact a gems' name
                    gem=$current_token

                    local version_list

                    if isWord? $current_token && [ "$current_token" = "latest" ]; then
                        version_list=(${installedGems[$gem]})
                        chosenGemVersions[$gem]="${version_list[${#version_list} -1]}"
                    else
                        # gets list chosenGemVersions
                        parseGemVersions
                    fi
                else
                    parsing_error! "Gem $token_name not installed!"
                fi

            fi
        else
            parsing_error! "While parsing function $FUNCNAME, "latest" is not a gemname!"
        fi

        eat!

    done
}

Can you create a device file by yourself

Posted: 06 Jun 2021 10:05 AM PDT

I'm learning about Linux and Unix and I'm curious if you can create a block or tty device file without having to plug a real one in, using a command, and in a custom place other than /dev. Something maybe like:

root@linuxthing:~# mkdevicefile fakedevice  

And then you can ls and get this:

fakedevice  

Is this possible? Thank you!
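
(For reference, device nodes are ordinary filesystem entries and can be created anywhere with mknod, given a type and major/minor numbers; 1,3 below are the numbers of the null character device, and the path is arbitrary:)

root@linuxthing:~# mknod /root/fakedevice c 1 3
root@linuxthing:~# ls -l /root/fakedevice
crw-r--r-- 1 root root 1, 3 ... /root/fakedevice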

bash: Read from stdin until a string delimiter

Posted: 06 Jun 2021 09:46 AM PDT

Let's say I have two files containing arbitrary bytes: ./delimiter and ./data.

I want to read from ./data up to and excluding the first occurrence of the byte sequence in ./delimiter.

How would I do this using bash?

Example:

./delimiter: world

./data: helloworld

Expected result: hello

Similar/Equivalent Question:

Note: read -d delim does not solve my problem, because it only supports a single-character delimiter, not a string. Also, it stores the result in a variable, and variables don't support NUL bytes. I want the output on stdout.
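
(One possible approach with GNU grep, assuming the delimiter itself contains no newline or NUL bytes: find the byte offset of the first match and emit everything before it:)

# offset of the first occurrence of the delimiter (empty if there is no match)
offset=$(grep -abo -m1 -F -f ./delimiter ./data | head -n1 | cut -d: -f1)
head -c "$offset" ./data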

Incrementally Swap Lines Between Two Regex Patterns in a File

Posted: 06 Jun 2021 10:13 AM PDT

I'm trying to do some text processing on a file using a bash script. The goal is to take all of the lines starting with "field:" indented under an 'attributes:' label and swap them with the associated line starting with "- attr:" that follows.

So far I think I have regex patterns that should match the labels:

/ *field:(.*)/g

/ *- attr:(.*)/g

But I haven't had any success with the logic to parse through the desired fields and get them to swap correctly.

Example Input Text

- metric: 'example.metric.1'
  attributes:
      field: 'example 1'
    - attr: 'example1'
      field: 'example 2'
    - attr: 'example2'
      field: 'example 3'
    - attr: 'example3'
      field: 'example 4'
    - attr: 'example4'
- metric: 'example.metric.2'
  attributes:
      field: 'example 5'
    - attr: 'example5'
      field: 'example 6'
    - attr: 'example6'
      field: 'example 7'
    - attr: 'example7'
- metric: 'example.metric.3'
...

Desired Output

- metric: 'example.metric.1'
  attributes:
    - attr: 'example1'
      field: 'example 1'
    - attr: 'example2'
      field: 'example 2'
    - attr: 'example3'
      field: 'example 3'
    - attr: 'example4'
      field: 'example 4'
- metric: 'example.metric.2'
  attributes:
    - attr: 'example5'
      field: 'example 5'
    - attr: 'example6'
      field: 'example 6'
    - attr: 'example7'
      field: 'example 7'
- metric: 'example.metric.3'
...

How would I go about accomplishing this?
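
(One possible approach with GNU sed, assuming every "field:" line is immediately followed by its "- attr:" line as in the example; the file name is illustrative. It pulls the next line into the pattern space and swaps the two:)

sed '/^[[:space:]]*field:/ { N; s/^\(.*\)\n\(.*\)$/\2\n\1/; }' input.yml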

Parent and child directory with the same name, move files to parent directory

Posted: 06 Jun 2021 09:07 AM PDT

I need a way to search directories for child directories with the same name and then move all files in the child directory to the parent, thus from /recup-dir1/recup-dir1/files to /recup-dir1/files. The child directories can be left empty, because I can use something like find . -type d -empty -delete to delete all empty dirs.

The problem is that I have no idea which directories contain a child directory with the same name and which do not.

In pseudocode I need something like this:

While more directories are unchecked
    get name-x of next dir
    enter dir
    If name-x/name-x exist
        move all files in name-x/name-x to name-x
    mark dir as done
next

My best guess is to create a little Python script to make a list of all directories which have a child with the same name and loop this list through a command like find something something -exec mv.

Maybe this could be done with bash scripting, or another solution exists, like some rsync command; however, since I probably created this mess with rsync, I don't think that will be the solution.
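
(A possible sketch in plain shell, assuming the structure is exactly parent/parent as described; it only moves files when a child's name matches its parent's. Filenames containing newlines would break the read loop:)

find . -mindepth 2 -maxdepth 2 -type d | while IFS= read -r child; do
    parent=$(dirname "$child")
    if [ "$(basename "$child")" = "$(basename "$parent")" ]; then
        mv "$child"/* "$parent"/        # move everything up one level
    fi
done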

Are ∈ and ℝ symbols available in eqn/roff?

Posted: 06 Jun 2021 09:41 AM PDT

A set of commonly used symbols to represent that a variable belongs to a given real coordinate space are ∈ ("ELEMENT OF", Unicode U+2208) and ℝ ("DOUBLE-STRUCK CAPITAL R", Unicode U+211D).

Are those two symbols available in eqn, troff, and/or groff? I cannot find them in the documentation.

Edit:

I have tested the provided answer and I can get the symbol ∈ ("ELEMENT OF", Unicode U+2208), but not the symbol ℝ ("DOUBLE-STRUCK CAPITAL R", Unicode U+211D).

Specifically, if I do:

.TL
Test

.NH
Introduction

.LP
Given an input in subspace \[u211D]:
.EQ
x \[mo] \[u211D] sup 2
.EN
with output estimated value:
.EQ
y hat
.EN

I get the following error:

cat test.ms | eqn | groff -ms > test.ps
troff: <standard input>:8: warning: can't find special character 'u211D'

As can be seen in the PS output, ∈ is shown but ℝ is not:

groff output

I am using FreeBSD 12 eqn and groff.

Keycloak Account Management Console not working with Nginx reverse proxy

Posted: 06 Jun 2021 08:18 AM PDT

running into a strange issue. I have Keycloak up and running with the config below, the Admin console works great.

Unfortunately when I try and access the "account" client (Account Management Console, for example by selecting "Impersonate" from the user list) I get a pop up that Keycloak failed to load and an infinite loading spinner. Firefox's development tools tell me that this is due to a 403 error.

If I test it by accessing without SSL (and without Nginx in the way), everything works fine. Here is the config that I am using:

server {
    listen 80;
    server_name keycloak.domain.org;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name keycloak.domain.org;

    ssl_certificate /etc/letsencrypt/live/keycloak.domain.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/keycloak.domain.org/privkey.pem;

    location /auth {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_buffers 4 16k;
    }
}

I'm starting Keycloak with this command (my next chore is to configure it to use Postgres of course):

docker run -d -p 8080:8080 \
        -e KEYCLOAK_USER=admin \
        -e KEYCLOAK_PASSWORD=admin \
        -e PROXY_ADDRESSFORWARDING=true \
        -t quay.io/keycloak/keycloak:12.0.4 \
                -b 0.0.0.0 \
                -Dkeycloak.frontendUrl=https://keycloak.domain.org/auth/

openbox: how can I rearrange desktops

Posted: 06 Jun 2021 10:34 AM PDT

I am using Openbox on Arch Linux.

I have 5 virtual desktops

I have opened a set of windows in each desktop

Now I want to work on desktop 5.

I want to bring desktop 5 to the first position,

i.e. make desktop 5 become desktop 1 and renumber all the others accordingly.

I have set the shortcut key for desktop 1 as W-F1

KVM Linux guest cannot get network address

Posted: 06 Jun 2021 10:02 AM PDT

I use virt-manager to manage my VMs. I created a new VM and a default virtual network, which uses NAT.

virsh net-edit default gives me:

<network>
  <name>default</name>
  <uuid>ec2b5979-dd0c-43db-ab16-99f2e48ef0dd</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:0e:b1:4f'/>
  <domain name='default'/>
  <ip address='192.168.110.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.110.128' end='192.168.110.254'/>
    </dhcp>
  </ip>
</network>

I have configured my Linux guest to use this network, and the device is set to rtl8139. After I started the VM, it can see the device, but it cannot get a network address.

brctl show gives me:

bridge name     bridge id               STP enabled     interfaces
virbr0          8000.5254000eb14f       yes             virbr0-nic

ip link show gives me:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DORMANT group default qlen 1000
    link/ether c8:ff:28:78:44:01 brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:0e:b1:4f brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:0e:b1:4f brd ff:ff:ff:ff:ff:ff

One odd thing I found is that on the "Connection Details"/"Network Interfaces" page, virbr0 is shown as inactive, and I cannot activate it. Trying to activate it gives me:

libvirtError: this function is not supported by the connection driver: virInterfaceCreate
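
(A hedged check worth doing: confirm that the guest's NIC is actually attached to virbr0, since the brctl output above lists only virbr0-nic and no vnet interface while the guest is running. The VM name below is illustrative:)

virsh domiflist myguest    # shows each NIC's type, source network/bridge, model and MAC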

How exactly does the typical shell "fork bomb" call itself twice?

Posted: 06 Jun 2021 08:30 AM PDT

After going through the famous fork bomb questions on Ask Ubuntu and many other Stack Exchange sites, I still don't quite understand what everyone is stating as if it were obvious.

Many answers (Best example) say this:

"{:|: &} means run the function : and send its output to the : function again "

Well, what exactly is the output of : ? What is being passed to the other :?

And also:

Essentially you are creating a function that calls itself twice every call and doesn't have any way to terminate itself.

How exactly is that executed twice? In my opinion, nothing is passed to the second : until the first : finishes its execution, which actually will never end.

In C for example,

foo()
{
    foo();
    foo(); // never executed
}

the second foo() is not executed at all, just because the first foo() never ends.

I am thinking that the same logic applies to :(){ :|: & };: and

:(){ : & };:  

does the same job as

:(){ :|: & };:  

Please help me understand the logic.
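
(A way to see the behaviour without the obfuscated name: in a pipeline the shell starts both sides at the same time, so it does not wait for the left-hand call to finish before starting the right-hand one. Do not actually run this:)

bomb() { bomb | bomb & }    # same structure as :(){ :|: & };:
bomb                        # every call immediately starts two more in the background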

Run vsim from dmenu — it only works when directly invoked in the terminal

Posted: 06 Jun 2021 08:02 AM PDT

  • Works: vsim, sh -c vsim
  • Doesn't work: echo "vsim" | sh, echo "vsim" | xargs -I {} sh -c "{}"

I want to run ModelSim (vsim) with dmenu, which is triggered using xbindkeys.


Details

vsim is an executable for ModelSim, installed in /opt/altera/modelsim_ase/bin.

When I run it directly, it runs. But when I run it with xargs (e.g. from dmenu), it does not work at all: the script itself launches, but probably in the wrong directory or something; I'm really clueless about what's wrong.

My path (I added newlines for clarity):

[ondra@x201 ~]$ echo $PATH
/usr/local/sbin:
/usr/local/bin:
/usr/bin:
/usr/lib/jvm/default/bin:
/usr/bin/site_perl:
/usr/bin/vendor_perl:
/usr/bin/core_perl:
/opt/altera/quartus/bin:
/opt/altera/modelsim_ase/bin:
/home/ondra/bin:
/home/ondra/.gem/ruby/2.1.0/bin:
/opt/altera/University_Program/Monitor_Program/bin/bin

Where is vsim?

[ondra@x201 ~]$ which vsim
/opt/altera/modelsim_ase/bin/vsim

Run it with xargs:

[ondra@x201 ~]$ echo "vsim" |  xargs -I {} sh -c '{} &'  [ondra@x201 ~]$ Reading /opt/altera/modelsim_ase/tcl/vsim/pref.tcl     # 10.1d    #   # <EOF>   ^C  

Run it directly:

[ondra@x201 ~]$ vsim
Reading /opt/altera/modelsim_ase/tcl/vsim/pref.tcl
# --- and modelsim starts fine now ---

Any ideas welcome.
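
(A couple of generic checks that may narrow this down, since sh -c vsim works but piping the command into sh does not: compare the two environments, and see whether vsim behaves differently when stdin is not a terminal. File names are arbitrary:)

env | sort > /tmp/env-interactive.txt
echo 'env | sort' | sh > /tmp/env-piped.txt
diff /tmp/env-interactive.txt /tmp/env-piped.txt

echo 'vsim < /dev/null' | sh    # does vsim object to a non-terminal stdin?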

How to install Kali linux on to a specific (existing) partition on a USB stick

Posted: 06 Jun 2021 09:07 AM PDT

I'm endeavoring to put Kali Linux onto a USB stick - I know it's already written up, but I'd like to use only a portion of the total space (the aforementioned link will use the entire drive space).

Let's have my 16GB usb stick mounted as sdb ... the goal is:

16 GB total, split like this...
----------------------------
|     11     |  01  |  04  |   (GB)
----------------------------
     sdb1      sdb2   sdb3     (partition ID)
     FAT32     FAT32  FAT32    (format)
    storage   fatdog  kalipart (label)
  • sdb1 is FAT32 and the main storage area (so that Windows can see it along with any other OSes)
  • sdb2 is bootable and has Fatdog64 (6.3.0) and Precise Puppy (5.7.1) installed (multi-booting from one syslinux menu)
  • sdb3 is the target partition for Kali to use

The objective is to multi-boot Fatdog64, Puppy, and Kali linux. Currently, sdb2 is bootable (syslinux) and successfully passes to Fatdog and Puppy, both on sdb2. Next I'd like to add chainloading to Kali on sdb3. It seems to me that the best way to do that is to load GRUB4DOS from syslinux (both on sdb2), map sdb3 and chainload to sdb3 from GRUB4DOS.

So I ask: How do I install Kali onto an existing partition on this USB stick?

Other options:

  • Install live Kali onto the USB stick/partition from the Kali distro itself - but this doesn't seem to be an option the same way it is with Fatdog/Puppy/Ubuntu
  • Boot directly to sdb3, chainloading to sdb2 if necessary (not preferred, but an option)

Update:

  1. I have tried copying the files from a mounted ISO to sdb3 using Fatdog64 and noticed several errors, mostly in copying the firmware files. Here are two examples:

    Copying /mnt/+mnt+sda1+isos+kali-linux-1+0+6-i286+kali-linux-1+0+6-i286+iso/firmware/amd64/microcode_1.20120910-2_i386.deb as /mnt/sda3/firmware/amd64-microcode_1.20120910-2_i286.deb
    ERROR: Operation not permitted
    Copying /mnt/+mnt+sda1+isos+kali-linux-1+0+6-i286+kali-linux-1+0+6-i286+iso/debian as /mnt/sda3/debian
    ERROR: Operation not permitted

    These errors look like permissions errors, but I can't tell if they affect booting or not (I can troubleshoot other errors later, I'd prefer to keep this question to just multi-boot).

  2. I'm chainloading GRUB4DOS from the SYSLINUX installed by default via Fatdog64 ...

    label grub4dos
    menu label grub4dos
    boot /boot/grub/grldr
    text help
    Load grub4dos via grldr (in /boot/grub)
    endtext

    ... and then once in GRUB4DOS, I have successfully chainloaded GRUB2 (on the kali partition) ...

    title Load GRUB2 inside of kali
    find --set-root /g2ldr.mbr
    chainloader /g2ldr.mbr

    ... but all this gives me is a grub> prompt, and I haven't figured out any proper combinations of GRUB4DOS commands to load GRUB2 with a GRUB2 config file - and to add to the confusion, I thought the live CD iso of Kali ran on syslinux. (@jasonwryan @user63921)

Setting DISPLAY in systemd service file

Posted: 06 Jun 2021 10:19 AM PDT

I'm trying to learn systemd services by trying to start xclock as a service; the service file is below

[Unit]
Description=clock

[Service]
Environment=DISPLAY=:0
ExecStart=/usr/bin/xclock

[Install]
WantedBy=graphical.target

Any ideas what's wrong here? I'm getting an error saying "cannot connect to display."
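
(One common reason for "cannot connect to display" when a unit runs as root is the missing X authority cookie; a sketch that also runs the client as the user who owns the display. The user name and XAUTHORITY path are illustrative, not part of the question:)

[Unit]
Description=clock

[Service]
User=youruser
Environment=DISPLAY=:0
Environment=XAUTHORITY=/home/youruser/.Xauthority
ExecStart=/usr/bin/xclock

[Install]
WantedBy=graphical.target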

Cross-distribution/OS packaging

Posted: 06 Jun 2021 09:33 AM PDT

Fedora, FreeBSD, OS X (Homebrew, MacPorts), Ubuntu, Debian, and others all use different packaging systems for binary and source distribution.

When I develop a new application I want to make it available to as many users as possible right out of the gate. But learning all the different packaging tools and conventions is a lot of work. I can manage, but there has to be an easier way.

Is there a super-tool that I should be aware of that can be used to ease the overhead of maintaining and learning all these packaging systems?
