Wednesday, June 9, 2021

Recent Questions - Unix & Linux Stack Exchange


Wireguard service is "inactive (dead)" on boot

Posted: 09 Jun 2021 12:55 PM PDT

I have a Raspberry Pi 4 running Ubuntu 21.04 with the system-installed WireGuard, and it doesn't start correctly on boot. I previously had Ubuntu 20.10 installed, and the same thing happened there.

On boot:

$ systemctl status wg-quick@wg0.service
● wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
     Loaded: loaded (/lib/systemd/system/wg-quick@.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:wg-quick(8)
             man:wg(8)
             https://www.wireguard.com/
             https://www.wireguard.com/quickstart/
             https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8
             https://git.zx2c4.com/wireguard-tools/about/src/man/wg.8

$ systemctl start wg-quick@wg0.service

$ systemctl status wg-quick@wg0.service
● wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
     Loaded: loaded (/lib/systemd/system/wg-quick@.service; disabled; vendor preset: enabled)
     Active: active (exited) since Wed 2021-06-09 12:52:16 PDT; 2s ago
       Docs: man:wg-quick(8)
             man:wg(8)
             https://www.wireguard.com/
             https://www.wireguard.com/quickstart/
             https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8
             https://git.zx2c4.com/wireguard-tools/about/src/man/wg.8
    Process: 2366 ExecStart=/usr/bin/wg-quick up wg0 (code=exited, status=0/SUCCESS)
   Main PID: 2366 (code=exited, status=0/SUCCESS)

Jun 09 12:52:15 pi4 systemd[1]: Starting WireGuard via wg-quick(8) for wg0...
Jun 09 12:52:15 pi4 wg-quick[2366]: [#] ip link add wg0 type wireguard
Jun 09 12:52:16 pi4 wg-quick[2366]: [#] wg setconf wg0 /dev/fd/63
Jun 09 12:52:16 pi4 wg-quick[2366]: [#] ip -4 address add 10.9.0.1/24 dev wg0
Jun 09 12:52:16 pi4 wg-quick[2366]: [#] ip link set mtu 1420 up dev wg0
Jun 09 12:52:16 pi4 wg-quick[2366]: [#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Jun 09 12:52:16 pi4 wg-quick[2366]: [#] ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Jun 09 12:52:16 pi4 systemd[1]: Finished WireGuard via wg-quick(8) for wg0.

$ cat /lib/systemd/system/wg-quick@.service
[Unit]
Description=WireGuard via wg-quick(8) for %I
After=network-online.target nss-lookup.target
Wants=network-online.target nss-lookup.target
PartOf=wg-quick.target
Documentation=man:wg-quick(8)
Documentation=man:wg(8)
Documentation=https://www.wireguard.com/
Documentation=https://www.wireguard.com/quickstart/
Documentation=https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8
Documentation=https://git.zx2c4.com/wireguard-tools/about/src/man/wg.8

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/wg-quick up %i
ExecStop=/usr/bin/wg-quick down %i
ExecReload=/bin/bash -c 'exec /usr/bin/wg syncconf %i <(exec /usr/bin/wg-quick strip %i)'
Environment=WG_ENDPOINT_RESOLUTION_RETRIES=infinity

[Install]
WantedBy=multi-user.target
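
A hedged observation rather than a confirmed diagnosis: the status output shows the unit as "disabled", and a disabled wg-quick@wg0 starts fine by hand but is never started at boot. If that is the cause, enabling the instance should be enough:

sudo systemctl enable wg-quick@wg0.service   # adds the unit to multi-user.target's wants
systemctl is-enabled wg-quick@wg0.service    # should now report "enabled"
sudo systemctl start wg-quick@wg0.service    # or reboot to test the boot path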

How to change SNP ID format from chr1_position_C_T to chr1_position in a .bim file

Posted: 09 Jun 2021 12:51 PM PDT

Hello everyone, I need help editing my .bim file. My .bim file looks like the following. I want to convert the IDs that carry a chromosomal position, e.g. from chr1_867635_C_T to chr1_867635, while the others remain as they are.

1       rs2880024             866893  A       G
1       chr1_867635_C_T       867635  A       G
1       chr1_869303_C_T       869303  A       G
1       chr1_873558_G_T       873558  C       A
1       exm2275405            879439  A       G
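
For what it's worth, a sketch of one way to do the renaming with awk, assuming whitespace-separated columns, the ID in column 2, and positional IDs always shaped like chr<chr>_<pos>_<ref>_<alt> (the file names are placeholders):

awk 'BEGIN { OFS = "\t" }
     $2 ~ /^chr[0-9XYM]+_[0-9]+_[A-Z]+_[A-Z]+$/ {
         split($2, a, "_")
         $2 = a[1] "_" a[2]      # keep only the chromosome and position parts
     }
     { print }' input.bim > output.bim

If producing a mapping file is easier, PLINK's --update-name option can apply such a rename directly, if I remember its interface correctly.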

WiFi does not work in my installation of Ubuntu

Posted: 09 Jun 2021 12:29 PM PDT

I have tried fixing it by searching online, but the suggestions either don't work or are too complicated for me to understand. I tried mounting an Ubuntu ISO, but that fails (this is from this guide: https://itsfoss.com/fix-no-wireless-network-ubuntu/).

Would someone be able to help? I have very little understanding of Linux and Ubuntu and just want to install it on an old computer so that it runs faster.

Thanks in advance!

Using awk to add a formula as a new column

Posted: 09 Jun 2021 12:23 PM PDT

I have a .bed file where I want to add a new column at the end containing the decimal ratio of the 3rd and 9th columns (i.e. the 3rd column divided by the sum of the 3rd and 9th columns), for the following .bed file:

chr1H   4612679 57      42      76      85      142     37      155
chr1H   4612680 60      43      83      89      145     38      160
chr1H   4612681 60      45      83      90      144     38      163
chr1H   4612682 60      45      85      98      148     38      164
chr1H   4612683 60      46      86      99      147     39      164
chr1H   4612684 60      46      86      99      147     39      164
chr1H   4612685 61      46      89      99      149     41      168
chr1H   4612686 62      46      92      100     150     42      168

I've read multiple posts but I'm struggling with the formula. I tried awk -F '\t' '{$(NF+1)= $3 / sum += $3, $9; print}' file.bed. This is a tab-delimited file, but I feel like I'm also missing something. I want it to look something like this:

chr1H   4612679 57  42  76  85  142 37  155 0.268867925
chr1H   4612680 60  43  83  89  145 38  160 0.272727273
chr1H   4612681 60  45  83  90  144 38  163 0.269058296
chr1H   4612682 60  45  85  98  148 38  164 0.267857143
chr1H   4612683 60  46  86  99  147 39  164 0.267857143
chr1H   4612684 60  46  86  99  147 39  164 0.267857143
chr1H   4612685 61  46  89  99  149 41  168 0.266375546
chr1H   4612686 62  46  92  100 150 42  168 0.269565217

Thanks for any help!
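
A sketch that should produce that last column, assuming the ratio wanted on each line is column 3 divided by the sum of columns 3 and 9 on that same line (not a running sum) and that the file really is tab-delimited; file.bed is the name used in the question:

awk 'BEGIN { FS = OFS = "\t" }
     { $(NF + 1) = sprintf("%.9f", $3 / ($3 + $9)); print }' file.bed

The sprintf keeps nine decimal places to match the sample output; drop it and use plain $3 / ($3 + $9) if awk's default six significant digits are enough.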

ksh set -A array read issue

Posted: 09 Jun 2021 01:04 PM PDT

Hello, I was trying to read a line into an array using a ksh script, but some of the values are stored multiple times in adjacent array elements when the field value contains a comma. How can this be avoided? My delimiter is ~.

input

17~4~~~char~Y~\"[_a-zA-Z0-9\.]\+@[a-zA-Z0-9]\+\.[a-z]\{2,\}\"~40~email id    

code

while IFS= read -r line
do
        if [ $n != 1 ]; then
                IFS="~"
                set -A star $line
                col_pos=${star[1]}
                col_patt=${star[6]}
                col_len=${star[7]}
                col_file_id=${star[0]}

value of $line

 17 4   char Y \"[_a-zA-Z0-9\.]\+@[a-zA-Z0-9]\+\.[a-z]\2\" \"[_a-zA-Z0-9\.]\+@[a-zA-Z0-9]\+\.[a-z]\\\" 40 email id  

The issue is that the pattern ("[_a-zA-Z0-9.]+@[a-zA-Z0-9]+.[a-z]\2") is duplicated while reading, but in the actual input file the pattern field is defined only once.
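
A hedged guess at the cause: set -A star $line expands $line unquoted, so besides being split on IFS the fields are also exposed to globbing and (depending on the shell and its options) brace expansion, and the \{2,\} inside the regex is exactly the kind of text that gets mangled that way. A sketch of a splitting method that keeps each field intact, assuming ksh93 (read -A and <<< are ksh93 features; input_file is a placeholder):

while IFS= read -r line
do
        if [ "$n" != 1 ]; then
                # split on ~ only; the resulting fields are never re-expanded
                IFS='~' read -r -A star <<< "$line"
                col_file_id=${star[0]}
                col_pos=${star[1]}
                col_patt=${star[6]}
                col_len=${star[7]}
        fi
done < input_file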

How to quote exclamation mark in a filename in tcsh shell script?

Posted: 09 Jun 2021 12:14 PM PDT

I have a script that searches for space characters ' ' and dollar signs '$' in filenames and replaces both with an underscore '_'. However, it does not handle file names with exclamation marks in them, at all. Files to be renamed are searched for in the current folder and all sub-folders up to a certain depth (not very relevant here).

This is the desired renaming scheme:

This is cool!.txt     -->    This_is_cool_.txt
Thank$.log            -->    Thank_.log

Here is the script:

#!/bin/tcsh -f

foreach n ( 1 2 3 4 5 6 7 8 )
    find . -mindepth $n -maxdepth $n -iname '* *' | fgrep -v \" | \
    awk '{printf "mv -i -- \"%s\" \"%s\"\n", gensub("!","\\\\\!","g",$0), gensub(" ","_","g",gensub("\\$","\\\\$","g",gensub("!","\\\\\!","g",$0)))}' | tcsh -cf
end

I have varied the number of backslashes '\' before both exclamation marks but to no effect. Error messages may look like this (even number of backslashes):

awk: cmd. line:1: warning: escape sequence `\!' treated as plain `!'  

or the shell script (apparently) opens an interactive shell?! Sometimes nothing happens at all (I suppose when there are too many backslashes in the first argument to mv and the file is not found).
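
One hedged way to sidestep the quoting battle entirely, assuming it is acceptable to run the rename pass from bash instead of tcsh (so '!' needs no escaping at all) and that GNU find is available; this is a sketch, not the original script:

#!/bin/bash
# Rename deepest entries first so a parent rename cannot invalidate a child path.
find . -mindepth 1 -maxdepth 8 -depth \( -name '* *' -o -name '*$*' -o -name '*!*' \) -print0 |
while IFS= read -r -d '' path; do
    dir=$(dirname -- "$path")
    base=$(basename -- "$path")
    newbase=$(printf '%s' "$base" | tr ' $!' '___')   # space, $ and ! all become _
    [ "$base" != "$newbase" ] && mv -i -- "$path" "$dir/$newbase"
done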

Find all directories that contain only hidden files and/or hidden directories

Posted: 09 Jun 2021 11:50 AM PDT

Issue

  • I have been struggling to write a Bash command that recursively searches a directory and then returns the paths of every sub-directory (up to a certain max depth) that contains EXCLUSIVELY hidden files and/or hidden directories.

Visual Explaination

  • Consider the following File System excerpt:
+--- Root_Dir
|   +--- Dir_A
|   |   +--- abc.txt
|   |   +--- 123.txt
|   |   +--- .hiddenfile
|   |   +--- .hidden_dir
|   |   |   +--- normal_sub_file_1.txt
|   |   |   +--- .hidden_sub_file_1.txt
|   |
|   +--- Dir_B
|   |   +--- abc.txt
|   |   +--- .hidden_dir
|   |   |   +--- normal_sub_file_2.txt
|   |   |   +--- .hidden_sub_file_2.txt
|   |
|   +--- Dir_C
|   |   +--- 123.txt
|   |   +--- program.c
|   |   +--- a.out
|   |   +--- .hiddenfile
|   |
|   +--- Dir_D
|   |   +--- .hiddenfile
|   |   +--- .another_hiddenfile
|   |
|   +--- Dir_E
|   |   +--- .hiddenfile
|   |   +--- .hidden_dir
|   |   |   +--- normal_sub_file_3.txt   # This is OK because its within a hidden directory, aka won't be checked
|   |   |   +--- .hidden_sub_file_3.txt
|   |
|   +--- Dir_F
|   |   +--- .hidden_dir
|   |   |   +--- normal_sub_file_4.txt
|   |   |   +--- .hidden_sub_file_4.txt

Desired Output

  • The command I am looking for would output
    ./Root_Dir/Dir_D
    ./Root_Dir/Dir_E
    ./Root_Dir/Dir_F
    • Dir_D because it only contains hidden files.
    • Dir_E because it only contains a hidden file and a hidden directory at the level I am searching.
    • Dir_F because it only contains a hidden directory at the level I am searching.

Attempts

  • I have attempted to use the find command to get the results I am looking for but I can't seem to figure out what other command I need to pipe the output to or what other options I should be using.
  • I think the command that will work would look something like this:
    bob@linux:/$ find ./Root_Dir -mindepth 2 -maxdepth 2 -type d -name "*." -type -f -name "*." | command to see if these are the only files in that directory    
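
A sketch of one approach, assuming bash and GNU find: treat a directory as a match when it has at least one entry but no non-hidden entry directly inside it (adjust -mindepth/-maxdepth to the depth you want to scan):

find ./Root_Dir -mindepth 1 -maxdepth 2 -type d ! -name '.*' -print0 |
while IFS= read -r -d '' dir; do
    total=$(find "$dir" -mindepth 1 -maxdepth 1 | wc -l)
    visible=$(find "$dir" -mindepth 1 -maxdepth 1 ! -name '.*' | wc -l)
    if [ "$total" -gt 0 ] && [ "$visible" -eq 0 ]; then
        printf '%s\n' "$dir"
    fi
done

On the example tree this should print ./Root_Dir/Dir_D, ./Root_Dir/Dir_E and ./Root_Dir/Dir_F, since every other directory has at least one visible entry.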

Can you explain this 'sed' command to me and help me improve it?

Posted: 09 Jun 2021 12:21 PM PDT

I found a command line and I can't understand what exactly it does.

pdftk $1 dump_data | sed -e 's/\(InfoValue:\)\s.*/\1\ /g'  

In this line, the sed command is my main doubt. What exactly is going on?

I'm a newbie and I think that in this part */\1\ the sed command is referring to $1, so if I change $1 I would need to change the sed command too. Maybe to this form:

pdftk out dump_data | sed -e 's/\(InfoValue:\)\s.*/\out\ /g'  

But as I don't understand the theory behind sed, I can't be sure of my assumption.
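
A small demonstration of what the expression does, assuming GNU sed (the input lines here are made up): \( ... \) captures the literal text InfoValue:, \s.* matches the rest of the line, and \1 in the replacement is a backreference to that capture, so everything after InfoValue: is replaced with a single space. It has nothing to do with the shell's $1, which is only the filename argument handed to pdftk.

printf 'InfoKey: Title\nInfoValue: My Document\n' |
    sed -e 's/\(InfoValue:\)\s.*/\1\ /g'

The output keeps the InfoValue: label but blanks the value after it. Replacing \1 with a literal word (as in the \out attempt) would therefore break the substitution rather than adapt it to a different input file.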

Save input from a pipe in another bash script

Posted: 09 Jun 2021 11:20 AM PDT

I am having some problems finding a solution to a bash scripting problem that involves taking input from a pipe into a bash script.

The parent script passes strings to other scripts like this one through a pipe

In my case the pipe will try to give input to the myScript.sh like this:

string_to_search | myScript.sh

This script is normally run as part of a larger script which creates named pipes that call other scripts (like this one).

The script (myScript.sh) should loop constantly, receiving input from the main script, and should read the input and grep for the given pattern in a set of .res files.

Here is a small example of what myScript.sh is doing:

#!/bin/bash

while :
do
        echo "Enter TESTCASE ROUTERID and PATTERN [or type quit]:"
        read var

        TESTCASE=$(echo $var | cut -d ' ' -f1 )
        ROUTER_ID=$(echo $var | cut -d ' ' -f2 )
        PATTERN=$(echo $var | cut -d ' ' -f3 )

        logfile=${TESTCASE}_O${ROUTER_ID}.res
        echo "logfile is:" $logfile

        echo "Grepping logfile for pattern"
        patternMatch=$(cat $RESULTS/$logfile | grep $PATTERN)
        lineCount=$(cat $RESULTS/$logfile | grep $PATTERN | wc -l)

        echo "found $lineCount matching lines"
        echo "$patternMatch"

done

It would be easier to allow the parent script to just echo the search string instead of just passing it as a string but the client does not want us to do that.

Is there any way to allow myScript.sh to accept the input from the pipe so it can be saved as a variable or even read in a loop?
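
For what it's worth, read already consumes standard input, so the same loop can be fed by a pipe; the difference is only that the prompts become pointless. A trimmed sketch, keeping the original variable names and the $RESULTS directory from the existing script:

#!/bin/bash
# Works the same whether stdin is a terminal, an anonymous pipe or a named pipe.
while IFS= read -r var; do
        [ "$var" = quit ] && break
        TESTCASE=$(echo "$var"  | cut -d ' ' -f1)
        ROUTER_ID=$(echo "$var" | cut -d ' ' -f2)
        PATTERN=$(echo "$var"   | cut -d ' ' -f3)
        logfile="${TESTCASE}_O${ROUTER_ID}.res"
        echo "logfile is: $logfile"
        grep -c -- "$PATTERN" "$RESULTS/$logfile"   # number of matching lines
        grep    -- "$PATTERN" "$RESULTS/$logfile"   # the matching lines themselves
done

Invoked as printf '%s\n' "tc1 7 ERROR" | ./myScript.sh (the values are made up), the loop reads the line from the pipe and exits when the pipe is closed or when the literal word quit arrives.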

Deepin beeps when using two fingers on touchpad

Posted: 09 Jun 2021 10:22 AM PDT

Why is Deepin 20 making the same sound as it does for "volume up / volume down" action when simulating a right click by using two fingers on the touchpad (two-finger-tap)?

It started a while ago and is starting to drive me crazy.

Writing to /dev/fb0 has no effect on the GUI console

Posted: 09 Jun 2021 12:17 PM PDT

I run X with /usr/bin/X vt07 -retro to have the GUI on console 7 and the gray grid as background. On console 1 I run while [ 1 ] ; do cat /dev/random > /dev/fb0; done and I see random colored pixels on screen; I can switch across all consoles and see the same, but when I switch to console 7, where the X server is active, I see only the gray grid. How can I see what I write to /dev/fb0 on the console where X is active?

If I use the fbdev driver in xorg.conf I can see the random pixels on the GUI console too, but I need to use amdgpu or radeon. Are there X options, kernel parameters, or module parameters that I have to change to be able to write over X?

ICMP replies not forwarded on Kali Linux during ARP spoof

Posted: 09 Jun 2021 10:51 AM PDT

I'm currently working on a school project where I am using an ARP-spoof to intercept packets from a virtual machine to sniff the URLs it is visiting.

I'm using the command arpspoof -i eth1 192.168.99.21 192.168.99.30 to send out the arp spoof and the Kali machine is configured to be the router with sysctl -w net.ipv4.ip_forward=1

Now, when I use ping on the ubuntu machine all packets are just lost and do not receive an ICMP Reply.

I used tcpdump to see where it's stuck and it looks like the packet is not being forwarded from the NAT network card to the other subnet (where the client is located) on the Kali machine.

AIX 7 - Command To Check If Never Used User Accounts Are Locked

Posted: 09 Jun 2021 10:52 AM PDT

On AIX 7, I am trying to craft a command that checks which user accounts on my system have never logged in and whether they are locked. If a user account exists that has never logged in and is not in the locked state, it should output that information.

Also, I am trying to only query the user accounts with UID greater than 100.

I've successfully crafted a command that will check the list of users with UID greater than 100 but I am now struggling to craft the command that will check the lastlogon/locked state:

cat /etc/passwd | awk -F: '$3 > 100 { print $1 }'  

I have tried to experiment with the "lsuser -a account_locked " & "lsuser -a time_last_login " commands, but this has been unsuccessful.

Can anyone help me, please?
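
A heavily hedged sketch of how the pieces might be combined, assuming lsuser prints attributes as "user attribute=value" and simply omits time_last_login when the account has never logged in (worth verifying on your system before trusting the output):

awk -F: '$3 > 100 { print $1 }' /etc/passwd |
while read -r user; do
    locked=$(lsuser -a account_locked "$user" 2>/dev/null | awk -F= '{ print $2 }')
    lastlog=$(lsuser -a time_last_login "$user" 2>/dev/null | awk -F= '{ print $2 }')
    # never logged in and not locked -> report it
    if [ -z "$lastlog" ] && [ "$locked" != "true" ]; then
        printf '%s has never logged in and is not locked\n' "$user"
    fi
done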

Script for creating multiple users with random password

Posted: 09 Jun 2021 11:24 AM PDT

How do I write a bash script, run from MobaXterm on my Windows workstation, that creates multiple new users with random passwords on multiple Linux servers, each with a specific UID and GID?

read -p "Enter employee name: " ename; echo "Employee name: $ename"; read -p "Enter employee ID: " eid; echo "Employee ID: $eid"; do password=("$ename"123); useradd -m -u "$eid" -g "$eid" -p "$password" "$ename"; [ $? -eq 0 ] && echo "User $i has been added to system!" || echo "Failed to add a user!"; done

I need to improve my script to match this requirement.
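
A sketch of one possible shape for the per-user part, assuming the script runs as root on each target server and that openssl is available to generate the random password. The chpasswd step matters: useradd -p expects an already-hashed password, so passing plain text there leaves an unusable hash.

#!/bin/bash
read -rp "Enter employee name: " ename
read -rp "Enter employee ID: " eid

password=$(openssl rand -base64 12)              # random password instead of name+123
groupadd -g "$eid" "$ename" 2>/dev/null          # make sure the GID exists
if useradd -m -u "$eid" -g "$eid" "$ename"; then
    echo "${ename}:${password}" | chpasswd       # hashes and sets the password
    echo "User $ename (uid $eid) added; password: $password"
else
    echo "Failed to add user $ename!" >&2
fi

Looping this over a list of employees and a list of servers (for example over ssh) is then a separate, outer concern.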

Why is the .sock file not getting generated for uwsgi?

Posted: 09 Jun 2021 10:27 AM PDT

I have been banging my head against this for the last 6 hours, and nothing seems to be working for me.

I am trying to host Django through nginx and uwsgi. As per the best of my understanding and study I have configured nginx.conf and uwsgi.ini files.

nginx.conf

# mysite_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    server unix:/tmp/abc.sock; # for a file socket
}

# configuration of the server
server {
    # the port your site will be served on
    listen      80;
    # the domain name it will serve for
    server_name localhost; # substitute your machine's IP address or FQDN
    charset     utf-8;

    # max upload size
    client_max_body_size 2048M;   # adjust to taste

    # Django media
    location /media  {
        alias /<path_to>/media;  # your Django project's media files - amend as required
    }

    location /static {
        alias /<path_to>/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        include     /<path_to>/uwsgi_params; # the uwsgi_params file you installed
        uwsgi_param REMOTE_USER $remote_user;
        uwsgi_param DATE_GMT $date_gmt;
        uwsgi_param DATE_LOCAL $date_local;
        uwsgi_param AUTH_TYPE Basic;
        uwsgi_read_timeout 600s;
        uwsgi_pass  django;
    }
}

uwsgi.ini

[uwsgi]

# Django-related settings
# the base directory (full path)
chdir           = <path_to>/site
# Django's wsgi file
module          = pjgweb.wsgi:application
# the virtualenv (full path)
home            = /<path_to>/ENV

# process-related settings
# gid
gid             = www-data
# uid
uid             = www-data
# master
master          = true
# maximum number of worker processes
processes       = 6
# maximum number of threads for each worker process
threads         = 5
# the socket (use the full path to be safe
socket          = /tmp/abc.sock
# ... with appropriate permissions - may be needed
chmod-socket    = 666
# clear environment on exit
vacuum          = true
thread_lock     = true
python_thread   = true
# set the DJANGO MODULE SETTINGS
env             = DJANGO_SETTINGS_MODULE=pjintegweb.settings_production
# Log to
logto           = /<path_to>/uwsgi-app.log
# Statistics
stats           = :9191
stats-http      = true
  • I have also created the symbolic links for both nginx.conf and uwsgi.ini.

  • I did configure emperor.uwsgi.service file for systemd

    [Unit]
    Description=uWSGI Emperor service
    After=syslog.target

    [Service]
    ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data
    Restart=always
    KillSignal=SIGQUIT
    Type=notify
    StandardError=syslog
    NotifyAccess=all

    [Install]
    WantedBy=multi-user.target

When I check the nginx log file, I see the following:

: x.x.x.x, server: x.x.x.x, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/run/uwsgi/pjintegweb.sock:", host: "x.x.x.x"
2021/06/09 16:05:37 [crit] 1949#1949: *1 connect() to unix:/run/uwsgi/pjintegweb.sock failed (2: No such file or directory) while connecting to upstream, client: x.x.x.x, server: x.x.x.x, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/run/uwsgi/pjintegweb.sock:", host: "x.x.x.x"
2021/06/09 16:05:37 [crit] 1949#1949: *1 connect() to unix:/run/uwsgi/pjintegweb.sock failed (2: No such file or directory) while connecting to upstream, client: x.x.x.x, server: x.x.x.x, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/run/uwsgi/pjintegweb.sock:", host: "x.x.x.x"
2021/06/09 16:39:04 [alert] 2890#2890: *1 open socket #21 left in connection 3
2021/06/09 16:39:04 [alert] 2890#2890: *2 open socket #22 left in connection 4
2021/06/09 16:39:04 [alert] 2890#2890: aborting

uwsgi logs

bind(): No such file or directory [core/socket.c line 230]
*** Starting uWSGI 2.0.19.1 (64bit) on [Wed Jun  9 18:48:43 2021] ***
compiled with version: 7.5.0 on 02 June 2021 07:22:38
os: Linux-5.4.0-42-generic #46~18.04.1-Ubuntu SMP Fri Jul 10 07:21:24 UTC 2020
machine: x86_64
clock source: unix
detected number of CPU cores: 12
current working directory: /<path_to>
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
chdir() to /<path_to>
your processes number limit is 127942
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/abc.sock fd 3
Python version: 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0]
PEP 405 virtualenv detected: /<path_to>/ENV
Set PythonHome to /<path_to>/ENV
Python main interpreter initialized at 0x55cc047f50c0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 802648 bytes (783 KB) for 30 cores
*** Operational MODE: preforking+threaded ***
WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x55cc047f50c0 pid: 2828 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 2828)
spawned uWSGI worker 1 (pid: 2833, cores: 5)
spawned uWSGI worker 2 (pid: 2834, cores: 5)
spawned uWSGI worker 3 (pid: 2835, cores: 5)
spawned uWSGI worker 4 (pid: 2840, cores: 5)
spawned uWSGI worker 5 (pid: 2845, cores: 5)
spawned uWSGI worker 6 (pid: 2850, cores: 5)
*** Stats server enabled on :9191 fd: 21 ***
SIGINT/SIGQUIT received...killing workers...
worker 1 buried after 1 seconds
worker 2 buried after 1 seconds
worker 3 buried after 1 seconds
worker 4 buried after 1 seconds
worker 5 buried after 1 seconds
worker 6 buried after 1 seconds
goodbye to uWSGI.
VACUUM: unix socket /tmp/abc.sock removed.
  • When I execute the command below, I do not see any error, but it never returns to the command line. Not sure why:

    uwsgi --ini uwsgi.ini
    [uWSGI] getting INI configuration from /path_to/uwsgi.ini

It stays there forever. I have seen multiple posts on the internet, but none of them seems to help me. Please help. Thank you.

Split an argument and create a new variable

Posted: 09 Jun 2021 10:23 AM PDT

My argument looks like My_Submit.sh May5_2014, and I want to create a new variable derived from the argument; this variable should look like May14_5.
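
A sketch in bash, assuming the argument always has the shape Month<Day>_<Year> (e.g. May5_2014) and the goal is Month<YY>_<Day> (e.g. May14_5):

#!/bin/bash
arg=$1                                 # e.g. May5_2014
month=${arg%%[0-9]*}                   # May
day=${arg%_*};  day=${day#"$month"}    # 5
year=${arg##*_}                        # 2014
newvar="${month}${year: -2}_${day}"    # May14_5
echo "$newvar"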

Output entries that have the highest score

Posted: 09 Jun 2021 11:58 AM PDT

I have a file as below. Based on the score column, I would like to output just the people with the highest score for each subject, printing the entire line, as below. A particular order is not needed, but the subject column should contain only unique entries.

name subject score remarks
john   Math    67   satis
lewis  History 56   poor
sarah  Math    89   good
fiona  Geo     65   satis
george History 99   very good

Output desired:

name   subject score remarks
sarah  Math    89   good
george History 99   very good
fiona  Geo     65   satis

All columns are tab-separated and only the remarks column has space-separated words. If the same score exists for the same subject, I would like to output all of those people.
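
A sketch with awk, assuming tab-separated columns, positive scores, and that ties should all be printed (the header is passed through untouched; scores.tsv is a placeholder name):

awk -F'\t' '
    NR == 1 { print; next }                               # header line
    $3 + 0 >  max[$2] + 0 { max[$2] = $3; best[$2] = $0; next }
    $3 + 0 == max[$2] + 0 { best[$2] = best[$2] ORS $0 }
    END { for (s in best) print best[s] }
' scores.tsv

The output follows awk's arbitrary array iteration order, which matches the "particular order is not needed" requirement.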

Oracle Free Tier: Wireguard and iptables

Posted: 09 Jun 2021 12:09 PM PDT

Problem: Traffic not making it from "client" to "server" and back.

Configs:

"server":

[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = [redacted]

PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -D FORWARD -o wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE

[Peer]
PublicKey = [redacted]
AllowedIPs = 10.8.0.2/32

"client":

[Interface]
Address = 10.8.0.2/24
PrivateKey = [redacted]
PostUp = ping -c1 10.8.0.1
DNS = 1.1.1.1

[Peer]
PublicKey = [redacted]
Endpoint = [redacted]:51820
AllowedIPs = 0.0.0.0/0, ::/0

/etc/iptables/rules.v4:

# CLOUD_IMG: This file was created/modified by the Cloud Image build process
# iptables configuration for Oracle Cloud Infrastructure

# See the Oracle-Provided Images section in the Oracle Cloud Infrastructure
# documentation for security impact of modifying or removing these rule

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [463:49013]
:InstanceServices - [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p udp --sport 123 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 64738 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 64738 -j ACCEPT

# I added the following rule manually. The preceding 5 rules and were either
# added by Oracle (22) or dockers (the other ports). All else is added by Oracle.
-A INPUT -p udp -m state --state NEW -m udp --dport 51820 -j ACCEPT
#

# Commenting out the following two lines makes everything work, but defeats
# the point of iptables.
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
#

-A OUTPUT -d 169.254.0.0/16 -j InstanceServices
-A InstanceServices -d 169.254.0.2/32 -p tcp -m owner --uid-owner 0 -m tcp --dport 3260 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.2.0/24 -p tcp -m owner --uid-owner 0 -m tcp --dport 3260 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.4.0/24 -p tcp -m owner --uid-owner 0 -m tcp --dport 3260 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.5.0/24 -p tcp -m owner --uid-owner 0 -m tcp --dport 3260 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.0.2/32 -p tcp -m tcp --dport 80 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.169.254/32 -p udp -m udp --dport 53 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.169.254/32 -p tcp -m tcp --dport 53 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.0.3/32 -p tcp -m owner --uid-owner 0 -m tcp --dport 80 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.0.4/32 -p tcp -m tcp --dport 80 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.169.254/32 -p udp -m udp --dport 67 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.169.254/32 -p udp -m udp --dport 69 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.169.254/32 -p udp --dport 123 -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j ACCEPT
-A InstanceServices -d 169.254.0.0/16 -p tcp -m tcp -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j REJECT --reject-with tcp-reset
-A InstanceServices -d 169.254.0.0/16 -p udp -m udp -m comment --comment "See the Oracle-Provided Images section in the Oracle Cloud Infrastructure documentation for security impact of modifying or removing this rule" -j REJECT --reject-with icmp-port-unreachable
COMMIT

Commentary:

I can make the wireguard connection and ping between the two peers, but I cannot get from the "client" peer to the "server" peer and on to the Internet and back.

"client" side:

$ sudo wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.8.0.2/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] resolvconf -a wg0 -m 0 -x
[#] wg set wg0 fwmark 51820
[#] ip -6 route add ::/0 dev wg0 table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] ip6tables-restore -n
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n
[#] ping -c1 10.8.0.1
PING 10.8.0.1 (10.8.0.1) 56(84) bytes of data.
64 bytes from 10.8.0.1: icmp_seq=1 ttl=64 time=169 ms

--- 10.8.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 168.716/168.716/168.716/0.000 ms

$ curl -4 ifconfig.me
curl: (6) Could not resolve host: ifconfig.me

"client" /etc/resolv.conf:

# Generated by resolvconf
nameserver 1.1.1.1

"server" /etc/resolv.conf:

nameserver 127.0.0.53
options edns0 trust-ad
search vcn[redacted].oraclevcn.com

As mentioned in the comments above, commenting out the INPUT REJECT rule in /etc/iptables/rules.v4 solves the problem, but it is undesirable from a firewall-security point of view, if I understand correctly.

The curl error hints at a DNS problem, if I am not mistaken.

I have the following ingress rules in the oracle cloud config:

Ingress Rules

I have the following Egress rule:

Egress Rules

I am a newbie at iptables, networking, and sysadmin, so I've been struggling over this for a while. I haven't found much on the Internet that sheds light on what exactly I'm missing here, just bits and pieces that got me this far.

I think I have narrowed the problem down to the configuration of iptables, but I'm stumped as to what to do.
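
One hedged guess, not verified against this exact setup: the distribution file ends the FORWARD chain with -A FORWARD -j REJECT, and wg-quick's PostUp also appends (-A), so the ACCEPT rules for wg0 land after the REJECT and never match; the DNS failure on the client would then just be forwarding failing for the tunnelled 1.1.1.1 traffic. If that is the case, inserting instead of appending in the "server" config may be enough, without touching the REJECT rules:

[Interface]
# unchanged lines omitted
PostUp = iptables -I FORWARD -i wg0 -j ACCEPT; iptables -I FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -D FORWARD -o wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE

Running iptables -L FORWARD -n --line-numbers on the server shows whether the ACCEPT rules really end up above the REJECT.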

Any advice on what exactly the problem is here and what a good (and secure) solution would be is much appreciated!!!

Thanks!

RTMP Connection failed: -2 (OBS Studio on Debian Buster) - Something doesn't work. Help me

Posted: 09 Jun 2021 10:11 AM PDT

I gathered the log information below:

obs-studio log:

23:32:48.304: [rtmp stream: 'simple_stream'] Connecting to RTMP URL rtmps://rtmp-api.facebook.com:443/rtmp/...
23:32:48.712: RTMP_Connect1, no SSL/TLS support
23:32:48.712: [rtmp stream: 'simple_stream'] Connection to rtmps://rtmp-api.facebook.com:443/rtmp/ failed: -2

Then I tested the connection from Buster to rtmp-api.facebook.com:443 with telnet rtmp-api.facebook.com 443:

Trying 179.60.194.1...
Connected to edgetee-upload.xx.fbcdn.net.
Escape character is '^]'.
^CConnection closed by foreign host.

Kindly advise. Many thanks.

Error when adding "dnf copr enable @spacewalkproject/spacewalk-2.10-client"

Posted: 09 Jun 2021 10:16 AM PDT

I am getting this error when trying to install the Spacewalk client on one of my Fedora 31 servers.

# dnf copr enable @spacewalkproject/spacewalk-2.10-client
Enabling a Copr repository. Please note that this repository is not part of the main distribution, and quality may vary.

The Fedora Project does not exercise any power over the contents of this repository beyond the rules outlined in the Copr FAQ at <https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-build-in-copr>, and packages are not held to any quality or security level.
Please do not file bug reports about these packages in Fedora Bugzilla. In case of problems, contact the owner of this repository.

Do you really want to enable copr.fedorainfracloud.org/@spacewalkproject/spacewalk-2.10-client? [y/N]: y
Error: This repository does not have any builds yet so you cannot enable it now.

I remember being able to install it on server A, but when I try the same on server B I get this error. I am not sure what's different.

Server A $ cat /etc/os-release
NAME=Fedora
VERSION="31 (Server Edition)"
ID=fedora
VERSION_ID=31
VERSION_CODENAME=""
PLATFORM_ID="platform:f31"
PRETTY_NAME="Fedora 31 (Server Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:31"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f31/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=31
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=31
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Server Edition"
VARIANT_ID=server

Server B# cat /etc/os-release
NAME=Fedora
VERSION="31 (Server Edition)"
ID=fedora
VERSION_ID=31
VERSION_CODENAME=""
PLATFORM_ID="platform:f31"
PRETTY_NAME="Fedora 31 (Server Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:31"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f31/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=31
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=31
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Server Edition"
VARIANT_ID=server

Can I call a mdoc macro inline?

Posted: 09 Jun 2021 12:10 PM PDT

Despite the many conveniences of mdoc, I find the syntax of troff still a bit too newline-rich or "spaghetti"-like compared to other languages. Consider this example:

.Sh DESCRIPTION
The
.Nm
utility extracts C interface descriptions from a header file composed
using the SQLite documentation syntax (see
.Sx SYNTAX ) .
It then emits the corresponding set of
.Xr mdoc 7
manpages, one per interface description.
By default,
.Nm
reads from standard input and outputs files into the current directory.
Its arguments are as follows:

All these lines just for a paragraph that starts a list seem quite odd to me. Is there a way to call macros inline?


What I have tried involves using the strings syntax to invoke the macros, as groff's manual states that strings, diversions, and macros are stored in a shared namespace. This does get me somewhere, but the .if conditions at the start of the macros end up unprocessed and are spat out raw.

.Sh DESCRIPTION
The \*[Nm] utility extracts C interface descriptions from a header file composed
using the SQLite documentation syntax (see \*[Sx SYNTAX]).
It then emits the corresponding set of \*[Xr mdoc 7] manpages, one per interface
description. By default, \*[Nm] reads from standard input and outputs files into
the current directory.
Its arguments are as follows:

I then tried to add another string definition to insert a newline. .ds n \! didn't quite work, and what I ended up having was a diversion:

.box n
\!
.box

.Sh DESCRIPTION
The \*n\*[Nm] utility extracts C interface descriptions... (see \*n\*[Sx SYNTAX]).
It then...

Now the .if spam is gone, but unfortunately I get a stray, actual newline in the output. Is there a cleaner way to call these macros? (We will think about mandoc compatibility later.)

Addendum:

  • With the "see SYNTAX" bit added, it appears that the newline is somehow inherent to the .Nm macro, as the .Sx one only produces a stray trailing space. Still, can we get rid of both?
  • If I remove the space after \*n\*[Nm], the extra newline is gone, but that looks... odd in the source. The SYNTAX one is improved by calling \n*[Sx Syntax ) .] like in the original, but it somehow produces a paragraph break too.

nmap taking ages

Posted: 09 Jun 2021 11:27 AM PDT

I am using TryHackMe to try to learn, but they tell me to run nmap -A -p- and it is taking over an hour for a scan.

I am really new to this and want to know: is there a faster way to perform a scan that shows open ports and the OS version?
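
A hedged suggestion rather than a TryHackMe-specific answer: -A -p- runs service/OS detection and scripts against all 65535 ports, which is the slow part. Scanning a smaller port set first and only then probing the responsive ports is usually much faster:

# top 100 ports with aggressive timing
nmap -T4 -F <target>

# then version and OS detection only on the ports that showed up open
sudo nmap -T4 -sV -O -p <open_ports> <target>

-p- is still the thorough option; the trade-off is simply time versus coverage.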

Formatting Proof in Groff

Posted: 09 Jun 2021 12:21 PM PDT

Is there a way to append a tombstone symbol (usually a square) at the right-hand end of the last line of a proof, similar to a LaTeX proof?

dhclient command does not allocate an IP address to my interface

Posted: 09 Jun 2021 12:01 PM PDT

I was trying to access a website through my browser but I couldn't, so I issued the ifconfig command to see if there was something wrong with the IP address:
[screenshot of the ifconfig output]

I noticed that the interface enp0s3 does not have an IP address. So I issued this command in vain:
[screenshot of the dhclient command and its output]

Still nothing changed. So I decided to check the /etc/network/interfaces file to check the configuration of the interfaces. I was surprised to find this:
[screenshot of /etc/network/interfaces]

Does anyone know what might be the problem and how I can solve it?


I forgot to mention that I am running Ubuntu on VirtualBox, and it is unable to connect to my home router (other PCs can). Sorry.

How do I print an ASCII character by different code points in Bash?

Posted: 09 Jun 2021 10:55 AM PDT

The ASCII table contains the 'J' character, which has code points in different numeral systems:

Oct   Dec   Hex   Char
112   74    4A    J

It's possible to print this character by its octal code point with printf '\112' or echo $'\112'. How do I print the same character using the decimal and hexadecimal code point representations?
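
A sketch using bash's built-in printf: a hexadecimal escape is supported directly, while a decimal code point needs a small detour through octal (the \x escape is a bash/ksh extension, not guaranteed by POSIX printf):

printf '\x4A\n'                      # hexadecimal 4A -> J
echo $'\x4A'                         # the same escape in $'...' quoting
printf "\\$(printf '%o' 74)\n"       # decimal 74 -> octal 112 -> J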

Install 32 Bit Application on 64 Bit Arch Linux

Posted: 09 Jun 2021 01:05 PM PDT

I have an ancient OpenOffice plugin which requires the 32-bit version of OpenOffice/LibreOffice and does not work with the 64-bit version. So I would like to know if there is any way to install the 32-bit version of LibreOffice on a standard 64-bit, multilib-enabled Arch Linux with pacman.

In the past I placed a 32-bit version manually somewhere in the filesystem, but that is more or less only a workaround and has several disadvantages.

How do I find which font provides a particular Unicode glyph?

Posted: 09 Jun 2021 11:08 AM PDT

On the Fedora 23 system I upgraded from F22, the U+1F32D symbol 🌭 shows up just fine in the terminal, but on the one I installed from scratch, I get the box-with-numbers placeholder. I just checked, and I have about 60 font packages on the system where it doesn't work, and over 200 where it does. Without inspecting each one manually, is there a way to identify which font I need to add?
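
A hedged sketch with fontconfig, which can filter installed fonts by the code points they claim to cover; if it behaves as I expect, an empty result means no installed font provides the glyph:

# list font families that declare coverage of U+1F32D
fc-list ':charset=1f32d' family

Comparing its output on the working and the non-working system should narrow down which font package to install.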

SC_ERR_NOT_SUPPORTED(225) with Suricata on CentOS 6.5

Posted: 09 Jun 2021 11:02 AM PDT

I have just installed Suricata 2.0.3 RELEASE on CentOS 6.5. I try running it with

sudo suricata -c /etc/suricata/suricata.yaml -i eth0  

and get the following warning

<Warning> - [ERRCODE: SC_ERR_NOT_SUPPORTED(225)] - Eve-log support not compiled in. Reconfigure/recompile with libjansson and its development files installed to add eve-log support.  

I compiled and installed Jansson as outlined here and then rebuilt Suricata as follows.

sudo make
sudo make-install
sudo make-install-full

and everything proceeded without any errors. However, I still get the SC_ERR_NOT_SUPPORTED warning. I did a search for the error message and found a link that said it was a known issue that won't be fixed due to an issue with the Jansson library. Is this the case? If so, can I disable this warning?

Securest way to authenticate users against /etc/shadow using PHP?

Posted: 09 Jun 2021 11:47 AM PDT

This question became important to me since I'm setting up a Horde environment and want to use the system accounts. Doing some searching I found mainly 4 ways of doing that, all of which seem to have some drawbacks. Now I'm wondering which one is most secure, or whether I'm missing some alternatives.

  • Method 1: Use PAM (PHP)

    Horde offers an auth plugin that makes use of the PAM library. Works fine, but the webserver needs read access to /etc/shadow. I found some statements saying this is as secure as publishing the password file for download.

  • Method 2: Use /bin/su (PHP)

    In addition, Horde provides a module that uses /bin/su to authenticate by calling fopen (file open) on /bin/su -c /bin/true $username and writing the password to this "file". Currently, this does not work for me. AFAIK, this is probably because /bin/su is configured to only work in a terminal. I read it was possible to allow this usage of /bin/su, but it would be insecure.

  • Method 3: Use external program (command line)

    I could write a small program that makes use of the PAM library and is invoked by PHP. Using SUID, no one could access /etc/shadow. However, I have to pass the password as an argument to or pipe it into my program, which again is said to be a security risk. (I guess that's the reason, why /bin/su does not allow it by default.)

  • Method 4: Use external program (local network)

    Same as method 3, but the communication could be done via unix sockets or tcp connections. I don't know whether this is more secure than #3, probably equivalent to the question whether it's harder to spy on local network connections than on tty!?

I think it is clear, why the first option isn't a good idea. But I don't know, what's wrong with the others, so I would be glad about some short explanations.

Readable comments on separate lines in a multi-line bash command with pipelines?

Posted: 09 Jun 2021 12:55 PM PDT

When creating shell scripts using pipelines, and using the backslash to continue lines, I want to insert comments on separate lines, in a robust, readable and portable fashion.

For example, given this uncommented multi-line command (stolen from @DigitalRoss for its clarity):

echo abc |
     tr a-z A-Z |
     sort |
     uniq

... the below is aesthetically closest to what I want to accomplish, but for obvious reasons, does not work ... and yes, I'm well aware that this isn't something normally worth commenting:

# Perform critical system task.
# NOTE - An example of what does *not* work.
echo abc |
     # Convert lowercase to uppercase.
     tr a-z A-Z |
     # Sort the results.
     sort |
     # Only show unique lines.
     uniq

Existing related answers appear unsatisfactory for me, as follows:

First, glenn jackman's answer (adding arguments to an array, and then executing the array) works for single commands, but does not work for pipelining (and even if it did, it adds complexity that I would like to avoid).

Second, @Gilles' answer here (which uses :) also doesn't appear to work with pipelining, because it alters the flow of the pipeline:

$ echo "abc" | :  $  

(NOTE: If there is an equivalent to : that passes output unmodified, that would be aesthetically acceptable, but I haven't been able to find one. I could write a custom one, but it would reduce portability.)

Finally, the last part of DigitalRoss' answer on StackOverflow works well for adding comments on the same line, but I strongly prefer comments on separate lines. Otherwise, when lines have widely varying lengths, readability is reduced:

echo abc |         # normal comment OK here
     /usr/local/bin/really/long/path/ridiculously-long-filename.sh |  # another normal comment OK here
     sort |        # the pipelines are automatically continued
     uniq          # final comment

I'm looking for answers that preserve readability and minimize complexity, or else some background on why what I'm looking for is infeasible.
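
For what it's worth, a pattern that does work in bash, ksh and POSIX sh, assuming the non-working version relied on trailing backslashes: after a trailing |, the shell keeps reading for the next command, so no backslash is needed and comment-only lines in between are simply ignored:

echo abc |
    # Convert lowercase to uppercase.
    tr a-z A-Z |
    # Sort the results.
    sort |
    # Only show unique lines.
    uniq

The same applies after && and ||, which covers most multi-line commands that would otherwise need backslash continuations.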
