Tuesday, September 28, 2021

Recent Questions - Unix & Linux Stack Exchange


`sort` using ascii order

Posted: 28 Sep 2021 10:41 AM PDT

I would like to use sort to sort by ASCII value, or at least without disregarding punctuation, i.e.:

sort <<DATA
a.01
a.04
a2
a.3
a.2
DATA

should produce

a.01
a.04
a.2
a.3
a2

In particular, these properties are important:

  • dots are not ignored, so a.2 < a.3 < a2
  • numbers are not treated special, so a.04 < a.3

How do I achieve this?
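One way to get this ordering, as a sketch rather than anything from the question: forcing the C locale makes sort compare raw byte values, so punctuation and digits are no longer subject to locale collation rules:

LC_ALL=C sort <<DATA
a.01
a.04
a2
a.3
a.2
DATA
# prints: a.01  a.04  a.2  a.3  a2 (one per line)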

How to open the n most recently modified files in less/vim?

Posted: 28 Sep 2021 10:38 AM PDT

How can I open all and only the n (e.g. 5) most recently modified files in a directory in less or vim?

I know I can use ls -t to sort the output from most recent to oldest. So my intuition would be to use ls -t | head -5 | vim, but that doesn't work, since the ls output is treated by vim as raw text, not as filenames.

How do I do this with find? My problem with find is that I always use ls to browse directories, so I know how to use it - but I never use find for that purpose.
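A couple of approaches that may work here (sketches, not from the question):

# simplest, if the filenames contain no whitespace or glob characters:
vim -p $(ls -t | head -n 5)      # -p opens each file in its own tab

# more robust, using GNU find and null-delimited data throughout:
find . -maxdepth 1 -type f -printf '%T@\t%p\0' \
    | sort -z -rn | head -z -n 5 | cut -z -f2- \
    | xargs -0 sh -c 'vim -p "$@" < /dev/tty' vim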

The strange power consumption behaviour of a Quadro card when `vfio-pci` has been removed and `nvidia` reattached

Posted: 28 Sep 2021 10:21 AM PDT

I have built a system with a GeForce GTX 960 and a Quadro M4000 graphics card; the Quadro I usually pass through to a virtual machine. The GTX 960 card is only used by the host.

Normally, the Quadro card would not be available to the host, because the kernel driver vfio-pci prevents it from being used. However, when I don't use it in the virtual machine, I would like to have it accessible from the host machine, e.g. to do some computation.

But there is this very strange behaviour in power consumption and fan speed... How can I reduce the power consumption and fan speed without needing to have nvidia-settings open all the time?

From my notes:

Reuse a Passed-through-ready Device on the Host

Suppose a secondary graphics card that has been prepared for pass-through to a guest should be used on the host instead. The device would normally not be usable on the host, since the wrong driver is loaded. Here, the Quadro M4000 has the vfio-pci driver in use, but the nvidia driver should be used instead.

sudo lspci -nnk | egrep -A3 "VGA|Display|3D"
  # 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)
  # Subsystem: Gigabyte Technology Co., Ltd Device [1458:36ac]
  # Kernel driver in use: nvidia
  # Kernel modules: nouveau, nvidia_drm, nvidia
  # --
  # 0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Quadro M4000] [10de:13f1] (rev a1)
  # Subsystem: Hewlett-Packard Company Device [103c:1153]
  # Kernel driver in use: vfio-pci
  # Kernel modules: nouveau, nvidia_drm, nvidia

Unload the vfio-pci driver and check the device status again. No kernel driver should be in use, hence the line Kernel driver in use: ... is gone.

sudo modprobe -r vfio-pci
sudo lspci -nnk | egrep -A3 "VGA|Display|3D"
  # 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)
  # Subsystem: Gigabyte Technology Co., Ltd Device [1458:36ac]
  # Kernel driver in use: nvidia
  # Kernel modules: nouveau, nvidia_drm, nvidia
  # --
  # 0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Quadro M4000] [10de:13f1] (rev a1)
  # Subsystem: Hewlett-Packard Company Device [103c:1153]
  # Kernel modules: nouveau, nvidia_drm, nvidia
  # 0c:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)

Also check the output of the nvidia driver tool nvidia-smi. It should list only one graphics card (the not-passed-through GTX 960).

sudo nvidia-smi
  # Tue Sep 28 18:19:36 2021
  # +-----------------------------------------------------------------------------+
  # | NVIDIA-SMI 470.74       Driver Version: 470.74       CUDA Version: 11.4     |
  # |-------------------------------+----------------------+----------------------+
  # | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
  # | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
  # |                               |                      |               MIG M. |
  # |===============================+======================+======================|
  # |   0  NVIDIA GeForce ...  Off  | 00000000:0B:00.0  On |                  N/A |
  # |  0%   51C    P8    19W / 160W |    477MiB /  4040MiB |      0%      Default |
  # |                               |                      |                  N/A |
  # +-------------------------------+----------------------+----------------------+
  # ...

Remove all associated PCI devices from the system. In this case, those are 0c:00.0 and 0c:00.1. Then check that those are actually gone.

echo 1 | sudo tee /sys/bus/pci/devices/0000\:0c\:00.0/remove
echo 1 | sudo tee /sys/bus/pci/devices/0000\:0c\:00.1/remove
sudo ls /sys/bus/pci/devices/ | grep 0c:00.
  # nothing...

Then let it rescan for PCI devices and check whether the devices are there again and enabled. Also check which kernel driver is in use and what nvidia-smi reports.

echo 1 | sudo tee /sys/bus/pci/rescan
sudo ls /sys/bus/pci/devices/ | grep 0c:00.
sudo cat /sys/bus/pci/devices/0000\:0c\:00.?/enable
  # 1
  # 1
sudo lspci -nnk | egrep -A3 "VGA|Display|3D"
  # 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)
  # Subsystem: Gigabyte Technology Co., Ltd Device [1458:36ac]
  # Kernel driver in use: nvidia
  # Kernel modules: nouveau, nvidia_drm, nvidia
  # --
  # 0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204GL [Quadro M4000] [10de:13f1] (rev a1)
  # Subsystem: Hewlett-Packard Company Device [103c:1153]
  # Kernel driver in use: nvidia      # <-- here!
  # Kernel modules: nouveau, nvidia_drm, nvidia
sudo nvidia-smi
  # Tue Sep 28 18:26:16 2021
  # +-----------------------------------------------------------------------------+
  # | NVIDIA-SMI 470.74       Driver Version: 470.74       CUDA Version: 11.4     |
  # |-------------------------------+----------------------+----------------------+
  # | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
  # | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
  # |                               |                      |               MIG M. |
  # |===============================+======================+======================|
  # |   0  NVIDIA GeForce ...  Off  | 00000000:0B:00.0  On |                  N/A |
  # |  0%   47C    P8    19W / 160W |    479MiB /  4040MiB |      0%      Default |
  # |                               |                      |                  N/A |
  # +-------------------------------+----------------------+----------------------+
  # |   1  Quadro M4000        Off  | 00000000:0C:00.0 Off |                  N/A |
  # | 45%   37C    P0    42W / 120W |      0MiB /  8127MiB |      2%      Default |
  # |                               |                      |                  N/A |
  # +-------------------------------+----------------------+----------------------+
  # ...

Funnily enough, the Quadro M4000 consumes about 42 watts under absolutely no load. I guess this is due to a driver problem...

However, if the graphical nvidia-settings program is loaded, the power demand drops to about 12 Watts.

# Terminal A
watch -d -n 1 sudo nvidia-smi

# Terminal B
nvidia-settings

Watch nvidia-smi and listen to the fan noise when the magic happens...

watch -d -n 1 sudo nvidia-smi
  # ...
  # +-------------------------------+----------------------+----------------------+
  # |   1  Quadro M4000        Off  | 00000000:0C:00.0 Off |                  N/A |
  # | 46%   38C    P0    10W / 120W |      0MiB /  8127MiB |      0%      Default |
  # |                               |                      |                  N/A |
  # +-------------------------------+----------------------+----------------------+
  # ...

Best of all -- nvidia-settings does not even list my Quadro card... [screenshot: no Quadro card listed in nvidia-settings]
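One thing that may be worth trying here (an assumption, not something tested in the notes above): enabling persistence mode keeps the driver initialised even when no client such as nvidia-settings is running, which often has the same effect on idle power management:

sudo nvidia-smi -i 1 -pm 1      # enable persistence mode on GPU 1 (the Quadro M4000)
watch -d -n 1 nvidia-smi        # idle power draw should now settle without nvidia-settings open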

MX Linux Won't start

Posted: 28 Sep 2021 10:13 AM PDT

I recently installed MX linux KDE and installed updates. After I rebooted and selected the default option in GRUB, it begins the loading splash screen only to complete and show me a black screen.

I ESC'd and saw that it's booting normally, but something called regulatory isn't working properly, and it causes the wifi driver to fail also.

And the weirdest part is that whenever I boot into the terminal, it works fine. However, if I try to startx, it hangs the system. I'm stuck in the terminal with no idea what to do. I tried downloading another Linux distro and using dd to make a bootable USB, but that doesn't seem to be working either.

Any ideas what I should do?

No OpenGL on amd64 chroot on Arch Linux ARM

Posted: 28 Sep 2021 10:36 AM PDT

I am on Arch Linux ARM, and am running a 64-bit chroot in order to get access to some things that are only x86_64. However, when trying to start yuzu or dolphin-emu, it starts, but the text is not there, and it has this error message:

libGL error: failed to create dri screen
libGL error: failed to load driver: virtio_gpu
libGL error: failed to get magic
libGL error: failed to load driver: virtio_gpu

All the proper drivers are installed, and OpenGL works with no errors on host.

EDIT: It seems that for some reason, llvmpipe is being used on the chroot instead of virgl.
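One thing that may be missing, as an assumption rather than a confirmed fix: a chroot normally needs the host's DRM device nodes (and often the X socket and /dev/shm) bind-mounted before hardware GL can work instead of falling back to llvmpipe; the chroot path below is hypothetical:

sudo mount --bind /dev/dri       /path/to/chroot/dev/dri
sudo mount --bind /dev/shm       /path/to/chroot/dev/shm
sudo mount --bind /tmp/.X11-unix /path/to/chroot/tmp/.X11-unix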

Adding multiple user inputs into comma separated line in shell script

Posted: 28 Sep 2021 09:19 AM PDT

I have to read user input of multiple domain names until the user is done entering them. The input can be just one domain name or several, and the domain names should be separated by a comma (,).

How can I pass the domain names to the command below? Kindly help me.

keytool -genkey -keystore tc_keystore.jks -keysize 2048 -keypass password \
    -storepass password -keyalg RSA \
    -dname "CN=domain1.com,OU=Devteam,O=Softech,L=Chicago,ST=IL,C=US" \
    -alias domain1.com -ext san=dns:domain2,domain3,domain4,domain5,domain6,domain7

As I am new to shell scripting, I currently read a single user input, store it in a variable, and use that variable, like in the command below. But when it comes to multiple user inputs that should be separated by commas, I am stuck.

keytool -genkey -keystore $keystore -keysize 2048 -keypass $password \
    -storepass $password -keyalg RSA \
    -dname "CN=$domain1,OU=Devteam,O=Softech,L=Chicago,ST=IL,C=US" \
    -alias $domain1 -ext san=dns:domain2,domain3,domain4,domain5,domain6,domain7
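A sketch of one way to collect the domains and build the SAN list (any variable names beyond the ones already used above are made up for illustration):

#!/bin/bash
domains=()
while true; do
    read -r -p "Domain (empty line to finish): " d
    [ -z "$d" ] && break
    domains+=("$d")
done

primary=${domains[0]}                       # first entry doubles as CN and alias
san=$(printf 'dns:%s,' "${domains[@]}")     # dns:a.com,dns:b.com,...
san=${san%,}                                # drop the trailing comma

keytool -genkey -keystore "$keystore" -keysize 2048 -keypass "$password" \
    -storepass "$password" -keyalg RSA \
    -dname "CN=$primary,OU=Devteam,O=Softech,L=Chicago,ST=IL,C=US" \
    -alias "$primary" -ext "san=$san"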

Why is vxlan not forwarding the packet to the other end?

Posted: 28 Sep 2021 08:51 AM PDT

I want to communicate directly with host2's netns from within host1's netns.

But Linux tells me that the network is unreachable.

It looks like the packet is not being routed to the gateway and processed correctly by VxLAN.

I hope someone can tell me what's wrong.

This is the command I used to create the vxlan.

$ ip link add vxlan type vxlan id 1 dstport 4789 dev eth0 nolearning proxy  

This is the record of my command execution in host1.

$ ip netns exec netns2 ping 192.168.1.3
  connect: Network is unreachable

$ ip netns exec n1 arping -I v1 192.168.1.0
  ARPING 192.168.1.0 from 192.168.2.3 v1
  Unicast reply from 192.168.1.0 [A2:C0:85:A6:28:1C]  0.533ms

$ ip netns exec netns2 ping 192.168.1.0
  connect: Network is unreachable

This is my network topology; L3 connectivity works between the two hosts. [network topology diagram]

Can I have custom folder icons for some specific (document) folders?

Posted: 28 Sep 2021 08:34 AM PDT

I am using Linux Mint, with Cinnamon Desktop Environment.

I come from macOS, and on that system I am used to setting custom icons for some of the sub-folders in my documents folder. Is it possible to do the same in Linux? Maybe with some additional 'tool'?
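For what it's worth, a sketch (an assumption, not from the question): GTK file managers such as Nemo honour the per-folder metadata::custom-icon attribute, which gio can set; the folder and icon paths below are hypothetical:

gio set ~/Documents/Invoices metadata::custom-icon file:///home/user/.icons/invoices.png
# revert with:
gio set -t unset ~/Documents/Invoices metadata::custom-icon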

awk command inside bash shell script loop

Posted: 28 Sep 2021 07:56 AM PDT

I have a bash script which is supposed to go through a series of text files. I have set up a for loop to do this job automatically, but I am not getting any output files when the script runs. I have attempted various single-line commands I found online with no luck. I am looking to break these large files up by "year". Any suggestions?

#!/usr/awk -f

for i in *yyyymm.txt
do
#       {FS = "," }
        awk -F "," 'BEGIN '$1 == 2002'END{ print $0 }' $i > "$i"-2002.dat
        gawk '$ 1==2002 { print $0 }' "$i" > "$i"-2002.dat
        awk '/2002/' "$i" > "$i"-2002.dat
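For comparison, a sketch of one working variant of the apparent intent (keep the rows whose first comma-separated field is 2002, writing one output file per input file):

#!/bin/bash
for i in *yyyymm.txt; do
    awk -F ',' '$1 == 2002' "$i" > "${i%.txt}-2002.dat"
done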

How to use blocks of repeat code within .bashrc

Posted: 28 Sep 2021 07:57 AM PDT

I have 4 functions of about 100 lines of simple code each in my .bashrc.

These functions are identical except for the function name and the first 3 lines of code, which are variables.

How can I pull the 97 common lines out into a separate block that I then call from each of the 4 functions?

I tried making a common function block of these 97 lines and calling it from a very small function, but I could not get that to work.

Here is the function, with the 3 unique lines at the top and the common code below that:

function download_podcaster()
{
    ######################################################################
    ##########                                                  ##########
    #   UNIQUE CODE FOLLOWS
    #
    #set-up the directory variables
    dir_zz="/home/user/podcast_author"
    pone_dir="podcast_author"
    downloads_z="~/Downloads"

    ##########   UNIQUE CODE ABOVE                              ##########
    ##########                                                  ##########
    ######################################################################

    #  vvvvv COMMON CODE BELOW   vvvvvv

    echo; echo "   .... this output is from youtube-dl ..."; echo

    #download the file
    youtube-dl -f 140 --restrict-filenames -o $dir_zz'/%(title)s.%(ext)s' $1

    #make dir if does not aleready exist
    mkdir -p $dir_zz

    #change to the downloads directory
    cd $dir_zz
    echo
    echo "current dir is:                                   "$(pwd)
    echo

    #open the downloads location to show the files with pcmanfm
    #pcmanfm ~/Downloads &
    pcmanfm $dir_zz &> /dev/null

    file_name_z=$(youtube-dl --get-filename --restrict-filenames "$1")
    echo; echo "file name from provided by youtube-dl:            "$file_name_z; echo

    #grab the filename from youtube, and parse it into something useful
    #remove 11 digits before end of file name
    file_name_z=$(echo $file_name_z | sed 's|...........\.mp4$|.mp4|g' | sed 's|...........\.m4a$|.m4a|g' \
        | sed 's|...........\.webm$|.webm|g' \
        | sed 's|,||g' | sed 's|!||g' | sed 's| |_|g'  |  \
        sed 's|-\.||g' | sed 's|webm||g' | sed 's|mp4||g' | sed 's|\?||g').m4a
            #  remove ,      remove !       replace " " with "_"
            #  remove "-."    remove "webm"
    echo; echo "file name after , ! \" \" ? removed:                "$file_name_z; echo

    var1=$(ls -t | grep -E "^[0-9]{3}" | sort | tail -n  1 | cut --bytes=1-3)
    echo; echo "\$var1 3 digit highest number from file set is:    "$var1; echo

    sleep .25
    #create the variable to assign the next file number to front of file name
    next_file_number=$(printf "%03d\n" $((10#$var1+5)))
    echo; echo "File number plus 5 is:                            "$next_file_number; echo

    sleep .25
    #new file name with three digit number in front of filename
    file_name_y=${next_file_number}_${file_name_z}
    echo; echo "concatenated filename is \$file_name_y:            ""$file_name_y"; echo

    #move the old file to the new file name
    mv "$file_name_z" "$file_name_y"
    echo; echo "                                                  ""$file_name_y"; echo

    #plug phone in. Phone mount point in file system can be seen here
    #echo; cd /var/run/user/$UID/gvfs; ls; echo
    #reference
    #https://askubuntu.com/questions/342319/where-are-mtp-mounted-devices-located-in-the-filesystem
    #
    #How to get to the phone directory on the phone if the directory is dynamically allocated on the
    #reference
    #https://askubuntu.com/a/454697/624987
    #phone or changes, use this
    #cd /var/run/user/$UID/gvfs; cd * ; cd *; cd Music; mkdir -p $phone_dir; cd $phone_dir; ls
    #
    #"cd *" changes to the first directory shown

    #grab the directory on the phone in which to place the file
    cd /var/run/user/$UID/gvfs; cd * ; cd *; cd Music; mkdir -p $phone_dir; cd $phone_dir
    phone_dir_long_path=$(pwd)

    echo "  ...  now copying the file to the phone -->"; echo
    #copy file to phone
    cp $dir_zz/"$file_name_y" "$phone_dir_long_path"
    echo

    #open terminal at directory of files
    gnome-terminal --title="test" --command="bash -c '$phone_dir_long_path; ls; $SHELL'"

    echo
    #open the file name with the default app, usually vlc
    xdg-open $dir_zz/$file_name_y &

    echo
}
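One way to structure this, as a sketch rather than the poster's actual code: move the ~97 shared lines into a single helper that receives the per-podcast values as arguments, and keep each named function as a tiny wrapper (the second wrapper below is hypothetical):

_download_common() {
    local dir_zz=$1 phone_dir=$2 downloads_z=$3 url=$4
    # ... the ~97 shared lines go here, using the local variables above
    #     instead of the globals set at the top of each function ...
    echo "would download $url into $dir_zz and copy it to $phone_dir"
}

function download_podcaster() {
    _download_common "/home/user/podcast_author" "podcast_author" "$HOME/Downloads" "$1"
}

function download_othercast() {    # hypothetical second wrapper
    _download_common "/home/user/other_author" "other_author" "$HOME/Downloads" "$1"
}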

Return the string of a line in a file with multiple lines after a matching pattern

Posted: 28 Sep 2021 08:34 AM PDT

I have a file with multiple lines in it. I am trying to get a string that appears after a specific pattern...

Here is the command I am trying:

sed -n -e 's/^.*DB_PASS=//p' <<< $(cat /root/dadosDB)  

But instead of giving me only the line I want, it's giving me everything after the pattern I specified...

What I expect:

Return only the output of the line that contains DB_PASS and ignore other lines.

What the command is doing:

Printing everything in the file after DB_PASS and ignoring everything that appears before that pattern.

Any help is appreciated!

Here is what I have inside dadosDB:

DB_USER=myusername
DB_PASS=mypassword
DB_NAME=mydatabase

I want to get only what is after DB_PASS= and stop at that line, ignoring the following ones.

Actual output of the sed:

$ sed -n 's/^.*DB_PASS=//p' <<< $(cat /root/dadosDB)
mypassword DB_NAME=mydatabase

What I need:

mypassword  
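The here-string is the culprit: the unquoted $(cat ...) collapses the file into a single line, so everything after DB_PASS= survives the substitution. Operating on the file directly (or quoting the expansion) keeps the line boundaries; a sketch:

sed -n 's/^DB_PASS=//p' /root/dadosDB
# or, with GNU grep:
grep -oP '^DB_PASS=\K.*' /root/dadosDB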

Two files with two columns: if a col1 value of file1 is found in file2, print col1 of file1 | its col2 value | the corresponding col2 of file2 | OK

Posted: 28 Sep 2021 08:48 AM PDT

I have two files, each with two columns. If a col1 value of file1 is found in file2, the output should contain col1 of file1, the corresponding col2 value of file1, the corresponding col2 value of file2, and a status (OK if the two col2 values match, NOK otherwise).

Please see below.

file1

col1 | col2
abc | 5
xyz | 6

file2

col1 | col2
abc | 3
xyz | 6

output file3

col1 | col12 | col22 | status
abc | 5 | 3 | NOK
xyz | 6 | 6 | OK
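A sketch of one way to produce file3 with awk, assuming the " | "-separated layout shown above and that every key of file1 also appears in file2:

awk -F ' *[|] *' '
    NR == FNR { f2[$1] = $2; next }                       # first pass: remember file2 values by key
    FNR == 1  { print "col1 | col12 | col22 | status"; next }
    {
        status = ($2 == f2[$1]) ? "OK" : "NOK"
        print $1 " | " $2 " | " f2[$1] " | " status
    }
' file2 file1 > file3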

Remove [CRLF] from a specific column in a CSV file

Posted: 28 Sep 2021 07:30 AM PDT

I want to remove [CRLF] from a specific column in a CSV file using sed or awk.

Example :

a,b,c
test1,test2
test2 bis,
test3

Output:

a,b,c
test1,test2 test2 bis,test3

match some or all patterns with awk

Posted: 28 Sep 2021 10:59 AM PDT

I have a small problem with awk multiple pattern matching which I cannot figure out. I have the following awk line:

awk '/pat1/{v1=$4; next} /pat2/{v2=$5; next} /pat3/{v3=$6;next} /pat4/{v4=$5; print v1,"    ",v2,"    ",v3"    ",v4}' myfile.out  

This gives the result I want (the matched results are printed on a line every time they match), provided that ALL of them match. If one of the patterns is not present, then nothing will match.

So if all match I get what I expect:

pat1    pat2    pat3    pat4
pat1    pat2    pat3    pat4
pat1    pat2    pat3    pat4
pat1    pat2    pat3    pat4
.
.
.

The patX values are different in each row!

Is there a way to tell awk to look for these patterns and if they do not appear to leave the place empty?

So for example, if in the first instance pat3 and pat4 do not yet appear in the document that is being updated, then I should get:

pat1    pat2
pat1    pat2   pat3         ------> (here let's assume that pat3 has made an appearance)
pat1    pat2   pat3    pat4 ------> (here pat4 started to appear too)
pat1    pat2   pat3    pat4
pat1    pat2   pat3    pat4
.
.
.

Can this be done with awk?

Edit: Here are the two example scenarios I am facing. My files start off empty and then fill with data, and I need to filter some patterns from them. Not all the patterns appear from the beginning. So the file will start off as:

some text here pat1
some more text here

some more text here pat2

some more text here and pat3

If I use the awk command above it will give an empty result because pat4 is not present yet! As time goes by it will eventually appear.

some text here pat1
some more text here

some more text here pat2

some more text here and pat3

some more text here pat4

some text here pat1
some more text here

some more text here pat2

some more text here and pat3

some more text here pat4

some text here pat1
some more text here

some more text here pat2

some more text here and pat3

some more text here pat4

The result of the awk command looks as expected:

pat1      pat2      pat3     pat4
pat1      pat2      pat3     pat4
pat1      pat2      pat3     pat4

However, at the beginning I would like to obtain the result:

pat1    pat2    pat3     

I hope this is clearer now (I have rewritten and tested the awk command above to make it simpler for this example).
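A sketch of one way to do this, under the assumption (not stated in the question) that pat1 always marks the start of a new record: flush the previous record whenever pat1 reappears, and once more at end of input, so a missing pattern simply leaves its column empty:

awk '
    function flush() { if (started) print v1 "    " v2 "    " v3 "    " v4 }
    /pat1/ { flush(); v2 = v3 = v4 = ""; started = 1; v1 = $4; next }
    /pat2/ { v2 = $5; next }
    /pat3/ { v3 = $6; next }
    /pat4/ { v4 = $5; next }
    END    { flush() }
' myfile.out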

How to run a command using another command output as part of its text in terminal?

Posted: 28 Sep 2021 07:14 AM PDT

If command1 is:

curl -k -v -u user:password https://example.com/v2/image/manifests/tag -H Accept: application/vnd.docker.distribution.manifest.v2+json 2>&1 | grep Docker-Content-Digest | awk '{print ($3)}'   

which outputs, for example, the following Docker-Content-Digest:

> sha256:12345...  

Knowing that each command works when run separately, how do I inject command1's output into command2, as in:

curl -k -v -u user:password -X DELETE https://example.com/v2/image/manifests/(command1 output)  

I am just trying to run one command only!


Update:

When I combine them using double quotes as follows:

curl -k -v -u user:password -X DELETE https://example.com/v2/image/manifests/"$(curl -k -v -u user:password https://example.com/v2/image/manifests/tag -H Accept: application/vnd.docker.distribution.manifest.v2+json 2>&1 | grep Docker-Content-Digest | awk '{print ($3)}')"  

I get the following output, and command2 is never executed:

* Illegal characters found in URL
* Closing connection -1
* curl: (3) Illegal characters found in URL

Update 2

When I use single quotes, I get the following output:

< HTTP/1.1 404 Not Found
< Server: nginx/1.21.3
< Date: Tue, 28 Sep 2021 13:46:50 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 19
< Connection: keep-alive
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
404 page not found
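A sketch of what is likely going wrong and one way around it (an interpretation, not from the question): with -v and 2>&1, every verbose line ends up inside the command substitution, and the header value also carries a trailing CR, which is what triggers "Illegal characters found in URL". Grabbing the header from -D - and stripping the CR keeps only the digest:

digest=$(curl -ksu user:password \
    -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    -o /dev/null -D - https://example.com/v2/image/manifests/tag \
    | awk 'tolower($1) == "docker-content-digest:" { print $2 }' | tr -d '\r')

curl -ksu user:password -X DELETE "https://example.com/v2/image/manifests/$digest"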

How does exec > work in AWS EC2 user data?

Posted: 28 Sep 2021 08:01 AM PDT

I was writing a Terraform module for AWS EC2 that involved executing a bash script in the user data section. While developing, I had an issue in the script I wrote, but neither AWS nor Terraform provided any logs for the errors I got, until I found this line in an AWS support forum:

exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1  

This line successfully writes all the output of the script executed in user_data to /var/log/user-data.log, but I don't understand the whole line. I know exec > redirects all output to a file, which in this case is the process substitution >(..), but I don't understand why it uses tee or why a pipeline is needed there.
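A minimal sketch of the same pattern may make it clearer: exec > redirects the shell's own stdout, the redirection target happens to be a process substitution, and tee is what lets a single stream go to two places (here the log file, and via logger -s also the console):

#!/bin/bash
exec > >(tee /tmp/demo.log) 2>&1    # from here on, stdout and stderr are duplicated into the log
echo "hello"                        # reaches the terminal AND /tmp/demo.log
ls /nonexistent                     # stderr is captured too, because of the trailing 2>&1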

Argument list too long while generating txt file which includes file names in a directory [closed]

Posted: 28 Sep 2021 07:08 AM PDT

I am trying to write all of the file names in a directory into a txt file.

I used the command,

ls /path/of/directory/ > file_name_list.txt  

But it gives error

ls: cannot open directory /path/of/directory/: Argument list too long  

How can I get a txt file containing the file names in the directory?
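The "Argument list too long" error usually means the shell expanded a glob into too many arguments rather than passing the directory itself. A sketch that avoids building any argument list at all (GNU find):

find /path/of/directory/ -maxdepth 1 -printf '%f\n' > file_name_list.txt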

Oracle Linux or Ubuntu Server as a hypervisor?

Posted: 28 Sep 2021 07:01 AM PDT

I own an HP ProLiant DL380 rack server. I would like to use it as a hypervisor for a production environment. I was thinking of installing CentOS with KVM on it, but unfortunately CentOS is no longer a reliable choice for a hypervisor since its EOL was moved up.

I searched the internet for an alternative and found that Oracle has a GNU/Linux distribution that is compatible with RHEL/CentOS and looks rock-solid stable.

On the other hand, I read some comments about how badly Oracle usually treats open source software, which makes me a little uncomfortable.

My last alternative would be to install Ubuntu Server with KVM and start working. Do you think Oracle Linux's performance and support really make it worth using as a hypervisor? Or should I just forget about it and use Ubuntu Server?

Why do threads have their own PID?

Posted: 28 Sep 2021 07:26 AM PDT

I'm using htop and looking at a process (rg) which launched multiple threads to search for text in files, here's the tree view in htop:

PID   Command
1019  |- rg 'search this'
1021     |- rg 'search this'
1022     |- rg 'search this'
1023     |- rg 'search this'

Why am I seeing PIDs for the process' threads? I thought threads didn't have a PID and they just shared their parent's PID.
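For what it's worth, the per-thread numbers htop shows are kernel task IDs (TIDs/LWPs); the classic PID is the thread-group ID that all threads of a process share. A quick way to see both side by side, using the PID from the output above:

ps -Lf -p 1019     # the LWP column shows one task id per thread; the PID column stays 1019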

Ufw allow http traffic out

Posted: 28 Sep 2021 07:09 AM PDT

I have a server machine with Ubuntu 20 on which I've installed ufw; these are my rules:

To                         Action      From
--                         ------      ----
22/tcp                     LIMIT       Anywhere
Nginx Full                 ALLOW       Anywhere
5000                       ALLOW       Anywhere
25                         ALLOW       Anywhere
22                         LIMIT       Anywhere                   # allow SSH connections in
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere                   # allow https traffic update
Apache Full                ALLOW       Anywhere
587                        ALLOW       Anywhere
993                        ALLOW       Anywhere                   # godaddy IMAP
995                        ALLOW       Anywhere                   # godaddy POP3
465                        ALLOW       Anywhere                   # godaddy SMTP
SMTPTLS                    ALLOW       Anywhere
80                         ALLOW       Anywhere
22/tcp (v6)                LIMIT       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)
5000 (v6)                  ALLOW       Anywhere (v6)
25 (v6)                    ALLOW       Anywhere (v6)
22 (v6)                    LIMIT       Anywhere (v6)              # allow SSH connections in
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)              # allow https traffic update
Apache Full (v6)           ALLOW       Anywhere (v6)
587 (v6)                   ALLOW       Anywhere (v6)
993 (v6)                   ALLOW       Anywhere (v6)              # godaddy IMAP
995 (v6)                   ALLOW       Anywhere (v6)              # godaddy POP3
465 (v6)                   ALLOW       Anywhere (v6)              # godaddy SMTP
SMTPTLS (v6)               ALLOW       Anywhere (v6)
80 (v6)                    ALLOW       Anywhere (v6)

53                         ALLOW OUT   Anywhere                   # allow DNS calls out
123                        ALLOW OUT   Anywhere                   # allow NTP out
80/tcp                     ALLOW OUT   Anywhere
443/tcp                    ALLOW OUT   Anywhere                   # allow HTTPS traffic out
43/tcp                     ALLOW OUT   Anywhere                   # allow whois
25                         ALLOW OUT   Anywhere                   # allow MAIL out
SMTPTLS                    ALLOW OUT   Anywhere                   # open TLS port 465 for use with SMPT to send e-mails
21/tcp                     ALLOW OUT   Anywhere                   # allow FTP traffic out
53 (v6)                    ALLOW OUT   Anywhere (v6)              # allow DNS calls out
123 (v6)                   ALLOW OUT   Anywhere (v6)              # allow NTP out
80/tcp (v6)                ALLOW OUT   Anywhere (v6)
443/tcp (v6)               ALLOW OUT   Anywhere (v6)              # allow HTTPS traffic out
43/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow whois
25 (v6)                    ALLOW OUT   Anywhere (v6)              # allow MAIL out
SMTPTLS (v6)               ALLOW OUT   Anywhere (v6)              # open TLS port 465 for use with SMPT to send e-mails
21/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow FTP traffic out

I'm trying to make a curl request to another server

curl http://my.ip:5000  

But this command gives a "Connection timed out" error.

I thought the problem was ufw not allowing HTTP traffic out, so I enabled port 80 to allow traffic in and out, but that didn't work. If I totally disable ufw, the curl command works correctly and returns the response, but I cannot figure out what rule I need to add to make it work with ufw active.
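A sketch of what is probably missing (an assumption based on the rules listed above): with a default-deny outgoing policy, the existing "5000 ALLOW Anywhere" entry only covers incoming connections, so outbound connections to port 5000 need their own rule:

sudo ufw allow out 5000/tcp
sudo ufw status verbose      # check the default outgoing policy and the new OUT rule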

zsh completion: show the most recent files and directories nearest the prompt and suggest them first with the "l" command

Posted: 28 Sep 2021 10:39 AM PDT

On macOS Big Sur 11.3, here is my .zshrc. I would like to get the most recently modified or created files and directories near the prompt (sorted from the most recent to the oldest):

autoload -Uz compinit
compinit

# Colorize completions using default `ls` colors.
zstyle ':completion:*' list-colors "${(s.:.)LS_COLORS}"

bindkey '^[[Z' reverse-menu-complete

# To get new binaries into PATH
zstyle ':completion:*' rehash true

# Disable prompt disappearing on multi-lines
export COMPLETION_WAITING_DOTS="false"

zstyle ':completion:*' file-sort date reverse

The issue is that when I press TAB after "l", which is actually the alias:

alias l='grc -es --colour=auto ls --color -Gh -C -lrt'  

grc is a tool to colorify the files.

Indeed, the most recently modified or created file or directory is not suggested as the first result.

Which option could I add to zsh completion so that, after pressing TAB, the most recently modified or created files and directories come first?

Update

Below is an illustration of my issue:

[screenshot of the completion menu]

The first command applied is "l" which corresponds to the alias:

alias l='grc -es --colour=auto ls --color -Gh -C -lrt'  

The problem is that once I type "l" again and press TAB for (auto-)completion, I want the most recently modified files suggested nearest to the prompt from which I perform the "l" + TAB completion, in both cases, i.e. when the prompt is located above the suggestions and when it is below them.

That is to say, I would like the most recently modified files displayed first (as if I had done an "ls -lrt"), then the next most recent after a second TAB, and so on.

EDIT 1:

The options

autoload -U compinit
compinit
zstyle ':completion:*' file-sort modification reverse

(reverse here to put the newest files at the end of the list, because the upper part might not be visible on the screen.)

But if I do it, I get, for example, the screen below:

[screenshot: the last-modified file appears at the bottom of the list]

This is not what I want: I would like the file Fisher_GCph_WL_XSAF to appear first, nearest the prompt, then Fisher_GCs_WL_TSAF, and so on (that is to say, the reverse of the shown order).

How can I modify this behavior so that, in all situations, the most recent files and directories are near the prompt?

It seems that everything depends on whether the whole list of files and directories fits in the iTerm2 terminal window: if not, the order is not reversed; if yes, it is. I don't know what to do.

EDIT 2: I can confirm what I said above.

With the option zstyle ':completion:*' file-sort date or zstyle ':completion:*' file-sort modification :

  1. If the whole list of files and directories can't be displayed in the terminal window, the most recent files and directories don't appear near the prompt when I do an "l" + TAB (with "l" being the alias defined above in the post).

  2. If the whole list of files and directories can be displayed in the terminal window, the most recent files and directories do appear near the prompt when I do an "l" + TAB.

How can I get the same behavior in both cases, i.e. the most recent files and directories always near the prompt?

EDIT 3: I realize that the display order depends on whether all the listed files fit in the iTerm2 pane where I run the alias 'l'. I recall that alias 'l' is defined by:

alias l='grc -es --colour=auto ls --color -Gh -C -lrt'  

Option 1) If the whole file listing fits, then the most recent files/directories are near the prompt when performing 'l' + TAB.

Option 2) If there is a long list of files/directories, the 'l' + TAB prompt is moved to the bottom, and the oldest entries end up just above the prompt.

It is option 2) that I want to fix, i.e. having the most recent files/directories just above the prompt.

How to fully upgrade Debian from command line (including release_version)?

Posted: 28 Sep 2021 10:10 AM PDT

I want to totally upgrade everything in Debian stable, including the release version, to the newest stable release available:

  • Packages update
  • Packages upgrade
  • D:S minor_version
  • D:S major_version
  • D:S release_version

Each action will be done in order within that single recurring (monthly/yearly) process, and I assume that release_version will surely be the last.

In other words, I'd like to create a "fully rolling release stable Debian".

I only do this while having at least weekly/daily automatic backups of all the data, so if something breaks I can restore a backup.

What would be the command to "brutally" upgrade everything, including doing a release upgrade? I was thinking of:

apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade -y  
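For reference, the usual manual release-upgrade procedure is closer to the following sketch (the codenames are illustrative; check /etc/apt/sources.list and the release notes for the actual ones first):

sed -i 's/buster/bullseye/g' /etc/apt/sources.list     # point sources at the next release
apt-get update
apt-get upgrade --without-new-pkgs -y                  # minimal upgrade recommended by the release notes
apt-get full-upgrade -y                                # the actual release upgrade
apt-get autoremove --purge -y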

Mark RPM as Automatically or Manually Installed

Posted: 28 Sep 2021 08:47 AM PDT

A question exists regarding how to mark an RPM as automatically installed, but that question concerns Fedora.

I am using zypper on OpenSUSE as an end-user and would like to know how to mark a package so that it will (or will not) show in the list of unneeded packages, with zypper packages --unneeded. I am looking for something along the lines of zypper mark autoselected [packageName].

A Novell Bugzilla bug mentions a status of byUser, and possibly autoselected, so it sounds like this information exists somewhere. I would like to know how to modify it.

Unable to export specific gpio pins. How to check what uses GPIO pins and how to access register?

Posted: 28 Sep 2021 10:01 AM PDT

I use an i.MX6 board (Yocto (jethro)) and am configuring a device tree. I changed a .dts file and put the .dtb file in the boot partition. I set GPIO4_IO19 in the .dts file as follows.

&iomuxc {
    pinctrl-names = "default";
    pinctrl-0 = <&pinctrl_hog_1>;
    imx6ul-evk {
        pinctrl_hog_1: hoggrp-1 {
            fsl,pins = <
                ...
                MX6UL_PAD_CSI_VSYNC__GPIO4_IO19 0x0000B0B0
                ...
            >;
        };
        ...

At first MX6UL_PAD_CSI_VSYNC__GPIO4_IO19 was defined in another group (usdhcgrp), but I commented that out.

After booting, I checked whether the GPIO is successfully configured as I set it. The result is:

echo 115 > /sys/class/gpio/export
-sh: echo: write error: Device or resource busy

So I checked this.

cat /sys/kernel/debug/gpio
    GPIOs 0-31, platform/209c000.gpio, 209c000.gpio:
    gpio-10 (phy-reset ) out lo
    GPIOs 32-63, platform/20a0000.gpio, 20a0000.gpio:
    GPIOs 64-95, platform/20a4000.gpio, 20a4000.gpio:
    gpio-68 (ft5x06_irq_gpio ) in hi
    GPIOs 96-127, platform/20a8000.gpio, 20a8000.gpio:
    gpio-109 (? ) out lo
    gpio-115 (cd ) in lo
    gpio-116 (? ) out lo
    gpio-117 (? ) out lo
    gpio-118 (sysfs ) in hi
    GPIOs 128-159, platform/20ac000.gpio, 20ac000.gpio:
    gpio-128 (phy-reset ) out lo

gpio-115 is used by cd; maybe that means card detection. I want to use it via sysfs to read its state. Is there any other way to read it? Furthermore, gpio-10, 68, 109, 116, and 117 are used by other devices. How can I use them via sysfs?

I think what I need to do is check whether the register is set to the correct value or not. If the register value is not the same as I set, I guess another process has set the pin control. However, I do not know how to access a register.

What I know about gpio115 is as follows

mux_reg   address: 0x01DC and the value.
conf_reg  address: 0x0468 and the value.
input_reg address: 0x0000 and the value.

The same as the other gpios.

How can I access 0x01DC and then get the value in linux(yocto)?

Thank you for your cooperation.
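One possibility, as a sketch and an assumption rather than a verified answer: the offsets listed look like offsets into the i.MX6UL IOMUXC block, whose base address is 0x020E0000 in the reference manual, so the mux register would sit at physical address 0x020E01DC. If devmem2 (or BusyBox's devmem applet) is available in the image, it can read such a physical address directly:

devmem2 0x020E01DC              # read the 32-bit value at IOMUXC base + 0x01DC (assumed layout)
# or, with BusyBox:
busybox devmem 0x020E01DC 32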

Set up nftables to only allow connections through a vpn and block all ipv6 traffic

Posted: 28 Sep 2021 08:03 AM PDT

I am trying to set up an nftables firewall on my Arch Linux system that only allows traffic through a VPN (and blocks all IPv6 traffic in order to prevent any IPv6 leaks).

I have been playing around with it for a while now and ended up with a configuration that lets me browse the web, even though, as far as I understand nftables so far, it should not let me do that. The ruleset is pretty short and looks like this:

table inet filter {
    chain input {
            type filter hook input priority 0; policy drop;
            jump base_checks
            ip saddr VPN_IP_ADRESS udp sport openvpn accept
    }

    chain forward {
            type filter hook forward priority 0; policy drop;
    }

    chain output {
            type filter hook output priority 0; policy drop;
            ip daddr VPN_IP_ADRESS udp dport openvpn accept
            oifname "tun0" accept
    }

    chain base_checks {
            ct state { related, established} accept
            ct state invalid drop
    }
}

I tried to find my way through with trial and error and had many other rules in there, but with just this, I am able to connect to the VPN server first and then browse the web. Once I remove the last rule from the output chain, though, it won't let me browse the web anymore.

I am completely new to this and pretty much overall clueless, trying to learn. Unfortunately, the documentation on nftables is not that extensive, so I am kind of stuck at the moment.

From what I understand so far, this setup should allow to make a connection to the vpn but it should not allow any other incoming traffic - yet I can browse the web without problems.

Does anyone know why it works, and how I should proceed with the setup of nftables to get a more complete setup?

How can I install Fiddler ca-certificate on Ubuntu to decrypt HTTPS?

Posted: 28 Sep 2021 11:00 AM PDT

I am trying to get my Ubuntu machine to properly recognize and use the certificate from Fiddler as a trusted source so I can decrypt HTTPS traffic (specifically to google-analytics). I had this working once before, but have since had to reinstall Ubuntu and now have to set up Fiddler again. I can't remember what I did in the first place, and I've spent the better part of today trying to figure it out.

I think I am inching closer to getting this certificate recognized. By that I mean that when I went to Google a few hours ago while using Fiddler, I would see the 'Connection Not Secure' message - which I think means Google is just actively refusing to recognize Fiddler's certificate. Now, I am getting a 'This Site Can't Be Reached' (ERR_SOCKET_NOT_CONNECTED) page.

I have tried a number of different things today to try to get this to work, but this is what I did with my last attempt:

Used THIS SITE as a jumping off point to get Fiddler installed.

  • Installed mono 4.8.0

  • Did not run the '/usr/lib/mono//mozroots --import --sync' command from the Linux setup page, since when I tried I got a message in the terminal saying that mozroots is deprecated and to use client_sync instead. (client_sync seems to just update the mono cert store with whatever CRT file you pass to it.)

  • Installed Fiddler (left it as close to default as I could - using 8888 as the listening port)

  • Ticked the 'Decrypt HTTPS' box in Fiddler

  • Exported the Fiddler certificate to the desktop

  • Converted the CER cert file to PEM format (CRT specifically) with openssl (update-ca-certificates on ubuntu needs a PEM formatted cert file and the CER file Fiddler exports is in a binary format.)

  • Copied the CRT file to /usr/share/ca-certificates/

  • From the terminal ran 'sudo dpkg-reconfigure ca-certificates' (clicked 'Ask' then 'OK'). This reconfigures ca-certificates, runs update-ca-certificates, and updates the mono cert store (by running client_sync from mono and passing it the updated ca-certificates.crt file that this process creates). This places a PEM version of the Fiddler CRT file into /etc/ssl/ca-certificates/ and packages it into the bigger ca-certificates.conf file.

This is pretty much where I am at right now. With Fiddler off I can get to Google just fine; turning it on gives me the page I mentioned at the top of this post. I can see all other HTTP requests as expected.

When I got this to work last time, I was reading a lot of suggestions on the web for how to get a CA certificate installed on Ubuntu and tried to pick that trail up again, but everything I read has since blended together. I do vaguely remember importing the Fiddler cert file into Firefox as a personal certificate, exporting that cert, then importing the exported file back into FF as a trusted CA root, and then deleting the personal cert that I installed in the first place. I think I then used the cert exported from FF to import into the system with 'update-ca-certificates'. I have no idea if this was a critical step or not.

I was also playing around with mitmproxy at the same time which also needed a proxy - again, no idea if that helped the process at all.

I am basically throwing things at a wall right now and seeing what sticks.

Visualizing layout key maps in xkb

Posted: 28 Sep 2021 08:23 AM PDT

I'm using

setxkbmap -query layout us,in -variant ,tam  

to be able to enter tamil characters. I've not used it before, so I can't find the keys on the keyboard very easily. I've used

xkbcomp /usr/share/X11/xkb/geometry/microsoft - | xkbprint -color -o - - | ps2pdf - > out.pdf  

to view a map of the geometry of the keyboard. But I'd like to be able to view the actual unicode symbols on the keys. I see things like <AE00> on the pdf.

Deleted /usr/bin/touch and /bin/touch. Can't seem to install anything now, nor create any files?

Posted: 28 Sep 2021 08:43 AM PDT

After updating GNOME to 3.20.2 I had some problems with my touchpad, and in that confusion I mistakenly deleted /usr/bin/touch; after that I made things even worse and deleted /bin/touch as well.

After this I can't seem to install any programs. Here's the error generated when installing a program:

user1@pqrx:~$ sudo apt-get install gir1.2-gtop-2.0
[sudo] password for user1:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  gir1.2-gtop-2.0
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/54.6 kB of archives.
After this operation, 104 kB of additional disk space will be used.
Selecting previously unselected package gir1.2-gtop-2.0:amd64.
(Reading database ... 351267 files and directories currently installed.)
Preparing to unpack .../gir1.2-gtop-2.0_2.34.0-1_amd64.deb ...
Unpacking gir1.2-gtop-2.0:amd64 (2.34.0-1) ...
Setting up gir1.2-gtop-2.0:amd64 (2.34.0-1) ...
sh: 1: touch: not found
update-kali-menu: error: Can't open /var/lock/kali-menu: No such file or directory
E: Problem executing scripts DPkg::Post-Invoke '[ ! -x /usr/share/kali-menu/update-kali-menu ] || /usr/share/kali-menu/update-kali-menu wait_dpkg'
E: Sub-process returned an error code

Any help will be much appreciated.
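A sketch of one possible recovery path (an assumption; the package layout may differ between releases): extract touch straight from the coreutils package, which sidesteps the dpkg hook that is failing above, then reinstall properly:

cd /tmp
apt-get download coreutils                  # fetch coreutils_*.deb into the current directory
dpkg-deb -x coreutils_*.deb unpacked/       # unpack only; no maintainer scripts are run
sudo cp unpacked/bin/touch /bin/touch       # the path inside the package may be bin/ or usr/bin/
sudo cp unpacked/bin/touch /usr/bin/touch   # restore the second copy that was deleted
sudo apt-get install --reinstall coreutils  # once touch exists again, reinstall cleanly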

Nemo in Linux Mint - Reset all Preferences and "My computer" pane in List View

Posted: 28 Sep 2021 09:03 AM PDT

I am struggling a little bit with this issue I'm having with Nemo in Linux Mint (I just upgraded to 17.3, but this has been happening since 17.1).

In brief, I accidentally removed some shortcuts from the "My Computer" section in the left sidebar of Nemo as a user. I tried to restore them, but apparently I can't add icons to that tab, nor drag them from the "Bookmarks" section to the "My Computer" one. If I start Nemo as root, I can add, remove, and drag and drop icons in that tab, but of course the changes do not apply to the "user" version.

Moreover, I tried to perform a clean install of nemo after purging it, but unsuccessfully.

Can someone explain to me where the configuration files of this sidebar are located and how to delete them, or set their permissions for my user in order to drag and drop correctly into that tab? Alternatively, is there a "real" way to cleanly install Nemo, and only Nemo, without touching the other GTK features?

Thanks a lot for your time, any help is really appreciated!
