Thursday, June 17, 2021

Recent Questions - Unix & Linux Stack Exchange


APACHE ON CENTOS 7 - Error while trying to access web page by using url domain name

Posted: 17 Jun 2021 11:21 AM PDT

I installed the Apache web server (httpd) on a CentOS 7 server, a virtual machine created using Vagrant on an Ubuntu 20.04 desktop. On the CentOS server I configured the relevant files for Apache to work. I tested it by IP in a browser on Ubuntu and it worked like a charm. Once again, in this case the CentOS server is just a virtual machine that I created on the Ubuntu machine, and Ubuntu is the only place where I can test whether Apache works in a browser.

Afterwards I wanted to configure Apache so that it responds to a domain name URL alongside the IP. Here are the steps I followed:

Setting up virtual host:

mkdir -p /var/www/example.com/html
chown -R $USER:$USER /var/www/example.com/html
chmod -R 755 /var/www/example.com/html/index.html
nano /var/www/example.com/html/index.html

In index.html I just wrote some random text to display.

After that, I created folders /etc/httpd/sites-available and /etc/httpd/sites-enabled, and also the config file example.com.conf and filled the config file with this:

<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com
    DocumentRoot /var/www/example.com/html
    ErrorLog /var/www/example.com/log/error.log
    CustomLog /var/www/example.com/log/requests.log combined
</VirtualHost>

In /etc/httpd/conf/httpd.conf I just added IncludeOptional sites-enabled/*.conf at the end of the text to tell Apache to look for virtual hosts in sites-enabled folder.

Then I restarted httpd with sudo systemctl restart httpd.

Also to mention, in this scenario I am using example.com just for demonstration, but I did change that to my own random domain name.

Now comes the big part: on Ubuntu I opened the browser, typed in the domain name that I created, and it output

Hmm. We're having trouble finding that site.

So after a little bit of research, I figured that I might have to define the CentOS server's IP alongside the domain name in /etc/hosts on Ubuntu. And that is what I did. Then I tried the domain name in the browser again, and it output this:

Bad Request

Your browser sent a request that this server could not understand.

Additionally, a 400 Bad Request error was encountered while trying to use an ErrorDocument to handle the request.

So just to clarify: from Ubuntu I can open the CentOS Apache server by using its IP, but not the domain name.
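A quick way to separate name-resolution problems from Apache problems (a sketch; 192.168.33.10 is a hypothetical address, substitute the VM's actual IP) is to send the domain in the Host header while connecting by IP:

curl -v -H "Host: www.example.com" http://192.168.33.10/

If that returns the index.html text, name-based virtual hosting works and the remaining problem is purely DNS/hosts; if it returns the same Bad Request, the virtual host itself is misconfigured.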

Any suggestions would be highly appreciated.

Command history for multiple users from sudo/wheel group

Posted: 17 Jun 2021 11:14 AM PDT

I am working in an environment with multiple users who can invoke sudo with their own passwords.
Most users log in via ssh, call sudo -i and do "work".
The problem comes when I need to identify WHO did something that broke a working service/script.

Is there a way to preserve a separate command history per user? Right now I have to su root and call history, which:

  1. won't show me all the commands that were run, because multiple root sessions overwrite each other's bash history.
  2. doesn't tell me who ran which command as root.

I know that each user could skip sudo -i and run each command as sudo <cmd>, and that would be saved in each user's bash history, but I won't be able to "force" that with words (only with configuration).
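A hedged sketch of one configuration-only direction (my own suggestion, not from the question): since sudo exports SUDO_USER with the invoking user's name, root's ~/.bashrc could keep a per-invoker history file:

# Per-invoking-user root history (sketch; the file path is arbitrary)
if [ -n "$SUDO_USER" ]; then
    export HISTFILE="/root/.bash_history.$SUDO_USER"
    shopt -s histappend    # append on exit instead of overwriting
fi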

Perl replacement with strings containing wildcard causes wrong output

Posted: 17 Jun 2021 10:32 AM PDT

I am trying to perform this regex with perl

inf=$(echo "$info" | perl -p -e 's/(?:\d[\s-.]*){12,19}/**********/g')

I am trying to match any run of 12 to 19 digits, regardless of interspersed whitespace, - or . characters.

When I perform this on $info, and let's say info looks like this

<ns4:SpecialRequestOption>
  <ns4:SpecialRequest>** LoyalMember #1234123412341234 **</ns4:SpecialRequest>
  <ns4:BookingNote/>
</ns4:SpecialRequestOption>

For some odd reason, it turns SpecialRequest into a string containing all of the names of the files in my current directory, like an ls command. Let's say my ls outputs

bthai~->ls
awscli-bundle.zip        script.sh        item.csv        misc.gz

Then the resulting info turns into

<ns4:SpecialRequestOption>
  <ns4:SpecialRequest>* awscli-bundle.zip item.csv LoyalMember misc.gz script.sh #********** **</ns4:SpecialRequest>
  <ns4:BookingNote/>
</ns4:SpecialRequestOption>

Even if I don't use asterisks in the replacement but the letter x instead, somehow the * (the wildcard plus space within the XML value, or in the replacement) causes the file names in the directory to pop into the replaced string...

inf=$(echo "$info" | perl -p -e 's/(?:\d[\s-.]*){12,19}/xxxxxxxxxx/g')

<ns4:SpecialRequestOption>
  <ns4:SpecialRequest>* awscli-bundle.zip item.csv LoyalMember misc.gz script.sh #xxxxxxxxxx **</ns4:SpecialRequest>
  <ns4:BookingNote/>
</ns4:SpecialRequestOption>
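A minimal reproduction of the symptom, assuming the variable is later expanded unquoted (for example echo $inf), which is where the shell glob-expands the asterisks:

inf='** LoyalMember #********** **'
echo $inf      # unquoted: the shell expands '*' against the current directory
echo "$inf"    # quoted: printed literally

If that assumption holds, the Perl substitution itself is fine; quoting the expansion wherever the result is printed should make the file names disappear.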

Strace displaying results in ASCII only at process ending and not runtime

Posted: 17 Jun 2021 10:17 AM PDT

I'm looking for a way to make strace print the content of the write(...) syscalls as ASCII and not useless bytes!


The strace command I use:

sudo strace -e write=1  -e trace=write -s9999 -p 551 2>&1  

The output is made up of 3 different sections:

1) The gibberish

write(4, ",\307\3440#\360\277c\355)\246}\235H\320\301H\356Q[\0370\255T50\361\345VM\203\266\344\3320\352\210A3\262\276\356q< \244\215W\323\278\312\325\235\377\273\235sp\"\317?\264\204D\304D\334\16~\352\245Y\264\336\367\307\\\367W\214\217\30m\336q\367\300\366\257Z\213\vV\333\303$\320\3428\253\303\321\f]/\305\34\17w\233\243\311+\361\7\337\362\362P\30\357\317n\326\245f\25f\253m\352\224\"\330mqy\212\16\221V\20!\4\25\35\336\263`\202\312\200\353Bg\374\244l;\365\341|,M\6\35\2559s\306\315\4\226\247\216a\372\177\376\36l\16\271\vE\330\223\313j\t\317P(\177;\353\fA'\1>Ri\333/\322\306\n\310\0106\n\357\370\335sug\212W\22'\302\244\317\322\271e\356\272\\\212n\204\202\372\376\236\16\270`\254\234]\326ei\2520\27O\217\344\225\376\225\255\"\241\221\27\374f[/\r\325\343\35V\204\377y\240n4Gd\t\257\357\35u\251\23\213g\314>\25\35\276\275\251Fl\21\263\204\2257\211\354\201(\274\237`\373\17\247;\221\373._E\234\337\276\312\300(\374\227\323\323\364\357\203\32\231\265X\2\31\323\2-\334S\252\334\277\243\242'\343\273\231\n\1N\221<O\227\261%\357\366\272\264\273\261\251-a\10A\223Co\25\305\6\324\202\364/\7\360\353\313V~\347\26P\266\223\341!\302\275\24\336\337\216\17\267w\312\35U.\3t\377y\261U\v\362m?.\33\363\212K\25\342|\372\257\236\312Q\10\372\372\\\\\234oc\24P\221\325\321\205\217G\241\300\202\364D\3031d\266}\277k@\243\271\321\34\365Z\344\\\20\16\202\245>\31S\3214\7\366\340$\"\223#\201\261':\226\343\215\375\356\25\316,m\243\335\16\334\36\322x\210\365*\326\306L\225n_glJ\264\0163\237\274\270y&6\314\323V\325\264\206]\312!\240+\27\252\244\25\301\23P\341\35\261\301\363\3\320\2727\341s\333f\272\343\277\374g\207\341\320d(+\357\266N\244\231\vE\17\217\243:\322\217\250I\31\246.-ty\271X\320\4/S\217\364\362W\226_\234\257s\263yl\23\277\361?\210\217\242\274\311\366Q\351<\251\223B\332J\263\201\365\321zuK\217\352\257x@\2\322\211_\2663\21SMl\317\t\251\335r\367 q\236,\203\224\377\4G\24R\2635\1b\302\271\334\350\2333\213\201\300Y\200\31\340\2474\35\322\365_\4`\325b\17\233\24\236\257\265\304\36\263W\344\3+1\265\374\317qZ\2270\205\314\177\2021-  ..., 4096) = 4096  write(4, "a\n\32\"\254\255\254\34\371\320\20s)\341\202\207<\327_\334\333\r\336\273\6\275\322.\343\r\251\313A^V\313\24\255A)\226R\28\217L\267\271\10\352@7\24\342J\32J#\203;\201\376\21\330\241\374y\24\34\343\203\270ZV\255;A\346\375(\314\36\236O\250\326\0\330d\271\34\315\234\311\330\217Q:C\225D\261\376o\244\223j]\2003\204l\36s\263\37\204UQ\301\364aOUc\330b\24k*q\35YhS\377\364\343\207P\217\\\330\314\301U\227\35F\24v,V\222\313}\350\261^\307\\\0054OyN\227\275\260&g\302(\312\24\331zf\n\300\312\\s+ZS\341~\245t\263'\226\254\200+8m\207\345\224E\3478\rR\327t\202\261\206\256\2457\324%u\210c\336\2\373\6\6\10\10w\211\206W\365\355c\313\373\273\334`\375@g\234;\357\227\4\264\240\232\310\270\235t\37\235\177\322*`0\354C\212X\354#\351\307|Y\237~\314\346}FI\321<n\36EH\354\277\235\227\5|\330\353\212\251\301\235\376\360\377\267\257ba\370\267\36W\300\212\237\351\241\206(E\241S\322m\32p\353\336\27\375\325\223\352x\0yR\317\233\\\31\314\217\17i\205\372\373\20\27\247\353\213\265\217,\262\6\26\333\tx\323\250\t\35L\366/\323K\331\332\\\36\211$1\377\300u\377\342d\214\320g\363\352FX\337U\233\341\260X\210\334\217\"\244\204\261@-2111\213Zg\6\273\376\30\7\t\220\272\355\363Jh\330Us\357~R\250L\375\346\374\35\257\355_5\344\327\346L2\313\251}\221\v\257;y^\316b\230r\312\226\361\223|\2248F\341\323\372\311\20\277\3105b\220\251\261p\241\0\340\2302\21\264\361\"\31\4\263\200  ..., 4096) = 4096  ...  ...  

What is all of this? I get thousands of these lines; it feels like far too many.
This is the only output I get during 99.9% of the runtime... How can I translate those bytes into actual ASCII to know what is being written?

2) Just the one log displayed correctly

--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=568, si_uid=0, si_status=0, si_utime=7, si_stime=3} ---
write(1, ACTUAL_LOGS_OF_THE_APP ... , 5754) = 5754

Here I do have ONE line that is an actual log. It only appears at the end of the process.
As this section is separated from the previous ones by the SIGCHLD line, it feels like this write() is NOT a random write made by the app... Why would I get 99.9% strangely encoded data and only one line displayed correctly...?

3) Ends with the hex table

| 00000  5b 49 4c 44 4g 5a 3e 28  3e 3e 3f 3f 3f 52 72 60  [INFO]: some log |
| 00010  71 71 61 6a 61 29 69 76  6a 62 72 62 6b 6b 22 68  I want to see an |
| 00020  6E 6a 6a 63 63 76 2a 22  72 6b 62 6a 6a 79 73 62  d is redacted as |
| 00030  71 71 26 62 6b 60 29 62  6a 6a 72 60 71 73 28 69  posting on so is |
...

I guess this is NOT the exact output of my app, as it doesn't output hex. So it's coming from strace and contains exactly the logs I would like to see!
Though I don't want to see them at process end but at runtime!


I would appreciate some help figuring out what I should do with (1), or maybe how to get (3) at runtime and not only at the end...
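One direction that might help (a sketch, not a confirmed fix): strace's -P option restricts tracing to file descriptors that refer to a given path, so if fd 1 is the app's real log stream, you could check what it points at and trace only that. The fd 4 traffic may simply be encrypted or compressed data, in which case no decoding will make it readable.

sudo ls -l /proc/551/fd/1                                        # see what fd 1 really is
sudo strace -e trace=write -s 9999 -P /path/from/above -p 551    # path is hypothetical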

Thanks!

curl not able to write to /tmp directory owned by user

Posted: 17 Jun 2021 09:26 AM PDT

I tried running the script as instructed in https://docs.docker.com/engine/security/rootless/:

$ curl -fsSL https://get.docker.com/rootless | sh  

But the script crashed in the following line:

curl -L -o docker.tgz "$STATIC_RELEASE_URL"  

With the message:

Warning: Failed to create the file docker.tgz: Permission denied
curl: (23) Failure writing output to destination

I narrowed down the problem to curl trying to write to the tmp folder created by mktemp -d, but I don't understand why it fails.

Some context:

$ whoami
thiago

$ uname -a
Linux thiago-acer 5.8.0-55-generic #62~20.04.1-Ubuntu SMP Wed Jun 2 08:55:04 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

$ mktemp -d
/tmp/tmp.U1nPTN5dlS

$ cd /tmp/tmp.U1nPTN5dlS

$ ls -la
total 8
drwx------  2 thiago thiago 4096 Jun 17 18:20 .
drwxrwxrwt 25 root   root   4096 Jun 17 18:20 ..

After running the commands above, I tried:

# this fails with the same message as above
curl https://download.docker.com/linux/static/stable/x86_64/docker-20.10.7.tgz -O

# this works just fine
curl https://download.docker.com/linux/static/stable/x86_64/docker-20.10.7.tgz -o - > docker-20.10.7.tgz

# this also works
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.7.tgz

The curl -O command also works if I try it on some other folder, like my home folder.
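One thing that may be worth checking (an assumption on my part, not something established in the question): curl installed as a snap is confined and typically cannot write outside the user's home, which would match these symptoms exactly (direct -O into /tmp fails, redirection through the shell works, wget works):

command -v curl               # /snap/bin/curl would indicate the confined snap build
snap list 2>/dev/null | grep curl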

Any help is appreciated.

Converting files in multiple directories from tab separated to comma separated

Posted: 17 Jun 2021 09:49 AM PDT

I am running Mac OS.

I have a directory /Users/sethparker/Documents containing several subdirectories: /Users/sethparker/Documents/dir1, /Users/sethparker/Documents/dir2, /Users/sethparker/Documents/dir3. Each subdirectory is filled with identically named, tab-separated files: file1.txt, file2.txt, file3.txt.

I would like all of the files in all of the subdirectories converted to comma separated, though the extension itself does not matter. My current approach is to run a short script in each subdirectory.

cat tsv_to_csv.sh  
for ifile in {1..3}; do
  sed -i "" 's/\t/,/g' file${ifile}*
done

Is there an efficient way to apply this type of processing to all files in all subdirectories at once?
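A sketch of a single command over all subdirectories (assumes BSD sed on macOS, hence the empty '' after -i; the $'\t' passes a literal tab, since BSD sed does not interpret \t):

find /Users/sethparker/Documents -type f -name 'file*.txt' \
  -exec sed -i '' $'s/\t/,/g' {} +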

Between Debian and Ubuntu 18.04, which is the less bloated to create a server?

Posted: 17 Jun 2021 09:28 AM PDT

I want to create a small server on Azure (1 vCPU and 1 GB of RAM) for testing, and I was wondering which is the less bloated of Debian and Ubuntu.

Windows PC can't connect to shared printer on Ubuntu 16.04.3 with "The Print Processor Does Not Exist"

Posted: 17 Jun 2021 09:10 AM PDT

I have a file and print server using Samba version 4.3.11, running Ubuntu 16.04.3 (yes, I know this version has reached EOL; I plan to update it after I get this working, as I'm trying not to change too many variables at once). It used to be only a file server, but now I'm trying to add printer shares to it. I used this tutorial to share the printer, and it seems to have worked. Then I used this tutorial to set up automatic driver downloads. This is where (I think) I'm having the problem.

When I manually install the printer drivers, the computer connects fine and can see the printer queue (tested on a Windows 10 Pro VM). However, when I don't have the drivers installed, it finds them on the print$ share and seems to install them, but then it can't connect to the printer and shows "Windows cannot connect to the printer. The print processor does not exist."

I also have CUPS installed on the server and am using that as the backend. Not sure if that's relevant. TIA.

Problem with zsh_history file

Posted: 17 Jun 2021 11:10 AM PDT

I am currently trying to clean up my home directory by following this XDG BASE DIRECTORIES website.

To do so, I have stored the following in my ~/.zshenv:

# ---- Default editors ---- #
export EDITOR="nvim"
export VISUAL="nvim"

# ---- XDG BASE DIRECTORY ---- #
# https://wiki.archlinux.org/title/XDG_Base_Directory
export XDG_CONFIG_HOME=${XDG_CONFIG_HOME:="$HOME/.config"}
export XDG_CACHE_HOME=${XDG_CACHE_HOME:="$HOME/.cache"}
export XDG_DATA_HOME=${XDG_DATA_HOME:="$HOME/.local/share"}
export XDG_STATE_HOME=${XDG_STATE_HOME:="$HOME/.local/state"}

# ---- ZSH ---- #
export ZDOTDIR="$XDG_CONFIG_HOME/zsh"
export HISTFILE="$XDG_STATE_HOME/zsh/history"
export SHELL_SESSION_DIR="$XDG_STATE_HOME/zsh/sessions"
export SHELL_SESSION_FILE="$SHELL_SESSION_DIR/$TERM_SESSION_ID"

The problem is that when I echo $HISTFILE, it gives ~/.config/zsh/.zsh_history, which is not what I want and doesn't correspond to $XDG_STATE_HOME/zsh/history.

I tried adding echo $HISTFILE at the end of my ~/.zshenv file, and it gives ~/.local/state/zsh/history, which is exactly what I want. However, if I run echo $HISTFILE in the terminal, it still gives ~/.config/zsh/.zsh_history. So it seems like the $HISTFILE set in ~/.zshenv is not being used; I don't understand why, or how to fix it...

The solution that I feel could work is to change the /etc/zprofile file, but I don't like doing that, because I think that whenever I update my computer's software, all those files get reset (I THINK...). If that is not the case, then I will modify them there to overwrite $HISTFILE.

But if there is a better solution to the problem, please help. (I would also like to know whether the /etc/zprofile, /etc/zshrc, ... files do indeed get reset when updating the computer's software, or not.)

By the way, I am not using oh-my-zsh and zsh --version is 5.8
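For tracking down which startup file overrides HISTFILE, a sketch (with zsh's default PS4, the xtrace output prefixes each line with the file and line number it came from):

zsh -x -i -c exit 2>&1 | grep HISTFILE

Whichever of /etc/zshrc, $ZDOTDIR/.zshrc, etc. assigns HISTFILE last is the one to fix.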

How to resolve IPv4 first on alpine linux?

Posted: 17 Jun 2021 09:52 AM PDT

It seems that when resolving hosts on Alpine, the default behavior is to try IPv6 first and fall back to IPv4, but sometimes it takes a long time to resolve, and on some connections IPv6 is blocked entirely, making it frustrating.

Is there a way to configure the resolver to try IPv4 first?
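For comparison only: on glibc-based systems the usual knob is /etc/gai.conf; I am not certain it applies here, since Alpine's musl libc may ignore this file entirely, in which case this just documents what the equivalent looks like elsewhere:

# /etc/gai.conf (glibc) -- prefer IPv4-mapped addresses over IPv6
precedence ::ffff:0:0/96  100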

In a bash/zsh function how do I do something, then depending on the output, do A or B?

Posted: 17 Jun 2021 10:17 AM PDT

This function loops through each local git repository in the folder ~/src and does git pull on it. More and more often I keep getting the error

Please commit your changes or stash them before you merge.
Aborting

that requires me to run git reset --hard to fix, but I cannot for the life of me figure out how to run that command depending on git pull's output: if I get the above message, run git reset --hard; if not, continue on like normal.

The current, semi-working, function is:

updatesrc() {
  for i in */.git;
    do (
     echo $i;
     cd $i/..;
     git pull;
    );
  done
}
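A sketch of one way to branch on the output (it assumes the quoted error text is stable across git versions):

updatesrc() {
  for i in */.git; do (
    cd "$i/.." || exit
    echo "$PWD"
    out=$(git pull 2>&1)
    printf '%s\n' "$out"
    # if the pull was refused because of local changes, discard them and retry
    if printf '%s' "$out" | grep -q 'commit your changes or stash them'; then
      git reset --hard
      git pull
    fi
  ); done
}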

How to use Record separator of AWK command in linux [closed]

Posted: 17 Jun 2021 09:02 AM PDT

I have a file named marks.txt which has marks of students as shown below

Jones
2143
78
84
77

Gondrol
2321
56
58
45

RinRao
2122
38
37
65

Edwin
2537
78
67
45

Dayan
2415
30
47
20

Here, each student's record is separated by two newlines, and within a record the marks are separated by a single newline. Now I want to print these details in a tabular format using the command below:

$ awk 'BEGIN{RS="\n\n", FS="\n";} {print $1,$2,$3,$4,$5}' marks.txt  

But it is showing a syntax error.

awk: line 1: syntax error at or near ,  

Someone, please help me with this problem.
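For reference, the syntax error comes from the comma between the two assignments; awk wants a semicolon (or a newline) there. With GNU awk a multi-character RS works as intended, and the portable equivalent is paragraph mode (RS=""):

awk 'BEGIN{RS=""; FS="\n"} {print $1,$2,$3,$4,$5}' marks.txt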

Install mdatp 404 not found

Posted: 17 Jun 2021 11:11 AM PDT

I'm following a Microsoft guide for installing mdatp (Microsoft Defender), and I'm ending up with a 404 Not Found when doing the apt install mdatp. I'm on Ubuntu 21.04 and I'm using an insider-fast.list with (note the 20.04 being used):

deb [arch=amd64,armhf,arm64] https://packages.microsoft.com/ubuntu/20.04/prod insiders-fast main  

Doing sudo apt install mdatp I end up with the following result:

...
After this operation, 153 MB of additional disk space will be used.
Err:1 https://packages.microsoft.com/ubuntu/20.04/prod insiders-fast/main amd64 mdatp amd64 101.32.55-insiderfast
  404  Not Found [IP: 104.214.230.139 443]
E: Failed to fetch https://packages.microsoft.com/ubuntu/20.04/prod/pool/main/m/mdatp/mdatp_101.32.55-insiderfast.amd64.deb  404  Not Found [IP: 104.214.230.139 443]

I've successfully done this with Ubuntu 20.04 at an earlier time, but now I'm on 21.04 with the 20.04 list. The 404 looks real, but I'm unsure where the issue lies. Is their software repo out of order?
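A quick way to check whether the package file actually exists on the mirror (the URL is copied from the error message above):

curl -sI https://packages.microsoft.com/ubuntu/20.04/prod/pool/main/m/mdatp/mdatp_101.32.55-insiderfast.amd64.deb | head -n 1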

Calling Linux kernel methods from a kernel module

Posted: 17 Jun 2021 11:29 AM PDT

What is the right way of calling kernel functions in a C file from a kernel module in Linux?

I want to call exit_task_namespaces in linux/nsproxy.c from my first ever kernel module.

I am doing this:

#include <linux/nsproxy.h>
…

static ssize_t device_read(struct file *flip, char *buffer, size_t len, loff_t *offset)
{
    struct task_struct *task = current;
    …
    exit_task_namespaces(task);
    …
}

when I try to make I get the following error:

ERROR: "exit_task_namespaces" [/home/.../lkm_example.ko] undefined!
make[2]: *** [scripts/Makefile.modpost:94: __modpost] Error 1
make[1]: *** [Makefile:1673: modules] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-5.4.0-73-generic'
make: *** [Makefile:3: all] Error 2

I can see that in the file /usr/src/linux-headers-5.4.0-73-generic/include/linux/nsproxy.h the method exists.
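A declaration in the header is not enough on its own: modules can only link against symbols the kernel exports with EXPORT_SYMBOL. A quick check against the build's exported-symbol list (a sketch; the path is taken from the error output above):

grep -w exit_task_namespaces /usr/src/linux-headers-5.4.0-73-generic/Module.symvers

If the symbol is absent there, modpost will report it as undefined regardless of what the header declares.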

This is my Makefile:

obj-m += lkm_example.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

search multiple files for a string

Posted: 17 Jun 2021 10:27 AM PDT

I was trying to search multiple files for a string like this. Using AIX if this matters.

grep -r "gap" /u/user/.History/
/u/user/.History/server/user:
/u/user/.History/server/user:

This mostly worked, except that for files with strange characters it lists the file name but does not show the match. So I added the strings command, as below. None of these methods worked; none of these commands give any output at all. It is acting like there are no matches even though I know there are.

strings /u/user/.History | grep gap
strings /u/user/.History/* | grep gap
cat /u/user/* | strings | grep gap
cat /u/user/.History/* | strings | grep gap
cat /u/user/.History/ | strings | grep gap

When I ran strings with grep on a single file, it gave this output. It just will not work when I do it on multiple files.

>strings /u/user/.History/server/user | grep gap
cd gap #▒#1612466141#▒#
gzip -cd gap2021-01-28.log.gz #▒#1612466384#▒#
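A per-file loop might sidestep whatever the multi-file invocation trips over (a sketch; it prints the file name before each match so the source stays identifiable):

find /u/user/.History -type f | while read -r f; do
    strings "$f" | grep gap | sed "s|^|$f: |"
done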

Script function call: function vs $(function)

Posted: 17 Jun 2021 08:58 AM PDT

Take the following code as reference, for simplicity:

#!/bin/bash

number=7

function doSomething() {
  number=8
}

doSomething
echo "$number"

It prints 8.

But with:

#!/bin/bash

number=7

function doSomething() {
  number=8
}

$(doSomething)
echo "$number"

It prints 7.

I have the following questions:

  • What are the technical names for each one? I mean functioncall and $(functioncall).
  • How does each approach work? It seems the former considers (affects) the variables outside of the function itself, while the latter does not.
  • When is it mandatory to use one approach over the other (mostly a question of performance, of course, if there is any difference)? If there are other reasons, they are welcome.
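A short demonstration of what is going on (a sketch): $(doSomething) is command substitution, which runs the function in a subshell and captures its stdout, so the assignment to number dies with the subshell:

number=7
doSomething() { number=8; echo "inside: $number"; }
out=$(doSomething)   # runs in a subshell; parent's number is untouched
echo "$out"          # inside: 8
echo "$number"       # 7

A plain doSomething call runs in the current shell, which is why the first script prints 8.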

Applying simple string mappings on JSON files

Posted: 17 Jun 2021 09:22 AM PDT

Somehow I think there must be a one-liner to apply a simple mapping on the command line. In this case the keys in the JSON will (as usual) provide context, ensuring that we don't foolishly replace strings that shouldn't be replaced.

Suppose we are given a library catalog in a JSON file using the Dewey Decimal Classification

[
  {
    "Title": "Design Pattern",
    "Call Number": "005.12 DES"
  },
  {
    "Title": "Intro to C++",
    "Call Number": "005.133 C STR"
  }
]

as well as a mapping between Dewey and the Library of Congress call numbers

[
  {
    "Dewey": "005.12 DES",
    "Congress": "QA76.64 .D47 1995X"
  },
  {
    "Dewey": "005.133 C STR",
    "Congress": "QA76.73.C153 S77 2013"
  }
]

and want to produce the output file:

[
  {
    "Title": "Design Pattern",
    "Call Number": "QA76.64 .D47 1995X"
  },
  {
    "Title": "Intro to C++",
    "Call Number": "QA76.73.C153 S77 2013"
  }
]

Does this still fit within the one-line set of transformations that jq will handle?
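This seems to fit; one possible jq approach (a sketch; the file names catalog.json and mapping.json are my own): build a lookup object from the mapping, then rewrite each Call Number through it:

jq --slurpfile map mapping.json '
  (reduce $map[0][] as $m ({}; .[$m.Dewey] = $m.Congress)) as $d
  | map(.["Call Number"] = ($d[.["Call Number"]] // .["Call Number"]))
' catalog.json

The // fallback leaves entries without a mapping unchanged.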

Mapping linux user to windows NFS exported share

Posted: 17 Jun 2021 09:21 AM PDT

I shared a Windows 2012 directory using NFS using this configuration:

[screenshot of the Windows NFS share configuration omitted]

Now, from the Linux side I can mount it using this command:

sudo mount -t nfs 192.168.0.20:/SHARED /mnt/windows_shared  

Then, when I do an ls -lah /mnt/windows_shared I get a Permission denied error, and with sudo ls -lah /mnt/windows_shared I get this:

total 18M
drwx------ 2 4294967294 4294967294 4,0K feb  6  2018 .
drwxr-xr-x 3 root       root       4,0K sep 21 08:29 ..
-rwx------ 1 4294967294 4294967294  862 oct 11  2016 certificate.pem
-rwx------ 1 4294967294 4294967294 2,3M mar  2  2017 report.pdf
-rwx------ 1 4294967294 4294967294  598 ago  9 11:57 initialization.ini
drwx------ 2 4294967294 4294967294   64 feb  6  2018 client
-rwx------ 1 4294967294 4294967294 625K ago  4  2015 report2.pdf
drwx------ 2 4294967294 4294967294   64 sep 18 18:44 photos
drwxrwx--- 2 4294967294 4294967294 128K sep 18 10:48 another_file

I need to be able to mount and access the shared folder as a specific Linux user (without sudo), and the owner and group should be displayed correctly, not as those 4294967294 values. How can I do that?

fio: how to reduce verbosity?

Posted: 17 Jun 2021 10:06 AM PDT

When I run the fio command, I get a huge file full of lines like the following, which fills up the entire space. I am interested only in the final fio output summary. How can I reduce fio's verbosity?

::::
Jobs: 4 (f=4): [W(3),X(1),R(1)][0.1%][r=1244MiB/s,w=2232MiB/s][r=319k,w=17.9k IOPS][eta 03h:04m:45s]
Jobs: 4 (f=4): [W(3),X(1),R(1)][0.1%][r=1243MiB/s,w=2252MiB/s][r=318k,w=18.0k IOPS][eta 03h:02m:18s]
::::

fio is run using the following command.

[root@system user]# fio iops_wipc.fio --eta=always --eta-newline=1 | tee /tmp/iops_wipc_op

[root@system user]# cat iops_wipc.fio
[wipc-iops]
group_reporting
direct=1
ioengine=libaio
allow_mounted_write=1
refill_buffers
scramble_buffers=1
thread=1
#eta-newline=10
bs=128k
numjobs=4
iodepth=32
rw=write
size=768G
[device0]
filename=/dev/nvme2n1
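Those Jobs: lines are the ETA status that --eta=always/--eta-newline explicitly request; turning the ETA display off keeps only the final summary (a sketch of the same invocation):

fio iops_wipc.fio --eta=never | tee /tmp/iops_wipc_op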

Firewall rules based on Domain name instead of IP address

Posted: 17 Jun 2021 11:29 AM PDT

I am running Guacamole remote desktop gateway test setup to manage access to cloud VM instances.

As I got one strange POC request from a client to restrict Guacamole RDG access to one specific domain which doesn't have a static IP, I am out of options. The client might be using services like dynamic DNS so that their domain resolves to whatever dynamic IP they get.

So basically I have to set inbound firewall rules on my Guacamole RDG server based on a domain name instead of an IP address. Apart from basic networking logic, is there any way to achieve this requirement?

I tried the below command to set an iptables rule based on the domain name, but upon execution it actually resolves the domain name and applies the rule to iptables with the resolved IP address.

iptables -A INPUT -p tcp --src domain.com --dport 3128 -j ACCEPT  
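A commonly cited workaround (a sketch, not a full solution; the chain name is hypothetical and domain.com stands in for the real domain) is to re-resolve periodically from cron and rebuild a dedicated chain:

# one-time setup: iptables -N GUAC-DYNDNS && iptables -I INPUT -j GUAC-DYNDNS
ip=$(dig +short domain.com | tail -n 1)
iptables -F GUAC-DYNDNS
[ -n "$ip" ] && iptables -A GUAC-DYNDNS -p tcp -s "$ip" --dport 3128 -j ACCEPT

This still only tracks the IP between runs, so it is an approximation of a true domain-based rule.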

How can I get neovim to load my init.vim file when in sudo mode?

Posted: 17 Jun 2021 09:23 AM PDT

I'm having an issue with neovim on a Raspberry Pi coding with Python.

I have installed it with sudo apt-get install neovim, and it works using just nvim on the command line. For some reason I can create a file with nvim filename.py, but it ends up as a read-only file.

If I run neovim as sudo nvim instead, then I can write to the file, but my init.vim file is not loaded. I have created it here: /home/pi/.config/nvim/init.vim. Does it have to be placed elsewhere, or can I make some sort of link to it?

I have also tried giving filename.py write permission with sudo chmod a+w filename.py, but that just leads me to error code E509 when trying to save with :wq. It will save with :wq!, though.
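One possibility worth trying (a sketch; sudoedit ships as part of sudo): instead of running the whole editor as root, sudoedit copies the file to a temporary location and opens it with your own editor, and therefore your own init.vim:

export SUDO_EDITOR=nvim
sudoedit filename.py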

How to merge json files using jq or any tool?

Posted: 17 Jun 2021 11:19 AM PDT

Modifying the question to be more specific. There will be 2 JSON files; the first will have a few blocks, and the second will have a few blocks plus a few additional redirection rules, as you can see:

"values": [
  "/businessclass/articles/money.page",
  "/businessclass/articles/1.page",
  "/businessclass/articles/2.page"
],

The output should be a merge of the 2 files, including the JSON file 1 changes and the JSON file 2 updated redirect rules.

JSON file 1

[
  {
    "name": "caching",
    "options": {
      "behavior": "MAX_AGE",
      "mustRevalidate": false,
      "ttl": "10m",
      "defaultTtl": "30m"
    }
  },
  {
    "name": "/businessclass/articles/money.page",
    "children": [],
    "behaviors": [
      {
        "name": "redirect",
        "options": {
          "destinationPathOther": "/businessclass/articles/finance-and-operations.page"
        }
      }
    ],
    "criteria": [
      {
        "name": "path",
        "options": {
          "matchOperator": "MATCHES_ONE_OF",
          "values": [
            "/businessclass/articles/money.page"
          ],
          "matchCaseSensitive": false
        }
      }
    ],
    "criteriaMustSatisfy": "all",
    "comments": ""
  }
]

JSON file 2

{
  "name": "/businessclass/articles/money.page",
  "children": [],
  "behaviors": [
    {
      "name": "redirect",
      "options": {
        "destinationPathOther": "/businessclass/articles/finance-and-operations.page"
      }
    }
  ],
  "criteria": [
    {
      "name": "path",
      "options": {
        "matchOperator": "MATCHES_ONE_OF",
        "values": [
          "/businessclass/articles/money.page",
          "/businessclass/articles/1.page",
          "/businessclass/articles/2.page"
        ],
        "matchCaseSensitive": false
      }
    }
  ],
  "criteriaMustSatisfy": "all",
  "comments": ""
}

Expected output

[
  {
    "name": "caching",
    "options": {
      "behavior": "MAX_AGE",
      "mustRevalidate": false,
      "ttl": "10m",
      "defaultTtl": "30m"
    }
  },
  {
    "name": "/businessclass/articles/money.page",
    "children": [],
    "behaviors": [
      {
        "name": "redirect",
        "options": {
          "destinationPathOther": "/businessclass/articles/finance-and-operations.page"
        }
      }
    ],
    "criteria": [
      {
        "name": "path",
        "options": {
          "matchOperator": "MATCHES_ONE_OF",
          "values": [
            "/businessclass/articles/money.page",
            "/businessclass/articles/1.page",
            "/businessclass/articles/2.page"
          ],
          "matchCaseSensitive": false
        }
      }
    ],
    "criteriaMustSatisfy": "all",
    "comments": ""
  }
]

Well, the order of rules can change, and sometimes new rules with new names can be added to the JSON file, like "name": "/businessclass/articles/money.page". So whatever the changes are, I have to find the delta and merge the files accordingly, or create a new JSON file which contains the merged changes.
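One possible jq approach (a sketch; file1.json and file2.json are my own names for the two inputs): replace any rule in file 1 whose "name" matches the updated rule, keeping everything else unchanged:

jq --slurpfile upd file2.json '
  map(if .name == $upd[0].name then $upd[0] else . end)
' file1.json

Rules present only in file 2 would still need to be appended separately; this covers only the replace-by-name case shown above.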

Pulseaudio no LFE (subwoofer output) in 2.1 or 4.1 configuration

Posted: 17 Jun 2021 09:01 AM PDT

I'm using pulseaudio with a 6-channel USB sound device (which reports itself as a "CM106 Like Sound Device"). I added the following to ~/.config/pulse/daemon.conf:

enable-lfe-remixing = yes
lfe-crossover-freq = 50

If I open pavucontrol and, under the configuration tab, select "Analog Surround 5.1 Output," everything works fine as best I can tell, except for the fact that I don't have 5 speakers. (I can test by moving the speakers around to different output ports and playing surround sound test files.)

Unfortunately, if I select either "Analog Surround 4.1 Output" or "Analog Surround 2.1 Output", I get no sound out of my subwoofer. Is there any way to make my subwoofer work with fewer than 6 channels?
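One diagnostic that might help (a sketch): check which channel map the sink actually received for the 2.1/4.1 profiles, since a missing lfe channel there would explain the silence:

pactl list sinks | grep -E 'Name:|Channel Map:'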

ssh unable to negotiate - no matching key exchange method found

Posted: 17 Jun 2021 11:30 AM PDT

I am trying to log in to my DSL router, because I'm having trouble with command-line mail. I'm hoping to be able to reconfigure the router.

When I give the ssh command, this is what happens:

$ ssh enduser@10.255.252.1
Unable to negotiate with 10.255.252.1 port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

so then I looked at this Stack Exchange post and modified my command to this, but I get a different problem, this time with the ciphers:

$ ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 enduser@10.255.252.1
Unable to negotiate with 10.255.252.1 port 22: no matching cipher found. Their offer: 3des-cbc

so is there a command to offer 3des-cbc encryption? I'm not sure about 3des, i.e. whether I want to add it permanently to my system.

Is there a command to allow the 3des-cbc cipher?

What is the problem here? It's not asking for password.
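For a single connection, both legacy algorithms can be offered together (these are standard OpenSSH options, though enabling 3des-cbc is a deliberate security trade-off):

ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -c 3des-cbc enduser@10.255.252.1

To avoid typing that every time without weakening connections to other hosts, the same two settings can live in a per-host block in ~/.ssh/config:

Host 10.255.252.1
    KexAlgorithms +diffie-hellman-group1-sha1
    Ciphers +3des-cbc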

Namservers reverted to normal shortly after connecting VPN using Openconnect

Posted: 17 Jun 2021 11:18 AM PDT

I've been working at my company for over a year, and have never had this particular issue with my VPN. Unfortunately, I don't know much about networking so I'm a little confused at what's happening. Here's the behavior on a Fedora 25 workstation totally fresh install.

run sudo openconnect --juniper somevpn.com
cat /etc/resolv.conf immediately after the connection is made shows all the various nameservers I can connect to at work. Trying to actually navigate to any of the sites on the local network fails, and regardless of that, if I check resolv.conf again just a few seconds after the connection is made, I see that I'm back on my local network's nameservers, although the VPN process is still running.

So is there some blacklist that I'm not aware of? What's going in and rewriting my resolv.conf? I've got the VPN connected on other devices, so I know my credentials are fine, and I'm positive I'm below the maximum number of allowed connections.
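A quick check of who owns the file (a sketch): if /etc/resolv.conf is a symlink, a manager such as systemd-resolved or NetworkManager is almost certainly the thing rewriting it after openconnect sets it:

ls -l /etc/resolv.conf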

Suppress systemd: "Directory to mount over is not empty, mounting anyway."

Posted: 17 Jun 2021 10:27 AM PDT

I am purposefully mounting a read-write copy of a directory on top of the read-only version of itself.

This generates a log message such as:

Aug 27 14:31:02 svelte systemd[1]: mnt-btrfs\x2dvol-rootfs.mount: Directory /mnt/btrfs-vol/rootfs to mount over is not empty, mounting anyway.  

Is there any way to suppress this message to reduce log noise?

How to assign e1000e driver to Ethernet adapter

Posted: 17 Jun 2021 10:01 AM PDT

Is there a way to instruct an Ethernet adapter to use a certain driver? Or perhaps the way it works is to have a way to instruct a driver to support a specific adapter?

I have a system running a recently installed RHEL Server 7.3 OS (kernel 3.10.0-514.el7.x86_64), where the e1000e driver is not linked to an on-board I219-LM Ethernet adapter. This condition was found while investigating why the adapter is not working properly. The other Ethernet adapter, which works fine, is a PCI card attached to the MB.

A simple lspci says:

# lspci | grep net
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
06:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) (rev 06)

Verbose lspci for the I219-LM device does not report a driver in use:

# lspci -v -s 00:1f.6
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
    Subsystem: Intel Corporation Device 0000
    Flags: fast devsel, IRQ 16
    Memory at a1700000 (32-bit, non-prefetchable) [size=128K]
    Capabilities: [c8] Power Management version 3
    Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [e0] PCI Advanced Features
    Kernel modules: e1000e

Conversely, the same command for the other adapter states that e1000e is being used by the device:

# lspci -v -s 06:00.0
06:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) (rev 06)
    Subsystem: Intel Corporation PRO/1000 PT Server Adapter
    Flags: bus master, fast devsel, latency 0, IRQ 130
    Memory at a1320000 (32-bit, non-prefetchable) [size=128K]
    Memory at a1300000 (32-bit, non-prefetchable) [size=128K]
    I/O ports at 4000 [disabled] [size=32]
    Expansion ROM at a1340000 [disabled] [size=128K]
    Capabilities: [c8] Power Management version 2
    Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Capabilities: [e0] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Device Serial Number [edited]
    Kernel driver in use: e1000e
    Kernel modules: e1000e

I have another system available, using the same OS and type of on-board (and properly functioning) I219-LM adapter, where I verified that, indeed, the driver should be linked to the device.

Browsing the /sys/bus/pci/drivers/e1000e and /sys/devices/pci0000:00/0000:00:1f.6 areas has shown a couple of missing things:

  1. In the .../drivers/e1000e folder, there is a soft-link using the PCI address of the 82572EI adapter that points to the /sys/devices/ area, but none for the I219-LM adapter. In comparison, on the mentioned "control" system, there are links for all the adapters it has.
  2. In the /sys/devices/pci0000:00/0000:00:1f.6 area, there is no driver soft-link. However, that soft-link is present in the corresponding folder for the other adapter (../pci0000:00/0000:06:00.0), pointing to the /sys/bus/pci/drivers/e1000e path as it should.
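Given the missing driver link, one hedged way to try forcing the association through sysfs (the bind interface is standard; the 8086:xxxx device ID must be read from lspci -nn first, and the value below is only a guess):

lspci -nn -s 00:1f.6                                    # note the [8086:xxxx] device ID
echo "0000:00:1f.6" > /sys/bus/pci/drivers/e1000e/bind
# only if the ID is missing from the driver's table (unlikely here):
# echo "8086 156f" > /sys/bus/pci/drivers/e1000e/new_id

If the bind fails, dmesg output from the attempted probe may say why the driver rejected the device.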

Let me know if more info is needed to help me on this.

Thank you.

Extundelete can't restore the file

Posted: 17 Jun 2021 11:08 AM PDT

I'm trying to restore 2 important tar.gz files. I know their directory, but extundelete is not restoring them, although it gives me the inode number.

Loading filesystem metadata ... 2127 groups loaded.
Loading journal descriptors ... 26473 descriptors loaded.
Unable to restore inode 3538958 (file.tar.gz): No data found.
Unable to restore file file.tar.gz
extundelete: Operation not permitted when trying to examine filesystem
extundelete: Operation not permitted when trying to examine filesystem

And

Loading filesystem metadata ... 2127 groups loaded.
Loading journal descriptors ... 26473 descriptors loaded.
Unable to restore inode 3538958 (file.tar.gz): No data found.
Unable to restore file file2.tar.gz
extundelete: Operation not permitted when trying to examine filesystem
extundelete: Operation not permitted when trying to examine filesystem

Is there a way to repair the inode or get the files back?

Do you advise using other recovery software for CentOS 6 64-bit?

How to show zero before decimal point in bc?

Posted: 17 Jun 2021 09:00 AM PDT

echo "scale=3;1/8" | bc  

shows .125 on the screen. How can I show 0.125 when the result is less than one?
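One way (a sketch): let printf reformat bc's result, since printf prints a leading zero:

printf '%.3f\n' "$(echo 'scale=3; 1/8' | bc)"    # prints 0.125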

Use config file for my shell script

Posted: 17 Jun 2021 09:58 AM PDT

I need to create a config file for my own script:
Here is an example:

script:

#!/bin/bash
source /home/myuser/test/config
echo "Name=$nam" >&2
echo "Surname=$sur" >&2

Content of /home/myuser/test/config:

nam="Mark"
sur="Brown"

That works!

My question: is this the correct way to do it, or are there other ways?
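Sourcing works, with the caveat that it executes whatever code the config file contains. A stricter sketch (assuming simple KEY="value" lines only) reads and assigns just the expected keys:

while IFS='=' read -r key val; do
    val=${val%\"}; val=${val#\"}            # strip surrounding quotes
    case $key in
        nam|sur) printf -v "$key" '%s' "$val" ;;
    esac
done < /home/myuser/test/config
echo "Name=$nam"    # Name=Mark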
