Friday, April 23, 2021

Recent Questions - Unix & Linux Stack Exchange

Npm won't install SOCKET.io due to Node version?

Posted: 23 Apr 2021 11:05 AM PDT

I need to install socket.io for my Node program. Whenever I try to install socket.io by running the following command-line instruction, it says the Node version needs to be greater than or equal to 10, along with a bunch of other errors, but my Node version is 11.

sudo npm install -g socket.io

However, I did notice something strange. Aren't nodejs and node pretty much the same thing? I supposedly deleted or uninstalled the first installation of Node so that I could upgrade to the latest and greatest Node using Node Version Manager (NVM). I did so by running nvm install v11, and it installed successfully. After that, I tried to install socket.io, thinking the current version of Node should allow the installation, but no: it keeps telling me the same thing, that the Node version has to be greater than or equal to 10. I then tried to verify Node's version: nodejs -v printed the old version number, while node -v printed version 11. I am a bit confused. The installation is still looking at the old version of Node.
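As a rough diagnostic (a sketch, assuming a setup where nodejs comes from the distro package manager and node from nvm), it can help to check which binaries the shell and npm actually resolve:

which -a node nodejs npm   # every matching binary, in PATH lookup order
node -v && nodejs -v       # compare the two versions side by side
npm config get prefix      # where this npm installs global packages
nvm use 11                 # make the nvm-managed node the active one in this shell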

Have a look at the image below.


Bash: Iterating over multiple arrays changes array values

Posted: 23 Apr 2021 10:52 AM PDT

If I iterate over multiple arrays in bash and print their values, the values of the arrays change. Why?

#!/bin/bash

a=("02" "20")
b=("02" "20")
n=("02" "20")
p=("02" "20")

for p in ${p[@]}
do
    for b in ${b[@]}
    do
        for a in ${a[@]}
        do
            for n in ${n[@]}
            do
                echo $b-$a-$n-$p
            done;
        done;
    done;
done
echo "${a[*]}"

The command echo "${a[*]}" yields 20 20 but should be 02 20
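The likely culprit is that each loop reuses an array's own name as its loop variable, and assigning a scalar to an array name in bash overwrites element 0. A minimal sketch of the same loops with distinct loop-variable names, which leaves the arrays untouched:

#!/bin/bash
# Same data, but loop variables are named differently from the arrays,
# so the arrays themselves are never assigned to inside the loops.
a=("02" "20"); b=("02" "20"); n=("02" "20"); p=("02" "20")

for pv in "${p[@]}"; do
  for bv in "${b[@]}"; do
    for av in "${a[@]}"; do
      for nv in "${n[@]}"; do
        echo "$bv-$av-$nv-$pv"
      done
    done
  done
done
echo "${a[*]}"   # still prints: 02 20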

Thanks!

3D renderings have triangular cuts in several applications

Posted: 23 Apr 2021 10:43 AM PDT

For a few days, I've noticed that in at least 2 applications (Cura and Onshape running in Brave Browser), 3D objects are rendered with triangular holes in their surface that have no business being there (see image below).

I'm running Ubuntu 20.10 (KDE Plasma).

Graphics card info (lspci -v | egrep -i 'vga|3d|2d'):

00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09) (prog-if 00 [VGA controller])
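A quick way to see which OpenGL driver and Mesa version are actually doing the rendering (a sketch, assuming the mesa-utils package is installed):

glxinfo -B | grep -iE 'renderer|version'   # reports the GL renderer string and Mesa version in use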

Does anyone have an idea why I have these triangular holes?

Rendering of an STL file in Cura

Jq Invalid numeric literal at EOF

Posted: 23 Apr 2021 10:41 AM PDT

I'm trying to add a record for RAID info to a JSON file:

jq '.raid.c0.e252.s0  +={"device": "/c0/e252/s0"}' file.json  

But I got 2 errors:

jq: error: Invalid numeric literal at EOF at line 1, column 5 (while parsing '.e252') at <top-level>, line 1:
.raid.c0.e252.s0  +={"device": "/c0/e252/s0"}
jq: error: syntax error, unexpected LITERAL, expecting $end (Unix shell quoting issues?) at <top-level>, line 1:
.raid.c0.e252.s0  +={"device": "/c0/e252/s0"}
jq: 2 compile errors

After some tests, I understand that the problem is the field name. Apparently e<number> is not accepted. In fact, using:

jq '.raid.c0.p252.s0  +={"device": "/c0/e252/s0"}' file.json  

or

jq '.raid.c0.eid252.s0  +={"device": "/c0/e252/s0"}' file.json  

in both cases I get the expected result:

{
  "raid": {
    "c0": {
      "eid252": {
        "s0": {
          "device": "/c0/e252/s0"
        }
      }
    }
  }
}

It's clearly not a big problem, since I can use any other field name, but starting from the device name /c0/e252/s0 it would be simpler to query .c0.e252.s0.
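One workaround that keeps the original key name is to quote the troublesome key instead of renaming it. A sketch (I believe jq 1.6 accepts both forms, but treat this as something to verify rather than a confirmed fix):

jq '.raid.c0["e252"].s0 += {"device": "/c0/e252/s0"}' file.json
# or, quoting every segment:
jq '.["raid"]["c0"]["e252"]["s0"] += {"device": "/c0/e252/s0"}' file.json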

The jq version is 1.6, and I would like to keep the version present in the official repo.

Does anyone know a way to fix this?

Thank you

Why does my env command open the wrong executable? [closed]

Posted: 23 Apr 2021 10:17 AM PDT

When I run env in the terminal, it opens up Node.js for some reason.

Welcome to Node.js v14.16.1.
Type ".help" for more information.

I'm new to Linux and I was trying to install Node.js using pacman.

My PATH looks like this:

/home/czar/.cargo/bin:/home/czar/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin:/home/czar/bin  

I'm also not sure what other relevant information one might need to resolve this issue, but just let me know, and I can provide it.
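It may help to check what the shell actually runs for env (a sketch using standard diagnostic commands; nothing here assumes anything about your setup beyond the PATH shown above):

type -a env        # shows aliases, shell functions and every matching file, in lookup order
which -a env       # every executable named env, in PATH order
command env | head # bypasses aliases/functions and runs the first env binary found in PATH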

How to use rsync correctly to compare file hashes of two directories incl. subdirectories and interpret its output?

Posted: 23 Apr 2021 09:57 AM PDT

To check the equality of a local and a remote directory I want to compare the hashes of all the files, so I ran:

$ rsync -avcn /home/me-here/thisss/ me@other.server:/home/me-there/thisss/ > diffs.txt  

The diffs.txt then contains the following:

sending incremental file list
./
MAIN/
MAIN/1563410001/
MAIN/1563410002/
MAIN/1563410003/
MAIN/1563410016/
MAIN/1563410017/
MAIN/1563410018/
MAIN/9846540055/
MAIN/9846540059/
MAIN/9846540073/
MAIN/9846540090/
MAIN/7654330036/
MAIN/7654330038/
MAIN/7654330070/
thisss/
thisss/xyz/
thisss/Data/

sent 76,267 bytes  received 5,695 bytes  64.21 bytes/sec
total size is 14,706,824,172  speedup is 179,434.67 (DRY RUN)

Actually I think this looks good, therefore just two quick questions:

  1. May I assume from this output that all hashes are equal and nothing is corrupt?
  2. What would this output look like if there actually were some differences?
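If you want an explicit per-file hash comparison rather than relying on rsync's summary, here is a sketch (assuming sha256sum is available on both machines and the paths are the ones from the rsync command above):

cd /home/me-here/thisss && find . -type f -exec sha256sum {} + | sort -k2 > /tmp/local.sums
ssh me@other.server 'cd /home/me-there/thisss && find . -type f -exec sha256sum {} + | sort -k2' > /tmp/remote.sums
diff /tmp/local.sums /tmp/remote.sums   # no output means every file hash matches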

How to fix read-only permission for secondary hard disk drives

Posted: 23 Apr 2021 09:30 AM PDT

I have two storage drives:

  1. 128GB SSD [installed Linux Mint]
  2. 1 Terabyte HDD [contains a few NTFS partitions; the C: partition contains the Windows OS]

I disabled fast boot in Windows a few months ago, and since then I have not been using or booting into the Windows OS.

Today when I booted into Linux Mint, everything was OK except that when I tried to create or delete any file or folder on those drives, it showed an error: Error While Creating directory Untitled Folder.

Some commands I used to try to fix it, which failed, are:

shahalom@tutpub:~$ sudo chmod 777 /media/shahalom/Movies/
[sudo] password for shahalom:
chmod: changing permissions of '/media/shahalom/Movies/': Read-only file system
shahalom@tutpub:~$ sudo mount -o remount,rw '/media/shahalom/Movies'
shahalom@tutpub:~$ sudo mount -o remount,rw '/media/shahalom/Works'
shahalom@tutpub:~$ sudo mount -o remount,rw '/media/shahalom/Others'
shahalom@tutpub:~$ sudo ntfsfix /media/shahalom/Movies
Mounting volume... Error opening '/media/shahalom/Movies': Is a directory
FAILED
Attempting to correct errors... Error opening '/media/shahalom/Movies': Is a directory
FAILED
Failed to startup volume: Is a directory
Error opening '/media/shahalom/Movies': Is a directory
Volume is corrupt. You should run chkdsk.
shahalom@tutpub:~$ sudo umount /media/shahalom/Movies
[sudo] password for shahalom:
shahalom@tutpub:~$ sudo mkdir /media/shahalom/MoviesCollection
shahalom@tutpub:~$ sudo mount -o rw,uid=1000,gid=1000,user,exec,umask=003,blksize=4096 /media/shahalom/Movies /media/shahalom/MoviesCollection
mount: /media/shahalom/MoviesCollection: special device /media/shahalom/Movies does not exist.
shahalom@tutpub:~$ ls -ld /media/shahalom/Movies
ls: cannot access '/media/shahalom/Movies': No such file or directory
shahalom@tutpub:~$ ls -ld /media/shahalom/Works
drwxrwxrwx 1 shahalom shahalom 4096 Apr 23 17:09 /media/shahalom/Works
shahalom@tutpub:~$ ls -ld /media/shahalom/Movies
drwxrwxrwx 1 shahalom shahalom 4096 Apr 23 17:09 /media/shahalom/Movies
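For what it's worth, ntfsfix expects the block device rather than the mount point, which is why the "Is a directory" failures appear above. A sketch of that approach (the device name /dev/sdb1 is only a placeholder — check lsblk first):

lsblk -f                      # find which partition is the Movies NTFS volume
sudo umount /media/shahalom/Movies
sudo ntfsfix /dev/sdb1        # replace with the actual partition
sudo mount /dev/sdb1 /media/shahalom/Movies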


Using virt-install within a larger script

Posted: 23 Apr 2021 10:41 AM PDT

I'm trying to use virt-install within a larger script and running into a problem. Once a new guest is installed via kickstart, I need to run some commands on the new VM. Right now, I'm simply adding those commands within a larger BASH script after the virt-install command.

The problem is that, no matter whether I specify 'shutdown' or 'halt' or another option in the kickstart file, it doesn't disconnect from the VM console. I have to manually intervene to disconnect the console and have it return to the script running on the host, which is undesirable. I've tried adding the --noautoconsole flag to keep the console from attaching but the problem there is that the script then continues on the host before the installation is complete.

Is there a way to run the virt-install command so that it detaches from the console at the end and returns to the host machine's console? If not, is there a way to use the --noautoconsole flag such that my script doesn't continue until the guest is fully installed?
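One possible approach (a sketch, not tested against this setup) is to let virt-install return immediately with --noautoconsole and then block in the script until libvirt reports the guest as powered off, which is what a kickstart "shutdown" should produce; virt-install's --wait option may also achieve the same thing:

virt-install --name "$VM" --noautoconsole ...   # your existing options
# poll libvirt until the installer VM powers itself off
while [ "$(virsh domstate "$VM")" != "shut off" ]; do
    sleep 10
done
virsh start "$VM"   # boot the installed guest and continue with post-install commands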

Thank you!

Why does output from ls -A get treated as error output?

Posted: 23 Apr 2021 09:32 AM PDT

I have scripts I run regularly using cron. I would like to be notified by email when these scripts fail, not every time they run and produce any output at all.

As such, I am using the script Cronic to run my jobs inside cron, which should mean only error output gets sent, and not just any output.

However, in one script I have a command like this:

if [ -z "$(ls -A ${gitea_backup_dir})" ]; then
  echo "${gitea_backup_dir} is empty"
else
  mv ${gitea_backup_dir}/* ${gitea_prev_backup_dir}/
fi

The ls -A ${gitea_backup_dir} is intended to test whether a directory is empty. My problem is that this command seems to result in output which is recognized as error output by Cronic. Cronic defines an error as any non-trace error output or a non-zero result code. For example:

Cronic detected failure or error output for the command:
/usr/local/sbin/run_backup

RESULT CODE: 0

ERROR OUTPUT:
appdata_ocgcv9nemegb
files_external
flow.log
flow.log.1
__groupfolders
.htaccess
index.html
nextcloudadmin
nextcloud-db.bak
nextcloud.log
nextcloud.log.1
.ocdata
rdiff-backup-data
Test_User
updater.log
updater-ocgcv9nemegb ]
custom
gitea-db.sql
log ]

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   365    0     0  100   365      0    302  0:00:01  0:00:01 --:--:--   303
100   365    0     0  100   365      0    165  0:00:02  0:00:02 --:--:--   165
100   365    0     0  100   365      0    113  0:00:03  0:00:03 --:--:--   113
100   365    0     0  100   365      0     86  0:00:04  0:00:04 --:--:--    86
100   365    0     0  100   365      0     70  0:00:05  0:00:05 --:--:--    70
100   365    0     0  100   365      0     58  0:00:06  0:00:06 --:--:--     0
100   365    0     0  100   365      0     50  0:00:07  0:00:07 --:--:--     0
100   365    0     0  100   365      0     44  0:00:08  0:00:08 --:--:--     0
100   365    0     0  100   365      0     39  0:00:09  0:00:09 --:--:--     0
100   365    0     0  100   365      0     37  0:00:09  0:00:09 --:--:--     0
100 10.4M    0 10.4M  100   365  1016k     34  0:00:10  0:00:10 --:--:-- 2493k
100 11.6M    0 11.6M  100   365  1128k     34  0:00:10  0:00:10 --:--:-- 3547k

STANDARD OUTPUT:
Maintenance mode enabled
Deleting increment at time:
<snip>

So why does the command ls -A ${gitea_backup_dir} produce error output in this case, and how can I prevent this? An alternative robust method to test whether a directory is empty would be acceptable, but I would also like an explanation of why the command seems to produce error output.
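As for a more robust emptiness test, one common alternative avoids parsing ls output entirely; a sketch using find (the variable names are the ones from the script above):

if [ -z "$(find "${gitea_backup_dir}" -mindepth 1 -maxdepth 1 -print -quit)" ]; then
  echo "${gitea_backup_dir} is empty"
else
  mv "${gitea_backup_dir}"/* "${gitea_prev_backup_dir}/"
fi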

Using rsync (or another tool) to correct file timestamps between two systems

Posted: 23 Apr 2021 08:15 AM PDT

I have two systems (A, B) which have the same file structure. Files are identical but in one of the places (B), the file timestamps are incorrect. The problem is that most of the timestamps in B are set to newer date than A. Is there a way to correct this with rsync (or some other tool)?

The problem is that if I do rsync -azEX --delete A B, no modification is made on the B side, since the dates show that the files on B are newer (and transferring the whole file structure is something I want to avoid).
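If the file contents really are identical, rsync can usually repair just the timestamps without retransferring data. A sketch (worth testing with --dry-run first, since --size-only trusts that same-sized files are the same; A/ and B/ stand for the same source and destination as in the command above):

rsync -rt --size-only --itemize-changes --dry-run A/ B/
# entries listed as ".f..t......" would only have their mtime updated;
# drop --dry-run once the list looks right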

UFW - Allow SSH in custom port

Posted: 23 Apr 2021 08:45 AM PDT


I'm new to ufw; I followed this link and entered the exact command as instructed:

ufw allow ssh 2222  

Since my SSH port is 2222, I want to add that rule, but I keep getting:

ERROR: Need 'to' or 'from' clause

Any hints for me? I don't want to get locked out.
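For reference, ufw takes the port (and optional protocol) as part of a single argument rather than as a separate word after "ssh". A sketch of the usual forms (apply and verify before closing the session, to avoid locking yourself out):

sudo ufw allow 2222/tcp                       # simple form: open TCP port 2222
sudo ufw allow proto tcp to any port 2222     # long form with explicit to/proto clauses
sudo ufw status numbered                      # verify the rule before relying on it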

How to make sure if I rescued all the data from a corrupted ext4 partition?

Posted: 23 Apr 2021 08:10 AM PDT

Here's the history log of the partition:

In a recent post about this topic, the conclusion was that it was a problem with the SATA controller.

The partition has a bad superblock and lots of inode errors. I made several backups of the faulty partition: first of the whole disk, then of the whole partition, and many image files of the partitions.

On the whole-drive clone the ext4 filesystem is recognized (it isn't recognized on the faulty drive), but all the other errors persist.

The image files, on the other hand, are found to be totally clean, without errors, and mount smoothly as loop devices. This is how I recovered around 300 GB of files, which I think is everything that was on that drive, but I'm not completely sure.

By the way, answering yes to all the inode errors detected by fsck results in half of the files being wiped out; I know this because I tested it on a cloned disk.

  1. Can you explain all this magic?
  2. How can I make sure that I effectively rescued all the data?
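For question 2, one way to gain confidence is to compare the trees file by file; a sketch, assuming the clean image still mounts read-only as a loop device (partition.img, /mnt/img and /rescued are placeholders for your image, mount point and rescued copy):

sudo mount -o ro,loop partition.img /mnt/img
diff -rq /mnt/img /rescued             # lists any file missing or differing between the two
find /mnt/img -type f | wc -l          # compare raw file counts as a sanity check
find /rescued -type f | wc -l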

IdeaPad 5 15are05 touchpad not working after fix in Pop!_OS

Posted: 23 Apr 2021 09:03 AM PDT

I have a new Ideapad 5 15 with Pop!_OS 20.04 installed.
After the installation, my touchpad wasn't working but I was able to fix that issue using the solution of @theunreal89 given here https://askubuntu.com/questions/1248176/ideapad-5-15are05-elan-touchpad-not-working-on-20-04-nor-on-18-04
Now, after some updates yesterday, it stopped working again.
I tried doing the same thing again, running

sudo systemctl daemon-reload
sudo systemctl enable --now touchpadfix.service

as explained in the thread above. But now I am getting an error for the lines

echo "i2c-ELAN0001:00" > /sys/bus/i2c/drivers/elants_i2c/unbind  echo "i2c-ELAN0001:00" > /sys/bus/i2c/drivers/i2c_hid/bind  

telling me there are no such devices.
Now, running the following

cat /proc/bus/input/devices  

does not even show any signs of an existing touchpad anymore, so I am really lost on how to proceed.
Is there a way to troubleshoot that doesn't require kernel shenanigans?
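One non-invasive thing worth checking (a sketch) is whether the kernel still sees the I2C device at all after the updates:

sudo dmesg | grep -iE 'elan|i2c_hid'          # driver and binding messages for the touchpad
ls /sys/bus/i2c/devices/                      # does i2c-ELAN0001:00 still appear here?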

EDIT:
I just found out that the touchpad works after booting with the power supply plugged in, but otherwise it doesn't.

Can't install apt-transport-https on Debian 9 stretch due to unmet dependencies: apt (>= 1.5~alpha4)

Posted: 23 Apr 2021 10:31 AM PDT

I have a Dockerfile which is built in a CI/CD pipeline. It starts with a Drupal image based on Debian 9 stretch and installs Node 10. Building the image has worked for the past three years but it just stopped working yesterday.

To reproduce the issue I can run the container without any of the instructions from the Dockerfile:

docker run -it drupal:8.6.1-apache bash  

Then in the container I run this script which should install Node 10:

curl -fsSL https://deb.nodesource.com/setup_10.x | bash -  

The script fails when trying to install apt-transport-https. If I try to install it directly so I can see the error I get:

apt-get update && apt-get install -y apt-transport-https
...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 apt-transport-https : Depends: apt (>= 1.5~alpha4) but 1.4.8 is to be installed
E: Unable to correct problems, you have held broken packages.

If I try to run this to install that specific version:

apt-get install -y apt=1.5~alpha4
...
E: Version '1.5~alpha4' for 'apt' was not found

If I run this to see the list of available versions of apt:

apt list -a apt
...
apt/stable-updates 1.8.2.3 amd64 [upgradable from: 1.4.8]
apt/stable,stable 1.8.2.2 amd64
apt/oldstable 1.4.11 amd64
apt/oldstable 1.4.10 amd64
apt/now 1.4.8 amd64 [installed,upgradable to: 1.8.2.3]

Then if I:

apt-get install -y apt=1.8.2.3
...
The following packages have unmet dependencies:
 apt : Depends: libapt-pkg5.0 (>= 1.7.0~alpha3~) but 1.4.8 is to be installed
       Depends: libgnutls30 (>= 3.6.6) but 3.5.8-5+deb9u5 is to be installed
E: Unable to correct problems, you have held broken packages.

Then if I try to install the required version of libapt-pkg5.0:

The following packages have unmet dependencies:
 libapt-pkg5.0 : Depends: libc6 (>= 2.27) but 2.24-11+deb9u4 is to be installed
                 Depends: libzstd1 (>= 1.3.2) but 1.1.2-1+deb9u1 is to be installed
                 Breaks: apt (< 1.6~) but 1.4.8 is to be installed
                 Recommends: apt (>= 1.8.2.3) but 1.4.8 is to be installed

Then if I run:

apt-get install -y libzstd1=1.3.8+dfsg-3+deb10u2 # this seems to install fine
apt-get install -y libc6=2.28-10

The libc6 installation fails with:

Reading package lists... 0%
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  linux-libc-dev
Use 'apt autoremove' to remove it.
Suggested packages:
  glibc-doc libc-l10n locales
Recommended packages:
  libidn2-0
The following packages will be upgraded:
  libc6
1 upgraded, 0 newly installed, 0 to remove and 57 not upgraded.
Need to get 2867 kB of archives.
After this operation, 1693 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 libc6 amd64 2.28-10 [2867 kB]
Fetched 2867 kB in 1s (1858 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
dpkg: warning: 'ldconfig' not found in PATH or not executable
dpkg: error: 1 expected program not found in PATH or not executable
Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin
E: Sub-process /usr/bin/dpkg returned an error code (2)

At this point I've spent hours troubleshooting this issue and I'm wondering if it's worth it to keep going in this direction or if there is another way to resolve this.

I was able to satisfy all the dependencies using aptitude in a fresh container like this:

apt-get update
apt-get install -y aptitude
aptitude install apt-transport-https

It asks me:

The following NEW packages will be installed:
  apt-transport-https libseccomp2{a}
The following packages will be upgraded:
  apt{b} libapt-pkg5.0{b}
2 packages upgraded, 2 newly installed, 0 to remove and 59 not upgraded.
Need to get 2575 kB of archives. After unpacking 1189 kB will be used.
The following packages have unmet dependencies:
 apt : Depends: libgnutls30 (>= 3.6.6) but 3.5.8-5+deb9u3 is installed and it is kept back
       Breaks: aptitude (< 0.8.10) but 0.8.7-1 is installed
 libapt-pkg5.0 : Depends: libc6 (>= 2.27) but 2.24-11+deb9u3 is installed and it is kept back
                 Depends: libzstd1 (>= 1.3.2) but it is not going to be installed
                 Breaks: aptitude (< 0.8.9) but 0.8.7-1 is installed
The following actions will resolve these dependencies:

     Keep the following packages at their current version:
1)     apt [1.4.8 (now)]
2)     apt-transport-https [Not Installed]
3)     libapt-pkg5.0 [1.4.8 (now)]

Accept this solution? [Y/n/q/?]

This solution does nothing but if I enter n it then asks me:

The following actions will resolve these dependencies:

      Install the following packages:
1)      libboost-iostreams1.67.0 [1.67.0-13+deb10u1 (stable)]
2)      libboost-system1.67.0 [1.67.0-13+deb10u1 (stable)]
3)      libgpm2 [1.20.4-6.2+b1 (oldstable)]
4)      libncursesw6 [6.1+20181013-2+deb10u2 (stable)]
5)      libtinfo6 [6.1+20181013-2+deb10u2 (stable)]
6)      libunistring2 [0.9.10-1 (stable)]
7)      libzstd1 [1.3.8+dfsg-3+deb10u2 (stable)]

      Upgrade the following packages:
8)      aptitude [0.8.7-1 (now, oldstable) -> 0.8.11-7 (stable)]
9)      aptitude-common [0.8.7-1 (now, oldstable) -> 0.8.11-7 (stable)]
10)     libc-bin [2.24-11+deb9u3 (now) -> 2.28-10 (stable)]
11)     libc-dev-bin [2.24-11+deb9u3 (now) -> 2.28-10 (stable)]
12)     libc6 [2.24-11+deb9u3 (now) -> 2.28-10 (stable)]
13)     libc6-dev [2.24-11+deb9u3 (now) -> 2.28-10 (stable)]
14)     libcwidget3v5 [0.5.17-4+b1 (now, oldstable) -> 0.5.17-11 (stable)]
15)     libgnutls30 [3.5.8-5+deb9u3 (now) -> 3.6.7-4+deb10u4 (stable)]
16)     libhogweed4 [3.3-1+b2 (now, oldstable) -> 3.4.1-1 (stable)]
17)     libidn2-0 [0.16-1+deb9u1 (now, oldstable) -> 2.0.5-1+deb10u1 (stable)]
18)     libnettle6 [3.3-1+b2 (now, oldstable) -> 3.4.1-1 (stable)]
19)     libp11-kit0 [0.23.3-2 (now, oldstable) -> 0.23.15-2+deb10u1 (stable)]
20)     libtasn1-6 [4.10-1.1+deb9u1 (now, oldstable) -> 4.13-3 (stable)]
21)     libxapian30 [1.4.3-2+deb9u3 (now, oldstable) -> 1.4.11-1 (stable)]

Accept this solution? [Y/n/q/?]

Accepting this solution works - apt-transport-https is successfully installed and then I am able to run the curl -fsSL https://deb.nodesource.com/setup_10.x | bash - script to install Node. However, this is an interactive solution which requires me to enter n once, then y twice, and I couldn't figure out a way to have that happen automatically during a Docker build.
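Incidentally, the apt list -a apt output above mixes oldstable (stretch) and stable (buster) packages, which suggests the container's apt sources use suite aliases that have shifted underneath it. A sketch of things that might be worth checking before fighting the resolver further (the paths and the sed edit are assumptions about what the sources look like, so inspect them first):

cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list   # do any entries say "stable"?
# if so, pinning them to stretch may keep apt from mixing buster packages in:
sed -i 's/\bstable\b/stretch/g' /etc/apt/sources.list
apt-get update && apt-get install -y apt-transport-https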

Creating an empty line

Posted: 23 Apr 2021 08:18 AM PDT

I'm currently trying to add an empty line to the output. Here is the command I have so far:

echo -e "my kernel version is $(uname -a) \nand the current date is $(date)"  

How do I change the command so that the output is:

my kernel version is Linux centos7 3.10.0-1127.18.2.el7.x86_64 #1 SMP Sun Jul 26 15:27:06 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
*** empty line***
and the current date is Fri Apr 23 09:18:29 EEST 2021
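Since echo -e already interprets \n, one way (a sketch) is simply to emit two consecutive newlines where the blank line should appear:

echo -e "my kernel version is $(uname -a)\n\nand the current date is $(date)"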

How can I pipe the output of "wc" to an inequality (-gt) without using "if;then" syntax?

Posted: 23 Apr 2021 09:04 AM PDT

I want to count the number of files matching a pattern at a depth of 1, and compare it with a number, e.g. to see if I have 3 or more such files. However, I want to do this without using the if/then syntax.

i.e. something like this:

ls -1q -d *patternX* | wc -l | [ -ge 3]    

I'd appreciate it very much if someone could let me know whether there's a way to avoid using the if/then syntax. Thanks!
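One way to do it (a sketch) is to capture the count with command substitution and let the test command itself be the whole pipeline's consumer, chaining && / || instead of if/then:

[ "$(ls -1qd *patternX* | wc -l)" -ge 3 ] && echo "3 or more matches" || echo "fewer than 3"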

How to download old Kali Linux ISOs from official sources like kali.org

Posted: 23 Apr 2021 07:46 AM PDT

Hey Stack Overflow community, I have been a huge fan of Kali Linux, as all the development packages come preinstalled (plus, on Windows you have distractions like playing games), so I generally use Kali for learning penetration testing and for development, since Python 3, C, and shell are all preinstalled. However, I didn't like the fancy new Kali look that arrived after 2020.1. Is there a way to get an old Kali ISO (2017, 2018, or 2019) from an official source? A disk image from an untrusted source could be reverse engineered to include malware. I do know of oldkali.org, which is FTP-like, but I don't know whether it is official; if it is, please reply.

How are time namespaces supposed to be used?

Posted: 23 Apr 2021 10:42 AM PDT

I thought I could do something like:

sudo unshare -T bash -c 'date -s "$1" && foobar' sh "$(date -d -1day)"  

so foobar would see a different system time from the rest of the system. However, it seems the change of system time is not contained. It changes the system time of the whole system.

This LWN article seems to suggest this namespace was meant for the use I tried to give it.

System calls that adjust the system time will, when called outside of the root time namespace, adjust the namespace-specific offsets instead.

Looking at strace date -s ..., I see among other output:

clock_settime(CLOCK_REALTIME, {tv_sec=1619044910, tv_nsec=0}) = 0  

However, reading time_namespaces(7):

This affects various APIs that measure against these clocks, including: clock_gettime(2), clock_nanosleep(2), nanosleep(2), timer_settime(2), timerfd_settime(2), and /proc/uptime.

I see it doesn't mention clock_settime(2). The wording "including" tells me this is perhaps not the complete list, but maybe it is.

I also don't understand --boottime/--monotonic. Looking at clock_settime(2), I see:

CLOCK_MONOTONIC A nonsettable system-wide clock that represents monotonic time since—as described by POSIX—"some unspecified point in the past". On Linux, that point corresponds to the number of seconds that the system has been running since it was booted.

CLOCK_BOOTTIME (since Linux 2.6.39; Linux-specific) A nonsettable system-wide clock that is identical to CLOCK_MONOTONIC, except that it also includes any time that the system is suspended.

However, when trying them, they don't seem to change the uptime:

$ uptime -s
2021-04-10 10:30:45
$ sudo unshare -T --boottime 1000000000 uptime -s
2021-04-10 10:30:45
$ sudo unshare -T --monotonic 1000000000 uptime -s
2021-04-10 10:30:45
$ sudo unshare -T --boottime -100000 uptime -s
2021-04-10 10:30:45
$ sudo unshare -T --monotonic -100000 uptime -s
2021-04-10 10:30:45

I see from strace uptime that it reads /proc/uptime instead of calling clock_gettime(2), and /proc/uptime doesn't seem to be affected by the unshare calls and their offsets, despite the documentation at time_namespaces(7) saying that it affects /proc/uptime as I quoted above.
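Since uptime reads /proc/uptime rather than calling clock_gettime(2), it may be worth probing the clock directly. A sketch (assumptions: util-linux unshare with --fork so the executed child actually lands in the new namespace, and python3 ≥ 3.7 for CLOCK_BOOTTIME); if the offsets work as documented, the printed value should be shifted by the offset passed to unshare:

sudo unshare -T --fork --boottime 1000000000 \
    python3 -c 'import time; print(time.clock_gettime(time.CLOCK_BOOTTIME))'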

How is this namespace supposed to be used? I can't seem to find any command that would be affected by unshare --time.

Wrong keyboard layout even though everything in the OS seems to report the correct/wanted layout - e.g. åäö becomes [';

Posted: 23 Apr 2021 10:40 AM PDT

I installed Manjaro 21.x (I don't remember the exact version) a few months back and chose the Gnome flavor. As per usual I set up my OS to have two keyboard layouts (US and SE) and life was great. Then life became less great as I realised Gnome is not for me, so I switched back to good old XFCE by simply installing XFCE using the package manager. Now, whenever I boot Linux (or log out and in again), my keyboard layout seems to be, in practice, US, since every time I type åäö it becomes [';. This happens anywhere in the OS.

I've tried to troubleshoot it but have found nothing, really. The OS seems to really believe I use the SE layout. For example, the layout switcher in XFCE shows SE:


Some output from interesting commands (they are the same regardless of whether I have the issue or have worked around it):

➜ cat /etc/vconsole.conf
KEYMAP=sv-latin1
FONT=
FONT_MAP=

➜ localectl status
   System Locale: LANG=en_US.utf8
                  LC_TIME=sv_SE.utf8
                  LC_COLLATE=C
       VC Keymap: sv-latin1
      X11 Layout: se,us
     X11 Variant: ,

➜ setxkbmap -print -verbose 10
Setting verbose level to 10
locale is C
Trying to load rules file ./rules/evdev...
Trying to load rules file /usr/share/X11/xkb/rules/evdev...
Success.
Applied rules from evdev:
rules:      evdev
model:      pc105
layout:     se,us
variant:    ,
options:    grp:win_space_toggle,terminate:ctrl_alt_bksp
Trying to build keymap using the following components:
keycodes:   evdev+aliases(qwerty)
types:      complete
compat:     complete
symbols:    pc+se+us:2+inet(evdev)+group(win_space_toggle)+terminate(ctrl_alt_bksp)
geometry:   pc(pc105)
xkb_keymap {
    xkb_keycodes  { include "evdev+aliases(qwerty)" };
    xkb_types     { include "complete"  };
    xkb_compat    { include "complete"  };
    xkb_symbols   { include "pc+se+us:2+inet(evdev)+group(win_space_toggle)+terminate(ctrl_alt_bksp)"   };
    xkb_geometry  { include "pc(pc105)" };
};

➜ cat /etc/X11/xorg.conf.d/00-keyboard.conf
# Written by systemd-localed(8), read by systemd-localed and Xorg. It's
# probably wise not to edit this file manually. Use localectl(1) to
# instruct systemd-localed to update it.
Section "InputClass"
        Identifier "system-keyboard"
        MatchIsKeyboard "on"
        Option "XkbLayout" "se,us"
        Option "XkbVariant" ","
EndSection

System info:

➜ cat /etc/os-release
NAME="Manjaro Linux"
ID=manjaro
ID_LIKE=arch
BUILD_ID=rolling
PRETTY_NAME="Manjaro Linux"
ANSI_COLOR="32;1;24;144;200"
HOME_URL="https://manjaro.org/"
DOCUMENTATION_URL="https://wiki.manjaro.org/"
SUPPORT_URL="https://manjaro.org/"
BUG_REPORT_URL="https://bugs.manjaro.org/"
LOGO=manjarolinux

➜ gnome-shell --version
GNOME Shell 3.38.4

➜ xfce4-about --version
xfce4-about 4.16.0 (Xfce 4.16)

To work around this issue, all I have to do is change the layout by clicking SE in the above screenshot or by using the keyboard shortcut I've assigned. After I've done this, the layout switcher still says SE, which is sort of expected, I guess. Also, åäö now becomes the expected åäö.
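Until the underlying cause is found, one workaround (a sketch; the layout and option strings are taken from the setxkbmap output above) is to re-apply the layout at session start so it doesn't have to be toggled manually:

# e.g. in an XFCE autostart entry or ~/.xprofile
setxkbmap -layout se,us -option grp:win_space_toggle,terminate:ctrl_alt_bksp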

readline implement find and move cursor to string in current line

Posted: 23 Apr 2021 08:33 AM PDT

Ummg, in this posted question, showed how to bind keys to functions in .bashrc to manipulate input text using xclip.

I have always missed being able to move the cursor to a search string in the current input line.

Could this be implemented by

  • binding, say, // to kill the current input line,
  • moving it to a clipboard,
  • taking all newly entered text up to the next // as the search string,
  • returning the original input line with the cursor placed at the beginning of the search string.

The posted code references the environment variables READLINE_LINE and READLINE_POINT, which would be needed for this. Where/how do these get exported?
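For what it's worth, a rough sketch of the idea wired up as a bind -x handler (untested, and an assumption about how it could be done rather than a known recipe; bash exposes READLINE_LINE and READLINE_POINT inside bind -x commands, which is what allows the cursor to be repositioned):

# ~/.bashrc — jump the cursor to the first occurrence of a prompted-for string
_jump_to_string() {
    local pat prefix
    read -rp "find: " pat || return
    prefix=${READLINE_LINE%%"$pat"*}                 # text before the first match, if any
    [ "$prefix" != "$READLINE_LINE" ] && READLINE_POINT=${#prefix}
}
bind -x '"\C-x\C-f": _jump_to_string'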

Kali Linux stays on the black screen on boot and the graphics look weird

Posted: 23 Apr 2021 11:09 AM PDT

I'm trying to install Kali Linux on my laptop, but I'm having a problem with the boot screen. When I choose to load in graphical mode, a black screen is displayed, the strange images shown in the pictures appear, and nothing happens. I think this error is related to the graphics driver or chip. I am trying to do this setup from a USB disk. I tested the USB disk on my desktop computer and the installation starts there. So, as I suspected, Kali Linux seems to have a problem with the graphics hardware in my laptop. Does anyone have any ideas? How can I solve this problem? You can review the photos below to understand me better. Thanks in advance.

My system: Lenovo ThinkPad T430, Intel Core i5 (3rd Gen) 3320M / 2.6 GHz, 8 GB DDR3 SDRAM / 1600 MHz, Intel HD Graphics 4000

For more info about the T430, see this link:

https://www.cnet.com/products/lenovo-thinkpad-t430/
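A common first thing to try when an installer black-screens on Intel graphics (an assumption worth testing, not a confirmed fix for the T430) is to boot the installer with kernel modesetting disabled:

# at the Kali boot menu, press Tab (or e in GRUB) on the install entry and append:
nomodeset
# if the installer then starts, the issue is likely in the i915/KMS path and can be
# revisited after installation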


Configure static values in /etc/dhcpcd.conf

Posted: 23 Apr 2021 08:59 AM PDT

What are the valid values (and what are they used for) for the static option in /etc/dhcpcd.conf file?

I'm configuring a network interface of a Raspberry (running raspbian stretch) by editing the /etc/dhcpcd.conf file.
Although I was able to set it up correctly, I am curious about all the configuration options provided through this file, specifically for static configuration.

I read the man page of dhcpcd.conf and didn't find any explanation of the values the static option accepts. I wasn't able to find anything on Google either.

The man page of dhcpcd.conf just says this:

 static value
         Configures a static value.  If you set ip_address then dhcpcd
         will not attempt to obtain a lease and will just use the value
         for the address with an infinite lease time.  If you set
         ip6_address, dhcpcd will continue auto-configuation as normal.

         Here is an example which configures two static address,
         overriding the default IPv4 broadcast address, an IPv4 router,
         DNS and disables IPv6 auto-configuration.  You could also use the
         inform6 command here if you wished to obtain more information via
         DHCPv6.  For IPv4, you should use the inform ipaddress option
         instead of setting a static address.

               interface eth0
               noipv6rs
               static ip_address=192.168.0.10/24
               static broadcast_address=192.168.0.63
               static ip6_address=fd51:42f8:caae:d92e::ff/64
               static routers=192.168.0.1
               static domain_name_servers=192.168.0.1 fd51:42f8:caae:d92e::1

         Here is an example for PPP which gives the destination a default
         route.  It uses the special destination keyword to insert the
         destination address into the value.

               interface ppp0
               static ip_address=
               destination routers

After reading some tutorials, these are all the valid options I know of:

  • ip_address

  • routers

  • domain_name_servers

  • domain_search

  • domain_name


JFYI my /etc/dhcpcd.conf configuration file looks like this:

# Inform the DHCP server of our hostname for DDNS.
hostname
# Use the hardware address of the interface for the Client ID.
clientid
# Persist interface configuration when dhcpcd exits.
persistent
# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit
# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Most distributions have NTP support.
option ntp_servers
# A ServerID is required by RFC2131.
require dhcp_server_identifier
# Generate Stable Private IPv6 Addresses instead of hardware based ones
slaac private
# A hook script is provided to lookup the hostname if not set by the DHCP
# server, but it should not be run by default.
nohook lookup-hostname

# Static IP configuration for eth0.
interface eth0
    static ip_address=192.168.12.234/24
    static routers=192.168.12.1
    static domain_name_servers=192.168.12.1
    nogateway
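For reference, here is a sketch of a static block combining the values that appear in the man page excerpt and the option list above (the concrete addresses and the example.lan names are placeholders; whether further static values exist is exactly the open question):

interface eth0
    static ip_address=192.168.12.234/24
    static broadcast_address=192.168.12.255
    static ip6_address=fd51:42f8:caae:d92e::ff/64
    static routers=192.168.12.1
    static domain_name_servers=192.168.12.1
    static domain_search=example.lan
    static domain_name=example.lan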

"No space left on device" when extracting tar archive

Posted: 23 Apr 2021 10:43 AM PDT

I am trying to extract a tar file using tar xvzf ZAP_2.7.0_Linux.tar.gz; here is the file. What could be the problem?

-rw-r--r--  1 root root  32165888 Mar  6 12:06 ZAP_2.7.0_Linux.tar.gz
-rwxrwxrwx  1 root root         0 Mar  6 12:47 ZAP_2_7_0_unix.sh

When I try this [root@localhost ~]# tar xvzf ZAP_2.7.0_Linux.tar.gz

I am getting the following error.

tar: ZAP_2.7.0/plugin/diff-beta-8.zap: Cannot open: No space left on device
ZAP_2.7.0/plugin/directorylistv1-release-3.zap

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

It looks like I have enough disk space. Am I reading something incorrectly here? Sorry if it's a basic question. Why is root showing 'Use%' as 100%?

[root@localhost ~]# df
Filesystem              1K-blocks    Used Available Use% Mounted on
/dev/mapper/centos-root   6981632 6981080       552 100% /
devtmpfs                  1426928       0   1426928   0% /dev
tmpfs                     1442256      84   1442172   1% /dev/shm
tmpfs                     1442256    8892   1433364   1% /run
tmpfs                     1442256       0   1442256   0% /sys/fs/cgroup
/dev/sda1                  508588  160004    348584  32% /boot
tmpfs                      288452      16    288436   1% /run/user/42
tmpfs                      288452       0    288452   0% /run/user/0
[root@localhost ~]#

What could be the problem?

[root@localhost ~]# df /dev/sda
Filesystem     1K-blocks  Used Available Use% Mounted on
devtmpfs         1426928     0   1426928   0% /dev
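Since the df output shows the root filesystem with only 552 KB available, a sketch for finding what is consuming the space, staying on the root filesystem only:

sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15   # largest top-level directories on /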

Tomcat 8 503 Error with Apache2 mod_jk as Reverse Proxy

Posted: 23 Apr 2021 08:04 AM PDT

I'm following this guide to setup Tomcat 8 on Ubuntu Server 16.04 using Apache2's mod_jk module as a reverse proxy:

https://www.digitalocean.com/community/tutorials/how-to-encrypt-tomcat-8-connections-with-apache-or-nginx-on-ubuntu-16-04

Everything works until the last step, which is to change the HTTP and AJP Connectors in server.xml to only listen on localhost. Here's the change I made to the AJP Connector:

<Connector port="8009" address="127.0.0.1" protocol="AJP/1.3" redirectPort="8443" />  

Before this change, typing https://myhostname takes me to the Tomcat administration page; after it, I get "503 Service Unavailable". I've temporarily turned off my firewall and removed AppArmor. Here's the relevant portion of mod_jk.log:

jk_open_socket::jk_connect.c (817): connect to ::1:8009 failed (errno=111)  ajp_connect_to_endpoint::jk_ajp_common.c (1068): (ajp13_worker) Failed opening socket to (::1:8009) (errno=111)  ajp_send_request::jk_ajp_common.c (1728): (ajp13_worker) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)  

What could be causing this, and how can I resolve it?
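The log shows mod_jk resolving the worker host to ::1 (IPv6 loopback), while the Connector above is bound to 127.0.0.1 (IPv4 only). One thing worth trying (a sketch; the worker name ajp13_worker is taken from the log, and the file path may differ on your setup) is to pin the worker to the IPv4 address in workers.properties:

# /etc/libapache2-mod-jk/workers.properties (path may vary)
worker.list=ajp13_worker
worker.ajp13_worker.type=ajp13
worker.ajp13_worker.host=127.0.0.1
worker.ajp13_worker.port=8009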

Multiple passes of wiping disk with `dd`

Posted: 23 Apr 2021 08:43 AM PDT

I want to use dd to overwrite a disk multiple times to destroy data beyond recovery.

I know I could use dd if=/dev/zero of=/dev/sdx to do a pass with zeros, or dd if=/dev/null of=/dev/sdx to accomplish a similar (though not exactly the same) result, but it will be slower.

I was wondering if there is a way to write all zeros, then all ones, to a disk using dd, or whether there is a better way to accomplish this.
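A sketch of a zeros-then-ones pass with plain dd (the 0xFF stream is produced by translating the zero bytes; /dev/sdX is a placeholder, so triple-check the device name), plus the dedicated tool that automates multiple passes:

dd if=/dev/zero of=/dev/sdX bs=1M status=progress                    # pass 1: all zeros
tr '\0' '\377' < /dev/zero | dd of=/dev/sdX bs=1M status=progress    # pass 2: all ones (0xFF)
# or let shred do N random passes followed by a final zero pass:
shred -v -n 3 -z /dev/sdX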

Two vsftpd instances - check passive port on FTPS

Posted: 23 Apr 2021 09:07 AM PDT

I have configured vsftpd on a CentOS machine to run as two instances, with vsftpd.conf and vsftpd2.conf. Here is the content of the second conf file:

anonymous_enable=NO
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd/chroot_list
chroot_local_user=YES
connect_from_port_20=YES
dirmessage_enable=YES
force_local_data_ssl=YES
force_local_logins_ssl=YES
ftpd_banner=Hello.
listen=YES
listen_port=30
local_enable=YES
local_umask=022
pam_service_name=vsftpd
pasv_enable=YES
pasv_address=192.168.100.162
pasv_max_port=389
pasv_min_port=389
rsa_cert_file=/etc/vsftpd/vsftpd.pem
ssl_enable=YES
ssl_sslv2=NO
ssl_sslv3=NO
ssl_tlsv1=YES
ssl_ciphers=HIGH
user_config_dir=/etc/vsftpd/user_conf
userlist_enable=NO
write_enable=YES
xferlog_enable=YES
xferlog_file=/var/log/xferlog2
xferlog_std_format=NO
dual_log_enable=YES
log_ftp_protocol=YES

The first conf file is identical except for the xferlog file, the listen_port (which is missing), and pasv_max_port/pasv_min_port, which are 65000/60000.

So I have one FTPS instance working on port 21 and the other on port 30. Both are working fine, but I want to make sure that port 389 is really being used. So I started a capture with Wireshark, where I could see all my TCP packets to port 30, but none to 389. Instead, I see packets to my remote server on port 49276.

How can I make sure that this is working fine?
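One way to see what vsftpd actually opens for the data connection (a sketch) is to watch its sockets on the server while a passive directory listing or transfer is in progress:

watch -n1 "ss -tlnp | grep vsftpd"     # the passive data port should appear here briefly
ss -tnp | grep vsftpd                  # established data connections and their ports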

MC does not apply custom config

Posted: 23 Apr 2021 09:41 AM PDT

I have a fresh installation of Debian 7. Upon using Midnight Commander I noticed that auto-indentation is disabled. 'No problem,' I told myself, and changed the ~/.config/mc/ini file.
Surprisingly, that did nothing: as soon as I reopened mc, the new ini was overwritten with the previous version.
After a little googling I also tried the same with ~/.mc/mc.ini and ~/.mc/ini, with no results at all.
An strace at least showed me that the ~/.config/mc/ini file is read by mc, but it seems this file is overwritten sometime before the read access, which is not traceable in the strace.

My next guess was to use a global config file like /etc/mc/mc.ini but that changed nothing either.

As suggested below, I prevented modification of the ini file via chattr +i $HOME/.config/mc/ini. Right now everything works fine.
Does anyone else experience problems like this? Maybe this is a bug.

Extending/resizing Fedora Guest-OS root-drive in VirtualBox

Posted: 23 Apr 2021 09:47 AM PDT

I hope you can help me with this:
My VirtualBox guest OS (Fedora 19) told me that there is not enough free disk space available, so I first increased the .vdi file by using the Windows command line:

VBoxManage.exe modifyhd "path" --resize 20480  

After that VirtualBox correctly shows a bigger virtual hdd.

Then I booted the virtual machine with a Fedora (and later, to test, Ubuntu) live CD and first set the LVM size to 20 GB with:

pvresize /dev/sda2 --setphysicalvolumesize 20G  

That worked fine, but then I tried to resize the root logical volume with:

lvresize -l+100%FREE -r /dev/fedora/root  

And I get this error-message:

Extending logical volume root to 16,71 GiB
  device-mapper: resume ioctl on  failed: invalid argument
  Unable to resume fedora-root (253:1)
  Problem reactivating root

After that it is impossible to use the root LV. I tried the same with the swap partition as well (to find out whether the issue is specific to root) and it didn't work either. I tried lvextend and it didn't work. The drive is suspended after this and I cannot reactivate it using:

lvchange -a y  

Same error there.

Maybe somebody can help me with that? I do have a backup of that machine, so that I can always start at the beginning :-)
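For comparison, the sequence usually needed here (a sketch, assuming the LVM PV lives on /dev/sda2 as in the pvresize command above) grows the partition itself first and lets pvresize detect the new size, rather than forcing one with --setphysicalvolumesize:

# from the live CD
parted /dev/sda resizepart 2 100%           # grow the partition backing the PV
pvresize /dev/sda2                          # let LVM pick up the new partition size
lvextend -l +100%FREE -r /dev/fedora/root   # grow the LV and its filesystem (-r)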

How to go to next screen in screen

Posted: 23 Apr 2021 09:44 AM PDT

I have 10 screens running in detached mode. When I do screen -r pid I can see the output from one of them. The man page says Ctrl-a n will show me the next screen, but when I do that it says: No other window.

What am I missing?
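In case it helps frame the question: Ctrl-a n cycles windows inside one session, so if the ten screens were started as ten separate sessions, each session has only one window. A sketch of moving between separate sessions (an assumption about how the screens were started):

screen -ls                 # list all detached sessions and their pids
# inside a session: Ctrl-a d detaches; then reattach to another one:
screen -r <other-pid>
# alternatively, start future jobs as windows of a single named session:
screen -S jobs             # then Ctrl-a c creates each new window, Ctrl-a n cycles them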

CentOS: List the installed RPMs by date of installation/update?

Posted: 23 Apr 2021 08:37 AM PDT

I'm on a CentOS machine. I updated and installed some packages a few weeks back, but I don't remember the name of every package or the names of every dependency. I used yum.

Can I list the packages on my system by the date they were last installed or updated?
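A couple of commands that may get at this (a sketch; both are standard on CentOS):

rpm -qa --last | less          # all installed packages, newest install/update first
yum history                    # transactions, with IDs you can inspect via: yum history info <id>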
