Monday, April 26, 2021

Recent Questions - Unix & Linux Stack Exchange



I tried to install Golang in Kali Linux and it's showing the following error

Posted: 26 Apr 2021 09:59 AM PDT

Click the link to see the screenshot.

Once Go is installed, if I type go version and press Enter in the same terminal I used to install Go, it works and shows me the Go version. If I open a new terminal and enter the same go version command, it shows "command not found".
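(This symptom usually means PATH was only changed in the installing shell. A minimal sketch, assuming Go was unpacked to /usr/local/go as in the official tarball instructions; adjust the path if you installed elsewhere:)

echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc   # or ~/.zshrc if your Kali uses zsh
source ~/.bashrc   # reload in the current terminal; new terminals pick it up automatically
go version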

"-bash: ./a.out: No such file or directory" but file exists (arm-linux compiler)

Posted: 26 Apr 2021 09:58 AM PDT

I'm compiling with an ARM gcc compiler on an evaluation board. What follows are some flags from my Makefile:

CC = arm-linux-gnueabi-gcc
CFLAGS = -Wextra -Wall -O3 -mcpu=cortex-m4
LD = arm-linux-gnueabi-gcc

Actually, the authors' original implementation used arm-linux-gcc, but I'm using arm-linux-gnueabi-gcc (I cannot say if they are the same or not, but it compiles fine). The point is that after compiling and linking, an executable is created, but I cannot launch it because of this error:

-bash: ./a.out: No such file or directory  

On the web I found some solutions, but i) they're from quite old questions, and ii) they require me to add the i386 architecture with the following commands:

dpkg --add-architecture i386
apt-get update
apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386

But I run into some issues when executing apt-get update:

E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/focal-updates/main/binary-i386/Packages  404  Not Found [IP: 91.189.88.142 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/focal-backports/universe/binary-i386/Packages  404  Not Found [IP: 91.189.88.142 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/focal-security/main/binary-i386/Packages  404  Not Found [IP: 91.189.88.142 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/focal/main/binary-i386/Packages  404  Not Found [IP: 91.189.88.142 80]
E: Some index files failed to download. They have been ignored, or old ones used instead.

and, consequently, when I launch apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386 what I get is:

Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package libc6:i386
E: Unable to locate package libncurses5:i386
E: Unable to locate package libstdc++6:i386
E: Couldn't find any package by regex 'libstdc++6'

So I just searched for the "failed to fetch 404" error online and found various answers, but the suggestions are all similar if not identical to this one. The problem here is that I don't have URLs such as "http://us.archive.ubuntu.com/ubuntu/": in my /etc/apt/sources.list the URLs are all of the form "http://ports.ubuntu.com/". I report the starting lines of the document (I would have reported it entirely, but Stack Exchange deems it spam):

## Note, this file is written by cloud-init on first boot of an instance
## modifications made here will not survive a re-bundle.
## if you wish to make changes you can:
## a.) add 'apt_preserve_sources_list: true' to /etc/cloud/cloud.cfg
##     or do the same in user-data
## b.) add sources in /etc/apt/sources.list.d
## c.) make changes to template file /etc/cloud/templates/sources.list.tmpl

I hope I've been clear enough, because I'm going crazy...
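(Context for the error itself: "No such file or directory" for a binary that demonstrably exists usually means the kernel cannot find the ELF interpreter named inside the binary, e.g. an ARM loader on a non-ARM host. A quick diagnostic sketch:)

file ./a.out                                 # the architecture the binary targets
readelf -l ./a.out | grep -i interpreter     # the dynamic loader the binary requests
uname -m                                     # the architecture you are actually running on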

One-liner for SFTP download

Posted: 26 Apr 2021 09:56 AM PDT

I have a laptop and a raspberry pi acting as a storage server. I'd like to know how to download a file without any user interaction other than running the program. I read through the man page, and there doesn't seem to be a way to specify a password with a flag and a download location.

Any suggestions?
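A minimal sketch of the usual approaches (host name and paths are hypothetical): sftp itself can do a one-shot fetch when given a remote path, public-key auth removes the password prompt, and the sshpass package covers the password-on-the-command-line case, with the usual caveat that it exposes the password:

ssh-copy-id pi@raspberrypi.local                                   # one-time key setup
sftp pi@raspberrypi.local:/srv/storage/file.txt ~/Downloads/       # one-shot download
sshpass -p 'secret' scp pi@raspberrypi.local:/srv/storage/file.txt ~/Downloads/   # password variant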

How to retrieve items from an array of arrays?

Posted: 26 Apr 2021 09:48 AM PDT

Hello StackExchange pros!

I am working on a zsh project for macOS. I used typeset to create three associative arrays to hold values, and a fourth array to reference the individual arrays. Is it possible to iterate over the arrCollection to retrieve the key/value pairs from each of the member arrays? Note that the keys in the arrays below are not the same as my production script--they are simply key indices rather than the more descriptive keys you might find in an associative array.

I thought I could use parameter expansion like this:

for k in $(sort <<< "${(kvF)arrCollection}"); do
    echo "$arrCollection["${(kvF)k}"]"
done

I don't have it quite right though. Can anyone help? Expected output will be a list of all items from all three arrays separated by a newline.

Full script sample below. Usage: arrTest.sh showAll

#!/bin/zsh

key=$1

typeset -A arrOne arrTwo arrThree
typeset -A arrCollection

#Plan is to use an array of arrays so that a for loop can be used later to loop
#through each key/value pair looking for a value that matches some pattern in an if statement
#(if statement not included here). In the showAll case, how can I use parameter expansion to print all
#of the values in each array? The if statement will further constrict what is actually echoed based on its
#value.

arrOne[1]="First"
arrOne[2]="Second"
arrOne[3]="Third"
arrOne[4]="Fourth"

arrTwo[1]="Purple"
arrTwo[2]="Orange"
arrTwo[3]="Red"
arrTwo[4]="Green"
arrTwo[5]="Blue"

arrThree[1]="First"
arrThree[2]="Red"
arrThree[3]="Planet"
arrThree[4]="Sun"
arrThree[5]="Moon"
arrThree[6]="Star"

#Array of arrays
arrCollection[1]=arrOne
arrCollection[2]=arrTwo
arrCollection[3]=arrThree

#Expect a parameter
if [ -z "$key" ]
then
    echo "Please enter a parameter"
else
    case "$key" in
    showAll)
        for k in $(sort <<< "${(kvF)arrCollection}"); do
            #This is the part I am having trouble with
            echo "$arrCollection["${(kvF)k}"]"
        done
        exit 1
    ;;
    *)
        echo "Something goes here"
        exit 1
    ;;
    esac
fi
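For the showAll case, here is a hedged sketch (assuming the arrays above): since arrCollection stores array names, zsh's (P) expansion flag can dereference each name, and a two-variable for loop walks the key/value pairs, printing one value per line as in the expected output:

for name in "${(@v)arrCollection}"; do    # arrOne arrTwo arrThree
    for k v in "${(@kvP)name}"; do        # key/value pairs of the named array
        echo "$v"                         # or "$k -> $v" while debugging
    done
done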

Can't kill stopped jobs with bash script

Posted: 26 Apr 2021 09:23 AM PDT

for i in $( seq 1 $1 )
do
        kill %$i
done

I'm trying to kill the stopped jobs with this script, but interestingly it isn't able to, even though I have jobs open.

Here is the screenshot related to the case:

[screenshot]
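For context: job tables are per-shell, and a script runs in a child shell whose own job table is empty, so %1, %2, ... cannot refer to the interactive shell's jobs. Two workarounds, as a sketch (script name hypothetical):

kill $(jobs -p)      # run directly in the interactive shell: signals all of its jobs
. ./killjobs.sh 3    # or source the script so the loop runs in the current shell
# note: stopped jobs may also need SIGCONT before they act on a pending SIGTERM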

Listing VirtIO devices from a shell in a linux guest

Posted: 26 Apr 2021 08:47 AM PDT

As the title already summarizes: is there a way (a tool or a simple command) to list the available (thus recognized by a Linux guest) VirtIO devices?
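A couple of places that usually show them; a minimal sketch:

lspci -nn | grep -i virtio      # VirtIO devices exposed over PCI (vendor ID 1af4)
ls /sys/bus/virtio/devices/     # everything registered on the virtio bus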

Can't kill wget process with `kill -9`

Posted: 26 Apr 2021 09:06 AM PDT

I have a wget process that I am unable to kill. This question is similar to one asked before, but here the D in the STAT column seems to indicate that the process is in uninterruptible sleep (usually IO), while in the other question the process was in state R.

$ ps -axuf | grep `id -un`
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
[...]
biogeek   2833351  0.0  0.0      0     0 ?        D    Apr12   0:03 [wget]
[...]

Trying to kill it doesn't produce any output

$ kill -9 2833351  

and when I run ps -axuf again, the wget process is still there.

How do I figure out which software/hardware fault caused this issue?
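A process in state D is blocked inside the kernel, usually on I/O, and even SIGKILL is only acted on once the blocking call returns. A diagnostic sketch (/proc/PID/stack needs root and a kernel built with stack tracing):

cat /proc/2833351/wchan; echo                       # kernel function the process sleeps in
sudo cat /proc/2833351/stack                        # full kernel stack, if available
dmesg | grep -iE 'hung task|i/o error|nfs' | tail   # storage or NFS trouble often shows up here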

Can grub recognize a "degraded" raid1 mdadm partition?

Posted: 26 Apr 2021 08:38 AM PDT

GRUB can boot from a "degraded" ZFS RAID1 (I have tried it); it is simple: create two ZFS pools, one for boot and one for root, each one a RAID1, and GRUB loads Linux with both disks or with only one active. I want to try a similar thing with a btrfs RAID1 root plus an mdadm RAID1 on ext4 for /boot. As far as I know, the latest GRUB on Slackware current can recognize md RAID at boot (metadata 0.90). I configured my system this way:

fdisk -l /dev/vda
Disk /dev/vda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 7715105B-51CD-9A45-9D05-E2C8161E51E9

Device        Start       End  Sectors  Size Type
/dev/vda1      2048   1050623  1048576  512M EFI System
/dev/vda2   1050624   9439231  8388608    4G Linux swap
/dev/vda3   9439232  11536383  2097152    1G Linux RAID
/dev/vda4  11536384 104857566 93321183 44.5G Linux filesystem

fdisk -l /dev/vdb
Disk /dev/vdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 7715105B-51CD-9A45-9D05-E2C8161E51E9

Device        Start       End  Sectors  Size Type
/dev/vdb1      2048   1050623  1048576  512M EFI System
/dev/vdb2   1050624   9439231  8388608    4G Linux swap
/dev/vdb3   9439232  11536383  2097152    1G Linux RAID
/dev/vdb4  11536384 104857566 93321183 44.5G Linux filesystem

this is the fstab

LABEL=SWAP       swap             swap        defaults            0   0
LABEL=ROOT       /                btrfs       defaults,degraded   1   1
LABEL=BOOT       /boot            ext4        defaults            1   2
/dev/vda1        /boot/efi        vfat        defaults            1   2
devpts           /dev/pts         devpts      gid=5,mode=620      0   0
proc             /proc            proc        defaults            0   0
tmpfs            /dev/shm         tmpfs       nosuid,nodev,noexec 0   0

this is the /etc/default/grub

GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT_QUIET=false
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=$( sed 's/Slackware /Slackware-/' /etc/slackware-version )
GRUB_ENABLE_CRYPTODISK=y
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200 rootflags=degraded"
GRUB_TERMINAL="console serial"

finally the mkinitrd.conf

SOURCE_TREE="/boot/initrd-tree"
CLEAR_TREE="1"
OUTPUT_IMAGE="/boot/initrd.gz"
KERNEL_VERSION="$(ls /var/log/packages/kernel-generic-*-x86_64-* |cut -d - -f 3)"
KEYMAP="it"
MODULE_LIST="btrfs:ext4:vfat:xhci-hcd:hid:usbhid:ochi-hcd:uhci-hcd:uhci-hcd:ehci-hcd:virtio-net:virtio-ring:virtio-blk:virtio-pci"
LUKSDEV="/dev/disk/by-uuid/b97cbe7d-c5e5-432e-adc4-659ed80dd65f:/dev/disk/by-uuid/ec4fd069-0bd2-4b53-90e2-e493c50070f1"
ROOTDEV="/dev/vda4"
ROOTFS="btrfs"
RESUMEDEV="/dev/disk/by-label/SWAP"
RAID="1"
LVM="0"
UDEV="1"

I update initrd and grub

mkinitrd -B -F
grub-mkconfig -o /boot/grub/grub.cfg

I reboot and it works: it asks me for two passwords for the two encrypted LUKS devices and goes directly to login. Then I tried to boot from the second disk and...

[screenshot]

As you can see, the second EFI partition is recognized, but not the degraded md RAID1 partition; if I attach the first disk, it works fine. Of course, I had installed GRUB on both boot disks:

grub-install --target=x86_64-efi --recheck --efi-dir=/boot/efi /dev/vda
grub-install --target=x86_64-efi --recheck --efi-dir=/boot/efi /dev/vdb

Any solution?
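One hedged observation: with --target=x86_64-efi, grub-install ignores the trailing device argument and writes into whatever is mounted at the given EFI directory, which per the fstab above is always /dev/vda1. So /dev/vdb1 may never have received a GRUB image at all. A sketch of installing to the second ESP explicitly (mount point arbitrary; --removable writes the fallback \EFI\BOOT path so no NVRAM entry is needed):

mount /dev/vdb1 /mnt
grub-install --target=x86_64-efi --efi-directory=/mnt --boot-directory=/boot --removable
umount /mnt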

Copy all non-text files

Posted: 26 Apr 2021 09:10 AM PDT

I need to move all files not ending with the .txt, .cpp, and .h extensions in one folder to a separate folder via the cp command.

Is there a built in way to do this or do I need to make a script?
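There is no cp-only way to express "everything except these extensions", but find can do the filtering and hand the results to cp; a minimal sketch with hypothetical source and destination paths:

find /path/to/src -maxdepth 1 -type f \
    ! -name '*.txt' ! -name '*.cpp' ! -name '*.h' \
    -exec cp -t /path/to/dest {} +     # -t is GNU cp's "target directory" flag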

Messed up qt5 installation

Posted: 26 Apr 2021 08:14 AM PDT

I had everything running as it should before, but the other day I installed qt6 and qtcreator to do a project, and now it seems I messed up qt somehow. After a restart, no qt GUI apps would open. Specifically, qjackctl won't open, with the message:

qjackctl: symbol lookup error: /usr/lib/libQt6Core.so.6: undefined symbol: ucal_getDefaultTimeZone_69  

VLC and any other qt interface won't work either. I tried deleting everything, both qt5, qt6, qtcreator and qjackctl, and letting it install its own dependencies by itself, but it still won't run, with the same error. It's obviously missing something, or it's a .so version mismatch. Does anyone know how to fix this?
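For what it's worth, ucal_getDefaultTimeZone is an ICU function and the _69 suffix is ICU's version tag, so this looks like libQt6Core.so.6 was built against ICU 69 while a different libicu is being resolved at runtime. A diagnostic sketch:

ldd /usr/lib/libQt6Core.so.6 | grep -i icu   # which libicu* actually resolves at runtime
ls -l /usr/lib/libicu*                       # ICU versions present on disk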

Why does `grep -L x <<<x >/dev/null` return 0 despite that `grep -L x <<<x` returns 1?

Posted: 26 Apr 2021 08:05 AM PDT

I have GNU grep 3.3-1 (the current version in Debian Buster).

From man grep:

EXIT STATUS

Normally the exit status is 0 if a line is selected, 1 if no lines were selected, and 2 if an error occurred. However, if the -q or --quiet or --silent is used and a line is selected, the exit status is 0 even if an error occurred.

This is consistent with POSIX. (-L is not specified there.)

Full documentation of the latest release and documentation in the latest commit don't have more details about -L (--files-without-match).

grep -L x <<<x (in bash) exits with code 1. I'm not sure if this is consistent with the documentation (what exactly are selected lines here?), but at least this is explainable: no input file has matched the condition that there must be no x.

grep -Lq x <<<x and grep -L x <<<x >/dev/null both exit with code 0. Okay, -q is a tiny bit more understandable, but why does stdout redirection affect the exit code? So to get the original exit-code behavior while suppressing output, a hack like (set -o pipefail; /bin/grep -L x <<<x | cat >/dev/null) is needed. Why is that?

Since GNU grep is so widely used, I'm not sure that this is a bug in grep: it's more likely that I'm missing something. Or maybe the exit code of -L is simply something not to rely on? Even though you can understand the behavior of the current version using test runs like this and the source code, this behavior may change in the future since it's not consistent with the current documentation. What do you think?
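(A hedged explanation: GNU grep special-cases a stdout that points at /dev/null and takes the same early-exit path as -q — visible as dev_null_output in its source — which would explain why the redirection alone changes the exit status. Redirecting to a regular file shows the documented -L behavior:)

grep -L x <<<x > /tmp/out; echo $?    # 1, the usual -L status (any regular file will do)
grep -L x <<<x > /dev/null; echo $?   # 0, the /dev/null fast path behaving like -q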

(By the way, tests in the latest commit don't seem to test -L other than for a non-2 exit code (in in-eq-out-infloop) and for edge cases like -f /dev/null (in skip-read).)

cmp-command for three files

Posted: 26 Apr 2021 08:13 AM PDT

I'd like to compare three text files using the cmp command in bash and perform an action if file1 differs from file2, but file1 and file3 are exactly the same. According to the help file, cmp outputs 0 if the files are the same and 1 if they differ, so I tried:

if [ "cmp -s file1.txt file2.txt" != 0 ] && [ "cmp -s file1.txt file3.txt" == 0 ]; then  #Action  else  #Do nothing  fi  

However, the partial condition if [ "cmp -s file1.txt file2.txt" != 0 ] does not even work as a single if-condition, nor does the second: they behave the same way whether the two files are the same or not. What am I doing wrong?
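The quoted "cmp -s ..." is never executed; it is just a non-empty string being compared to 0, which is why both branches behave alike. The usual pattern tests the command's exit status directly; a minimal sketch:

if ! cmp -s file1.txt file2.txt && cmp -s file1.txt file3.txt; then
    echo "file1 differs from file2 but matches file3"    # Action goes here
fi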

As a root user, I do not have permission to create/write any file in any directory

Posted: 26 Apr 2021 07:56 AM PDT

As the root user, I suddenly cannot create or edit any file in any directory; I get the following error:

E212: Can't open file for writing  

Everything was working fine. The last thing I did was to create a conf file in the /etc/rsyslog.d directory and remove it afterward, since it was a useless one. Can it be related to that? How can I solve this?
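Vim's E212 on every path is a classic symptom of the filesystem having been remounted read-only after an error; being root does not override that. A diagnostic sketch:

mount | grep ' / '                               # 'ro' in the options = read-only root fs
dmesg | grep -iE 'read-only|i/o error' | tail    # why it was remounted, if so
df -h / && df -i /                               # a full disk or inode table looks similar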

How to append a Linux command line to a file?

Posted: 26 Apr 2021 08:34 AM PDT

I am writing a bash script that generates a Vagrantfile. The reason I use a bash script to generate the Vagrantfile is to let my colleagues set up their environments with a single script before running vagrant up. The challenge I am facing now is that, when appending a command line to the Vagrantfile, the command is executed instead of appended to the Vagrantfile.

For example (setup-vagrant-host.sh)

#!/bin/bash
.
.
.
some pre requisites steps
.
.
.
# Generate Vagrantfile
cat <<EOL > Vagrantfile
    Vagrant.configure("2") do |config|
    config.vm.define "vagrant-host"
    config.vm.provision "shell", inline: <<-SHELL
        sudo su
        apt update
        ipaddress=`hostname -I | awk '{print $2}'`
        echo "*** IP address is $ipaddress ***"
    SHELL
    end
EOL
vagrant up

When I execute the above script, hostname -I | awk '{print $2}' is always executed instead of appended to the Vagrantfile. I don't know if I am doing this right; please suggest a better way...

I am not a native English speaker, so forgive my poor grammar. Thanks.
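The usual fix: quote the here-document delimiter. With an unquoted <<EOL, the shell performs command and variable substitution while generating the file; with <<'EOL' everything is written literally. A minimal sketch:

# The quoted delimiter stops `hostname ...` and $ipaddress from expanding now:
cat <<'EOL' > Vagrantfile
ipaddress=`hostname -I | awk '{print $2}'`
echo "*** IP address is $ipaddress ***"
EOL

If some variables should still expand at generation time, keep the delimiter unquoted and escape only the parts meant for the guest, i.e. \` and \$.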

Are there any caveats in using shopt -s autocd?

Posted: 26 Apr 2021 09:17 AM PDT

I have recently discovered the feature shopt -s autocd:

autocd  If set, a command name that is the name of a directory is
        executed as if it were the argument to the cd command. This
        option is only used by interactive shells.

At first glance it seems helpful but I am not an expert Bash user and I wonder if it may be a mistake to use it.

Are there any potential dangers to setting shopt -s autocd? I am especially interested in terms of scripting and conflicts with other applications or configurations.
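A quick interactive demo of what the option does (the directory name is arbitrary):

shopt -s autocd
mkdir -p /tmp/autocd-demo
/tmp/autocd-demo      # runs as if you had typed: cd /tmp/autocd-demo

Per the excerpt above, the option only affects interactive shells, so scripts you run are unaffected.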

HP-UX 11.11 - Incoming connection problem

Posted: 26 Apr 2021 07:37 AM PDT

I have a machine with a fresh install of HP-UX 11.11. All seems to work well, but there is one problem: while I have no problem pinging my Ubuntu 20.04 machine, accessing the internet etc. from the HP-UX machine, I cannot ping the HP-UX machine itself, and therefore I cannot ssh, ftp and so on. Could anyone suggest what I could do to make my HP-UX machine reachable on the network? (I have googled and found some info about ipfilter, but I cannot find any signs of ipfilter in /etc/rc.config.d.)

HP-UX IP address is 10.0.2.15 - can ping the Linux machine on 192.168.0.102

Linux IP address is 192.168.0.102 - cannot ping the HP-UX machine on 10.0.2.15

lanscan gives lan0 interface only, IP address is 10.0.2.15

I am pinging my Linux machine (192.168.0.102) from the HP-UX machine

ping 192.168.0.102 gives the following: 64 bytes from 192.168.0.102, icmp_sq=0, time=8, ms

ping community.hpe.com gives the following: 64 bytes from 99.86.161.54, icmp_sq=0, time=45, ms

Cannot ping from my Linux (192.168.0.102) machine

ping 10.0.2.15 gives the following:

From 10.244.232.2 icmp_seq=1 Time to live exceeded
From 10.244.232.2 icmp_seq=2 Time to live exceeded
From 10.244.232.2 icmp_seq=3 Time to live exceeded
From 10.244.232.2 icmp_seq=4 Time to live exceeded

ssh dragon@10.0.2.15 gives the following:
ssh: connect to host 10.0.2.15 port 22: No route to host

(sshd is up and running on the HP-UX machine, ps -ef | grep sshd shows that)
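One hedged observation: 10.0.2.15 is the default guest address of QEMU/VirtualBox user-mode NAT, and guests behind that NAT are unreachable from the LAN by design. If this HP-UX instance is in fact a VM (e.g. under QEMU), a host-side port forward is the usual workaround; the invocation below is purely illustrative, and it assumes the Ubuntu machine at 192.168.0.102 is the VM host:

qemu-system-hppa -m 512 -drive file=hpux.img -nic user,hostfwd=tcp::2222-:22   # hypothetical VM start
ssh -p 2222 dragon@192.168.0.102    # reach the guest through the host's LAN address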

Cannot boot (any more) from MBR SSD drive even if legacy BIOS is enabled

Posted: 26 Apr 2021 08:40 AM PDT

The situation is simple to describe in fact:

  • An MBR-partitioned Kingston SSD of 1TB containing Windows 10 and Kubuntu was moved to a new (HP) laptop: in order to boot there, legacy support had to be enabled; after that, it worked fine. It stopped working, though, after trying to install a new OS (Fedora: its installer for some reason seems to have tried to install in UEFI mode), which resulted in losing grub. I was not able to restore grub on the Kingston SSD inside the HP laptop.

  • Trying to go back to the old working setting, I've moved the SSD back to the old laptop (a Sony with an old BIOS, no UEFI) and there I have installed Fedora and Solus (both KDE) besides the previous Windows 10 and Kubuntu (because I'm looking for a long-term KDE Linux and I need to test a few). I have also restored grub onto the Kubuntu partition as it was initially. All goes well on the Sony with this 4-OS setting, no problem.

  • Moving the SSD to the HP laptop again, it isn't seen, as if legacy support were not enabled or the SSD were not connected.

  • The SSD has no physical connection problems on the HP: booting from a live USB the SSD partitions are seen and accessible. (Other physical problems with the SSD are to be excluded: it works fine on the Sony laptop).

  • The legacy support setting on the HP is enabled and active: putting an older HDD/MBR drive (Linux Mint Xfce + Windows 10) in the HP, it boots and runs just fine.


Could it be a problem that the SSD now has more OS-es (4) than when the setting "legacy support enabled and Secure Boot disabled" worked (2)?

I am even ready to change the partition table to GPT if I was sure it would fix it. But the fact that an old MBR HDD drive boots fine should indicate that I don't need to.

What I mainly want would be to go back to the situation where that setting ("legacy support enabled and Secure Boot disabled") was working with the MBR table on the SSD just as it does with the old HDD that I've tested with.

Given that the MBR SSD works fine (grub and all) on the Sony laptop (so, everything is installed in proper legacy mode on that drive), and an old MBR HDD works well on the HP too (so, legacy support is in place), what could now be the difference between the old HDD and the SSD, given that one is seen and one is not?


Reply to comment asking for details on boot error of SSD on HP; it looks like this:

[screenshot]


Although now I think grub is fine on the SSD, I ran boot-repair again from a live USB on the HP, as if to restore it to the SSD. I got errors (pasted on the Ubuntu pastebin), although not as severe as I thought: moving the SSD to the Sony, Windows 10 was not available, but Kubuntu and the other Linuxes were. Booting into Kubuntu (where grub is installed), I ran Grub Customizer and fixed the grub list.

how to add a icon in dmenu file launcher

Posted: 26 Apr 2021 09:27 AM PDT

How do I add an icon on the left-hand side of each file/directory with printf or echo in this script? If anyone could help or point me in the right direction, it would be greatly appreciated.

#!/bin/bash
while true; do
    open=$(ls -1a --group-directories-first --file-type | dmenu -fn Symbola -c -g 1 -l 50 -p 'Navigate:' "$@")
    if [[ -d "$open" ]]; then
        cd "$open"
    else
        if [[ "$open" != "" ]]; then
            xdg-open "$open"
        fi
        exit 0
    fi
done
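One approach, sketched under the assumption that the dmenu font can render the chosen glyphs (they are placeholders; any icon character works): decorate each ls entry before piping it to dmenu, then strip the decoration from the selection so cd/xdg-open still receive the real name:

#!/bin/bash
while true; do
    choice=$(ls -1a --group-directories-first --file-type \
        | awk '/\/$/ { print "📁 " $0; next } { print "📄 " $0 }' \
        | dmenu -fn Symbola -c -g 1 -l 50 -p 'Navigate:' "$@")
    [ -z "$choice" ] && exit 0
    open=${choice#* }              # remove "icon + space" (shortest-prefix strip)
    if [[ -d "$open" ]]; then
        cd "$open" || exit 1
    else
        xdg-open "$open"
        exit 0
    fi
done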

Output from ls -A is treated as error output, why?

Posted: 26 Apr 2021 09:59 AM PDT

I have a script I run regularly using cron. I would like to get notified by email when these scripts fail. I do not wish to be notified every time they run and produce any output at all.

As such, I am using the script Cronic to run my jobs inside cron, which should mean only error output gets sent, and not just any output.

However, in one script I have a command like this:

if [ "$(ls -A ${local_backup_location}/nextcloud-data/)" ]; then    # save space by removing diffs older than 6 months    rdiff-backup --remove-older-than 6M --force ${local_backup_location}/nextcloud-data/ || echo "[$(date "+%Y-%m-%d %T")] No existing nextcloud data backup"  fi  

The ls -A ${local_backup_location}/nextcloud-data/ is intended to test if a directory is empty. My problem is that this command seems to result in output which is recognized as error output by Cronic. Cronic defines an error as any non-trace error output or a non-zero result code. For example:

Cronic detected failure or error output for the command:
/usr/local/sbin/run_backup

RESULT CODE: 0

ERROR OUTPUT:
appdata_ocgcv9nemegb
files_external
flow.log
flow.log.1
__groupfolders
.htaccess
index.html
nextcloudadmin
nextcloud-db.bak
nextcloud.log
nextcloud.log.1
.ocdata
rdiff-backup-data
Test_User
updater.log
updater-ocgcv9nemegb ]
custom
gitea-db.sql
log ]

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   365    0     0  100   365      0    302  0:00:01  0:00:01 --:--:--   303
100   365    0     0  100   365      0    165  0:00:02  0:00:02 --:--:--   165
100   365    0     0  100   365      0    113  0:00:03  0:00:03 --:--:--   113
100   365    0     0  100   365      0     86  0:00:04  0:00:04 --:--:--    86
100   365    0     0  100   365      0     70  0:00:05  0:00:05 --:--:--    70
100   365    0     0  100   365      0     58  0:00:06  0:00:06 --:--:--     0
100   365    0     0  100   365      0     50  0:00:07  0:00:07 --:--:--     0
100   365    0     0  100   365      0     44  0:00:08  0:00:08 --:--:--     0
100   365    0     0  100   365      0     39  0:00:09  0:00:09 --:--:--     0
100   365    0     0  100   365      0     37  0:00:09  0:00:09 --:--:--     0
100 10.4M    0 10.4M  100   365  1016k     34  0:00:10  0:00:10 --:--:-- 2493k
100 11.6M    0 11.6M  100   365  1128k     34  0:00:10  0:00:10 --:--:-- 3547k

STANDARD OUTPUT:
Maintenance mode enabled
Deleting increment at time:
<snip>

So why does the command ls -A ${local_backup_location}/nextcloud-data/ produce error output in this case, and how can I prevent this? An alternative robust method to test if a directory is empty would be acceptable, but I would also like an explanation of why the command seems to produce error output.

EDIT: Adding Cronic stdout with set -ex

Some commenters have requested the whole script, which is very long; however, Cronic reports the actual stdout of the script, and I use set -ex at the top of the script. The error output appears immediately after the invocation of ls -A /mnt/reos-storage-2/backups/nextcloud-data/, which is why I believe the error output to be the result of this command.

+ rdiff-backup --ssh-no-compression /var/www/nextcloud /mnt/reos-storage-2/backups/nextcloud/
+ ls -A /mnt/reos-storage-2/backups/nextcloud-data/
+ [ 67cf481e-62a3-1039-8bf2-05805d214bca <removed> appdata_ocgcv9nemegb <removed> <removed> <removed> <removed> files_external flow.log flow.log.1 __groupfolders .htaccess index.html <removed> <removed> nextcloudadmin nextcloud-db.bak nextcloud.log nextcloud.log.1 .ocdata <removed> <removed> rdiff-backup-data <removed> Test_User <removed> updater.log updater-ocgcv9nemegb ]
+ rdiff-backup --remove-older-than 6M --force /mnt/reos-storage-2/backups/nextcloud-data/
+ date +%Y-%m-%d %T
+ echo [2021-04-21 03:23:38] Starting nextcloud data backup
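For what it's worth, set -x writes its trace to stderr, and the trace of the [ ... ] test embeds the entire command-substituted ls listing (exactly the "+ [ 67cf481e... ]" line above), which is plausibly what Cronic flags as error output. A sketch of a quieter test:

{ set +x; } 2>/dev/null     # pause tracing; the redirect hides the "+ set +x" line itself
if [ -n "$(ls -A "${local_backup_location}/nextcloud-data/")" ]; then
    set -x                  # resume tracing; the listing never reached stderr
    rdiff-backup --remove-older-than 6M --force "${local_backup_location}/nextcloud-data/"
else
    set -x
fi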

DFS-Link failing when mounting DFS share via cifs

Posted: 26 Apr 2021 08:53 AM PDT

I'm trying to mount a DFS share via cifs. The share is built up like this:

\\mydomain.local\Files is the DFS share.
I can successfully mount this share as follows:

# mount -t cifs //mydomain.local/Files ~/fileserver -o username=myuser,domain=mydomain.local,password=hunter2  

After this I can traverse the directories in ~/fileserver as I'd expect.

# ls ~/fileserver
folder1 folder2

When I try to cd into folder1 however, I get an error:

# cd folder1
cd: folder1: No such file or directory

It takes a second or two before the error appears.
I think this is because folder1 is a DFS link to another fileserver; it links to: \\fileserver2.mydomain.local\Fileshare$\somedirectory\folder1

Now I've looked at dmesg right after this:

# dmesg
CIFS: Attempting to mount //fileserver2.mydomain.local/Fileshare/somedirectory/folder1
No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount.
FS-Cache: Duplicate cookie detected
FS-Cache: O-cookie c=0000000088cf85cb [p=00000000a52bce0c fl=222 nc=0 na=1]
FS-Cache: O-cookie d=00000000ff7a58d3 n=000000005109413d
FS-Cache: O-key=[5] '46696c6573'
FS-Cache: N-cookie c=00000000c39f9d7a [p=00000000a52bce0c fl=2 nc=0 na=1]
FS-Cache: N-cookie d=00000000ff7a58d3 n=00000000930f66cf
FS-Cache: N-key=[5] '46696c6573'
No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount.
FS-Cache: Duplicate cookie detected
FS-Cache: O-cookie c=0000000088cf85cb [p=00000000a52bce0c fl=222 nc=0 na=1]
FS-Cache: O-cookie d=00000000ff7a58d3 n=000000005109413d
FS-Cache: O-key=[5] '46696c6573'
FS-Cache: N-cookie c=000000007c6a3385 [p=00000000a52bce0c fl=2 nc=0 na=1]
FS-Cache: N-cookie d=00000000ff7a58d3 n=00000000f006535b
FS-Cache: N-key=[5] '46696c6573'
No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount.
CIFS VFS: \\fileserver2.mydomain.local cannot query dirs between root and final path, enabling CIFS_MOUNT_USE_PREFIX_PATH
CIFS VFS: Autodisabling the use of server inode numbers on new server.
CIFS VFS: The server doesn't seem to support them properly or the files might be on different servers (DFS).
CIFS VFS: Hardlinks will not be recognized on this mount. Consider mounting with the "noserverino" option to silence this message.
CIFS VFS: cifs_read_super: get root inode failed

I believe the "cannot query dirs between root and final path" message points to the actual problem, as I don't have permission to directly mount either the share Fileshare$ or somedirectory, but only folder1. I could also directly mount this share on fileserver2, but since the DFS has many links to other servers, I'd have to mount a whole lot of stuff.

I'm in the lucky position of being able to try the mount with an elevated account that can access both Fileshare$ and somedirectory, and when I mount it with that user instead of "myuser", I can access folder1:

# mount -t cifs //mydomain.local/Files ~/fileserver -o username=adminuser,domain=mydomain.local,password=hunter2
# ls ~/fileserver/folder1
file1 file2 file3

But I can't use this elevated account for day to day work - also, I'm not in a position to change the permissions on the DFS share or the fileserver.

The interesting part is that smbclient can do the traversal with myuser:

# smbclient '\\mydomain.local\Files' -U 'myuser@mydomain.local'
# smb: \> ls folder1
.  ..  file1  file2  file3

I tried a lot of different options to the mount (mostly in desperation):

vers=1.0
vers=3.0
noserverino
sec=ntlmv2
sec=ntlmssp

Has anybody got any idea what else I could try?

The DFS share is on a Windows server, by the way.
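One hedged workaround to try: the dmesg trace shows the kernel enabling CIFS_MOUNT_USE_PREFIX_PATH and then failing on the root inode, so it may be worth bypassing the DFS referral and mounting the link target's deep path directly (the cifs device string accepts a subdirectory after the share name), with an explicit dialect:

mount -t cifs '//fileserver2.mydomain.local/Fileshare$/somedirectory/folder1' ~/folder1 \
    -o username=myuser,domain=mydomain.local,vers=3.0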

radeon failed vce resume after upgrade to newest Linux kernel

Posted: 26 Apr 2021 08:07 AM PDT

I upgraded my Debian to the latest version on kernel 3.16.0-4-amd64. The update went fine. After that I decided to upgrade the kernel to the latest one supported by Debian 10, 4.19.0-5-amd64. After reboot my X server didn't come up, and in the logs during system start I see an error like: radeon 0000:01:00.0 failed VCE resume (-110)

Laptop model: Samsung 300E5V/300E4EV/270E5EV/270E4EV/2470EV/2470EE

After system start I end up in the command-line interface. When I try to execute startx, I see the same error about radeon and a message from the X server:

[screenshot]

$ lspci | grep VGA
radeon failed VCE resume (-110)
VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)

Here are the lspci log and the log from the Xorg.0.log file.

Could you please help me fix this issue and get my desktop up again?
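A hedged avenue to try: kernels of this era expose module parameters for radeon's video engines, so if VCE initialization is what blocks the driver, disabling it at boot may bring the desktop back. Check modinfo first, since the parameter set varies by kernel:

modinfo radeon | grep -i vce   # confirm this kernel's radeon module has a vce parameter
# if present, append radeon.vce=0 to GRUB_CMDLINE_LINUX_DEFAULT and run update-grub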

Scheduled folder backup

Posted: 26 Apr 2021 08:31 AM PDT

I'm looking for how to automatically back up a user's home directory in CentOS 7 to a remote host, NAS, or just to ~/.snapshot. In some Linux setups, I have seen a .snapshot folder in the user's home directory (~/.snapshot/) that holds hourly, nightly, and weekly backups of their home directory (i.e. ~/.snapshot/weekly1 for a copy of what was in the user's home directory one week ago).

The /home/username/.snapshot/ directory would be read-only by the user. It's not a backup for the purpose of guarding against hardware failure. It's just nice to have the ability to recover a file from yesterday or this morning if you don't like the changes that have been made.

I have seen several related posts on stack overflow, but so far, I haven't seen a guide that explains the complete workflow.

This is what I know so far:

  1. Use rsync to copy the contents of a given folder to the remote host, NAS, or (~/.snapshot/hourly0)
  2. Create a shell script to execute the rsync command

#!/bin/bash
sudo rsync -av --progress --delete --log-file=/home/username/$(date +%Y%m%d)_rsync.log --exclude "/home/username/.snapshot" /home/username/ /home/username/.snapshot/hourly1

  3. Change the permissions on the script to make it executable

sudo chmod +x /home/username/myscript.sh

  4. Use crontab to schedule the rsync command at the desired backup interval

  5. Somehow move hourly0 to hourly1 before running the scheduled hourly rsync

  6. Delete the oldest backup once rsync completes successfully

Are there any guides that cover how to do this? I don't understand how to automatically rename the folders as time goes on (i.e. weekly1 to weekly2), or how to delete "week10" if I decide to only keep weeks up to 9. Is this another cron job?
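For the rotation, the classic scheme (often credited to Mike Rubel's "snapshot-style backups" write-up) drops the oldest directory, shifts the rest up by one, and uses rsync's --link-dest to hard-link unchanged files; a sketch with a hypothetical retention of four hourlies:

#!/bin/bash
snap=/home/username/.snapshot
rm -rf "$snap/hourly3"                       # delete the oldest snapshot
for i in 2 1 0; do                           # shift the remaining ones up
    [ -d "$snap/hourly$i" ] && mv "$snap/hourly$i" "$snap/hourly$((i+1))"
done
rsync -a --delete --exclude ".snapshot" \
      --link-dest="$snap/hourly1" /home/username/ "$snap/hourly0/"

Cron runs this hourly; an analogous weekly script rotates weekly1..weeklyN, which covers both the renaming and the deletion of the oldest week without extra tooling.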

Update: After some more Googling, I've discovered that NetApp creates the snapshot folders. I just don't currently have a NetApp NAS. https://library.netapp.com/ecmdocs/ECMP1635994/html/GUID-FB79BB68-B88D-4212-A401-9694296BECCA.html

How to run jhbuild as root

Posted: 26 Apr 2021 07:51 AM PDT

I have installed jhbuild and set the PATH variable to $PATH:~/.local/bin. Now when I run the jhbuild command I get the error "You should not use jhbuild as root user", and when I change to a non-root user and set the PATH value as above, replacing ~ with /root, I get the error "jhbuild: command not found". I am using Kali Linux, so the default user is the root user.
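A sketch of one way out, assuming jhbuild was installed from the GNOME git tree (clone URL real, username illustrative): give an unprivileged user its own copy instead of pointing PATH at /root, which a normal user typically cannot read:

useradd -m builder
su - builder
git clone https://gitlab.gnome.org/GNOME/jhbuild.git
cd jhbuild
./autogen.sh --simple-install && make && make install   # installs into ~/.local/bin
export PATH="$PATH:$HOME/.local/bin"
jhbuild --help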

can't generate key via dnssec-keygen

Posted: 26 Apr 2021 09:14 AM PDT

$ dnssec-keygen -a HMAC-MD5 -b 512 -n HOST {host}

The above results in a blank line and endless waiting.

$ dnssec-keygen -T DNSKEY -a HMAC-MD5 -b 512 -n HOST {host}

the same

entropy:

$ cat /proc/sys/kernel/random/entropy_avail
890

P.S. I was trying to make some noise by running find /, but that brought no result.
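A few workarounds, sketched: older BIND releases of dnssec-keygen accept -r to read from a different randomness source, an entropy daemon keeps the pool topped up, and newer BIND ships tsig-keygen, which generates TSIG secrets without blocking ({host} kept as the placeholder from the question):

dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST {host}   # if your version still has -r
apt-get install haveged                                           # entropy daemon (package name varies by distro)
tsig-keygen -a hmac-md5 {host}                                    # newer BIND's non-blocking tool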

LFTP exclude file extensions

Posted: 26 Apr 2021 09:04 AM PDT

I am trying to mirror directories with lftp, but I don't want to download filetypes that are notoriously large, like .mp4 and .swf. However, I am having trouble with the regex, and seemingly with the exclude-glob too. Both of them download all files.

What I tried:

/usr/local/bin/lftp -u user,pass -e 'mirror -x ^(\.mp4|\.swf)$ $src $dest' ftp.host

&&

/usr/local/bin/lftp -u user,pass -e 'mirror -X swf $src $dest' ftp.host
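For reference, mirror's -x takes a regular expression matched against the file path (so anchoring the whole name with ^ defeats it), and -X takes a full glob pattern rather than a bare extension. A sketch of both forms, with placeholder remote/local paths:

/usr/local/bin/lftp -u user,pass -e 'mirror -x "\.(mp4|swf)$" /remote/dir /local/dir; quit' ftp.host
/usr/local/bin/lftp -u user,pass -e 'mirror -X "*.mp4" -X "*.swf" /remote/dir /local/dir; quit' ftp.host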

Linux Desktop Access via Web Browser

Posted: 26 Apr 2021 07:40 AM PDT

This should be pretty simple, in my naive opinion of course...

I have a tower that I am going to run a media server off of, along with some other functions but that's the main (re)purpose of the parts I had around.

I'd like to be able to access the Debian (or other Linux) desktop environment via a web browser, like you can do for a printer or wireless router etc. The goal being that from any device on my wireless network I can just type in the ip and login as if I were in front of the box itself. This way I can download stuff, manage sharedrives, etc from any network device but all the actions happen on my media server.

Is there anything out there like this besides this VNC thing I've read about?

Is there a better way to accomplish what I'm trying to do?
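One browser-based route is x11vnc plus noVNC (websockify serving an HTML5 VNC client); a sketch assuming Debian package names, a desktop session already running on display :0, and placeholder port/password:

sudo apt-get install x11vnc novnc
x11vnc -display :0 -forever -passwd hunter2 &
websockify --web=/usr/share/novnc 6080 localhost:5900 &
# then browse to http://<server-ip>:6080/vnc.html from any device on the LAN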

Looking to grep or egrep year ranges from 1965-1996

Posted: 26 Apr 2021 09:04 AM PDT

I have a grep that works for some of the dates, but I'm having trouble getting my brain to make it fully functional.

grep 19[6-9][5-6]$ filename  

It catches a few correctly, but I'm looking to grab all years between 1965 and 1996.

Here is the current solution; I'm really looking for a one-liner, but here's what I've gotten so far:

grep 196[5-9]$ filename
grep 197[0-9]$ filename
grep 198[0-9]$ filename
grep 199[0-6]$ filename

Is there anything better and shorter, if possible?
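A one-liner sketch using alternation (-E, i.e. egrep, enables extended regex): 1965-1969 is 6[5-9], 1970-1989 collapses to [78][0-9], and 1990-1996 is 9[0-6]:

grep -E '19(6[5-9]|[78][0-9]|9[0-6])$' filename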

Pipe find into grep -v

Posted: 26 Apr 2021 09:04 AM PDT

I'm trying to find all files that are of a certain type and do not contain a certain string. I am trying to go about it by piping find to grep -v

example:

find -type f -name '*.java' | xargs grep -v "something something"  

This does not seem to work; it seems to just return all the files that the find command found. What I am trying to do is basically find all .java files that match a certain filename (e.g. ending with 'Pb', as in SessionPb.java) and that do not have an 'extends SomethingSomething' inside them.

My suspicion is that I'm doing it wrong. So how should the command look like instead?
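For reference, grep -v inverts line matching, so every file still produces output (all its non-matching lines). To list files that contain no match at all, grep's -L (--files-without-match) does exactly that; a minimal sketch:

find . -type f -name '*Pb.java' -print0 | xargs -0 grep -L "extends SomethingSomething"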

What is yum equivalent of 'apt-get update'?

Posted: 26 Apr 2021 09:11 AM PDT

Debian's apt-get update fetches and updates the package index. Because I'm used to this way of doing things, I was surprised to find that yum update does all that and also upgrades the system. This made me curious about how to update the package index without installing anything.
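A minimal sketch of the metadata-only operations:

yum makecache       # download/refresh the repository metadata only
yum check-update    # list available updates without installing (exit code 100 if any exist)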

What if 'kill -9' does not work?

Posted: 26 Apr 2021 09:06 AM PDT

I have a process I can't kill with kill -9 <pid>. What's the problem in such a case, especially since I am the owner of that process? I thought nothing could evade that kill option.
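For reference, the two classic unkillable cases are uninterruptible sleep (state D: blocked in the kernel, the signal is acted on only when the syscall returns) and zombies (state Z: already dead, only the parent's wait() clears the entry). A quick check, with <pid> as a placeholder:

ps -o pid,stat,wchan:30,cmd -p <pid>   # STAT D = uninterruptible sleep, Z = zombie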
