Monday, June 14, 2021

Recent Questions - Unix & Linux Stack Exchange


Dynamic swap file with systemd swap unit

Posted: 14 Jun 2021 10:11 AM PDT

On each boot a temporary partition is mounted at /mnt/tmp. The temporary partition varies in size.

I need to set up a swap file on it that is half the size of the partition.

Is this possible with a systemd swap unit?

I think I can do this with a systemd service unit that depends on the temporary partition being mounted and runs a dedicated script to create and activate the swap file. But a swap unit cannot execute scripts, so I wonder whether I am on the right track.
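
A swap unit on its own can only activate an existing swap file, so one hedged sketch of the service-unit idea is below. The helper script name, the mount unit name (mnt-tmp.mount for /mnt/tmp) and the swap file location are assumptions, not taken from the original post:

# /usr/local/bin/mk-dynamic-swap  (hypothetical helper script)
#!/bin/sh
set -e
size_kb=$(df --output=size /mnt/tmp | tail -n 1)        # filesystem size in 1K blocks
dd if=/dev/zero of=/mnt/tmp/swapfile bs=1024 count=$((size_kb / 2))
chmod 600 /mnt/tmp/swapfile
mkswap /mnt/tmp/swapfile
swapon /mnt/tmp/swapfile

# /etc/systemd/system/dynamic-swap.service
[Unit]
Description=Create and enable a swap file sized to half of /mnt/tmp
Requires=mnt-tmp.mount
After=mnt-tmp.mount

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/mk-dynamic-swap
ExecStop=/sbin/swapoff /mnt/tmp/swapfile

[Install]
WantedBy=multi-user.target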

Why is anbox not found in the Application Finder of Ubuntu 20.04 with XFCE4?

Posted: 14 Jun 2021 10:02 AM PDT

My goal is to install anbox in Ubuntu 20.04 and run it through PuTTY (with X11 forwarding, can it be done?) or Windows Remote Desktop Connection.

My actual result: Installation successful. However I can't find anbox using the application finder.

My expected result: I can find anbox using the application finder.

What I did

sudo snap install --devmode --beta anbox  

Output of snap list

root@ubuntu-1cpu-1gb-sg-sin1:~# snap list
Name   Version    Rev    Tracking       Publisher   Notes
anbox  4-56c25f1  186    latest/beta    morphis     devmode
core   16-2.50.1  11167  latest/stable  canonical✓  core
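
One hedged check worth doing (an assumption, not from the original post): the XFCE Application Finder only lists programs that ship a .desktop entry, and snapd exports those entries under /var/lib/snapd/desktop/applications, so this shows whether anbox installed one at all:

ls /var/lib/snapd/desktop/applications/ | grep -i anbox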

gpg: can't open 'file.pgp': No such file or directory

Posted: 14 Jun 2021 09:57 AM PDT

I was decrypting a file.pgp file, and the file is present in the directory. The PowerShell script is able to read the name of the file but not able to open it. It shows the error below:

gpg.exe : gpg: can't open 'file.pgp': No such file or directory

The code works fine on some runs, and on other runs it shows the above error.

Can anyone let me know what the issue could be, such that the same decryption code works fine on some runs but fails on others?
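
A minimal sketch of the kind of guard that often helps with intermittent "No such file or directory" errors, assuming the cause is either a file that is still being written or a working directory that differs between runs (shell syntax here to match the rest of the page; the same idea carries over to PowerShell, and the path is hypothetical):

FILE="/full/path/to/file.pgp"                # hypothetical absolute path
while [ ! -s "$FILE" ]; do sleep 1; done     # wait until the file exists and is non-empty
gpg --output file.txt --decrypt "$FILE"      # pass the absolute path so the working directory no longer matters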

Redirecting output from within disk operations does not work

Posted: 14 Jun 2021 09:57 AM PDT

I am not able to successfully redirect stdout+stderr on commands that operate on disks. Standard redirection, which always works elsewhere, is somehow not catching the output. Two practical examples:

Example 1:

# wipefs --all --force /dev/sda >>/var/log/custom.log 2>&1
[   20.169018 ]  sda: sda1

Example 2:

# mount --verbose --options defaults --types ext4 /dev/sda1 /path/is/here >>/var/log/custom.log 2>&1
[   30.947410 ] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)

Interestingly, this only happens when touching disks somehow. All other redirects within the script work as expected.

Any ideas?
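
A hedged explanation, assuming those bracketed, timestamped lines are what is escaping the redirect: they look like kernel log messages, which the kernel prints straight to the console rather than to the command's stdout or stderr, so shell redirection never sees them. They can be collected separately, for instance:

dmesg --follow >>/var/log/kernel.log &                       # or: journalctl -kf
wipefs --all --force /dev/sda >>/var/log/custom.log 2>&1     # the command's own output still goes to custom.log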

How to set Open File Dialog path explicitly?

Posted: 14 Jun 2021 09:47 AM PDT

Description

I'd like to point to a folder so that any application will start its "File Open" dialog in it. How can I do that?

Rationale

I naturally use many applications while working on a project, like FreeCAD, LibreCAD, VLC, SimpleScan, etc. It's frustrating to navigate to my work folder in every single one of those applications. If I could set such a path, any application would start that dialog in my work folder, so I could easily handle my files.
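
There is no single setting that every toolkit honours, so here is a minimal sketch of one workaround, assuming the applications fall back to the process's current working directory for their file dialogs (many GTK/Qt programs instead remember the last-used folder, so this only helps some of them); the wrapper name and work folder are hypothetical:

#!/bin/sh
# open-in-workdir: launch any application with the work folder as its current directory
WORKDIR="$HOME/projects/current"    # assumed work folder, adjust as needed
cd "$WORKDIR" && exec "$@"

Used as: open-in-workdir freecad, open-in-workdir vlc, and so on.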

bash I/O redirection - how to append to stderr

Posted: 14 Jun 2021 09:12 AM PDT

I have a script that loops over some big collection of data and performs some lengthy operations. Then I need to sort | uniq -c its output. So to show that it's alive, I print a dot every N items on stderr (a very primitive pseudo progress bar), so it looks pretty much like this:

for i in {1..100}; do
    [[ $(( (i+=1) % 10)) -eq 0 ]] && echo -n "." >&2
    shuf -i 1-10 -n1
    sleep 0.1
done | sort | uniq -c

and the output:

..........      9 1
     10 10
      8 2
     14 3
     13 4
      9 5
     11 6
      8 7
      8 8
     10 9

the "progress bar" messes up the output a little - so i was wondering:

  • is there an easy way to add a nweline to that stderr before flushing that stdout? (probably echo >&2 is all I need)
  • or remove it ?

Of course, in reality I don't know how many items there are (at least not out of the box). So I was wondering if this can be achieved by some stream redirection.
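
A minimal sketch of the first option, assuming all that is needed is a clean line break on stderr once the loop finishes and before the sorted output appears:

{
    for i in {1..100}; do
        (( i % 10 == 0 )) && echo -n "." >&2
        shuf -i 1-10 -n1
        sleep 0.1
    done
    echo >&2    # terminate the dot "progress bar" with a newline on stderr
} | sort | uniq -c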

What does `ln /path/to/file -i` do in the context of a setuid'ed script?

Posted: 14 Jun 2021 08:31 AM PDT

I was making a bash script with the setuid permission on, but it didn't work. So I found my solution here:

Now my script works fine and all (I rewrote it in cpp).

To satisfy my curiosity as to why a pure bash shell script didn't work, I read this link: http://www.faqs.org/faqs/unix-faq/faq/part4/section-7.html (referenced by this answer: https://unix.stackexchange.com/a/2910). At that site, I came across this command sequence:

$ echo \#\!\/bin\/sh > /etc/setuid_script
$ chmod 4755 /etc/setuid_script
$ cd /tmp
$ ln /etc/setuid_script -i
$ PATH=.
$ -i

What I didn't understand is the fourth line, ln /etc/setuid_script -i.

The question is: what does that command do?

I've read in the ln manual that -i is just the "interactive" flag (asking whether you want to overwrite an existing file or not). So why does ln /etc/setuid_script -i, followed by PATH=. and -i, make my shell execute /bin/sh -i?

How to proceed to enlarge /boot

Posted: 14 Jun 2021 07:45 AM PDT

I know how to use gparted on a livecd to resize partitions, but here it's a bit more complex and I don't want to screw it up. I have a /boot which is ridiculously small (it can hold only one kernel at a time, so upgrading is very contrived). Here's the setup:

$ sudo fdisk -l /dev/sda
Disk /dev/sda: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disklabel type: dos
Disk identifier: 0x000f146d
Device     Boot  Start        End   Sectors   Size Id Type
/dev/sda1  *      2048     499711    497664   243M 83 Linux
/dev/sda2       501758 1000214527 999712770 476.7G  5 Extended
/dev/sda5       501760 1000214527 999712768 476.7G 83 Linux

$ lsblk -f
NAME                     FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1                   ext2        1.0            25830e25-c61f-466b-9239-ced150ccf577       58M    70% /boot
├─sda2
└─sda5                   crypto_LUKS 1              3102b8d0-a320-49db-b764-1a23c495ab20
  └─sda5_crypt           LVM2_member LVM2 001       WJjfMf-xUhh-2iob-ow9v-RWfN-TG9L-cc3GOz
    ├─kubuntu--vg-root   ext4        1.0            003987e7-8317-4cd2-b47b-561378ea0245       52G    84% /
    └─kubuntu--vg-swap_1 swap        1              3e69be10-8e23-4460-a16f-74ffef8fe290                  [SWAP]

Seems to me, before I can enlarge /dev/sda1, I need to shrink /dev/sda2 (or is that /dev/sda5 ?), move it forward and then enlarge /dev/sda1. But since /dev/sda2/5 is an encrypted partition holding the system, maybe I need to do extra things to /dev/sda5(_crypt) ?!?

As you can tell, I don't have a clear understanding of the relationship between sda2, sda5 and sda5_crypt. Note that those are listed here as seen from the booted system, not from the liveCD.
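
For reference, a few read-only commands that make the sda2/sda5/sda5_crypt layering visible before anything is resized (a hedged inspection sketch; nothing here modifies the disk):

lsblk -o NAME,TYPE,SIZE,FSTYPE /dev/sda    # sda2 is the extended container, sda5 the logical partition inside it
sudo cryptsetup status sda5_crypt          # sda5 holds the LUKS container that is opened as sda5_crypt
sudo pvs && sudo vgs && sudo lvs           # sda5_crypt is the LVM physical volume carrying the root and swap LVs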

grep (or sed?): skip a specified number of lines before looking for matches

Posted: 14 Jun 2021 07:21 AM PDT

I'm working with huge log files that accumulate over days that I can't truncate/rotate but need to parse new entries hourly.

I've been using grep to grab entries with a specific string, then counting how many I get and tossing the first N, where N is the number of entries I've already ingested on all prior loops. But of course this means inefficiently grepping the whole file every loop. I'm relatively Unix-naive, but I feel like there's a more efficient way to do this. I don't think tail would work because I won't know how many new lines have been written since the last parsing. This post talks of skipping, but uses a search string to determine how many lines to skip, whereas I'd be looking to supply the skip count as an argument. This one speaks to skipping a specified number of characters on each line, but I'd be looking to skip a specified number of lines.

Any suggestions?
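
A minimal sketch of the skip-by-line-count idea, assuming the number of lines already processed is kept in a small state file between hourly runs (the file names are hypothetical):

STATE=/var/tmp/lastcount                 # hypothetical state file holding the previous line count
LOG=/var/log/huge.log                    # hypothetical log file
last=$(cat "$STATE" 2>/dev/null || echo 0)

tail -n +"$((last + 1))" "$LOG" | grep 'SPECIFIC STRING'   # scan only the lines after the first $last

wc -l < "$LOG" > "$STATE"                # remember the new total for the next run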

Can't read or mount MicroSD card (ext4) on Asus X205T (Q4OS)

Posted: 14 Jun 2021 08:38 AM PDT

I'm having a hard time with a MicroSD card on an Asus X205T. I can see the device with lsblk, but it can't be mounted or read; the filesystem isn't detected.

Distro: Q4OS. Filesystem: ext4.

  1. It is not a hardware issue because the MicroSD card worked on this same laptop with Windows earlier this week.
  2. It is not a MicroSD card issue because I tried several MicroSD cards and they all work on another PC.
  3. This fix from wiki.debian.org didn't work out.
  4. When I launch GParted the error strikes: Input/output error during read on /dev/mmcblk2. Eventually GParted doesn't show the device at all.

Output for sudo mount /dev/mmcblk2 /mnt/sdcard:

mount: /mnt/sdcard: can't read superblock on /dev/mmcblk2.  

Here is lsblk output:

NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
mmcblk2      179:0    0   58G  0 disk
mmcblk1      179:256  0 29.1G  0 disk
├─mmcblk1p1  179:257  0  512M  0 part /boot/efi
├─mmcblk1p2  179:258  0 27.7G  0 part /
└─mmcblk1p3  179:259  0  976M  0 part [SWAP]
mmcblk1boot0 179:512  0    4M  1 disk
mmcblk1boot1 179:768  0    4M  1 disk

Output for udisksctl info -b /dev/mmcblk2:

/org/freedesktop/UDisks2/block_devices/mmcblk2:
  org.freedesktop.UDisks2.Block:
    Configuration:              []
    CryptoBackingDevice:        '/'
    Device:                     /dev/mmcblk2
    DeviceNumber:               45824
    Drive:                      '/org/freedesktop/UDisks2/drives/SA64G_0x2967a02d'
    HintAuto:                   true
    HintIconName:
    HintIgnore:                 false
    HintName:
    HintPartitionable:          true
    HintSymbolicIconName:
    HintSystem:                 false
    Id:
    IdLabel:
    IdType:
    IdUUID:
    IdUsage:
    IdVersion:
    MDRaid:                     '/'
    MDRaidMember:               '/'
    PreferredDevice:            /dev/mmcblk2
    ReadOnly:                   false
    Size:                       62260248576
    Symlinks:                   /dev/disk/by-id/mmc-SA64G_0x2967a02d
                                /dev/disk/by-path/platform-PNP0FFF:00

Output for dmesg | tail:

[ 4713.099537] Buffer I/O error on dev mmcblk2, logical block 0, async page read
[ 4713.099571] ldm_validate_partition_table(): Disk read failed.
[ 4713.106012] Buffer I/O error on dev mmcblk2, logical block 0, async page read
[ 4713.110676] Buffer I/O error on dev mmcblk2, logical block 0, async page read
[ 4713.117095] Buffer I/O error on dev mmcblk2, logical block 0, async page read
[ 4713.123469] Buffer I/O error on dev mmcblk2, logical block 0, async page read
[ 4713.123576] Dev mmcblk2: unable to read RDB block 0
[ 4713.131799] Buffer I/O error on dev mmcblk2, logical block 0, async page read
[ 4713.137274] Buffer I/O error on dev mmcblk2, logical block 0, async page read
[ 4713.149637]  mmcblk2: unable to read partition table

Is something wrong with my Puppy Linux installation?

Posted: 14 Jun 2021 07:11 AM PDT

I've just installed bionicpup64-8.0-uefi.iso on VirtualBox.


Now there's something that's bothering me: the installation option is still there. Please explain. :)

Extract fields and substrings and merge sorted lines

Posted: 14 Jun 2021 07:50 AM PDT

I have a file consisting of 5 tab separated fields (irrelevant fields are empty in this example).

1       2       URL                     email           5
                https://www.a.com/t     a@b.com
                https://www.a.com       a@b.com
                https://www.b.fr        c@hl.com
                https://www.b.fr/s/faq  a@hl.com

Desired output:

domain          email(s)
a.com           a@b.com
b.fr            c@hl.com, a@hl.com

Steps:

  1. Isolate column 3 and 4
awk -F "\t" '{print $3 "\t\t" $4}'   

This yields what is shown in the first block above.

How do I go on from here?

I know how to grep the domain only, but the isolated domains don't help much in achieving the desired output lines.

I am not restricted to awk, it was just the only tool I knew that could isolate fields easily (via the -F flag).
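
Not a definitive answer, but a minimal awk sketch of the remaining steps, assuming the tab-separated layout above (with a header line): strip the scheme, a leading www. and the path from column 3, then collect one comma-separated e-mail list per domain (the input file name is hypothetical):

awk -F '\t' '
NR > 1 && $3 != "" {
    domain = $3
    sub(/^https?:\/\/(www\.)?/, "", domain)   # drop the scheme and a leading "www."
    sub(/\/.*/, "", domain)                   # drop anything after the host name
    if ((domain, $4) in seen) next            # ignore duplicate domain/e-mail pairs
    seen[domain, $4] = 1
    if (domain in emails)
        emails[domain] = emails[domain] ", " $4
    else {
        order[++n] = domain
        emails[domain] = $4
    }
}
END {
    print "domain\temail(s)"
    for (i = 1; i <= n; i++)
        print order[i] "\t" emails[order[i]]
}' file.tsv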

How to continue script commands, functions & variables after chroot?

Posted: 14 Jun 2021 08:32 AM PDT

I have written a bash script that prepares and installs everything during an Arch Linux initial installation. The script works fine and executes everything successfully until it reaches the arch-chroot command, then it stops.

Also, the solutions I found online (like the EOF trick) wouldn't pass functions or variables after chroot.

Here is a demo:

#!/bin/bash

username=test
pause_var=1

pause ()
{
if [ $pause_var -eq 1 ]
then
    read -n 1 -s -r -p "Press any key to continue"
fi
}

arch-chroot /mnt #the script stops after executing this line!!

# some commands after chroot
useradd -m $username
pause

echo $username:123 | chpasswd
pause

# ... more commands below

I googled for a solution but none of the solutions that I found have worked for me. I'm a Linux noob.
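
For what it's worth, a minimal sketch of the usual way around this, based only on the demo above: arch-chroot /mnt with no arguments starts an interactive shell inside the new root and waits for it, which is exactly where the outer script appears to stop. Passing the commands (or a second script) to arch-chroot runs them inside the chroot and then returns control:

# run individual commands inside the chroot
arch-chroot /mnt /bin/bash -c "useradd -m $username; echo $username:123 | chpasswd"

# ...or copy a follow-up script into the new root and run it there, so its
# functions and variables exist inside the chroot (the script name is hypothetical)
cp post-chroot.sh /mnt/root/post-chroot.sh
arch-chroot /mnt /bin/bash /root/post-chroot.sh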

Thank you.

How to configure a GUE receive tunnel in Linux for IPv6

Posted: 14 Jun 2021 08:36 AM PDT

I am trying to configure a GUE tunnel to receive IPv6 packets that contain GUE-encapped IPv4 packets, but I am having trouble de-encapsulating them. The IPv6 packets carry a GUE-encapsulated payload which in turn contains an IPv4 packet. I set up a receive tunnel on my end.

sysctl net.ipv4.conf.all.rp_filter=2
modprobe fou
modprobe fou6
ip -6 fou add port 42428 gue -6
ip addr add $VIP/32 dev ip6tnl0
ip -6 link set ip6tnl0 up

This is what the resulting ip6tnl0 looks like:

4: ip6tnl0@NONE: <NOARP,UP,LOWER_UP> mtu 1452 qdisc noqueue state UNKNOWN group default qlen 1000
    link/tunnel6 :: brd ::
    inet $VIP/32 scope global ip6tnl0
       valid_lft forever preferred_lft forever
    inet6 $LINK_LOCAL/64 scope link
       valid_lft forever preferred_lft forever

On my other machine I can curl the $VIP, which is an IPv4 address, and on my machine I can see the encapped packets through tcpdump:

tcpdump: listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
21:32:09.183750 IP6 (hlim 60, next-header UDP (17) payload length: 72) $IPV6_A.53322 > $IPV6_B.42428: [udp sum ok] UDP, length 64

So when that UDP packet is decapped properly, I would expect it to contain an IPv4 packet matching the source IPv4 of $VIP. But when I run

tcpdump -i any host $VIP -n  

I do not see anything.

I have repeated this exact same setup for IPv4 (IPv4 packet encapped inside IPv4 packet using GUE) for which the setup for the receive tunnel is similar:

sysctl net.ipv4.conf.all.rp_filter=2
modprobe fou
ip fou add port 42428 gue
ip addr add $VIP/32 dev tunl0
ip link set tunl0 up

In which case I can see the decapped packets

root@ipv4-control:~# tcpdump -i any host $VIP -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
23:12:04.749247 IP $VIP.43830 > $VIP.80: Flags [S], seq 2247712115, win 65495, options [mss 65495,sackOK,TS val 2120453320 ecr 0,nop,wscale 7], length 0

$VIP above is a virtual ipv4 address that is serving http traffic.

Any ideas what is wrong with the way my IPv6 receive tunnel is set up?

Does the override.conf file change the actual service file conf?

Posted: 14 Jun 2021 09:51 AM PDT

I have created an override.conf file for systemd-journal-catalog-update.service and placed it in the systemd-journal-catalog-update.service.d/ directory. The purpose is to remove systemd-tmpfiles-setup.service from the systemd-journal-catalog-update.service file.

The file has this in it now:

[Unit]
After=local-fs.target systemd-tmpfiles-setup.service

My override.conf file has this:

[Unit]
After=
After=local-fs.target

However, the systemd-journal-catalog-update.service file does not seem to be changing. Am I misunderstanding how the override.conf file works? I know that I can manually modify the original service file but project circumstances are limiting this as an option. Any assistance/advice you guys can give is greatly appreciated.
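
For what it's worth, a hedged sketch of how to check what is actually in effect: a drop-in never rewrites the original unit file on disk; systemd merges the override with the unit when it loads it, so the original file staying unchanged is expected. After a daemon reload, the merged result can be inspected:

systemctl daemon-reload
systemctl show -p After systemd-journal-catalog-update.service   # the After= value systemd actually uses
systemctl cat systemd-journal-catalog-update.service             # the original unit file followed by its drop-ins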

How do I overlap a bash subshell with a dialog gauge in a bash script?

Posted: 14 Jun 2021 08:29 AM PDT

I might as well have gotten the title wrong, but if the explanation leads to a successful solution then maybe someone can kindly suggest a title.

I'm debugging a very old script. It is an installation script for some local program, and the script runs just fine. It is implemented with a progress bar using dialog --gauge, and each progress stage has its own subshell. At some point the first stage stopped exiting automatically after its code is done.

(
    # Start Programm
    echo -e "XXX"
    echo -e "The Programm is being installed"
    echo -e "Starting the program"
    echo -e "XXX"
    echo -e 50

    #/opt/programm/service/programm start --systemd >/dev/null 2>&1
    ${DIR_LINK}/init.d/programm start
    log "${LOGFILE}" "[INFO]: Programm is running"

    # ----------------------------------------------------------------------
    # Cleanup (delete old files)
    echo -e "XXX"
    echo -e "Programm is being installed\nPlease wait"
    echo -e "Delete old files"
    echo -e "XXX"
    echo -e 92

    while read file; do
            if [ -f ${file} ]; then
                    rm -rf ${file}
                    log "${LOGFILE}" "[INFO]: Deleted, no longer needed: ${file}"
            fi
    done < "${DIR_LINK}/installer/config.delete"

    # ----------------------------------------------------------------------
    # end
    log "${LOGFILE}" "[INFO]: The Installation is completed"

) 2>/tmp/errors | dialog --title "${tmp_dialog_title}" \
                         --gauge 'Start installing' 16 60 0

(
    echo -e "XXX"
    echo -e ""
    echo -e "          The installation is completed."
    echo -e "    The programm should be up and running."
    echo -e ""
    echo -e "    \ZuPress Enter to continue.\Zn"
    echo -e "XXX"
    echo -e 100
) | dialog --title "${tmp_dialog_title}" \
        --gauge "Installation completed" 16 60
read -n1 a

# timestamp for dialog output deletion
cut -d":" -f3- ${LOGFILE} > ${TMP}
dialog --title "$(dialog_title "Logfile: ${LOGFILE}")" \
        --textbox ${TMP} 16 60

The result of this script is successful and the programm is installed as needed. But the problem is that the first subshell doesn't exit by itself unless I press Ctrl+C.

Image: where the subshell gets stuck

To get out of this I press Ctrl+C, which takes me to the next dialog and tells me the installation is completed.

Is there a way to make the first subshell exit after "Delete old files" and the next subshell (dialog) appear automatically?

Not forgetting, this script is run from another bash script which is itself run from yet another bash script. Just extra information that might help solve the case.
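
One hedged guess at the cause, based only on the snippet above: ${DIR_LINK}/init.d/programm start presumably forks a daemon, and that daemon inherits the subshell's stdout, i.e. the pipe into dialog. As long as any process keeps that pipe open, dialog never sees end-of-file, even after the subshell itself finishes. Detaching the daemon's output (as the commented-out line already did) might let the gauge exit on its own:

${DIR_LINK}/init.d/programm start >/dev/null 2>&1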

Thanks in advance to anyone who might be able to help out.

veth interfaces performance problem

Posted: 14 Jun 2021 08:38 AM PDT

On a fast AWS machine (m5.2xlarge), I am creating around 600 veth interfaces, each one having a little server (with socat) running on a port.

I then start sending around 7 kB/second of data per server. When sending to about 500 servers everything goes well, but when I send to around 600 servers, timeouts begin to occur. In my tests, the connection to a server can take more than 3 seconds to be established.

It's not a lot of processing (for such a server) and it's not a lot of data.

Is the Linux veth implementation slow?

I have created a git repo to reproduce the problem. Any help would be highly appreciated.

How to fix ".service: Start request repeated too quickly." on custom service?

Posted: 14 Jun 2021 08:30 AM PDT

I'm learning how to create services with systemd. I get this error:

.service: Start request repeated too quickly.  

I can't start the service any more; it was working yesterday. What am I doing wrong?

(root@Kundrum)-(11:03:19)-(~)
$nano /lib/systemd/system/swatchWATCH.service

[Unit]
Description=Monitor Logfiles and send Mail reports
After=syslog.target network.target

[Service]
Type=simple
ExecStart=/usr/bin/swatch --config-file=/home/kristjan/.swatchrc --input-record-separator="\n \n " --tail-file=/var/log/snort/alert --daemon
Restart=on-failure
StartLimitInterval=3
StartLimitBurst=100

[Install]
WantedBy=multi-user.target

I added StartLimitInterval and StartLimitBurst afterwards while trying to fix it.

My system is Debian 9.8 Stretch with all updates.
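
A hedged sketch of what usually gets things moving again: once the start limit has been hit, the unit's failed state has to be cleared before it will start, and edits to the unit file only take effect after a daemon reload. The service log usually shows why the service kept exiting and re-triggering Restart=on-failure in the first place:

systemctl daemon-reload
systemctl reset-failed swatchWATCH.service
systemctl start swatchWATCH.service
journalctl -u swatchWATCH.service -e        # why did the service exit repeatedly?

Also, as far as I know, in recent systemd versions (including the one shipped with Debian 9) the rate-limit settings are [Unit] options spelled StartLimitIntervalSec= and StartLimitBurst=, so placing them in [Service] may not have the intended effect.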

GVIM : Shortcut for finding end for particular begin in SystemVerilog language

Posted: 14 Jun 2021 08:01 AM PDT

Need a shortcut for finding the end for a particular begin in SystemVerilog syntax.

New Intel i350 NIC not detected by system, but appears in lspci - Possible Intel IGB issue?

Posted: 14 Jun 2021 08:37 AM PDT

I'm running into an issue where a new network card isn't being automatically detected by the OS. I recently purchased an Intel I350 gigabit network card. I have purchased this card before and I have used it in other systems with the same OS with no issues. This is the card https://ark.intel.com/products/84805/Intel-Ethernet-Server-Adapter-I350-T4V2.

The odd thing is that this card is being detected by Windows, but not Oracle Linux, CentOS Live, or Ubuntu Live.

"nmcli d" output

DEVICE  TYPE      STATE         CONNECTION
eno2    ethernet  connected     eno2
eno1    ethernet  disconnected  --
lo      loopback  unmanaged     --

These are the onboard adapters. The 4 Intel ones are not detected.

"lspci | grep Network" output

b3:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
b3:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
b3:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
b3:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

The OS appears to at least know of the existence of the PCI card.

This is the output of "lshw -class network"

*-network:0
       description: Ethernet interface
       product: Ethernet Connection X722 for 10GBASE-T
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:19:00.0
       logical name: eno1
       version: 09
       serial: ac:1f:6b:4c:ff:04
       capacity: 10Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list rom ethernet physical 1000bt-fd 10000bt-fd autonegotiation
       configuration: autonegotiation=off broadcast=yes driver=i40e driverversion=1.5.16-k firmware=3.1d 0x80000827 1.1638.0 latency=0 link=no multicast=yes
       resources: irq:54 memory:c4000000-c4ffffff memory:c5008000-c500ffff memory:c5d80000-c5dfffff memory:c5010000-c508ffff

*-network:1
       description: Ethernet interface
       product: Ethernet Connection X722 for 10GBASE-T
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:19:00.1
       logical name: eno2
       version: 09
       serial: ac:1f:6b:4c:ff:05
       size: 1Gbit/s
       capacity: 10Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list rom ethernet physical tp 1000bt-fd 10000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=i40e driverversion=1.5.16-k duplex=full firmware=3.1d 0x80000827 1.1638.0 ip=192.168.127.36 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
       resources: irq:54 memory:c3000000-c3ffffff memory:c5000000-c5007fff memory:c5d00000-c5d7ffff

*-network:0 UNCLAIMED
       description: Ethernet controller
       product: I350 Gigabit Network Connection
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:b3:00.0
       version: 01
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress cap_list
       configuration: latency=0
       resources: memory:fbd00000-fbdfffff memory:fbe8c000-fbe8ffff memory:fbe00000-fbe7ffff memory:fbe90000-fbeaffff memory:fbeb0000-fbecffff

*-network:1 UNCLAIMED
       description: Ethernet controller
       product: I350 Gigabit Network Connection
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:b3:00.1
       version: 01
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress cap_list
       configuration: latency=0
       resources: memory:fbc00000-fbcfffff memory:fbe88000-fbe8bfff memory:fbed0000-fbeeffff

*-network:2 UNCLAIMED
       description: Ethernet controller
       product: I350 Gigabit Network Connection
       vendor: Intel Corporation
       physical id: 0.2
       bus info: pci@0000:b3:00.2
       version: 01
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress cap_list
       configuration: latency=0
       resources: memory:fbb00000-fbbfffff memory:fbe84000-fbe87fff

*-network:3 UNCLAIMED
       description: Ethernet controller
       product: I350 Gigabit Network Connection
       vendor: Intel Corporation
       physical id: 0.3
       bus info: pci@0000:b3:00.3
       version: 01
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress cap_list
       configuration: latency=0
       resources: memory:fba00000-fbafffff memory:fbe80000-fbe83fff

I noticed that the card is detected here as well, but it is listed as "UNCLAIMED". How do I go about making the system "claim" the card?

I have tried the drivers listed on Intel's website, but it didn't seem to help; I could also have been doing something wrong. I don't have much experience with Linux drivers.
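
"UNCLAIMED" in lshw generally means no kernel driver has bound to the device, and the I350 is normally handled by the igb module. A hedged set of checks (plus an explicit module load) that should show whether igb is present and whether the kernel logs a reason for refusing the card:

lspci -nnk -s b3:00.0              # shows "Kernel driver in use:" / "Kernel modules:" for the first port
lsmod | grep igb                   # is the igb driver loaded at all?
sudo modprobe igb                  # try loading it explicitly
dmesg | grep -i -e igb -e 'b3:00'  # any probe errors for the card?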

Any help would be greatly appreciated,

Thank you!

Bash: calculate the time elapsed between two timestamps

Posted: 14 Jun 2021 07:19 AM PDT

I have written a script that notifies me when a value is not within a given range. All "out of range" values are logged in a set of per-day files.

Every line is timestamped in a proprietary reverse way: yyyymmddHHMMSS

Now I would like to refine the script and receive notifications only when at least 60 minutes have passed since the last notification for the given out-of-range value.

I already solved the issue of printing the logs in reverse order with:

for i in $(ls -t /var/log/logfolder/*); do zcat $i|tac|grep \!\!\!|grep --color KEYFORVALUE; done  

that results in:

...
20170817041001 - WARNING: KEYFORVALUE=252.36 is not between 225 and 245 (!!!)
20170817040001 - WARNING: KEYFORVALUE=254.35 is not between 225 and 245 (!!!)
20170817035001 - WARNING: KEYFORVALUE=254.55 is not between 225 and 245 (!!!)
20170817034001 - WARNING: KEYFORVALUE=254.58 is not between 225 and 245 (!!!)
20170817033001 - WARNING: KEYFORVALUE=255.32 is not between 225 and 245 (!!!)
20170817032001 - WARNING: KEYFORVALUE=254.99 is not between 225 and 245 (!!!)
20170817031001 - WARNING: KEYFORVALUE=255.95 is not between 225 and 245 (!!!)
20170817030001 - WARNING: KEYFORVALUE=255.43 is not between 225 and 245 (!!!)
20170817025001 - WARNING: KEYFORVALUE=255.26 is not between 225 and 245 (!!!)
20170817024001 - WARNING: KEYFORVALUE=255.42 is not between 225 and 245 (!!!)
20170817012001 - WARNING: KEYFORVALUE=252.04 is not between 225 and 245 (!!!)
...

Anyway, I'm stuck at calculating the number of seconds between two of those timestamps, for instance:

20170817040001
20160312000101

What should I do in order to calculate the time elapsed between two timestamps?
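
A minimal sketch using GNU date: split yyyymmddHHMMSS into a form date(1) understands, convert both stamps to epoch seconds, and subtract:

to_epoch() {
    local ts=$1
    date -d "${ts:0:8} ${ts:8:2}:${ts:10:2}:${ts:12:2}" +%s   # "20170817 04:00:01" -> epoch seconds
}

t1=$(to_epoch 20170817040001)
t2=$(to_epoch 20160312000101)
echo "elapsed seconds: $(( t1 - t2 ))"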

How to open Vim in Terminator by default?

Posted: 14 Jun 2021 08:34 AM PDT

I installed Terminator as the default terminal, you can see it in the screenshot.

But when I launch Vim through the menu it opens in gnome-terminal.

I tried to change the settings in gsettings, but I'm not sure what exactly I need to change.

System: Linux Mint 18 Cinnamon 64-bit
Cinnamon version: 3.0.7
Terminator version: 0.98
VIM version: 7.4
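
A hedged pointer for the gsettings part, assuming Cinnamon (as on Mint 18) reads the default terminal from this key; the -x value is an assumption about how Terminator accepts a command to run:

gsettings get org.cinnamon.desktop.default-applications.terminal exec
gsettings set org.cinnamon.desktop.default-applications.terminal exec 'terminator'
gsettings set org.cinnamon.desktop.default-applications.terminal exec-arg '-x'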


How to get Bluetooth working on Arch Linux?

Posted: 14 Jun 2021 09:20 AM PDT

I have the BCM423142 chip on my laptop. Recently I installed Arch Linux (Antergos) and installed the linux-headers and broadcom-wl-dkms packages from the AUR.

WiFi works perfectly but Bluetooth doesn't; it only appears as powered off in the gnome-panel.

screen capture

I have this output from the dmesg | grep Bluetooth command:

[   12.376925] toshiba_bluetooth: Toshiba ACPI Bluetooth device driver
[   15.655590] Bluetooth: Core ver 2.21
[   15.655611] Bluetooth: HCI device and connection manager initialized
[   15.655614] Bluetooth: HCI socket layer initialized
[   15.655616] Bluetooth: L2CAP socket layer initialized
[   15.655621] Bluetooth: SCO socket layer initialized
[   18.325428] Bluetooth: hci0 command 0x1001 tx timeout
[   18.373084] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
[   18.373088] Bluetooth: BNEP filters: protocol multicast
[   18.373094] Bluetooth: BNEP socket layer initialized
[   26.432140] Bluetooth: hci0: BCM: Reading local version info failed (-110)

I have this output from lsmod | grep blue

bluetooth             487424  12 btrtl,btintel,bnep,btbcm,btusb
toshiba_bluetooth      16384  0
rfkill                 20480  8 toshiba_bluetooth,bluetooth,toshiba_acpi,cfg80211
crc16                  16384  2 bluetooth,ext4

I have this output from the bluetooth command:

[bluetooth]# power on
No default controller available

I've already tried this Installation and this Configuration via the CLI and neither works.

Setting classpath for Java

Posted: 14 Jun 2021 08:03 AM PDT

I was trying to use a tool written in Java called "fastqc" (for people who are interested in what fastqc is). When I tried typing the command fastqc I got the error:

Exception in thread "main" java.lang.NoClassDefFoundError: uk/ac/babraham/FastQC/FastQCApplication
Caused by: java.lang.ClassNotFoundException: uk.ac.babraham.FastQC.FastQCApplication
    at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:268)

When someone had a similar problem previously, it was suggested that I need to set the classpath to the directory which contains the FastQC installation,

and, depending on whether my machine has a standard or non-standard classpath, run it like:

java -Xmx250m -classpath /usr/local/FastQC uk.ac.bbsrc.babraham.FastQC.FastQCApplication  

or

java -Xmx250m -classpath /usr/local/FastQC:$CLASSPATH uk.ac.bbsrc.babraham.FastQC.FastQCApplication  

Since the directory which contains FastQC is /u32/myusername/Tool/FastQC,

I tried both:

java -Xmx250m -classpath /u32/myusername/Tool/FastQC uk.ac.bbsrc.babraham.FastQC.FastQCApplication  

and

java -Xmx250m -classpath /u32/myusername/Tool/FastQC:$CLASSPATH uk.ac.bbsrc.babraham.FastQC.FastQCApplication  

but none of them seemed to work.

Did I mess something up? I am not sure what -Xmx250m means; with or without it, the classpath setting did not work. Sorry for my ignorance. Any idea or suggestion is appreciated.
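
Two hedged observations, not a confirmed fix: -Xmx250m only caps the JVM heap at 250 MB and has nothing to do with the classpath, and the error names the class uk/ac/babraham/FastQC/FastQCApplication (no "bbsrc"), so the class name on the command line may simply not match what is in the installation. Something along these lines might work, assuming the FastQC classes or jars sit directly in that directory:

cd /u32/myusername/Tool/FastQC
java -Xmx250m -classpath . uk.ac.babraham.FastQC.FastQCApplication
# if FastQC ships helper jars, they would need to be appended to the
# classpath as well, colon-separated (exact jar names not assumed here)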

Convert non-RAID disk with data into RAID 1 disk (hardware controller)

Posted: 14 Jun 2021 09:00 AM PDT

I moved away from software RAID due to all the hassle it brings. After an OS reinstall, I am left with only one drive. I ordered a hardware RAID controller today, and when the controller arrives, I'd like to plug in the identical drives into the RAID controller and set up RAID 1 WITHOUT losing any data or needing to reinstall the OS (Debian Jessie x86_64).

Output of lsblk:

NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0 931.5G  0 disk
├─sda1              8:1    0   953M  0 part /boot
├─sda2              8:2    0  29.8G  0 part [SWAP]
└─sda3              8:3    0 900.8G  0 part
  ├─vgmain-lvroot 254:0    0 621.4G  0 lvm  /
  ├─vgmain-lvmail 254:1    0  93.1G  0 lvm  /var/vmail
  ├─vgmain-lvhome 254:2    0  93.1G  0 lvm  /home
  ├─vgmain-lvtmp  254:3    0  18.6G  0 lvm  /tmp
  └─vgmain-lvvar  254:4    0  74.5G  0 lvm  /var
sdb                 8:16   0 931.5G  0 disk

Can I do this somehow by dding the existing data to the clean drive while having it plugged into the RAID controller and set up as RAID 1? To clarify, let's say sda is the drive with my data, sdb is the drive which is not in use.

  • Plug sda into the mobo sata controller
  • Plug sdb into the RAID controller
  • Define sdb as RAID 1 drive
  • Boot from liveCD and dd contents of sda → sdb
  • Plug sda into RAID controller, define as RAID1
  • RAID controller syncs the drives, (copies over sdb to sda) (?)
  • Boot without problems?

Will dd copy the drive in a way that preserves the MBR, partitions, etc.? Am I going about this in a completely stupid way?

I contacted the RAID controller manufacturer and asked if it has some kind of utility to convert a drive into 2 drives in RAID1, but they said no. If it's relevant in any way, the specific controller is a HighPoint RocketRAID 620 PCI-Express 2.0 x1 SATA III RAID card.
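
For the copy step itself, a minimal sketch (run from a live CD with nothing on either disk mounted): a device-to-device dd copies the MBR, partition table and all partition contents byte for byte, so the target ends up identical to the source. Whether the RAID controller then accepts such a disk as a RAID 1 member without re-initialising it is a separate question; many controllers keep their own metadata on member disks, so that part is an assumption worth confirming with the vendor.

dd if=/dev/sda of=/dev/sdb bs=4M status=progress   # double-check the device names; swapping if= and of= destroys the data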

reinit NFS client without restart

Posted: 14 Jun 2021 09:13 AM PDT

I have been working on my server, from which I export one directory using NFS. Over the week or so of server reboots, I forgot multiple times to umount the exported filesystem on my workstation (which gets mounted from /etc/fstab on boot). In between, I was able to umount after the fact and remount (I am not using autofs):

umount -fl /data0
mount /data0

But this no longer works.

I cannot mount the exported directory from the server on a different directory (mount hangs), but I can nfs mount that exported dir on a virtual machine running on my workstation.

What I tried is removing (rmmod) the nfs and nfsv3 module (which would not work: Resource temporarily unavailable). lsof hangs. mount doesn't show anything mounted via nfs. This is all probably a result of using 'umount -l' multiple times, but the first two times this worked without a problem.

I have restarted the server in the meantime, after not being able to mount, but that made no difference. I also used service nfs-kernel-server restart. I suspect everything would be back to normal if I restarted the client workstation.

Is there a way to recover from this and reinitialise the nfs client side on my workstation without a reboot?
If I cannot fix this without reboot, would this not reoccur if I start using autofs?

lsof -b hangs, with these as the last lines:

lsof: avoiding readlink(/run/user/1001/gvfs): -b was specified.
lsof: avoiding stat(/run/user/1001/gvfs): -b was specified.
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1001/gvfs
      Output information may be incomplete.

in the lines preceding that, there is no /data0.

The entry in /etc/fstab:

192.168.0.2:/data0 /data0  nfs  defaults,auto,nolock,user 0 2  

OpenLdap - restore backup - slapcat/slapadd

Posted: 14 Jun 2021 10:02 AM PDT

I'm using slapcat to make a backup like this:

slapcat -n 1 > ${BACKUP_PATH}/ldap.domain.com.ldif  

Then import using slapadd:

slapadd -F /etc/ldap/slapd.d -n 1 -l ldap.domain.com.ldif  

I can't restore my backup this way because of the operational attributes.

I get errors, for example:

structuralObjectClass: no user modification allowed  

Is it possible to make a backup without operational attributes, or to import it somehow with them?
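
A hedged workaround, under the assumption that the import is effectively being rejected the way ldapadd rejects operational attributes (slapadd into an empty, stopped database normally accepts them): strip the operational attributes from the LDIF before importing, for example:

grep -vE '^(structuralObjectClass|entryUUID|entryCSN|creatorsName|createTimestamp|modifiersName|modifyTimestamp|contextCSN):' \
    ldap.domain.com.ldif > ldap.domain.com.clean.ldif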

Is there a way to check for a working polkit agent without checking for running process?

Posted: 14 Jun 2021 07:09 AM PDT

I need to check whether I have a usable polkit agent, in a desktop-environment-agnostic way.

Right now, what I'm doing is checking whether a polkit agent is running, using code like this:

ps aux | grep some-polkit-agent  

where some-polkit-agent may be:

  • polkit-gnome-authentication-agent-1 (for gnome2 and gnome3-fallback)
  • polkit-kde-authentication-agent-1 (for kde)
  • polkit-mate-authentication-agent-1 (for mate)
  • lxpolkit (for lxde)

The "no-fallback" gnome3 (gnome-shell) has its own polkit agent within the gnome-shell process itself, so I can't ps-grep it. What I assume is that if gnome-shell is running then the polkit agent is in place.

The problem comes when a system has hidepid enabled (see http://www.linux-dev.org/2012/09/hide-process-information-for-other-users/). This security measure means that ps doesn't show me any polkit agent running even if there is one.

Is there any better way I can check for a usable polkit agent?

force rsync to overwrite files at destination even if they're newer

Posted: 14 Jun 2021 09:00 AM PDT

I have an rsync backup script I run, which also restores files back where they came from when I ask. But if the files at the destination are newer than those in the backup when I try to restore, it will not replace them. I really want to replace the newer files with those in the backup but I don't see a way to make rsync do this.

tldr: is there a way to force rsync to overwrite files at the destination?

edit: I've been running rsync -avhp. When I want to restore a backup, I use the same command with the "to" and "from" swapped, so it copies files from the backup drive back to where they belong on my computer.
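
A hedged note: by default rsync transfers any file whose size or modification time differs, regardless of which side is newer (only --update/-u skips files that are newer at the destination). To force every file across unconditionally, --ignore-times can be added, for example (the paths are placeholders):

rsync -avh --ignore-times /path/to/backup/ /path/to/restore/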

cat files with directory

Posted: 14 Jun 2021 07:17 AM PDT

Is there a command that shows the directory/file name when catting files?

For example: assume two files f1.txt and f2.txt are under ./tmp

./tmp/f1.txt
./tmp/f2.txt

Then when I do cat ./tmp/*.txt, only the content of the files is shown. But how can I first show the file name, then the content? E.g.:

(The needed command):
./tmp/f1.txt:
This is from f1.txt
and so on
./tmp/f2.txt:
This is from f2.txt
...

Is there a command to do it? (There seems to be no option for 'cat' to show the file names)
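
A few hedged substitutes, since cat itself has no such option:

tail -n +1 ./tmp/*.txt     # with more than one file, tail prints a "==> file <==" header before each
grep '' ./tmp/*.txt        # prefixes every output line with its file name
for f in ./tmp/*.txt; do printf '%s:\n' "$f"; cat "$f"; done   # prints the name, then the content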
