Tuesday, July 20, 2021

Recent Questions - Unix & Linux Stack Exchange

Zsh - pasting text killed in a custom widget only works for last word, can that be fixed?

Posted: 20 Jul 2021 10:14 AM PDT

I'm using answers from this question so that I can cut longer or shorter parts of text when I press Ctrl+W or Alt+Backspace respectively. Specifically, I have this in my .zshrc to add the Alt+Backspace behavior (Ctrl+W is built-in):

backward-kill-dir () {
    local WORDCHARS=''
    zle backward-kill-word
}
zle -N backward-kill-dir
bindkey '^[^?' backward-kill-dir

This works fine for killing text, but then pasting it doesn't work as expected. Let's say I have this text:

A quick brown fox  

If I press Ctrl+W four times and then press Ctrl+Y, the entire text will be cut and then pasted back. But if I have this text:

a-quick-brown-fox  

and I press Alt+Backspace four times and then Ctrl+Y, it will cut the text as expected but only paste "a-".

How can I make the latter also paste the entire text?
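
For reference, a minimal sketch of one possible fix (an assumption on my part, not something stated in the question): zle lets a user-defined widget mark itself as a kill widget with "zle -f kill", so that consecutive kills append to the same cut buffer instead of overwriting it.

backward-kill-dir () {
    local WORDCHARS=''
    zle backward-kill-word
    zle -f kill    # mark this invocation as a kill so the next kill appends to CUTBUFFER
}
zle -N backward-kill-dir
bindkey '^[^?' backward-kill-dir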

I have a misbehaving network manager, fed34

Posted: 20 Jul 2021 10:18 AM PDT

I have a misbehaving network manager on Fed34, but I had the same problem when trying to upgrade to Fed32. I ended up having to do a clean installation of Fed33.

The basic problem is that NM fails to connect to a wifi AP for which the password and ESSID are known and tested. Proof: if I reboot into Ubuntu 20.04 it works fine.

I had this connection working on Fed31 for about 6 months. When F31 went EOL I had to update, but the live ISO failed to connect, so I'd been putting it off. I had to upgrade, so I went straight to Fed33, which screwed up, so I did a clean installation of Fed33. Wifi to the AP worked :)

Having now migrated to Fed34 I'm back to borked wifi.

The symptom is that NM shows the AP, but there is a never-ending whirling-blobs icon in the task bar widget. I need to "disconnect" despite the fact that it never actually connects. It seems to be communicating with the AP but failing to authenticate.

Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3390] device (wlp0s16u2u3u4u2): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3398] device (wlp0s16u2u3u4u2): Activation: (wifi) access point 'MySID 1' has security, but secrets are required.
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3399] device (wlp0s16u2u3u4u2): state change: config -> need-auth (reason 'none', sys-iface-state: 'managed')
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3483] device (wlp0s16u2u3u4u2): state change: need-auth -> prepare (reason 'none', sys-iface-state: 'managed')
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3493] device (wlp0s16u2u3u4u2): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3500] device (wlp0s16u2u3u4u2): Activation: (wifi) connection 'MySID 1' has security, and secrets exist.  No new secrets needed.
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3501] Config: added 'ssid' value 'MySID'
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3501] Config: added 'scan_ssid' value '1'
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3502] Config: added 'bgscan' value 'simple:30:-70:86400'
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3502] Config: added 'key_mgmt' value 'WPA-PSK WPA-PSK-SHA256 FT-PSK'
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3504] Config: added 'auth_alg' value 'OPEN'
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3505] Config: added 'psk' value '<hidden>'
Jul 20 17:07:31 localhost wpa_supplicant[2413]: wlp0s16u2u3u4u2: SME: Trying to authenticate with e4:9e:12:67:ef:bb (SSID='MySID' freq=2417 MHz)
Jul 20 17:07:31 localhost kernel: wlp0s16u2u3u4u2: authenticate with e4:9e:12:67:ef:bb
Jul 20 17:07:31 localhost kernel: wlp0s16u2u3u4u2: send auth to e4:9e:12:67:ef:bb (try 1/3)
Jul 20 17:07:31 localhost NetworkManager[2247]: <info>  [1626793651.3899] device (wlp0s16u2u3u4u2): supplicant interface state: inactive -> authenticating
Jul 20 17:07:31 localhost kernel: wlp0s16u2u3u4u2: authenticated
Jul 20 17:07:36 localhost kernel: wlp0s16u2u3u4u2: aborting authentication with e4:9e:12:67:ef:bb by local choice (Reason: 3=DEAUTH_LEAVING)
Jul 20 17:07:36 localhost wpa_supplicant[2413]: wlp0s16u2u3u4u2: CTRL-EVENT-SSID-TEMP-DISABLED id=0 ssid="MySID" auth_failures=1 duration=10 reason=CONN_FAILED
Jul 20 17:07:36 localhost NetworkManager[2247]: <info>  [1626793656.4293] device (wlp0s16u2u3u4u2): supplicant interface state: authenticating -> disconnected
Jul 20 17:07:46 localhost NetworkManager[2247]: <info>  [1626793666.4406] device (wlp0s16u2u3u4u2): supplicant interface state: disconnected -> scanning
Jul 20 17:07:47 localhost wpa_supplicant[2413]: wlp0s16u2u3u4u2: CTRL-EVENT-SSID-REENABLED id=0 ssid="MySID"
Jul 20 17:07:47 localhost kernel: wlp0s16u2u3u4u2: authenticate with e4:9e:12:67:ef:bb
Jul 20 17:07:47 localhost wpa_supplicant[2413]: wlp0s16u2u3u4u2: SME: Trying to authenticate with e4:9e:12:67:ef:bb (SSID='MySID' freq=2417 MHz)
Jul 20 17:07:47 localhost kernel: wlp0s16u2u3u4u2: send auth to e4:9e:12:67:ef:bb (try 1/3)
Jul 20 17:07:47 localhost NetworkManager[2247]: <info>  [1626793667.5540] device (wlp0s16u2u3u4u2): supplicant interface state: scanning -> authenticating
Jul 20 17:07:47 localhost kernel: wlp0s16u2u3u4u2: authenticated
Jul 20 17:07:52 localhost kernel: wlp0s16u2u3u4u2: aborting authentication with e4:9e:12:67:ef:bb by local choice (Reason: 3=DEAUTH_LEAVING)
Jul 20 17:07:52 localhost wpa_supplicant[2413]: wlp0s16u2u3u4u2: CTRL-EVENT-SSID-TEMP-DISABLED id=0 ssid="MySID" auth_failures=2 duration=20 reason=CONN_FAILED
Jul 20 17:07:52 localhost NetworkManager[2247]: <info>  [1626793672.5960] device (wlp0s16u2u3u4u2): supplicant interface state: authenticating -> disconnected
Jul 20 17:07:56 localhost NetworkManager[2247]: <warn>  [1626793676.6096] device (wlp0s16u2u3u4u2): Activation: (wifi) association took too long, failing activation
Jul 20 17:07:56 localhost NetworkManager[2247]: <info>  [1626793676.6097] device (wlp0s16u2u3u4u2): state change: config -> failed (reason 'ssid-not-found', sys-iface-state: 'managed')
Jul 20 17:07:56 localhost NetworkManager[2247]: <info>  [1626793676.6106] manager: NetworkManager state is now DISCONNECTED
Jul 20 17:07:56 localhost NetworkManager[2247]: <info>  [1626793676.7169] device (wlp0s16u2u3u4u2): set-hw-addr: set MAC address to EE:18:50:EA:EF:EF (scanning)
Jul 20 17:07:56 localhost NetworkManager[2247]: <warn>  [1626793676.7992] device (wlp0s16u2u3u4u2): Activation: failed for connection 'MySID 1'
Jul 20 17:07:56 localhost NetworkManager[2247]: <info>  [1626793676.8001] device (wlp0s16u2u3u4u2): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jul 20 17:07:56 localhost wpa_supplicant[2413]: wlp0s16u2u3u4u2: Reject scan trigger since one is already pending
Jul 20 17:07:56 localhost NetworkManager[2247]: <info>  [1626793676.8108] policy: set-hostname: current hostname was changed outside NetworkManager: 'localhost.localdomain'
Jul 20 17:08:02 localhost NetworkManager[2247]: <info>  [1626793682.5970] device (wlp0s16u2u3u4u2): supplicant interface state: disconnected -> inactive
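
Not part of the original post, but a couple of hedged diagnostics that may narrow this down (the connection name 'MySID 1' is taken from the log above):

nmcli -f 802-11-wireless,802-11-wireless-security connection show "MySID 1"   # inspect the saved profile
journalctl -b -u NetworkManager -u wpa_supplicant | tail -100                 # both daemons' view of the failure
nmcli device wifi list                                                        # confirm the AP, band and signal seen during the scan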

Why can't I enable SSH on several ports?

Posted: 20 Jul 2021 10:00 AM PDT

In order to train my networking skills, I am trying to get a Raspberry Pi to listen for ssh connections on both ports 22 and 2222. My end goal is then to practice using ufw in order to allow connections on port 22 from my WAN and on port 2222 from an Ethernet connection only. For now, ufw is disabled and I am just trying to set sshd to listen on the two aforementioned ports. Here are the only uncommented lines from my /etc/ssh/ssh_config file:

Host *
    Port 22
    Port 2222
    SendEnv LANG LC_*
    HashKnownHosts yes
    GSSAPIAuthentication yes

However, I get a Connection refused error when trying to ssh on port 2222, regardless of the originating machine. For some reason I cannot explain, sshd does not seem to be listening on port 2222:

paupaulaz@pi2:~ $ ss -tlnp | grep 22
LISTEN    0         128                0.0.0.0:22               0.0.0.0:*
LISTEN    0         128                   [::]:22                  [::]:*

I of course tried both restarting sshd and rebooting the Raspberry Pi, and I have no user specific ssh config file.

Thanks a lot for any help!
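
A hedged observation: the file quoted above, /etc/ssh/ssh_config, configures the ssh client; the ports the server listens on come from /etc/ssh/sshd_config. A minimal sketch of the server-side equivalent, assuming the stock OpenSSH server:

# /etc/ssh/sshd_config  (server configuration, not ssh_config)
Port 22
Port 2222

Then restart the service, for example with "sudo systemctl restart ssh" on Debian-based systems (the unit may be named sshd elsewhere), and re-check with ss -tlnp.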

Is it possible to encrypt sensitive data on a headless embedded device in a secure way?

Posted: 20 Jul 2021 09:14 AM PDT

My company works with Raspberry Pis, where all data (OS, our software, etc) is stored on an SD card. We configure these devices (load our software on them), and send them out into the field (an environment we don't control). These Pis have sensitive data on them, and the fear is that someone in the field will take the Pi, and get access to this sensitive data.

The obvious solution of not storing sensitive data on the SD card, but rather streaming it over a secure network won't work for us - the Pi won't always have access to internet, or any other kind of network.

The other obvious solution is to encrypt the partition where the data is stored, but that is proving to be a challenge. The Pi needs to be able to access this sensitive data as it runs, which means no matter what type of encryption we use, the Pi needs to be able to decrypt the encrypted partition at boot. This implies that it needs to have some sort of decryption key that is stored on a non-encrypted partition, which is inherently flawed. An attacker can easily gain access to the key, and use it to decrypt the encrypted partition.

There are hardware solutions, like the Zymkey, that promise to address this. We tried that, and it took me just over 5 minutes to break into an encrypted root partition that used the Zymkey as its key. The problem is that even though you can encrypt the root partition, you can't encrypt the boot partition, which stores the kernel, and the files that pass args to the kernel at boot. This lets an attacker modify these bootloader files, asking the kernel to start a shell at boot for example, giving the attacker full access to the encrypted root partition.

Even if we were to compile our own custom kernel that didn't accept any args, preventing boot args that give an attacker shell, this custom kernel would be stored on the /boot partition that the attacker has access to. Nothing would stop them from just replacing our custom kernel with a generic one.

I know you can hack together some hardware solutions, where you glue the SD card, and/or put the Pi in a box that's rigged with booby traps, where if someone tries to open the box, it will delete the encryption key, and unmount the encrypted partition (or reboot). Those are all relatively easy to bypass, and are hacky at best.

So my question is this: Is it conceptually even possible to encrypt either the entire root partition, or just some data partition where sensitive files can be stored, so that if an attacker gets their hands on the SD card, they won't be able to get their hands on the files themselves? Linux still needs to be able to decrypt and use these files as it runs.

Track CPU load of a process

Posted: 20 Jul 2021 08:52 AM PDT

I have a process foo that I would like to run from the Terminal. At the same time, I'm interested in identifying how much CPU this process is consuming, so I want, for example, to go into top, find the process foo (only one process will have this name), get the value from the %CPU column, and append it to a file, with the datetime timestamp and the extracted value on one row. With these values, I can produce a plot and some descriptive statistics to understand the workload of foo better.

Moreover, I'd like this CPU load extraction to continue every n seconds (for example every n=1 second), and I would like it to start when foo starts and end when foo has completed processing.

As far as I understand, this requires running two processes simultaneously.

Any thoughts on how I can achieve this? Preferably as a direct command to run in the Terminal, with a shell script as a last resort if necessary.
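
A rough sketch of one way to do it, assuming foo is launched by the same command line and cpu.log is an arbitrary output file; note that ps reports a running average rather than an instantaneous value (pidstat 1 from the sysstat package would give per-second figures):

foo &                                   # start the process under test
pid=$!
while kill -0 "$pid" 2>/dev/null; do    # loop until foo exits
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" \
        "$(ps -o %cpu= -p "$pid")" >> cpu.log
    sleep 1
done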

Nginx Serve Up 8443 Application over 443

Posted: 20 Jul 2021 08:24 AM PDT


I have a PHP application running on nginx over HTTPS (443), and I also have a Java application running on HTTPS port 8443.

Is there a way that I can expose the Java application over HTTPS (443)? Maybe using proxy_pass or something?

https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/

My aim is to have both applications available over a single port, with the user going to:

  • https://server/ - standard web pages
  • https://server/java - redirects to the Java app
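
Yes, proxy_pass is the usual tool for this. A minimal sketch only; the upstream address, URI handling and certificate verification details are assumptions, not details from the question:

# inside the existing server { listen 443 ssl; ... } block
location /java/ {
    proxy_pass https://127.0.0.1:8443/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
}

With the trailing slashes, a request for /java/foo is forwarded to the Java application as /foo; drop them if the application expects to see the /java prefix.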

Many thanks.

Unlock GRUB bootloader password remotely

Posted: 20 Jul 2021 10:04 AM PDT

Is it possible to unlock a password-protected GRUB bootloader remotely? For a LUKS-encrypted disk, I can do this with the dropbear-initramfs package in Debian, where dropbear listens on a custom port so that I can unlock the LUKS-encrypted disk remotely. I'm wondering if there is any similar solution for a GRUB password-protected bootloader.

How to passthrough GPU in QEMU-KVM? Failing to start

Posted: 20 Jul 2021 08:03 AM PDT

I have tried countless guides and always ended up with the same result, so it's time I ask those who know what's going on.

I have virtualisation, IOMMU, etc. enabled in my BIOS. I am running the VM without issues when not passing through the GPU: no errors, great performance, etc.

When I do try to pass through the GPU (both the GPU and the devices in the same IOMMU group) and press start, nothing happens. Literally nothing, as if I had not pressed start at all.

When I try to remove the PCI passthrough, virt-manager crashes. If I forcibly close it then start it again, I can't connect to the server (qemu://system) until I fully restart my PC (logout doesn't help).

What am I missing? I have a second GPU of course, which is what I've connected my monitor to. I have Nvidia drivers installed. Both GPUs are recognised and functional (tested by switching the monitor).

The GPUs are 3060 and 560.

Using Debian 10 and everything is updated to the latest version.
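
Not from the original post, but some hedged checks that usually reveal why a passthrough domain silently fails to start (the domain name win10 below is a placeholder):

lspci -nnk | grep -iA3 nvidia                     # which kernel driver currently claims the 3060?
dmesg | grep -iE 'vfio|iommu' | tail -20          # VFIO / IOMMU messages
journalctl -b -u libvirtd --no-pager | tail -50   # libvirt's side of the failure
sudo tail -50 /var/log/libvirt/qemu/win10.log     # QEMU's side of the failure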

Replacing columns using awk

Posted: 20 Jul 2021 07:55 AM PDT

I have a file:

50102.5924   4.2599   4.2184  1.0098   4.2392
50103.5903   4.2895   4.2474  1.0099   4.2685
50107.5850   4.2100   4.2286  0.9956   4.2193
50108.5331   4.1477   4.1112  1.0089   4.1295
50108.7620   4.0770   4.1060  0.9929   4.0915
50109.5345   4.2227   4.2153  1.0018   4.2190
50109.7681   4.1677   4.1673  1.0001   4.1675
50110.5308   4.2333   4.3158  0.9809   4.2746
50110.7612   4.2339   4.2743  0.9905   4.2541
50111.5591   4.1330   4.1542  0.9949   4.1436
50112.5324   4.1417   4.0986  1.0105   4.1202
50112.7668   4.0075   3.9844  1.0058   3.9960
50113.5301   4.2147   4.2147  1.0000   4.2147
50113.7639   4.2263   4.2263  1.0000   4.2263
50114.5321   4.1205   4.1211  0.9999   4.1208

And many files:

4.5149 50102.5924   72.220     1.000     1    1
4.5683 50103.5903   -3.800     1.000     1    1
4.4682 50107.5850  -23.670     1.000     1    1

How can I replace the first column in each of the many files with the last column of the first file, matching rows where the first column of the first file equals the second column of the many files?

The desired result for the small example file above is:

4.2392 50102.5924   72.220     1.000     1    1
4.2685 50103.5903   -3.800     1.000     1    1
4.2193 50107.5850  -23.670     1.000     1    1

I tried:

for f in small_file*; do
    awk 'NR==FNR{ar[$1]=$5;next} ($2 in ar) {$1= ar[$1]}1' her_OK "$f" > "${f}_em"
done

The first columns of the small files disappeared instead of being replaced.
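
For what it's worth, the lookup and the assignment use different keys: the array is indexed by column 1 of the first file, but the replacement reads ar[$1] of the current file, which is empty, so the first column vanishes. A sketch of the corrected loop, assuming her_OK is the first file shown above:

for f in small_file*; do
    awk 'NR==FNR {ar[$1]=$5; next} ($2 in ar) {$1=ar[$2]} 1' her_OK "$f" > "${f}_em"
done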

Linux Mint: Read only file system error

Posted: 20 Jul 2021 08:47 AM PDT

Linux Mint was running fine until one day I was suddenly unable to write any changes to other file systems. Screenshots from (1) Sublime Text and (2) VS Code: https://imgur.com/a/o1fCvim

The first thing I tried was to check whether read and write permission had been revoked, which seemed like the only possible reason for this error, but to my surprise the folder had read and write access and its group was set to root.

Permissions for the folder

Then I opened Thunar with root access and tried changing the group to juvenile_lad (my user), but I received the following error (screenshot omitted).

Now, if I click Yes, it keeps prompting me for every single file inside the folder, which basically means no changes are made. Then I tried following an answer given in this post: https://askubuntu.com/questions/628862/sublime-text-3-authentication-question-when-saving-document# , but I still had the same error, just in the terminal.

At this point I have no idea what to change or where to change permissions from. It was working perfectly a day before and I don't remember making any changes to the system settings whatsoever. Sorry if the format of my question is not up to Stack Exchange's standards; it is my first time asking a question. Kindly help.

[EDIT] Found the solution by following @Panki's advice that I was on the wrong track. I took a different approach and learned that Windows 10 (on dual-boot) can enable Quick Boot, so all I had to do was disable it in the BIOS and that fixed everything.

Share the internet of one WiFi card through a hotspot of another WiFi card

Posted: 20 Jul 2021 10:31 AM PDT

I have two WiFi cards. One is connected to a router, and gets an internet connection. The other is set up as an Access Point (AP), to which devices are able to connect. But they don't get any internet. How to share the internet from WiFi-1 (wlan1) over to WiFi-2 (wlan2)?

I'm using Fedora KDE.


After adding a bridge between the two, using the GUI:

Settings are in Connections in System Settings; I assume this is NetworkManager configuration. I am able to connect to the AP with my Android phone, but only if I specify IP, Gateway, Prefix Length and DNS by hand. I have tried to mimic the configuration of the host computer on the AP, using the same IP and gateway, and also using the same IP but incremented by 1. Either way, the phone is able to connect but does not get internet.
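
One hedged approach instead of a bridge, assuming the AP profile is managed by NetworkManager: setting the hotspot connection's IPv4 method to shared makes NetworkManager run a DHCP server on wlan2 and NAT its traffic out through wlan1 (the profile name Hotspot below is a placeholder):

nmcli connection modify Hotspot ipv4.method shared    # hand out addresses and NAT via the uplink
nmcli connection down Hotspot && nmcli connection up Hotspot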


Setting which ports to use for passive FTP connection with Linux's ftp client

Posted: 20 Jul 2021 07:57 AM PDT

I'm trying to connect to an FTP server behind a firewall that allows incoming connections in the range 6100-6200 only. I have successfully connected to this server using curl like this:

curl --ftp-port :6100-6200 --list-only ftp.server  

But I'd like to reproduce the behaviour of this curl command with other clients that are friendlier to use from Python. In principle Linux's ftp, but I'm open to other options if someone suggests a good one. I tried ftplib, but it seems that this library does not allow you to select ports; I've tried it unsuccessfully.

Currently I can not make it work with ftp:

230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> passive
Passive mode on.
ftp> ls
227 Entering Passive Mode (XXX,XXX,XXX,XXX,202,251).
ftp: connect: Connection refused

The same set of commands works from my laptop, so it seems clear that the problem is the firewall.

How can I force ftp to negotiate a data connection on a port in the range 6100-6200, thus emulating the behaviour of curl?

Fully unattended Windows 10 installation in KVM

Posted: 20 Jul 2021 08:35 AM PDT

I have a script that automatically creates and starts a (Q35/UEFI based) VM using virsh. The VM has a Windows10.iso mounted to it and also a virtual floppy disk that contains an unattend.xml file that the Windows installer automatically detects to install itself without user interaction.

So I basically already have everything automated.

But there is a problem that I haven't been able to work around: after successfully booting from the Windows10.iso, it always asks to press any key to start the installation, which kind of makes sense: if it didn't, it would just automatically reinstall itself every time the installation finishes and the VM reboots.

What I would like instead is to bypass the press any key prompt and somehow tell the VM to only boot from the Windows10.iso the first time.

I think I've seen VMware or VirtualBox do this before.

Now I'm wondering if there is a way to do this using KVM. Maybe a flag that I can set that automatically detects the prompt and then sends virtual keyboard input to the VM.

Note: I don't want to modify the Windows10.iso.
Note: The press any key prompt is not from the boot manager, but actually from the Windows iso.

Extend partition size on the left

Posted: 20 Jul 2021 09:43 AM PDT

I have Linux Debian on the partition sda4 of my 1TB hard disk:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0   260M  0 part /boot/efi
├─sda2   8:2    0    16M  0 part
├─sda3   8:3    0 530.7G  0 part
├─sda4   8:4    0 135.5G  0 part /
├─sda5   8:5    0   977M  0 part [SWAP]
├─sda6   8:6    0   973M  0 part
└─sda7   8:7    0  14.3G  0 part
sr0     11:0    1  1024M  0 rom

Between sda3 and sda4 there is some space (exactly 267 GB of unallocated space) that I took from Windows (sda3) to extend the Linux partition. So this space is to the left of sda4, and I want to give it to the Linux partition, sda4.

Using GParted this is complicated to do, unless someone can explain it simply. I'd like to use fdisk, if possible, so as to modify the partition size from the running system itself.

Many thanks in advance to all.

How to filter for only unique errors in multiple logs using grep?

Posted: 20 Jul 2021 10:14 AM PDT

I am trying to use the following pattern on Ubuntu:

grep -Eri "warning|error|critical|severe|fatal" --color=auto  

to find relevant errors in many different .log files recursively in /var/log and its subfolders.

The issue I am having is that this results in tens of thousands of lines of matches being printed as the expression is run. I'd like to filter these somehow in at least one of the following ways:

  1. Print but then skip a match if more than e.g. 3 of the same match exist
  2. Show only unique matches (i.e. print one of each line found)

Can I do this by piping the output to something? Currently, going through each log for errors is incredibly time-consuming, which is why I am trying this. But the expression I am using prints so much that it is not usable by itself either. I have tried piping to less, but that removes highlighting, which makes it harder to read, and it does not fix the issue of the output being so large.

I realise I could also limit the expression to specific files at a time, but as I mentioned some logs are full of matches and others have very little. So further filtering out duplicates would be really helpful.

Here is an example error line in one of the many logs I am searching:

./artifactory/artifactory-service.log:20:2021-07-20T08:45:30.248Z [jfrt ] [ERROR] [.j.a.c.g.GrpcStreamObserver:97] [c-default-executor-1] - refreshing affected platform config stream - got an error  

If there are hundreds of such errors, I would like to show e.g. at most 3 of these before moving on to the next match.

Alternatively, given how the dates are listed in the logs, it would be great to filter matches to specific dates only; how would I go about doing this? Date filtering would limit the output greatly.
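
Not a polished answer, just a sketch of the kind of pipeline that helps here: -h drops the file names so identical lines collapse, uniq -c counts duplicates, and an extra grep restricts the run to a single date (2021-07-20 is only an example):

grep -Erih "warning|error|critical|severe|fatal" /var/log \
    | grep -F '2021-07-20' \
    | sort | uniq -c | sort -rn | head -n 50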

How to "split" an attachment with Mutt ? (Message/partial)

Posted: 20 Jul 2021 10:15 AM PDT

A few days ago I started to use Mutt to send email from the command line.

I know how to "modify/create" headers with the "my_hdr" command, but I can't find a way to send a big file (I know there is a size limit, but I have seen mail software that "splits" the attachment, with an id to recompose the original file).

My question is: how can I do that (if it's possible, of course) with Mutt? My goal is to create a script, which is why I use the command line.

How can I email a terminal session typescript without raw data?

Posted: 20 Jul 2021 10:15 AM PDT

I have a bash script that tries to kill two birds with one stone by running commands with script -c and writing the output to a log file so that I can monitor the progress, then email myself the results.

The final log file is quite long, as it is a typescript of everything that was displayed in the terminal session; every single progress update is logged. However, if I read the data with cat, I only get the final output shown in the terminal.

For instance: script -c 'rsync -ah --info=progress2 folder1 folder2' logfile.log

Opening the file with nano:

# nano logfile.log
Script started on 2021-07-20 14:22:40+0800
^M         36.84M   0%   34.31GB/s    0:00:00 (xfr#1, to-chk=606/673)^M        808.26M   7%  752.75GB/s    0:00:00 (xfr#31, to-chk=603/673)^M        860.63M   7%  801.52GB/s    0:00:00 (xfr#34, to-chk=592/673)$
Script done on 2021-07-20 14:22:40+0800

Whereas, with cat

# cat logfile.log
Script started on 2021-07-20 14:22:40+0800
         11.48G 100% 10693.06GB/s    0:00:00 (xfr#616, to-chk=0/673)
Script done on 2021-07-20 14:22:40+0800

However, if writing the cat output to a file:

# cat logfile.log > temp.log

The resulting temp.log will include the entire raw data.


  1. What is the reason for the discrepancies?

  2. I would like to email the same output as what I get from cat on the display, not the raw output shown in nano. However, cat always outputs raw data, whether to a file, another command, etc.

The command below emails raw data.

# echo -e "Subject : report" "\n\n\n `cat logfile.log`" | sendmail hello@example.com
  3. Is there any way to clean up the typescript file, removing all the raw data afterwards? I didn't find anything online or in the manuals.
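
On points 1 and 3: the typescript keeps every carriage-return-overwritten progress update; a terminal interprets the \r characters, so only the final update stays visible, while nano shows them literally as ^M. A hedged clean-up sketch, assuming GNU sed, that keeps only the text after the last carriage return on each line before mailing (mirroring the sendmail invocation from the question):

sed -e 's/\r$//' -e 's/.*\r//' logfile.log > clean.log   # drop trailing CRs, then everything overwritten by a CR
echo -e "Subject: report\n\n$(cat clean.log)" | sendmail hello@example.com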

hylafax probemodem fails to autodetect USB modem for adding a modem

Posted: 20 Jul 2021 08:46 AM PDT

On Ubuntu 21.04 I'm having trouble adding a USB modem to my Hylafax 6.0.7 installation. It consistently fails to probe on two different USB modems.

I've gotten the packages installed and ended up appending the following to /etc/udev/rules.d/50-myusb.rules so that Ubuntu would allow me to write to the modem:

KERNEL=="ttyUSB[0-9]*",MODE="0666"
KERNEL=="ttyACM[0-9]*",MODE="0666"

I noticed this was omitted from several tutorials floating around on the internet. No matter, I was able to talk to the modems and query the modem classes with cu.

For reference, I have two USB modems that I'm working with: a USRobotics 5637, and a "USB 2.0 Fax Modem" acquired from Amazon that has a Conexant chipset as reported by dmesg and labeled on the box. Both mount as ttyACM0 & ttyACM1 when connected simultaneously.

faxsetup, addfaxmodem, and probemodem, when run as su, all hang when either modem is probed by each script.

Per above, at first there was no activity on the modem when I ran probemodem. However, now that I added the udev rules there is a flurry of blinking lights on both modems when running probemodem for about one second, then the blinking stops and only the power light remains on.

probemodem is stuck displaying 38400 and does not cycle through each speed unless I unplug the modem and plug it back in. After plugging it back in, the modem will hang on 19200, but sometimes it will cycle all the way through and error out saying that it was unable to deduce DTE-DCE speed. It will also hang when specifying a speed. The end result seems to be the same each time: the modem cannot be added to hylafax.

I checked permissions on the devices, and both are root/dialout with rw/rw/rw.

I also noticed that there is no fax group on my system after installing the hylafax-server and hylafax-client packages from the Ubuntu repo.

So, I know that adding the udev rules allowed access to the USB devices, and I've read that other people have had success using the same USR5637 modem with HylaFAX for a while now, so it should be OK. However, something is amiss with the software somewhere, and I'm at a loss as to where to go from here to get it working.

  • How do I get Hylafax to successfully probe and use either of these modems?
  • Do I even need udev rules, or did I overlook something when configuring my system?
  • Should the absence of a fax group be concerning?

I'm sort of a noob when it comes to more advanced Linux configuration; I appreciate any help in advance.

How do you set --author and --message for the image using buildah?

Posted: 20 Jul 2021 10:27 AM PDT

There is both a buildah commit and a podman commit; however, buildah commit doesn't support --author or --message, which podman provides. Is there a way to get this functionality with just buildah, or do I need to run a container from the image merely to set the author and message?
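
One hedged workaround: buildah config can set image metadata on the working container before the commit, which at least covers the author field (whether --created-by corresponds exactly to podman's --message is an assumption on my part):

ctr=$(buildah from alpine)          # 'alpine' is just an example base image
buildah config --author "Jane Doe <jane@example.com>" \
               --created-by "nightly build script" "$ctr"
buildah commit "$ctr" myimage:latest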

How do I change the icon on an app installed as a "flatpak"?

Posted: 20 Jul 2021 09:59 AM PDT

On my new Meerkat, running Pop 21.04 (based on Ubuntu), I have Thunderbird installed as a flatpak. And I would like to change the application icon (along with a number of other application icons) to a tongue-in-cheek version (in the case of T-Bird, the mascot has his wing wrapped around a bottle of Thunderbird wine; in the case of Firefox, which came pre-installed, the mascot is chewing on an Internet Explorer logo).

So far, nothing I've tried for T-Bird has had the slightest effect on what shows up in the applications menu or the dock: I've tried changing the .desktop file to point to the fully-qualified pathname of a PNG file; no effect (and I backed out the change). I've tried backing up the hicolor directory in .local/share/flatpak/app/org.mozilla.Thunderbird/current/active/files/share/icons, then replacing every last instance of org.mozilla.Thunderbird.png within with a correctly-scaled version of the modified icon; still no effect, even after both an "update-icon-caches" and a system restart. I tried variations on this everywhere else I could find either an instance of org.mozilla.Thunderbird.png, or a link to one.


rsync could not find xattr #1 for {file}... error in rsync protocol data stream

Posted: 20 Jul 2021 08:08 AM PDT

I have regular and frequent backups from a set of QNAP systems to a central backups repository. Backups are rsync over ssh, pulled from the central server. The QNAP filesystem is ext4, shared to my users via Samba. (QNAPs are based on Linux, and I'm fairly confident that for the purposes of this question you can treat them as such.) The filesystem on the backups server also handles extended attributes.

Recently I've been getting this fatal error from one of them

[sender] could not find xattr #1 for long_filename.xlsm
rsync error: protocol incompatibility (code 2) at xattrs.c(622) [sender=3.1.2]
rsync: [generator] write error: Broken pipe (32)

The rsync command is driven from rsnapshot but it comes down to this

rsync -avzSAXiv --delete --numeric-ids --fake-super --fuzzy --delete-after --partial --link-dest=/path/to/previous user@remoteHost:/share/ /path/to/backup/  

Extended attributes on the source file

getfattr -d -m - long_filename.xlsm
# file: long_filename.xlsm
security.NTACL=0sAwADAA..........AQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAASAZAAAAIAAAAAAAAAAnAAAAAEFAAAAAAAFFQAAABSYSwXsMclxQXR48kIFAAABBQAAAAAABR....................IBAgAAAgAc..........QA/wEfAAEBAAAAAAABAAAAAA==
user.DOSATTRIB=0sMH..............EQAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKZ........YBAAAAAAAAAAA=
user.qtier="io_aware"

What is this xattr #1? Is it referring to extended attributes on the remote server or on the local destination? What might I be looking for, to identify the problem? The destination file doesn't exist, because that's where rsync crashed out, but 87000 or so other files successfully transferred. Nothing seems to be particularly special about the source file.

I'm currently trying to build an MRE but until I found the security.NTACL attribute I was failing dismally (getfattr only displays user.* attributes by default).

Thanks

Sed to print out the line number

Posted: 20 Jul 2021 08:51 AM PDT

Here is my sample file

user@linux:~$ cat file.txt
Line 1
Line 2
Line 3
Line 4
Line 5
user@linux:~$

I can print line 2-4 with grep -A2 'e 2' file.txt

user@linux:~$ grep -A2 'e 2' file.txt
Line 2
Line 3
Line 4
user@linux:~$

I can also print out the line number as well with grep -n

user@linux:~$ grep -nA2 'e 2' file.txt
2:Line 2
3-Line 3
4-Line 4
user@linux:~$

Also, the same thing can be accomplished with sed -n 2,4p file.txt

user@linux:~$ sed -n 2,4p file.txt
Line 2
Line 3
Line 4
user@linux:~$

But I'm not sure how to print out the line number with sed

Would it be possible to print out the line number with sed?
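
For what it's worth, sed's = command prints the current line number on a line of its own; piping through a second sed joins the number to the text, roughly mimicking grep -n (a sketch, assuming GNU sed, although = itself is POSIX):

sed -n '2,4{=;p}' file.txt | sed 'N;s/\n/:/'
# 2:Line 2
# 3:Line 3
# 4:Line 4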

GNOME: Different PAM configurations for lockscreen vs login

Posted: 20 Jul 2021 09:48 AM PDT

I recently purchased a U2F security key, and I have successfully configured my Ubuntu 18.04 machine to require authentication via the key as well as my usual password to log in. I am hoping to change my authentication configurations such that:

  1. When I first login to my machine, I need to both enter my password and insert my U2F key
  2. When I lock my already-logged-in machine, I need only to insert my U2F key to unlock it.

Is this something that is possible with the stock GNOME lock screen? If so, which pam configuration do I have to edit?

Currently the only thing I have changed is adding

auth    required  pam_u2f.so  

to /etc/pam.d/gdm-password, under

@include common-auth  

Allow user to run PHP-FPM without password using sudoers

Posted: 20 Jul 2021 08:07 AM PDT

I'm trying to make it so a user can reload PHP-FPM without needing a password every time.

I've added the following to the /etc/sudoers file using pkexec visudo, and there are no syntax errors, but it is still not working, any ideas?

Defaults exempt_group=forge
User_Alias FORGE = forge
Cmnd_Alias FORGE_COMMANDS = /usr/sbin/service php-fpm *
FORGE ALL = (ALL) NOPASSWD: FORGE_COMMANDS

I've hunted everywhere and this seems to be a common problem of getting it to work, but each question doesn't seem to have an answer, or one that works for me.

Using CentOS 7.

Thanks.


When using sudo -u I get the following:

==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Multiple identities can be used for authentication:

I can then proceed as normal, but the point is for forge to be able to do this without requiring authentication.
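
A hedged guess at the mismatch: on CentOS 7, "service php-fpm reload" is redirected to systemctl, and running systemctl without sudo triggers the polkit prompt shown above rather than the sudoers rule. A sketch that targets systemctl directly (the unit name php-fpm.service is an assumption; check it with systemctl list-units):

# /etc/sudoers.d/forge  (edit with visudo -f; unit name is assumed)
forge ALL=(root) NOPASSWD: /usr/bin/systemctl reload php-fpm.service, /usr/bin/systemctl restart php-fpm.service

Then reload with "sudo systemctl reload php-fpm.service" so the command matches the rule exactly.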

Error when building Apache 2.4.25 from source with open ssl 1.1.0c

Posted: 20 Jul 2021 10:00 AM PDT

I have built Apache 2.4.25 with OpenSSL 1.0.2 successfully.

But because of some security holes we found in our internal tests, I have been asked to move OpenSSL to the latest version. So I am trying to build Apache httpd 2.4.25 with OpenSSL 1.1.0c (or 1.1.0d).

My environment:

lsb_release -a
Distributor ID: RedHatEnterpriseServer
Description:    Red Hat Enterprise Linux Server release 5.11 (Tikanga)
Release:        5.11
Codename:       Tikanga

Perl:           5.24
PCRE:           8.38
APR:            1.5.2
APR-util:       1.5.4
OpenSSL:        1.1.0c / 1.1.0d

All the above Apache dependencies have been successfully built and installed

Apache 2.4.25 - Installation steps

cd /my/softwares
tar -xvf httpd-2.4.25.tar -C /my/build/

cd /my/build/httpd-2.4.25/

./configure --prefix=/my/apache-httpd-2.4.25 \
    --with-pcre=/my/dependencies/pcre-8.38/ \
    --with-apr=/my/dependencies/apr-1.5.2 \
    --with-apr-util=/my/dependencies/apr-util-1.5.4 \
    --enable-ssl \
    --with-ssl=/usr/local/ssl-1.1.0c \
    --enable-ssl-staticlib-deps \
    --enable-mods-static=ssl

make          # see the errors below
make install

I am getting the errors below when building Apache from source with OpenSSL. Please point me in the right direction.

ssl_engine_init.c: In function 'make_dh_params':
ssl_engine_init.c:61: error: dereferencing pointer to incomplete type
ssl_engine_init.c:62: error: dereferencing pointer to incomplete type
ssl_engine_init.c:63: error: dereferencing pointer to incomplete type
ssl_engine_init.c:63: error: dereferencing pointer to incomplete type
ssl_engine_init.c: In function 'ssl_init_ctx_protocol':
ssl_engine_init.c:519: warning: 'TLSv1_client_method' is deprecated (declared at /usr/local/ssl-1.1.0c/include/openssl/ssl.h:1598)
ssl_engine_init.c:520: warning: 'TLSv1_server_method' is deprecated (declared at /usr/local/ssl-1.1.0c/include/openssl/ssl.h:1597)
ssl_engine_init.c:525: warning: 'TLSv1_1_client_method' is deprecated (declared at /usr/local/ssl-1.1.0c/include/openssl/ssl.h:1604)
ssl_engine_init.c:526: warning: 'TLSv1_1_server_method' is deprecated (declared at /usr/local/ssl-1.1.0c/include/openssl/ssl.h:1603)
ssl_engine_init.c:530: warning: 'TLSv1_2_client_method' is deprecated (declared at /usr/local/ssl-1.1.0c/include/openssl/ssl.h:1610)
ssl_engine_init.c:531: warning: 'TLSv1_2_server_method' is deprecated (declared at /usr/local/ssl-1.1.0c/include/openssl/ssl.h:1609)
ssl_engine_init.c: In function 'ssl_init_ctx_session_cache':
ssl_engine_init.c:641: warning: passing argument 2 of 'SSL_CTX_sess_set_get_cb' from incompatible pointer type
ssl_engine_init.c: In function 'use_certificate_chain':
ssl_engine_init.c:861: warning: implicit declaration of function 'BIO_s_file_internal'
ssl_engine_init.c:861: warning: passing argument 1 of 'BIO_new' makes pointer from integer without a cast
ssl_engine_init.c: In function 'ssl_init_server_certs':
ssl_engine_init.c:1201: error: dereferencing pointer to incomplete type
make[3]: *** [ssl_engine_init.lo] Error 1
make[3]: Leaving directory `/my/build/httpd-2.4.25/modules/ssl'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/my/build/httpd-2.4.25/modules/ssl'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/my/build/httpd-2.4.25/modules'
make: *** [all-recursive] Error 1

How can I have two keystrokes to delete to either a slash or a word in zsh?

Posted: 20 Jul 2021 10:17 AM PDT

Bash behaviour

I've just migrated from bash to zsh. In bash, I had the following line in ~/.inputrc.

"\e\C-?": unix-filename-rubout  

Hence, Alt+Backspace would delete back to the previous slash, which was useful for editing paths.

Separately, bash defaults to making Ctrl+w delete back to the previous space, which is useful for deleting whole arguments (presuming they don't contain spaces). Hence, there are two slightly different actions, one for each key combination.

Zsh behaviour

In zsh, both Alt+Backspace and Ctrl+w do the same thing. They both delete the previous word, but they are too liberal with what constitutes a word-break, deleting up to the previous - or _. Is there a way to make zsh behave similarly to bash, with two independent actions? If it's important, I have oh-my-zsh installed.
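
A sketch of one way to get both behaviours with plain zle widgets (the WORDCHARS strings below are assumptions tuned to taste, not canonical values): one widget drops / from WORDCHARS so Alt+Backspace stops at slashes, the other includes / so Ctrl+W stops only at whitespace:

# Alt+Backspace: like bash's unix-filename-rubout (stop at /)
backward-kill-path-component () {
    local WORDCHARS='*?_-.[]~=&;!#$%^(){}<>'
    zle backward-kill-word
}
zle -N backward-kill-path-component
bindkey '^[^?' backward-kill-path-component

# Ctrl+W: like bash's unix-word-rubout (stop at whitespace only)
backward-kill-space-word () {
    local WORDCHARS='*?_-.[]~=/&;!#$%^(){}<>'
    zle backward-kill-word
}
zle -N backward-kill-space-word
bindkey '^W' backward-kill-space-word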

Verify that partition is encrypted

Posted: 20 Jul 2021 08:24 AM PDT

I just installed Debian and, as far as I can remember, I encrypted my home partition using LVM. During the boot process I was not asked to enter a password.

Is there any way to check whether the encryption is up and running?
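
A couple of hedged checks: if the home partition really is an encrypted LVM/LUKS volume, lsblk will show a crypto_LUKS filesystem with a device of type crypt stacked on it, and dmsetup will report a crypt target:

lsblk -o NAME,FSTYPE,TYPE,MOUNTPOINT   # look for FSTYPE crypto_LUKS and TYPE crypt
sudo dmsetup status                    # dm-crypt mappings show a "crypt" target type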

CP: max source files number arguments for copy utility

Posted: 20 Jul 2021 09:09 AM PDT

Consider that there is a countless number of files under /src/:

cp /src/* /dst/  

How many files will cp successfully process?
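
For context: cp itself has no file-count limit; the constraint is the shell expanding /src/* into a single argument list and the kernel's ARG_MAX limit on the size of that list, so cp either receives every name or never starts, failing with "Argument list too long". A hedged sketch for checking the limit and sidestepping it, assuming GNU cp and find:

getconf ARG_MAX                                              # maximum bytes for argv + environment
find /src/ -maxdepth 1 -mindepth 1 -exec cp -t /dst/ {} +    # find batches the arguments itself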

How to determine Linux kernel architecture?

Posted: 20 Jul 2021 08:02 AM PDT

uname -m gives i686 (and, in another place, i686 i386) on a Red Hat Enterprise Linux Server release 5.4 (Tikanga) machine. I need to install Oracle Database 10g Release 2 on that machine. So, how can I decide whether the kernel architecture is 32-bit or 64-bit?
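
Two quick hedged checks: uname -m reports the running kernel's machine type (i686 indicates a 32-bit kernel, x86_64 a 64-bit one), and getconf LONG_BIT reports the word size of the default userland:

uname -m           # i686 -> 32-bit kernel, x86_64 -> 64-bit kernel
getconf LONG_BIT   # prints 32 or 64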

What is Fedora's equivalent of 'apt-get purge'?

Posted: 20 Jul 2021 09:18 AM PDT

In Debian, there are at least two ways to delete a package:

  • apt-get remove pkgname
  • apt-get purge pkgname

The first preserves system-wide config files (i.e. those found in "/etc"), while the second doesn't.

What is Fedora's equivalent of the second form, purge? Or maybe I should rather ask if yum remove pkgname actually preserves config files.
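
For what it's worth, rpm records which files in a package are configuration files, so it is easy to see what a removal would touch; on removal, locally modified config files are typically saved with an .rpmsave suffix rather than kept in place:

rpm -qc pkgname      # list the files the package marks as configuration
yum remove pkgname   # removal; modified configs are usually left behind as *.rpmsave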
