Friday, December 10, 2021

Recent Questions - Unix & Linux Stack Exchange

Turn the 1st row into 3 more columns in a txt file

Posted: 10 Dec 2021 03:29 AM PST

I have a txt file, which looks like this:

#A9999999999999              012021I                                     0099999999    000000000099999999+000000000000000000-000000000000000000-    0099999999    000000000099999999+000000000000000000-000000000000000000-    0099999999    000000000099999999+000000000000000000-000000000000000000-    0099999999    000000000099999999+000000000000000000-000000000000000000-  

With the first row I want to create 3 more columns

9999999,012021,I,0099999999,000000000099999999+,000000000000000000-,000000000000000000-
9999999,012021,I,0099999999,000000000099999999+,000000000000000000-,000000000000000000-
9999999,012021,I,0099999999,000000000099999999+,000000000000000000-,000000000000000000-
9999999,012021,I,0099999999,000000000099999999+,000000000000000000-,000000000000000000-

In this example, three columns are built from the data of the first row: the first column from character positions 08-15, the second from positions 30-35, and the third from position 36.

Need help
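
A possible starting point, as a sketch: with fixed-width records, cut -c can slice out 1-based character ranges. The offsets below follow the question's description (08-15, 30-35, 36) and the sample line is abbreviated, so both may need adjusting to the real record layout:

```shell
# Sketch: slice fixed-width columns out of the first row with cut -c.
# The offsets are taken from the question and are assumptions.
line='#A9999999999999              012021I'
f1=$(printf '%s\n' "$line" | cut -c8-15)    # positions 08-15
f2=$(printf '%s\n' "$line" | cut -c30-35)   # positions 30-35
f3=$(printf '%s\n' "$line" | cut -c36)      # position 36
printf '%s,%s,%s\n' "$f1" "$f2" "$f3"
```

The same slicing can be done inside awk with substr() if the whole record has to be rebuilt in one pass.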

How to remove any commands that begin with "echo" from history

Posted: 10 Dec 2021 02:53 AM PST

I have tried the following:

history -d $(history | grep "echo.*" |awk '{print $1}')  

But it does not delete all the echo commands from the history.

I want to delete any command that starts with echo, such as:

echo "mamam"
echoaaa
echo "hello"
echooooo
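
One hedged approach, as a sketch: history -d takes a single offset, and deleting an entry renumbers everything after it, so collect the offsets of matching entries first and delete them from highest to lowest:

```shell
# Sketch: print history offsets whose command word starts with "echo"
# (so echoaaa and echooooo match too), highest offset first.
echo_offsets() {
  awk '$2 ~ /^echo/ {print $1}' | sort -rn
}
# In an interactive bash session one would then run (assumption):
#   for i in $(history | echo_offsets); do history -d "$i"; done
```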

Crontab notify after successful execution?

Posted: 10 Dec 2021 02:17 AM PST

  • I have a cron job that executes an R script hourly.
  • The script checks an online data source that gets updated at an unknown time each day.
  • If the data source is not updated, the script exits with an error code.
  • If the source is updated, the script runs normally without any error codes.
  • After the script completes, I need to begin a manual workflow.
  • I would like to receive a notification when the cronjob completes, so I know when to begin my workflow.

Things I have considered doing, but find to be hacky/incorrect:

  • Send the Email from within the R script
  • Generate error when the Script succeeds

What I want to do:

  • Send a customized cron notification email after successful execution
  • Something better that I haven't considered yet
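
One conventional sketch (address and path are hypothetical): cron mails any output the job produces, so silencing the script and printing a line only when it exits successfully yields a success-only notification:

```
MAILTO=me@example.com
0 * * * * /path/to/check.R >/dev/null 2>&1 && echo "data updated - start manual workflow"
```

The redirection discards the script's own chatter; the only mail cron sends is the echo line, which runs only when the script exits 0.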

Btrfs automounted and mounted via gnome-disks : "Error finding object for block device" on umount in terminal

Posted: 10 Dec 2021 02:12 AM PST

I wanted to use btrfs more on my Linux Mint PC, but I keep encountering new issues. Now I've found (tried on two USB sticks) that a btrfs-formatted USB stick can be properly unmounted/ejected via the GUI (Nemo), but not from a terminal.

$ umount /dev/sdb1
Error finding object for block device 0:87

For other filesystems it works: I just checked that after I insert a USB stick and it is automounted, running umount in a terminal works (for ext4 and ISO 9660), but not for btrfs.

Why?

The sticks were formatted with btrfs via GNOME Disks; maybe that matters.

Update 1:
A btrfs partition on a local hard drive mounted via the GNOME Disks GUI produced the same error on umount in a terminal. I was able to successfully sudo mount and sudo umount it in a terminal.

Google Chrome fails to launch in a VM

Posted: 10 Dec 2021 03:19 AM PST

Google Chrome fails to launch in a VM and shows these errors:

~$ google-chrome-stable
[1210/095138.308075:ERROR:file_io_posix.cc(144)] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq: No such file or directory (2)
[1210/095138.308293:ERROR:file_io_posix.cc(144)] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq: No such file or directory (2)
Trace/breakpoint trap (core dumped)

Issue with install iverilog and gtkwave on CentOS 8

Posted: 10 Dec 2021 02:50 AM PST

I'm trying to install iverilog and gtkwave on our CentOS 8 machine, but it always fails with Problem: conflicting requests:

  • nothing provides libtcl8.5.so()(64bit) needed by gtkwave-3.3.61-1.el7.x86_64
  • nothing provides libtk8.5.so()(64bit) needed by gtkwave-3.3.61-1.el7.x86_64

Any idea how to get this resolved? I'm following the commands from the attached screenshot, but I'm not sure how to download the packages mentioned in their step 1.

Any advice would be much appreciated.

Turn off Suspend, Sleep, Hibernate for user (xfce4)

Posted: 10 Dec 2021 01:06 AM PST

As the title says, I want to disable all methods (buttons, commands, etc.) an unprivileged user has to put the system into any standby mode (sleep|hibernate|hybrid) under xfce4.

I found out that, with a kioskrc file, we can disable saving user sessions:

# /etc/xdg/xfce4/kiosk/kioskrc
[xfce4-session]
SaveSession=NONE
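
Beyond xfce4's own settings, suspend and hibernate requests ultimately go through systemd-logind, so a polkit rule can deny them for a given user. A sketch (the username and file name are placeholders), e.g. /etc/polkit-1/rules.d/55-inhibit-standby.rules:

```
polkit.addRule(function(action, subject) {
    if (subject.user == "someuser" &&
        (action.id == "org.freedesktop.login1.suspend" ||
         action.id == "org.freedesktop.login1.hibernate" ||
         action.id == "org.freedesktop.login1.hybrid-sleep")) {
        return polkit.Result.NO;
    }
});
```

This blocks the logind actions regardless of which xfce4 button or command triggers them; it does not stop root from suspending.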

Alsamixer and no sound during recording

Posted: 10 Dec 2021 12:41 AM PST

I am a bit new to audio and audio control, and I am working on an embedded system running Arch Linux. I am trying to record some sound from my headset microphone. I plug the headset into the microphone input using an audio splitter (CTIA, matching my headset).

I ran arecord -f dat -d 10 -t wav --device="hw:0,0" test-mic.wav. It runs for about 10 seconds and records, but when I played the result back with aplay test-mic.wav, no sound was heard over the headset. The headset is confirmed to be working with the board: I have used it to play some wav files and an mp4 video (with sound).

I tried alsamixer, and this is what I see:

(screenshot: default device)

I have adjusted all the volumes to maximum.

Next I tried to select the sound card:

(screenshot: sound card selection)

After selecting sound card 0, I have more options:

(screenshot: sound card 0)

I adjusted the mic boost. After that, there are no other options related to the mic except Line-In, which can be switched to Microphone; I left it set to Line-In.

After that, I tried to record again, but still no sound was heard.

What should I do?

Thanks

Clearing an occupied /dev/tty to start a service

Posted: 09 Dec 2021 11:33 PM PST

I want to run a service with the output on a certain tty on a Ubuntu server, before any login prompt. I've already successfully made a service that runs htop on tty3:

[Unit]
Description=htop on tty3

[Service]
Type=simple
ExecStart=/usr/bin/htop
ExecStop=/bin/kill -HUP ${MAINPID}
StandardInput=tty
StandardOutput=tty
TTYPath=/dev/tty3
Restart=always
RestartSec=2

[Install]
WantedBy=getty.target

And that works mostly fine and as intended: switching to tty3 (Alt+F3) brings up htop as expected, and ending the process restarts it instantly. Even stopping and starting/restarting the service from another tty works as intended.

But there is a weird edge case, which is frustrating my efforts for another service somewhat. If I stop the above service, switch to tty3 (which gets me a login prompt), switch back to another tty, and start the service again, htop does not come back. I suspect that this is because the normal login/terminal has now claimed /dev/tty3, so my service is waiting until it can claim /dev/tty3.

So my question is, how would I clear a specific tty so this service or a service like it can restart, after a regular bash terminal has already claimed it?
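
If the login prompt on tty3 comes from getty@tty3.service (the usual case), one hedged fix is to declare a conflict, so systemd stops the getty and frees the tty whenever this service starts. A sketch of the amended [Unit] section:

```
[Unit]
Description=htop on tty3
Conflicts=getty@tty3.service
After=getty@tty3.service
```

With Conflicts=, starting this service implies stopping the getty unit, so the tty should be released rather than held by the login prompt.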

Splitting a file based on every X number of regex pattern matches

Posted: 10 Dec 2021 03:07 AM PST

This question is similar to Splitting text files based on a regular expression, but not quite the same. My problem is that I don't want to split into a specific number of files; I want to split based on the number of matches. For example, I have a 457 MB file that I am trying to split into much smaller files. Here's what currently works:

csplit -z Scan.nessus /\<ReportHost/ '{*}'  

However, this creates about 61.5k files, as I have a ton of these <ReportHost entries in the 457 MB file. Ultimately, I'd like to break it down by every 50 entries rather than every single entry.

Is there a way to modify this to accomplish that? I tried doing this in Ruby to some extent, but it seems to max out the VM's memory trying to parse through the file with Nokogiri.
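
csplit can only split on every match, but awk can count matches and rotate the output file every N of them. A sketch (the chunkNNNN output names are made up, and this splits on raw lines, so the pieces are not standalone well-formed XML):

```shell
# Sketch: start a new output file on every per-th <ReportHost line;
# anything before the first match lands in chunk0000.
split_every() {   # $1 = matches per file, $2 = input file
  awk -v per="$1" '
    BEGIN { out = sprintf("chunk%04d", 0) }
    /<ReportHost/ { if (n % per == 0) { close(out); out = sprintf("chunk%04d", ++f) }
                    n++ }
    { print > out }
  ' "$2"
}
# e.g.: split_every 50 Scan.nessus
```

Unlike an XML parser, this streams the file line by line, so memory use stays flat even on a 457 MB input.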

Why not prepend user directory to PATH?

Posted: 10 Dec 2021 02:48 AM PST

For user scripts, the usual advice is to append their directory to $PATH in one's .profile:

PATH="$PATH:$HOME/.myscripts" # or .bin or whatever  

Apparently that is safer than prepending it: PATH="$HOME/.myscripts:$PATH"

But doing it the safe way means your script is going to be trumped by a system package with the same name. If you name your script mount or import, for example, unexpected things will happen when you try to use it.

I understand that many will see this as a feature, not a bug. But personally I want to be able to name my scripts whatever I like, including import, and have them run without surprises.

As I understand it, the risks of prepending are:

  • a malicious script could rewrite ls etc without having root access (but is this really a concern when installing software from standard distro repos only?)
  • a system package might call your user script instead of the other system package (but do user packages ever call mount or whatever without the full path, in practice? Seems like a bad idea)

How serious, exactly, are the security implications of prepending via .profile on a single-user system?

List the PATH's dirs where current user has permission to write?

Posted: 10 Dec 2021 12:25 AM PST

Can I extract from PATH only the directories where I (current user) have permissions to write?

I can imagine I'd need something like echo $PATH | grep... but I can't figure out what.
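
A sketch that should come close (assuming no directory in PATH contains a colon or newline in its name): split on ':' and keep the entries that pass test -w, which checks effective write permission, including group and ACL grants:

```shell
# Sketch: print the directories of a PATH-style string ($1, default $PATH)
# that exist and are writable by the current user.
writable_path_dirs() {
  printf '%s\n' "${1:-$PATH}" | tr ':' '\n' |
  while IFS= read -r d; do
    [ -n "$d" ] && [ -d "$d" ] && [ -w "$d" ] && printf '%s\n' "$d"
  done
}
writable_path_dirs
```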

How should I avoid repeated evaluation of lots of bashrc commands in shells-within-shell-sessions?

Posted: 10 Dec 2021 01:47 AM PST

In bash, we have the inherent separation of .bash_profile and .bashrc: the former runs for login shells, the latter for interactive non-login shells. Now, I understand it's common to start an interactive non-login shell from a non-shell process, and for this reason I find myself running quite a bit of initialization in my .bashrc. The thing is, one also often invokes the shell from within an interactive shell session, or within shell scripts, and I'm not at all sure none of those runs .bashrc. So I would like to somehow constrain some of the work in my .bashrc to happen only in "top-level" interactive shells, in some sense.

Is there some convention on how this is done? Or perhaps, is it too much of a hassle compared to the benefit?
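
One common convention (a sketch; the variable name is arbitrary) is to export a guard variable. Nested shells inherit the environment, so they see the guard and skip the expensive part:

```shell
# Sketch for ~/.bashrc: run the costly initialisation only in the first
# ("top-level") shell of a session; child shells inherit the guard.
if [ -z "$MY_BASHRC_INIT_DONE" ]; then
  export MY_BASHRC_INIT_DONE=1
  # ...expensive one-time setup goes here...
fi
```

The trade-off: a genuinely fresh session started from an already-initialised environment (e.g. a terminal spawned by a shell) also skips the setup, which is usually what you want here.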

Failure to change key mapping with loadkeys in systemd service

Posted: 10 Dec 2021 12:04 AM PST

My Packard-Bell laptop keyboard has 2 non-standard keys grouped with the navigation arrows, making a 2-row x 3-column sub-block instead of the traditional inverted T. Of course, these keys are not recognised by an off-the-shelf Linux kernel.

I can make them active with setkeycodes and loadkeys commands as root.

To avoid having to launch manually the command(s), I designed a systemd unit (a .service file) so that the keyboard is configured during startup.

This worked fine until recently when I upgraded my laptop from a very old Fedora release to Fedora 35.

I now get "keymap x: permission denied" on all maps I try to modify. I don't understand why.

Unless I'm wrong, all commands launched by system systemd services run as root. As such, loadkeys should have access to any file (I had to move the mapping file from my user directory to /etc to fix a "no such file or directory" error) and should be able to change the console mapping.

Fearing a possible race condition, I changed the dependency so that the service is started after multi-user.target is reached (instead of some time before) and I am sure that every partition is mounted and ready. But this did not fix the error.

I suspect my service is run under some non-privileged user but I can't guess which (I can't use id or whoami because the commands are not interpreted by a shell and I can't redirect output to some file for later use).

The man page says there is no point adding User= or Group= because units are already run as root.

UPDATE: I was able to check that the service is launched as root. Consequently, the Keymap x: permission denied doesn't make sense. And if I run the command directly as root (not through systemctl), loadkeys works fine.

So what now? Can you point me in some direction?
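
To see exactly what identity and environment the unit gets, even without an interactive shell, one can wrap the command in /bin/sh and let the journal capture the output. A debugging sketch (the keymap path is a placeholder):

```
[Service]
ExecStart=/bin/sh -c 'id; loadkeys /etc/custom-keys.map'
StandardOutput=journal
StandardError=journal
```

journalctl -u <unit> would then show both the id output and any loadkeys error in context.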

How to redirect only stdout in crontab?

Posted: 09 Dec 2021 11:29 PM PST

I want to only redirect stdout to a logfile from crontab, and let crontab notify me by mail on errors.

If I'd want to redirect both std and err stream, I'd go for 2>&1:

MAILTO=john@me.com
0 23 * * * /home/john/import.sh > /home/john/logs/backup.log 2>&1

But wouldn't this prevent crontab from catching errors and sending a mail notification? That's why I'm looking for a way to redirect only stdout.

My problem is that my import.sh script runs a mysql import, and oddly crontab is sending me emails about it, even though there is no error:

mysql: [Warning] Using a password on the command line interface can be insecure.
Importing from file '/tmp/my.csv' to table `mytable` in MySQL Server at /var%2Frun%2Fmysqld%2Fmysqld.sock using 3 threads
[Worker001] Records: 340086  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] Records: 351334
...
File '/tmp/my.csv' was imported in 1 min 9.9572 sec at 59.91 MB/s

So I'm looking for a way to still log those statements, but notify me only in error case.


Update: Imagine a crontab as follows:

*/1 * * * * /opt/test.sh  

test.sh:

#!/usr/bin/env bash

echo "just testing"
exit 0

Result: the crontab will always send an email notification, even though it's simply a stdout with success code. But why? How can I prevent this?
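
For the record, cron's mail is triggered by any output (stdout or stderr), not by the exit code. So a hedged sketch is to log only stdout and leave stderr for the mail; note that the mysql client prints its password warning on stderr, so that line would still be mailed unless filtered out:

```
MAILTO=john@me.com
0 23 * * * /home/john/import.sh >> /home/john/logs/backup.log
```

The test.sh above mails because its echo goes to stdout with no redirection; appending >> /some/log to that crontab line would stop the mail while keeping the log.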

ls and sort by number for human

Posted: 10 Dec 2021 02:15 AM PST

I create directories with mkdir site_{1..44}. I want them sorted like this:

site_1
site_2
site_3
site_4
...
site_44

I execute the command ls | sort -h and I get:

site_1
site_10
site_11
site_12
site_13
site_14
site_15
site_16
site_17
site_18
site_19
site_2
site_20
site_21
site_22
site_23
site_24
site_25
site_26
site_27
site_28
site_29
site_3
site_30
site_31
site_32
site_33
site_34
site_35
site_36
site_37
site_38
site_39
site_4
site_40
site_41
site_42
site_43
site_44
site_5
site_6
site_7
site_8
site_9

Where am I going wrong?
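
sort -h is for human-readable sizes (1K, 2M, ...), not for numbers embedded in names. Version sort, or a numeric sort keyed on the field after the underscore, gives the expected order. A sketch on a reduced example:

```shell
# Sketch: version sort handles embedded numbers; so does a numeric sort
# on the second "_"-separated field.
printf 'site_%s\n' 1 10 2 | sort -V
printf 'site_%s\n' 1 10 2 | sort -t_ -k2,2n
```

With GNU ls, ls -v sorts the same way without a pipe.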

Enabling hardware video acceleration in Chrome, Kubuntu 20.04

Posted: 10 Dec 2021 03:02 AM PST

Problem in a nutshell: cannot enable hardware video acceleration in Chrome. My desktop has integrated GPU Intel UHD 750 and Core i5 11600 and it runs Kubuntu 20.04.

Initially, I had no hardware acceleration at all, so even VLC played videos without acceleration, even though I had intel-media-va-driver-non-free installed. The output of vainfo was:

libva info: VA-API version 1.7.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_7
libva info: va_openDriver() returns 1
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_6
libva info: va_openDriver() returns -1

I searched for a solution but did not find anyone with the same problem, so I decided to follow some advice for related issues. First, I updated the kernel from 5.11 to 5.15, but that did not help. Then I added a repo to install the 21.xx version of the Intel drivers, as suggested in the comments here: https://githubmemory.com/repo/HaveAGitGat/Tdarr/issues/452. After upgrading some packages and installing some kept-back packages, I got video acceleration. The current output of vainfo is:

libva info: VA-API version 1.12.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_12
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.12 (libva 2.12.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.3.3 (6fdf88c)
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats

and so on.

The next step is to enable hardware acceleration in Chrome. I followed the instructions from here https://www.linuxuprising.com/2021/01/how-to-enable-hardware-accelerated.html but it did not help. The chrome://gpu tab shows the following

Graphics Feature Status

  • Canvas: Software only. Hardware acceleration disabled
  • Canvas out-of-process rasterization: Disabled
  • Compositing: Software only. Hardware acceleration disabled
  • Multiple Raster Threads: Disabled
  • Out-of-process Rasterization: Disabled
  • OpenGL: Disabled
  • Rasterization: Software only. Hardware acceleration disabled
  • Raw Draw: Disabled
  • Skia Renderer: Enabled
  • Video Decode: Software only. Hardware acceleration disabled
  • Vulkan: Disabled
  • WebGL: Disabled
  • WebGL2: Disabled

Problems Detected

  • Accelerated video decode has been disabled, either via blocklist, about:flags or the command line.
    Disabled Features: video_decode
  • Gpu compositing has been disabled, either via blocklist, about:flags or the command line. The browser will fall back to software compositing and hardware acceleration will be unavailable.
    Disabled Features: gpu_compositing
  • GPU process was unable to boot: GPU process crashed too many times with SwiftShader.
    Disabled Features: all
    ...

I also tried to enable video acceleration in Firefox, but failed. Moreover, I installed Chromium, and its chrome://gpu now shows that almost everything is enabled except video acceleration.

Please, help!

How do I resolve this yum update conflict?

Posted: 10 Dec 2021 01:53 AM PST

I'm getting a bunch of conflict error messages regarding the updating of packages containers-common and runc on my CentOS 8 server.

Yum output:

/root>yum update --nobest
Last metadata expiration check: 0:16:51 ago on Fri 24 Sep 2021 03:59:35 PM EDT.
Dependencies resolved.

 Problem: package containers-common-1:1.3.1-5.module_el8.4.0+886+c9a8d9ad.x86_64 requires runc, but none of the providers can be installed
  - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - installed package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - installed package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - installed package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - cannot install the best update candidate for package containers-common-1:1.2.2-10.module_el8.4.0+830+8027e1c4.x86_64
  - cannot install the best update candidate for package containerd.io-1.4.9-3.1.el8.x86_64
  - package runc-1.0.0-56.rc5.dev.git2abd837.module_el8.3.0+569+1bada2e4.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-64.rc10.module_el8.4.0+522+66908d0c.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-65.rc10.module_el8.4.0+819+4afbd1d6.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-70.rc92.module_el8.4.0+786+4668b267.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-71.rc92.module_el8.4.0+833+9763146c.x86_64 is filtered out by modular filtering
================================================================================
 Package            Architecture  Version                                     Repository  Size
================================================================================
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 runc               x86_64   1.0.0-70.rc92.module_el8.4.0+673+eabfc99d   appstream   3.1 M
 runc               x86_64   1.0.0-73.rc93.module_el8.4.0+830+8027e1c4   appstream   3.2 M
 runc               x86_64   1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad   appstream   3.3 M
Skipping packages with broken dependencies:
 containers-common  x86_64   1:1.3.1-5.module_el8.4.0+886+c9a8d9ad       appstream    95 k

Transaction Summary
================================================================================
Skip  4 Packages

Nothing to do.
Complete!
/root>

I tried the suggestion to use the --best --allowerasing flags, but it shows that my docker environment would be corrupted by removing some important packages.

/root>yum update containers-common --best --allowerasing
Last metadata expiration check: 0:30:49 ago on Fri 24 Sep 2021 03:59:35 PM EDT.
Dependencies resolved.
================================================================================
 Package                    Architecture  Version                                    Repository         Size
================================================================================
Upgrading:
 containers-common          x86_64   1:1.3.1-5.module_el8.4.0+886+c9a8d9ad       appstream           95 k
Installing dependencies:
 runc                       x86_64   1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad   appstream          3.3 M
Removing dependent packages:
 containerd.io              x86_64   1.4.9-3.1.el8                               @docker-ce-stable  112 M
 docker-ce                  x86_64   3:20.10.8-3.el8                             @docker-ce-stable   95 M
 docker-ce-rootless-extras  x86_64   20.10.8-3.el8                               @docker-ce-stable   16 M

Transaction Summary
================================================================================
Install  1 Package
Upgrade  1 Package
Remove   3 Packages

Total download size: 3.4 M
Is this ok [y/N]: N

Is there a permanent workaround for these package update conflicts?

Samba has a module vfs_full_audit, what does each object actually mean within the module?

Posted: 10 Dec 2021 01:24 AM PST

The module vfs_full_audit in Samba lists objects that can be added to the module to increase the logging specificity or verbosity generally of syscalls. Example:

# defaults for auditing
full_audit:priority = notice
full_audit:facility = local6
full_audit:failure = create_file open opendir rmdir unlink unlinkat connect connectpath disconnect
full_audit:success = rename opendir rmdir unlink open create_file opendir unlinkat connect connectpath disconnect
full_audit:prefix = %U|%d|%u|%R|%I|%S

However in the man page located here: https://www.samba.org/samba/docs/current/man-html/vfs_full_audit.8.html

It fails to define explicitly what each object actually does. I understand that some of them are fairly obvious, such as open or rmdir, but a sentence describing what each one does would be very useful for the more obscure ones, such as kernel_flock.

Does anyone know of any resource/URL that defines these values explicitly? Or perhaps this has been asked previously by one of you and had data back from Sernet detailing it?

Thanks for looking ;)

What's the impact of setting pasv_min_port=pasv_max_port in vsftpd?

Posted: 09 Dec 2021 11:22 PM PST

I am installing an FTP server vsftpd following this tutorial, which can be summarized as:

Install with apt-get install vsftpd. Then, in the config file /etc/vsftpd.conf, uncomment the lines local_umask=022 and write_enable=YES, and at the end add:

lock_upload_files=NO
chroot_local_user=YES
force_dot_files=YES

and change the following (feel free to change 8745 with whichever port you prefer):

pasv_enable=YES
pasv_min_port=8745
pasv_max_port=8745

What's the impact of setting pasv_min_port to the same port as pasv_max_port in vsftpd? E.g., does it have any impact on performance?
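
A hedged reading: the passive port range bounds how many passive-mode data connections can be open at once, since each transfer listens on its own port. With min = max there is effectively one slot, so concurrent passive transfers can stall until the port frees up. A small range is a common compromise when only a few ports can be opened in the firewall, for example:

```
pasv_enable=YES
pasv_min_port=8745
pasv_max_port=8754
```

A range of ~10 ports allows roughly 10 simultaneous passive data connections.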

TNAS how to install wget

Posted: 10 Dec 2021 01:10 AM PST

I have a TNAS F5-221. How do I install wget? I'm getting wget: command not found. I tried yum install wget and apt install wget. I'm logged in through ssh as root. Is there a workaround?

lsyncd - How to include specific directories and exclude rest of all directories

Posted: 09 Dec 2021 10:07 PM PST

I want to include some directories in the lsyncd process and exclude all the other directories.

I have many directories in a multisource directory. I want to include only the temp and temp1 directories and exclude all the others in lsyncd.

I tried the following in the /etc/lsyncd/lsyncd.conf.lua file:

settings {
        logfile = "/var/log/lsyncd/lsyncd.log",
        statusFile = "/var/log/lsyncd/lsyncd.status"
}
sync {
        default.rsyncssh,
        source = "/var/www/html/multisource",
        host="user@<ip_address>",
        targetdir = "/var/www/html/multisource",
        delay     = 5,
        rsync = {
                perms = true,
                owner = true,
                group = true,
                --include = {"/temp", "/temp1"},
                --exclude = {"/*"}
        }
}

Does anyone have an idea about this?

Is it possible to run a script on the host machine when a docker container starts or stops

Posted: 09 Dec 2021 11:04 PM PST

I start a docker container and bind an IPv6 address to it by running docker run -itd --restart=always --name=<container> --net=br6 --ip6=2001:db8:8:2::100 <image>. However, I have to use the NDP proxy command ip neigh replace proxy "2001:db8:8:2::100" dev ens3 to make the address reachable. Is it possible to run this command on the host machine every time the docker container starts?

use terminal to remove a 'Read Only' file / filesystem in Mac

Posted: 10 Dec 2021 12:06 AM PST

I am currently in the Terminal in Disk Utility, trying to remove everything from my HD, including the Recovery Partition, which is corrupted. I am using rm -f System or rm -f Library, for example, and I get a "Library: is a directory" error.

I have an embedded system placed in my Mac, and it has cost me thousands of dollars with no Mac help from Apple.

Can someone assist here, as it involves an emergency?

Thank you.
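
On the rm error itself: -f does not make rm descend into directories; that needs -R (or -r). A sketch in a scratch location (the Library name here is just an example, not the real system folder; on the real disk this is destructive and permissions/SIP may still block it):

```shell
# Sketch: rm -f refuses a directory; rm -rf removes the whole tree.
scratch=$(mktemp -d)
mkdir -p "$scratch/Library/sub"
rm -f "$scratch/Library" 2>/dev/null || true   # fails: "is a directory"
rm -rf "$scratch/Library"                      # removes the directory tree
```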

dnf: (something) was supposed to be installed but is not!

Posted: 10 Dec 2021 03:01 AM PST

I see that this is an error that has happened here and there for some packages, but I didn't find any solution that works for me.

Everything broke down while trying to install Jupyter with dnf install python2-qtconsole python2-jupyter-core, when I found that python2-urllib3 was giving an error on install. Now, whenever I run dnf install python2-urllib3, I get:

Dependencies resolved.
================================================================================
 Package          Arch    Version      Repository  Size
================================================================================
Installing:
 python2-urllib3  noarch  1.22-3.fc27  updates     178 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 178 k
Installed size: 678 k
Is this ok [y/N]: y
Downloading Packages:
python2-urllib3-1.22-3.fc27.noarch.rpm             78 kB/s | 178 kB     00:02
--------------------------------------------------------------------------------
Total                                              53 kB/s | 178 kB     00:03
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : python2-urllib3-1.22-3.fc27.noarch                     1/1
Error unpacking rpm package python2-urllib3-1.22-3.fc27.noarch
Error unpacking rpm package python2-urllib3-1.22-3.fc27.noarch
error: unpacking of archive failed on file /usr/lib/python2.7/site-packages/urllib3/packages/ssl_match_hostname: cpio: File from package already exists as a directory in system
python2-urllib3-1.22-3.fc27.noarch was supposed to be installed but is not!
  Verifying        : python2-urllib3-1.22-3.fc27.noarch                     1/1

Failed:
  python2-urllib3.noarch 1.22-3.fc27

Error: Transaction failed

I tried all of these:

dnf clean all
yum clean all && rpm --rebuilddb
package-cleanup --problems
rpm -e python3-urllib3-1.22-3.fc27.noarch --nodeps
rpm -i python2-urllib3-1.22-3.fc27.noarch.rpm

... and I'm running out of ideas fast ...

Incorrect automatic time zone

Posted: 10 Dec 2021 01:05 AM PST

I've noticed that the automatic time zone detection functionality of my GNOME 3 (Arch Linux) is not working correctly. My actual time zone is PST (UTC-08), but if I toggle on the "Automatic Time Zone" option in "All Settings -> Date & Time", it would detect me to be in EST (UTC-05).

Kernel: 4.9.11-1-ARCH

GNOME: 3.22.3-1

Output of timedatectl:

      Local time: Wed 2017-03-01 05:36:18 EST
  Universal time: Wed 2017-03-01 10:36:18 UTC
        RTC time: Wed 2017-03-01 10:36:18
       Time zone: America/New_York (EST, -0500)
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no

Output of sudo hwclock --show: 2017-03-01 05:37:38.295861-0500 (Which is the current EST time)

Output of date: Wed Mar 1 05:39:07 EST 2017

I suspected it was something wrong about my IP address, but all online IP location finder websites I've tried tell me I'm in San Francisco (which is correct). Also, I'm running dual systems (Windows 10 & Arch), and one OS writing the hardware clock always results in the other OS having an incorrect time on the next boot; I just ignore it and let the OSes' internet time services correct it. Wrong time zone detection only began today.

I'm not sure how to approach this issue. Can anyone shed some light on what might be the cause?

mod_wsgi with Apache ignoring python-path

Posted: 10 Dec 2021 02:01 AM PST

I'm trying to run mozilla-firefox-sync-server with apache 2.4.17-3 on my Arch Linux server, following this guide. Here's a part of my /etc/httpd/conf/extra/httpd-vhosts.conf file.

<Directory /opt/mozilla-firefox-sync-server>
    Require all granted
</Directory>

<VirtualHost *:80>
    ServerName ffsync.example.com
    DocumentRoot /opt/mozilla-firefox-sync-server/

    WSGIProcessGroup ffsyncs
    WSGIDaemonProcess ffsyncs user=ffsync group=ffsync processes=2 threads=25 python-path=/opt/mozilla-firefox-sync-server/local/lib/python2.7/site-packages/
    WSGIPassAuthorization On
    WSGIScriptAlias / /opt/mozilla-firefox-sync-server/syncserver.wsgi
    CustomLog /var/log/httpd/ffsync_custom combined
    ErrorLog /var/log/httpd/ffsync_error
</VirtualHost>

When I curl ffsync.example.com, I get a 500 error. In the log, it looks like it's running with Python 3.5 (ImportError: No module named 'ConfigParser').

Indeed, if I replace syncserver.wsgi with the following sample code from the ArchWiki page on mod_wsgi:

#-*- coding: utf-8 -*-
def wsgi_app(environ, start_response):
    import sys
    output = sys.version.encode('utf8')
    status = '200 OK'
    headers = [('Content-type', 'text/plain'),
               ('Content-Length', str(len(output)))]
    start_response(status, headers)
    yield output

application = wsgi_app

I get a 200 status code with 3.5.0 (default, Sep 20 2015, 11:28:25) [GCC 5.2.0].

When I use the package mod_wsgi2, everything works correctly, but I need to use mod_wsgi because there's also a Python 3 WSGI application running with Apache which cannot run with mod_wsgi2. The ArchWiki page on mod_wsgi states that mod_wsgi should work with Python 2 and 3.

What makes the python-path argument in the WSGIDaemonProcess directive ignored?

Update : Having a recent version of mod_wsgi (4.4.21-1), I also tried using python-home, like so:

WSGIDaemonProcess ffsyncs user=ffsync group=ffsync processes=2 threads=25 python-home=/opt/mozilla-firefox-sync-server/local/  

This time, I get a 504 error and this message in the error log (whether with the original or the modified syncserver.wsgi):

Timeout when reading response headers from daemon process 'ffsyncs': /opt/mozilla-firefox-sync-server/syncserver.wsgi  

How to store a large folder in a single file without compression

Posted: 10 Dec 2021 01:24 AM PST

I want to take a 78 GB folder and store it in a single file (for upload to a cloud service), as if compressing it into an archive, but without any compression (I don't have that much CPU time available). Is there any way to accomplish this, perhaps with a terminal command I don't know about?
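
tar does exactly this: with no compression flag (-z/-j/-J) it only concatenates the files with headers, which costs almost no CPU. A small self-contained sketch:

```shell
# Sketch: pack a folder into one uncompressed .tar, then unpack it elsewhere.
src=$(mktemp -d); mkdir "$src/folder"; echo data > "$src/folder/file"
tar -cf "$src/folder.tar" -C "$src" folder
dest=$(mktemp -d)
tar -xf "$src/folder.tar" -C "$dest"
```

For the 78 GB tree, the usual one-liner is tar -cf archive.tar /path/to/folder.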

Adding to Profile.local in Ubuntu?

Posted: 09 Dec 2021 11:08 PM PST

I was told to adjust /etc/profile.local with the following lines (as user root):

export PATH=$PATH:~/cmds:.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/lib
export LIBRARY_PATH=$LIBRARY_PATH:~/lib
export ALLOW=1

But I can't seem to find a profile.local file; I'm on Ubuntu 14.04. In any case, the program I need this for is looking for a command in cmds, so I assume this just updates my PATH?

I'm fairly new to Linux, so any help would be appreciated. I tried updating the global /etc/profile, but this did nothing.
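
For context: /etc/profile.local is an openSUSE convention, and stock Ubuntu does not source it, which would explain why the file doesn't exist. The hedged equivalent on Ubuntu is a snippet under /etc/profile.d/, which login shells source from /etc/profile (the file name below is made up; keeping . in PATH mirrors the original instructions, though it is generally considered risky):

```
# /etc/profile.d/local-paths.sh  (hypothetical name)
export PATH="$PATH:$HOME/cmds:."
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/lib"
export LIBRARY_PATH="$LIBRARY_PATH:$HOME/lib"
export ALLOW=1
```

For a single user, the same lines in ~/.profile also work without touching system files.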

How to cache or otherwise speed up `du` summaries?

Posted: 10 Dec 2021 12:21 AM PST

We have a large file system on which a full du (disk usage) summary takes over two minutes. I'd like to find a way to speed up a disk usage summary for arbitrary directories on that file system.

For small branches I've noticed that du results seem to be cached somehow, as repeat requests are much faster, but on large branches the speed-up becomes negligible.

Is there a simple way of speeding up du, or more aggressively caching results for branches that haven't been modified since the previous search?

Or is there an alternative command that can deliver disk usage summaries faster?
