Thursday, November 4, 2021

Recent Questions - Unix & Linux Stack Exchange

Funnel all egress container traffic on a host through another port on the same host

Posted: 04 Nov 2021 10:59 AM PDT

I have a VM running several docker containers using bridge networking mode. These containers are able to connect to external services just fine. Now I want to be able to funnel all of the outgoing TCP traffic originating from each of these containers through a transparent proxy (envoyproxy in this case) running on a specific port (e.g. 10000) on the same host (also running as a container but with host networking mode enabled).

Let's say the app containers are C1 and C2, the proxy container is P1, and the host is H1. Below is what I'm looking for:

  1. if C1 running on H1 wants to connect to www.google.com, I want the outgoing tcp traffic from C1 to go through P1:10000 running on H1
  2. if C2 running on H1 wants to connect to www.apple.com, I want the outgoing tcp traffic from C2 to go through P1:10000 running on H1

For example's sake, I'm able to change /etc/resolv.conf on H1 so those DNS requests resolve to 127.0.0.1, and with the iptables rule below I'm able to have a local curl command (running curl 127.0.0.1 directly on H1) go through P1:10000:

iptables -t nat -I OUTPUT --src 0/0 --dst 127.0.0.1 -p tcp -j REDIRECT --to-ports 10000  

But I'm not able to get this to work with docker containers. I've tried adding the rule below on H1 and running curl host.docker.internal while exec'ing into the container, but it didn't help:

iptables -t nat -I OUTPUT --src 0/0 --dst 172.17.0.1 -p tcp -j REDIRECT --to-ports 10000  

Is it possible to achieve this kind of proxying for containers? Any help is appreciated.
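One likely reason the OUTPUT rule has no effect is that traffic from the containers is forwarded by the host rather than generated by it, so it traverses the nat PREROUTING chain instead of OUTPUT. A hedged sketch of that approach (the bridge name docker0 and the destination ports are assumptions, not values from the question):

iptables -t nat -I PREROUTING -i docker0 -p tcp --dport 80  -j REDIRECT --to-ports 10000
iptables -t nat -I PREROUTING -i docker0 -p tcp --dport 443 -j REDIRECT --to-ports 10000

REDIRECT delivers the connection to port 10000 on the host, which works here because P1 runs with host networking; the proxy still has to be able to make its own outbound connections without being caught by the same rules.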

Tab completion error: bash: cannot create temp file for here-document: No space left on device

Posted: 04 Nov 2021 11:02 AM PDT

I am absolutely new to Linux.

I downloaded 50GB of data on the server disk via SSH.

Then I deleted it using Midnight Commander.

Now, the tab-completion doesn't work, and it is giving me the following error:

-bash: cannot create temp file for here-document: No space left on device  

How can I resolve this issue?
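A few hedged diagnostic commands; a common cause after a large delete is that the space was freed on a different filesystem than the full one, or that a process still holds the deleted files open:

df -h                                                  # which filesystem is actually at 100%?
df -i                                                  # running out of inodes gives the same error
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail    # biggest top-level directories on /
lsof +L1                                               # deleted files still held open by processes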

After installing garuda and rebooting stuck in boot screen

Posted: 04 Nov 2021 10:47 AM PDT

Hello everyone, first-time Arch/Garuda user here. I partitioned my Windows laptop, installed Garuda, and finished the installation process. When prompted to reboot I did so, and got stuck on a loading screen. When I went into GRUB it said:

/new_root: can't find UUID=7137....  

After it dropped me into an emergency shell I was hit with a

sh: can't access tty: job control turned off  

Can anyone explain to me what is happening and how to fix it?
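Not an answer, but a hedged starting point from the Garuda live USB: compare the UUID that GRUB passes on the kernel command line with what the disk actually reports (device names below are assumptions):

sudo blkid                                      # list the real partition UUIDs
sudo mount /dev/nvme0n1p2 /mnt                  # mount the installed root partition (device name assumed)
grep UUID /mnt/etc/fstab /mnt/boot/grub/grub.cfg | head

If the UUIDs disagree, regenerating the GRUB config and the initramfs from a chroot is the usual next step.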

How to multi thread this for loop script?

Posted: 04 Nov 2021 10:46 AM PDT

I have this command for batch-converting PDFs (first 2 pages only) to TIFF files using pdftoppm. The goal is to put each PDF's TIFF images into its own folder, with the folder name matching the original PDF file name.

for file in *.pdf; do pdftoppm -tiff -f 1 -l 2 "$file" ~/tiff/directory/"$file"/"$file"; done

How can I run this for loop multithreaded?

I am running debian.

I have 10000s of pdfs to convert to tiff.

I would like to have around 8 processes concurrently.
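A hedged sketch assuming GNU xargs (for its -P option) and the same output layout as the loop above; the mkdir -p is an extra step so each per-PDF folder exists before pdftoppm writes into it:

find . -maxdepth 1 -name '*.pdf' -print0 |
  xargs -0 -P 8 -I{} sh -c '
    f=$(basename "$1")
    mkdir -p ~/tiff/directory/"$f" &&
    pdftoppm -tiff -f 1 -l 2 "$1" ~/tiff/directory/"$f"/"$f"
  ' _ {}

GNU parallel (parallel -j 8 ... ::: *.pdf) would do the same job if it is installed.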

VMWare Player 15.5.1 - cannot start GUI - vmmon cannot be installed

Posted: 04 Nov 2021 10:20 AM PDT

I cannot start VMware Player 15.5.1: vmmon cannot be installed and gives an error (see the attached screenshots).

Copy multiple .txt contents into single file based on character length

Posted: 04 Nov 2021 09:37 AM PDT

I'm looking to find the largest file within a directory by character count, copy its contents, delete the file, and then append the contents to another file elsewhere. The end goal is that every .txt file in the directory is copied into one single complete file, in this new order.

I have managed this by sorting the byte size of the file but not by character count.

My only headway has been attempting to loop this within the directory containing the files, but that just results in an error; I get the impression this code is barking up the wrong tree...

du -b *.txt | sort -n | tail -n1  
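wc -m can rank the files by character count rather than bytes. A rough sketch, assuming GNU coreutils and filenames without unusual whitespace; the destination path is a placeholder:

wc -m -- *.txt | grep -v ' total$' | sort -n | tail -n1     # largest file by character count

out=$HOME/combined/combined.txt                  # destination outside the source directory (assumed)
mkdir -p "$(dirname "$out")"
while ls -- *.txt >/dev/null 2>&1; do
    big=$(wc -m -- *.txt | grep -v ' total$' | sort -n | tail -n1 | awk '{$1=""; sub(/^ /,""); print}')
    cat -- "$big" >> "$out" && rm -- "$big"      # append the current largest file, then delete it
done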

How To Save Log Files To Mongodb Server With Curl

Posted: 04 Nov 2021 09:36 AM PDT

I wish to have a shell script that can save the content of Linux log files to a MongoDB server using curl. Keep in mind that the script has to copy the content of the log files line by line before sending it to the MongoDB server.
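A hedged sketch, assuming an HTTP endpoint that accepts JSON documents: MongoDB itself has no built-in REST interface, so this presumes something like the Atlas Data API or a small web service in front of mongod. The URL, log path and field names are placeholders, and jq is assumed to be installed for safe JSON escaping:

LOG=/var/log/syslog
ENDPOINT="https://example.com/api/insertOne"
while IFS= read -r line; do
    curl -s -X POST "$ENDPOINT" \
         -H 'Content-Type: application/json' \
         --data "$(jq -n --arg msg "$line" '{document: {message: $msg}}')"
done < "$LOG"

One HTTP request per line is slow for big logs; batching lines into one request would be the obvious refinement.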

WHM / Rearrange an Account

Posted: 04 Nov 2021 08:59 AM PDT

I recently bought another block of storage from my provider and, following their guide, moved my accounts onto it:

  1. sudo mkfs.ext4 /dev/vdb
  2. sudo mkdir /mnt/vol-us-1
  3. sudo mount -t ext4 /dev/vdb /mnt/vol-us-1
  4. bin/bash -c "if [ $(cat /etc/fstab | grep -i /dev/vdb | awk '{print $1}')!="/dev/vdb" ]; then sudo bash -c 'echo \/dev/vdb /mnt/vol-us-1 ext4 defaults,nofail,discard,noatime 1 2\ >> /etc/fstab';fi"

This worked initially, but then I ran through an upgrade for WHM and I lost my settings, and the storage block seemed to have been wiped. I was able to recover from a backup, but not entirely.

I can see in my Disk Usage report that the mount exists.

But when trying to move my account, WHM doesn't let me do it.

Is this network setup reasonable?

Posted: 04 Nov 2021 08:36 AM PDT

Is there a problem with my network configured like this?

~ $ ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1  (UNSPEC)
        RX packets 264394  bytes 13483549 (12.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 264394  bytes 13483549 (12.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

p2p0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet6 fe80::ec51:bcff:fe55:af4b  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 192.168.1.101  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::ee51:bcff:fe55:af4b  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 3000  (UNSPEC)
        RX packets 6086  bytes 6348785 (6.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4665  bytes 894397 (873.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Grub menu enters shell with Fedora 34 install

Posted: 04 Nov 2021 08:43 AM PDT

I recently did a Fedora 34 minimal install on my own PC. It works fine.

Same household, different PC: GRUB greets me with a grub> shell. Typing exit works as expected; I get the normal menu and can start my Linux distro. I have read a lot of "grub menu broken" posts, which is not my issue: the menu works, but GRUB greets me with a shell first.

It is a dual boot setup and Windows is accessible.

I unplugged the drive with Fedora on it and the GRUB shell is still there; when I type exit now, the screen goes black for a second and Windows loads, without asking me what I want to boot.

Is GRUB not being installed on my Linux partition, and is that what triggers the console to open first? I reinstalled Fedora as well (before unplugging the drive) and nothing changed.
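A couple of hedged checks on the machine that boots to the grub> shell, assuming a UEFI Fedora install (standard Fedora commands; paths may differ):

sudo efibootmgr -v                              # which EFI entry is tried first, and where does it point?
lsblk -f                                        # where do the EFI system partition and /boot actually live?
sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # regenerate the menu if the config is missing or stale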

Yum as root only works if I use the sudo command

Posted: 04 Nov 2021 10:00 AM PDT

This is for RHEL 7.8...

I support a VM for a customer where another third-party uses the root account to update packages. When they use yum as root, they receive a timeout, like this (note: this is not package-specific and it is not repo-specific):

https://rhui-2.microsoft.com/pulp/repos/microsoft-azure-rhel7/repodata/repomd.xml: [Errno 12] Timeout on https://rhui-2.microsoft.com/pulp/repos/microsoft-azure-rhel7/repodata/repomd.xml: (28, 'Operation timed out after 30001 milliseconds with 0 out of 0 bytes received')  

However, I've noticed that when I issue the sudo command, everything works just fine.

To summarize, as root, this command doesn't work (It just times out when trying to access any repo, see the above timeout error):

yum install -y java-11-openjdk  

But this command does work:

sudo yum install -y java-11-openjdk  

Any ideas what would cause this issue? Obviously, everything is fine with sudo. The customer wants them to use the root account, so making them their own account is not an option. I am really just wondering why you would ever have to run sudo as root to get something to work. Is this a PATH issue?
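One hedged diagnostic: sudo resets the environment by default, so a proxy variable set in root's login shell (but stripped by sudo) would produce exactly this asymmetry. Comparing the two environments is cheap:

env | grep -i proxy | sort > /tmp/root-env.txt
sudo env | grep -i proxy | sort > /tmp/sudo-env.txt
diff /tmp/root-env.txt /tmp/sudo-env.txt
grep -i proxy /etc/yum.conf              # yum's own proxy setting, for comparison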

Thank you for your time and consideration.

-Adam, RHCE

cups-pdf : Multiple printers with different targets

Posted: 04 Nov 2021 06:58 AM PDT

On my CUPS server I have multiple network printers, and I added a cups-pdf virtual printer. All of this works.

Now I want to add another virtual printer that saves the PDF files to a different folder.

I can't find how to do this in the configuration.

Is it possible to do it this way? If yes, how? If not, do you have an idea for another way of doing this kind of thing?
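A hedged sketch, assuming cups-pdf 3.x, which reads a per-queue configuration file named /etc/cups/cups-pdf-<queuename>.conf; the queue name, PPD path and output folder below are placeholders, not values from the question:

lpadmin -p pdf-other -E -v cups-pdf:/ -P /usr/share/ppd/cups-pdf/CUPS-PDF_opt.ppd   # PPD path varies by distro
printf 'Out /srv/pdf/other\n' > /etc/cups/cups-pdf-pdf-other.conf                   # per-queue output directory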

replace a value only between the " with sed or awk [closed]

Posted: 04 Nov 2021 08:57 AM PDT

I have such a scenario inside a CSV file:

ID,PRICE,QUANTITY,ARRIVAL
01299,"41,5",1,0
26528,"412,03",0,0
38080,"2,35",0,0
38081,"2,35",0,0
..
..

The question I ask myself is: how do I replace the , with a ., but only in the prices inside "..." in the PRICE column?

I tried with

sed -i 's/\(,\)[^ ]*\( .*\)/\1"."\2/'  

but without success. Can you give me a tip?
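A narrower substitution that only touches a comma between digits inside double quotes; it assumes each quoted price contains exactly one comma, as in the sample:

sed -i 's/"\([0-9]*\),\([0-9]*\)"/"\1.\2"/g' file.csv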

no free/cached memory and no process consuming it

Posted: 04 Nov 2021 08:44 AM PDT

I can't figure out where my RAM has gone; it's not in buff/cache, and I'm not sure I understand the /proc/meminfo data.

What is sure is that when I run top or try to find the processes responsible, I can't find more than 3-4 processes consuming even 0.1% of my RAM.

I tried rebooting the server; it runs fine for a few hours and then, within seconds, the memory drops from 90% free to this:

Here are my free -m and /proc/meminfo

Thanks in advance

$ free -m
               total        used        free      shared  buff/cache   available
Mem:            3924        3569         205           5         150         160
Swap:            512         133         379

$ cat /proc/meminfo
MemTotal:        4019180 kB
MemFree:          217848 kB
MemAvailable:     168264 kB
Buffers:           13024 kB
Cached:           106532 kB
SwapCached:        21440 kB
Active:           109704 kB
Inactive:          53776 kB
Active(anon):      19084 kB
Inactive(anon):    30604 kB
Active(file):      90620 kB
Inactive(file):    23172 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:        525308 kB
SwapFree:         388992 kB
Dirty:                76 kB
Writeback:             0 kB
AnonPages:         25476 kB
Mapped:            17148 kB
Shmem:              5756 kB
Slab:              49972 kB
SReclaimable:      26380 kB
SUnreclaim:        23592 kB
KernelStack:        3484 kB
PageTables:         5576 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      771568 kB
Committed_AS:     404024 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
HugePages_Total:    1722
HugePages_Free:     1722
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       13248 kB
DirectMap2M:     4163584 kB

$ df -h | grep tmpfs
udev           devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs          tmpfs     393M   46M  347M  12% /run
tmpfs          tmpfs     2.0G   48K  2.0G   1% /dev/shm
tmpfs          tmpfs     5.0M   48K  5.0M   1% /run/lock
tmpfs          tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
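One thing worth checking, offered here only as a guess based on the /proc/meminfo above: 1722 huge pages of 2048 kB each are reserved, and memory reserved for huge pages counts as "used" even when no process appears to own it:

grep -i huge /proc/meminfo                     # HugePages_Total: 1722, Hugepagesize: 2048 kB
echo "$((1722 * 2048 / 1024)) MiB reserved"    # roughly 3444 MiB, which matches the missing RAM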

Replace a hexadecimal value by a (modified) decimal value in a text file

Posted: 04 Nov 2021 08:41 AM PDT

Inside a file test.txt I have a hexadecimal value

0x0000000000000000000000000000000000000000047546124890225541102135415377465907  

There is only one line; no other lines or characters. The value is identified only by the 0x prefix.

I want to convert this hex value to decimal (388355321549592156970965297418600041568519), subtract 1, and overwrite the original value with the result of this operation in test.txt

Ultimately, the data in the test.txt file should be converted from

0x0000000000000000000000000000000000000000047546124890225541102135415377465907  

to

388355321549592156970965297418600041568518  

I would be very grateful if you could tell me how to do it with Bash (Linux shell).
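A minimal sketch using bc, since bash integer arithmetic cannot hold a number this large; it assumes the file really contains just that single 0x-prefixed line:

hex=$(< test.txt)
hex=${hex#0x}                              # drop the 0x prefix
dec=$(echo "ibase=16; ${hex^^}" | bc)      # bc expects uppercase hex digits
echo "$dec - 1" | bc > test.txt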

What is an effective way to automatically append file extensions to extensionless files?

Posted: 04 Nov 2021 09:14 AM PDT

I wrote a bash script today during my lunch break that finds extensionless files in a directory and appends a file extension to those files.

The script is relatively long because I added a bunch of flags and stuff like directory selection and whether to copy or overwrite the file, but the meat and potatoes of its functionality can be replicated simply with this:

#recursively find files in current directory that have no extension
for i in $(find . -type f ! -name "*.*"); do
    #guess that extension using file
    extfile=$(file --extension --brief $i)
    #select the first extension in the event file spits something weird (e.g. jpeg/jpe/jfif)
    extawk=$(echo $extfile | awk -F/ '{print $1}')
    #copy the file to a file appended with the extension guessed from the former commands
    cp -av $i $i.$extawk
done

It's a bit tidier in my actual script—I just wanted to split commands up on here so I could comment why I was doing things.

My question: Using find in combination with file in the manner I have chosen is likely not the most fool-proof way to go about doing this—what is the best way to recursively guess and append extensions for a bulk group of diverse filetypes among several directories?
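A slightly more defensive variant of the same idea, kept as a sketch: null-delimited find output and quoted variables survive whitespace in file names, and the ??? check assumes file's behaviour of printing ??? when it cannot guess an extension:

find . -type f ! -name '*.*' -print0 |
while IFS= read -r -d '' f; do
    ext=$(file --extension --brief -- "$f" | cut -d/ -f1)   # first candidate only (e.g. jpeg from jpeg/jpe/jfif)
    [ "$ext" = "???" ] && continue                          # skip files with no usable guess
    cp -av -- "$f" "$f.$ext"
done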

How to batch change exif data for JPEG photo files (wrong date set in camera)?

Posted: 04 Nov 2021 10:17 AM PDT

I have taken 300 photos at an event. Afterwards I noticed that the date was set incorrectly in the camera - one day off. There are lots of EXIF data in the files, not just creation dates.

How can I change only the dates contained within all relevant EXIF fields to correct the date (minus one day exactly)?

No other data should be changed by this modification!

Perhaps for each file I could dump the data (exiftool or exiv2?), then modify the dump (with awk?), then replace EXIF data from the modified dump? But how?

EDIT:

There is a lot of data per file:

# exiftool IMG_9040.JPG | wc
     289    2218   13996

Lots of it are dates:

# exiftool IMG_9040.JPG | grep 2021 | grep -v File
Modify Date                     : 2021:11:02 17:06:58
Date/Time Original              : 2021:11:02 17:06:58
Create Date                     : 2021:11:02 17:06:58
Create Date                     : 2021:11:02 17:06:58.24+01:00
Date/Time Original              : 2021:11:02 17:06:58.24+01:00
Modify Date                     : 2021:11:02 17:06:58.24+01:00

I wish to change all of these.
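exiftool can shift its standard date/time tags in one pass: AllDates covers the DateTimeOriginal, CreateDate and ModifyDate tags listed above, and the shift string is in Y:M:D H:M:S form.

exiftool "-AllDates-=0:0:1 0:0:0" *.JPG
# subtracts exactly one day; add -overwrite_original to skip the *_original backup copies exiftool keeps by default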

Kinda complicated tar based on modified date

Posted: 04 Nov 2021 10:58 AM PDT

Okay, I think this is possible, but I can't quite figure it out. This is the situation.

A folder contains the log files of all the processes on my robot. The structure looks sort of like this:

$ ls -lrt
total 8
drwxrwxr-x 2 per per 4096 nov  3 12:46 launch01
-rw-rw-r-- 1 per per    0 nov  3 12:47 camera112.log
-rw-rw-r-- 1 per per    0 nov  3 12:47 motors121.log
-rw-rw-r-- 1 per per    0 nov  3 12:47 lidar111.log
drwxrwxr-x 2 per per 4096 nov  3 12:49 launch02
-rw-rw-r-- 1 per per    0 nov  3 12:49 motors122.log
-rw-rw-r-- 1 per per    0 nov  3 12:49 lidar211.log
-rw-rw-r-- 1 per per    0 nov  3 12:49 camera113.log

The files camera112.log, motors121.log and lidar111.log are associated with the logs in folder launch01. I would like to write a script that gathers all the files belonging to a specific launch and tars them into one tarball. Since timestamps can vary slightly between files and the numbers in the file names are only loosely related, I think the best way to gather all relevant files is to take every file from launch01 (inclusive) up to the next directory in the listing (exclusive). The number of files can vary, as can the timestamps and names. What is consistent is the pattern: a folder, then a bunch of files, then the next folder, then files, and so on. Ultimately, I would like to get the latest set of logs easily.

Unsure of the approach here. Any ideas how to go about this?

Clarifications:

  • Number of files can vary.
  • The exact timestamp is not reliable (as above, the folder launch01 is different than camera112.log) but relative timestamps work fine. For instance, if I could tar all files from launch01 (inclusive) to launch02 (exclusive) in the list provided by ls -lrt, that works great.
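A sketch for the "latest set of logs" case, under the assumption that the newest launch* directory plus every *.log file that is not older than it together make up one launch:

shopt -s nullglob
latest=$(ls -td launch*/ | head -n1)              # newest launch directory
latest=${latest%/}
to_pack=("$latest")
for f in *.log; do
    [ ! "$f" -ot "$latest" ] && to_pack+=("$f")   # keep logs not older than the launch directory
done
tar -czf "${latest}-logs.tar.gz" "${to_pack[@]}"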

Is it possible to remove specific repeated character of a list using regex?

Posted: 04 Nov 2021 08:04 AM PDT

I have a single-column list of ~100 lines, in which some lines are repeated. My goal is to get rid of specific duplicate lines, leaving only one copy of each, while the other lines are kept untouched.

An extract of the file I'm working on:

V(Mn9)
V(C1,H3)
V(Mn6)
V(Mn6)
V(C4,H6)
V(Mn9)
V(Mn9)
V(C1,Mn6)
V(C4,Mn9)
V(Mn6)
V(C1,C4)
C(Mn9)
C(Mn6)
C(C1)
C(C4)
C(Mn9)
C(Mn6)
V(C1,H2)
V(Mn9)
V(Mn6)
V(C4,H5)

My goal is to remove all the duplicate lines containing C(Xx0-9), leaving just one of each, and to keep the V(Xxx...) lines as they are.

The result I seek :

V(Mn9)
V(C1,H3)
V(Mn6)
V(Mn6)
V(C4,H6)
V(Mn9)
V(Mn9)
V(C1,Mn6)
V(C4,Mn9)
V(Mn6)
V(C1,C4)
C(C1)
C(C4)
C(Mn9)
C(Mn6)
V(C1,H2)
V(Mn9)
V(Mn6)
V(C4,H5)

I used the command :

sed '0,/C(Mn9)/{/C(Mn9)/d}' inputfile.txt | sed '0,/C(Mn6)/{/C(Mn6)/d}'  

and it works, but it's not good enough for the whole file, because there are a lot of C(Xx1-50) entries. I thought of using a regular expression, but I don't know how; that's why I need your help.
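One possible awk approach: only lines starting with C( are subject to de-duplication, so duplicate V(...) lines stay untouched:

awk '!/^C\(/ || !seen[$0]++' inputfile.txt > outputfile.txt

This keeps the first occurrence of each distinct C(...) line; it matches on the whole line, so trailing-whitespace differences would count as different lines.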

How to run rtorrent as systemd service under a dedicated user?

Posted: 04 Nov 2021 10:27 AM PDT

I am trying to get rtorrent to run as a systemd service, but the service won't start. Here are the config file and all the logs I can get. Ask for more info if you need it. I am running:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal

$ systemctl status rtorrent
● rtorrent.service - rTorrent
     Loaded: loaded (/etc/systemd/system/rtorrent.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Thu 2021-05-27 08:52:43 EEST; 5min ago
    Process: 20199 ExecStart=/usr/bin/tmux new-session -d -P -s rt -n rtorrent /usr/bin/rtorrent (code=exited, status=0/SUCCESS)
    Process: 20205 ExecStop=/usr/bin/tmux send-keys -t rt:rtorrent C-q (code=exited, status=1/FAILURE)
   Main PID: 20201 (code=exited, status=0/SUCCESS)

May 27 08:52:43 $MACHINE systemd[1]: Starting rTorrent...
May 27 08:52:43 $MACHINE tmux[20199]: rt:
May 27 08:52:43 $MACHINE systemd[1]: Started rTorrent.
May 27 08:52:43 $MACHINE tmux[20205]: no server running on /tmp/tmux-110/default
May 27 08:52:43 $MACHINE systemd[1]: rtorrent.service: Control process exited, code=exited, status=1/FAILURE
May 27 08:52:43 $MACHINE systemd[1]: rtorrent.service: Failed with result 'exit-code'.

The config file:

[Unit]
Description=rTorrent
Requires=network.target local-fs.target

[Service]
Type=forking
KillMode=none
User=rt
Group=adm
ExecStart=/usr/bin/tmux new-session -d -P -s rt -n rtorrent /usr/bin/rtorrent
ExecStop=/usr/bin/tmux send-keys -t rt:rtorrent C-q
WorkingDirectory=/tmp/tmux-110/

[Install]
WantedBy=multi-user.target

Some more logs:

$ journalctl -u rtorrent
May 27 08:52:43 $MACHINE systemd[1]: Starting rTorrent...
May 27 08:52:43 $MACHINE tmux[20199]: rt:
May 27 08:52:43 $MACHINE systemd[1]: Started rTorrent.
May 27 08:52:43 $MACHINE tmux[20205]: no server running on /tmp/tmux-110/default
May 27 08:52:43 $MACHINE systemd[1]: rtorrent.service: Control process exited, code=exited, status=1/FAILURE
May 27 08:52:43 $MACHINE systemd[1]: rtorrent.service: Failed with result 'exit-code'.

So far I have added the user rt to the adm group, but I can't figure out why tmux can't be started as rt. I also authorized the rt user to launch services with the enable-linger option: loginctl enable-linger rt. I originally added the rt user with: sudo adduser --system --gecos "rTorrent Client" --disabled-password --group --home /home/rt rt. How do I make rtorrent run as a systemd service inside tmux as a dedicated user? Or is there any other way to run it as a service with systemd? Any help is really appreciated.

UPDATE: So, just to get a fresh start, I have created a new user named rtorrent with: sudo adduser --system --gecos "rTorrent System Client" --disabled-password --group --home /home/rtorrent rtorrent and changed the /etc/systemd/system/rtorrent.service file to this (also adding system.daemon = true in /home/rtorrent/.rtorrent.rc, because of this post):

[Unit]
Description=rTorrent System Daemon
After=network.target

[Service]
Type=simple
User=rtorrent
Group=rtorrent

ExecStartPre=-/bin/rm -f /home/rtorrent/.session/rtorrent.lock
ExecStart=/usr/bin/rtorrent -o import=/home/rtorrent/.rtorrent.rc
Restart=on-failure
RestartSec=3

[Install]
WantedBy=multi-user.target

But I still get this error:

$ systemctl status rtorrent
● rtorrent.service - rTorrent System Daemon
     Loaded: loaded (/etc/systemd/system/rtorrent.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Thu 2021-05-27 10:12:26 EEST; 2s ago
    Process: 22855 ExecStartPre=/bin/rm -f /home/rtorrent/.session/rtorrent.lock (code=exited, status=0/SUCCESS)
    Process: 22856 ExecStart=/usr/bin/rtorrent -o import=/home/rtorrent/.rtorrent.rc (code=exited, status=255/EXCEPTION)
   Main PID: 22856 (code=exited, status=255/EXCEPTION)

Why is this happening? What am I doing wrong?

UPDATE 2: One more thing. This post suggests not putting unit files in /etc/systemd/system/, but instead in /usr/local/lib/systemd/system, which on Debian-based systems is /lib/systemd/system. Therefore, I moved the unit file there, and when enabling it a symlink to /etc/systemd/system/ was created automatically. But I still get this error:

$ sudo systemctl status rtorrent
● rtorrent.service - rTorrent System Daemon
     Loaded: loaded (/lib/systemd/system/rtorrent.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Thu 2021-05-27 10:39:14 EEST; 924ms ago
    Process: 24530 ExecStartPre=/bin/rm -f /home/rtorrent/.session/rtorrent.lock (code=exited, status=0/SUCCESS)
    Process: 24531 ExecStart=/usr/bin/rtorrent -o import=/home/rtorrent/.rtorrent.rc (code=exited, status=255/EXCEPTION)
   Main PID: 24531 (code=exited, status=255/EXCEPTION)
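A hedged way to see the real reason behind status=255: run the exact ExecStart command once by hand as the service user and read its output (paths taken from the unit file above):

sudo -u rtorrent -H /usr/bin/rtorrent -o import=/home/rtorrent/.rtorrent.rc
sudo -u rtorrent -H ls -ld /home/rtorrent/.session      # the session directory must exist and be writable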

How to verify a checksum using one command line?

Posted: 04 Nov 2021 07:09 AM PDT

Suppose I type and run the following command:

sha256sum ubuntu-18.04.1-desktop-amd64.iso  

After a delay, this outputs the following:

5748706937539418ee5707bd538c4f5eabae485d17aa49fb13ce2c9b70532433  ubuntu-18.04.1-desktop-amd64.iso  

Then, I realize that I should have typed the following command to more rapidly assess whether the SHA‐256 hash matches:

sha256sum ubuntu-18.04.1-desktop-amd64.iso | grep 5748706937539418ee5707bd538c4f5eabae485d17aa49fb13ce2c9b70532433  

Is there a way to act on the first output without using the sha256sum command to verify the checksum a second time (i.e., to avoid the delay that would be caused by doing so)? Specifically:

  1. I'd like to know how to do this using a command that does not require copy and pasting of the first output's checksum (if it's possible).
  2. I'd like to know the simplest way to do this using a command that does require copy and pasting of the first output's checksum. (Simply attempting to use grep on a double‐quoted pasted checksum (i.e., as a string) doesn't work.)
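For case 2, sha256sum can do the comparison itself when given a "<hash>  <filename>" line on stdin (note the two spaces; this does read and hash the file once more):

echo "5748706937539418ee5707bd538c4f5eabae485d17aa49fb13ce2c9b70532433  ubuntu-18.04.1-desktop-amd64.iso" | sha256sum -c -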

How to pass header from stdin or local file into remote curl?

Posted: 04 Nov 2021 09:18 AM PDT

The following curl command works as expected:

$ curl -H @- -vso/dev/null http://www.example.com <<<"Foo:Bar"
* Rebuilt URL to: http://www.example.com/
...
> Accept: */*
> Foo:Bar
>
< HTTP/1.1 200 OK
...

since I can see my custom header (Foo:Bar), but it doesn't work when running via ssh:

$ ssh user@localhost curl -H @- -vso/dev/null http://www.example.com <<<"Foo:Bar"
* Rebuilt URL to: http://www.example.com/
...
> Accept: */*
>
< HTTP/1.1 200 OK
...

I can confirm that the stdin works on the remote by:

$ ssh user@localhost cat <<<"Foo:Bar"
Foo:Bar

My goal is to pass the headers from stdin or local file (not from the variable) into remote curl.

And I'm not quite sure why the above doesn't work.

Why does GDB need the executable as well as the core dump?

Posted: 04 Nov 2021 10:43 AM PDT

I'm debugging using core dumps, and note that gdb needs you to supply the executable as well as the core dump. Why is this? If the core dump contains all the memory that the process uses, isn't the executable contained within the core dump? Perhaps there's no guarantee that the whole exe is loaded into memory (individual executables are not usually that big though) or maybe the core dump doesn't contain all relevant memory after all? Is it for the symbols (perhaps they're not loaded into memory normally)?

How to assign e1000e driver to Ethernet adapter

Posted: 04 Nov 2021 07:01 AM PDT

Is there a way to instruct an Ethernet adapter to use a certain driver? Or perhaps the way it works is to have a way to instruct a driver to support a specific adapter?

I have a system running a recently installed RHEL Server 7.3 OS (kernel 3.10.0-514.el7.x86_64), where the e1000e driver is not linked to an on-board I219-LM Ethernet adapter. This condition was found while investigating why the adapter is not working properly. The other Ethernet adapter, which works fine, is a PCI card attached to the MB.

A simple lspci says:

# lspci | grep net
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
06:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) (rev 06)

Verbose lspci for the I219-LM device does not report a driver in use:

# lspci -v -s 00:1f.6
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
    Subsystem: Intel Corporation Device 0000
    Flags: fast devsel, IRQ 16
    Memory at a1700000 (32-bit, non-prefetchable) [size=128K]
    Capabilities: [c8] Power Management version 3
    Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [e0] PCI Advanced Features
    Kernel modules: e1000e

Conversely, the same command for the other adapter states that e1000e is being used by the device:

# lspci -v -s 06:00.0
06:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) (rev 06)
    Subsystem: Intel Corporation PRO/1000 PT Server Adapter
    Flags: bus master, fast devsel, latency 0, IRQ 130
    Memory at a1320000 (32-bit, non-prefetchable) [size=128K]
    Memory at a1300000 (32-bit, non-prefetchable) [size=128K]
    I/O ports at 4000 [disabled] [size=32]
    Expansion ROM at a1340000 [disabled] [size=128K]
    Capabilities: [c8] Power Management version 2
    Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Capabilities: [e0] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Device Serial Number [edited]
    Kernel driver in use: e1000e
    Kernel modules: e1000e

I have another system available, using the same OS and type of on-board (and properly functioning) I219-LM adapter, where I verified that, indeed, the driver should be linked to the device.

Browsing the /sys/bus/pci/drivers/e1000e and /sys/devices/pci0000:00/0000:00:1f.6 areas has shown a couple of missing things:

  1. In the .../drivers/e1000e folder, there is a soft link using the PCI address of the 82572EI adapter that points into the /sys/devices/ area, but none for the I219-LM adapter's address. In comparison, on the mentioned "control" system, there are links for all the adapters it has.
  2. In the /sys/devices/pci0000:00/0000:00:1f.6 area, there is no driver soft-link. However, that soft-link is present in the corresponding folder for the other adapter (../pci0000:00/0000:06:00.0), pointing to the /sys/bus/pci/drivers/e1000e path as it should.
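A hedged experiment based on those findings: PCI devices can be bound to a driver by hand through sysfs, and dmesg usually explains a refusal (the PCI address is taken from the lspci output above):

echo 0000:00:1f.6 > /sys/bus/pci/drivers/e1000e/bind    # run as root
dmesg | grep -i e1000e | tail                           # look for the driver's complaint, if any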

Let me know if more info is needed to help me on this.

Thank you.

Zsh tab completions not working as desired for partial paths

Posted: 04 Nov 2021 10:02 AM PDT

I want case-insensitive fuzzy completion for files and directories in zsh. After reading the manual for a few hours, this is what I came up with:

zstyle ':completion:*:*:*:*:globbed-files' matcher 'r:|?=** m:{a-z\-}={A-Z\_}'
zstyle ':completion:*:*:*:*:local-directories' matcher 'r:|?=** m:{a-z\-}={A-Z\_}'
zstyle ':completion:*:*:*:*:directories' matcher 'r:|?=** m:{a-z\-}={A-Z\_}'

Additionally, I want pressing TAB once to display possible completions, only modifying what I have typed if there is exactly one completion. Then pressing TAB a second time should put me into "menu completion" mode. Based on the manuals, I came up with this:

zstyle ':completion:*' menu select  

Now everything works as it should except in one circumstance. I have two folders Desktop and .rstudio-desktop in my home directory. Since I have setopt globdots, I expect typing the following:

$ cd ~/dktop<TAB>  

to leave my command as entered, and display as completion candidates Desktop and .rstudio-desktop. Instead, it removes dktop, leaving me with the following:

$ cd ~/  

I have looked at all of the relevant manuals, guides, Stack Exchange questions, and various other sources. But whatever I do, I can't make this work.

Interestingly, though, if I'm in the home directory and type the following then everything works as expected:

$ cd dktop<TAB>  

That is, it's only a problem with non-leading segments of paths (and you can see with C-x h that this corresponds to the directories tag rather than the local-directories tag being used).

For easy reproducibility, here is a ~/.zshrc that will reproduce the situation and behavior described above (tested on a fresh El Capitan virtual machine with zsh from Homebrew).

remove lines from a vcf.gz file with awk command

Posted: 04 Nov 2021 08:07 AM PDT

I just asked a question about filtering out lines with a specific value in a specific column.

Now I want to delete lines with a specific value in a specific column. How do I do that?

E.g. delete the lines with 1/1 in the column labelled 12345 in file.vcf.gz and put the remaining lines in a new file called newfile.vcf.gz.

E.g.

#CHROM      POS         ALT     12345
1           345632      T       0/1:4,4:8:99:105,0,106
4           032184      C       1/1:46,9:55:99:99,0,1222
6           843290      A       0/1:67,20:87:99:336,0,1641

Expected result:

1           345632      T       0/1:4,4:8:99:105,0,106
6           843290      A       0/1:67,20:87:99:336,0,1641
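A hedged awk sketch: it finds the column named 12345 from the #CHROM header line, keeps all header lines, and drops data rows whose genotype field in that column starts with 1/1 (real VCFs are tab-separated and carry many ## meta lines before the header, which this handles as well):

zcat file.vcf.gz |
awk 'BEGIN { col = 0 }
     /^#/  { if ($1 == "#CHROM") for (i = 1; i <= NF; i++) if ($i == "12345") col = i; print; next }
     col == 0 || $col !~ /^1\/1/' |
gzip > newfile.vcf.gz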

php drop down menu to execute script with argument

Posted: 04 Nov 2021 11:00 AM PDT

I have a shell script which I run like this:

/var/www/test.sh 2015  

or

/var/www/test.sh 2014  

When this script runs, it takes data from FreeRADIUS and generates a gnuplot-based graph for the specified year in the www folder, for example:

/var/www/output.jpg   

Now I want to make a PHP drop-down menu with years like 2015, 2014 and so on; when the user selects a year, it should run the script with that year. But how can I pass the year to the shell script?

So far I have tried this, but it's not working:

root@rm:/var/www# cat test5.php
<html>
<head><title>some title</title></head>
<body>
  <form method="post" action="">
    <input type="text" name="something" value="<?= isset($_POST['something']) ? htmlspecialchars($_POST['something']) : '' ?>" />
    <input type="submit" name="submit" />
  </form>
<?php
if(isset($_POST['submit'])) {
  echo ($_POST['something']);
  // Now the script should be executed with the selected year
  $message=shell_exec("/var/www/test.sh $something");
  // and after executing the script, this page should also open the output.jpg in the browser
}
?>
</body>
<html>
root@rm:/var/www#

Problem running Ubuntu on crouton after updating chromeos

Posted: 04 Nov 2021 06:56 AM PDT

I've been running Ubuntu 14.04 using Crouton on a Toshiba Chromebook 2 for several months. Today I exited Crouton and restarted my chromebook. After restarting and issuing sudo startxfce4 in the shell I received the following error

chronos@localhost / $ sudo startxfce4
Entering /mnt/stateful_partition/crouton/chroots/trusty...
/usr/bin/startxfce4: Starting X server

X.Org X Server 1.15.1
Release Date: 2014-04-13
X Protocol Version 11, Revision 0
Build Operating System: Linux 3.2.0-76-generic x86_64 Ubuntu
Current Operating System: Linux localhost 3.10.18 #1 SMP Tue Apr 14 20:43:12 PDT 2015 x86_64
Kernel command line: cros_secure console= loglevel=7 init=/sbin/init cros_secure oops=panic panic=-1 root=/dev/dm-0 rootwait ro dm_verity.error_behavior=3 dm_verity.max_bios=-1 dm_verity.dev_wait=1 dm="1 vroot none ro 1,0 2506752 verity payload=PARTUUID=e4e36f0d-ca2b-5940-a7fe-a61287b5a2d8/PARTNROFF=1 hashtree=PARTUUID=e4e36f0d-ca2b-5940-a7fe-a61287b5a2d8/PARTNROFF=1 hashstart=2506752 alg=sha1 root_hexdigest=45e6c45d7f91005eb3265c86cdf50fb85b6449c4 salt=d14d293f1aa4206fae2fe4284ac3a5e3de528f53b75f6b378b55c5ce1c9ddfc5" noinitrd vt.global_cursor_default=0 kern_guid=e4e36f0d-ca2b-5940-a7fe-a61287b5a2d8 add_efi_memmap boot=local noresume noswap i915.modeset=1 tpm_tis.force=1 tpm_tis.interrupts=0 nmi_watchdog=panic,lapic

Build Date: 12 February 2015  02:49:29PM
xorg-server 2:1.15.1-0ubuntu2.7 (For technical support please see http://www.ubuntu.com/support)
Current version of pixman: 0.30.2
        Before reporting problems, check http://wiki.x.org
        to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
        (++) from command line, (!!) notice, (II) informational,
        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.1.log", Time: Wed Apr 29 13:43:33 2015
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
(EE)
(EE) Backtrace:
(EE) 0: /usr/bin/X (xorg_backtrace+0x48) [0x7f3c407bd848]
(EE) 1: /usr/bin/X (0x7f3c40614000+0x1ad539) [0x7f3c407c1539]
(EE) 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f3c3f710000+0x10340) [0x7f3c3f720340]
(EE) 3: /usr/bin/X (0x7f3c40614000+0xb57a6) [0x7f3c406c97a6]
(EE) 4: /usr/bin/X (xf86BusProbe+0x9) [0x7f3c4069d099]
(EE) 5: /usr/bin/X (InitOutput+0x74d) [0x7f3c406ab6fd]
(EE) 6: /usr/bin/X (0x7f3c40614000+0x59bab) [0x7f3c4066dbab]
(EE) 7: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0x7f3c3e150ec5]
(EE) 8: /usr/bin/X (0x7f3c40614000+0x451ee) [0x7f3c406591ee]
(EE)
(EE) Segmentation fault at address 0x0
(EE)
Fatal server error:
(EE) Caught signal 11 (Segmentation fault). Server aborting
(EE)
(EE)
Please consult the The X.Org Foundation support
         at http://wiki.x.org
 for help.
(EE) Please also check the log file at "/var/log/Xorg.1.log" for additional information.
(EE)
(EE) Server terminated with error (1). Closing log file.
/usr/bin/xinit: giving up
/usr/bin/xinit: unable to connect to X server: Connection refused
/usr/bin/xinit: server error
Not unmounting /mnt/stateful_partition/crouton/chroots/trusty as another instance is using it.

Does anyone know what has gone wrong?

Can I speed up pasting into vim?

Posted: 04 Nov 2021 08:40 AM PDT

I copied part of the HTML out of a web page and wanted to save it in a file. For that I started a new vim session in a terminal window, with a (new) filename specified on the command line, hit i to get into insert mode, pressed Ctrl+Shift+V, and waited while [-- INSERT --] showed at the bottom, and waited...

As vim was still unresponsive after several seconds, I opened 'Text Editor' from the Applications→Accessories menu, pasted the text (which showed up within a fraction of a second), saved it under a new name, closed it, and killed the vim session, which still was not done 1.5 minutes later. The amount of text was 186K in 3200 lines; not excessive I would say, nor with overly long lines.

Is there a way to speed up this kind of insertion in vim, and/or is there an explanation for why it is so slow compared to using the otherwise horrible and mouse-oriented Text Editor?

(The %CPU according to top doesn't rise above 5%, although I have some processors free in the system, so it might be an I/O-bound problem that doesn't exist when reading the same text from a file.)

Version info:
Ubuntu 12.04
Vim: 7.3, with patches as supplied by Ubuntu 12.04
bash: 4.2.25
gnome-terminal: 3.4.1.1
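Two things that commonly make large pastes crawl in a terminal vim are autoindent and syntax highlighting re-running on every inserted character; starting vim in paste mode, or without a vimrc at all, sidesteps both (worth trying, though not guaranteed to be the cause here):

vim -c 'set paste' newfile.html     # paste, then :set nopaste before normal editing
vim -u NONE newfile.html            # or: skip the vimrc entirely for a quick dump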

Show sum of file sizes in directory listing

Posted: 04 Nov 2021 08:28 AM PDT

The Windows dir directory listing command has a line at the end showing the total amount of space taken up by the files listed. For example, dir *.exe shows all the .exe files in the current directory, their sizes, and the sum total of their sizes. I'd love to have similar functionality with my dir alias in bash, but I'm not sure exactly how to go about it.

Currently, I have alias dir='ls -FaGl' in my .bash_profile, showing

drwxr-x---+  24 mattdmo  4096 Mar 14 16:35 ./
drwxr-x--x. 256 root    12288 Apr  8 21:29 ../
-rw-------    1 mattdmo 13795 Apr  4 17:52 .bash_history
-rw-r--r--    1 mattdmo    18 May 10  2012 .bash_logout
-rw-r--r--    1 mattdmo   395 Dec  9 17:33 .bash_profile
-rw-r--r--    1 mattdmo   176 May 10  2012 .bash_profile~
-rw-r--r--    1 mattdmo   411 Dec  9 17:33 .bashrc
-rw-r--r--    1 mattdmo   124 May 10  2012 .bashrc~
drwx------    2 mattdmo  4096 Mar 24 20:03 bin/
drwxrwxr-x    2 mattdmo  4096 Mar 11 16:29 download/

for example. Taking the answers from this question:

dir | awk '{ total += $4 }; END { print total }'  

which gives me the total, but doesn't print the directory listing itself. Is there a way to alter this into a one-liner or shell script so I can pass any ls arguments I want to dir and get a full listing plus sum total? For example, I'd like to run dir -R *.jpg *.tif to get the listing and total size of those file types in all subdirectories. Ideally, it would be great if I could get the size of each subdirectory, but this isn't essential.
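One way to keep the full listing and append a byte total, written here as a shell function rather than an alias so it accepts arbitrary ls arguments (with ls -FaGl the size is column 4; the function name and output wording are just placeholders):

dir() {
    ls -FaGl "$@" | awk '{ print } $4 ~ /^[0-9]+$/ { total += $4 } END { printf "total bytes: %d\n", total }'
}
# e.g.  dir -R *.jpg *.tif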
