Wednesday, May 18, 2022

Recent Questions - Unix & Linux Stack Exchange



DaVinci doesn't show videos on the timeline and doesn't play them

Posted: 18 May 2022 11:06 AM PDT

I'm a new Linux user, running Mint. I wanted to edit my video using DaVinci Resolve. After following a few tutorials it's finally on my computer, but now I can't play the video inside it, and it doesn't even show frames of that video. I used ffmpeg to convert my files to mp4 (didn't work) and then to mov; still not working. Please help, I don't want to use Kdenlive because it sucks.
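
The free Linux build of DaVinci Resolve generally cannot decode H.264/AAC media, so a common workaround is to transcode to an intermediate format it accepts. A minimal ffmpeg sketch, assuming an input file named input.mp4 (the filenames are placeholders):

ffmpeg -i input.mp4 \
    -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p \
    -c:a pcm_s16le \
    output.mov

The resulting DNxHR/PCM .mov file is large, but the free Linux version of Resolve can import it without the missing codecs.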

socat listen on fd for systemd socket activation

Posted: 18 May 2022 10:40 AM PDT

When using systemd socket activation, systemd listens on the socket and passes the fd to the service. Is it possible to have socat listen on the fd and connect to somewhere else? I am trying to move a service that does not support socket activation to a private network and let socat do the bridging.
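
For comparison, systemd ships a small proxy built for exactly this kind of bridging; a minimal sketch, with unit names, the listen port, and the backend address all made up for illustration (the proxy binary may live under /lib/systemd/ or /usr/lib/systemd/ depending on the distribution):

# bridge.socket
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# bridge.service
[Unit]
Requires=bridge.socket
After=bridge.socket

[Service]
# forwards each accepted connection to the real, non-activatable service
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 10.0.0.5:8080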

tmux conf: run bash command and store in variable

Posted: 18 May 2022 10:31 AM PDT

I would like to store the output of date +"%m %d %y" in a variable in my tmux.conf file and use it as the value of set-option -g set-titles-string. How can I store the output of a shell command in tmux.conf and dereference it later?
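
As a sketch of one possibility (not necessarily what you are after): tmux expands formats in set-titles-string, and a format can run a shell command inline with #(...):

set -g set-titles on
# the #() format runs the shell command when the title is refreshed
set -g set-titles-string '#(date +"%m %d %y")'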

systemd doesn't read environment variables required for script

Posted: 18 May 2022 10:25 AM PDT

I want to use borg backup and systemd timers to do my backups. I am using the script from borg backup website very slightly adjusted. You can see the original HERE and my script below:

#!/bin/sh

rm ~/db/dump_*
docker exec -t db_container pg_dumpall -c -U postgres > ~/db/dump_`date +%Y-%m-%d"_"%H_%M_%S`.sql

# Setting this, so the repo does not need to be given on the commandline:
export BORG_REPO=callmebob@mydomain.com:/path/to/repo

# See the section "Passphrase notes" for more infos.
export BORG_PASSPHRASE=SecretPassphrase

# some helpers and error handling:
info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; }
trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM

info "Starting backup"

# Backup the most important directories into an archive named after
# the machine this script is currently running on:

borg create                            \
    --remote-path=/usr/local/bin/borg  \
    --verbose                          \
    --filter AME                       \
    --list                             \
    --stats                            \
    --show-rc                          \
    --compression lz4                  \
    --exclude-caches                   \
                                       \
    ::'{hostname}-{now}'               \
    ~/.ssh                             \
    ~/db/dump_*                        \

backup_exit=$?

info "Pruning repository"

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:

borg prune                             \
    --remote-path=/usr/local/bin/borg  \
    --list                             \
    --prefix '{hostname}-'             \
    --show-rc                          \
    --keep-daily    7                  \
    --keep-weekly   4                  \
    --keep-monthly  6                  \

prune_exit=$?

# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

if [ ${global_exit} -eq 0 ]; then
    info "Backup and Prune finished successfully"
elif [ ${global_exit} -eq 1 ]; then
    info "Backup and/or Prune finished with warnings"
else
    info "Backup and/or Prune finished with errors"
fi

exit ${global_exit}

ATM I am just trying to set up a service, no timer.

I created n.service (working name) in /etc/systemd/system, content below:

[Unit]
Description=Example Service

[Service]
Type=simple
User=callmebob
Group=callmebob
Environment="BORG_REPO=callmebob@mydomain.com:/path/to/repo"
Environment="BORG_PASSPHRASE=SecretPassphrase"
ExecStart=/bin/sh /home/callmebob/backup.sh

[Install]
WantedBy=multi-user.target

Now from console:

sudo systemctl daemon-reload
sudo systemctl start n
sudo systemctl status n

Service seems to be running OK, but it can never read the environment variables:

May 18 05:45:31 mydomain systemd[1]: Started Example Service.
May 18 05:45:31 mydomain sh[213874]: Wed 18 May 05:45:31 UTC 2022 Starting backup
May 18 05:45:38 mydomain sh[213903]: passphrase supplied in BORG_PASSPHRASE, by BORG_PASSCOMMAND or via BORG_PASSPHRASE_FD is incorrect.
May 18 05:45:38 mydomain sh[213903]: terminating with error status, rc 2

I tried EnvironmentFile= with a file of key/value pairs (no quotes, etc.), but that also didn't work. Using PassEnvironment= also did nothing. I tried different service types as well, with no luck.

Any idea what I am missing here?
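
A minimal debugging sketch (the log path and grep pattern are just placeholders): dump the environment the unit actually sees right before the script runs, e.g. with a temporary extra line in the [Service] section:

[Service]
# hypothetical debugging line: record which BORG_* variables reach the unit
ExecStartPre=/bin/sh -c 'env | grep ^BORG_ > /tmp/borg-env.log'

Comparing that file with what the script expects shows whether the variables are missing or merely different from the ones the repository was initialized with.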

How do I fix this error message? "Oh no! Something has gone wrong"

Posted: 18 May 2022 10:20 AM PDT

I installed some packages on my system, and after rebooting I can no longer use GNOME. Each time I log in it displays "Oh no! Something has gone wrong" and gives me the option to log out.

My system: Toshiba Satellite, Nvidia 525M graphics, Kali Linux 2022.1.

Why does running sudo systemctl stop ssh over ssh not immediately terminate the ssh session

Posted: 18 May 2022 10:57 AM PDT

This question stems from curiosity, but let's say I currently have an SSH session with a remote computer, and I run the following command:

sudo systemctl stop ssh

and after running:

systemctl status ssh

to verify the status of the service, it shows as inactive (dead). Why am I still able to execute commands remotely?

Again, this is not a pressing problem; I am just curious.

Thank you for your time.

Virtualbox Linux Adding SocketCAN Interface

Posted: 18 May 2022 10:12 AM PDT

Does anyone have experience using SocketCAN within Virtualbox? I'm using Xubuntu. I started by modprobing can, can_raw, can_dev and the relevant driver for the particular can module I'm using. In addition, I directed Virtualbox to pass through the CAN-USB device that I wanted to interface with.

When I ran the command sudo ip link set can0 type can bitrate [bitrate], I got: Cannot find device 'can0'. I checked /dev, and I didn't find anything CAN related.

I ran sudo dmesg | grep 'usb', and it looks like it is registered as a usb device, but not a CAN device.

Does anyone have any experience with SocketCAN? Is there some way to direct Linux to use a certain USB as a CAN socket?
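
A couple of quick checks, as a sketch (the driver names are just common examples, not necessarily yours): SocketCAN interfaces are network devices rather than /dev nodes, so they appear under ip link, and dmesg normally names the driver that claimed the USB device:

# list any CAN network interfaces the kernel has registered
ip -details link show type can

# confirm the CAN modules are loaded and see which driver bound the adapter
lsmod | grep -E 'can|gs_usb|peak_usb|slcan'
dmesg | grep -iE 'can|usb'

If no driver claims the device, VirtualBox is passing through the raw USB device but the guest has no matching CAN driver bound to it.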

How can I log the UID in UFW log file permanently?

Posted: 18 May 2022 10:03 AM PDT

I have a Debian 10 system with UFW installed. I want to know the user or UID of the processes behind the connections that UFW logs. To log the UID I added --log-uid to the log rules in /etc/ufw/user.rules:

### LOGGING ###
-A ufw-after-logging-input -j LOG --log-uid --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10
-A ufw-after-logging-output -j LOG --log-uid --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10
-A ufw-after-logging-forward -j LOG --log-uid --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10
-A ufw-logging-deny -m conntrack --ctstate INVALID -j LOG --log-uid --log-prefix "[UFW AUDIT INVALID] " -m limit --limit 3/min --limit-burst 10
-A ufw-logging-deny -j LOG --log-uid --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10
-A ufw-logging-allow -j LOG --log-uid --log-prefix "[UFW ALLOW] " -m limit --limit 3/min --limit-burst 10
-I ufw-before-logging-input -j LOG --log-uid --log-prefix "[UFW AUDIT] " -m conntrack --ctstate NEW -m limit --limit 3/min --limit-burst 10
-I ufw-before-logging-output -j LOG --log-uid --log-prefix "[UFW AUDIT] " -m conntrack --ctstate NEW -m limit --limit 3/min --limit-burst 10
-I ufw-before-logging-forward -j LOG --log-uid --log-prefix "[UFW AUDIT] " -m conntrack --ctstate NEW -m limit --limit 3/min --limit-burst 10
### END LOGGING ###

If I restart UFW (systemctl restart ufw) and look into /etc/ufw/user.rules, my added --log-uid options are still there. But after a while the option is removed automatically and the UFW log output no longer contains the UID. So my question is: what is the correct way to add --log-uid permanently?
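
One possible angle, offered as an assumption rather than a verified fix: ufw regenerates user.rules itself, so hand edits there tend to be overwritten, whereas /etc/ufw/before.rules and /etc/ufw/after.rules are meant for manually maintained rules and survive reloads. A sketch of carrying one of the rules above over (chain name copied from the question):

# e.g. in /etc/ufw/after.rules, before the final COMMIT line
-A ufw-after-logging-input -j LOG --log-uid --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10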

Where does each of my local facilities log to in Unix?

Posted: 18 May 2022 09:44 AM PDT

I was using the local0 facility to log information from HAProxy. What I don't understand is which file each of my facilities (local0, local1, local2, local3, local4, local5, local6, local7) logs to.
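
For context: the destination is whatever the syslog daemon maps each facility to; by default most local* messages land in the catch-all log (/var/log/syslog or /var/log/messages, depending on the distribution). A hedged rsyslog sketch that routes local0 to its own file (the filename is just an example):

# /etc/rsyslog.d/49-haproxy.conf
local0.*    /var/log/haproxy.log
# stop further processing so the messages do not also land in the catch-all log
& stop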

How to find (and move) all duplicate file names

Posted: 18 May 2022 09:23 AM PDT

Due to moving files back and forth between my Linux PC and a Mac, I now have a few documents that are duplicated. Their names look identical, but apparently they are encoded slightly differently, as in this question.

For instance, ls in a certain directory reports, among other things

'Voisin - Géométrie algébrique et espaces de modules.pdf'
'Voisin - Géométrie algébrique et espaces de modules.pdf'

These really look the same, but using the command ls | LC_ALL=C sed -n l as suggested in the above question, I get

Voisin - Ge\314\201ome\314\201trie alge\314\201brique et espaces de m\
odules.pdf$
Voisin - G\303\251om\303\251trie alg\303\251brique et espaces de modu\
les.pdf$

Now, I have a directory tree full of such "duplicates". Is there a way to

  • find them all
  • for each duplicate pair, move one of them to an external directory? (I don't want to delete them right now, just in case I mess something up)

I think that the content is also identical, so the diff should be empty, but I am not sure, since I don't know a way to be certain that I am running diff on the two copies when the paths look identical.
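
The two names differ only in Unicode normalization: the Mac-created copies use decomposed NFD, the Linux ones precomposed NFC. A hedged bash sketch that walks the tree, normalizes each name to NFC via python3, and parks a file in a holding directory when its NFC-named twin also exists (the destination directory is a placeholder, and name collisions inside it are not handled):

DUPES=~/nfd-duplicates   # hypothetical holding directory
mkdir -p "$DUPES"
find . -type f -print0 | while IFS= read -r -d '' f; do
    nfc=$(printf '%s' "$f" | python3 -c 'import sys,unicodedata; sys.stdout.write(unicodedata.normalize("NFC", sys.stdin.read()))')
    # if the name changes under NFC and the NFC-named twin exists, park the NFD copy
    if [ "$f" != "$nfc" ] && [ -e "$nfc" ]; then
        mv -n -- "$f" "$DUPES/"
    fi
done

Running diff -- "$f" "$nfc" inside the loop before the mv would also confirm the contents really match.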

ntpq peers output explanation

Posted: 18 May 2022 09:11 AM PDT

I got the following output from the ntpq command:

# ntpq -pn
     remote           refid      st t  when poll reach   delay   offset  jitter
================================================================================
*192.168.1.1      10.10.4.1       2 u   68y 1024  170    0.198    0.584   0.606

What does the 68y mean in the when column? The documentation says it is the time since the last received packet. Does that mean the last packet was received 68 years ago? Can I believe that?

# ntpq --version
ntpq 4.2.8p11

Help with xmodmap example from archwiki

Posted: 18 May 2022 08:17 AM PDT

This article: https://wiki.archlinux.org/title/xmodmap#Reassigning_modifiers_to_keys_on_your_keyboard

Has an example:

clear lock
clear control
add control = Caps_Lock Control_L Control_R
keycode 66 = Control_L Caps_Lock NoSymbol NoSymbol

That maps the physical Caps Lock key to the control function, and physical Shift + Caps Lock to the caps-lock function. My problem is that I don't know how this example works.

My intuitive (but incorrect) understanding of how these lines should work is:

You clear the lock/control modifiers. I naively think that lock means caps lock, but this is probably incorrect.

Then add control = Caps_Lock Control_L Control_R means 'the keysyms Caps_Lock, Control_L and Control_R will trigger the control modifier'.

Then the last line, keycode 66 = Control_L Caps_Lock NoSymbol NoSymbol, should mean (in my head, but not in reality) mapping the physical key with keycode 66 (the physical caps lock key) so that on its own it gives the keysym Control_L (triggering the control modifier per the previous line), while Shift + keycode 66 should (and here's where I'm confused) trigger the Caps_Lock keysym, and therefore also the control modifier.

I've experimented with various modifications to this with further unexpected results.

I find the documentation for xmodmap (both in the manpage and archwiki) to be terse and unhelpful.

So I am wondering if someone could explain what's going on with this code, and also the functions of the control and lock modifiers, because I've realized that the lock modifier is not just caps lock.

I know that XKB is the recommended config tool now, and that doing complicated things with xmodmap is discouraged so maybe this is just one step too far.
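
For experimenting, xmodmap can print the current state, which makes it easier to see what each clear/add/keycode line actually changed (purely an inspection sketch):

# show which keysyms are currently attached to each modifier
xmodmap -pm

# show the keycode-to-keysym table, e.g. to check what keycode 66 maps to
xmodmap -pke | grep 'keycode  66'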

Add new lines based on the columns of a tab delimited file

Posted: 18 May 2022 09:16 AM PDT

I have a tab-separated file like this:

211845  032
215979  002   071
217783  143   156   169
219750  111

For the lines that have multiple tab separated entries, I want to add new lines based on the value of column one. Here is my desired result:

211845  032
215979  002
215979  071
217783  143
217783  156
217783  169
219750  111

Appreciate any ideas, this one has me stumped.
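
A hedged awk sketch, assuming the separators really are tabs and the file is called input.tsv (the filename is a placeholder):

# for every line, pair the first field with each remaining field on its own line
awk -F'\t' -v OFS='\t' '{ for (i = 2; i <= NF; i++) print $1, $i }' input.tsv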

Define bash function and chain `script` with it in a shell script

Posted: 18 May 2022 09:35 AM PDT

I am having issues defining a bash function inside a bash script and using it in the same script when I try chaining it after the script command.

A minimal working example is this. I have a file called my_script.sh, containing:

#!/bin/bash

my_function () {
  echo "My output"
}

my_function

script my_log.log -c my_function

Which when run returns

My output
Script started, output log file is 'my_log.log'.
bash: line 1: my_function: command not found
Script done.

I do not understand why my_function is recognized alone, but not when chained after script.

Can someone explain, and perhaps offer a solution?
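
One hedged explanation and sketch: script -c hands the command string to a fresh shell (bash here, judging by the error message), and a child shell only inherits functions that have been exported. Marking the function with export -f may therefore be enough:

#!/bin/bash

my_function () {
  echo "My output"
}

my_function

# make the function visible to the shell that `script -c` spawns
export -f my_function

script my_log.log -c my_function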

-bash: syntax error near unexpected token `(' when using lookahead and lookbehind?

Posted: 18 May 2022 11:05 AM PDT

The use case is rather simple. I have a text file, say the following named eg.txt:

'simple_example': 345, 'to_demonstrate': 232,
'regex': 'is not easy to use'

I am trying to capture the keys:

grep -oP (?<=')[a-zA-Z_0-9]+(?=':) eg.txt  

It gives me error:

-bash: syntax error near unexpected token `('

Escaping the single quote does not help either:

grep -oP (?<=\')[a-zA-Z_0-9]+(?=\':) eg.txt  

Nor does using extended grep help:

grep -oE (?<=')[a-zA-Z_0-9]+(?=':) eg.txt  

What is happening here? I am using Linux bash under Windows 10 WSL.
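
For what it's worth, the error is produced by bash, which parses the unquoted parentheses before grep ever sees the pattern; quoting the whole pattern sidesteps that, and the lookarounds themselves require -P (PCRE), not -E. A sketch:

grep -oP "(?<=')[a-zA-Z_0-9]+(?=':)" eg.txt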

How to automate the process to open 3 shells and run 3 commands?

Posted: 18 May 2022 08:45 AM PDT

I set up something so that I'd open up 3 shells and run the programs one by one:

cd foo/bar
./foo.sh

cd foo/bar
node bar.js

cd foo/bar
ruby foobar.rb

However, since that is time consuming, I wrote a shell script to do it:

cd foo/bar

./foo.sh &
node bar.js &

ruby foobar.rb

However, when I press CTRL-C to stop this script, the other 2 programs are still running. If I type

jobs  

it won't show the background processes (they probably belong to the script). If it did, I could have used fg to bring them to the foreground and pressed CTRL-C on them one by one. Instead I have to run

ps ax | grep foo
ps ax | grep bar

to find the process ids and then kill the processes.

Is there a better way to automate opening 3 shells and running the 3 programs, so that it behaves like starting them by hand in 3 separate shells? I don't care about the STDOUT output, so if one shell window is divided into 3 parts, that's fine.

Can tmux, emacs, or any other tool achieve this goal? (I am not familiar enough with tmux or emacs to know whether it is possible.)
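
Since tmux was mentioned, here is a hedged sketch that opens one session split into three panes, one program per pane (session name and paths are modelled on the question):

tmux new-session -d -s work -c foo/bar './foo.sh'
tmux split-window -t work -c foo/bar 'node bar.js'
tmux split-window -t work -c foo/bar 'ruby foobar.rb'
tmux select-layout -t work even-vertical
tmux attach -t work

Each pane can be interrupted individually with CTRL-C, and tmux kill-session -t work stops all three at once.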

Setting time zone in a kindle bash file

Posted: 18 May 2022 09:20 AM PDT

I've recently jailbroken my Kindle to make it a clock that shows a certain image for each minute of the day. I used the instructions in this article:

https://www.instructables.com/Literary-Clock-Made-From-E-reader/

The thing is, to do this there is a script that makes it work, and in that script this code gets the time zone:

#!/bin/bash
test -f /mnt/us/timelit/clockisticking || exit
MinuteOTheDay="$(env TZ=CEST date -R +"%H%M")";

The thing is, whenever I put in my time zone, which is GMT+4, it never shows the correct time. Even if the time is set correctly on the Kindle, the script just keeps using its own time.

I tried

MinuteOTheDay="$(env TZ=GMT+4 date -R +"%H%M")";  

and

MinuteOTheDay="$(env TZ=Asia/Muscat date -R +"%H%M")";  

and

MinuteOTheDay="$(env TZ=GST date -R +"%H%M")";  

and they didn't give the correct time. Is there a way around this? Am I missing something? Is there a way to make the script use the Kindle's own time?
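
One general POSIX detail worth checking (not Kindle-specific, and whether the Kindle's libc honours it is an assumption): in a bare TZ=NAME±N string the sign is inverted relative to the usual "GMT+4" notation, because the number is what must be added to local time to reach UTC. A zone four hours ahead of UTC is therefore written with a minus:

# hedged sketch: "GMT-4" in POSIX TZ syntax means UTC+4
MinuteOTheDay="$(env TZ='GMT-4' date -R +"%H%M")";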

Double tunnel hop via SSH

Posted: 18 May 2022 09:17 AM PDT

I'm using WinSSHTerm to connect to a proxy, from which I then connect to a server hosting a data warehouse. I just can't figure out how to reproduce my PuTTY connection using a shell command.

Short recap:

I first connect to the proxy server, which maps port 5432 to my local port 10001. After that, I connect to the database server and map its port 5432 to the proxy's port 5432, which I previously mapped to my local port 10001. I am then able to connect to the database via a database manager locally.

To do so:

I created the following connection to my proxy server first.

(screenshot)

I then added a tunnel from there to my localhost port 10001.

(screenshot)

Once I'm logged in to the proxy server, I use the following command to connect to the database server and map its 5432 port to the proxy's 5432 port.

ssh username@databaseServer -L 127.0.0.1:5432:databaseServer:5432  

I'd like to leave PuTTY and move to WinSSHTerm, predefining some login commands for a specific server.

How may I reproduce the behavior above using a shell command?

Here's my initial try, which is unfortunately not working:

ssh username@databaseServer -L 127.0.0.1:5432:databaseServer:5432  

(screenshot)

Thank you
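
A hedged sketch of doing both hops in one local command with OpenSSH's ProxyJump option (host names are the placeholders from the question; -J needs OpenSSH 7.3 or newer on the client):

# jump through the proxy, then forward local port 10001 to Postgres on the database server
ssh -J username@proxyServer -L 10001:127.0.0.1:5432 username@databaseServer

A database manager can then point at localhost:10001 as before.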

How to Activate a Mount of a Remote Share When Its Machine Connects?

Posted: 18 May 2022 10:12 AM PDT

How can one activate a mount of a remote SMB share when the remote machine connects?

This is more about discerning a local event triggered by the connection of a particular remote machine, than it is about the action taken on that event. What can be determined is the port and protocol, of course, probably the source IP, and perhaps its MAC.

To illustrate, imagine two Windows laptops named Blue and Green, each with a share named Data, that occasionally connect to a Linux Samba server named Martini. The objective is for Martini to mount \\Blue\Data to /srv/blue (or wherever), and do other things, when Blue connects, and to mount \\Green\Data to /srv/green (or wherever), and do other things, when Green connects.

Perhaps I'm too deep in the weeds but this seems harder than it looks.

It's straightforward to mount a remote share when localhost connects to it, e.g., when Martini boots, does its thing, finds Blue and Green running, and mounts their shares.

I even have figured out how to activate a host mount of a share on a virtual machine when it fires up (create a systemd.path unit that monitors the VM's log file, then x-systemd.requires=foo.path in fstab).

For a fully remote machine, however, I'm drawing a blank. There is a roundabout / Rube Goldberg way via the iptables LOG target and rsyslog (directly or via a systemd.path unit) but that has too many moving pieces and seems like a kludge. The hope is that something more direct exists.

Socket activation can mind a port but (and I easily could be wrong) isn't obviously capable of discerning the connecting machine. Udev activation seems focused only on localhost's hardware. I haven't figured out a client-wise /dev, /proc, or other path to inspect, although I easily could have missed something. Perhaps there is something in /etc/samba/smb.conf.

Pending further tail-chasing, I thought I'd post to see what ideas the community might have. Any input would be most appreciated.

TAB completion not working for mounted partition

Posted: 18 May 2022 09:27 AM PDT

I usually mount a Windows partition (G:) into my Windows Subsystem for Linux (running Ubuntu 20.04) with the following commands:

sudo mkdir /mnt/g
sudo mount -t drvfs G: /mnt/g

Unfortunately, when typing in a shell, TAB completion of directories or files on that partition does not work (even though it works fine everywhere else).

As stated in the comments below:

  • the G partition seems to be mounted correctly, since ls /mnt/g shows all the directories and files within it
  • it makes no difference launching WSL with wsl ~ -e bash --noprofile --norc

How could I solve this issue? Thanks in advance!

While installing Debian KDE on an i386 computer, it locks me out unexpectedly

Posted: 18 May 2022 10:04 AM PDT

I have a Toshiba Satellite M105 that is currently installing Debian 11 KDE; after around 5 minutes it locks me out unexpectedly, and the password to get back in is unknown. Help?

I have not touched any settings whatsoever since the software is currently installing.

What changed from Linux Kernel 5.9 to 5.10?

Posted: 18 May 2022 09:16 AM PDT

I use a Measurement Computing DAQ in Ubuntu to perform continuous analog reads and writes from another system connected to the board. I have been using Ubuntu 16.04 (which went up to Linux kernel 4.15) for about five years now. I was recently exploring upgrading the system to Ubuntu 20.04 - 22.04 and each of those operating systems ships with Linux kernel 5.10 - 5.15. I am noticing that I am getting what appears to be periodic interrupts that are quite drastic (about 50 milliseconds) for every kernel 5.10 or higher. So something appears to have changed from the 5.9 kernel to the 5.10 kernel that is affecting system read() and write() calls with the A/D board. The differences can be seen in my data acquisition software:

(screenshot)

And also in an average loop time program I have (that loops through successive read and write calls - along with some math in between):

(screenshot)

Note how the maximum times I am seeing go from about 43 microseconds for Linux kernel 5.9 and below to 50 milliseconds for Linux kernel 5.10 and above. I obviously would like to fix this problem, but I am not sure what was changed that could have caused it. Does anyone have any idea what the culprit is, and if it could be fixed by perhaps changing a kernel parameter in the GRUB bootloader? Any pointers at all would certainly be appreciated. Thanks!

EDIT:

I have implemented a minimal example where we continuously call write system commands to update the DAC Outputs.

At minimum, the DAC Write command is calling "get_user" to obtain data from user-space to kernel, and calling "outw" to write the data into the DAC Register.

Now when we are executing the minimal example, we're doing back-to-back write system commands and we're noticing this 50 millisecond delay.

However, when we add a 1 microsecond delay between the write system commands, then the 50 millisecond delay vanishes. Is this possibly an issue with trying to access the user-space information or writing from the kernel to the device too quickly?

Is there a way to analyze what the kernel is doing between accessing user-space and writing data from the kernel to device?
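
One way to look at that, as a sketch (nothing driver-specific, and the PID is a placeholder): ftrace's function_graph tracer shows what the kernel does inside each write() call, and trace-cmd wraps it conveniently:

# record kernel activity for the test program for a few seconds (run as root)
trace-cmd record -p function_graph -P <pid-of-test-program> sleep 5

# then inspect where the time goes inside the write() path
trace-cmd report | less

Comparing a trace from a 5.9 kernel with one from 5.10+ around a slow write() should narrow down where the 50 milliseconds are spent.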

0bda:c811 Realtek not recognized

Posted: 18 May 2022 11:07 AM PDT

I have a popular Realtek-chipset USB Wi-Fi adapter. Debian 10 and Kali Linux both do not recognize the chipset. Similar problems are reported for Mint, Ubuntu, and CentOS.

@ilak:$ lsusb

Bus 001 Device 003: ID 0bda:c811 Realtek Semiconductor Corp. 802.11ac NIC

The adapter does not appear in the output of ip a or ifconfig.

Based on forum post: ubuntu and mint distribution fixes for 0bda:c811 Realtek not recognized as wireless adapter

@ilak:$ sudo apt-get update

@ilak:$ sudo apt install build-essential git dkms

@ilak:$ git clone http://github.com/brektrou/rtl8821CU.git

@ilak:$ cd rtl8821CU
@ilak/rtl8821CU:$ sudo chmod +x dkms-install.sh
@ilak/rtl8821CU:$ sudo ./dkms-install.sh

'make' KVER=5.10.0-kali7-amd64....(bad exit status: 2)
Error! Bad return status for module build on kernel: 5.10.0-kali7-amd64 (x86_64)
Consult /var/lib/dkms/rtl8821CU/5.4.1/build/make.log for more information.
error log: /var/lib/dkms/rtl8821CU/5.4.1/build/make.log

  • the error log created by install/make: make.log

Is this an issue like: "The RTL8822BE firmware has been added to the backported firmware-realtek package (it is not available in the main package)"?
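
A common cause of DKMS build failures like this is missing kernel headers for the running kernel, so one hedged check before digging further into make.log:

# the headers must match the running kernel for the DKMS build to succeed
sudo apt install linux-headers-$(uname -r)
dkms status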

How to run docker inside an lxc container?

Posted: 18 May 2022 11:05 AM PDT

I have an unprivileged LXC container on an Arch host, created like this:

lxc-create -n test_arch11 -t download -- --dist archlinux --release current --arch amd64

And it doesn't run Docker. What I did inside the container:

  1. Installed docker from Arch repos pacman -S docker
  2. Tried to run a hello-world container docker run hello-world
  3. Got the next error:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/cpuset/docker: permission denied\"": unknown.

ERRO[0037] error waiting for container: context canceled

What is wrong, and how can I make Docker work inside the container?
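
For context, a heavily hedged sketch (these keys weaken isolation, may not be enough for an unprivileged container, and the config path is only the usual per-container location, not verified for this setup): the error points at cgroup permissions, and people running container runtimes inside LXC commonly relax settings such as:

# e.g. ~/.local/share/lxc/test_arch11/config (unprivileged) or /var/lib/lxc/<name>/config
lxc.apparmor.profile = unconfined
lxc.cgroup.devices.allow = a
lxc.cap.drop =
lxc.mount.auto = proc:rw sys:rw cgroup:rw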

Deal with broken pipe problem with ssh local port forwarding

Posted: 18 May 2022 08:02 AM PDT

My server is behind a firewall that exposes only port 22. I frequently use SSH local port forwarding to access several HTTP services running on the server. It works, but not always. Now and then I get an error message packet_write_wait: Connection to XXX.XXX.XXX.XXX port 22: Broken pipe and I have to restart the SSH connection for it to work again.

I have ServerAliveInterval set to 30 in my config file. In addition, I often open multiple independent ssh processes with different ports forwarded. When one is broken, the others still work, so I would think the network connection itself is fine.

If it is of any use, my client is on macOS High Sierra, and the server is running Ubuntu 16.04.

What could be the cause of the issue? What potential solutions could I have?
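
A hedged sketch of client-side settings that are often combined for long-lived forwarded sessions (the Host alias is a placeholder and the values are only examples); tools such as autossh can additionally restart a dropped tunnel automatically:

# ~/.ssh/config
Host myserver
    ServerAliveInterval 30
    ServerAliveCountMax 6
    TCPKeepAlive yes
    ExitOnForwardFailure yes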

Distinguish between error and "success" in scanimage-batch

Posted: 18 May 2022 09:51 AM PDT

I'm running a little script with the scanimage batch command on a remote server and would like to know whether and how the batch scan completed. The script therefore needs a proper "error" description to decide on the next steps.

Yet scanimage does return a pretty odd message:

scanimage: sane_start: Document feeder out of documents  

So if the scan succeeded, the whole output looks like this:

scanscript "scanimage --device='brother4:net1;dev0' --format tiff --resolution=150 --source 'Automatic Document Feeder(left aligned,Duplex)' -l 0mm -t 0mm -x210mm -y297mm --batch=$(date +%Y%m%d_%H%M%S)_p%04d.tiff" "/home/qohelet/scans/images/281/" "myscan"
scanimage: rounded value of br-x from 210 to 209.981
scanimage: rounded value of br-y from 297 to 296.973
Scanning -1 pages, incrementing by 1, numbering from 1
Scanning page 1
Scanned page 1. (scanner status = 5)
Scanning page 2
Scanned page 2. (scanner status = 5)
Scanning page 3
scanimage: sane_start: Document feeder out of documents

Technically this is correct, yes, but it always happens when the job is done. If I haven't put any paper into the feeder, it looks like this:

scanscript "scanimage --device='brother4:net1;dev0' --format tiff --resolution=150 --source 'Automatic Document Feeder(left aligned,Duplex)' -l 0mm -t 0mm -x210mm -y297mm --batch=$(date +%Y%m%d_%H%M%S)_p%04d.tiff" "/home/qohelet/scans/images/281/" "myscan"
scanimage: rounded value of br-x from 210 to 209.981
scanimage: rounded value of br-y from 297 to 296.973
Scanning -1 pages, incrementing by 1, numbering from 1
Scanning page 1
scanimage: sane_read: Error during device I/O
Scanned page 1. (scanner status = 9)

The error 9 is unfortunately just one part of the output. How can I distinguish whether it was thrown or not?

In my scanscript I use if to evaluate whether or not the scan was successful:

if eval $1; then
    #Do stuff
else
    #Do error stuff and exit with error code
fi

Unfortunately when using scanimage with a batch it's always counted as a failure. Is there a way to find out what actually happened?

It seems someone already had a similar issue with a different scanner (I have a Brother scanner, but that's not really related to the issue): http://sane.10972.n7.nabble.com/Issue-with-Fujitsu-ScanSnap-iX500-td18589.html

But the topic was not continued there, yet now I'm stuck here and would like to know what to do.
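
One hedged approach, built on the existing eval "$1" call: capture scanimage's combined output as well as its exit code, and treat the "out of documents" message as the normal end of a batch while everything else stays an error:

out=$(eval "$1" 2>&1); rc=$?
printf '%s\n' "$out"
if [ "$rc" -eq 0 ] || printf '%s\n' "$out" | grep -q 'Document feeder out of documents'; then
    # the feeder simply ran empty after the last page: treat as success
    : # do stuff
else
    # a real device/I-O error, e.g. "Error during device I/O"
    exit "$rc"
fi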

what(): locale::facet::_S_create_c_locale name not valid

Posted: 18 May 2022 10:02 AM PDT

I have a Kali Linux system where I cannot install any packages. locale is not working and I cannot install it; what can I do? I changed sources.list, but it's of no help. When I try sudo dpkg-reconfigure locales, it tells me:

Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
dpkg: error: configuration error: /etc/dpkg/dpkg.cfg.d/multiarch:1: unknown option 'foreign-architecture'
/usr/sbin/dpkg-reconfigure: locales is not installed
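
One observation, hedged: the dpkg error points at an obsolete foreign-architecture line in /etc/dpkg/dpkg.cfg.d/multiarch, which breaks dpkg itself and therefore every install. A sketch of clearing that first (the i386 architecture here is only a guess at what the file contained):

# remove the obsolete config and re-add the architecture the modern way
sudo rm /etc/dpkg/dpkg.cfg.d/multiarch
sudo dpkg --add-architecture i386
sudo apt-get update && sudo apt-get install locales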

Determine what program is in my MBR code

Posted: 18 May 2022 08:25 AM PDT

I've done a lot of partitioning / dual booting on my Macbook Pro. Right now I have Mac OS X installed along with Ubuntu 12.04, with Grub installed on the Ubuntu partition.

I am wondering - what is the code in my MBR (the first 446 bytes)? Because Macs use EFI and GUID partitioning, the MBR is only a protective/hybrid MBR (in my case, it is a hybrid MBR).

Q: How can I identify what program is in my MBR (based on its hexdump)? Is there some sort of a signature? I'm guessing it's grub but I did a hexdump of it and it didn't match the code I found in this article detailing the Grub MBR ("Stage 1") code.

EDIT: I am running rEFInd, an EFI boot manager. It is an EFI application, and thus resides on my EFI system partition. This program is what runs immediately after boot, but I do not think it places any code in the 446 bytes of the MBR.

EDIT2: I should add that I have had Windows installed for dual-boot as well.
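
A hedged sketch for taking a look (the device name is a placeholder; on OS X the whole disk is typically /dev/disk0, on Linux /dev/sda): file(1) can often name the boot loader from the boot-sector signature, and a hex dump exposes any embedded strings such as "GRUB":

# let file(1) guess what is in the boot sector
sudo file -s /dev/disk0

# dump the 446-byte boot code area and look for telltale strings
sudo dd if=/dev/disk0 bs=446 count=1 2>/dev/null | xxd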

Port fowarding and load balancer in ubuntu server 12.04

Posted: 18 May 2022 09:04 AM PDT

I am looking to create a load balancing server. Essentially here is what I want to do:

I have a public IP address, let's say 1.1.1.1, and a second public IP address, let's say 2.2.2.2. I have a website, www.f.com, pointing to 1.1.1.1 via an A record. I want that Ubuntu server to forward traffic like this:

  • Port 80 traffic is forwarded to 2.2.2.2 on port 60000 and port 60001.
  • Port 443 traffic is forwarded to 2.2.2.2 on port 60010 and port 60011.
  • Port 25 traffic is forwarded to 2.2.2.2 on port 60020 and port 60021.

The port forwarding is more important than being able to load balance.

I look forward to some responses. Servers 1.1.1.1 and 2.2.2.2 are both running Ubuntu 12.04 server edition.
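
A hedged sketch of the port-forwarding half with iptables DNAT, alternating new connections between the two backend ports with the statistic match (a dedicated balancer such as HAProxy is the more usual tool for the balancing part; IPs and ports are the ones from the question):

# forward port 80, alternating between 60000 and 60001
iptables -t nat -A PREROUTING -p tcp --dport 80 \
    -m statistic --mode nth --every 2 --packet 0 \
    -j DNAT --to-destination 2.2.2.2:60000
iptables -t nat -A PREROUTING -p tcp --dport 80 \
    -j DNAT --to-destination 2.2.2.2:60001

# repeat the same pair of rules for 443 -> 60010/60011 and 25 -> 60020/60021,
# then make sure replies route back through this box
iptables -t nat -A POSTROUTING -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward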

Convince grep to output all lines, not just those with matches

Posted: 18 May 2022 08:34 AM PDT

Say I have the following file:

$ cat test

test line 1
test line 2
line without the search word
another line without it
test line 3 with two test words
test line 4

By default, grep returns each line that contains the search term:

$ grep test test

test line 1
test line 2
test line 3 with two test words
test line 4

Passing the --color parameter to grep will make it highlight the portion of the line that matches the search expression, but it still only returns lines that contain the expression. Is there a way to get grep to output every line in the source file, but highlight the matches?

My current terrible hack to accomplish this (at least on files that don't have 10000+ consecutive lines with no matches) is:

$ grep -B 9999 -A 9999 test test  

(Screenshot of the two commands)

If grep can't accomplish this, is there another command-line tool that offers the same functionality? I've fiddled with ack, but it doesn't seem to have an option for it either.
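
For what it's worth, a common trick is to make the pattern also match the empty string at the end of every line: every line then matches (so nothing is filtered out), but only the real hits get colored. A sketch:

grep --color -E 'test|$' test

# or with basic-regex syntax
grep --color 'test\|$' test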
