Friday, September 17, 2021

Recent Questions - Unix & Linux Stack Exchange



How to test starttls+sasl config?

Posted: 17 Sep 2021 10:04 AM PDT

I have a requirement to provision SMTP authentication services running in the cloud. I have set up postfix and confirmed that it does not allow relaying without authentication (tested with telnet) and that it requires STARTTLS before authentication. My problem is how to test whether relaying works with authentication.

A DNS (A) record has been configured for the service.

Client-side I am using Mint 18.3

I first tried with Thunderbird - however, that required me to configure both the incoming and outgoing servers at the same time. I installed a POP3 server and ensured it was running, but authentication for the POP3 service failed. Unfortunately the create-email button is greyed out - I'm guessing it's because the POP3 isn't working, but I don't want to spend a lot of time fixing that just to find out if the SMTP side is working.

I then tried mutt. After finding the relevant configuration options in the man page, I created a ~/.muttrc with:

smtp-authenticators="plain"
smtp_pass="s3cr3t"
smtp_url="smtp://symcbean@myserver.com"
ssl_starttls="yes"

On starting up mutt, it complained that it didn't recognise any of these (but they are all described in the muttrc(5) man page that came with the package!).

Checking online I saw references to 'aerc' - but that is not available from the repos on my machine.

Can anyone suggest how to proceed?
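One way to check relay-with-auth without a full mail client is to speak SMTP by hand; a minimal sketch, assuming the server listens on the submission port 587 and advertises AUTH PLAIN (host and credentials taken from the question):

# build the SASL PLAIN token: NUL user NUL password, base64-encoded
printf '\0symcbean\0s3cr3t' | base64
# open a STARTTLS session and authenticate manually
openssl s_client -connect myserver.com:587 -starttls smtp -quiet
# then type:
#   AUTH PLAIN <token printed above>
#   MAIL FROM:<symcbean@myserver.com>
#   RCPT TO:<someone@example.org>    # an external address proves relaying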

Install Oracle on an AIX VM virtualized with QEMU

Posted: 17 Sep 2021 09:13 AM PDT

I installed a Linux VM in VirtualBox, and inside it I installed the QEMU package and virtualized an AIX 7.2 image. The installation went correctly, but when I try to install an Oracle 11g database on the AIX operating system, I get a segmentation fault error.

Start VM Qemu:

ppc64-softmmu/qemu-system-ppc64 -cpu POWER8 \
  -machine pseries -m 8192 \
  -drive file=hdisk0.qcow2,if=none,id=drive-virtio-disk0 \
  -device virtio-scsi-pci,id=scsi \
  -device scsi-hd,drive=drive-virtio-disk0 \
  -cdrom AIX72.iso \
  -net nic -net tap,script=no,ifname=tap0 \
  -prom-env "boot-command=boot disk:" \
  -prom-env "input-device=/vdevice/vty@71000000" \
  -prom-env "output-device=/vdevice/vty@71000000" \
  --daemonize

Command:

$ ./runInstaller
./runInstaller[238]: 6095358 Segmentation fault

Details:

$ ulimit -a
time(seconds)        unlimited
file(blocks)         2097151
data(kbytes)         131072
stack(kbytes)        32768
memory(kbytes)       32768
coredump(blocks)     2097151
nofiles(descriptors) 2000
threads(per process) unlimited
processes(per user)  unlimited

$ /usr/sbin/lsattr -E -l sys0 -a realmem
realmem 8388608 Amount of usable physical memory in Kbytes False
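The limits above are small for an Oracle installer (128 MB data, 32 MB stack and memory). A sketch of raising the soft limits for the session before retrying, assuming these limits are what trigger the segfault:

# raise per-process limits before re-running the installer
# (assumption: the low data/stack/memory ulimits shown above cause the crash;
#  on AIX the hard limits live in /etc/security/limits)
ulimit -d unlimited   # data segment
ulimit -s unlimited   # stack
ulimit -m unlimited   # resident memory
./runInstaller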

Links:

Guide QEMU

Guide Oracle

Question about Ubuntu 20.04 disk partition during installation

Posted: 17 Sep 2021 08:54 AM PDT

I am trying to install Ubuntu 20.04 on an NVMe disk.

The installation wizard shows the disk info below:

(screenshot of the installer's partition listing omitted)

I don't quite understand it. My questions are:

  1. Why are /dev/mapper/vgubuntu-root and /dev/mapper/vgubuntu-swap_1 each listed twice?

  2. The /dev/mapper entries are for LVM, which is the logical view. Why does /dev/nvme0n1, the physical view of the same disk, also need to be listed? (The lsblk sketch below shows how the two views nest.)

  3. Why can I do nothing when right-clicking /dev/nvme0n1p2, while I can change/delete /dev/nvme0n1p1?

  4. I see that the size of /dev/nvme0n1p2 minus the two free spaces equals /dev/mapper/vgubuntu-swap_1 plus /dev/mapper/vgubuntu-root. Is this some coincidence?
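For reference, lsblk shows how the physical partition and the LVM logical volumes nest; a sketch with illustrative sizes (actual output depends on the disk):

lsblk /dev/nvme0n1
# NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT    (illustrative)
# nvme0n1               259:0    0  477G  0 disk
# |-nvme0n1p1           259:1    0  512M  0 part /boot/efi
# `-nvme0n1p2           259:2    0  476G  0 part
#   |-vgubuntu-root     253:0    0  475G  0 lvm  /
#   `-vgubuntu-swap_1   253:1    0    1G  0 lvm  [SWAP]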

Linux IPTables - Do not change Source IP

Posted: 17 Sep 2021 08:58 AM PDT

I am new to IPTables, so I am basically flying by the seat of my pants. I am currently working on a Squid server and I have managed to make it proxy correctly and all that; however, I would like the server to intercept only ports 80 and 443. Currently, with the basic Squid setup, it appears to intercept and rewrite all traffic - DNS, Active Directory ports, et cetera.

I took a step back, stopped Squid, and decided to attempt to simply forward traffic through that server and not do anything to it, as if the server were simply physically inline. I think I have managed to do this, but it looks like it is still rewriting all traffic. For example, in my router I can block my actual client IP address from passing DNS traffic, but looking at the state traffic, I can see the server (even without running the Squid software) is actually making all the requests. This, of course, makes firewalling impossibly difficult from my router. Below is just my basic config to allow packets to flow through the server:

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 9090 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3128 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -i eth0 -j ACCEPT
-A FORWARD -o eth0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

There are no other rules under any of the other chains, nor is the server masquerading under the NAT chain. What can I do to get traffic to flow through my server without it rewriting anything?

Output of iptables -t nat -L -nv

Chain PREROUTING (policy ACCEPT 3076 packets, 238K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 46 packets, 3428 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 46 packets, 3428 bytes)
 pkts bytes target     prot opt in     out     source               destination

Output of iptables -L -nv

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  410 98704 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:22
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9090
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:3128
  507 39633 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  124 16918 ACCEPT     all  --  eth0   *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      eth0    0.0.0.0/0            0.0.0.0/0
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 842 packets, 89994 bytes)
 pkts bytes target     prot opt in     out     source               destination
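For the eventual Squid setup, interception can be limited to web traffic by redirecting only ports 80 and 443 in the nat table; a sketch, assuming Squid listens on intercept ports 3128/3129 (adjust to the actual squid.conf):

# forward everything untouched, but divert only web ports to Squid
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-ports 3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 3129
# with no MASQUERADE/SNAT rule, all other forwarded traffic keeps its source IP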

Using rsync with a passed file list

Posted: 17 Sep 2021 08:30 AM PDT

I want to shuffle a set of files using shuf, but I am unsure how to then pass the resulting file list to rsync. Should I use --files-from=-, for instance?

shuf -n "$nf" -e /medhc/*.f |
  rsync -av --update --files-from=- . "$dst"
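--files-from reads names relative to the source argument (leading slashes are stripped), so feeding absolute paths with . as the source will not match; a sketch that strips the prefix and names the directory as the source:

# pick $nf random *.f files and copy them, with paths made relative to /medhc/
shuf -n "$nf" -e /medhc/*.f |
  sed 's|^/medhc/||' |
  rsync -av --update --files-from=- /medhc/ "$dst"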

Where is the main function in a shell script?

Posted: 17 Sep 2021 08:26 AM PDT

I am using the FUNCNAME array variable, which gives the names of the currently executing functions. While using it, I came across a main function in ${FUNCNAME[max_ind]}. My question is: where is this main function defined in our shell script? What code is written inside main, and how can I use it? Basically, all the information about this main function.
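For context: bash itself reports "main" for code executing at the top level of a script; it is not a function defined anywhere in the script. A minimal sketch:

#!/bin/bash
# the bottom-most element of FUNCNAME for a script's top-level code is "main"
f() { printf '%s\n' "${FUNCNAME[@]}"; }
f    # prints "f" then "main"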

Need help with a script which copies all files from one folder to another based on timestamp

Posted: 17 Sep 2021 08:08 AM PDT

I need to supply a start time and an end time. I.e., I want all files from 3 am to 6 am on Sep 10th:

start time: 2021091003
end time:   2021091006

The source folder has these files:

00:59 file.2021091000.log
01:59 file.2021091001.log
02:59 file.2021091002.log
03:59 file.2021091003.log
04:59 file.2021091004.log
05:59 file.2021091005.log

Expected output in the destination folder:

03:59 file.2021091003.log
04:59 file.2021091004.log
05:59 file.2021091005.log

Please help.
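Since the timestamp is embedded in each filename, one sketch is to compare it numerically (hypothetical script name and arguments; assumes the file.YYYYMMDDHH.log pattern shown above):

#!/bin/bash
# usage: ./copyrange.sh 2021091003 2021091006 /path/to/src /path/to/dst
start=$1 end=$2 src=$3 dst=$4
for f in "$src"/file.*.log; do
    ts=${f##*/}       # file.2021091003.log
    ts=${ts#file.}    # 2021091003.log
    ts=${ts%.log}     # 2021091003
    if [ "$ts" -ge "$start" ] && [ "$ts" -le "$end" ]; then
        cp -p "$f" "$dst/"
    fi
done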

How to put multiple -I, -L and -l flags in ./configure?

Posted: 17 Sep 2021 07:53 AM PDT

I am trying to build using ./configure.

I have

  1. Three include directories

    -I/path1/include
    -I/path2/include
    -I/path3/include
  2. Two link directories

    -L/path1/lib
    -L/path2/lib
  3. Two -l flag options

    -ltensorflow
    -lasan
  4. Two compile flags

    -O3
    -g

How can I put all these flags effectively as options in ./configure?
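For an autoconf-generated configure script, the conventional route is to pass these as variables on the command line; a sketch using the placeholder paths from the question:

./configure \
  CPPFLAGS="-I/path1/include -I/path2/include -I/path3/include" \
  LDFLAGS="-L/path1/lib -L/path2/lib" \
  LIBS="-ltensorflow -lasan" \
  CFLAGS="-O3 -g"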

File Comparison

Posted: 17 Sep 2021 08:40 AM PDT

I have two files: FILE1 containing many lines, and FILE2 containing KEY VALUE pairs. I need to compare FILE2 with FILE1, and if there is a match, the corresponding word in FILE1 should be replaced with the next column in FILE2.

Example:

FILE1:

<SOME YAML CODE
-------------->
PARM1
PARM2
PARM3
PARM4
<END OF YAML CODE
---------------->

FILE2:

PARM1 mmddyy
PARM2 hhmmss
PARM3 awsid
PARM4 cc

So for every match from FILE2 in FILE1, the corresponding word in FILE1 should be replaced with the 2nd column of FILE2. The desired output should look like:

<SOME YAML CODE
-------------->
mmddyy
hhmmss
awsid
cc
<END OF YAML CODE
---------------->

I tried using sed with my limited knowledge but am not achieving the desired output.

Appreciate your time and support
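A common sketch for this kind of lookup-and-replace is a two-pass awk: load FILE2 into an array, then substitute matching words while reading FILE1:

# pass 1 (NR==FNR) builds the key->value map from FILE2;
# pass 2 replaces any field of FILE1 that matches a key
awk 'NR==FNR { map[$1] = $2; next }
     { for (i = 1; i <= NF; i++) if ($i in map) $i = map[$i]; print }' FILE2 FILE1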

Using wildcard elegantly

Posted: 17 Sep 2021 08:01 AM PDT

I am executing the below command for 1000 files:

ebook-convert <name-of-first-file>.epub <name-of-first-file>.mobi
ebook-convert <name-of-second-file>.epub <name-of-second-file>.mobi

Apparently, instead of manually doing this for 1000 files, one could write a bash script for the job.

I was wondering if there is an easier way to do something like this in UNIX, though - a small command that would look something like:

ebook-convert *.epub *.mobi  

Can you use wildcards in a similar way that works for a scenario like the above?
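A wildcard cannot pair each .epub with a derived .mobi name in a single command, but a one-line loop over the glob does the job; a sketch:

# ${f%.epub} strips the extension so each output name matches its input
for f in *.epub; do
    ebook-convert "$f" "${f%.epub}.mobi"
done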

Bash process gets killed in HPC

Posted: 17 Sep 2021 07:21 AM PDT

I am new to bash scripting and I am using the below script to automate my job submission. This script waits for the previous job to finish and automatically submits a new one.

while true
do
    jobstat=$(squeue -u $USER | grep DNAJB6 | wc -l)

    if [[ "$jobstat" == '0' ]]; then
        sbatch per3_restart.sh
        break
    fi
done

I run this with ./script.sh & on my login node

This is the output when I run top | grep bash; it stays visible for some time (maybe a day), but after that I no longer see it, even if I grep for the process ID:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
26054 vadupa    20   0  140428   3052    836 S   0.3  0.0   0:04.47 bash

But after some time, the process gets killed automatically without any error message or warning.

Am I missing something? Let me know, thanks.

Edit:

I log out and log in frequently to check the progress.

Output for bash -x script.sh > /tmp/trace.txt 2>&1 &

+ [[ hxB =~ i ]]
+ export -f module
+ ENV=/hpc/eb/modules-tcl-1.923/init/profile.sh
+ export ENV
+ BASH_ENV=/hpc/eb/modules-tcl-1.923/init/bash
+ export BASH_ENV
+ '[' 4 -ge 3 ']'
+ [[ hxB =~ i ]]
+ MODULESHOME=/hpc/eb/modules-tcl-1.923
+ export MODULESHOME
+ [[ ! :/hpc/sw/hpc/bin:/hpc/sw/hpc/sbin:/usr/lib64/qt-3.3/bin:/hpc/eb/compilerwrappers/compilers:/hpc/eb/compilerwrappers/linkers:/hpc/eb/modules-tcl-1.923/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ibutils/bin:/home/vadupa/bin: =~ :/hpc/eb/modules-tcl-1\.923/bin: ]]
++ manpath
+ manpath=/hpc/sw/hpc/man:/hpc/sw/hpc/man:/hpc/eb/modules-tcl-1.923/share/man:/usr/local/share/man:/usr/share/man/overrides:/usr/share/man:/opt/ibutils/share/man:/hpc/sw/hpc/man:/hpc/eb/modules-tcl-1.923/share/man:/usr/local/share/man:/usr/share/man/overrides:/usr/share/man:/opt/ibutils/share/man:/usr/share/man
+ [[ ! :/hpc/sw/hpc/man:/hpc/sw/hpc/man:/hpc/eb/modules-tcl-1.923/share/man:/usr/local/share/man:/usr/share/man/overrides:/usr/share/man:/opt/ibutils/share/man:/hpc/sw/hpc/man:/hpc/eb/modules-tcl-1.923/share/man:/usr/local/share/man:/usr/share/man/overrides:/usr/share/man:/opt/ibutils/share/man:/usr/share/man: =~ :/hpc/eb/modules-tcl-1\.923/share/man: ]]
+ '[' /sw/noarch/modulefiles/environment:/hpc/sw/modules/modulefiles/init:/hpc/sw/modules/modulefiles/init-devel = '' ']'
+ '[' compilerwrappers:surfsara = '' ']'
+ '[' -r /hpc/eb/modules-tcl-1.923/init/modulerc -a /sw/noarch/modulefiles/environment:/hpc/sw/modules/modulefiles/init:/hpc/sw/modules/modulefiles/init-devel = '' -a compilerwrappers:surfsara = '' ']'
+ true
++ squeue -u vadupa
++ wc -l
+ jobstat=2
+ [[ 2 == \1 ]]
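Two things commonly kill such a script on a login node: the tight loop burns CPU (so cluster policies reap it), and processes tied to the login session die at logout. A sketch that polls gently and detaches from the session, assuming the site permits background jobs on the login node:

# poll once a minute instead of spinning, and survive logout via nohup
nohup bash -c '
    while squeue -u "$USER" | grep -q DNAJB6; do
        sleep 60
    done
    sbatch per3_restart.sh
' >/tmp/resubmit.log 2>&1 &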

Microsoft Teams 100% CPU usage

Posted: 17 Sep 2021 08:24 AM PDT

Occasionally Microsoft Teams decides to continuously utilize 100% of one CPU core and does not stop unless it is killed.

Is there a better solution to this problem than to kill it and hope it won't do it again soon?

Iteration on arrays

Posted: 17 Sep 2021 07:29 AM PDT

I need to create a script to check pool statuses. Each of the pools will return a scan result like below:

pool1 - scan: scrub repaired 0 in 0 days 00:06:17 with 0 errors on Thu Sep
pool2 - scan: scrub in progress since Thu Sep
pool3 - scan: scrub repaired 0 in 0 days 00:04:02 with 0 errors on Thu Sep
pool4 - scan: scrub repaired 0 in 0 days 00:04:22 with 0 errors on Thu Sep

I need to iterate over each of them and check whether the scan is completed, and if all of them contain scrub repaired, then do something. If one or two are still in progress, I need to check them, let's say every 5 seconds, and wait for all of them to complete. So far I have this, without the do/until loop:

declare -a scans=("pool1 - scan: scrub repaired 0 in 0 days 00:06:17 with 0 errors on Thu Sep"
"pool2 - scan: scrub in progress since Thu Sep"
"pool3 - scan: scrub repaired 0 in 0 days 00:04:02 with 0 errors on Thu Sep"
"pool4 - scan: scrub repaired 0 in 0 days 00:04:22 with 0 errors on Thu Sep")  # closing paren was missing

for scan in "${scans[@]}"; do
    echo "$scan"
    if ! [[ $scan == *"scrub repaired"* ]]; then   # "! [[" needs a space to parse
        echo "Scan in progress. Waiting.."
    elif [[ $scan == *"scrub repaired"* ]]; then
        echo "Scan is ready. Saving it somewhere for documentation"
    else
        continue
    fi
    break
done
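For the missing wait loop, one sketch is to re-query the pools until none is still in progress, assuming the lines come from zpool status:

# keep polling every 5 seconds while any pool still reports an active scrub
while zpool status | grep -q 'scrub in progress'; do
    echo "Scan in progress. Waiting..."
    sleep 5
done
echo "All scrubs finished; saving results for documentation."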

How do I get an ssh command to run on boot?

Posted: 17 Sep 2021 09:06 AM PDT

I've tried putting an ssh command in /etc/rc.local but it doesn't work.

/etc/rc.local:

#!/bin/bash
ssh -fN -R 8080:localhost:80 -i /home/pi/.ssh/id_rsa ubuntu@50.0.0.1 >> /tmp/ssh-nginx.out 2>>/tmp/ssh-nginx.err

/tmp/ssh-nginx.err:

pi@raspberrypi:~ $ cat /tmp/ssh-nginx.err
ssh: connect to host 50.0.0.1 port 22: Network is unreachable

Adding the same command in crontab (the line is @reboot /etc/init.d/ssh-nginx) gives the same output.

What's the right way to do this?
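The error suggests the command runs before the network is up; one sketch is to retry until the connection succeeds (the same tunnel command as above, wrapped in a loop). On a systemd-based Raspberry Pi OS, a unit ordered after network-online.target would be the more robust route.

#!/bin/bash
# retry the reverse tunnel until the network is reachable (e.g. from rc.local)
until ssh -fN -R 8080:localhost:80 -i /home/pi/.ssh/id_rsa ubuntu@50.0.0.1 \
        >> /tmp/ssh-nginx.out 2>> /tmp/ssh-nginx.err; do
    sleep 5
done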

Display Systemd's ExecStart instruction with resolved environment variables

Posted: 17 Sep 2021 08:09 AM PDT

Wondering if there's a way to extract the complete ExecStart instruction used by any given systemd service. By "complete" I'm referring to the interpreted version of the ExecStart string, not the literal one displayed when you do something like:

systemctl show kubelet.service -p ExecStart  

Example:

$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

[Service]
...
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/opt/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

The typical systemctl show <svc> instruction shows the 'literal' string ...

$ systemctl show kubelet.service -p ExecStart --no-pager | cut -d";" -f2 | sed 's@argv\[\]=@@' | sed 's@^ @@'
/opt/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

But I'm looking for the interpreted version of the above so that the content of the environment variables is properly displayed.

I would expect this state to be available somewhere in systemd's engine, as I believe it must already be aware of the environment files where these variables are defined (as declared above in the EnvironmentFile clauses).

And sure, I can write a script to parse the service file and obtain all this info, but I suspect / hope that there's an easier approach.
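If the service is running, the expanded argv can be read from the process itself, since systemd has already substituted the variables by exec time; a sketch:

# read the live process's command line, where $KUBELET_* are already expanded
pid=$(systemctl show kubelet.service -p MainPID --value)
tr '\0' ' ' < /proc/"$pid"/cmdline; echo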

Use bash instead of tcsh for non-interactive shells

Posted: 17 Sep 2021 07:20 AM PDT

I have an environment similar to the one in this question: Different shells for interactive and non-interactive work

I'm currently stuck with tcsh as my "official" default shell. For interactive shells, I essentially exec /bin/bash from my ~/.login file.

Is there any way to have bash be the shell for non-interactive shells too? I.e., if I do ssh myserver env, it prints that the shell is /bin/tcsh. I was looking at ~/.cshrc and I see where I could put something there to do this, but I don't know what to put. Or perhaps there is a different place for something like this?

Currently, if I exec /bin/bash from ~/.cshrc for non-interactive shells, the above ssh command hangs (presumably because it's trying to exec an interactive shell from the non-interactive one).

Is it even possible to do what I want to do?
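For reference, csh sets $prompt only in interactive shells, which is the usual guard in ~/.cshrc; a sketch that avoids the hang by only re-exec'ing bash interactively (it does not, by itself, switch the shell used for ssh remote commands):

# ~/.cshrc sketch: $?prompt is 1 only for interactive shells
if ( $?prompt ) then
    exec /bin/bash -l
endif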

ftp...Return to your local Kermit and give a RECEIVE command

Posted: 17 Sep 2021 08:52 AM PDT

I'm testing ftp/kermit (trying the command line to see why it's not working with a script), and for some reason I get "Return to your local Kermit and give a RECEIVE command", but there wasn't a spot to enter it at that statement - there were weird characters on the screen at that point. I'm not sure what this refers to and haven't found much useful info searching online. This is what I'm seeing; I'm not exactly sure about the feedback from Kermit.

$ /apps/bin/kermit
C-Kermit> ftp open MMMM /USER:user\useruser /PASSWORD: pwpw   # changed for safety
Connected to MMMM.
User logged in.
Switching LOCUS for file-management commands to REMOTE.
Remote system type is Windows_NT.
Default transfer mode is TEXT ("ASCII")
(/home/mcleary/k_test/michele/) C-Kermit> cd /home/mcleary/k_test/michele/
Switching LOCUS for file-management commands to LOCAL.
Service not available, connection closed by server
(/home/mcleary/k_test/michele/) C-Kermit> ascii
(/home/mcleary/k_test/michele/) C-Kermit> put test.txt ../20210916_test.txt
Return to your local Kermit and give a RECEIVE command.

KERMIT READY TO SEND...
 SENT: (0 files)

*************************
SEND-class command failed.
 Packets sent: 2
 Retransmissions: 11
 Timeouts: 12
 Damaged packets: 0
 Fatal Kermit Protocol Error: Too many retries

HINTS... If the preceding error message does not explain the failure:
 . Adjust the timeout method (see HELP SET SEND).
 . Increase the retry limit (see HELP SET RETRY).
 . Try it again with SET STREAMING OFF.
 . Try it again with SET PARITY SPACE.
 . As a last resort, give a ROBUST command and try again.
Also:
 . Be sure the source file has read permission.
 . Be sure the target directory has write permission.
(Use SET HINTS OFF to suppress hints.)
*************************

Delete files in a directory only when the cumulative file size exceeds x GB

Posted: 17 Sep 2021 09:30 AM PDT

I have a directory with thousands of files.

I need to sort the files in descending order (to make sure that the newest files won't be deleted) and start summing their sizes until the sum reaches x GB (for example, 10 GB).

Once that is reached, I need to be able to delete all the files (already sorted in descending order) that come after those 10 GB of files.

So, the contents of my directory should NOT exceed 10GB in size.

I need to accomplish this without GAWK, since I don't have a GNU system.

Is this doable with the find command only?
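find alone cannot keep a running total, but the POSIX tools below (no GAWK) can; this sketch assumes filenames without newlines, since it parses ls -t (newest first):

# delete files once the cumulative size passes 10 GB, keeping the newest
limit=$((10 * 1024 * 1024))    # 10 GB expressed in KB
total=0
ls -t | while IFS= read -r f; do
    size=$(du -k -- "$f" | cut -f1)    # per-file size in KB
    total=$((total + size))
    if [ "$total" -gt "$limit" ]; then
        rm -- "$f"
    fi
done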

Use awk to delete everything after the ","

Posted: 17 Sep 2021 08:57 AM PDT

I have a variable, var, that contains:

XXXX YY ZZZZZ
aaa,bbb,ccc

All I want is aaa from the second line. I tried:

out=$(echo "$var" | awk 'NR==2{sub(",.*","")}' )  

but I get no output. I tried using , as the FS but I can't get the syntax right. I really want to learn awk/regex syntax.

I want to use out as a variable "$out" somewhere else -- not to print.
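Two details explain the empty output: the awk action never prints (a pattern with an explicit action produces no default output), and the result must land on stdout for the command substitution. A sketch using , as the field separator:

# NR==2 selects the second line; -F, splits on commas; $1 is "aaa"
out=$(printf '%s\n' "$var" | awk -F, 'NR==2 {print $1}')
echo "$out"    # -> aaa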

Extract sub-directory name based on pattern

Posted: 17 Sep 2021 08:20 AM PDT

I have a list of paths stored in a shell variable tmp, for example:

/abc/bcd/def/ZRT834/ZRT834_9/5678/S1_L001_R1.tar
/abc/bcd/def/ZRT834/ZRT834_9/5678/S2_L001_I1.tar
/abc/bcd/def/ZRT834/ZRT834_9/5678/S1_L001_I2.tar
/abc/bcd/def/ZRT207/ZRT207_1/5678/S1_L001_R1.tar
/abc/bcd/def/ZRT207/ZRT207_1/5678/S1_L001_R2.tar
/abc/bcd/def/ZRT207/ZRT207_1/5678/S1_L001_I2.tar

I want to create new directories based on the matching patterns from the paths. In the above example, I want to create directories ZRT834_9 and ZRT207_1 and create soft links for the tar files in their corresponding directories.

My output should be something like: ZRT834_9 directory having S1_L001_R1.tar, S2_L001_I1.tar, and S1_L001_I2.tar

How do I achieve this?
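The wanted directory is always two levels above each file, so it can be peeled off with dirname/basename; a sketch:

# for each path, the link's target directory is the file's grandparent
while IFS= read -r path; do
    dir=$(basename "$(dirname "$(dirname "$path")")")    # e.g. ZRT834_9
    mkdir -p "$dir"
    ln -s "$path" "$dir/"
done <<< "$tmp"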

Delete the oldest files in a folder if the combined size of the folder is more than 10G

Posted: 17 Sep 2021 07:22 AM PDT

The following syntax removes the files under the hive folder:

/usr/bin/find /var/log/hive -type f -print -delete  

I am trying to do the following:

Remove the oldest files under /var/log/hive, but only if the folder size is more than 10G.

NOTE - the deletion process should stop when the size under the hive folder drops to 10G, so purging starts only if the size is more than 10G.

Can we create this solution with the find command, or maybe another approach?
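find has no notion of a running directory total, so a du-driven loop is one sketch (assumes filenames without newlines; ls -t lists newest first, so tail -n 1 picks the oldest):

# purge oldest-first until /var/log/hive is back under 10 GB
dir=/var/log/hive
while [ "$(du -sk "$dir" | cut -f1)" -gt $((10 * 1024 * 1024)) ]; do
    oldest=$(ls -t "$dir" | tail -n 1)
    rm -f -- "$dir/$oldest"
done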

Renaming and cleaning up filenames of TV shows to be S01E01.mp4 etc

Posted: 17 Sep 2021 07:24 AM PDT

If I have some TV shows that are named badly, I need to clean them up:

$ ls
Some_Series.1_Episode.1.mp4  'Some Series01.Episode02.mp4'  SomeSeries1Episode03.mp4

I need to batch rename them to become

$ ls
S01E01.mp4  S01E02.mp4  S01E03.mp4

I have used the following script and it works, but only when the original filenames contain series and episode numbers like 01 02 03, not 1 2 3.

#!/bin/bash
# rename tv show filenames to be kodi friendly
cd /mnt/2tb_hd/con/
if [ $? == 1 ]; then
    exit
fi
for filename in *; do
    if [[ "$filename" =~ (\**).*(.*[0-9][0-9]).*([0-9][0-9]).*(\....)$ ]]; then
        result=$(echo mv \"$filename\" S${BASH_REMATCH[2]}E${BASH_REMATCH[3]}${BASH_REMATCH[4]}\")
        if [[ $? == 0 ]]; then
            mv "$filename" "S${BASH_REMATCH[2]}E${BASH_REMATCH[3]}${BASH_REMATCH[4]}"
        fi
    fi
done
exit

I need to make this code either change any 1 2 3 4 5 etc. in the filenames to have 0 padding before running the second renaming loop, or just alter the code I already have to match either 01 or 1, regardless of the 0 padding.

Sorry if this seems really obvious but I am new to bash so please forgive me.


I have updated the script and now I have issues with episodes 8 and 9. I get the following error:

line 10: printf: 08: invalid octal number  

So episodes 8 and 9 are missing, but there is one extra file S02E00.mkv for each series with over 7 episodes.

The adapted script

#!/bin/bash
# rename tv show files to kodi friendly format S01E01 etc
cd /mnt/2tb_hd/Adults/TV_Shows/Breaking\ Bad/
if [ $? == 1 ]; then
    exit
fi
reg='^([^0-9]*)([0-9][0-9]*)[^0-9]*([0-9][0-9]*).*(\....)$'
for filename in *.*; do
    if [[ $filename =~ $reg ]]; then
        printf -v newname 'S%02dE%02d%s' "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}" "${BASH_REMATCH[4]}"
        mv "$filename" "$newname"
    fi
done
exit

See http://pastebin.com/2XRH85ua for the full outcome of the test run.
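The "invalid octal number" error comes from bash arithmetic: printf %02d evaluates its argument, and a leading zero (08, 09) marks it as octal. Forcing base 10 with 10# is the usual fix; a sketch of the changed printf line:

# 10#$... tells bash the captured digits are decimal, so 08 and 09 parse
printf -v newname 'S%02dE%02d%s' "$((10#${BASH_REMATCH[2]}))" \
       "$((10#${BASH_REMATCH[3]}))" "${BASH_REMATCH[4]}"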

Problem with GTK applications

Posted: 17 Sep 2021 09:08 AM PDT

I did an upgrade with yaourt -Syuu but when I rebooted, my Xfce didn't work. So I installed KDE and it worked perfectly.

When I tried to run firefox, this is the output:

(process:5495): GLib-CRITICAL **: g_slice_set_config: assertion 'sys_page_size == 0' failed
firefox: symbol lookup error: /usr/lib/libpangoft2-1.0.so.0: undefined symbol: hb_buffer_set_cluster_level

with mousepad:

(mousepad:5517): GtkSourceView-CRITICAL **: gtk_source_style_scheme_get_id: assertion 'GTK_IS_SOURCE_STYLE_SCHEME (scheme)' failed

(mousepad:5517): GLib-CRITICAL **: g_variant_new_string: assertion 'string != NULL' failed

(mousepad:5517): GtkSourceView-CRITICAL **: gtk_source_style_scheme_get_id: assertion 'GTK_IS_SOURCE_STYLE_SCHEME (scheme)' failed

(mousepad:5517): GLib-CRITICAL **: g_variant_new_string: assertion 'string != NULL' failed

(mousepad:5517): GtkSourceView-CRITICAL **: gtk_source_style_scheme_get_id: assertion 'GTK_IS_SOURCE_STYLE_SCHEME (scheme)' failed
mousepad: symbol lookup error: /usr/lib/libpangoft2-1.0.so.0: undefined symbol: hb_buffer_set_cluster_level

and chromium:

/usr/lib/chromium/chromium --ppapi-flash-path=/usr/lib/PepperFlash/libpepflashplayer.so --ppapi-flash-version=18.0.0.233: symbol lookup error: /usr/lib/libpangoft2-1.0.so.0: undefined symbol: hb_buffer_set_cluster_level

Searching a while I found libpangoft2-1.0.so is in lib32-pango or Pango.

sudo pacman -S lib32-pango
warning: lib32-pango-1.36.8-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Packages (1) lib32-pango-1.36.8-1

Total Installed Size:  0.50 MiB
Net Upgrade Size:      0.00 MiB

:: Proceed with installation? [Y/n] Y
(1/1) checking keys in keyring                        [############################] 100%
(1/1) checking package integrity                      [############################] 100%
(1/1) loading package files                           [############################] 100%
(1/1) checking for file conflicts                     [############################] 100%
(1/1) checking available disk space                   [############################] 100%
(1/1) reinstalling lib32-pango                        [############################] 100%
sbin/ldconfig: file /usr/lib/libtracker-miner-1.0.so.0 is empty, not checked.
sbin/ldconfig: file /usr/lib/libtracker-miner-1.0.so is empty, not checked.
sbin/ldconfig: file /usr/lib/libtracker-control-1.0.so.0.600.0 is empty, not checked.
sbin/ldconfig: file /usr/lib/libtracker-miner-1.0.so.0.600.0 is empty, not checked.
sbin/ldconfig: file /usr/lib/libtracker-control-1.0.so.0 is empty, not checked.

Also, I tried reinstalling glib2, glib, gtk, and gtk2, but none of it worked.
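For what it's worth, hb_buffer_set_cluster_level is a HarfBuzz symbol, so the errors point at pango being newer than the installed harfbuzz (a partial-upgrade mismatch); a sketch of syncing the pair on an Arch-style system:

# bring harfbuzz and pango (and their lib32 counterparts) to matching versions
sudo pacman -Syu harfbuzz lib32-harfbuzz pango lib32-pango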

How do I safely delete old kernel versions in CentOS 7?

Posted: 17 Sep 2021 07:11 AM PDT

I might be encountering odd symptoms resulting from competing kernels in CentOS 7. So how do I safely delete the old kernels? And how do I know which kernel is the newest one?

Below is the terminal output I get when researching this on the server in question. Note that I tried package-cleanup, but it leaves the same 2 kernels.

The instructions in this tutorial say that the output of the following two commands should match, but you can see that they do not, even after a reboot:

[root@localhost ~]# rpm -qa kernel | sort -V | tail -n 1
kernel-3.10.0-229.el7.x86_64
[root@localhost ~]# uname -r
3.10.0-229.14.1.el7.x86_64

The remaining commands confirm that there are two kernels, and illustrate attempts to delete the old one.

[root@localhost ~]# cd /usr/src/kernels
[root@localhost kernels]# ls -al
total 16
drwxr-xr-x.  4 root root 4096 Oct  2 12:55 .
drwxr-xr-x.  4 root root 4096 Oct  2 13:15 ..
drwxr-xr-x. 22 root root 4096 Oct  2 12:55 3.10.0-229.14.1.el7.x86_64
drwxr-xr-x. 22 root root 4096 Oct  2 12:35 3.10.0-229.el7.x86_64
[root@localhost kernels]# rpm -q kernel
kernel-3.10.0-229.el7.x86_64
kernel-3.10.0-229.14.1.el7.x86_64
[root@localhost kernels]# package-cleanup --oldkernels=1
Loaded plugins: fastestmirror
Usage:
    package-cleanup: helps find problems in the rpmdb of system and correct them

    usage: package-cleanup --problems or --leaves or --orphans or --oldkernels
Command line error: --oldkernels option does not take a value
[root@localhost kernels]# package-cleanup --oldkernels
Loaded plugins: fastestmirror
No old kernels to remove
[root@localhost kernels]# rpm -q kernel
kernel-3.10.0-229.el7.x86_64
kernel-3.10.0-229.14.1.el7.x86_64
[root@localhost kernels]#

I also opened /etc/yum.conf and set installonly_limit=1, but this resulted in an error from a subsequent yum update command saying that 1 is outside the acceptable range for installonly_limit.

I assume that 3.10.0-229.14.1.el7.x86_64 is the newest, but how can I know this? Also, the boot menu offers multiple kernels to choose from, and the opportunities for confusion get worse when the system auto-boots the first kernel on the list.

Can someone please explain how this works and, specifically, how to safely delete old kernels so that the kernel version can be eliminated as a possible cause of odd symptoms? I want to make sure that the most recent kernel is the only kernel that can ever run, no matter how the system is restarted.
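A sketch of the usual CentOS 7 route: let rpm order the kernels by install time, then let package-cleanup (from yum-utils) drop all but the newest; note that keeping two is the convention so a fallback remains, which is also why installonly_limit refuses values below 2:

rpm -q kernel --last                      # newest installed kernel listed first
package-cleanup --oldkernels --count=1    # keep only the newest (count=2 keeps a fallback)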

Can't change monitor resolution of a SyncMaster SA850 on Intel HD graphics

Posted: 17 Sep 2021 08:01 AM PDT

I have installed Debian 7.2 from a live USB and connected the monitor, a Samsung SyncMaster SA850, with a DVI cable to the integrated Intel HD graphics. However, I have a very low resolution and can't change it. In addition, the system doesn't want to shut down (a driver update didn't help). With a VGA cable everything works fine.

Configuration: I have ASUS H87-Pro motherboard and Intel Core i7-4770K. Linux kernel 3.2.0-4-amd64
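If the driver fails to read the monitor's EDID over DVI, a mode can be added by hand; a sketch with xrandr, where the output name DVI1 and the SA850's 2560x1440 native resolution are assumptions to verify against xrandr's own listing. Note that 2560x1440 at 60 Hz requires dual-link DVI; a single-link cable or port cannot carry it, which alone could explain the low resolution.

cvt 2560 1440 60                       # prints a Modeline for the target mode
xrandr                                 # find the real output name (e.g. DVI1)
xrandr --newmode "2560x1440_60.00" <numbers printed by cvt>
xrandr --addmode DVI1 "2560x1440_60.00"
xrandr --output DVI1 --mode "2560x1440_60.00"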

Finding the sector size of a partition

Posted: 17 Sep 2021 09:12 AM PDT

I answered this question, assuming that the *.img file had a sector size of 512.

How do I query a device, or the image of a device, to find the correct sector size?
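A sketch covering both cases with standard Linux tools (device and image names are placeholders):

# real block device: logical and physical sector sizes
blockdev --getss /dev/sda      # logical (e.g. 512)
blockdev --getpbsz /dev/sda    # physical (e.g. 4096)

# image file: fdisk reports the sector size it detects/assumes
fdisk -l disk.img | grep 'Sector size'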
