Tuesday, April 13, 2021

Recent Questions - Unix & Linux Stack Exchange



How to resize LVM without data loss?

Posted: 13 Apr 2021 09:48 AM PDT

My partition map

My LVM volume should be around 135 GB, but I see it's only 74.5 G. How do I increase it without data loss, given that my sda is 150 GB?

I'm using VMware for virtualization.
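A typical sequence, after enlarging the virtual disk in VMware, is to grow the partition, the physical volume, the logical volume, and finally the filesystem. This is only a sketch: it assumes the PV sits on /dev/sda3, the LV is /dev/vg/root, and the filesystem is ext4 (substitute the real names from `lsblk`, `pvs` and `lvs`; `growpart` comes from cloud-guest-utils and is only needed if the partition itself must grow):

```shell
# Assumed names: PV on /dev/sda3, LV /dev/vg/root, ext4 filesystem.
sudo growpart /dev/sda 3                   # grow partition 3 into the new disk space
sudo pvresize /dev/sda3                    # tell LVM the physical volume grew
sudo lvextend -l +100%FREE /dev/vg/root    # give the LV all free extents
sudo resize2fs /dev/vg/root                # grow ext4 online (use xfs_growfs for XFS)
```

All four steps can run online; nothing here rewrites existing data, but a backup before resizing is still prudent.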

Terminal does not open after uninstalling zsh on Ubuntu 20?

Posted: 13 Apr 2021 09:52 AM PDT

Problem

After uninstalling zsh, the terminal would not open.

Tried

xterm and other terminals do not open either.

When I log in on a tty console (Alt + F1), it keeps asking for the username and password again and again.

When I open the .bashrc file, in the first line I can see:

exec /bin/zsh --login  

How can I remove this, given that zsh no longer exists?

Any other possible solution?
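Since the `exec` replaces every new bash with a zsh that no longer exists, one way out is to start a bash that skips ~/.bashrc entirely and then delete that line. A sketch, assuming the line reads exactly as quoted above:

```shell
# From any shell you can still reach (e.g. a tty login, or `bash --noprofile --norc`
# typed at a broken prompt), remove the offending exec line from ~/.bashrc:
sed -i '\|exec /bin/zsh --login|d' ~/.bashrc
```

Reinstalling zsh (so the exec target exists again) would also restore a working terminal, after which the line can be removed at leisure.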

how to properly match data in awk and fill in missing data based on a known date/time file

Posted: 13 Apr 2021 09:49 AM PDT

I have a data study running from 1/1/2002 00:00 to 12/31/19 23:00. Not all locations have the same time ranges, so to make data processing easier I have a script to fill in the missing YYYYMMDD dates and HH:MM times. I happened to notice the script is not transferring the data correctly. I am processing this data using awk. Here is example input data:

01:00,20020101,0.003
02:00,20020101,0.002
03:00,20020101,0.003
04:00,20020101,0.002
05:00,20020101,0.001

This is then fed to temp1.tmp which gives the correct values ....

20020101 0.003
20020101 0.002
20020101 0.003
20020101 0.002
20020101 0.001

However when attempting to match col 1 to identify missing data in temp1 and matching to the correct date/time row the temp2.tmp file gives....

20020101 0.013
20020101 0.013
20020101 0.013
20020101 0.013
20020101 0.013

These values are not even close to correct; the 0.013 data point does not even occur until the 09:00 measurement. Any suggestions for this script would be greatly appreciated. Thank you.

#Print the column information
awk -F ',' '{print $2,$3}' County081-O3-0124.txt > temp1.tmp
awk 'NR==FNR {missing[$1]=$2} NR>FNR {printf("%s %s\n",$1,missing[$1]);}' temp1.tmp 2002-2019yyyymmdd.txt > temp2.tmp
# Print data column for MODIS data
awk '{print $2}' temp2.tmp > temp3.tmp
# Fill blank data spots with missing data flag of -999
awk '{print NF?$1:blankrow}' blankrow=-999 temp3.tmp > temp4.tmp
cp 2002-2019yyyymmdd-hhmm.txt temp5.tmp
paste temp5.tmp temp4.tmp > temp6.tmp
# sed -i 's/-28672.0000/-999/g' temp6.tmp
# sed -i 's/0.0000/-999/g' temp6.tmp
# sed -i 's/-999000/-999/g' temp6.tmp
sed -i 's/\t/,/g' temp6.tmp
mv temp6.tmp test.out
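The likely root cause: in `missing[$1]=$2` the lookup key is only the date, which repeats for every hour of that day, so each date ends up mapped to a single (last-read) value, and every 20020101 row then prints that same number. Keying on date and time together avoids the collision. A sketch, assuming the master list file contains comma-separated `YYYYMMDD,HH:MM` lines (the real layout of 2002-2019yyyymmdd-hhmm.txt may differ):

```shell
# Build a date+time keyed lookup straight from the raw CSV, then walk the
# complete date/time list, emitting -999 where no measurement exists.
awk -F',' '
    NR==FNR { val[$2 "," $1] = $3; next }   # data file: $1=HH:MM, $2=date, $3=value
    { k = $1 "," $2                         # master file: $1=date, $2=HH:MM (assumed)
      print k "," (k in val ? val[k] : -999) }
' County081-O3-0124.txt 2002-2019yyyymmdd-hhmm.txt > test.out
```

This also collapses the six temp files into a single pass, since the missing-data flag is emitted directly whenever the key is absent.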

Passing a command with arguments and redirection to a function

Posted: 13 Apr 2021 09:52 AM PDT

In a Bash script, I pass a command with arguments. This works fine, except when the command includes a redirection. In that case, the redirection character is treated as an ordinary character.

$ cat foo
#!/bin/bash

f() {
  echo "command: $@"
  $@
}

f echo a-one a-two
f 'echo b-one b-two'
f 'echo c-one c-two > c.tmp'
# I don't want to do f echo d-one d-two > d.tmp because I want to redirect the
# output of the passed command, not the output of the f() function.

$ ./foo
command: echo a-one a-two
a-one a-two
command: echo b-one b-two
b-one b-two
command: echo c-one c-two > c.tmp
c-one c-two > c.tmp

As you see, this prints "c-one c-two > c.tmp" when I wanted to print "c-one c-two" to file c.tmp. Is there a way to do that?
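The `>` stays literal because word splitting of `$@` happens after redirections have already been parsed. One common answer is `eval`, which re-parses the string as shell code (with the usual caveat that the caller must trust the string):

```shell
f() {
  echo "command: $1"
  eval "$1"        # re-parse the string, so > is treated as a redirection
}

f 'echo c-one c-two > c.tmp'
cat c.tmp          # prints: c-one c-two
```

`bash -c "$1"` behaves similarly but runs the command in a child shell, which is safer when the string should not affect the caller's variables.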

How does bash interpret triple parentheses?

Posted: 13 Apr 2021 09:48 AM PDT

I see that in bash the command

echo $(((i=18)))  

prints 18. This makes me understand that $(((i=18))) is interpreted as an arithmetic expansion (with the variable i being initialized inside the construct). However, one could also think of it as a command substitution

$(command)  

with

((i=18))  

being the command. As a matter of fact, it looks like command substitutions come before arithmetic expansions (Learning the bash Shell, O'Reilly 2005, p. 181). Therefore the result is not what one would expect. How do you explain this?
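As I understand it, the expansion order in the book applies to distinct constructs, not to deciding which construct was written: when bash sees `$((` it tries to parse the whole thing as an arithmetic expansion first, and only falls back to command substitution if that fails (POSIX explicitly allows this resolution). A space after `$(` forces the command-substitution reading, which the difference in side effects makes visible:

```shell
i=0
echo "$(((i=18)))"     # parsed as arithmetic expansion: prints 18
echo "$i"              # 18: the assignment happened in the current shell

echo "$( ((i=42)) )"   # space forces command substitution of ((i=42)):
                       # it runs in a subshell and prints nothing
echo "$i"              # still 18: the subshell's assignment did not persist
```

So `$(((i=18)))` never reaches the command-substitution stage at all; the parser has already committed to arithmetic.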

how to read two input files using sed processing in a bash loop?

Posted: 13 Apr 2021 09:17 AM PDT

I'm very much a beginner in bash. I already wrote a small script looping through all the *.txt files and passing each one as input to a Perl script.

Single file as an input

#!/bin/bash
set -e

for i in *.txt
do
  SAMPLE=$(echo ${i} | sed "s/.txt//")
  echo ${SAMPLE}.txt
  time /home/sunn/data/softwares/evaluation/msa/pal2nal.v14/pal2nal.pl ${SAMPLE}.txt -output paml > ${SAMPLE}.paml.txt
done

Actual command for running perl script (2 files as input)

    pal2nal.pl  OG0012884_out.fa OG0012884_out.txt -output paml > OG0012884_paml.txt  

Two files as input? I got stuck...

#!/bin/bash
set -e

for i in *.txt
do
  SAMPLE=$(echo ${i} | sed "s/.txt//" | sed "s/.fa//")
  echo ${SAMPLE}.txt
  time /home/sunn/data/softwares/evaluation/msa/pal2nal.v14/pal2nal.pl ${SAMPLE}.fa ${SAMPLE}.txt -output paml > ${SAMPLE}.paml.txt
done

Not sure if cron job is run

Posted: 13 Apr 2021 09:18 AM PDT

I have the following cron job in /etc/cron.d/backup:

*/1 * * * * backupbot /home/backupbot/bin/backup-script.sh  

Basically, I want the backup-script.sh to run every minute (and the user backupbot should be executing the job).

The /home/backupbot/bin/backup-script.sh file is owned by backupbot (who has execute permission on it). The file looks as follows:

#!/bin/bash
set -e

{

BACKUP_DIR=/var/app/backups
STORAGE_ACCOUNT_URL=https://myserver/backups

BACKUP_FILE=$(ls $BACKUP_DIR -t | head -1)

if [ -z "$BACKUP_FILE" ]; then
    echo "There are no backups to synchronize"
    exit 0
fi

azcopy login --identity
azcopy copy $BACKUP_DIR/$BACKUP_FILE $STORAGE_ACCOUNT_URL/$BACKUP_FILE

} >/tmp/cron.backup-script.$$ 2>&1

Normally, any output should be logged into /tmp/cron.backup-script.xxxx. Such a file is never created.

The only evidence that the job is being noticed by Cron is the following output of systemctl status cron.service:

● cron.service - Regular background program processing daemon
   Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2021-04-13 09:59:08 UTC; 6h ago
     Docs: man:cron(8)
 Main PID: 1086 (cron)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/cron.service
           └─1086 /usr/sbin/cron -f

Apr 13 16:00:01 my-vm CRON[17201]: pam_unix(cron:session): session closed for user root
Apr 13 16:00:01 my-vm CRON[17198]: pam_unix(cron:session): session closed for user root
Apr 13 16:00:01 my-vm CRON[17199]: pam_unix(cron:session): session closed for user root
Apr 13 16:01:01 my-vm CRON[17402]: pam_unix(cron:session): session opened for user root by (uid=0)
Apr 13 16:01:01 my-vm CRON[17403]: (root) CMD ([ -f /etc/krb5.keytab ] && [ \( ! -f /etc/opt/omi/creds/omi.keytab \) -o \( /etc/krb5.keytab -nt /etc/opt/omi/creds/omi.keytab \) ] &&
Apr 13 16:01:01 my-vm CRON[17401]: pam_unix(cron:session): session opened for user backupbot by (uid=0)
Apr 13 16:01:01 my-vm CRON[17404]: (backupbot) CMD (/home/backupbot/bin/backup-script.sh)
Apr 13 16:01:01 my-vm CRON[17402]: pam_unix(cron:session): session closed for user root
Apr 13 16:01:01 my-vm CRON[17401]: (CRON) info (No MTA installed, discarding output)
Apr 13 16:01:01 my-vm CRON[17401]: pam_unix(cron:session): session closed for user backupbot

It mentions something about sessions for backupbot. How can I investigate further?
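One clue is already in the log: the `(No MTA installed, discarding output)` line belongs to the backupbot session, which suggests the script is being started but producing output (most likely an error, e.g. `azcopy` not being on cron's minimal PATH) that cron then throws away for lack of a mail system. A sketch of the usual next steps:

```shell
# 1. Run the job by hand, as the same user cron uses:
sudo -u backupbot /home/backupbot/bin/backup-script.sh; echo "exit status: $?"

# 2. Check cron's own log lines for the job:
grep backup-script /var/log/syslog | tail

# 3. Capture the discarded output by redirecting in the crontab entry itself:
#    */1 * * * * backupbot /home/backupbot/bin/backup-script.sh >>/tmp/backup.log 2>&1
```

Note that the script's own `>/tmp/cron.backup-script.$$` redirection only takes effect once the interpreter is running; a failure before that point (bad PATH, missing interpreter, permissions) leaves no file behind.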

Expanding comma separated list in a tab-delimited file into separate lines

Posted: 13 Apr 2021 09:30 AM PDT

I have a very similar problem to this question, but have no idea how to adapt the answer to my own issue.

I have a tab-separated file whose 3rd column contains a comma-separated list, such as:

TRINITY_DN1_c0_g1   DN1_c0_g1   GO:0000166,GO:0003674,GO:0005488,GO:0005515,GO:0005524,GO:0005575
TRINITY_DN1_c0_g3   DN1_c0_g3   GO:0005829,GO:0006457,GO:0006458,GO:0006950,GO:0008134
TRINITY_DN10_c0_g1  DN10_c0_g1  GO:0050896,GO:0051082,GO:0051084,GO:0051085

I want to get it to this:

TRINITY_DN1_c0_g1   DN1_c0_g1   GO:0000166
TRINITY_DN1_c0_g1   DN1_c0_g1   GO:0003674
TRINITY_DN1_c0_g1   DN1_c0_g1   GO:0005488
TRINITY_DN1_c0_g1   DN1_c0_g1   GO:0005515
TRINITY_DN1_c0_g1   DN1_c0_g1   GO:0005524
TRINITY_DN1_c0_g1   DN1_c0_g1   GO:0005575
TRINITY_DN1_c0_g3   DN1_c0_g3   GO:0005829
TRINITY_DN1_c0_g3   DN1_c0_g3   GO:0006457
TRINITY_DN1_c0_g3   DN1_c0_g3   GO:0006458
TRINITY_DN1_c0_g3   DN1_c0_g3   GO:0006950
TRINITY_DN1_c0_g3   DN1_c0_g3   GO:0008134
TRINITY_DN10_c0_g1  DN10_c0_g1  GO:0050896
TRINITY_DN10_c0_g1  DN10_c0_g1  GO:0051082
TRINITY_DN10_c0_g1  DN10_c0_g1  GO:0051084
TRINITY_DN10_c0_g1  DN10_c0_g1  GO:0051085

There is a variable number of terms in the 3rd column. I need a separate line for each term, with its associated 1st and 2nd columns.

If it's any help, the starting one-liner from the above question is:

perl -lne 'if(/^(.*?: )(.*?)(\W*)$/){print"$1$_$3"for split/, /,$2}'  

But I have no idea which bits need to be changed to work for my issue!

Many thanks in advance for help.
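The perl one-liner is built around a `label: list` layout, so for a columnar file awk's `split()` is a more direct fit. A sketch, assuming a genuinely tab-delimited input (adjust `-F`/`OFS` if the columns are space-aligned instead):

```shell
awk -F'\t' -v OFS='\t' '{
    n = split($3, go, ",")              # break the comma list in column 3 apart
    for (i = 1; i <= n; i++)
        print $1, $2, go[i]             # one output line per GO term
}' input.tsv
```

Rows with a single term (no comma) still work: `split()` returns 1 and the row is echoed unchanged.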

HDFS: how to avoid removing the parent folder when the variable for the last folder is null

Posted: 13 Apr 2021 08:30 AM PDT

We create the directory folder1 as the hdfs user:

su hdfs
hdfs dfs -mkdir /user/hdfs/folder/folder1

Let's say we now want to remove folder1, but by setting folder1's value in a variable, as follows:

folder_val=folder1
su hdfs -c "hdfs dfs -rm -r -skipTrash /user/hdfs/folder/$folder_val"

and we get

Deleted /user/hdfs/folder/folder1  

Now ... let's say we set the variable $folder_val to a null value by mistake, as follows:

folder_val=  

then we do the following

su hdfs -c "hdfs dfs -rm -r -skipTrash /user/hdfs/folder/$folder_val"  

now we get

Deleted /user/hdfs/folder  

As we can see, because of the mistake of setting $folder_val to a null value, we also deleted the parent folder /user/hdfs/folder.

How can we avoid this case when dealing with HDFS?
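This is a shell problem rather than an HDFS one, and the shell has a built-in guard: `${var:?message}` aborts the command with a non-zero status whenever the variable is unset or empty, so the path can never collapse to the parent directory:

```shell
folder_val=folder1
# ${folder_val:?...} fails the whole command line if the variable is empty or
# unset, printing the message instead of running the hdfs delete:
su hdfs -c "hdfs dfs -rm -r -skipTrash /user/hdfs/folder/${folder_val:?folder name is empty}"
```

The same expansion works for local `rm -rf "$dir"/...` commands, which suffer the identical failure mode.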

Filter a word list using a stop words file

Posted: 13 Apr 2021 09:48 AM PDT

So I have a .txt file with some random text, and I need to list all the words present in that file while filtering out all the words common to my stop-words file. What commands can I use for this?
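One classic pipeline: tokenize the text into one word per line, normalize case, deduplicate, then drop anything listed in the stop-words file. A sketch, assuming stopwords.txt holds one lowercase word per line:

```shell
# -c/-s: turn every run of non-letters into a single newline (one word per line);
# grep -v -w -F -f: drop lines that exactly match any fixed string in the file.
tr -cs '[:alpha:]' '\n' < input.txt |
    tr '[:upper:]' '[:lower:]' |
    sort -u |
    grep -vwFf stopwords.txt
```

Dropping `sort -u` keeps duplicates and original order; piping into `sort | uniq -c | sort -rn` instead gives word frequencies.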

Writing data into a file via SSH has permission error, even with sudo

Posted: 13 Apr 2021 09:44 AM PDT

I am creating an automation script. As part of it, I want to add a cron job. Here's a part of the script that fails:

BACKUP_USER=backupbot
SCRIPT_NAME=backup-script.sh

scp -i ./ssh-key ./$SCRIPT_NAME user@server:/tmp
ssh -i ./ssh-key user@server "
    sudo mv /tmp/$SCRIPT_NAME /home/$BACKUP_USER/bin/ &&
    sudo chown $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME &&
    sudo chmod 100 /home/$BACKUP_USER/bin/$SCRIPT_NAME &&
    sudo sed -i 's/THE_URL/'${1}'/' /home/$BACKUP_USER/bin/$SCRIPT_NAME &&
    sudo echo '*/1 * * * *' $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME > /etc/cron.d/discourse-backup"

The problematic command is:

sudo echo '*/1 * * * *' $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME > /etc/cron.d/discourse-backup  

I'm getting:

bash: line 5: /etc/cron.d/discourse-backup: Permission denied

Everything up to this one executes as it should. What is the issue with my last command? I thought it was some problem with quotes; I tried multiple combinations of single and double quotes, but ended up with the same (or worse) results.
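The quotes are a red herring: the unprivileged shell performs the `>` redirection before `sudo` ever runs, so only `echo` is elevated while the write to /etc/cron.d happens as the ordinary user. The standard fix is to let a privileged process do the writing, for example with `tee`:

```shell
# tee runs under sudo and performs the write itself; >/dev/null hides the echo:
echo '*/1 * * * * backupbot /home/backupbot/bin/backup-script.sh' |
    sudo tee /etc/cron.d/discourse-backup > /dev/null

# Alternative: run the entire command line, redirection included, in a root shell:
sudo sh -c 'echo "*/1 * * * * backupbot /home/backupbot/bin/backup-script.sh" > /etc/cron.d/discourse-backup'
```

Either form drops into the existing ssh heredoc in place of the failing `sudo echo ... >` line.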

Printing all numeric data one by one

Posted: 13 Apr 2021 09:53 AM PDT

I am provided with a .txt file which contains data (alphanumeric and special characters, in any order and layout); now I have to print all the numeric data one by one.

Example :

this is txt *24354 filer3243gdt             4332 123  sfdg gtdf, gtdf;tr 3435; gfdsf .43er,       ;43 3543;   4354w t535 tfgq 3542 fgdg, 243; wre; 24342 ; 24354 ;;;; 13     tgd dsgf ,3256653756456744rfdgf@gmail.com  

Output

4332
123
3435
43
3543
3542
243
24342
24354
13

PS: The problem is that there is no specific start or end to these numbers, so I am unable to think of a regex that can do it.

Note characters " ", ",", ";", ".", "EOL" can be used to separate numbers.
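Given that separator list, one approach sidesteps the boundary problem entirely: turn every separator into a newline, then keep only the lines that are purely digits. Tokens glued to letters or `*` (like `filer3243gdt` or `*24354`) are never split, so they drop out on their own. A sketch (the padded second set of `tr` is a GNU coreutils behavior):

```shell
# Replace runs of space, comma, semicolon and period with single newlines,
# then keep only the tokens made entirely of digits (-x: match whole line):
tr -s ' ,;.' '\n' < input.txt | grep -xE '[0-9]+'
```

Run against the sample text above, this reproduces the expected list, including rejecting the digits embedded in `4354w`, `t535` and the e-mail address.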

How to create a multi-seat with one graphics card (having three DisplayPorts / HDMI)?

Posted: 13 Apr 2021 08:17 AM PDT

I am planning to build a reasonably powerful PC, for my needs (scientific calculations, as well as general computing needs; no gaming).

And I will have a couple of collaborators coming to join me.

I have realized it could be useful to have a multi-seat Linux operating system. If I understand correctly, a multi-seat operating system allows all users to work independently (with their own monitor, mouse and keyboard) and concurrently.

However, when I look for documentation, I see lots of old webpages, and the information is a bit confusing (to me, at least).

For example, I read the statement that each user needs a graphics card. But if we have a single GPU, and this GPU has 3 DisplayPorts (or HDMI), couldn't this GPU serve the three users? Note: I have read that Zephyr allows a multi-seat setup with a single GPU, but I want to avoid third-party applications; I would like to rely only on Linux.

If it is not possible, has anybody tried a Threadripper (or any other board with enough PCIe lanes) with, say, 3 passive, low-power GPUs, like the NVIDIA GT 730?

Edit: I have changed the word "multi-user" for "multi-seat", given a recommendation in the replies (thank you).

How to list and delete terminfo?

Posted: 13 Apr 2021 09:10 AM PDT

I have installed some custom terminfo with tic command. How do I list all terminfo in the database (e.g. with infocmp) and how to delete specific terminfo?

Here's my idea as of right now:

On Linux, system-wide terminfo database is located in /lib/terminfo (Debian), /usr/share/terminfo (Arch), and /usr/share/lib/terminfo (Solaris);

On macOS, system-wide terminfo database is located in /usr/share/terminfo;

User-defined terminfo database is in ~/.terminfo.

For now I believe the terminfo database items can be altered by deleting the compiled items in those directories. So my further questions are: Why are terminfo items organized into two-hex-digit directories (e.g. 31, 7a)? How are they organized? And if I write a new terminfo with tic into the database, but with an existing name, is the previous terminfo overwritten?
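A sketch of the usual tooling (the entry name `xterm-custom` is made up for illustration; `infocmp -D` needs a reasonably recent ncurses):

```shell
toe -a                        # "table of entries": list every terminfo entry found
infocmp -D                    # print the directories ncurses searches, in order
find ~/.terminfo -type f      # the user-compiled entries tic wrote

# Deleting an entry is just removing its compiled file. The subdirectory is the
# first character of the entry name - a plain letter (x/) on most Linux builds,
# or its hex code (78/ for 'x') on others such as macOS:
rm -f ~/.terminfo/x/xterm-custom ~/.terminfo/78/xterm-custom
```

As far as I know, compiling with `tic` under an existing name simply overwrites the compiled file of that name in the target database, and since `~/.terminfo` is searched before the system directories, a user entry shadows a system one rather than modifying it.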

SSH to machine and change user to root

Posted: 13 Apr 2021 07:19 AM PDT

I am trying to SSH into a remote machine, change user to root, and run a series of commands which need root.

I tried the command below, but it seems it's not working:

sshpass -p <pwd> ssh -q  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null <user>@$IP "echo <pwd>| sudo -S su; whoami"  

whoami always returns the SSH user instead of root.

Any idea how to get it done?

edit

echo <pwd> | sudo -S <some-command> always works in this case, but not with sudo -S su. Why?
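The difference is that `sudo -S su` launches an interactive root shell whose stdin is already exhausted by the `echo`, so it exits immediately; `whoami`, sitting after the `;`, then runs in the original unprivileged shell. Handing the command list to sudo directly avoids the interactive shell entirely. A sketch with the question's placeholders kept as-is:

```shell
# Everything inside sh -c runs as root in one go; sudo -S reads the password
# from the piped stdin:
sshpass -p '<pwd>' ssh -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    '<user>'@$IP "echo '<pwd>' | sudo -S sh -c 'whoami; id -u'"
```

`whoami` here prints `root`, because it executes inside the elevated `sh -c`, not after it.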

Multi-user VNC via gdm

Posted: 13 Apr 2021 08:44 AM PDT

I'd like to set up VNC server to run as a service so I can turn on a remote machine and access it without the need to be physically present.

One option is to set it up in the user-bus:

$ systemctl --user cat vnc.service
# /home/stew/.config/systemd/user/vnc.service
[Unit]
Description=VNC Server
After=default.target

[Service]
ExecStart=x11vnc -nevershared -forever -nopw

[Install]
WantedBy=default.target

This works, but still requires me to physically sit at the terminal and log into gdm3 to get XAUTHORITY before I can do any remote work. If I ssh in before logging in with gdm3, the service fails. I work around this by using AutomaticLoginEnable=True and AutomaticLogin=stew in /etc/gdm3/daemon.conf.

Instead, I'd like to be able to use VNC without the need to log in as a specific user first (similar to RDP). I think the best way to do this is to use -nopw to get to a gdm3 screen.

I tried to do that with:

$ systemctl cat vnc.service
# /etc/systemd/system/vnc.service
[Unit]
Description=VNC Service (system-wide)
After=graphical.target

[Service]
ExecStart=x11vnc -auth /run/user/116/gdm/Xauthority -display :0 -nopw

[Install]
WantedBy=graphical.target

I found the XAUTHORITY path with this command which revealed the path is owned by UID 116 (system user: Debian-gdm).

stew ~ $ ps wwwwaux | grep auth
root        1033  0.1  0.5 189548 63596 tty1     Sl+  14:32   0:00 /usr/lib/xorg/Xorg vt1 -displayfd 3 -auth /run/user/116/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -novtswitch -verbose 3

I also need to set WaylandEnable=false in /etc/gdm3/daemon.conf because VNC doesn't seem to work with Wayland.

At first, this seems to work well. I get the gdm login screen. But, when I try to log in as a user, the auth gets transferred to another user and I am disconnected.

Is there a way to set up VNC so I can log in via gdm?

Concatenating text files based on prefix?

Posted: 13 Apr 2021 07:26 AM PDT

In a directory, I have 9792 files from 1088 groups (1088 × 9 = 9792); each group has a unique ID. I'm interested in concatenating only those files which share a group ID as a prefix.

OG00 is the prefix for all the groups/files I show below.

example filenames (with prefix) - OG000190, OG0012877, OG0012858 .... (1088)

OG0011984
OG0011984._1_1.txt.fa
OG0011984._1_2.txt.fa
...
OG0011984._1_9.txt.fa

OG0011288
OG0011288._1_1.txt.fa
OG0011288._1_2.txt.fa
....
OG0011288._1_8.txt.fa
OG0011288._1_9.txt.fa

OG0011219
OG0011219._1_1.txt.fa
OG0011219._1_2.txt.fa
....
OG0011219._1_9.txt.fa

I'm able to do it for each group individually using cat. How do I automate the process using loops? I tried, but was unable to get it working; I could use some help.

cat *OG0012884. > OG0012884_out.txt

OG0012884._1_1.txt
OG0012884._1_2.txt
OG0012884._1_3.txt
OG0012884._1_4.txt
OG0012884._1_5.txt
OG0012884._1_6.txt
OG0012884._1_7.txt
OG0012884._1_8.txt
OG0012884._1_9.txt
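The group IDs can be derived from the filenames themselves, so no list of 1088 IDs is needed: loop over one representative file per group, strip everything from the first dot, and glob the rest. A sketch, assuming the `OGxxxxxxx._1_N.txt.fa` naming from the listing above:

```shell
# One iteration per group: the _1_1 file acts as the group's representative.
for f in OG*._1_1.txt.fa; do
    id=${f%%.*}                         # strip at the first dot -> e.g. OG0011984
    cat "$id".*.txt.fa > "${id}_out.txt"  # glob expands in sorted order (_1_1 .. _1_9)
done
```

The output name `OG0011984_out.txt` has no dot after the ID, so it can never match the `"$id".*` glob on a rerun. With exactly 9 parts per group the lexicographic glob order is also the numeric order; a `_1_10` part would need zero-padding or an explicit numeric sort.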

Creating new tables based on specific parameters from an existing table

Posted: 13 Apr 2021 07:21 AM PDT

I want to create several separate CSV files from a table.  Here is an example table:

gene   REF_S1_host  REF_S1_FL  S1_host1  S1_host2  S1_FL  REF_S2_host  REF_S2_FL  S2_host1  S2_host2  S2_FL
gene1  1            0          0         0         0      0            0          0         0         0
gene2  1            1          1         1         0      0            0          0         0         0
gene3  0            1          0         0         1      0            0          0         0         0
gene4  1            0          0         0         0      1            0          0         0         0
gene5  0            0          0         0         0      1            0          1         0         0
gene6  1            0          0         0         0      0            0          0         1         1
gene7  0            1          0         0         0      0            0          0         0         1

I would like to create a CSV (or other tab-delimited file) that:

  1. pulls all data that includes "1" under a column header containing "S1", but where all headers containing "S2" have a value of "0" for that same gene. For example:

    gene   REF_S1_host  REF_S1_FL  S1_host1  S1_host2  S1_FL  REF_S2_host  REF_S2_FL  S2_host1  S2_host2  S2_FL
    gene1  1            0          0         0         0      0            0          0         0         0
    gene2  1            1          1         1         0      0            0          0         0         0
    gene3  0            1          0         0         1      0            0          0         0         0
  2. pulls only those rows in which there is a "1" value for any REF file (S1 or S2) but only "0"'s for all other fields (i.e., row headers that do not contain the "REF"). For example:

    gene   REF_S1_host  REF_S1_FL  S1_host1  S1_host2  S1_FL  REF_S2_host  REF_S2_FL  S2_host1  S2_host2  S2_FL
    gene1  1            0          0         0         0      0            0          0         0         0
    gene4  1            0          0         0         0      1            0          0         0         0
  3. Where a REF_S1* contains a "1" + where all other (i.e., non-REF) S1 samples are "0" + where all REF_S2* are "0" + but where any other S2 samples (non-REF) are "1". For example:

    gene   REF_S1_host  REF_S1_FL  S1_host1  S1_host2  S1_FL  REF_S2_host  REF_S2_FL  S2_host1  S2_host2  S2_FL
    gene6  1            0          0         0         0      0            0          0         1         1
    gene7  0            1          0         0         0      0            0          0         0         1
  4. And lastly, where any *FL is "1", and all *host are "0". For example:

    gene   REF_S1_host  REF_S1_FL  S1_host1  S1_host2  S1_FL  REF_S2_host  REF_S2_FL  S2_host1  S2_host2  S2_FL
    gene3  0            1          0         0         1      0            0          0         0         0
    gene7  0            1          0         0         0      0            0          0         0         1

But I am not sure how to go about doing this. Any advice is welcome.
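All four selections follow the same pattern: classify the columns once from the header row, then test each data row against the classification. A sketch for condition 1 (any "S1" column is 1, every "S2" column is 0); conditions 2-4 only change which header patterns go into which bucket and which bucket must be all-zero:

```shell
# Column classification from the header, then row filtering; header is re-printed.
awk 'NR==1 { for (i=2; i<=NF; i++) {
                 if ($i ~ /S1/) s1[i]=1        # columns whose header mentions S1
                 if ($i ~ /S2/) s2[i]=1        # columns whose header mentions S2
             }
             print; next }
     { for (i in s2) if ($i == 1) next         # reject: any S2 column is 1
       for (i in s1) if ($i == 1) { print; break } }   # keep: some S1 column is 1
' table.txt > condition1.txt
```

For condition 2, for example, the buckets would be "header contains REF" versus "everything else", with the REF bucket required to contain a 1 and the rest required to be all zeros.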

Linux on Chrome OS encountering many errors: 58, 51

Posted: 13 Apr 2021 06:58 AM PDT

OK. This is a complex maze of error messages, so bear with me.

When I try to open or do anything with Linux I get:

[=======/  ] Starting the Linux container Error starting penguin container: 58
Launching VM shell failed: Error starting crostini for terminal: 58

When I close and reopen the terminal I get a ready message, then an error message that I can't read because it closes directly after it displays.

This is what I get when I run lxc list:

(termina) chronos@localhost ~ $ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| penguin | RUNNING | 100.115.92.197 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
(termina) chronos@localhost ~ $

I get this when I run vmc list:

crosh> vmc list
penquin (6238208 bytes, raw, sparse)
termina (5690859520 bytes, min shrinkable size 5021630464 bytes, raw)
Total Size (bytes): 5697097728

This happened about a week ago too, and I ended up powerwashing my Chromebook; then it worked until now. I really don't want to powerwash it again.

Any help would be greatly appreciated.

EDIT: ok now I am getting

[====-     ] Starting the container manager Error starting penguin container: 51
Launching vmshell failed: Error starting crostini for terminal: 51

I don't know what to do!!!

USB memory stick unmounts and vanishes

Posted: 13 Apr 2021 09:32 AM PDT

I bought this device a few days ago and checked it with the utility f3. It is ostensibly a Philips 256 memory stick, internally seen as:

Bus 002 Device 034: ID 090c:2000 Silicon Motion, Inc. - Taiwan (formerly Feiya Technology Corp.)

It appears genuine.

Yesterday, I was in the middle of copying my video library onto it, when it fell over.

Now, whenever I insert it, it persists for a few minutes, and then vanishes. This happens both on my desktop and my laptop machines.

I extracted a section of my syslog:

Apr 12 11:02:01 LM-Desktop kernel: [ 1416.618362] usb 2-1.3: device descriptor read/64, error -71
Apr 12 11:02:01 LM-Desktop kernel: [ 1416.806360] usb 2-1.3: new high-speed USB device number 11 using ehci-pci
Apr 12 11:02:01 LM-Desktop kernel: [ 1416.898353] usb 2-1.3: device descriptor read/64, error -71
Apr 12 11:02:01 LM-Desktop kernel: [ 1417.098342] usb 2-1.3: device descriptor read/64, error -71
Apr 12 11:02:01 LM-Desktop kernel: [ 1417.206558] usb 2-1-port3: attempt power cycle
Apr 12 11:02:02 LM-Desktop kernel: [ 1417.810315] usb 2-1.3: new high-speed USB device number 12 using ehci-pci
Apr 12 11:02:02 LM-Desktop kernel: [ 1418.234299] usb 2-1.3: device not accepting address 12, error -71
Apr 12 11:02:02 LM-Desktop kernel: [ 1418.314298] usb 2-1.3: new high-speed USB device number 13 using ehci-pci
Apr 12 11:02:03 LM-Desktop kernel: [ 1418.734278] usb 2-1.3: device not accepting address 13, error -71
Apr 12 11:02:03 LM-Desktop kernel: [ 1418.734392] usb 2-1-port3: unable to enumerate USB device
Apr 12 11:10:15 LM-Desktop kernel: [ 1911.310743] usb 2-1.3: new high-speed USB device number 14 using ehci-pci
Apr 12 11:10:15 LM-Desktop kernel: [ 1911.421164] usb 2-1.3: New USB device found, idVendor=090c, idProduct=2000, bcdDevice=11.00
Apr 12 11:10:15 LM-Desktop kernel: [ 1911.421167] usb 2-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Apr 12 11:10:15 LM-Desktop kernel: [ 1911.421169] usb 2-1.3: Product: USB DISK
Apr 12 11:10:15 LM-Desktop kernel: [ 1911.421170] usb 2-1.3: Manufacturer: SMI Corporation
Apr 12 11:10:15 LM-Desktop kernel: [ 1911.421172] usb 2-1.3: SerialNumber: 09118403000342
Apr 12 11:10:15 LM-Desktop kernel: [ 1911.421701] usb-storage 2-1.3:1.0: USB Mass Storage device detected
Apr 12 11:10:15 LM-Desktop kernel: [ 1911.421898] scsi host4: usb-storage 2-1.3:1.0
Apr 12 11:10:15 LM-Desktop mtp-probe: checking bus 2, device 14: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3"
Apr 12 11:10:15 LM-Desktop mtp-probe: bus: 2, device: 14 was not an MTP device
Apr 12 11:10:15 LM-Desktop mtp-probe: checking bus 2, device 14: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3"
Apr 12 11:10:15 LM-Desktop mtp-probe: bus: 2, device: 14 was not an MTP device
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.560605] scsi 4:0:0:0: Direct-Access     SMI      USB DISK         1100 PQ: 0 ANSI: 6
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.561102] sd 4:0:0:0: Attached scsi generic sg2 type 0
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.561619] sd 4:0:0:0: [sdc] 487424000 512-byte logical blocks: (250 GB/232 GiB)
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.562258] sd 4:0:0:0: [sdc] Write Protect is off
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.562261] sd 4:0:0:0: [sdc] Mode Sense: 43 00 00 00
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.562894] sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.590964]  sdc: sdc1
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.593618] sd 4:0:0:0: [sdc] Attached SCSI removable disk
Apr 12 11:10:17 LM-Desktop kernel: [ 1912.804028] FAT-fs (sdc1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
Apr 12 11:10:17 LM-Desktop systemd[1]: Finished Clean the /media/roger/PHILIPS256 mount point.
Apr 12 11:10:17 LM-Desktop udisksd[809]: Mounted /dev/sdc1 at /media/roger/PHILIPS256 on behalf of uid 1000
Apr 12 11:11:50 LM-Desktop kernel: [ 2006.228513] usb 2-1.3: reset high-speed USB device number 14 using ehci-pci
Apr 12 11:11:55 LM-Desktop kernel: [ 2011.336844] usb 2-1.3: device descriptor read/64, error -110
Apr 12 11:12:11 LM-Desktop kernel: [ 2026.953802] usb 2-1.3: device descriptor read/64, error -110
Apr 12 11:12:11 LM-Desktop kernel: [ 2027.141821] usb 2-1.3: reset high-speed USB device number 14 using ehci-pci
Apr 12 11:12:11 LM-Desktop kernel: [ 2027.233821] usb 2-1.3: device descriptor read/64, error -71
Apr 12 11:12:11 LM-Desktop kernel: [ 2027.433832] usb 2-1.3: device descriptor read/64, error -71
Apr 12 11:12:12 LM-Desktop kernel: [ 2027.621850] usb 2-1.3: reset high-speed USB device number 14 using ehci-pci
Apr 12 11:12:12 LM-Desktop kernel: [ 2028.053879] usb 2-1.3: device not accepting address 14, error -71
Apr 12 11:12:12 LM-Desktop kernel: [ 2028.133890] usb 2-1.3: reset high-speed USB device number 14 using ehci-pci
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.553911] usb 2-1.3: device not accepting address 14, error -71
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.554330] usb 2-1.3: USB disconnect, device number 14
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570027] blk_update_request: I/O error, dev sdc, sector 2049 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 0
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570034] Buffer I/O error on dev sdc1, logical block 1, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570087] blk_update_request: I/O error, dev sdc, sector 119812544 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 0
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570092] Buffer I/O error on dev sdc1, logical block 119810496, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570126] blk_update_request: I/O error, dev sdc, sector 223867520 op 0x1:(WRITE) flags 0x100000 phys_seg 5 prio class 0
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570129] Buffer I/O error on dev sdc1, logical block 223865472, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570131] Buffer I/O error on dev sdc1, logical block 223865473, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570133] Buffer I/O error on dev sdc1, logical block 223865474, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570134] Buffer I/O error on dev sdc1, logical block 223865475, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570136] Buffer I/O error on dev sdc1, logical block 223865476, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570181] blk_update_request: I/O error, dev sdc, sector 233175936 op 0x1:(WRITE) flags 0x100000 phys_seg 3 prio class 0
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570183] Buffer I/O error on dev sdc1, logical block 233173888, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570185] Buffer I/O error on dev sdc1, logical block 233173889, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570187] Buffer I/O error on dev sdc1, logical block 233173890, lost async page write
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570220] blk_update_request: I/O error, dev sdc, sector 233175941 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 0
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570242] blk_update_request: I/O error, dev sdc, sector 233175938 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570253] FAT-fs (sdc1): unable to read inode block for updating (i_pos 3730782246)
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570280] blk_update_request: I/O error, dev sdc, sector 233175938 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570288] FAT-fs (sdc1): unable to read inode block for updating (i_pos 3730782252)
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570353] blk_update_request: I/O error, dev sdc, sector 244392128 op 0x1:(WRITE) flags 0x0 phys_seg 3 prio class 0
Apr 12 11:12:13 LM-Desktop kernel: [ 2028.570367] blk_update_request: I/O error, dev sdc, sector 248685504 op 0x1:(WRITE) flags 0x0 phys_seg 1 prio class 0
Apr 12 11:12:13 LM-Desktop udisksd[809]: Cleaning up mount point /media/roger/PHILIPS256 (device 8:33 no longer exists)
Apr 12 11:12:13 LM-Desktop systemd[1224]: media-roger-PHILIPS256.mount: Succeeded.
Apr 12 11:12:13 LM-Desktop systemd[1]: media-roger-PHILIPS256.mount: Succeeded.
Apr 12 11:12:13 LM-Desktop systemd[1]: Stopping Clean the /media/roger/PHILIPS256 mount point...
Apr 12 11:12:13 LM-Desktop systemd[1]: clean-mount-point@media-roger-PHILIPS256.service: Succeeded.
Apr 12 11:12:13 LM-Desktop systemd[1]: Stopped Clean the /media/roger/PHILIPS256 mount point.
...
Apr 12 12:04:10 LM-Desktop fwupd[6356]: 11:04:10:0643 FuEngine             device 602b0a6cc821d155208724f0e22f8d111542b74c [WDC WD10EZEX-60WN4A0] does not define a vendor-id!
Apr 12 12:04:10 LM-Desktop fwupd[6356]: 11:04:10:0651 FuEngine             device 2396250036142d895b79f25cc30512a624fc3d2e [WDC WD5000AAKX-004EA0] does not define a vendor-id!
Apr 12 12:04:10 LM-Desktop dbus-daemon[788]: [system] Successfully activated service 'org.freedesktop.fwupd'
Apr 12 12:04:10 LM-Desktop systemd[1]: Started Firmware update daemon.
Apr 12 12:04:10 LM-Desktop fwupdmgr[6339]: Fetching metadata https://cdn.fwupd.org/downloads/firmware.xml.gz
Apr 12 12:04:11 LM-Desktop fwupdmgr[6339]: Fetching signature https://cdn.fwupd.org/downloads/firmware.xml.gz.asc
Apr 12 12:04:11 LM-Desktop fwupdmgr[6339]: Successfully downloaded new metadata: 0 local devices supported
Apr 12 12:04:11 LM-Desktop systemd[1]: fwupd-refresh.service: Succeeded.
Apr 12 12:04:11 LM-Desktop systemd[1]: Finished Refresh fwupd metadata and update motd.

I suspect a file corrupted the device, but I cannot delete anything. I would like to reformat it, but do not know how.

What should I try?

I am running Linux Mint 19.1. Is there any other information I might seek out?
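In case it helps: the reformat I had in mind, pieced together from the mkfs.vfat man page, is roughly the following. The device names are assumptions based on my dmesg output (sdc/sdc1), so I've left the destructive lines commented out until I'm sure:

```shell
# Reformat sketch. DEVICE NAMES ARE ASSUMPTIONS -- verify with lsblk first;
# picking the wrong device would wipe the wrong disk!
DEV=/dev/sdc
PART=${DEV}1

# umount "$PART" 2>/dev/null             # make sure it is not mounted
# mkfs.vfat -F 32 -n PHILIPS256 "$PART"  # recreate the FAT32 filesystem
echo "would reformat $PART"              # destructive lines left commented out
```

But given the I/O errors above, I'm not even sure the stick will accept a new filesystem.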

How can I install arch-linux-x64 on 32-bit UEFI?

Posted: 13 Apr 2021 09:22 AM PDT

I have a tablet laptop with an Intel Atom CPU and 32-bit UEFI firmware.
I'm using Rufus to create a bootable USB, but the USB is not able to boot on this UEFI system.

I found a solution for Ubuntu here:

by copying a file called bootia32.efi to /BOOT/UEFI/  

Fortunately it works for Ubuntu, but it does not work for Arch, because I cannot find an equivalent file that works for Arch.
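For reference, what made the Ubuntu stick boot was roughly the following. The device and mount point here are assumptions; the guide I followed used the /BOOT/UEFI/ directory, though other sources place the file in /EFI/BOOT/:

```shell
USB=/dev/sdb1   # assumed partition of the USB stick -- check with lsblk
MNT=/mnt/usb

# sudo mount "$USB" "$MNT"
# sudo cp bootia32.efi "$MNT/BOOT/UEFI/"
# sudo umount "$MNT"
echo "would copy bootia32.efi to $MNT/BOOT/UEFI/"
```

What I'm missing is a bootia32.efi (or equivalent 32-bit EFI loader) that works with the Arch ISO.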

Cannot get Bluetooth to work on Mac Pro 3,1 running Arch Linux

Posted: 13 Apr 2021 08:39 AM PDT

I've been running Arch Linux for about a year and I have never been able to get the Bluetooth working.  It used to not show up anywhere but recently I set up my wireless adapter with wl and when I did lsusb it showed a USB Bluetooth adapter.

Bus 004 Device 003: ID 05ac:1000 Apple, Inc. Bluetooth HCI MacBookPro (HID mode)

I also recently added a PCI USB hub and I'm not sure if that has anything to do with it.

05:00.0 USB controller: Fresco Logic FL1100 USB 3.0 Host Controller (rev 10)  

I should also have all the other required packages installed:

bluedevil 1:5.21.3-1
blueman 2.1.4-1
bluez 5.56-2
bluez-libs 5.56-2
bluez-qt 5.80.0-1
bluez-utils 5.56-2
gnome-bluetooth 3.34.5-1
pulseaudio-bluetooth 14.2-3

I also have all the correct kernel modules loaded.

btusb 69632 0
btrtl 28672 1 btusb
btbcm 20480 1 btusb
btintel 32768 1 btusb
bluetooth 749568 11 btrtl,btintel,btbcm,bnep,btusb

But when I do ls /sys/class/bluetooth, the directory is empty.

I was thinking it could have something to do with the order in which the kernel starts the Bluetooth hardware. I think I read somewhere that the Wi-Fi card starting before the Bluetooth can cause issues on some Macs, but I don't remember where I heard that. I also thought it could be related to the EFI variables, because /sys/firmware/efi/efivars/ contains files called EFIBluetoothDelay-7c436110-ab2a-4bbb-a880-fe41995c9f82, bluetoothActiveControllerInfo-7c436110-ab2a-4bbb-a880-fe41995c9f82 and boot-feature-usage-62bf9b1c-8568-48ee-85dc-dd3057660863. Maybe the firmware is never starting some Bluetooth-related component, but I'm already in way over my head; any help would be appreciated.
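For completeness, these are the checks I've been running (from memory, so treat the exact commands as approximate); I would expect an hci0 entry to appear under /sys/class/bluetooth if the controller were detected:

```shell
# Diagnostics I ran while investigating:
# rfkill list                          # check for soft/hard blocks
# dmesg | grep -iE 'bluetooth|btusb'   # look for firmware load errors
hci_count=$(ls /sys/class/bluetooth 2>/dev/null | wc -l)
echo "controllers found: $hci_count"   # 0 in my case -- no hci0 device
```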

Use Apple's usdzconvert on a Linux machine

Posted: 13 Apr 2021 08:48 AM PDT

I've successfully built Pixar's USD pipeline on a Linux VM, and I now strive to accomplish something similar to Apple's usdzconvert from their USDZ tools.

As of right now, I'm able to run the standard commands from Pixar, such as usdCat, but what I'm missing is an executable that can convert certain formats (such as FBX or USDC) to .usdz. Is it possible in some way to import certain files/executables from Apple's tools and use them on my Linux VM? If so, how do I proceed?

Any other tips on how to convert to USDZ (using command line) are appreciated.

How to include and install a debian/package.timer file inside a Debian package, alongside the package.service

Posted: 13 Apr 2021 08:19 AM PDT

I'm creating a Debian package which comprises a service and some shell scripts, and I would like to also install a timer in the /lib/systemd/system folder so that the service will get called periodically.

According to the debian helper guide https://manpages.debian.org/testing/debhelper/dh_systemd_enable.1.en.html this can be achieved by simply creating a package.timer file along with the package.service file in the debian folder and it will automatically get included in the package when building (sudo debuild -us -uc -d).

When I build, only the service is included and installed, not the timer file. For info, I can add a package.socket file and it gets included, but not a timer or tmpfile. I hope someone can help me.

For illustration, some of my package files are as follows.

hello-world.service

[Unit]
Description=Hello world service.

[Service]
Type=oneshot
ExecStart=/bin/echo HELLO WORLD!

[Install]
WantedBy=default.target

hello-world.timer

[Unit]
Description=Timer for periodic execution of hello-world service.

[Timer]
OnUnitActiveSec=5s
OnBootSec=30s

[Install]
WantedBy=timers.target

control file

Source: hello-world
Maintainer: Joe Bloggs <joe.bloggs@jondoe.com>
Section: misc
Priority: optional
Standards-Version: 1.0.0
Build-Depends: debhelper (>= 9), dh-systemd (>= 1.5)

Package: hello-world
Architecture: amd64
Depends:
Description:
 Hello world test app.

rules file

#!/usr/bin/make -f
%:
	dh $@ --with=systemd

override_dh_auto_build:
	echo "Not Running dh_auto_build"

override_dh_auto_install:
	echo "Not Running dh_auto_install"

override_dh_shlibdeps:
	echo "Not Running dh_shlibdeps"

override_dh_usrlocal:
	echo "Not Running dh_usrlocal"
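One workaround I'm considering (purely hypothetical on my part, and I don't know whether it's the idiomatic fix) is to install the timer unit explicitly via a debian/hello-world.install file so that dh_install places it, even if dh_systemd_enable ignores it:

```
debian/hello-world.timer lib/systemd/system/
```

But I would prefer to understand why the .timer file isn't picked up automatically like the .service and .socket files.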

I can't boot up a custom version of ubuntu from live-usb

Posted: 13 Apr 2021 07:04 AM PDT

I made a USB stick with a custom version of Ubuntu; it is well known and people don't usually have trouble with this version. I used Win32DiskImager, Rufus and HDDRaw and got the same result with all of these programs.

Everything seems fine, but when it starts to boot I get:

32-bit relocation outside of Kernel! --System halted  

I am using Hive OS version hive-0.5-76-20180924.

My computer specs are:

Motherboard: TB250-BTC
CPU: Intel Pentium G4400 3.3GHz Box
Ram: G.Skill Aegis DDR4 2133 PC4-17000 4GB CL15
Power Supply: Aerocool Xpredator 1000GM 1000W 80 Plus Gold Modular

I've been using Windows 10 so far without issues.

Issues installing Nvidia drivers in Debian 9

Posted: 13 Apr 2021 09:04 AM PDT

I'm new to Linux and decided to install Debian 9. I installed it on my HDD in UEFI mode with a USB stick and the DVD 1 ISO found here: https://cdimage.debian.org/debian-cd/current/amd64/bt-dvd/. I had some issues with network drivers, but I managed to solve them. Then I wanted to install the Nvidia proprietary drivers. My laptop is an MSI GL62-7RDX, which comes with an i7-7700HQ CPU and a GTX 1050 (2GB) graphics card. I followed the steps specified here: https://wiki.debian.org/NvidiaGraphicsDrivers#Debian_9_.22Stretch.22, which are:

  • Add non-free repositories by modifying the sources.list file.
  • Execute the following commands as root:

    apt update
    apt install linux-headers-$(uname -r|sed 's/[^-]*-[^-]*-//') nvidia-driver
  • Reboot

The problem is that when I rebooted I only got a black screen with a blinking cursor. I decided to follow the steps shown here: https://wiki.debian.org/NvidiaGraphicsDrivers#Backing_out_in_case_of_failure in order to roll back the changes. After rebooting I could see the login screen, but when I tried to log in I got a login loop, despite the password being correct.

I also tried to install Nvidia drivers as shown here: How to install the latest NVIDIA drivers on Debian 9 Stretch Linux but I got black screen with blinking cursor again.

In summary, I would like to know how to properly install Nvidia drivers on my laptop. I think I am missing something and that the problem is related to my specific hardware, because a few days ago I tried to install Ubuntu 17.10 and it only worked if I added nomodeset by pressing e in the GRUB menu.
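For what it's worth, the workaround that got Ubuntu booting was adding nomodeset at the GRUB prompt. My understanding (based on the standard Debian GRUB layout, untested on this machine) is that making it persistent would look like:

```shell
# /etc/default/grub -- append nomodeset to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"

# afterwards, regenerate grub.cfg as root:
# update-grub
```

But nomodeset disables kernel mode setting, so I assume the proprietary driver still needs to work properly without it in the end.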

How to use lftp to delete old files before downloading new?

Posted: 13 Apr 2021 08:06 AM PDT

I am running lftp on Raspbian

I have 100GB of content on the remote site and 100GB of space available on my SD card, so I need to delete files that are no longer present on the remote site from the SD card before downloading new content.

How can I achieve this?

#!/bin/bash
login="username"
pass="password"
host="server.feralhosting.com"
remote_dir="/folder/you/want/to/copy"
local_dir="/cygdrive/s/lftp/somefolder/where/you.want/your/files/"

base_name="$(basename "$0")"
lock_file='/tmp/'"$base_name"'.lock'
trap 'rm -f '"$lock_file"'' SIGINT SIGTERM
if [[ -e "$lock_file" ]]
then
  echo "$base_name is running already."
  exit 1
else
  touch "$lock_file"
  lftp -u $login,$pass $host << EOF
  set ftp:ssl-allow no
  set mirror:use-pget-n 5
  mirror -c -P5 --log='/var/log/'"$base_name"'.log' "$remote_dir" "$local_dir"
  quit
EOF
  rm -f "$lock_file"
  trap - SIGINT SIGTERM
  exit 0
fi
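Digging through the lftp manual, mirror appears to have a --delete flag (remove target files not present at the source) and a --delete-first flag (perform the deletions before transferring new files). I believe changing the mirror line inside the heredoc like this would do what I want, though I haven't verified the ordering on a nearly-full card:

```
mirror -c -P5 --delete --delete-first --log='/var/log/'"$base_name"'.log' "$remote_dir" "$local_dir"
```

Is this the right approach, or is there a safer way to guarantee the deletions happen first?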

How to add an ip range to known_hosts?

Posted: 13 Apr 2021 07:12 AM PDT

Many services (like GitHub) use a wide range of IPs, and obviously the same public key.

How can I add an IP range (preferably as a single entry) to the known_hosts file?

For the GitHub example, it uses the following ranges:

  • 207.97.227.224/27
  • 173.203.140.192/27
  • 204.232.175.64/27
  • 72.4.117.96/27
  • 192.30.252.0/22

And the key is:

AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
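From the ssh man page, known_hosts entries take a comma-separated list of host patterns where * and ? act as wildcards; as far as I can tell, true CIDR notation is not supported there (only in contexts like Match address or from= in authorized_keys). So the closest I've found is an over-approximation of the /27 and /22 ranges above with wildcards (key truncated here for brevity):

```
# ~/.ssh/known_hosts -- wildcard host patterns, NOT real CIDR; the trailing *
# matches more addresses than the /27 ranges actually cover
github.com,207.97.227.*,173.203.140.*,204.232.175.*,72.4.117.*,192.30.252.*,192.30.253.*,192.30.254.*,192.30.255.* ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK...
```

Is there a better way that matches the ranges exactly?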

Move folder content up one level

Posted: 13 Apr 2021 08:27 AM PDT

I have an archive that I unpacked, but its contents are in a subfolder. How can I move the contents up one level? I am accessing CentOS via SSH.
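What I've tried so far is a plain mv with two globs, which seems to work in a scratch directory ("folder" is a placeholder for my actual subfolder name), but I'm not sure it's the robust way:

```shell
# Demo in a scratch directory; in practice, cd into the parent of the
# subfolder whose contents you want to pull up one level.
rm -rf /tmp/unpack-demo
mkdir -p /tmp/unpack-demo/folder
cd /tmp/unpack-demo
touch folder/a.txt folder/.hidden

mv folder/* folder/.[!.]* . 2>/dev/null   # second glob picks up dotfiles
                                          # (misses rare names like ..foo)
rmdir folder                              # should now be empty
ls -A                                     # lists a.txt and .hidden
```

In particular I'm unsure how this behaves when there are no dotfiles at all, or when names clash with files already in the parent.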
