Wednesday, July 28, 2021

Recent Questions - Unix & Linux Stack Exchange



Convert HEIC image files RECURSIVELY with bash script

Posted: 28 Jul 2021 10:15 AM PDT

I'm trying to convert all my HEIC and heic files to jpg (100% quality). They are in separate directories, but all under the main directory IMAGES. I want to convert all these images and, once converted, move the original HEIC files into a "HEIC_org" subdirectory of the original directory.

Structure end result:

I tried making a working bash script but I cannot get it working recursively. Can anyone help me? Thanks!

#!/bin/bash

sudo mkdir "./HEIC_org"

if ! command -v heif-convert &> /dev/null
then
    echo "heif-convert COMMAND could not be found."
    echo "Please install 'libheif-examples' first."
    echo "To install 'libheif-examples', run the following command:"
    echo "  sudo apt install libheif-examples"
    exit
else
    fileExtension="jpg"

    while getopts :p flag; do
        case ${flag} in
            # -p flag: convert heic files to png format instead
            p) fileExtension="png"
            ;;
        esac
    done

    # look for files in current path that contain ".heic" OR ".HEIC"
    for file in $( ls | grep -E ".heic|.HEIC")
    do
        echo "Converting file: $file"
        sedCommand="s/heic/${fileExtension}/g;s/HEIC/${fileExtension}/g"
        # replace original file name by changing the extension from heic to jpg
        outputFileName=`echo $file | sed -e $sedCommand`
        heif-convert $file $outputFileName
        sudo mv "$file" "./HEIC_org"
    done
fi
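A recursive variant could use find instead of parsing ls. The following is a sketch, assuming heif-convert accepts input and output names as in the script above; the IMAGES/ path and the per-directory HEIC_org/ layout come from the question:

```shell
#!/bin/bash
# Recursively convert every .heic/.HEIC under IMAGES/, then park the
# original next to its converted copy in a HEIC_org/ subdirectory.
find IMAGES -type f -iname '*.heic' -print0 |
while IFS= read -r -d '' file; do
    dir=$(dirname "$file")
    mkdir -p "$dir/HEIC_org"
    heif-convert "$file" "${file%.*}.jpg"
    mv "$file" "$dir/HEIC_org/"
done
```

-iname matches case-insensitively, so both .heic and .HEIC are caught, and -print0 with read -d '' keeps file names with spaces intact.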

gnome is not completly removed in kali linux

Posted: 28 Jul 2021 10:07 AM PDT

Hello, I installed GNOME on Kali Linux, then decided to remove it and use XFCE4. I tried apt-get remove gnome-core, but GNOME still shows up in the login menu, my system still picks the GNOME file manager for opening files, and it uses the GNOME login panel when I turn my system on. How can I remove GNOME completely?

The same way to restore init for making boot process successfully in Ubuntu but not in Arch

Posted: 28 Jul 2021 10:17 AM PDT

I am learning Linux. To test whether the kernel will invoke init during the boot process, I ran:

sudo rm /sbin/init  

and rebooted. As I expected, Ubuntu was not able to boot successfully since /sbin/init didn't exist. Then, to fix it, I used a bootable USB stick to mount the root filesystem, chroot into it... and finally re-created the symlink:

ln -s /lib/systemd/systemd /sbin/init  

rebooted... and my Ubuntu booted successfully again.

But the same approach doesn't work in Arch. How can this be explained? (Arch and Ubuntu both use systemd as init, and I installed each of them in its own partition.)

(After removing /sbin/init in Arch, it showed ERROR: Root device mounted successfully, but /sbin/init does not exist. I did it as root)

Unable to get local issuer certificate (but my trusted CA-certificate store seems OK)

Posted: 28 Jul 2021 09:31 AM PDT

This has kept me busy for a good number of hours. I have read a good many other articles and Stack Exchange questions, and tried other things, but with no positive result so far.

Running Ubuntu 20 / Nginx / OpenSSL 1.1.1.

Using wget, openssl s_client or curl on normal web resources, I get the message: "Verify return code: 20 (unable to get local issuer certificate)", or equivalent.

$ openssl s_client -connect google.com:443
CONNECTED(00000003)
depth=2 C = US, O = Google Trust Services LLC, CN = GTS Root R1
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=1 C = US, O = Google Trust Services LLC, CN = GTS CA 1C3
verify return:1
depth=0 CN = *.google.com
verify return:1
---
Certificate chain
 0 s:CN = *.google.com
   i:C = US, O = Google Trust Services LLC, CN = GTS CA 1C3
 1 s:C = US, O = Google Trust Services LLC, CN = GTS CA 1C3
   i:C = US, O = Google Trust Services LLC, CN = GTS Root R1
 2 s:C = US, O = Google Trust Services LLC, CN = GTS Root R1
   i:C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIN...

A bit of background. The SSL handshake used to work for these common web resources. But I had an application that required a self-signed certificate to be added to the trusted CA-certificate store. Worked on that for a good twenty hours, tried many things. In the end I decided to 'start anew' and delete my whole trusted certificate store, by deleting everything in /etc/ssl/certs/ and /usr/(local/)share/ca-certificates/, restoring backups of the common CA certs in these folders, and restoring a backup of /etc/ca-certificates.conf. Then I ran update-ca-certificates. Also: I downgraded OpenSSL from v1.1.1 to 1.0.2, and then upgraded it again from 1.0.2 to 1.1.1.

Output below to demonstrate that it looks alright.

$ update-ca-certificates -f
Clearing symlinks in /etc/ssl/certs...
done.
Updating certificates in /etc/ssl/certs...
129 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

As far as I see, my trusted cert-store seems fine: it contains the requested root-certificates in the chain. Notice in the above example there are two root-certificates: (1) C = US, O = Google Trust Services LLC, CN = GTS Root R1, and (2) C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA.

I am sure these two root-certificates are in my trusted CA-store. Here's a snippet of the output from a trick suggested by Marlon in NginX client cert authentication fails with "unable to get issuer certificate"

$ awk -v cmd='openssl x509 -noout -subject' '/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt
...
subject=OU = GlobalSign ECC Root CA - R4, O = GlobalSign, CN = GlobalSign
subject=OU = GlobalSign ECC Root CA - R5, O = GlobalSign, CN = GlobalSign
subject=C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
subject=OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
subject=OU = GlobalSign Root CA - R3, O = GlobalSign, CN = GlobalSign
subject=OU = GlobalSign Root CA - R6, O = GlobalSign, CN = GlobalSign
...
subject=C = US, O = Google Trust Services LLC, CN = GTS Root R1
subject=C = US, O = Google Trust Services LLC, CN = GTS Root R2
subject=C = US, O = Google Trust Services LLC, CN = GTS Root R3
subject=C = US, O = Google Trust Services LLC, CN = GTS Root R4

So the root-certificates that the host in my example (google.com) uses are there in my trusted CA-store. Why am I still getting "Verification error: unable to get local issuer certificate"?

Additionally, I'll add the output when I explicitly define the path to the trusted CA-cert store. The SSL-handshake succeeds! What am I overlooking?

$ openssl s_client -CApath /etc/ssl/certs -connect google.com:443
CONNECTED(00000003)
depth=2 C = US, O = Google Trust Services LLC, CN = GTS Root R1
verify return:1
depth=1 C = US, O = Google Trust Services LLC, CN = GTS CA 1C3
verify return:1
depth=0 CN = *.google.com
verify return:1
---
Certificate chain
 0 s:CN = *.google.com
   i:C = US, O = Google Trust Services LLC, CN = GTS CA 1C3
 1 s:C = US, O = Google Trust Services LLC, CN = GTS CA 1C3
   i:C = US, O = Google Trust Services LLC, CN = GTS Root R1
 2 s:C = US, O = Google Trust Services LLC, CN = GTS Root R1
   i:C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIN...
...
-----END CERTIFICATE-----
subject=CN = *.google.com
issuer=C = US, O = Google Trust Services LLC, CN = GTS CA 1C3
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 6523 bytes and written 392 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

To conclude: I am likely overlooking something, some setting or parameter that may have gotten reset, or set wrongly during my hours of tinkering with the system. However, I just cannot see it, and the sources I've read and tried so far mostly mention making sure my trusted CA-cert store is complete, which I think it is. What am I overlooking? Where should I look, or what should I do to get a grip on this problem?
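One thing worth checking after that much reinstalling: when neither -CApath/-CAfile nor the SSL_CERT_DIR/SSL_CERT_FILE environment variables are set, OpenSSL falls back to its compiled-in OPENSSLDIR, so a stray environment variable or a rebuilt OpenSSL with a different default directory would produce exactly these symptoms (works with an explicit -CApath, fails without). A quick sketch:

```shell
# Which directory was this OpenSSL build compiled to use?
openssl version -d

# Environment overrides take precedence over the compiled-in default
env | grep -i '^ssl_cert' || echo "no SSL_CERT_FILE / SSL_CERT_DIR set"
```

If OPENSSLDIR points somewhere other than /etc/ssl (e.g. a leftover /usr/local prefix from the manual downgrade/upgrade), the default lookup and the working -CApath lookup are simply using different directories.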

$1 retains full path of input /path/filename. How do I remove the path so I can redirect output? [duplicate]

Posted: 28 Jul 2021 09:41 AM PDT

Invocation.

script filename  

Code:

sed 's/CHR (3E) /\&GT /g' /edi/scripts/stagingZ/temp.txt >> /edi/scripts/test/out/$filename$(date +"%Y%m%d%M").txt  

Output: /edi/scripts/test/out//edi/scripts/test/schfile.txt2021072822.txt:
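The directory part of $1 can be stripped before building the output name, either with basename or with the ${var##*/} parameter expansion. A sketch, keeping the question's paths:

```shell
filename=${1##*/}            # strip everything up to the last slash
# equivalently: filename=$(basename "$1")
sed 's/CHR (3E) /\&GT /g' /edi/scripts/stagingZ/temp.txt \
    >> "/edi/scripts/test/out/${filename}$(date +"%Y%m%d%M").txt"
```

With $1 set to /edi/scripts/test/schfile.txt, ${1##*/} yields schfile.txt, so the output lands in the intended directory.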

Server is slow, there's no workload, no apparent reason, only shutdown solves the problem

Posted: 28 Jul 2021 09:11 AM PDT

I manage a server that is now approximately 5 years old. It has Ubuntu Server installed. There have been a handful of times when it suddenly became super slow, to the point of being basically unusable.

  • Everything is very slow. The web applications served by it respond very slowly. Logging in over ssh is slow (the prompt takes several seconds to appear). Running commands is slow, e.g. showing the output of top takes several seconds.

  • The output of top doesn't show anything out of the ordinary. The CPUs show a regular workload, the RAM is not full, and there is no single process eating up all the system resources.

  • Even killing the web applications running on the server, the database (PostgreSQL), etc. doesn't improve anything.

  • There were, though, some suspicious hardware-related readings. Sometimes the CPU was too hot, sometimes the I/O wait time was very high.

  • The only way to get back to normal was to turn off the server, wait a few minutes and turn it on again (a simple reboot wouldn't do the trick; upon restart it was still slow).

Because of the observations about I/O and temperature, it was decided that these were hardware-related issues. Indeed, one time it turned out the air conditioning inside the server room was turned off. We also decided to replace the hard disk: we installed an SSD along with a newer Ubuntu version.

All was well, it seemed; the problem hadn't shown up again. Until now. A few days ago all this happened again, but this time even I/O and temperature seemed normal. Again, the workload didn't show anything out of the ordinary; stuff was simply taking an abnormal amount of time to execute. Again, the only thing that made it go back to normal was to power off, wait a few minutes and turn it on again (a simple reboot would restart and still be slow).

What else would you check that is software-related? What else would you check that is hardware-related? Is there something Linux does on poweroff that it doesn't do on reboot, or vice versa?

What kind of information does a directory file contain?

Posted: 28 Jul 2021 09:07 AM PDT

I've heard that everything in Linux, including directories, is a file. So I tried to access a directory file, but when I tried to read it with cat I got the error cat: xp: Is a directory. Using less, more, head, tail and nl gave similar results, and using Vim landed me in some kind of navigational menu.

So since I seem to be unable to find out myself I came here in search of answers.

So... What kind of information do directory files contain?

I assume that the answer includes some kind of links to the inodes of the files in that directory but what other kind of information does it contain?

Also is there a way to access the file and possibly edit it?
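On most filesystems a directory is essentially a table of (name, inode number) pairs; everything else about a file (permissions, size, timestamps) lives in its inode. You can see that mapping with ordinary tools, and on ext4 even dump the raw entries. A sketch, with the device name being a placeholder:

```shell
# -i prefixes each name with the inode number it maps to; the
# name-to-inode mapping is essentially what the directory file stores
ls -ai /etc | head

# On ext4 the raw directory entries can be inspected with debugfs
# (needs root; /dev/sda1 is a placeholder for your actual device):
# sudo debugfs -R 'ls -l /etc' /dev/sda1
```

Writing to a directory through the normal file interface is refused by the kernel; its entries only change through system calls such as rename(2) and unlink(2), although offline tools like debugfs can edit the on-disk structures directly.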

Fish - remove file before download

Posted: 28 Jul 2021 08:57 AM PDT

I run fish shell and it gives an exception when I try to remove a file which is not found. I have the following script:

#!/usr/bin/fish

set files "/tmp/*.xlsx"
rm -f "$files"
echo "$files"
set tmpfile (mktemp -u).xlsx
curl http://localhost:18085/myService -o "$tmpfile" -s
xdg-open "$tmpfile"

When I run it like this, the files variable contains the pattern. And if one of the files cannot be deleted (e.g. it is still open), none of them get deleted.

I have two questions:

  1. What is the best way to delete the files?
  2. Since I'm developing sometimes the file isn't downloaded by the curl. How to check and only open if the download was successful? (check if exists or check if file size >0 or??)
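For the second question, one common pattern is to let curl fail on HTTP errors with -f and only open the result when the download produced a non-empty file. Sketched here in portable sh; the same logic translates to fish as `if curl -fs … -o $tmpfile; and test -s $tmpfile`:

```shell
tmpfile=$(mktemp -u).xlsx
# -f makes curl exit non-zero on HTTP errors, -s silences the progress
# meter; open the file only if the download succeeded and is non-empty
if curl -fs "http://localhost:18085/myService" -o "$tmpfile" && [ -s "$tmpfile" ]; then
    xdg-open "$tmpfile"
else
    echo "download failed" >&2
fi
```

The [ -s file ] test covers the "file exists and has size > 0" check in one step.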

Dynamically escape a variable

Posted: 28 Jul 2021 09:59 AM PDT

I'm writing a name-helping script to automatically set the "name": field in a package.json file so that it matches a certain regex structure, but I'm having some issues actually setting the name. The regex it must match is '\@abc\/([a-z]+-{0,1})+[a-z]*$'.

Right now I do basically this (along with some extra stuff to really assert that the naming convention is followed):

pattern='\@abc\/([a-z]+-{0,1})+[a-z]*$'
if [[ ! $name =~ $pattern ]]; then
    read -rp "New name: " newName
    sed -ri "s/(\s.\"name\"\:\s\").*/\1$newName\",/g" $1/package.json
fi

As you might see, the problem here is that the variable $newName gets processed by sed as part of the command, so it needs to be escaped (assuming that the user actually entered a new name with the correct structure). Is there a way to do this? Preferably with as few dependencies as possible.
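A dependency-free approach is to escape the three characters that are special in a sed replacement (backslash, & and the / delimiter) before interpolating. A sketch, with a hypothetical example value for $newName:

```shell
newName='@abc/my-package'                     # example value (hypothetical)
# Escape the characters special in a sed replacement: \  &  and the / delimiter
escaped=$(printf '%s' "$newName" | sed 's/[&/\]/\\&/g')
sed -ri "s/(\s*\"name\": \").*/\1$escaped\",/" "$1/package.json"
```

Since the name already matched the regex, only / can actually occur here, but escaping all three keeps the snippet safe if the validation ever changes.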

Finding duplicates and their indices in an array in Bash

Posted: 28 Jul 2021 09:29 AM PDT

I want to find the duplicates in an array and their indices using bash.
For example, I have this array:

arr=("a" "b" "c" "a" "c")  

In this case, "a" is a duplicate at index 0 and 3, and "c" is also a duplicate at index 2 and 4.

I am currently using two nested loops, but I find it too slow, especially when the array is large.
Is there a better, more efficient way of doing this in bash?

Thank you!

How to expand code snippets in GNU nano?

Posted: 28 Jul 2021 08:29 AM PDT

In most text editors, it is possible have "code snippets" you can expand by typing a keyword and pressing the tabulation key.

As an example, a snippet for LaTeX might look like

\begin{$1}
    $2
\end{$1}

However, I found nothing in the nano man page nor on the web. Is there some hacky way to achieve that? One idea might be to have a bash function nano-latex-begin or similar taking two arguments and then ask GNU nano to execute it, but the process would be rather slow.

Replace TAGS in FileB with VALUE from FileA

Posted: 28 Jul 2021 09:17 AM PDT

I am looking for some assistance in perl, please. I have most of the code built, but I am finding one part particularly challenging.

If FileA:

tag1=value1
tag2=value2

and FileB:

value1=<tag1>
value2=<tag2>

pseudo code:

open file 1
open file 2
read line of data from file 1 while data exists
    change the equal sign to a space (tag1=value1 becomes tag1 value1)
    separate the line into two variables

[magic happens here where I change the value in FileB to the actual value from FileA (see example below)]

close file 2
close file 1

So, I have tried several things and researched the heck out of this using Uncle Google (and here). I know there is a simple way of doing this with a single command line (

prompt> gawk '{sub(/=/," ")}1' [path]/[FileA] |
    gawk '{system ("perl -pi -e \x27s/"$1"/"$2"/g\x27 [path]/[FileB]")}'

), but I don't want to do it that way; instead I am trying to make it happen inside my perl program, because I like making things harder on myself, it seems :-p.

So, for example, if FileA contains

<tag1>=192.192.2.3
<tag2>=5400

and FileB contains

connect IP=<tag1>
connect port=<tag2>

at the end of this program, I want FileB to contain

connect IP=192.192.2.3
connect port=5400

I understand how to perform substitutions in the program, BUT, I am having difficulty getting it to update the file.

Any hints would be welcomed; I don't even need a full solution, just something to point me in the right direction.

This is not homework.

Does changing an export from "no_root_squash" to "root_squash" require a remount on the client side?

Posted: 28 Jul 2021 08:35 AM PDT

Situation: a QNAP NAS is serving several directories already and I want to change the settings away from no_root_squash to root_squash.

  • Does performing this impact anything other than user permissions?
  • Does a remount have to happen on the client side for this to take effect?

I would expect it to be transparent to the client; however, I can't find an answer on Google, not even anything broaching the topic.

Can anyone here tell me with certainty?

Get multiple specific columns from a large file that contains two thousand columns

Posted: 28 Jul 2021 10:05 AM PDT

I want to get multiple, specific columns from a large file on a Linux system that has two thousand columns. How can I do this?

The file, file1.gz, looks like:

0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 0 ...

The columns I need to extract into file2 look like:

186
187
188
189
190
191
192
193
194
195
(about 1000 columns)
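With space-separated columns, a contiguous range is a one-liner with cut, and an arbitrary list of column numbers (e.g. kept one per line in a file) fits in a short awk program. A sketch, using the file names from the question and a hypothetical cols.txt:

```shell
# Contiguous range of columns 186-195:
zcat file1.gz | cut -d' ' -f186-195 > file2

# Arbitrary list of column numbers, one per line in cols.txt:
zcat file1.gz | awk '
    NR == FNR { want[++n] = $1; next }        # first file: remember column numbers
    { for (i = 1; i <= n; i++)
          printf "%s%s", $want[i], (i < n ? OFS : ORS) }
' cols.txt - > file2
```

The trailing "-" makes awk read the decompressed data from stdin after it has read cols.txt, so the two-thousand-column file is streamed once without being fully decompressed to disk.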

How can I upgrade node.js if it is installed from source code?

Posted: 28 Jul 2021 10:12 AM PDT

I am currently using node v14.17.3 and I want to upgrade it to v16. I installed it from the source code.

sudo apt-get --only-upgrade install Package

I believe this line will run only for the packages installed with the package manager.

Using nvm causes an issue where the terminal and the VS Code terminal see different versions.

I am thinking of uninstalling node and reinstalling it from source, but building a package from source takes too much time.

How to list the headers of a man page?

Posted: 28 Jul 2021 09:02 AM PDT

I want to view a list of the headers in a man page without reading all of the man page. For example, in the bash man page (man bash.1) there are many headers: NAME, SYNOPSIS, COPYRIGHT, DESCRIPTION, etc. In essence, I want a list like the one presented at the top of this HTML man-page. Is there a way to get that locally in the command line?

This can be useful in case you want to search for a section by a different name than they provide ("syntax" instead of "grammar") or sometimes you don't even know what you're looking for.
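Formatted man pages render their section headers as unindented, all-caps lines, so filtering for that shape gets close. A sketch; col -b strips any overstrike formatting first:

```shell
# Section headers are the unindented all-caps lines of the formatted page
man bash | col -b | grep -E '^[A-Z][A-Z ]*$'
```

If you have the troff source (e.g. /usr/share/man/man1/bash.1.gz), grepping for the .SH macro lines is an alternative that doesn't depend on the rendered layout.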

Disabling core dump for an already running process

Posted: 28 Jul 2021 10:12 AM PDT

We are dealing with a vendor's product, which has a nasty tendency to crash (with a massive core dump) on shutdown (upon receiving a SIGTERM).

We don't want to disable core-dumping for it completely, because, when crashes happen during normal runtime, cores are useful. Can we disable the core-dumping by a process right before killing it?

Other than by writing our own core-handling program, that is...
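util-linux ships prlimit(1), a wrapper around the prlimit(2) syscall, which can change the resource limits of an already running process. So the core limit can be zeroed just before sending SIGTERM; $PID below stands for the vendor process's pid:

```shell
# Zero the core-file size limit (soft:hard) of the running process,
# then shut it down; any crash during shutdown now produces no core
prlimit --pid "$PID" --core=0:0
kill -TERM "$PID"
```

During normal runtime the process keeps its original limits, so crashes outside the shutdown window still dump cores as before.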

Vim: run commands depending on file directory

Posted: 28 Jul 2021 08:40 AM PDT

In my init.vim, I want to run certain commands only if the current file is under a certain hierarchy. Pseudo-code:

if current_file_directory == ~/some/path/here
    autocmd <whatever>
    set <whatever>
endif

But all the if examples I'm finding are too basic to extrapolate.

To clarify, neither :pwd nor getcwd() apply, because those return the directory one is in when invoking neovim. I care about the directory the file is in. Following the code above, the commands should fire if I'm under /tmp but editing the file ~/some/path/here/more/deep/still.txt but not if I'm under ~/some/path/here but editing the file /tmp/example.txt.
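Rather than wrapping commands in an if, the idiomatic approach is an autocmd whose pattern matches the file's own path, which fires regardless of the directory nvim was started from. A sketch; the path and the settings are placeholders:

```vim
" Fires for any buffer whose file lives under ~/some/path/here,
" no matter what the current working directory is
augroup PathSpecific
  autocmd!
  autocmd BufRead,BufNewFile ~/some/path/here/* setlocal textwidth=72
augroup END
```

If an explicit conditional is ever needed, expand('%:p') returns the current file's full path (unlike getcwd(), which returns the working directory).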

How to avoid sed outputting new line / carriage return?

Posted: 28 Jul 2021 09:23 AM PDT

I'm decomposing a single line multiple times, recomposing it after each step, but each command adds a new line to the output.

Basically, these are the commands:

h
s#(^.*?)(\[\{.*$)#\1#p
g
s#(^.*?)(\[\{.*?\}\])(.*)$#echo \2 | jq --sort-keys --compact-output#ep
g
s#(^.*?\}\])(.*)$#\2#p
z

But the original line results in three lines after sed, because each /p adds a newline after the content. How can I avoid that? One line of input should result in one line of output.

Instead of /p, I also tried writing each s command's result to a file with the /w filename flag, then reading it back with the r filename command, but the file content is appended straight to the output, giving the same result.

And /p is there because I added the -n command-line parameter to sed.

To add a bit of context: I'm parsing HTTP POST requests, which include the request's JSON body, and I'm trying to use jq to order the JSON properties uniformly.

AWS libcrypto resolve messages seen when using a boto3 library, apparently after an update

Posted: 28 Jul 2021 09:45 AM PDT

I'm using the s4cmd package in Python which in turn uses boto3 to communicate with a (non Amazon) S3 service.

I've started seeing these warning messages on stderr. I believe this happened after an auto update to OpenSSL, but that's just my best guess.

AWS libcrypto resolve: searching process and loaded modules
AWS libcrypto resolve: found static aws-lc HMAC symbols
AWS libcrypto resolve: found static aws-lc libcrypto 1.1.1 EVP_MD symbols

openssl version
OpenSSL 1.1.1g  21 Apr 2020

cat /etc/os-release | head -n6
NAME="Pop!_OS"
VERSION="20.10"
ID=pop
ID_LIKE="ubuntu debian"
PRETTY_NAME="Pop!_OS 20.10"
VERSION_ID="20.10"

Does anyone know what these messages are, if they're ignorable, and if they are how to suppress them?

The onset of these messages correlates with a lot of random SSL failures. Both in Firefox and when using boto3. I commonly see errors like [Exception] Connection was closed before we received a valid response from endpoint URL now, but when I ssh into another server I have no problem. An hour later the problems will be gone, only to reappear some apparently random time later.

Additional info:

I recently noticed that inside a docker container on my laptop my boto3 & s4cmd commands work while they fail on my base OS. I checked openssl version on both:

# Base OS, failing
openssl version
OpenSSL 1.1.1g  21 Apr 2020

# Inside docker container, working
openssl version
OpenSSL 1.1.1  11 Sep 2018

SQLAlchemy error when upgrading Apache Superset

Posted: 28 Jul 2021 10:27 AM PDT

I don't know if this is the right place to post this but I'm desperate. I've been following instructions on how to install Apache Superset based on this link:

https://superset.apache.org/docs/installation/installing-superset-from-scratch

I was able to complete the following tasks:

i) install all required dependencies; ii) install and start the Python virtual environment.

However, when running the command "superset db upgrade", I get the following error:

sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) duplicate column name: filter_type [SQL: ALTER TABLE row_level_security_filters ADD COLUMN filter_type varchar(255)]


I have no idea on how to start debugging this. My system info is below:

  • Ubuntu 18.04;
  • Apache Superset 0.38.1
  • SQLAlchemy 1.3.24

Any help is appreciated.

How to take annotated screenshots with keyboard only?

Posted: 28 Jul 2021 08:56 AM PDT

Workflow:

  1. Take a screenshot
  2. Select region on screenshot (an overlay displays keys)
  3. Input text
  4. Repeat step #2 or save/imgur upload

Does something like that exist?

Crop multiple images with variable height using convert

Posted: 28 Jul 2021 08:46 AM PDT

I have a list of images with fixed width but with variable height.

1440x2461

1440x2292

1440x2328

1440x2564

1440x2438

I would like to crop footer of the images by 380px from the bottom irrespective of the height of the image.

1440x2461 -> after crop -> 1440x2081

The real problem I am facing is that I can't specify every time what to keep after the crop, like this:

convert in.png -crop 1440x2081+0+0 out.png  

So my question is: is there a way to tell ImageMagick to keep the unspecified area after a crop and remove the specified area?

convert in.png -crop -{command to invert selection} 1440x380+0+0 out.png  

Or any other way to achieve the result?
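ImageMagick's -chop with a gravity setting does exactly this inversion: it removes the specified area and keeps the rest. With -gravity South, the geometry 0x380 removes 380 full-width rows from the bottom whatever the image height is. A sketch:

```shell
# Remove 380 rows from the bottom edge, regardless of image height
convert in.png -gravity South -chop 0x380 out.png
```

For the whole list of images, mogrify applies the same operation in place: `mogrify -gravity South -chop 0x380 *.png`.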

How can I burn embedded subtitles to a file using ffmpeg?

Posted: 28 Jul 2021 09:50 AM PDT

Most of the examples on the internet advise you to use an external file to burn subtitles into the video with -vf subtitles=subtitle.srt. How can I do that when the subtitles are embedded in the video file?
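The subtitles filter can take the video file itself as its argument, and in builds that support it, the si option selects which embedded subtitle stream to burn. A sketch, assuming the first subtitle track is wanted and the build includes libass:

```shell
# Burn the first embedded subtitle track (si=0) of in.mkv into the video
ffmpeg -i in.mkv -vf "subtitles=in.mkv:si=0" -c:a copy out.mkv
```

Burning re-encodes the video stream by necessity, so only the audio can be stream-copied here.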

merging part of a media file with subtitles for a media file?

Posted: 28 Jul 2021 10:09 AM PDT

I extracted part of a media file using the answer given at FFMpeg : Creating a video clip of approx. 10 seconds when video duration is unknown without audio- . In my case it is an .mkv file. The thing is, the media file doesn't have subtitles embedded in it, but I do have an external subtitle file. Now I want to mux the subtitle file into the video, but only the relevant part of it.

For example, let's say the video file is of 1 hour duration and the extracted clip is 2 minutes, or 120 seconds. I know the position of the clip in the video as well as where it is located in the subtitle (srt) file. My question is how to embed/mux only the subtitles relevant to that part of the video file and leave the rest out. I am guessing ffmpeg would be the answer, as it is for many things when manipulating media files.
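One possible approach, sketched here with placeholder timestamps and file names: apply -ss to the subtitle input so its cues are seeked into and re-timed to match the clip's start, limit the output duration with -t, and stream-copy the video while muxing the subtitles as an srt track:

```shell
# Clip was cut at 00:10:00 of the original; keep 120 s of subtitles.
# -ss before the srt input shifts its cue timestamps to start at zero.
ffmpeg -ss 00:10:00 -i full.srt -i clip.mkv -t 120 \
       -map 1 -map 0 -c copy -c:s srt out.mkv
```

This is an untested sketch; the exact -ss value has to match the clip's original start time for the cues to line up.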

Cannot type file paths on any Open File Dialogs?

Posted: 28 Jul 2021 09:09 AM PDT

There is no way I can type/paste a file path when I use some Open File dialog:


There is no right-click option or anything. I use Linux Mint XFCE 19.1:

linux@linux:~$ uname -a
Linux linux 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

I had installed nemo and set it as the default file manager, then uninstalled it, but the Open File dialog still does not allow me to type a file path.

Groff long umlaut

Posted: 28 Jul 2021 08:56 AM PDT

I'm Hungarian and I want to use groff to write good-looking PDF files. The problem is that even though I used:

groff -k -ms test.ms -T pdf > test.pdf  

even though -k took care of the é, á, ü, ö characters, it cannot handle the ű and ő characters. When I try, it says (this is an example for the ő character):

test.ms:8: warning: can't find special character `u006F_030B'  

Is there a way I can enable these characters?

Linux does not recognize Fake-RAID 0

Posted: 28 Jul 2021 10:06 AM PDT

I am trying to set up Linux (preferably Linux Mint) with Dual Boot alongside my Windows 10 installation. Now the problem is, I'm using a Fake-RAID 0 setup, because I really dislike having partitions.

I'm using an Asus X370 Pro Mainboard and an AMD Ryzen 1800X CPU.

I've searched across the internet and of course found many guides for dual-booting a Linux distro alongside Windows, even some for a RAID setup.

What they all had in common, though, was the assumption that installing mdadm and running

sudo mdadm --assemble --scan  

would work and let Linux detect my RAID array. Unfortunately, for me that is not the case. I received the following output instead:

no arrays found in config file or automatically  

I have then tried several other guides (with some more or different mdadm setup stuff), tried to install Ubuntu instead of Mint (hoping, that it might have better compatibility with my array).

Is there anything I'm missing?

How do I get xinputrc to work for login screen?

Posted: 28 Jul 2021 09:04 AM PDT

I have the following lines in /etc/X11/xinit/xinputrc to tame my mouse sensitivity:

xinput --set-prop 9 "Device Accel Constant Deceleration" 4.5
xinput --set-prop 9 "Device Accel Velocity Scaling" 1
xinput --set-prop 9 "Device Accel Adaptive Deceleration" 1.5

These work great, the mouse behaves as I want.

However, these commands only get run after a user logs in - on the login screen the mouse has the default sensitivities and is almost unusable.

How do I get xinput commands to run that affect the login screen?

Running LinuxMint 17.1, standard display manager (mdm).

How to install tar file (jhead) on Mac or Linux machine

Posted: 28 Jul 2021 10:15 AM PDT

I'm new to Linux and tarballs and was wondering how to properly install them on a Mac or Linux machine. I would prefer to know how to install on a Mac, but I just need some help understanding them. I want to install jhead-2.97.tar.gz: I downloaded the zipped source tarball, yielding a folder containing a myriad of files. I know this is a silly question, but how do I properly install this on my machine in the Terminal/LXTerminal?

jhead is a command-line tool used to extract Exif data from a JPEG file in the Terminal.
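The usual source-install sequence applies; jhead in particular ships a plain Makefile, so there is no ./configure step (the install path is the conventional default; check the README in the unpacked folder):

```shell
tar xzf jhead-2.97.tar.gz        # unpack the source tree
cd jhead-2.97
make                             # build the jhead binary
sudo make install                # or: sudo cp jhead /usr/local/bin/
```

On a Mac you need the Xcode command-line tools installed for make and the compiler; on Debian/Ubuntu the build-essential package provides them.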
