Saturday, August 7, 2021

Recent Questions - Unix & Linux Stack Exchange

Comparing two directories based on inodes

Posted: 07 Aug 2021 09:56 AM PDT

A bit of context that I think is relevant for the appropriate solution:

I have a server with two folders: one is ingest, the other is sorted. The sorted folder is built from the ingest folder; all directories are unique, and all files are hard links.

The result of this is that when the ingest folder has a file deleted, it stays in the sorted folder, and vice versa. This makes cleanup almost impossible, as there are hundreds of thousands of files totaling about 40 terabytes.

I have a script to add all links to a database, with their inode and path name. I can then use some SQL to find the inodes that only appear once, and decide whether or not I want to delete them.

This solution is very slow (I need to refresh the entire database every time I want to manage it) and quite clunky (I need to run the query, then manually delete files over the CLI).

Is there a solution like ncdu or any dual-pane file browser that can show inodes, and filter specifically on number of links for the inode (as shown by stat)?
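
For the "only one link" filter specifically, GNU find can do this directly; a minimal sketch, assuming the two trees live under ingest/ and sorted/:

find ingest/ sorted/ -type f -links 1 -printf '%i %n %p\n'

This prints the inode number, link count and path of every file whose data is no longer shared between the two trees, which can then be reviewed or piped into a deletion step without rebuilding a database first.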

Debian Buster Freezing when Compiling Large Project

Posted: 07 Aug 2021 09:48 AM PDT

I've recently begun encountering an issue when compiling a moderately large software project: my entire computer freezes. I can't get to another tty, and I can't move the mouse. After some experimentation, I was able to replicate this freezing with other compilers and large projects, so I don't suspect the issue is particular to my code or compiler. From what I can tell, the issue appears to occur primarily when some other software (an IDE linter, browser, etc.) is also using a fair amount of CPU.

I've run top while inducing a freeze, and I will see two processes (one compiler and the "other") nearing 100% CPU, but neither process is actually using that much memory (at least according to top before it freezes). When the freeze occurs, the disk indicator light on my tower goes solid.

I have looked into various potential memory-related fixes online, but none of the below seemed to work:

  • Deactivate swapping
  • Decrease swappiness
  • Increase required free RAM

At present, I'm simply at a loss for what to do. I'm not even sure of what logs I can check or ways I can know the true cause of the unresponsiveness. Any and all assistance would be much appreciated.
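
A hard freeze with a solid disk light often points to memory thrashing rather than a true kernel crash, and the previous boot's journal is the first place to look. A sketch, assuming systemd-journald with persistent logging is available:

# after rebooting from a freeze, inspect the previous boot's errors:
journalctl -b -1 -p err
# watch memory pressure while reproducing, from a second session:
vmstat 2

If the journal shows the OOM killer or page-allocation stalls, reducing compile parallelism (e.g. make -j2) is a quick confirmation test.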

How to migrate an existing Btrfs subvolume to systemd-homed

Posted: 07 Aug 2021 09:42 AM PDT

Correct me if I'm wrong, but it appears to me that Linux is migrating to systemd-homed to manage home directories.

My home directory is on a Btrfs subvolume. I see that homectl mentions that Btrfs subvolumes are supported. Is there any method to migrate an existing Btrfs subvolume to systemd-homed?

On Converting Existing Users to systemd-homed I only see a method that copies the entire home directory. This isn't quite ideal, since:

  • My home directory is fairly large.
  • I use Btrfs snapshots for backups. A copy might break my incremental backup, which would grow my backup disk by an unreasonable amount.

So I'm wondering if there's a more direct way to migrate, without copying any files.
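
For what it's worth, homed's directory/subvolume storage keeps home data at /home/$USER.homedir, so one untested sketch is to rename the existing subvolume into that convention before registering the user (this is an assumption based on the homectl documentation, not a verified migration path; try it on a throwaway user first):

mv /home/alice /home/alice.homedir
homectl create alice --storage=subvolume

Whether homectl adopts a pre-existing subvolume instead of creating a fresh one may depend on the systemd version.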

How to extract columns by names from CSV file

Posted: 07 Aug 2021 09:53 AM PDT

I have over 150 CSV files with inconsistent columns on Linux. I need to extract specific columns by name (in case they exist) from each file, e.g. name, mobile, email, into a new file.

So, the goal is to have one CSV file out of these 150 files with the following order:

name, mobile, email
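
A minimal sketch with awk, assuming GNU awk and simple CSVs (no quoted fields containing commas): map each file's header names to column indexes, then print the wanted columns, leaving a field empty where the column is missing:

awk -F, '
  BEGIN { print "name,mobile,email" }
  FNR == 1 { delete idx; for (i = 1; i <= NF; i++) idx[$i] = i; next }
  {
    printf "%s,%s,%s\n",
      ("name"   in idx ? $(idx["name"])   : ""),
      ("mobile" in idx ? $(idx["mobile"]) : ""),
      ("email"  in idx ? $(idx["email"])  : "")
  }
' *.csv > combined.csv

For real-world CSVs with quoting and embedded commas, a CSV-aware tool such as csvkit's csvcut is the safer route.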

GPG key generation

Posted: 07 Aug 2021 09:22 AM PDT

I'm following a tutorial about gpg key creation in order to build a signed Ubuntu repo:

GPG Tutorial

gpg --batch --gen-key $KEYNAME.batch  

does not generate a *.key file. Now I figured out that I can create a key pair via

gpg --full-generate-key  

Those keys are then put in their respective keyrings and I can export them. However, from what I've seen, the exported keys are ASCII-armored and in the *.pgp format.

I would like to know what's going wrong with the first command (why does it not generate a key file) and how it differs from running the second command and exporting the keys later.
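
For reference, neither command writes a key file by itself: both generate into the keyring, and a file only appears on export. A sketch of a batch definition plus export, with hypothetical names and values:

cat > "$KEYNAME.batch" <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 4096
Name-Real: Example Repo Signing
Name-Email: repo@example.com
Expire-Date: 0
%commit
EOF
gpg --batch --gen-key "$KEYNAME.batch"
gpg --armor --export repo@example.com > "$KEYNAME.key"

The --armor flag produces the ASCII-armored form; omit it for binary output.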

Is there any good documentation on sysfs?

Posted: 07 Aug 2021 09:49 AM PDT

I'm looking for good documentation for sysfs; the man page is incomplete and even has one sentence that just stops halfway through. I've found the Linux kernel documentation, but that is a little too geared towards programmers as opposed to administrators. If anyone can suggest a good source I'd be grateful.

How to show the output of any command and/or script in stdout and a file, but keeping the color in stdout?

Posted: 07 Aug 2021 08:17 AM PDT

Some commands print colored output to the terminal (stdout), for example:

git status
mvn help:help -Ddetail=true
gradle build
any Linux command (ls [-...], etc.)

Note: the same applies to scripts that contain:

  • executions of Linux commands
  • executions of tool commands
  • executions of other scripts

Therefore the following is possible:

./mvnw help:help -Ddetail=true
./gradlew build
./customscript.sh

So far nothing is new and everything works as expected, for:

  • linux_command
  • tool_command (Maven, Git, Gradle, etc.)
  • script.sh (executes Linux/tool commands and other scripts)

If any of them prints colors to the terminal (stdout), that is the default behavior of each command/tool.

Now, if I want to see the output in the terminal (as above) and also write it to some file, in general it is possible to do:

"linux_command" | tee [-a] "/some/path/log_file.log"  "tool_command"  | tee [-a] "/some/path/log_file.log"  "script.sh"     | tee [-a] "/some/path/log_file.log"  

This works as expected, but the output in the terminal (stdout) no longer includes the colors.

Question:

  • How can I show the output of any command and/or script in stdout and in a file, while keeping the color in stdout?

That is, the same behaviour as when the pipe and tee were not included, while still writing the content to the .log file.

Note: I did research the script command, but it overwrites the output file's content.

I need a general approach that works for any command and/or script.sh.
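
Most tools disable color when stdout is not a terminal, which is why the pipe to tee loses it. One workaround, a sketch using util-linux script(1) to run the command inside a pseudo-terminal (the escape codes then also land in the log file):

script -qec "./gradlew build" /some/path/log_file.log

Appending is supported via -a; alternatively, unbuffer from the expect package in front of the existing pipe to tee achieves a similar effect.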

Different desktops (Gnome Classic vs. Gnome Shell) depending on local vs. remote login

Posted: 07 Aug 2021 08:19 AM PDT

If I log in to RHEL 8.4 on my office PC locally, the following setting in /var/lib/AccountsService/users/<username> is taken into account:

[User]
...
XSession=gnome-classic

i.e. I have the Gnome Classic desktop there.

If I log in remotely via RDP/Xvnc from a Windows 10 PC, I still get the Gnome Shell desktop.

How can this be if it is the same user in both cases?

How can I activate Gnome Classic at remote logon as well?
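
The XSession= key is typically only evaluated by the local display manager (GDM); a VNC session starts whatever its own startup script launches. A sketch for forcing the classic session in a VNC startup file (the path is an assumption; adjust for your Xvnc setup):

# ~/.vnc/xstartup
export GNOME_SHELL_SESSION_MODE=classic
export XDG_CURRENT_DESKTOP=GNOME-Classic:GNOME
exec gnome-session --session=gnome-classic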

How to prevent rsyslog from logging cron tasks to /var/log/syslog using additional config

Posted: 07 Aug 2021 08:17 AM PDT

I have a Docker image for cron tasks. Here is the Dockerfile:

FROM php:8.0-fpm

RUN apt-get update
RUN apt-get install -y cron rsyslog

RUN touch /var/log/cron.log
RUN chmod 0777 /var/log/cron.log

COPY ./app /var/www/app
COPY crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
RUN crontab /etc/cron.d/crontab

COPY 02-cron.conf /etc/rsyslog.d/02-cron.conf

CMD service rsyslog start && service cron start && tail -f /dev/null

By default rsyslog logs cron to /var/log/syslog. I want to log cron to a separate file /var/log/cron.log.

rsyslog's master config /etc/rsyslog.conf has the following lines:

*.*;auth,authpriv.none      -/var/log/syslog
#cron.*             /var/log/cron.log

I want to disable logging cron to /var/log/syslog and enable logging it to /var/log/cron.log by adding one more config /etc/rsyslog.d/02-cron.conf:

*.*;cron,auth,authpriv.none     -/var/log/syslog
cron.*                          /var/log/cron.log

But the result is that cron logs to both /var/log/syslog and /var/log/cron.log.
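
A likely explanation: on Debian-based systems /etc/rsyslog.conf includes /etc/rsyslog.d/*.conf before its own rules, so a drop-in can only add rules; the main file's *.*;auth,authpriv.none catch-all still runs afterwards and still matches cron messages. A sketch of a drop-in that logs cron and then discards it before the catch-all is reached:

# /etc/rsyslog.d/02-cron.conf
cron.*    /var/log/cron.log
cron.*    stop

The alternative is editing the catch-all in /etc/rsyslog.conf itself to *.*;cron,auth,authpriv.none.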

Manjaro: Steam controllers are not detected wirelessly, making them unable to connect

Posted: 07 Aug 2021 07:58 AM PDT

I have a fresh install of Manjaro Linux and installed the Steam client. I have the Steam Controller dongle plugged into the PC. When I turn on the Steam Controller it blinks as if it were unconnected. After about a minute it turns itself off. When I open Big Picture mode, go to Settings > Controller and turn the controller on, it does not show in the controller list. Steam is totally unable to detect it.

I tried making /lib/udev/rules.d/99-steam-controller-perms.rules with this content:

# This rule is needed for basic functionality of the controller in Steam and keyboard/mouse emulation
SUBSYSTEM=="usb", ATTRS{idVendor}=="28de", MODE="0666"

# This rule is necessary for gamepad emulation; make sure you replace 'pgriffais' with a group that the user that runs Steam belongs to
KERNEL=="uinput", MODE="0660", GROUP="my user name", OPTIONS+="static_node=uinput"

# DualShock 4 wired
SUBSYSTEM=="usb", ATTRS{idVendor}=="054c", ATTRS{idProduct}=="05c4", MODE="0666"
# DualShock 4 wireless adapter
SUBSYSTEM=="usb", ATTRS{idVendor}=="054c", ATTRS{idProduct}=="0ba0", MODE="0666"
# DualShock 4 slim wired
SUBSYSTEM=="usb", ATTRS{idVendor}=="054c", ATTRS{idProduct}=="09cc", MODE="0666"

# Valve HID devices over USB hidraw
KERNEL=="hidraw*", ATTRS{idVendor}=="28de", MODE="0666"

# Valve HID devices over bluetooth hidraw
KERNEL=="hidraw*", KERNELS=="*28DE:*", MODE="0666"

# DualShock 4 over bluetooth hidraw
KERNEL=="hidraw*", KERNELS=="*054C:05C4*", MODE="0666"

# DualShock 4 Slim over bluetooth hidraw
KERNEL=="hidraw*", KERNELS=="*054C:09CC*", MODE="0666"

It did not work, so I made /etc/udev/rules.d/99-steam-controller-perms.rules with the same content, also with no effect.


When I run lsusb, nothing containing the word "steam" or "controller" is listed.


When I connect the controller via USB it works, but only inside the Steam client in Big Picture mode. I remember I could control the cursor outside of Big Picture mode a few years ago using the Steam Controller; maybe this is a lead.

How can I fix this so that I can connect the Steam Controller wirelessly and use it outside of the Steam client?

Thank you for your help.
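
One easy thing to rule out: udev only applies edited rules after a reload, so it may be worth running the standard reload commands after creating the file:

sudo udevadm control --reload-rules
sudo udevadm trigger

Also, lsusb lists the dongle under Valve's USB vendor ID rather than the words "steam" or "controller", so lsusb | grep -i 28de is the more reliable presence check.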

Can't connect to postgres using a URI, but can connect with psql -U

Posted: 07 Aug 2021 07:48 AM PDT

I'm trying to set up a simple web server on DigitalOcean and I'm having trouble connecting to the database with sqlalchemy using a URI.

Running

root@maudlin:/server/http/maudlin# psql postgresql://maudlin:<password>@localhost/maudlin
psql: error: could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting
        TCP/IP connections on port 5432?

fails but running

root@maudlin:/server/http/maudlin# psql -U maudlin
Password for user maudlin: <password>
psql (12.7 (Ubuntu 12.7-0ubuntu0.20.04.1))
Type "help" for help.

maudlin=>

passes.

As far as I can tell my pg_hba.conf file allows local ip connections:

# This file is read on server startup and when the server receives a
# SIGHUP signal.  If you edit the file on a running system, you have to
# SIGHUP the server for the changes to take effect, run "pg_ctl reload",
# or execute "SELECT pg_reload_conf()".
#
# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records.  In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.
host    maudlin         maudlin         <personal ip 1>/32      md5
host    maudlin         maudlin         <personal ip 2>/32      md5

# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database superuser can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (custom daily cronjobs, replication, and similar tasks).
#
# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     md5
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5

Doesn't the line

host    all             all             127.0.0.1/32            md5  

mean accept IPv4 connections on localhost with md5 password authentication?

I imagine I could add the machine's IP to the top list of external IPs and route my connections through that, but that seems like a Bad Idea™.

Does anyone have any debug tips or suggestions?
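
Since the error is "Connection refused" rather than an authentication failure, pg_hba.conf is probably not the culprit; the server does not appear to be listening on TCP at all. Two checks, assuming standard tools:

# is anything listening on port 5432?
ss -tlnp | grep 5432
# what does the server think it is listening on?
sudo -u postgres psql -c 'SHOW listen_addresses;' -c 'SHOW port;'

If listen_addresses is empty, setting it to 'localhost' in postgresql.conf and restarting should make the URI form work.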

What is the difference between -bash and bash?

Posted: 07 Aug 2021 07:18 AM PDT

When I log in as root and enter some random non-existing command, it says:

root@localhost:~# asdf
-bash: asdf: command not found
root@localhost:~#

But when I do the same thing as user rakinar2 it says:

rakinar2@localhost:~$ asdf
bash: asdf: command not found
rakinar2@localhost:~$

Now, what is the difference between -bash and bash?
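
Background that may help frame answers: the leading dash is the conventional marker for a login shell; the program that starts the shell sets argv[0] to -bash, and error messages simply echo that name back. This can be inspected directly:

# in a login shell, $0 usually carries the dash:
$ echo $0
-bash
# and bash reports its login-shell status itself:
$ shopt login_shell
login_shell     on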

How to show all details with the 'ls' command on FreeBSD, and what do the colors mean?

Posted: 07 Aug 2021 07:21 AM PDT

I am trying to clone a directory tree with the cp command and some attributes are being lost. For example, when I try to run sudo from the copied tree, it gives this error:

sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set  

This is what I am observing with the ls command:

[screenshot: two ls listings in which the copied file is shown in a different color]

That is, the copied file is indicated with a different color, but the textual representations of both files are identical. What is the difference, and how can I show it with ls?
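
The sudo error itself names the lost attributes: owner root and the setuid bit, which plain cp does not preserve without options. Two hedged checks on FreeBSD (adjust the copy's path):

# compare permissions, owner/group and file flags explicitly:
stat -f '%Sp %Su:%Sg %Sf %N' /usr/bin/sudo /path/to/copy/usr/bin/sudo
# copy again while preserving attributes:
cp -Rp /source/tree /dest/tree

Many ls color schemes also give setuid binaries their own color, which is one plausible reason for identical-looking yet differently colored entries.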

apt errors when installing

Posted: 07 Aug 2021 09:11 AM PDT

I am currently trying to install mysql-server using sudo apt install mysql-server on my Ubuntu 20.04 WSL. However, after entering Y to install, I am hitting the following error:

Setting up ec2-instance-connect (1.1.12+dfsg1-0ubuntu3.20.04.1) ...
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
sshd override added, restarting daemon
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
dpkg: error processing package ec2-instance-connect (--configure):
 installed ec2-instance-connect package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 ec2-instance-connect

This output also appears when running other apt or apt-get commands. How can I fix this error? (I'm very new to Linux.)
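
For context: WSL does not boot with systemd as PID 1, and this package's post-installation script calls systemctl, so every apt run that retries the configure step fails. One hedged way out, assuming ec2-instance-connect is not actually needed inside WSL:

sudo apt remove ec2-instance-connect

If the removal scripts also call systemctl they may fail for the same reason; in that case proceed carefully with dpkg's lower-level options.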

I can only run commands which produce less than 25 lines of output in CentOS

Posted: 07 Aug 2021 07:11 AM PDT

I am connecting to a CentOS server with ssh and trying to execute some commands. But if a command's output is longer than 25 lines, it fails and I can't do anything else in the ssh session. Is there any limit on terminal output?

For example

ifconfig >> output.txt -> there is no error in this command

ifconfig -> this command fails and the ssh session drops after some time. I can establish another ssh session and run commands while the first session is hung.

If there is a limit on terminal output, how can I change it?

Thanks in advance.

Why does the command echo `echo \\\\\\\z` in bash script print \\z instead of \\\z?

Posted: 07 Aug 2021 08:16 AM PDT

The command

echo `echo \\\\\\\z`  

is from this book. I don't understand why it prints

\\z  

when it is executed via a bash script.

I think it should print

\\\z  

Installing Sublime Text with apt fails due to missing public key

Posted: 07 Aug 2021 07:29 AM PDT

I am using these instructions to set up apt to install sublime-text:

https://www.sublimetext.com/docs/linux_repositories.html

https://wiki.debian.org/DebianRepository/UseThirdParty

How to add a third-party repo. and key in Debian?

However, when running apt update I am getting the following error regarding the signing key:

The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F57D4F59BD3DF454  

This is what I am doing:

Download key, convert key from ascii to binary, and move key to shared location:

curl https://download.sublimetext.com/sublimehq-pub.gpg | gpg --dearmor > ~/Downloads/sublime-keyring.gpg
sudo mkdir -vp /usr/local/share/keyrings/
sudo mv -v ~/Downloads/sublime-keyring.gpg /usr/local/share/keyrings/sublime-keyring.gpg
sudo chown -v root:root /usr/local/share/keyrings/sublime-keyring.gpg
sudo chmod -v 0640 /usr/local/share/keyrings/sublime-keyring.gpg

Create source list:

printf "deb [signed-by=/usr/local/share/keyrings/sublime-keyring.gpg] https://download.sublimetext.com/ apt/stable/" | sudo tee /etc/apt/sources.list.d/sublime-text.list  

Set pinning rules to restrict repo usage:

printf "%s\n" "Package: *" "Pin: origin download.sublimetext.com" "Pin-Priority: 1" "" "Package: sublime-text" "Pin: origin download.sublimetext.com" "Pin-Priority: 500" | sudo tee /etc/apt/preferences.d/sublime-text.pref  

Then I run sudo apt update which creates the following output regarding sublime text repo:

Get:1 https://download.sublimetext.com apt/stable/ InRelease [2.536 B]
Err:1 https://download.sublimetext.com apt/stable/ InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F57D4F59BD3DF454

W: GPG error: https://download.sublimetext.com apt/stable/ InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F57D4F59BD3DF454
E: The repository 'https://download.sublimetext.com apt/stable/ InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

When I run with debug for gpg, sudo apt -o Debug::Acquire::gpgv=True update, I get a few more details:

Get:1 https://download.sublimetext.com apt/stable/ InRelease [2.536 B]
0% [Waiting for headers] [Waiting for headers]inside VerifyGetSigners
Preparing to exec:  /usr/bin/apt-key --quiet --readonly --keyring /usr/local/share/keyrings/sublime-keyring.gpg verify --status-fd 3 /tmp/apt.sig.zwA50y /tmp/apt.data.zbzsmw
Read: [GNUPG:] NEWSIG
Read: [GNUPG:] ERRSIG F57D4F59BD3DF454 1 8 01 1627009220 9 -
Got ERRSIG F57D4F59BD3DF454 !
Read: [GNUPG:] NO_PUBKEY F57D4F59BD3DF454
Got NO_PUBKEY F57D4F59BD3DF454 !
gpgv exited with status 2
Summary:
  Good:
  Valid:
  Bad:
  Worthless:
  SoonWorthless:
  NoPubKey: NO_PUBKEY F57D4F59BD3DF454
  Signed-By:
  NODATA: no
Err:1 https://download.sublimetext.com apt/stable/ InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F57D4F59BD3DF454

W: GPG error: https://download.sublimetext.com apt/stable/ InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F57D4F59BD3DF454
E: The repository 'https://download.sublimetext.com apt/stable/ InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

I read this to mean that apt is correctly passing on the location of the key I have downloaded. It could be that gpg cannot read the contents of the key, or that the key was somehow mangled in the dearmoring process and no longer has the expected contents. This is pure guessing on my part.

When I inspect the dearmored key, sudo gpg --show-keys /usr/local/share/keyrings/sublime-keyring.gpg, I get the following, which seems plausibly correct:

pub   rsa4096 2017-05-08 [SCEA]
      1EDDE2CDFC025D17F6DA9EC0ADAE6AD28A8F901A
uid                      Sublime HQ Pty Ltd <support@sublimetext.com>
sub   rsa4096 2017-05-08 [S]

The key is readable, although it does not mention F57D4F59BD3DF454.

So what went wrong? How can I get this to function correctly?
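
One debugging step worth trying: NO_PUBKEY names the ID of the signing subkey, and gpg --show-keys hides subkey IDs by default, so the listing above doesn't prove the subkey is missing. Checking explicitly:

gpg --show-keys --with-subkey-fingerprint /usr/local/share/keyrings/sublime-keyring.gpg

If no subkey fingerprint ends in F57D4F59BD3DF454, the downloaded key is stale or was mangled and should be re-fetched.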

Conditional dependencies in `debian/control` file

Posted: 07 Aug 2021 09:34 AM PDT

I'm packaging software which depends on a particular version of a library shipped in the official Debian distribution. The software also bundles the sources of the library.

Is there any way to express in the debian/control file that for Debian versions older than 10 I don't need to specify anything, but for >= 10 I can use the shipped version as a build requirement?

Also, for the older versions (how do I detect them in the debian/rules file?) I have to pass an additional option to cmake at the configure step.
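
debian/control is static, so release-dependent logic usually ends up in debian/rules. A sketch that switches cmake options by release (the flag names are placeholders, and VERSION_ID may be absent on testing/sid):

# debian/rules fragment
DIST_VERSION := $(shell . /etc/os-release && echo $$VERSION_ID)
ifeq ($(shell test "$(DIST_VERSION)" -ge 10 2>/dev/null && echo yes),yes)
    CMAKE_EXTRA_FLAGS += -DUSE_SYSTEM_LIB=ON
else
    CMAKE_EXTRA_FLAGS += -DUSE_BUNDLED_LIB=ON
endif

On the control side there is no per-release conditional syntax; the common pattern is a versioned or alternative dependency such as Build-Depends: libfoo-dev (>= 1.2) | fallback-pkg.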

How do I add a different prefix to each line?

Posted: 07 Aug 2021 09:13 AM PDT

I have a list of names in a list/text file (file test.txt).

For example:

smith
johnson
west

How would I add every letter as a prefix to each line, and output the result as a new text file?

Desired output:

asmith
bsmith
csmith
dsmith
...
ajohnson
bjohnson
cjohnson
etc., etc.
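
A short sketch with awk, assuming the prefixes are the lowercase letters a-z (ASCII codes 97-122):

awk '{ for (c = 97; c <= 122; c++) printf "%c%s\n", c, $0 }' test.txt > prefixed.txt

Each input line is printed 26 times, once per letter, keeping all prefixes of one name grouped together as in the desired output.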

How to add ip rule after dockerd is up?

Posted: 07 Aug 2021 07:28 AM PDT

I want to add an ip rule that involves the docker0 interface, but docker0 does not exist until Docker is up, and ip rule fails if docker0 does not exist. I could use a script that retries until docker0 appears, but is there a more elegant way, like a hook or script triggered after Docker is up?
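
One approach that avoids polling is a oneshot systemd unit ordered after docker.service; a sketch (the rule arguments and the ip binary path are placeholders for your setup):

# /etc/systemd/system/docker0-ip-rule.service
[Unit]
Description=Add ip rule for docker0
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip rule add iif docker0 lookup 100
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Strictly speaking, docker.service being active does not guarantee docker0 already exists, so a udev rule matching the creation of the docker0 net interface is the fully race-free alternative.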

Get an accelerated X11 driver for "XGI Z7" GPU under Alpine Linux v3.14?

Posted: 07 Aug 2021 09:21 AM PDT

I am trying to get a modern Linux up and running on an elderly PC with a Vortex86DX CPU (i586) and a built-in GPU which is reported in dmesg as

[   21.246156] Console: switching to colour frame buffer device 100x37
[   21.256977] sisfb: 2D acceleration is enabled, y-panning enabled (auto-max)
[   21.257003] fb0: XGI Z7 frame buffer device version 1.8.9
[   21.257017] sisfb: Copyright (C) 2001-2005 Thomas Winischhofer

To my understanding this GPU was supported by the sis module, but when I try to run startx the XGI driver is attempted, fails, and then the fbdev driver is used:

[  2994.516] (==) Matched xgi as autoconfigured driver 0
[  2994.516] (==) Matched modesetting as autoconfigured driver 1
[  2994.516] (==) Matched fbdev as autoconfigured driver 2
[  2994.516] (==) Matched vesa as autoconfigured driver 3
[  2994.516] (==) Assigned the driver to the xf86ConfigLayout
[  2994.517] (II) LoadModule: "xgi"
[  2994.532] (WW) Warning, couldn't open module xgi
[  2994.532] (EE) Failed to load module "xgi" (module does not exist, 0)

The fbdev driver does its job but is rather slow. The VideoDriverFAQ at https://wiki.freedesktop.org/xorg/VideoDriverFAQ/ mentions that the sis driver should be used, but it is clearly not being autodetected. The sis module is available in a package and installed on the system.

How should I approach this?
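
Since autoconfiguration tries xgi before sis, one low-risk experiment is pinning the driver explicitly in an xorg.conf.d snippet (standard Xorg configuration; whether the sis driver actually binds this chip is the open question):

# /etc/X11/xorg.conf.d/10-sis.conf
Section "Device"
    Identifier "XGI Z7"
    Driver     "sis"
EndSection

/var/log/Xorg.0.log will then show whether the sis driver loads and claims the device.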

Tarring files with mtime after a date and with grep

Posted: 07 Aug 2021 08:36 AM PDT

Tons of threads out there, but nothing that includes the entire hat trick. I'm trying to tar all files newer than 2019-06-30 (±760 days) in a directory tree, but ignore any directories with the name backup. I've looked at dozens of options and this is as close as I've gotten:

tar -cvzf newest-files.tar.gz --newer-mtime "760 days ago" client_images/ | grep -v 'backup'  

This one does the proper filtering (date and ignore) but doesn't tar anything (empty archive error):

sudo find client_images/. -mtime -760 | grep -v 'backup'  

I can't figure out how to get the tar to happen.

 | tar -cvzf files.tar.gz  

What am I missing?

Thanks.

UPDATE:

This came from my system admin (who wishes to remain anonymous):

This command line runs the tar command for every file "find" finds: it creates (-c) a new tar archive (always called newestfiles.tar.gz) and then tars and compresses the file into that archive. So at the end you're left with one tar archive that contains only the last file the "find" command found; all previous tar archives were overwritten (since it's always the same name).

To prevent this from happening you need to use the append (-r) tar option instead of create (-c). Unfortunately this doesn't work with compression in one command, so that has to happen in a separate step. And if you don't need compression, you can do it with just the one. So,

$ find client_images -type f -mtime -760 -exec tar -rvf newestfiles.tar --exclude '*mcith*' {} \;
$ gzip newestfiles.tar
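
A variant that avoids spawning tar once per file: let find emit a NUL-separated list and hand it to GNU tar in one pass, doing the backup-directory exclusion at the find level:

find client_images -type f -mtime -760 ! -path '*backup*' -print0 |
  tar -czvf newest-files.tar.gz --null -T -

Here --null -T - tells tar to read the NUL-terminated file list from stdin, which also keeps filenames containing spaces intact.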

Writing data to a CSV file

Posted: 07 Aug 2021 09:16 AM PDT

I have a file generated every day which contains a number of web service requests and responses, and I want to write the contents of the requests and responses to a CSV file.

Something in the below format (contents of both request and response together):

phonenumber,RefID,DateTime,SOATransactionID,phonenumber,RefID  

Please note that only phonenumber, RefID and DateTime are to be extracted from the input request, and SOATransactionID, phonenumber and RefID from the output response.

Sample request & response

#######################Input Request#######################
</soapenv:Header><soapenv:Body>
        <cre:Customer>
            <cre:account>
                <cor:phonenumber>7654899089</cor:phonenumber>
            </cre:account>
            <cre:RefID>ABC1234</cre:RefID>
            <cre:DateTime>2002-04-20T00:00:06.774+01:00</cre:DateTime>
        </cre:Customer>
    </soapenv:Body></soapenv:Envelope>
#######################Output Response#######################
<?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:cor="http://reftest.com/coredata1"><soapenv:Header><cor:SOATransactionID>123-456-890</cor:SOATransactionID></soapenv:Header><soapenv:Body><ns3:Response xmlns:ns3="http://reftest.com/Testdata1"><ns3:successResponse><ns3:account><cor:phonenumber>7654899089</cor:phonenumber></ns3:account><ns3:RefID>ABC1234</ns3:RefID></ns3:successResponse></ns3:Response></soapenv:Body></soapenv:Envelope>
#######################Input Request#######################
</soapenv:Header><soapenv:Body>
        <cre:Customer>
            <cre:account>
                <cor:phonenumber>8766769089</cor:phonenumber>
            </cre:account>
            <cre:RefID>ABC1234</cre:RefID>
            <cre:DateTime>2002-04-20T00:00:06.774+01:00</cre:DateTime>
        </cre:Customer>
    </soapenv:Body></soapenv:Envelope>
#######################Output Response#######################
<?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:cor="http://reftest.com/coredata1"><soapenv:Header><cor:SOATransactionID>123-456-890</cor:SOATransactionID></soapenv:Header><soapenv:Body><ns3:Response xmlns:ns3="http://reftest.com/Testdata1"><ns3:successResponse><ns3:account><cor:phonenumber>8766769089</cor:phonenumber></ns3:account><ns3:RefID>ABC1234</ns3:RefID></ns3:successResponse></ns3:Response></soapenv:Body></soapenv:Envelope>
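
A rough sketch with GNU grep: the six wanted values happen to appear in document order per request/response pair (phonenumber, RefID, DateTime from the request, then SOATransactionID, phonenumber, RefID from the response), so extracting every match and folding each group of six lines into one row yields the CSV. This assumes the tags always look exactly like the sample:

grep -oP '<cor:phonenumber>\K[^<]+|<cre:RefID>\K[^<]+|<cre:DateTime>\K[^<]+|<cor:SOATransactionID>\K[^<]+|<ns3:RefID>\K[^<]+' logfile |
  paste -d, - - - - - -

For anything less regular than this, an XML-aware tool (e.g. xmlstarlet) is the safer route.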

Thanks

Add JSON objects to array using jq

Posted: 07 Aug 2021 09:30 AM PDT

My goal is to output a JSON object using jq on the output of a find command in bash. It could either be a one-line command or a bash script.

I have this command, which creates a JSON object from each line of output:

find ~/ -maxdepth 1 -name "D*" | \
while read line; \
do jq -n \
--arg name "$(basename "$line")" \
--arg path "$line" \
'{name: $name, path: $path}'; \
done

The output looks like this:

{    "name": "Desktop",    "path": "/Users/username/Desktop"  }  {    "name": "Documents",    "path": "/Users/username/Documents"  }  {    "name": "Downloads",    "path": "/Users/username/Downloads"  }  

But I need these objects to be in an array, and I need the array to be the value of a parent object's single key called items, like so:

{"items": [      {        "name": "Desktop",        "path": "/Users/username/Desktop"      },      {        "name": "Documents",        "path": "/Users/username/Documents"      },      {        "name": "Downloads",        "path": "/Users/username/Downloads"      }    ]  }  

I tried adding the square brackets to the jq output string for each line ('[{name: $name, path: $path}]') and that adds the brackets, but not the commas between the array elements.

I found possible solutions here but I could not figure out how to use them while looping through each line.
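
One way that keeps the existing loop unchanged: pipe the stream of objects into a final jq invocation with --slurp (-s), which collects them into an array that can then be wrapped:

find ~/ -maxdepth 1 -name "D*" | while read -r line; do
  jq -n --arg name "$(basename "$line")" --arg path "$line" \
     '{name: $name, path: $path}'
done | jq -s '{items: .}'

The -s flag reads the whole input stream as a single array, so the brackets and commas come out as valid JSON automatically.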

zsh testing existence of a key in an associative array via indirect expansion

Posted: 07 Aug 2021 09:41 AM PDT

So I know that you can test for the existence of a regular parameter via indirect expansion by doing something like:

foo=1
bar=foo
(( ${(P)+bar} )) && print "$bar exists"

And I know you can test for the existence of a key inside an associative array by doing something like:

foo=([abc]=1)
(( ${+foo[abc]} )) && print "abc exists"

However I can't figure out how to combine the two and test for the existence of a key inside an associative array via indirect expansion. Is this possible without using eval?

I tried several combinations including the following, and none of them worked:

foo=([abc]=1)
bar=foo
(( ${(P)+bar[abc]} )) && print "$bar has key abc" # Test fails
(( ${(P)+${bar}[abc]} )) && print "$bar has key abc" # Passes for nonexistent keys
(( ${${(P)+bar}[abc]} )) && print "$bar has key abc" # Test fails
(( ${${(P)bar}+[abc]} )) && print "$bar has key abc" # prints "zsh: bad output format specification"

can't change value in smp_affinity

Posted: 07 Aug 2021 08:05 AM PDT

I am trying to set IRQ affinity on Linux by changing the value in smp_affinity. When I echo the new value into the file I don't get any error, but when I read it back the value remains unchanged. I don't have irqbalance enabled, so I am not sure what else could be preventing me from changing it.

For example:

> cat /proc/irq/51/smp_affinity
f
> echo 1 > /proc/irq/51/smp_affinity
> cat /proc/irq/51/smp_affinity
f
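
Two checks worth trying: confirm the IRQ is actually in use (some interrupt controllers silently refuse affinity changes for inactive or chained interrupts), and try the list-based interface, which is easier to read back:

grep ' 51:' /proc/interrupts
echo 0 > /proc/irq/51/smp_affinity_list
cat /proc/irq/51/smp_affinity_list

If the write is silently reverted here too, the interrupt controller behind that IRQ may simply not support setting affinity.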

The "proper" way to test if a service is running in a script

Posted: 07 Aug 2021 08:50 AM PDT

My problem:

I'm writing a bash script and in it I'd like to check if a given service is running.

I know how to do this manually, with $ service [service_name] status.

But (especially since the move to systemd) that prints a whole bunch of text that's a little messy to parse. I assumed there's a command made for scripts with simple output or a return value I can check.

But Googling around only yields a ton of "Oh, just ps aux | grep -v grep | grep [service_name]" results. That can't be the best practice, can it? What if another instance of that command is running, but not one started by the SysV init script?

Or should I just shut up and get my hands dirty with a little pgrep?
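
For systemd hosts there is a purpose-built, script-friendly check; a small sketch:

# exit status 0 if the unit is active, non-zero otherwise:
if systemctl is-active --quiet "$service_name"; then
    echo "$service_name is running"
fi

The --quiet flag suppresses the output so only the exit status is consulted, which avoids parsing any text at all.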

Issue in running sudo visudo command?

Posted: 07 Aug 2021 07:27 AM PDT

I get the following error when running sudo visudo on my Ubuntu:

visudo: /etc/sudoers busy, try again later
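
This message usually means another visudo instance holds the lock on the sudoers file. Two checks (the lock file path is the conventional one; it can differ by build):

ps ax | grep '[v]isudo'
ls -l /etc/sudoers.tmp

If no other visudo process exists, removing a stale /etc/sudoers.tmp should let visudo run again.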

Location of Wifi SSID configs and how to make a hotspot in Ubuntu 14.04

Posted: 07 Aug 2021 09:01 AM PDT

What is the location of the wifi SSID configs in Ubuntu 14.04? Also, making a wifi connection act as a hotspot does not work.

I am unable to make a hotspot in Ubuntu 14.04 via the UI. One of the reasons I found was that the mode is set to infrastructure in the hotspot config and I need to change it. From an online search I tried locating the file in /etc/network/interfaces, but that configuration is not what I am looking for.
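
NetworkManager (which the Ubuntu 14.04 UI uses) keeps one keyfile per saved connection rather than using /etc/network/interfaces. A hedged sketch for finding and adjusting the hotspot profile (the connection name "myhotspot" is a placeholder):

ls /etc/NetworkManager/system-connections/
sudo sed -i 's/^mode=infrastructure$/mode=ap/' '/etc/NetworkManager/system-connections/myhotspot'
sudo service network-manager restart

Whether AP mode then works still depends on the wifi driver supporting AP operation.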
