Monday, November 1, 2021

Recent Questions - Unix & Linux Stack Exchange



SIGSTOP and SIGCONT and the bash choosing how to respond to them

Posted: 01 Nov 2021 10:22 AM PDT

I'm reading the freezer-subsystem documentation and I came across the following example of why SIGSTOP and SIGCONT do not always work as we expect them to:

$ echo $$
16644
$ bash
$ echo $$
16690

From a second, unrelated bash shell:

$ kill -SIGSTOP 16690
$ kill -SIGCONT 16690

<at this point 16690 exits and causes 16644 to exit too>

What I don't understand is the explanation: "This happens because bash can observe both signals and choose how it responds to them." Why can bash choose here? Isn't the command clearly stating what to do?
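A minimal sketch, not from the original post, illustrating the asymmetry the documentation alludes to: a process such as bash may install a handler for SIGCONT, but SIGSTOP can never be caught, blocked, or ignored.

# Hypothetical illustration in an interactive bash session:
trap 'echo "resumed: caught SIGCONT"' CONT
# A "kill -SIGSTOP $$" from another shell still stops this shell unconditionally,
# but on "kill -SIGCONT $$" the handler above runs in addition to resuming,
# so bash gets to decide what else to do when it is continued.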

Can't rsync to a destination in /etc

Posted: 01 Nov 2021 09:53 AM PDT

After an upgrade from Debian 10 to 11, an rsync job with a destination in /etc no longer works.

On the server (destination) side I have this:

uid = root
gid = root

hosts allow = mysender

[mymodule]

# NOGO
path = /etc/tmp

# GO
# path = /root/tmp

comment = Just for testing
read only = false

On the client side (source) I enter this command:

rsync -a /etc/temp/test myserver::mymodule  

Which makes rsync fail with this message:

rsync: [generator] recv_generator: mkdir "/test" (in mymodule) failed: Read-only file system (30)
*** Skipping any contents from this failed directory ***
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]

Configuring the server to use a path outside of /etc (for example /root/tmp) does work as expected.

The root file system (which also contains the /etc directory) is of course not mounted read-only, and it is clean. So why does rsyncd consider /etc to be part of a read-only file system?
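One thing worth checking, offered as an assumption rather than something stated in the post: whether the rsync daemon itself runs in a sandbox that remounts /etc read-only, for example a systemd unit with ProtectSystem=. A rough diagnostic sketch:

# How /etc looks from the host:
findmnt -no OPTIONS --target /etc

# How /etc looks from inside the daemon's mount namespace
# (finding the daemon PID via pgrep is a hypothetical illustration):
sudo nsenter -t "$(pgrep -xo rsync)" -m findmnt -no OPTIONS --target /etc

# Any sandboxing directives in the unit would show up here:
systemctl cat rsync | grep -i protect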

How to search and replace text in files but keep/reuse a certain part of text?

Posted: 01 Nov 2021 10:02 AM PDT

I want to refactor many JavaScript files across many directories that access an object in this format: myObj.something.somethingElse, and I want it to become myObj.getSomething(somethingElse). How can I achieve this in shell?
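A sketch of one possible approach, assuming GNU sed and that the literal names myObj and something are what should be matched; the character class for the trailing property name is an assumption, so adjust it to the real identifiers and test on a copy first:

find . -type f -name '*.js' -exec sed -i.bak -E \
    's/myObj\.something\.([A-Za-z_$][A-Za-z0-9_$]*)/myObj.getSomething(\1)/g' {} +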

Why do I get permission denied for a directory with acl set for the owner of the directory (after removing all standard posix permissions)?

Posted: 01 Nov 2021 10:36 AM PDT

I'm attempting to execute the ls command on a directory which has acl permissions for the owner and group of the directory (with no standard posix permissions set). This results in a Permission Denied even though getfacl says the user should be able to do so.

Here's what I'm doing:

  1. Create a directory and a file inside it.

mkdir /tmp/mydir && touch /tmp/mydir/myfile

  2. Check if I can execute ls on this directory.

jgazula@gazula:/tmp$ ls -al /tmp/mydir/
total 896
drwxrwxr-x  2 jgazula jgazula   4096 Nov  1 11:57 .
drwxrwxrwt 25 root    root    909312 Nov  1 11:57 ..
-rw-rw-r--  1 jgazula jgazula      0 Nov  1 11:57 myfile

  3. Now, let's remove all the standard posix permissions on this directory.

chmod 000 /tmp/mydir

  4. Verify the permissions.

jgazula@gazula:/tmp$ ls -al /tmp | grep mydir
d---------  2 jgazula jgazula   4096 Nov  1 11:57 mydir

  5. We shouldn't be able to ls now.

jgazula@gazula:/tmp$ ls -al /tmp/mydir/
ls: cannot open directory '/tmp/mydir/': Permission denied

  6. Set the acl permissions for the jgazula user and group.

sudo setfacl --mask -Rm u:jgazula:rwx,g:jgazula:rwx /tmp/mydir/

  7. Verify the acl permissions.

jgazula@gazula:/tmp$ getfacl -ep /tmp/mydir/
# file: /tmp/mydir/
# owner: jgazula
# group: jgazula
user::---
user:jgazula:rwx        #effective:rwx
group::---              #effective:---
group:jgazula:rwx       #effective:rwx
mask::rwx
other::---

  8. Since the acl permissions (including the effective permissions) look good, I should be able to execute ls on the directory?

jgazula@gazula:/tmp$ ls -al /tmp/mydir/
ls: cannot open directory '/tmp/mydir/': Permission denied

But I can't and I don't understand why.

  9. Interestingly enough, when I check the standard posix permissions, the group permission bits have been set? Not sure I understand why only the group permissions have been updated.

jgazula@gazula:/tmp$ ls -al /tmp | grep mydir
d---rwx---+  2 jgazula jgazula   4096 Nov  1 12:13 mydir

  10. Let's set the acl permissions for the owner and group (i.e., omit the user/group name from the command).

sudo setfacl --mask -Rm u::rwx,g::rwx /tmp/mydir/

  11. Verify the acl permissions again.

jgazula@gazula:/tmp$ getfacl -ep /tmp/mydir/
# file: /tmp/mydir/
# owner: jgazula
# group: jgazula
user::rwx
user:jgazula:rwx        #effective:rwx
group::rwx              #effective:rwx
group:jgazula:rwx       #effective:rwx
mask::rwx
other::---

  12. Check if I can execute ls now.

jgazula@gazula:/tmp$ ls -al /tmp/mydir/
total 896
drwxrwx---+  2 jgazula jgazula   4096 Nov  1 11:57 .
drwxrwxrwt  25 root    root    909312 Nov  1 11:57 ..
-rwxrwxr--+  1 jgazula jgazula      0 Nov  1 11:57 myfile

Why does step 6 not work by itself? I'm setting the acl permissions explicitly for a user and group. Why do I also need step 10 (setting the owner entry with u::rwx)?
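A note and sketch added here as an assumption, not part of the original post: under POSIX ACL semantics the owner of a file is matched by the owner entry user::, never by a named user:jgazula: entry, which would explain why step 6 leaves the owning user at "---". Updating the owner entry is equivalent to a plain chmod:

chmod u+rwx /tmp/mydir                 # same effect on the user:: entry as: setfacl -m u::rwx /tmp/mydir
getfacl /tmp/mydir | grep '^user::'    # should now show user::rwx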

Problem with installing Realtek rtl8188eus driver for TP-Link TL-WN722N v2 WiFi adapter in Ubuntu 20.04

Posted: 01 Nov 2021 10:04 AM PDT

I'm new to Linux. I'm using the Linux Lite 5.2 OS (Ubuntu 20.04). While trying to install a Realtek driver for a TP-Link TL-WN722N v2 WiFi adapter built from git clone https://github.com/aircrack-ng/rtl8188eus, it gives this error message:

cut: /etc/redhat-release: No such file or directory
  Building modules, stage 2.
  MODPOST 1 modules
  CC [M]  /home/user/rtl8188eus/8188eu.mod.o
  LD [M]  /home/user/rtl8188eus/8188eu.ko
make[1]: Leaving directory '/usr/src/linux-headers-5.4.0-89-generic'
cut: /etc/redhat-release: No such file or directory
install -p -m 644 8188eu.ko  /lib/modules/5.4.0-89-generic/kernel/drivers/net/wireless/
/sbin/depmod -a 5.4.0-89-generic

I ran the command with sudo but still got this error. I tried to create a redhat-release directory myself with mkdir in /etc/, made it writable with chmod +rwx, and gave permissions with sudo chmod -R 777 /etc/redhat-release, but then it shows this error message:

cut: /etc/redhat-release: Is a directory
make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/5.4.0-89-generic/build M=/home/user/rtl8188eus  modules
make[1]: Entering directory '/usr/src/linux-headers-5.4.0-89-generic'
cut: /etc/redhat-release: Is a directory
cut: /etc/redhat-release: Is a directory
  Building modules, stage 2.
  MODPOST 1 modules
make[1]: Leaving directory '/usr/src/linux-headers-5.4.0-89-generic'
cut: /etc/redhat-release: Is a directory
install -p -m 644 8188eu.ko  /lib/modules/5.4.0-89-generic/kernel/drivers/net/wireless/
/sbin/depmod -a 5.4.0-89-generic

The internet works when using the adapter, but I can't turn on monitor mode on it. When running sudo airmon-ng, "null" is shown in the "PHY" column for the adapter, which is probably due to the unsuccessful driver installation.
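An assumption worth noting, not something stated in the post: the build output suggests the Makefile merely probes /etc/redhat-release with cut to detect Red Hat-based distributions, so on Ubuntu the "No such file or directory" message should be harmless, and the directory created by hand can simply be removed:

sudo rmdir /etc/redhat-release    # hypothetical cleanup of the directory created earlier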

Does anyone have a solution to this? Please help...

Wi-Fi network not working on Fedora 32 - connection times out

Posted: 01 Nov 2021 09:14 AM PDT

I am running Fedora 32 on a used HP Z420 workstation. My Internet connection worked all OK when I was using an Ethernet connection, but after I moved to an apartment that did not have an Ethernet output and had to use Wi-Fi instead, I can't get the Wi-Fi connection on the computer to work.

The problem is not in the Wi-Fi connection itself, as the Windows 10 laptop I have been using for remote work is able to connect to the Wi-Fi all OK.

I am using a TP-Link TL-WN823N USB Wi-Fi adapter plugged in to the computer's USB port. When I run lsusb it shows the adapter as: Bus 001 Device 003: ID 2357:0109 TP-Link TL-WN823N v2/v3 [Realtek RTL8192EU]

When I boot up the computer and log in to Fedora, the system is able to find the Wi-Fi network as its name shows up when I click on the network icon on the taskbar, along with several other Wi-Fi networks (that belong to other people and whose passwords I don't know). When I click on the network name, Fedora asks me for a network password. When I type it and click "Connect", nothing happens. I don't get connected and I don't even get a message about a wrong password or anything.

Here is what appears when I run dmesg:

[  622.856644] iwlwifi: unknown parameter 'wd_disable' ignored
[  622.856648] iwlwifi: unknown parameter 'bt_coex_acrive' ignored
[  622.856707] Intel(R) Wireless WiFi driver for Linux
[  640.346741] wlp0s26u1u3: authenticate with 9e:0e:8b:55:27:c7
[  640.365595] wlp0s26u1u3: send auth to 9e:0e:8b:55:27:c7 (try 1/3)
[  640.569164] wlp0s26u1u3: send auth to 9e:0e:8b:55:27:c7 (try 2/3)
[  640.777145] wlp0s26u1u3: send auth to 9e:0e:8b:55:27:c7 (try 3/3)
[  640.985121] wlp0s26u1u3: authentication with 9e:0e:8b:55:27:c7 timed out

I have read this SuperUser StackExchange post: https://superuser.com/questions/911635/wifi-authentication-times-out and tried the solution. It didn't work. It seems Fedora is ignoring the "wd_disable" parameter. It is also ignoring another parameter "bt_coex_acrive" (yes, it is really spelled "acrive" in the dmesg output, not "active") that I don't know where it comes from. It does not appear in any file in the directory /etc/modprobe.d.
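A diagnostic sketch added as an assumption, not from the post: the TL-WN823N is a Realtek RTL8192EU device, so the iwlwifi lines above relate to the Intel driver rather than the USB adapter, and the stray options likely live in some leftover config file. Something like this could locate them and confirm which driver the adapter uses:

grep -r -e wd_disable -e bt_coex_acrive /etc/modprobe.d /usr/lib/modprobe.d 2>/dev/null
lsusb -t | grep -i driver    # shows which kernel driver is bound to each USB device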

I am able to type this StackExchange post because I'm currently at work in my company's office, where there is a working Internet connection.

Does anyone have an idea what is causing this and how I can fix it? I can provide more details if needed but there is a lag because I have to switch between my home computer (to diagnose the problem) and my work computer (to report it to StackExchange).

If the solution involves installing more software with dnf then I don't know how that would work as to be able to use dnf I would need an Internet connection in the first place.

GlobalProtect (alternative) on Linux

Posted: 01 Nov 2021 09:12 AM PDT

My new company gave me a Mac and I'm having a nasty time getting used to it. This post will likely irritate some of you, but please bear in mind I've been using Linux and Windows computers for years, with many keys added to muscle memory. Almost no one knows less about using a Mac than I do. Anyway, they're not able to get me a Windows computer yet, but they seem open to me installing Linux on it once I have it, which I'd like to do. Most of my corporate apps are web driven, including email.

My biggest hurdle now is signing into a VPN. They're using GlobalProtect. I downloaded a client and it actually runs, but it does not work. The IT people suggest signing in via LDAP. Assuming this is a workable thing, how would this work? I have the VPN host.

cat /etc/*release
DISTRIB_ID=LinuxMint
DISTRIB_RELEASE=20.2
DISTRIB_CODENAME=uma
DISTRIB_DESCRIPTION="Linux Mint 20.2 Uma"
NAME="Linux Mint"
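A sketch added as a suggestion, not taken from the post: the open-source OpenConnect client can often talk to GlobalProtect gateways, and on Mint it could be tried roughly like this, where vpn.example.com is a placeholder for the VPN host:

sudo apt install openconnect network-manager-openconnect-gnome
sudo openconnect --protocol=gp vpn.example.com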

How to parse a file and write to separate columns

Posted: 01 Nov 2021 10:18 AM PDT

I am trying to parse the output from antiSMASH to count the number of BGCs. I found scripts that people have written in Python, which I know nothing about, so I am trying to figure this out with a bash script.

The GenBank-format files with predicted clusters look like this:

head -30 sca_11_chr8_3_0.region001.gbk
LOCUS       sca_11_chr8_3_0        45390 bp    DNA     linear   UNK 01-JAN-1980
DEFINITION  sca_11_chr8_3_0.
ACCESSION   sca_11_chr8_3_0
VERSION     sca_11_chr8_3_0
KEYWORDS    .
SOURCE
  ORGANISM
            .
COMMENT     ##antiSMASH-Data-START##
            Version      :: 6.0.1-a859617(changed)
            Run date     :: 2021-10-31 18:00:02
            NOTE: This is a single cluster extracted from a larger record!
            Orig. start  :: 169481
            Orig. end    :: 214871
            ##antiSMASH-Data-END##
FEATURES             Location/Qualifiers
     protocluster    1..45390
                     /aStool="rule-based-clusters"
                     /contig_edge="False"
                     /core_location="join{[194767:194871](-),
                     [194650:194652](-), [191596:194619](-),
                     [189481:191503](-)}"
                     /cutoff="20000"
                     /detection_rule="cds(Condensation and (AMP-binding or
                     A-OX))"
                     /neighbourhood="20000"
                     /product="NRPS"
                     /protocluster_number="1"
                     /tool="antismash"
     proto_core      complement(join(20001..22022,22116..25138,25170..25171,

tail sca_11_chr8_3_0.region001.gbk

    44881 ggagcttgtg gagagaagtg agacgtatcg cacgaatgct cttcagcaga tgctgggcag
    44941 ttagaggatt tgcactttag tttcatagag ttgatgtgtc gaggagataa tttgagatac
    45001 cagtatatgt aatttaccta cctacctagt cgagattgga cattgtacaa gagaaataac
    45061 aactaactat acgagacaag cctgatgtgt tgatagtttc attcatgtct ggtgtttgtg
    45121 gcatgtttat gttggagtag ctgtacagaa gataccgcgc tattcccagt gatcatggcc
    45181 cccacgcctc caactcggca cctgaccttg atcccctttg ggaagcatgt ctcagtgtct
    45241 cagccgtgag ccgtagaggc tgcacagcat ggagaagctg tcctgtcaat tcaggggatt
    45301 tgcccacggg ggctatcata tgatgaatct cggacaccct acacgttgtt accgcctttc
    45361 ttagctcctg ctggtagccg tcccctgaac
//

First I concatenated the .gbk files into one file containing all the predicted clusters, and then grepped for the lines that give me the locus ID, the start and end of the cluster, and the cluster type.

cat sca_*.gbk > Necha2_SMclusters.gbk
grep "DEFINITION\|Orig\|product=" Necha2_SMclusters.gbk > Necha2_SMclusters_filtered.txt

which gives me a file like this

DEFINITION  sca_32_chr11_3_0.
            Orig. start  :: 381231
            Orig. end    :: 428233
                     /product="T1PKS"
                     /product="T1PKS"
                     /product="T1PKS"
                     /product="T1PKS"
DEFINITION  sca_32_chr11_3_0.
            Orig. start  :: 464307
            Orig. end    :: 486217
                     /product="terpene"
                     /product="terpene"
                     /product="terpene"
                     /product="terpene"
DEFINITION  sca_33_chr6_1_0.
            Orig. start  :: 140267
            Orig. end    :: 227928
                     /product="NRPS-like"
                     /product="T1PKS"
                     /product="NRPS-like"
                     /product="T1PKS"
                     /product="NRPS-like"
                     /product="NRPS-like"
                     /product="NRPS-like"
                     /product="T1PKS"
                     /product="T1PKS"
                     /product="T1PKS"
DEFINITION  sca_39_chr11_5_0.
            Orig. start  :: 270154
            Orig. end    :: 324310
                     /product="NRPS"
                     /product="NRPS"
                     /product="NRPS"
                     /product="NRPS"

From this file I want to obtain a file which looks like this.

Locus name          start   end     ClusterType
sca_9_chr7_10_0.    369577  421460  T1PKS,NRPS
sca_33_chr6_1_0.    140267  227928  NRPS-like, T1PKS
sca_32_chr11_3_0    381231  428233  T1PKS

For now, this is what I need: a file with all predicted clusters in it.

Thank you so much!!
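A sketch of one possible awk approach, added here rather than taken from the post; it assumes the filtered file has exactly the structure shown above and that duplicate product types within a cluster should be collapsed:

awk '
    BEGIN           { OFS = "\t"; print "Locus name", "start", "end", "ClusterType" }
    /DEFINITION/    { if (locus != "") print locus, start, end, types
                      locus = $2; start = end = types = ""; split("", seen) }
    /Orig\. start/  { start = $NF }
    /Orig\. end/    { end = $NF }
    /product=/      { t = $0
                      sub(/.*product="/, "", t); sub(/".*/, "", t)
                      if (!(t in seen)) { seen[t] = 1; types = (types ? types "," : "") t } }
    END             { if (locus != "") print locus, start, end, types }
' Necha2_SMclusters_filtered.txt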

Count the number of rows with a given number of columns

Posted: 01 Nov 2021 08:28 AM PDT

I have several files with the following content:

GGHTERR_01218   GGHTERR_02418   GGHTERR_01991
GGHTERR_02211   GGHTERR_02297   GGHTERR_02379
GGHTERR_02294   GGHTERR_02455   GGHTERR_02374
GGHTERR_00532   GGHTERR_00534
GGHTERR_00533   GGHTERR_00535
GGHTERR_00776   GGHTERR_00779
GGHTERR_01220   GGHTERR_01620
GGHTERR_01760   GGHTERR_01761
GGHTERR_01774   GGHTERR_02404
GGHTERR_01889   GGHTERR_01890
GGHTERR_02081   GGHTERR_02287
GGHTERR_02152   GGHTERR_02153
GGHTERR_02260   GGHTERR_02321
GGHTERR_02295   GGHTERR_02375
GGHTERR_02419   GGHTERR_02437
GGHTERR_02420   GGHTERR_02438
GGHTERR_02430   GGHTERR_02448
GGHTERR_00001
GGHTERR_00002
GGHTERR_00003
GGHTERR_00004
GGHTERR_00005
GGHTERR_00006
GGHTERR_00007

I would like to know if there is an easy way to count the number of rows that have 3 columns, 2 columns and 1 column.

So the output should look like:

3 columns: 3
2 columns: 14
1 column: 7
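A sketch of one way to do this with awk, where file is a placeholder for one of the input files:

awk '{ count[NF]++ } END { for (n in count) printf "%d columns: %d\n", n, count[n] }' file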

Compare ownership and permissions of all files in 2 directories in bash

Posted: 01 Nov 2021 10:37 AM PDT

I'm trying to fetch the ownership and permissions of all files in two directories and compare them, reporting files with the same name but different ownership or permissions. I have fetched the ownership and permissions of all files in the first directory into file1.txt and of the second directory into file2.txt.

My script progress:

[root@test]# cat file1.txt
644 root root /home/user2/sample-test/abc
644 root root /home/user2/sample-test/bcd
644 root root /home/user2/sample-test/efg
644 root root /home/user2/sample-test/mama
644 root root /home/user2/sample-test/ngins2
644 root root /home/user2/sample-test/nils45
644 root root /home/user2/sample-test/sample2
644 root root /home/user2/sample-test/t1
644 root root /home/user2/sample-test/t2
644 root root /home/user2/sample-test/test1
755 root root /home/user2/sample-test
644 root root /home/user2/sample-test1/abc
644 root root /home/user2/sample-test1/ppp
644 root root /home/user2/sample-test1/werwre
755 root root /home/user2/sample-test1
644 root root /home/user2/testing123

[root@test]# cat file2.txt
644 root root /home/user2/sample-test/ip
644 root root /home/user2/sample-test/new-file
644 root root /home/user2/sample-test/ngins2
644 root root /home/user2/sample-test/nils45
644 root root /home/user2/sample-test/sample2
755 root root /home/user2/sample-test
755 root root /home/user2/sample-test/test1.sh
644 apache apache /home/user2/sample-test1/ppp
644 apache fes /home/user2/sample-test1/abc
644 root root /home/user2/sample-test1/perms.saved
644 root root /home/user2/sample-test1/test
644 root root /home/user2/sample-test1/test1
644 root root /home/user2/sample-test1/werwre
755 root root /home/user2/sample-test1
755 root root /home/user2/sample-test1/1.sh
644 root root /home/user2/testing123


find /path/to/dir1 -depth -exec stat --format '%a %U %G %n' {} + | sort -n" >> file1.txt
find /path/to/dir2 -depth -exec stat --format '%a %U %G %n' {} + | sort -n" >> file2.txt

t1=`cat file1.txt`
t5=`cat file2.txt`

#find lines only in file1
only1=$(comm -23 "$t1"_sorted "$t5"_sorted)

#find lines only in file2
only2=$(comm -13 "$t1"_sorted "$t5"_sorted)

I'm facing challenges while handling these 2 situations:

  1. The case where a file is missing in dir1 or dir2 should be handled. Consider the files in dir1 to be correct, and the files in dir2 to have messed-up permissions/ownership. I just want to compare files which have the same name in dir1 and dir2 but different ownership/permissions.
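A sketch, not from the post, of one way to compare only the paths present in both lists; it assumes the format shown above, i.e. the path is the 4th field and contains no spaces. Files present in only one list are simply dropped by join, or could be reported with its -v option.

sort -k4 file1.txt > file1.sorted
sort -k4 file2.txt > file2.sorted

# After the join, field 1 is the path, fields 2-4 come from dir1 and fields 5-7 from dir2;
# print only the entries whose mode/owner/group differ:
join -1 4 -2 4 file1.sorted file2.sorted |
awk '($2 "/" $3 "/" $4) != ($5 "/" $6 "/" $7)'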

In Unix, what is meant by "Everything is a byte stream"?

Posted: 01 Nov 2021 09:11 AM PDT

I am a newbie to Linux, and while exploring the file system I quite often encounter the phrase "Everything is a file". I do see an answer to this question here, but I am still failing to grasp the concept. In that answer it is mentioned that, more precisely, "Everything is a stream of bytes".

I don't get what it means to say that the monitor/keyboard etc. are represented as a "stream of bytes". Can someone help me visualize this?
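A small illustration, added here under the assumption of a typical Linux desktop with root access: the same read/write byte interface works on "files" that are really devices.

sudo head -c 16 /dev/input/mice | od -An -tx1   # mouse movements arrive as raw bytes (move the mouse)
head -c 16 /dev/urandom | od -An -tx1           # the kernel RNG is read like any ordinary file
echo hello > /dev/tty                           # the terminal is written to like any ordinary file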

Can I post-format column tab sizes so that spacing is at lowest common denominator in Linux?

Posted: 01 Nov 2021 08:46 AM PDT

I can use stat to create an ls output that shows both formats of permission information which can be handy:

stat --printf="%A\t%a\t%h\t%U\t%G\t%s\t%.19y\t%n\n" . .*

drwxr-xr-x      755     4       boss    boss    4096    2021-10-29 22:49:12     .
drwxr-xr-x      755     4       boss    boss    4096    2021-10-29 22:49:12     .
drwxr-xr-x      755     36      boss    boss    4096    2021-11-01 11:30:24     ..
-rw-r--r--      644     1       boss    boss    97708   2021-11-01 11:30:16     .custom
-rw-r--r--      644     1       boss    boss    4013    2021-10-11 22:04:04     .custom-dk

However, the spacing between columns uses \t, which works but is quite 'gappy'. This made me curious: is there a generic way to post-process any output like this so that the columns are separated by one-space gaps at the lowest common denominator? That is, is there a generic way to adjust the above to something like the below using awk or sed or similar (I'm also right-justifying just the number columns as an 'ideal' output, if that's possible)?

drwxr-xr-x 755  4 boss boss  4096 2021-10-29 22:49:12 .
drwxr-xr-x 755  4 boss boss  4096 2021-10-29 22:49:12 .
drwxr-xr-x 755 36 boss boss  4096 2021-11-01 11:30:24 ..
-rw-r--r-- 644  1 boss boss 97708 2021-11-01 11:30:16 .custom
-rw-r--r-- 644  1 boss boss  4013 2021-10-11 22:04:04 .custom-dk
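A partial sketch, assuming a reasonably recent util-linux: column -t can collapse the tab-separated fields to minimally padded columns, though it left-justifies everything, so the right-justified numbers shown above would still need awk or similar.

stat --printf="%A\t%a\t%h\t%U\t%G\t%s\t%.19y\t%n\n" . .* | column -t -s $'\t' -o ' '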

TPM support does not work on Fedora 35

Posted: 01 Nov 2021 09:57 AM PDT

I have this issue with the latest Fedora 35 beta.

Clevis encryption does not work, although I can see the TPM being active in the logs. I tried the enable operation from the BIOS with no luck.

Please, see details here:

dmesg | grep -i tpm

[    0.000000] efi: ACPI=0x45bfe000 ACPI 2.0=0x45bfe014 TPMFinalLog=0x45ac5000 SMBIOS=0x439e3000 SMBIOS 3.0=0x439e1000 MEMATTR=0x3f8dc018 ESRT=0x3f8ea298 MOKvar=0x3f8df000 RNG=0x439e4b18 TPMEventLog=0x39f43018
[    0.008084] ACPI: SSDT 0x0000000045BE1000 00077B (v02 INSYDE Tpm2Tabl 00001000 INTL 20160422)
[    0.008086] ACPI: TPM2 0x0000000045BE0000 00004C (v04 INSYDE TGL-ULT  00000002 ACPI 00040000)
[    0.008128] ACPI: Reserving TPM2 table memory at [mem 0x45be0000-0x45be004b]
[    1.192488] tpm_tis NTC0702:00: 2.0 TPM (device-id 0xFC, rev-id 1)

sudo echo hi | clevis encrypt tpm2 '{}' > my.jwe

Place your finger on the fingerprint reader
ERROR:tcti:src/tss2-tcti/tcti-device.c:442:Tss2_Tcti_Device_Init() Failed to open specified TCTI device file /dev/tpmrm0: Permission denied
ERROR:tcti:src/tss2-tcti/tctildr-dl.c:154:tcti_from_file() Could not initialize TCTI file: device
ERROR:tcti:src/tss2-tcti/tctildr.c:428:Tss2_TctiLdr_Initialize_Ex() Failed to instantiate TCTI
Error executing command: TPM error: response code not recognized
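An observation added here as an assumption, not from the post: in sudo echo hi | clevis ..., sudo elevates only echo, while clevis itself still runs unprivileged, which would explain the Permission denied opening /dev/tpmrm0. A sketch of the elevated form:

echo hi | sudo clevis encrypt tpm2 '{}' > my.jwe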

EDIT: Opened a bug: https://bugzilla.redhat.com/show_bug.cgi?id=2018978

How to get names of the files which contain the specified text only

Posted: 01 Nov 2021 09:13 AM PDT

I have a directory named "labels" in which there are text files which contain labels for "cat" or "dog" or both on separate lines.
Contents of files in labels directory are:

cat labels/1.txt
cat

cat labels/2.txt
dog

cat labels/3.txt
cat
dog

I want to get the names of the files which contain the label "cat" only. I tried the following command:

ls labels | grep -Rwl "cat"
labels/1.txt
labels/3.txt

But this command returns the names of the files which contain "cat" or both labels, whereas my requirement is to get the file names which contain only "cat", not both "cat" and "dog".
Similarly, when I try to get the names of the files which contain "dog" only and search in the same fashion, it returns the file names which contain "dog" or both labels.

ls labels | grep -Rwl "dog"
labels/2.txt
labels/3.txt
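A sketch of one approach, assuming the file names contain no whitespace: first list the files containing "cat", then filter out those that also contain "dog" (grep -L lists files that do NOT match).

grep -lw cat labels/*.txt | xargs grep -Lw dog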

Capture normal (stdout) output along with error (stderr) output

Posted: 01 Nov 2021 08:27 AM PDT

There is a script run via cron with the following line:

0 * * * * (/var/script.sh | tee -a /var/script.log)  

How do I rewrite the cron entry to capture both normal stdout and error stderr output? They are to be placed in different files.
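One possible rewrite, with the log file names as placeholders: append stdout and stderr to separate files.

0 * * * * /var/script.sh >> /var/script.log 2>> /var/script.err

If the tee for stdout should be kept, only stderr needs redirecting, e.g. (/var/script.sh 2>> /var/script.err | tee -a /var/script.log).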

Setting up DHCP on RHEL8

Posted: 01 Nov 2021 08:56 AM PDT

I am setting up a firewall/gateway router using RHEL 8. I have a server with two NICs: one public-facing, which is a DHCP client, and a second NIC which will be internal-facing. The first NIC is in the Public zone, the second NIC is in the Internal zone. I would like to make the internal-facing NIC a DHCP server for internal clients.

I need to block my DHCP server from receiving DHCP requests on the public zone.

Question: Can you configure DHCP to be a server only for a specific NIC, or do you manage this with firewall rules to block all DHCP from the public zone? What is a good practice when setting up a multi-function gateway like this?
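A sketch under assumptions, not from the post: with ISC dhcpd the daemon only answers on interfaces whose subnet is declared in dhcpd.conf, and the interface to listen on can also be passed explicitly on the dhcpd command line. The addresses below are placeholders.

# /etc/dhcp/dhcpd.conf -- declare only the internal subnet:
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.100 192.168.10.200;
    option routers 192.168.10.1;
}

Restricting the daemon to the internal NIC can then be done by appending that interface's name to the dhcpd invocation (for example in a systemd unit override), and/or by leaving the dhcp service out of the public firewalld zone.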

Function tab-completion not matching that of wrapped command

Posted: 01 Nov 2021 10:23 AM PDT

I've got a function defined in my fish shell:

function cl --wraps=cd
    cd $argv && ls -l --color=auto
end

According to man function, the --wraps option "causes the function to inherit completions from the given wrapped command."

However, when I type cl and start to tab-complete, I'm shown options which include non-directories (like .c files). However, when I type cd and then tab-complete, I'm only shown directories.

Did I define my function incorrectly?
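A possible workaround sketch, offered as an assumption rather than something from the fish documentation quoted above: completions can also be registered explicitly for the function.

# either restate the wrapping at the completion level:
complete --command cl --wraps cd
# or offer only directories for cl's argument:
complete --command cl --exclusive --arguments '(__fish_complete_directories)'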

Bash array and output

Posted: 01 Nov 2021 09:13 AM PDT

Part of my script outputs a folder listing to tmp. The rev | cut is necessary to remove parts of it that are not needed. I want to output each line separately to a part of my script. I am using an array, and I can get it to output the first line, the second, etc. if I put the index in the brackets, as in ${myarray[0]}. What I really need is to have this section filled out for each line of the file.

The contents of tmp are like so.

C:\xxxx\DXF FILES\20038100.SLDPRT
C:\xxxx\DXF FILES\20136210.SLDPRT
C:\xxxx\DXF FILES\4_2-1.igs
C:\xxxx\DXF FILES\KC900.igs
C:\xxxx\DXF FILES\MetalSheet_Pusher.step
C:\xxxx\DXF FILES\Sheet Metal Part 8.igs

This is what I have so far.

#!/bin/bash

set -x
cat tmp | rev | cut -d"\\" -f1 | rev | cut -d '.' -f1 > 1.txt

declare -a myarray
let i=0
while IFS=$'\n' read -r line_data; do
        myarray[i]="${line_data}"
        ((++i))
done < 1.txt
echo "<File>"${myarray[0]}"</File>" > out.txt
rm 1.txt

The output looks like this, and I need each line filled out in succession with the array. Thanks.

<File>20038100</File>  

@roaima Thanks for the help I really appreciate it. I should elaborate a bit further because the hard part is not getting this output but using it to fill out another section of the script. I already have an array to fill out a section for each file in the folder but it is just the location and works well enough using ls on the folder. This runs part of the script for every file in the folder. My issue is, I need to take the contents of the folder and put each section in 4 different places for each file in the folder. It is filling out an xml file that I need to run a batch from. Been scratching my head on this and I need to do it with Bash for now, might be able to go Python in the future I hope.

while IFS=$',' read -r -a arry;  do
echo '        <Part>              <Input>' >> $file_out
echo '                <File>'${arry[0]}'</File>' >> $file_out

The section below is where I need to place each line from tmp into 4 places in my script, for each line in tmp. This can be many files or as few as one, but I need 4 entries in my xml for each line in the original folder.

cat tmp | rev | cut -d"\\" -f1 | rev | cut -d '.' -f1 > 1
declare -a myarray
let i=0
while IFS=$'\n' read -r line_data; do
        myarray[i]="${line_data}"
        ((++i))
done < 1
echo '                  <File>'${myarray[0]}'</File>
                        </NCFile>
                        <Graphics>
                            <Save>true</Save>
                                <Directory>C:\xxxx\OUTPUT\NC FILES</Directory>
                                <File>'${myarray[0]}'</File>
                        </Graphics>
                        <FlatPatternDXF>
                                <Save>true</Save>
                                <Directory>C:\xxxx\OUTPUT\DXF FILES</Directory>
                                <DXFSetting>xxxx</DXFSetting>
                                <File>'${myarray[0]}'</File>
                        </FlatPatternDXF>
                        <xxxxile>
                                <Save>true</Save>
                                <Directory>C:\xxxx\OUTPUT\xxxx FILES</Directory>
                        <File>'${myarray[0]}'</File>
                        </xxxxFile>
                        <ProcessDocumentation>
                            <Save>true</Save>
                                <Directory>C:\xxxx\OUTPUT\PDF FILES</Directory>
                                <File>'${myarray[0]}'</File>
                        </ProcessDocumentation>
                </SaveSettings>
                </Input>' >> $file_out
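A sketch, not the poster's script, of looping over each base name from tmp and emitting a <File> element for several sections per input line; $file_out is assumed to be set already and the section names are placeholders for the real XML structure.

while IFS= read -r winpath; do
    name=$(basename "${winpath//\\//}")    # turn backslashes into slashes, keep the last component
    name=${name%.*}                        # drop the extension
    for section in NCFile Graphics FlatPatternDXF ProcessDocumentation; do
        printf '        <%s>\n            <File>%s</File>\n        </%s>\n' \
               "$section" "$name" "$section" >> "$file_out"
    done
done < tmp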

lsyncd -- one way synchronisation but for the whole folder

Posted: 01 Nov 2021 10:31 AM PDT

I have two folders ~/A and ~/B. With some content.

I write an lsyncd configuration file ~/.config/lsyncd/lsyncd.conf:

# NOTE: Use only absolute path names
# NOTE: check "man rsync" for parameters inside "rsync{}".

# Global settings
settings {
    logfile = "/home/ziga/.config/lsyncd/lsyncd.log",
    statusFile = "/home/ziga/.config/lsyncd/lsyncd-status.log",
    statusInterval = 5
}

# Synchronisation A ⟶ B
sync {
    default.rsync,
    source = "/home/ziga/A",
    target = "/home/ziga/B",
    delete = true,
    rsync = {
        binary = "/usr/bin/rsync",
        executability = true,
        existing = false,
    }
}

And I start lsyncd like this:

lsyncd -nodaemon ~/.config/lsyncd/lsyncd.conf  

Note: The parameter -nodaemon is there only to give me more information in the terminal where I run the above command.

Immediately after the command is executed, synchronisation takes place and the content of folder ~/A is transferred to folder ~/B. This is okay.

According to my configuration file I would expect that if I delete a file in ~/B, it will not be deleted from ~/A which is also the case! And this is a wanted behaviour - I want to prevent accidental deletion of content in folder ~/A.

But at this point I would also expect lsyncd to detect that folder ~/B is missing the just-deleted file and to sync the folders again by copying the missing file from ~/A to ~/B, like it does immediately when it is started! But this does not happen.

How can this be done?
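A fallback sketch added as an assumption, not an lsyncd feature described in the post: lsyncd watches only the source tree with inotify, so deletions inside ~/B go unnoticed until something in ~/A changes. A periodic one-way rsync, without --delete, could repair the target in the meantime, here as a cron entry every 15 minutes:

*/15 * * * * rsync -a /home/ziga/A/ /home/ziga/B/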

How to take only unique rows based on a column using a Linux command?

Posted: 01 Nov 2021 10:28 AM PDT

Here is my dataset:

col1,col2,col3
a,b,c
a,d,f
d,u,v
f,g,h
d,u,g
x,t,k

Expected output:

f,g,h
x,t,k

Selection criteria:

If anything occurs in col1 multiple times, then all of the associated rows will be deleted.

Can I solve it using Linux sort or uniq or anything else?
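A sketch using a two-pass awk; data.csv is a placeholder for the input file and the header line is skipped on both passes.

awk -F, 'NR==FNR { if (FNR > 1) count[$1]++; next }
         FNR > 1 && count[$1] == 1' data.csv data.csv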

syslog message at boot: uninitialized urandom read

Posted: 01 Nov 2021 08:49 AM PDT

CPU is AMD GX-412TC SOC:

GX-412TC GE412TIYJ44JB 4 6W 2MB 1.0GHz/ 1.4GHz N/A N/A DDR-1333 0-90°C

which does not have rdrand:

grep rdrand /proc/cpuinfo
# nothing

I see following messages in my syslog after machine boot:

kernel: random: dd: uninitialized urandom read (512 bytes read)
kernel: random: cryptsetup: uninitialized urandom read (32 bytes read)

what exactly do these messages mean, and what can I do about it?

Does it mean dd and cryptsetup try to read from /dev/urandom, but there is not enough entropy?

I am using haveged daemon, but it is started late in the boot process, after this message appears.

Here is my boot script startup sequence:

/etc/rcS.d/S01hostname.sh
/etc/rcS.d/S01mountkernfs.sh
/etc/rcS.d/S02mountdevsubfs.sh
/etc/rcS.d/S03checkroot.sh
/etc/rcS.d/S04checkfs.sh
/etc/rcS.d/S05mountall.sh
/etc/rcS.d/S06bootmisc.sh
/etc/rcS.d/S06procps
/etc/rcS.d/S06urandom
/etc/rcS.d/S07crypto-swap
/etc/rc2.d/S01haveged
/etc/rc2.d/S01networking
/etc/rc2.d/S04rsyslog
/etc/rc2.d/S05cron
/etc/rc2.d/S05ssh

the messages in syslog come from these two scripts:

/etc/rcS.d/S06urandom     -> dd
/etc/rcS.d/S07crypto-swap -> cryptsetup

Should haveged be started before urandom ?

I am using Debian 10.

Also, I should add that this machine is a bare board, with no keyboard. The only interface is a serial console. I think this has an effect on the available entropy, and is the reason why I have installed haveged in the first place. Without haveged, the sshd daemon would not start for several minutes, because it does not have enough entropy.

I don't know how to download and install Dissenter Browser on raspberry pi running Ubuntu 21.04

Posted: 01 Nov 2021 10:05 AM PDT

I just installed Ubuntu Desktop 21.04 on my Raspberry Pi 4B. I want to switch from Mozilla Firefox, the default browser, to Dissenter. I am trying to download and install the Dissenter Browser on my computer but I don't know how. I've looked into it, and I found that the version I've tried only works on amd64 processors, but what I have is an arm64 processor. Does anyone know where to find it for arm64?

How to increase the root partition size of a virtual machine used by GNOME Boxes (QEMU)?

Posted: 01 Nov 2021 09:39 AM PDT

I use GNOME Boxes on Fedora 32 to virtualize a Fedora Silverblue installation. I did not set up sophisticated/smart partitioning for my virtual Fedora and have run out of disk space on my virtual root partition. My objective is to be able to continue using my virtual Fedora machine as efficiently and as soon as possible, without recreating the virtualization and without adding another partition.

Therefore, how can I increase the root partition of my QEMU-based virtualization?

Similar question: Resize qcow2 root parition
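A sketch under assumptions, not from the post: GNOME Boxes keeps its disk images under ~/.local/share/gnome-boxes/images/ by default, and qemu-img can grow a qcow2 image while the VM is shut down. The image name and size below are placeholders, and the guest's partition and filesystem still have to be grown afterwards, e.g. with growpart plus resize2fs or xfs_growfs.

qemu-img info ~/.local/share/gnome-boxes/images/fedora-silverblue     # confirm the image and its current size
qemu-img resize ~/.local/share/gnome-boxes/images/fedora-silverblue +20G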

How to disable Ubuntu 20.04.1 software updates forever?

Posted: 01 Nov 2021 08:42 AM PDT

Ubuntu might be great and frequent software updates might be very helpful in keeping the system up-to-date and secure. But my problem is,

  1. the software updater is causing my system to freeze and I have to restart the system too frequently.
  2. even if (1) is not correct, i.e. the updater is not causing the system to freeze, another problem I see is that after doing the update it asks me to restart the system for most of the updates. And if I don't restart, the system freezes again, and I have to do a restart to overcome the freeze.
  3. even if not for (1) and (2), I am using the system for development activities, so security and frequent updates are not my priority. I have several VMs and Docker containers running on the machine, so my priority is to keep the system running for weeks and months without restarting. And I suspect the software updater is hindering this.

Is there an easy way to disable these update prompts entirely? Please provide your views/answers.

A screenshot of the software updater I am referring to is below (screenshot omitted).
This is the least frequent update preference I could set (screenshot omitted).

Edit - Thanks to Roman Riabenko for suggesting to check the links in the comments, which in turn suggest checking unattended upgrades. But this does not seem to be the case with 20.04.1.

$ cat /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "0";
APT::Periodic::Unattended-Upgrade "0";

and

$ cat /etc/apt/apt.conf.d/10periodic
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "0";
APT::Periodic::Unattended-Upgrade "0";
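A sketch of one further thing to try, offered as an assumption rather than taken from the post: the periodic checks are driven by systemd timers and the graphical prompt by update-notifier, both of which can be switched off.

sudo systemctl disable --now apt-daily.timer apt-daily-upgrade.timer

In addition, Software & Updates -> Updates -> "Automatically check for updates" can be set to "Never" to quiet the GUI prompt.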

rsync error when copying directories ending in a period

Posted: 01 Nov 2021 10:15 AM PDT

This is a weird one since I can't find it mentioned anywhere else, but it's not exactly a niche issue.

I'm trying to copy all of my music over to another server, and the files follow the [artist]/[artist] - [year] - [album]/[track]. [title] structure. I'm using the command rsync -azH /path/to/music user@password:/remote/path/to/music, and everything copies over fine, except for the songs whose album ends with a period. With these, the directory gets created, but the files do not get copied, giving the error: rsync: mkstemp "/path/to/music/Dead Kennedys/Dead Kennedys - 1982 - Plastic Surgery DisastersIn God We Trust Inc./.01. Advice From Christmas Past.flac.UVzf3u" failed: No such file or directory (2). Note also the .UVzf3u on the end of the file - I don't know where that comes from but the actual file is just a .flac.

Any ideas?

Is -d64 required for java on linux?

Posted: 01 Nov 2021 09:00 AM PDT

We are generating scripts for running Java on RHEL or an Amazon Linux AMI. The scripts all now contain the -d64 option. We are using OpenJDK 1.8 64-bit. The Oracle FAQ question When you download the SDK or JRE, must you choose between the 32 and 64-bit versions? indicates that on Linux this option exists only for compatibility reasons. It says

All other platforms (Windows and Linux) contain separate 32 and 64-bit installation packages. If both packages are installed on a system, you select one or the other by adding the appropriate "bin" directory to your path. For consistency, the Java implementations on Linux accept the -d64 option.

So it seems like -d64 is not needed for scripts running on Linux. It may be better to only add that option when running on Solaris.

Is it required? What would be the harm in removing it from our scripts?

How to install libboost-all-dev v1.40 Debian wheezy

Posted: 01 Nov 2021 10:05 AM PDT

I'm new here. I need to install the libboost-all-dev package that contains this stuff: libboost1.40-dev libboost-system1.40-dev libboost-filesystem1.40-dev libboost-date-time1.40-dev libboost-regex1.40-dev libboost-thread1.40-dev, exactly in this version, but on my Debian 7 wheezy I have libboost-all-dev in version 1.49. How can I change it and install the package that I need so much? Here is my sources.list file:

deb http://ftp.pl.debian.org/debian/ wheezy main non-free contrib
deb-src http://ftp.pl.debian.org/debian/ wheezy main non-free contrib
deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free
deb http://ftp.pl.debian.org/debian/ wheezy-updates main non-free contrib
deb-src http://ftp.pl.debian.org/debian/ wheezy-updates main non-free contrib

cronjob timing - for every 3 months

Posted: 01 Nov 2021 09:25 AM PDT

I chanced upon this example:

0 9 1-7 * 1 *                       First Monday of each month, at 9a.m.  
  1. I am not sure about the 1-7 portion, which is supposedly the dates of each month. Why is it 1-7 instead of a *?

  2. How do I write a cronjob that run every 3 months at 0000hrs?

    0 0 1 */3 * *  

Is the above correct?
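A sketch assuming a standard 5-field crontab; the examples above show 6 fields, which suggests they come from a variant with an extra year or seconds column. Running at 00:00 on the 1st day of every 3rd month would look like this, with /path/to/command as a placeholder:

0 0 1 */3 * /path/to/command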

su vs sudo -s vs sudo -i vs sudo bash

Posted: 01 Nov 2021 09:17 AM PDT

What is the difference between the following commands:

su
sudo -s
sudo -i
sudo bash

I know for su I need to know the root password, and for sudo I have to be in the sudoers file, but once executed what is the difference?

I know there is a difference between su and sudo -s because my home directory is /root after I execute su, but my home directory is still /home/myname after sudo -s. But I suspect this is just a symptom of an underlying difference that I'm missing.

Smitty like solution under Linux or BSD?

Posted: 01 Nov 2021 10:25 AM PDT

Are there any solutions similar to AIX smit for Linux based OSes?

Basically this would be some kind of 'terminal menu-driven' script collection, perhaps using ncurses, for doing things that system administrators regularly do.
