Tuesday, May 4, 2021

Recent Questions - Unix & Linux Stack Exchange



systemd says "networking.service failed" but network is up

Posted: 04 May 2021 09:48 AM PDT

Normally, I run Gentoo with OpenRC init, but I am installing a Debian 10 server and having some trouble understanding systemd. The server boots via a custom dracut initrd which creates a bonded network interface and then boots from an iSCSI root. That part all works fine. There are three interfaces which all come up on boot with their respective networks: 192.168.1.0/24, 10.0.0.0/24, and 172.16.0.0/24.

My (small) problem is systemd giving the following information:

# systemctl --failed
  UNIT               LOAD   ACTIVE SUB    DESCRIPTION
● networking.service loaded failed failed Raise network interfaces

I assume this is largely due to one of the networks already being active at init time. In Gentoo, I can mark an interface as not providing the network service. Does systemd have a similar concept, or is there a setting that I've missed somewhere? Again, all interfaces are actually up and working appropriately (2 bridges, 1 bond). I've snipped local interfaces and the NICs in the bond and bridges.

# ip addr list
[...]
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 24:6e:96:5e:a3:9c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.14/24 brd 172.16.0.255 scope global dynamic bond0
       valid_lft 249300sec preferred_lft 249300sec
    inet6 fe80::266e:96ff:fe5e:a39c/64 scope link
       valid_lft forever preferred_lft forever
[...]
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:10:18:64:0f:3c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.14/24 brd 192.168.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::210:18ff:fe64:f3c/64 scope link
       valid_lft forever preferred_lft forever
10: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:10:18:64:0f:3e brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.14/24 brd 10.0.0.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::210:18ff:fe64:f3e/64 scope link
       valid_lft forever preferred_lft forever

# ping -I vmbr0 -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 192.168.1.14 vmbr0: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=68.4 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=114 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=61.6 ms

Thanks in advance for any ideas.
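For what it's worth, networking.service ("Raise network interfaces") is Debian's ifupdown unit, which runs ifup -a over the "auto" stanzas in /etc/network/interfaces. The closest analogue to Gentoo's "does not provide net" marking is to take an already-configured interface out of the boot-time set, e.g. with allow-hotplug. A sketch, under the assumption that bond0 (already brought up by the initrd) is the stanza that fails:

```
# /etc/network/interfaces (sketch)
# "auto" interfaces are raised by networking.service at boot and can fail it;
# "allow-hotplug" interfaces are raised by udev events instead
allow-hotplug bond0
iface bond0 inet dhcp
```

journalctl -u networking.service should confirm which stanza actually failed.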

Can I ignore certain irrelevant lines when creating/applying a patch with diff/patch?

Posted: 04 May 2021 09:31 AM PDT

File A

Apples
Bananas
Clementines
Dates

File B

Apples
Blueberries
Cherries
Dates

I want to diff A and B to generate a patch that, when applied to C, will change Clementines to Cherries but will ignore the second line.

File C (before patch)

Apples
Blackcurrants
Clementines
Dates

File C (after patch)

Apples
Blackcurrants
Cherries
Dates

How can this be done?

bash: export: `172.17.0.1': not a valid identifier

Posted: 04 May 2021 09:22 AM PDT

Why does my terminal show this? It appears every time I open a terminal. How can I remove it?
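The message usually means some startup file contains a line like "export 172.17.0.1=…" (an IP address is not a valid shell variable name). A hedged sketch for tracking the line down; the file list covers the usual bash startup files, but the offender could live elsewhere:

```shell
# search the common bash startup files for the offending address
for f in ~/.bashrc ~/.profile ~/.bash_profile /etc/profile /etc/bash.bashrc /etc/profile.d/*.sh; do
    [ -f "$f" ] || continue
    grep -Hn '172\.17\.0\.1' "$f" || true   # -H/-n: show file name and line number
done
```

Once found, fix or delete that line and open a new terminal.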

xargs-like command templating for argv

Posted: 04 May 2021 09:14 AM PDT

Is there a common utility that is sort of like xargs, but operating on arguments instead of stdin?

For example, something like:

$ fictional-xargv 'echo $2 $1' foo bar
bar foo
$ fictional-xargv 'echo foo-$1-bar' 123
foo-123-bar

I've considered:

  • eval but it concatenates arguments, separated by a space;
  • exec but it expects a binary;
  • sh -c but it doesn't admit argv.

The use case is when there's very little manipulation of arguments to be done, so it would avoid creating a script for each invocation; it would also be handy for interactive use.
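For what it's worth, sh -c does accept positional parameters after the command string: the first argument after it becomes $0 (conventionally given as _ or sh), and the rest become $1, $2, … A sketch reproducing the fictional examples:

```shell
sh -c 'echo "$2" "$1"' _ foo bar     # prints: bar foo
sh -c 'echo "foo-$1-bar"' _ 123      # prints: foo-123-bar
```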

UEFI Grub not finding config file

Posted: 04 May 2021 09:06 AM PDT

I need to boot Windows and two other Linux distros using Grub. So I have installed Grub on the UEFI partition, with a dedicated partition for storing the files used by Grub, using the following command:

sudo grub-install --efi-directory=/mnt/efi --root-directory=/mnt/grub --bootloader-id=Grub --uefi-secure-boot --target=x86_64-efi /dev/sda  

/dev/sda1 mounted on /mnt/efi is my EFI partition, and /dev/sda2 mounted on /mnt/grub is the partition intended for Grub files.

However, upon booting, Grub is seemingly unable to find the grub.cfg file I placed at /mnt/grub, and shows the default Grub shell. I am able to manually recover my system by typing either configfile /efi/Grub/grub.cfg or configfile (hd0,gpt2)/grub/grub.cfg.

Here are some of my files

$ sudo tree /mnt/efi/EFI
/mnt/efi/EFI
├── Grub
│   ├── BOOTX64.CSV
│   ├── fbx64.efi
│   ├── grub.cfg
│   ├── grubx64.efi
│   ├── mmx64.efi
│   └── shimx64.efi
└── Microsoft
    |...

$ sudo tree /mnt/grub/grub
/mnt/grub/grub
├── fonts
│   └── unicode.pf2
├── grub.cfg
├── grubenv
└── x86_64-efi
    |...

$ sudo cat /mnt/efi/EFI/Grub/grub.cfg
search.fs_uuid 3110d895-a376-484a-8dba-e0475b9a977c root hd0,gpt2
set prefix=($root)'/grub'
configfile $prefix/grub.cfg

$ sudo fdisk -l /dev/sda
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disklabel type: gpt

Device          Start        End   Sectors   Size Type
/dev/sda1        2048     526335    524288   256M EFI System
/dev/sda2      526336     657407    131072    64M Linux filesystem
/dev/sda3      657408     690175     32768    16M Microsoft reserved
/dev/sda4      690176  563607551 562917376 268.4G Microsoft basic data
/dev/sda5   563607552  697825279 134217728    64G Linux filesystem
/dev/sda6   697825280  966260735 268435456   128G Linux filesystem
/dev/sda7   966260736  983037951  16777216     8G Linux swap
/dev/sda8   983037952 1117254748 134216797    64G Linux filesystem
/dev/sda9  1117255680 1385691135 268435456   128G Linux filesystem
/dev/sda10 1385691136 1402468350  16777215     8G Linux swap

I feel like there is some trivial mistake I am making, but I have spent too much time on this.

Linux reports that the lowmem region is larger than available physical memory?

Posted: 04 May 2021 08:53 AM PDT

I'm running the 64-bit version of Ubuntu 20.04. I'm starting to learn kernel programming and am now studying the kernel VAS. When I run a tool called procmap by kaiwann on GitHub, which is supposed to give a visual representation of the kernel VAS, it says that the kernel lowmem region is about 7.24 gigabytes, while my system only has 6 gigabytes, which I don't understand. I thought the lowmem region is supposed to be directly mapped to system RAM on 64-bit systems, which don't have a ZONE_HIGHMEM region. So where exactly did this extra 1.24 gigabytes come from? I'm not sure if this is a bug or if I'm missing something.


docker interface tears down wifi internet

Posted: 04 May 2021 09:01 AM PDT

I see this problem on my laptop: when I docker run a Docker container, after a few seconds my WiFi internet stops working. I don't have an Ethernet connection to test that side of things.

I don't know how to troubleshoot this. I have two network interfaces relevant to this issue:

  • the wifi interface wlp2s0
  • the docker interface docker0

As soon as network I/O starts on the docker0 interface, it stops on wlp2s0. I can check this with a web browser: when a Docker container is running, I am not able to reach any web page via the browser.

I see this networking configuration in Docker:

docker network list
NETWORK ID     NAME      DRIVER    SCOPE
5d408693425d   bridge    bridge    local
2eba59b04a5f   host      host      local
f22b30d7782a   none      null      local

When using ifconfig I see this:

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:e5ff:fe8a:b85c  prefixlen 64  scopeid 0x20<link>
        ether 02:42:e5:8a:b8:5c  txqueuelen 0  (Ethernet)
        RX packets 9034  bytes 1228570 (1.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9945  bytes 94278580 (94.2 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

wlp2s0: flags=-28605<UP,BROADCAST,RUNNING,MULTICAST,DYNAMIC>  mtu 1500
        inet 192.168.1.54  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 2001:b07:6477:ebfa:e46f:631e:206c:8a9e  prefixlen 64  scopeid 0x0<global>
        ether 04:d3:b0:ee:2f:b9  txqueuelen 1000  (Ethernet)
        RX packets 3771571  bytes 3664198427 (3.6 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2460874  bytes 1439515295 (1.4 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I don't need to run sudo ifconfig wlp2s0 up or anything after I stop the running Docker containers; the internet just starts working again.

In /etc/docker I only see a file named key.json, which doesn't look related to networking settings (unlike e.g. /etc/docker/daemon.json for DNS entries, which is not there).

In the file ~/.docker/config.json I only see some authorizations for a private Docker registry, nothing related to networking. There are some other token-related files which I believe are not relevant:

$ ls -a ~/.docker/
.  ..  .buildNodeID  config.json  .token_seed  .token_seed.lock

If I restart NetworkManager with sudo systemctl restart network-manager.service after running the Docker container, I still cannot reach the internet with either my browser or e.g. ping 8.8.8.8 (but ping localhost keeps working regardless of whether a Docker container is running).

This issue happens to me with a variety of Linux Ubuntu/Debian versions and Docker versions, anyway my current setup is:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal
$ docker --version
Docker version 20.10.6, build 370c289

I have read this page a few times, https://docs.docker.com/network/bridge/, and I think my Docker bridge is working, because the Docker containers I run on my laptop are able to reach the Internet. However, that documentation on the Docker website is a bit dry and I am not sure how to troubleshoot these networking issues in depth.

  • What's wrong with the docker configuration?
  • What do I have to do to make the two network interfaces work together simultaneously?
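Nothing in the question pins down the cause, but one common culprit in setups like this is the default bridge subnet (172.17.0.0/16) colliding with a route on the LAN or VPN side. If that turns out to be the case here, the bridge can be moved to a different range via /etc/docker/daemon.json; the file does not exist yet on this system and would have to be created, and the 10.200.0.1/24 value below is just an example:

```
{
  "bip": "10.200.0.1/24"
}
```

After creating the file, restart the daemon (sudo systemctl restart docker) and re-test the WiFi while a container is running.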

Cannot make persistent insertion of a kernel object (Debian) using insmod

Posted: 04 May 2021 08:48 AM PDT

I'm trying to insert the .ko (kernel object) file for the on-board GPIO into my Linux kernel, which succeeds using the command insmod <file-name.ko>. But when the OS/device reboots, the kernel no longer has the module inserted (checked using lsmod). I have also tried placing it in /lib/modules/4.19.0-14-amd64/kernel/drivers/gpio and running sudo update-initramfs -u to update the boot initramfs, but that didn't work.

Is there any way to insert a kernel module permanently (surviving reboots)?

P.S. I don't want to use "@reboot insmod" in crontab. I am using Debian GNU/Linux 10 (buster).
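The usual Debian approach (a sketch, assuming an out-of-tree module): install the .ko where modprobe can find it, run depmod, and list the module name in /etc/modules-load.d/ so systemd loads it at every boot. The helper below takes a root prefix so the layout can be tried in a scratch directory; on the real system the prefix would be empty and the commands run as root:

```shell
install_module() {
    # $1 = path to the .ko file, $2 = root prefix ("" on a real system)
    local ko=$1 prefix=$2 name
    name=$(basename "$ko" .ko)
    mkdir -p "$prefix/lib/modules/$(uname -r)/extra" "$prefix/etc/modules-load.d"
    cp "$ko" "$prefix/lib/modules/$(uname -r)/extra/"
    # systemd-modules-load.service reads these .conf files at every boot
    echo "$name" > "$prefix/etc/modules-load.d/$name.conf"
    # on the real system, finish with: depmod -a
}
```

This survives reboots without cron; update-initramfs is only needed if the module must load from the initramfs itself.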

linux sort 'field skip' options not documented, but seem to work

Posted: 04 May 2021 08:36 AM PDT

I was porting over a script from AIX to linux that had code of the form

grep <pattern> $LOG | sort -b +rn4 -5 +2 -3  

On AIX, this kind of sort syntax is documented: the +a -b syntax basically means skip a fields, and consider the fields between a and b as your sort key.

This didn't work on Linux, because the Linux sort command didn't like including the 'rn' (reverse numeric) flags in the +a 'skip fields' parameter. But this did work:

grep <pattern> $LOG | sort -b -rn +4 -5 +2 -3

So apparently the 'field skipping' logic is supported by Linux sort, but not documented in the man page (that I could see, anyway). The -k option works on both systems to specify a key field number. But here's a weird quirk. On AIX,

ls -l | sort -n +4  

produces a list of files sorted on the 5th field (size). But on Linux, the same command produces an error:

sort: cannot read: +4: No such file or directory

ls -l | sort -n +4 -5  

does work, though. So apparently the +skip -skip key-selection syntax sort of works, but only if you specify both the starting and ending field skip parameters. And it's not documented. So, my question: is this field skipping syntax deprecated? Does it just work because the code was always there in the command and nobody knew to take it out?
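For reference, the modern equivalent of the +a -b keys uses -k with 1-based field numbers and per-key flags appended. A quick sketch with invented sample data (field 5 reverse-numeric, then field 3, matching the script's intent):

```shell
# sort -b +rn4 -5 +2 -3  is roughly  sort -b -k5,5rn -k3,3
printf 'f1 x 2 aa 30\nf2 y 1 bb 10\nf3 z 3 cc 20\n' | sort -b -k5,5rn -k3,3
# prints f1 (30) first, then f3 (20), then f2 (10)
```

Likewise, the AIX "ls -l | sort -n +4" becomes "ls -l | sort -k5,5n" on Linux.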

ViM iabbrev to Automatically Replace consecutive blank lines with a character

Posted: 04 May 2021 08:05 AM PDT

I was just wondering if it's possible to create a ViM rule for the following scenario.

If I am editing a file, is it possible for iabbrev to detect a second blank line and insert some character or string in that line's place?

Typing in ViM
*Newline*
*Newline*
*Newline*This is extra
*Newline*This is extra

I really hope this makes sense

How do I find the config files for any application

Posted: 04 May 2021 08:19 AM PDT

I ran into this problem multiple times now.

You have to log into a server you don't know and have to find where an application is installed and where its config files are.

I know that most application configs are in /etc/... e.g. /etc/nagios/nrpe.cfg

But knowing the usual places is not reliable. How can I find them with certainty?

And what is a good way to inspect and study the configs? Most applications allow other files to be included, so the configuration can be split into multiple files; it takes ages to write down all the file locations and look at each of them individually, and it gets even worse if the structure is cascading.
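No single command is guaranteed, but a few heuristics cover most cases: the package manager's file list (on Debian, dpkg -L <package> | grep '^/etc'), the running process's open files (lsof -p <pid>), or a plain find. A sketch of the last one; the helper name is made up, and on a real system the search root would be /etc:

```shell
locate_configs() {
    # $1 = application name, $2 = directory to search (e.g. /etc)
    find "$2" \( -name "*$1*.conf" -o -name "*$1*.cfg" -o -path "*/$1/*" \) -type f 2>/dev/null
}
```

For configs split across includes, running the app's own check mode (e.g. a --configtest or -t flag, where one exists) is often the quickest way to see which files it actually reads.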

Configure split dns resolver to always use vpn-internal nameserver for fixed list of domains

Posted: 04 May 2021 07:39 AM PDT

What's the most reliable configuration to have a fixed list of internal domains always resolved through an internal nameserver, even if it's unreachable because the vpn is down, while still using the default nameserver for everything else?

The connection is established by NetworkManager after selecting the appropriate interface or network in KDE (or another desktop environment).

This question is specifically about defining a static list of vpn domains because:

  • Suppose there are three internal domains, which are only known to the internal nameserver within the vpn, two of which are automatically added to /etc/resolv.conf as "search" domains whenever the vpn connection is established, along with the internal nameserver which is prepended to that file, if the vpn client is configured to do so.
  • Suppose the internal domains of the main vpn are ".int", ".org" and ".test.net" and the first two would be added as search domains if the client was configured to change the resolv.conf file.
  • If the vpn client changes /etc/resolv.conf after establishing the connection, all dns requests are sent to the internal nameserver which should only handle the three internal domains but can't handle some other domains.
  • Whenever the vpn connection is lost, the vpn client would automatically reset the resolv.conf file, leaving only the standard nameserver assigned by DHCP, added by NetworkManager. So whenever the vpn is down, internal dns requests are leaked, they're sent to the standard nameserver which either doesn't know about them or resolves them to different, external ips.
  • Whenever a connection is established in KDE, the nameserver assigned by DHCP is written to the file. NetworkManager takes care of that and if it was a link to a file containing only nameserver localhost, it would probably have to write the received nameserver ip to some uplink.conf file which would be used by the resolver.
  • It wouldn't be necessary to make any additional changes to the resolv.conf file if it was a symlink to a file containing only nameserver localhost and if the local resolver would always know how to forward queries depending on the domain.
  • The ip of the internal nameserver is static and so is the list of internal domains: DNS queries for these internal domains should only ever be sent to the internal nameserver, no exceptions.
  • DNS queries to other domains should be handled by the standard nameserver, as usual. Similar questions have been solved by manually configuring a nameserver but in this case, the system should use the one assigned by DHCP.
  • A quick attempt with dnsmasq failed. After configuring server=/int/10.1.1.1 etc., it still sent queries for xxx.int to the standard nameserver instead. The log file showed both using nameserver 10.1.1.1#53 for domain int and using nameserver 192.168.1.1#53 for domain int, where 192.168.1.1 was the standard nameserver assigned by DHCP (so dns queries were leaking). A solution with systemd-resolved is preferred but config examples for other resolvers are welcome as well.
  • The focus of the question is the configuration of a local resolver like systemd-resolved, regardless of the state of the vpn connection or if a second vpn connection is currently active.
  • This question is not about one specific distribution, it's for any distribution that uses systemd and has KDE. Standard tools like dig, host, nslookup or ping must work (i.e., not send queries to the wrong nameserver).

There are similar questions on this site, usually without the requirement of not leaking queries, as well as on other sites. For example, an article on gnome.org seems to address the split dns question under "My Corporate VPN is Missing a Routing Domain, What Should I Do?", but it expects all internal domains to be set by the vpn client and states:

Sadly, not all VPNs actually do this properly, since it doesn't matter for traditional non-split DNS. Worse, there is no graphical configuration in GNOME System Settings to fix this. There really should be. But for now, you'll have to use nmcli

Having to use such a command does not seem like a reliable configuration. And DNS queries shouldn't be leaked if the connection is lost for a moment. The article continues with:

Hopefully you never have to mess with this.

What if you do? The scenario shouldn't be too uncommon, there must be at least one standard solution.


For the record, this dnsmasq config was tried but it doesn't fulfill the requirement of defaulting to the standard nameserver provided by DHCP:

/etc/dnsmasq.d/dns-int.conf:

no-resolv
server=/int/10.1.1.1
server=/org/10.1.1.1
server=/test.net/10.1.1.1
server=/google.com/8.8.8.8
server=9.9.9.9
log-queries

In /etc/NetworkManager/NetworkManager.conf, under section [main]:

dns=dnsmasq  

Add a symlink to the config directory used by NetworkManager pointing to the config file that would be used when starting dnsmasq directly using systemctl start dnsmasq:

`/etc/NetworkManager/dnsmasq.d/dns-int.conf` -> `/etc/dnsmasq.d/dns-int.conf`  

However, as explained above, this configuration is not completely correct.
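Since a systemd-resolved solution is preferred: resolved supports per-link routing domains, so a configuration along these lines would route only the internal domains to the internal nameserver. This sketch assumes systemd-networkd manages the VPN link and that it is named tun0; with NetworkManager, the equivalent DNS and search-domain values can be set on the VPN connection instead:

```
# /etc/systemd/network/50-vpn.network (hypothetical file name)
[Match]
Name=tun0

[Network]
DNS=10.1.1.1
# "~" marks routing domains: queries for these domains go to this link's DNS
Domains=~int ~org ~test.net
```

All other queries keep going to the DHCP-provided nameserver of the default link. What happens to internal queries while the VPN link is down still depends on the link state, which is exactly the part of the question that remains open.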

How to detect EOF on a BASH script's stdin?

Posted: 04 May 2021 09:24 AM PDT

I have a bash function inside a script that needs to read data from stdin in fixed-size blocks and send them, one at a time, to external programs for further processing. The function itself should run in a loop for as long as there is data (the input is always guaranteed to be a whole number of blocks), but it doesn't otherwise need to interpret the data, so I'd like a way to detect EOF on the function's stdin without consuming data in case there is still some to process.

The apparently natural way to do this would be to use the read builtin, as in:

while read -r -n 0 ; do external_program ; done

The -n option to read tells it to read at most that many bytes instead of up to a newline, but unfortunately it doesn't work with 0 bytes, which would have made it an ideal test for EOF. It does work with -n 1, but then it consumes the first byte of a block, which has to be 'replayed' into the stream going to the external program.

So, is there a better way, preferably using only bash builtins?
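One workaround (a sketch, not pure builtins: it leans on head and mktemp): read each block into a temp file with head -c and treat an empty read as EOF. The 4-byte block size and the tr stand-in for the external program are just demo assumptions:

```shell
blocksize=4                            # stand-in block size for the demo
process_block() { tr 'a-z' 'A-Z'; }    # stand-in for the external program

read_blocks() {
    local tmp
    tmp=$(mktemp)
    # head -c reads exactly one block (or less at end of input);
    # an empty temp file after the read means stdin hit EOF
    while head -c "$blocksize" > "$tmp"; [ -s "$tmp" ]; do
        process_block < "$tmp"
    done
    rm -f "$tmp"
}

printf 'abcdefgh' | read_blocks   # prints ABCDEFGH
```

Going through a file keeps the data binary-safe (bash variables cannot hold NUL bytes), at the cost of one external head per block.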

Why do snapshots consume much less space than the data which is changed after they have been taken?

Posted: 04 May 2021 07:51 AM PDT

Although I have been using ZFS for quite a while, I still fail to understand some aspects of it from time to time. Currently I am trying to understand how ZFS snapshots take up space on the disk and why that space is much smaller than I would expect.

My problem is best explained by an example. I have a VM running from a ZVOL with no compression (compression=off). These are the snapshots of that volume:

root@server01 ~ # zfs list -r -t all -o name,type,available,used,referenced,usedbyrefreservation,usedbydataset,usedbychildren,usedbysnapshots,volsize,refreservation,reservation rpool01/vm-server01
NAME                                       TYPE      AVAIL   USED  REFER  USEDREFRESERV  USEDDS  USEDCHILD  USEDSNAP  VOLSIZE  REFRESERV  RESERV
rpool01/vm-server01                        volume    1.60T  1.97T  1.01T             0B   1.01T         0B      985G    1.50T       none    none
rpool01/vm-server01@Y-2020-05-27-11-35-15  snapshot      -  1.06G  1.00T              -       -          -         -       1T          -       -
rpool01/vm-server01@T-2020-06-02-11-41-15  snapshot      -  1.04G  1.00T              -       -          -         -       1T          -       -
rpool01/vm-server01@Y-2021-04-24-05-36-24  snapshot      -  1.66G  1.00T              -       -          -         -    1.50T          -       -
rpool01/vm-server01@M-2021-04-24-21-22-30  snapshot      -  3.78G  1.01T              -       -          -         -    1.50T          -       -
rpool01/vm-server01@T-2021-04-25-14-27-15  snapshot      -     0B  1.01T              -       -          -         -    1.50T          -       -
rpool01/vm-server01@T-2021-04-25-14-27-30  snapshot      -     0B  1.01T              -       -          -         -    1.50T          -       -
rpool01/vm-server01@W-2021-04-25-21-55-43  snapshot      -   555M  1.01T              -       -          -         -    1.50T          -       -
rpool01/vm-server01@D-2021-04-27-17-49-00  snapshot      -  1.52G  1.01T              -       -          -         -    1.50T          -       -
rpool01/vm-server01@D-2021-04-29-08-48-16  snapshot      -  1.06G  1.01T              -       -          -         -    1.50T          -       -
rpool01/vm-server01@D-2021-05-03-09-42-01  snapshot      -  1.08G  1.01T              -       -          -         -    1.50T          -       -
rpool01/vm-server01@D-2021-05-04-12-12-01  snapshot      -  45.3M  1.01T              -       -          -         -    1.50T          -       -

So far, so good. The worrying thing is:

For example, after having taken the next-to-last snapshot, I copied about 18 GB of new data to the running VM. However, that snapshot's USED size is reported as 1.08 GB. In my understanding, once a snapshot has been taken, it is read-only and thus actually can't grow. But of course, the file system needs space to record the changes that happen to the dataset / ZVOL afterwards, and that is what gets reported as a snapshot's USED size (please correct me if I am wrong).

As a second, more extreme example, that VM ran for about seven hours after I had taken snapshot @T-2021-04-25-14-27-30. I am absolutely sure that quite a few GB of data were changed in the VM during that time. But that snapshot's USED size is even reported to be 0.

I have found a weird way to see how much data has actually changed after a snapshot has been taken: We can "simulate" sending the snapshot by something like the following command line (command on the first line, output on the following two lines):

root@server01 ~ # zfs send -v -n -R -i rpool01/vm-server01@D-2021-05-03-09-42-01 rpool01/vm-server01@D-2021-05-04-12-12-01
send from @D-2021-05-03-09-42-01 to rpool01/vm-server01@D-2021-05-04-12-12-01 estimated size is 18.6G
total estimated size is 18.6G

(-n tells zfs to not do anything, but report what it would do; -v means verbose; -i means incremental; for -R, please have a look into man zfs, it's too long to explain here)

Here we see that incrementally sending the last snapshot based on the next-to-last snapshot would take approximately 18 GB, which is nearly exactly the amount of data which was changed in or added to the VM after @D-2021-05-03-09-42-01 had been taken and before @D-2021-05-04-12-12-01 had been taken. In other words, ZFS knows how much data was altered between the next-to-last and the last snapshot, but still shows a USED value of 1.08 GB for the next-to-last snapshot instead of about 18 GB.

Could anybody please give an explanation?

P.S. I have already read several articles about how to interpret zfs list's size values, and I have understood that it can get quite hard when reservations, refreservations, nested datasets, clones, snapshots and the like come into play, but the situation here is quite simple, isn't it? Anyway, I haven't yet seen a hint about the difference between the expected and the reported size of snapshots.

Read file accessible only from script in Unix

Posted: 04 May 2021 08:52 AM PDT

I have written a base shell script that needs a password/key, which is retrieved from a file.

Base Script ---calls---> Key/Password File; the base script then performs further authentication using the retrieved key.

Requirements:

  1. No user should be able to get the contents of the file directly

  2. Every user should be able to execute Base Script

  3. Only the script should be allowed to access the contents of the file

Any other suggestion for using a password/key that is not visible to anyone but is configurable in the script is welcome.

Note: I tried the sudoers approach but still no success

%usersgroup ALL=(root) NOPASSWD:/key/path
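One common pattern (a sketch; the paths are placeholders): make the key readable by root only, and grant the group sudo on the script itself rather than on the key path. sudo rules apply to commands, not files, which is why the NOPASSWD:/key/path line has no effect:

```
# key file: owned by root, unreadable to everyone else
#   chown root:root /etc/base-script/key
#   chmod 600 /etc/base-script/key

# /etc/sudoers.d/base-script (hypothetical)
%usersgroup ALL=(root) NOPASSWD: /usr/local/bin/base-script.sh
```

Users then run sudo /usr/local/bin/base-script.sh, and the script reads the key as root. The script must itself be root-owned and not group-writable, or this becomes a trivial privilege escalation.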

Mutt: reply in thread and add message from another conversation

Posted: 04 May 2021 07:25 AM PDT

I'm using mutt.

There are two different e-mail threads (A and B) and I'd like to forward a single message from thread B to thread A.

I could enter thread B and forward said message to the correspondents of A, but that would start a new thread C.

How can I reply within conversation A and attach the message from B, so the thread stays intact?

Can't login with normal user with GUI

Posted: 04 May 2021 08:16 AM PDT

I tried to log in to my user through the GNOME display manager with valid credentials. It doesn't tell me "you have bad credentials", it just doesn't let me in.

I can log in with the root user, and I created another user and can log in with that too, but not this user. It leads me to a blank screen with a blinking cursor, and after 5 to 7 seconds I get back to the screen where I should enter my username.

  1. I checked the .Xauthority permissions are referring to my user.
  2. I can log in to my user with the terminal but not the GUI.
  3. After I enter my username and password, it creates the session for my user but closes it after like 5 seconds and logs me out.
  4. After logging in to a tty, I can startx to go into the GUI.

my /var/log/auth.log

May  4 14:00:34  gdm-password]: gkr-pam: unable to locate daemon control file
May  4 14:00:34  gdm-password]: pam_unix(gdm-password:session): session opened for user faran by (uid=0)
May  4 14:00:34  systemd-logind[524]: New session 29 of user faran.
May  4 14:00:34  systemd: pam_unix(systemd-user:session): session opened for user faran by (uid=0)
May  4 14:00:41  gdm-password]: pam_unix(gdm-password:session): session closed for user faran
May  4 14:00:41  systemd-logind[524]: Session 29 logged out. Waiting for processes to exit.
May  4 14:00:41  systemd-logind[524]: Removed session 29.
May  4 14:00:48  gdm-password]: pam_unix(gdm-password:auth): Couldn't open /etc/securetty: No such file or directory
May  4 14:00:51  gdm-password]: pam_unix(gdm-password:auth): Couldn't open /etc/securetty: No such file or directory

I also pointed another user's home folder at my special user's home folder (the one I can't log in with), and then I couldn't log in with that user either, so I guess there is a problem with my home folder and the configs inside. The other direction doesn't help either: if I point my special user's home folder at another user's home folder, I still can't log in.

What is wrong here?

RHEL + resolv.conf + what are the right resolv.conf settings

Posted: 04 May 2021 09:20 AM PDT

We want to know what the right configuration in resolv.conf is, particularly regarding the domain entry in resolv.conf.

On our RHEL 7 server we configured the following example resolv.conf:

more /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search sandyam.com
nameserver 12.21.16.17
domain sandyam.com

But we can also set the following resolv.conf, without domain sandyam.com, and resolving works fine:

more /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search sandyam.com
nameserver 12.21.16.17

Or set the following, without search sandyam.com, and resolving still works fine:

more /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
nameserver 12.21.16.17
domain sandyam.com

So we are a little unsure which the right resolv.conf settings are (option 1, option 2, or option 3).

option 1

more /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search sandyam.com
nameserver 12.21.16.17
domain sandyam.com

option 2

more /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search sandyam.com
nameserver 12.21.16.17

option 3

more /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
nameserver 12.21.16.17
domain sandyam.com

reference https://www.distributednetworks.com/redhat-linux-admin/module4/usingDns-resolve.php

Use sed to insert text before two blank lines

Posted: 04 May 2021 09:23 AM PDT

I'm trying to use sed to update a commented config file. Rather than tacking everything onto the end of the file, I'm trying to keep things sectioned off. My config looks something like this:

# SECTION ONE
data...
data...
data...
<insert line here>

# SECTION TWO
data...
data...

I'm trying to insert lines at the end of section one, but I'm having a hard time writing a search pattern, since it won't allow "\n" and you can't have multiple "^$" in a pattern. I'd like something like the following:

sed -i "/^\n\n# SECTION TWO.*/i data..." somefile.conf

or

sed -i "/^$^$# SECTION TWO.*/i data..." somefile.conf

I'm open to other suggestions as well, but I'd like to keep it to a single line if possible. This is part of a much larger script. I know this is pretty easy with Python, Perl, etc., but I'm trying to keep this to a "shell" solution.

OPNSense (FreeBSD) doesn't boot from USB: stuck at "UFS found 1 partition"

Posted: 04 May 2021 09:06 AM PDT

I'm trying to boot OPNSense (the amd64 VGA image, 21.1, from https://opnsense.org/download/) from USB. The machine is fairly new (to me), but I've seen

  • it booting a Linux kernel successfully from a USB stick (so it's probably mostly okay)
  • the OPNSense USB stick boots reasonably in qemu on another machine, too.

Nevertheless, OPNSense gets stuck right away, right after boot:

>> FreeBSD EFI boot block
   Loader path: /boot/loader.efi

   Initializing modules ZFS UFS
   Load Device: PciRoot(0x0)/Pci(0x1d,0x0)/USB(0x1,0x0)/USB(0x4,0x0)/HD(1,GPT,[lots of hex],0x3,0x640)
   BootCurrent: 0004
   BootOrder: 0004[*] 0003 0001 0002
   Probing 5 block devices........* done
    ZFS found no pools
    UFS found 1 partition

... I've also dd'd all of it onto the USB stick another time in case a few bits were corrupted (they weren't).

Any ideas how to start debugging this?

Update: it's most likely just a bad UEFI implementation; switching over to MBR boot is a workaround that happened to work well.

Compare row count of a unix file with trailer count for multiple types of record

Posted: 04 May 2021 07:28 AM PDT

My file has multiple headers and multiple record types (e.g. 0001, 0002, 0003, 0004). A count is given for each record type in the trailer row, along with the overall detail record count.

Sample File:

XYZH001
YZXH002
0001Rec1
0001Rec2
YZXH002
0002Rec1
0002Rec2
YZXH002
0003Rec1
0003Rec2
0003Rec3
YZXH002
0004Rec1
T999008002002004001

File details:

Detail records are where 1 to 4 position data in (0001, 0002, 0003, 0004)

Trailer:
Trailer identifier (position 1 to 4)          = T999
total data count (position 5 to 7)            = 008
count of record type 0001 (position 8 to 10)  = 002
count of record type 0002 (position 11 to 13) = 002
count of record type 0003 (position 14 to 16) = 004
count of record type 0004 (position 17 to 19) = 001

Requirement:

-- Compare the overall detail row count (rows whose positions 1 to 4 are in 0001, 0002, 0003, 0004) with the trailer record count (position 5 to 7)
-- Compare each record type's row count with its trailer record count
   e.g. compare the row count where positions 1 to 4 = 0001 with the trailer count for 0001 (position 8 to 10)
   .....
-- Stop execution in case of a detail row count and trailer count mismatch

Expected output :

Overall detail row count 8 matches with trailer record count 8.
Row count for 0001 record type 2 matches with trailer record count 2.
Row count for 0002 record type 2 matches with trailer record count 2.
Row count for 0003 record type 3 does not match with trailer record count 4.
Stopping execution.
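The whole check can be done in a single awk pass, since the trailer line carries every expected count at fixed offsets. A hypothetical sketch (field positions follow the trailer layout described above; the file name and sample data are recreated from the question):

```shell
# Recreate the sample file from the question
cat > file.txt <<'EOF'
XYZH001
YZXH002
0001Rec1
0001Rec2
YZXH002
0002Rec1
0002Rec2
YZXH002
0003Rec1
0003Rec2
0003Rec3
YZXH002
0004Rec1
T999008002002004001
EOF

awk '
/^000[1-4]/ { cnt[substr($0, 1, 4)]++; total++ }   # tally detail rows by type
/^T999/ {
    t_total = substr($0, 5, 3) + 0                 # overall count, positions 5-7
    if (total != t_total) {
        printf "Overall detail row count %d does not match with trailer record count %d.\nStopping execution.\n", total, t_total
        exit 1
    }
    printf "Overall detail row count %d matches with trailer record count %d.\n", total, t_total
    for (i = 1; i <= 4; i++) {                     # per-type counts, 3 chars each
        type  = sprintf("%04d", i)
        t_cnt = substr($0, 5 + 3 * i, 3) + 0
        if (cnt[type] != t_cnt) {
            printf "Row count for %s record type %d does not match with trailer record count %d.\nStopping execution.\n", type, cnt[type], t_cnt
            exit 1
        }
        printf "Row count for %s record type %d matches with trailer record count %d.\n", type, cnt[type], t_cnt
    }
}' file.txt | tee counts.out
```

On the sample above this reproduces the expected output, stopping at the 0003 mismatch (3 actual rows versus 004 in the trailer).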

Ubuntu stuck at boot

Posted: 04 May 2021 09:45 AM PDT

My Ubuntu 20.04 is stuck at boot. The screen goes black and the only visible thing is a small underscore at the top. I cannot enter the command line. I think I might have screwed something up with systemd. Any suggestions?

Debian preseed: How to force prompt for hostname and domain?

Posted: 04 May 2021 07:50 AM PDT

I have a preseed file which works perfectly in that the install goes from start to finish fully automated without prompts.

However, I want to force a prompt for hostname and domain.

I have tried adding:

d-i netcfg/get_hostname seen false
d-i netcfg/get_domain seen false

However, the installer just ignores this and I end up with a system with the default Debian hostname, etc.

netcfg/get_hostname, netcfg/dhcp_hostname, and netcfg/get_domain are not defined in my preseed file.

If it makes any difference, this question relates to Debian 10.

find and replace "tabs" using search and replace in nano

Posted: 04 May 2021 08:53 AM PDT

How can I search and replace horizontal tabs in nano? I've been trying to use [\t] in regex mode, but this only matches every occurrence of the character 't'.

In the meantime I've just been using sed 's/\t//g' file, which works fine, but I would still be interested in a nano solution.

Open multiple instances of a given application

Posted: 04 May 2021 08:57 AM PDT

I am using Ubuntu (Ubuntu 16.04.4 LTS 64-bit) with the Gnome desktop environment.

My issue is that in the Activities menu, when searching for and selecting certain applications (like Videos), if an instance of that application is already open, clicking on it won't open a new instance (a new window) but will switch me to the already-open one. Right-clicking the icon and selecting "open a new window" does exactly the same.

I'm guessing a configuration setting in a file like dconf is responsible, but I can't find where it is and haven't found a single thread describing my issue.

(Interestingly, I am able to open a new window of a given application if that application offers an option to open a new window itself, like Firefox or VS Code.)

Does anybody know the answer?

Running sshd in cygwin: "/var/empty must be owned by root..."

Posted: 04 May 2021 09:01 AM PDT

I installed OpenSSH on my Windows 7 system so I could tunnel my VNC into it from my Arch machine. However, when I run /usr/sbin/sshd -D on the W7 machine, I get the error: /var/empty must be owned by root and not group or world-writable.

This is the output of ls -All /var:

$ ls -All /var
total 0
drwxr-xr-x+ 1 {my_usrnm} None           0 Jul 15 21:39 cache
drw-------+ 1 cyg_server Administrators 0 Jul 15 21:43 empty
drwxr-xr-x+ 1 {my_usrnm} None           0 Jul 15 21:39 lib
drwxrwxrwt+ 1 {my_usrnm} None           0 Jul 15 21:45 log
drwxrwxrwt+ 1 {my_usrnm} None           0 Jul 15 23:36 run
drwxrwxrwt+ 1 {my_usrnm} None           0 Jul 15 21:39 tmp

I've tried a few permissions fixes, rebooted, and reinstalled OpenSSH (by running ssh-host-config) at least 10 times, but nothing has fixed it.

How do I fix this error? Thanks!
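For what it's worth, the ls output above shows /var/empty owned by cyg_server but with mode drw------- (no execute bit). One commonly suggested fix is to give the directory the ownership and mode sshd's strictness check expects, assuming cyg_server is the account ssh-host-config created for the service (adjust if yours differs). A sketch, to be run from an elevated Cygwin shell:

```shell
# Hypothetical fix: make /var/empty owned by the sshd service account
# and not group- or world-writable (cyg_server is an assumption here)
chown cyg_server /var/empty
chmod 755 /var/empty
```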

Can I connect a Ubuntu Linux laptop to a Windows 10 laptop via ethernet cable

Posted: 04 May 2021 08:08 AM PDT

I have seen people connect two computers with an Ethernet cable, but the instructions I've seen were for Windows to Windows, Mac to Mac, or Windows to Mac. I never came across any for connecting Windows to Linux. Is it possible to connect a Windows system to a Linux system via an Ethernet cable?

How to unzip and dd a disk image to an SD Card with a single command?

Posted: 04 May 2021 09:51 AM PDT

I am under the following restrictions:

  • I have a 1.0 GB .zip file on my computer which contains one file, a disk image of raspbian. When uncompressed, this file is 3.2 GB large and named 2015-11-21-raspbian-jessie.img.
  • After having downloaded the zip file, I have just under 1.0 GB of storage space on my computer, not enough space to extract the image to my computer.
  • This file needs to be uncompressed and written to an SD card using plain old dd.

Is it possible for me to write the image to the SD card under these restrictions?

I know it's possible to pipe data through tar and then pipe that data elsewhere, however, will this still work for the zip file format, or does the entire archive need to be uncompressed before any files are accessible?

find -maxdepth 0 not returning me any output

Posted: 04 May 2021 08:42 AM PDT

I am trying to understand how to use find -maxdepth 0 option.

I have the below directory structure.

--> file1
--> parent
          --> child1
                   --> file1
                   --> file2
          --> child2
                   --> file1
                   --> file2
          --> file1

Now, I execute my find command as below.

find ./parent -maxdepth 0 -name "file1"
find ./ -maxdepth 0 -name "file1"
find . -maxdepth 0 -name "file1"

With none of the above find commands, file1 gets returned.

From man page of find, I see the below information.

-maxdepth 0 means only apply the tests and actions to the command line arguments.

I searched for some examples with -maxdepth 0 option and couldn't find any proper example.

My find version is,

find --version
find (GNU findutils) 4.4.2

Can someone please provide me some pointers on which cases -maxdepth 0 option would be useful?

EDIT

When I execute the command below, file1 gets listed twice. Is this intended behavior?

find . file1 -maxdepth 1 -name "file1"
./file1
file1
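To make the man page wording concrete: -maxdepth 0 applies the tests only to the paths given on the command line, so the argument itself has to be the thing being matched. A small sketch (directory names are made up for the demonstration):

```shell
# Recreate a slice of the question's directory structure
mkdir -p testdir/parent/child1
touch testdir/file1 testdir/parent/file1 testdir/parent/child1/file1

# Prints nothing: the only argument is named "parent", not "file1",
# and -maxdepth 0 never descends into it
find testdir/parent -maxdepth 0 -name "file1"

# Prints testdir/parent/file1: the argument itself passes the -name test
find testdir/parent/file1 -maxdepth 0 -name "file1"
```

This also explains the doubled output in the EDIT: `find . file1 -maxdepth 1 ...` has two starting points, and file1 is reached once via `.` (as ./file1) and once as the bare argument itself.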

How to print certain columns by name?

Posted: 04 May 2021 07:34 AM PDT

I have the following file:

id  name  age
1   ed    50
2   joe   70

I want to print just the id and age columns. Right now I just use awk:

cat file.tsv | awk '{ print $1, $3 }'  

However, this requires knowing the column numbers. Is there a way to do it where I can use the name of the column (specified on the first row), instead of the column number?
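One way to sketch this in awk itself: read the header row once, map each column name to its position, and then select fields by name (the file is recreated here from the sample in the question):

```shell
# Recreate the sample file
cat > file.tsv <<'EOF'
id  name  age
1   ed    50
2   joe   70
EOF

awk '
NR == 1 { for (i = 1; i <= NF; i++) col[$i] = i }  # header: name -> position
{ print $col["id"], $col["age"] }                  # select fields by name
' file.tsv | tee cols.out
```

The header row itself is printed too ("id age"), which is usually what's wanted; an `NR > 1` guard on the second rule would suppress it.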
